Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
pull request: netfilter/ipvs updates for net-next
The following patchset contains Netfilter/IPVS updates for net-next.
The most relevant changes are:
1) Four patches to make the new nf_tables masquerading support
independent of the x_tables infrastructure. This also resolves a
compilation breakage if the masquerade target is disabled but the
nf_tables masq expression is enabled.
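As a rough illustration, a minimal nft masquerading setup might look like
this (a hedged sketch; the table, chain and interface names are placeholders):
  # IPv4 NAT table with a masquerade rule on the placeholder interface eth0
  nft add table nat
  nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
  nft add rule nat postrouting oif eth0 masquerade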
2) ipset updates via Jozsef Kadlecsik. This includes the addition of the
skbinfo extension, which allows you to store packet metainformation in set
elements; the iptables SET target can then fetch this information and
restore it to matching packets. Patches from Anton Danilov.
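For example, a hedged sketch of the intended usage (set name, address and
mark values are made up):
  # create a set carrying the skbinfo extension and store a mark per element
  ipset create foo hash:ip skbinfo
  ipset add foo 192.168.1.1 skbmark 0x1111 skbprio 1:10
  # restore the stored metainformation to packets matching the set
  iptables -t mangle -A PREROUTING -j SET --map-set foo dst --map-mark --map-prio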
3) Add the hash:mac set type to ipset, from Jozsef Kadlecsik.
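For instance (a hedged sketch; the set name and MAC address are placeholders):
  ipset create macs hash:mac
  ipset add macs 01:23:45:67:89:ab
  iptables -A INPUT -m set --match-set macs src -j ACCEPT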
4) Add a simple weighted fail-over scheduler, via Simon Horman. This provides
a fail-over IPVS scheduler, in contrast to the existing schedulers, which
perform load balancing. Connections are directed to the available server
with the highest weight. Patch from Kenny Mathis.
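A hedged sketch of how such a service could be configured (addresses and
weights are placeholders):
  ipvsadm -A -t 10.0.0.1:80 -s fo
  ipvsadm -a -t 10.0.0.1:80 -r 192.168.0.10:80 -m -w 200  # preferred server
  ipvsadm -a -t 10.0.0.1:80 -r 192.168.0.11:80 -m -w 100  # used on failure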
5) Support IPv6 real servers in IPv4 virtual-services and vice versa.
Simon Horman explains that the motivation for this is to allow more
flexibility in the choice of IP version offered by both virtual-servers
and real-servers as they no longer need to match: An IPv4 connection
from an end-user may be forwarded to a real-server using IPv6 and
vice versa. No ip_vs_sync support yet though. Patches from Alex Gartrell
and Julian Anastasov.
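A hedged sketch, assuming the tunneling forwarding method and placeholder
addresses:
  # IPv4 virtual service forwarding to an IPv6 real server via tunneling
  ipvsadm -A -t 10.0.0.1:80 -s rr
  ipvsadm -a -t 10.0.0.1:80 -r [2001:db8::10]:80 -i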
6) Add a global generation ID to the nf_tables ruleset. When dumping from
several different object lists, we need a way to identify that an update
has occurred, so userspace knows that it needs to refresh its lists. This
also includes a new command to obtain the 32-bit generation ID. The least
significant 16 bits of this ID are also exposed through the res_id field
in the nfnetlink header, to quickly detect interference and retry when
there is no risk of ID wraparound.
7) Move br_netfilter out of the bridge core. The br_netfilter code is
built into the bridge core by default. This causes problems of various
kinds for people that don't want it: Jesper reported a performance drop
due to the unconditional hook registration, and I remember reading
complaints on netdev from people regarding the unexpected behaviour of our
bridging stack when br_netfilter is enabled (fragmentation handling, layer
3 and upper inspection). People that still need this can easily undo the
change by modprobing the new br_netfilter module.
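For example, on a system that relies on the old behaviour (the sysctl only
appears once the module is loaded):
  modprobe br_netfilter
  sysctl -w net.bridge.bridge-nf-call-iptables=1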
8) Dump the nf_tables set policy, which allows set parameterization, so
userspace can preserve user-defined preferences when saving the ruleset.
From Arturo Borrero.
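A hedged sketch of declaring a set with an explicit policy (names and size
are placeholders):
  nft add set filter blackhole { type ipv4_addr \; policy memory \; size 65536 \; }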
9) Use the __seq_open_private() helper function to reduce boilerplate code
in x_tables, from Rob Jones.
10) Safer default behaviour in case you forget to load the protocol
tracker. Daniel Borkmann and Florian Westphal detected that if your
ruleset is stateful, you allow traffic to at least one SCTP port, and the
SCTP protocol tracker is not loaded, then any SCTP traffic may pass
through unfiltered. After this patch, connection tracking classifies
SCTP/DCCP/UDPlite/GRE packets as invalid if your kernel has been compiled
with support for these modules.
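A hedged example of the affected pattern (the port is a placeholder):
  iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -p sctp --dport 9999 -j ACCEPT
  iptables -A INPUT -j DROP
  # without nf_conntrack_proto_sctp loaded, such SCTP flows could previously
  # slip through via generic tracking; now they are classified as invalid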
====================
Trivially resolved conflict in include/linux/skbuff.h: Eric moved some
netfilter skbuff members around, and the netfilter tree adjusted the
ifdef guards for the bridging info pointer.
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/Documentation/DocBook/media/v4l/compat.xml b/Documentation/DocBook/media/v4l/compat.xml
index eee6f0f..3a626d1 100644
--- a/Documentation/DocBook/media/v4l/compat.xml
+++ b/Documentation/DocBook/media/v4l/compat.xml
@@ -2545,6 +2545,30 @@
</orderedlist>
</section>
+ <section>
+ <title>V4L2 in Linux 3.16</title>
+ <orderedlist>
+ <listitem>
+ <para>Added event V4L2_EVENT_SOURCE_CHANGE.
+ </para>
+ </listitem>
+ </orderedlist>
+ </section>
+
+ <section>
+ <title>V4L2 in Linux 3.17</title>
+ <orderedlist>
+ <listitem>
+ <para>Extended &v4l2-pix-format;. Added format flags.
+ </para>
+ </listitem>
+ <listitem>
+ <para>Added compound control types and &VIDIOC-QUERY-EXT-CTRL;.
+ </para>
+ </listitem>
+ </orderedlist>
+ </section>
+
<section id="other">
<title>Relation of V4L2 to other Linux multimedia APIs</title>
diff --git a/Documentation/DocBook/media/v4l/func-poll.xml b/Documentation/DocBook/media/v4l/func-poll.xml
index 85cad8b..4c73f11 100644
--- a/Documentation/DocBook/media/v4l/func-poll.xml
+++ b/Documentation/DocBook/media/v4l/func-poll.xml
@@ -29,9 +29,12 @@
to accept data for output.</para>
<para>When streaming I/O has been negotiated this function waits
-until a buffer has been filled or displayed and can be dequeued with
-the &VIDIOC-DQBUF; ioctl. When buffers are already in the outgoing
-queue of the driver the function returns immediately.</para>
+until a buffer has been filled by the capture device and can be dequeued
+with the &VIDIOC-DQBUF; ioctl. For output devices this function waits
+until the device is ready to accept a new buffer to be queued up with
+the &VIDIOC-QBUF; ioctl for display. When buffers are already in the outgoing
+queue of the driver (capture) or the incoming queue isn't full (display)
+the function returns immediately.</para>
<para>On success <function>poll()</function> returns the number of
file descriptors that have been selected (that is, file descriptors
@@ -44,10 +47,22 @@
flags. When the function timed out it returns a value of zero, on
failure it returns <returnvalue>-1</returnvalue> and the
<varname>errno</varname> variable is set appropriately. When the
-application did not call &VIDIOC-QBUF; or &VIDIOC-STREAMON; yet the
+application did not call &VIDIOC-STREAMON; the
<function>poll()</function> function succeeds, but sets the
<constant>POLLERR</constant> flag in the
-<structfield>revents</structfield> field.</para>
+<structfield>revents</structfield> field. When the
+application has called &VIDIOC-STREAMON; for a capture device but hasn't
+yet called &VIDIOC-QBUF;, the <function>poll()</function> function
+succeeds and sets the <constant>POLLERR</constant> flag in the
+<structfield>revents</structfield> field. For output devices this
+same situation will cause <function>poll()</function> to succeed
+as well, but it sets the <constant>POLLOUT</constant> and
+<constant>POLLWRNORM</constant> flags in the <structfield>revents</structfield>
+field.</para>
+
+ <para>If an event occurred (see &VIDIOC-DQEVENT;) then
+<constant>POLLPRI</constant> will be set in the <structfield>revents</structfield>
+field and <function>poll()</function> will return.</para>
<para>When use of the <function>read()</function> function has
been negotiated and the driver does not capture yet, the
@@ -58,10 +73,18 @@
may return immediately.</para>
<para>When use of the <function>write()</function> function has
-been negotiated the <function>poll</function> function just waits
+been negotiated and the driver does not stream yet, the
+<function>poll</function> function starts streaming. When that fails
+it returns a <constant>POLLERR</constant> as above. Otherwise it waits
until the driver is ready for a non-blocking
<function>write()</function> call.</para>
+ <para>If the caller is only interested in events (just
+<constant>POLLPRI</constant> is set in the <structfield>events</structfield>
+field), then <function>poll()</function> will <emphasis>not</emphasis>
+start streaming if the driver does not stream yet. This makes it
+possible to just poll for events and not for buffers.</para>
+
<para>All drivers implementing the <function>read()</function> or
<function>write()</function> function or streaming I/O must also
support the <function>poll()</function> function.</para>
diff --git a/Documentation/DocBook/media/v4l/v4l2.xml b/Documentation/DocBook/media/v4l/v4l2.xml
index f2f81f0..7cfe618 100644
--- a/Documentation/DocBook/media/v4l/v4l2.xml
+++ b/Documentation/DocBook/media/v4l/v4l2.xml
@@ -152,10 +152,11 @@
applications. -->
<revision>
- <revnumber>3.16</revnumber>
- <date>2014-05-27</date>
- <authorinitials>lp</authorinitials>
- <revremark>Extended &v4l2-pix-format;. Added format flags.
+ <revnumber>3.17</revnumber>
+ <date>2014-08-04</date>
+ <authorinitials>lp, hv</authorinitials>
+ <revremark>Extended &v4l2-pix-format;. Added format flags. Added compound control types
+and VIDIOC_QUERY_EXT_CTRL.
</revremark>
</revision>
@@ -538,7 +539,7 @@
</partinfo>
<title>Video for Linux Two API Specification</title>
- <subtitle>Revision 3.14</subtitle>
+ <subtitle>Revision 3.17</subtitle>
<chapter id="common">
&sub-common;
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml
index 1ba9e99..c62a736 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml
@@ -119,7 +119,7 @@
</row>
<row>
<entry>&v4l2-rect;</entry>
- <entry><structfield>rect</structfield></entry>
+ <entry><structfield>r</structfield></entry>
<entry>Selection rectangle, in pixels.</entry>
</row>
<row>
diff --git a/Documentation/devicetree/bindings/dma/rcar-audmapp.txt b/Documentation/devicetree/bindings/dma/rcar-audmapp.txt
index 9f1d750..61bca50 100644
--- a/Documentation/devicetree/bindings/dma/rcar-audmapp.txt
+++ b/Documentation/devicetree/bindings/dma/rcar-audmapp.txt
@@ -16,9 +16,9 @@
* DMA client
Required properties:
-- dmas: a list of <[DMA multiplexer phandle] [SRS/DRS value]> pairs,
- where SRS/DRS values are fixed handles, specified in the SoC
- manual as the value that would be written into the PDMACHCR.
+- dmas: a list of <[DMA multiplexer phandle] [SRS << 8 | DRS]> pairs,
+ where SRS/DRS are specified in the SoC manual.
+ The value will be written into the upper 16 bits of PDMACHCR.
- dma-names: a list of DMA channel names, one per "dmas" entry
Example:
diff --git a/Documentation/devicetree/bindings/input/atmel,maxtouch.txt b/Documentation/devicetree/bindings/input/atmel,maxtouch.txt
index 0ac23f2..1852906 100644
--- a/Documentation/devicetree/bindings/input/atmel,maxtouch.txt
+++ b/Documentation/devicetree/bindings/input/atmel,maxtouch.txt
@@ -11,10 +11,6 @@
Optional properties for main touchpad device:
-- linux,gpio-keymap: An array of up to 4 entries indicating the Linux
- keycode generated by each GPIO. Linux keycodes are defined in
- <dt-bindings/input/input.h>.
-
- linux,gpio-keymap: When enabled, the SPT_GPIOPWN_T19 object sends messages
on GPIO bit changes. An array of up to 8 entries can be provided
indicating the Linux keycode mapped to each bit of the status byte,
diff --git a/Documentation/devicetree/bindings/net/fsl-fec.txt b/Documentation/devicetree/bindings/net/fsl-fec.txt
index 8a2c7b5..0c8775c 100644
--- a/Documentation/devicetree/bindings/net/fsl-fec.txt
+++ b/Documentation/devicetree/bindings/net/fsl-fec.txt
@@ -16,6 +16,12 @@
- phy-handle : phandle to the PHY device connected to this device.
- fixed-link : Assume a fixed link. See fixed-link.txt in the same directory.
Use instead of phy-handle.
+- fsl,num-tx-queues : The property is valid for enet-avb IP, which supports
+ hw multi queues. Should specify the tx queue number, otherwise set tx queue
+ number to 1.
+- fsl,num-rx-queues : The property is valid for enet-avb IP, which supports
+ hw multi queues. Should specify the rx queue number, otherwise set rx queue
+ number to 1.
Optional subnodes:
- mdio : specifies the mdio bus in the FEC, used as a container for phy nodes
diff --git a/Documentation/devicetree/bindings/net/meson-dwmac.txt b/Documentation/devicetree/bindings/net/meson-dwmac.txt
new file mode 100644
index 0000000..ec633d7
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/meson-dwmac.txt
@@ -0,0 +1,25 @@
+* Amlogic Meson DWMAC Ethernet controller
+
+The device inherits all the properties of the dwmac/stmmac devices
+described in the file net/stmmac.txt with the following changes.
+
+Required properties:
+
+- compatible: should be "amlogic,meson6-dwmac" along with "snps,dwmac"
+ and any applicable more detailed version number
+ described in net/stmmac.txt
+
+- reg: should contain a register range for the dwmac controller and
+ another one for the Amlogic specific configuration
+
+Example:
+
+ ethmac: ethernet@c9410000 {
+ compatible = "amlogic,meson6-dwmac", "snps,dwmac";
+ reg = <0xc9410000 0x10000
+ 0xc1108108 0x4>;
+ interrupts = <0 8 1>;
+ interrupt-names = "macirq";
+ clocks = <&clk81>;
+ clock-names = "stmmaceth";
+ }
diff --git a/Documentation/devicetree/bindings/net/qca-qca7000-spi.txt b/Documentation/devicetree/bindings/net/qca-qca7000-spi.txt
new file mode 100644
index 0000000..c74989c
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qca-qca7000-spi.txt
@@ -0,0 +1,47 @@
+* Qualcomm QCA7000 (Ethernet over SPI protocol)
+
+Note: The QCA7000 is usable as an SPI device. In this case it must be defined
+as a child of a SPI master in the device tree.
+
+Required properties:
+- compatible : Should be "qca,qca7000"
+- reg : Should specify the SPI chip select
+- interrupts : The first cell should specify the index of the source interrupt
+ and the second cell should specify the trigger type as rising edge
+- spi-cpha : Must be set
+- spi-cpol: Must be set
+
+Optional properties:
+- interrupt-parent : Specify the phandle of the source interrupt
+- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at.
+ Numbers smaller than 1000000 or greater than 16000000 are invalid. If the
+ property is missing, the SPI frequency defaults to 8000000 Hertz.
+- local-mac-address: 6 bytes, MAC address
+- qca,legacy-mode : Set the SPI data transfer of the QCA7000 to legacy mode.
+ In this mode the SPI master must toggle the chip select between each data
+ word. In burst mode these gaps aren't necessary, which is faster.
+ This setting depends on how the QCA7000 is setup via GPIO pin strapping.
+ If the property is missing the driver defaults to burst mode.
+
+Example:
+
+/* Freescale i.MX28 SPI master*/
+ssp2: spi@80014000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "fsl,imx28-spi";
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi2_pins_a>;
+ status = "okay";
+
+ qca7000: ethernet@0 {
+ compatible = "qca,qca7000";
+ reg = <0x0>;
+ interrupt-parent = <&gpio3>; /* GPIO Bank 3 */
+ interrupts = <25 0x1>; /* Index: 25, rising edge */
+ spi-cpha; /* SPI mode: CPHA=1 */
+ spi-cpol; /* SPI mode: CPOL=1 */
+ spi-max-frequency = <8000000>; /* freq: 8 MHz */
+ local-mac-address = [ A0 B0 C0 D0 E0 F0 ];
+ };
+};
diff --git a/Documentation/devicetree/bindings/sound/rockchip-i2s.txt b/Documentation/devicetree/bindings/sound/rockchip-i2s.txt
index 6c55fcf..9b82c20 100644
--- a/Documentation/devicetree/bindings/sound/rockchip-i2s.txt
+++ b/Documentation/devicetree/bindings/sound/rockchip-i2s.txt
@@ -31,7 +31,7 @@
#address-cells = <1>;
#size-cells = <0>;
dmas = <&pdma1 0>, <&pdma1 1>;
- dma-names = "rx", "tx";
+ dma-names = "tx", "rx";
clock-names = "i2s_hclk", "i2s_clk";
clocks = <&cru HCLK_I2S0>, <&cru SCLK_I2S0>;
};
diff --git a/Documentation/devicetree/bindings/spi/spi-rockchip.txt b/Documentation/devicetree/bindings/spi/spi-rockchip.txt
index 7bab355..467dec4 100644
--- a/Documentation/devicetree/bindings/spi/spi-rockchip.txt
+++ b/Documentation/devicetree/bindings/spi/spi-rockchip.txt
@@ -16,11 +16,15 @@
- clocks: Must contain an entry for each entry in clock-names.
- clock-names: Shall be "spiclk" for the transfer-clock, and "apb_pclk" for
the peripheral clock.
+- #address-cells: should be 1.
+- #size-cells: should be 0.
+
+Optional Properties:
+
- dmas: DMA specifiers for tx and rx dma. See the DMA client binding,
Documentation/devicetree/bindings/dma/dma.txt
- dma-names: DMA request names should include "tx" and "rx" if present.
-- #address-cells: should be 1.
-- #size-cells: should be 0.
+
Example:
diff --git a/Documentation/devicetree/bindings/usb/mxs-phy.txt b/Documentation/devicetree/bindings/usb/mxs-phy.txt
index cef181a..96681c9 100644
--- a/Documentation/devicetree/bindings/usb/mxs-phy.txt
+++ b/Documentation/devicetree/bindings/usb/mxs-phy.txt
@@ -5,6 +5,7 @@
* "fsl,imx23-usbphy" for imx23 and imx28
* "fsl,imx6q-usbphy" for imx6dq and imx6dl
* "fsl,imx6sl-usbphy" for imx6sl
+ * "fsl,imx6sx-usbphy" for imx6sx
"fsl,imx23-usbphy" is still a fallback for other strings
- reg: Should contain registers location and length
- interrupts: Should contain phy interrupt
diff --git a/Documentation/devicetree/bindings/video/analog-tv-connector.txt b/Documentation/devicetree/bindings/video/analog-tv-connector.txt
index 0218fcd..0c0970c 100644
--- a/Documentation/devicetree/bindings/video/analog-tv-connector.txt
+++ b/Documentation/devicetree/bindings/video/analog-tv-connector.txt
@@ -2,7 +2,7 @@
===================
Required properties:
-- compatible: "composite-connector" or "svideo-connector"
+- compatible: "composite-video-connector" or "svideo-connector"
Optional properties:
- label: a symbolic name for the connector
@@ -14,7 +14,7 @@
-------
tv: connector {
- compatible = "composite-connector";
+ compatible = "composite-video-connector";
label = "tv";
port {
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 5ae8608..10d51c2 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3541,6 +3541,7 @@
bogus residue values);
s = SINGLE_LUN (the device has only one
Logical Unit);
+ u = IGNORE_UAS (don't bind to the uas driver);
w = NO_WP_DETECT (don't test whether the
medium is write-protected).
Example: quirks=0419:aaf5:rl,0421:0433:rc
diff --git a/Documentation/networking/dctcp.txt b/Documentation/networking/dctcp.txt
new file mode 100644
index 0000000..0d5dfbc
--- /dev/null
+++ b/Documentation/networking/dctcp.txt
@@ -0,0 +1,43 @@
+DCTCP (DataCenter TCP)
+----------------------
+
+DCTCP is an enhancement to the TCP congestion control algorithm for data
+center networks and leverages Explicit Congestion Notification (ECN) in
+the data center network to provide multi-bit feedback to the end hosts.
+
+To enable it on end hosts:
+
+ sysctl -w net.ipv4.tcp_congestion_control=dctcp
+
+All switches in the data center network running DCTCP must support ECN
+marking and be configured for marking when reaching defined switch buffer
+thresholds. The default ECN marking threshold heuristic for DCTCP on
+switches is 20 packets (30KB) at 1Gbps, and 65 packets (~100KB) at 10Gbps,
+but might need further careful tweaking.
+
+For more details, see the documents below:
+
+Paper:
+
+The algorithm is further described in detail in the following two
+SIGCOMM/SIGMETRICS papers:
+
+ i) Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye,
+ Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari Sridharan:
+ "Data Center TCP (DCTCP)", Data Center Networks session
+ Proc. ACM SIGCOMM, New Delhi, 2010.
+ http://simula.stanford.edu/~alizade/Site/DCTCP_files/dctcp-final.pdf
+ http://www.sigcomm.org/ccr/papers/2010/October/1851275.1851192
+
+ii) Mohammad Alizadeh, Adel Javanmard, and Balaji Prabhakar:
+ "Analysis of DCTCP: Stability, Convergence, and Fairness"
+ Proc. ACM SIGMETRICS, San Jose, 2011.
+ http://simula.stanford.edu/~alizade/Site/DCTCP_files/dctcp_analysis-full.pdf
+
+IETF informational draft:
+
+ http://tools.ietf.org/html/draft-bensley-tcpm-dctcp-00
+
+DCTCP site:
+
+ http://simula.stanford.edu/~alizade/Site/DCTCP.html
diff --git a/Documentation/networking/filter.txt b/Documentation/networking/filter.txt
index 81916ab..5ce4d07 100644
--- a/Documentation/networking/filter.txt
+++ b/Documentation/networking/filter.txt
@@ -462,9 +462,9 @@
------------
The Linux kernel has a built-in BPF JIT compiler for x86_64, SPARC, PowerPC,
-ARM and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler is
-transparently invoked for each attached filter from user space or for internal
-kernel users if it has been previously enabled by root:
+ARM, MIPS and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler
+is transparently invoked for each attached filter from user space or for
+internal kernel users if it has been previously enabled by root:
echo 1 > /proc/sys/net/core/bpf_jit_enable
@@ -1001,6 +1001,269 @@
Classic BPF has similar instruction: BPF_LD | BPF_W | BPF_IMM which loads
32-bit immediate value into a register.
+eBPF verifier
+-------------
+The safety of the eBPF program is determined in two steps.
+
+The first step does a DAG check to disallow loops, along with other CFG
+validation. In particular, it will detect programs that have unreachable
+instructions (though the classic BPF checker allows them).
+
+Second step starts from the first insn and descends all possible paths.
+It simulates execution of every insn and observes the state change of
+registers and stack.
+
+At the start of the program the register R1 contains a pointer to context
+and has type PTR_TO_CTX.
+If verifier sees an insn that does R2=R1, then R2 has now type
+PTR_TO_CTX as well and can be used on the right hand side of expression.
+If R1=PTR_TO_CTX and the insn is R2=R1+R1, then R2=UNKNOWN_VALUE,
+since the addition of two valid pointers yields an invalid pointer.
+(In 'secure' mode the verifier will reject any type of pointer arithmetic
+to make sure that kernel addresses don't leak to unprivileged users.)
+
+If register was never written to, it's not readable:
+ bpf_mov R0 = R2
+ bpf_exit
+will be rejected, since R2 is unreadable at the start of the program.
+
+After kernel function call, R1-R5 are reset to unreadable and
+R0 has a return type of the function.
+
+Since R6-R9 are callee saved, their state is preserved across the call.
+ bpf_mov R6 = 1
+ bpf_call foo
+ bpf_mov R0 = R6
+ bpf_exit
+is a correct program. If there was R1 instead of R6, it would have
+been rejected.
+
+load/store instructions are allowed only with registers of valid types, which
+are PTR_TO_CTX, PTR_TO_MAP, FRAME_PTR. They are bounds and alignment checked.
+For example:
+ bpf_mov R1 = 1
+ bpf_mov R2 = 2
+ bpf_xadd *(u32 *)(R1 + 3) += R2
+ bpf_exit
+will be rejected, since R1 doesn't have a valid pointer type at the time of
+execution of instruction bpf_xadd.
+
+At the start, R1's type is PTR_TO_CTX (a pointer to the generic 'struct
+bpf_context'). A callback is used to customize the verifier to restrict eBPF
+program access to only certain fields within the ctx structure, with
+specified size and alignment.
+
+For example, the following insn:
+ bpf_ld R0 = *(u32 *)(R6 + 8)
+intends to load a word from address R6 + 8 and store it into R0.
+If R6=PTR_TO_CTX, then via the is_valid_access() callback the verifier will know
+that offset 8 of size 4 bytes can be accessed for reading, otherwise
+the verifier will reject the program.
+If R6=FRAME_PTR, then access should be aligned and be within
+stack bounds, which are [-MAX_BPF_STACK, 0). In this example offset is 8,
+so it will fail verification, since it's out of bounds.
+
+The verifier will allow eBPF program to read data from stack only after
+it wrote into it.
+Classic BPF verifier does similar check with M[0-15] memory slots.
+For example:
+ bpf_ld R0 = *(u32 *)(R10 - 4)
+ bpf_exit
+is an invalid program.
+Though R10 is a correct read-only register of type FRAME_PTR,
+and R10 - 4 is within stack bounds, there were no stores into that location.
+
+Pointer register spill/fill is tracked as well, since four (R6-R9)
+callee saved registers may not be enough for some programs.
+
+Allowed function calls are customized with bpf_verifier_ops->get_func_proto()
+The eBPF verifier will check that registers match argument constraints.
+After the call register R0 will be set to return type of the function.
+
+Function calls are the main mechanism for extending the functionality of eBPF
+programs. Socket filters may let programs call one set of functions, whereas
+tracing filters may allow a completely different set.
+
+If a function is made accessible to eBPF programs, it needs to be thought
+through from a safety point of view. The verifier will guarantee that the
+function is called with valid arguments.
+
+Seccomp and socket filters have different security restrictions for classic
+BPF. Seccomp solves this with a two-stage verifier: the classic BPF verifier
+is followed by the seccomp verifier. In the case of eBPF, one configurable
+verifier is shared for all use cases.
+
+See details of eBPF verifier in kernel/bpf/verifier.c
+
+eBPF maps
+---------
+'maps' is a generic storage of different types for sharing data between kernel
+and userspace.
+
+The maps are accessed from user space via BPF syscall, which has commands:
+- create a map with given type and attributes
+ map_fd = bpf(BPF_MAP_CREATE, union bpf_attr *attr, u32 size)
+ using attr->map_type, attr->key_size, attr->value_size, attr->max_entries
+ returns process-local file descriptor or negative error
+
+- lookup key in a given map
+ err = bpf(BPF_MAP_LOOKUP_ELEM, union bpf_attr *attr, u32 size)
+ using attr->map_fd, attr->key, attr->value
+ returns zero and stores found elem into value or negative error
+
+- create or update key/value pair in a given map
+ err = bpf(BPF_MAP_UPDATE_ELEM, union bpf_attr *attr, u32 size)
+ using attr->map_fd, attr->key, attr->value
+ returns zero or negative error
+
+- find and delete element by key in a given map
+ err = bpf(BPF_MAP_DELETE_ELEM, union bpf_attr *attr, u32 size)
+ using attr->map_fd, attr->key
+
+- to delete map: close(fd)
+ Exiting process will delete maps automatically
+
+userspace programs use this syscall to create/access maps that eBPF programs
+are concurrently updating.
+
+maps can have different types: hash, array, bloom filter, radix-tree, etc.
+
+The map is defined by:
+ . type
+ . max number of elements
+ . key size in bytes
+ . value size in bytes
+
+Understanding eBPF verifier messages
+------------------------------------
+
+The following are a few examples of invalid eBPF programs and the verifier
+error messages as seen in the log:
+
+Program with unreachable instructions:
+static struct bpf_insn prog[] = {
+ BPF_EXIT_INSN(),
+ BPF_EXIT_INSN(),
+};
+Error:
+ unreachable insn 1
+
+Program that reads uninitialized register:
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (bf) r0 = r2
+ R2 !read_ok
+
+Program that doesn't initialize R0 before exiting:
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (bf) r2 = r1
+ 1: (95) exit
+ R0 !read_ok
+
+Program that accesses stack out of bounds:
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (7a) *(u64 *)(r10 +8) = 0
+ invalid stack off=8 size=8
+
+Program that doesn't initialize stack before passing its address into function:
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (bf) r2 = r10
+ 1: (07) r2 += -8
+ 2: (b7) r1 = 0x0
+ 3: (85) call 1
+ invalid indirect read from stack off -8+0 size 8
+
+Program that uses invalid map_fd=0 while calling to map_lookup_elem() function:
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (7a) *(u64 *)(r10 -8) = 0
+ 1: (bf) r2 = r10
+ 2: (07) r2 += -8
+ 3: (b7) r1 = 0x0
+ 4: (85) call 1
+ fd 0 is not pointing to valid bpf_map
+
+Program that doesn't check return value of map_lookup_elem() before accessing
+map element:
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (7a) *(u64 *)(r10 -8) = 0
+ 1: (bf) r2 = r10
+ 2: (07) r2 += -8
+ 3: (b7) r1 = 0x0
+ 4: (85) call 1
+ 5: (7a) *(u64 *)(r0 +0) = 0
+ R0 invalid mem access 'map_value_or_null'
+
+Program that correctly checks map_lookup_elem() returned value for NULL, but
+accesses the memory with incorrect alignment:
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (7a) *(u64 *)(r10 -8) = 0
+ 1: (bf) r2 = r10
+ 2: (07) r2 += -8
+ 3: (b7) r1 = 1
+ 4: (85) call 1
+ 5: (15) if r0 == 0x0 goto pc+1
+ R0=map_ptr R10=fp
+ 6: (7a) *(u64 *)(r0 +4) = 0
+ misaligned access off 4 size 8
+
+Program that correctly checks map_lookup_elem() returned value for NULL and
+accesses memory with correct alignment in one side of 'if' branch, but fails
+to do so in the other side of 'if' branch:
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
+ BPF_EXIT_INSN(),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+Error:
+ 0: (7a) *(u64 *)(r10 -8) = 0
+ 1: (bf) r2 = r10
+ 2: (07) r2 += -8
+ 3: (b7) r1 = 1
+ 4: (85) call 1
+ 5: (15) if r0 == 0x0 goto pc+2
+ R0=map_ptr R10=fp
+ 6: (7a) *(u64 *)(r0 +0) = 0
+ 7: (95) exit
+
+ from 5 to 8: R0=imm0 R10=fp
+ 8: (7a) *(u64 *)(r0 +0) = 1
+ R0 invalid mem access 'imm'
+
Testing
-------
diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
index db2383c..c7a81ac 100644
--- a/Documentation/networking/ip-sysctl.txt
+++ b/Documentation/networking/ip-sysctl.txt
@@ -769,8 +769,21 @@
icmp_ratemask (see below) to specific targets.
0 to disable any limiting,
otherwise the minimal space between responses in milliseconds.
+ Note that another sysctl, icmp_msgs_per_sec limits the number
+ of ICMP packets sent on all targets.
Default: 1000
+icmp_msgs_per_sec - INTEGER
+ Limit maximal number of ICMP packets sent per second from this host.
+ Only messages whose type matches icmp_ratemask (see below) are
+ controlled by this limit.
+ Default: 1000
+
+icmp_msgs_burst - INTEGER
+ icmp_msgs_per_sec controls number of ICMP packets sent per second,
+ while icmp_msgs_burst controls the burst size of these packets.
+ Default: 50
+
icmp_ratemask - INTEGER
Mask made of ICMP types for which rates are being limited.
Significant bits: IHGFEDCBA9876543210
@@ -952,14 +965,9 @@
FALSE (host)
accept_local - BOOLEAN
- Accept packets with local source addresses. In combination
- with suitable routing, this can be used to direct packets
- between two local interfaces over the wire and have them
- accepted properly.
-
- rp_filter must be set to a non-zero value in order for
- accept_local to have an effect.
-
+ Accept packets with local source addresses. In combination with
+ suitable routing, this can be used to direct packets between two
+ local interfaces over the wire and have them accepted properly.
default FALSE
route_localnet - BOOLEAN
diff --git a/MAINTAINERS b/MAINTAINERS
index 5e3709e..f8db3c3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6425,7 +6425,8 @@
F: drivers/scsi/nsp32*
NTB DRIVER
-M: Jon Mason <jon.mason@intel.com>
+M: Jon Mason <jdmason@kudzu.us>
+M: Dave Jiang <dave.jiang@intel.com>
S: Supported
W: https://github.com/jonmason/ntb/wiki
T: git git://github.com/jonmason/ntb.git
@@ -6876,7 +6877,7 @@
PCI DRIVER FOR IMX6
M: Richard Zhu <r65037@freescale.com>
-M: Shawn Guo <shawn.guo@freescale.com>
+M: Lucas Stach <l.stach@pengutronix.de>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@@ -7054,7 +7055,7 @@
F: drivers/pinctrl/sh-pfc/
PIN CONTROLLER - SAMSUNG
-M: Tomasz Figa <t.figa@samsung.com>
+M: Tomasz Figa <tomasz.figa@gmail.com>
M: Thomas Abraham <thomas.abraham@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
@@ -7385,9 +7386,9 @@
F: drivers/net/ethernet/qlogic/qlcnic/
QLOGIC QLGE 10Gb ETHERNET DRIVER
-M: Shahed Shaikh <shahed.shaikh@qlogic.com>
-M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
-M: Ron Mercer <ron.mercer@qlogic.com>
+M: Harish Patil <harish.patil@qlogic.com>
+M: Sudarsana Kalluru <sudarsana.kalluru@qlogic.com>
+M: Dept-GELinuxNICDev@qlogic.com
M: linux-driver@qlogic.com
L: netdev@vger.kernel.org
S: Supported
@@ -7900,7 +7901,8 @@
F: drivers/media/i2c/s5k5baf.c
SAMSUNG SOC CLOCK DRIVERS
-M: Tomasz Figa <t.figa@samsung.com>
+M: Sylwester Nawrocki <s.nawrocki@samsung.com>
+M: Tomasz Figa <tomasz.figa@gmail.com>
S: Supported
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/clk/samsung/
@@ -7913,6 +7915,19 @@
L: netdev@vger.kernel.org
F: drivers/net/ethernet/samsung/sxgbe/
+SAMSUNG USB2 PHY DRIVER
+M: Kamil Debski <k.debski@samsung.com>
+L: linux-kernel@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/phy/samsung-phy.txt
+F: Documentation/phy/samsung-usb2.txt
+F: drivers/phy/phy-exynos4210-usb2.c
+F: drivers/phy/phy-exynos4x12-usb2.c
+F: drivers/phy/phy-exynos5250-usb2.c
+F: drivers/phy/phy-s5pv210-usb2.c
+F: drivers/phy/phy-samsung-usb2.c
+F: drivers/phy/phy-samsung-usb2.h
+
SERIAL DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
diff --git a/Makefile b/Makefile
index 1a60bdd..a192280 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 17
SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
NAME = Shuffling Zombie Juror
# *DOCUMENTATION*
diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
index f4b9da6..0a03260 100644
--- a/arch/arm/boot/dts/imx6sx.dtsi
+++ b/arch/arm/boot/dts/imx6sx.dtsi
@@ -776,6 +776,8 @@
<&clks IMX6SX_CLK_ENET_PTP>;
clock-names = "ipg", "ahb", "ptp",
"enet_clk_ref", "enet_out";
+ fsl,num-tx-queues=<3>;
+ fsl,num-rx-queues=<3>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
index 1fe45d1..4361777 100644
--- a/arch/arm/boot/dts/omap3-n900.dts
+++ b/arch/arm/boot/dts/omap3-n900.dts
@@ -93,7 +93,7 @@
};
tv: connector {
- compatible = "composite-connector";
+ compatible = "composite-video-connector";
label = "tv";
port {
diff --git a/arch/arm/include/asm/tls.h b/arch/arm/include/asm/tls.h
index 83259b8..36172ad 100644
--- a/arch/arm/include/asm/tls.h
+++ b/arch/arm/include/asm/tls.h
@@ -1,6 +1,9 @@
#ifndef __ASMARM_TLS_H
#define __ASMARM_TLS_H
+#include <linux/compiler.h>
+#include <asm/thread_info.h>
+
#ifdef __ASSEMBLY__
#include <asm/asm-offsets.h>
.macro switch_tls_none, base, tp, tpuser, tmp1, tmp2
@@ -50,6 +53,47 @@
#endif
#ifndef __ASSEMBLY__
+
+static inline void set_tls(unsigned long val)
+{
+ struct thread_info *thread;
+
+ thread = current_thread_info();
+
+ thread->tp_value[0] = val;
+
+ /*
+ * This code runs with preemption enabled and therefore must
+ * be reentrant with respect to switch_tls.
+ *
+ * We need to ensure ordering between the shadow state and the
+ * hardware state, so that we don't corrupt the hardware state
+ * with a stale shadow state during context switch.
+ *
+ * If we're preempted here, switch_tls will load TPIDRURO from
+ * thread_info upon resuming execution and the following mcr
+ * is merely redundant.
+ */
+ barrier();
+
+ if (!tls_emu) {
+ if (has_tls_reg) {
+ asm("mcr p15, 0, %0, c13, c0, 3"
+ : : "r" (val));
+ } else {
+ /*
+ * User space must never try to access this
+ * directly. Expect your app to break
+ * eventually if you do so. The user helper
+ * at 0xffff0fe0 must be used instead. (see
+ * entry-armv.S for details)
+ */
+ *((unsigned int *)0xffff0ff0) = val;
+ }
+
+ }
+}
+
static inline unsigned long get_tpuser(void)
{
unsigned long reg = 0;
@@ -59,5 +103,23 @@
return reg;
}
+
+static inline void set_tpuser(unsigned long val)
+{
+ /* Since TPIDRURW is fully context-switched (unlike TPIDRURO),
+ * we need not update thread_info.
+ */
+ if (has_tls_reg && !tls_emu) {
+ asm("mcr p15, 0, %0, c13, c0, 2"
+ : : "r" (val));
+ }
+}
+
+static inline void flush_tls(void)
+{
+ set_tls(0);
+ set_tpuser(0);
+}
+
#endif
#endif /* __ASMARM_TLS_H */
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index a4cd7af..4767eb9 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -107,8 +107,11 @@
extern int __get_user_1(void *);
extern int __get_user_2(void *);
extern int __get_user_4(void *);
-extern int __get_user_lo8(void *);
+extern int __get_user_32t_8(void *);
extern int __get_user_8(void *);
+extern int __get_user_64t_1(void *);
+extern int __get_user_64t_2(void *);
+extern int __get_user_64t_4(void *);
#define __GUP_CLOBBER_1 "lr", "cc"
#ifdef CONFIG_CPU_USE_DOMAINS
@@ -117,7 +120,7 @@
#define __GUP_CLOBBER_2 "lr", "cc"
#endif
#define __GUP_CLOBBER_4 "lr", "cc"
-#define __GUP_CLOBBER_lo8 "lr", "cc"
+#define __GUP_CLOBBER_32t_8 "lr", "cc"
#define __GUP_CLOBBER_8 "lr", "cc"
#define __get_user_x(__r2,__p,__e,__l,__s) \
@@ -131,12 +134,30 @@
/* narrowing a double-word get into a single 32bit word register: */
#ifdef __ARMEB__
-#define __get_user_xb(__r2, __p, __e, __l, __s) \
- __get_user_x(__r2, __p, __e, __l, lo8)
+#define __get_user_x_32t(__r2, __p, __e, __l, __s) \
+ __get_user_x(__r2, __p, __e, __l, 32t_8)
#else
-#define __get_user_xb __get_user_x
+#define __get_user_x_32t __get_user_x
#endif
+/*
+ * storing result into proper least significant word of 64bit target var,
+ * different only for big endian case where 64 bit __r2 lsw is r3:
+ */
+#ifdef __ARMEB__
+#define __get_user_x_64t(__r2, __p, __e, __l, __s) \
+ __asm__ __volatile__ ( \
+ __asmeq("%0", "r0") __asmeq("%1", "r2") \
+ __asmeq("%3", "r1") \
+ "bl __get_user_64t_" #__s \
+ : "=&r" (__e), "=r" (__r2) \
+ : "0" (__p), "r" (__l) \
+ : __GUP_CLOBBER_##__s)
+#else
+#define __get_user_x_64t __get_user_x
+#endif
+
+
#define __get_user_check(x,p) \
({ \
unsigned long __limit = current_thread_info()->addr_limit - 1; \
@@ -146,17 +167,26 @@
register int __e asm("r0"); \
switch (sizeof(*(__p))) { \
case 1: \
- __get_user_x(__r2, __p, __e, __l, 1); \
+ if (sizeof((x)) >= 8) \
+ __get_user_x_64t(__r2, __p, __e, __l, 1); \
+ else \
+ __get_user_x(__r2, __p, __e, __l, 1); \
break; \
case 2: \
- __get_user_x(__r2, __p, __e, __l, 2); \
+ if (sizeof((x)) >= 8) \
+ __get_user_x_64t(__r2, __p, __e, __l, 2); \
+ else \
+ __get_user_x(__r2, __p, __e, __l, 2); \
break; \
case 4: \
- __get_user_x(__r2, __p, __e, __l, 4); \
+ if (sizeof((x)) >= 8) \
+ __get_user_x_64t(__r2, __p, __e, __l, 4); \
+ else \
+ __get_user_x(__r2, __p, __e, __l, 4); \
break; \
case 8: \
if (sizeof((x)) < 8) \
- __get_user_xb(__r2, __p, __e, __l, 4); \
+ __get_user_x_32t(__r2, __p, __e, __l, 4); \
else \
__get_user_x(__r2, __p, __e, __l, 8); \
break; \
diff --git a/arch/arm/include/asm/xen/page-coherent.h b/arch/arm/include/asm/xen/page-coherent.h
index 1109017..e8275ea 100644
--- a/arch/arm/include/asm/xen/page-coherent.h
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -26,25 +26,14 @@
__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
}
-static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
- struct dma_attrs *attrs)
-{
- if (__generic_dma_ops(hwdev)->unmap_page)
- __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
-}
+ struct dma_attrs *attrs);
-static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
- dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
- if (__generic_dma_ops(hwdev)->sync_single_for_cpu)
- __generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
-}
+void xen_dma_sync_single_for_cpu(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir);
-static inline void xen_dma_sync_single_for_device(struct device *hwdev,
- dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
- if (__generic_dma_ops(hwdev)->sync_single_for_device)
- __generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
-}
+void xen_dma_sync_single_for_device(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir);
+
#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index ded062f..135c24a 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -33,7 +33,6 @@
#define INVALID_P2M_ENTRY (~0UL)
unsigned long __pfn_to_mfn(unsigned long pfn);
-unsigned long __mfn_to_pfn(unsigned long mfn);
extern struct rb_root phys_to_mach;
static inline unsigned long pfn_to_mfn(unsigned long pfn)
@@ -51,14 +50,6 @@
static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
- unsigned long pfn;
-
- if (phys_to_mach.rb_node != NULL) {
- pfn = __mfn_to_pfn(mfn);
- if (pfn != INVALID_P2M_ENTRY)
- return pfn;
- }
-
return mfn;
}
diff --git a/arch/arm/kernel/armksyms.c b/arch/arm/kernel/armksyms.c
index f7b450f..a88671c 100644
--- a/arch/arm/kernel/armksyms.c
+++ b/arch/arm/kernel/armksyms.c
@@ -98,6 +98,14 @@
EXPORT_SYMBOL(__get_user_1);
EXPORT_SYMBOL(__get_user_2);
EXPORT_SYMBOL(__get_user_4);
+EXPORT_SYMBOL(__get_user_8);
+
+#ifdef __ARMEB__
+EXPORT_SYMBOL(__get_user_64t_1);
+EXPORT_SYMBOL(__get_user_64t_2);
+EXPORT_SYMBOL(__get_user_64t_4);
+EXPORT_SYMBOL(__get_user_32t_8);
+#endif
EXPORT_SYMBOL(__put_user_1);
EXPORT_SYMBOL(__put_user_2);
diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c
index 2c42576..5c4d38e 100644
--- a/arch/arm/kernel/irq.c
+++ b/arch/arm/kernel/irq.c
@@ -175,7 +175,7 @@
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
- else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
+ else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
cpumask_copy(d->affinity, affinity);
return ret;
diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index e6a6edb..4bf4cce 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -76,21 +76,15 @@
static void cpu_pmu_enable_percpu_irq(void *data)
{
- struct arm_pmu *cpu_pmu = data;
- struct platform_device *pmu_device = cpu_pmu->plat_device;
- int irq = platform_get_irq(pmu_device, 0);
+ int irq = *(int *)data;
enable_percpu_irq(irq, IRQ_TYPE_NONE);
- cpumask_set_cpu(smp_processor_id(), &cpu_pmu->active_irqs);
}
static void cpu_pmu_disable_percpu_irq(void *data)
{
- struct arm_pmu *cpu_pmu = data;
- struct platform_device *pmu_device = cpu_pmu->plat_device;
- int irq = platform_get_irq(pmu_device, 0);
+ int irq = *(int *)data;
- cpumask_clear_cpu(smp_processor_id(), &cpu_pmu->active_irqs);
disable_percpu_irq(irq);
}
@@ -103,7 +97,7 @@
irq = platform_get_irq(pmu_device, 0);
if (irq >= 0 && irq_is_percpu(irq)) {
- on_each_cpu(cpu_pmu_disable_percpu_irq, cpu_pmu, 1);
+ on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1);
free_percpu_irq(irq, &percpu_pmu);
} else {
for (i = 0; i < irqs; ++i) {
@@ -138,7 +132,7 @@
irq);
return err;
}
- on_each_cpu(cpu_pmu_enable_percpu_irq, cpu_pmu, 1);
+ on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);
} else {
for (i = 0; i < irqs; ++i) {
err = 0;
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 81ef686..a35f6eb 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -334,6 +334,8 @@
memset(&tsk->thread.debug, 0, sizeof(struct debug_info));
memset(&thread->fpstate, 0, sizeof(union fp_state));
+ flush_tls();
+
thread_notify(THREAD_NOTIFY_FLUSH, thread);
}
diff --git a/arch/arm/kernel/swp_emulate.c b/arch/arm/kernel/swp_emulate.c
index 67ca857..587fdfe 100644
--- a/arch/arm/kernel/swp_emulate.c
+++ b/arch/arm/kernel/swp_emulate.c
@@ -142,14 +142,6 @@
while (1) {
unsigned long temp;
- /*
- * Barrier required between accessing protected resource and
- * releasing a lock for it. Legacy code might not have done
- * this, and we cannot determine that this is not the case
- * being emulated, so insert always.
- */
- smp_mb();
-
if (type == TYPE_SWPB)
__user_swpb_asm(*data, address, res, temp);
else
@@ -162,13 +154,6 @@
}
if (res == 0) {
- /*
- * Barrier also required between acquiring a lock for a
- * protected resource and accessing the resource. Inserted for
- * same reason as above.
- */
- smp_mb();
-
if (type == TYPE_SWPB)
swpbcounter++;
else
diff --git a/arch/arm/kernel/thumbee.c b/arch/arm/kernel/thumbee.c
index 7b8403b..80f0d69 100644
--- a/arch/arm/kernel/thumbee.c
+++ b/arch/arm/kernel/thumbee.c
@@ -45,7 +45,7 @@
switch (cmd) {
case THREAD_NOTIFY_FLUSH:
- thread->thumbee_state = 0;
+ teehbr_write(0);
break;
case THREAD_NOTIFY_SWITCH:
current_thread_info()->thumbee_state = teehbr_read();
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index c8e4bb7..a964c9f 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -581,7 +581,6 @@
#define NR(x) ((__ARM_NR_##x) - __ARM_NR_BASE)
asmlinkage int arm_syscall(int no, struct pt_regs *regs)
{
- struct thread_info *thread = current_thread_info();
siginfo_t info;
if ((no >> 16) != (__ARM_NR_BASE>> 16))
@@ -632,21 +631,7 @@
return regs->ARM_r0;
case NR(set_tls):
- thread->tp_value[0] = regs->ARM_r0;
- if (tls_emu)
- return 0;
- if (has_tls_reg) {
- asm ("mcr p15, 0, %0, c13, c0, 3"
- : : "r" (regs->ARM_r0));
- } else {
- /*
- * User space must never try to access this directly.
- * Expect your app to break eventually if you do so.
- * The user helper at 0xffff0fe0 must be used instead.
- * (see entry-armv.S for details)
- */
- *((unsigned int *)0xffff0ff0) = regs->ARM_r0;
- }
+ set_tls(regs->ARM_r0);
return 0;
#ifdef CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG
diff --git a/arch/arm/lib/getuser.S b/arch/arm/lib/getuser.S
index 9386000..8ecfd15 100644
--- a/arch/arm/lib/getuser.S
+++ b/arch/arm/lib/getuser.S
@@ -80,7 +80,7 @@
ENDPROC(__get_user_8)
#ifdef __ARMEB__
-ENTRY(__get_user_lo8)
+ENTRY(__get_user_32t_8)
check_uaccess r0, 8, r1, r2, __get_user_bad
#ifdef CONFIG_CPU_USE_DOMAINS
add r0, r0, #4
@@ -90,7 +90,37 @@
#endif
mov r0, #0
ret lr
-ENDPROC(__get_user_lo8)
+ENDPROC(__get_user_32t_8)
+
+ENTRY(__get_user_64t_1)
+ check_uaccess r0, 1, r1, r2, __get_user_bad8
+8: TUSER(ldrb) r3, [r0]
+ mov r0, #0
+ ret lr
+ENDPROC(__get_user_64t_1)
+
+ENTRY(__get_user_64t_2)
+ check_uaccess r0, 2, r1, r2, __get_user_bad8
+#ifdef CONFIG_CPU_USE_DOMAINS
+rb .req ip
+9: ldrbt r3, [r0], #1
+10: ldrbt rb, [r0], #0
+#else
+rb .req r0
+9: ldrb r3, [r0]
+10: ldrb rb, [r0, #1]
+#endif
+ orr r3, rb, r3, lsl #8
+ mov r0, #0
+ ret lr
+ENDPROC(__get_user_64t_2)
+
+ENTRY(__get_user_64t_4)
+ check_uaccess r0, 4, r1, r2, __get_user_bad8
+11: TUSER(ldr) r3, [r0]
+ mov r0, #0
+ ret lr
+ENDPROC(__get_user_64t_4)
#endif
__get_user_bad8:
@@ -111,5 +141,9 @@
.long 6b, __get_user_bad8
#ifdef __ARMEB__
.long 7b, __get_user_bad
+ .long 8b, __get_user_bad8
+ .long 9b, __get_user_bad8
+ .long 10b, __get_user_bad8
+ .long 11b, __get_user_bad8
#endif
.popsection
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 1a24e92..b64e67c 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -146,7 +146,6 @@
mov \tmp, \ttbr1, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits
mov \ttbr1, \ttbr1, lsl #ARCH_PGD_SHIFT @ lower bits
addls \ttbr1, \ttbr1, #TTBR1_OFFSET
- adcls \tmp, \tmp, #0
mcrr p15, 1, \ttbr1, \tmp, c2 @ load TTBR1
mov \tmp, \ttbr0, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits
mov \ttbr0, \ttbr0, lsl #ARCH_PGD_SHIFT @ lower bits
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6b45f64..e1268f9 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -16,6 +16,7 @@
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/if_vlan.h>
+
#include <asm/cacheflush.h>
#include <asm/hwcap.h>
#include <asm/opcodes.h>
@@ -175,11 +176,10 @@
static void jit_fill_hole(void *area, unsigned int size)
{
- /* Insert illegal UND instructions. */
- u32 *ptr, fill_ins = 0xe7ffffff;
+ u32 *ptr;
/* We are guaranteed to have aligned memory. */
for (ptr = area; size >= sizeof(u32); size -= sizeof(u32))
- *ptr++ = fill_ins;
+ *ptr++ = __opcode_to_mem_arm(ARM_INST_UDF);
}
static void build_prologue(struct jit_ctx *ctx)
diff --git a/arch/arm/net/bpf_jit_32.h b/arch/arm/net/bpf_jit_32.h
index afb8462..b2d7d92 100644
--- a/arch/arm/net/bpf_jit_32.h
+++ b/arch/arm/net/bpf_jit_32.h
@@ -114,6 +114,20 @@
#define ARM_INST_UMULL 0x00800090
+/*
+ * Use a suitable undefined instruction to use for ARM/Thumb2 faulting.
+ * We need to be careful not to conflict with those used by other modules
+ * (BUG, kprobes, etc) and the register_undef_hook() system.
+ *
+ * The ARM architecture reference manual guarantees that the following
+ * instruction space will produce an undefined instruction exception on
+ * all CPUs:
+ *
+ * ARM: xxxx 0111 1111 xxxx xxxx xxxx 1111 xxxx ARMv7-AR, section A5.4
+ * Thumb: 1101 1110 xxxx xxxx ARMv7-M, section A5.2.6
+ */
+#define ARM_INST_UDF 0xe7fddef1
+
/* register */
#define _AL3_R(op, rd, rn, rm) ((op ## _R) | (rd) << 12 | (rn) << 16 | (rm))
/* immediate */
diff --git a/arch/arm/plat-orion/common.c b/arch/arm/plat-orion/common.c
index 3ec6e8e..f5b00f4 100644
--- a/arch/arm/plat-orion/common.c
+++ b/arch/arm/plat-orion/common.c
@@ -499,7 +499,7 @@
d->netdev = &orion_ge00.dev;
for (i = 0; i < d->nr_chips; i++)
- d->chip[i].mii_bus = &orion_ge00_shared.dev;
+ d->chip[i].host_dev = &orion_ge00_shared.dev;
orion_switch_device.dev.platform_data = d;
platform_device_register(&orion_switch_device);
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index 1296952..1f85bfe 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o
+obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o mm32.o
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 98544c5..0e15f01 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -260,6 +260,12 @@
xen_domain_type = XEN_HVM_DOMAIN;
xen_setup_features();
+
+ if (!xen_feature(XENFEAT_grant_map_identity)) {
+ pr_warn("Please upgrade your Xen.\n"
+ "If your platform has any non-coherent DMA devices, they won't work properly.\n");
+ }
+
if (xen_feature(XENFEAT_dom0))
xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
else
diff --git a/arch/arm/xen/mm32.c b/arch/arm/xen/mm32.c
new file mode 100644
index 0000000..3b99860
--- /dev/null
+++ b/arch/arm/xen/mm32.c
@@ -0,0 +1,202 @@
+#include <linux/cpu.h>
+#include <linux/dma-mapping.h>
+#include <linux/gfp.h>
+#include <linux/highmem.h>
+
+#include <xen/features.h>
+
+static DEFINE_PER_CPU(unsigned long, xen_mm32_scratch_virt);
+static DEFINE_PER_CPU(pte_t *, xen_mm32_scratch_ptep);
+
+static int alloc_xen_mm32_scratch_page(int cpu)
+{
+ struct page *page;
+ unsigned long virt;
+ pmd_t *pmdp;
+ pte_t *ptep;
+
+ if (per_cpu(xen_mm32_scratch_ptep, cpu) != NULL)
+ return 0;
+
+ page = alloc_page(GFP_KERNEL);
+ if (page == NULL) {
+ pr_warn("Failed to allocate xen_mm32_scratch_page for cpu %d\n", cpu);
+ return -ENOMEM;
+ }
+
+ virt = (unsigned long)__va(page_to_phys(page));
+ pmdp = pmd_offset(pud_offset(pgd_offset_k(virt), virt), virt);
+ ptep = pte_offset_kernel(pmdp, virt);
+
+ per_cpu(xen_mm32_scratch_virt, cpu) = virt;
+ per_cpu(xen_mm32_scratch_ptep, cpu) = ptep;
+
+ return 0;
+}
+
+static int xen_mm32_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ int cpu = (long)hcpu;
+ switch (action) {
+ case CPU_UP_PREPARE:
+ if (alloc_xen_mm32_scratch_page(cpu))
+ return NOTIFY_BAD;
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block xen_mm32_cpu_notifier = {
+ .notifier_call = xen_mm32_cpu_notify,
+};
+
+static void* xen_mm32_remap_page(dma_addr_t handle)
+{
+ unsigned long virt = get_cpu_var(xen_mm32_scratch_virt);
+ pte_t *ptep = __get_cpu_var(xen_mm32_scratch_ptep);
+
+ *ptep = pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL);
+ local_flush_tlb_kernel_page(virt);
+
+ return (void*)virt;
+}
+
+static void xen_mm32_unmap(void *vaddr)
+{
+ put_cpu_var(xen_mm32_scratch_virt);
+}
+
+
+/* functions called by SWIOTLB */
+
+static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
+ size_t size, enum dma_data_direction dir,
+ void (*op)(const void *, size_t, int))
+{
+ unsigned long pfn;
+ size_t left = size;
+
+ pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
+ offset %= PAGE_SIZE;
+
+ do {
+ size_t len = left;
+ void *vaddr;
+
+ if (!pfn_valid(pfn))
+ {
+ /* Cannot map the page, we don't know its physical address.
+ * Return and hope for the best */
+ if (!xen_feature(XENFEAT_grant_map_identity))
+ return;
+ vaddr = xen_mm32_remap_page(handle) + offset;
+ op(vaddr, len, dir);
+ xen_mm32_unmap(vaddr - offset);
+ } else {
+ struct page *page = pfn_to_page(pfn);
+
+ if (PageHighMem(page)) {
+ if (len + offset > PAGE_SIZE)
+ len = PAGE_SIZE - offset;
+
+ if (cache_is_vipt_nonaliasing()) {
+ vaddr = kmap_atomic(page);
+ op(vaddr + offset, len, dir);
+ kunmap_atomic(vaddr);
+ } else {
+ vaddr = kmap_high_get(page);
+ if (vaddr) {
+ op(vaddr + offset, len, dir);
+ kunmap_high(page);
+ }
+ }
+ } else {
+ vaddr = page_address(page) + offset;
+ op(vaddr, len, dir);
+ }
+ }
+
+ offset = 0;
+ pfn++;
+ left -= len;
+ } while (left);
+}
+
+static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir)
+{
+ /* Cannot use __dma_page_dev_to_cpu because we don't have a
+ * struct page for handle */
+
+ if (dir != DMA_TO_DEVICE)
+ outer_inv_range(handle, handle + size);
+
+ dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area);
+}
+
+static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir)
+{
+
+ dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area);
+
+ if (dir == DMA_FROM_DEVICE) {
+ outer_inv_range(handle, handle + size);
+ } else {
+ outer_clean_range(handle, handle + size);
+ }
+}
+
+void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+
+{
+ if (!__generic_dma_ops(hwdev)->unmap_page)
+ return;
+ if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ return;
+
+ __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
+}
+
+void xen_dma_sync_single_for_cpu(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ if (!__generic_dma_ops(hwdev)->sync_single_for_cpu)
+ return;
+ __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
+}
+
+void xen_dma_sync_single_for_device(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ if (!__generic_dma_ops(hwdev)->sync_single_for_device)
+ return;
+ __xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
+}
+
+int __init xen_mm32_init(void)
+{
+ int cpu;
+
+ if (!xen_initial_domain())
+ return 0;
+
+ register_cpu_notifier(&xen_mm32_cpu_notifier);
+ get_online_cpus();
+ for_each_online_cpu(cpu) {
+ if (alloc_xen_mm32_scratch_page(cpu)) {
+ put_online_cpus();
+ unregister_cpu_notifier(&xen_mm32_cpu_notifier);
+ return -ENOMEM;
+ }
+ }
+ put_online_cpus();
+
+ return 0;
+}
+arch_initcall(xen_mm32_init);
diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
index 97baf44..0548577 100644
--- a/arch/arm/xen/p2m.c
+++ b/arch/arm/xen/p2m.c
@@ -21,14 +21,12 @@
unsigned long pfn;
unsigned long mfn;
unsigned long nr_pages;
- struct rb_node rbnode_mach;
struct rb_node rbnode_phys;
};
static rwlock_t p2m_lock;
struct rb_root phys_to_mach = RB_ROOT;
EXPORT_SYMBOL_GPL(phys_to_mach);
-static struct rb_root mach_to_phys = RB_ROOT;
static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
{
@@ -41,8 +39,6 @@
parent = *link;
entry = rb_entry(parent, struct xen_p2m_entry, rbnode_phys);
- if (new->mfn == entry->mfn)
- goto err_out;
if (new->pfn == entry->pfn)
goto err_out;
@@ -88,64 +84,6 @@
}
EXPORT_SYMBOL_GPL(__pfn_to_mfn);
-static int xen_add_mach_to_phys_entry(struct xen_p2m_entry *new)
-{
- struct rb_node **link = &mach_to_phys.rb_node;
- struct rb_node *parent = NULL;
- struct xen_p2m_entry *entry;
- int rc = 0;
-
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct xen_p2m_entry, rbnode_mach);
-
- if (new->mfn == entry->mfn)
- goto err_out;
- if (new->pfn == entry->pfn)
- goto err_out;
-
- if (new->mfn < entry->mfn)
- link = &(*link)->rb_left;
- else
- link = &(*link)->rb_right;
- }
- rb_link_node(&new->rbnode_mach, parent, link);
- rb_insert_color(&new->rbnode_mach, &mach_to_phys);
- goto out;
-
-err_out:
- rc = -EINVAL;
- pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n",
- __func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn);
-out:
- return rc;
-}
-
-unsigned long __mfn_to_pfn(unsigned long mfn)
-{
- struct rb_node *n = mach_to_phys.rb_node;
- struct xen_p2m_entry *entry;
- unsigned long irqflags;
-
- read_lock_irqsave(&p2m_lock, irqflags);
- while (n) {
- entry = rb_entry(n, struct xen_p2m_entry, rbnode_mach);
- if (entry->mfn <= mfn &&
- entry->mfn + entry->nr_pages > mfn) {
- read_unlock_irqrestore(&p2m_lock, irqflags);
- return entry->pfn + (mfn - entry->mfn);
- }
- if (mfn < entry->mfn)
- n = n->rb_left;
- else
- n = n->rb_right;
- }
- read_unlock_irqrestore(&p2m_lock, irqflags);
-
- return INVALID_P2M_ENTRY;
-}
-EXPORT_SYMBOL_GPL(__mfn_to_pfn);
-
int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
struct gnttab_map_grant_ref *kmap_ops,
struct page **pages, unsigned int count)
@@ -192,7 +130,6 @@
p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
if (p2m_entry->pfn <= pfn &&
p2m_entry->pfn + p2m_entry->nr_pages > pfn) {
- rb_erase(&p2m_entry->rbnode_mach, &mach_to_phys);
rb_erase(&p2m_entry->rbnode_phys, &phys_to_mach);
write_unlock_irqrestore(&p2m_lock, irqflags);
kfree(p2m_entry);
@@ -217,8 +154,7 @@
p2m_entry->mfn = mfn;
write_lock_irqsave(&p2m_lock, irqflags);
- if ((rc = xen_add_phys_to_mach_entry(p2m_entry) < 0) ||
- (rc = xen_add_mach_to_phys_entry(p2m_entry) < 0)) {
+ if ((rc = xen_add_phys_to_mach_entry(p2m_entry)) < 0) {
write_unlock_irqrestore(&p2m_lock, irqflags);
return false;
}
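
With the mach_to_phys tree gone, __pfn_to_mfn keeps the same rbtree
range-lookup pattern the deleted __mfn_to_pfn used: each node covers
[pfn, pfn + nr_pages), and the walk descends until a covering node is
found or the tree runs out. A minimal stand-alone sketch of that
pattern, reusing the xen_p2m_entry layout from the hunk above (the
helper name is hypothetical):

#include <linux/rbtree.h>

/* Hypothetical helper: find the entry whose range covers @pfn.
 * Caller holds p2m_lock; NULL means no mapping, which __pfn_to_mfn
 * reports as INVALID_P2M_ENTRY. */
static struct xen_p2m_entry *p2m_range_lookup(struct rb_root *root,
					      unsigned long pfn)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct xen_p2m_entry *entry =
			rb_entry(n, struct xen_p2m_entry, rbnode_phys);

		if (entry->pfn <= pfn && pfn < entry->pfn + entry->nr_pages)
			return entry;		/* pfn inside this range */
		if (pfn < entry->pfn)
			n = n->rb_left;
		else
			n = n->rb_right;
	}
	return NULL;
}
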
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 0f08dfd..dfa6e3e 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -97,19 +97,15 @@
if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
return false;
- if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids)
+ if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
+ affinity = cpu_online_mask;
ret = true;
+ }
- /*
- * when using forced irq_set_affinity we must ensure that the cpu
- * being offlined is not present in the affinity mask, it may be
- * selected as the target CPU otherwise
- */
- affinity = cpu_online_mask;
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
- else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
+ else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
cpumask_copy(d->affinity, affinity);
return ret;
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 1309d64..29d4869 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -230,9 +230,27 @@
{
}
+static void tls_thread_flush(void)
+{
+ asm ("msr tpidr_el0, xzr");
+
+ if (is_compat_task()) {
+ current->thread.tp_value = 0;
+
+ /*
+ * We need to ensure ordering between the shadow state and the
+ * hardware state, so that we don't corrupt the hardware state
+ * with a stale shadow state during context switch.
+ */
+ barrier();
+ asm ("msr tpidrro_el0, xzr");
+ }
+}
+
void flush_thread(void)
{
fpsimd_flush_thread();
+ tls_thread_flush();
flush_ptrace_hw_breakpoint(current);
}
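
The barrier() in tls_thread_flush is a compiler barrier, not a CPU
one: the shadow copy in current->thread.tp_value and the msr are
executed by the same CPU, so the only hazard is the compiler
reordering the store against the inline asm. A hedged stand-alone
illustration of the pattern (names are stand-ins; barrier() is
spelled out as its usual kernel definition):

/* Compiler barrier as the kernel defines it: an empty asm with a
 * "memory" clobber that no memory access may be reordered across. */
#define barrier()	asm volatile("" ::: "memory")

static unsigned long shadow_tp;		/* stand-in for thread.tp_value */

static void set_compat_tp(unsigned long val)
{
	shadow_tp = val;	/* update the software-visible copy first */
	barrier();		/* the store above must be complete... */
	/* ...before the hardware register changes, mirroring the
	 * __ARM_NR_compat_set_tls path in sys_compat.c below */
	asm volatile("msr tpidrro_el0, %0" : : "r" (val));
}
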
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index de2b022..dc47e53 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -79,6 +79,12 @@
case __ARM_NR_compat_set_tls:
current->thread.tp_value = regs->regs[0];
+
+ /*
+ * Protect against register corruption from context switch.
+ * See comment in tls_thread_flush.
+ */
+ barrier();
asm ("msr tpidrro_el0, %0" : : "r" (regs->regs[0]));
return 0;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 5472c24..a83061f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -149,8 +149,7 @@
memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
#endif
- if (!efi_enabled(EFI_MEMMAP))
- early_init_fdt_scan_reserved_mem();
+ early_init_fdt_scan_reserved_mem();
/* 4GB maximum for 32-bit only capable devices */
if (IS_ENABLED(CONFIG_ZONE_DMA))
diff --git a/arch/ia64/configs/bigsur_defconfig b/arch/ia64/configs/bigsur_defconfig
index 4c4ac16..b6bda18 100644
--- a/arch/ia64/configs/bigsur_defconfig
+++ b/arch/ia64/configs/bigsur_defconfig
@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=16
@@ -6,6 +5,8 @@
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
CONFIG_IA64_DIG=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
@@ -51,9 +52,6 @@
CONFIG_DM_ZERO=m
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=y
-CONFIG_NET_PCI=y
CONFIG_INPUT_EVDEV=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
@@ -85,7 +83,6 @@
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
-CONFIG_AUTOFS_FS=m
CONFIG_AUTOFS4_FS=m
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
@@ -95,17 +92,13 @@
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
CONFIG_CIFS=m
CONFIG_CIFS_STATS=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_UTF8=m
diff --git a/arch/ia64/configs/generic_defconfig b/arch/ia64/configs/generic_defconfig
index e8ed3ae..81f686d 100644
--- a/arch/ia64/configs/generic_defconfig
+++ b/arch/ia64/configs/generic_defconfig
@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IKCONFIG=y
@@ -6,13 +5,13 @@
CONFIG_LOG_BUF_SHIFT=20
CONFIG_CGROUPS=y
CONFIG_CPUSETS=y
-CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_KALLSYMS_ALL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
-# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
CONFIG_MCKINLEY=y
CONFIG_IA64_PAGE_SIZE_64KB=y
CONFIG_IA64_CYCLONE=y
@@ -29,14 +28,13 @@
CONFIG_ACPI_FAN=m
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=m
-CONFIG_ACPI_CONTAINER=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
-CONFIG_ARPD=y
CONFIG_SYN_COOKIES=y
# CONFIG_IPV6 is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
@@ -82,16 +80,13 @@
CONFIG_FUSION_SAS=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=m
-CONFIG_NET_ETHERNET=y
+CONFIG_NETCONSOLE=y
+CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=m
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=y
CONFIG_IGB=y
-CONFIG_TIGON3=y
-CONFIG_NETCONSOLE=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_GAMEPORT=m
CONFIG_SERIAL_NONSTANDARD=y
@@ -151,6 +146,7 @@
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_MTHCA=m
CONFIG_INFINIBAND_IPOIB=m
+CONFIG_INTEL_IOMMU=y
CONFIG_MSPEC=m
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
@@ -164,7 +160,6 @@
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_XFS_FS=y
-CONFIG_AUTOFS_FS=m
CONFIG_AUTOFS4_FS=m
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
@@ -175,16 +170,10 @@
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
-CONFIG_SMB_FS=m
-CONFIG_SMB_NLS_DEFAULT=y
CONFIG_CIFS=m
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
@@ -225,11 +214,7 @@
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
-CONFIG_SYSCTL_SYSCALL_CHECK=y
-CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRC_T10DIF=y
-CONFIG_INTEL_IOMMU=y
diff --git a/arch/ia64/configs/gensparse_defconfig b/arch/ia64/configs/gensparse_defconfig
index d663efd..5b4fcdd 100644
--- a/arch/ia64/configs/gensparse_defconfig
+++ b/arch/ia64/configs/gensparse_defconfig
@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IKCONFIG=y
@@ -9,6 +8,8 @@
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
CONFIG_MCKINLEY=y
CONFIG_IA64_CYCLONE=y
CONFIG_SMP=y
@@ -24,14 +25,12 @@
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
CONFIG_ACPI_PROCESSOR=m
-CONFIG_ACPI_CONTAINER=m
CONFIG_HOTPLUG_PCI=y
-CONFIG_HOTPLUG_PCI_ACPI=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
-CONFIG_ARPD=y
CONFIG_SYN_COOKIES=y
# CONFIG_IPV6 is not set
CONFIG_BLK_DEV_LOOP=m
@@ -71,15 +70,12 @@
CONFIG_FUSION_FC=m
CONFIG_NETDEVICES=y
CONFIG_DUMMY=m
-CONFIG_NET_ETHERNET=y
+CONFIG_NETCONSOLE=y
+CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=m
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=y
-CONFIG_TIGON3=y
-CONFIG_NETCONSOLE=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_GAMEPORT=m
CONFIG_SERIAL_NONSTANDARD=y
@@ -146,7 +142,6 @@
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_XFS_FS=y
-CONFIG_AUTOFS_FS=y
CONFIG_AUTOFS4_FS=y
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
@@ -157,16 +152,10 @@
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
-CONFIG_SMB_FS=m
-CONFIG_SMB_NLS_DEFAULT=y
CONFIG_CIFS=m
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
diff --git a/arch/ia64/configs/sim_defconfig b/arch/ia64/configs/sim_defconfig
index b4548a3..f0f69fd 100644
--- a/arch/ia64/configs/sim_defconfig
+++ b/arch/ia64/configs/sim_defconfig
@@ -1,13 +1,12 @@
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y
+CONFIG_PARTITION_ADVANCED=y
CONFIG_IA64_HP_SIM=y
CONFIG_MCKINLEY=y
CONFIG_IA64_PAGE_SIZE_64KB=y
@@ -27,7 +26,6 @@
CONFIG_BLK_DEV_RAM=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
-CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SPI_ATTRS=y
@@ -49,8 +47,6 @@
CONFIG_NFS_FS=y
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_EFI_PARTITION=y
+CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
-CONFIG_DEBUG_INFO=y
diff --git a/arch/ia64/configs/tiger_defconfig b/arch/ia64/configs/tiger_defconfig
index c8a3f40..192ed15 100644
--- a/arch/ia64/configs/tiger_defconfig
+++ b/arch/ia64/configs/tiger_defconfig
@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IKCONFIG=y
@@ -11,6 +10,8 @@
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
CONFIG_IA64_DIG=y
CONFIG_MCKINLEY=y
CONFIG_IA64_PAGE_SIZE_64KB=y
@@ -29,14 +30,12 @@
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
CONFIG_ACPI_PROCESSOR=m
-CONFIG_ACPI_CONTAINER=m
CONFIG_HOTPLUG_PCI=y
-CONFIG_HOTPLUG_PCI_ACPI=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
-CONFIG_ARPD=y
CONFIG_SYN_COOKIES=y
# CONFIG_IPV6 is not set
CONFIG_BLK_DEV_LOOP=m
@@ -53,6 +52,7 @@
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
+CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_MD=y
@@ -72,15 +72,12 @@
CONFIG_FUSION_CTL=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=m
-CONFIG_NET_ETHERNET=y
+CONFIG_NETCONSOLE=y
+CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=m
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=y
-CONFIG_TIGON3=y
-CONFIG_NETCONSOLE=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_GAMEPORT=m
CONFIG_SERIAL_NONSTANDARD=y
@@ -118,7 +115,6 @@
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_XFS_FS=y
-CONFIG_AUTOFS_FS=y
CONFIG_AUTOFS4_FS=y
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
@@ -129,16 +125,10 @@
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
-CONFIG_SMB_FS=m
-CONFIG_SMB_NLS_DEFAULT=y
CONFIG_CIFS=m
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
@@ -180,6 +170,5 @@
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_IA64_GRANULE_16MB=y
-CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_MD5=y
diff --git a/arch/ia64/configs/zx1_defconfig b/arch/ia64/configs/zx1_defconfig
index 54bc72e..b504c8e 100644
--- a/arch/ia64/configs/zx1_defconfig
+++ b/arch/ia64/configs/zx1_defconfig
@@ -1,9 +1,9 @@
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
+CONFIG_PARTITION_ADVANCED=y
CONFIG_IA64_HP_ZX1=y
CONFIG_MCKINLEY=y
CONFIG_SMP=y
@@ -18,6 +18,7 @@
CONFIG_BINFMT_MISC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
@@ -37,9 +38,9 @@
CONFIG_BLK_DEV_SR=y
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=y
-CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_FUSION=y
@@ -48,18 +49,15 @@
CONFIG_FUSION_CTL=m
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
-CONFIG_NET_ETHERNET=y
+CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=y
CONFIG_TULIP_MWI=y
CONFIG_TULIP_MMIO=y
CONFIG_TULIP_NAPI=y
CONFIG_TULIP_NAPI_HW_MITIGATION=y
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=y
CONFIG_E1000=y
-CONFIG_TIGON3=y
CONFIG_INPUT_JOYDEV=y
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_KEYBOARD is not set
@@ -100,7 +98,6 @@
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT3_FS=y
-CONFIG_AUTOFS_FS=y
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_UDF_FS=y
@@ -110,12 +107,9 @@
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=y
-CONFIG_NFS_V3=y
CONFIG_NFS_V4=y
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_EFI_PARTITION=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y
diff --git a/arch/ia64/include/uapi/asm/unistd.h b/arch/ia64/include/uapi/asm/unistd.h
index 6a65bb7..18026b2 100644
--- a/arch/ia64/include/uapi/asm/unistd.h
+++ b/arch/ia64/include/uapi/asm/unistd.h
@@ -329,6 +329,6 @@
#define __NR_sched_getattr 1337
#define __NR_renameat2 1338
#define __NR_getrandom 1339
-#define __NR_memfd_create 1339
+#define __NR_memfd_create 1340
#endif /* _UAPI_ASM_IA64_UNISTD_H */
diff --git a/arch/ia64/pci/fixup.c b/arch/ia64/pci/fixup.c
index ec73b2c..fc505d5 100644
--- a/arch/ia64/pci/fixup.c
+++ b/arch/ia64/pci/fixup.c
@@ -38,27 +38,6 @@
return;
/* Maybe, this machine supports legacy memory map. */
- if (!vga_default_device()) {
- resource_size_t start, end;
- int i;
-
- /* Does firmware framebuffer belong to us? */
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM))
- continue;
-
- start = pci_resource_start(pdev, i);
- end = pci_resource_end(pdev, i);
-
- if (!start || !end)
- continue;
-
- if (screen_info.lfb_base >= start &&
- (screen_info.lfb_base + screen_info.lfb_size) < end)
- vga_set_default_device(pdev);
- }
- }
-
/* Is VGA routed to us? */
bus = pdev->bus;
while (bus) {
@@ -83,8 +62,7 @@
pci_read_config_word(pdev, PCI_COMMAND, &config);
if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
- dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
- vga_set_default_device(pdev);
+ dev_printk(KERN_DEBUG, &pdev->dev, "Video device with shadowed ROM\n");
}
}
}
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 40e1c1d..6feded3 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -127,7 +127,7 @@
endmenu
-menu "Advanced setup"
+menu "Kernel features"
config ADVANCED_OPTIONS
bool "Prompt for advanced kernel configuration options"
@@ -248,10 +248,10 @@
endchoice
-endmenu
-
source "mm/Kconfig"
+endmenu
+
menu "Executable file formats"
source "fs/Kconfig.binfmt"
diff --git a/arch/microblaze/include/asm/entry.h b/arch/microblaze/include/asm/entry.h
index b4a4cb1..596e485 100644
--- a/arch/microblaze/include/asm/entry.h
+++ b/arch/microblaze/include/asm/entry.h
@@ -15,6 +15,7 @@
#include <asm/percpu.h>
#include <asm/ptrace.h>
+#include <linux/linkage.h>
/*
* These are per-cpu variables required in entry.S, among other
diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
index 0aa0057..59a89a6 100644
--- a/arch/microblaze/include/asm/uaccess.h
+++ b/arch/microblaze/include/asm/uaccess.h
@@ -98,13 +98,13 @@
if ((get_fs().seg < ((unsigned long)addr)) ||
(get_fs().seg < ((unsigned long)addr + size - 1))) {
- pr_debug("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
+ pr_devel("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 0;
}
ok:
- pr_debug("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
+ pr_devel("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 1;
diff --git a/arch/microblaze/include/asm/unistd.h b/arch/microblaze/include/asm/unistd.h
index fd56a8f..ea4b233 100644
--- a/arch/microblaze/include/asm/unistd.h
+++ b/arch/microblaze/include/asm/unistd.h
@@ -38,6 +38,6 @@
#endif /* __ASSEMBLY__ */
-#define __NR_syscalls 381
+#define __NR_syscalls 387
#endif /* _ASM_MICROBLAZE_UNISTD_H */
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 900c7e5..574c430 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -546,6 +546,7 @@
# select SYS_HAS_EARLY_PRINTK
select SYS_SUPPORTS_64BIT_KERNEL
select SYS_SUPPORTS_BIG_ENDIAN
+ select MIPS_L1_CACHE_SHIFT_7
help
This is the SGI Indigo2 with R10000 processor. To compile a Linux
kernel that runs on these, say Y here.
@@ -2029,7 +2030,9 @@
bool "MIPS CMP framework support (DEPRECATED)"
depends on SYS_SUPPORTS_MIPS_CMP
select MIPS_GIC_IPI
+ select SMP
select SYNC_R4K
+ select SYS_SUPPORTS_SMP
select WEAK_ORDERING
default n
help
diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index 9336509..bbac51e1 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -113,7 +113,16 @@
cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' && echo -EB $(undef-all) $(predef-be))
cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' || echo -EL $(undef-all) $(predef-le))
-cflags-$(CONFIG_CPU_HAS_SMARTMIPS) += $(call cc-option,-msmartmips)
+# For smartmips configurations, there are hundreds of warnings due to ISA overrides
+# in assembly and header files. smartmips is only supported for MIPS32r1 onwards
+# and there is no support for 64-bit. Various '.set mips2' or '.set mips3' or
+# similar directives in the kernel will spam the build logs with the following warnings:
+# Warning: the `smartmips' extension requires MIPS32 revision 1 or greater
+# or
+# Warning: the 64-bit MIPS architecture does not support the `smartmips' extension
+# Pass -Wa,--no-warn to disable all assembler warnings until the kernel code has
+# been fixed properly.
+cflags-$(CONFIG_CPU_HAS_SMARTMIPS) += $(call cc-option,-msmartmips) -Wa,--no-warn
cflags-$(CONFIG_CPU_MICROMIPS) += $(call cc-option,-mmicromips)
cflags-$(CONFIG_SB1XXX_CORELIS) += $(call cc-option,-mno-sched-prolog) \
diff --git a/arch/mips/bcm47xx/setup.c b/arch/mips/bcm47xx/setup.c
index ad439c2..c00585d 100644
--- a/arch/mips/bcm47xx/setup.c
+++ b/arch/mips/bcm47xx/setup.c
@@ -211,6 +211,10 @@
err = bcma_host_soc_register(&bcm47xx_bus.bcma);
if (err)
+ panic("Failed to register BCMA bus (err %d)", err);
+
+ err = bcma_host_soc_init(&bcm47xx_bus.bcma);
+ if (err)
panic("Failed to initialize BCMA bus (err %d)", err);
bcm47xx_fill_bcma_boardinfo(&bcm47xx_bus.bcma.bus.boardinfo, NULL);
diff --git a/arch/mips/bcm63xx/irq.c b/arch/mips/bcm63xx/irq.c
index 37eb2d1..b94bf44d 100644
--- a/arch/mips/bcm63xx/irq.c
+++ b/arch/mips/bcm63xx/irq.c
@@ -434,7 +434,7 @@
irq_stat_addr[0] += PERF_IRQSTAT_3368_REG;
irq_mask_addr[0] += PERF_IRQMASK_3368_REG;
irq_stat_addr[1] = 0;
- irq_stat_addr[1] = 0;
+ irq_mask_addr[1] = 0;
irq_bits = 32;
ext_irq_count = 4;
ext_irq_cfg_reg1 = PERF_EXTIRQ_CFG_REG_3368;
@@ -443,7 +443,7 @@
irq_stat_addr[0] += PERF_IRQSTAT_6328_REG(0);
irq_mask_addr[0] += PERF_IRQMASK_6328_REG(0);
irq_stat_addr[1] += PERF_IRQSTAT_6328_REG(1);
- irq_stat_addr[1] += PERF_IRQMASK_6328_REG(1);
+ irq_mask_addr[1] += PERF_IRQMASK_6328_REG(1);
irq_bits = 64;
ext_irq_count = 4;
is_ext_irq_cascaded = 1;
diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c
index b49c7ad..31903cf 100644
--- a/arch/mips/boot/compressed/decompress.c
+++ b/arch/mips/boot/compressed/decompress.c
@@ -13,6 +13,7 @@
#include <linux/types.h>
#include <linux/kernel.h>
+#include <linux/string.h>
#include <asm/addrspace.h>
diff --git a/arch/mips/configs/gpr_defconfig b/arch/mips/configs/gpr_defconfig
index 8f219da..e24feb0 100644
--- a/arch/mips/configs/gpr_defconfig
+++ b/arch/mips/configs/gpr_defconfig
@@ -19,6 +19,7 @@
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PCI=y
CONFIG_BINFMT_MISC=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
diff --git a/arch/mips/configs/ip27_defconfig b/arch/mips/configs/ip27_defconfig
index cc07560..48e16d9 100644
--- a/arch/mips/configs/ip27_defconfig
+++ b/arch/mips/configs/ip27_defconfig
@@ -28,6 +28,7 @@
CONFIG_MIPS32_O32=y
CONFIG_MIPS32_N32=y
CONFIG_PM=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/jazz_defconfig b/arch/mips/configs/jazz_defconfig
index 2575302..4f37a59 100644
--- a/arch/mips/configs/jazz_defconfig
+++ b/arch/mips/configs/jazz_defconfig
@@ -18,6 +18,7 @@
CONFIG_MODVERSIONS=y
CONFIG_BINFMT_MISC=m
CONFIG_PM=y
+CONFIG_NET=y
CONFIG_PACKET=m
CONFIG_UNIX=y
CONFIG_NET_KEY=m
diff --git a/arch/mips/configs/loongson3_defconfig b/arch/mips/configs/loongson3_defconfig
index 4cb787f..1c6191e 100644
--- a/arch/mips/configs/loongson3_defconfig
+++ b/arch/mips/configs/loongson3_defconfig
@@ -59,6 +59,7 @@
CONFIG_MIPS32_O32=y
CONFIG_MIPS32_N32=y
CONFIG_PM_RUNTIME=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=y
diff --git a/arch/mips/configs/malta_defconfig b/arch/mips/configs/malta_defconfig
index e18741e..f57b96d 100644
--- a/arch/mips/configs/malta_defconfig
+++ b/arch/mips/configs/malta_defconfig
@@ -19,6 +19,7 @@
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_PCI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/malta_kvm_defconfig b/arch/mips/configs/malta_kvm_defconfig
index cf0e01f..d41742d 100644
--- a/arch/mips/configs/malta_kvm_defconfig
+++ b/arch/mips/configs/malta_kvm_defconfig
@@ -20,6 +20,7 @@
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_PCI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/malta_kvm_guest_defconfig b/arch/mips/configs/malta_kvm_guest_defconfig
index edd9ec9..a7806e8 100644
--- a/arch/mips/configs/malta_kvm_guest_defconfig
+++ b/arch/mips/configs/malta_kvm_guest_defconfig
@@ -19,6 +19,7 @@
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_PCI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/mtx1_defconfig b/arch/mips/configs/mtx1_defconfig
index d269a53..9b6926d 100644
--- a/arch/mips/configs/mtx1_defconfig
+++ b/arch/mips/configs/mtx1_defconfig
@@ -27,6 +27,7 @@
CONFIG_I82092=m
CONFIG_BINFMT_MISC=m
CONFIG_PM=y
+CONFIG_NET=y
CONFIG_PACKET=m
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/nlm_xlp_defconfig b/arch/mips/configs/nlm_xlp_defconfig
index 2f660e9..70509a4 100644
--- a/arch/mips/configs/nlm_xlp_defconfig
+++ b/arch/mips/configs/nlm_xlp_defconfig
@@ -63,6 +63,7 @@
CONFIG_MIPS32_N32=y
CONFIG_PM_RUNTIME=y
CONFIG_PM_DEBUG=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/nlm_xlr_defconfig b/arch/mips/configs/nlm_xlr_defconfig
index c6f8465..82207e8 100644
--- a/arch/mips/configs/nlm_xlr_defconfig
+++ b/arch/mips/configs/nlm_xlr_defconfig
@@ -43,6 +43,7 @@
CONFIG_BINFMT_MISC=m
CONFIG_PM_RUNTIME=y
CONFIG_PM_DEBUG=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/mips/configs/rm200_defconfig b/arch/mips/configs/rm200_defconfig
index 29d79ae..db029f4 100644
--- a/arch/mips/configs/rm200_defconfig
+++ b/arch/mips/configs/rm200_defconfig
@@ -20,6 +20,7 @@
CONFIG_PCI=y
CONFIG_BINFMT_MISC=m
CONFIG_PM=y
+CONFIG_NET=y
CONFIG_PACKET=m
CONFIG_UNIX=y
CONFIG_NET_KEY=m
diff --git a/arch/mips/include/asm/cop2.h b/arch/mips/include/asm/cop2.h
index d035298..51f80bd 100644
--- a/arch/mips/include/asm/cop2.h
+++ b/arch/mips/include/asm/cop2.h
@@ -16,8 +16,8 @@
extern void octeon_cop2_save(struct octeon_cop2_state *);
extern void octeon_cop2_restore(struct octeon_cop2_state *);
-#define cop2_save(r) octeon_cop2_save(r)
-#define cop2_restore(r) octeon_cop2_restore(r)
+#define cop2_save(r) octeon_cop2_save(&(r)->thread.cp2)
+#define cop2_restore(r) octeon_cop2_restore(&(r)->thread.cp2)
#define cop2_present 1
#define cop2_lazy_restore 1
@@ -26,26 +26,26 @@
extern void nlm_cop2_save(struct nlm_cop2_state *);
extern void nlm_cop2_restore(struct nlm_cop2_state *);
-#define cop2_save(r) nlm_cop2_save(r)
-#define cop2_restore(r) nlm_cop2_restore(r)
+
+#define cop2_save(r) nlm_cop2_save(&(r)->thread.cp2)
+#define cop2_restore(r) nlm_cop2_restore(&(r)->thread.cp2)
#define cop2_present 1
#define cop2_lazy_restore 0
#elif defined(CONFIG_CPU_LOONGSON3)
-#define cop2_save(r)
-#define cop2_restore(r)
-
#define cop2_present 1
#define cop2_lazy_restore 1
+#define cop2_save(r) do { (r); } while (0)
+#define cop2_restore(r) do { (r); } while (0)
#else
#define cop2_present 0
#define cop2_lazy_restore 0
-#define cop2_save(r)
-#define cop2_restore(r)
+#define cop2_save(r) do { (r); } while (0)
+#define cop2_restore(r) do { (r); } while (0)
#endif
enum cu2_ops {
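
The Loongson 3 and no-COP2 branches now expand cop2_save() and
cop2_restore() to do { (r); } while (0) instead of to nothing: the
macro stays a single statement (so it composes with if/else and a
trailing semicolon) and still evaluates its argument, which keeps
callers that only touch a variable through the macro free of
set-but-unused warnings. A hedged sketch of the difference (the stub
names are hypothetical; (void) is added to silence unused-value
warnings in a user-space build):

#define cop2_save_empty(r)			/* expands to nothing */
#define cop2_save_stub(r)	do { (void)(r); } while (0)

void demo(int cond, int *state)
{
	if (cond)
		cop2_save_stub(state);	/* one statement: if/else safe */
	else
		cop2_save_stub(state + 1);
	/* with cop2_save_empty() the branches above collapse to bare
	 * semicolons and "state" can end up set but never used */
}
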
diff --git a/arch/mips/include/asm/mach-ip28/spaces.h b/arch/mips/include/asm/mach-ip28/spaces.h
index 5d6a764..c4a9127 100644
--- a/arch/mips/include/asm/mach-ip28/spaces.h
+++ b/arch/mips/include/asm/mach-ip28/spaces.h
@@ -11,15 +11,8 @@
#ifndef _ASM_MACH_IP28_SPACES_H
#define _ASM_MACH_IP28_SPACES_H
-#define CAC_BASE _AC(0xa800000000000000, UL)
-
-#define HIGHMEM_START (~0UL)
-
#define PHYS_OFFSET _AC(0x20000000, UL)
-#define UNCAC_BASE _AC(0xc0000000, UL) /* 0xa0000000 + PHYS_OFFSET */
-#define IO_BASE UNCAC_BASE
-
#include <asm/mach-generic/spaces.h>
#endif /* _ASM_MACH_IP28_SPACES_H */
diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
index 5699ec3..3be8180 100644
--- a/arch/mips/include/asm/page.h
+++ b/arch/mips/include/asm/page.h
@@ -37,7 +37,7 @@
/*
* This is used for calculating the real page sizes
- * for FTLB or VTLB + FTLB confugrations.
+ * for FTLB or VTLB + FTLB configurations.
*/
static inline unsigned int page_size_ftlb(unsigned int mmuextdef)
{
@@ -223,7 +223,8 @@
#endif
-#define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(virt_to_phys(kaddr)))
+#define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(virt_to_phys((void *) \
+ (kaddr))))
extern int __virt_addr_valid(const volatile void *kaddr);
#define virt_addr_valid(kaddr) \
diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h
index 1e0f20a..eacf865 100644
--- a/arch/mips/include/asm/smp.h
+++ b/arch/mips/include/asm/smp.h
@@ -37,11 +37,6 @@
#define NO_PROC_ID (-1)
-#define topology_physical_package_id(cpu) (cpu_data[cpu].package)
-#define topology_core_id(cpu) (cpu_data[cpu].core)
-#define topology_core_cpumask(cpu) (&cpu_core_map[cpu])
-#define topology_thread_cpumask(cpu) (&cpu_sibling_map[cpu])
-
#define SMP_RESCHEDULE_YOURSELF 0x1 /* XXX braindead */
#define SMP_CALL_FUNCTION 0x2
/* Octeon - Tell another core to flush its icache */
diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h
index 495c104..b928b6f 100644
--- a/arch/mips/include/asm/switch_to.h
+++ b/arch/mips/include/asm/switch_to.h
@@ -92,7 +92,7 @@
KSTK_STATUS(prev) &= ~ST0_CU2; \
__c0_stat = read_c0_status(); \
write_c0_status(__c0_stat | ST0_CU2); \
- cop2_save(&prev->thread.cp2); \
+ cop2_save(prev); \
write_c0_status(__c0_stat & ~ST0_CU2); \
} \
__clear_software_ll_bit(); \
@@ -111,7 +111,7 @@
(KSTK_STATUS(current) & ST0_CU2)) { \
__c0_stat = read_c0_status(); \
write_c0_status(__c0_stat | ST0_CU2); \
- cop2_restore(&current->thread.cp2); \
+ cop2_restore(current); \
write_c0_status(__c0_stat & ~ST0_CU2); \
} \
if (cpu_has_dsp) \
diff --git a/arch/mips/include/asm/topology.h b/arch/mips/include/asm/topology.h
index 20ea485..3e307ec 100644
--- a/arch/mips/include/asm/topology.h
+++ b/arch/mips/include/asm/topology.h
@@ -9,5 +9,13 @@
#define __ASM_TOPOLOGY_H
#include <topology.h>
+#include <linux/smp.h>
+
+#ifdef CONFIG_SMP
+#define topology_physical_package_id(cpu) (cpu_data[cpu].package)
+#define topology_core_id(cpu) (cpu_data[cpu].core)
+#define topology_core_cpumask(cpu) (&cpu_core_map[cpu])
+#define topology_thread_cpumask(cpu) (&cpu_sibling_map[cpu])
+#endif
#endif /* __ASM_TOPOLOGY_H */
diff --git a/arch/mips/include/uapi/asm/unistd.h b/arch/mips/include/uapi/asm/unistd.h
index 9bc13ea..fdb4923 100644
--- a/arch/mips/include/uapi/asm/unistd.h
+++ b/arch/mips/include/uapi/asm/unistd.h
@@ -373,16 +373,18 @@
#define __NR_sched_getattr (__NR_Linux + 350)
#define __NR_renameat2 (__NR_Linux + 351)
#define __NR_seccomp (__NR_Linux + 352)
+#define __NR_getrandom (__NR_Linux + 353)
+#define __NR_memfd_create (__NR_Linux + 354)
/*
* Offset of the last Linux o32 flavoured syscall
*/
-#define __NR_Linux_syscalls 352
+#define __NR_Linux_syscalls 354
#endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */
#define __NR_O32_Linux 4000
-#define __NR_O32_Linux_syscalls 352
+#define __NR_O32_Linux_syscalls 354
#if _MIPS_SIM == _MIPS_SIM_ABI64
@@ -703,16 +705,18 @@
#define __NR_sched_getattr (__NR_Linux + 310)
#define __NR_renameat2 (__NR_Linux + 311)
#define __NR_seccomp (__NR_Linux + 312)
+#define __NR_getrandom (__NR_Linux + 313)
+#define __NR_memfd_create (__NR_Linux + 314)
/*
* Offset of the last Linux 64-bit flavoured syscall
*/
-#define __NR_Linux_syscalls 312
+#define __NR_Linux_syscalls 314
#endif /* _MIPS_SIM == _MIPS_SIM_ABI64 */
#define __NR_64_Linux 5000
-#define __NR_64_Linux_syscalls 312
+#define __NR_64_Linux_syscalls 314
#if _MIPS_SIM == _MIPS_SIM_NABI32
@@ -1037,15 +1041,17 @@
#define __NR_sched_getattr (__NR_Linux + 314)
#define __NR_renameat2 (__NR_Linux + 315)
#define __NR_seccomp (__NR_Linux + 316)
+#define __NR_getrandom (__NR_Linux + 317)
+#define __NR_memfd_create (__NR_Linux + 318)
/*
* Offset of the last N32 flavoured syscall
*/
-#define __NR_Linux_syscalls 316
+#define __NR_Linux_syscalls 318
#endif /* _MIPS_SIM == _MIPS_SIM_NABI32 */
#define __NR_N32_Linux 6000
-#define __NR_N32_Linux_syscalls 316
+#define __NR_N32_Linux_syscalls 318
#endif /* _UAPI_ASM_UNISTD_H */
diff --git a/arch/mips/kernel/machine_kexec.c b/arch/mips/kernel/machine_kexec.c
index 992e184..50980bf3 100644
--- a/arch/mips/kernel/machine_kexec.c
+++ b/arch/mips/kernel/machine_kexec.c
@@ -71,8 +71,12 @@
kexec_start_address =
(unsigned long) phys_to_virt(image->start);
- kexec_indirection_page =
- (unsigned long) phys_to_virt(image->head & PAGE_MASK);
+ if (image->type == KEXEC_TYPE_DEFAULT) {
+ kexec_indirection_page =
+ (unsigned long) phys_to_virt(image->head & PAGE_MASK);
+ } else {
+ kexec_indirection_page = (unsigned long)&image->head;
+ }
memcpy((void*)reboot_code_buffer, relocate_new_kernel,
relocate_new_kernel_size);
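
image->head only points at an indirection page for KEXEC_TYPE_DEFAULT
images; a crash image carries no relocation list, and its head is
just the IND_DONE terminator, so the fix hands the relocation stub
the address of image->head itself. A hedged sketch of the list the
stub walks in the default case (simplified: the walker name is
hypothetical and the real code operates on physical addresses via
phys_to_virt):

#include <linux/kexec.h>

static void walk_kimage_list(unsigned long *entry)
{
	void *dest = NULL;

	for (;;) {
		unsigned long e = *entry++;

		if (e & IND_DONE)
			break;		/* a crash image hits this first */
		else if (e & IND_DESTINATION)
			dest = (void *)(e & PAGE_MASK);
		else if (e & IND_INDIRECTION)
			entry = (unsigned long *)(e & PAGE_MASK);
		else if (e & IND_SOURCE) {
			/* copy one page from (e & PAGE_MASK) to dest */
			dest += PAGE_SIZE;
		}
	}
}
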
diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
index f93b4cb..744cd10 100644
--- a/arch/mips/kernel/scall32-o32.S
+++ b/arch/mips/kernel/scall32-o32.S
@@ -577,3 +577,5 @@
PTR sys_sched_getattr /* 4350 */
PTR sys_renameat2
PTR sys_seccomp
+ PTR sys_getrandom
+ PTR sys_memfd_create
diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S
index 03ebd99..002b1bc 100644
--- a/arch/mips/kernel/scall64-64.S
+++ b/arch/mips/kernel/scall64-64.S
@@ -432,4 +432,6 @@
PTR sys_sched_getattr /* 5310 */
PTR sys_renameat2
PTR sys_seccomp
+ PTR sys_getrandom
+ PTR sys_memfd_create
.size sys_call_table,.-sys_call_table
diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
index ebc9228..ca6cbbe 100644
--- a/arch/mips/kernel/scall64-n32.S
+++ b/arch/mips/kernel/scall64-n32.S
@@ -425,4 +425,6 @@
PTR sys_sched_getattr
PTR sys_renameat2 /* 6315 */
PTR sys_seccomp
+ PTR sys_getrandom
+ PTR sys_memfd_create
.size sysn32_call_table,.-sysn32_call_table
diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
index 25bb840..9e10d11 100644
--- a/arch/mips/kernel/scall64-o32.S
+++ b/arch/mips/kernel/scall64-o32.S
@@ -562,4 +562,6 @@
PTR sys_sched_getattr /* 4350 */
PTR sys_renameat2
PTR sys_seccomp
+ PTR sys_getrandom
+ PTR sys_memfd_create
.size sys32_call_table,.-sys32_call_table
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 571aab0..f42e35e 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -53,6 +53,7 @@
*/
unsigned long empty_zero_page, zero_page_mask;
EXPORT_SYMBOL_GPL(empty_zero_page);
+EXPORT_SYMBOL(zero_page_mask);
/*
* Not static inline because used by IP27 special magic initialization code
diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c
index 0e97ccd..64ecf9a 100644
--- a/arch/mips/net/bpf_jit.c
+++ b/arch/mips/net/bpf_jit.c
@@ -765,27 +765,6 @@
return (u64)err << 32 | ntohl(ret);
}
-#ifdef __BIG_ENDIAN_BITFIELD
-#define PKT_TYPE_MAX (7 << 5)
-#else
-#define PKT_TYPE_MAX 7
-#endif
-static int pkt_type_offset(void)
-{
- struct sk_buff skb_probe = {
- .pkt_type = ~0,
- };
- u8 *ct = (u8 *)&skb_probe;
- unsigned int off;
-
- for (off = 0; off < sizeof(struct sk_buff); off++) {
- if (ct[off] == PKT_TYPE_MAX)
- return off;
- }
- pr_err_once("Please fix pkt_type_offset(), as pkt_type couldn't be found\n");
- return -1;
-}
-
static int build_body(struct jit_ctx *ctx)
{
void *load_func[] = {jit_get_skb_b, jit_get_skb_h, jit_get_skb_w};
@@ -793,6 +772,7 @@
const struct sock_filter *inst;
unsigned int i, off, load_order, condt;
u32 k, b_off __maybe_unused;
+ int tmp;
for (i = 0; i < prog->len; i++) {
u16 code;
@@ -1332,11 +1312,7 @@
case BPF_ANC | SKF_AD_PKTTYPE:
ctx->flags |= SEEN_SKB;
- off = pkt_type_offset();
-
- if (off < 0)
- return -1;
- emit_load_byte(r_tmp, r_skb, off, ctx);
+ emit_load_byte(r_tmp, r_skb, PKT_TYPE_OFFSET(), ctx);
/* Keep only the last 3 bits */
emit_andi(r_A, r_tmp, PKT_TYPE_MAX, ctx);
#ifdef __BIG_ENDIAN_BITFIELD
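
The deleted pkt_type_offset() existed because a bitfield's byte
offset cannot be taken with offsetof(): it set the field to all-ones
in an otherwise zeroed structure and scanned for the sentinel byte.
The JIT now reads the offset from the PKT_TYPE_OFFSET() macro
instead. A hedged user-space reconstruction of the removed probe on a
toy structure:

#include <stdio.h>
#include <string.h>

struct toy {
	unsigned int	a;
	unsigned char	kind:3;		/* stand-in for skb->pkt_type */
};

int main(void)
{
	struct toy t;
	const unsigned char *ct = (const unsigned char *)&t;
	size_t off;

	memset(&t, 0, sizeof(t));
	t.kind = ~0;			/* sentinel: 7 in a 3-bit field */

	for (off = 0; off < sizeof(t); off++)
		if (ct[off] == 7)	/* would be 7 << 5 with big-endian
					 * bitfield layout, as above */
			printf("kind lives at byte %zu\n", off);
	return 0;
}
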
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 6e75e20..1554a6f 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -321,6 +321,22 @@
source "arch/parisc/Kconfig.debug"
+config SECCOMP
+ def_bool y
+ prompt "Enable seccomp to safely compute untrusted bytecode"
+ ---help---
+ This kernel feature is useful for number crunching applications
+ that may need to compute untrusted bytecode during their
+ execution. By using pipes or other transports made available to
+ the process as file descriptors supporting the read/write
+ syscalls, it's possible to isolate those applications in
+ their own address space using seccomp. Once seccomp is
+ enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
+ and the task is only allowed to execute a few safe syscalls
+ defined by each seccomp mode.
+
+ If unsure, say Y. Only embedded should say N here.
+
source "security/Kconfig"
source "crypto/Kconfig"
diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
index 7187664..5db8882 100644
--- a/arch/parisc/Makefile
+++ b/arch/parisc/Makefile
@@ -48,7 +48,12 @@
# These flags should be implied by an hppa-linux configuration, but they
# are not in gcc 3.2.
-cflags-y += -mno-space-regs -mfast-indirect-calls
+cflags-y += -mno-space-regs
+
+# -mfast-indirect-calls is only relevant for 32-bit kernels.
+ifndef CONFIG_64BIT
+cflags-y += -mfast-indirect-calls
+endif
# Currently we save and restore fpregs on all kernel entry/interruption paths.
# If that gets optimized, we might need to disable the use of fpregs in the
diff --git a/arch/parisc/configs/a500_defconfig b/arch/parisc/configs/a500_defconfig
index 9002532..0490199 100644
--- a/arch/parisc/configs/a500_defconfig
+++ b/arch/parisc/configs/a500_defconfig
@@ -31,6 +31,7 @@
CONFIG_I82092=m
# CONFIG_SUPERIO is not set
# CONFIG_CHASSIS_LCD_LED is not set
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/parisc/configs/c8000_defconfig b/arch/parisc/configs/c8000_defconfig
index 8249ac9..269c23d 100644
--- a/arch/parisc/configs/c8000_defconfig
+++ b/arch/parisc/configs/c8000_defconfig
@@ -33,6 +33,7 @@
# CONFIG_PDC_CHASSIS_WARN is not set
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/parisc/hpux/sys_hpux.c b/arch/parisc/hpux/sys_hpux.c
index d9dc6cd..e5c4da0 100644
--- a/arch/parisc/hpux/sys_hpux.c
+++ b/arch/parisc/hpux/sys_hpux.c
@@ -456,7 +456,7 @@
}
/* String could be altered by userspace after strlen_user() */
- fsname[len] = '\0';
+ fsname[len - 1] = '\0';
printk(KERN_DEBUG "that is '%s' as (char *)\n", fsname);
if ( !strcmp(fsname, "hfs") ) {
diff --git a/arch/parisc/include/asm/seccomp.h b/arch/parisc/include/asm/seccomp.h
new file mode 100644
index 0000000..015f788
--- /dev/null
+++ b/arch/parisc/include/asm/seccomp.h
@@ -0,0 +1,16 @@
+#ifndef _ASM_PARISC_SECCOMP_H
+#define _ASM_PARISC_SECCOMP_H
+
+#include <linux/unistd.h>
+
+#define __NR_seccomp_read __NR_read
+#define __NR_seccomp_write __NR_write
+#define __NR_seccomp_exit __NR_exit
+#define __NR_seccomp_sigreturn __NR_rt_sigreturn
+
+#define __NR_seccomp_read_32 __NR_read
+#define __NR_seccomp_write_32 __NR_write
+#define __NR_seccomp_exit_32 __NR_exit
+#define __NR_seccomp_sigreturn_32 __NR_rt_sigreturn
+
+#endif /* _ASM_PARISC_SECCOMP_H */
diff --git a/arch/parisc/include/asm/thread_info.h b/arch/parisc/include/asm/thread_info.h
index 4b9b10c..a846118 100644
--- a/arch/parisc/include/asm/thread_info.h
+++ b/arch/parisc/include/asm/thread_info.h
@@ -60,6 +60,7 @@
#define TIF_NOTIFY_RESUME 8 /* callback before returning to user */
#define TIF_SINGLESTEP 9 /* single stepping? */
#define TIF_BLOCKSTEP 10 /* branch stepping? */
+#define TIF_SECCOMP 11 /* secure computing */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
@@ -70,11 +71,13 @@
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
+#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED)
#define _TIF_SYSCALL_TRACE_MASK (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
- _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT)
+ _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \
+ _TIF_SECCOMP)
#ifdef CONFIG_64BIT
# ifdef CONFIG_COMPAT
diff --git a/arch/parisc/include/uapi/asm/unistd.h b/arch/parisc/include/uapi/asm/unistd.h
index 47e0e21..8667f18 100644
--- a/arch/parisc/include/uapi/asm/unistd.h
+++ b/arch/parisc/include/uapi/asm/unistd.h
@@ -830,8 +830,11 @@
#define __NR_sched_getattr (__NR_Linux + 335)
#define __NR_utimes (__NR_Linux + 336)
#define __NR_renameat2 (__NR_Linux + 337)
+#define __NR_seccomp (__NR_Linux + 338)
+#define __NR_getrandom (__NR_Linux + 339)
+#define __NR_memfd_create (__NR_Linux + 340)
-#define __NR_Linux_syscalls (__NR_renameat2 + 1)
+#define __NR_Linux_syscalls (__NR_memfd_create + 1)
#define __IGNORE_select /* newselect */
diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c
index e842ee2..92438c2 100644
--- a/arch/parisc/kernel/ptrace.c
+++ b/arch/parisc/kernel/ptrace.c
@@ -17,6 +17,7 @@
#include <linux/user.h>
#include <linux/personality.h>
#include <linux/security.h>
+#include <linux/seccomp.h>
#include <linux/compat.h>
#include <linux/signal.h>
#include <linux/audit.h>
@@ -270,6 +271,9 @@
{
long ret = 0;
+ /* Do the secure computing check first. */
+ secure_computing_strict(regs->gr[20]);
+
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs))
ret = -1L;
diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
index 8387860..7ef22e3 100644
--- a/arch/parisc/kernel/syscall.S
+++ b/arch/parisc/kernel/syscall.S
@@ -74,7 +74,7 @@
/* ADDRESS 0xb0 to 0xb8, lws uses two insns for entry */
/* Light-weight-syscall entry must always be located at 0xb0 */
/* WARNING: Keep this number updated with table size changes */
-#define __NR_lws_entries (2)
+#define __NR_lws_entries (3)
lws_entry:
gate lws_start, %r0 /* increase privilege */
@@ -502,7 +502,7 @@
/***************************************************
- Implementing CAS as an atomic operation:
+ Implementing 32bit CAS as an atomic operation:
%r26 - Address to examine
%r25 - Old value to check (old)
@@ -659,6 +659,230 @@
ASM_EXCEPTIONTABLE_ENTRY(2b-linux_gateway_page, 3b-linux_gateway_page)
+ /***************************************************
+ New CAS implementation which uses pointers and variable size
+ information. The values pointed to by old and new MUST NOT change
+ while performing CAS. The lock only protects the value at %r26.
+
+ %r26 - Address to examine
+ %r25 - Pointer to the value to check (old)
+ %r24 - Pointer to the value to set (new)
+ %r23 - Size of the variable (0/1/2/3 for 8/16/32/64 bit)
+ %r28 - Return non-zero on failure
+ %r21 - Kernel error code
+
+ %r21 has the following meanings:
+
+ EAGAIN - CAS is busy, ldcw failed, try again.
+ EFAULT - Read or write failed.
+
+ Scratch: r20, r22, r28, r29, r1, fr4 (32bit for 64bit CAS only)
+
+ ****************************************************/
+
+ /* ELF32 Process entry path */
+lws_compare_and_swap_2:
+#ifdef CONFIG_64BIT
+ /* Clip the input registers */
+ depdi 0, 31, 32, %r26
+ depdi 0, 31, 32, %r25
+ depdi 0, 31, 32, %r24
+ depdi 0, 31, 32, %r23
+#endif
+
+ /* Check the validity of the size pointer */
+ subi,>>= 4, %r23, %r0
+ b,n lws_exit_nosys
+
+ /* Jump to the functions which will load the old and new values into
+ registers depending on their size */
+ shlw %r23, 2, %r29
+ blr %r29, %r0
+ nop
+
+ /* 8bit load */
+4: ldb 0(%sr3,%r25), %r25
+ b cas2_lock_start
+5: ldb 0(%sr3,%r24), %r24
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ /* 16bit load */
+6: ldh 0(%sr3,%r25), %r25
+ b cas2_lock_start
+7: ldh 0(%sr3,%r24), %r24
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ /* 32bit load */
+8: ldw 0(%sr3,%r25), %r25
+ b cas2_lock_start
+9: ldw 0(%sr3,%r24), %r24
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ /* 64bit load */
+#ifdef CONFIG_64BIT
+10: ldd 0(%sr3,%r25), %r25
+11: ldd 0(%sr3,%r24), %r24
+#else
+ /* Load old value into r22/r23 - high/low */
+10: ldw 0(%sr3,%r25), %r22
+11: ldw 4(%sr3,%r25), %r23
+ /* Load new value into fr4 for atomic store later */
+12: flddx 0(%sr3,%r24), %fr4
+#endif
+
+cas2_lock_start:
+ /* Load start of lock table */
+ ldil L%lws_lock_start, %r20
+ ldo R%lws_lock_start(%r20), %r28
+
+ /* Extract four bits from r26 and hash lock (Bits 4-7) */
+ extru %r26, 27, 4, %r20
+
+ /* Find the lock to use: the hash is one of 0 to
+ 15, multiplied by 16 (to keep it 16-byte aligned),
+ and added to the lock table offset. */
+ shlw %r20, 4, %r20
+ add %r20, %r28, %r20
+
+ rsm PSW_SM_I, %r0 /* Disable interrupts */
+ /* COW breaks can cause contention on UP systems */
+ LDCW 0(%sr2,%r20), %r28 /* Try to acquire the lock */
+ cmpb,<>,n %r0, %r28, cas2_action /* Did we get it? */
+cas2_wouldblock:
+ ldo 2(%r0), %r28 /* 2nd case */
+ ssm PSW_SM_I, %r0
+ b lws_exit /* Contended... */
+ ldo -EAGAIN(%r0), %r21 /* Spin in userspace */
+
+ /*
+ prev = *addr;
+ if ( prev == old )
+ *addr = new;
+ return prev;
+ */
+
+ /* NOTES:
+ This all works because intr_do_signal
+ and schedule both check the return iasq
+ and see that we are on the kernel page,
+ so this process is never scheduled off
+ nor ever sent any signal of any sort,
+ thus it is wholly atomic from userspace's
+ perspective
+ */
+cas2_action:
+ /* Jump to the correct function */
+ blr %r29, %r0
+ /* Set %r28 as non-zero for now */
+ ldo 1(%r0),%r28
+
+ /* 8bit CAS */
+13: ldb,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+14: stb,ma %r24, 0(%sr3,%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 16bit CAS */
+15: ldh,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+16: sth,ma %r24, 0(%sr3,%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 32bit CAS */
+17: ldw,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+18: stw,ma %r24, 0(%sr3,%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 64bit CAS */
+#ifdef CONFIG_64BIT
+19: ldd,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+20: std,ma %r24, 0(%sr3,%r26)
+ copy %r0, %r28
+#else
+ /* Compare first word */
+19: ldw,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r22, %r0
+ b,n cas2_end
+ /* Compare second word */
+20: ldw,ma 4(%sr3,%r26), %r29
+ sub,= %r29, %r23, %r0
+ b,n cas2_end
+ /* Perform the store */
+21: fstdx %fr4, 0(%sr3,%r26)
+ copy %r0, %r28
+#endif
+
+cas2_end:
+ /* Free lock */
+ stw,ma %r20, 0(%sr2,%r20)
+ /* Enable interrupts */
+ ssm PSW_SM_I, %r0
+ /* Return to userspace, set no error */
+ b lws_exit
+ copy %r0, %r21
+
+22:
+ /* Error occurred on load or store */
+ /* Free lock */
+ stw %r20, 0(%sr2,%r20)
+ ssm PSW_SM_I, %r0
+ ldo 1(%r0),%r28
+ b lws_exit
+ ldo -EFAULT(%r0),%r21 /* set errno */
+ nop
+ nop
+ nop
+
+ /* Exception table entries, for the load and store, return EFAULT.
+ Each of the entries must be relocated. */
+ ASM_EXCEPTIONTABLE_ENTRY(4b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(5b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(6b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(7b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(8b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(9b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(10b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(11b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(13b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(14b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(15b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(16b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(17b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(18b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(19b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(20b-linux_gateway_page, 22b-linux_gateway_page)
+#ifndef CONFIG_64BIT
+ ASM_EXCEPTIONTABLE_ENTRY(12b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(21b-linux_gateway_page, 22b-linux_gateway_page)
+#endif
+
/* Make sure nothing else is placed on this page */
.align PAGE_SIZE
END(linux_gateway_page)
@@ -675,8 +899,9 @@
/* Light-weight-syscall table */
/* Start of lws table. */
ENTRY(lws_table)
- LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic compare and swap */
- LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic compare and swap */
+ LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic 32bit CAS */
+ LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic 32bit CAS */
+ LWS_ENTRY(compare_and_swap_2) /* 2 - ELF32 Atomic 64bit CAS */
END(lws_table)
/* End of lws table */
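
For reading the assembly above, this is the operation
lws_compare_and_swap_2 implements, modeled in C: the size selector in
%r23 picks 1, 2, 4 or 8 bytes, old and new arrive by pointer, and
%r28 is zero only when the store happened. A semantic model only; the
gateway's atomicity comes from the per-address ldcw lock and disabled
interrupts, not from anything expressible in portable C:

#include <string.h>

/* returns 0 on success (store done), nonzero when *addr != *old,
 * matching the %r28 convention documented above */
static int cas2_model(void *addr, const void *old, const void *new,
		      int size_sel)	/* 0/1/2/3 = 8/16/32/64 bit */
{
	size_t len = (size_t)1 << size_sel;

	if (memcmp(addr, old, len) != 0)
		return 1;		/* comparison failed */
	memcpy(addr, new, len);
	return 0;			/* swapped */
}
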
diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S
index 84c5d3a..b563d9c 100644
--- a/arch/parisc/kernel/syscall_table.S
+++ b/arch/parisc/kernel/syscall_table.S
@@ -433,6 +433,9 @@
ENTRY_SAME(sched_getattr) /* 335 */
ENTRY_COMP(utimes)
ENTRY_SAME(renameat2)
+ ENTRY_SAME(seccomp)
+ ENTRY_SAME(getrandom)
+ ENTRY_SAME(memfd_create) /* 340 */
/* Nothing yet */
diff --git a/arch/powerpc/configs/c2k_defconfig b/arch/powerpc/configs/c2k_defconfig
index 5e2aa43..5973491 100644
--- a/arch/powerpc/configs/c2k_defconfig
+++ b/arch/powerpc/configs/c2k_defconfig
@@ -29,6 +29,7 @@
CONFIG_PCI_MSI=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_SHPC=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=y
diff --git a/arch/powerpc/configs/cell_defconfig b/arch/powerpc/configs/cell_defconfig
index 4bee1a6..45fd06c 100644
--- a/arch/powerpc/configs/cell_defconfig
+++ b/arch/powerpc/configs/cell_defconfig
@@ -5,6 +5,7 @@
CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
diff --git a/arch/powerpc/configs/celleb_defconfig b/arch/powerpc/configs/celleb_defconfig
index 6d7b22f..77d7bf3 100644
--- a/arch/powerpc/configs/celleb_defconfig
+++ b/arch/powerpc/configs/celleb_defconfig
@@ -5,6 +5,7 @@
CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
diff --git a/arch/powerpc/configs/corenet64_smp_defconfig b/arch/powerpc/configs/corenet64_smp_defconfig
index 4b07bad..269d6e4 100644
--- a/arch/powerpc/configs/corenet64_smp_defconfig
+++ b/arch/powerpc/configs/corenet64_smp_defconfig
@@ -4,6 +4,7 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=24
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
diff --git a/arch/powerpc/configs/g5_defconfig b/arch/powerpc/configs/g5_defconfig
index 3c72fa6..7594c5a 100644
--- a/arch/powerpc/configs/g5_defconfig
+++ b/arch/powerpc/configs/g5_defconfig
@@ -5,6 +5,7 @@
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_BLK_DEV_INITRD=y
diff --git a/arch/powerpc/configs/maple_defconfig b/arch/powerpc/configs/maple_defconfig
index 95e545d..c8b6a9d 100644
--- a/arch/powerpc/configs/maple_defconfig
+++ b/arch/powerpc/configs/maple_defconfig
@@ -4,6 +4,7 @@
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_COMPAT_BRK is not set
diff --git a/arch/powerpc/configs/pasemi_defconfig b/arch/powerpc/configs/pasemi_defconfig
index cec044a..e5e7838 100644
--- a/arch/powerpc/configs/pasemi_defconfig
+++ b/arch/powerpc/configs/pasemi_defconfig
@@ -3,6 +3,7 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y
diff --git a/arch/powerpc/configs/pmac32_defconfig b/arch/powerpc/configs/pmac32_defconfig
index 553e662..0351b5f 100644
--- a/arch/powerpc/configs/pmac32_defconfig
+++ b/arch/powerpc/configs/pmac32_defconfig
@@ -31,6 +31,7 @@
CONFIG_APM_EMULATION=y
CONFIG_PCCARD=m
CONFIG_YENTA=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=y
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index f26b267..3651887 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -4,6 +4,7 @@
CONFIG_SMP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
@@ -57,6 +58,7 @@
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_RPA=m
CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/powerpc/configs/ppc64e_defconfig b/arch/powerpc/configs/ppc64e_defconfig
index 438e813..c3a3269 100644
--- a/arch/powerpc/configs/ppc64e_defconfig
+++ b/arch/powerpc/configs/ppc64e_defconfig
@@ -3,6 +3,7 @@
CONFIG_SMP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_TASKSTATS=y
@@ -32,6 +33,7 @@
CONFIG_PCI_MSI=y
CONFIG_PCCARD=y
CONFIG_HOTPLUG_PCI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/powerpc/configs/ps3_defconfig b/arch/powerpc/configs/ps3_defconfig
index fdee37f..2e637c8 100644
--- a/arch/powerpc/configs/ps3_defconfig
+++ b/arch/powerpc/configs/ps3_defconfig
@@ -5,6 +5,7 @@
CONFIG_NR_CPUS=2
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_LZMA=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index a905063..dd2a9ca 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -5,6 +5,7 @@
CONFIG_NR_CPUS=2048
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_IRQ_DOMAIN_DEBUG=y
@@ -52,6 +53,7 @@
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_RPA=m
CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/powerpc/configs/pseries_le_defconfig b/arch/powerpc/configs/pseries_le_defconfig
index 58e3dbf..63392f4 100644
--- a/arch/powerpc/configs/pseries_le_defconfig
+++ b/arch/powerpc/configs/pseries_le_defconfig
@@ -6,6 +6,7 @@
CONFIG_CPU_LITTLE_ENDIAN=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_IRQ_DOMAIN_DEBUG=y
@@ -54,6 +55,7 @@
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_RPA=m
CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 279b80f..c0c61fa 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -47,6 +47,12 @@
STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE)
#define STACK_FRAME_MARKER 12
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+#define STACK_FRAME_MIN_SIZE 32
+#else
+#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
+#endif
+
/* Size of dummy stack frame allocated when calling signal handler. */
#define __SIGNAL_FRAMESIZE 128
#define __SIGNAL_FRAMESIZE32 64
@@ -60,6 +66,7 @@
#define STACK_FRAME_REGS_MARKER ASM_CONST(0x72656773)
#define STACK_INT_FRAME_SIZE (sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD)
#define STACK_FRAME_MARKER 2
+#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
/* Size of stack frame allocated when calling signal handler. */
#define __SIGNAL_FRAMESIZE 64
diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 542bc0f..7d8a6006 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -362,3 +362,6 @@
SYSCALL_SPU(sched_setattr)
SYSCALL_SPU(sched_getattr)
SYSCALL_SPU(renameat2)
+SYSCALL_SPU(seccomp)
+SYSCALL_SPU(getrandom)
+SYSCALL_SPU(memfd_create)
diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index 5ce5552..4e9af3f 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -12,7 +12,7 @@
#include <uapi/asm/unistd.h>
-#define __NR_syscalls 358
+#define __NR_syscalls 361
#define __NR__exit __NR_exit
#define NR_syscalls __NR_syscalls
diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h
index 2d526f7..0688fc0 100644
--- a/arch/powerpc/include/uapi/asm/unistd.h
+++ b/arch/powerpc/include/uapi/asm/unistd.h
@@ -380,5 +380,8 @@
#define __NR_sched_setattr 355
#define __NR_sched_getattr 356
#define __NR_renameat2 357
+#define __NR_seccomp 358
+#define __NR_getrandom 359
+#define __NR_memfd_create 360
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 74d1e78..2396dda 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -35,7 +35,7 @@
return 0; /* must be 16-byte aligned */
if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD))
return 0;
- if (sp >= prev_sp + STACK_FRAME_OVERHEAD)
+ if (sp >= prev_sp + STACK_FRAME_MIN_SIZE)
return 1;
/*
* sp could decrease when we jump off an interrupt stack
diff --git a/arch/powerpc/platforms/powernv/opal-hmi.c b/arch/powerpc/platforms/powernv/opal-hmi.c
index 97ac8dc..5e1ed15 100644
--- a/arch/powerpc/platforms/powernv/opal-hmi.c
+++ b/arch/powerpc/platforms/powernv/opal-hmi.c
@@ -28,6 +28,7 @@
#include <asm/opal.h>
#include <asm/cputable.h>
+#include <asm/machdep.h>
static int opal_hmi_handler_nb_init;
struct OpalHmiEvtNode {
@@ -185,4 +186,4 @@
}
return 0;
}
-subsys_initcall(opal_hmi_handler_init);
+machine_subsys_initcall(powernv, opal_hmi_handler_init);
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index c904583..17ee193 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -113,7 +113,7 @@
static int pseries_remove_mem_node(struct device_node *np)
{
const char *type;
- const unsigned int *regs;
+ const __be32 *regs;
unsigned long base;
unsigned int lmb_size;
int ret = -EINVAL;
@@ -132,8 +132,8 @@
if (!regs)
return ret;
- base = *(unsigned long *)regs;
- lmb_size = regs[3];
+ base = be64_to_cpu(*(unsigned long *)regs);
+ lmb_size = be32_to_cpu(regs[3]);
pseries_remove_memblock(base, lmb_size);
return 0;
@@ -153,7 +153,7 @@
static int pseries_add_mem_node(struct device_node *np)
{
const char *type;
- const unsigned int *regs;
+ const __be32 *regs;
unsigned long base;
unsigned int lmb_size;
int ret = -EINVAL;
@@ -172,8 +172,8 @@
if (!regs)
return ret;
- base = *(unsigned long *)regs;
- lmb_size = regs[3];
+ base = be64_to_cpu(*(unsigned long *)regs);
+ lmb_size = be32_to_cpu(regs[3]);
/*
* Update memory region to represent the memory add
@@ -187,14 +187,14 @@
struct of_drconf_cell *new_drmem, *old_drmem;
unsigned long memblock_size;
u32 entries;
- u32 *p;
+ __be32 *p;
int i, rc = -EINVAL;
memblock_size = pseries_memory_block_size();
if (!memblock_size)
return -EINVAL;
- p = (u32 *) pr->old_prop->value;
+ p = (__be32 *) pr->old_prop->value;
if (!p)
return -EINVAL;
@@ -203,28 +203,30 @@
* entries. Get the number of entries and skip to the array of
* of_drconf_cell's.
*/
- entries = *p++;
+ entries = be32_to_cpu(*p++);
old_drmem = (struct of_drconf_cell *)p;
- p = (u32 *)pr->prop->value;
+ p = (__be32 *)pr->prop->value;
p++;
new_drmem = (struct of_drconf_cell *)p;
for (i = 0; i < entries; i++) {
- if ((old_drmem[i].flags & DRCONF_MEM_ASSIGNED) &&
- (!(new_drmem[i].flags & DRCONF_MEM_ASSIGNED))) {
- rc = pseries_remove_memblock(old_drmem[i].base_addr,
+ if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) &&
+ (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) {
+ rc = pseries_remove_memblock(
+ be64_to_cpu(old_drmem[i].base_addr),
memblock_size);
break;
- } else if ((!(old_drmem[i].flags & DRCONF_MEM_ASSIGNED)) &&
- (new_drmem[i].flags & DRCONF_MEM_ASSIGNED)) {
- rc = memblock_add(old_drmem[i].base_addr,
+ } else if ((!(be32_to_cpu(old_drmem[i].flags) &
+ DRCONF_MEM_ASSIGNED)) &&
+ (be32_to_cpu(new_drmem[i].flags) &
+ DRCONF_MEM_ASSIGNED)) {
+ rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr),
memblock_size);
rc = (rc < 0) ? -EINVAL : 0;
break;
}
}
-
return rc;
}
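The endianness fixes above follow the usual device-tree rule: property data is stored as big-endian __be32 cells regardless of host byte order, so each cell must go through be32_to_cpu()/be64_to_cpu() before use. A sketch of decoding one entry the way these hunks now do (the helper name is hypothetical):

    /* Sketch: cells 0-1 hold a 64-bit base address, cell 3 the LMB size. */
    static void read_drconf_cells(const __be32 *regs, unsigned long *base,
                                  unsigned int *lmb_size)
    {
            *base = be64_to_cpu(*(const __be64 *)regs);
            *lmb_size = be32_to_cpu(regs[3]);
    }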
diff --git a/arch/s390/configs/default_defconfig b/arch/s390/configs/default_defconfig
index 3ca1894..9d94fdd 100644
--- a/arch/s390/configs/default_defconfig
+++ b/arch/s390/configs/default_defconfig
@@ -63,6 +63,7 @@
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
CONFIG_HIBERNATION=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
diff --git a/arch/s390/configs/gcov_defconfig b/arch/s390/configs/gcov_defconfig
index 4830aa6..90f514b 100644
--- a/arch/s390/configs/gcov_defconfig
+++ b/arch/s390/configs/gcov_defconfig
@@ -61,6 +61,7 @@
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
CONFIG_HIBERNATION=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
diff --git a/arch/s390/configs/performance_defconfig b/arch/s390/configs/performance_defconfig
index 61db449..13559d3 100644
--- a/arch/s390/configs/performance_defconfig
+++ b/arch/s390/configs/performance_defconfig
@@ -59,6 +59,7 @@
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
CONFIG_HIBERNATION=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
diff --git a/arch/s390/configs/zfcpdump_defconfig b/arch/s390/configs/zfcpdump_defconfig
index 948e0e0..e376789 100644
--- a/arch/s390/configs/zfcpdump_defconfig
+++ b/arch/s390/configs/zfcpdump_defconfig
@@ -23,6 +23,7 @@
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
# CONFIG_SECCOMP is not set
# CONFIG_IUCV is not set
+CONFIG_NET=y
CONFIG_ATM=y
CONFIG_ATM_LANE=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
diff --git a/arch/s390/defconfig b/arch/s390/defconfig
index 2e56498..fab35a8 100644
--- a/arch/s390/defconfig
+++ b/arch/s390/defconfig
@@ -50,6 +50,7 @@
CONFIG_CRASH_DUMP=y
CONFIG_BINFMT_MISC=m
CONFIG_HIBERNATION=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
diff --git a/arch/s390/include/asm/ipl.h b/arch/s390/include/asm/ipl.h
index 2fcccc0..c81661e 100644
--- a/arch/s390/include/asm/ipl.h
+++ b/arch/s390/include/asm/ipl.h
@@ -17,12 +17,12 @@
#define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_fcp))
-#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 8)
+#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 16)
#define IPL_PARM_BLK_CCW_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_ccw))
-#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 8)
+#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 16)
#define IPL_MAX_SUPPORTED_VERSION (0)
@@ -38,10 +38,11 @@
u8 pbt;
u8 flags;
u16 reserved2;
+ u8 loadparm[8];
} __attribute__((packed));
struct ipl_block_fcp {
- u8 reserved1[313-1];
+ u8 reserved1[305-1];
u8 opt;
u8 reserved2[3];
u16 reserved3;
@@ -62,7 +63,6 @@
offsetof(struct ipl_block_fcp, scp_data)))
struct ipl_block_ccw {
- u8 load_parm[8];
u8 reserved1[84];
u8 reserved2[2];
u16 devno;
diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
index 22aac58..39badb9 100644
--- a/arch/s390/kernel/ipl.c
+++ b/arch/s390/kernel/ipl.c
@@ -455,22 +455,6 @@
DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n", (unsigned long long)
IPL_PARMBLOCK_START->ipl_info.fcp.br_lba);
-static struct attribute *ipl_fcp_attrs[] = {
- &sys_ipl_type_attr.attr,
- &sys_ipl_device_attr.attr,
- &sys_ipl_fcp_wwpn_attr.attr,
- &sys_ipl_fcp_lun_attr.attr,
- &sys_ipl_fcp_bootprog_attr.attr,
- &sys_ipl_fcp_br_lba_attr.attr,
- NULL,
-};
-
-static struct attribute_group ipl_fcp_attr_group = {
- .attrs = ipl_fcp_attrs,
-};
-
-/* CCW ipl device attributes */
-
static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
@@ -487,6 +471,23 @@
static struct kobj_attribute sys_ipl_ccw_loadparm_attr =
__ATTR(loadparm, 0444, ipl_ccw_loadparm_show, NULL);
+static struct attribute *ipl_fcp_attrs[] = {
+ &sys_ipl_type_attr.attr,
+ &sys_ipl_device_attr.attr,
+ &sys_ipl_fcp_wwpn_attr.attr,
+ &sys_ipl_fcp_lun_attr.attr,
+ &sys_ipl_fcp_bootprog_attr.attr,
+ &sys_ipl_fcp_br_lba_attr.attr,
+ &sys_ipl_ccw_loadparm_attr.attr,
+ NULL,
+};
+
+static struct attribute_group ipl_fcp_attr_group = {
+ .attrs = ipl_fcp_attrs,
+};
+
+/* CCW ipl device attributes */
+
static struct attribute *ipl_ccw_attrs_vm[] = {
&sys_ipl_type_attr.attr,
&sys_ipl_device_attr.attr,
@@ -765,28 +766,10 @@
DEFINE_IPL_ATTR_RW(reipl_fcp, device, "0.0.%04llx\n", "0.0.%llx\n",
reipl_block_fcp->ipl_info.fcp.devno);
-static struct attribute *reipl_fcp_attrs[] = {
- &sys_reipl_fcp_device_attr.attr,
- &sys_reipl_fcp_wwpn_attr.attr,
- &sys_reipl_fcp_lun_attr.attr,
- &sys_reipl_fcp_bootprog_attr.attr,
- &sys_reipl_fcp_br_lba_attr.attr,
- NULL,
-};
-
-static struct attribute_group reipl_fcp_attr_group = {
- .attrs = reipl_fcp_attrs,
-};
-
-/* CCW reipl device attributes */
-
-DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
- reipl_block_ccw->ipl_info.ccw.devno);
-
static void reipl_get_ascii_loadparm(char *loadparm,
struct ipl_parameter_block *ibp)
{
- memcpy(loadparm, ibp->ipl_info.ccw.load_parm, LOADPARM_LEN);
+ memcpy(loadparm, ibp->hdr.loadparm, LOADPARM_LEN);
EBCASC(loadparm, LOADPARM_LEN);
loadparm[LOADPARM_LEN] = 0;
strim(loadparm);
@@ -821,13 +804,50 @@
return -EINVAL;
}
/* initialize loadparm with blanks */
- memset(ipb->ipl_info.ccw.load_parm, ' ', LOADPARM_LEN);
+ memset(ipb->hdr.loadparm, ' ', LOADPARM_LEN);
/* copy and convert to ebcdic */
- memcpy(ipb->ipl_info.ccw.load_parm, buf, lp_len);
- ASCEBC(ipb->ipl_info.ccw.load_parm, LOADPARM_LEN);
+ memcpy(ipb->hdr.loadparm, buf, lp_len);
+ ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN);
return len;
}
+/* FCP wrapper */
+static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *page)
+{
+ return reipl_generic_loadparm_show(reipl_block_fcp, page);
+}
+
+static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t len)
+{
+ return reipl_generic_loadparm_store(reipl_block_fcp, buf, len);
+}
+
+static struct kobj_attribute sys_reipl_fcp_loadparm_attr =
+ __ATTR(loadparm, S_IRUGO | S_IWUSR, reipl_fcp_loadparm_show,
+ reipl_fcp_loadparm_store);
+
+static struct attribute *reipl_fcp_attrs[] = {
+ &sys_reipl_fcp_device_attr.attr,
+ &sys_reipl_fcp_wwpn_attr.attr,
+ &sys_reipl_fcp_lun_attr.attr,
+ &sys_reipl_fcp_bootprog_attr.attr,
+ &sys_reipl_fcp_br_lba_attr.attr,
+ &sys_reipl_fcp_loadparm_attr.attr,
+ NULL,
+};
+
+static struct attribute_group reipl_fcp_attr_group = {
+ .attrs = reipl_fcp_attrs,
+};
+
+/* CCW reipl device attributes */
+
+DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
+ reipl_block_ccw->ipl_info.ccw.devno);
+
/* NSS wrapper */
static ssize_t reipl_nss_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
@@ -1125,11 +1145,10 @@
/* LOADPARM */
/* check if read scp info worked and set loadparm */
if (sclp_ipl_info.is_valid)
- memcpy(ipb->ipl_info.ccw.load_parm,
- &sclp_ipl_info.loadparm, LOADPARM_LEN);
+ memcpy(ipb->hdr.loadparm, &sclp_ipl_info.loadparm, LOADPARM_LEN);
else
/* read scp info failed: set empty loadparm (EBCDIC blanks) */
- memset(ipb->ipl_info.ccw.load_parm, 0x40, LOADPARM_LEN);
+ memset(ipb->hdr.loadparm, 0x40, LOADPARM_LEN);
ipb->hdr.flags = DIAG308_FLAGS_LP_VALID;
/* VM PARM */
@@ -1251,9 +1270,16 @@
return rc;
}
- if (ipl_info.type == IPL_TYPE_FCP)
+ if (ipl_info.type == IPL_TYPE_FCP) {
memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE);
- else {
+ /*
+ * Fix loadparm: There are systems where the (SCSI) LOADPARM
+	 * is invalid in the SCSI IPL parameter block, so always take
+	 * it from sclp_ipl_info.
+ */
+ memcpy(reipl_block_fcp->hdr.loadparm, sclp_ipl_info.loadparm,
+ LOADPARM_LEN);
+ } else {
reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
reipl_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION;
reipl_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
@@ -1864,7 +1890,23 @@
static int __init s390_ipl_init(void)
{
+ char str[8] = {0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40};
+
sclp_get_ipl_info(&sclp_ipl_info);
+ /*
+ * Fix loadparm: There are systems where the (SCSI) LOADPARM
+ * returned by read SCP info is invalid (contains EBCDIC blanks)
+ * when the system has been booted via diag308. In that case we use
+ * the value from diag308, if available.
+ *
+ * There are also systems where diag308 store does not work in
+ * case the system is booted from HMC. Fortunately in this case
+ * READ SCP info provides the correct value.
+ */
+ if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 &&
+ diag308_set_works)
+ memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm,
+ LOADPARM_LEN);
shutdown_actions_init();
shutdown_triggers_init();
return 0;
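The memcmp() against str above is an all-blank test: 0x40 is the EBCDIC space, so a LOADPARM of eight 0x40 bytes carries no information and the diag308 value is preferred. The same test factored out, as a sketch (assuming LOADPARM_LEN == 8, as in these hunks):

    /* Sketch: an unusable SCP LOADPARM reads as eight EBCDIC blanks. */
    static bool loadparm_is_blank(const u8 *lp)
    {
            static const u8 blanks[8] = {
                    0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40
            };
            return memcmp(lp, blanks, sizeof(blanks)) == 0;
    }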
diff --git a/arch/s390/kernel/vdso32/clock_gettime.S b/arch/s390/kernel/vdso32/clock_gettime.S
index 65fc397..7cf18f8 100644
--- a/arch/s390/kernel/vdso32/clock_gettime.S
+++ b/arch/s390/kernel/vdso32/clock_gettime.S
@@ -22,13 +22,11 @@
basr %r5,0
0: al %r5,21f-0b(%r5) /* get &_vdso_data */
chi %r2,__CLOCK_REALTIME
- je 10f
+ je 11f
chi %r2,__CLOCK_MONOTONIC
jne 19f
/* CLOCK_MONOTONIC */
- ltr %r3,%r3
- jz 9f /* tp == NULL */
1: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */
tml %r4,0x0001 /* pending update ? loop */
jnz 1b
@@ -67,12 +65,10 @@
j 6b
8: st %r2,0(%r3) /* store tp->tv_sec */
st %r1,4(%r3) /* store tp->tv_nsec */
-9: lhi %r2,0
+ lhi %r2,0
br %r14
/* CLOCK_REALTIME */
-10: ltr %r3,%r3 /* tp == NULL */
- jz 18f
11: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */
tml %r4,0x0001 /* pending update ? loop */
jnz 11b
@@ -111,7 +107,7 @@
j 15b
17: st %r2,0(%r3) /* store tp->tv_sec */
st %r1,4(%r3) /* store tp->tv_nsec */
-18: lhi %r2,0
+ lhi %r2,0
br %r14
/* Fallback to system call */
diff --git a/arch/s390/kernel/vdso64/clock_gettime.S b/arch/s390/kernel/vdso64/clock_gettime.S
index 91940ed..3f34e09 100644
--- a/arch/s390/kernel/vdso64/clock_gettime.S
+++ b/arch/s390/kernel/vdso64/clock_gettime.S
@@ -21,7 +21,7 @@
.cfi_startproc
larl %r5,_vdso_data
cghi %r2,__CLOCK_REALTIME
- je 4f
+ je 5f
cghi %r2,__CLOCK_THREAD_CPUTIME_ID
je 9f
cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
@@ -30,8 +30,6 @@
jne 12f
/* CLOCK_MONOTONIC */
- ltgr %r3,%r3
- jz 3f /* tp == NULL */
0: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */
tmll %r4,0x0001 /* pending update ? loop */
jnz 0b
@@ -53,12 +51,10 @@
j 1b
2: stg %r0,0(%r3) /* store tp->tv_sec */
stg %r1,8(%r3) /* store tp->tv_nsec */
-3: lghi %r2,0
+ lghi %r2,0
br %r14
/* CLOCK_REALTIME */
-4: ltr %r3,%r3 /* tp == NULL */
- jz 8f
5: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */
tmll %r4,0x0001 /* pending update ? loop */
jnz 5b
@@ -80,7 +76,7 @@
j 6b
7: stg %r0,0(%r3) /* store tp->tv_sec */
stg %r1,8(%r3) /* store tp->tv_nsec */
-8: lghi %r2,0
+ lghi %r2,0
br %r14
/* CLOCK_THREAD_CPUTIME_ID for this thread */
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 0c1073e..c7235e0 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -43,6 +43,7 @@
unsigned long empty_zero_page, zero_page_mask;
EXPORT_SYMBOL(empty_zero_page);
+EXPORT_SYMBOL(zero_page_mask);
static void __init setup_zero_pages(void)
{
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 555f5c7..c52ac77 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -227,37 +227,6 @@
EMIT2(0x07fe);
}
-/* Helper to find the offset of pkt_type in sk_buff
- * Make sure its still a 3bit field starting at the MSBs within a byte.
- */
-#define PKT_TYPE_MAX 0xe0
-static int pkt_type_offset;
-
-static int __init bpf_pkt_type_offset_init(void)
-{
- struct sk_buff skb_probe = {
- .pkt_type = ~0,
- };
- char *ct = (char *)&skb_probe;
- int off;
-
- pkt_type_offset = -1;
- for (off = 0; off < sizeof(struct sk_buff); off++) {
- if (!ct[off])
- continue;
- if (ct[off] == PKT_TYPE_MAX)
- pkt_type_offset = off;
- else {
- /* Found non matching bit pattern, fix needed. */
- WARN_ON_ONCE(1);
- pkt_type_offset = -1;
- return -1;
- }
- }
- return 0;
-}
-device_initcall(bpf_pkt_type_offset_init);
-
/*
* make sure we don't leak kernel information to userspace
*/
@@ -757,12 +726,10 @@
}
break;
case BPF_ANC | SKF_AD_PKTTYPE:
- if (pkt_type_offset < 0)
- goto out;
/* lhi %r5,0 */
EMIT4(0xa7580000);
/* ic %r5,<d(pkt_type_offset)>(%r2) */
- EMIT4_DISP(0x43502000, pkt_type_offset);
+ EMIT4_DISP(0x43502000, PKT_TYPE_OFFSET());
/* srl %r5,5 */
EMIT4_DISP(0x88500000, 5);
break;
diff --git a/arch/sh/configs/sdk7780_defconfig b/arch/sh/configs/sdk7780_defconfig
index 6a96b9a..bbd4c22 100644
--- a/arch/sh/configs/sdk7780_defconfig
+++ b/arch/sh/configs/sdk7780_defconfig
@@ -30,6 +30,7 @@
CONFIG_PCCARD=y
CONFIG_YENTA=y
CONFIG_HOTPLUG_PCI=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
diff --git a/arch/sh/configs/sh2007_defconfig b/arch/sh/configs/sh2007_defconfig
index e741b1e..df25ae7 100644
--- a/arch/sh/configs/sh2007_defconfig
+++ b/arch/sh/configs/sh2007_defconfig
@@ -25,6 +25,7 @@
CONFIG_CMDLINE="console=ttySC1,115200 ip=dhcp root=/dev/nfs rw nfsroot=/nfs/rootfs,rsize=1024,wsize=1024 earlyprintk=sh-sci.1"
CONFIG_PCCARD=y
CONFIG_BINFMT_MISC=y
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=y
diff --git a/arch/sh/mm/gup.c b/arch/sh/mm/gup.c
index bf8daf9..37458f3 100644
--- a/arch/sh/mm/gup.c
+++ b/arch/sh/mm/gup.c
@@ -105,6 +105,8 @@
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
get_page(page);
+ __flush_anon_page(page, addr);
+ flush_dcache_page(page);
pages[*nr] = page;
(*nr)++;
diff --git a/arch/sparc/configs/sparc64_defconfig b/arch/sparc/configs/sparc64_defconfig
index 9d8521b..6b68f12 100644
--- a/arch/sparc/configs/sparc64_defconfig
+++ b/arch/sparc/configs/sparc64_defconfig
@@ -29,6 +29,7 @@
CONFIG_PCI_MSI=y
CONFIG_SUN_OPENPROMFS=m
CONFIG_BINFMT_MISC=m
+CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
diff --git a/arch/sparc/net/bpf_jit_asm.S b/arch/sparc/net/bpf_jit_asm.S
index 9d016c7..8c83f4b 100644
--- a/arch/sparc/net/bpf_jit_asm.S
+++ b/arch/sparc/net/bpf_jit_asm.S
@@ -6,10 +6,12 @@
#define SAVE_SZ 176
#define SCRATCH_OFF STACK_BIAS + 128
#define BE_PTR(label) be,pn %xcc, label
+#define SIGN_EXTEND(reg) sra reg, 0, reg
#else
#define SAVE_SZ 96
#define SCRATCH_OFF 72
#define BE_PTR(label) be label
+#define SIGN_EXTEND(reg)
#endif
#define SKF_MAX_NEG_OFF (-0x200000) /* SKF_LL_OFF from filter.h */
@@ -135,6 +137,7 @@
save %sp, -SAVE_SZ, %sp; \
mov %i0, %o0; \
mov r_OFF, %o1; \
+ SIGN_EXTEND(%o1); \
call bpf_internal_load_pointer_neg_helper; \
mov (LEN), %o2; \
mov %o0, r_TMP; \
diff --git a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c
index b2ad9dc..f33e7c7 100644
--- a/arch/sparc/net/bpf_jit_comp.c
+++ b/arch/sparc/net/bpf_jit_comp.c
@@ -184,7 +184,7 @@
*/
#define emit_alu_K(OPCODE, K) \
do { \
- if (K) { \
+ if (K || OPCODE == AND || OPCODE == MUL) { \
unsigned int _insn = OPCODE; \
_insn |= RS1(r_A) | RD(r_A); \
if (is_simm13(K)) { \
@@ -234,12 +234,18 @@
__emit_load8(BASE, STRUCT, FIELD, DEST); \
} while (0)
-#define emit_ldmem(OFF, DEST) \
-do { *prog++ = LD32I | RS1(FP) | S13(-(OFF)) | RD(DEST); \
+#ifdef CONFIG_SPARC64
+#define BIAS (STACK_BIAS - 4)
+#else
+#define BIAS (-4)
+#endif
+
+#define emit_ldmem(OFF, DEST) \
+do { *prog++ = LD32I | RS1(SP) | S13(BIAS - (OFF)) | RD(DEST); \
} while (0)
-#define emit_stmem(OFF, SRC) \
-do { *prog++ = LD32I | RS1(FP) | S13(-(OFF)) | RD(SRC); \
+#define emit_stmem(OFF, SRC) \
+do { *prog++ = ST32I | RS1(SP) | S13(BIAS - (OFF)) | RD(SRC); \
} while (0)
#ifdef CONFIG_SMP
@@ -579,16 +585,11 @@
case BPF_ANC | SKF_AD_PROTOCOL:
emit_skb_load16(protocol, r_A);
break;
-#if 0
- /* GCC won't let us take the address of
- * a bit field even though we very much
- * know what we are doing here.
- */
case BPF_ANC | SKF_AD_PKTTYPE:
- __emit_skb_load8(pkt_type, r_A);
+ __emit_skb_load8(__pkt_type_offset, r_A);
+ emit_andi(r_A, PKT_TYPE_MAX, r_A);
emit_alu_K(SRL, 5);
break;
-#endif
case BPF_ANC | SKF_AD_IFINDEX:
emit_skb_loadptr(dev, r_A);
emit_cmpi(r_A, 0);
@@ -615,14 +616,20 @@
case BPF_ANC | SKF_AD_VLAN_TAG:
case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT:
emit_skb_load16(vlan_tci, r_A);
- if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) {
- emit_andi(r_A, VLAN_VID_MASK, r_A);
+ if (code != (BPF_ANC | SKF_AD_VLAN_TAG)) {
+ emit_alu_K(SRL, 12);
+ emit_andi(r_A, 1, r_A);
} else {
- emit_loadimm(VLAN_TAG_PRESENT, r_TMP);
+ emit_loadimm(~VLAN_TAG_PRESENT, r_TMP);
emit_and(r_A, r_TMP, r_A);
}
break;
-
+ case BPF_LD | BPF_W | BPF_LEN:
+ emit_skb_load32(len, r_A);
+ break;
+ case BPF_LDX | BPF_W | BPF_LEN:
+ emit_skb_load32(len, r_X);
+ break;
case BPF_LD | BPF_IMM:
emit_loadimm(K, r_A);
break;
@@ -630,15 +637,19 @@
emit_loadimm(K, r_X);
break;
case BPF_LD | BPF_MEM:
+ seen |= SEEN_MEM;
emit_ldmem(K * 4, r_A);
break;
case BPF_LDX | BPF_MEM:
+ seen |= SEEN_MEM | SEEN_XREG;
emit_ldmem(K * 4, r_X);
break;
case BPF_ST:
+ seen |= SEEN_MEM;
emit_stmem(K * 4, r_A);
break;
case BPF_STX:
+ seen |= SEEN_MEM | SEEN_XREG;
emit_stmem(K * 4, r_X);
break;
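The emit_alu_K() guard change encodes which ALU ops are genuinely no-ops for K == 0: add, sub, or, xor and shifts by zero leave r_A untouched and may be elided, but A & 0 and A * 0 force r_A to zero and must still be emitted. As a sketch (the predicate name is hypothetical; AND and MUL are the JIT's opcode macros):

    /* Sketch: only elide the instruction when K == 0 truly has no effect. */
    static bool alu_k_is_noop(unsigned int opcode, unsigned int k)
    {
            if (k != 0)
                    return false;
            return opcode != AND && opcode != MUL; /* A&0 and A*0 change A */
    }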
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 778178f..3632743 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -23,6 +23,7 @@
def_bool y
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
+ select ARCH_HAS_FAST_MULTIPLIER
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select HAVE_AOUT if X86_32
diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index f277184..dca9842 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -1032,7 +1032,6 @@
int i;
unsigned long ramdisk_addr;
unsigned long ramdisk_size;
- unsigned long initrd_addr_max;
efi_early = c;
sys_table = (efi_system_table_t *)(unsigned long)efi_early->table;
@@ -1095,15 +1094,20 @@
memset(sdt, 0, sizeof(*sdt));
- if (hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G)
- initrd_addr_max = -1UL;
- else
- initrd_addr_max = hdr->initrd_addr_max;
-
status = handle_cmdline_files(sys_table, image,
(char *)(unsigned long)hdr->cmd_line_ptr,
- "initrd=", initrd_addr_max,
+ "initrd=", hdr->initrd_addr_max,
&ramdisk_addr, &ramdisk_size);
+
+ if (status != EFI_SUCCESS &&
+ hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G) {
+ efi_printk(sys_table, "Trying to load files to higher address\n");
+ status = handle_cmdline_files(sys_table, image,
+ (char *)(unsigned long)hdr->cmd_line_ptr,
+ "initrd=", -1UL,
+ &ramdisk_addr, &ramdisk_size);
+ }
+
if (status != EFI_SUCCESS)
goto fail2;
hdr->ramdisk_image = ramdisk_addr & 0xffffffff;
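The rework above replaces the old pre-computed limit with a two-pass policy: honor hdr->initrd_addr_max first, and only when that fails and the kernel advertises XLF_CAN_BE_LOADED_ABOVE_4G retry without a ceiling. In outline (load_initrd() is a hypothetical stand-in for the handle_cmdline_files() call):

    /* Sketch: prefer the advertised limit, fall back to loading high. */
    static efi_status_t load_initrd_with_fallback(struct setup_header *hdr)
    {
            efi_status_t status = load_initrd(hdr->initrd_addr_max);

            if (status != EFI_SUCCESS &&
                (hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G))
                    status = load_initrd(-1UL);     /* no address ceiling */
            return status;
    }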
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 888950f..a7ccd57 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -481,7 +481,7 @@
crypto_inc(ctrblk, AES_BLOCK_SIZE);
}
-#ifdef CONFIG_AS_AVX
+#if 0 /* temporarily disabled due to failing crypto tests */
static void aesni_ctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv)
{
@@ -1522,7 +1522,7 @@
aesni_gcm_dec_tfm = aesni_gcm_dec;
}
aesni_ctr_enc_tfm = aesni_ctr_enc;
-#ifdef CONFIG_AS_AVX
+#if 0 /* temporarily disabled due to failing crypto tests */
if (cpu_has_avx) {
/* optimize performance of ctr mode encryption transform */
aesni_ctr_enc_tfm = aesni_ctr_enc_avx_tfm;
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index afcd35d..cfe3b95 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -497,8 +497,6 @@
#include <asm-generic/bitops/sched.h>
-#define ARCH_HAS_FAST_MULTIPLIER 1
-
#include <asm/arch_hweight.h>
#include <asm-generic/bitops/const_hweight.h>
diff --git a/arch/x86/include/asm/io_apic.h b/arch/x86/include/asm/io_apic.h
index 478c490..1733ab4 100644
--- a/arch/x86/include/asm/io_apic.h
+++ b/arch/x86/include/asm/io_apic.h
@@ -239,6 +239,7 @@
static inline u32 mp_pin_to_gsi(int ioapic, int pin) { return UINT_MAX; }
static inline int mp_map_gsi_to_irq(u32 gsi, unsigned int flags) { return gsi; }
static inline void mp_unmap_irq(int irq) { }
+static inline bool mp_should_keep_irq(struct device *dev) { return 1; }
static inline int save_ioapic_entries(void)
{
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 5be9063..3874693 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -19,6 +19,7 @@
extern pmd_t level2_kernel_pgt[512];
extern pmd_t level2_fixmap_pgt[512];
extern pmd_t level2_ident_pgt[512];
+extern pte_t level1_fixmap_pgt[512];
extern pgd_t init_level4_pgt[];
#define swapper_pg_dir init_level4_pgt
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index f304773..f1314d0 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -338,8 +338,10 @@
* a relative jump.
*/
rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE;
- if (abs(rel) > 0x7fffffff)
+ if (abs(rel) > 0x7fffffff) {
+ __arch_remove_optimized_kprobe(op, 0);
return -ERANGE;
+ }
buf = (u8 *)op->optinsn.insn;
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 167ffca..95a427e 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -48,7 +48,9 @@
LOW_KERNEL_NR,
VMALLOC_START_NR,
VMEMMAP_START_NR,
+# ifdef CONFIG_X86_ESPFIX64
ESPFIX_START_NR,
+# endif
HIGH_KERNEL_NR,
MODULES_VADDR_NR,
MODULES_END_NR,
@@ -71,7 +73,9 @@
{ PAGE_OFFSET, "Low Kernel Mapping" },
{ VMALLOC_START, "vmalloc() Area" },
{ VMEMMAP_START, "Vmemmap" },
+# ifdef CONFIG_X86_ESPFIX64
{ ESPFIX_BASE_ADDR, "ESPfix Area", 16 },
+# endif
{ __START_KERNEL_map, "High Kernel Mapping" },
{ MODULES_VADDR, "Modules" },
{ MODULES_END, "End Modules" },
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 25e7e13..919b912 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -31,7 +31,7 @@
#include <linux/sched.h>
#include <asm/elf.h>
-struct __read_mostly va_alignment va_align = {
+struct va_alignment __read_mostly va_align = {
.flags = -1,
};
diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
index c61ea57..9a2b710 100644
--- a/arch/x86/pci/fixup.c
+++ b/arch/x86/pci/fixup.c
@@ -326,27 +326,6 @@
struct pci_bus *bus;
u16 config;
- if (!vga_default_device()) {
- resource_size_t start, end;
- int i;
-
- /* Does firmware framebuffer belong to us? */
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM))
- continue;
-
- start = pci_resource_start(pdev, i);
- end = pci_resource_end(pdev, i);
-
- if (!start || !end)
- continue;
-
- if (screen_info.lfb_base >= start &&
- (screen_info.lfb_base + screen_info.lfb_size) < end)
- vga_set_default_device(pdev);
- }
- }
-
/* Is VGA routed to us? */
bus = pdev->bus;
while (bus) {
@@ -371,8 +350,7 @@
pci_read_config_word(pdev, PCI_COMMAND, &config);
if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
- dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
- vga_set_default_device(pdev);
+ dev_printk(KERN_DEBUG, &pdev->dev, "Video device with shadowed ROM\n");
}
}
}
diff --git a/arch/x86/syscalls/syscall_32.tbl b/arch/x86/syscalls/syscall_32.tbl
index 028b781..9fe1b5d 100644
--- a/arch/x86/syscalls/syscall_32.tbl
+++ b/arch/x86/syscalls/syscall_32.tbl
@@ -363,3 +363,4 @@
354 i386 seccomp sys_seccomp
355 i386 getrandom sys_getrandom
356 i386 memfd_create sys_memfd_create
+357 i386 bpf sys_bpf
diff --git a/arch/x86/syscalls/syscall_64.tbl b/arch/x86/syscalls/syscall_64.tbl
index 35dd922..281150b 100644
--- a/arch/x86/syscalls/syscall_64.tbl
+++ b/arch/x86/syscalls/syscall_64.tbl
@@ -327,6 +327,7 @@
318 common getrandom sys_getrandom
319 common memfd_create sys_memfd_create
320 common kexec_file_load sys_kexec_file_load
+321 common bpf sys_bpf
#
# x32-specific system call numbers start at 512 to avoid cache impact
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index e8a1201..16fb009 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1866,12 +1866,11 @@
*
* We can construct this by grafting the Xen provided pagetable into
* head_64.S's preconstructed pagetables. We copy the Xen L2's into
- * level2_ident_pgt, level2_kernel_pgt and level2_fixmap_pgt. This
- * means that only the kernel has a physical mapping to start with -
- * but that's enough to get __va working. We need to fill in the rest
- * of the physical mapping once some sort of allocator has been set
- * up.
- * NOTE: for PVH, the page tables are native.
+ * level2_ident_pgt, and level2_kernel_pgt. This means that only the
+ * kernel has a physical mapping to start with - but that's enough to
+ * get __va working. We need to fill in the rest of the physical
+ * mapping once some sort of allocator has been set up. NOTE: for
+ * PVH, the page tables are native.
*/
void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
{
@@ -1902,8 +1901,11 @@
/* L3_i[0] -> level2_ident_pgt */
convert_pfn_mfn(level3_ident_pgt);
/* L3_k[510] -> level2_kernel_pgt
- * L3_i[511] -> level2_fixmap_pgt */
+ * L3_k[511] -> level2_fixmap_pgt */
convert_pfn_mfn(level3_kernel_pgt);
+
+ /* L3_k[511][506] -> level1_fixmap_pgt */
+ convert_pfn_mfn(level2_fixmap_pgt);
}
/* We get [511][511] and have Xen's version of level2_kernel_pgt */
l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
@@ -1913,21 +1915,15 @@
addr[1] = (unsigned long)l3;
addr[2] = (unsigned long)l2;
/* Graft it onto L4[272][0]. Note that we are creating an aliasing problem:
- * Both L4[272][0] and L4[511][511] have entries that point to the same
+ * Both L4[272][0] and L4[511][510] have entries that point to the same
* L2 (PMD) tables. Meaning that if you modify it in __va space
* it will be also modified in the __ka space! (But if you just
* modify the PMD table to point to other PTE's or none, then you
* are OK - which is what cleanup_highmap does) */
copy_page(level2_ident_pgt, l2);
- /* Graft it onto L4[511][511] */
+ /* Graft it onto L4[511][510] */
copy_page(level2_kernel_pgt, l2);
- /* Get [511][510] and graft that in level2_fixmap_pgt */
- l3 = m2v(pgd[pgd_index(__START_KERNEL_map + PMD_SIZE)].pgd);
- l2 = m2v(l3[pud_index(__START_KERNEL_map + PMD_SIZE)].pud);
- copy_page(level2_fixmap_pgt, l2);
- /* Note that we don't do anything with level1_fixmap_pgt which
- * we don't need. */
if (!xen_feature(XENFEAT_auto_translated_physmap)) {
/* Make pagetable pieces RO */
set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
@@ -1937,6 +1933,7 @@
set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+ set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
/* Pin down new L4 */
pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
diff --git a/block/blk-exec.c b/block/blk-exec.c
index f4d27b1..9924725 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -56,6 +56,7 @@
bool is_pm_resume;
WARN_ON(irqs_disabled());
+ WARN_ON(rq->cmd_type == REQ_TYPE_FS);
rq->rq_disk = bd_disk;
rq->end_io = done;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 5453583..7788179 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -10,10 +10,11 @@
#include "blk.h"
static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
- struct bio *bio)
+ struct bio *bio,
+ bool no_sg_merge)
{
struct bio_vec bv, bvprv = { NULL };
- int cluster, high, highprv = 1, no_sg_merge;
+ int cluster, high, highprv = 1;
unsigned int seg_size, nr_phys_segs;
struct bio *fbio, *bbio;
struct bvec_iter iter;
@@ -35,7 +36,6 @@
cluster = blk_queue_cluster(q);
seg_size = 0;
nr_phys_segs = 0;
- no_sg_merge = test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
high = 0;
for_each_bio(bio) {
bio_for_each_segment(bv, bio, iter) {
@@ -88,18 +88,23 @@
void blk_recalc_rq_segments(struct request *rq)
{
- rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
+ bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE,
+ &rq->q->queue_flags);
+
+ rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio,
+ no_sg_merge);
}
void blk_recount_segments(struct request_queue *q, struct bio *bio)
{
- if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags))
+ if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
+ bio->bi_vcnt < queue_max_segments(q))
bio->bi_phys_segments = bio->bi_vcnt;
else {
struct bio *nxt = bio->bi_next;
bio->bi_next = NULL;
- bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
+ bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false);
bio->bi_next = nxt;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4aac826..df8e1e0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -119,7 +119,16 @@
spin_unlock_irq(q->queue_lock);
if (freeze) {
- percpu_ref_kill(&q->mq_usage_counter);
+ /*
+ * XXX: Temporary kludge to work around SCSI blk-mq stall.
+ * SCSI synchronously creates and destroys many queues
+ * back-to-back during probe leading to lengthy stalls.
+ * This will be fixed by keeping ->mq_usage_counter in
+ * atomic mode until genhd registration, but, for now,
+ * let's work around using expedited synchronization.
+ */
+ __percpu_ref_kill_expedited(&q->mq_usage_counter);
+
blk_mq_run_queues(q, false);
}
wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->mq_usage_counter));
@@ -203,7 +212,6 @@
if (tag != BLK_MQ_TAG_FAIL) {
rq = data->hctx->tags->rqs[tag];
- rq->cmd_flags = 0;
if (blk_mq_tag_busy(data->hctx)) {
rq->cmd_flags = REQ_MQ_INFLIGHT;
atomic_inc(&data->hctx->nr_active);
@@ -258,6 +266,7 @@
if (rq->cmd_flags & REQ_MQ_INFLIGHT)
atomic_dec(&hctx->nr_active);
+ rq->cmd_flags = 0;
clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
blk_mq_put_tag(hctx, tag, &ctx->last_tag);
@@ -393,6 +402,12 @@
blk_add_timer(rq);
/*
+	 * Ensure that ->deadline is visible before setting the started
+ * flag and clear the completed flag.
+ */
+ smp_mb__before_atomic();
+
+ /*
* Mark us as started and clear complete. Complete might have been
* set if requeue raced with timeout, which then marked it as
* complete. So be sure to clear complete again when we start
@@ -473,7 +488,11 @@
blk_mq_insert_request(rq, false, false, false);
}
- blk_mq_run_queues(q, false);
+ /*
+ * Use the start variant of queue running here, so that running
+ * the requeue work will kick stopped queues.
+ */
+ blk_mq_start_hw_queues(q);
}
void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
@@ -957,14 +976,9 @@
hctx = q->mq_ops->map_queue(q, ctx->cpu);
- if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA) &&
- !(rq->cmd_flags & (REQ_FLUSH_SEQ))) {
- blk_insert_flush(rq);
- } else {
- spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
- spin_unlock(&ctx->lock);
- }
+ spin_lock(&ctx->lock);
+ __blk_mq_insert_request(hctx, rq, at_head);
+ spin_unlock(&ctx->lock);
if (run_queue)
blk_mq_run_hw_queue(hctx, async);
@@ -1321,6 +1335,7 @@
continue;
set->ops->exit_request(set->driver_data, tags->rqs[i],
hctx_idx, i);
+ tags->rqs[i] = NULL;
}
}
@@ -1354,8 +1369,9 @@
INIT_LIST_HEAD(&tags->page_list);
- tags->rqs = kmalloc_node(set->queue_depth * sizeof(struct request *),
- GFP_KERNEL, set->numa_node);
+ tags->rqs = kzalloc_node(set->queue_depth * sizeof(struct request *),
+ GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
+ set->numa_node);
if (!tags->rqs) {
blk_mq_free_tags(tags);
return NULL;
@@ -1379,8 +1395,9 @@
this_order--;
do {
- page = alloc_pages_node(set->numa_node, GFP_KERNEL,
- this_order);
+ page = alloc_pages_node(set->numa_node,
+ GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
+ this_order);
if (page)
break;
if (!this_order--)
@@ -1401,11 +1418,15 @@
left -= to_do * rq_size;
for (j = 0; j < to_do; j++) {
tags->rqs[i] = p;
+ tags->rqs[i]->atomic_flags = 0;
+ tags->rqs[i]->cmd_flags = 0;
if (set->ops->init_request) {
if (set->ops->init_request(set->driver_data,
tags->rqs[i], hctx_idx, i,
- set->numa_node))
+ set->numa_node)) {
+ tags->rqs[i] = NULL;
goto fail;
+ }
}
p += rq_size;
@@ -1416,7 +1437,6 @@
return tags;
fail:
- pr_warn("%s: failed to allocate requests\n", __func__);
blk_mq_free_rq_map(set, tags, hctx_idx);
return NULL;
}
@@ -1936,6 +1956,60 @@
return NOTIFY_OK;
}
+static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+{
+ int i;
+
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ set->tags[i] = blk_mq_init_rq_map(set, i);
+ if (!set->tags[i])
+ goto out_unwind;
+ }
+
+ return 0;
+
+out_unwind:
+ while (--i >= 0)
+ blk_mq_free_rq_map(set, set->tags[i], i);
+
+ return -ENOMEM;
+}
+
+/*
+ * Allocate the request maps associated with this tag_set. Note that this
+ * may reduce the depth asked for, if memory is tight. set->queue_depth
+ * will be updated to reflect the allocated depth.
+ */
+static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+{
+ unsigned int depth;
+ int err;
+
+ depth = set->queue_depth;
+ do {
+ err = __blk_mq_alloc_rq_maps(set);
+ if (!err)
+ break;
+
+ set->queue_depth >>= 1;
+ if (set->queue_depth < set->reserved_tags + BLK_MQ_TAG_MIN) {
+ err = -ENOMEM;
+ break;
+ }
+ } while (set->queue_depth);
+
+ if (!set->queue_depth || err) {
+ pr_err("blk-mq: failed to allocate request map\n");
+ return -ENOMEM;
+ }
+
+ if (depth != set->queue_depth)
+ pr_info("blk-mq: reduced tag depth (%u -> %u)\n",
+ depth, set->queue_depth);
+
+ return 0;
+}
+
/*
* Alloc a tag set to be associated with one or more request queues.
* May fail with EINVAL for various error conditions. May adjust the
@@ -1944,8 +2018,6 @@
*/
int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
{
- int i;
-
if (!set->nr_hw_queues)
return -EINVAL;
if (!set->queue_depth)
@@ -1966,23 +2038,18 @@
sizeof(struct blk_mq_tags *),
GFP_KERNEL, set->numa_node);
if (!set->tags)
- goto out;
+ return -ENOMEM;
- for (i = 0; i < set->nr_hw_queues; i++) {
- set->tags[i] = blk_mq_init_rq_map(set, i);
- if (!set->tags[i])
- goto out_unwind;
- }
+ if (blk_mq_alloc_rq_maps(set))
+ goto enomem;
mutex_init(&set->tag_list_lock);
INIT_LIST_HEAD(&set->tag_list);
return 0;
-
-out_unwind:
- while (--i >= 0)
- blk_mq_free_rq_map(set, set->tags[i], i);
-out:
+enomem:
+ kfree(set->tags);
+ set->tags = NULL;
return -ENOMEM;
}
EXPORT_SYMBOL(blk_mq_alloc_tag_set);
@@ -1997,6 +2064,7 @@
}
kfree(set->tags);
+ set->tags = NULL;
}
EXPORT_SYMBOL(blk_mq_free_tag_set);
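The new blk_mq_alloc_rq_maps() retries at ever smaller depths instead of failing outright: if the requested depth cannot be backed by memory, it is halved until allocation succeeds or the reserved-tag floor is reached. The skeleton, with alloc_maps() standing in for __blk_mq_alloc_rq_maps():

    /* Sketch: allocate at the requested depth, halving on each failure. */
    static int alloc_with_fallback(unsigned int *depth, unsigned int floor,
                                   int (*alloc_maps)(unsigned int))
    {
            while (*depth >= floor) {
                    if (alloc_maps(*depth) == 0)
                            return 0;       /* success, possibly reduced */
                    *depth >>= 1;           /* memory tight: retry smaller */
            }
            return -ENOMEM;
    }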
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 4db5abf..17f5c84 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -554,8 +554,10 @@
* Initialization must be complete by now. Finish the initial
* bypass from queue allocation.
*/
- queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
- blk_queue_bypass_end(q);
+ if (!blk_queue_init_done(q)) {
+ queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
+ blk_queue_bypass_end(q);
+ }
ret = blk_trace_init_sysfs(dev);
if (ret)
diff --git a/block/genhd.c b/block/genhd.c
index 791f419..e6723bd 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -28,10 +28,10 @@
/* for extended dynamic devt allocation, currently only one major is used */
#define NR_EXT_DEVT (1 << MINORBITS)
-/* For extended devt allocation. ext_devt_mutex prevents look up
+/* For extended devt allocation. ext_devt_lock prevents look up
* results from going away underneath its user.
*/
-static DEFINE_MUTEX(ext_devt_mutex);
+static DEFINE_SPINLOCK(ext_devt_lock);
static DEFINE_IDR(ext_devt_idr);
static struct device_type disk_type;
@@ -420,9 +420,13 @@
}
/* allocate ext devt */
- mutex_lock(&ext_devt_mutex);
- idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_KERNEL);
- mutex_unlock(&ext_devt_mutex);
+ idr_preload(GFP_KERNEL);
+
+ spin_lock(&ext_devt_lock);
+ idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
+ spin_unlock(&ext_devt_lock);
+
+ idr_preload_end();
if (idx < 0)
return idx == -ENOSPC ? -EBUSY : idx;
@@ -441,15 +445,13 @@
*/
void blk_free_devt(dev_t devt)
{
- might_sleep();
-
if (devt == MKDEV(0, 0))
return;
if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
- mutex_lock(&ext_devt_mutex);
+ spin_lock(&ext_devt_lock);
idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
- mutex_unlock(&ext_devt_mutex);
+ spin_unlock(&ext_devt_lock);
}
}
@@ -665,7 +667,6 @@
sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
device_del(disk_to_dev(disk));
- blk_free_devt(disk_to_dev(disk)->devt);
}
EXPORT_SYMBOL(del_gendisk);
@@ -690,13 +691,13 @@
} else {
struct hd_struct *part;
- mutex_lock(&ext_devt_mutex);
+ spin_lock(&ext_devt_lock);
part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
if (part && get_disk(part_to_disk(part))) {
*partno = part->partno;
disk = part_to_disk(part);
}
- mutex_unlock(&ext_devt_mutex);
+ spin_unlock(&ext_devt_lock);
}
return disk;
@@ -1098,6 +1099,7 @@
{
struct gendisk *disk = dev_to_disk(dev);
+ blk_free_devt(dev->devt);
disk_release_events(disk);
kfree(disk->random);
disk_replace_part_tbl(disk, NULL);
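Replacing ext_devt_mutex with a spinlock means idr_alloc() can no longer use GFP_KERNEL under the lock; the idr_preload()/GFP_NOWAIT pairing above is the standard idiom for that. Reduced to its core (the idr and lock names are placeholders):

    /* Sketch: preallocate while sleeping is still allowed, then allocate
     * atomically under the spinlock.
     */
    static int alloc_id_locked(struct idr *idr, spinlock_t *lock, void *ptr)
    {
            int id;

            idr_preload(GFP_KERNEL);
            spin_lock(lock);
            id = idr_alloc(idr, ptr, 0, 0, GFP_NOWAIT); /* end 0: no limit */
            spin_unlock(lock);
            idr_preload_end();
            return id;
    }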
diff --git a/block/partition-generic.c b/block/partition-generic.c
index 789cdea..0d9e5f9 100644
--- a/block/partition-generic.c
+++ b/block/partition-generic.c
@@ -211,6 +211,7 @@
static void part_release(struct device *dev)
{
struct hd_struct *p = dev_to_part(dev);
+ blk_free_devt(dev->devt);
free_part_stats(p);
free_part_info(p);
kfree(p);
@@ -253,7 +254,6 @@
rcu_assign_pointer(ptbl->last_lookup, NULL);
kobject_put(part->holder_dir);
device_del(part_to_dev(part));
- blk_free_devt(part_devt(part));
hd_struct_put(part);
}
diff --git a/crypto/drbg.c b/crypto/drbg.c
index 7894db9..a53ee09 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1922,9 +1922,6 @@
/* overflow max addtllen with personalization string */
ret = drbg_instantiate(drbg, &addtl, coreref, pr);
BUG_ON(0 == ret);
- /* test uninstantated DRBG */
- len = drbg_generate(drbg, buf, (max_request_bytes + 1), NULL);
- BUG_ON(0 < len);
/* all tests passed */
rc = 0;
diff --git a/drivers/acpi/acpi_cmos_rtc.c b/drivers/acpi/acpi_cmos_rtc.c
index 2da8660..81dc750 100644
--- a/drivers/acpi/acpi_cmos_rtc.c
+++ b/drivers/acpi/acpi_cmos_rtc.c
@@ -33,7 +33,7 @@
void *handler_context, void *region_context)
{
int i;
- u8 *value = (u8 *)&value64;
+ u8 *value = (u8 *)value64;
if (address > 0xff || !value64)
return AE_BAD_PARAMETER;
diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
index 9dfec48..fddc1e8 100644
--- a/drivers/acpi/acpi_lpss.c
+++ b/drivers/acpi/acpi_lpss.c
@@ -610,7 +610,7 @@
return acpi_dev_suspend_late(dev);
}
-static int acpi_lpss_restore_early(struct device *dev)
+static int acpi_lpss_resume_early(struct device *dev)
{
int ret = acpi_dev_resume_early(dev);
@@ -650,15 +650,15 @@
static struct dev_pm_domain acpi_lpss_pm_domain = {
.ops = {
#ifdef CONFIG_PM_SLEEP
- .suspend_late = acpi_lpss_suspend_late,
- .restore_early = acpi_lpss_restore_early,
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
.suspend = acpi_subsys_suspend,
- .resume_early = acpi_subsys_resume_early,
+ .suspend_late = acpi_lpss_suspend_late,
+ .resume_early = acpi_lpss_resume_early,
.freeze = acpi_subsys_freeze,
.poweroff = acpi_subsys_suspend,
- .poweroff_late = acpi_subsys_suspend_late,
+ .poweroff_late = acpi_lpss_suspend_late,
+ .restore_early = acpi_lpss_resume_early,
#endif
#ifdef CONFIG_PM_RUNTIME
.runtime_suspend = acpi_lpss_runtime_suspend,
diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
index 1c162e7..5fdfe65 100644
--- a/drivers/acpi/battery.c
+++ b/drivers/acpi/battery.c
@@ -534,20 +534,6 @@
" invalid.\n");
}
- /*
- * When fully charged, some batteries wrongly report
- * capacity_now = design_capacity instead of = full_charge_capacity
- */
- if (battery->capacity_now > battery->full_charge_capacity
- && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) {
- if (battery->capacity_now != battery->design_capacity)
- printk_once(KERN_WARNING FW_BUG
- "battery: reported current charge level (%d) "
- "is higher than reported maximum charge level (%d).\n",
- battery->capacity_now, battery->full_charge_capacity);
- battery->capacity_now = battery->full_charge_capacity;
- }
-
if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags)
&& battery->capacity_now >= 0 && battery->capacity_now <= 100)
battery->capacity_now = (battery->capacity_now *
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index 8581f5b..8b67bd0 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -177,16 +177,6 @@
}
EXPORT_SYMBOL_GPL(acpi_bus_detach_private_data);
-void acpi_bus_no_hotplug(acpi_handle handle)
-{
- struct acpi_device *adev = NULL;
-
- acpi_bus_get_device(handle, &adev);
- if (adev)
- adev->flags.no_hotplug = true;
-}
-EXPORT_SYMBOL_GPL(acpi_bus_no_hotplug);
-
static void acpi_print_osc_error(acpi_handle handle,
struct acpi_osc_context *context, char *error)
{
diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
index 65ea7b2..0c94b66 100644
--- a/drivers/base/regmap/regmap-debugfs.c
+++ b/drivers/base/regmap/regmap-debugfs.c
@@ -512,7 +512,14 @@
map, &regmap_reg_ranges_fops);
if (map->max_register || regmap_readable(map, 0)) {
- debugfs_create_file("registers", 0400, map->debugfs,
+ umode_t registers_mode;
+
+ if (IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS))
+ registers_mode = 0600;
+ else
+ registers_mode = 0400;
+
+ debugfs_create_file("registers", registers_mode, map->debugfs,
map, &regmap_map_fops);
debugfs_create_file("access", 0400, map->debugfs,
map, &regmap_access_fops);
diff --git a/drivers/bcma/Makefile b/drivers/bcma/Makefile
index 91290f7..838b4b9 100644
--- a/drivers/bcma/Makefile
+++ b/drivers/bcma/Makefile
@@ -1,5 +1,6 @@
bcma-y += main.o scan.o core.o sprom.o
bcma-y += driver_chipcommon.o driver_chipcommon_pmu.o
+bcma-y += driver_chipcommon_b.o
bcma-$(CONFIG_BCMA_SFLASH) += driver_chipcommon_sflash.o
bcma-$(CONFIG_BCMA_NFLASH) += driver_chipcommon_nflash.o
bcma-y += driver_pci.o
diff --git a/drivers/bcma/bcma_private.h b/drivers/bcma/bcma_private.h
index 09b632a..b40be43 100644
--- a/drivers/bcma/bcma_private.h
+++ b/drivers/bcma/bcma_private.h
@@ -50,6 +50,10 @@
extern struct platform_device bcma_pflash_dev;
#endif /* CONFIG_BCMA_DRIVER_MIPS */
+/* driver_chipcommon_b.c */
+int bcma_core_chipcommon_b_init(struct bcma_drv_cc_b *ccb);
+void bcma_core_chipcommon_b_free(struct bcma_drv_cc_b *ccb);
+
/* driver_chipcommon_pmu.c */
u32 bcma_pmu_get_alp_clock(struct bcma_drv_cc *cc);
u32 bcma_pmu_get_cpu_clock(struct bcma_drv_cc *cc);
diff --git a/drivers/bcma/driver_chipcommon_b.c b/drivers/bcma/driver_chipcommon_b.c
new file mode 100644
index 0000000..c20b5f4
--- /dev/null
+++ b/drivers/bcma/driver_chipcommon_b.c
@@ -0,0 +1,61 @@
+/*
+ * Broadcom specific AMBA
+ * ChipCommon B Unit driver
+ *
+ * Copyright 2014, Hauke Mehrtens <hauke@hauke-m.de>
+ *
+ * Licensed under the GNU/GPL. See COPYING for details.
+ */
+
+#include "bcma_private.h"
+#include <linux/export.h>
+#include <linux/bcma/bcma.h>
+
+static bool bcma_wait_reg(struct bcma_bus *bus, void __iomem *addr, u32 mask,
+ u32 value, int timeout)
+{
+ unsigned long deadline = jiffies + timeout;
+ u32 val;
+
+ do {
+ val = readl(addr);
+ if ((val & mask) == value)
+ return true;
+ cpu_relax();
+ udelay(10);
+ } while (!time_after_eq(jiffies, deadline));
+
+ bcma_err(bus, "Timeout waiting for register %p\n", addr);
+
+ return false;
+}
+
+void bcma_chipco_b_mii_write(struct bcma_drv_cc_b *ccb, u32 offset, u32 value)
+{
+ struct bcma_bus *bus = ccb->core->bus;
+
+ writel(offset, ccb->mii + 0x00);
+ bcma_wait_reg(bus, ccb->mii + 0x00, 0x0100, 0x0000, 100);
+ writel(value, ccb->mii + 0x04);
+ bcma_wait_reg(bus, ccb->mii + 0x00, 0x0100, 0x0000, 100);
+}
+EXPORT_SYMBOL_GPL(bcma_chipco_b_mii_write);
+
+int bcma_core_chipcommon_b_init(struct bcma_drv_cc_b *ccb)
+{
+ if (ccb->setup_done)
+ return 0;
+
+ ccb->setup_done = 1;
+ ccb->mii = ioremap_nocache(ccb->core->addr_s[1], BCMA_CORE_SIZE);
+ if (!ccb->mii)
+ return -ENOMEM;
+
+ return 0;
+}
+
+void bcma_core_chipcommon_b_free(struct bcma_drv_cc_b *ccb)
+{
+ if (ccb->mii)
+ iounmap(ccb->mii);
+}
diff --git a/drivers/bcma/host_pci.c b/drivers/bcma/host_pci.c
index f032ed6..1e5ac0a 100644
--- a/drivers/bcma/host_pci.c
+++ b/drivers/bcma/host_pci.c
@@ -208,6 +208,9 @@
bus->boardinfo.vendor = bus->host_pci->subsystem_vendor;
bus->boardinfo.type = bus->host_pci->subsystem_device;
+ /* Initialize struct, detect chip */
+ bcma_init_bus(bus);
+
/* Register */
err = bcma_bus_register(bus);
if (err)
diff --git a/drivers/bcma/host_soc.c b/drivers/bcma/host_soc.c
index 1edd7e0..718e054 100644
--- a/drivers/bcma/host_soc.c
+++ b/drivers/bcma/host_soc.c
@@ -165,7 +165,6 @@
int __init bcma_host_soc_register(struct bcma_soc *soc)
{
struct bcma_bus *bus = &soc->bus;
- int err;
/* iomap only first core. We have to read some register on this core
* to scan the bus.
@@ -178,7 +177,18 @@
bus->hosttype = BCMA_HOSTTYPE_SOC;
bus->ops = &bcma_host_soc_ops;
- /* Register */
+ /* Initialize struct, detect chip */
+ bcma_init_bus(bus);
+
+ return 0;
+}
+
+int __init bcma_host_soc_init(struct bcma_soc *soc)
+{
+ struct bcma_bus *bus = &soc->bus;
+ int err;
+
+ /* Scan bus and initialize it */
err = bcma_bus_early_register(bus, &soc->core_cc, &soc->core_mips);
if (err)
iounmap(bus->mmio);
diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c
index 0ff8d58..c421403 100644
--- a/drivers/bcma/main.c
+++ b/drivers/bcma/main.c
@@ -120,16 +120,60 @@
kfree(core);
}
-static int bcma_register_cores(struct bcma_bus *bus)
+static bool bcma_is_core_needed_early(u16 core_id)
+{
+ switch (core_id) {
+ case BCMA_CORE_NS_NAND:
+ case BCMA_CORE_NS_QSPI:
+ return true;
+ }
+
+ return false;
+}
+
+static void bcma_register_core(struct bcma_bus *bus, struct bcma_device *core)
+{
+ int err;
+
+ core->dev.release = bcma_release_core_dev;
+ core->dev.bus = &bcma_bus_type;
+ dev_set_name(&core->dev, "bcma%d:%d", bus->num, core->core_index);
+
+ switch (bus->hosttype) {
+ case BCMA_HOSTTYPE_PCI:
+ core->dev.parent = &bus->host_pci->dev;
+ core->dma_dev = &bus->host_pci->dev;
+ core->irq = bus->host_pci->irq;
+ break;
+ case BCMA_HOSTTYPE_SOC:
+ core->dev.dma_mask = &core->dev.coherent_dma_mask;
+ core->dma_dev = &core->dev;
+ break;
+ case BCMA_HOSTTYPE_SDIO:
+ break;
+ }
+
+ err = device_register(&core->dev);
+ if (err) {
+ bcma_err(bus, "Could not register dev for core 0x%03X\n",
+ core->id.id);
+ put_device(&core->dev);
+ return;
+ }
+ core->dev_registered = true;
+}
+
+static int bcma_register_devices(struct bcma_bus *bus)
{
struct bcma_device *core;
- int err, dev_id = 0;
+ int err;
list_for_each_entry(core, &bus->cores, list) {
/* We support these cores ourselves */
switch (core->id.id) {
case BCMA_CORE_4706_CHIPCOMMON:
case BCMA_CORE_CHIPCOMMON:
+ case BCMA_CORE_NS_CHIPCOMMON_B:
case BCMA_CORE_PCI:
case BCMA_CORE_PCIE:
case BCMA_CORE_PCIE2:
@@ -138,39 +182,16 @@
continue;
}
+ /* Early cores were already registered */
+ if (bcma_is_core_needed_early(core->id.id))
+ continue;
+
/* Only first GMAC core on BCM4706 is connected and working */
if (core->id.id == BCMA_CORE_4706_MAC_GBIT &&
core->core_unit > 0)
continue;
- core->dev.release = bcma_release_core_dev;
- core->dev.bus = &bcma_bus_type;
- dev_set_name(&core->dev, "bcma%d:%d", bus->num, dev_id);
-
- switch (bus->hosttype) {
- case BCMA_HOSTTYPE_PCI:
- core->dev.parent = &bus->host_pci->dev;
- core->dma_dev = &bus->host_pci->dev;
- core->irq = bus->host_pci->irq;
- break;
- case BCMA_HOSTTYPE_SOC:
- core->dev.dma_mask = &core->dev.coherent_dma_mask;
- core->dma_dev = &core->dev;
- break;
- case BCMA_HOSTTYPE_SDIO:
- break;
- }
-
- err = device_register(&core->dev);
- if (err) {
- bcma_err(bus,
- "Could not register dev for core 0x%03X\n",
- core->id.id);
- put_device(&core->dev);
- continue;
- }
- core->dev_registered = true;
- dev_id++;
+ bcma_register_core(bus, core);
}
#ifdef CONFIG_BCMA_DRIVER_MIPS
@@ -247,6 +268,12 @@
bcma_core_chipcommon_early_init(&bus->drv_cc);
}
+ /* Cores providing flash access go before SPROM init */
+ list_for_each_entry(core, &bus->cores, list) {
+ if (bcma_is_core_needed_early(core->id.id))
+ bcma_register_core(bus, core);
+ }
+
/* Try to get SPROM */
err = bcma_sprom_get(bus);
if (err == -ENOENT) {
@@ -261,6 +288,13 @@
bcma_core_chipcommon_init(&bus->drv_cc);
}
+ /* Init CC core */
+ core = bcma_find_core(bus, BCMA_CORE_NS_CHIPCOMMON_B);
+ if (core) {
+ bus->drv_cc_b.core = core;
+ bcma_core_chipcommon_b_init(&bus->drv_cc_b);
+ }
+
/* Init MIPS core */
core = bcma_find_core(bus, BCMA_CORE_MIPS_74K);
if (core) {
@@ -297,7 +331,7 @@
}
/* Register found cores */
- bcma_register_cores(bus);
+ bcma_register_devices(bus);
bcma_info(bus, "Bus registered\n");
@@ -315,6 +349,8 @@
else if (err)
bcma_err(bus, "Can not unregister GPIO driver: %i\n", err);
+ bcma_core_chipcommon_b_free(&bus->drv_cc_b);
+
cores[0] = bcma_find_core(bus, BCMA_CORE_MIPS_74K);
cores[1] = bcma_find_core(bus, BCMA_CORE_PCIE);
cores[2] = bcma_find_core(bus, BCMA_CORE_4706_MAC_GBIT_COMMON);
@@ -334,8 +370,6 @@
struct bcma_device *core;
struct bcma_device_id match;
- bcma_init_bus(bus);
-
match.manuf = BCMA_MANUF_BCM;
match.id = bcma_cc_core_id(bus);
match.class = BCMA_CL_SIM;
diff --git a/drivers/bcma/scan.c b/drivers/bcma/scan.c
index e9bd772..b3a403c 100644
--- a/drivers/bcma/scan.c
+++ b/drivers/bcma/scan.c
@@ -276,7 +276,7 @@
struct bcma_device *core)
{
u32 tmp;
- u8 i, j;
+ u8 i, j, k;
s32 cia, cib;
u8 ports[2], wrappers[2];
@@ -314,6 +314,7 @@
/* Some specific cores don't need wrappers */
switch (core->id.id) {
case BCMA_CORE_4706_MAC_GBIT_COMMON:
+ case BCMA_CORE_NS_CHIPCOMMON_B:
/* Not used yet: case BCMA_CORE_OOB_ROUTER: */
break;
default:
@@ -367,6 +368,7 @@
core->addr = tmp;
/* get & parse slave ports */
+ k = 0;
for (i = 0; i < ports[1]; i++) {
for (j = 0; ; j++) {
tmp = bcma_erom_get_addr_desc(bus, eromptr,
@@ -376,9 +378,9 @@
/* pr_debug("erom: slave port %d "
* "has %d descriptors\n", i, j); */
break;
- } else {
- if (i == 0 && j == 0)
- core->addr1 = tmp;
+ } else if (k < ARRAY_SIZE(core->addr_s)) {
+ core->addr_s[k] = tmp;
+ k++;
}
}
}
@@ -438,9 +440,6 @@
s32 tmp;
struct bcma_chipinfo *chipinfo = &(bus->chipinfo);
- if (bus->init_done)
- return;
-
INIT_LIST_HEAD(&bus->cores);
bus->nr_cores = 0;
@@ -452,8 +451,6 @@
chipinfo->pkg = (tmp & BCMA_CC_ID_PKG) >> BCMA_CC_ID_PKG_SHIFT;
bcma_info(bus, "Found chip with id 0x%04X, rev 0x%02X and package 0x%02X\n",
chipinfo->id, chipinfo->rev, chipinfo->pkg);
-
- bus->init_done = true;
}
int bcma_bus_scan(struct bcma_bus *bus)
@@ -463,8 +460,6 @@
int err, core_num = 0;
- bcma_init_bus(bus);
-
erombase = bcma_scan_read32(bus, 0, BCMA_CC_EROM);
if (bus->hosttype == BCMA_HOSTTYPE_SOC) {
eromptr = ioremap_nocache(erombase, BCMA_CORE_SIZE);
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index db1e956..5c8e7fe 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3918,7 +3918,6 @@
if (rv) {
dev_err(&dd->pdev->dev,
"Unable to allocate request queue\n");
- rv = -ENOMEM;
goto block_queue_alloc_init_error;
}
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index a3b042c..00d469c 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -462,17 +462,21 @@
struct gendisk *disk;
struct nullb *nullb;
sector_t size;
+ int rv;
nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, home_node);
- if (!nullb)
+ if (!nullb) {
+ rv = -ENOMEM;
goto out;
+ }
spin_lock_init(&nullb->lock);
if (queue_mode == NULL_Q_MQ && use_per_node_hctx)
submit_queues = nr_online_nodes;
- if (setup_queues(nullb))
+ rv = setup_queues(nullb);
+ if (rv)
goto out_free_nullb;
if (queue_mode == NULL_Q_MQ) {
@@ -484,22 +488,29 @@
nullb->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
nullb->tag_set.driver_data = nullb;
- if (blk_mq_alloc_tag_set(&nullb->tag_set))
+ rv = blk_mq_alloc_tag_set(&nullb->tag_set);
+ if (rv)
goto out_cleanup_queues;
nullb->q = blk_mq_init_queue(&nullb->tag_set);
- if (!nullb->q)
+ if (!nullb->q) {
+ rv = -ENOMEM;
goto out_cleanup_tags;
+ }
} else if (queue_mode == NULL_Q_BIO) {
nullb->q = blk_alloc_queue_node(GFP_KERNEL, home_node);
- if (!nullb->q)
+ if (!nullb->q) {
+ rv = -ENOMEM;
goto out_cleanup_queues;
+ }
blk_queue_make_request(nullb->q, null_queue_bio);
init_driver_queues(nullb);
} else {
nullb->q = blk_init_queue_node(null_request_fn, &nullb->lock, home_node);
- if (!nullb->q)
+ if (!nullb->q) {
+ rv = -ENOMEM;
goto out_cleanup_queues;
+ }
blk_queue_prep_rq(nullb->q, null_rq_prep_fn);
blk_queue_softirq_done(nullb->q, null_softirq_done_fn);
init_driver_queues(nullb);
@@ -509,8 +520,10 @@
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q);
disk = nullb->disk = alloc_disk_node(1, home_node);
- if (!disk)
+ if (!disk) {
+ rv = -ENOMEM;
goto out_cleanup_blk_queue;
+ }
mutex_lock(&lock);
list_add_tail(&nullb->list, &nullb_list);
@@ -544,7 +557,7 @@
out_free_nullb:
kfree(nullb);
out:
- return -ENOMEM;
+ return rv;
}
static int __init null_init(void)
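The null_blk hunks above all make the same change: instead of funnelling every failure to a single "return -ENOMEM", each site records its own code in rv, so the caller sees, for example, what blk_mq_alloc_tag_set() actually returned. The shape of the fix (helper names are hypothetical):

    /* Sketch: propagate the specific error from each failure site. */
    static int add_device(void)
    {
            int rv = setup_queues();

            if (rv)
                    goto out;
            rv = alloc_tags();
            if (rv)
                    goto out_cleanup;
            return 0;
    out_cleanup:
            cleanup_queues();
    out:
            return rv;      /* previously: unconditionally -ENOMEM */
    }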
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 623c841..4b97baf 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -5087,9 +5087,11 @@
set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);
set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only);
- rbd_dev->rq_wq = alloc_workqueue(rbd_dev->disk->disk_name, 0, 0);
- if (!rbd_dev->rq_wq)
+ rbd_dev->rq_wq = alloc_workqueue("%s", 0, 0, rbd_dev->disk->disk_name);
+ if (!rbd_dev->rq_wq) {
+ ret = -ENOMEM;
goto err_out_mapping;
+ }
ret = rbd_bus_add_dev(rbd_dev);
if (ret)
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 0527b29..a79d657 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -331,6 +331,9 @@
BT_ERR("%s corrupted event packet", hdev->name);
hdev->stat.err_rx++;
}
+ } else if (urb->status == -ENOENT) {
+		/* Avoid suspend failures while usb_kill_urb() is in progress */
+ return;
}
if (!test_bit(BTUSB_INTR_RUNNING, &data->flags))
@@ -419,6 +422,9 @@
BT_ERR("%s corrupted ACL packet", hdev->name);
hdev->stat.err_rx++;
}
+ } else if (urb->status == -ENOENT) {
+		/* Avoid suspend failures while usb_kill_urb() is in progress */
+ return;
}
if (!test_bit(BTUSB_BULK_RUNNING, &data->flags))
@@ -513,6 +519,9 @@
hdev->stat.err_rx++;
}
}
+ } else if (urb->status == -ENOENT) {
+		/* Avoid suspend failures while usb_kill_urb() is in progress */
+ return;
}
if (!test_bit(BTUSB_ISOC_RUNNING, &data->flags))
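All three btusb hunks add the same early return: a completion status of -ENOENT means the URB was killed (typically by usb_kill_urb() during suspend), and the handler must not fall through to the resubmit path. The resulting handler shape, as a sketch:

    /* Sketch: completion-handler flow after the change above. */
    static void rx_complete(struct urb *urb)
    {
            if (urb->status == 0) {
                    /* process received data */
            } else if (urb->status == -ENOENT) {
                    return;         /* URB killed: never resubmit */
            }
            /* ... resubmit with usb_submit_urb(urb, GFP_ATOMIC) ... */
    }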
diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
index 2e3139e..132c9cc 100644
--- a/drivers/char/hw_random/virtio-rng.c
+++ b/drivers/char/hw_random/virtio-rng.c
@@ -36,6 +36,7 @@
int index;
bool busy;
bool hwrng_register_done;
+ bool hwrng_removed;
};
@@ -68,6 +69,9 @@
int ret;
struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
+ if (vi->hwrng_removed)
+ return -ENODEV;
+
if (!vi->busy) {
vi->busy = true;
init_completion(&vi->have_data);
@@ -137,6 +141,9 @@
{
struct virtrng_info *vi = vdev->priv;
+ vi->hwrng_removed = true;
+ vi->data_avail = 0;
+ complete(&vi->have_data);
vdev->config->reset(vdev);
vi->busy = false;
if (vi->hwrng_register_done)
diff --git a/drivers/clk/at91/clk-slow.c b/drivers/clk/at91/clk-slow.c
index 0300c46..32f7c1b 100644
--- a/drivers/clk/at91/clk-slow.c
+++ b/drivers/clk/at91/clk-slow.c
@@ -447,7 +447,7 @@
int i;
num_parents = of_count_phandle_with_args(np, "clocks", "#clock-cells");
- if (num_parents <= 0 || num_parents > 1)
+ if (num_parents != 2)
return;
for (i = 0; i < num_parents; ++i) {
diff --git a/drivers/clk/clk-efm32gg.c b/drivers/clk/clk-efm32gg.c
index bac2ddf..73a8d0f 100644
--- a/drivers/clk/clk-efm32gg.c
+++ b/drivers/clk/clk-efm32gg.c
@@ -22,7 +22,7 @@
.clk_num = ARRAY_SIZE(clk),
};
-static int __init efm32gg_cmu_init(struct device_node *np)
+static void __init efm32gg_cmu_init(struct device_node *np)
{
int i;
void __iomem *base;
@@ -33,7 +33,7 @@
base = of_iomap(np, 0);
if (!base) {
pr_warn("Failed to map address range for efm32gg,cmu node\n");
- return -EADDRNOTAVAIL;
+ return;
}
clk[clk_HFXO] = clk_register_fixed_rate(NULL, "HFXO", NULL,
@@ -76,6 +76,6 @@
clk[clk_HFPERCLKDAC0] = clk_register_gate(NULL, "HFPERCLK.DAC0",
"HFXO", 0, base + CMU_HFPERCLKEN0, 17, 0, NULL);
- return of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+ of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
}
CLK_OF_DECLARE(efm32ggcmu, "efm32gg,cmu", efm32gg_cmu_init);
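The efm32gg change is a prototype fix: of_clk_init() invokes callbacks registered through CLK_OF_DECLARE() via of_clk_init_cb_t, declared in clk-provider.h as

	typedef void (*of_clk_init_cb_t)(struct device_node *);

so an int-returning init function was being called through a mismatched function-pointer type, and its return value was dropped anyway.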
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index b76fa69..bacc06f 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1467,6 +1467,7 @@
static void clk_change_rate(struct clk *clk)
{
struct clk *child;
+ struct hlist_node *tmp;
unsigned long old_rate;
unsigned long best_parent_rate = 0;
bool skip_set_rate = false;
@@ -1502,7 +1503,11 @@
if (clk->notifier_count && old_rate != clk->rate)
__clk_notify(clk, POST_RATE_CHANGE, old_rate, clk->rate);
- hlist_for_each_entry(child, &clk->children, child_node) {
+ /*
+ * Use safe iteration, as change_rate can actually swap parents
+ * for certain clock types.
+ */
+ hlist_for_each_entry_safe(child, tmp, &clk->children, child_node) {
/* Skip children who will be reparented to another clock */
if (child->new_parent && child->new_parent != clk)
continue;
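hlist_for_each_entry_safe() stashes the next node in tmp before the loop body runs, so the walk survives the current child being unhooked and re-added under a different parent. A sketch (recalc() is hypothetical):

	struct clk *child;
	struct hlist_node *tmp;

	hlist_for_each_entry_safe(child, tmp, &clk->children, child_node)
		recalc(child);	/* may move child to another list; tmp keeps the walk valid */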
diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
index 4032e51..3b83b7d 100644
--- a/drivers/clk/qcom/gcc-ipq806x.c
+++ b/drivers/clk/qcom/gcc-ipq806x.c
@@ -1095,7 +1095,7 @@
};
static const struct freq_tbl clk_tbl_sdc[] = {
- { 144000, P_PXO, 5, 18,625 },
+ { 200000, P_PXO, 2, 2, 125 },
{ 400000, P_PLL8, 4, 1, 240 },
{ 16000000, P_PLL8, 4, 1, 6 },
{ 17070000, P_PLL8, 1, 2, 45 },
diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
index 0d8c6c5..b22a2d2 100644
--- a/drivers/clk/rockchip/clk-rk3288.c
+++ b/drivers/clk/rockchip/clk-rk3288.c
@@ -545,7 +545,7 @@
GATE(PCLK_PWM, "pclk_pwm", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 0, GFLAGS),
GATE(PCLK_TIMER, "pclk_timer", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 1, GFLAGS),
GATE(PCLK_I2C0, "pclk_i2c0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 2, GFLAGS),
- GATE(PCLK_I2C1, "pclk_i2c1", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 3, GFLAGS),
+ GATE(PCLK_I2C2, "pclk_i2c2", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 3, GFLAGS),
GATE(0, "pclk_ddrupctl0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 14, GFLAGS),
GATE(0, "pclk_publ0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 15, GFLAGS),
GATE(0, "pclk_ddrupctl1", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 0, GFLAGS),
@@ -603,7 +603,7 @@
GATE(PCLK_I2C4, "pclk_i2c4", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 15, GFLAGS),
GATE(PCLK_UART3, "pclk_uart3", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 11, GFLAGS),
GATE(PCLK_UART4, "pclk_uart4", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 12, GFLAGS),
- GATE(PCLK_I2C2, "pclk_i2c2", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 13, GFLAGS),
+ GATE(PCLK_I2C1, "pclk_i2c1", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 13, GFLAGS),
GATE(PCLK_I2C3, "pclk_i2c3", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 14, GFLAGS),
GATE(PCLK_SARADC, "pclk_saradc", "pclk_peri", 0, RK3288_CLKGATE_CON(7), 1, GFLAGS),
GATE(PCLK_TSADC, "pclk_tsadc", "pclk_peri", 0, RK3288_CLKGATE_CON(7), 2, GFLAGS),
diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
index 4a65b41..af29359 100644
--- a/drivers/clk/ti/clk-dra7-atl.c
+++ b/drivers/clk/ti/clk-dra7-atl.c
@@ -139,9 +139,13 @@
static int atl_clk_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
- struct dra7_atl_desc *cdesc = to_atl_desc(hw);
+ struct dra7_atl_desc *cdesc;
u32 divider;
+ if (!hw || !rate)
+ return -EINVAL;
+
+ cdesc = to_atl_desc(hw);
divider = ((parent_rate + rate / 2) / rate) - 1;
if (divider > DRA7_ATL_DIVIDER_MASK)
divider = DRA7_ATL_DIVIDER_MASK;
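This hunk, and the ti/divider.c one just below, guard .set_rate against a NULL hw and a zero rate before doing any arithmetic, since the rounding expression divides by rate:

	if (!hw || !rate)	/* rate == 0 would divide by zero below */
		return -EINVAL;

	divider = ((parent_rate + rate / 2) / rate) - 1;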
diff --git a/drivers/clk/ti/divider.c b/drivers/clk/ti/divider.c
index e6aa10d..a837f70 100644
--- a/drivers/clk/ti/divider.c
+++ b/drivers/clk/ti/divider.c
@@ -211,11 +211,16 @@
static int ti_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
- struct clk_divider *divider = to_clk_divider(hw);
+ struct clk_divider *divider;
unsigned int div, value;
unsigned long flags = 0;
u32 val;
+ if (!hw || !rate)
+ return -EINVAL;
+
+ divider = to_clk_divider(hw);
+
div = DIV_ROUND_UP(parent_rate, rate);
value = _get_val(divider, div);
diff --git a/drivers/cpufreq/cpufreq_opp.c b/drivers/cpufreq/cpufreq_opp.c
index f7a32d2..773bcde 100644
--- a/drivers/cpufreq/cpufreq_opp.c
+++ b/drivers/cpufreq/cpufreq_opp.c
@@ -60,7 +60,7 @@
goto out;
}
- freq_table = kcalloc(sizeof(*freq_table), (max_opps + 1), GFP_ATOMIC);
+ freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
if (!freq_table) {
ret = -ENOMEM;
goto out;
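kcalloc() is declared as kcalloc(n, size, flags). The swapped arguments allocated the same number of bytes, since the product commutes, but the fix restores the documented count-then-size order that readers and static checkers expect:

	/* void *kcalloc(size_t n, size_t size, gfp_t flags); */
	freq_table = kcalloc(max_opps + 1, sizeof(*freq_table), GFP_ATOMIC);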
diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index 20dc848..4d4e016 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -367,6 +367,10 @@
{
int ret;
+ ret = ccp_present();
+ if (ret)
+ return ret;
+
spin_lock_init(&req_queue_lock);
INIT_LIST_HEAD(&req_queue.cmds);
req_queue.backlog = &req_queue.cmds;
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index a7d1106..c6e6171 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -55,6 +55,20 @@
}
/**
+ * ccp_present - check if a CCP device is present
+ *
+ * Returns zero if a CCP device is present, -ENODEV otherwise.
+ */
+int ccp_present(void)
+{
+ if (ccp_get_device())
+ return 0;
+
+ return -ENODEV;
+}
+EXPORT_SYMBOL_GPL(ccp_present);
+
+/**
* ccp_enqueue_cmd - queue an operation for processing by the CCP
*
* @cmd: ccp_cmd struct to be processed
diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h
index b707f29..65dd1ff 100644
--- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h
+++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h
@@ -66,7 +66,7 @@
#define ADF_DH895XCC_ETR_MAX_BANKS 32
#define ADF_DH895XCC_SMIAPF0_MASK_OFFSET (0x3A000 + 0x28)
#define ADF_DH895XCC_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30)
-#define ADF_DH895XCC_SMIA0_MASK 0xFFFF
+#define ADF_DH895XCC_SMIA0_MASK 0xFFFFFFFF
#define ADF_DH895XCC_SMIA1_MASK 0x1
/* Error detection and correction */
#define ADF_DH895XCC_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818)
diff --git a/drivers/dma/dma-jz4740.c b/drivers/dma/dma-jz4740.c
index 6a9d89c..ae2ab14 100644
--- a/drivers/dma/dma-jz4740.c
+++ b/drivers/dma/dma-jz4740.c
@@ -362,8 +362,9 @@
vchan_cyclic_callback(&chan->desc->vdesc);
} else {
if (chan->next_sg == chan->desc->num_sgs) {
- chan->desc = NULL;
+ list_del(&chan->desc->vdesc.node);
vchan_cookie_complete(&chan->desc->vdesc);
+ chan->desc = NULL;
}
}
}
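The jz4740 reordering fixes a plain NULL dereference visible in the hunk itself: the old code cleared chan->desc and then dereferenced it on the very next line. The fixed sequence unlinks the descriptor, signals completion, and only then drops the channel's cursor:

	/* old, broken order */
	chan->desc = NULL;
	vchan_cookie_complete(&chan->desc->vdesc);	/* NULL dereference */

	/* fixed order */
	list_del(&chan->desc->vdesc.node);
	vchan_cookie_complete(&chan->desc->vdesc);
	chan->desc = NULL;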
diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
index a56bb35..c846a96 100644
--- a/drivers/firmware/efi/libstub/fdt.c
+++ b/drivers/firmware/efi/libstub/fdt.c
@@ -22,7 +22,7 @@
unsigned long map_size, unsigned long desc_size,
u32 desc_ver)
{
- int node, prev;
+ int node, prev, num_rsv;
int status;
u32 fdt_val32;
u64 fdt_val64;
@@ -73,6 +73,14 @@
prev = node;
}
+ /*
+ * Delete all memory reserve map entries. When booting via UEFI,
+ * the kernel will use the UEFI memory map to find reserved regions.
+ */
+ num_rsv = fdt_num_mem_rsv(fdt);
+ while (num_rsv-- > 0)
+ fdt_del_mem_rsv(fdt, num_rsv);
+
node = fdt_subnode_offset(fdt, 0, "chosen");
if (node < 0) {
node = fdt_add_subnode(fdt, 0, "chosen");
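One detail worth spelling out in the memreserve deletion above: fdt_del_mem_rsv() removes an entry by index and shifts the later entries down, so the loop counts down from the highest index to keep the remaining indices stable:

	num_rsv = fdt_num_mem_rsv(fdt);
	while (num_rsv-- > 0)		/* delete from the top down */
		fdt_del_mem_rsv(fdt, num_rsv);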
diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
index a2cc6be..b792194 100644
--- a/drivers/gpu/drm/ast/ast_main.c
+++ b/drivers/gpu/drm/ast/ast_main.c
@@ -67,6 +67,7 @@
{
struct ast_private *ast = dev->dev_private;
uint32_t data, jreg;
+ ast_open_key(ast);
if (dev->pdev->device == PCI_CHIP_AST1180) {
ast->chip = AST1100;
@@ -104,7 +105,7 @@
}
ast->vga2_clone = false;
} else {
- ast->chip = 2000;
+ ast->chip = AST2000;
DRM_INFO("AST 2000 detected\n");
}
}
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 9d7346b..6b7efcf3 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -250,6 +250,7 @@
DRM_MODE_CONNECTOR_VIRTUAL);
drm_connector_helper_add(connector,
&bochs_connector_connector_helper_funcs);
+ drm_connector_register(connector);
}
diff --git a/drivers/gpu/drm/cirrus/cirrus_mode.c b/drivers/gpu/drm/cirrus/cirrus_mode.c
index e1c5c32..c7c5a9d 100644
--- a/drivers/gpu/drm/cirrus/cirrus_mode.c
+++ b/drivers/gpu/drm/cirrus/cirrus_mode.c
@@ -555,6 +555,7 @@
drm_connector_helper_add(connector, &cirrus_vga_connector_helper_funcs);
+ drm_connector_register(connector);
return connector;
}
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 2e7f03a..9933c26 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -1336,12 +1336,17 @@
intel_power_domains_init_hw(dev_priv);
+ /*
+ * We enable some interrupt sources in our postinstall hooks, so mark
+ * interrupts as enabled _before_ actually enabling them to avoid
+ * special cases in our ordering checks.
+ */
+ dev_priv->pm._irqs_disabled = false;
+
ret = drm_irq_install(dev, dev->pdev->irq);
if (ret)
goto cleanup_gem_stolen;
- dev_priv->pm._irqs_disabled = false;
-
/* Important: The output setup functions called by modeset_init need
* working irqs for e.g. gmbus and dp aux transfers. */
intel_modeset_init(dev);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7a830ea..3524306 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -184,6 +184,7 @@
if ((1 << (domain)) & (mask))
struct drm_i915_private;
+struct i915_mm_struct;
struct i915_mmu_object;
enum intel_dpll_id {
@@ -1506,9 +1507,8 @@
struct i915_gtt gtt; /* VM representing the global address space */
struct i915_gem_mm mm;
-#if defined(CONFIG_MMU_NOTIFIER)
- DECLARE_HASHTABLE(mmu_notifiers, 7);
-#endif
+ DECLARE_HASHTABLE(mm_structs, 7);
+ struct mutex mm_lock;
/* Kernel Modesetting */
@@ -1814,8 +1814,8 @@
unsigned workers :4;
#define I915_GEM_USERPTR_MAX_WORKERS 15
- struct mm_struct *mm;
- struct i915_mmu_object *mn;
+ struct i915_mm_struct *mm;
+ struct i915_mmu_object *mmu_object;
struct work_struct *work;
} userptr;
};
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ba7f5c6..ad55b06 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1590,10 +1590,13 @@
out:
switch (ret) {
case -EIO:
- /* If this -EIO is due to a gpu hang, give the reset code a
- * chance to clean up the mess. Otherwise return the proper
- * SIGBUS. */
- if (i915_terminally_wedged(&dev_priv->gpu_error)) {
+ /*
+ * We eat errors when the gpu is terminally wedged to avoid
+ * userspace unduly crashing (gl has no provisions for mmaps to
+ * fail). But any other -EIO isn't ours (e.g. swap in failure)
+ * and so needs to be reported.
+ */
+ if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
ret = VM_FAULT_SIGBUS;
break;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index fe69fc8..d384139 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -32,6 +32,15 @@
#include <linux/mempolicy.h>
#include <linux/swap.h>
+struct i915_mm_struct {
+ struct mm_struct *mm;
+ struct drm_device *dev;
+ struct i915_mmu_notifier *mn;
+ struct hlist_node node;
+ struct kref kref;
+ struct work_struct work;
+};
+
#if defined(CONFIG_MMU_NOTIFIER)
#include <linux/interval_tree.h>
@@ -41,16 +50,12 @@
struct mmu_notifier mn;
struct rb_root objects;
struct list_head linear;
- struct drm_device *dev;
- struct mm_struct *mm;
- struct work_struct work;
- unsigned long count;
unsigned long serial;
bool has_linear;
};
struct i915_mmu_object {
- struct i915_mmu_notifier *mmu;
+ struct i915_mmu_notifier *mn;
struct interval_tree_node it;
struct list_head link;
struct drm_i915_gem_object *obj;
@@ -96,18 +101,18 @@
unsigned long start,
unsigned long end)
{
- struct i915_mmu_object *mmu;
+ struct i915_mmu_object *mo;
unsigned long serial;
restart:
serial = mn->serial;
- list_for_each_entry(mmu, &mn->linear, link) {
+ list_for_each_entry(mo, &mn->linear, link) {
struct drm_i915_gem_object *obj;
- if (mmu->it.last < start || mmu->it.start > end)
+ if (mo->it.last < start || mo->it.start > end)
continue;
- obj = mmu->obj;
+ obj = mo->obj;
drm_gem_object_reference(&obj->base);
spin_unlock(&mn->lock);
@@ -160,130 +165,47 @@
};
static struct i915_mmu_notifier *
-__i915_mmu_notifier_lookup(struct drm_device *dev, struct mm_struct *mm)
+i915_mmu_notifier_create(struct mm_struct *mm)
{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_mmu_notifier *mmu;
-
- /* Protected by dev->struct_mutex */
- hash_for_each_possible(dev_priv->mmu_notifiers, mmu, node, (unsigned long)mm)
- if (mmu->mm == mm)
- return mmu;
-
- return NULL;
-}
-
-static struct i915_mmu_notifier *
-i915_mmu_notifier_get(struct drm_device *dev, struct mm_struct *mm)
-{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_mmu_notifier *mmu;
+ struct i915_mmu_notifier *mn;
int ret;
- lockdep_assert_held(&dev->struct_mutex);
-
- mmu = __i915_mmu_notifier_lookup(dev, mm);
- if (mmu)
- return mmu;
-
- mmu = kmalloc(sizeof(*mmu), GFP_KERNEL);
- if (mmu == NULL)
+ mn = kmalloc(sizeof(*mn), GFP_KERNEL);
+ if (mn == NULL)
return ERR_PTR(-ENOMEM);
- spin_lock_init(&mmu->lock);
- mmu->dev = dev;
- mmu->mn.ops = &i915_gem_userptr_notifier;
- mmu->mm = mm;
- mmu->objects = RB_ROOT;
- mmu->count = 0;
- mmu->serial = 1;
- INIT_LIST_HEAD(&mmu->linear);
- mmu->has_linear = false;
+ spin_lock_init(&mn->lock);
+ mn->mn.ops = &i915_gem_userptr_notifier;
+ mn->objects = RB_ROOT;
+ mn->serial = 1;
+ INIT_LIST_HEAD(&mn->linear);
+ mn->has_linear = false;
- /* Protected by mmap_sem (write-lock) */
- ret = __mmu_notifier_register(&mmu->mn, mm);
+ /* Protected by mmap_sem (write-lock) */
+ ret = __mmu_notifier_register(&mn->mn, mm);
if (ret) {
- kfree(mmu);
+ kfree(mn);
return ERR_PTR(ret);
}
- /* Protected by dev->struct_mutex */
- hash_add(dev_priv->mmu_notifiers, &mmu->node, (unsigned long)mm);
- return mmu;
+ return mn;
}
-static void
-__i915_mmu_notifier_destroy_worker(struct work_struct *work)
+static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mn)
{
- struct i915_mmu_notifier *mmu = container_of(work, typeof(*mmu), work);
- mmu_notifier_unregister(&mmu->mn, mmu->mm);
- kfree(mmu);
-}
-
-static void
-__i915_mmu_notifier_destroy(struct i915_mmu_notifier *mmu)
-{
- lockdep_assert_held(&mmu->dev->struct_mutex);
-
- /* Protected by dev->struct_mutex */
- hash_del(&mmu->node);
-
- /* Our lock ordering is: mmap_sem, mmu_notifier_scru, struct_mutex.
- * We enter the function holding struct_mutex, therefore we need
- * to drop our mutex prior to calling mmu_notifier_unregister in
- * order to prevent lock inversion (and system-wide deadlock)
- * between the mmap_sem and struct-mutex. Hence we defer the
- * unregistration to a workqueue where we hold no locks.
- */
- INIT_WORK(&mmu->work, __i915_mmu_notifier_destroy_worker);
- schedule_work(&mmu->work);
-}
-
-static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mmu)
-{
- if (++mmu->serial == 0)
- mmu->serial = 1;
-}
-
-static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mmu)
-{
- struct i915_mmu_object *mn;
-
- list_for_each_entry(mn, &mmu->linear, link)
- if (mn->is_linear)
- return true;
-
- return false;
-}
-
-static void
-i915_mmu_notifier_del(struct i915_mmu_notifier *mmu,
- struct i915_mmu_object *mn)
-{
- lockdep_assert_held(&mmu->dev->struct_mutex);
-
- spin_lock(&mmu->lock);
- list_del(&mn->link);
- if (mn->is_linear)
- mmu->has_linear = i915_mmu_notifier_has_linear(mmu);
- else
- interval_tree_remove(&mn->it, &mmu->objects);
- __i915_mmu_notifier_update_serial(mmu);
- spin_unlock(&mmu->lock);
-
- /* Protected against _add() by dev->struct_mutex */
- if (--mmu->count == 0)
- __i915_mmu_notifier_destroy(mmu);
+ if (++mn->serial == 0)
+ mn->serial = 1;
}
static int
-i915_mmu_notifier_add(struct i915_mmu_notifier *mmu,
- struct i915_mmu_object *mn)
+i915_mmu_notifier_add(struct drm_device *dev,
+ struct i915_mmu_notifier *mn,
+ struct i915_mmu_object *mo)
{
struct interval_tree_node *it;
int ret;
- ret = i915_mutex_lock_interruptible(mmu->dev);
+ ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ret;
@@ -291,11 +213,11 @@
* remove the objects from the interval tree) before we do
* the check for overlapping objects.
*/
- i915_gem_retire_requests(mmu->dev);
+ i915_gem_retire_requests(dev);
- spin_lock(&mmu->lock);
- it = interval_tree_iter_first(&mmu->objects,
- mn->it.start, mn->it.last);
+ spin_lock(&mn->lock);
+ it = interval_tree_iter_first(&mn->objects,
+ mo->it.start, mo->it.last);
if (it) {
struct drm_i915_gem_object *obj;
@@ -312,86 +234,122 @@
obj = container_of(it, struct i915_mmu_object, it)->obj;
if (!obj->userptr.workers)
- mmu->has_linear = mn->is_linear = true;
+ mn->has_linear = mo->is_linear = true;
else
ret = -EAGAIN;
} else
- interval_tree_insert(&mn->it, &mmu->objects);
+ interval_tree_insert(&mo->it, &mn->objects);
if (ret == 0) {
- list_add(&mn->link, &mmu->linear);
- __i915_mmu_notifier_update_serial(mmu);
+ list_add(&mo->link, &mn->linear);
+ __i915_mmu_notifier_update_serial(mn);
}
- spin_unlock(&mmu->lock);
- mutex_unlock(&mmu->dev->struct_mutex);
+ spin_unlock(&mn->lock);
+ mutex_unlock(&dev->struct_mutex);
return ret;
}
+static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mn)
+{
+ struct i915_mmu_object *mo;
+
+ list_for_each_entry(mo, &mn->linear, link)
+ if (mo->is_linear)
+ return true;
+
+ return false;
+}
+
+static void
+i915_mmu_notifier_del(struct i915_mmu_notifier *mn,
+ struct i915_mmu_object *mo)
+{
+ spin_lock(&mn->lock);
+ list_del(&mo->link);
+ if (mo->is_linear)
+ mn->has_linear = i915_mmu_notifier_has_linear(mn);
+ else
+ interval_tree_remove(&mo->it, &mn->objects);
+ __i915_mmu_notifier_update_serial(mn);
+ spin_unlock(&mn->lock);
+}
+
static void
i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
{
- struct i915_mmu_object *mn;
+ struct i915_mmu_object *mo;
- mn = obj->userptr.mn;
- if (mn == NULL)
+ mo = obj->userptr.mmu_object;
+ if (mo == NULL)
return;
- i915_mmu_notifier_del(mn->mmu, mn);
- obj->userptr.mn = NULL;
+ i915_mmu_notifier_del(mo->mn, mo);
+ kfree(mo);
+
+ obj->userptr.mmu_object = NULL;
+}
+
+static struct i915_mmu_notifier *
+i915_mmu_notifier_find(struct i915_mm_struct *mm)
+{
+ if (mm->mn == NULL) {
+ down_write(&mm->mm->mmap_sem);
+ mutex_lock(&to_i915(mm->dev)->mm_lock);
+ if (mm->mn == NULL)
+ mm->mn = i915_mmu_notifier_create(mm->mm);
+ mutex_unlock(&to_i915(mm->dev)->mm_lock);
+ up_write(&mm->mm->mmap_sem);
+ }
+ return mm->mn;
}
static int
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
unsigned flags)
{
- struct i915_mmu_notifier *mmu;
- struct i915_mmu_object *mn;
+ struct i915_mmu_notifier *mn;
+ struct i915_mmu_object *mo;
int ret;
if (flags & I915_USERPTR_UNSYNCHRONIZED)
return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
- down_write(&obj->userptr.mm->mmap_sem);
- ret = i915_mutex_lock_interruptible(obj->base.dev);
- if (ret == 0) {
- mmu = i915_mmu_notifier_get(obj->base.dev, obj->userptr.mm);
- if (!IS_ERR(mmu))
- mmu->count++; /* preemptive add to act as a refcount */
- else
- ret = PTR_ERR(mmu);
- mutex_unlock(&obj->base.dev->struct_mutex);
- }
- up_write(&obj->userptr.mm->mmap_sem);
- if (ret)
+ if (WARN_ON(obj->userptr.mm == NULL))
+ return -EINVAL;
+
+ mn = i915_mmu_notifier_find(obj->userptr.mm);
+ if (IS_ERR(mn))
+ return PTR_ERR(mn);
+
+ mo = kzalloc(sizeof(*mo), GFP_KERNEL);
+ if (mo == NULL)
+ return -ENOMEM;
+
+ mo->mn = mn;
+ mo->it.start = obj->userptr.ptr;
+ mo->it.last = mo->it.start + obj->base.size - 1;
+ mo->obj = obj;
+
+ ret = i915_mmu_notifier_add(obj->base.dev, mn, mo);
+ if (ret) {
+ kfree(mo);
return ret;
-
- mn = kzalloc(sizeof(*mn), GFP_KERNEL);
- if (mn == NULL) {
- ret = -ENOMEM;
- goto destroy_mmu;
}
- mn->mmu = mmu;
- mn->it.start = obj->userptr.ptr;
- mn->it.last = mn->it.start + obj->base.size - 1;
- mn->obj = obj;
-
- ret = i915_mmu_notifier_add(mmu, mn);
- if (ret)
- goto free_mn;
-
- obj->userptr.mn = mn;
+ obj->userptr.mmu_object = mo;
return 0;
+}
-free_mn:
+static void
+i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
+ struct mm_struct *mm)
+{
+ if (mn == NULL)
+ return;
+
+ mmu_notifier_unregister(&mn->mn, mm);
kfree(mn);
-destroy_mmu:
- mutex_lock(&obj->base.dev->struct_mutex);
- if (--mmu->count == 0)
- __i915_mmu_notifier_destroy(mmu);
- mutex_unlock(&obj->base.dev->struct_mutex);
- return ret;
}
#else
@@ -413,15 +371,114 @@
return 0;
}
+
+static void
+i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
+ struct mm_struct *mm)
+{
+}
+
#endif
+static struct i915_mm_struct *
+__i915_mm_struct_find(struct drm_i915_private *dev_priv, struct mm_struct *real)
+{
+ struct i915_mm_struct *mm;
+
+ /* Protected by dev_priv->mm_lock */
+ hash_for_each_possible(dev_priv->mm_structs, mm, node, (unsigned long)real)
+ if (mm->mm == real)
+ return mm;
+
+ return NULL;
+}
+
+static int
+i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
+{
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
+ struct i915_mm_struct *mm;
+ int ret = 0;
+
+ /* During release of the GEM object we hold the struct_mutex. This
+ * precludes us from calling mmput() at that time as that may be
+ * the last reference and so call exit_mmap(). exit_mmap() will
+ * attempt to reap the vma, and if we were holding a GTT mmap
+ * would then call drm_gem_vm_close() and attempt to reacquire
+ * the struct mutex. So in order to avoid that recursion, we have
+ * to defer releasing the mm reference until after we drop the
+ * struct_mutex, i.e. we need to schedule a worker to do the clean
+ * up.
+ */
+ mutex_lock(&dev_priv->mm_lock);
+ mm = __i915_mm_struct_find(dev_priv, current->mm);
+ if (mm == NULL) {
+ mm = kmalloc(sizeof(*mm), GFP_KERNEL);
+ if (mm == NULL) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ kref_init(&mm->kref);
+ mm->dev = obj->base.dev;
+
+ mm->mm = current->mm;
+ atomic_inc(¤t->mm->mm_count);
+
+ mm->mn = NULL;
+
+ /* Protected by dev_priv->mm_lock */
+ hash_add(dev_priv->mm_structs,
+ &mm->node, (unsigned long)mm->mm);
+ } else
+ kref_get(&mm->kref);
+
+ obj->userptr.mm = mm;
+out:
+ mutex_unlock(&dev_priv->mm_lock);
+ return ret;
+}
+
+static void
+__i915_mm_struct_free__worker(struct work_struct *work)
+{
+ struct i915_mm_struct *mm = container_of(work, typeof(*mm), work);
+ i915_mmu_notifier_free(mm->mn, mm->mm);
+ mmdrop(mm->mm);
+ kfree(mm);
+}
+
+static void
+__i915_mm_struct_free(struct kref *kref)
+{
+ struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
+
+ /* Protected by dev_priv->mm_lock */
+ hash_del(&mm->node);
+ mutex_unlock(&to_i915(mm->dev)->mm_lock);
+
+ INIT_WORK(&mm->work, __i915_mm_struct_free__worker);
+ schedule_work(&mm->work);
+}
+
+static void
+i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
+{
+ if (obj->userptr.mm == NULL)
+ return;
+
+ kref_put_mutex(&obj->userptr.mm->kref,
+ __i915_mm_struct_free,
+ &to_i915(obj->base.dev)->mm_lock);
+ obj->userptr.mm = NULL;
+}
+
struct get_pages_work {
struct work_struct work;
struct drm_i915_gem_object *obj;
struct task_struct *task;
};
-
#if IS_ENABLED(CONFIG_SWIOTLB)
#define swiotlb_active() swiotlb_nr_tbl()
#else
@@ -479,7 +536,7 @@
if (pvec == NULL)
pvec = drm_malloc_ab(num_pages, sizeof(struct page *));
if (pvec != NULL) {
- struct mm_struct *mm = obj->userptr.mm;
+ struct mm_struct *mm = obj->userptr.mm->mm;
down_read(&mm->mmap_sem);
while (pinned < num_pages) {
@@ -545,7 +602,7 @@
pvec = NULL;
pinned = 0;
- if (obj->userptr.mm == current->mm) {
+ if (obj->userptr.mm->mm == current->mm) {
pvec = kmalloc(num_pages*sizeof(struct page *),
GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
if (pvec == NULL) {
@@ -651,17 +708,13 @@
i915_gem_userptr_release(struct drm_i915_gem_object *obj)
{
i915_gem_userptr_release__mmu_notifier(obj);
-
- if (obj->userptr.mm) {
- mmput(obj->userptr.mm);
- obj->userptr.mm = NULL;
- }
+ i915_gem_userptr_release__mm_struct(obj);
}
static int
i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
{
- if (obj->userptr.mn)
+ if (obj->userptr.mmu_object)
return 0;
return i915_gem_userptr_init__mmu_notifier(obj, 0);
@@ -736,7 +789,6 @@
return -ENODEV;
}
- /* Allocate the new object */
obj = i915_gem_object_alloc(dev);
if (obj == NULL)
return -ENOMEM;
@@ -754,8 +806,8 @@
* at binding. This means that we need to hook into the mmu_notifier
* in order to detect if the mmu is destroyed.
*/
- ret = -ENOMEM;
- if ((obj->userptr.mm = get_task_mm(current)))
+ ret = i915_gem_userptr_init__mm_struct(obj);
+ if (ret == 0)
ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags);
if (ret == 0)
ret = drm_gem_handle_create(file, &obj->base, &handle);
@@ -772,9 +824,8 @@
int
i915_gem_init_userptr(struct drm_device *dev)
{
-#if defined(CONFIG_MMU_NOTIFIER)
struct drm_i915_private *dev_priv = to_i915(dev);
- hash_init(dev_priv->mmu_notifiers);
-#endif
+ mutex_init(&dev_priv->mm_lock);
+ hash_init(dev_priv->mm_structs);
return 0;
}
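The userptr rework leans on kref_put_mutex(): the mutex is taken only when the refcount is about to hit zero, the release callback runs with the mutex held, and unlocking is left to that callback, which is why __i915_mm_struct_free() above ends in mutex_unlock(). A sketch of the contract with hypothetical names:

	struct obj {
		struct kref kref;
		struct hlist_node node;
	};
	static DEFINE_MUTEX(table_lock);

	static void obj_release(struct kref *kref)
	{
		struct obj *o = container_of(kref, struct obj, kref);

		hash_del(&o->node);		/* still under table_lock here */
		mutex_unlock(&table_lock);	/* release() must drop the lock itself */
		kfree(o);
	}

	/* table_lock is acquired only if this drops the last reference */
	kref_put_mutex(&o->kref, obj_release, &table_lock);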
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index e4d7607..f29b44c 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -334,16 +334,20 @@
#define GFX_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1)
#define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2)
-#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4)
+
+#define COLOR_BLT_CMD (2<<29 | 0x40<<22 | (5-2))
+#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4)
#define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6)
#define XY_MONO_SRC_COPY_IMM_BLT ((2<<29)|(0x71<<22)|5)
-#define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21)
-#define XY_SRC_COPY_BLT_WRITE_RGB (1<<20)
+#define BLT_WRITE_A (2<<20)
+#define BLT_WRITE_RGB (1<<20)
+#define BLT_WRITE_RGBA (BLT_WRITE_RGB | BLT_WRITE_A)
#define BLT_DEPTH_8 (0<<24)
#define BLT_DEPTH_16_565 (1<<24)
#define BLT_DEPTH_16_1555 (2<<24)
#define BLT_DEPTH_32 (3<<24)
-#define BLT_ROP_GXCOPY (0xcc<<16)
+#define BLT_ROP_SRC_COPY (0xcc<<16)
+#define BLT_ROP_COLOR_COPY (0xf0<<16)
#define XY_SRC_COPY_BLT_SRC_TILED (1<<15) /* 965+ only */
#define XY_SRC_COPY_BLT_DST_TILED (1<<11) /* 965+ only */
#define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 81d7681f..fdff1d4 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1631,6 +1631,10 @@
pipe_config->adjusted_mode.flags |= flags;
+ if (!HAS_PCH_SPLIT(dev) && !IS_VALLEYVIEW(dev) &&
+ tmp & DP_COLOR_RANGE_16_235)
+ pipe_config->limited_color_range = true;
+
pipe_config->has_dp_encoder = true;
intel_dp_get_m_n(crtc, pipe_config);
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index f9151f6..ca34de7 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -712,7 +712,8 @@
struct intel_crtc_config *pipe_config)
{
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
- struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
+ struct drm_device *dev = encoder->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
u32 tmp, flags = 0;
int dotclock;
@@ -734,6 +735,10 @@
if (tmp & HDMI_MODE_SELECT_HDMI)
pipe_config->has_audio = true;
+ if (!HAS_PCH_SPLIT(dev) &&
+ tmp & HDMI_COLOR_RANGE_16_235)
+ pipe_config->limited_color_range = true;
+
pipe_config->adjusted_mode.flags |= flags;
if ((tmp & SDVO_COLOR_FORMAT_MASK) == HDMI_COLOR_FORMAT_12bpc)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 16371a4..47a126a 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1363,54 +1363,66 @@
/* Just userspace ABI convention to limit the wa batch bo to a reasonable size */
#define I830_BATCH_LIMIT (256*1024)
+#define I830_TLB_ENTRIES (2)
+#define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
static int
i830_dispatch_execbuffer(struct intel_engine_cs *ring,
u64 offset, u32 len,
unsigned flags)
{
+ u32 cs_offset = ring->scratch.gtt_offset;
int ret;
- if (flags & I915_DISPATCH_PINNED) {
- ret = intel_ring_begin(ring, 4);
- if (ret)
- return ret;
+ ret = intel_ring_begin(ring, 6);
+ if (ret)
+ return ret;
- intel_ring_emit(ring, MI_BATCH_BUFFER);
- intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
- intel_ring_emit(ring, offset + len - 8);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_advance(ring);
- } else {
- u32 cs_offset = ring->scratch.gtt_offset;
+ /* Evict the invalid PTE TLBs */
+ intel_ring_emit(ring, COLOR_BLT_CMD | BLT_WRITE_RGBA);
+ intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096);
+ intel_ring_emit(ring, I830_TLB_ENTRIES << 16 | 4); /* load each page */
+ intel_ring_emit(ring, cs_offset);
+ intel_ring_emit(ring, 0xdeadbeef);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);
+ if ((flags & I915_DISPATCH_PINNED) == 0) {
if (len > I830_BATCH_LIMIT)
return -ENOSPC;
- ret = intel_ring_begin(ring, 9+3);
+ ret = intel_ring_begin(ring, 6 + 2);
if (ret)
return ret;
- /* Blit the batch (which has now all relocs applied) to the stable batch
- * scratch bo area (so that the CS never stumbles over its tlb
- * invalidation bug) ... */
- intel_ring_emit(ring, XY_SRC_COPY_BLT_CMD |
- XY_SRC_COPY_BLT_WRITE_ALPHA |
- XY_SRC_COPY_BLT_WRITE_RGB);
- intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_GXCOPY | 4096);
- intel_ring_emit(ring, 0);
- intel_ring_emit(ring, (DIV_ROUND_UP(len, 4096) << 16) | 1024);
+
+ /* Blit the batch (which has now all relocs applied) to the
+ * stable batch scratch bo area (so that the CS never
+ * stumbles over its tlb invalidation bug) ...
+ */
+ intel_ring_emit(ring, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA);
+ intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096);
+ intel_ring_emit(ring, DIV_ROUND_UP(len, 4096) << 16 | 4096);
intel_ring_emit(ring, cs_offset);
- intel_ring_emit(ring, 0);
intel_ring_emit(ring, 4096);
intel_ring_emit(ring, offset);
+
intel_ring_emit(ring, MI_FLUSH);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);
/* ... and execute it. */
- intel_ring_emit(ring, MI_BATCH_BUFFER);
- intel_ring_emit(ring, cs_offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
- intel_ring_emit(ring, cs_offset + len - 8);
- intel_ring_advance(ring);
+ offset = cs_offset;
}
+ ret = intel_ring_begin(ring, 4);
+ if (ret)
+ return ret;
+
+ intel_ring_emit(ring, MI_BATCH_BUFFER);
+ intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
+ intel_ring_emit(ring, offset + len - 8);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);
+
return 0;
}
@@ -2200,7 +2212,7 @@
/* Workaround batchbuffer to combat CS tlb bug. */
if (HAS_BROKEN_CS_TLB(dev)) {
- obj = i915_gem_alloc_object(dev, I830_BATCH_LIMIT);
+ obj = i915_gem_alloc_object(dev, I830_WA_SIZE);
if (obj == NULL) {
DRM_ERROR("Failed to allocate batch bo\n");
return -ENOMEM;
diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c
index c69d3ce..c14341c 100644
--- a/drivers/gpu/drm/i915/intel_tv.c
+++ b/drivers/gpu/drm/i915/intel_tv.c
@@ -854,6 +854,10 @@
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ /* Prevents vblank waits from timing out in intel_tv_detect_type() */
+ intel_wait_for_vblank(encoder->base.dev,
+ to_intel_crtc(encoder->base.crtc)->pipe);
+
I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE);
}
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
index a125a7e..c6c9b02e 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
+++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
@@ -258,28 +258,30 @@
priv->hdmi_pdev = pdev;
}
+#ifdef CONFIG_OF
+static int get_gpio(struct device *dev, struct device_node *of_node, const char *name)
+{
+ int gpio = of_get_named_gpio(of_node, name, 0);
+ if (gpio < 0) {
+ char name2[32];
+ snprintf(name2, sizeof(name2), "%s-gpio", name);
+ gpio = of_get_named_gpio(of_node, name2, 0);
+ if (gpio < 0) {
+ dev_err(dev, "failed to get gpio: %s (%d)\n",
+ name, gpio);
+ gpio = -1;
+ }
+ }
+ return gpio;
+}
+#endif
+
static int hdmi_bind(struct device *dev, struct device *master, void *data)
{
static struct hdmi_platform_config config = {};
#ifdef CONFIG_OF
struct device_node *of_node = dev->of_node;
- int get_gpio(const char *name)
- {
- int gpio = of_get_named_gpio(of_node, name, 0);
- if (gpio < 0) {
- char name2[32];
- snprintf(name2, sizeof(name2), "%s-gpio", name);
- gpio = of_get_named_gpio(of_node, name2, 0);
- if (gpio < 0) {
- dev_err(dev, "failed to get gpio: %s (%d)\n",
- name, gpio);
- gpio = -1;
- }
- }
- return gpio;
- }
-
if (of_device_is_compatible(of_node, "qcom,hdmi-tx-8074")) {
static const char *hpd_reg_names[] = {"hpd-gdsc", "hpd-5v"};
static const char *pwr_reg_names[] = {"core-vdda", "core-vcc"};
@@ -312,12 +314,12 @@
}
config.mmio_name = "core_physical";
- config.ddc_clk_gpio = get_gpio("qcom,hdmi-tx-ddc-clk");
- config.ddc_data_gpio = get_gpio("qcom,hdmi-tx-ddc-data");
- config.hpd_gpio = get_gpio("qcom,hdmi-tx-hpd");
- config.mux_en_gpio = get_gpio("qcom,hdmi-tx-mux-en");
- config.mux_sel_gpio = get_gpio("qcom,hdmi-tx-mux-sel");
- config.mux_lpm_gpio = get_gpio("qcom,hdmi-tx-mux-lpm");
+ config.ddc_clk_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-clk");
+ config.ddc_data_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-data");
+ config.hpd_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-hpd");
+ config.mux_en_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-en");
+ config.mux_sel_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-sel");
+ config.mux_lpm_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-lpm");
#else
static const char *hpd_clk_names[] = {
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
index 902d768..f408b69 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
+++ b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
@@ -15,19 +15,25 @@
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#ifdef CONFIG_COMMON_CLK
#include <linux/clk.h>
#include <linux/clk-provider.h>
+#endif
#include "hdmi.h"
struct hdmi_phy_8960 {
struct hdmi_phy base;
struct hdmi *hdmi;
+#ifdef CONFIG_COMMON_CLK
struct clk_hw pll_hw;
struct clk *pll;
unsigned long pixclk;
+#endif
};
#define to_hdmi_phy_8960(x) container_of(x, struct hdmi_phy_8960, base)
+
+#ifdef CONFIG_COMMON_CLK
#define clk_to_phy(x) container_of(x, struct hdmi_phy_8960, pll_hw)
/*
@@ -374,7 +380,7 @@
.parent_names = hdmi_pll_parents,
.num_parents = ARRAY_SIZE(hdmi_pll_parents),
};
-
+#endif
/*
* HDMI Phy:
@@ -480,12 +486,15 @@
{
struct hdmi_phy_8960 *phy_8960;
struct hdmi_phy *phy = NULL;
- int ret, i;
+ int ret;
+#ifdef CONFIG_COMMON_CLK
+ int i;
/* sanity check: */
for (i = 0; i < (ARRAY_SIZE(freqtbl) - 1); i++)
if (WARN_ON(freqtbl[i].rate < freqtbl[i+1].rate))
return ERR_PTR(-EINVAL);
+#endif
phy_8960 = kzalloc(sizeof(*phy_8960), GFP_KERNEL);
if (!phy_8960) {
@@ -499,6 +508,7 @@
phy_8960->hdmi = hdmi;
+#ifdef CONFIG_COMMON_CLK
phy_8960->pll_hw.init = &pll_init;
phy_8960->pll = devm_clk_register(hdmi->dev->dev, &phy_8960->pll_hw);
if (IS_ERR(phy_8960->pll)) {
@@ -506,6 +516,7 @@
phy_8960->pll = NULL;
goto fail;
}
+#endif
return phy;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 26ee80d..fcf9568 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -52,7 +52,7 @@
#define reglog 0
#endif
-static char *vram;
+static char *vram = "16m";
MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
module_param(vram, charp, 0);
diff --git a/drivers/gpu/drm/nouveau/core/subdev/bar/nvc0.c b/drivers/gpu/drm/nouveau/core/subdev/bar/nvc0.c
index 0a44459..05a278b 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/bar/nvc0.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/bar/nvc0.c
@@ -200,7 +200,6 @@
nv_mask(priv, 0x000200, 0x00000100, 0x00000000);
nv_mask(priv, 0x000200, 0x00000100, 0x00000100);
- nv_mask(priv, 0x100c80, 0x00000001, 0x00000000);
nv_wr32(priv, 0x001704, 0x80000000 | priv->bar[1].mem->addr >> 12);
if (priv->bar[0].mem)
diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/nvc0.c b/drivers/gpu/drm/nouveau/core/subdev/fb/nvc0.c
index b19a2b3..32f28dc 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/fb/nvc0.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/fb/nvc0.c
@@ -60,6 +60,7 @@
if (priv->r100c10_page)
nv_wr32(priv, 0x100c10, priv->r100c10 >> 8);
+ nv_mask(priv, 0x100c80, 0x00000001, 0x00000000); /* 128KiB lpg */
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/core/subdev/ltc/gf100.c b/drivers/gpu/drm/nouveau/core/subdev/ltc/gf100.c
index b54b582..d5d6528 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/ltc/gf100.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/ltc/gf100.c
@@ -98,6 +98,7 @@
gf100_ltc_init(struct nouveau_object *object)
{
struct nvkm_ltc_priv *priv = (void *)object;
+ u32 lpg128 = !(nv_rd32(priv, 0x100c80) & 0x00000001);
int ret;
ret = nvkm_ltc_init(priv);
@@ -107,6 +108,7 @@
nv_mask(priv, 0x17e820, 0x00100000, 0x00000000); /* INTR_EN &= ~0x10 */
nv_wr32(priv, 0x17e8d8, priv->ltc_nr);
nv_wr32(priv, 0x17e8d4, priv->tag_base);
+ nv_mask(priv, 0x17e8c0, 0x00000002, lpg128 ? 0x00000002 : 0x00000000);
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/core/subdev/ltc/gk104.c b/drivers/gpu/drm/nouveau/core/subdev/ltc/gk104.c
index ea71656..b39b5d0 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/ltc/gk104.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/ltc/gk104.c
@@ -28,6 +28,7 @@
gk104_ltc_init(struct nouveau_object *object)
{
struct nvkm_ltc_priv *priv = (void *)object;
+ u32 lpg128 = !(nv_rd32(priv, 0x100c80) & 0x00000001);
int ret;
ret = nvkm_ltc_init(priv);
@@ -37,6 +38,7 @@
nv_wr32(priv, 0x17e8d8, priv->ltc_nr);
nv_wr32(priv, 0x17e000, priv->ltc_nr);
nv_wr32(priv, 0x17e8d4, priv->tag_base);
+ nv_mask(priv, 0x17e8c0, 0x00000002, lpg128 ? 0x00000002 : 0x00000000);
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/core/subdev/ltc/gm107.c b/drivers/gpu/drm/nouveau/core/subdev/ltc/gm107.c
index 4761b2e..a4de642 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/ltc/gm107.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/ltc/gm107.c
@@ -98,6 +98,7 @@
gm107_ltc_init(struct nouveau_object *object)
{
struct nvkm_ltc_priv *priv = (void *)object;
+ u32 lpg128 = !(nv_rd32(priv, 0x100c80) & 0x00000001);
int ret;
ret = nvkm_ltc_init(priv);
@@ -106,6 +107,7 @@
nv_wr32(priv, 0x17e27c, priv->ltc_nr);
nv_wr32(priv, 0x17e278, priv->tag_base);
+ nv_mask(priv, 0x17e264, 0x00000002, lpg128 ? 0x00000002 : 0x00000000);
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c
index 2792069..6224246 100644
--- a/drivers/gpu/drm/nouveau/nouveau_acpi.c
+++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c
@@ -46,7 +46,6 @@
bool dsm_detected;
bool optimus_detected;
acpi_handle dhandle;
- acpi_handle other_handle;
acpi_handle rom_handle;
} nouveau_dsm_priv;
@@ -222,10 +221,9 @@
if (!dhandle)
return false;
- if (!acpi_has_method(dhandle, "_DSM")) {
- nouveau_dsm_priv.other_handle = dhandle;
+ if (!acpi_has_method(dhandle, "_DSM"))
return false;
- }
+
if (acpi_check_dsm(dhandle, nouveau_dsm_muid, 0x00000102,
1 << NOUVEAU_DSM_POWER))
retval |= NOUVEAU_DSM_HAS_MUX;
@@ -301,16 +299,6 @@
printk(KERN_INFO "VGA switcheroo: detected DSM switching method %s handle\n",
acpi_method_name);
nouveau_dsm_priv.dsm_detected = true;
- /*
- * On some systems hotplug events are generated for the device
- * being switched off when _DSM is executed. They cause ACPI
- * hotplug to trigger and attempt to remove the device from
- * the system, which causes it to break down. Prevent that from
- * happening by setting the no_hotplug flag for the involved
- * ACPI device objects.
- */
- acpi_bus_no_hotplug(nouveau_dsm_priv.dhandle);
- acpi_bus_no_hotplug(nouveau_dsm_priv.other_handle);
ret = true;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 250a5e8..9c3af96 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -627,6 +627,7 @@
pci_save_state(pdev);
pci_disable_device(pdev);
+ pci_ignore_hotplug(pdev);
pci_set_power_state(pdev, PCI_D3hot);
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_vga.c b/drivers/gpu/drm/nouveau/nouveau_vga.c
index 18d55d4..c7592ec 100644
--- a/drivers/gpu/drm/nouveau/nouveau_vga.c
+++ b/drivers/gpu/drm/nouveau/nouveau_vga.c
@@ -108,7 +108,16 @@
nouveau_vga_fini(struct nouveau_drm *drm)
{
struct drm_device *dev = drm->dev;
+ bool runtime = false;
+
+ if (nouveau_runtime_pm == 1)
+ runtime = true;
+ if ((nouveau_runtime_pm == -1) && (nouveau_is_optimus() || nouveau_is_v1_dsm()))
+ runtime = true;
+
vga_switcheroo_unregister_client(dev->pdev);
+ if (runtime && nouveau_is_v1_dsm() && !nouveau_is_optimus())
+ vga_switcheroo_fini_domain_pm_ops(drm->dev->dev);
vga_client_register(dev->pdev, NULL, NULL, NULL);
}
diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
index b1e11f8..ac14b67 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -405,16 +405,13 @@
u8 msg[DP_DPCD_SIZE];
int ret;
- char dpcd_hex_dump[DP_DPCD_SIZE * 3];
-
ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
DP_DPCD_SIZE);
if (ret > 0) {
memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
- hex_dump_to_buffer(dig_connector->dpcd, sizeof(dig_connector->dpcd),
- 32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false);
- DRM_DEBUG_KMS("DPCD: %s\n", dpcd_hex_dump);
+ DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
+ dig_connector->dpcd);
radeon_dp_probe_oui(radeon_connector);
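The DPCD hunk trades a manual hex_dump_to_buffer() round trip for the %*ph printk extension, which prints a small buffer (up to 64 bytes) as hex, taking the length from a preceding int argument:

	u8 dpcd[15];

	DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dpcd), dpcd);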
diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index 192278b..c4ffa54 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -489,13 +489,6 @@
{
int r;
- /* Reset dma */
- WREG32(SRBM_SOFT_RESET, SOFT_RESET_SDMA | SOFT_RESET_SDMA1);
- RREG32(SRBM_SOFT_RESET);
- udelay(50);
- WREG32(SRBM_SOFT_RESET, 0);
- RREG32(SRBM_SOFT_RESET);
-
r = cik_sdma_load_microcode(rdev);
if (r)
return r;
diff --git a/drivers/gpu/drm/radeon/kv_dpm.c b/drivers/gpu/drm/radeon/kv_dpm.c
index 8b58e11..67cb472 100644
--- a/drivers/gpu/drm/radeon/kv_dpm.c
+++ b/drivers/gpu/drm/radeon/kv_dpm.c
@@ -33,6 +33,8 @@
#define KV_MINIMUM_ENGINE_CLOCK 800
#define SMC_RAM_END 0x40000
+static int kv_enable_nb_dpm(struct radeon_device *rdev,
+ bool enable);
static void kv_init_graphics_levels(struct radeon_device *rdev);
static int kv_calculate_ds_divider(struct radeon_device *rdev);
static int kv_calculate_nbps_level_settings(struct radeon_device *rdev);
@@ -1295,6 +1297,9 @@
{
kv_smc_bapm_enable(rdev, false);
+ if (rdev->family == CHIP_MULLINS)
+ kv_enable_nb_dpm(rdev, false);
+
/* powerup blocks */
kv_dpm_powergate_acp(rdev, false);
kv_dpm_powergate_samu(rdev, false);
@@ -1769,15 +1774,24 @@
return ret;
}
-static int kv_enable_nb_dpm(struct radeon_device *rdev)
+static int kv_enable_nb_dpm(struct radeon_device *rdev,
+ bool enable)
{
struct kv_power_info *pi = kv_get_pi(rdev);
int ret = 0;
- if (pi->enable_nb_dpm && !pi->nb_dpm_enabled) {
- ret = kv_notify_message_to_smu(rdev, PPSMC_MSG_NBDPM_Enable);
- if (ret == 0)
- pi->nb_dpm_enabled = true;
+ if (enable) {
+ if (pi->enable_nb_dpm && !pi->nb_dpm_enabled) {
+ ret = kv_notify_message_to_smu(rdev, PPSMC_MSG_NBDPM_Enable);
+ if (ret == 0)
+ pi->nb_dpm_enabled = true;
+ }
+ } else {
+ if (pi->enable_nb_dpm && pi->nb_dpm_enabled) {
+ ret = kv_notify_message_to_smu(rdev, PPSMC_MSG_NBDPM_Disable);
+ if (ret == 0)
+ pi->nb_dpm_enabled = false;
+ }
}
return ret;
@@ -1864,7 +1878,7 @@
}
kv_update_sclk_t(rdev);
if (rdev->family == CHIP_MULLINS)
- kv_enable_nb_dpm(rdev);
+ kv_enable_nb_dpm(rdev, true);
}
} else {
if (pi->enable_dpm) {
@@ -1889,7 +1903,7 @@
}
kv_update_acp_boot_level(rdev);
kv_update_sclk_t(rdev);
- kv_enable_nb_dpm(rdev);
+ kv_enable_nb_dpm(rdev, true);
}
}
diff --git a/drivers/gpu/drm/radeon/ni_dma.c b/drivers/gpu/drm/radeon/ni_dma.c
index 8a3e622..f26f0a9 100644
--- a/drivers/gpu/drm/radeon/ni_dma.c
+++ b/drivers/gpu/drm/radeon/ni_dma.c
@@ -191,12 +191,6 @@
u32 reg_offset, wb_offset;
int i, r;
- /* Reset dma */
- WREG32(SRBM_SOFT_RESET, SOFT_RESET_DMA | SOFT_RESET_DMA1);
- RREG32(SRBM_SOFT_RESET);
- udelay(50);
- WREG32(SRBM_SOFT_RESET, 0);
-
for (i = 0; i < 2; i++) {
if (i == 0) {
ring = &rdev->ring[R600_RING_TYPE_DMA_INDEX];
diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index 4c5ec44..b0098e7 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -821,6 +821,20 @@
return RREG32(RADEON_CRTC2_CRNT_FRAME);
}
+/**
+ * r100_ring_hdp_flush - flush Host Data Path via the ring buffer
+ * @rdev: radeon device structure
+ * @ring: ring buffer struct for emitting packets
+ */
+static void r100_ring_hdp_flush(struct radeon_device *rdev, struct radeon_ring *ring)
+{
+ radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0));
+ radeon_ring_write(ring, rdev->config.r100.hdp_cntl |
+ RADEON_HDP_READ_BUFFER_INVALIDATE);
+ radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0));
+ radeon_ring_write(ring, rdev->config.r100.hdp_cntl);
+}
+
/* Whoever calls radeon_fence_emit should call ring_lock and ask
 * for enough space (today the callers are ib schedule and buffer move) */
void r100_fence_ring_emit(struct radeon_device *rdev,
@@ -1056,20 +1070,6 @@
(void)RREG32(RADEON_CP_RB_WPTR);
}
-/**
- * r100_ring_hdp_flush - flush Host Data Path via the ring buffer
- * rdev: radeon device structure
- * ring: ring buffer struct for emitting packets
- */
-void r100_ring_hdp_flush(struct radeon_device *rdev, struct radeon_ring *ring)
-{
- radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0));
- radeon_ring_write(ring, rdev->config.r100.hdp_cntl |
- RADEON_HDP_READ_BUFFER_INVALIDATE);
- radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0));
- radeon_ring_write(ring, rdev->config.r100.hdp_cntl);
-}
-
static void r100_cp_load_microcode(struct radeon_device *rdev)
{
const __be32 *fw_data;
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index e616eb5..3cfb500 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -2769,8 +2769,8 @@
radeon_ring_write(ring, lower_32_bits(addr));
radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
- /* PFP_SYNC_ME packet only exists on 7xx+ */
- if (emit_wait && (rdev->family >= CHIP_RV770)) {
+ /* PFP_SYNC_ME packet only exists on 7xx+, only enable it on eg+ */
+ if (emit_wait && (rdev->family >= CHIP_CEDAR)) {
/* Prevent the PFP from running ahead of the semaphore wait */
radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
radeon_ring_write(ring, 0x0);
diff --git a/drivers/gpu/drm/radeon/r600_dma.c b/drivers/gpu/drm/radeon/r600_dma.c
index 51fd985..a908daa 100644
--- a/drivers/gpu/drm/radeon/r600_dma.c
+++ b/drivers/gpu/drm/radeon/r600_dma.c
@@ -124,15 +124,6 @@
u32 rb_bufsz;
int r;
- /* Reset dma */
- if (rdev->family >= CHIP_RV770)
- WREG32(SRBM_SOFT_RESET, RV770_SOFT_RESET_DMA);
- else
- WREG32(SRBM_SOFT_RESET, SOFT_RESET_DMA);
- RREG32(SRBM_SOFT_RESET);
- udelay(50);
- WREG32(SRBM_SOFT_RESET, 0);
-
WREG32(DMA_SEM_INCOMPLETE_TIMER_CNTL, 0);
WREG32(DMA_SEM_WAIT_FAIL_TIMER_CNTL, 0);
diff --git a/drivers/gpu/drm/radeon/r600d.h b/drivers/gpu/drm/radeon/r600d.h
index 0c4a7d8..31e1052 100644
--- a/drivers/gpu/drm/radeon/r600d.h
+++ b/drivers/gpu/drm/radeon/r600d.h
@@ -44,13 +44,6 @@
#define R6XX_MAX_PIPES 8
#define R6XX_MAX_PIPES_MASK 0xff
-/* PTE flags */
-#define PTE_VALID (1 << 0)
-#define PTE_SYSTEM (1 << 1)
-#define PTE_SNOOPED (1 << 2)
-#define PTE_READABLE (1 << 5)
-#define PTE_WRITEABLE (1 << 6)
-
/* tiling bits */
#define ARRAY_LINEAR_GENERAL 0x00000000
#define ARRAY_LINEAR_ALIGNED 0x00000001
diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
index eeeeabe..2dd5847 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.c
+++ b/drivers/gpu/drm/radeon/radeon_asic.c
@@ -185,7 +185,6 @@
.get_rptr = &r100_gfx_get_rptr,
.get_wptr = &r100_gfx_get_wptr,
.set_wptr = &r100_gfx_set_wptr,
- .hdp_flush = &r100_ring_hdp_flush,
};
static struct radeon_asic r100_asic = {
@@ -332,7 +331,6 @@
.get_rptr = &r100_gfx_get_rptr,
.get_wptr = &r100_gfx_get_wptr,
.set_wptr = &r100_gfx_set_wptr,
- .hdp_flush = &r100_ring_hdp_flush,
};
static struct radeon_asic r300_asic = {
diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
index 275a5dc..7756bc1 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -148,8 +148,7 @@
struct radeon_ring *ring);
void r100_gfx_set_wptr(struct radeon_device *rdev,
struct radeon_ring *ring);
-void r100_ring_hdp_flush(struct radeon_device *rdev,
- struct radeon_ring *ring);
+
/*
* r200,rv250,rs300,rv280
*/
diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
index 92b2d8d..e74c7e3 100644
--- a/drivers/gpu/drm/radeon/radeon_atombios.c
+++ b/drivers/gpu/drm/radeon/radeon_atombios.c
@@ -447,6 +447,13 @@
}
}
+ /* Fujitsu D3003-S2 board lists DVI-I as DVI-D and VGA */
+ if ((dev->pdev->device == 0x9805) &&
+ (dev->pdev->subsystem_vendor == 0x1734) &&
+ (dev->pdev->subsystem_device == 0x11bd)) {
+ if (*connector_type == DRM_MODE_CONNECTOR_VGA)
+ return false;
+ }
return true;
}
@@ -2281,19 +2288,31 @@
(controller->ucFanParameters &
ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
rdev->pm.int_thermal_type = THERMAL_TYPE_KV;
- } else if ((controller->ucType ==
- ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) ||
- (controller->ucType ==
- ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) ||
- (controller->ucType ==
- ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL)) {
- DRM_INFO("Special thermal controller config\n");
+ } else if (controller->ucType ==
+ ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
+ DRM_INFO("External GPIO thermal controller %s fan control\n",
+ (controller->ucFanParameters &
+ ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
+ } else if (controller->ucType ==
+ ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
+ DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
+ (controller->ucFanParameters &
+ ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
+ } else if (controller->ucType ==
+ ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
+ DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
+ (controller->ucFanParameters &
+ ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
pp_lib_thermal_controller_names[controller->ucType],
controller->ucI2cAddress >> 1,
(controller->ucFanParameters &
ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
i2c_bus = radeon_lookup_i2c_gpio(rdev, controller->ucI2cLine);
rdev->pm.i2c_bus = radeon_i2c_lookup(rdev, &i2c_bus);
if (rdev->pm.i2c_bus) {
diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
index a9fb0d0..8bc7d0b 100644
--- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c
+++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
@@ -33,7 +33,6 @@
bool atpx_detected;
/* handle for device - and atpx */
acpi_handle dhandle;
- acpi_handle other_handle;
struct radeon_atpx atpx;
} radeon_atpx_priv;
@@ -453,10 +452,9 @@
return false;
status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);
- if (ACPI_FAILURE(status)) {
- radeon_atpx_priv.other_handle = dhandle;
+ if (ACPI_FAILURE(status))
return false;
- }
+
radeon_atpx_priv.dhandle = dhandle;
radeon_atpx_priv.atpx.handle = atpx_handle;
return true;
@@ -540,16 +538,6 @@
printk(KERN_INFO "VGA switcheroo: detected switching method %s handle\n",
acpi_method_name);
radeon_atpx_priv.atpx_detected = true;
- /*
- * On some systems hotplug events are generated for the device
- * being switched off when ATPX is executed. They cause ACPI
- * hotplug to trigger and attempt to remove the device from
- * the system, which causes it to break down. Prevent that from
- * happening by setting the no_hotplug flag for the involved
- * ACPI device objects.
- */
- acpi_bus_no_hotplug(radeon_atpx_priv.dhandle);
- acpi_bus_no_hotplug(radeon_atpx_priv.other_handle);
return true;
}
return false;
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index 6a219bc..75223dd 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1393,7 +1393,7 @@
r = radeon_init(rdev);
if (r)
- return r;
+ goto failed;
r = radeon_ib_ring_tests(rdev);
if (r)
@@ -1413,7 +1413,7 @@
radeon_agp_disable(rdev);
r = radeon_init(rdev);
if (r)
- return r;
+ goto failed;
}
if ((radeon_testing & 1)) {
@@ -1435,6 +1435,11 @@
DRM_INFO("radeon: acceleration disabled, skipping benchmarks\n");
}
return 0;
+
+failed:
+ if (runtime)
+ vga_switcheroo_fini_domain_pm_ops(rdev->dev);
+ return r;
}
static void radeon_debugfs_remove_files(struct radeon_device *rdev);
@@ -1455,6 +1460,8 @@
radeon_bo_evict_vram(rdev);
radeon_fini(rdev);
vga_switcheroo_unregister_client(rdev->pdev);
+ if (rdev->flags & RADEON_IS_PX)
+ vga_switcheroo_fini_domain_pm_ops(rdev->dev);
vga_client_register(rdev->pdev, NULL, NULL, NULL);
if (rdev->rio_mem)
pci_iounmap(rdev->pdev, rdev->rio_mem);
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 8df8889..4126fd0 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -83,7 +83,7 @@
* CIK: 1D and linear tiling modes contain valid PIPE_CONFIG
* 2.39.0 - Add INFO query for number of active CUs
* 2.40.0 - Add RADEON_GEM_GTT_WC/UC, flush HDP cache before submitting
- * CS to GPU
+ * CS to GPU on >= r600
*/
#define KMS_DRIVER_MAJOR 2
#define KMS_DRIVER_MINOR 40
@@ -440,6 +440,7 @@
ret = radeon_suspend_kms(drm_dev, false, false);
pci_save_state(pdev);
pci_disable_device(pdev);
+ pci_ignore_hotplug(pdev);
pci_set_power_state(pdev, PCI_D3cold);
drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
diff --git a/drivers/gpu/drm/radeon/radeon_semaphore.c b/drivers/gpu/drm/radeon/radeon_semaphore.c
index 56d9fd6..abd6753 100644
--- a/drivers/gpu/drm/radeon/radeon_semaphore.c
+++ b/drivers/gpu/drm/radeon/radeon_semaphore.c
@@ -34,7 +34,7 @@
int radeon_semaphore_create(struct radeon_device *rdev,
struct radeon_semaphore **semaphore)
{
- uint32_t *cpu_addr;
+ uint64_t *cpu_addr;
int i, r;
*semaphore = kmalloc(sizeof(struct radeon_semaphore), GFP_KERNEL);
diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
index 6c1fc33..c5799f16 100644
--- a/drivers/gpu/drm/radeon/rs400.c
+++ b/drivers/gpu/drm/radeon/rs400.c
@@ -221,9 +221,9 @@
entry = (lower_32_bits(addr) & PAGE_MASK) |
((upper_32_bits(addr) & 0xff) << 4);
if (flags & RADEON_GART_PAGE_READ)
- addr |= RS400_PTE_READABLE;
+ entry |= RS400_PTE_READABLE;
if (flags & RADEON_GART_PAGE_WRITE)
- addr |= RS400_PTE_WRITEABLE;
+ entry |= RS400_PTE_WRITEABLE;
if (!(flags & RADEON_GART_PAGE_SNOOP))
entry |= RS400_PTE_UNSNOOPED;
entry = cpu_to_le32(entry);
diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
index ef93156..b22968c 100644
--- a/drivers/gpu/drm/sti/sti_hdmi.c
+++ b/drivers/gpu/drm/sti/sti_hdmi.c
@@ -298,7 +298,6 @@
hdmi_write(hdmi, val, HDMI_SW_DI_N_PKT_WORD2(HDMI_IFRAME_SLOT_AVI));
val = frame[0xC];
- val |= frame[0xD] << 8;
hdmi_write(hdmi, val, HDMI_SW_DI_N_PKT_WORD3(HDMI_IFRAME_SLOT_AVI));
/* Enable transmission slot for AVI infoframe
diff --git a/drivers/gpu/vga/vga_switcheroo.c b/drivers/gpu/vga/vga_switcheroo.c
index 6866448..37ac7b5 100644
--- a/drivers/gpu/vga/vga_switcheroo.c
+++ b/drivers/gpu/vga/vga_switcheroo.c
@@ -660,6 +660,12 @@
}
EXPORT_SYMBOL(vga_switcheroo_init_domain_pm_ops);
+void vga_switcheroo_fini_domain_pm_ops(struct device *dev)
+{
+ dev->pm_domain = NULL;
+}
+EXPORT_SYMBOL(vga_switcheroo_fini_domain_pm_ops);
+
static int vga_switcheroo_runtime_resume_hdmi_audio(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
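
vga_switcheroo_fini_domain_pm_ops() is deliberately trivial: it clears dev->pm_domain so the device stops routing PM callbacks through switcheroo. Drivers pair it with the existing init call; a sketch of the expected pairing (driver names illustrative, error handling elided):

    static struct dev_pm_domain drv_pm_domain;      /* illustrative */

    static int drv_probe(struct pci_dev *pdev)
    {
            vga_switcheroo_init_domain_pm_ops(&pdev->dev, &drv_pm_domain);
            return 0;
    }

    static void drv_remove(struct pci_dev *pdev)
    {
            vga_switcheroo_fini_domain_pm_ops(&pdev->dev);  /* new helper */
    }
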
diff --git a/drivers/gpu/vga/vgaarb.c b/drivers/gpu/vga/vgaarb.c
index d2077f0..7771162 100644
--- a/drivers/gpu/vga/vgaarb.c
+++ b/drivers/gpu/vga/vgaarb.c
@@ -41,6 +41,7 @@
#include <linux/poll.h>
#include <linux/miscdevice.h>
#include <linux/slab.h>
+#include <linux/screen_info.h>
#include <linux/uaccess.h>
@@ -112,10 +113,8 @@
return 1;
}
-#ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE
/* this is only used as a cookie - it should not be dereferenced */
static struct pci_dev *vga_default;
-#endif
static void vga_arb_device_card_gone(struct pci_dev *pdev);
@@ -131,7 +130,6 @@
}
/* Returns the default VGA device (vgacon's babe) */
-#ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE
struct pci_dev *vga_default_device(void)
{
return vga_default;
@@ -147,7 +145,6 @@
pci_dev_put(vga_default);
vga_default = pci_dev_get(pdev);
}
-#endif
static inline void vga_irq_set_state(struct vga_device *vgadev, bool state)
{
@@ -583,11 +580,12 @@
/* Deal with VGA default device. Use first enabled one
* by default if arch doesn't have its own hook
*/
-#ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE
if (vga_default == NULL &&
- ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK))
+ ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)) {
+ pr_info("vgaarb: setting as boot device: PCI:%s\n",
+ pci_name(pdev));
vga_set_default_device(pdev);
-#endif
+ }
vga_arbiter_check_bridge_sharing(vgadev);
@@ -621,10 +619,8 @@
goto bail;
}
-#ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE
if (vga_default == pdev)
vga_set_default_device(NULL);
-#endif
if (vgadev->decodes & (VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM))
vga_decode_count--;
@@ -1320,6 +1316,38 @@
pr_info("vgaarb: loaded\n");
list_for_each_entry(vgadev, &vga_list, list) {
+#if defined(CONFIG_X86) || defined(CONFIG_IA64)
+ /* Override I/O based detection done by vga_arbiter_add_pci_device()
+ * as it may take the wrong device (e.g. on Apple system under EFI).
+ *
+ * Select the device owning the boot framebuffer if there is one.
+ */
+ resource_size_t start, end;
+ int i;
+
+ /* Does firmware framebuffer belong to us? */
+ for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+ if (!(pci_resource_flags(vgadev->pdev, i) & IORESOURCE_MEM))
+ continue;
+
+ start = pci_resource_start(vgadev->pdev, i);
+ end = pci_resource_end(vgadev->pdev, i);
+
+ if (!start || !end)
+ continue;
+
+ if (screen_info.lfb_base < start ||
+ (screen_info.lfb_base + screen_info.lfb_size) >= end)
+ continue;
+ if (!vga_default_device())
+ pr_info("vgaarb: setting as boot device: PCI:%s\n",
+ pci_name(vgadev->pdev));
+ else if (vgadev->pdev != vga_default_device())
+ pr_info("vgaarb: overriding boot device: PCI:%s\n",
+ pci_name(vgadev->pdev));
+ vga_set_default_device(vgadev->pdev);
+ }
+#endif
if (vgadev->bridge_has_one_vga)
pr_info("vgaarb: bridge control possible %s\n", pci_name(vgadev->pdev));
else
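
The new vgaarb init block trusts the firmware framebuffer over legacy I/O ownership when picking the default VGA device: a device wins if screen_info's linear framebuffer lies entirely inside one of its memory BARs. Condensed, the containment test is (sketch; requires <linux/screen_info.h>, which the patch adds):

    static bool owns_firmware_fb(struct pci_dev *pdev, int bar)
    {
            resource_size_t start = pci_resource_start(pdev, bar);
            resource_size_t end = pci_resource_end(pdev, bar);

            if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM) ||
                !start || !end)
                    return false;

            return screen_info.lfb_base >= start &&
                   screen_info.lfb_base + screen_info.lfb_size < end;
    }

Note the upper bound is a strict comparison against the BAR end, so a framebuffer that runs exactly to the end of the BAR is not matched.
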
diff --git a/drivers/hwmon/fam15h_power.c b/drivers/hwmon/fam15h_power.c
index 4a7cbfa..fcdbde4 100644
--- a/drivers/hwmon/fam15h_power.c
+++ b/drivers/hwmon/fam15h_power.c
@@ -93,13 +93,29 @@
}
static DEVICE_ATTR(power1_crit, S_IRUGO, show_power_crit, NULL);
+static umode_t fam15h_power_is_visible(struct kobject *kobj,
+ struct attribute *attr,
+ int index)
+{
+ /* power1_input is only reported for Fam15h, Models 00h-0fh */
+ if (attr == &dev_attr_power1_input.attr &&
+ (boot_cpu_data.x86 != 0x15 || boot_cpu_data.x86_model > 0xf))
+ return 0;
+
+ return attr->mode;
+}
+
static struct attribute *fam15h_power_attrs[] = {
&dev_attr_power1_input.attr,
&dev_attr_power1_crit.attr,
NULL
};
-ATTRIBUTE_GROUPS(fam15h_power);
+static const struct attribute_group fam15h_power_group = {
+ .attrs = fam15h_power_attrs,
+ .is_visible = fam15h_power_is_visible,
+};
+__ATTRIBUTE_GROUPS(fam15h_power);
static bool fam15h_power_is_internal_node0(struct pci_dev *f4)
{
@@ -216,7 +232,9 @@
static const struct pci_device_id fam15h_power_id_table[] = {
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M30H_NB_F4) },
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) },
{}
};
MODULE_DEVICE_TABLE(pci, fam15h_power_id_table);
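
The fam15h_power change swaps ATTRIBUTE_GROUPS() for a hand-written group plus __ATTRIBUTE_GROUPS() because the convenience macro leaves no way to attach an .is_visible callback. The callback lets one attribute array serve several hardware variants by hiding files at registration time; the general shape, with hypothetical names:

    /* dev_attr_optional and has_feature() are illustrative. */
    static umode_t my_is_visible(struct kobject *kobj,
                                 struct attribute *attr, int index)
    {
            if (attr == &dev_attr_optional.attr && !has_feature())
                    return 0;               /* hide the sysfs file entirely */
            return attr->mode;              /* keep declared permissions */
    }

    static const struct attribute_group my_group = {
            .attrs = my_attrs,
            .is_visible = my_is_visible,
    };
    __ATTRIBUTE_GROUPS(my);                 /* declares my_groups[] only */
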
diff --git a/drivers/hwmon/tmp103.c b/drivers/hwmon/tmp103.c
index e42964f..ad571ec 100644
--- a/drivers/hwmon/tmp103.c
+++ b/drivers/hwmon/tmp103.c
@@ -145,7 +145,7 @@
}
i2c_set_clientdata(client, regmap);
- hwmon_dev = hwmon_device_register_with_groups(dev, client->name,
+ hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
regmap, tmp103_groups);
return PTR_ERR_OR_ZERO(hwmon_dev);
}
diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c
index a077cc8..19100fd 100644
--- a/drivers/iio/accel/bma180.c
+++ b/drivers/iio/accel/bma180.c
@@ -571,7 +571,7 @@
trig->ops = &bma180_trigger_ops;
iio_trigger_set_drvdata(trig, indio_dev);
data->trig = trig;
- indio_dev->trig = trig;
+ indio_dev->trig = iio_trigger_get(trig);
ret = iio_trigger_register(trig);
if (ret)
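
This is the first of several identical IIO fixes in this series (ad_sigma_delta, hid-sensor-trigger, st_sensors, itg3200 and inv_mpu6050 follow): indio_dev->trig must own its own reference, because the IIO core drops a reference on that pointer at teardown. Assigning the raw pointer let the trigger's refcount underflow once both the driver and the core released it. The recurring pattern, as a sketch:

    /* Every long-lived copy of a trigger pointer takes its own reference;
     * iio_trigger_get() bumps the refcount and returns trig. */
    data->trig = trig;                         /* driver's reference */
    indio_dev->trig = iio_trigger_get(trig);   /* core releases this one */
    ret = iio_trigger_register(trig);
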
diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
index c55b81f..d10bd0c 100644
--- a/drivers/iio/adc/ad_sigma_delta.c
+++ b/drivers/iio/adc/ad_sigma_delta.c
@@ -472,7 +472,7 @@
goto error_free_irq;
/* select default trigger */
- indio_dev->trig = sigma_delta->trig;
+ indio_dev->trig = iio_trigger_get(sigma_delta->trig);
return 0;
diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
index 772e869..7eadaf1 100644
--- a/drivers/iio/adc/at91_adc.c
+++ b/drivers/iio/adc/at91_adc.c
@@ -196,6 +196,7 @@
bool done;
int irq;
u16 last_value;
+ int chnb;
struct mutex lock;
u8 num_channels;
void __iomem *reg_base;
@@ -274,7 +275,7 @@
disable_irq_nosync(irq);
iio_trigger_poll(idev->trig);
} else {
- st->last_value = at91_adc_readl(st, AT91_ADC_LCDR);
+ st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb));
st->done = true;
wake_up_interruptible(&st->wq_data_avail);
}
@@ -351,7 +352,7 @@
unsigned int reg;
status &= at91_adc_readl(st, AT91_ADC_IMR);
- if (status & st->registers->drdy_mask)
+ if (status & GENMASK(st->num_channels - 1, 0))
handle_adc_eoc_trigger(irq, idev);
if (status & AT91RL_ADC_IER_PEN) {
@@ -418,7 +419,7 @@
AT91_ADC_IER_YRDY |
AT91_ADC_IER_PRDY;
- if (status & st->registers->drdy_mask)
+ if (status & GENMASK(st->num_channels - 1, 0))
handle_adc_eoc_trigger(irq, idev);
if (status & AT91_ADC_IER_PEN) {
@@ -689,9 +690,10 @@
case IIO_CHAN_INFO_RAW:
mutex_lock(&st->lock);
+ st->chnb = chan->channel;
at91_adc_writel(st, AT91_ADC_CHER,
AT91_ADC_CH(chan->channel));
- at91_adc_writel(st, AT91_ADC_IER, st->registers->drdy_mask);
+ at91_adc_writel(st, AT91_ADC_IER, BIT(chan->channel));
at91_adc_writel(st, AT91_ADC_CR, AT91_ADC_START);
ret = wait_event_interruptible_timeout(st->wq_data_avail,
@@ -708,7 +710,7 @@
at91_adc_writel(st, AT91_ADC_CHDR,
AT91_ADC_CH(chan->channel));
- at91_adc_writel(st, AT91_ADC_IDR, st->registers->drdy_mask);
+ at91_adc_writel(st, AT91_ADC_IDR, BIT(chan->channel));
st->last_value = 0;
st->done = false;
diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
index fd2745c..626b397 100644
--- a/drivers/iio/adc/xilinx-xadc-core.c
+++ b/drivers/iio/adc/xilinx-xadc-core.c
@@ -1126,7 +1126,7 @@
chan->address = XADC_REG_VPVN;
} else {
chan->scan_index = 15 + reg;
- chan->scan_index = XADC_REG_VAUX(reg - 1);
+ chan->address = XADC_REG_VAUX(reg - 1);
}
num_channels++;
chan++;
diff --git a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
index a3109a6..92068cd 100644
--- a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+++ b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
@@ -122,7 +122,8 @@
dev_err(&indio_dev->dev, "Trigger Register Failed\n");
goto error_free_trig;
}
- indio_dev->trig = attrb->trigger = trig;
+ attrb->trigger = trig;
+ indio_dev->trig = iio_trigger_get(trig);
return ret;
diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c
index 8fc3a97..8d8ca6f 100644
--- a/drivers/iio/common/st_sensors/st_sensors_trigger.c
+++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c
@@ -49,7 +49,7 @@
dev_err(&indio_dev->dev, "failed to register iio trigger.\n");
goto iio_trigger_register_error;
}
- indio_dev->trig = sdata->trig;
+ indio_dev->trig = iio_trigger_get(sdata->trig);
return 0;
diff --git a/drivers/iio/gyro/itg3200_buffer.c b/drivers/iio/gyro/itg3200_buffer.c
index e3b3c50..eef50e9 100644
--- a/drivers/iio/gyro/itg3200_buffer.c
+++ b/drivers/iio/gyro/itg3200_buffer.c
@@ -132,7 +132,7 @@
goto error_free_irq;
/* select default trigger */
- indio_dev->trig = st->trig;
+ indio_dev->trig = iio_trigger_get(st->trig);
return 0;
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
index 03b9372..926fcce 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
@@ -135,7 +135,7 @@
ret = iio_trigger_register(st->trig);
if (ret)
goto error_free_irq;
- indio_dev->trig = st->trig;
+ indio_dev->trig = iio_trigger_get(st->trig);
return 0;
diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
index c749700..f084610 100644
--- a/drivers/iio/inkern.c
+++ b/drivers/iio/inkern.c
@@ -178,7 +178,7 @@
index = of_property_match_string(np, "io-channel-names",
name);
chan = of_iio_channel_get(np, index);
- if (!IS_ERR(chan))
+ if (!IS_ERR(chan) || PTR_ERR(chan) == -EPROBE_DEFER)
break;
else if (name && index >= 0) {
pr_err("ERROR: could not get IIO channel %s:%s(%i)\n",
diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c
index a4b6413..68cae86 100644
--- a/drivers/iio/magnetometer/st_magn_core.c
+++ b/drivers/iio/magnetometer/st_magn_core.c
@@ -42,7 +42,8 @@
#define ST_MAGN_FS_AVL_5600MG 5600
#define ST_MAGN_FS_AVL_8000MG 8000
#define ST_MAGN_FS_AVL_8100MG 8100
-#define ST_MAGN_FS_AVL_10000MG 10000
+#define ST_MAGN_FS_AVL_12000MG 12000
+#define ST_MAGN_FS_AVL_16000MG 16000
/* CUSTOM VALUES FOR SENSOR 1 */
#define ST_MAGN_1_WAI_EXP 0x3c
@@ -69,20 +70,20 @@
#define ST_MAGN_1_FS_AVL_4700_VAL 0x05
#define ST_MAGN_1_FS_AVL_5600_VAL 0x06
#define ST_MAGN_1_FS_AVL_8100_VAL 0x07
-#define ST_MAGN_1_FS_AVL_1300_GAIN_XY 1100
-#define ST_MAGN_1_FS_AVL_1900_GAIN_XY 855
-#define ST_MAGN_1_FS_AVL_2500_GAIN_XY 670
-#define ST_MAGN_1_FS_AVL_4000_GAIN_XY 450
-#define ST_MAGN_1_FS_AVL_4700_GAIN_XY 400
-#define ST_MAGN_1_FS_AVL_5600_GAIN_XY 330
-#define ST_MAGN_1_FS_AVL_8100_GAIN_XY 230
-#define ST_MAGN_1_FS_AVL_1300_GAIN_Z 980
-#define ST_MAGN_1_FS_AVL_1900_GAIN_Z 760
-#define ST_MAGN_1_FS_AVL_2500_GAIN_Z 600
-#define ST_MAGN_1_FS_AVL_4000_GAIN_Z 400
-#define ST_MAGN_1_FS_AVL_4700_GAIN_Z 355
-#define ST_MAGN_1_FS_AVL_5600_GAIN_Z 295
-#define ST_MAGN_1_FS_AVL_8100_GAIN_Z 205
+#define ST_MAGN_1_FS_AVL_1300_GAIN_XY 909
+#define ST_MAGN_1_FS_AVL_1900_GAIN_XY 1169
+#define ST_MAGN_1_FS_AVL_2500_GAIN_XY 1492
+#define ST_MAGN_1_FS_AVL_4000_GAIN_XY 2222
+#define ST_MAGN_1_FS_AVL_4700_GAIN_XY 2500
+#define ST_MAGN_1_FS_AVL_5600_GAIN_XY 3030
+#define ST_MAGN_1_FS_AVL_8100_GAIN_XY 4347
+#define ST_MAGN_1_FS_AVL_1300_GAIN_Z 1020
+#define ST_MAGN_1_FS_AVL_1900_GAIN_Z 1315
+#define ST_MAGN_1_FS_AVL_2500_GAIN_Z 1666
+#define ST_MAGN_1_FS_AVL_4000_GAIN_Z 2500
+#define ST_MAGN_1_FS_AVL_4700_GAIN_Z 2816
+#define ST_MAGN_1_FS_AVL_5600_GAIN_Z 3389
+#define ST_MAGN_1_FS_AVL_8100_GAIN_Z 4878
#define ST_MAGN_1_MULTIREAD_BIT false
/* CUSTOM VALUES FOR SENSOR 2 */
@@ -105,10 +106,12 @@
#define ST_MAGN_2_FS_MASK 0x60
#define ST_MAGN_2_FS_AVL_4000_VAL 0x00
#define ST_MAGN_2_FS_AVL_8000_VAL 0x01
-#define ST_MAGN_2_FS_AVL_10000_VAL 0x02
-#define ST_MAGN_2_FS_AVL_4000_GAIN 430
-#define ST_MAGN_2_FS_AVL_8000_GAIN 230
-#define ST_MAGN_2_FS_AVL_10000_GAIN 230
+#define ST_MAGN_2_FS_AVL_12000_VAL 0x02
+#define ST_MAGN_2_FS_AVL_16000_VAL 0x03
+#define ST_MAGN_2_FS_AVL_4000_GAIN 146
+#define ST_MAGN_2_FS_AVL_8000_GAIN 292
+#define ST_MAGN_2_FS_AVL_12000_GAIN 438
+#define ST_MAGN_2_FS_AVL_16000_GAIN 584
#define ST_MAGN_2_MULTIREAD_BIT false
#define ST_MAGN_2_OUT_X_L_ADDR 0x28
#define ST_MAGN_2_OUT_Y_L_ADDR 0x2a
@@ -266,9 +269,14 @@
.gain = ST_MAGN_2_FS_AVL_8000_GAIN,
},
[2] = {
- .num = ST_MAGN_FS_AVL_10000MG,
- .value = ST_MAGN_2_FS_AVL_10000_VAL,
- .gain = ST_MAGN_2_FS_AVL_10000_GAIN,
+ .num = ST_MAGN_FS_AVL_12000MG,
+ .value = ST_MAGN_2_FS_AVL_12000_VAL,
+ .gain = ST_MAGN_2_FS_AVL_12000_GAIN,
+ },
+ [3] = {
+ .num = ST_MAGN_FS_AVL_16000MG,
+ .value = ST_MAGN_2_FS_AVL_16000_VAL,
+ .gain = ST_MAGN_2_FS_AVL_16000_GAIN,
},
},
},
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index a3a2e9c..df0c4f6 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -105,6 +105,7 @@
umem->length = size;
umem->offset = addr & ~PAGE_MASK;
umem->page_size = PAGE_SIZE;
+ umem->pid = get_task_pid(current, PIDTYPE_PID);
/*
* We ask for writable memory if any access flags other than
* "remote read" are set. "Local write" and "remote write"
@@ -198,6 +199,7 @@
if (ret < 0) {
if (need_release)
__ib_umem_release(context->device, umem, 0);
+ put_pid(umem->pid);
kfree(umem);
} else
current->mm->pinned_vm = locked;
@@ -230,15 +232,19 @@
{
struct ib_ucontext *context = umem->context;
struct mm_struct *mm;
+ struct task_struct *task;
unsigned long diff;
__ib_umem_release(umem->context->device, umem, 1);
- mm = get_task_mm(current);
- if (!mm) {
- kfree(umem);
- return;
- }
+ task = get_pid_task(umem->pid, PIDTYPE_PID);
+ put_pid(umem->pid);
+ if (!task)
+ goto out;
+ mm = get_task_mm(task);
+ put_task_struct(task);
+ if (!mm)
+ goto out;
diff = PAGE_ALIGN(umem->length + umem->offset) >> PAGE_SHIFT;
@@ -262,9 +268,10 @@
} else
down_write(&mm->mmap_sem);
- current->mm->pinned_vm -= diff;
+ mm->pinned_vm -= diff;
up_write(&mm->mmap_sem);
mmput(mm);
+out:
kfree(umem);
}
EXPORT_SYMBOL(ib_umem_release);
diff --git a/drivers/infiniband/core/uverbs_marshall.c b/drivers/infiniband/core/uverbs_marshall.c
index e7bee46..abd9724 100644
--- a/drivers/infiniband/core/uverbs_marshall.c
+++ b/drivers/infiniband/core/uverbs_marshall.c
@@ -140,5 +140,9 @@
dst->packet_life_time = src->packet_life_time;
dst->preference = src->preference;
dst->packet_life_time_selector = src->packet_life_time_selector;
+
+ memset(dst->smac, 0, sizeof(dst->smac));
+ memset(dst->dmac, 0, sizeof(dst->dmac));
+ dst->vlan_id = 0xffff;
}
EXPORT_SYMBOL(ib_copy_path_rec_from_user);
diff --git a/drivers/infiniband/hw/ipath/ipath_user_pages.c b/drivers/infiniband/hw/ipath/ipath_user_pages.c
index dc66c45..1da1252 100644
--- a/drivers/infiniband/hw/ipath/ipath_user_pages.c
+++ b/drivers/infiniband/hw/ipath/ipath_user_pages.c
@@ -54,7 +54,7 @@
/* call with current->mm->mmap_sem held */
static int __ipath_get_user_pages(unsigned long start_page, size_t num_pages,
- struct page **p, struct vm_area_struct **vma)
+ struct page **p)
{
unsigned long lock_limit;
size_t got;
@@ -74,7 +74,7 @@
ret = get_user_pages(current, current->mm,
start_page + got * PAGE_SIZE,
num_pages - got, 1, 1,
- p + got, vma);
+ p + got, NULL);
if (ret < 0)
goto bail_release;
}
@@ -165,7 +165,7 @@
down_write(&current->mm->mmap_sem);
- ret = __ipath_get_user_pages(start_page, num_pages, p, NULL);
+ ret = __ipath_get_user_pages(start_page, num_pages, p);
up_write(&current->mm->mmap_sem);
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index af82563..bda5994 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -59,6 +59,7 @@
#define MLX4_IB_FLOW_MAX_PRIO 0xFFF
#define MLX4_IB_FLOW_QPN_MASK 0xFFFFFF
+#define MLX4_IB_CARD_REV_A0 0xA0
MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("Mellanox ConnectX HCA InfiniBand driver");
@@ -119,6 +120,17 @@
return dmfs;
}
+static int num_ib_ports(struct mlx4_dev *dev)
+{
+ int ib_ports = 0;
+ int i;
+
+ mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB)
+ ib_ports++;
+
+ return ib_ports;
+}
+
static int mlx4_ib_query_device(struct ib_device *ibdev,
struct ib_device_attr *props)
{
@@ -126,6 +138,7 @@
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
+ int have_ib_ports;
in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL);
out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
@@ -142,6 +155,8 @@
memset(props, 0, sizeof *props);
+ have_ib_ports = num_ib_ports(dev->dev);
+
props->fw_ver = dev->dev->caps.fw_ver;
props->device_cap_flags = IB_DEVICE_CHANGE_PHY_PORT |
IB_DEVICE_PORT_ACTIVE_EVENT |
@@ -152,13 +167,15 @@
props->device_cap_flags |= IB_DEVICE_BAD_PKEY_CNTR;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BAD_QKEY_CNTR)
props->device_cap_flags |= IB_DEVICE_BAD_QKEY_CNTR;
- if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_APM)
+ if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_APM && have_ib_ports)
props->device_cap_flags |= IB_DEVICE_AUTO_PATH_MIG;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_UD_AV_PORT)
props->device_cap_flags |= IB_DEVICE_UD_AV_PORT_ENFORCE;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_IPOIB_CSUM)
props->device_cap_flags |= IB_DEVICE_UD_IP_CSUM;
- if (dev->dev->caps.max_gso_sz && dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BLH)
+ if (dev->dev->caps.max_gso_sz &&
+ (dev->dev->rev_id != MLX4_IB_CARD_REV_A0) &&
+ (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BLH))
props->device_cap_flags |= IB_DEVICE_UD_TSO;
if (dev->dev->caps.bmme_flags & MLX4_BMME_FLAG_RESERVED_LKEY)
props->device_cap_flags |= IB_DEVICE_LOCAL_DMA_LKEY;
@@ -357,7 +374,7 @@
props->state = IB_PORT_DOWN;
props->phys_state = state_to_phys_state(props->state);
props->active_mtu = IB_MTU_256;
- spin_lock(&iboe->lock);
+ spin_lock_bh(&iboe->lock);
ndev = iboe->netdevs[port - 1];
if (!ndev)
goto out_unlock;
@@ -369,7 +386,7 @@
IB_PORT_ACTIVE : IB_PORT_DOWN;
props->phys_state = state_to_phys_state(props->state);
out_unlock:
- spin_unlock(&iboe->lock);
+ spin_unlock_bh(&iboe->lock);
out:
mlx4_free_cmd_mailbox(mdev->dev, mailbox);
return err;
@@ -811,11 +828,11 @@
if (!mqp->port)
return 0;
- spin_lock(&mdev->iboe.lock);
+ spin_lock_bh(&mdev->iboe.lock);
ndev = mdev->iboe.netdevs[mqp->port - 1];
if (ndev)
dev_hold(ndev);
- spin_unlock(&mdev->iboe.lock);
+ spin_unlock_bh(&mdev->iboe.lock);
if (ndev) {
ret = 1;
@@ -1292,11 +1309,11 @@
mutex_lock(&mqp->mutex);
ge = find_gid_entry(mqp, gid->raw);
if (ge) {
- spin_lock(&mdev->iboe.lock);
+ spin_lock_bh(&mdev->iboe.lock);
ndev = ge->added ? mdev->iboe.netdevs[ge->port - 1] : NULL;
if (ndev)
dev_hold(ndev);
- spin_unlock(&mdev->iboe.lock);
+ spin_unlock_bh(&mdev->iboe.lock);
if (ndev)
dev_put(ndev);
list_del(&ge->list);
@@ -1417,6 +1434,9 @@
int err;
struct mlx4_dev *dev = gw->dev->dev;
+ if (!gw->dev->ib_active)
+ return;
+
mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox)) {
pr_warn("update gid table failed %ld\n", PTR_ERR(mailbox));
@@ -1447,6 +1467,9 @@
int err;
struct mlx4_dev *dev = gw->dev->dev;
+ if (!gw->dev->ib_active)
+ return;
+
mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox)) {
pr_warn("reset gid table failed\n");
@@ -1581,7 +1604,7 @@
return 0;
iboe = &ibdev->iboe;
- spin_lock(&iboe->lock);
+ spin_lock_bh(&iboe->lock);
for (port = 1; port <= ibdev->dev->caps.num_ports; ++port)
if ((netif_is_bond_master(real_dev) &&
@@ -1591,7 +1614,7 @@
update_gid_table(ibdev, port, gid,
event == NETDEV_DOWN, 0);
- spin_unlock(&iboe->lock);
+ spin_unlock_bh(&iboe->lock);
return 0;
}
@@ -1664,13 +1687,21 @@
new_smac = mlx4_mac_to_u64(dev->dev_addr);
read_unlock(&dev_base_lock);
+ atomic64_set(&ibdev->iboe.mac[port - 1], new_smac);
+
+ /* no need for update QP1 and mac registration in non-SRIOV */
+ if (!mlx4_is_mfunc(ibdev->dev))
+ return;
+
mutex_lock(&ibdev->qp1_proxy_lock[port - 1]);
qp = ibdev->qp1_proxy[port - 1];
if (qp) {
int new_smac_index;
- u64 old_smac = qp->pri.smac;
+ u64 old_smac;
struct mlx4_update_qp_params update_params;
+ mutex_lock(&qp->mutex);
+ old_smac = qp->pri.smac;
if (new_smac == old_smac)
goto unlock;
@@ -1680,22 +1711,25 @@
goto unlock;
update_params.smac_index = new_smac_index;
- if (mlx4_update_qp(ibdev->dev, &qp->mqp, MLX4_UPDATE_QP_SMAC,
+ if (mlx4_update_qp(ibdev->dev, qp->mqp.qpn, MLX4_UPDATE_QP_SMAC,
&update_params)) {
release_mac = new_smac;
goto unlock;
}
-
+ /* if old port was zero, no mac was yet registered for this QP */
+ if (qp->pri.smac_port)
+ release_mac = old_smac;
qp->pri.smac = new_smac;
+ qp->pri.smac_port = port;
qp->pri.smac_index = new_smac_index;
-
- release_mac = old_smac;
}
unlock:
- mutex_unlock(&ibdev->qp1_proxy_lock[port - 1]);
if (release_mac != MLX4_IB_INVALID_MAC)
mlx4_unregister_mac(ibdev->dev, port, release_mac);
+ if (qp)
+ mutex_unlock(&qp->mutex);
+ mutex_unlock(&ibdev->qp1_proxy_lock[port - 1]);
}
static void mlx4_ib_get_dev_addr(struct net_device *dev,
@@ -1706,6 +1740,7 @@
struct inet6_dev *in6_dev;
union ib_gid *pgid;
struct inet6_ifaddr *ifp;
+ union ib_gid default_gid;
#endif
union ib_gid gid;
@@ -1726,12 +1761,15 @@
in_dev_put(in_dev);
}
#if IS_ENABLED(CONFIG_IPV6)
+ mlx4_make_default_gid(dev, &default_gid);
/* IPv6 gids */
in6_dev = in6_dev_get(dev);
if (in6_dev) {
read_lock_bh(&in6_dev->lock);
list_for_each_entry(ifp, &in6_dev->addr_list, if_list) {
pgid = (union ib_gid *)&ifp->addr;
+ if (!memcmp(pgid, &default_gid, sizeof(*pgid)))
+ continue;
update_gid_table(ibdev, port, pgid, 0, 0);
}
read_unlock_bh(&in6_dev->lock);
@@ -1753,24 +1791,33 @@
struct net_device *dev;
struct mlx4_ib_iboe *iboe = &ibdev->iboe;
int i;
+ int err = 0;
- for (i = 1; i <= ibdev->num_ports; ++i)
- if (reset_gid_table(ibdev, i))
- return -1;
+ for (i = 1; i <= ibdev->num_ports; ++i) {
+ if (rdma_port_get_link_layer(&ibdev->ib_dev, i) ==
+ IB_LINK_LAYER_ETHERNET) {
+ err = reset_gid_table(ibdev, i);
+ if (err)
+ goto out;
+ }
+ }
read_lock(&dev_base_lock);
- spin_lock(&iboe->lock);
+ spin_lock_bh(&iboe->lock);
for_each_netdev(&init_net, dev) {
u8 port = mlx4_ib_get_dev_port(dev, ibdev);
- if (port)
+ /* port will be non-zero only for ETH ports */
+ if (port) {
+ mlx4_ib_set_default_gid(ibdev, dev, port);
mlx4_ib_get_dev_addr(dev, ibdev, port);
+ }
}
- spin_unlock(&iboe->lock);
+ spin_unlock_bh(&iboe->lock);
read_unlock(&dev_base_lock);
-
- return 0;
+out:
+ return err;
}
static void mlx4_ib_scan_netdevs(struct mlx4_ib_dev *ibdev,
@@ -1784,7 +1831,7 @@
iboe = &ibdev->iboe;
- spin_lock(&iboe->lock);
+ spin_lock_bh(&iboe->lock);
mlx4_foreach_ib_transport_port(port, ibdev->dev) {
enum ib_port_state port_state = IB_PORT_NOP;
struct net_device *old_master = iboe->masters[port - 1];
@@ -1816,35 +1863,47 @@
port_state = (netif_running(curr_netdev) && netif_carrier_ok(curr_netdev)) ?
IB_PORT_ACTIVE : IB_PORT_DOWN;
mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
+ if (curr_master) {
+ /* if using bonding/team and a slave port is down, we
+ * don't want the bond IP based gids in the table since
+ * flows that select port by gid may get the down port.
+ */
+ if (port_state == IB_PORT_DOWN) {
+ reset_gid_table(ibdev, port);
+ mlx4_ib_set_default_gid(ibdev,
+ curr_netdev,
+ port);
+ } else {
+ /* gids from the upper dev (bond/team)
+ * should appear in port's gid table
+ */
+ mlx4_ib_get_dev_addr(curr_master,
+ ibdev, port);
+ }
+ }
+ /* if bonding is used it is possible that we add it to
+ * masters only after an IP address is assigned to the
+ * net bonding interface.
+ */
+ if (curr_master && (old_master != curr_master)) {
+ reset_gid_table(ibdev, port);
+ mlx4_ib_set_default_gid(ibdev,
+ curr_netdev, port);
+ mlx4_ib_get_dev_addr(curr_master, ibdev, port);
+ }
+
+ if (!curr_master && (old_master != curr_master)) {
+ reset_gid_table(ibdev, port);
+ mlx4_ib_set_default_gid(ibdev,
+ curr_netdev, port);
+ mlx4_ib_get_dev_addr(curr_netdev, ibdev, port);
+ }
} else {
reset_gid_table(ibdev, port);
}
- /* if using bonding/team and a slave port is down, we don't the bond IP
- * based gids in the table since flows that select port by gid may get
- * the down port.
- */
- if (curr_master && (port_state == IB_PORT_DOWN)) {
- reset_gid_table(ibdev, port);
- mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
- }
- /* if bonding is used it is possible that we add it to masters
- * only after IP address is assigned to the net bonding
- * interface.
- */
- if (curr_master && (old_master != curr_master)) {
- reset_gid_table(ibdev, port);
- mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
- mlx4_ib_get_dev_addr(curr_master, ibdev, port);
- }
-
- if (!curr_master && (old_master != curr_master)) {
- reset_gid_table(ibdev, port);
- mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
- mlx4_ib_get_dev_addr(curr_netdev, ibdev, port);
- }
}
- spin_unlock(&iboe->lock);
+ spin_unlock_bh(&iboe->lock);
if (update_qps_port > 0)
mlx4_ib_update_qps(ibdev, dev, update_qps_port);
@@ -2186,6 +2245,9 @@
goto err_steer_free_bitmap;
}
+ for (j = 1; j <= ibdev->dev->caps.num_ports; j++)
+ atomic64_set(&iboe->mac[j - 1], ibdev->dev->caps.def_mac[j]);
+
if (ib_register_device(&ibdev->ib_dev, NULL))
goto err_steer_free_bitmap;
@@ -2222,12 +2284,8 @@
}
}
#endif
- for (i = 1 ; i <= ibdev->num_ports ; ++i)
- reset_gid_table(ibdev, i);
- rtnl_lock();
- mlx4_ib_scan_netdevs(ibdev, NULL, 0);
- rtnl_unlock();
- mlx4_ib_init_gid_table(ibdev);
+ if (mlx4_ib_init_gid_table(ibdev))
+ goto err_notif;
}
for (j = 0; j < ARRAY_SIZE(mlx4_class_attributes); ++j) {
@@ -2375,6 +2433,9 @@
struct mlx4_ib_dev *ibdev = ibdev_ptr;
int p;
+ ibdev->ib_active = false;
+ flush_workqueue(wq);
+
mlx4_ib_close_sriov(ibdev);
mlx4_ib_mad_cleanup(ibdev);
ib_unregister_device(&ibdev->ib_dev);
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index e8cad39..6eb743f 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -451,6 +451,7 @@
spinlock_t lock;
struct net_device *netdevs[MLX4_MAX_PORTS];
struct net_device *masters[MLX4_MAX_PORTS];
+ atomic64_t mac[MLX4_MAX_PORTS];
struct notifier_block nb;
struct notifier_block nb_inet;
struct notifier_block nb_inet6;
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 9b0e80e..8f9325c 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -234,14 +234,13 @@
0);
if (IS_ERR(mmr->umem)) {
err = PTR_ERR(mmr->umem);
+ /* Prevent mlx4_ib_dereg_mr from free'ing invalid pointer */
mmr->umem = NULL;
goto release_mpt_entry;
}
n = ib_umem_page_count(mmr->umem);
shift = ilog2(mmr->umem->page_size);
- mmr->mmr.iova = virt_addr;
- mmr->mmr.size = length;
err = mlx4_mr_rereg_mem_write(dev->dev, &mmr->mmr,
virt_addr, length, n, shift,
*pmpt_entry);
@@ -249,6 +248,8 @@
ib_umem_release(mmr->umem);
goto release_mpt_entry;
}
+ mmr->mmr.iova = virt_addr;
+ mmr->mmr.size = length;
err = mlx4_ib_umem_write_mtt(dev, &mmr->mmr.mtt, mmr->umem);
if (err) {
@@ -262,6 +263,8 @@
* return a failure. But dereg_mr will free the resources.
*/
err = mlx4_mr_hw_write_mpt(dev->dev, &mmr->mmr, pmpt_entry);
+ if (!err && flags & IB_MR_REREG_ACCESS)
+ mmr->mmr.access = mr_access_flags;
release_mpt_entry:
mlx4_mr_hw_put_mpt(dev->dev, pmpt_entry);
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index efb9eff..9c5150c 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -964,9 +964,10 @@
MLX4_QP_STATE_RST, NULL, 0, 0, &qp->mqp))
pr_warn("modify QP %06x to RESET failed.\n",
qp->mqp.qpn);
- if (qp->pri.smac) {
+ if (qp->pri.smac || (!qp->pri.smac && qp->pri.smac_port)) {
mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac);
qp->pri.smac = 0;
+ qp->pri.smac_port = 0;
}
if (qp->alt.smac) {
mlx4_unregister_mac(dev->dev, qp->alt.smac_port, qp->alt.smac);
@@ -1325,7 +1326,8 @@
* If one was already assigned, but the new mac differs,
* unregister the old one and register the new one.
*/
- if (!smac_info->smac || smac_info->smac != smac) {
+ if ((!smac_info->smac && !smac_info->smac_port) ||
+ smac_info->smac != smac) {
/* register candidate now, unreg if needed, after success */
smac_index = mlx4_register_mac(dev->dev, port, smac);
if (smac_index >= 0) {
@@ -1390,21 +1392,13 @@
static int handle_eth_ud_smac_index(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp, u8 *smac,
struct mlx4_qp_context *context)
{
- struct net_device *ndev;
u64 u64_mac;
int smac_index;
-
- ndev = dev->iboe.netdevs[qp->port - 1];
- if (ndev) {
- smac = ndev->dev_addr;
- u64_mac = mlx4_mac_to_u64(smac);
- } else {
- u64_mac = dev->dev->caps.def_mac[qp->port];
- }
+ u64_mac = atomic64_read(&dev->iboe.mac[qp->port - 1]);
context->pri_path.sched_queue = MLX4_IB_DEFAULT_SCHED_QUEUE | ((qp->port - 1) << 6);
- if (!qp->pri.smac) {
+ if (!qp->pri.smac && !qp->pri.smac_port) {
smac_index = mlx4_register_mac(dev->dev, qp->port, u64_mac);
if (smac_index >= 0) {
qp->pri.candidate_smac_index = smac_index;
@@ -1432,6 +1426,12 @@
int steer_qp = 0;
int err = -EINVAL;
+ /* APM is not supported under RoCE */
+ if (attr_mask & IB_QP_ALT_PATH &&
+ rdma_port_get_link_layer(&dev->ib_dev, qp->port) ==
+ IB_LINK_LAYER_ETHERNET)
+ return -ENOTSUPP;
+
context = kzalloc(sizeof *context, GFP_KERNEL);
if (!context)
return -ENOMEM;
@@ -1682,7 +1682,7 @@
MLX4_IB_LINK_TYPE_ETH;
if (dev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
/* set QP to receive both tunneled & non-tunneled packets */
- if (!(context->flags & (1 << MLX4_RSS_QPC_FLAG_OFFSET)))
+ if (!(context->flags & cpu_to_be32(1 << MLX4_RSS_QPC_FLAG_OFFSET)))
context->srqn = cpu_to_be32(7 << 28);
}
}
@@ -1786,9 +1786,10 @@
if (qp->flags & MLX4_IB_QP_NETIF)
mlx4_ib_steer_qp_reg(dev, qp, 0);
}
- if (qp->pri.smac) {
+ if (qp->pri.smac || (!qp->pri.smac && qp->pri.smac_port)) {
mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac);
qp->pri.smac = 0;
+ qp->pri.smac_port = 0;
}
if (qp->alt.smac) {
mlx4_unregister_mac(dev->dev, qp->alt.smac_port, qp->alt.smac);
@@ -1812,11 +1813,12 @@
if (err && steer_qp)
mlx4_ib_steer_qp_reg(dev, qp, 0);
kfree(context);
- if (qp->pri.candidate_smac) {
+ if (qp->pri.candidate_smac ||
+ (!qp->pri.candidate_smac && qp->pri.candidate_smac_port)) {
if (err) {
mlx4_unregister_mac(dev->dev, qp->pri.candidate_smac_port, qp->pri.candidate_smac);
} else {
- if (qp->pri.smac)
+ if (qp->pri.smac || (!qp->pri.smac && qp->pri.smac_port))
mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac);
qp->pri.smac = qp->pri.candidate_smac;
qp->pri.smac_index = qp->pri.candidate_smac_index;
@@ -2089,6 +2091,16 @@
return 0;
}
+static void mlx4_u64_to_smac(u8 *dst_mac, u64 src_mac)
+{
+ int i;
+
+ for (i = ETH_ALEN; i; i--) {
+ dst_mac[i - 1] = src_mac & 0xff;
+ src_mac >>= 8;
+ }
+}
+
static int build_mlx_header(struct mlx4_ib_sqp *sqp, struct ib_send_wr *wr,
void *wqe, unsigned *mlx_seg_len)
{
@@ -2203,7 +2215,6 @@
}
if (is_eth) {
- u8 *smac;
struct in6_addr in6;
u16 pcp = (be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 29) << 13;
@@ -2216,12 +2227,17 @@
memcpy(&ctrl->imm, ah->av.eth.mac + 2, 4);
memcpy(&in6, sgid.raw, sizeof(in6));
- if (!mlx4_is_mfunc(to_mdev(ib_dev)->dev))
- smac = to_mdev(sqp->qp.ibqp.device)->
- iboe.netdevs[sqp->qp.port - 1]->dev_addr;
- else /* use the src mac of the tunnel */
- smac = ah->av.eth.s_mac;
- memcpy(sqp->ud_header.eth.smac_h, smac, 6);
+ if (!mlx4_is_mfunc(to_mdev(ib_dev)->dev)) {
+ u64 mac = atomic64_read(&to_mdev(ib_dev)->iboe.mac[sqp->qp.port - 1]);
+ u8 smac[ETH_ALEN];
+
+ mlx4_u64_to_smac(smac, mac);
+ memcpy(sqp->ud_header.eth.smac_h, smac, ETH_ALEN);
+ } else {
+ /* use the src mac of the tunnel */
+ memcpy(sqp->ud_header.eth.smac_h, ah->av.eth.s_mac, ETH_ALEN);
+ }
+
if (!memcmp(sqp->ud_header.eth.smac_h, sqp->ud_header.eth.dmac_h, 6))
mlx->flags |= cpu_to_be32(MLX4_WQE_CTRL_FORCE_LOOPBACK);
if (!is_vlan) {
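
Several mlx4 hunks above replace reads of netdev->dev_addr (which required the iboe spinlock and a live netdev) with an atomic64 snapshot of each port's MAC, updated from the netdev notifier. mlx4_u64_to_smac() then unpacks the snapshot most-significant byte first, so byte 0 of the result is the high-order byte of the u64; a sketch of a read:

    /* Reading the cached port MAC (sketch). */
    u64 mac = atomic64_read(&ibdev->iboe.mac[port - 1]);
    u8 smac[ETH_ALEN];

    mlx4_u64_to_smac(smac, mac);    /* smac[0] = bits 47:40 of mac */
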
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
index 40f8536..ac02ce4 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
@@ -38,7 +38,7 @@
#define OCRDMA_VID_PCP_SHIFT 0xD
static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
- struct ib_ah_attr *attr, int pdid)
+ struct ib_ah_attr *attr, union ib_gid *sgid, int pdid)
{
int status = 0;
u16 vlan_tag; bool vlan_enabled = false;
@@ -49,8 +49,7 @@
memset(&eth, 0, sizeof(eth));
memset(&grh, 0, sizeof(grh));
- ah->sgid_index = attr->grh.sgid_index;
-
+ /* VLAN */
vlan_tag = attr->vlan_id;
if (!vlan_tag || (vlan_tag > 0xFFF))
vlan_tag = dev->pvid;
@@ -65,15 +64,14 @@
eth.eth_type = cpu_to_be16(OCRDMA_ROCE_ETH_TYPE);
eth_sz = sizeof(struct ocrdma_eth_basic);
}
+ /* MAC */
memcpy(&eth.smac[0], &dev->nic_info.mac_addr[0], ETH_ALEN);
- memcpy(&eth.dmac[0], attr->dmac, ETH_ALEN);
status = ocrdma_resolve_dmac(dev, attr, &eth.dmac[0]);
if (status)
return status;
- status = ocrdma_query_gid(&dev->ibdev, 1, attr->grh.sgid_index,
- (union ib_gid *)&grh.sgid[0]);
- if (status)
- return status;
+ ah->sgid_index = attr->grh.sgid_index;
+ memcpy(&grh.sgid[0], sgid->raw, sizeof(union ib_gid));
+ memcpy(&grh.dgid[0], attr->grh.dgid.raw, sizeof(attr->grh.dgid.raw));
grh.tclass_flow = cpu_to_be32((6 << 28) |
(attr->grh.traffic_class << 24) |
@@ -81,8 +79,7 @@
/* 0x1b is next header value in GRH */
grh.pdid_hoplimit = cpu_to_be32((pdid << 16) |
(0x1b << 8) | attr->grh.hop_limit);
-
- memcpy(&grh.dgid[0], attr->grh.dgid.raw, sizeof(attr->grh.dgid.raw));
+ /* Eth HDR */
memcpy(&ah->av->eth_hdr, &eth, eth_sz);
memcpy((u8 *)ah->av + eth_sz, &grh, sizeof(struct ocrdma_grh));
if (vlan_enabled)
@@ -98,6 +95,8 @@
struct ocrdma_ah *ah;
struct ocrdma_pd *pd = get_ocrdma_pd(ibpd);
struct ocrdma_dev *dev = get_ocrdma_dev(ibpd->device);
+ union ib_gid sgid;
+ u8 zmac[ETH_ALEN];
if (!(attr->ah_flags & IB_AH_GRH))
return ERR_PTR(-EINVAL);
@@ -111,7 +110,27 @@
status = ocrdma_alloc_av(dev, ah);
if (status)
goto av_err;
- status = set_av_attr(dev, ah, attr, pd->id);
+
+ status = ocrdma_query_gid(&dev->ibdev, 1, attr->grh.sgid_index, &sgid);
+ if (status) {
+ pr_err("%s(): Failed to query sgid, status = %d\n",
+ __func__, status);
+ goto av_conf_err;
+ }
+
+ memset(&zmac, 0, ETH_ALEN);
+ if (pd->uctx &&
+ memcmp(attr->dmac, &zmac, ETH_ALEN)) {
+ status = rdma_addr_find_dmac_by_grh(&sgid, &attr->grh.dgid,
+ attr->dmac, &attr->vlan_id);
+ if (status) {
+ pr_err("%s(): Failed to resolve dmac from gid."
+ "status = %d\n", __func__, status);
+ goto av_conf_err;
+ }
+ }
+
+ status = set_av_attr(dev, ah, attr, &sgid, pd->id);
if (status)
goto av_conf_err;
@@ -145,7 +164,7 @@
struct ocrdma_av *av = ah->av;
struct ocrdma_grh *grh;
attr->ah_flags |= IB_AH_GRH;
- if (ah->av->valid & Bit(1)) {
+ if (ah->av->valid & OCRDMA_AV_VALID) {
grh = (struct ocrdma_grh *)((u8 *)ah->av +
sizeof(struct ocrdma_eth_vlan));
attr->sl = be16_to_cpu(av->eth_hdr.vlan_tag) >> 13;
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index acb434d..8f5f257 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -101,7 +101,7 @@
attr->max_srq_sge = dev->attr.max_srq_sge;
attr->max_srq_wr = dev->attr.max_rqe;
attr->local_ca_ack_delay = dev->attr.local_ca_ack_delay;
- attr->max_fast_reg_page_list_len = 0;
+ attr->max_fast_reg_page_list_len = dev->attr.max_pages_per_frmr;
attr->max_pkeys = 1;
return 0;
}
@@ -2846,11 +2846,9 @@
if (cq->first_arm) {
ocrdma_ring_cq_db(dev, cq_id, arm_needed, sol_needed, 0);
cq->first_arm = false;
- goto skip_defer;
}
- cq->deferred_arm = true;
-skip_defer:
+ cq->deferred_arm = true;
cq->deferred_sol = sol_needed;
spin_unlock_irqrestore(&cq->cq_lock, flags);
diff --git a/drivers/infiniband/hw/qib/qib_debugfs.c b/drivers/infiniband/hw/qib/qib_debugfs.c
index 799a0c3..6abd3ed 100644
--- a/drivers/infiniband/hw/qib/qib_debugfs.c
+++ b/drivers/infiniband/hw/qib/qib_debugfs.c
@@ -193,6 +193,7 @@
struct qib_qp_iter *iter;
loff_t n = *pos;
+ rcu_read_lock();
iter = qib_qp_iter_init(s->private);
if (!iter)
return NULL;
@@ -224,7 +225,7 @@
static void _qp_stats_seq_stop(struct seq_file *s, void *iter_ptr)
{
- /* nothing for now */
+ rcu_read_unlock();
}
static int _qp_stats_seq_show(struct seq_file *s, void *iter_ptr)
diff --git a/drivers/infiniband/hw/qib/qib_qp.c b/drivers/infiniband/hw/qib/qib_qp.c
index 7fcc150..6ddc026 100644
--- a/drivers/infiniband/hw/qib/qib_qp.c
+++ b/drivers/infiniband/hw/qib/qib_qp.c
@@ -1325,7 +1325,6 @@
struct qib_qp *pqp = iter->qp;
struct qib_qp *qp;
- rcu_read_lock();
for (; n < dev->qp_table_size; n++) {
if (pqp)
qp = rcu_dereference(pqp->next);
@@ -1333,18 +1332,11 @@
qp = rcu_dereference(dev->qp_table[n]);
pqp = qp;
if (qp) {
- if (iter->qp)
- atomic_dec(&iter->qp->refcount);
- atomic_inc(&qp->refcount);
- rcu_read_unlock();
iter->qp = qp;
iter->n = n;
return 0;
}
}
- rcu_read_unlock();
- if (iter->qp)
- atomic_dec(&iter->qp->refcount);
return ret;
}
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 2bc1d2b..74f90b2 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -52,7 +52,7 @@
* Call with current->mm->mmap_sem held.
*/
static int __qib_get_user_pages(unsigned long start_page, size_t num_pages,
- struct page **p, struct vm_area_struct **vma)
+ struct page **p)
{
unsigned long lock_limit;
size_t got;
@@ -69,7 +69,7 @@
ret = get_user_pages(current, current->mm,
start_page + got * PAGE_SIZE,
num_pages - got, 1, 1,
- p + got, vma);
+ p + got, NULL);
if (ret < 0)
goto bail_release;
}
@@ -136,7 +136,7 @@
down_write(&current->mm->mmap_sem);
- ret = __qib_get_user_pages(start_page, num_pages, p, NULL);
+ ret = __qib_get_user_pages(start_page, num_pages, p);
up_write(&current->mm->mmap_sem);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
index 3edce61..d7562be 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib.h
+++ b/drivers/infiniband/ulp/ipoib/ipoib.h
@@ -131,6 +131,12 @@
u8 hwaddr[INFINIBAND_ALEN];
};
+static inline struct ipoib_cb *ipoib_skb_cb(const struct sk_buff *skb)
+{
+ BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct ipoib_cb));
+ return (struct ipoib_cb *)skb->cb;
+}
+
/* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */
struct ipoib_mcast {
struct ib_sa_mcmember_rec mcmember;
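
ipoib_skb_cb() adds a compile-time check that the private struct still fits in the 48-byte skb->cb scratch area; growing it past that would silently overwrite neighbouring sk_buff state. The same guard works for any driver-private cb struct (my_cb is illustrative):

    struct my_cb {
            unsigned long misc;
            u8 hwaddr[20];
    };

    static inline struct my_cb *my_skb_cb(const struct sk_buff *skb)
    {
            BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct my_cb));
            return (struct my_cb *)skb->cb;
    }
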
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 1310acf..13e6e04 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -716,7 +716,7 @@
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_neigh *neigh;
- struct ipoib_cb *cb = (struct ipoib_cb *) skb->cb;
+ struct ipoib_cb *cb = ipoib_skb_cb(skb);
struct ipoib_header *header;
unsigned long flags;
@@ -813,7 +813,7 @@
const void *daddr, const void *saddr, unsigned len)
{
struct ipoib_header *header;
- struct ipoib_cb *cb = (struct ipoib_cb *) skb->cb;
+ struct ipoib_cb *cb = ipoib_skb_cb(skb);
header = (struct ipoib_header *) skb_push(skb, sizeof *header);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index d4e0057..ffb83b5 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -529,21 +529,13 @@
port_attr.state);
return;
}
+ priv->local_lid = port_attr.lid;
if (ib_query_gid(priv->ca, priv->port, 0, &priv->local_gid))
ipoib_warn(priv, "ib_query_gid() failed\n");
else
memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid));
- {
- struct ib_port_attr attr;
-
- if (!ib_query_port(priv->ca, priv->port, &attr))
- priv->local_lid = attr.lid;
- else
- ipoib_warn(priv, "ib_query_port failed\n");
- }
-
if (!priv->broadcast) {
struct ipoib_mcast *broadcast;
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index 61ee91d..93ce62f 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -344,7 +344,6 @@
int is_leading)
{
struct iscsi_conn *conn = cls_conn->dd_data;
- struct iscsi_session *session;
struct iser_conn *ib_conn;
struct iscsi_endpoint *ep;
int error;
@@ -363,9 +362,17 @@
}
ib_conn = ep->dd_data;
- session = conn->session;
- if (iser_alloc_rx_descriptors(ib_conn, session))
- return -ENOMEM;
+ mutex_lock(&ib_conn->state_mutex);
+ if (ib_conn->state != ISER_CONN_UP) {
+ error = -EINVAL;
+ iser_err("iser_conn %p state is %d, teardown started\n",
+ ib_conn, ib_conn->state);
+ goto out;
+ }
+
+ error = iser_alloc_rx_descriptors(ib_conn, conn->session);
+ if (error)
+ goto out;
/* binds the iSER connection retrieved from the previously
* connected ep_handle to the iSCSI layer connection. exchanges
@@ -375,7 +382,9 @@
conn->dd_data = ib_conn;
ib_conn->iscsi_conn = conn;
- return 0;
+out:
+ mutex_unlock(&ib_conn->state_mutex);
+ return error;
}
static int
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.h b/drivers/infiniband/ulp/iser/iscsi_iser.h
index c877dad..9f0e0e3 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.h
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.h
@@ -69,7 +69,7 @@
#define DRV_NAME "iser"
#define PFX DRV_NAME ": "
-#define DRV_VER "1.4"
+#define DRV_VER "1.4.1"
#define iser_dbg(fmt, arg...) \
do { \
diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
index 3ef167f..3bfec4b 100644
--- a/drivers/infiniband/ulp/iser/iser_verbs.c
+++ b/drivers/infiniband/ulp/iser/iser_verbs.c
@@ -73,7 +73,7 @@
{
struct iser_cq_desc *cq_desc;
struct ib_device_attr *dev_attr = &device->dev_attr;
- int ret, i, j;
+ int ret, i;
ret = ib_query_device(device->ib_device, dev_attr);
if (ret) {
@@ -125,16 +125,20 @@
iser_cq_event_callback,
(void *)&cq_desc[i],
ISER_MAX_RX_CQ_LEN, i);
- if (IS_ERR(device->rx_cq[i]))
+ if (IS_ERR(device->rx_cq[i])) {
+ device->rx_cq[i] = NULL;
goto cq_err;
+ }
device->tx_cq[i] = ib_create_cq(device->ib_device,
NULL, iser_cq_event_callback,
(void *)&cq_desc[i],
ISER_MAX_TX_CQ_LEN, i);
- if (IS_ERR(device->tx_cq[i]))
+ if (IS_ERR(device->tx_cq[i])) {
+ device->tx_cq[i] = NULL;
goto cq_err;
+ }
if (ib_req_notify_cq(device->rx_cq[i], IB_CQ_NEXT_COMP))
goto cq_err;
@@ -160,14 +164,14 @@
handler_err:
ib_dereg_mr(device->mr);
dma_mr_err:
- for (j = 0; j < device->cqs_used; j++)
- tasklet_kill(&device->cq_tasklet[j]);
+ for (i = 0; i < device->cqs_used; i++)
+ tasklet_kill(&device->cq_tasklet[i]);
cq_err:
- for (j = 0; j < i; j++) {
- if (device->tx_cq[j])
- ib_destroy_cq(device->tx_cq[j]);
- if (device->rx_cq[j])
- ib_destroy_cq(device->rx_cq[j]);
+ for (i = 0; i < device->cqs_used; i++) {
+ if (device->tx_cq[i])
+ ib_destroy_cq(device->tx_cq[i]);
+ if (device->rx_cq[i])
+ ib_destroy_cq(device->rx_cq[i]);
}
ib_dealloc_pd(device->pd);
pd_err:
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index d4c7928..da8ff12 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -586,17 +586,12 @@
init_completion(&isert_conn->conn_wait);
init_completion(&isert_conn->conn_wait_comp_err);
kref_init(&isert_conn->conn_kref);
- kref_get(&isert_conn->conn_kref);
mutex_init(&isert_conn->conn_mutex);
spin_lock_init(&isert_conn->conn_lock);
INIT_LIST_HEAD(&isert_conn->conn_fr_pool);
cma_id->context = isert_conn;
isert_conn->conn_cm_id = cma_id;
- isert_conn->responder_resources = event->param.conn.responder_resources;
- isert_conn->initiator_depth = event->param.conn.initiator_depth;
- pr_debug("Using responder_resources: %u initiator_depth: %u\n",
- isert_conn->responder_resources, isert_conn->initiator_depth);
isert_conn->login_buf = kzalloc(ISCSI_DEF_MAX_RECV_SEG_LEN +
ISER_RX_LOGIN_SIZE, GFP_KERNEL);
@@ -643,6 +638,12 @@
goto out_rsp_dma_map;
}
+ /* Set max inflight RDMA READ requests */
+ isert_conn->initiator_depth = min_t(u8,
+ event->param.conn.initiator_depth,
+ device->dev_attr.max_qp_init_rd_atom);
+ pr_debug("Using initiator_depth: %u\n", isert_conn->initiator_depth);
+
isert_conn->conn_device = device;
isert_conn->conn_pd = ib_alloc_pd(isert_conn->conn_device->ib_device);
if (IS_ERR(isert_conn->conn_pd)) {
@@ -746,7 +747,9 @@
static void
isert_connected_handler(struct rdma_cm_id *cma_id)
{
- return;
+ struct isert_conn *isert_conn = cma_id->context;
+
+ kref_get(&isert_conn->conn_kref);
}
static void
@@ -798,7 +801,6 @@
wake_up:
complete(&isert_conn->conn_wait);
- isert_put_conn(isert_conn);
}
static void
@@ -3067,7 +3069,6 @@
int ret;
memset(&cp, 0, sizeof(struct rdma_conn_param));
- cp.responder_resources = isert_conn->responder_resources;
cp.initiator_depth = isert_conn->initiator_depth;
cp.retry_count = 7;
cp.rnr_retry_count = 7;
@@ -3215,7 +3216,7 @@
pr_debug("isert_wait_conn: Starting \n");
mutex_lock(&isert_conn->conn_mutex);
- if (isert_conn->conn_cm_id) {
+ if (isert_conn->conn_cm_id && !isert_conn->disconnect) {
pr_debug("Calling rdma_disconnect from isert_wait_conn\n");
rdma_disconnect(isert_conn->conn_cm_id);
}
@@ -3234,6 +3235,7 @@
wait_for_completion(&isert_conn->conn_wait_comp_err);
wait_for_completion(&isert_conn->conn_wait);
+ isert_put_conn(isert_conn);
}
static void isert_free_conn(struct iscsi_conn *conn)
diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
index 2dd1d0d..6f5d795 100644
--- a/drivers/input/keyboard/atkbd.c
+++ b/drivers/input/keyboard/atkbd.c
@@ -1791,14 +1791,6 @@
{
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
- DMI_MATCH(DMI_PRODUCT_NAME, "LW25-B7HV"),
- },
- .callback = atkbd_deactivate_fixup,
- },
- {
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
- DMI_MATCH(DMI_PRODUCT_NAME, "P1-J273B"),
},
.callback = atkbd_deactivate_fixup,
},
diff --git a/drivers/input/keyboard/cap1106.c b/drivers/input/keyboard/cap1106.c
index 180b184..d70b65a 100644
--- a/drivers/input/keyboard/cap1106.c
+++ b/drivers/input/keyboard/cap1106.c
@@ -33,8 +33,8 @@
#define CAP1106_REG_SENSOR_CONFIG 0x22
#define CAP1106_REG_SENSOR_CONFIG2 0x23
#define CAP1106_REG_SAMPLING_CONFIG 0x24
-#define CAP1106_REG_CALIBRATION 0x25
-#define CAP1106_REG_INT_ENABLE 0x26
+#define CAP1106_REG_CALIBRATION 0x26
+#define CAP1106_REG_INT_ENABLE 0x27
#define CAP1106_REG_REPEAT_RATE 0x28
#define CAP1106_REG_MT_CONFIG 0x2a
#define CAP1106_REG_MT_PATTERN_CONFIG 0x2b
diff --git a/drivers/input/keyboard/matrix_keypad.c b/drivers/input/keyboard/matrix_keypad.c
index 8d2e19e..e651fa6 100644
--- a/drivers/input/keyboard/matrix_keypad.c
+++ b/drivers/input/keyboard/matrix_keypad.c
@@ -332,23 +332,24 @@
}
if (pdata->clustered_irq > 0) {
- err = request_irq(pdata->clustered_irq,
+ err = request_any_context_irq(pdata->clustered_irq,
matrix_keypad_interrupt,
pdata->clustered_irq_flags,
"matrix-keypad", keypad);
- if (err) {
+ if (err < 0) {
dev_err(&pdev->dev,
"Unable to acquire clustered interrupt\n");
goto err_free_rows;
}
} else {
for (i = 0; i < pdata->num_row_gpios; i++) {
- err = request_irq(gpio_to_irq(pdata->row_gpios[i]),
+ err = request_any_context_irq(
+ gpio_to_irq(pdata->row_gpios[i]),
matrix_keypad_interrupt,
IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING,
"matrix-keypad", keypad);
- if (err) {
+ if (err < 0) {
dev_err(&pdev->dev,
"Unable to acquire interrupt for GPIO line %i\n",
pdata->row_gpios[i]);
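
The matrix_keypad conversion is subtle: request_any_context_irq() returns a positive value on success (IRQC_IS_HARDIRQ or IRQC_IS_NESTED, depending on whether the line needs a threaded handler), so the old 'if (err)' test would have treated success as failure. Hence both call sites now check 'err < 0':

    /* Success is a positive enum value here, unlike request_irq(). */
    err = request_any_context_irq(irq, matrix_keypad_interrupt,
                                  IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
                                  "matrix-keypad", keypad);
    if (err < 0)                    /* 'if (err)' would reject success */
            goto err_free_rows;
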
diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
index a956b98..35a49bf 100644
--- a/drivers/input/mouse/alps.c
+++ b/drivers/input/mouse/alps.c
@@ -2373,6 +2373,10 @@
dev2->keybit[BIT_WORD(BTN_LEFT)] =
BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) | BIT_MASK(BTN_RIGHT);
+ __set_bit(INPUT_PROP_POINTER, dev2->propbit);
+ if (priv->flags & ALPS_DUALPOINT)
+ __set_bit(INPUT_PROP_POINTING_STICK, dev2->propbit);
+
if (input_register_device(priv->dev2))
goto init_fail;
diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
index da51738..06fc6e7 100644
--- a/drivers/input/mouse/elantech.c
+++ b/drivers/input/mouse/elantech.c
@@ -1331,6 +1331,13 @@
if (param[1] == 0)
return true;
+ /*
+ * Some models have a revision higher then 20. Meaning param[2] may
+ * be 10 or 20, skip the rates check for these.
+ */
+ if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40)
+ return true;
+
for (i = 0; i < ARRAY_SIZE(rates); i++)
if (param[2] == rates[i])
return false;
@@ -1607,6 +1614,10 @@
tp_dev->keybit[BIT_WORD(BTN_LEFT)] =
BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) |
BIT_MASK(BTN_RIGHT);
+
+ __set_bit(INPUT_PROP_POINTER, tp_dev->propbit);
+ __set_bit(INPUT_PROP_POINTING_STICK, tp_dev->propbit);
+
error = input_register_device(etd->tp_dev);
if (error < 0)
goto init_fail_tp_reg;
diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
index cff065f..b4e1f01 100644
--- a/drivers/input/mouse/psmouse-base.c
+++ b/drivers/input/mouse/psmouse-base.c
@@ -670,6 +670,8 @@
__set_bit(REL_X, input_dev->relbit);
__set_bit(REL_Y, input_dev->relbit);
+ __set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+
psmouse->set_rate = psmouse_set_rate;
psmouse->set_resolution = psmouse_set_resolution;
psmouse->poll = psmouse_poll;
diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
index e8573c6..fd23181 100644
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -629,10 +629,61 @@
((buf[0] & 0x04) >> 1) |
((buf[3] & 0x04) >> 2));
+ if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) ||
+ SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) &&
+ hw->w == 2) {
+ synaptics_parse_agm(buf, priv, hw);
+ return 1;
+ }
+
+ hw->x = (((buf[3] & 0x10) << 8) |
+ ((buf[1] & 0x0f) << 8) |
+ buf[4]);
+ hw->y = (((buf[3] & 0x20) << 7) |
+ ((buf[1] & 0xf0) << 4) |
+ buf[5]);
+ hw->z = buf[2];
+
hw->left = (buf[0] & 0x01) ? 1 : 0;
hw->right = (buf[0] & 0x02) ? 1 : 0;
- if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
+ if (SYN_CAP_FORCEPAD(priv->ext_cap_0c)) {
+ /*
+ * ForcePads, like Clickpads, use middle button
+ * bits to report primary button clicks.
+ * Unfortunately they report the primary button not
+ * only when the user presses on the pad above a certain
+ * threshold, but also when there is more than one
+ * finger on the touchpad, which interferes with
+ * our multi-finger gestures.
+ */
+ if (hw->z == 0) {
+ /* No contacts */
+ priv->press = priv->report_press = false;
+ } else if (hw->w >= 4 && ((buf[0] ^ buf[3]) & 0x01)) {
+ /*
+ * Single-finger touch with pressure above
+ * the threshold. If pressure stays long
+ * enough, we'll start reporting primary
+ * button. We rely on the device continuing
+ * to send data even if the finger does not
+ * move.
+ */
+ if (!priv->press) {
+ priv->press_start = jiffies;
+ priv->press = true;
+ } else if (time_after(jiffies,
+ priv->press_start +
+ msecs_to_jiffies(50))) {
+ priv->report_press = true;
+ }
+ } else {
+ priv->press = false;
+ }
+
+ hw->left = priv->report_press;
+
+ } else if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
/*
* Clickpad's button is transmitted as middle button,
* however, since it is primary button, we will report
@@ -651,21 +702,6 @@
hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0;
}
- if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) ||
- SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) &&
- hw->w == 2) {
- synaptics_parse_agm(buf, priv, hw);
- return 1;
- }
-
- hw->x = (((buf[3] & 0x10) << 8) |
- ((buf[1] & 0x0f) << 8) |
- buf[4]);
- hw->y = (((buf[3] & 0x20) << 7) |
- ((buf[1] & 0xf0) << 4) |
- buf[5]);
- hw->z = buf[2];
-
if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) &&
((buf[0] ^ buf[3]) & 0x02)) {
switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) {
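
The ForcePad branch converts sustained pressure into a click: the first over-threshold packet records a timestamp, and the primary button is reported only once the press has been held for 50 ms, relying on the pad streaming packets even for a motionless finger. The debounce core, reduced to a sketch:

    /* Jiffies-based press debounce (sketch of the logic above). */
    static bool debounce_press(unsigned long *start, bool *pressed)
    {
            if (!*pressed) {
                    *start = jiffies;       /* first over-threshold report */
                    *pressed = true;
                    return false;
            }
            return time_after(jiffies, *start + msecs_to_jiffies(50));
    }
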
diff --git a/drivers/input/mouse/synaptics.h b/drivers/input/mouse/synaptics.h
index e594af0..fb2e076 100644
--- a/drivers/input/mouse/synaptics.h
+++ b/drivers/input/mouse/synaptics.h
@@ -78,6 +78,11 @@
* 2 0x08 image sensor image sensor tracks 5 fingers, but only
* reports 2.
* 2 0x20 report min query 0x0f gives min coord reported
+ * 2 0x80 forcepad forcepad is a variant of clickpad that
+ * does not have physical buttons but rather
+ * uses pressure above a certain threshold to
+ * report primary clicks. Forcepads also have
+ * the clickpad bit set.
*/
#define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & 0x100000) /* 1-button ClickPad */
#define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & 0x000100) /* 2-button ClickPad */
@@ -86,6 +91,7 @@
#define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000)
#define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400)
#define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800)
+#define SYN_CAP_FORCEPAD(ex0c) ((ex0c) & 0x008000)
/* synaptics modes query bits */
#define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7))
@@ -177,6 +183,11 @@
*/
struct synaptics_hw_state agm;
bool agm_pending; /* new AGM packet received */
+
+ /* ForcePad handling */
+ unsigned long press_start;
+ bool press;
+ bool report_press;
};
void synaptics_module_init(void);
diff --git a/drivers/input/mouse/synaptics_usb.c b/drivers/input/mouse/synaptics_usb.c
index e122bda..6bcc018 100644
--- a/drivers/input/mouse/synaptics_usb.c
+++ b/drivers/input/mouse/synaptics_usb.c
@@ -387,6 +387,7 @@
__set_bit(EV_REL, input_dev->evbit);
__set_bit(REL_X, input_dev->relbit);
__set_bit(REL_Y, input_dev->relbit);
+ __set_bit(INPUT_PROP_POINTING_STICK, input_dev->propbit);
input_set_abs_params(input_dev, ABS_PRESSURE, 0, 127, 0, 0);
} else {
input_set_abs_params(input_dev, ABS_X,
@@ -401,6 +402,11 @@
__set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);
}
+ if (synusb->flags & SYNUSB_TOUCHSCREEN)
+ __set_bit(INPUT_PROP_DIRECT, input_dev->propbit);
+ else
+ __set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+
__set_bit(BTN_LEFT, input_dev->keybit);
__set_bit(BTN_RIGHT, input_dev->keybit);
__set_bit(BTN_MIDDLE, input_dev->keybit);
diff --git a/drivers/input/mouse/trackpoint.c b/drivers/input/mouse/trackpoint.c
index ca843b6..30c8b69 100644
--- a/drivers/input/mouse/trackpoint.c
+++ b/drivers/input/mouse/trackpoint.c
@@ -393,6 +393,9 @@
if ((button_info & 0x0f) >= 3)
__set_bit(BTN_MIDDLE, psmouse->dev->keybit);
+ __set_bit(INPUT_PROP_POINTER, psmouse->dev->propbit);
+ __set_bit(INPUT_PROP_POINTING_STICK, psmouse->dev->propbit);
+
trackpoint_defaults(psmouse->private);
error = trackpoint_power_on_reset(&psmouse->ps2dev);
diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
index 136b7b20..713e3dd 100644
--- a/drivers/input/serio/i8042-x86ia64io.h
+++ b/drivers/input/serio/i8042-x86ia64io.h
@@ -465,6 +465,13 @@
DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
},
},
+ {
+ /* Avatar AVIU-145A6 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"),
+ },
+ },
{ }
};
@@ -608,6 +615,14 @@
DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
},
},
+ {
+ /* Fujitsu U574 laptop */
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"),
+ },
+ },
{ }
};
diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c
index 3807c3e..f5a98af 100644
--- a/drivers/input/serio/i8042.c
+++ b/drivers/input/serio/i8042.c
@@ -1254,6 +1254,8 @@
} else {
snprintf(serio->name, sizeof(serio->name), "i8042 AUX%d port", idx);
snprintf(serio->phys, sizeof(serio->phys), I8042_MUX_PHYS_DESC, idx + 1);
+ strlcpy(serio->firmware_id, i8042_aux_firmware_id,
+ sizeof(serio->firmware_id));
}
port->serio = serio;
diff --git a/drivers/input/serio/serport.c b/drivers/input/serio/serport.c
index 0cb7ef5..69175b8 100644
--- a/drivers/input/serio/serport.c
+++ b/drivers/input/serio/serport.c
@@ -21,6 +21,7 @@
#include <linux/init.h>
#include <linux/serio.h>
#include <linux/tty.h>
+#include <linux/compat.h>
MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>");
MODULE_DESCRIPTION("Input device TTY line discipline");
@@ -198,29 +199,56 @@
return 0;
}
+static void serport_set_type(struct tty_struct *tty, unsigned long type)
+{
+ struct serport *serport = tty->disc_data;
+
+ serport->id.proto = type & 0x000000ff;
+ serport->id.id = (type & 0x0000ff00) >> 8;
+ serport->id.extra = (type & 0x00ff0000) >> 16;
+}
+
/*
* serport_ldisc_ioctl() allows to set the port protocol, and device ID
*/
-static int serport_ldisc_ioctl(struct tty_struct * tty, struct file * file, unsigned int cmd, unsigned long arg)
+static int serport_ldisc_ioctl(struct tty_struct *tty, struct file *file,
+ unsigned int cmd, unsigned long arg)
{
- struct serport *serport = (struct serport*) tty->disc_data;
- unsigned long type;
-
if (cmd == SPIOCSTYPE) {
+ unsigned long type;
+
if (get_user(type, (unsigned long __user *) arg))
return -EFAULT;
- serport->id.proto = type & 0x000000ff;
- serport->id.id = (type & 0x0000ff00) >> 8;
- serport->id.extra = (type & 0x00ff0000) >> 16;
-
+ serport_set_type(tty, type);
return 0;
}
return -EINVAL;
}
+#ifdef CONFIG_COMPAT
+#define COMPAT_SPIOCSTYPE _IOW('q', 0x01, compat_ulong_t)
+static long serport_ldisc_compat_ioctl(struct tty_struct *tty,
+ struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ if (cmd == COMPAT_SPIOCSTYPE) {
+ void __user *uarg = compat_ptr(arg);
+ compat_ulong_t compat_type;
+
+ if (get_user(compat_type, (compat_ulong_t __user *)uarg))
+ return -EFAULT;
+
+ serport_set_type(tty, compat_type);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+#endif
+
static void serport_ldisc_write_wakeup(struct tty_struct * tty)
{
struct serport *serport = (struct serport *) tty->disc_data;
@@ -243,6 +271,9 @@
.close = serport_ldisc_close,
.read = serport_ldisc_read,
.ioctl = serport_ldisc_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = serport_ldisc_compat_ioctl,
+#endif
.receive_buf = serport_ldisc_receive,
.write_wakeup = serport_ldisc_write_wakeup
};
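The compat handler above is needed because SPIOCSTYPE encodes sizeof(unsigned long) into the ioctl number, so a 32-bit process on a 64-bit kernel issues a different command value than the one the native handler matches. A small userspace sketch that makes the mismatch visible (runnable as-is on a 64-bit build; it only prints the two encodings):

    #include <stdio.h>
    #include <stdint.h>
    #include <linux/ioctl.h>

    /* Same encoding as SPIOCSTYPE, once with the native long and once
     * with the 4-byte layout a 32-bit (compat) task would compute. */
    #define SPIOCSTYPE_NATIVE    _IOW('q', 0x01, unsigned long)
    #define SPIOCSTYPE_COMPAT32  _IOW('q', 0x01, uint32_t)

    int main(void)
    {
        /* On 64-bit the embedded size differs (8 vs 4), so the two
         * command numbers differ and a .compat_ioctl handler must
         * recognise the 32-bit variant explicitly. */
        printf("native:   %#lx\n", (unsigned long)SPIOCSTYPE_NATIVE);
        printf("compat32: %#lx\n", (unsigned long)SPIOCSTYPE_COMPAT32);
        return 0;
    }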
diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
index db178ed..aaacf8b 100644
--- a/drivers/input/touchscreen/atmel_mxt_ts.c
+++ b/drivers/input/touchscreen/atmel_mxt_ts.c
@@ -837,7 +837,12 @@
count = data->msg_buf[0];
if (count == 0) {
- dev_warn(dev, "Interrupt triggered but zero messages\n");
+ /*
+ * This condition is caused by the CHG line being configured
+ * in Mode 0. It results in unnecessary I2C operations but it
+ * is benign.
+ */
+ dev_dbg(dev, "Interrupt triggered but zero messages\n");
return IRQ_NONE;
} else if (count > data->max_reportid) {
dev_err(dev, "T44 count %d exceeded max report id\n", count);
@@ -1374,11 +1379,16 @@
return 0;
}
+static void mxt_free_input_device(struct mxt_data *data)
+{
+ if (data->input_dev) {
+ input_unregister_device(data->input_dev);
+ data->input_dev = NULL;
+ }
+}
+
static void mxt_free_object_table(struct mxt_data *data)
{
- input_unregister_device(data->input_dev);
- data->input_dev = NULL;
-
kfree(data->object_table);
data->object_table = NULL;
kfree(data->msg_buf);
@@ -1957,11 +1967,13 @@
ret = mxt_lookup_bootloader_address(data, 0);
if (ret)
goto release_firmware;
+
+ mxt_free_input_device(data);
+ mxt_free_object_table(data);
} else {
enable_irq(data->irq);
}
- mxt_free_object_table(data);
reinit_completion(&data->bl_completion);
ret = mxt_check_bootloader(data, MXT_WAITING_BOOTLOAD_CMD, false);
@@ -2210,6 +2222,7 @@
return 0;
err_free_object:
+ mxt_free_input_device(data);
mxt_free_object_table(data);
err_free_irq:
free_irq(client->irq, data);
@@ -2224,7 +2237,7 @@
sysfs_remove_group(&client->dev.kobj, &mxt_attr_group);
free_irq(data->irq, data);
- input_unregister_device(data->input_dev);
+ mxt_free_input_device(data);
mxt_free_object_table(data);
kfree(data);
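mxt_free_input_device() is an idempotent teardown helper: several error and remove paths can call it without risking a double unregister, because the pointer is cleared after the first call. The same defensive pattern in plain C, with illustrative names:

    #include <stdlib.h>

    struct ctx {
        void *buf;
    };

    /* Safe to call any number of times: the first call frees the
     * buffer, later calls see NULL and do nothing. */
    static void ctx_free_buf(struct ctx *c)
    {
        if (c->buf) {
            free(c->buf);
            c->buf = NULL;
        }
    }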
diff --git a/drivers/input/touchscreen/wm9712.c b/drivers/input/touchscreen/wm9712.c
index 16b5211..705ffa1 100644
--- a/drivers/input/touchscreen/wm9712.c
+++ b/drivers/input/touchscreen/wm9712.c
@@ -41,7 +41,7 @@
*/
static int rpu = 8;
module_param(rpu, int, 0);
-MODULE_PARM_DESC(rpu, "Set internal pull up resitor for pen detect.");
+MODULE_PARM_DESC(rpu, "Set internal pull up resistor for pen detect.");
/*
* Set current used for pressure measurement.
diff --git a/drivers/input/touchscreen/wm9713.c b/drivers/input/touchscreen/wm9713.c
index 7405353..572a5a6 100644
--- a/drivers/input/touchscreen/wm9713.c
+++ b/drivers/input/touchscreen/wm9713.c
@@ -41,7 +41,7 @@
*/
static int rpu = 8;
module_param(rpu, int, 0);
-MODULE_PARM_DESC(rpu, "Set internal pull up resitor for pen detect.");
+MODULE_PARM_DESC(rpu, "Set internal pull up resistor for pen detect.");
/*
* Set current used for pressure measurement.
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index ca18d6d..a83cc2a 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -146,6 +146,8 @@
#define ID0_CTTW (1 << 14)
#define ID0_NUMIRPT_SHIFT 16
#define ID0_NUMIRPT_MASK 0xff
+#define ID0_NUMSIDB_SHIFT 9
+#define ID0_NUMSIDB_MASK 0xf
#define ID0_NUMSMRG_SHIFT 0
#define ID0_NUMSMRG_MASK 0xff
@@ -524,9 +526,18 @@
master->of_node = masterspec->np;
master->cfg.num_streamids = masterspec->args_count;
- for (i = 0; i < master->cfg.num_streamids; ++i)
- master->cfg.streamids[i] = masterspec->args[i];
+ for (i = 0; i < master->cfg.num_streamids; ++i) {
+ u16 streamid = masterspec->args[i];
+ if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
+ (streamid >= smmu->num_mapping_groups)) {
+ dev_err(dev,
+ "stream ID for master device %s greater than maximum allowed (%d)\n",
+ masterspec->np->name, smmu->num_mapping_groups);
+ return -ERANGE;
+ }
+ master->cfg.streamids[i] = streamid;
+ }
return insert_smmu_master(smmu, master);
}
@@ -623,7 +634,7 @@
if (fsr & FSR_IGN)
dev_err_ratelimited(smmu->dev,
- "Unexpected context fault (fsr 0x%u)\n",
+ "Unexpected context fault (fsr 0x%x)\n",
fsr);
fsynr = readl_relaxed(cb_base + ARM_SMMU_CB_FSYNR0);
@@ -752,6 +763,7 @@
reg = (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT);
break;
case 39:
+ case 40:
reg = (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT);
break;
case 42:
@@ -773,6 +785,7 @@
reg |= (TTBCR2_ADDR_36 << TTBCR2_PASIZE_SHIFT);
break;
case 39:
+ case 40:
reg |= (TTBCR2_ADDR_40 << TTBCR2_PASIZE_SHIFT);
break;
case 42:
@@ -843,8 +856,11 @@
reg |= TTBCR_EAE |
(TTBCR_SH_IS << TTBCR_SH0_SHIFT) |
(TTBCR_RGN_WBWA << TTBCR_ORGN0_SHIFT) |
- (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT) |
- (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+ (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
+
+ if (!stage1)
+ reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
/* MAIR0 (stage-1 only) */
@@ -868,10 +884,15 @@
static int arm_smmu_init_domain_context(struct iommu_domain *domain,
struct arm_smmu_device *smmu)
{
- int irq, ret, start;
+ int irq, start, ret = 0;
+ unsigned long flags;
struct arm_smmu_domain *smmu_domain = domain->priv;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
+ spin_lock_irqsave(&smmu_domain->lock, flags);
+ if (smmu_domain->smmu)
+ goto out_unlock;
+
if (smmu->features & ARM_SMMU_FEAT_TRANS_NESTED) {
/*
* We will likely want to change this if/when KVM gets
@@ -890,7 +911,7 @@
ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
smmu->num_context_banks);
if (IS_ERR_VALUE(ret))
- return ret;
+ goto out_unlock;
cfg->cbndx = ret;
if (smmu->version == 1) {
@@ -900,6 +921,10 @@
cfg->irptndx = cfg->cbndx;
}
+ ACCESS_ONCE(smmu_domain->smmu) = smmu;
+ arm_smmu_init_context_bank(smmu_domain);
+ spin_unlock_irqrestore(&smmu_domain->lock, flags);
+
irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
ret = request_irq(irq, arm_smmu_context_fault, IRQF_SHARED,
"arm-smmu-context-fault", domain);
@@ -907,15 +932,12 @@
dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",
cfg->irptndx, irq);
cfg->irptndx = INVALID_IRPTNDX;
- goto out_free_context;
}
- smmu_domain->smmu = smmu;
- arm_smmu_init_context_bank(smmu_domain);
return 0;
-out_free_context:
- __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+out_unlock:
+ spin_unlock_irqrestore(&smmu_domain->lock, flags);
return ret;
}
@@ -975,7 +997,6 @@
{
pgtable_t table = pmd_pgtable(*pmd);
- pgtable_page_dtor(table);
__free_page(table);
}
@@ -1108,6 +1129,9 @@
void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
struct arm_smmu_smr *smrs = cfg->smrs;
+ if (!smrs)
+ return;
+
/* Invalidate the SMRs before freeing back to the allocator */
for (i = 0; i < cfg->num_streamids; ++i) {
u8 idx = smrs[i].idx;
@@ -1120,20 +1144,6 @@
kfree(smrs);
}
-static void arm_smmu_bypass_stream_mapping(struct arm_smmu_device *smmu,
- struct arm_smmu_master_cfg *cfg)
-{
- int i;
- void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
-
- for (i = 0; i < cfg->num_streamids; ++i) {
- u16 sid = cfg->streamids[i];
-
- writel_relaxed(S2CR_TYPE_BYPASS,
- gr0_base + ARM_SMMU_GR0_S2CR(sid));
- }
-}
-
static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_master_cfg *cfg)
{
@@ -1160,23 +1170,30 @@
static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_master_cfg *cfg)
{
+ int i;
struct arm_smmu_device *smmu = smmu_domain->smmu;
+ void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
/*
* We *must* clear the S2CR first, because freeing the SMR means
* that it can be re-allocated immediately.
*/
- arm_smmu_bypass_stream_mapping(smmu, cfg);
+ for (i = 0; i < cfg->num_streamids; ++i) {
+ u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
+
+ writel_relaxed(S2CR_TYPE_BYPASS,
+ gr0_base + ARM_SMMU_GR0_S2CR(idx));
+ }
+
arm_smmu_master_free_smrs(smmu, cfg);
}
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
- int ret = -EINVAL;
+ int ret;
struct arm_smmu_domain *smmu_domain = domain->priv;
- struct arm_smmu_device *smmu;
+ struct arm_smmu_device *smmu, *dom_smmu;
struct arm_smmu_master_cfg *cfg;
- unsigned long flags;
smmu = dev_get_master_dev(dev)->archdata.iommu;
if (!smmu) {
@@ -1188,20 +1205,22 @@
* Sanity check the domain. We don't support domains across
* different SMMUs.
*/
- spin_lock_irqsave(&smmu_domain->lock, flags);
- if (!smmu_domain->smmu) {
+ dom_smmu = ACCESS_ONCE(smmu_domain->smmu);
+ if (!dom_smmu) {
/* Now that we have a master, we can finalise the domain */
ret = arm_smmu_init_domain_context(domain, smmu);
if (IS_ERR_VALUE(ret))
- goto err_unlock;
- } else if (smmu_domain->smmu != smmu) {
+ return ret;
+
+ dom_smmu = smmu_domain->smmu;
+ }
+
+ if (dom_smmu != smmu) {
dev_err(dev,
"cannot attach to SMMU %s whilst already attached to domain on SMMU %s\n",
- dev_name(smmu_domain->smmu->dev),
- dev_name(smmu->dev));
- goto err_unlock;
+ dev_name(smmu_domain->smmu->dev), dev_name(smmu->dev));
+ return -EINVAL;
}
- spin_unlock_irqrestore(&smmu_domain->lock, flags);
/* Looks ok, so add the device to the domain */
cfg = find_smmu_master_cfg(smmu_domain->smmu, dev);
@@ -1209,10 +1228,6 @@
return -ENODEV;
return arm_smmu_domain_add_master(smmu_domain, cfg);
-
-err_unlock:
- spin_unlock_irqrestore(&smmu_domain->lock, flags);
- return ret;
}
static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
@@ -1247,10 +1262,6 @@
return -ENOMEM;
arm_smmu_flush_pgtable(smmu, page_address(table), PAGE_SIZE);
- if (!pgtable_page_ctor(table)) {
- __free_page(table);
- return -ENOMEM;
- }
pmd_populate(NULL, pmd, table);
arm_smmu_flush_pgtable(smmu, pmd, sizeof(*pmd));
}
@@ -1626,7 +1637,7 @@
/* Mark all SMRn as invalid and all S2CRn as bypass */
for (i = 0; i < smmu->num_mapping_groups; ++i) {
- writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(i));
+ writel_relaxed(0, gr0_base + ARM_SMMU_GR0_SMR(i));
writel_relaxed(S2CR_TYPE_BYPASS,
gr0_base + ARM_SMMU_GR0_S2CR(i));
}
@@ -1761,6 +1772,9 @@
dev_notice(smmu->dev,
"\tstream matching with %u register groups, mask 0x%x",
smmu->num_mapping_groups, mask);
+ } else {
+ smmu->num_mapping_groups = (id >> ID0_NUMSIDB_SHIFT) &
+ ID0_NUMSIDB_MASK;
}
/* ID1 */
@@ -1794,11 +1808,16 @@
* Stage-1 output limited by stage-2 input size due to pgd
* allocation (PTRS_PER_PGD).
*/
+ if (smmu->features & ARM_SMMU_FEAT_TRANS_NESTED) {
#ifdef CONFIG_64BIT
- smmu->s1_output_size = min_t(unsigned long, VA_BITS, size);
+ smmu->s1_output_size = min_t(unsigned long, VA_BITS, size);
#else
- smmu->s1_output_size = min(32UL, size);
+ smmu->s1_output_size = min(32UL, size);
#endif
+ } else {
+ smmu->s1_output_size = min_t(unsigned long, PHYS_MASK_SHIFT,
+ size);
+ }
/* The stage-2 output mask is also applied for bypass */
size = arm_smmu_id_size_to_bits((id >> ID2_OAS_SHIFT) & ID2_OAS_MASK);
@@ -1889,6 +1908,10 @@
smmu->irqs[i] = irq;
}
+ err = arm_smmu_device_cfg_probe(smmu);
+ if (err)
+ return err;
+
i = 0;
smmu->masters = RB_ROOT;
while (!of_parse_phandle_with_args(dev->of_node, "mmu-masters",
@@ -1905,10 +1928,6 @@
}
dev_notice(dev, "registered %d master devices\n", i);
- err = arm_smmu_device_cfg_probe(smmu);
- if (err)
- goto out_put_masters;
-
parse_driver_options(smmu);
if (smmu->version > 1 &&
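The arm-smmu rework finalises a domain exactly once under smmu_domain->lock and publishes the result through ACCESS_ONCE so that attach paths can test it without the lock. A hedged sketch of that publish/observe pattern; the structures are simplified stand-ins, not the driver's types:

    #include <linux/spinlock.h>

    struct example_domain {
        spinlock_t lock;
        struct example_backend *backend;   /* NULL until finalised */
    };

    static int example_init_once(struct example_domain *d,
                                 struct example_backend *b)
    {
        unsigned long flags;

        spin_lock_irqsave(&d->lock, flags);
        if (d->backend)                    /* a racer finalised it already */
            goto out;
        /* ... allocate per-domain resources here ... */
        ACCESS_ONCE(d->backend) = b;       /* publish last, under the lock */
    out:
        spin_unlock_irqrestore(&d->lock, flags);
        return 0;
    }

Readers then do dom = ACCESS_ONCE(d->backend) and fall back to the locked init path only when it is still NULL, which is the shape of the new arm_smmu_attach_dev().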
diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 60ab474..06d268a 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -678,8 +678,7 @@
andd->device_name);
continue;
}
- acpi_bus_get_device(h, &adev);
- if (!adev) {
+ if (acpi_bus_get_device(h, &adev)) {
pr_err("Failed to get device for ACPI object %s\n",
andd->device_name);
continue;
diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c
index 61d1daf..56feed7 100644
--- a/drivers/iommu/fsl_pamu_domain.c
+++ b/drivers/iommu/fsl_pamu_domain.c
@@ -984,7 +984,7 @@
struct iommu_group *group = ERR_PTR(-ENODEV);
struct pci_dev *pdev;
const u32 *prop;
- int ret, len;
+ int ret = 0, len;
/*
* For platform devices we allocate a separate group for
@@ -1007,7 +1007,13 @@
if (IS_ERR(group))
return PTR_ERR(group);
- ret = iommu_group_add_device(group, dev);
+ /*
+ * Check if device has already been added to an iommu group.
+ * Group could have already been created for a PCI device in
+ * the iommu_group_get_for_dev path.
+ */
+ if (!dev->iommu_group)
+ ret = iommu_group_add_device(group, dev);
iommu_group_put(group);
return ret;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index ac4adb3..0639b92 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -678,15 +678,17 @@
*/
struct iommu_group *iommu_group_get_for_dev(struct device *dev)
{
- struct iommu_group *group = ERR_PTR(-EIO);
+ struct iommu_group *group;
int ret;
group = iommu_group_get(dev);
if (group)
return group;
- if (dev_is_pci(dev))
- group = iommu_group_get_for_pci_dev(to_pci_dev(dev));
+ if (!dev_is_pci(dev))
+ return ERR_PTR(-EINVAL);
+
+ group = iommu_group_get_for_pci_dev(to_pci_dev(dev));
if (IS_ERR(group))
return group;
diff --git a/drivers/irqchip/exynos-combiner.c b/drivers/irqchip/exynos-combiner.c
index f8636a6..5945223 100644
--- a/drivers/irqchip/exynos-combiner.c
+++ b/drivers/irqchip/exynos-combiner.c
@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/irqdomain.h>
#include <linux/irqchip/chained_irq.h>
+#include <linux/interrupt.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
diff --git a/drivers/irqchip/irq-crossbar.c b/drivers/irqchip/irq-crossbar.c
index 85c2985..bbbaf5d 100644
--- a/drivers/irqchip/irq-crossbar.c
+++ b/drivers/irqchip/irq-crossbar.c
@@ -220,7 +220,7 @@
of_property_read_u32_index(node,
"ti,irqs-reserved",
i, &entry);
- if (entry > max) {
+ if (entry >= max) {
pr_err("Invalid reserved entry\n");
ret = -EINVAL;
goto err_irq_map;
@@ -238,7 +238,7 @@
of_property_read_u32_index(node,
"ti,irqs-skip",
i, &entry);
- if (entry > max) {
+ if (entry >= max) {
pr_err("Invalid skip entry\n");
ret = -EINVAL;
goto err_irq_map;
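Both crossbar hunks fix the same off-by-one: a table with max entries has valid indices 0..max-1, so an index equal to max must be rejected as well. A minimal, runnable illustration:

    #include <stdio.h>

    #define MAX 4

    static int table[MAX];

    /* Valid indices are 0..MAX-1; the guard must be >=, not >. */
    static int set_entry(unsigned int idx, int val)
    {
        if (idx >= MAX)
            return -1;      /* idx == MAX is one past the end */
        table[idx] = val;
        return 0;
    }

    int main(void)
    {
        printf("set_entry(4, ...) -> %d (rejected)\n", set_entry(4, 1));
        return 0;
    }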
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 57eaa5a..a0698b4 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -36,7 +36,7 @@
struct gic_chip_data {
void __iomem *dist_base;
void __iomem **redist_base;
- void __percpu __iomem **rdist;
+ void __iomem * __percpu *rdist;
struct irq_domain *domain;
u64 redist_stride;
u32 redist_regions;
@@ -104,7 +104,7 @@
}
/* Low level accessors */
-static u64 gic_read_iar(void)
+static u64 __maybe_unused gic_read_iar(void)
{
u64 irqstat;
@@ -112,24 +112,24 @@
return irqstat;
}
-static void gic_write_pmr(u64 val)
+static void __maybe_unused gic_write_pmr(u64 val)
{
asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" (val));
}
-static void gic_write_ctlr(u64 val)
+static void __maybe_unused gic_write_ctlr(u64 val)
{
asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" (val));
isb();
}
-static void gic_write_grpen1(u64 val)
+static void __maybe_unused gic_write_grpen1(u64 val)
{
asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" (val));
isb();
}
-static void gic_write_sgi1r(u64 val)
+static void __maybe_unused gic_write_sgi1r(u64 val)
{
asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val));
}
@@ -200,19 +200,6 @@
rwp_wait();
}
-static int gic_peek_irq(struct irq_data *d, u32 offset)
-{
- u32 mask = 1 << (gic_irq(d) % 32);
- void __iomem *base;
-
- if (gic_irq_in_rdist(d))
- base = gic_data_rdist_sgi_base();
- else
- base = gic_data.dist_base;
-
- return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask);
-}
-
static void gic_mask_irq(struct irq_data *d)
{
gic_poke_irq(d, GICD_ICENABLER);
@@ -401,6 +388,19 @@
}
#ifdef CONFIG_SMP
+static int gic_peek_irq(struct irq_data *d, u32 offset)
+{
+ u32 mask = 1 << (gic_irq(d) % 32);
+ void __iomem *base;
+
+ if (gic_irq_in_rdist(d))
+ base = gic_data_rdist_sgi_base();
+ else
+ base = gic_data.dist_base;
+
+ return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask);
+}
+
static int gic_secondary_init(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 4b959e6..dda6dbc 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -867,7 +867,7 @@
return 0;
}
-const struct irq_domain_ops gic_default_routable_irq_domain_ops = {
+static const struct irq_domain_ops gic_default_routable_irq_domain_ops = {
.map = gic_routable_irq_domain_map,
.unmap = gic_routable_irq_domain_unmap,
.xlate = gic_routable_irq_domain_xlate,
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index 1af40ee..7130505 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -895,8 +895,8 @@
struct cache *cache = mg->cache;
if (mg->writeback) {
- cell_defer(cache, mg->old_ocell, false);
clear_dirty(cache, mg->old_oblock, mg->cblock);
+ cell_defer(cache, mg->old_ocell, false);
cleanup_migration(mg);
return;
@@ -951,13 +951,13 @@
}
} else {
+ clear_dirty(cache, mg->new_oblock, mg->cblock);
if (mg->requeue_holder)
cell_defer(cache, mg->new_ocell, true);
else {
bio_endio(mg->new_ocell->holder, 0);
cell_defer(cache, mg->new_ocell, false);
}
- clear_dirty(cache, mg->new_oblock, mg->cblock);
cleanup_migration(mg);
}
}
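Both dm-cache hunks reorder a metadata update against cell release: cell_defer() lets deferred bios run, so the dirty state must already be updated when they do, or a racing bio can act on stale state. The general rule, as a short sketch with illustrative names:

    /* Sketch: make shared state consistent before releasing waiters.
     * After example_release_waiters() other contexts may inspect the
     * object, so the update in step 1 must already be visible. */
    static void example_finish_migration(struct example_obj *o)
    {
        example_clear_dirty(o);        /* 1: update state ...      */
        example_release_waiters(o);    /* 2: ... then wake readers */
    }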
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index d7690f8..55de4f6 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -540,11 +540,7 @@
has_nonrot_disk = 0;
choose_next_idle = 0;
- if (conf->mddev->recovery_cp < MaxSector &&
- (this_sector + sectors >= conf->next_resync))
- choose_first = 1;
- else
- choose_first = 0;
+ choose_first = (conf->mddev->recovery_cp < this_sector + sectors);
for (disk = 0 ; disk < conf->raid_disks * 2 ; disk++) {
sector_t dist;
@@ -831,7 +827,7 @@
* there is no normal IO happening. It must arrange to call
* lower_barrier when the particular background IO completes.
*/
-static void raise_barrier(struct r1conf *conf)
+static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
{
spin_lock_irq(&conf->resync_lock);
@@ -841,6 +837,7 @@
/* block any new IO from starting */
conf->barrier++;
+ conf->next_resync = sector_nr;
/* For these conditions we must wait:
* A: while the array is in frozen state
@@ -849,14 +846,17 @@
* C: next_resync + RESYNC_SECTORS > start_next_window, meaning
* next resync will reach to the window which normal bios are
* handling.
+ * D: while there are any active requests in the current window.
*/
wait_event_lock_irq(conf->wait_barrier,
!conf->array_frozen &&
conf->barrier < RESYNC_DEPTH &&
+ conf->current_window_requests == 0 &&
(conf->start_next_window >=
conf->next_resync + RESYNC_SECTORS),
conf->resync_lock);
+ conf->nr_pending++;
spin_unlock_irq(&conf->resync_lock);
}
@@ -866,6 +866,7 @@
BUG_ON(conf->barrier <= 0);
spin_lock_irqsave(&conf->resync_lock, flags);
conf->barrier--;
+ conf->nr_pending--;
spin_unlock_irqrestore(&conf->resync_lock, flags);
wake_up(&conf->wait_barrier);
}
@@ -877,12 +878,10 @@
if (conf->array_frozen || !bio)
wait = true;
else if (conf->barrier && bio_data_dir(bio) == WRITE) {
- if (conf->next_resync < RESYNC_WINDOW_SECTORS)
- wait = true;
- else if ((conf->next_resync - RESYNC_WINDOW_SECTORS
- >= bio_end_sector(bio)) ||
- (conf->next_resync + NEXT_NORMALIO_DISTANCE
- <= bio->bi_iter.bi_sector))
+ if ((conf->mddev->curr_resync_completed
+ >= bio_end_sector(bio)) ||
+ (conf->next_resync + NEXT_NORMALIO_DISTANCE
+ <= bio->bi_iter.bi_sector))
wait = false;
else
wait = true;
@@ -919,8 +918,8 @@
}
if (bio && bio_data_dir(bio) == WRITE) {
- if (conf->next_resync + NEXT_NORMALIO_DISTANCE
- <= bio->bi_iter.bi_sector) {
+ if (bio->bi_iter.bi_sector >=
+ conf->mddev->curr_resync_completed) {
if (conf->start_next_window == MaxSector)
conf->start_next_window =
conf->next_resync +
@@ -1186,6 +1185,7 @@
atomic_read(&bitmap->behind_writes) == 0);
}
r1_bio->read_disk = rdisk;
+ r1_bio->start_next_window = 0;
read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);
bio_trim(read_bio, r1_bio->sector - bio->bi_iter.bi_sector,
@@ -1548,8 +1548,13 @@
mempool_destroy(conf->r1buf_pool);
conf->r1buf_pool = NULL;
+ spin_lock_irq(&conf->resync_lock);
conf->next_resync = 0;
conf->start_next_window = MaxSector;
+ conf->current_window_requests +=
+ conf->next_window_requests;
+ conf->next_window_requests = 0;
+ spin_unlock_irq(&conf->resync_lock);
}
static int raid1_spare_active(struct mddev *mddev)
@@ -2150,7 +2155,7 @@
d--;
rdev = conf->mirrors[d].rdev;
if (rdev &&
- test_bit(In_sync, &rdev->flags))
+ !test_bit(Faulty, &rdev->flags))
r1_sync_page_io(rdev, sect, s,
conf->tmppage, WRITE);
}
@@ -2162,7 +2167,7 @@
d--;
rdev = conf->mirrors[d].rdev;
if (rdev &&
- test_bit(In_sync, &rdev->flags)) {
+ !test_bit(Faulty, &rdev->flags)) {
if (r1_sync_page_io(rdev, sect, s,
conf->tmppage, READ)) {
atomic_add(s, &rdev->corrected_errors);
@@ -2541,9 +2546,8 @@
bitmap_cond_end_sync(mddev->bitmap, sector_nr);
r1_bio = mempool_alloc(conf->r1buf_pool, GFP_NOIO);
- raise_barrier(conf);
- conf->next_resync = sector_nr;
+ raise_barrier(conf, sector_nr);
rcu_read_lock();
/*
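raise_barrier() now receives the resync sector and updates conf->next_resync under conf->resync_lock, so wait_barrier()'s window checks always see a value consistent with the barrier count; the resync request also counts itself in nr_pending. A much-simplified, hedged sketch of a barrier of this shape (the real code additionally tracks per-window request counts):

    /* Resync raises the barrier; normal I/O waits while it is up.
     * wait_event_lock_irq() drops the lock while sleeping and retakes
     * it before re-testing the condition. */
    static void example_raise_barrier(struct example_conf *c, sector_t sector)
    {
        spin_lock_irq(&c->resync_lock);
        c->barrier++;
        c->next_resync = sector;    /* published under the lock */
        wait_event_lock_irq(c->wait_barrier,
                            c->barrier < RESYNC_DEPTH &&
                            c->current_window_requests == 0,
                            c->resync_lock);
        c->nr_pending++;            /* resync counts as pending I/O too */
        spin_unlock_irq(&c->resync_lock);
    }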
diff --git a/drivers/media/Kconfig b/drivers/media/Kconfig
index f60bad4..3c89fcb 100644
--- a/drivers/media/Kconfig
+++ b/drivers/media/Kconfig
@@ -182,7 +182,6 @@
depends on HAS_IOMEM
select I2C
select I2C_MUX
- select SPI
default y
help
By default, a media driver auto-selects all possible ancillary
diff --git a/drivers/media/common/cx2341x.c b/drivers/media/common/cx2341x.c
index 103ef6b..be76315 100644
--- a/drivers/media/common/cx2341x.c
+++ b/drivers/media/common/cx2341x.c
@@ -1490,6 +1490,7 @@
{
struct v4l2_ctrl_config cfg;
+ memset(&cfg, 0, sizeof(cfg));
cx2341x_ctrl_fill(id, &cfg.name, &cfg.type, &min, &max, &step, &def, &cfg.flags);
cfg.ops = &cx2341x_ops;
cfg.id = id;
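The cx2341x fix zeroes a stack-allocated v4l2_ctrl_config before filling in a few fields; without it the remaining fields (ops pointers, flags, strings) hold stack garbage. Two equivalent ways to get fully defined contents in C, using an illustrative struct:

    #include <string.h>

    struct example_cfg {
        int id;
        int flags;
        const void *ops;
    };

    void example(void)
    {
        /* Option 1: explicit memset, as the patch does. */
        struct example_cfg a;
        memset(&a, 0, sizeof(a));
        a.id = 1;

        /* Option 2: a designated initializer zero-fills every
         * member that is not named. */
        struct example_cfg b = { .id = 1 };

        (void)a;
        (void)b;
    }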
diff --git a/drivers/media/dvb-core/dvb-usb-ids.h b/drivers/media/dvb-core/dvb-usb-ids.h
index 5135a09..12ce19c 100644
--- a/drivers/media/dvb-core/dvb-usb-ids.h
+++ b/drivers/media/dvb-core/dvb-usb-ids.h
@@ -280,6 +280,8 @@
#define USB_PID_PCTV_400E 0x020f
#define USB_PID_PCTV_450E 0x0222
#define USB_PID_PCTV_452E 0x021f
+#define USB_PID_PCTV_78E 0x025a
+#define USB_PID_PCTV_79E 0x0262
#define USB_PID_REALTEK_RTL2831U 0x2831
#define USB_PID_REALTEK_RTL2832U 0x2832
#define USB_PID_TECHNOTREND_CONNECT_S2_3600 0x3007
diff --git a/drivers/media/dvb-frontends/af9033.c b/drivers/media/dvb-frontends/af9033.c
index be4bec2..5c90ea6 100644
--- a/drivers/media/dvb-frontends/af9033.c
+++ b/drivers/media/dvb-frontends/af9033.c
@@ -314,6 +314,19 @@
goto err;
}
+ /* feed clock to RF tuner */
+ switch (state->cfg.tuner) {
+ case AF9033_TUNER_IT9135_38:
+ case AF9033_TUNER_IT9135_51:
+ case AF9033_TUNER_IT9135_52:
+ case AF9033_TUNER_IT9135_60:
+ case AF9033_TUNER_IT9135_61:
+ case AF9033_TUNER_IT9135_62:
+ ret = af9033_wr_reg(state, 0x80fba8, 0x00);
+ if (ret < 0)
+ goto err;
+ }
+
/* settings for TS interface */
if (state->cfg.ts_mode == AF9033_TS_MODE_USB) {
ret = af9033_wr_reg_mask(state, 0x80f9a5, 0x00, 0x01);
diff --git a/drivers/media/dvb-frontends/af9033_priv.h b/drivers/media/dvb-frontends/af9033_priv.h
index fc2ad58..ded7b67 100644
--- a/drivers/media/dvb-frontends/af9033_priv.h
+++ b/drivers/media/dvb-frontends/af9033_priv.h
@@ -1418,7 +1418,7 @@
{ 0x800068, 0x0a },
{ 0x80006a, 0x03 },
{ 0x800070, 0x0a },
- { 0x800071, 0x05 },
+ { 0x800071, 0x0a },
{ 0x800072, 0x02 },
{ 0x800075, 0x8c },
{ 0x800076, 0x8c },
@@ -1484,7 +1484,6 @@
{ 0x800104, 0x02 },
{ 0x800105, 0xbe },
{ 0x800106, 0x00 },
- { 0x800109, 0x02 },
{ 0x800115, 0x0a },
{ 0x800116, 0x03 },
{ 0x80011a, 0xbe },
@@ -1510,7 +1509,6 @@
{ 0x80014b, 0x8c },
{ 0x80014d, 0xac },
{ 0x80014e, 0xc6 },
- { 0x80014f, 0x03 },
{ 0x800151, 0x1e },
{ 0x800153, 0xbc },
{ 0x800178, 0x09 },
@@ -1522,9 +1520,10 @@
{ 0x80018d, 0x5f },
{ 0x80018f, 0xa0 },
{ 0x800190, 0x5a },
- { 0x80ed02, 0xff },
- { 0x80ee42, 0xff },
- { 0x80ee82, 0xff },
+ { 0x800191, 0x00 },
+ { 0x80ed02, 0x40 },
+ { 0x80ee42, 0x40 },
+ { 0x80ee82, 0x40 },
{ 0x80f000, 0x0f },
{ 0x80f01f, 0x8c },
{ 0x80f020, 0x00 },
@@ -1699,7 +1698,6 @@
{ 0x800104, 0x02 },
{ 0x800105, 0xc8 },
{ 0x800106, 0x00 },
- { 0x800109, 0x02 },
{ 0x800115, 0x0a },
{ 0x800116, 0x03 },
{ 0x80011a, 0xc6 },
@@ -1725,7 +1723,6 @@
{ 0x80014b, 0x8c },
{ 0x80014d, 0xa8 },
{ 0x80014e, 0xc6 },
- { 0x80014f, 0x03 },
{ 0x800151, 0x28 },
{ 0x800153, 0xcc },
{ 0x800178, 0x09 },
@@ -1737,9 +1734,10 @@
{ 0x80018d, 0x5f },
{ 0x80018f, 0xfb },
{ 0x800190, 0x5c },
- { 0x80ed02, 0xff },
- { 0x80ee42, 0xff },
- { 0x80ee82, 0xff },
+ { 0x800191, 0x00 },
+ { 0x80ed02, 0x40 },
+ { 0x80ee42, 0x40 },
+ { 0x80ee82, 0x40 },
{ 0x80f000, 0x0f },
{ 0x80f01f, 0x8c },
{ 0x80f020, 0x00 },
diff --git a/drivers/media/dvb-frontends/cx24123.c b/drivers/media/dvb-frontends/cx24123.c
index 72fb583..7975c660 100644
--- a/drivers/media/dvb-frontends/cx24123.c
+++ b/drivers/media/dvb-frontends/cx24123.c
@@ -1095,6 +1095,7 @@
sizeof(state->tuner_i2c_adapter.name));
state->tuner_i2c_adapter.algo = &cx24123_tuner_i2c_algo;
state->tuner_i2c_adapter.algo_data = NULL;
+ state->tuner_i2c_adapter.dev.parent = i2c->dev.parent;
i2c_set_adapdata(&state->tuner_i2c_adapter, state);
if (i2c_add_adapter(&state->tuner_i2c_adapter) < 0) {
err("tuner i2c bus could not be initialized\n");
diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
index d4fa213..de88b98 100644
--- a/drivers/media/i2c/adv7604.c
+++ b/drivers/media/i2c/adv7604.c
@@ -2325,7 +2325,7 @@
v4l2_info(sd, "HDCP keys read: %s%s\n",
(hdmi_read(sd, 0x04) & 0x20) ? "yes" : "no",
(hdmi_read(sd, 0x04) & 0x10) ? "ERROR" : "");
- if (!is_hdmi(sd)) {
+ if (is_hdmi(sd)) {
bool audio_pll_locked = hdmi_read(sd, 0x04) & 0x01;
bool audio_sample_packet_detect = hdmi_read(sd, 0x18) & 0x01;
bool audio_mute = io_read(sd, 0x65) & 0x40;
diff --git a/drivers/media/i2c/smiapp/smiapp-core.c b/drivers/media/i2c/smiapp/smiapp-core.c
index 1eaf975..62acb10 100644
--- a/drivers/media/i2c/smiapp/smiapp-core.c
+++ b/drivers/media/i2c/smiapp/smiapp-core.c
@@ -1282,19 +1282,12 @@
mutex_lock(&sensor->power_mutex);
- /*
- * If the power count is modified from 0 to != 0 or from != 0
- * to 0, update the power state.
- */
- if (!sensor->power_count == !on)
- goto out;
-
- if (on) {
+ if (on && !sensor->power_count) {
/* Power on and perform initialisation. */
ret = smiapp_power_on(sensor);
if (ret < 0)
goto out;
- } else {
+ } else if (!on && sensor->power_count == 1) {
smiapp_power_off(sensor);
}
@@ -2572,7 +2565,7 @@
this->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
this->sd.internal_ops = &smiapp_internal_ops;
- this->sd.owner = NULL;
+ this->sd.owner = THIS_MODULE;
v4l2_set_subdevdata(&this->sd, client);
rval = media_entity_init(&this->sd.entity,
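The smiapp change replaces the opaque `!sensor->power_count == !on` test with explicit edge checks: power the sensor up only on the 0 -> 1 transition and down only on the 1 -> 0 transition. A hedged sketch of that s_power shape; the types and helpers are illustrative:

    /* Refcounted power handling: only the first user powers the
     * device on and only the last user powers it off. */
    static int example_s_power(struct example_sensor *s, int on)
    {
        int ret = 0;

        mutex_lock(&s->power_mutex);
        if (on && !s->power_count)
            ret = example_power_on(s);      /* 0 -> 1 */
        else if (!on && s->power_count == 1)
            example_power_off(s);           /* 1 -> 0 */
        if (!ret)
            s->power_count += on ? 1 : -1;
        mutex_unlock(&s->power_mutex);
        return ret;
    }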
diff --git a/drivers/media/pci/cx18/cx18-driver.c b/drivers/media/pci/cx18/cx18-driver.c
index 716bdc5..83f5074 100644
--- a/drivers/media/pci/cx18/cx18-driver.c
+++ b/drivers/media/pci/cx18/cx18-driver.c
@@ -1091,6 +1091,7 @@
setup.addr = ADDR_UNSET;
setup.type = cx->options.tuner;
setup.mode_mask = T_ANALOG_TV; /* matches TV tuners */
+ setup.config = NULL;
if (cx->options.radio > 0)
setup.mode_mask |= T_RADIO;
setup.tuner_callback = (setup.type == TUNER_XC2028) ?
diff --git a/drivers/media/radio/radio-miropcm20.c b/drivers/media/radio/radio-miropcm20.c
index 998919e..7b35e63 100644
--- a/drivers/media/radio/radio-miropcm20.c
+++ b/drivers/media/radio/radio-miropcm20.c
@@ -27,6 +27,7 @@
#include <linux/module.h>
#include <linux/init.h>
+#include <linux/io.h>
#include <linux/delay.h>
#include <linux/videodev2.h>
#include <linux/kthread.h>
diff --git a/drivers/media/tuners/tuner_it913x.c b/drivers/media/tuners/tuner_it913x.c
index 6f30d7e..3d83c42 100644
--- a/drivers/media/tuners/tuner_it913x.c
+++ b/drivers/media/tuners/tuner_it913x.c
@@ -396,6 +396,7 @@
struct i2c_adapter *i2c_adap, u8 i2c_addr, u8 config)
{
struct it913x_state *state = NULL;
+ int ret;
/* allocate memory for the internal state */
state = kzalloc(sizeof(struct it913x_state), GFP_KERNEL);
@@ -425,6 +426,11 @@
state->tuner_type = config;
state->firmware_ver = 1;
+ /* initial tuner RF configuration */
+ ret = it913x_wr_reg(state, PRO_DMOD, 0xec4c, 0x68);
+ if (ret < 0)
+ goto error;
+
fe->tuner_priv = state;
memcpy(&fe->ops.tuner_ops, &it913x_tuner_ops,
sizeof(struct dvb_tuner_ops));
diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
index 75ec1c6..c82beac 100644
--- a/drivers/media/usb/dvb-usb-v2/af9035.c
+++ b/drivers/media/usb/dvb-usb-v2/af9035.c
@@ -1575,6 +1575,10 @@
&af9035_props, "Leadtek WinFast DTV Dongle Dual", NULL) },
{ DVB_USB_DEVICE(USB_VID_HAUPPAUGE, 0xf900,
&af9035_props, "Hauppauge WinTV-MiniStick 2", NULL) },
+ { DVB_USB_DEVICE(USB_VID_PCTV, USB_PID_PCTV_78E,
+ &af9035_props, "PCTV 78e", RC_MAP_IT913X_V1) },
+ { DVB_USB_DEVICE(USB_VID_PCTV, USB_PID_PCTV_79E,
+ &af9035_props, "PCTV 79e", RC_MAP_IT913X_V2) },
{ }
};
MODULE_DEVICE_TABLE(usb, af9035_id_table);
diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
index 90dec29..29abc37 100644
--- a/drivers/media/usb/em28xx/em28xx-video.c
+++ b/drivers/media/usb/em28xx/em28xx-video.c
@@ -1342,7 +1342,7 @@
struct em28xx *dev = video_drvdata(file);
struct em28xx_v4l2 *v4l2 = dev->v4l2;
- if (v4l2->streaming_users > 0)
+ if (vb2_is_busy(&v4l2->vb_vidq))
return -EBUSY;
vidioc_try_fmt_vid_cap(file, priv, f);
@@ -1883,8 +1883,9 @@
return -EINVAL;
}
- em28xx_videodbg("open dev=%s type=%s\n",
- video_device_node_name(vdev), v4l2_type_names[fh_type]);
+ em28xx_videodbg("open dev=%s type=%s users=%d\n",
+ video_device_node_name(vdev), v4l2_type_names[fh_type],
+ v4l2->users);
if (mutex_lock_interruptible(&dev->lock))
return -ERESTARTSYS;
@@ -1897,9 +1898,7 @@
return ret;
}
- if (v4l2_fh_is_singular_file(filp)) {
- em28xx_videodbg("first opened filehandle, initializing device\n");
-
+ if (v4l2->users == 0) {
em28xx_set_mode(dev, EM28XX_ANALOG_MODE);
if (vdev->vfl_type != VFL_TYPE_RADIO)
@@ -1910,8 +1909,6 @@
* of some i2c devices
*/
em28xx_wake_i2c(dev);
- } else {
- em28xx_videodbg("further filehandles are already opened\n");
}
if (vdev->vfl_type == VFL_TYPE_RADIO) {
@@ -1921,6 +1918,7 @@
kref_get(&dev->ref);
kref_get(&v4l2->ref);
+ v4l2->users++;
mutex_unlock(&dev->lock);
@@ -2027,11 +2025,12 @@
struct em28xx_v4l2 *v4l2 = dev->v4l2;
int errCode;
+ em28xx_videodbg("users=%d\n", v4l2->users);
+
+ vb2_fop_release(filp);
mutex_lock(&dev->lock);
- if (v4l2_fh_is_singular_file(filp)) {
- em28xx_videodbg("last opened filehandle, shutting down device\n");
-
+ if (v4l2->users == 1) {
/* No sense to try to write to the device */
if (dev->disconnected)
goto exit;
@@ -2050,12 +2049,10 @@
em28xx_errdev("cannot change alternate number to "
"0 (error=%i)\n", errCode);
}
- } else {
- em28xx_videodbg("further opened filehandles left\n");
}
exit:
- vb2_fop_release(filp);
+ v4l2->users--;
kref_put(&v4l2->ref, em28xx_free_v4l2);
mutex_unlock(&dev->lock);
kref_put(&dev->ref, em28xx_free_device);
diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
index 84ef8ef..4360338 100644
--- a/drivers/media/usb/em28xx/em28xx.h
+++ b/drivers/media/usb/em28xx/em28xx.h
@@ -524,6 +524,7 @@
int sensor_yres;
int sensor_xtal;
+ int users; /* user count for exclusive use */
int streaming_users; /* number of actively streaming users */
u32 frequency; /* selected tuner frequency */
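em28xx switches from v4l2_fh_is_singular_file() to its own users counter because release now runs vb2_fop_release() before taking the device lock, after which the file handle can no longer tell whether it was the last opener. A hedged sketch of first-open/last-close bookkeeping, with illustrative types:

    /* Initialise hardware on the first open and shut it down on the
     * last close; `users` is protected by the device mutex. */
    static int example_open(struct example_dev *d)
    {
        mutex_lock(&d->lock);
        if (d->users == 0)
            example_hw_init(d);      /* first opener */
        d->users++;
        mutex_unlock(&d->lock);
        return 0;
    }

    static int example_release(struct example_dev *d)
    {
        mutex_lock(&d->lock);
        if (d->users == 1)
            example_hw_shutdown(d);  /* last opener */
        d->users--;
        mutex_unlock(&d->lock);
        return 0;
    }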
diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
index c359006..25d3ae2 100644
--- a/drivers/media/v4l2-core/videobuf2-core.c
+++ b/drivers/media/v4l2-core/videobuf2-core.c
@@ -971,6 +971,7 @@
* to the userspace.
*/
req->count = allocated_buffers;
+ q->waiting_for_buffers = !V4L2_TYPE_IS_OUTPUT(q->type);
return 0;
}
@@ -1018,6 +1019,7 @@
memset(q->plane_sizes, 0, sizeof(q->plane_sizes));
memset(q->alloc_ctx, 0, sizeof(q->alloc_ctx));
q->memory = create->memory;
+ q->waiting_for_buffers = !V4L2_TYPE_IS_OUTPUT(q->type);
}
num_buffers = min(create->count, VIDEO_MAX_FRAME - q->num_buffers);
@@ -1130,7 +1132,7 @@
*/
void *vb2_plane_cookie(struct vb2_buffer *vb, unsigned int plane_no)
{
- if (plane_no > vb->num_planes || !vb->planes[plane_no].mem_priv)
+ if (plane_no >= vb->num_planes || !vb->planes[plane_no].mem_priv)
return NULL;
return call_ptr_memop(vb, cookie, vb->planes[plane_no].mem_priv);
@@ -1165,13 +1167,10 @@
if (WARN_ON(vb->state != VB2_BUF_STATE_ACTIVE))
return;
- if (!q->start_streaming_called) {
- if (WARN_ON(state != VB2_BUF_STATE_QUEUED))
- state = VB2_BUF_STATE_QUEUED;
- } else if (WARN_ON(state != VB2_BUF_STATE_DONE &&
- state != VB2_BUF_STATE_ERROR)) {
- state = VB2_BUF_STATE_ERROR;
- }
+ if (WARN_ON(state != VB2_BUF_STATE_DONE &&
+ state != VB2_BUF_STATE_ERROR &&
+ state != VB2_BUF_STATE_QUEUED))
+ state = VB2_BUF_STATE_ERROR;
#ifdef CONFIG_VIDEO_ADV_DEBUG
/*
@@ -1762,6 +1761,12 @@
q->start_streaming_called = 0;
dprintk(1, "driver refused to start streaming\n");
+ /*
+ * If you see this warning, then the driver isn't cleaning up properly
+ * after a failed start_streaming(). See the start_streaming()
+ * documentation in videobuf2-core.h for more information on how buffers
+ * should be returned to vb2 in start_streaming().
+ */
if (WARN_ON(atomic_read(&q->owned_by_drv_count))) {
unsigned i;
@@ -1777,6 +1782,12 @@
/* Must be zero now */
WARN_ON(atomic_read(&q->owned_by_drv_count));
}
+ /*
+ * If done_list is not empty, then start_streaming() didn't call
+ * vb2_buffer_done(vb, VB2_BUF_STATE_QUEUED) but STATE_ERROR or
+ * STATE_DONE.
+ */
+ WARN_ON(!list_empty(&q->done_list));
return ret;
}
@@ -1812,6 +1823,7 @@
*/
list_add_tail(&vb->queued_entry, &q->queued_list);
q->queued_count++;
+ q->waiting_for_buffers = false;
vb->state = VB2_BUF_STATE_QUEUED;
if (V4L2_TYPE_IS_OUTPUT(q->type)) {
/*
@@ -2123,6 +2135,12 @@
if (q->start_streaming_called)
call_void_qop(q, stop_streaming, q);
+ /*
+ * If you see this warning, then the driver isn't cleaning up properly
+ * in stop_streaming(). See the stop_streaming() documentation in
+ * videobuf2-core.h for more information on how buffers should be returned
+ * to vb2 in stop_streaming().
+ */
if (WARN_ON(atomic_read(&q->owned_by_drv_count))) {
for (i = 0; i < q->num_buffers; ++i)
if (q->bufs[i]->state == VB2_BUF_STATE_ACTIVE)
@@ -2272,6 +2290,7 @@
* their normal dequeued state.
*/
__vb2_queue_cancel(q);
+ q->waiting_for_buffers = !V4L2_TYPE_IS_OUTPUT(q->type);
dprintk(3, "successful\n");
return 0;
@@ -2590,10 +2609,17 @@
}
/*
- * There is nothing to wait for if no buffer has been queued and the
- * queue isn't streaming, or if the error flag is set.
+ * There is nothing to wait for if the queue isn't streaming, or if the
+ * error flag is set.
*/
- if ((list_empty(&q->queued_list) && !vb2_is_streaming(q)) || q->error)
+ if (!vb2_is_streaming(q) || q->error)
+ return res | POLLERR;
+ /*
+ * For compatibility with vb1: if QBUF hasn't been called yet, then
+ * return POLLERR as well. This only affects capture queues, output
+ * queues will always initialize waiting_for_buffers to false.
+ */
+ if (q->waiting_for_buffers)
return res | POLLERR;
/*
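The new q->waiting_for_buffers flag makes poll() on a capture queue report POLLERR until the first QBUF, matching videobuf1 behaviour that some applications rely on. Condensing the hunks above into the flag's lifecycle, with a simplified queue type:

    /* Lifecycle of the flag (capture queues only):
     *   reqbufs()/create_bufs(): waiting_for_buffers = true
     *   qbuf():                  waiting_for_buffers = false
     *   streamoff():             waiting_for_buffers = true
     * poll() then refuses to block while the flag is set. */
    static unsigned int example_poll(struct example_queue *q)
    {
        if (!q->streaming || q->error)
            return POLLERR;
        if (q->waiting_for_buffers)    /* no QBUF seen yet */
            return POLLERR;
        /* ... otherwise wait for a buffer on the done list ... */
        return 0;
    }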
diff --git a/drivers/media/v4l2-core/videobuf2-dma-sg.c b/drivers/media/v4l2-core/videobuf2-dma-sg.c
index adefc31..9b163a4 100644
--- a/drivers/media/v4l2-core/videobuf2-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf2-dma-sg.c
@@ -113,7 +113,7 @@
goto fail_pages_alloc;
ret = sg_alloc_table_from_pages(&buf->sg_table, buf->pages,
- buf->num_pages, 0, size, gfp_flags);
+ buf->num_pages, 0, size, GFP_KERNEL);
if (ret)
goto fail_table_alloc;
diff --git a/drivers/message/fusion/Kconfig b/drivers/message/fusion/Kconfig
index a34a11d..63ca984 100644
--- a/drivers/message/fusion/Kconfig
+++ b/drivers/message/fusion/Kconfig
@@ -29,7 +29,7 @@
config FUSION_FC
tristate "Fusion MPT ScsiHost drivers for FC"
depends on PCI && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
---help---
SCSI HOST support for Fiber Channel host adapters.
diff --git a/drivers/misc/lattice-ecp3-config.c b/drivers/misc/lattice-ecp3-config.c
index 7ffdb58..7e1efd5 100644
--- a/drivers/misc/lattice-ecp3-config.c
+++ b/drivers/misc/lattice-ecp3-config.c
@@ -79,6 +79,11 @@
u32 jedec_id;
u32 status;
+ if (fw == NULL) {
+ dev_err(&spi->dev, "Cannot load firmware, aborting\n");
+ return;
+ }
+
if (fw->size == 0) {
dev_err(&spi->dev, "Error: Firmware size is 0!\n");
return;
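The lattice-ecp3 fix guards the asynchronous firmware callback: when request_firmware_nowait() cannot locate the image, it still invokes the completion callback, but with fw == NULL, so dereferencing fw->size would oops. A hedged sketch of the defensive callback; the driver specifics are illustrative:

    #include <linux/firmware.h>

    /* Completion callback for request_firmware_nowait(): fw is NULL
     * when the firmware could not be loaded, so check before use. */
    static void example_fw_callback(const struct firmware *fw, void *context)
    {
        struct device *dev = context;

        if (!fw) {
            dev_err(dev, "firmware load failed\n");
            return;
        }
        if (fw->size == 0) {
            dev_err(dev, "firmware image is empty\n");
            goto out;
        }
        /* ... program fw->data / fw->size into the device ... */
    out:
        release_firmware(fw);
    }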
diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
index 3b790de..09de683 100644
--- a/drivers/net/arcnet/arcnet.c
+++ b/drivers/net/arcnet/arcnet.c
@@ -777,7 +777,7 @@
ACOMMAND(CFLAGScmd | RESETclear);
AINTMASK(0);
spin_unlock(&lp->lock);
- return IRQ_HANDLED;
+ return retval;
}
BUGMSG(D_DURING, "in arcnet_inthandler (status=%Xh, intmask=%Xh)\n",
diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
index 7bb292e..6c99ff0 100644
--- a/drivers/net/arcnet/com20020-pci.c
+++ b/drivers/net/arcnet/com20020-pci.c
@@ -38,6 +38,7 @@
#include <linux/pci.h>
#include <linux/arcdevice.h>
#include <linux/com20020.h>
+#include <linux/list.h>
#include <asm/io.h>
@@ -61,115 +62,317 @@
module_param(clockm, int, 0);
MODULE_LICENSE("GPL");
+static void com20020pci_remove(struct pci_dev *pdev);
+
static int com20020pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
+ struct com20020_pci_card_info *ci;
struct net_device *dev;
struct arcnet_local *lp;
- int ioaddr, err;
+ struct com20020_priv *priv;
+ int i, ioaddr, ret;
+ struct resource *r;
if (pci_enable_device(pdev))
return -EIO;
- dev = alloc_arcdev(device);
- if (!dev)
- return -ENOMEM;
- dev->netdev_ops = &com20020_netdev_ops;
+ priv = devm_kzalloc(&pdev->dev, sizeof(struct com20020_priv),
+ GFP_KERNEL);
+ ci = (struct com20020_pci_card_info *)id->driver_data;
+ priv->ci = ci;
- lp = netdev_priv(dev);
+ INIT_LIST_HEAD(&priv->list_dev);
- pci_set_drvdata(pdev, dev);
- // SOHARD needs PCI base addr 4
- if (pdev->vendor==0x10B5) {
- BUGMSG(D_NORMAL, "SOHARD\n");
- ioaddr = pci_resource_start(pdev, 4);
- }
- else {
- BUGMSG(D_NORMAL, "Contemporary Controls\n");
- ioaddr = pci_resource_start(pdev, 2);
+ for (i = 0; i < ci->devcount; i++) {
+ struct com20020_pci_channel_map *cm = &ci->chan_map_tbl[i];
+ struct com20020_dev *card;
+
+ dev = alloc_arcdev(device);
+ if (!dev) {
+ ret = -ENOMEM;
+ goto out_port;
+ }
+
+ dev->netdev_ops = &com20020_netdev_ops;
+
+ lp = netdev_priv(dev);
+
+ BUGMSG(D_NORMAL, "%s Controls\n", ci->name);
+ ioaddr = pci_resource_start(pdev, cm->bar) + cm->offset;
+
+ r = devm_request_region(&pdev->dev, ioaddr, cm->size,
+ "com20020-pci");
+ if (!r) {
+ pr_err("IO region %xh-%xh already allocated.\n",
+ ioaddr, ioaddr + cm->size - 1);
+ ret = -EBUSY;
+ goto out_port;
+ }
+
+ /* Dummy access after reset:
+ * the ARCNET controller needs
+ * this access to detect the bus type
+ */
+ outb(0x00, ioaddr + 1);
+ inb(ioaddr + 1);
+
+ dev->base_addr = ioaddr;
+ dev->dev_addr[0] = node;
+ dev->irq = pdev->irq;
+ lp->card_name = "PCI COM20020";
+ lp->card_flags = ci->flags;
+ lp->backplane = backplane;
+ lp->clockp = clockp & 7;
+ lp->clockm = clockm & 3;
+ lp->timeout = timeout;
+ lp->hw.owner = THIS_MODULE;
+
+ if (ASTATUS() == 0xFF) {
+ pr_err("IO address %Xh is empty!\n", ioaddr);
+ ret = -EIO;
+ goto out_port;
+ }
+ if (com20020_check(dev)) {
+ ret = -EIO;
+ goto out_port;
+ }
+
+ card = devm_kzalloc(&pdev->dev, sizeof(struct com20020_dev),
+ GFP_KERNEL);
+ if (!card) {
+ pr_err("%s out of memory!\n", __func__);
+ return -ENOMEM;
+ }
+
+ card->index = i;
+ card->pci_priv = priv;
+ card->dev = dev;
+
+ dev_set_drvdata(&dev->dev, card);
+
+ ret = com20020_found(dev, IRQF_SHARED);
+ if (ret)
+ goto out_port;
+
+ list_add(&card->list, &priv->list_dev);
}
- if (!request_region(ioaddr, ARCNET_TOTAL_SIZE, "com20020-pci")) {
- BUGMSG(D_INIT, "IO region %xh-%xh already allocated.\n",
- ioaddr, ioaddr + ARCNET_TOTAL_SIZE - 1);
- err = -EBUSY;
- goto out_dev;
- }
-
- // Dummy access after Reset
- // ARCNET controller needs this access to detect bustype
- outb(0x00,ioaddr+1);
- inb(ioaddr+1);
-
- dev->base_addr = ioaddr;
- dev->irq = pdev->irq;
- dev->dev_addr[0] = node;
- lp->card_name = "PCI COM20020";
- lp->card_flags = id->driver_data;
- lp->backplane = backplane;
- lp->clockp = clockp & 7;
- lp->clockm = clockm & 3;
- lp->timeout = timeout;
- lp->hw.owner = THIS_MODULE;
-
- if (ASTATUS() == 0xFF) {
- BUGMSG(D_NORMAL, "IO address %Xh was reported by PCI BIOS, "
- "but seems empty!\n", ioaddr);
- err = -EIO;
- goto out_port;
- }
- if (com20020_check(dev)) {
- err = -EIO;
- goto out_port;
- }
-
- if ((err = com20020_found(dev, IRQF_SHARED)) != 0)
- goto out_port;
+ pci_set_drvdata(pdev, priv);
return 0;
out_port:
- release_region(ioaddr, ARCNET_TOTAL_SIZE);
-out_dev:
- free_netdev(dev);
- return err;
+ com20020pci_remove(pdev);
+ return ret;
}
static void com20020pci_remove(struct pci_dev *pdev)
{
- struct net_device *dev = pci_get_drvdata(pdev);
- unregister_netdev(dev);
- free_irq(dev->irq, dev);
- release_region(dev->base_addr, ARCNET_TOTAL_SIZE);
- free_netdev(dev);
+ struct com20020_dev *card, *tmpcard;
+ struct com20020_priv *priv;
+
+ priv = pci_get_drvdata(pdev);
+
+ list_for_each_entry_safe(card, tmpcard, &priv->list_dev, list) {
+ struct net_device *dev = card->dev;
+
+ unregister_netdev(dev);
+ free_irq(dev->irq, dev);
+ free_netdev(dev);
+ }
}
+static struct com20020_pci_card_info card_info_10mbit = {
+ .name = "ARC-PCI",
+ .devcount = 1,
+ .chan_map_tbl = {
+ { 2, 0x00, 0x08 },
+ },
+ .flags = ARC_CAN_10MBIT,
+};
+
+static struct com20020_pci_card_info card_info_5mbit = {
+ .name = "ARC-PCI",
+ .devcount = 1,
+ .chan_map_tbl = {
+ { 2, 0x00, 0x08 },
+ },
+ .flags = ARC_IS_5MBIT,
+};
+
+static struct com20020_pci_card_info card_info_sohard = {
+ .name = "PLX-PCI",
+ .devcount = 1,
+ /* SOHARD needs PCI base addr 4 */
+ .chan_map_tbl = {
+ {4, 0x00, 0x08},
+ },
+ .flags = ARC_CAN_10MBIT,
+};
+
+static struct com20020_pci_card_info card_info_eae = {
+ .name = "EAE PLX-PCI",
+ .devcount = 2,
+ .chan_map_tbl = {
+ { 2, 0x00, 0x08 },
+ { 2, 0x08, 0x08 }
+ },
+ .flags = ARC_CAN_10MBIT,
+};
+
static const struct pci_device_id com20020pci_id_table[] = {
- { 0x1571, 0xa001, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa003, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa007, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa008, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x1571, 0xa009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_IS_5MBIT },
- { 0x1571, 0xa00a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_IS_5MBIT },
- { 0x1571, 0xa00b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_IS_5MBIT },
- { 0x1571, 0xa00c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_IS_5MBIT },
- { 0x1571, 0xa00d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_IS_5MBIT },
- { 0x1571, 0xa00e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_IS_5MBIT },
- { 0x1571, 0xa201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x1571, 0xa202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x1571, 0xa203, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x1571, 0xa204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x1571, 0xa205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x1571, 0xa206, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x10B5, 0x9030, 0x10B5, 0x2978, 0, 0, ARC_CAN_10MBIT },
- { 0x10B5, 0x9050, 0x10B5, 0x2273, 0, 0, ARC_CAN_10MBIT },
- { 0x14BA, 0x6000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x10B5, 0x2200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- {0,}
+ {
+ 0x1571, 0xa001,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0,
+ },
+ {
+ 0x1571, 0xa002,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0,
+ },
+ {
+ 0x1571, 0xa003,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0
+ },
+ {
+ 0x1571, 0xa004,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0,
+ },
+ {
+ 0x1571, 0xa005,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0
+ },
+ {
+ 0x1571, 0xa006,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0
+ },
+ {
+ 0x1571, 0xa007,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0
+ },
+ {
+ 0x1571, 0xa008,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ 0
+ },
+ {
+ 0x1571, 0xa009,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_5mbit
+ },
+ {
+ 0x1571, 0xa00a,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_5mbit
+ },
+ {
+ 0x1571, 0xa00b,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_5mbit
+ },
+ {
+ 0x1571, 0xa00c,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_5mbit
+ },
+ {
+ 0x1571, 0xa00d,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_5mbit
+ },
+ {
+ 0x1571, 0xa00e,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_5mbit
+ },
+ {
+ 0x1571, 0xa201,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x1571, 0xa202,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x1571, 0xa203,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x1571, 0xa204,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x1571, 0xa205,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x1571, 0xa206,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x10B5, 0x9030,
+ 0x10B5, 0x2978,
+ 0, 0,
+ (kernel_ulong_t)&card_info_sohard
+ },
+ {
+ 0x10B5, 0x9050,
+ 0x10B5, 0x2273,
+ 0, 0,
+ (kernel_ulong_t)&card_info_sohard
+ },
+ {
+ 0x10B5, 0x9050,
+ 0x10B5, 0x3292,
+ 0, 0,
+ (kernel_ulong_t)&card_info_eae
+ },
+ {
+ 0x14BA, 0x6000,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ {
+ 0x10B5, 0x2200,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&card_info_10mbit
+ },
+ { 0, }
};
MODULE_DEVICE_TABLE(pci, com20020pci_id_table);
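The com20020-pci rewrite turns probe into a loop over a per-board channel map, moves allocations to devm_* so they are released automatically on unbind, and strings the per-channel net_devices on a list that remove() walks. A hedged outline of that multi-device shape, with structures condensed from the patch; note the sketch sets drvdata before the loop so the unwind path can find it:

    /* Probe N sub-devices described by driver_data and keep them on a
     * list for remove(). devm_kzalloc() ties the allocation's lifetime
     * to the PCI device, so there is no matching kfree(). */
    static int example_probe(struct pci_dev *pdev,
                             const struct pci_device_id *id)
    {
        struct example_card_info *ci = (void *)id->driver_data;
        struct example_priv *priv;
        int i, ret;

        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;
        INIT_LIST_HEAD(&priv->list_dev);
        pci_set_drvdata(pdev, priv);    /* early, for the unwind path */

        for (i = 0; i < ci->devcount; i++) {
            struct example_sub *sub;

            sub = example_setup_channel(pdev, ci, i);
            if (IS_ERR(sub)) {
                ret = PTR_ERR(sub);
                goto err_remove;        /* tear down earlier channels */
            }
            list_add(&sub->list, &priv->list_dev);
        }
        return 0;

    err_remove:
        example_remove(pdev);
        return ret;
    }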
diff --git a/drivers/net/arcnet/com20020.c b/drivers/net/arcnet/com20020.c
index 7b96c5f..1a84378 100644
--- a/drivers/net/arcnet/com20020.c
+++ b/drivers/net/arcnet/com20020.c
@@ -149,11 +149,25 @@
return 0;
}
+static int com20020_set_hwaddr(struct net_device *dev, void *addr)
+{
+ int ioaddr = dev->base_addr;
+ struct arcnet_local *lp = netdev_priv(dev);
+ struct sockaddr *hwaddr = addr;
+
+ memcpy(dev->dev_addr, hwaddr->sa_data, 1);
+ SET_SUBADR(SUB_NODE);
+ outb(dev->dev_addr[0], _XREG);
+
+ return 0;
+}
+
const struct net_device_ops com20020_netdev_ops = {
.ndo_open = arcnet_open,
.ndo_stop = arcnet_close,
.ndo_start_xmit = arcnet_send_packet,
.ndo_tx_timeout = arcnet_timeout,
+ .ndo_set_mac_address = com20020_set_hwaddr,
.ndo_set_rx_mode = com20020_set_mc_list,
};
diff --git a/drivers/net/arcnet/com20020_cs.c b/drivers/net/arcnet/com20020_cs.c
index 1a790a2..057d958 100644
--- a/drivers/net/arcnet/com20020_cs.c
+++ b/drivers/net/arcnet/com20020_cs.c
@@ -112,10 +112,6 @@
/*====================================================================*/
-struct com20020_dev {
- struct net_device *dev;
-};
-
static int com20020_probe(struct pcmcia_device *p_dev)
{
struct com20020_dev *info;
diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index 5d27a62..7e9e522 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -234,24 +234,6 @@
}
/**
- * __get_state_machine_lock - lock the port's state machines
- * @port: the port we're looking at
- */
-static inline void __get_state_machine_lock(struct port *port)
-{
- spin_lock_bh(&(SLAVE_AD_INFO(port->slave)->state_machine_lock));
-}
-
-/**
- * __release_state_machine_lock - unlock the port's state machines
- * @port: the port we're looking at
- */
-static inline void __release_state_machine_lock(struct port *port)
-{
- spin_unlock_bh(&(SLAVE_AD_INFO(port->slave)->state_machine_lock));
-}
-
-/**
* __get_link_speed - get a port's speed
* @port: the port we're looking at
*
@@ -315,15 +297,14 @@
static u8 __get_duplex(struct port *port)
{
struct slave *slave = port->slave;
-
u8 retval;
/* handling a special case: when the configuration starts with
* link down, it sets the duplex to 0.
*/
- if (slave->link != BOND_LINK_UP)
+ if (slave->link != BOND_LINK_UP) {
retval = 0x0;
- else {
+ } else {
switch (slave->duplex) {
case DUPLEX_FULL:
retval = 0x1;
@@ -341,16 +322,6 @@
return retval;
}
-/**
- * __initialize_port_locks - initialize a port's STATE machine spinlock
- * @port: the slave of the port we're looking at
- */
-static inline void __initialize_port_locks(struct slave *slave)
-{
- /* make sure it isn't called twice */
- spin_lock_init(&(SLAVE_AD_INFO(slave)->state_machine_lock));
-}
-
/* Conversions */
/**
@@ -1843,7 +1814,6 @@
ad_initialize_port(port, bond->params.lacp_fast);
- __initialize_port_locks(slave);
port->slave = slave;
port->actor_port_number = SLAVE_AD_INFO(slave)->id;
/* key is determined according to the link speed, duplex and user key(which
@@ -1899,6 +1869,8 @@
struct slave *slave_iter;
struct list_head *iter;
+ /* Sync against bond_3ad_state_machine_handler() */
+ spin_lock_bh(&bond->mode_lock);
aggregator = &(SLAVE_AD_INFO(slave)->aggregator);
port = &(SLAVE_AD_INFO(slave)->port);
@@ -1906,7 +1878,7 @@
if (!port->slave) {
netdev_warn(bond->dev, "Trying to unbind an uninitialized port on %s\n",
slave->dev->name);
- return;
+ goto out;
}
netdev_dbg(bond->dev, "Unbinding Link Aggregation Group %d\n",
@@ -2032,6 +2004,9 @@
}
}
port->slave = NULL;
+
+out:
+ spin_unlock_bh(&bond->mode_lock);
}
/**
@@ -2057,7 +2032,11 @@
struct port *port;
bool should_notify_rtnl = BOND_SLAVE_NOTIFY_LATER;
- read_lock(&bond->curr_slave_lock);
+ /* Lock to protect data accessed by all (e.g., port->sm_vars) and
+ * against running with bond_3ad_unbind_slave. ad_rx_machine may run
+ * concurrently due to incoming LACPDU as well.
+ */
+ spin_lock_bh(&bond->mode_lock);
rcu_read_lock();
/* check if there are any slaves */
@@ -2093,12 +2072,6 @@
goto re_arm;
}
- /* Lock around state machines to protect data accessed
- * by all (e.g., port->sm_vars). ad_rx_machine may run
- * concurrently due to incoming LACPDU.
- */
- __get_state_machine_lock(port);
-
ad_rx_machine(NULL, port);
ad_periodic_machine(port);
ad_port_selection_logic(port);
@@ -2108,8 +2081,6 @@
/* turn off the BEGIN bit, since we already handled it */
if (port->sm_vars & AD_PORT_BEGIN)
port->sm_vars &= ~AD_PORT_BEGIN;
-
- __release_state_machine_lock(port);
}
re_arm:
@@ -2120,7 +2091,7 @@
}
}
rcu_read_unlock();
- read_unlock(&bond->curr_slave_lock);
+ spin_unlock_bh(&bond->mode_lock);
if (should_notify_rtnl && rtnl_trylock()) {
bond_slave_state_notify(bond);
@@ -2161,9 +2132,9 @@
netdev_dbg(slave->bond->dev, "Received LACPDU on port %d\n",
port->actor_port_number);
/* Protect against concurrent state machines */
- __get_state_machine_lock(port);
+ spin_lock(&slave->bond->mode_lock);
ad_rx_machine(lacpdu, port);
- __release_state_machine_lock(port);
+ spin_unlock(&slave->bond->mode_lock);
break;
case AD_TYPE_MARKER:
@@ -2213,7 +2184,7 @@
return;
}
- __get_state_machine_lock(port);
+ spin_lock_bh(&slave->bond->mode_lock);
port->actor_admin_port_key &= ~AD_SPEED_KEY_BITS;
port->actor_oper_port_key = port->actor_admin_port_key |=
@@ -2224,7 +2195,7 @@
*/
port->sm_vars |= AD_PORT_BEGIN;
- __release_state_machine_lock(port);
+ spin_unlock_bh(&slave->bond->mode_lock);
}
/**
@@ -2246,7 +2217,7 @@
return;
}
- __get_state_machine_lock(port);
+ spin_lock_bh(&slave->bond->mode_lock);
port->actor_admin_port_key &= ~AD_DUPLEX_KEY_BITS;
port->actor_oper_port_key = port->actor_admin_port_key |=
@@ -2257,7 +2228,7 @@
*/
port->sm_vars |= AD_PORT_BEGIN;
- __release_state_machine_lock(port);
+ spin_unlock_bh(&slave->bond->mode_lock);
}
/**
@@ -2280,7 +2251,7 @@
return;
}
- __get_state_machine_lock(port);
+ spin_lock_bh(&slave->bond->mode_lock);
/* on link down we are zeroing duplex and speed since
* some of the adaptors (ce1000.lan) report full duplex/speed
* instead of N/A(duplex) / 0(speed).
@@ -2311,7 +2282,7 @@
*/
port->sm_vars |= AD_PORT_BEGIN;
- __release_state_machine_lock(port);
+ spin_unlock_bh(&slave->bond->mode_lock);
}
/**
@@ -2476,20 +2447,16 @@
int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond,
struct slave *slave)
{
- int ret = RX_HANDLER_ANOTHER;
struct lacpdu *lacpdu, _lacpdu;
if (skb->protocol != PKT_TYPE_LACPDU)
- return ret;
+ return RX_HANDLER_ANOTHER;
lacpdu = skb_header_pointer(skb, 0, sizeof(_lacpdu), &_lacpdu);
if (!lacpdu)
- return ret;
+ return RX_HANDLER_ANOTHER;
- read_lock(&bond->curr_slave_lock);
- ret = bond_3ad_rx_indication(lacpdu, slave, skb->len);
- read_unlock(&bond->curr_slave_lock);
- return ret;
+ return bond_3ad_rx_indication(lacpdu, slave, skb->len);
}
/**
@@ -2499,7 +2466,7 @@
* When modifying the lacp_rate parameter via sysfs,
* update actor_oper_port_state of each port.
*
- * Hold slave->state_machine_lock,
+ * Hold bond->mode_lock,
* so we can modify port->actor_oper_port_state,
* no matter bond is up or down.
*/
@@ -2511,13 +2478,13 @@
int lacp_fast;
lacp_fast = bond->params.lacp_fast;
+ spin_lock_bh(&bond->mode_lock);
bond_for_each_slave(bond, slave, iter) {
port = &(SLAVE_AD_INFO(slave)->port);
- __get_state_machine_lock(port);
if (lacp_fast)
port->actor_oper_port_state |= AD_STATE_LACP_TIMEOUT;
else
port->actor_oper_port_state &= ~AD_STATE_LACP_TIMEOUT;
- __release_state_machine_lock(port);
}
+ spin_unlock_bh(&bond->mode_lock);
}
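The bond_3ad conversion drops the per-port state_machine_lock in favour of one bond-wide mode_lock, so the periodic state machines, slave unbind, and LACPDU receive all serialise on the same lock instead of misusing curr_slave_lock. A hedged sketch of the consolidated shape, with simplified names:

    /* A single bond->mode_lock covers 802.3ad state shared across
     * ports (e.g. aggregator selection), where each port used to
     * carry its own spinlock. */
    static void example_state_machine_work(struct example_bond *bond)
    {
        struct example_port *port;

        spin_lock_bh(&bond->mode_lock);    /* _bh: also taken in softirq */
        list_for_each_entry(port, &bond->ports, list)
            example_run_machines(port);
        spin_unlock_bh(&bond->mode_lock);
    }

    /* The receive path already runs in softirq context, so it can use
     * the plain spin_lock() variant. */
    static void example_lacpdu_recv(struct example_bond *bond,
                                    struct example_port *port)
    {
        spin_lock(&bond->mode_lock);
        example_rx_machine(port);
        spin_unlock(&bond->mode_lock);
    }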
diff --git a/drivers/net/bonding/bond_3ad.h b/drivers/net/bonding/bond_3ad.h
index bb03b1d..c5f14ac 100644
--- a/drivers/net/bonding/bond_3ad.h
+++ b/drivers/net/bonding/bond_3ad.h
@@ -259,7 +259,6 @@
struct ad_slave_info {
struct aggregator aggregator; /* 802.3ad aggregator structure */
struct port port; /* 802.3ad port structure */
- spinlock_t state_machine_lock; /* mutex state machines vs. incoming LACPDU */
u16 id;
};
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index 0284962..615f3be 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -100,27 +100,6 @@
/*********************** tlb specific functions ***************************/
-static inline void _lock_tx_hashtbl_bh(struct bonding *bond)
-{
- spin_lock_bh(&(BOND_ALB_INFO(bond).tx_hashtbl_lock));
-}
-
-static inline void _unlock_tx_hashtbl_bh(struct bonding *bond)
-{
- spin_unlock_bh(&(BOND_ALB_INFO(bond).tx_hashtbl_lock));
-}
-
-static inline void _lock_tx_hashtbl(struct bonding *bond)
-{
- spin_lock(&(BOND_ALB_INFO(bond).tx_hashtbl_lock));
-}
-
-static inline void _unlock_tx_hashtbl(struct bonding *bond)
-{
- spin_unlock(&(BOND_ALB_INFO(bond).tx_hashtbl_lock));
-}
-
-/* Caller must hold tx_hashtbl lock */
static inline void tlb_init_table_entry(struct tlb_client_info *entry, int save_load)
{
if (save_load) {
@@ -140,7 +119,6 @@
SLAVE_TLB_INFO(slave).head = TLB_NULL_INDEX;
}
-/* Caller must hold bond lock for read, BH disabled */
static void __tlb_clear_slave(struct bonding *bond, struct slave *slave,
int save_load)
{
@@ -163,13 +141,12 @@
tlb_init_slave(slave);
}
-/* Caller must hold bond lock for read */
static void tlb_clear_slave(struct bonding *bond, struct slave *slave,
int save_load)
{
- _lock_tx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
__tlb_clear_slave(bond, slave, save_load);
- _unlock_tx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
/* Must be called before starting the monitor timer */
@@ -184,14 +161,14 @@
if (!new_hashtbl)
return -1;
- _lock_tx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
bond_info->tx_hashtbl = new_hashtbl;
for (i = 0; i < TLB_HASH_TABLE_SIZE; i++)
tlb_init_table_entry(&bond_info->tx_hashtbl[i], 0);
- _unlock_tx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
return 0;
}
@@ -202,12 +179,12 @@
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
struct tlb_up_slave *arr;
- _lock_tx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
kfree(bond_info->tx_hashtbl);
bond_info->tx_hashtbl = NULL;
- _unlock_tx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
arr = rtnl_dereference(bond_info->slave_arr);
if (arr)
@@ -220,7 +197,6 @@
(s64) (SLAVE_TLB_INFO(slave).load << 3); /* Bytes to bits */
}
-/* Caller must hold bond lock for read */
static struct slave *tlb_get_least_loaded_slave(struct bonding *bond)
{
struct slave *slave, *least_loaded;
@@ -281,42 +257,23 @@
return assigned_slave;
}
-/* Caller must hold bond lock for read */
static struct slave *tlb_choose_channel(struct bonding *bond, u32 hash_index,
u32 skb_len)
{
struct slave *tx_slave;
- /*
- * We don't need to disable softirq here, becase
+
+ /* We don't need to disable softirq here, because
* tlb_choose_channel() is only called by bond_alb_xmit()
* which already has softirq disabled.
*/
- _lock_tx_hashtbl(bond);
+ spin_lock(&bond->mode_lock);
tx_slave = __tlb_choose_channel(bond, hash_index, skb_len);
- _unlock_tx_hashtbl(bond);
+ spin_unlock(&bond->mode_lock);
+
return tx_slave;
}
/*********************** rlb specific functions ***************************/
-static inline void _lock_rx_hashtbl_bh(struct bonding *bond)
-{
- spin_lock_bh(&(BOND_ALB_INFO(bond).rx_hashtbl_lock));
-}
-
-static inline void _unlock_rx_hashtbl_bh(struct bonding *bond)
-{
- spin_unlock_bh(&(BOND_ALB_INFO(bond).rx_hashtbl_lock));
-}
-
-static inline void _lock_rx_hashtbl(struct bonding *bond)
-{
- spin_lock(&(BOND_ALB_INFO(bond).rx_hashtbl_lock));
-}
-
-static inline void _unlock_rx_hashtbl(struct bonding *bond)
-{
- spin_unlock(&(BOND_ALB_INFO(bond).rx_hashtbl_lock));
-}
/* when an ARP REPLY is received from a client update its info
* in the rx_hashtbl
@@ -327,7 +284,7 @@
struct rlb_client_info *client_info;
u32 hash_index;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
hash_index = _simple_hash((u8 *)&(arp->ip_src), sizeof(arp->ip_src));
client_info = &(bond_info->rx_hashtbl[hash_index]);
@@ -342,7 +299,7 @@
bond_info->rx_ntt = 1;
}
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
static int rlb_arp_recv(const struct sk_buff *skb, struct bonding *bond,
@@ -378,40 +335,7 @@
return RX_HANDLER_ANOTHER;
}
-/* Caller must hold bond lock for read */
-static struct slave *rlb_next_rx_slave(struct bonding *bond)
-{
- struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
- struct slave *before = NULL, *rx_slave = NULL, *slave;
- struct list_head *iter;
- bool found = false;
-
- bond_for_each_slave(bond, slave, iter) {
- if (!bond_slave_can_tx(slave))
- continue;
- if (!found) {
- if (!before || before->speed < slave->speed)
- before = slave;
- } else {
- if (!rx_slave || rx_slave->speed < slave->speed)
- rx_slave = slave;
- }
- if (slave == bond_info->rx_slave)
- found = true;
- }
- /* we didn't find anything after the current or we have something
- * better before and up to the current slave
- */
- if (!rx_slave || (before && rx_slave->speed < before->speed))
- rx_slave = before;
-
- if (rx_slave)
- bond_info->rx_slave = rx_slave;
-
- return rx_slave;
-}
-
-/* Caller must hold rcu_read_lock() for read */
+/* Caller must hold rcu_read_lock() */
static struct slave *__rlb_next_rx_slave(struct bonding *bond)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
@@ -444,14 +368,28 @@
return rx_slave;
}
+/* Caller must hold RTNL, rcu_read_lock is obtained only to silence checkers */
+static struct slave *rlb_next_rx_slave(struct bonding *bond)
+{
+ struct slave *rx_slave;
+
+ ASSERT_RTNL();
+
+ rcu_read_lock();
+ rx_slave = __rlb_next_rx_slave(bond);
+ rcu_read_unlock();
+
+ return rx_slave;
+}
+
/* teach the switch the mac of a disabled slave
* on the primary for fault tolerance
*
- * Caller must hold bond->curr_slave_lock for write or bond lock for write
+ * Caller must hold RTNL
*/
static void rlb_teach_disabled_mac_on_primary(struct bonding *bond, u8 addr[])
{
- struct slave *curr_active = bond_deref_active_protected(bond);
+ struct slave *curr_active = rtnl_dereference(bond->curr_active_slave);
if (!curr_active)
return;
@@ -479,7 +417,7 @@
u32 index, next_index;
/* clear slave from rx_hashtbl */
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
rx_hash_table = bond_info->rx_hashtbl;
index = bond_info->rx_hashtbl_used_head;
@@ -510,14 +448,10 @@
}
}
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
- write_lock_bh(&bond->curr_slave_lock);
-
- if (slave != bond_deref_active_protected(bond))
+ if (slave != rtnl_dereference(bond->curr_active_slave))
rlb_teach_disabled_mac_on_primary(bond, slave->dev->dev_addr);
-
- write_unlock_bh(&bond->curr_slave_lock);
}
static void rlb_update_client(struct rlb_client_info *client_info)
@@ -565,7 +499,7 @@
struct rlb_client_info *client_info;
u32 hash_index;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
hash_index = bond_info->rx_hashtbl_used_head;
for (; hash_index != RLB_NULL_INDEX;
@@ -583,7 +517,7 @@
*/
bond_info->rlb_update_delay_counter = RLB_UPDATE_DELAY;
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
/* The slave was assigned a new mac address - update the clients */
@@ -594,7 +528,7 @@
int ntt = 0;
u32 hash_index;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
hash_index = bond_info->rx_hashtbl_used_head;
for (; hash_index != RLB_NULL_INDEX;
@@ -615,7 +549,7 @@
bond_info->rlb_update_retry_counter = RLB_UPDATE_RETRY;
}
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
/* mark all clients using src_ip to be updated */
@@ -625,7 +559,7 @@
struct rlb_client_info *client_info;
u32 hash_index;
- _lock_rx_hashtbl(bond);
+ spin_lock(&bond->mode_lock);
hash_index = bond_info->rx_hashtbl_used_head;
for (; hash_index != RLB_NULL_INDEX;
@@ -636,7 +570,7 @@
netdev_err(bond->dev, "found a client with no channel in the client's hash table\n");
continue;
}
- /*update all clients using this src_ip, that are not assigned
+ /* update all clients using this src_ip, that are not assigned
* to the team's address (curr_active_slave) and have a known
* unicast mac address.
*/
@@ -649,10 +583,9 @@
}
}
- _unlock_rx_hashtbl(bond);
+ spin_unlock(&bond->mode_lock);
}
-/* Caller must hold both bond and ptr locks for read */
static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
@@ -661,7 +594,7 @@
struct rlb_client_info *client_info;
u32 hash_index = 0;
- _lock_rx_hashtbl(bond);
+ spin_lock(&bond->mode_lock);
curr_active_slave = rcu_dereference(bond->curr_active_slave);
@@ -680,7 +613,7 @@
assigned_slave = client_info->slave;
if (assigned_slave) {
- _unlock_rx_hashtbl(bond);
+ spin_unlock(&bond->mode_lock);
return assigned_slave;
}
} else {
@@ -742,7 +675,7 @@
}
}
- _unlock_rx_hashtbl(bond);
+ spin_unlock(&bond->mode_lock);
return assigned_slave;
}
@@ -763,9 +696,7 @@
return NULL;
if (arp->op_code == htons(ARPOP_REPLY)) {
- /* the arp must be sent on the selected
- * rx channel
- */
+ /* the arp must be sent on the selected rx channel */
tx_slave = rlb_choose_channel(skb, bond);
if (tx_slave)
ether_addr_copy(arp->mac_src, tx_slave->dev->dev_addr);
@@ -795,7 +726,6 @@
return tx_slave;
}
-/* Caller must hold bond lock for read */
static void rlb_rebalance(struct bonding *bond)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
@@ -804,7 +734,7 @@
int ntt;
u32 hash_index;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
ntt = 0;
hash_index = bond_info->rx_hashtbl_used_head;
@@ -822,10 +752,10 @@
/* update the team's flag only after the whole iteration */
if (ntt)
bond_info->rx_ntt = 1;
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
-/* Caller must hold rx_hashtbl lock */
+/* Caller must hold mode_lock */
static void rlb_init_table_entry_dst(struct rlb_client_info *entry)
{
entry->used_next = RLB_NULL_INDEX;
@@ -913,15 +843,16 @@
bond_info->rx_hashtbl[ip_src_hash].src_first = ip_dst_hash;
}
-/* deletes all rx_hashtbl entries with arp->ip_src if their mac_src does
- * not match arp->mac_src */
+/* deletes all rx_hashtbl entries with arp->ip_src if their mac_src does
+ * not match arp->mac_src
+ */
static void rlb_purge_src_ip(struct bonding *bond, struct arp_pkt *arp)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
u32 ip_src_hash = _simple_hash((u8 *)&(arp->ip_src), sizeof(arp->ip_src));
u32 index;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
index = bond_info->rx_hashtbl[ip_src_hash].src_first;
while (index != RLB_NULL_INDEX) {
@@ -932,7 +863,7 @@
rlb_delete_table_entry(bond, index);
index = next_index;
}
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
static int rlb_initialize(struct bonding *bond)
@@ -946,7 +877,7 @@
if (!new_hashtbl)
return -1;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
bond_info->rx_hashtbl = new_hashtbl;
@@ -955,7 +886,7 @@
for (i = 0; i < RLB_HASH_TABLE_SIZE; i++)
rlb_init_table_entry(bond_info->rx_hashtbl + i);
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
/* register to receive ARPs */
bond->recv_probe = rlb_arp_recv;
@@ -967,13 +898,13 @@
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
kfree(bond_info->rx_hashtbl);
bond_info->rx_hashtbl = NULL;
bond_info->rx_hashtbl_used_head = RLB_NULL_INDEX;
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
static void rlb_clear_vlan(struct bonding *bond, unsigned short vlan_id)
@@ -981,7 +912,7 @@
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
u32 curr_index;
- _lock_rx_hashtbl_bh(bond);
+ spin_lock_bh(&bond->mode_lock);
curr_index = bond_info->rx_hashtbl_used_head;
while (curr_index != RLB_NULL_INDEX) {
@@ -994,7 +925,7 @@
curr_index = next_index;
}
- _unlock_rx_hashtbl_bh(bond);
+ spin_unlock_bh(&bond->mode_lock);
}
/*********************** tlb/rlb shared functions *********************/
@@ -1091,8 +1022,9 @@
return 0;
}
- /* for rlb each slave must have a unique hw mac addresses so that */
- /* each slave will receive packets destined to a different mac */
+ /* for rlb each slave must have a unique hw mac address so that
+ * each slave will receive packets destined to a different mac
+ */
memcpy(s_addr.sa_data, addr, dev->addr_len);
s_addr.sa_family = dev->type;
if (dev_set_mac_address(dev, &s_addr)) {
@@ -1103,13 +1035,10 @@
return 0;
}
-/*
- * Swap MAC addresses between two slaves.
+/* Swap MAC addresses between two slaves.
*
* Called with RTNL held, and no other locks.
- *
*/
-
static void alb_swap_mac_addr(struct slave *slave1, struct slave *slave2)
{
u8 tmp_mac_addr[ETH_ALEN];
@@ -1120,8 +1049,7 @@
}
-/*
- * Send learning packets after MAC address swap.
+/* Send learning packets after MAC address swap.
*
* Called with RTNL and no other locks
*/
@@ -1194,7 +1122,6 @@
found_slave = bond_slave_has_mac(bond, slave->perm_hwaddr);
if (found_slave) {
- /* locking: needs RTNL and nothing else */
alb_swap_mac_addr(slave, found_slave);
alb_fasten_mac_swap(bond, slave, found_slave);
}
@@ -1243,7 +1170,8 @@
return 0;
/* Try setting slave mac to bond address and fall-through
- to code handling that situation below... */
+ * to code handling that situation below...
+ */
alb_set_slave_mac_addr(slave, bond->dev->dev_addr);
}
@@ -1351,7 +1279,6 @@
if (rlb_enabled) {
bond->alb_info.rlb_enabled = 1;
- /* initialize rlb */
res = rlb_initialize(bond);
if (res) {
tlb_deinitialize(bond);
@@ -1375,7 +1302,7 @@
}
static int bond_do_alb_xmit(struct sk_buff *skb, struct bonding *bond,
- struct slave *tx_slave)
+ struct slave *tx_slave)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
struct ethhdr *eth_data = eth_hdr(skb);
@@ -1398,9 +1325,9 @@
}
if (tx_slave && bond->params.tlb_dynamic_lb) {
- _lock_tx_hashtbl(bond);
+ spin_lock(&bond->mode_lock);
__tlb_clear_slave(bond, tx_slave, 0);
- _unlock_tx_hashtbl(bond);
+ spin_unlock(&bond->mode_lock);
}
/* no suitable interface, frame not sent */
@@ -1595,13 +1522,6 @@
if (bond_info->lp_counter >= BOND_ALB_LP_TICKS(bond)) {
bool strict_match;
- /* change of curr_active_slave involves swapping of mac addresses.
- * in order to avoid this swapping from happening while
- * sending the learning packets, the curr_slave_lock must be held for
- * read.
- */
- read_lock(&bond->curr_slave_lock);
-
bond_for_each_slave_rcu(bond, slave, iter) {
/* If updating current_active, use all currently
* user mac addreses (!strict_match). Otherwise, only
@@ -1613,17 +1533,11 @@
alb_send_learning_packets(slave, slave->dev->dev_addr,
strict_match);
}
-
- read_unlock(&bond->curr_slave_lock);
-
bond_info->lp_counter = 0;
}
/* rebalance tx traffic */
if (bond_info->tx_rebalance_counter >= BOND_TLB_REBALANCE_TICKS) {
-
- read_lock(&bond->curr_slave_lock);
-
bond_for_each_slave_rcu(bond, slave, iter) {
tlb_clear_slave(bond, slave, 1);
if (slave == rcu_access_pointer(bond->curr_active_slave)) {
@@ -1633,19 +1547,14 @@
bond_info->unbalanced_load = 0;
}
}
-
- read_unlock(&bond->curr_slave_lock);
-
bond_info->tx_rebalance_counter = 0;
}
- /* handle rlb stuff */
if (bond_info->rlb_enabled) {
if (bond_info->primary_is_promisc &&
(++bond_info->rlb_promisc_timeout_counter >= RLB_PROMISC_TIMEOUT)) {
- /*
- * dev_set_promiscuity requires rtnl and
+ /* dev_set_promiscuity requires rtnl and
* nothing else. Avoid race with bond_close.
*/
rcu_read_unlock();
@@ -1715,8 +1624,7 @@
return 0;
}
-/*
- * Remove slave from tlb and rlb hash tables, and fix up MAC addresses
+/* Remove slave from tlb and rlb hash tables, and fix up MAC addresses
* if necessary.
*
* Caller must hold RTNL and no other locks
@@ -1739,7 +1647,6 @@
}
-/* Caller must hold bond lock for read */
void bond_alb_handle_link_change(struct bonding *bond, struct slave *slave, char link)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
@@ -1775,21 +1682,14 @@
* Set the bond->curr_active_slave to @new_slave and handle
* mac address swapping and promiscuity changes as needed.
*
- * If new_slave is NULL, caller must hold curr_slave_lock for write
- *
- * If new_slave is not NULL, caller must hold RTNL, curr_slave_lock
- * for write. Processing here may sleep, so no other locks may be held.
+ * Caller must hold RTNL
*/
void bond_alb_handle_active_change(struct bonding *bond, struct slave *new_slave)
- __releases(&bond->curr_slave_lock)
- __acquires(&bond->curr_slave_lock)
{
struct slave *swap_slave;
struct slave *curr_active;
- curr_active = rcu_dereference_protected(bond->curr_active_slave,
- !new_slave ||
- lockdep_is_held(&bond->curr_slave_lock));
+ curr_active = rtnl_dereference(bond->curr_active_slave);
if (curr_active == new_slave)
return;
@@ -1811,8 +1711,7 @@
if (!swap_slave)
swap_slave = bond_slave_has_mac(bond, bond->dev->dev_addr);
- /*
- * Arrange for swap_slave and new_slave to temporarily be
+ /* Arrange for swap_slave and new_slave to temporarily be
* ignored so we can mess with their MAC addresses without
* fear of interference from transmit activity.
*/
@@ -1820,10 +1719,6 @@
tlb_clear_slave(bond, swap_slave, 1);
tlb_clear_slave(bond, new_slave, 1);
- write_unlock_bh(&bond->curr_slave_lock);
-
- ASSERT_RTNL();
-
/* in TLB mode, the slave might flip down/up with the old dev_addr,
* and thus filter bond->dev_addr's packets, so force bond's mac
*/
@@ -1852,8 +1747,6 @@
alb_send_learning_packets(new_slave, bond->dev->dev_addr,
false);
}
-
- write_lock_bh(&bond->curr_slave_lock);
}
/* Called with RTNL */
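For illustration only (not part of the applied diff): the per-mode tlb/rlb hash-table locks above collapse into the single bond->mode_lock, and the comment in tlb_choose_channel() captures the rule for choosing between the _bh and plain lock variants. A minimal sketch of both call patterns, using a hypothetical function name:

	static void mode_lock_usage_sketch(struct bonding *bond)
	{
		/* process context: bottom halves must also be disabled */
		spin_lock_bh(&bond->mode_lock);
		/* ... read or modify the tlb/rlb hash tables ... */
		spin_unlock_bh(&bond->mode_lock);

		/* xmit path: softirqs are already off, a plain lock suffices */
		spin_lock(&bond->mode_lock);
		/* ... */
		spin_unlock(&bond->mode_lock);
	}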
diff --git a/drivers/net/bonding/bond_alb.h b/drivers/net/bonding/bond_alb.h
index aaeac61..3c6a7ff 100644
--- a/drivers/net/bonding/bond_alb.h
+++ b/drivers/net/bonding/bond_alb.h
@@ -147,7 +147,6 @@
struct alb_bond_info {
struct tlb_client_info *tx_hashtbl; /* Dynamically allocated */
- spinlock_t tx_hashtbl_lock;
u32 unbalanced_load;
int tx_rebalance_counter;
int lp_counter;
@@ -156,7 +155,6 @@
/* -------- rlb parameters -------- */
int rlb_enabled;
struct rlb_client_info *rx_hashtbl; /* Receive hash table */
- spinlock_t rx_hashtbl_lock;
u32 rx_hashtbl_used_head;
u8 rx_ntt; /* flag - need to transmit
* to all rx clients
diff --git a/drivers/net/bonding/bond_debugfs.c b/drivers/net/bonding/bond_debugfs.c
index 280971b..8f99082 100644
--- a/drivers/net/bonding/bond_debugfs.c
+++ b/drivers/net/bonding/bond_debugfs.c
@@ -13,9 +13,7 @@
static struct dentry *bonding_debug_root;
-/*
- * Show RLB hash table
- */
+/* Show RLB hash table */
static int bond_debug_rlb_hash_show(struct seq_file *m, void *v)
{
struct bonding *bond = m->private;
@@ -29,7 +27,7 @@
seq_printf(m, "SourceIP DestinationIP "
"Destination MAC DEV\n");
- spin_lock_bh(&(BOND_ALB_INFO(bond).rx_hashtbl_lock));
+ spin_lock_bh(&bond->mode_lock);
hash_index = bond_info->rx_hashtbl_used_head;
for (; hash_index != RLB_NULL_INDEX;
@@ -42,7 +40,7 @@
client_info->slave->dev->name);
}
- spin_unlock_bh(&(BOND_ALB_INFO(bond).rx_hashtbl_lock));
+ spin_unlock_bh(&bond->mode_lock);
return 0;
}
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index b43b2df..5390475 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -175,7 +175,7 @@
"the same MAC; 0 for none (default), "
"1 for active, 2 for follow");
module_param(all_slaves_active, int, 0);
-MODULE_PARM_DESC(all_slaves_active, "Keep all frames received on an interface"
+MODULE_PARM_DESC(all_slaves_active, "Keep all frames received on an interface "
"by setting active flag for all slaves; "
"0 for never (default), 1 for always.");
module_param(resend_igmp, int, 0);
@@ -253,8 +253,7 @@
dev_queue_xmit(skb);
}
-/*
- * In the following 2 functions, bond_vlan_rx_add_vid and bond_vlan_rx_kill_vid,
+/* In the following 2 functions, bond_vlan_rx_add_vid and bond_vlan_rx_kill_vid,
* We don't protect the slave list iteration with a lock because:
* a. This operation is performed in IOCTL context,
* b. The operation is protected by the RTNL semaphore in the 8021q code,
@@ -326,8 +325,7 @@
/*------------------------------- Link status -------------------------------*/
-/*
- * Set the carrier state for the master according to the state of its
+/* Set the carrier state for the master according to the state of its
* slaves. If any slaves are up, the master is up. In 802.3ad mode,
* do special 802.3ad magic.
*
@@ -362,8 +360,7 @@
return 0;
}
-/*
- * Get link speed and duplex from the slave's base driver
+/* Get link speed and duplex from the slave's base driver
* using ethtool. If for some reason the call fails or the
* values are invalid, set speed and duplex to -1,
* and return.
@@ -416,8 +413,7 @@
}
}
-/*
- * if <dev> supports MII link status reporting, check its link status.
+/* if <dev> supports MII link status reporting, check its link status.
*
* We either do MII/ETHTOOL ioctls, or check netif_carrier_ok(),
* depending upon the setting of the use_carrier parameter.
@@ -454,14 +450,14 @@
/* Ethtool can't be used, fallback to MII ioctls. */
ioctl = slave_ops->ndo_do_ioctl;
if (ioctl) {
- /* TODO: set pointer to correct ioctl on a per team member */
- /* bases to make this more efficient. that is, once */
- /* we determine the correct ioctl, we will always */
- /* call it and not the others for that team */
- /* member. */
+ /* TODO: set pointer to correct ioctl on a per team member
+ * basis to make this more efficient. That is, once
+ * we determine the correct ioctl, we will always
+ * call it and not the others for that team
+ * member.
+ */
- /*
- * We cannot assume that SIOCGMIIPHY will also read a
+ /* We cannot assume that SIOCGMIIPHY will also read a
* register; not all network drivers (e.g., e100)
* support that.
*/
@@ -476,8 +472,7 @@
}
}
- /*
- * If reporting, report that either there's no dev->do_ioctl,
+ /* If reporting, report that either there's no dev->do_ioctl,
* or both SIOCGMIIREG and get_link failed (meaning that we
* cannot report link status). If not reporting, pretend
* we're ok.
@@ -487,9 +482,7 @@
/*----------------------------- Multicast list ------------------------------*/
-/*
- * Push the promiscuity flag down to appropriate slaves
- */
+/* Push the promiscuity flag down to appropriate slaves */
static int bond_set_promiscuity(struct bonding *bond, int inc)
{
struct list_head *iter;
@@ -512,9 +505,7 @@
return err;
}
-/*
- * Push the allmulti flag down to all slaves
- */
+/* Push the allmulti flag down to all slaves */
static int bond_set_allmulti(struct bonding *bond, int inc)
{
struct list_head *iter;
@@ -537,8 +528,7 @@
return err;
}
-/*
- * Retrieve the list of registered multicast addresses for the bonding
+/* Retrieve the list of registered multicast addresses for the bonding
* device and retransmit an IGMP JOIN request to the current active
* slave.
*/
@@ -560,8 +550,7 @@
rtnl_unlock();
}
-/* Flush bond's hardware addresses from slave
- */
+/* Flush bond's hardware addresses from slave */
static void bond_hw_addr_flush(struct net_device *bond_dev,
struct net_device *slave_dev)
{
@@ -588,8 +577,6 @@
static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
struct slave *old_active)
{
- ASSERT_RTNL();
-
if (old_active) {
if (bond->dev->flags & IFF_PROMISC)
dev_set_promiscuity(old_active->dev, -1);
@@ -632,18 +619,15 @@
call_netdevice_notifiers(NETDEV_CHANGEADDR, bond_dev);
}
-/*
- * bond_do_fail_over_mac
+/* bond_do_fail_over_mac
*
* Perform special MAC address swapping for fail_over_mac settings
*
- * Called with RTNL, curr_slave_lock for write_bh.
+ * Called with RTNL
*/
static void bond_do_fail_over_mac(struct bonding *bond,
struct slave *new_active,
struct slave *old_active)
- __releases(&bond->curr_slave_lock)
- __acquires(&bond->curr_slave_lock)
{
u8 tmp_mac[ETH_ALEN];
struct sockaddr saddr;
@@ -651,23 +635,17 @@
switch (bond->params.fail_over_mac) {
case BOND_FOM_ACTIVE:
- if (new_active) {
- write_unlock_bh(&bond->curr_slave_lock);
+ if (new_active)
bond_set_dev_addr(bond->dev, new_active->dev);
- write_lock_bh(&bond->curr_slave_lock);
- }
break;
case BOND_FOM_FOLLOW:
- /*
- * if new_active && old_active, swap them
+ /* if new_active && old_active, swap them
* if just old_active, do nothing (going to no active slave)
* if just new_active, set new_active to bond's MAC
*/
if (!new_active)
return;
- write_unlock_bh(&bond->curr_slave_lock);
-
if (old_active) {
ether_addr_copy(tmp_mac, new_active->dev->dev_addr);
ether_addr_copy(saddr.sa_data,
@@ -696,7 +674,6 @@
netdev_err(bond->dev, "Error %d setting MAC of slave %s\n",
-rv, new_active->dev->name);
out:
- write_lock_bh(&bond->curr_slave_lock);
break;
default:
netdev_err(bond->dev, "bond_do_fail_over_mac impossible: bad policy %d\n",
@@ -709,7 +686,7 @@
static bool bond_should_change_active(struct bonding *bond)
{
struct slave *prim = rtnl_dereference(bond->primary_slave);
- struct slave *curr = bond_deref_active_protected(bond);
+ struct slave *curr = rtnl_dereference(bond->curr_active_slave);
if (!prim || !curr || curr->link != BOND_LINK_UP)
return true;
@@ -785,15 +762,15 @@
* because it is apparently the best available slave we have, even though its
* updelay hasn't timed out yet.
*
- * If new_active is not NULL, caller must hold curr_slave_lock for write_bh.
+ * Caller must hold RTNL.
*/
void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
{
struct slave *old_active;
- old_active = rcu_dereference_protected(bond->curr_active_slave,
- !new_active ||
- lockdep_is_held(&bond->curr_slave_lock));
+ ASSERT_RTNL();
+
+ old_active = rtnl_dereference(bond->curr_active_slave);
if (old_active == new_active)
return;
@@ -861,21 +838,18 @@
bond_should_notify_peers(bond);
}
- write_unlock_bh(&bond->curr_slave_lock);
-
call_netdevice_notifiers(NETDEV_BONDING_FAILOVER, bond->dev);
if (should_notify_peers)
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
bond->dev);
-
- write_lock_bh(&bond->curr_slave_lock);
}
}
/* resend IGMP joins since active slave has changed or
* all were sent on curr_active_slave.
* resend only if bond is brought up with the affected
- * bonding modes and the retransmission is enabled */
+ * bonding modes and the retransmission is enabled
+ */
if (netif_running(bond->dev) && (bond->params.resend_igmp > 0) &&
((bond_uses_primary(bond) && new_active) ||
BOND_MODE(bond) == BOND_MODE_ROUNDROBIN)) {
@@ -893,15 +867,17 @@
* - The primary_slave has got its link back.
* - A slave has got its link back and there's no old curr_active_slave.
*
- * Caller must hold curr_slave_lock for write_bh.
+ * Caller must hold RTNL.
*/
void bond_select_active_slave(struct bonding *bond)
{
struct slave *best_slave;
int rv;
+ ASSERT_RTNL();
+
best_slave = bond_find_best_slave(bond);
- if (best_slave != bond_deref_active_protected(bond)) {
+ if (best_slave != rtnl_dereference(bond->curr_active_slave)) {
bond_change_active_slave(bond, best_slave);
rv = bond_set_carrier(bond);
if (!rv)
@@ -1241,8 +1217,7 @@
slave_dev->name);
}
- /*
- * Old ifenslave binaries are no longer supported. These can
+ /* Old ifenslave binaries are no longer supported. These can
* be identified with moderate accuracy by the state of the slave:
* the current ifenslave will set the interface down prior to
* enslaving it; the old ifenslave will not.
@@ -1314,7 +1289,8 @@
call_netdevice_notifiers(NETDEV_JOIN, slave_dev);
/* If this is the first slave, then we need to set the master's hardware
- * address to be the same as the slave's. */
+ * address to be the same as the slave's.
+ */
if (!bond_has_slaves(bond) &&
bond->dev->addr_assign_type == NET_ADDR_RANDOM)
bond_set_dev_addr(bond->dev, slave_dev);
@@ -1327,8 +1303,7 @@
new_slave->bond = bond;
new_slave->dev = slave_dev;
- /*
- * Set the new_slave's queue_id to be zero. Queue ID mapping
+ /* Set the new_slave's queue_id to be zero. Queue ID mapping
* is set via sysfs or module option if desired.
*/
new_slave->queue_id = 0;
@@ -1341,8 +1316,7 @@
goto err_free;
}
- /*
- * Save slave's original ("permanent") mac address for modes
+ /* Save slave's original ("permanent") mac address for modes
* that need it, and for restoring it upon release, and then
* set it to the master's address
*/
@@ -1350,8 +1324,7 @@
if (!bond->params.fail_over_mac ||
BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
- /*
- * Set slave to master's mac address. The application already
+ /* Set slave to master's mac address. The application already
* set the master's mac address to that of the first slave
*/
memcpy(addr.sa_data, bond_dev->dev_addr, bond_dev->addr_len);
@@ -1437,8 +1410,7 @@
link_reporting = bond_check_dev_link(bond, slave_dev, 1);
if ((link_reporting == -1) && !bond->params.arp_interval) {
- /*
- * miimon is set but a bonded network driver
+ /* miimon is set but a bonded network driver
* does not support ETHTOOL/MII and
* arp_interval is not set. Note: if
* use_carrier is enabled, we will never go
@@ -1571,9 +1543,7 @@
if (bond_uses_primary(bond)) {
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
bond_select_active_slave(bond);
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
}
@@ -1601,10 +1571,8 @@
RCU_INIT_POINTER(bond->primary_slave, NULL);
if (rcu_access_pointer(bond->curr_active_slave) == new_slave) {
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
bond_change_active_slave(bond, NULL);
bond_select_active_slave(bond);
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
}
/* either primary_slave or curr_active_slave might've changed */
@@ -1642,10 +1610,9 @@
return res;
}
-/*
- * Try to release the slave device <slave> from the bond device <master>
+/* Try to release the slave device <slave> from the bond device <master>
* It is legal to access curr_active_slave without a lock because all the function
- * is write-locked. If "all" is true it means that the function is being called
+ * is RTNL-locked. If "all" is true it means that the function is being called
* while destroying a bond interface and all slaves are being released.
*
* The rules for slave state should be:
@@ -1691,14 +1658,8 @@
*/
netdev_rx_handler_unregister(slave_dev);
- if (BOND_MODE(bond) == BOND_MODE_8023AD) {
- /* Sync against bond_3ad_rx_indication and
- * bond_3ad_state_machine_handler
- */
- write_lock_bh(&bond->curr_slave_lock);
+ if (BOND_MODE(bond) == BOND_MODE_8023AD)
bond_3ad_unbind_slave(slave);
- write_unlock_bh(&bond->curr_slave_lock);
- }
netdev_info(bond_dev, "Releasing %s interface %s\n",
bond_is_active_slave(slave) ? "active" : "backup",
@@ -1720,11 +1681,8 @@
if (rtnl_dereference(bond->primary_slave) == slave)
RCU_INIT_POINTER(bond->primary_slave, NULL);
- if (oldcurrent == slave) {
- write_lock_bh(&bond->curr_slave_lock);
+ if (oldcurrent == slave)
bond_change_active_slave(bond, NULL);
- write_unlock_bh(&bond->curr_slave_lock);
- }
if (bond_is_lb(bond)) {
/* Must be called only after the slave has been
@@ -1738,16 +1696,11 @@
if (all) {
RCU_INIT_POINTER(bond->curr_active_slave, NULL);
} else if (oldcurrent == slave) {
- /*
- * Note that we hold RTNL over this sequence, so there
+ /* Note that we hold RTNL over this sequence, so there
* is no concern that another slave add/remove event
* will interfere.
*/
- write_lock_bh(&bond->curr_slave_lock);
-
bond_select_active_slave(bond);
-
- write_unlock_bh(&bond->curr_slave_lock);
}
if (!bond_has_slaves(bond)) {
@@ -1770,10 +1723,9 @@
netdev_info(bond_dev, "last VLAN challenged slave %s left bond %s - VLAN blocking is removed\n",
slave_dev->name, bond_dev->name);
- /* must do this from outside any spinlocks */
vlan_vids_del_by_dev(slave_dev, bond_dev);
- /* If the mode uses primary, then this cases was handled above by
+ /* If the mode uses primary, then this case was handled above by
* bond_change_active_slave(..., NULL)
*/
if (!bond_uses_primary(bond)) {
@@ -1813,7 +1765,7 @@
bond_free_slave(slave);
- return 0; /* deletion OK */
+ return 0;
}
/* A wrapper used because of ndo_del_link */
@@ -1822,10 +1774,9 @@
return __bond_release_one(bond_dev, slave_dev, false);
}
-/*
-* First release a slave and then destroy the bond if no more slaves are left.
-* Must be under rtnl_lock when this function is called.
-*/
+/* First release a slave and then destroy the bond if no more slaves are left.
+ * Must be under rtnl_lock when this function is called.
+ */
static int bond_release_and_destroy(struct net_device *bond_dev,
struct net_device *slave_dev)
{
@@ -1848,7 +1799,6 @@
info->bond_mode = BOND_MODE(bond);
info->miimon = bond->params.miimon;
-
info->num_slaves = bond->slave_cnt;
return 0;
@@ -1911,9 +1861,7 @@
/*FALLTHRU*/
case BOND_LINK_FAIL:
if (link_state) {
- /*
- * recovered before downdelay expired
- */
+ /* recovered before downdelay expired */
slave->link = BOND_LINK_UP;
slave->last_link_up = jiffies;
netdev_info(bond->dev, "link status up again after %d ms for interface %s\n",
@@ -2056,19 +2004,15 @@
}
do_failover:
- ASSERT_RTNL();
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
bond_select_active_slave(bond);
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
}
bond_set_carrier(bond);
}
-/*
- * bond_mii_monitor
+/* bond_mii_monitor
*
* Really a wrapper that splits the mii monitor into two phases: an
* inspection, then (if inspection indicates something needs to be done)
@@ -2140,8 +2084,7 @@
return ret;
}
-/*
- * We go to the (large) trouble of VLAN tagging ARP frames because
+/* We go to the (large) trouble of VLAN tagging ARP frames because
* switches in VLAN mode (especially if ports are configured as
* "native" to a VLAN) might not pass non-tagged frames.
*/
@@ -2368,8 +2311,7 @@
curr_active_slave = rcu_dereference(bond->curr_active_slave);
- /*
- * Backup slaves won't see the ARP reply, but do come through
+ /* Backup slaves won't see the ARP reply, but do come through
* here for each ARP probe (so we swap the sip/tip to validate
* the probe). In a "redundant switch, common router" type of
* configuration, the ARP probe will (hopefully) travel from
@@ -2409,8 +2351,7 @@
last_act + mod * delta_in_ticks + delta_in_ticks/2);
}
-/*
- * this function is called regularly to monitor each slave's link
+/* This function is called regularly to monitor each slave's link
* ensuring that traffic is being sent and received when arp monitoring
* is used in load-balancing mode. if the adapter has been dormant, then an
* arp is transmitted to generate traffic. see activebackup_arp_monitor for
@@ -2506,15 +2447,8 @@
if (slave_state_changed) {
bond_slave_state_change(bond);
} else if (do_failover) {
- /* the bond_select_active_slave must hold RTNL
- * and curr_slave_lock for write.
- */
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
-
bond_select_active_slave(bond);
-
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
}
rtnl_unlock();
@@ -2526,13 +2460,12 @@
msecs_to_jiffies(bond->params.arp_interval));
}
-/*
- * Called to inspect slaves for active-backup mode ARP monitor link state
+/* Called to inspect slaves for active-backup mode ARP monitor link state
* changes. Sets new_link in slaves to specify what action should take
* place for the slave. Returns 0 if no changes are found, >0 if changes
* to link states must be committed.
*
- * Called with rcu_read_lock hold.
+ * Called with rcu_read_lock held.
*/
static int bond_ab_arp_inspect(struct bonding *bond)
{
@@ -2553,16 +2486,14 @@
continue;
}
- /*
- * Give slaves 2*delta after being enslaved or made
+ /* Give slaves 2*delta after being enslaved or made
* active. This avoids bouncing, as the last receive
* times need a full ARP monitor cycle to be updated.
*/
if (bond_time_in_interval(bond, slave->last_link_up, 2))
continue;
- /*
- * Backup slave is down if:
+ /* Backup slave is down if:
* - No current_arp_slave AND
* - more than 3*delta since last receive AND
* - the bond has an IP address
@@ -2581,8 +2512,7 @@
commit++;
}
- /*
- * Active slave is down if:
+ /* Active slave is down if:
* - more than 2*delta since transmitting OR
* - (more than 2*delta since receive AND
* the bond has an IP address)
@@ -2599,8 +2529,7 @@
return commit;
}
-/*
- * Called to commit link state changes noted by inspection step of
+/* Called to commit link state changes noted by inspection step of
* active-backup mode ARP monitor.
*
* Called with RTNL hold.
@@ -2668,21 +2597,17 @@
}
do_failover:
- ASSERT_RTNL();
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
bond_select_active_slave(bond);
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
}
bond_set_carrier(bond);
}
-/*
- * Send ARP probes for active-backup mode ARP monitor.
+/* Send ARP probes for active-backup mode ARP monitor.
*
- * Called with rcu_read_lock hold.
+ * Called with rcu_read_lock held.
*/
static bool bond_ab_arp_probe(struct bonding *bond)
{
@@ -2822,9 +2747,7 @@
/*-------------------------- netdev event handling --------------------------*/
-/*
- * Change device name
- */
+/* Change device name */
static int bond_event_changename(struct bonding *bond)
{
bond_remove_proc_entry(bond);
@@ -2901,13 +2824,9 @@
}
break;
case NETDEV_DOWN:
- /*
- * ... Or is it this?
- */
break;
case NETDEV_CHANGEMTU:
- /*
- * TODO: Should slaves be allowed to
+ /* TODO: Should slaves be allowed to
* independently alter their MTU? For
* an active-backup bond, slaves need
* not be the same type of device, so
@@ -2939,9 +2858,7 @@
primary ? slave_dev->name : "none");
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
bond_select_active_slave(bond);
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
break;
case NETDEV_FEAT_CHANGE:
@@ -2958,8 +2875,7 @@
return NOTIFY_DONE;
}
-/*
- * bond_netdev_event: handle netdev notifier chain events.
+/* bond_netdev_event: handle netdev notifier chain events.
*
* This function receives events for the netdev chain. The caller (an
* ioctl handler calling blocking_notifier_call_chain) holds the necessary
@@ -3106,7 +3022,6 @@
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
- read_lock(&bond->curr_slave_lock);
bond_for_each_slave(bond, slave, iter) {
if (bond_uses_primary(bond) &&
slave != rcu_access_pointer(bond->curr_active_slave)) {
@@ -3117,7 +3032,6 @@
BOND_SLAVE_NOTIFY_NOW);
}
}
- read_unlock(&bond->curr_slave_lock);
}
bond_work_init_all(bond);
@@ -3231,22 +3145,17 @@
mii->phy_id = 0;
/* Fall Through */
case SIOCGMIIREG:
- /*
- * We do this again just in case we were called by SIOCGMIIREG
+ /* We do this again just in case we were called by SIOCGMIIREG
* instead of SIOCGMIIPHY.
*/
mii = if_mii(ifr);
if (!mii)
return -EINVAL;
-
if (mii->reg_num == 1) {
mii->val_out = 0;
- read_lock(&bond->curr_slave_lock);
if (netif_carrier_ok(bond->dev))
mii->val_out = BMSR_LSTATUS;
-
- read_unlock(&bond->curr_slave_lock);
}
return 0;
@@ -3277,7 +3186,6 @@
return res;
default:
- /* Go on */
break;
}
@@ -3339,7 +3247,6 @@
struct list_head *iter;
struct slave *slave;
-
rcu_read_lock();
if (bond_uses_primary(bond)) {
slave = rcu_dereference(bond->curr_active_slave);
@@ -3377,8 +3284,7 @@
if (ret)
return ret;
- /*
- * Assign slave's neigh_cleanup to neighbour in case cleanup is called
+ /* Assign slave's neigh_cleanup to neighbour in case cleanup is called
* after the last slave has been detached. Assumes that all slaves
* utilize the same neigh_cleanup (true at this writing as only user
* is ipoib).
@@ -3391,8 +3297,7 @@
return parms.neigh_setup(n);
}
-/*
- * The bonding ndo_neigh_setup is called at init time beofre any
+/* The bonding ndo_neigh_setup is called at init time before any
* slave exists. So we must declare proxy setup function which will
* be used at run time to resolve the actual slave neigh param setup.
*
@@ -3410,9 +3315,7 @@
return 0;
}
-/*
- * Change the MTU of all of a master's slaves to match the master
- */
+/* Change the MTU of all of a master's slaves to match the master */
static int bond_change_mtu(struct net_device *bond_dev, int new_mtu)
{
struct bonding *bond = netdev_priv(bond_dev);
@@ -3465,8 +3368,7 @@
return res;
}
-/*
- * Change HW address
+/* Change HW address
*
* Note that many devices must be down to change the HW address, and
* downing the master releases all slaves. We can make bonds full of
@@ -3624,20 +3526,25 @@
*/
if (iph->protocol == IPPROTO_IGMP && skb->protocol == htons(ETH_P_IP)) {
slave = rcu_dereference(bond->curr_active_slave);
- if (slave && bond_slave_can_tx(slave))
+ if (slave)
bond_dev_queue_xmit(bond, skb, slave->dev);
else
bond_xmit_slave_id(bond, skb, 0);
} else {
- slave_id = bond_rr_gen_slave_id(bond);
- bond_xmit_slave_id(bond, skb, slave_id % bond->slave_cnt);
+ int slave_cnt = ACCESS_ONCE(bond->slave_cnt);
+
+ if (likely(slave_cnt)) {
+ slave_id = bond_rr_gen_slave_id(bond);
+ bond_xmit_slave_id(bond, skb, slave_id % slave_cnt);
+ } else {
+ dev_kfree_skb_any(skb);
+ }
}
return NETDEV_TX_OK;
}
-/*
- * in active-backup mode, we know that bond->curr_active_slave is always valid if
+/* In active-backup mode, we know that bond->curr_active_slave is always valid if
* the bond has a usable interface.
*/
static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_dev)
@@ -3661,8 +3568,13 @@
static int bond_xmit_xor(struct sk_buff *skb, struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
+ int slave_cnt = ACCESS_ONCE(bond->slave_cnt);
- bond_xmit_slave_id(bond, skb, bond_xmit_hash(bond, skb) % bond->slave_cnt);
+ if (likely(slave_cnt))
+ bond_xmit_slave_id(bond, skb,
+ bond_xmit_hash(bond, skb) % slave_cnt);
+ else
+ dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
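For illustration only (not part of the applied diff): both transmit paths above guard against the bond losing its last slave between the emptiness check and the modulo. Reading bond->slave_cnt twice could observe zero on the second read and divide by it; ACCESS_ONCE() forces a single load so the empty case drops the frame instead. A condensed sketch, with a hypothetical function name and hash argument:

	static netdev_tx_t xmit_snapshot_sketch(struct sk_buff *skb,
						struct bonding *bond, u32 hash)
	{
		int slave_cnt = ACCESS_ONCE(bond->slave_cnt);	/* single load */

		if (likely(slave_cnt))
			bond_xmit_slave_id(bond, skb, hash % slave_cnt);
		else
			dev_kfree_skb_any(skb);	/* no slaves: drop, don't divide */

		return NETDEV_TX_OK;
	}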
@@ -3685,7 +3597,6 @@
bond_dev->name, __func__);
continue;
}
- /* bond_dev_queue_xmit always returns 0 */
bond_dev_queue_xmit(bond, skb2, slave->dev);
}
}
@@ -3699,9 +3610,7 @@
/*------------------------- Device initialization ---------------------------*/
-/*
- * Lookup the slave that corresponds to a qid
- */
+/* Lookup the slave that corresponds to a qid */
static inline int bond_slave_override(struct bonding *bond,
struct sk_buff *skb)
{
@@ -3730,17 +3639,14 @@
static u16 bond_select_queue(struct net_device *dev, struct sk_buff *skb,
void *accel_priv, select_queue_fallback_t fallback)
{
- /*
- * This helper function exists to help dev_pick_tx get the correct
+ /* This helper function exists to help dev_pick_tx get the correct
* destination queue. Using a helper function skips a call to
* skb_tx_hash and will put the skbs in the queue we expect on their
* way down to the bonding driver.
*/
u16 txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
- /*
- * Save the original txq to restore before passing to the driver
- */
+ /* Save the original txq to restore before passing to the driver */
qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
if (unlikely(txq >= dev->real_num_tx_queues)) {
@@ -3788,8 +3694,7 @@
struct bonding *bond = netdev_priv(dev);
netdev_tx_t ret = NETDEV_TX_OK;
- /*
- * If we risk deadlock from transmitting this in the
+ /* If we risk deadlock from transmitting this in the
* netpoll path, tell netpoll to queue the frame for later tx
*/
if (unlikely(is_netpoll_tx_blocked(dev)))
@@ -3892,8 +3797,7 @@
{
struct bonding *bond = netdev_priv(bond_dev);
- /* initialize rwlocks */
- rwlock_init(&bond->curr_slave_lock);
+ spin_lock_init(&bond->mode_lock);
bond->params = bonding_defaults;
/* Initialize pointers */
@@ -3914,8 +3818,7 @@
bond_dev->priv_flags |= IFF_BONDING | IFF_UNICAST_FLT;
bond_dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | IFF_TX_SKB_SHARING);
- /* don't acquire bond device's netif_tx_lock when
- * transmitting */
+ /* don't acquire bond device's netif_tx_lock when transmitting */
bond_dev->features |= NETIF_F_LLTX;
/* By default, we declare the bond to be fully
@@ -3938,10 +3841,9 @@
bond_dev->features |= bond_dev->hw_features;
}
-/*
-* Destroy a bonding device.
-* Must be under rtnl_lock when this function is called.
-*/
+/* Destroy a bonding device.
+ * Must be under rtnl_lock when this function is called.
+ */
static void bond_uninit(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
@@ -3969,9 +3871,7 @@
const struct bond_opt_value *valptr;
int arp_all_targets_value;
- /*
- * Convert string parameters.
- */
+ /* Convert string parameters. */
if (mode) {
bond_opt_initstr(&newval, mode);
valptr = bond_opt_parse(bond_opt_get(BOND_OPT_MODE), &newval);
@@ -4148,9 +4048,9 @@
for (arp_ip_count = 0, i = 0;
(arp_ip_count < BOND_MAX_ARP_TARGETS) && arp_ip_target[i]; i++) {
- /* not complete check, but should be good enough to
- catch mistakes */
__be32 ip;
+
+ /* not a complete check, but good enough to catch mistakes */
if (!in4_pton(arp_ip_target[i], -1, (u8 *)&ip, -1, NULL) ||
!bond_is_ip_target_ok(ip)) {
pr_warn("Warning: bad arp_ip_target module parameter (%s), ARP monitoring will not be performed\n",
@@ -4333,26 +4233,14 @@
dev->qdisc_tx_busylock = &bonding_tx_busylock_key;
}
-/*
- * Called from registration process
- */
+/* Called from registration process */
static int bond_init(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct bond_net *bn = net_generic(dev_net(bond_dev), bond_net_id);
- struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
netdev_dbg(bond_dev, "Begin bond_init\n");
- /*
- * Initialize locks that may be required during
- * en/deslave operations. All of the bond_open work
- * (of which this is part) should really be moved to
- * a phase prior to dev_open
- */
- spin_lock_init(&(bond_info->tx_hashtbl_lock));
- spin_lock_init(&(bond_info->rx_hashtbl_lock));
-
bond->wq = create_singlethread_workqueue(bond_dev->name);
if (!bond->wq)
return -ENOMEM;
@@ -4499,9 +4387,7 @@
unregister_pernet_subsys(&bond_net_ops);
#ifdef CONFIG_NET_POLL_CONTROLLER
- /*
- * Make sure we don't have an imbalance on our netpoll blocking
- */
+ /* Make sure we don't have an imbalance on our netpoll blocking */
WARN_ON(atomic_read(&netpoll_block_tx));
#endif
}
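For illustration only (not part of the applied diff): with curr_slave_lock gone, every failover site in bond_main.c reduces to the same RTNL-protected sequence, since bond_select_active_slave() now asserts RTNL itself. A condensed sketch, with a hypothetical wrapper name:

	static void failover_sketch(struct bonding *bond)
	{
		ASSERT_RTNL();		/* all callers hold RTNL */
		block_netpoll_tx();
		bond_select_active_slave(bond);
		unblock_netpoll_tx();
	}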
diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
index 534c060..b62697f 100644
--- a/drivers/net/bonding/bond_options.c
+++ b/drivers/net/bonding/bond_options.c
@@ -734,15 +734,13 @@
}
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
-
/* check to see if we are clearing active */
if (!slave_dev) {
netdev_info(bond->dev, "Clearing current active slave\n");
RCU_INIT_POINTER(bond->curr_active_slave, NULL);
bond_select_active_slave(bond);
} else {
- struct slave *old_active = bond_deref_active_protected(bond);
+ struct slave *old_active = rtnl_dereference(bond->curr_active_slave);
struct slave *new_active = bond_slave_get_rtnl(slave_dev);
BUG_ON(!new_active);
@@ -765,8 +763,6 @@
}
}
}
-
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
return ret;
@@ -1066,7 +1062,6 @@
struct slave *slave;
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
p = strchr(primary, '\n');
if (p)
@@ -1103,7 +1098,6 @@
primary, bond->dev->name);
out:
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
return 0;
@@ -1117,9 +1111,7 @@
bond->params.primary_reselect = newval->value;
block_netpoll_tx();
- write_lock_bh(&bond->curr_slave_lock);
bond_select_active_slave(bond);
- write_unlock_bh(&bond->curr_slave_lock);
unblock_netpoll_tx();
return 0;
diff --git a/drivers/net/bonding/bond_sysfs.c b/drivers/net/bonding/bond_sysfs.c
index 5555517..8ffbafd 100644
--- a/drivers/net/bonding/bond_sysfs.c
+++ b/drivers/net/bonding/bond_sysfs.c
@@ -91,7 +91,6 @@
* creates and deletes entire bonds.
*
* The class parameter is ignored.
- *
*/
static ssize_t bonding_store_bonds(struct class *cls,
struct class_attribute *attr,
diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h
index 78c461a..6140bf0 100644
--- a/drivers/net/bonding/bonding.h
+++ b/drivers/net/bonding/bonding.h
@@ -184,9 +184,7 @@
/*
* Here are the locking policies for the two bonding locks:
- *
- * 1) Get rcu_read_lock when reading or RTNL when writing slave list.
- * 2) Get bond->curr_slave_lock when reading/writing bond->curr_active_slave.
+ * Get rcu_read_lock when reading or RTNL when writing slave list.
*/
struct bonding {
struct net_device *dev; /* first - useful for panic debug */
@@ -197,7 +195,14 @@
s32 slave_cnt; /* never change this value outside the attach/detach wrappers */
int (*recv_probe)(const struct sk_buff *, struct bonding *,
struct slave *);
- rwlock_t curr_slave_lock;
+ /* mode_lock is used for mode-specific locking needs, currently used by:
+ * 3ad mode (4) - protect against running bond_3ad_unbind_slave() and
+ * bond_3ad_state_machine_handler() concurrently and also
+ * the access to the state machine shared variables.
+ * TLB mode (5) - to sync the use and modifications of its hash table
+ * ALB mode (6) - to sync the use and modifications of its hash table
+ */
+ spinlock_t mode_lock;
u8 send_peer_notif;
u8 igmp_retrans;
#ifdef CONFIG_PROC_FS
@@ -227,10 +232,6 @@
#define bond_slave_get_rtnl(dev) \
((struct slave *) rtnl_dereference(dev->rx_handler_data))
-#define bond_deref_active_protected(bond) \
- rcu_dereference_protected(bond->curr_active_slave, \
- lockdep_is_held(&bond->curr_slave_lock))
-
struct bond_vlan_tag {
__be16 vlan_proto;
unsigned short vlan_id;
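For illustration only (not part of the applied diff): with bond_deref_active_protected() removed, curr_active_slave is reached either under RTNL on the updater side or inside an RCU read-side section on the fast path. A minimal sketch of the two dereference styles, using a hypothetical function name:

	static void curr_active_deref_sketch(struct bonding *bond)
	{
		struct slave *s;

		ASSERT_RTNL();		/* updater side: RTNL pins the pointer */
		s = rtnl_dereference(bond->curr_active_slave);

		rcu_read_lock();	/* reader side: RCU critical section */
		s = rcu_dereference(bond->curr_active_slave);
		rcu_read_unlock();
		(void)s;		/* silence unused-variable warnings */
	}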
diff --git a/drivers/net/can/at91_can.c b/drivers/net/can/at91_can.c
index f07fa89..05e1aa0 100644
--- a/drivers/net/can/at91_can.c
+++ b/drivers/net/can/at91_can.c
@@ -1123,7 +1123,9 @@
struct at91_priv *priv = netdev_priv(dev);
int err;
- clk_enable(priv->clk);
+ err = clk_prepare_enable(priv->clk);
+ if (err)
+ return err;
/* check or determine and set bittime */
err = open_candev(dev);
@@ -1149,7 +1151,7 @@
out_close:
close_candev(dev);
out:
- clk_disable(priv->clk);
+ clk_disable_unprepare(priv->clk);
return err;
}
@@ -1166,7 +1168,7 @@
at91_chip_stop(dev, CAN_STATE_STOPPED);
free_irq(dev->irq, dev);
- clk_disable(priv->clk);
+ clk_disable_unprepare(priv->clk);
close_candev(dev);
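For illustration only (not part of the applied diff): the at91_can change moves from the bare clk_enable()/clk_disable() pair to the common clock framework's prepare+enable pairing. clk_prepare_enable() may sleep and can fail, so its return value must be checked, and teardown mirrors it with clk_disable_unprepare(). A minimal sketch, with a hypothetical function name:

	static int clk_pairing_sketch(struct at91_priv *priv)
	{
		int err;

		err = clk_prepare_enable(priv->clk);	/* prepare (may sleep) + enable */
		if (err)
			return err;

		/* ... the peripheral is clocked here ... */

		clk_disable_unprepare(priv->clk);	/* disable + unprepare */
		return 0;
	}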
diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
index 109cb44..fb279d6 100644
--- a/drivers/net/can/c_can/c_can_platform.c
+++ b/drivers/net/can/c_can/c_can_platform.c
@@ -97,14 +97,14 @@
ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance);
writel(ctrl, priv->raminit_ctrlreg);
ctrl &= ~CAN_RAMINIT_DONE_MASK(priv->instance);
- c_can_hw_raminit_wait_ti(priv, ctrl, mask);
+ c_can_hw_raminit_wait_ti(priv, mask, ctrl);
if (enable) {
/* Set start bit and wait for the done bit. */
ctrl |= CAN_RAMINIT_START_MASK(priv->instance);
writel(ctrl, priv->raminit_ctrlreg);
ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance);
- c_can_hw_raminit_wait_ti(priv, ctrl, mask);
+ c_can_hw_raminit_wait_ti(priv, mask, ctrl);
}
spin_unlock(&raminit_lock);
}
diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
index 2700865..60f86bd 100644
--- a/drivers/net/can/flexcan.c
+++ b/drivers/net/can/flexcan.c
@@ -62,7 +62,7 @@
#define FLEXCAN_MCR_BCC BIT(16)
#define FLEXCAN_MCR_LPRIO_EN BIT(13)
#define FLEXCAN_MCR_AEN BIT(12)
-#define FLEXCAN_MCR_MAXMB(x) ((x) & 0x1f)
+#define FLEXCAN_MCR_MAXMB(x) ((x) & 0x7f)
#define FLEXCAN_MCR_IDAM_A (0 << 8)
#define FLEXCAN_MCR_IDAM_B (1 << 8)
#define FLEXCAN_MCR_IDAM_C (2 << 8)
@@ -146,7 +146,9 @@
FLEXCAN_ESR_BOFF_INT | FLEXCAN_ESR_ERR_INT)
/* FLEXCAN interrupt flag register (IFLAG) bits */
-#define FLEXCAN_TX_BUF_ID 8
+/* Errata ERR005829 step7: Reserve first valid MB */
+#define FLEXCAN_TX_BUF_RESERVED 8
+#define FLEXCAN_TX_BUF_ID 9
#define FLEXCAN_IFLAG_BUF(x) BIT(x)
#define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7)
#define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6)
@@ -157,6 +159,17 @@
/* FLEXCAN message buffers */
#define FLEXCAN_MB_CNT_CODE(x) (((x) & 0xf) << 24)
+#define FLEXCAN_MB_CODE_RX_INACTIVE (0x0 << 24)
+#define FLEXCAN_MB_CODE_RX_EMPTY (0x4 << 24)
+#define FLEXCAN_MB_CODE_RX_FULL (0x2 << 24)
+#define FLEXCAN_MB_CODE_RX_OVERRRUN (0x6 << 24)
+#define FLEXCAN_MB_CODE_RX_RANSWER (0xa << 24)
+
+#define FLEXCAN_MB_CODE_TX_INACTIVE (0x8 << 24)
+#define FLEXCAN_MB_CODE_TX_ABORT (0x9 << 24)
+#define FLEXCAN_MB_CODE_TX_DATA (0xc << 24)
+#define FLEXCAN_MB_CODE_TX_TANSWER (0xe << 24)
+
#define FLEXCAN_MB_CNT_SRR BIT(22)
#define FLEXCAN_MB_CNT_IDE BIT(21)
#define FLEXCAN_MB_CNT_RTR BIT(20)
@@ -333,7 +346,7 @@
flexcan_write(reg, &regs->mcr);
while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
- usleep_range(10, 20);
+ udelay(10);
if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)
return -ETIMEDOUT;
@@ -352,7 +365,7 @@
flexcan_write(reg, &regs->mcr);
while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
- usleep_range(10, 20);
+ udelay(10);
if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
return -ETIMEDOUT;
@@ -371,7 +384,7 @@
flexcan_write(reg, &regs->mcr);
while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
- usleep_range(100, 200);
+ udelay(100);
if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
return -ETIMEDOUT;
@@ -390,7 +403,7 @@
flexcan_write(reg, &regs->mcr);
while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
- usleep_range(10, 20);
+ udelay(10);
if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)
return -ETIMEDOUT;
@@ -405,7 +418,7 @@
flexcan_write(FLEXCAN_MCR_SOFTRST, &regs->mcr);
while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST))
- usleep_range(10, 20);
+ udelay(10);
if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST)
return -ETIMEDOUT;
@@ -487,6 +500,14 @@
flexcan_write(can_id, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_id);
flexcan_write(ctrl, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
+ /* Errata ERR005829 step8:
+ * Write twice INACTIVE(0x8) code to first MB.
+ */
+ flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
+ &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
+ flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
+ &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
+
return NETDEV_TX_OK;
}
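For illustration only (not part of the applied diff): the workaround above reserves the first valid mailbox (FLEXCAN_TX_BUF_RESERVED) so it never transmits, and step 8 of errata ERR005829 calls for writing the INACTIVE code to that mailbox twice after each frame is queued. Condensed, the transmit-side sequence reads:

	/* real frame goes to FLEXCAN_TX_BUF_ID ... then, per the errata: */
	flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
		      &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
	flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
		      &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);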
@@ -803,6 +824,9 @@
stats->tx_bytes += can_get_echo_skb(dev, 0);
stats->tx_packets++;
can_led_event(dev, CAN_LED_EVENT_TX);
+ /* after sending a RTR frame mailbox is in RX mode */
+ flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
+ &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
flexcan_write((1 << FLEXCAN_TX_BUF_ID), &regs->iflag1);
netif_wake_queue(dev);
}
@@ -858,8 +882,8 @@
{
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->base;
- int err;
u32 reg_mcr, reg_ctrl, reg_crl2, reg_mecr;
+ int err, i;
/* enable module */
err = flexcan_chip_enable(priv);
@@ -926,8 +950,18 @@
netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl);
flexcan_write(reg_ctrl, &regs->ctrl);
- /* Abort any pending TX, mark Mailbox as INACTIVE */
- flexcan_write(FLEXCAN_MB_CNT_CODE(0x4),
+ /* clear and invalidate all mailboxes first */
+ for (i = FLEXCAN_TX_BUF_ID; i < ARRAY_SIZE(regs->cantxfg); i++) {
+ flexcan_write(FLEXCAN_MB_CODE_RX_INACTIVE,
+ &regs->cantxfg[i].can_ctrl);
+ }
+
+ /* Errata ERR005829: mark first TX mailbox as INACTIVE */
+ flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
+ &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
+
+ /* mark TX mailbox as INACTIVE */
+ flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
&regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
/* acceptance mask/acceptance code (accept everything) */
diff --git a/drivers/net/can/sja1000/peak_pci.c b/drivers/net/can/sja1000/peak_pci.c
index 7a85590..e5fac36 100644
--- a/drivers/net/can/sja1000/peak_pci.c
+++ b/drivers/net/can/sja1000/peak_pci.c
@@ -70,6 +70,8 @@
#define PEAK_PC_104P_DEVICE_ID 0x0006 /* PCAN-PC/104+ cards */
#define PEAK_PCI_104E_DEVICE_ID 0x0007 /* PCAN-PCI/104 Express cards */
#define PEAK_MPCIE_DEVICE_ID 0x0008 /* The miniPCIe slot cards */
+#define PEAK_PCIE_OEM_ID 0x0009 /* PCAN-PCI Express OEM */
+#define PEAK_PCIEC34_DEVICE_ID 0x000A /* PCAN-PCI Express 34 (one channel) */
#define PEAK_PCI_CHAN_MAX 4
@@ -87,6 +89,7 @@
{PEAK_PCI_VENDOR_ID, PEAK_CPCI_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,},
#ifdef CONFIG_CAN_PEAK_PCIEC
{PEAK_PCI_VENDOR_ID, PEAK_PCIEC_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,},
+ {PEAK_PCI_VENDOR_ID, PEAK_PCIEC34_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,},
#endif
{0,}
};
@@ -653,7 +656,8 @@
* This must be done *before* register_sja1000dev() but
* *after* devices linkage
*/
- if (pdev->device == PEAK_PCIEC_DEVICE_ID) {
+ if (pdev->device == PEAK_PCIEC_DEVICE_ID ||
+ pdev->device == PEAK_PCIEC34_DEVICE_ID) {
err = peak_pciec_probe(pdev, dev);
if (err) {
dev_err(&pdev->dev,
diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
index c6ee07c..ea0697e 100644
--- a/drivers/net/dsa/Kconfig
+++ b/drivers/net/dsa/Kconfig
@@ -36,6 +36,15 @@
This enables support for the Marvell 88E6123/6161/6165
ethernet switch chips.
+config NET_DSA_MV88E6171
+ tristate "Marvell 88E6171 ethernet switch chip support"
+ select NET_DSA
+ select NET_DSA_MV88E6XXX
+ select NET_DSA_TAG_EDSA
+ ---help---
+ This enables support for the Marvell 88E6171 ethernet switch
+ chip.
+
config NET_DSA_BCM_SF2
tristate "Broadcom Starfighter 2 Ethernet switch support"
select NET_DSA
diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile
index dd3cd3b..23a90de 100644
--- a/drivers/net/dsa/Makefile
+++ b/drivers/net/dsa/Makefile
@@ -7,4 +7,7 @@
ifdef CONFIG_NET_DSA_MV88E6131
mv88e6xxx_drv-y += mv88e6131.o
endif
+ifdef CONFIG_NET_DSA_MV88E6171
+mv88e6xxx_drv-y += mv88e6171.o
+endif
obj-$(CONFIG_NET_DSA_BCM_SF2) += bcm_sf2.o
diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
index bb7cb8e..b962596 100644
--- a/drivers/net/dsa/bcm_sf2.c
+++ b/drivers/net/dsa/bcm_sf2.c
@@ -22,6 +22,7 @@
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <net/dsa.h>
+#include <linux/ethtool.h>
#include "bcm_sf2.h"
#include "bcm_sf2_regs.h"
@@ -129,15 +130,34 @@
return BCM_SF2_STATS_SIZE;
}
-static char *bcm_sf2_sw_probe(struct mii_bus *bus, int sw_addr)
+static char *bcm_sf2_sw_probe(struct device *host_dev, int sw_addr)
{
return "Broadcom Starfighter 2";
}
+static void bcm_sf2_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ unsigned int i;
+ u32 reg;
+
+ /* Enable the IMP Port to be in the same VLAN as the other ports
+ * on a per-port basis such that we only have Port i and IMP in
+ * the same VLAN.
+ */
+ for (i = 0; i < priv->hw_params.num_ports; i++) {
+ if (!((1 << i) & ds->phys_port_mask))
+ continue;
+
+ reg = core_readl(priv, CORE_PORT_VLAN_CTL_PORT(i));
+ reg |= (1 << cpu_port);
+ core_writel(priv, reg, CORE_PORT_VLAN_CTL_PORT(i));
+ }
+}
+
static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
{
struct bcm_sf2_priv *priv = ds_to_priv(ds);
- unsigned int i;
u32 reg, val;
/* Enable the port memories */
@@ -198,26 +218,28 @@
reg = core_readl(priv, CORE_STS_OVERRIDE_IMP);
reg |= (MII_SW_OR | LINK_STS);
core_writel(priv, reg, CORE_STS_OVERRIDE_IMP);
-
- /* Enable the IMP Port to be in the same VLAN as the other ports
- * on a per-port basis such that we only have Port i and IMP in
- * the same VLAN.
- */
- for (i = 0; i < priv->hw_params.num_ports; i++) {
- if (!((1 << i) & ds->phys_port_mask))
- continue;
-
- reg = core_readl(priv, CORE_PORT_VLAN_CTL_PORT(i));
- reg |= (1 << port);
- core_writel(priv, reg, CORE_PORT_VLAN_CTL_PORT(i));
- }
}
-static void bcm_sf2_port_setup(struct dsa_switch *ds, int port)
+static void bcm_sf2_eee_enable_set(struct dsa_switch *ds, int port, bool enable)
{
struct bcm_sf2_priv *priv = ds_to_priv(ds);
u32 reg;
+ reg = core_readl(priv, CORE_EEE_EN_CTRL);
+ if (enable)
+ reg |= 1 << port;
+ else
+ reg &= ~(1 << port);
+ core_writel(priv, reg, CORE_EEE_EN_CTRL);
+}
+
+static int bcm_sf2_port_setup(struct dsa_switch *ds, int port,
+ struct phy_device *phy)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ s8 cpu_port = ds->dst[ds->index].cpu_port;
+ u32 reg;
+
/* Clear the memory power down */
reg = core_readl(priv, CORE_MEM_PSM_VDD_CTRL);
reg &= ~P_TXQ_PSM_VDD(port);
@@ -235,13 +257,30 @@
reg &= ~PORT_VLAN_CTRL_MASK;
reg |= (1 << port);
core_writel(priv, reg, CORE_PORT_VLAN_CTL_PORT(port));
+
+ bcm_sf2_imp_vlan_setup(ds, cpu_port);
+
+ /* If EEE was enabled, restore it */
+ if (priv->port_sts[port].eee.eee_enabled)
+ bcm_sf2_eee_enable_set(ds, port, true);
+
+ return 0;
}
-static void bcm_sf2_port_disable(struct dsa_switch *ds, int port)
+static void bcm_sf2_port_disable(struct dsa_switch *ds, int port,
+ struct phy_device *phy)
{
struct bcm_sf2_priv *priv = ds_to_priv(ds);
u32 off, reg;
+ if (priv->wol_ports_mask & (1 << port))
+ return;
+
+ if (port == 7) {
+ intrl2_1_mask_set(priv, P_IRQ_MASK(P7_IRQ_OFF));
+ intrl2_1_writel(priv, P_IRQ_MASK(P7_IRQ_OFF), INTRL2_CPU_CLEAR);
+ }
+
if (dsa_is_cpu_port(ds, port))
off = CORE_IMP_CTL;
else
@@ -257,6 +296,60 @@
core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
}
+/* Returns 0 if EEE was not enabled, or 1 otherwise
+ */
+static int bcm_sf2_eee_init(struct dsa_switch *ds, int port,
+ struct phy_device *phy)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ struct ethtool_eee *p = &priv->port_sts[port].eee;
+ int ret;
+
+ p->supported = (SUPPORTED_1000baseT_Full | SUPPORTED_100baseT_Full);
+
+ ret = phy_init_eee(phy, 0);
+ if (ret)
+ return 0;
+
+ bcm_sf2_eee_enable_set(ds, port, true);
+
+ return 1;
+}
+
+static int bcm_sf2_sw_get_eee(struct dsa_switch *ds, int port,
+ struct ethtool_eee *e)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ struct ethtool_eee *p = &priv->port_sts[port].eee;
+ u32 reg;
+
+ reg = core_readl(priv, CORE_EEE_LPI_INDICATE);
+ e->eee_enabled = p->eee_enabled;
+ e->eee_active = !!(reg & (1 << port));
+
+ return 0;
+}
+
+static int bcm_sf2_sw_set_eee(struct dsa_switch *ds, int port,
+ struct phy_device *phydev,
+ struct ethtool_eee *e)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ struct ethtool_eee *p = &priv->port_sts[port].eee;
+
+ p->eee_enabled = e->eee_enabled;
+
+ if (!p->eee_enabled) {
+ bcm_sf2_eee_enable_set(ds, port, false);
+ } else {
+ p->eee_enabled = bcm_sf2_eee_init(ds, port, phydev);
+ if (!p->eee_enabled)
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
static irqreturn_t bcm_sf2_switch_0_isr(int irq, void *dev_id)
{
struct bcm_sf2_priv *priv = dev_id;
@@ -359,11 +452,11 @@
for (port = 0; port < priv->hw_params.num_ports; port++) {
/* IMP port receives special treatment */
if ((1 << port) & ds->phys_port_mask)
- bcm_sf2_port_setup(ds, port);
+ bcm_sf2_port_setup(ds, port, NULL);
else if (dsa_is_cpu_port(ds, port))
bcm_sf2_imp_setup(ds, port);
else
- bcm_sf2_port_disable(ds, port);
+ bcm_sf2_port_disable(ds, port, NULL);
}
/* Include the pseudo-PHY address and the broadcast PHY address to
@@ -376,6 +469,9 @@
SWITCH_TOP_REV_MASK;
priv->hw_params.core_rev = (rev & SF2_REV_MASK);
+ rev = reg_readl(priv, REG_PHY_REVISION);
+ priv->hw_params.gphy_rev = rev & PHY_REVISION_MASK;
+
pr_info("Starfighter 2 top: %x.%02x, core: %x.%02x base: 0x%p, IRQs: %d, %d\n",
priv->hw_params.top_rev >> 8, priv->hw_params.top_rev & 0xff,
priv->hw_params.core_rev >> 8, priv->hw_params.core_rev & 0xff,
@@ -399,6 +495,18 @@
return 0;
}
+static u32 bcm_sf2_sw_get_phy_flags(struct dsa_switch *ds, int port)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+
+ /* The BCM7xxx PHY driver expects to find the integrated PHY revision
+ * in bits 15:8 and the patch level in bits 7:0 which is exactly what
+ * the REG_PHY_REVISION register layout is.
+ */
+
+ return priv->hw_params.gphy_rev;
+}
+
static int bcm_sf2_sw_indir_rw(struct dsa_switch *ds, int op, int addr,
int regnum, u16 val)
{
@@ -487,6 +595,15 @@
port_mode = EXT_REVMII;
break;
default:
+ /* All other PHYs: internal and MoCA */
+ goto force_link;
+ }
+
+ /* If the link is down, just disable the interface to conserve power */
+ if (!phydev->link) {
+ reg = reg_readl(priv, REG_RGMII_CNTRL_P(port));
+ reg &= ~RGMII_MODE_EN;
+ reg_writel(priv, reg, REG_RGMII_CNTRL_P(port));
goto force_link;
}
@@ -591,12 +708,148 @@
status->pause = 1;
}
+static int bcm_sf2_sw_suspend(struct dsa_switch *ds)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ unsigned int port;
+
+ intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
+ intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
+ intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+ intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
+ intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
+ intrl2_1_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+
+ /* Disable all ports physically present including the IMP
+ * port; the other ones have already been disabled during
+ * bcm_sf2_sw_setup
+ */
+ for (port = 0; port < DSA_MAX_PORTS; port++) {
+ if ((1 << port) & ds->phys_port_mask ||
+ dsa_is_cpu_port(ds, port))
+ bcm_sf2_port_disable(ds, port, NULL);
+ }
+
+ return 0;
+}
+
+static int bcm_sf2_sw_rst(struct bcm_sf2_priv *priv)
+{
+ unsigned int timeout = 1000;
+ u32 reg;
+
+ reg = core_readl(priv, CORE_WATCHDOG_CTRL);
+ reg |= SOFTWARE_RESET | EN_CHIP_RST | EN_SW_RESET;
+ core_writel(priv, reg, CORE_WATCHDOG_CTRL);
+
+ do {
+ reg = core_readl(priv, CORE_WATCHDOG_CTRL);
+ if (!(reg & SOFTWARE_RESET))
+ break;
+
+ usleep_range(1000, 2000);
+ } while (--timeout);
+
+ if (!timeout)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+{
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ unsigned int port;
+ u32 reg;
+ int ret;
+
+ ret = bcm_sf2_sw_rst(priv);
+ if (ret) {
+ pr_err("%s: failed to software reset switch\n", __func__);
+ return ret;
+ }
+
+ /* Reinitialize the single GPHY */
+ if (priv->hw_params.num_gphy == 1) {
+ reg = reg_readl(priv, REG_SPHY_CNTRL);
+ reg |= PHY_RESET;
+ reg &= ~(EXT_PWR_DOWN | IDDQ_BIAS);
+ reg_writel(priv, reg, REG_SPHY_CNTRL);
+ udelay(21);
+ reg = reg_readl(priv, REG_SPHY_CNTRL);
+ reg &= ~PHY_RESET;
+ reg_writel(priv, reg, REG_SPHY_CNTRL);
+ }
+
+ for (port = 0; port < DSA_MAX_PORTS; port++) {
+ if ((1 << port) & ds->phys_port_mask)
+ bcm_sf2_port_setup(ds, port, NULL);
+ else if (dsa_is_cpu_port(ds, port))
+ bcm_sf2_imp_setup(ds, port);
+ }
+
+ return 0;
+}
+
+static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
+ struct ethtool_wolinfo *wol)
+{
+ struct net_device *p = ds->dst[ds->index].master_netdev;
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ struct ethtool_wolinfo pwol;
+
+ /* Get the parent device WoL settings */
+ p->ethtool_ops->get_wol(p, &pwol);
+
+ /* Advertise the parent device supported settings */
+ wol->supported = pwol.supported;
+ memset(&wol->sopass, 0, sizeof(wol->sopass));
+
+ if (pwol.wolopts & WAKE_MAGICSECURE)
+ memcpy(&wol->sopass, pwol.sopass, sizeof(wol->sopass));
+
+ if (priv->wol_ports_mask & (1 << port))
+ wol->wolopts = pwol.wolopts;
+ else
+ wol->wolopts = 0;
+}
+
+static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port,
+ struct ethtool_wolinfo *wol)
+{
+ struct net_device *p = ds->dst[ds->index].master_netdev;
+ struct bcm_sf2_priv *priv = ds_to_priv(ds);
+ s8 cpu_port = ds->dst[ds->index].cpu_port;
+ struct ethtool_wolinfo pwol;
+
+ p->ethtool_ops->get_wol(p, &pwol);
+ if (wol->wolopts & ~pwol.supported)
+ return -EINVAL;
+
+ if (wol->wolopts)
+ priv->wol_ports_mask |= (1 << port);
+ else
+ priv->wol_ports_mask &= ~(1 << port);
+
+ /* If we have at least one port enabled, make sure the CPU port
+ * is also enabled. If the CPU port is the last one enabled, we disable
+ * it since this configuration does not make sense.
+ */
+ if (priv->wol_ports_mask && priv->wol_ports_mask != (1 << cpu_port))
+ priv->wol_ports_mask |= (1 << cpu_port);
+ else
+ priv->wol_ports_mask &= ~(1 << cpu_port);
+
+ return p->ethtool_ops->set_wol(p, wol);
+}
+
static struct dsa_switch_driver bcm_sf2_switch_driver = {
- .tag_protocol = htons(ETH_P_BRCMTAG),
+ .tag_protocol = DSA_TAG_PROTO_BRCM,
.priv_size = sizeof(struct bcm_sf2_priv),
.probe = bcm_sf2_sw_probe,
.setup = bcm_sf2_sw_setup,
.set_addr = bcm_sf2_sw_set_addr,
+ .get_phy_flags = bcm_sf2_sw_get_phy_flags,
.phy_read = bcm_sf2_sw_phy_read,
.phy_write = bcm_sf2_sw_phy_write,
.get_strings = bcm_sf2_sw_get_strings,
@@ -604,6 +857,14 @@
.get_sset_count = bcm_sf2_sw_get_sset_count,
.adjust_link = bcm_sf2_sw_adjust_link,
.fixed_link_update = bcm_sf2_sw_fixed_link_update,
+ .suspend = bcm_sf2_sw_suspend,
+ .resume = bcm_sf2_sw_resume,
+ .get_wol = bcm_sf2_sw_get_wol,
+ .set_wol = bcm_sf2_sw_set_wol,
+ .port_enable = bcm_sf2_port_setup,
+ .port_disable = bcm_sf2_port_disable,
+ .get_eee = bcm_sf2_sw_get_eee,
+ .set_eee = bcm_sf2_sw_set_eee,
};
static int __init bcm_sf2_init(void)
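The wol_ports_mask bookkeeping in bcm_sf2_sw_set_wol() above follows the rule its comment states: the CPU port must be in the mask whenever any other port wants Wake-on-LAN, and must never be its only member. A minimal standalone sketch of just that mask logic (not the driver code itself):

static unsigned int wol_mask_update(unsigned int mask, int port,
				    int cpu_port, int enable)
{
	if (enable)
		mask |= 1u << port;
	else
		mask &= ~(1u << port);

	/* keep the CPU port set iff some other port still needs WoL */
	if (mask && mask != (1u << cpu_port))
		mask |= 1u << cpu_port;
	else
		mask &= ~(1u << cpu_port);

	return mask;
}

The same shape of read-modify-write bit handling recurs throughout this driver, e.g. in bcm_sf2_eee_enable_set().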
diff --git a/drivers/net/dsa/bcm_sf2.h b/drivers/net/dsa/bcm_sf2.h
index 260bab3..ee9f650 100644
--- a/drivers/net/dsa/bcm_sf2.h
+++ b/drivers/net/dsa/bcm_sf2.h
@@ -18,6 +18,7 @@
#include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/mii.h>
+#include <linux/ethtool.h>
#include <net/dsa.h>
@@ -26,6 +27,7 @@
struct bcm_sf2_hw_params {
u16 top_rev;
u16 core_rev;
+ u16 gphy_rev;
u32 num_gphy;
u8 num_acb_queue;
u8 num_rgmii;
@@ -42,6 +44,8 @@
struct bcm_sf2_port_status {
unsigned int link;
+
+ struct ethtool_eee eee;
};
struct bcm_sf2_priv {
@@ -69,6 +73,9 @@
struct bcm_sf2_hw_params hw_params;
struct bcm_sf2_port_status port_sts[DSA_MAX_PORTS];
+
+ /* Mask of ports enabled for Wake-on-LAN */
+ u32 wol_ports_mask;
};
struct bcm_sf2_hw_stats {
diff --git a/drivers/net/dsa/bcm_sf2_regs.h b/drivers/net/dsa/bcm_sf2_regs.h
index 885c231..1bb49cb 100644
--- a/drivers/net/dsa/bcm_sf2_regs.h
+++ b/drivers/net/dsa/bcm_sf2_regs.h
@@ -25,6 +25,7 @@
#define SWITCH_TOP_REV_MASK 0xffff
#define REG_PHY_REVISION 0x1C
+#define PHY_REVISION_MASK 0xffff
#define REG_SPHY_CNTRL 0x2C
#define IDDQ_BIAS (1 << 0)
@@ -224,4 +225,7 @@
#define CORE_PORT_VLAN_CTL_PORT(x) (0xc400 + ((x) * 0x8))
#define PORT_VLAN_CTRL_MASK 0x1ff
+#define CORE_EEE_EN_CTRL 0x24800
+#define CORE_EEE_LPI_INDICATE 0x24810
+
#endif /* __BCM_SF2_REGS_H */
diff --git a/drivers/net/dsa/mv88e6060.c b/drivers/net/dsa/mv88e6060.c
index 7a54ec0..776e965 100644
--- a/drivers/net/dsa/mv88e6060.c
+++ b/drivers/net/dsa/mv88e6060.c
@@ -21,7 +21,8 @@
static int reg_read(struct dsa_switch *ds, int addr, int reg)
{
- return mdiobus_read(ds->master_mii_bus, ds->pd->sw_addr + addr, reg);
+ return mdiobus_read(to_mii_bus(ds->master_dev),
+ ds->pd->sw_addr + addr, reg);
}
#define REG_READ(addr, reg) \
@@ -37,8 +38,8 @@
static int reg_write(struct dsa_switch *ds, int addr, int reg, u16 val)
{
- return mdiobus_write(ds->master_mii_bus, ds->pd->sw_addr + addr,
- reg, val);
+ return mdiobus_write(to_mii_bus(ds->master_dev),
+ ds->pd->sw_addr + addr, reg, val);
}
#define REG_WRITE(addr, reg, val) \
@@ -50,10 +51,14 @@
return __ret; \
})
-static char *mv88e6060_probe(struct mii_bus *bus, int sw_addr)
+static char *mv88e6060_probe(struct device *host_dev, int sw_addr)
{
+ struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
int ret;
+ if (bus == NULL)
+ return NULL;
+
ret = mdiobus_read(bus, sw_addr + REG_PORT(0), 0x03);
if (ret >= 0) {
ret &= 0xfff0;
@@ -258,7 +263,7 @@
}
static struct dsa_switch_driver mv88e6060_switch_driver = {
- .tag_protocol = htons(ETH_P_TRAILER),
+ .tag_protocol = DSA_TAG_PROTO_TRAILER,
.probe = mv88e6060_probe,
.setup = mv88e6060_setup,
.set_addr = mv88e6060_set_addr,
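The Marvell probe functions touched by this series all share one idiom: read the switch ID register (register 0x03 of port 0), mask off the low revision nibble, and compare the product ID. A hedged sketch of the pattern; read_id() is a hypothetical stand-in for the mdiobus_read()-based accessors, and the 88E6060 ID value below is illustrative:

static const char *probe_switch_id(int (*read_id)(int sw_addr), int sw_addr)
{
	int id = read_id(sw_addr);	/* port 0, register 0x03, assumed */

	if (id < 0)
		return NULL;		/* MDIO read failed */

	switch (id & 0xfff0) {		/* drop the 4-bit revision field */
	case 0x0600:			/* assumed 88E6060 product ID */
		return "Marvell 88E6060";
	case 0x1710:			/* matches mv88e6171_probe() below */
		return "Marvell 88E6171";
	default:
		return NULL;
	}
}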
diff --git a/drivers/net/dsa/mv88e6123_61_65.c b/drivers/net/dsa/mv88e6123_61_65.c
index 69c4251..a332c53 100644
--- a/drivers/net/dsa/mv88e6123_61_65.c
+++ b/drivers/net/dsa/mv88e6123_61_65.c
@@ -17,10 +17,14 @@
#include <net/dsa.h>
#include "mv88e6xxx.h"
-static char *mv88e6123_61_65_probe(struct mii_bus *bus, int sw_addr)
+static char *mv88e6123_61_65_probe(struct device *host_dev, int sw_addr)
{
+ struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
int ret;
+ if (bus == NULL)
+ return NULL;
+
ret = __mv88e6xxx_reg_read(bus, sw_addr, REG_PORT(0), 0x03);
if (ret >= 0) {
if (ret == 0x1212)
@@ -207,7 +211,7 @@
*/
val = 0x0433;
if (dsa_is_cpu_port(ds, p)) {
- if (ds->dst->tag_protocol == htons(ETH_P_EDSA))
+ if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
val |= 0x3300;
else
val |= 0x0100;
@@ -391,7 +395,7 @@
}
struct dsa_switch_driver mv88e6123_61_65_switch_driver = {
- .tag_protocol = cpu_to_be16(ETH_P_EDSA),
+ .tag_protocol = DSA_TAG_PROTO_EDSA,
.priv_size = sizeof(struct mv88e6xxx_priv_state),
.probe = mv88e6123_61_65_probe,
.setup = mv88e6123_61_65_setup,
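The .tag_protocol conversions replace on-wire ethertype constants such as htons(ETH_P_EDSA) with an abstract enum, so tagging schemes with no real ethertype (the trailer tag, for instance) fit the same field and no byte-order conversion is needed. A sketch of the shape of such an enum; the names mirror the values used in these hunks, but this definition is illustrative rather than the actual kernel header:

enum dsa_tag_protocol {
	DSA_TAG_PROTO_NONE = 0,
	DSA_TAG_PROTO_DSA,	/* Marvell DSA header */
	DSA_TAG_PROTO_EDSA,	/* Marvell DSA header behind an ethertype */
	DSA_TAG_PROTO_TRAILER,	/* tag appended after the payload */
	DSA_TAG_PROTO_BRCM,	/* Broadcom 4-byte tag */
};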
diff --git a/drivers/net/dsa/mv88e6131.c b/drivers/net/dsa/mv88e6131.c
index 953bc6a..244c735 100644
--- a/drivers/net/dsa/mv88e6131.c
+++ b/drivers/net/dsa/mv88e6131.c
@@ -22,10 +22,14 @@
#define ID_6095 0x0950
#define ID_6131 0x1060
-static char *mv88e6131_probe(struct mii_bus *bus, int sw_addr)
+static char *mv88e6131_probe(struct device *host_dev, int sw_addr)
{
+ struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
int ret;
+ if (bus == NULL)
+ return NULL;
+
ret = __mv88e6xxx_reg_read(bus, sw_addr, REG_PORT(0), 0x03);
if (ret >= 0) {
ret &= 0xfff0;
@@ -379,7 +383,7 @@
}
struct dsa_switch_driver mv88e6131_switch_driver = {
- .tag_protocol = cpu_to_be16(ETH_P_DSA),
+ .tag_protocol = DSA_TAG_PROTO_DSA,
.priv_size = sizeof(struct mv88e6xxx_priv_state),
.probe = mv88e6131_probe,
.setup = mv88e6131_setup,
diff --git a/drivers/net/dsa/mv88e6171.c b/drivers/net/dsa/mv88e6171.c
new file mode 100644
index 0000000..6365e30
--- /dev/null
+++ b/drivers/net/dsa/mv88e6171.c
@@ -0,0 +1,411 @@
+/* net/dsa/mv88e6171.c - Marvell 88e6171 switch chip support
+ * Copyright (c) 2008-2009 Marvell Semiconductor
+ * Copyright (c) 2014 Claudio Leite <leitec@staticky.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/phy.h>
+#include <net/dsa.h>
+#include "mv88e6xxx.h"
+
+static char *mv88e6171_probe(struct device *host_dev, int sw_addr)
+{
+ struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
+ int ret;
+
+ if (bus == NULL)
+ return NULL;
+
+ ret = __mv88e6xxx_reg_read(bus, sw_addr, REG_PORT(0), 0x03);
+ if (ret >= 0) {
+ if ((ret & 0xfff0) == 0x1710)
+ return "Marvell 88E6171";
+ }
+
+ return NULL;
+}
+
+static int mv88e6171_switch_reset(struct dsa_switch *ds)
+{
+ int i;
+ int ret;
+ unsigned long timeout;
+
+ /* Set all ports to the disabled state. */
+ for (i = 0; i < 8; i++) {
+ ret = REG_READ(REG_PORT(i), 0x04);
+ REG_WRITE(REG_PORT(i), 0x04, ret & 0xfffc);
+ }
+
+ /* Wait for transmit queues to drain. */
+ usleep_range(2000, 4000);
+
+ /* Reset the switch. */
+ REG_WRITE(REG_GLOBAL, 0x04, 0xc400);
+
+ /* Wait up to one second for reset to complete. */
+ timeout = jiffies + 1 * HZ;
+ while (time_before(jiffies, timeout)) {
+ ret = REG_READ(REG_GLOBAL, 0x00);
+ if ((ret & 0xc800) == 0xc800)
+ break;
+
+ usleep_range(1000, 2000);
+ }
+ if (time_after(jiffies, timeout))
+ return -ETIMEDOUT;
+
+ /* Enable ports not under DSA, e.g. WAN port */
+ for (i = 0; i < 8; i++) {
+ if (dsa_is_cpu_port(ds, i) || ds->phys_port_mask & (1 << i))
+ continue;
+
+ ret = REG_READ(REG_PORT(i), 0x04);
+ REG_WRITE(REG_PORT(i), 0x04, ret | 0x03);
+ }
+
+ return 0;
+}
+
+static int mv88e6171_setup_global(struct dsa_switch *ds)
+{
+ int ret;
+ int i;
+
+ /* Disable the PHY polling unit (since there won't be any
+ * external PHYs to poll), don't discard packets with
+ * excessive collisions, and mask all interrupt sources.
+ */
+ REG_WRITE(REG_GLOBAL, 0x04, 0x0000);
+
+ /* Set the default address aging time to 5 minutes, and
+ * enable address learn messages to be sent to all message
+ * ports.
+ */
+ REG_WRITE(REG_GLOBAL, 0x0a, 0x0148);
+
+ /* Configure the priority mapping registers. */
+ ret = mv88e6xxx_config_prio(ds);
+ if (ret < 0)
+ return ret;
+
+ /* Configure the upstream port, and configure the upstream
+ * port as the port to which ingress and egress monitor frames
+ * are to be sent.
+ */
+ if (REG_READ(REG_PORT(0), 0x03) == 0x1710)
+ REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1111));
+ else
+ REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1110));
+
+ /* Disable remote management for now, and set the switch's
+ * DSA device number.
+ */
+ REG_WRITE(REG_GLOBAL, 0x1c, ds->index & 0x1f);
+
+ /* Send all frames with destination addresses matching
+ * 01:80:c2:00:00:2x to the CPU port.
+ */
+ REG_WRITE(REG_GLOBAL2, 0x02, 0xffff);
+
+ /* Send all frames with destination addresses matching
+ * 01:80:c2:00:00:0x to the CPU port.
+ */
+ REG_WRITE(REG_GLOBAL2, 0x03, 0xffff);
+
+ /* Disable the loopback filter, disable flow control
+ * messages, disable flood broadcast override, disable
+ * removing of provider tags, disable ATU age violation
+ * interrupts, disable tag flow control, force flow
+ * control priority to the highest, and send all special
+ * multicast frames to the CPU at the highest priority.
+ */
+ REG_WRITE(REG_GLOBAL2, 0x05, 0x00ff);
+
+ /* Program the DSA routing table. */
+ for (i = 0; i < 32; i++) {
+ int nexthop;
+
+ nexthop = 0x1f;
+ if (i != ds->index && i < ds->dst->pd->nr_chips)
+ nexthop = ds->pd->rtable[i] & 0x1f;
+
+ REG_WRITE(REG_GLOBAL2, 0x06, 0x8000 | (i << 8) | nexthop);
+ }
+
+ /* Clear all trunk masks. */
+ for (i = 0; i < 8; i++)
+ REG_WRITE(REG_GLOBAL2, 0x07, 0x8000 | (i << 12) | 0xff);
+
+ /* Clear all trunk mappings. */
+ for (i = 0; i < 16; i++)
+ REG_WRITE(REG_GLOBAL2, 0x08, 0x8000 | (i << 11));
+
+ /* Disable ingress rate limiting by resetting all ingress
+ * rate limit registers to their initial state.
+ */
+ for (i = 0; i < 6; i++)
+ REG_WRITE(REG_GLOBAL2, 0x09, 0x9000 | (i << 8));
+
+ /* Initialise cross-chip port VLAN table to reset defaults. */
+ REG_WRITE(REG_GLOBAL2, 0x0b, 0x9000);
+
+ /* Clear the priority override table. */
+ for (i = 0; i < 16; i++)
+ REG_WRITE(REG_GLOBAL2, 0x0f, 0x8000 | (i << 8));
+
+ /* @@@ initialise AVB (22/23) watchdog (27) sdet (29) registers */
+
+ return 0;
+}
+
+static int mv88e6171_setup_port(struct dsa_switch *ds, int p)
+{
+ int addr = REG_PORT(p);
+ u16 val;
+
+ /* MAC Forcing register: don't force link, speed, duplex
+ * or flow control state to any particular values on physical
+ * ports, but force the CPU port and all DSA ports to 1000 Mb/s
+ * full duplex.
+ */
+ val = REG_READ(addr, 0x01);
+ if (dsa_is_cpu_port(ds, p) || ds->dsa_port_mask & (1 << p))
+ REG_WRITE(addr, 0x01, val | 0x003e);
+ else
+ REG_WRITE(addr, 0x01, val | 0x0003);
+
+ /* Do not limit the period of time that this port can be
+ * paused for by the remote end or the period of time that
+ * this port can pause the remote end.
+ */
+ REG_WRITE(addr, 0x02, 0x0000);
+
+ /* Port Control: disable Drop-on-Unlock, disable Drop-on-Lock,
+ * disable Header mode, enable IGMP/MLD snooping, disable VLAN
+ * tunneling, determine priority by looking at 802.1p and IP
+ * priority fields (IP prio has precedence), and set STP state
+ * to Forwarding.
+ *
+ * If this is the CPU link, use DSA or EDSA tagging depending
+ * on which tagging mode was configured.
+ *
+ * If this is a link to another switch, use DSA tagging mode.
+ *
+ * If this is the upstream port for this switch, enable
+ * forwarding of unknown unicasts and multicasts.
+ */
+ val = 0x0433;
+ if (dsa_is_cpu_port(ds, p)) {
+ if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
+ val |= 0x3300;
+ else
+ val |= 0x0100;
+ }
+ if (ds->dsa_port_mask & (1 << p))
+ val |= 0x0100;
+ if (p == dsa_upstream_port(ds))
+ val |= 0x000c;
+ REG_WRITE(addr, 0x04, val);
+
+ /* Port Control 1: disable trunking. Also, if this is the
+ * CPU port, enable learn messages to be sent to this port.
+ */
+ REG_WRITE(addr, 0x05, dsa_is_cpu_port(ds, p) ? 0x8000 : 0x0000);
+
+ /* Port based VLAN map: give each port its own address
+ * database, allow the CPU port to talk to each of the 'real'
+ * ports, and allow each of the 'real' ports to only talk to
+ * the upstream port.
+ */
+ val = (p & 0xf) << 12;
+ if (dsa_is_cpu_port(ds, p))
+ val |= ds->phys_port_mask;
+ else
+ val |= 1 << dsa_upstream_port(ds);
+ REG_WRITE(addr, 0x06, val);
+
+ /* Default VLAN ID and priority: don't set a default VLAN
+ * ID, and set the default packet priority to zero.
+ */
+ REG_WRITE(addr, 0x07, 0x0000);
+
+ /* Port Control 2: don't force a good FCS, set the maximum
+ * frame size to 10240 bytes, don't let the switch add or
+ * strip 802.1q tags, don't discard tagged or untagged frames
+ * on this port, do a destination address lookup on all
+ * received packets as usual, disable ARP mirroring and don't
+ * send a copy of all transmitted/received frames on this port
+ * to the CPU.
+ */
+ REG_WRITE(addr, 0x08, 0x2080);
+
+ /* Egress rate control: disable egress rate control. */
+ REG_WRITE(addr, 0x09, 0x0001);
+
+ /* Egress rate control 2: disable egress rate control. */
+ REG_WRITE(addr, 0x0a, 0x0000);
+
+ /* Port Association Vector: when learning source addresses
+ * of packets, add the address to the address database using
+ * a port bitmap that has only the bit for this port set and
+ * the other bits clear.
+ */
+ REG_WRITE(addr, 0x0b, 1 << p);
+
+ /* Port ATU control: disable limiting the number of address
+ * database entries that this port is allowed to use.
+ */
+ REG_WRITE(addr, 0x0c, 0x0000);
+
+ /* Priority Override: disable DA, SA and VTU priority override. */
+ REG_WRITE(addr, 0x0d, 0x0000);
+
+ /* Port Ethertype: use the Ethertype DSA Ethertype value. */
+ REG_WRITE(addr, 0x0f, ETH_P_EDSA);
+
+ /* Tag Remap: use an identity 802.1p prio -> switch prio
+ * mapping.
+ */
+ REG_WRITE(addr, 0x18, 0x3210);
+
+ /* Tag Remap 2: use an identity 802.1p prio -> switch prio
+ * mapping.
+ */
+ REG_WRITE(addr, 0x19, 0x7654);
+
+ return 0;
+}
+
+static int mv88e6171_setup(struct dsa_switch *ds)
+{
+ struct mv88e6xxx_priv_state *ps = (void *)(ds + 1);
+ int i;
+ int ret;
+
+ mutex_init(&ps->smi_mutex);
+ mutex_init(&ps->stats_mutex);
+
+ ret = mv88e6171_switch_reset(ds);
+ if (ret < 0)
+ return ret;
+
+ /* @@@ initialise vtu and atu */
+
+ ret = mv88e6171_setup_global(ds);
+ if (ret < 0)
+ return ret;
+
+ for (i = 0; i < 8; i++) {
+ if (!(dsa_is_cpu_port(ds, i) || ds->phys_port_mask & (1 << i)))
+ continue;
+
+ ret = mv88e6171_setup_port(ds, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int mv88e6171_port_to_phy_addr(int port)
+{
+ if (port >= 0 && port <= 4)
+ return port;
+ return -1;
+}
+
+static int
+mv88e6171_phy_read(struct dsa_switch *ds, int port, int regnum)
+{
+ int addr = mv88e6171_port_to_phy_addr(port);
+
+ return mv88e6xxx_phy_read(ds, addr, regnum);
+}
+
+static int
+mv88e6171_phy_write(struct dsa_switch *ds,
+ int port, int regnum, u16 val)
+{
+ int addr = mv88e6171_port_to_phy_addr(port);
+
+ return mv88e6xxx_phy_write(ds, addr, regnum, val);
+}
+
+static struct mv88e6xxx_hw_stat mv88e6171_hw_stats[] = {
+ { "in_good_octets", 8, 0x00, },
+ { "in_bad_octets", 4, 0x02, },
+ { "in_unicast", 4, 0x04, },
+ { "in_broadcasts", 4, 0x06, },
+ { "in_multicasts", 4, 0x07, },
+ { "in_pause", 4, 0x16, },
+ { "in_undersize", 4, 0x18, },
+ { "in_fragments", 4, 0x19, },
+ { "in_oversize", 4, 0x1a, },
+ { "in_jabber", 4, 0x1b, },
+ { "in_rx_error", 4, 0x1c, },
+ { "in_fcs_error", 4, 0x1d, },
+ { "out_octets", 8, 0x0e, },
+ { "out_unicast", 4, 0x10, },
+ { "out_broadcasts", 4, 0x13, },
+ { "out_multicasts", 4, 0x12, },
+ { "out_pause", 4, 0x15, },
+ { "excessive", 4, 0x11, },
+ { "collisions", 4, 0x1e, },
+ { "deferred", 4, 0x05, },
+ { "single", 4, 0x14, },
+ { "multiple", 4, 0x17, },
+ { "out_fcs_error", 4, 0x03, },
+ { "late", 4, 0x1f, },
+ { "hist_64bytes", 4, 0x08, },
+ { "hist_65_127bytes", 4, 0x09, },
+ { "hist_128_255bytes", 4, 0x0a, },
+ { "hist_256_511bytes", 4, 0x0b, },
+ { "hist_512_1023bytes", 4, 0x0c, },
+ { "hist_1024_max_bytes", 4, 0x0d, },
+};
+
+static void
+mv88e6171_get_strings(struct dsa_switch *ds, int port, uint8_t *data)
+{
+ mv88e6xxx_get_strings(ds, ARRAY_SIZE(mv88e6171_hw_stats),
+ mv88e6171_hw_stats, port, data);
+}
+
+static void
+mv88e6171_get_ethtool_stats(struct dsa_switch *ds,
+ int port, uint64_t *data)
+{
+ mv88e6xxx_get_ethtool_stats(ds, ARRAY_SIZE(mv88e6171_hw_stats),
+ mv88e6171_hw_stats, port, data);
+}
+
+static int mv88e6171_get_sset_count(struct dsa_switch *ds)
+{
+ return ARRAY_SIZE(mv88e6171_hw_stats);
+}
+
+struct dsa_switch_driver mv88e6171_switch_driver = {
+ .tag_protocol = DSA_TAG_PROTO_DSA,
+ .priv_size = sizeof(struct mv88e6xxx_priv_state),
+ .probe = mv88e6171_probe,
+ .setup = mv88e6171_setup,
+ .set_addr = mv88e6xxx_set_addr_indirect,
+ .phy_read = mv88e6171_phy_read,
+ .phy_write = mv88e6171_phy_write,
+ .poll_link = mv88e6xxx_poll_link,
+ .get_strings = mv88e6171_get_strings,
+ .get_ethtool_stats = mv88e6171_get_ethtool_stats,
+ .get_sset_count = mv88e6171_get_sset_count,
+};
+
+MODULE_ALIAS("platform:mv88e6171");
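mv88e6171_switch_reset() above uses the standard jiffies-deadline polling idiom: sleep-poll a status register until the done bits latch or the deadline passes. A condensed, hedged sketch of that loop; read_status() is a hypothetical stand-in for REG_READ(REG_GLOBAL, 0x00):

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

static int wait_reset_done(int (*read_status)(void))
{
	unsigned long deadline = jiffies + HZ;	/* wait up to one second */

	while (time_before(jiffies, deadline)) {
		if ((read_status() & 0xc800) == 0xc800)
			return 0;		/* reset completed */
		usleep_range(1000, 2000);	/* sleep, don't busy-wait */
	}
	return -ETIMEDOUT;
}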
diff --git a/drivers/net/dsa/mv88e6xxx.c b/drivers/net/dsa/mv88e6xxx.c
index 9ce2146..d6f6428 100644
--- a/drivers/net/dsa/mv88e6xxx.c
+++ b/drivers/net/dsa/mv88e6xxx.c
@@ -78,7 +78,7 @@
int ret;
mutex_lock(&ps->smi_mutex);
- ret = __mv88e6xxx_reg_read(ds->master_mii_bus,
+ ret = __mv88e6xxx_reg_read(to_mii_bus(ds->master_dev),
ds->pd->sw_addr, addr, reg);
mutex_unlock(&ps->smi_mutex);
@@ -122,7 +122,7 @@
int ret;
mutex_lock(&ps->smi_mutex);
- ret = __mv88e6xxx_reg_write(ds->master_mii_bus,
+ ret = __mv88e6xxx_reg_write(to_mii_bus(ds->master_dev),
ds->pd->sw_addr, addr, reg, val);
mutex_unlock(&ps->smi_mutex);
@@ -501,12 +501,18 @@
#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123_61_65)
register_switch_driver(&mv88e6123_61_65_switch_driver);
#endif
+#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
+ register_switch_driver(&mv88e6171_switch_driver);
+#endif
return 0;
}
module_init(mv88e6xxx_init);
static void __exit mv88e6xxx_cleanup(void)
{
+#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
+ unregister_switch_driver(&mv88e6171_switch_driver);
+#endif
#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123_61_65)
unregister_switch_driver(&mv88e6123_61_65_switch_driver);
#endif
diff --git a/drivers/net/dsa/mv88e6xxx.h b/drivers/net/dsa/mv88e6xxx.h
index 911ede5..5e5145a 100644
--- a/drivers/net/dsa/mv88e6xxx.h
+++ b/drivers/net/dsa/mv88e6xxx.h
@@ -70,6 +70,7 @@
extern struct dsa_switch_driver mv88e6131_switch_driver;
extern struct dsa_switch_driver mv88e6123_61_65_switch_driver;
+extern struct dsa_switch_driver mv88e6171_switch_driver;
#define REG_READ(addr, reg) \
({ \
diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c
index 2b92d712..86e6211 100644
--- a/drivers/net/ethernet/3com/3c59x.c
+++ b/drivers/net/ethernet/3com/3c59x.c
@@ -2128,6 +2128,7 @@
int entry = vp->cur_tx % TX_RING_SIZE;
struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
unsigned long flags;
+ dma_addr_t dma_addr;
if (vortex_debug > 6) {
pr_debug("boomerang_start_xmit()\n");
@@ -2162,24 +2163,48 @@
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
if (!skb_shinfo(skb)->nr_frags) {
- vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
- skb->len, PCI_DMA_TODEVICE));
+ dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len,
+ PCI_DMA_TODEVICE);
+ if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
+ goto out_dma_err;
+
+ vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr);
vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
} else {
int i;
- vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
- skb_headlen(skb), PCI_DMA_TODEVICE));
+ dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data,
+ skb_headlen(skb), PCI_DMA_TODEVICE);
+ if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
+ goto out_dma_err;
+
+ vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr);
vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb));
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ dma_addr = skb_frag_dma_map(&VORTEX_PCI(vp)->dev, frag,
+ 0,
+ frag->size,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) {
+ for (i = i - 1; i >= 0; i--)
+ dma_unmap_page(&VORTEX_PCI(vp)->dev,
+ le32_to_cpu(vp->tx_ring[entry].frag[i+1].addr),
+ le32_to_cpu(vp->tx_ring[entry].frag[i+1].length),
+ DMA_TO_DEVICE);
+
+ pci_unmap_single(VORTEX_PCI(vp),
+ le32_to_cpu(vp->tx_ring[entry].frag[0].addr),
+ le32_to_cpu(vp->tx_ring[entry].frag[0].length),
+ PCI_DMA_TODEVICE);
+
+ goto out_dma_err;
+ }
+
vp->tx_ring[entry].frag[i+1].addr =
- cpu_to_le32(skb_frag_dma_map(
- &VORTEX_PCI(vp)->dev,
- frag,
- frag->page_offset, frag->size, DMA_TO_DEVICE));
+ cpu_to_le32(dma_addr);
if (i == skb_shinfo(skb)->nr_frags-1)
vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(skb_frag_size(frag)|LAST_FRAG);
@@ -2188,7 +2213,10 @@
}
}
#else
- vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE));
+ dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE);
+ if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
+ goto out_dma_err;
+ vp->tx_ring[entry].addr = cpu_to_le32(dma_addr);
vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
#endif
@@ -2216,7 +2244,11 @@
skb_tx_timestamp(skb);
iowrite16(DownUnstall, ioaddr + EL3_CMD);
spin_unlock_irqrestore(&vp->lock, flags);
+out:
return NETDEV_TX_OK;
+out_dma_err:
+ dev_err(&VORTEX_PCI(vp)->dev, "Error mapping dma buffer\n");
+ goto out;
}
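The new error handling in boomerang_start_xmit() follows the DMA API discipline: every pci_map_single()/skb_frag_dma_map() result is checked with dma_mapping_error() before it lands in a descriptor, and a mid-frame failure unmaps the already-mapped fragments in reverse order. A simplified sketch of that all-or-nothing pattern, with a hypothetical descriptor slot type:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

struct frag_slot {			/* hypothetical ring entry */
	dma_addr_t addr;
	size_t len;
};

static int map_all_or_nothing(struct device *dev, struct frag_slot *slot,
			      void **buf, size_t *len, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		slot[i].addr = dma_map_single(dev, buf[i], len[i],
					      DMA_TO_DEVICE);
		if (dma_mapping_error(dev, slot[i].addr))
			goto unwind;	/* never store a bad handle */
		slot[i].len = len[i];
	}
	return 0;

unwind:
	while (--i >= 0)		/* release mappings in reverse */
		dma_unmap_single(dev, slot[i].addr, slot[i].len,
				 DMA_TO_DEVICE);
	return -ENOMEM;
}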
/* The interrupt handler does all of the Rx thread work and cleans up
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index dc7406c..0005e37 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -150,6 +150,7 @@
source "drivers/net/ethernet/packetengines/Kconfig"
source "drivers/net/ethernet/pasemi/Kconfig"
source "drivers/net/ethernet/qlogic/Kconfig"
+source "drivers/net/ethernet/qualcomm/Kconfig"
source "drivers/net/ethernet/realtek/Kconfig"
source "drivers/net/ethernet/renesas/Kconfig"
source "drivers/net/ethernet/rdc/Kconfig"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 224a018..153bf2d 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -60,6 +60,7 @@
obj-$(CONFIG_NET_PACKET_ENGINE) += packetengines/
obj-$(CONFIG_NET_VENDOR_PASEMI) += pasemi/
obj-$(CONFIG_NET_VENDOR_QLOGIC) += qlogic/
+obj-$(CONFIG_NET_VENDOR_QUALCOMM) += qualcomm/
obj-$(CONFIG_NET_VENDOR_REALTEK) += realtek/
obj-$(CONFIG_SH_ETH) += renesas/
obj-$(CONFIG_NET_VENDOR_RDC) += rdc/
diff --git a/drivers/net/ethernet/amd/au1000_eth.c b/drivers/net/ethernet/amd/au1000_eth.c
index 31c48a7..6c323f4 100644
--- a/drivers/net/ethernet/amd/au1000_eth.c
+++ b/drivers/net/ethernet/amd/au1000_eth.c
@@ -1140,7 +1140,6 @@
static int au1000_probe(struct platform_device *pdev)
{
- static unsigned version_printed;
struct au1000_private *aup = NULL;
struct au1000_eth_platform_data *pd;
struct net_device *dev = NULL;
@@ -1371,9 +1370,8 @@
netdev_info(dev, "Au1xx0 Ethernet found at 0x%lx, irq %d\n",
(unsigned long)base->start, irq);
- if (version_printed++ == 0)
- pr_info("%s version %s %s\n",
- DRV_NAME, DRV_VERSION, DRV_AUTHOR);
+
+ pr_info_once("%s version %s %s\n", DRV_NAME, DRV_VERSION, DRV_AUTHOR);
return 0;
diff --git a/drivers/net/ethernet/amd/nmclan_cs.c b/drivers/net/ethernet/amd/nmclan_cs.c
index abf3b15..5b22764 100644
--- a/drivers/net/ethernet/amd/nmclan_cs.c
+++ b/drivers/net/ethernet/amd/nmclan_cs.c
@@ -621,7 +621,7 @@
ret = pcmcia_request_io(link);
if (ret)
goto failed;
- ret = pcmcia_request_exclusive_irq(link, mace_interrupt);
+ ret = pcmcia_request_irq(link, mace_interrupt);
if (ret)
goto failed;
ret = pcmcia_enable_device(link);
diff --git a/drivers/net/ethernet/arc/emac_main.c b/drivers/net/ethernet/arc/emac_main.c
index dbea847..abe1eab 100644
--- a/drivers/net/ethernet/arc/emac_main.c
+++ b/drivers/net/ethernet/arc/emac_main.c
@@ -28,6 +28,17 @@
/**
+ * arc_emac_tx_avail - Return the number of available slots in the tx ring.
+ * @priv: Pointer to ARC EMAC private data structure.
+ *
* returns: the number of slots available for transmission in the tx ring.
+ */
+static inline int arc_emac_tx_avail(struct arc_emac_priv *priv)
+{
+ return (priv->txbd_dirty + TX_BD_NUM - priv->txbd_curr - 1) % TX_BD_NUM;
+}
+
+/**
* arc_emac_adjust_link - Adjust the PHY link duplex.
* @ndev: Pointer to the net_device structure.
*
@@ -182,10 +193,15 @@
txbd->info = 0;
*txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM;
-
- if (netif_queue_stopped(ndev))
- netif_wake_queue(ndev);
}
+
+ /* Ensure that txbd_dirty is visible to tx() before checking
+ * for queue stopped.
+ */
+ smp_mb();
+
+ if (netif_queue_stopped(ndev) && arc_emac_tx_avail(priv))
+ netif_wake_queue(ndev);
}
/**
@@ -300,7 +316,7 @@
work_done = arc_emac_rx(ndev, budget);
if (work_done < budget) {
napi_complete(napi);
- arc_reg_or(priv, R_ENABLE, RXINT_MASK);
+ arc_reg_or(priv, R_ENABLE, RXINT_MASK | TXINT_MASK);
}
return work_done;
@@ -329,9 +345,9 @@
/* Reset all flags except "MDIO complete" */
arc_reg_set(priv, R_STATUS, status);
- if (status & RXINT_MASK) {
+ if (status & (RXINT_MASK | TXINT_MASK)) {
if (likely(napi_schedule_prep(&priv->napi))) {
- arc_reg_clr(priv, R_ENABLE, RXINT_MASK);
+ arc_reg_clr(priv, R_ENABLE, RXINT_MASK | TXINT_MASK);
__napi_schedule(&priv->napi);
}
}
@@ -442,7 +458,7 @@
arc_reg_set(priv, R_TX_RING, (unsigned int)priv->txbd_dma);
/* Enable interrupts */
- arc_reg_set(priv, R_ENABLE, RXINT_MASK | ERR_MASK);
+ arc_reg_set(priv, R_ENABLE, RXINT_MASK | TXINT_MASK | ERR_MASK);
/* Set CONTROL */
arc_reg_set(priv, R_CTRL,
@@ -513,7 +529,7 @@
netif_stop_queue(ndev);
/* Disable interrupts */
- arc_reg_clr(priv, R_ENABLE, RXINT_MASK | ERR_MASK);
+ arc_reg_clr(priv, R_ENABLE, RXINT_MASK | TXINT_MASK | ERR_MASK);
/* Disable EMAC */
arc_reg_clr(priv, R_CTRL, EN_MASK);
@@ -576,11 +592,9 @@
len = max_t(unsigned int, ETH_ZLEN, skb->len);
- /* EMAC still holds this buffer in its possession.
- * CPU must not modify this buffer descriptor
- */
- if (unlikely((le32_to_cpu(*info) & OWN_MASK) == FOR_EMAC)) {
+ if (unlikely(!arc_emac_tx_avail(priv))) {
netif_stop_queue(ndev);
+ netdev_err(ndev, "BUG! Tx Ring full when queue awake!\n");
return NETDEV_TX_BUSY;
}
@@ -609,12 +623,19 @@
/* Increment index to point to the next BD */
*txbd_curr = (*txbd_curr + 1) % TX_BD_NUM;
- /* Get "info" of the next BD */
- info = &priv->txbd[*txbd_curr].info;
+ /* Ensure that tx_clean() sees the new txbd_curr before
+ * checking the queue status. This prevents an unneeded wake
+ * of the queue in tx_clean().
+ */
+ smp_mb();
- /* Check if if Tx BD ring is full - next BD is still owned by EMAC */
- if (unlikely((le32_to_cpu(*info) & OWN_MASK) == FOR_EMAC))
+ if (!arc_emac_tx_avail(priv)) {
netif_stop_queue(ndev);
+ /* Refresh tx_dirty */
+ smp_mb();
+ if (arc_emac_tx_avail(priv))
+ netif_start_queue(ndev);
+ }
arc_reg_set(priv, R_STATUS, TXPL_MASK);
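The arc_emac rework above replaces the per-descriptor OWN_MASK check with explicit ring accounting plus paired smp_mb() barriers between tx() and tx_clean(). The free-slot arithmetic keeps one slot permanently unused so a full ring and an empty ring stay distinguishable; a sketch under an assumed ring size:

#define TX_BD_NUM 128	/* assumed ring size for the example */

/* free slots between the consumer (dirty) and producer (curr) indices */
static inline int tx_avail(unsigned int dirty, unsigned int curr)
{
	return (dirty + TX_BD_NUM - curr - 1) % TX_BD_NUM;
}

/* tx_avail(0, 0) -> 127: empty ring, one slot held in reserve;
 * tx_avail(5, 4) -> 0:   producer has caught up, ring is full.
 */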
diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c
index 56fadbd..416620f 100644
--- a/drivers/net/ethernet/broadcom/b44.c
+++ b/drivers/net/ethernet/broadcom/b44.c
@@ -1697,7 +1697,7 @@
hwstat->tx_underruns +
hwstat->tx_excessive_cols +
hwstat->tx_late_cols);
- nstat->multicast = hwstat->tx_multicast_pkts;
+ nstat->multicast = hwstat->rx_multicast_pkts;
nstat->collisions = hwstat->tx_total_cols;
nstat->rx_length_errors = (hwstat->rx_oversize_pkts +
diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
index 662cf22..77f1ff7 100644
--- a/drivers/net/ethernet/broadcom/bcmsysport.c
+++ b/drivers/net/ethernet/broadcom/bcmsysport.c
@@ -543,6 +543,25 @@
while ((processed < to_process) && (processed < budget)) {
cb = &priv->rx_cbs[priv->rx_read_ptr];
skb = cb->skb;
+
+ processed++;
+ priv->rx_read_ptr++;
+
+ if (priv->rx_read_ptr == priv->num_rx_bds)
+ priv->rx_read_ptr = 0;
+
+ /* We do not have a backing SKB, so we do not have a corresponding
+ * DMA mapping for this incoming packet since
+ * bcm_sysport_rx_refill always either has both skb and mapping
+ * or none.
+ */
+ if (unlikely(!skb)) {
+ netif_err(priv, rx_err, ndev, "out of memory!\n");
+ ndev->stats.rx_dropped++;
+ ndev->stats.rx_errors++;
+ goto refill;
+ }
+
dma_unmap_single(kdev, dma_unmap_addr(cb, dma_addr),
RX_BUF_LENGTH, DMA_FROM_DEVICE);
@@ -552,23 +571,11 @@
status = (rsb->rx_status_len >> DESC_STATUS_SHIFT) &
DESC_STATUS_MASK;
- processed++;
- priv->rx_read_ptr++;
- if (priv->rx_read_ptr == priv->num_rx_bds)
- priv->rx_read_ptr = 0;
-
netif_dbg(priv, rx_status, ndev,
"p=%d, c=%d, rd_ptr=%d, len=%d, flag=0x%04x\n",
p_index, priv->rx_c_index, priv->rx_read_ptr,
len, status);
- if (unlikely(!skb)) {
- netif_err(priv, rx_err, ndev, "out of memory!\n");
- ndev->stats.rx_dropped++;
- ndev->stats.rx_errors++;
- goto refill;
- }
-
if (unlikely(!(status & DESC_EOP) || !(status & DESC_SOP))) {
netif_err(priv, rx_status, ndev, "fragmented packet!\n");
ndev->stats.rx_dropped++;
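The bcmsysport rx loop (and the matching bcmgenet change further down) now consumes the completion first: the read pointer is advanced and wrapped before the missing-skb check, so an early 'goto refill' always runs with consistent indices and refills exactly the slot just consumed. A minimal sketch of that consume step:

struct rx_ring {		/* illustrative, not the driver struct */
	unsigned int rd_ptr;
	unsigned int num_bds;
};

/* advance first; any later error path may safely refill slot 'taken' */
static unsigned int rx_consume(struct rx_ring *r)
{
	unsigned int taken = r->rd_ptr;

	if (++r->rd_ptr == r->num_bds)
		r->rd_ptr = 0;	/* wrap around the ring */
	return taken;
}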
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 86e9451..c3a6072 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1448,6 +1448,12 @@
struct bnx2x_eth_q_stats_old eth_q_stats_old;
};
+enum {
+ SUB_MF_MODE_UNKNOWN = 0,
+ SUB_MF_MODE_UFP,
+ SUB_MF_MODE_NPAR1_DOT_5,
+};
+
struct bnx2x {
/* Fields used in the tx and intr/napi performance paths
* are grouped together in the beginning of the structure
@@ -1659,6 +1665,9 @@
#define IS_MF_SI(bp) (bp->mf_mode == MULTI_FUNCTION_SI)
#define IS_MF_SD(bp) (bp->mf_mode == MULTI_FUNCTION_SD)
#define IS_MF_AFEX(bp) (bp->mf_mode == MULTI_FUNCTION_AFEX)
+ u8 mf_sub_mode;
+#define IS_MF_UFP(bp) (IS_MF_SD(bp) && \
+ bp->mf_sub_mode == SUB_MF_MODE_UFP)
u8 wol;
@@ -2361,7 +2370,7 @@
#define ATTN_HARD_WIRED_MASK 0xff00
#define ATTENTION_ID 4
-#define IS_MF_STORAGE_ONLY(bp) (IS_MF_STORAGE_SD(bp) || \
+#define IS_MF_STORAGE_ONLY(bp) (IS_MF_STORAGE_PERSONALITY_ONLY(bp) || \
IS_MF_FCOE_AFEX(bp))
/* stuff added to make the code fit 80Col */
@@ -2537,14 +2546,44 @@
#define IS_MF_ISCSI_SD(bp) (IS_MF_SD(bp) && BNX2X_IS_MF_SD_PROTOCOL_ISCSI(bp))
#define IS_MF_FCOE_SD(bp) (IS_MF_SD(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp))
+#define IS_MF_ISCSI_SI(bp) (IS_MF_SI(bp) && BNX2X_IS_MF_EXT_PROTOCOL_ISCSI(bp))
-#define BNX2X_MF_EXT_PROTOCOL_FCOE(bp) ((bp)->mf_ext_config & \
- MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+#define IS_MF_ISCSI_ONLY(bp) (IS_MF_ISCSI_SD(bp) || IS_MF_ISCSI_SI(bp))
-#define IS_MF_FCOE_AFEX(bp) (IS_MF_AFEX(bp) && BNX2X_MF_EXT_PROTOCOL_FCOE(bp))
-#define IS_MF_STORAGE_SD(bp) (IS_MF_SD(bp) && \
- (BNX2X_IS_MF_SD_PROTOCOL_ISCSI(bp) || \
- BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)))
+#define BNX2X_MF_EXT_PROTOCOL_MASK \
+ (MACP_FUNC_CFG_FLAGS_ETHERNET | \
+ MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD | \
+ MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+
+#define BNX2X_MF_EXT_PROT(bp) ((bp)->mf_ext_config & \
+ BNX2X_MF_EXT_PROTOCOL_MASK)
+
+#define BNX2X_HAS_MF_EXT_PROTOCOL_FCOE(bp) \
+ (BNX2X_MF_EXT_PROT(bp) & MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+
+#define BNX2X_IS_MF_EXT_PROTOCOL_FCOE(bp) \
+ (BNX2X_MF_EXT_PROT(bp) == MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+
+#define BNX2X_IS_MF_EXT_PROTOCOL_ISCSI(bp) \
+ (BNX2X_MF_EXT_PROT(bp) == MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD)
+
+#define IS_MF_FCOE_AFEX(bp) \
+ (IS_MF_AFEX(bp) && BNX2X_IS_MF_EXT_PROTOCOL_FCOE(bp))
+
+#define IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp) \
+ (IS_MF_SD(bp) && \
+ (BNX2X_IS_MF_SD_PROTOCOL_ISCSI(bp) || \
+ BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)))
+
+#define IS_MF_SI_STORAGE_PERSONALITY_ONLY(bp) \
+ (IS_MF_SI(bp) && \
+ (BNX2X_IS_MF_EXT_PROTOCOL_ISCSI(bp) || \
+ BNX2X_IS_MF_EXT_PROTOCOL_FCOE(bp)))
+
+#define IS_MF_STORAGE_PERSONALITY_ONLY(bp) \
+ (IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp) || \
+ IS_MF_SI_STORAGE_PERSONALITY_ONLY(bp))
+
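The reworked macros above encode two different questions against the same extended-config word: testing one flag with '&' asks whether a protocol is offloaded at all (BNX2X_HAS_MF_EXT_PROTOCOL_FCOE), while masking and comparing with '==' asks whether it is the only protocol (BNX2X_IS_MF_EXT_PROTOCOL_FCOE). A sketch with illustrative flag values standing in for the MACP_FUNC_CFG_FLAGS_* constants:

#define PROT_ETHERNET	0x01	/* illustrative values only */
#define PROT_ISCSI	0x02
#define PROT_FCOE	0x04
#define PROT_MASK	(PROT_ETHERNET | PROT_ISCSI | PROT_FCOE)

static int has_fcoe(unsigned int cfg)		/* FCoE among others? */
{
	return !!((cfg & PROT_MASK) & PROT_FCOE);
}

static int is_fcoe_only(unsigned int cfg)	/* FCoE and nothing else? */
{
	return (cfg & PROT_MASK) == PROT_FCOE;
}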
#define SET_FLAG(value, mask, flag) \
do {\
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 6dc32ae..40beef5 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -1938,7 +1938,7 @@
bp->num_ethernet_queues = bnx2x_calc_num_queues(bp);
/* override in STORAGE SD modes */
- if (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))
+ if (IS_MF_STORAGE_ONLY(bp))
bp->num_ethernet_queues = 1;
/* Add special queues */
@@ -4231,14 +4231,13 @@
struct bnx2x *bp = netdev_priv(dev);
int rc = 0;
- if (!bnx2x_is_valid_ether_addr(bp, addr->sa_data)) {
+ if (!is_valid_ether_addr(addr->sa_data)) {
BNX2X_ERR("Requested MAC address is not valid\n");
return -EINVAL;
}
- if ((IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp)) &&
- !is_zero_ether_addr(addr->sa_data)) {
- BNX2X_ERR("Can't configure non-zero address on iSCSI or FCoE functions in MF-SD mode\n");
+ if (IS_MF_STORAGE_ONLY(bp)) {
+ BNX2X_ERR("Can't change address on STORAGE ONLY function\n");
return -EINVAL;
}
@@ -4417,8 +4416,7 @@
u8 cos;
int rx_ring_size = 0;
- if (!bp->rx_ring_size &&
- (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))) {
+ if (!bp->rx_ring_size && IS_MF_STORAGE_ONLY(bp)) {
rx_ring_size = MIN_RX_SIZE_NONTPA;
bp->rx_ring_size = rx_ring_size;
} else if (!bp->rx_ring_size) {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index ac63e16..adcacda 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -936,6 +936,12 @@
start_params->gre_tunnel_type = IPGRE_TUNNEL;
start_params->inner_gre_rss_en = 1;
+ if (IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)) {
+ start_params->class_fail_ethtype = ETH_P_FIP;
+ start_params->class_fail = 1;
+ start_params->no_added_tags = 1;
+ }
+
return bnx2x_func_state_change(bp, &func_params);
}
@@ -1298,15 +1304,7 @@
}
}
-static inline bool bnx2x_is_valid_ether_addr(struct bnx2x *bp, u8 *addr)
-{
- if (is_valid_ether_addr(addr) ||
- (is_zero_ether_addr(addr) &&
- (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))))
- return true;
- return false;
-}
/**
* bnx2x_fill_fw_str - Fill buffer with FW version string
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index 0b173ed..1edc931 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -1852,7 +1852,7 @@
if ((ering->rx_pending > MAX_RX_AVAIL) ||
(ering->rx_pending < (bp->disable_tpa ? MIN_RX_SIZE_NONTPA :
MIN_RX_SIZE_TPA)) ||
- (ering->tx_pending > (IS_MF_FCOE_AFEX(bp) ? 0 : MAX_TX_AVAIL)) ||
+ (ering->tx_pending > (IS_MF_STORAGE_ONLY(bp) ? 0 : MAX_TX_AVAIL)) ||
(ering->tx_pending <= MAX_SKB_FRAGS + 4)) {
DP(BNX2X_MSG_ETHTOOL, "Command parameters not supported\n");
return -EINVAL;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
index 3e0621a..583591d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
@@ -280,17 +280,11 @@
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_BOTH 0x60000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_SWAPPED 0x80000000
-
- u32 power_dissipated; /* 0x11c */
- #define SHARED_HW_CFG_POWER_MGNT_SCALE_MASK 0x00ff0000
- #define SHARED_HW_CFG_POWER_MGNT_SCALE_SHIFT 16
- #define SHARED_HW_CFG_POWER_MGNT_UNKNOWN_SCALE 0x00000000
- #define SHARED_HW_CFG_POWER_MGNT_DOT_1_WATT 0x00010000
- #define SHARED_HW_CFG_POWER_MGNT_DOT_01_WATT 0x00020000
- #define SHARED_HW_CFG_POWER_MGNT_DOT_001_WATT 0x00030000
-
- #define SHARED_HW_CFG_POWER_DIS_CMN_MASK 0xff000000
- #define SHARED_HW_CFG_POWER_DIS_CMN_SHIFT 24
+ u32 config_3; /* 0x11C */
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_MASK 0x00000F00
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_SHIFT 8
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR1_DOT_5 0x00000000
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR2_DOT_0 0x00000100
u32 ump_nc_si_config; /* 0x120 */
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_MASK 0x00000003
@@ -859,6 +853,8 @@
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SPIO4 0x00000200
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SWITCH_INDEPT 0x00000300
#define SHARED_FEAT_CFG_FORCE_SF_MODE_AFEX_MODE 0x00000400
+ #define SHARED_FEAT_CFG_FORCE_SF_MODE_UFP_MODE 0x00000600
+ #define SHARED_FEAT_CFG_FORCE_SF_MODE_EXTENDED_MODE 0x00000700
/* The interval in seconds between sending LLDP packets. Set to zero
to disable the feature */
@@ -1268,6 +1264,10 @@
#define DRV_MSG_CODE_GET_UPGRADE_KEY 0x81000000
#define DRV_MSG_CODE_GET_MANUF_KEY 0x82000000
#define DRV_MSG_CODE_LOAD_L2B_PRAM 0x90000000
+ #define DRV_MSG_CODE_OEM_OK 0x00010000
+ #define DRV_MSG_CODE_OEM_FAILURE 0x00020000
+ #define DRV_MSG_CODE_OEM_UPDATE_SVID_OK 0x00030000
+ #define DRV_MSG_CODE_OEM_UPDATE_SVID_FAILURE 0x00040000
/*
* The optic module verification command requires bootcode
* v5.0.6 or later, the specific optic module verification command
@@ -1423,6 +1423,12 @@
#define DRV_STATUS_SET_MF_BW 0x00000004
#define DRV_STATUS_LINK_EVENT 0x00000008
+ #define DRV_STATUS_OEM_EVENT_MASK 0x00000070
+ #define DRV_STATUS_OEM_DISABLE_ENABLE_PF 0x00000010
+ #define DRV_STATUS_OEM_BANDWIDTH_ALLOCATION 0x00000020
+
+ #define DRV_STATUS_OEM_UPDATE_SVID 0x00000080
+
#define DRV_STATUS_DCC_EVENT_MASK 0x0000ff00
#define DRV_STATUS_DCC_DISABLE_ENABLE_PF 0x00000100
#define DRV_STATUS_DCC_BANDWIDTH_ALLOCATION 0x00000200
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 32e2444..74fbf9e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -2905,6 +2905,57 @@
}
}
+static void bnx2x_handle_update_svid_cmd(struct bnx2x *bp)
+{
+ struct bnx2x_func_switch_update_params *switch_update_params;
+ struct bnx2x_func_state_params func_params;
+
+ memset(&func_params, 0, sizeof(struct bnx2x_func_state_params));
+ switch_update_params = &func_params.params.switch_update;
+ func_params.f_obj = &bp->func_obj;
+ func_params.cmd = BNX2X_F_CMD_SWITCH_UPDATE;
+
+ if (IS_MF_UFP(bp)) {
+ int func = BP_ABS_FUNC(bp);
+ u32 val;
+
+ /* Re-learn the S-tag from shmem */
+ val = MF_CFG_RD(bp, func_mf_config[func].e1hov_tag) &
+ FUNC_MF_CFG_E1HOV_TAG_MASK;
+ if (val != FUNC_MF_CFG_E1HOV_TAG_DEFAULT) {
+ bp->mf_ov = val;
+ } else {
+ BNX2X_ERR("Got an SVID event, but no tag is configured in shmem\n");
+ goto fail;
+ }
+
+ /* Configure new S-tag in LLH */
+ REG_WR(bp, NIG_REG_LLH0_FUNC_VLAN_ID + BP_PORT(bp) * 8,
+ bp->mf_ov);
+
+ /* Send Ramrod to update FW of change */
+ __set_bit(BNX2X_F_UPDATE_SD_VLAN_TAG_CHNG,
+ &switch_update_params->changes);
+ switch_update_params->vlan = bp->mf_ov;
+
+ if (bnx2x_func_state_change(bp, &func_params) < 0) {
+ BNX2X_ERR("Failed to configure FW of S-tag Change to %02x\n",
+ bp->mf_ov);
+ goto fail;
+ }
+
+ DP(BNX2X_MSG_MCP, "Configured S-tag %02x\n", bp->mf_ov);
+
+ bnx2x_fw_command(bp, DRV_MSG_CODE_OEM_UPDATE_SVID_OK, 0);
+
+ return;
+ }
+
+ /* not supported by SW yet */
+fail:
+ bnx2x_fw_command(bp, DRV_MSG_CODE_OEM_UPDATE_SVID_FAILURE, 0);
+}
+
static void bnx2x_pmf_update(struct bnx2x *bp)
{
int port = BP_PORT(bp);
@@ -3297,7 +3348,8 @@
{
int port = BP_PORT(bp);
- REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port*8, 1);
+ if (!(IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)))
+ REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port * 8, 1);
/* Tx queue should be only re-enabled */
netif_tx_wake_all_queues(bp->dev);
@@ -3652,14 +3704,30 @@
ethver, iscsiver, fcoever);
}
-static void bnx2x_dcc_event(struct bnx2x *bp, u32 dcc_event)
+static void bnx2x_oem_event(struct bnx2x *bp, u32 event)
{
- DP(BNX2X_MSG_MCP, "dcc_event 0x%x\n", dcc_event);
+ u32 cmd_ok, cmd_fail;
- if (dcc_event & DRV_STATUS_DCC_DISABLE_ENABLE_PF) {
+ /* sanity */
+ if (event & DRV_STATUS_DCC_EVENT_MASK &&
+ event & DRV_STATUS_OEM_EVENT_MASK) {
+ BNX2X_ERR("Received simultaneous events %08x\n", event);
+ return;
+ }
- /*
- * This is the only place besides the function initialization
+ if (event & DRV_STATUS_DCC_EVENT_MASK) {
+ cmd_fail = DRV_MSG_CODE_DCC_FAILURE;
+ cmd_ok = DRV_MSG_CODE_DCC_OK;
+ } else /* if (event & DRV_STATUS_OEM_EVENT_MASK) */ {
+ cmd_fail = DRV_MSG_CODE_OEM_FAILURE;
+ cmd_ok = DRV_MSG_CODE_OEM_OK;
+ }
+
+ DP(BNX2X_MSG_MCP, "oem_event 0x%x\n", event);
+
+ if (event & (DRV_STATUS_DCC_DISABLE_ENABLE_PF |
+ DRV_STATUS_OEM_DISABLE_ENABLE_PF)) {
+ /* This is the only place besides the function initialization
* where the bp->flags can change so it is done without any
* locks
*/
@@ -3674,18 +3742,22 @@
bnx2x_e1h_enable(bp);
}
- dcc_event &= ~DRV_STATUS_DCC_DISABLE_ENABLE_PF;
+ event &= ~(DRV_STATUS_DCC_DISABLE_ENABLE_PF |
+ DRV_STATUS_OEM_DISABLE_ENABLE_PF);
}
- if (dcc_event & DRV_STATUS_DCC_BANDWIDTH_ALLOCATION) {
+
+ if (event & (DRV_STATUS_DCC_BANDWIDTH_ALLOCATION |
+ DRV_STATUS_OEM_BANDWIDTH_ALLOCATION)) {
bnx2x_config_mf_bw(bp);
- dcc_event &= ~DRV_STATUS_DCC_BANDWIDTH_ALLOCATION;
+ event &= ~(DRV_STATUS_DCC_BANDWIDTH_ALLOCATION |
+ DRV_STATUS_OEM_BANDWIDTH_ALLOCATION);
}
/* Report results to MCP */
- if (dcc_event)
- bnx2x_fw_command(bp, DRV_MSG_CODE_DCC_FAILURE, 0);
+ if (event)
+ bnx2x_fw_command(bp, cmd_fail, 0);
else
- bnx2x_fw_command(bp, DRV_MSG_CODE_DCC_OK, 0);
+ bnx2x_fw_command(bp, cmd_ok, 0);
}
/* must be called under the spq lock */
@@ -4167,9 +4239,12 @@
func_mf_config[BP_ABS_FUNC(bp)].config);
val = SHMEM_RD(bp,
func_mb[BP_FW_MB_IDX(bp)].drv_status);
- if (val & DRV_STATUS_DCC_EVENT_MASK)
- bnx2x_dcc_event(bp,
- (val & DRV_STATUS_DCC_EVENT_MASK));
+
+ if (val & (DRV_STATUS_DCC_EVENT_MASK |
+ DRV_STATUS_OEM_EVENT_MASK))
+ bnx2x_oem_event(bp,
+ (val & (DRV_STATUS_DCC_EVENT_MASK |
+ DRV_STATUS_OEM_EVENT_MASK)));
if (val & DRV_STATUS_SET_MF_BW)
bnx2x_set_mf_bw(bp);
@@ -4195,6 +4270,10 @@
val & DRV_STATUS_AFEX_EVENT_MASK);
if (val & DRV_STATUS_EEE_NEGOTIATION_RESULTS)
bnx2x_handle_eee_event(bp);
+
+ if (val & DRV_STATUS_OEM_UPDATE_SVID)
+ bnx2x_handle_update_svid_cmd(bp);
+
if (bp->link_vars.periodic_flags &
PERIODIC_FLAGS_LINK_EVENT) {
/* sync with link */
@@ -7930,8 +8009,11 @@
REG_WR(bp, CFC_REG_WEAK_ENABLE_PF, 1);
if (IS_MF(bp)) {
- REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port*8, 1);
- REG_WR(bp, NIG_REG_LLH0_FUNC_VLAN_ID + port*8, bp->mf_ov);
+ if (!(IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp))) {
+ REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port * 8, 1);
+ REG_WR(bp, NIG_REG_LLH0_FUNC_VLAN_ID + port * 8,
+ bp->mf_ov);
+ }
}
bnx2x_init_block(bp, BLOCK_MISC_AEU, init_phase);
@@ -8323,13 +8405,6 @@
int bnx2x_set_eth_mac(struct bnx2x *bp, bool set)
{
- if (is_zero_ether_addr(bp->dev->dev_addr) &&
- (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))) {
- DP(NETIF_MSG_IFUP | NETIF_MSG_IFDOWN,
- "Ignoring Zero MAC for STORAGE SD mode\n");
- return 0;
- }
-
if (IS_PF(bp)) {
unsigned long ramrod_flags = 0;
@@ -11355,15 +11430,14 @@
dev_info.port_hw_config[port].
fcoe_wwn_node_name_lower);
} else if (!IS_MF_SD(bp)) {
- /*
- * Read the WWN info only if the FCoE feature is enabled for
+ /* Read the WWN info only if the FCoE feature is enabled for
* this function.
*/
- if (BNX2X_MF_EXT_PROTOCOL_FCOE(bp) && !CHIP_IS_E1x(bp))
+ if (BNX2X_HAS_MF_EXT_PROTOCOL_FCOE(bp))
bnx2x_get_ext_wwn_info(bp, func);
-
- } else if (IS_MF_FCOE_SD(bp) && !CHIP_IS_E1x(bp)) {
- bnx2x_get_ext_wwn_info(bp, func);
+ } else {
+ if (BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp) && !CHIP_IS_E1x(bp))
+ bnx2x_get_ext_wwn_info(bp, func);
}
BNX2X_DEV_INFO("max_fcoe_conn 0x%x\n", bp->cnic_eth_dev.max_fcoe_conn);
@@ -11401,7 +11475,7 @@
* In non SD mode features configuration comes from struct
* func_ext_config.
*/
- if (!IS_MF_SD(bp) && !CHIP_IS_E1x(bp)) {
+ if (!IS_MF_SD(bp)) {
u32 cfg = MF_CFG_RD(bp, func_ext_config[func].func_cfg);
if (cfg & MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD) {
val2 = MF_CFG_RD(bp, func_ext_config[func].
@@ -11520,7 +11594,7 @@
memcpy(bp->link_params.mac_addr, bp->dev->dev_addr, ETH_ALEN);
- if (!bnx2x_is_valid_ether_addr(bp, bp->dev->dev_addr))
+ if (!is_valid_ether_addr(bp->dev->dev_addr))
dev_err(&bp->pdev->dev,
"bad Ethernet MAC address configuration: %pM\n"
"change it manually before bringing up the appropriate network interface\n",
@@ -11550,11 +11624,27 @@
return cfg;
}
+static void validate_set_si_mode(struct bnx2x *bp)
+{
+ u8 func = BP_ABS_FUNC(bp);
+ u32 val;
+
+ val = MF_CFG_RD(bp, func_mf_config[func].mac_upper);
+
+ /* check for legal mac (upper bytes) */
+ if (val != 0xffff) {
+ bp->mf_mode = MULTI_FUNCTION_SI;
+ bp->mf_config[BP_VN(bp)] =
+ MF_CFG_RD(bp, func_mf_config[func].config);
+ } else
+ BNX2X_DEV_INFO("illegal MAC address for SI\n");
+}
+
static int bnx2x_get_hwinfo(struct bnx2x *bp)
{
int /*abs*/func = BP_ABS_FUNC(bp);
int vn;
- u32 val = 0;
+ u32 val = 0, val2 = 0;
int rc = 0;
bnx2x_get_common_hwinfo(bp);
@@ -11634,6 +11724,7 @@
bp->mf_ov = 0;
bp->mf_mode = 0;
+ bp->mf_sub_mode = 0;
vn = BP_VN(bp);
if (!CHIP_IS_E1(bp) && !BP_NOMCP(bp)) {
@@ -11663,15 +11754,7 @@
switch (val) {
case SHARED_FEAT_CFG_FORCE_SF_MODE_SWITCH_INDEPT:
- val = MF_CFG_RD(bp, func_mf_config[func].
- mac_upper);
- /* check for legal mac (upper bytes)*/
- if (val != 0xffff) {
- bp->mf_mode = MULTI_FUNCTION_SI;
- bp->mf_config[vn] = MF_CFG_RD(bp,
- func_mf_config[func].config);
- } else
- BNX2X_DEV_INFO("illegal MAC address for SI\n");
+ validate_set_si_mode(bp);
break;
case SHARED_FEAT_CFG_FORCE_SF_MODE_AFEX_MODE:
if ((!CHIP_IS_E1x(bp)) &&
@@ -11699,9 +11782,33 @@
} else
BNX2X_DEV_INFO("illegal OV for SD\n");
break;
+ case SHARED_FEAT_CFG_FORCE_SF_MODE_UFP_MODE:
+ bp->mf_mode = MULTI_FUNCTION_SD;
+ bp->mf_sub_mode = SUB_MF_MODE_UFP;
+ bp->mf_config[vn] =
+ MF_CFG_RD(bp,
+ func_mf_config[func].config);
+ break;
case SHARED_FEAT_CFG_FORCE_SF_MODE_FORCED_SF:
bp->mf_config[vn] = 0;
break;
+ case SHARED_FEAT_CFG_FORCE_SF_MODE_EXTENDED_MODE:
+ val2 = SHMEM_RD(bp,
+ dev_info.shared_hw_config.config_3);
+ val2 &= SHARED_HW_CFG_EXTENDED_MF_MODE_MASK;
+ switch (val2) {
+ case SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR1_DOT_5:
+ validate_set_si_mode(bp);
+ bp->mf_sub_mode =
+ SUB_MF_MODE_NPAR1_DOT_5;
+ break;
+ default:
+ /* Unknown configuration */
+ bp->mf_config[vn] = 0;
+ BNX2X_DEV_INFO("unknown extended MF mode 0x%x\n",
+ val2);
+ }
+ break;
default:
/* Unknown configuration: reset mf_config */
bp->mf_config[vn] = 0;
@@ -11722,6 +11829,11 @@
BNX2X_DEV_INFO("MF OV for func %d is %d (0x%04x)\n",
func, bp->mf_ov, bp->mf_ov);
+ } else if (bp->mf_sub_mode == SUB_MF_MODE_UFP) {
+ dev_err(&bp->pdev->dev,
+ "Unexpected - no valid MF OV for func %d in UFP mode\n",
+ func);
+ bp->path_has_ovlan = true;
} else {
dev_err(&bp->pdev->dev,
"No valid MF OV for func %d, aborting\n",
@@ -11970,7 +12082,7 @@
dev_err(&bp->pdev->dev, "MCP disabled, must load devices in order!\n");
bp->disable_tpa = disable_tpa;
- bp->disable_tpa |= IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp);
+ bp->disable_tpa |= !!IS_MF_STORAGE_ONLY(bp);
/* Reduce memory usage in kdump environment by disabling TPA */
bp->disable_tpa |= is_kdump_kernel();
@@ -11990,7 +12102,7 @@
bp->mrrs = mrrs;
- bp->tx_ring_size = IS_MF_FCOE_AFEX(bp) ? 0 : MAX_TX_AVAIL;
+ bp->tx_ring_size = IS_MF_STORAGE_ONLY(bp) ? 0 : MAX_TX_AVAIL;
if (IS_VF(bp))
bp->rx_ring_size = MAX_RX_AVAIL;
@@ -12310,7 +12422,7 @@
bp->rx_mode = rx_mode;
/* handle ISCSI SD mode */
- if (IS_MF_ISCSI_SD(bp))
+ if (IS_MF_ISCSI_ONLY(bp))
bp->rx_mode = BNX2X_RX_MODE_NONE;
/* Schedule the rx_mode command */
@@ -12417,7 +12529,7 @@
if (IS_VF(bp))
bnx2x_sample_bulletin(bp);
- if (!bnx2x_is_valid_ether_addr(bp, dev->dev_addr)) {
+ if (!is_valid_ether_addr(dev->dev_addr)) {
BNX2X_ERR("Non-valid Ethernet address\n");
return -EADDRNOTAVAIL;
}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
index 19d0c11..7bc2924 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
@@ -5673,8 +5673,23 @@
rdata->gre_tunnel_type = start_params->gre_tunnel_type;
rdata->inner_gre_rss_en = start_params->inner_gre_rss_en;
rdata->vxlan_dst_port = cpu_to_le16(4789);
- rdata->sd_vlan_eth_type = cpu_to_le16(0x8100);
+ rdata->sd_accept_mf_clss_fail = start_params->class_fail;
+ if (start_params->class_fail_ethtype) {
+ rdata->sd_accept_mf_clss_fail_match_ethtype = 1;
+ rdata->sd_accept_mf_clss_fail_ethtype =
+ cpu_to_le16(start_params->class_fail_ethtype);
+ }
+ rdata->sd_vlan_force_pri_flg = start_params->sd_vlan_force_pri;
+ rdata->sd_vlan_force_pri_val = start_params->sd_vlan_force_pri_val;
+ if (start_params->sd_vlan_eth_type)
+ rdata->sd_vlan_eth_type =
+ cpu_to_le16(start_params->sd_vlan_eth_type);
+ else
+ rdata->sd_vlan_eth_type =
+ cpu_to_le16(0x8100);
+
+ rdata->no_added_tags = start_params->no_added_tags;
/* No need for an explicit memory barrier here as long we would
* need to ensure the ordering of writing to the SPQ element
* and updating of the SPQ producer which involves a memory
@@ -5708,6 +5723,30 @@
&switch_update_params->changes);
}
+ if (test_bit(BNX2X_F_UPDATE_SD_VLAN_TAG_CHNG,
+ &switch_update_params->changes)) {
+ rdata->sd_vlan_tag_change_flg = 1;
+ rdata->sd_vlan_tag =
+ cpu_to_le16(switch_update_params->vlan);
+ }
+
+ if (test_bit(BNX2X_F_UPDATE_SD_VLAN_ETH_TYPE_CHNG,
+ &switch_update_params->changes)) {
+ rdata->sd_vlan_eth_type_change_flg = 1;
+ rdata->sd_vlan_eth_type =
+ cpu_to_le16(switch_update_params->vlan_eth_type);
+ }
+
+ if (test_bit(BNX2X_F_UPDATE_VLAN_FORCE_PRIO_CHNG,
+ &switch_update_params->changes)) {
+ rdata->sd_vlan_force_pri_change_flg = 1;
+ if (test_bit(BNX2X_F_UPDATE_VLAN_FORCE_PRIO_FLAG,
+ &switch_update_params->changes))
+ rdata->sd_vlan_force_pri_flg = 1;
+ rdata->sd_vlan_force_pri_flg =
+ switch_update_params->vlan_force_prio;
+ }
+
if (test_bit(BNX2X_F_UPDATE_TUNNEL_CFG_CHNG,
&switch_update_params->changes)) {
rdata->update_tunn_cfg_flg = 1;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
index 21c8f6f..e97275f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
@@ -1098,6 +1098,10 @@
enum {
BNX2X_F_UPDATE_TX_SWITCH_SUSPEND_CHNG,
BNX2X_F_UPDATE_TX_SWITCH_SUSPEND,
+ BNX2X_F_UPDATE_SD_VLAN_TAG_CHNG,
+ BNX2X_F_UPDATE_SD_VLAN_ETH_TYPE_CHNG,
+ BNX2X_F_UPDATE_VLAN_FORCE_PRIO_CHNG,
+ BNX2X_F_UPDATE_VLAN_FORCE_PRIO_FLAG,
BNX2X_F_UPDATE_TUNNEL_CFG_CHNG,
BNX2X_F_UPDATE_TUNNEL_CLSS_EN,
BNX2X_F_UPDATE_TUNNEL_INNER_GRE_RSS_EN,
@@ -1178,10 +1182,29 @@
* capabilities
*/
u8 inner_gre_rss_en;
+
+ /* Allows accepting of packets failing MF classification, possibly
+ * only matching a given ethertype
+ */
+ u8 class_fail;
+ u16 class_fail_ethtype;
+
+ /* Override priority of output packets */
+ u8 sd_vlan_force_pri;
+ u8 sd_vlan_force_pri_val;
+
+ /* Replace vlan's ethertype */
+ u16 sd_vlan_eth_type;
+
+ /* Prevent inner vlans from being added by FW */
+ u8 no_added_tags;
};
struct bnx2x_func_switch_update_params {
unsigned long changes; /* BNX2X_F_UPDATE_XX bits */
+ u16 vlan;
+ u16 vlan_eth_type;
+ u8 vlan_force_prio;
u8 tunnel_mode;
u8 gre_tunnel_type;
};
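
The BNX2X_F_UPDATE_*_CHNG / *_FLAG additions above follow the driver's usual
"changes bitmask" convention: the caller sets one bit per optional field in
switch_update_params->changes, and the ramrod builder copies a field into the
firmware request only when its bit is set. A minimal userspace sketch of the
pattern, with invented names (this is an illustration, not driver code):

#include <stdio.h>

enum { UPDATE_VLAN_TAG_CHNG };	/* one bit per optional field */

struct update_params {
	unsigned long changes;
	unsigned short vlan;
};

struct fw_request {		/* stand-in for the ramrod data */
	unsigned char vlan_tag_change_flg;
	unsigned short vlan_tag;
};

static int test_bit_ul(int nr, unsigned long word)
{
	return (word >> nr) & 1UL;
}

int main(void)
{
	struct update_params p = { .vlan = 100 };
	struct fw_request req = { 0 };

	p.changes |= 1UL << UPDATE_VLAN_TAG_CHNG;

	if (test_bit_ul(UPDATE_VLAN_TAG_CHNG, p.changes)) {
		req.vlan_tag_change_flg = 1;	/* tells FW the field is valid */
		req.vlan_tag = p.vlan;
	}
	printf("flg=%d tag=%d\n", req.vlan_tag_change_flg, req.vlan_tag);
	return 0;
}
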
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 3f9d4de..77cb755 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -875,6 +875,7 @@
int last_tx_cn, last_c_index, num_tx_bds;
struct enet_cb *tx_cb_ptr;
struct netdev_queue *txq;
+ unsigned int bds_compl;
unsigned int c_index;
/* Compute how many buffers are transmitted since last xmit call */
@@ -899,7 +900,9 @@
/* Reclaim transmitted buffers */
while (last_tx_cn-- > 0) {
tx_cb_ptr = ring->cbs + last_c_index;
+ bds_compl = 0;
if (tx_cb_ptr->skb) {
+ bds_compl = skb_shinfo(tx_cb_ptr->skb)->nr_frags + 1;
dev->stats.tx_bytes += tx_cb_ptr->skb->len;
dma_unmap_single(&dev->dev,
dma_unmap_addr(tx_cb_ptr, dma_addr),
@@ -916,7 +919,7 @@
dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
}
dev->stats.tx_packets++;
- ring->free_bds += 1;
+ ring->free_bds += bds_compl;
last_c_index++;
last_c_index &= (num_tx_bds - 1);
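
The bds_compl change above fixes the ring's free-descriptor accounting: a
completed skb occupies one buffer descriptor for its linear data plus one per
page fragment, while the fragment control blocks the loop walks past carry no
skb and must contribute nothing. In short (illustrative helper, not driver
code):

/* Descriptors consumed by one transmitted skb. */
static inline unsigned int bds_for_skb(unsigned int nr_frags)
{
	return nr_frags + 1;	/* e.g. 3 fragments -> 4 descriptors */
}
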
@@ -1274,12 +1277,29 @@
while ((rxpktprocessed < rxpkttoprocess) &&
(rxpktprocessed < budget)) {
+ cb = &priv->rx_cbs[priv->rx_read_ptr];
+ skb = cb->skb;
+
+ rxpktprocessed++;
+
+ priv->rx_read_ptr++;
+ priv->rx_read_ptr &= (priv->num_rx_bds - 1);
+
+ /* We do not have a backing SKB, so we do not have a
+ * corresponding DMA mapping for this incoming packet since
+ * bcmgenet_rx_refill always either has both skb and mapping or
+ * none.
+ */
+ if (unlikely(!skb)) {
+ dev->stats.rx_dropped++;
+ dev->stats.rx_errors++;
+ goto refill;
+ }
+
/* Unmap the packet contents such that we can use the
* RSV from the 64 bytes descriptor when enabled and save
* a 32-bits register read
*/
- cb = &priv->rx_cbs[priv->rx_read_ptr];
- skb = cb->skb;
dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr),
priv->rx_buf_len, DMA_FROM_DEVICE);
@@ -1307,18 +1327,6 @@
__func__, p_index, priv->rx_c_index,
priv->rx_read_ptr, dma_length_status);
- rxpktprocessed++;
-
- priv->rx_read_ptr++;
- priv->rx_read_ptr &= (priv->num_rx_bds - 1);
-
- /* out of memory, just drop packets at the hardware level */
- if (unlikely(!skb)) {
- dev->stats.rx_dropped++;
- dev->stats.rx_errors++;
- goto refill;
- }
-
if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
netif_err(priv, rx_status, dev,
"dropping fragmented packet!\n");
@@ -1736,13 +1744,63 @@
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
}
+static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
+{
+ int ret = 0;
+ int timeout = 0;
+ u32 reg;
+
+ /* Disable TDMA to stop adding more frames to TX DMA */
+ reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+ reg &= ~DMA_EN;
+ bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+
+ /* Check TDMA status register to confirm TDMA is disabled */
+ while (timeout++ < DMA_TIMEOUT_VAL) {
+ reg = bcmgenet_tdma_readl(priv, DMA_STATUS);
+ if (reg & DMA_DISABLED)
+ break;
+
+ udelay(1);
+ }
+
+ if (timeout == DMA_TIMEOUT_VAL) {
+ netdev_warn(priv->dev, "Timed out while disabling TX DMA\n");
+ ret = -ETIMEDOUT;
+ }
+
+ /* Wait 10ms for packet drain in both tx and rx dma */
+ usleep_range(10000, 20000);
+
+ /* Disable RDMA */
+ reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+ reg &= ~DMA_EN;
+ bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+ timeout = 0;
+ /* Check RDMA status register to confirm RDMA is disabled */
+ while (timeout++ < DMA_TIMEOUT_VAL) {
+ reg = bcmgenet_rdma_readl(priv, DMA_STATUS);
+ if (reg & DMA_DISABLED)
+ break;
+
+ udelay(1);
+ }
+
+ if (timeout == DMA_TIMEOUT_VAL) {
+ netdev_warn(priv->dev, "Timed out while disabling RX DMA\n");
+ ret = -ETIMEDOUT;
+ }
+
+ return ret;
+}
+
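
bcmgenet_dma_teardown() above, which bcmgenet_fini_dma() now calls in place of
blindly zeroing DMA_CTRL, uses the disable-then-poll idiom: clear the enable
bit, then spin on the status register until the engine acknowledges or a fixed
budget expires. The skeleton of that idiom, runnable with stubbed accessors
(assumption: the stubs stand in for MMIO reads and udelay):

#include <stdio.h>

#define DMA_TIMEOUT_VAL	1000		/* poll iterations, ~1us apart */
#define DMA_DISABLED	(1u << 0)

static unsigned int fake_dma_status = DMA_DISABLED;	/* pretend HW */
static unsigned int read_dma_status(void) { return fake_dma_status; }
static void delay_1us(void) { /* udelay(1) in the driver */ }

static int wait_dma_disabled(void)
{
	int timeout;

	for (timeout = 0; timeout < DMA_TIMEOUT_VAL; timeout++) {
		if (read_dma_status() & DMA_DISABLED)
			return 0;
		delay_1us();
	}
	return -1;	/* the driver warns and returns -ETIMEDOUT */
}

int main(void)
{
	printf("%d\n", wait_dma_disabled());	/* 0 */
	return 0;
}
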
static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
{
int i;
/* disable DMA */
- bcmgenet_rdma_writel(priv, 0, DMA_CTRL);
- bcmgenet_tdma_writel(priv, 0, DMA_CTRL);
+ bcmgenet_dma_teardown(priv);
for (i = 0; i < priv->num_tx_bds; i++) {
if (priv->tx_cbs[i].skb != NULL) {
@@ -1959,19 +2017,6 @@
bcmgenet_umac_writel(priv, (addr[4] << 8) | addr[5], UMAC_MAC1);
}
-static int bcmgenet_wol_resume(struct bcmgenet_priv *priv)
-{
- /* From WOL-enabled suspend, switch to regular clock */
- if (priv->wolopts)
- clk_disable_unprepare(priv->clk_wol);
-
- phy_init_hw(priv->phydev);
- /* Speed settings must be restored */
- bcmgenet_mii_config(priv->dev);
-
- return 0;
-}
-
/* Returns a reusable dma control register value */
static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
{
@@ -2101,57 +2146,6 @@
return ret;
}
-static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
-{
- int ret = 0;
- int timeout = 0;
- u32 reg;
-
- /* Disable TDMA to stop add more frames in TX DMA */
- reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
- reg &= ~DMA_EN;
- bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
-
- /* Check TDMA status register to confirm TDMA is disabled */
- while (timeout++ < DMA_TIMEOUT_VAL) {
- reg = bcmgenet_tdma_readl(priv, DMA_STATUS);
- if (reg & DMA_DISABLED)
- break;
-
- udelay(1);
- }
-
- if (timeout == DMA_TIMEOUT_VAL) {
- netdev_warn(priv->dev, "Timed out while disabling TX DMA\n");
- ret = -ETIMEDOUT;
- }
-
- /* Wait 10ms for packet drain in both tx and rx dma */
- usleep_range(10000, 20000);
-
- /* Disable RDMA */
- reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
- reg &= ~DMA_EN;
- bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
-
- timeout = 0;
- /* Check RDMA status register to confirm RDMA is disabled */
- while (timeout++ < DMA_TIMEOUT_VAL) {
- reg = bcmgenet_rdma_readl(priv, DMA_STATUS);
- if (reg & DMA_DISABLED)
- break;
-
- udelay(1);
- }
-
- if (timeout == DMA_TIMEOUT_VAL) {
- netdev_warn(priv->dev, "Timed out while disabling RX DMA\n");
- ret = -ETIMEDOUT;
- }
-
- return ret;
-}
-
static void bcmgenet_netif_stop(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
@@ -2432,6 +2426,13 @@
dev_info(&priv->pdev->dev, "GENET " GENET_VER_FMT,
major, (reg >> 16) & 0x0f, reg & 0xffff);
+ /* Store the integrated PHY revision for the MDIO probing function
+ * to pass this information to the PHY driver. The PHY driver expects
+ * to find the PHY major revision in bits 15:8 while the GENET register
+ * stores that information in bits 7:0, account for that.
+ */
+ priv->gphy_rev = (reg & 0xffff) << 8;
+
#ifdef CONFIG_PHYS_ADDR_T_64BIT
if (!(params->flags & GENET_HAS_40BITS))
pr_warn("GENET does not support 40-bits PA\n");
@@ -2669,9 +2670,13 @@
if (ret)
goto out_clk_disable;
- ret = bcmgenet_wol_resume(priv);
- if (ret)
- goto out_clk_disable;
+ /* From WOL-enabled suspend, switch to regular clock */
+ if (priv->wolopts)
+ clk_disable_unprepare(priv->clk_wol);
+
+ phy_init_hw(priv->phydev);
+ /* Speed settings must be restored */
+ bcmgenet_mii_config(priv->dev);
/* disable ethernet MAC while updating its registers */
umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, false);
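
The gphy_rev plumbing above (bcmgenet.c stores it, bcmgenet.h carries it, and
bcmmii.c below hands it to of_phy_connect()) is a pure bit-position adaptation
between two register conventions. Worked example with an invented register
value:

#include <stdio.h>

int main(void)
{
	unsigned int reg = 0x000000a0;	/* GENET: PHY revision 0xa0 in bits 7:0 */
	unsigned short gphy_rev = (reg & 0xffff) << 8;

	/* The PHY driver expects the major revision in bits 15:8. */
	printf("0x%04x\n", gphy_rev);	/* 0xa000 */
	return 0;
}
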
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index c862d06..ad95fe5 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -545,6 +545,7 @@
struct phy_device *phydev;
struct device_node *phy_dn;
struct mii_bus *mii_bus;
+ u16 gphy_rev;
/* PHY device variables */
int old_duplex;
diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
index c88f7ae..75b26cba 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -296,7 +296,7 @@
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device_node *dn = priv->pdev->dev.of_node;
struct phy_device *phydev;
- unsigned int phy_flags;
+ u32 phy_flags;
int ret;
if (priv->phydev) {
@@ -315,8 +315,11 @@
priv->phy_dn = of_node_get(dn);
}
- phydev = of_phy_connect(dev, priv->phy_dn, bcmgenet_mii_setup, 0,
- priv->phy_interface);
+ /* Communicate the integrated PHY revision */
+ phy_flags = priv->gphy_rev;
+
+ phydev = of_phy_connect(dev, priv->phy_dn, bcmgenet_mii_setup,
+ phy_flags, priv->phy_interface);
if (!phydev) {
pr_err("could not attach to PHY\n");
return -ENODEV;
@@ -338,15 +341,6 @@
return ret;
}
- phy_flags = PHY_BRCM_100MBPS_WAR;
-
- /* workarounds are only needed for 100Mpbs PHYs, and
- * never on GENET V1 hardware
- */
- if ((phydev->supported & PHY_GBIT_FEATURES) || GENET_IS_V1(priv))
- phy_flags = 0;
-
- phydev->dev_flags |= phy_flags;
phydev->advertising = phydev->supported;
/* The internal PHY has its link interrupts routed to the
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index cb77ae9..e7d3a62 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -7914,8 +7914,6 @@
entry = tnapi->tx_prod;
base_flags = 0;
- if (skb->ip_summed == CHECKSUM_PARTIAL)
- base_flags |= TXD_FLAG_TCPUDP_CSUM;
mss = skb_shinfo(skb)->gso_size;
if (mss) {
@@ -7929,6 +7927,13 @@
hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb) - ETH_HLEN;
+ /* HW/FW can not correctly segment packets that have been
+ * vlan encapsulated.
+ */
+ if (skb->protocol == htons(ETH_P_8021Q) ||
+ skb->protocol == htons(ETH_P_8021AD))
+ return tg3_tso_bug(tp, tnapi, txq, skb);
+
if (!skb_is_gso_v6(skb)) {
if (unlikely((ETH_HLEN + hdr_len) > 80) &&
tg3_flag(tp, TSO_BUG))
@@ -7979,6 +7984,17 @@
base_flags |= tsflags << 12;
}
}
+ } else if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ /* HW/FW can not correctly checksum packets that have been
+ * vlan encapsulated.
+ */
+ if (skb->protocol == htons(ETH_P_8021Q) ||
+ skb->protocol == htons(ETH_P_8021AD)) {
+ if (skb_checksum_help(skb))
+ goto drop;
+ } else {
+ base_flags |= TXD_FLAG_TCPUDP_CSUM;
+ }
}
if (tg3_flag(tp, USE_JUMBO_BDFLAG) &&
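
The two tg3 hunks above apply the same workaround at both offload points: the
hardware cannot handle frames whose skb->protocol is still a VLAN ethertype,
so TSO packets are bounced through tg3_tso_bug() and checksum offload falls
back to skb_checksum_help(), which computes the checksum in software before
the frame is queued. A self-contained sketch of the checksum decision (types
and helpers invented for illustration):

#include <stdio.h>
#include <stdint.h>

#define ETH_P_8021Q		0x8100
#define ETH_P_8021AD		0x88A8
#define TXD_FLAG_TCPUDP_CSUM	0x0001

/* Returns descriptor flags, or -1 when the packet must take the software
 * checksum path (the stand-in for skb_checksum_help()). */
static int tx_csum_flags(uint16_t ethertype, int needs_csum)
{
	if (!needs_csum)
		return 0;
	if (ethertype == ETH_P_8021Q || ethertype == ETH_P_8021AD)
		return -1;	/* HW/FW cannot checksum VLAN frames */
	return TXD_FLAG_TCPUDP_CSUM;
}

int main(void)
{
	printf("%d\n", tx_csum_flags(ETH_P_8021Q, 1));	/* -1: software */
	printf("%d\n", tx_csum_flags(0x0800, 1));	/* 1: offload */
	return 0;
}
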
diff --git a/drivers/net/ethernet/brocade/bna/bna_enet.c b/drivers/net/ethernet/brocade/bna/bna_enet.c
index 13f9636..903466e 100644
--- a/drivers/net/ethernet/brocade/bna/bna_enet.c
+++ b/drivers/net/ethernet/brocade/bna/bna_enet.c
@@ -107,7 +107,8 @@
{
struct bfi_enet_enable_req *admin_req =
&ethport->bfi_enet_cmd.admin_req;
- struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+ struct bfi_enet_rsp *rsp =
+ container_of(msghdr, struct bfi_enet_rsp, mh);
switch (admin_req->enable) {
case BNA_STATUS_T_ENABLED:
@@ -133,7 +134,8 @@
{
struct bfi_enet_diag_lb_req *diag_lb_req =
&ethport->bfi_enet_cmd.lpbk_req;
- struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+ struct bfi_enet_rsp *rsp =
+ container_of(msghdr, struct bfi_enet_rsp, mh);
switch (diag_lb_req->enable) {
case BNA_STATUS_T_ENABLED:
@@ -161,7 +163,8 @@
bna_bfi_attr_get_rsp(struct bna_ioceth *ioceth,
struct bfi_msgq_mhdr *msghdr)
{
- struct bfi_enet_attr_rsp *rsp = (struct bfi_enet_attr_rsp *)msghdr;
+ struct bfi_enet_attr_rsp *rsp =
+ container_of(msghdr, struct bfi_enet_attr_rsp, mh);
/**
* Store only if not set earlier, since BNAD can override the HW
diff --git a/drivers/net/ethernet/brocade/bna/bna_tx_rx.c b/drivers/net/ethernet/brocade/bna/bna_tx_rx.c
index 85e6354..8ee3fdc 100644
--- a/drivers/net/ethernet/brocade/bna/bna_tx_rx.c
+++ b/drivers/net/ethernet/brocade/bna/bna_tx_rx.c
@@ -715,7 +715,7 @@
struct bfi_msgq_mhdr *msghdr)
{
struct bfi_enet_rsp *rsp =
- (struct bfi_enet_rsp *)msghdr;
+ container_of(msghdr, struct bfi_enet_rsp, mh);
if (rsp->error) {
/* Clear ucast from cache */
@@ -732,7 +732,7 @@
struct bfi_enet_mcast_add_req *req =
&rxf->bfi_enet_cmd.mcast_add_req;
struct bfi_enet_mcast_add_rsp *rsp =
- (struct bfi_enet_mcast_add_rsp *)msghdr;
+ container_of(msghdr, struct bfi_enet_mcast_add_rsp, mh);
bna_rxf_mchandle_attach(rxf, (u8 *)&req->mac_addr,
ntohs(rsp->handle));
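
The bna conversions above replace pointer casts with container_of(). The cast
was only correct while the bfi_msgq_mhdr happened to be the first member of
the response structure; container_of() recovers the enclosing structure
wherever the member sits. A standalone demonstration (structure names
invented):

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct mhdr { int msg_id; };

struct enet_rsp {
	int error;
	struct mhdr mh;		/* header embedded after another field */
};

int main(void)
{
	struct enet_rsp rsp = { .error = 42 };
	struct mhdr *h = &rsp.mh;

	/* A plain cast would treat &rsp.mh as the start of the structure,
	 * reading the wrong bytes; container_of subtracts the member's
	 * offset instead. */
	struct enet_rsp *back = container_of(h, struct enet_rsp, mh);

	printf("%d\n", back->error);	/* 42 */
	return 0;
}
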
diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
index ca5d779..d9b8e94 100644
--- a/drivers/net/ethernet/cadence/macb.c
+++ b/drivers/net/ethernet/cadence/macb.c
@@ -2241,9 +2241,9 @@
netif_carrier_off(dev);
- netdev_info(dev, "Cadence %s at 0x%08lx irq %d (%pM)\n",
- macb_is_gem(bp) ? "GEM" : "MACB", dev->base_addr,
- dev->irq, dev->dev_addr);
+ netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
+ macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
+ dev->base_addr, dev->irq, dev->dev_addr);
phydev = bp->phy_dev;
netdev_info(dev, "attached PHY driver [%s] (mii_bus:phy_addr=%s, irq=%d)\n",
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index c067b78..9b2c669 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -431,6 +431,7 @@
struct rx_sw_desc *sdesc; /* address of SW Rx descriptor ring */
__be64 *desc; /* address of HW Rx descriptor ring */
dma_addr_t addr; /* bus address of HW ring start */
+ u64 udb; /* BAR2 offset of User Doorbell area */
};
/* A packet gather list */
@@ -451,6 +452,7 @@
u8 gen; /* current generation bit */
u8 intr_params; /* interrupt holdoff parameters */
u8 next_intr_params; /* holdoff params for next interrupt */
+ u8 adaptive_rx;
u8 pktcnt_idx; /* interrupt packet threshold */
u8 uld; /* ULD handling this queue */
u8 idx; /* queue index within its group */
@@ -459,6 +461,7 @@
u16 abs_id; /* absolute SGE id for the response q */
__be64 *desc; /* address of HW response ring */
dma_addr_t phys_addr; /* physical address of the ring */
+ u64 udb; /* BAR2 offset of User Doorbell area */
unsigned int iqe_len; /* entry size */
unsigned int size; /* capacity of response queue */
struct adapter *adap;
@@ -516,7 +519,7 @@
int db_disabled;
unsigned short db_pidx;
unsigned short db_pidx_inc;
- u64 udb;
+ u64 udb; /* BAR2 offset of User Doorbell area */
};
struct sge_eth_txq { /* state for an SGE Ethernet Tx queue */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index f56b95a..321f3d9 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -284,6 +284,8 @@
CH_DEVICE(0x5084, 4),
CH_DEVICE(0x5085, 4),
CH_DEVICE(0x5086, 4),
+ CH_DEVICE(0x5087, 4),
+ CH_DEVICE(0x5088, 4),
CH_DEVICE(0x5401, 4),
CH_DEVICE(0x5402, 4),
CH_DEVICE(0x5403, 4),
@@ -312,6 +314,8 @@
CH_DEVICE(0x5484, 4),
CH_DEVICE(0x5485, 4),
CH_DEVICE(0x5486, 4),
+ CH_DEVICE(0x5487, 4),
+ CH_DEVICE(0x5488, 4),
{ 0, }
};
@@ -2749,8 +2753,31 @@
return 0;
}
+static int set_adaptive_rx_setting(struct net_device *dev, int adaptive_rx)
+{
+ int i;
+ struct port_info *pi = netdev_priv(dev);
+ struct adapter *adap = pi->adapter;
+ struct sge_eth_rxq *q = &adap->sge.ethrxq[pi->first_qset];
+
+ for (i = 0; i < pi->nqsets; i++, q++)
+ q->rspq.adaptive_rx = adaptive_rx;
+
+ return 0;
+}
+
+static int get_adaptive_rx_setting(struct net_device *dev)
+{
+ struct port_info *pi = netdev_priv(dev);
+ struct adapter *adap = pi->adapter;
+ struct sge_eth_rxq *q = &adap->sge.ethrxq[pi->first_qset];
+
+ return q->rspq.adaptive_rx;
+}
+
static int set_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
{
+ set_adaptive_rx_setting(dev, c->use_adaptive_rx_coalesce);
return set_rx_intr_params(dev, c->rx_coalesce_usecs,
c->rx_max_coalesced_frames);
}
@@ -2764,6 +2791,7 @@
c->rx_coalesce_usecs = qtimer_val(adap, rq);
c->rx_max_coalesced_frames = (rq->intr_params & QINTR_CNT_EN) ?
adap->sge.counter_val[rq->pktcnt_idx] : 0;
+ c->use_adaptive_rx_coalesce = get_adaptive_rx_setting(dev);
return 0;
}
@@ -6478,6 +6506,7 @@
struct port_info *pi;
bool highdma = false;
struct adapter *adapter = NULL;
+ void __iomem *regs;
printk_once(KERN_INFO "%s - version %s\n", DRV_DESC, DRV_VERSION);
@@ -6494,19 +6523,35 @@
goto out_release_regions;
}
+ regs = pci_ioremap_bar(pdev, 0);
+ if (!regs) {
+ dev_err(&pdev->dev, "cannot map device registers\n");
+ err = -ENOMEM;
+ goto out_disable_device;
+ }
+
+ /* We control everything through one PF */
+ func = SOURCEPF_GET(readl(regs + PL_WHOAMI));
+ if (func != ent->driver_data) {
+ iounmap(regs);
+ pci_disable_device(pdev);
+ pci_save_state(pdev); /* to restore SR-IOV later */
+ goto sriov;
+ }
+
if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
highdma = true;
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
if (err) {
dev_err(&pdev->dev, "unable to obtain 64-bit DMA for "
"coherent allocations\n");
- goto out_disable_device;
+ goto out_unmap_bar0;
}
} else {
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (err) {
dev_err(&pdev->dev, "no usable DMA configuration\n");
- goto out_disable_device;
+ goto out_unmap_bar0;
}
}
@@ -6518,7 +6563,7 @@
adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
if (!adapter) {
err = -ENOMEM;
- goto out_disable_device;
+ goto out_unmap_bar0;
}
adapter->workq = create_singlethread_workqueue("cxgb4");
@@ -6530,20 +6575,7 @@
/* PCI device has been enabled */
adapter->flags |= DEV_ENABLED;
- adapter->regs = pci_ioremap_bar(pdev, 0);
- if (!adapter->regs) {
- dev_err(&pdev->dev, "cannot map device registers\n");
- err = -ENOMEM;
- goto out_free_adapter;
- }
-
- /* We control everything through one PF */
- func = SOURCEPF_GET(readl(adapter->regs + PL_WHOAMI));
- if (func != ent->driver_data) {
- pci_save_state(pdev); /* to restore SR-IOV later */
- goto sriov;
- }
-
+ adapter->regs = regs;
adapter->pdev = pdev;
adapter->pdev_dev = &pdev->dev;
adapter->mbox = func;
@@ -6560,7 +6592,8 @@
err = t4_prep_adapter(adapter);
if (err)
- goto out_unmap_bar0;
+ goto out_free_adapter;
+
if (!is_t4(adapter->params.chip)) {
s_qpp = QUEUESPERPAGEPF1 * adapter->fn;
@@ -6577,14 +6610,14 @@
dev_err(&pdev->dev,
"Incorrect number of egress queues per page\n");
err = -EINVAL;
- goto out_unmap_bar0;
+ goto out_free_adapter;
}
adapter->bar2 = ioremap_wc(pci_resource_start(pdev, 2),
pci_resource_len(pdev, 2));
if (!adapter->bar2) {
dev_err(&pdev->dev, "cannot map device bar2 region\n");
err = -ENOMEM;
- goto out_unmap_bar0;
+ goto out_free_adapter;
}
}
@@ -6722,13 +6755,13 @@
out_unmap_bar:
if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
- out_unmap_bar0:
- iounmap(adapter->regs);
out_free_adapter:
if (adapter->workq)
destroy_workqueue(adapter->workq);
kfree(adapter);
+ out_unmap_bar0:
+ iounmap(regs);
out_disable_device:
pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
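
The probe reordering above (map BAR0 and read PL_WHOAMI before allocating the
adapter) forces the matching change in the error-unwind labels at the bottom:
resources are released in reverse order of acquisition, so an acquisition that
moves earlier in probe has its label move later in the unwind chain. The shape
of the pattern, reduced to two resources (illustrative only):

#include <stdlib.h>

static int probe(void)
{
	void *regs, *adapter;
	int err = -1;

	regs = malloc(16);		/* stands in for pci_ioremap_bar() */
	if (!regs)
		return err;

	adapter = malloc(16);		/* stands in for kzalloc() */
	if (!adapter)
		goto out_unmap_bar0;

	return 0;			/* success: both resources kept */

out_unmap_bar0:
	free(regs);			/* acquired first, released last */
	return err;
}

int main(void)
{
	return probe() ? 1 : 0;
}
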
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index d22d728..bb7851e 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -203,6 +203,9 @@
RX_LARGE_MTU_BUF = 0x3, /* large MTU buffer */
};
+static int timer_pkt_quota[] = {1, 1, 2, 3, 4, 5};
+#define MIN_NAPI_WORK 1
+
static inline dma_addr_t get_buf_addr(const struct rx_sw_desc *d)
{
return d->dma_addr & ~(dma_addr_t)RX_BUF_FLAGS;
@@ -521,9 +524,23 @@
val = PIDX(q->pend_cred / 8);
if (!is_t4(adap->params.chip))
val |= DBTYPE(1);
+ val |= DBPRIO(1);
wmb();
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) |
- QID(q->cntxt_id) | val);
+
+ /* If we're on T4, use the old doorbell mechanism; otherwise
+ * use the new BAR2 mechanism.
+ */
+ if (is_t4(adap->params.chip)) {
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
+ val | QID(q->cntxt_id));
+ } else {
+ writel(val, adap->bar2 + q->udb + SGE_UDB_KDOORBELL);
+
+ /* This Write memory Barrier will force the write to
+ * the User Doorbell area to be flushed.
+ */
+ wmb();
+ }
q->pend_cred &= 7;
}
}
@@ -859,30 +876,66 @@
*/
static inline void ring_tx_db(struct adapter *adap, struct sge_txq *q, int n)
{
- unsigned int *wr, index;
- unsigned long flags;
-
wmb(); /* write descriptors before telling HW */
- spin_lock_irqsave(&q->db_lock, flags);
- if (!q->db_disabled) {
- if (is_t4(adap->params.chip)) {
+
+ if (is_t4(adap->params.chip)) {
+ u32 val = PIDX(n);
+ unsigned long flags;
+
+ /* For T4 we need to participate in the Doorbell Recovery
+ * mechanism.
+ */
+ spin_lock_irqsave(&q->db_lock, flags);
+ if (!q->db_disabled)
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- QID(q->cntxt_id) | PIDX(n));
+ QID(q->cntxt_id) | val);
+ else
+ q->db_pidx_inc += n;
+ q->db_pidx = q->pidx;
+ spin_unlock_irqrestore(&q->db_lock, flags);
+ } else {
+ u32 val = PIDX_T5(n);
+
+ /* T4 and later chips share the same PIDX field offset within
+ * the doorbell, but T5 and later shrank the field in order to
+ * gain a bit for Doorbell Priority. The field was absurdly
+ * large in the first place (14 bits) so we just use the T5
+ * and later limits and warn if a Queue ID is too large.
+ */
+ WARN_ON(val & DBPRIO(1));
+
+ /* For T5 and later we use the Write-Combine mapped BAR2 User
+ * Doorbell mechanism. If we're only writing a single TX
+ * Descriptor and TX Write Combining hasn't been disabled, we
+ * can use the Write Combining Gather Buffer; otherwise we use
+ * the simple doorbell.
+ */
+ if (n == 1) {
+ int index = (q->pidx
+ ? (q->pidx - 1)
+ : (q->size - 1));
+ unsigned int *wr = (unsigned int *)&q->desc[index];
+
+ cxgb_pio_copy((u64 __iomem *)
+ (adap->bar2 + q->udb +
+ SGE_UDB_WCDOORBELL),
+ (u64 *)wr);
} else {
- if (n == 1) {
- index = q->pidx ? (q->pidx - 1) : (q->size - 1);
- wr = (unsigned int *)&q->desc[index];
- cxgb_pio_copy((u64 __iomem *)
- (adap->bar2 + q->udb + 64),
- (u64 *)wr);
- } else
- writel(n, adap->bar2 + q->udb + 8);
- wmb();
+ writel(val, adap->bar2 + q->udb + SGE_UDB_KDOORBELL);
}
- } else
- q->db_pidx_inc += n;
- q->db_pidx = q->pidx;
- spin_unlock_irqrestore(&q->db_lock, flags);
+
+ /* This Write Memory Barrier will force the write to the User
+ * Doorbell area to be flushed. This is needed to prevent
+ * writes on different CPUs for the same queue from hitting
+ * the adapter out of order. This is required when some Work
+ * Requests take the Write Combine Gather Buffer path (user
+ * doorbell area offset [SGE_UDB_WCDOORBELL..+63]) and some
+ * take the traditional path where we simply increment the
+ * PIDX (User Doorbell area SGE_UDB_KDOORBELL) and have the
+ * hardware DMA read the actual Work Request.
+ */
+ wmb();
+ }
}
/**
@@ -1916,16 +1969,40 @@
unsigned int params;
struct sge_rspq *q = container_of(napi, struct sge_rspq, napi);
int work_done = process_responses(q, budget);
+ u32 val;
if (likely(work_done < budget)) {
+ int timer_index;
+
napi_complete(napi);
- params = q->next_intr_params;
- q->next_intr_params = q->intr_params;
+ timer_index = QINTR_TIMER_IDX_GET(q->next_intr_params);
+
+ if (q->adaptive_rx) {
+ if (work_done > max(timer_pkt_quota[timer_index],
+ MIN_NAPI_WORK))
+ timer_index = (timer_index + 1);
+ else
+ timer_index = timer_index - 1;
+
+ timer_index = clamp(timer_index, 0, SGE_TIMERREGS - 1);
+ q->next_intr_params = QINTR_TIMER_IDX(timer_index) |
+ V_QINTR_CNT_EN;
+ params = q->next_intr_params;
+ } else {
+ params = q->next_intr_params;
+ q->next_intr_params = q->intr_params;
+ }
} else
params = QINTR_TIMER_IDX(7);
- t4_write_reg(q->adap, MYPF_REG(SGE_PF_GTS), CIDXINC(work_done) |
- INGRESSQID((u32)q->cntxt_id) | SEINTARM(params));
+ val = CIDXINC(work_done) | SEINTARM(params);
+ if (is_t4(q->adap->params.chip)) {
+ t4_write_reg(q->adap, MYPF_REG(SGE_PF_GTS),
+ val | INGRESSQID((u32)q->cntxt_id));
+ } else {
+ writel(val, q->adap->bar2 + q->udb + SGE_UDB_GTS);
+ wmb();
+ }
return work_done;
}
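
The adaptive path added to the NAPI handler above walks the SGE holdoff-timer
table one step at a time: a poll that did more work than the current timer's
packet quota moves to a longer holdoff (fewer interrupts), a light poll moves
to a shorter one, clamped to the table bounds. A runnable sketch of just that
feedback step (simplified from the hunk above):

#include <stdio.h>

static const int timer_pkt_quota[] = {1, 1, 2, 3, 4, 5};
#define SGE_TIMERREGS	6
#define MIN_NAPI_WORK	1

static int clampi(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

static int next_timer_index(int idx, int work_done)
{
	int quota = timer_pkt_quota[idx];

	if (work_done > (quota > MIN_NAPI_WORK ? quota : MIN_NAPI_WORK))
		idx++;		/* busy: stretch the holdoff */
	else
		idx--;		/* idle: react faster */
	return clampi(idx, 0, SGE_TIMERREGS - 1);
}

int main(void)
{
	printf("%d\n", next_timer_index(2, 10));	/* 3 */
	printf("%d\n", next_timer_index(0, 0));		/* 0 */
	return 0;
}
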
@@ -1949,6 +2026,7 @@
unsigned int credits;
const struct rsp_ctrl *rc;
struct sge_rspq *q = &adap->sge.intrq;
+ u32 val;
spin_lock(&adap->sge.intrq_lock);
for (credits = 0; ; credits++) {
@@ -1967,8 +2045,14 @@
rspq_next(q);
}
- t4_write_reg(adap, MYPF_REG(SGE_PF_GTS), CIDXINC(credits) |
- INGRESSQID(q->cntxt_id) | SEINTARM(q->intr_params));
+ val = CIDXINC(credits) | SEINTARM(q->intr_params);
+ if (is_t4(adap->params.chip)) {
+ t4_write_reg(adap, MYPF_REG(SGE_PF_GTS),
+ val | INGRESSQID(q->cntxt_id));
+ } else {
+ writel(val, adap->bar2 + q->udb + SGE_UDB_GTS);
+ wmb();
+ }
spin_unlock(&adap->sge.intrq_lock);
return credits;
}
@@ -2149,6 +2233,51 @@
mod_timer(&s->tx_timer, jiffies + (budget ? TX_QCHECK_PERIOD : 2));
}
+/**
+ * udb_address - return the BAR2 User Doorbell address for a Queue
+ * @adap: the adapter
+ * @cntxt_id: the Queue Context ID
+ * @qpp: Queues Per Page (for all PFs)
+ *
+ * Returns the BAR2 address of the user Doorbell associated with the
+ * indicated Queue Context ID. Note that this is only applicable
+ * for T5 and later.
+ */
+static u64 udb_address(struct adapter *adap, unsigned int cntxt_id,
+ unsigned int qpp)
+{
+ u64 udb;
+ unsigned int s_qpp;
+ unsigned short udb_density;
+ unsigned long qpshift;
+ int page;
+
+ BUG_ON(is_t4(adap->params.chip));
+
+ s_qpp = (QUEUESPERPAGEPF0 +
+ (QUEUESPERPAGEPF1 - QUEUESPERPAGEPF0) * adap->fn);
+ udb_density = 1 << ((qpp >> s_qpp) & QUEUESPERPAGEPF0_MASK);
+ qpshift = PAGE_SHIFT - ilog2(udb_density);
+ udb = cntxt_id << qpshift;
+ udb &= PAGE_MASK;
+ page = udb / PAGE_SIZE;
+ udb += (cntxt_id - (page * udb_density)) * SGE_UDB_SIZE;
+
+ return udb;
+}
+
+static u64 udb_address_eq(struct adapter *adap, unsigned int cntxt_id)
+{
+ return udb_address(adap, cntxt_id,
+ t4_read_reg(adap, SGE_EGRESS_QUEUES_PER_PAGE_PF));
+}
+
+static u64 udb_address_iq(struct adapter *adap, unsigned int cntxt_id)
+{
+ return udb_address(adap, cntxt_id,
+ t4_read_reg(adap, SGE_INGRESS_QUEUES_PER_PAGE_PF));
+}
+
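
udb_address() above packs several steps into a few lines: derive the per-PF
queues-per-page density from the SGE register, shift the context ID into a
page-aligned byte offset, then add the 128-byte doorbell slot within that
page. The same arithmetic as a standalone program, with density and queue ID
chosen for illustration:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define SGE_UDB_SIZE	128

static int ilog2u(unsigned long v)
{
	int r = -1;

	while (v) {
		v >>= 1;
		r++;
	}
	return r;
}

static unsigned long udb_offset(unsigned long cntxt_id, unsigned long density)
{
	unsigned long qpshift = PAGE_SHIFT - ilog2u(density);
	unsigned long udb = (cntxt_id << qpshift) & PAGE_MASK;
	unsigned long page = udb / PAGE_SIZE;

	return udb + (cntxt_id - page * density) * SGE_UDB_SIZE;
}

int main(void)
{
	/* 32 queues per 4K page: queue 37 lands in page 1, slot 5. */
	printf("%lu\n", udb_offset(37, 32));	/* 4096 + 5*128 = 4736 */
	return 0;
}
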
int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
struct net_device *dev, int intr_idx,
struct sge_fl *fl, rspq_handler_t hnd)
@@ -2214,6 +2343,8 @@
iq->next_intr_params = iq->intr_params;
iq->cntxt_id = ntohs(c.iqid);
iq->abs_id = ntohs(c.physiqid);
+ if (!is_t4(adap->params.chip))
+ iq->udb = udb_address_iq(adap, iq->cntxt_id);
iq->size--; /* subtract status entry */
iq->netdev = dev;
iq->handler = hnd;
@@ -2229,6 +2360,12 @@
fl->pidx = fl->cidx = 0;
fl->alloc_failed = fl->large_alloc_failed = fl->starving = 0;
adap->sge.egr_map[fl->cntxt_id - adap->sge.egr_start] = fl;
+
+ /* Note, we must initialize the Free List User Doorbell
+ * address before refilling the Free List!
+ */
+ if (!is_t4(adap->params.chip))
+ fl->udb = udb_address_eq(adap, fl->cntxt_id);
refill_fl(adap, fl, fl_cap(fl), GFP_KERNEL);
}
return 0;
@@ -2254,21 +2391,8 @@
static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id)
{
q->cntxt_id = id;
- if (!is_t4(adap->params.chip)) {
- unsigned int s_qpp;
- unsigned short udb_density;
- unsigned long qpshift;
- int page;
-
- s_qpp = QUEUESPERPAGEPF1 * adap->fn;
- udb_density = 1 << QUEUESPERPAGEPF0_GET((t4_read_reg(adap,
- SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp));
- qpshift = PAGE_SHIFT - ilog2(udb_density);
- q->udb = q->cntxt_id << qpshift;
- q->udb &= PAGE_MASK;
- page = q->udb / PAGE_SIZE;
- q->udb += (q->cntxt_id - (page * udb_density)) * 128;
- }
+ if (!is_t4(adap->params.chip))
+ q->udb = udb_address_eq(adap, q->cntxt_id);
q->in_use = 0;
q->cidx = q->pidx = 0;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
index 6833a7b..c19a90e 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
@@ -135,6 +135,7 @@
#define RSPD_GEN(x) ((x) >> 7)
#define RSPD_TYPE(x) (((x) >> 4) & 3)
+#define V_QINTR_CNT_EN 0x0
#define QINTR_CNT_EN 0x1
#define QINTR_TIMER_IDX(x) ((x) << 1)
#define QINTR_TIMER_IDX_GET(x) (((x) >> 1) & 0x7)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
index 39fb325..eee2728 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
@@ -77,6 +77,7 @@
#define PIDX_T5(x) (((x) >> S_PIDX_T5) & M_PIDX_T5)
+#define SGE_TIMERREGS 6
#define SGE_PF_GTS 0x4
#define INGRESSQID_MASK 0xffff0000U
#define INGRESSQID_SHIFT 16
@@ -157,8 +158,27 @@
#define QUEUESPERPAGEPF0_MASK 0x0000000fU
#define QUEUESPERPAGEPF0_GET(x) ((x) & QUEUESPERPAGEPF0_MASK)
+#define QUEUESPERPAGEPF0 0
#define QUEUESPERPAGEPF1 4
+/* T5 and later support a new BAR2-based doorbell mechanism for Egress Queues.
+ * The User Doorbells are each 128 bytes in length with a Simple Doorbell at
+ * offsets 8x and a Write Combining single 64-byte Egress Queue Unit
+ * (X_IDXSIZE_UNIT) Gather Buffer interface at offset 64. For Ingress Queues,
+ * we have a Going To Sleep register at offsets 8x+4.
+ *
+ * As noted above, we have many instances of the Simple Doorbell and Going To
+ * Sleep registers at offsets 8x and 8x+4, respectively. We want to use a
+ * non-64-byte aligned offset for the Simple Doorbell in order to attempt to
+ * avoid buffering of the writes to the Simple Doorbell and we want to use a
+ * non-contiguous offset for the Going To Sleep writes in order to avoid
+ * possible combining between them.
+ */
+#define SGE_UDB_SIZE 128
+#define SGE_UDB_KDOORBELL 8
+#define SGE_UDB_GTS 20
+#define SGE_UDB_WCDOORBELL 64
+
#define SGE_INT_CAUSE1 0x1024
#define SGE_INT_CAUSE2 0x1030
#define SGE_INT_CAUSE3 0x103c
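
The comment block above describes each 128-byte User Doorbell region; the
offsets can be made concrete with a struct whose layout matches the three
defines (the struct itself is illustrative, the hardware is addressed by raw
offsets):

#include <stdint.h>
#include <stddef.h>

struct sge_udb {
	uint8_t  rsvd0[8];
	uint32_t kdoorbell;	 /* offset 8:  SGE_UDB_KDOORBELL */
	uint8_t  rsvd1[8];
	uint32_t gts;		 /* offset 20: SGE_UDB_GTS */
	uint8_t  rsvd2[40];
	uint8_t  wcdoorbell[64]; /* offset 64: SGE_UDB_WCDOORBELL */
};

_Static_assert(offsetof(struct sge_udb, kdoorbell) == 8, "kdb");
_Static_assert(offsetof(struct sge_udb, gts) == 20, "gts");
_Static_assert(offsetof(struct sge_udb, wcdoorbell) == 64, "wc");
_Static_assert(sizeof(struct sge_udb) == 128, "udb size");
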
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
index 8253403..8498a64 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
@@ -2907,60 +2907,62 @@
/*
* PCI Device registration data structures.
*/
-#define CH_DEVICE(devid, idx) \
- { PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, PCI_ANY_ID, 0, 0, idx }
+#define CH_DEVICE(devid) \
+ { PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }
static const struct pci_device_id cxgb4vf_pci_tbl[] = {
- CH_DEVICE(0xb000, 0), /* PE10K FPGA */
- CH_DEVICE(0x4801, 0), /* T420-cr */
- CH_DEVICE(0x4802, 0), /* T422-cr */
- CH_DEVICE(0x4803, 0), /* T440-cr */
- CH_DEVICE(0x4804, 0), /* T420-bch */
- CH_DEVICE(0x4805, 0), /* T440-bch */
- CH_DEVICE(0x4806, 0), /* T460-ch */
- CH_DEVICE(0x4807, 0), /* T420-so */
- CH_DEVICE(0x4808, 0), /* T420-cx */
- CH_DEVICE(0x4809, 0), /* T420-bt */
- CH_DEVICE(0x480a, 0), /* T404-bt */
- CH_DEVICE(0x480d, 0), /* T480-cr */
- CH_DEVICE(0x480e, 0), /* T440-lp-cr */
- CH_DEVICE(0x4880, 0),
- CH_DEVICE(0x4880, 1),
- CH_DEVICE(0x4880, 2),
- CH_DEVICE(0x4880, 3),
- CH_DEVICE(0x4880, 4),
- CH_DEVICE(0x4880, 5),
- CH_DEVICE(0x4880, 6),
- CH_DEVICE(0x4880, 7),
- CH_DEVICE(0x4880, 8),
- CH_DEVICE(0x5801, 0), /* T520-cr */
- CH_DEVICE(0x5802, 0), /* T522-cr */
- CH_DEVICE(0x5803, 0), /* T540-cr */
- CH_DEVICE(0x5804, 0), /* T520-bch */
- CH_DEVICE(0x5805, 0), /* T540-bch */
- CH_DEVICE(0x5806, 0), /* T540-ch */
- CH_DEVICE(0x5807, 0), /* T520-so */
- CH_DEVICE(0x5808, 0), /* T520-cx */
- CH_DEVICE(0x5809, 0), /* T520-bt */
- CH_DEVICE(0x580a, 0), /* T504-bt */
- CH_DEVICE(0x580b, 0), /* T520-sr */
- CH_DEVICE(0x580c, 0), /* T504-bt */
- CH_DEVICE(0x580d, 0), /* T580-cr */
- CH_DEVICE(0x580e, 0), /* T540-lp-cr */
- CH_DEVICE(0x580f, 0), /* Amsterdam */
- CH_DEVICE(0x5810, 0), /* T580-lp-cr */
- CH_DEVICE(0x5811, 0), /* T520-lp-cr */
- CH_DEVICE(0x5812, 0), /* T560-cr */
- CH_DEVICE(0x5813, 0), /* T580-cr */
- CH_DEVICE(0x5814, 0), /* T580-so-cr */
- CH_DEVICE(0x5815, 0), /* T502-bt */
- CH_DEVICE(0x5880, 0),
- CH_DEVICE(0x5881, 0),
- CH_DEVICE(0x5882, 0),
- CH_DEVICE(0x5883, 0),
- CH_DEVICE(0x5884, 0),
- CH_DEVICE(0x5885, 0),
- CH_DEVICE(0x5886, 0),
+ CH_DEVICE(0xb000), /* PE10K FPGA */
+ CH_DEVICE(0x4801), /* T420-cr */
+ CH_DEVICE(0x4802), /* T422-cr */
+ CH_DEVICE(0x4803), /* T440-cr */
+ CH_DEVICE(0x4804), /* T420-bch */
+ CH_DEVICE(0x4805), /* T440-bch */
+ CH_DEVICE(0x4806), /* T460-ch */
+ CH_DEVICE(0x4807), /* T420-so */
+ CH_DEVICE(0x4808), /* T420-cx */
+ CH_DEVICE(0x4809), /* T420-bt */
+ CH_DEVICE(0x480a), /* T404-bt */
+ CH_DEVICE(0x480d), /* T480-cr */
+ CH_DEVICE(0x480e), /* T440-lp-cr */
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x5801), /* T520-cr */
+ CH_DEVICE(0x5802), /* T522-cr */
+ CH_DEVICE(0x5803), /* T540-cr */
+ CH_DEVICE(0x5804), /* T520-bch */
+ CH_DEVICE(0x5805), /* T540-bch */
+ CH_DEVICE(0x5806), /* T540-ch */
+ CH_DEVICE(0x5807), /* T520-so */
+ CH_DEVICE(0x5808), /* T520-cx */
+ CH_DEVICE(0x5809), /* T520-bt */
+ CH_DEVICE(0x580a), /* T504-bt */
+ CH_DEVICE(0x580b), /* T520-sr */
+ CH_DEVICE(0x580c), /* T504-bt */
+ CH_DEVICE(0x580d), /* T580-cr */
+ CH_DEVICE(0x580e), /* T540-lp-cr */
+ CH_DEVICE(0x580f), /* Amsterdam */
+ CH_DEVICE(0x5810), /* T580-lp-cr */
+ CH_DEVICE(0x5811), /* T520-lp-cr */
+ CH_DEVICE(0x5812), /* T560-cr */
+ CH_DEVICE(0x5813), /* T580-cr */
+ CH_DEVICE(0x5814), /* T580-so-cr */
+ CH_DEVICE(0x5815), /* T502-bt */
+ CH_DEVICE(0x5880),
+ CH_DEVICE(0x5881),
+ CH_DEVICE(0x5882),
+ CH_DEVICE(0x5883),
+ CH_DEVICE(0x5884),
+ CH_DEVICE(0x5885),
+ CH_DEVICE(0x5886),
+ CH_DEVICE(0x5887),
+ CH_DEVICE(0x5888),
{ 0, }
};
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index 9b33057..70089c2 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -1399,7 +1399,7 @@
const void *mac_addr;
if (!IS_ENABLED(CONFIG_OF) || !np)
- return NULL;
+ return ERR_PTR(-ENXIO);
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata)
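
Returning ERR_PTR(-ENXIO) instead of NULL lets the DT parsing helper's caller
distinguish "no OF data to parse" from other outcomes using the standard
IS_ERR/PTR_ERR encoding, where the top 4095 values of the pointer range carry
an errno. A userspace re-creation of that encoding (the kernel provides the
real ERR_PTR family in <linux/err.h>):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *parse_pdata(int have_of_node)
{
	if (!have_of_node)
		return ERR_PTR(-ENXIO);	/* error, not "nothing found" */
	return "pdata";			/* stand-in for real platform data */
}

int main(void)
{
	void *p = parse_pdata(0);

	if (IS_ERR(p))
		printf("error %ld\n", PTR_ERR(p));	/* error -6 */
	return 0;
}
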
diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h
index a9f239a..9a2d752 100644
--- a/drivers/net/ethernet/emulex/benet/be.h
+++ b/drivers/net/ethernet/emulex/benet/be.h
@@ -407,9 +407,9 @@
u16 auto_speeds_supported;
u16 fixed_speeds_supported;
int link_speed;
- u32 dac_cable_len;
u32 advertising;
u32 supported;
+ u8 cable_type;
};
struct be_resources {
diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
index 5be100d..fead5c6 100644
--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
+++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
@@ -209,7 +209,6 @@
if (base_status != MCC_STATUS_SUCCESS &&
!be_skip_err_log(opcode, base_status, addl_status)) {
-
if (base_status == MCC_STATUS_UNAUTHORIZED_REQUEST) {
dev_warn(&adapter->pdev->dev,
"VF is not privileged to issue opcode %d-%d\n",
@@ -309,8 +308,6 @@
be_async_grp5_pvid_state_process(adapter, compl);
break;
default:
- dev_warn(&adapter->pdev->dev, "Unknown grp5 event 0x%x!\n",
- event_type);
break;
}
}
@@ -319,7 +316,7 @@
struct be_mcc_compl *cmp)
{
u8 event_type = 0;
- struct be_async_event_qnq *evt = (struct be_async_event_qnq *) cmp;
+ struct be_async_event_qnq *evt = (struct be_async_event_qnq *)cmp;
event_type = (cmp->flags >> ASYNC_EVENT_TYPE_SHIFT) &
ASYNC_EVENT_TYPE_MASK;
@@ -595,6 +592,7 @@
static bool lancer_provisioning_error(struct be_adapter *adapter)
{
u32 sliport_status = 0, sliport_err1 = 0, sliport_err2 = 0;
+
sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET);
if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
sliport_err1 = ioread32(adapter->db + SLIPORT_ERROR1_OFFSET);
@@ -677,7 +675,6 @@
return -1;
}
-
static inline struct be_sge *nonembedded_sgl(struct be_mcc_wrb *wrb)
{
return &wrb->payload.sgl[0];
@@ -924,6 +921,7 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_eq_create *resp = embedded_payload(wrb);
+
eqo->q.id = le16_to_cpu(resp->eq_id);
eqo->msix_idx =
(ver == 2) ? le16_to_cpu(resp->msix_idx) : eqo->idx;
@@ -958,7 +956,7 @@
if (permanent) {
req->permanent = 1;
} else {
- req->if_id = cpu_to_le16((u16) if_handle);
+ req->if_id = cpu_to_le16((u16)if_handle);
req->pmac_id = cpu_to_le32(pmac_id);
req->permanent = 0;
}
@@ -966,6 +964,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_mac_query *resp = embedded_payload(wrb);
+
memcpy(mac_addr, resp->mac.addr, ETH_ALEN);
}
@@ -1002,6 +1001,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_pmac_add *resp = embedded_payload(wrb);
+
*pmac_id = le32_to_cpu(resp->pmac_id);
}
@@ -1034,7 +1034,8 @@
req = embedded_payload(wrb);
be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
- OPCODE_COMMON_NTWK_PMAC_DEL, sizeof(*req), wrb, NULL);
+ OPCODE_COMMON_NTWK_PMAC_DEL, sizeof(*req),
+ wrb, NULL);
req->hdr.domain = dom;
req->if_id = cpu_to_le32(if_id);
@@ -1106,6 +1107,7 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_cq_create *resp = embedded_payload(wrb);
+
cq->id = le16_to_cpu(resp->cq_id);
cq->created = true;
}
@@ -1118,6 +1120,7 @@
static u32 be_encoded_q_len(int q_len)
{
u32 len_encoded = fls(q_len); /* log2(len) + 1 */
+
if (len_encoded == 16)
len_encoded = 0;
return len_encoded;
@@ -1173,6 +1176,7 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_mcc_create *resp = embedded_payload(wrb);
+
mccq->id = le16_to_cpu(resp->id);
mccq->created = true;
}
@@ -1216,6 +1220,7 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_mcc_create *resp = embedded_payload(wrb);
+
mccq->id = le16_to_cpu(resp->id);
mccq->created = true;
}
@@ -1274,6 +1279,7 @@
status = be_cmd_notify_wait(adapter, &wrb);
if (!status) {
struct be_cmd_resp_eth_tx_create *resp = embedded_payload(&wrb);
+
txq->id = le16_to_cpu(resp->cid);
if (ver == 2)
txo->db_offset = le32_to_cpu(resp->db_offset);
@@ -1318,6 +1324,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_eth_rx_create *resp = embedded_payload(wrb);
+
rxq->id = le16_to_cpu(resp->id);
rxq->created = true;
*rss_id = resp->rss_id;
@@ -1431,6 +1438,7 @@
status = be_cmd_notify_wait(adapter, &wrb);
if (!status) {
struct be_cmd_resp_if_create *resp = embedded_payload(&wrb);
+
*if_handle = le32_to_cpu(resp->interface_id);
/* Hack to retrieve VF's pmac-id on BE3 */
@@ -1514,7 +1522,6 @@
int lancer_cmd_get_pport_stats(struct be_adapter *adapter,
struct be_dma_mem *nonemb_cmd)
{
-
struct be_mcc_wrb *wrb;
struct lancer_cmd_req_pport_stats *req;
int status = 0;
@@ -1605,6 +1612,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_link_status *resp = embedded_payload(wrb);
+
if (link_speed) {
*link_speed = resp->link_speed ?
le16_to_cpu(resp->link_speed) * 10 :
@@ -1672,6 +1680,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_fat *resp = embedded_payload(wrb);
+
if (log_size && resp->log_size)
*log_size = le32_to_cpu(resp->log_size) -
sizeof(u32);
@@ -1701,7 +1710,7 @@
&get_fat_cmd.dma);
if (!get_fat_cmd.va) {
dev_err(&adapter->pdev->dev,
- "Memory allocation failure while retrieving FAT data\n");
+ "Memory allocation failure while reading FAT data\n");
return -ENOMEM;
}
@@ -1731,6 +1740,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_fat *resp = get_fat_cmd.va;
+
memcpy(buf + offset,
resp->data_buffer,
le32_to_cpu(resp->read_log_length));
@@ -1772,8 +1782,10 @@
if (!status) {
struct be_cmd_resp_get_fw_version *resp = embedded_payload(wrb);
- strcpy(adapter->fw_ver, resp->firmware_version_string);
- strcpy(adapter->fw_on_flash, resp->fw_on_flash_version_string);
+ strlcpy(adapter->fw_ver, resp->firmware_version_string,
+ sizeof(adapter->fw_ver));
+ strlcpy(adapter->fw_on_flash, resp->fw_on_flash_version_string,
+ sizeof(adapter->fw_on_flash));
}
err:
spin_unlock_bh(&adapter->mcc_lock);
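
The strcpy()-to-strlcpy() conversions in this file bound every copy by the
destination size. strlcpy() copies at most size-1 bytes, always
NUL-terminates, and returns the full source length so the caller can detect
truncation. A local re-implementation showing the semantics (the kernel ships
its own):

#include <stdio.h>
#include <string.h>

static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = len >= size ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';	/* always terminated */
	}
	return len;		/* >= size means the copy was truncated */
}

int main(void)
{
	char fw_ver[8];

	if (my_strlcpy(fw_ver, "10.2.298.1005", sizeof(fw_ver)) >=
	    sizeof(fw_ver))
		printf("truncated to \"%s\"\n", fw_ver);	/* "10.2.29" */
	return 0;
}
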
@@ -1783,8 +1795,8 @@
/* set the EQ delay interval of an EQ to specified value
* Uses async mcc
*/
-int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *set_eqd,
- int num)
+static int __be_cmd_modify_eqd(struct be_adapter *adapter,
+ struct be_set_eqd *set_eqd, int num)
{
struct be_mcc_wrb *wrb;
struct be_cmd_req_modify_eq_delay *req;
@@ -1817,6 +1829,25 @@
return status;
}
+int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *set_eqd,
+ int num)
+{
+ int num_eqs, i = 0;
+
+ if (lancer_chip(adapter) && num > 8) {
+ while (num) {
+ num_eqs = min(num, 8);
+ __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
+ i += num_eqs;
+ num -= num_eqs;
+ }
+ } else {
+ __be_cmd_modify_eqd(adapter, set_eqd, num);
+ }
+
+ return 0;
+}
+
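
The new be_cmd_modify_eqd() wrapper above slices large EQ-delay updates on
Lancer adapters, whose firmware accepts at most eight entries per command;
other chips keep the single-command path. The batching loop in isolation,
printing instead of issuing mailbox commands:

#include <stdio.h>

#define MAX_EQD_PER_CMD 8

static void modify_eqd(int num)
{
	int i = 0, n;

	while (num) {
		n = num < MAX_EQD_PER_CMD ? num : MAX_EQD_PER_CMD;
		printf("cmd: entries %d..%d\n", i, i + n - 1);
		i += n;
		num -= n;
	}
}

int main(void)
{
	modify_eqd(18);		/* three commands: 8 + 8 + 2 entries */
	return 0;
}
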
/* Uses synchronous mcc */
int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
u32 num)
@@ -1880,8 +1911,8 @@
BE_IF_FLAGS_VLAN_PROMISCUOUS |
BE_IF_FLAGS_MCAST_PROMISCUOUS);
} else if (flags & IFF_ALLMULTI) {
- req->if_flags_mask = req->if_flags =
- cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
+ req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
+ req->if_flags = cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
} else if (flags & BE_FLAGS_VLAN_PROMISC) {
req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
@@ -1892,8 +1923,8 @@
struct netdev_hw_addr *ha;
int i = 0;
- req->if_flags_mask = req->if_flags =
- cpu_to_le32(BE_IF_FLAGS_MULTICAST);
+ req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_MULTICAST);
+ req->if_flags = cpu_to_le32(BE_IF_FLAGS_MULTICAST);
/* Reset mcast promisc mode if already set by setting mask
* and not setting flags field
@@ -1948,6 +1979,7 @@
OPCODE_COMMON_SET_FLOW_CONTROL, sizeof(*req),
wrb, NULL);
+ req->hdr.version = 1;
req->tx_flow_control = cpu_to_le16((u16)tx_fc);
req->rx_flow_control = cpu_to_le16((u16)rx_fc);
@@ -1955,6 +1987,10 @@
err:
spin_unlock_bh(&adapter->mcc_lock);
+
+ if (base_status(status) == MCC_STATUS_FEATURE_NOT_SUPPORTED)
+ return -EOPNOTSUPP;
+
return status;
}
@@ -1986,6 +2022,7 @@
if (!status) {
struct be_cmd_resp_get_flow_control *resp =
embedded_payload(wrb);
+
*tx_fc = le16_to_cpu(resp->tx_flow_control);
*rx_fc = le16_to_cpu(resp->rx_flow_control);
}
@@ -2015,6 +2052,7 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_query_fw_cfg *resp = embedded_payload(wrb);
+
adapter->port_num = le32_to_cpu(resp->phys_port);
adapter->function_mode = le32_to_cpu(resp->function_mode);
adapter->function_caps = le32_to_cpu(resp->function_caps);
@@ -2163,6 +2201,7 @@
if (!status) {
struct be_cmd_resp_get_beacon_state *resp =
embedded_payload(wrb);
+
*state = resp->beacon_state;
}
@@ -2171,6 +2210,53 @@
return status;
}
+/* Uses sync mcc */
+int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ u8 page_num, u8 *data)
+{
+ struct be_dma_mem cmd;
+ struct be_mcc_wrb *wrb;
+ struct be_cmd_req_port_type *req;
+ int status;
+
+ if (page_num > TR_PAGE_A2)
+ return -EINVAL;
+
+ cmd.size = sizeof(struct be_cmd_resp_port_type);
+ cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ if (!cmd.va) {
+ dev_err(&adapter->pdev->dev, "Memory allocation failed\n");
+ return -ENOMEM;
+ }
+ memset(cmd.va, 0, cmd.size);
+
+ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+ status = -EBUSY;
+ goto err;
+ }
+ req = cmd.va;
+
+ be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
+ OPCODE_COMMON_READ_TRANSRECV_DATA,
+ cmd.size, wrb, &cmd);
+
+ req->port = cpu_to_le32(adapter->hba_port_num);
+ req->page_num = cpu_to_le32(page_num);
+ status = be_mcc_notify_wait(adapter);
+ if (!status) {
+ struct be_cmd_resp_port_type *resp = cmd.va;
+
+ memcpy(data, resp->page_data, PAGE_DATA_LEN);
+ }
+err:
+ spin_unlock_bh(&adapter->mcc_lock);
+ pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ return status;
+}
+
int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
u32 data_size, u32 data_offset,
const char *obj_name, u32 *data_written,
@@ -2211,7 +2297,7 @@
be_dws_cpu_to_le(ctxt, sizeof(req->context));
req->write_offset = cpu_to_le32(data_offset);
- strcpy(req->object_name, obj_name);
+ strlcpy(req->object_name, obj_name, sizeof(req->object_name));
req->descriptor_count = cpu_to_le32(1);
req->buf_len = cpu_to_le32(data_size);
req->addr_low = cpu_to_le32((cmd->dma +
@@ -2244,6 +2330,31 @@
return status;
}
+int be_cmd_query_cable_type(struct be_adapter *adapter)
+{
+ u8 page_data[PAGE_DATA_LEN];
+ int status;
+
+ status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+ page_data);
+ if (!status) {
+ switch (adapter->phy.interface_type) {
+ case PHY_TYPE_QSFP:
+ adapter->phy.cable_type =
+ page_data[QSFP_PLUS_CABLE_TYPE_OFFSET];
+ break;
+ case PHY_TYPE_SFP_PLUS_10GB:
+ adapter->phy.cable_type =
+ page_data[SFP_PLUS_CABLE_TYPE_OFFSET];
+ break;
+ default:
+ adapter->phy.cable_type = 0;
+ break;
+ }
+ }
+ return status;
+}
+
int lancer_cmd_delete_object(struct be_adapter *adapter, const char *obj_name)
{
struct lancer_cmd_req_delete_object *req;
@@ -2264,7 +2375,7 @@
OPCODE_COMMON_DELETE_OBJECT,
sizeof(*req), wrb, NULL);
- strcpy(req->object_name, obj_name);
+ strlcpy(req->object_name, obj_name, sizeof(req->object_name));
status = be_mcc_notify_wait(adapter);
err:
@@ -2361,7 +2472,7 @@
}
int be_cmd_get_flash_crc(struct be_adapter *adapter, u8 *flashed_crc,
- u16 optype, int offset)
+ u16 optype, int offset)
{
struct be_mcc_wrb *wrb;
struct be_cmd_read_flash_crc *req;
@@ -2532,9 +2643,10 @@
if (!status) {
struct be_cmd_resp_ddrdma_test *resp;
+
resp = cmd->va;
if ((memcmp(resp->rcv_buff, req->snd_buff, byte_cnt) != 0) ||
- resp->snd_err) {
+ resp->snd_err) {
status = -1;
}
}
@@ -2607,6 +2719,7 @@
if (!status) {
struct be_phy_info *resp_phy_info =
cmd.va + sizeof(struct be_cmd_req_hdr);
+
adapter->phy.phy_type = le16_to_cpu(resp_phy_info->phy_type);
adapter->phy.interface_type =
le16_to_cpu(resp_phy_info->interface_type);
@@ -2736,6 +2849,7 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_set_func_cap *resp = embedded_payload(wrb);
+
adapter->be3_native = le32_to_cpu(resp->cap_flags) &
CAPABILITY_BE3_NATIVE_ERX_API;
if (!adapter->be3_native)
@@ -2775,6 +2889,7 @@
if (!status) {
struct be_cmd_resp_get_fn_privileges *resp =
embedded_payload(wrb);
+
*privilege = le32_to_cpu(resp->privilege_mask);
/* In UMC mode FW does not return right privileges.
@@ -2922,7 +3037,6 @@
int be_cmd_get_active_mac(struct be_adapter *adapter, u32 curr_pmac_id,
u8 *mac, u32 if_handle, bool active, u32 domain)
{
-
if (!active)
be_cmd_get_mac_from_list(adapter, mac, &active, &curr_pmac_id,
if_handle, domain);
@@ -3106,6 +3220,7 @@
if (!status) {
struct be_cmd_resp_get_hsw_config *resp =
embedded_payload(wrb);
+
be_dws_le_to_cpu(&resp->context, sizeof(resp->context));
vid = AMAP_GET_BITS(struct amap_get_hsw_resp_context,
pvid, &resp->context);
@@ -3165,7 +3280,8 @@
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_acpi_wol_magic_config_v1 *resp;
- resp = (struct be_cmd_resp_acpi_wol_magic_config_v1 *) cmd.va;
+
+ resp = (struct be_cmd_resp_acpi_wol_magic_config_v1 *)cmd.va;
adapter->wol_cap = resp->wol_settings;
if (adapter->wol_cap & BE_WOL_CAP)
@@ -3201,6 +3317,7 @@
(extfat_cmd.va + sizeof(struct be_cmd_resp_hdr));
for (i = 0; i < le32_to_cpu(cfgs->num_modules); i++) {
u32 num_modes = le32_to_cpu(cfgs->module[i].num_modes);
+
for (j = 0; j < num_modes; j++) {
if (cfgs->module[i].trace_lvl[j].mode == MODE_UART)
cfgs->module[i].trace_lvl[j].dbg_lvl =
@@ -3237,6 +3354,7 @@
if (!status) {
cfgs = (struct be_fat_conf_params *)(extfat_cmd.va +
sizeof(struct be_cmd_resp_hdr));
+
for (j = 0; j < le32_to_cpu(cfgs->module[0].num_modes); j++) {
if (cfgs->module[0].trace_lvl[j].mode == MODE_UART)
level = cfgs->module[0].trace_lvl[j].dbg_lvl;
@@ -3333,6 +3451,7 @@
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_port_name *resp = embedded_payload(wrb);
+
*port_name = resp->port_name[adapter->hba_port_num];
} else {
*port_name = adapter->hba_port_num + '0';
@@ -3956,6 +4075,7 @@
if (!status) {
struct be_cmd_resp_get_active_profile *resp =
embedded_payload(wrb);
+
*profile_id = le16_to_cpu(resp->active_profile_id);
}
@@ -4008,7 +4128,7 @@
{
struct be_adapter *adapter = netdev_priv(netdev_handle);
struct be_mcc_wrb *wrb;
- struct be_cmd_req_hdr *hdr = (struct be_cmd_req_hdr *) wrb_payload;
+ struct be_cmd_req_hdr *hdr = (struct be_cmd_req_hdr *)wrb_payload;
struct be_cmd_req_hdr *req;
struct be_cmd_resp_hdr *resp;
int status;
diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h
index 0e11868..eb5085d 100644
--- a/drivers/net/ethernet/emulex/benet/be_cmds.h
+++ b/drivers/net/ethernet/emulex/benet/be_cmds.h
@@ -57,7 +57,8 @@
MCC_STATUS_ILLEGAL_FIELD = 3,
MCC_STATUS_INSUFFICIENT_BUFFER = 4,
MCC_STATUS_UNAUTHORIZED_REQUEST = 5,
- MCC_STATUS_NOT_SUPPORTED = 66
+ MCC_STATUS_NOT_SUPPORTED = 66,
+ MCC_STATUS_FEATURE_NOT_SUPPORTED = 68
};
/* Additional status */
@@ -1004,8 +1005,8 @@
/* Identifies the type of port attached to NIC */
struct be_cmd_req_port_type {
struct be_cmd_req_hdr hdr;
- u32 page_num;
- u32 port;
+ __le32 page_num;
+ __le32 port;
};
enum {
@@ -1013,28 +1014,23 @@
TR_PAGE_A2 = 0xa2
};
+/* From SFF-8436 QSFP+ spec */
+#define QSFP_PLUS_CABLE_TYPE_OFFSET 0x83
+#define QSFP_PLUS_CR4_CABLE 0x8
+#define QSFP_PLUS_SR4_CABLE 0x4
+#define QSFP_PLUS_LR4_CABLE 0x2
+
+/* From SFF-8472 spec */
+#define SFP_PLUS_SFF_8472_COMP 0x5E
+#define SFP_PLUS_CABLE_TYPE_OFFSET 0x8
+#define SFP_PLUS_COPPER_CABLE 0x4
+
+#define PAGE_DATA_LEN 256
struct be_cmd_resp_port_type {
struct be_cmd_resp_hdr hdr;
u32 page_num;
u32 port;
- struct data {
- u8 identifier;
- u8 identifier_ext;
- u8 connector;
- u8 transceiver[8];
- u8 rsvd0[3];
- u8 length_km;
- u8 length_hm;
- u8 length_om1;
- u8 length_om2;
- u8 length_cu;
- u8 length_cu_m;
- u8 vendor_name[16];
- u8 rsvd;
- u8 vendor_oui[3];
- u8 vendor_pn[16];
- u8 vendor_rev[4];
- } data;
+ u8 page_data[PAGE_DATA_LEN];
};
/******************** Get FW Version *******************/
@@ -1367,6 +1363,9 @@
PHY_TYPE_BASET_1GB,
PHY_TYPE_BASEX_1GB,
PHY_TYPE_SGMII,
+ PHY_TYPE_QSFP,
+ PHY_TYPE_KR4_40GB,
+ PHY_TYPE_KR2_20GB,
PHY_TYPE_DISABLED = 255
};
@@ -1375,6 +1374,8 @@
#define BE_SUPPORTED_SPEED_100MBPS 2
#define BE_SUPPORTED_SPEED_1GBPS 4
#define BE_SUPPORTED_SPEED_10GBPS 8
+#define BE_SUPPORTED_SPEED_20GBPS 0x10
+#define BE_SUPPORTED_SPEED_40GBPS 0x20
#define BE_AN_EN 0x2
#define BE_PAUSE_SYM_EN 0x80
@@ -2066,6 +2067,9 @@
u8 status, u8 state);
int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num,
u32 *state);
+int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ u8 page_num, u8 *data);
+int be_cmd_query_cable_type(struct be_adapter *adapter);
int be_cmd_write_flashrom(struct be_adapter *adapter, struct be_dma_mem *cmd,
u32 flash_oper, u32 flash_opcode, u32 buf_size);
int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
index 2fd3826..e42a791 100644
--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
+++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
@@ -130,6 +130,7 @@
{DRVSTAT_INFO(roce_drops_payload_len)},
{DRVSTAT_INFO(roce_drops_crc)}
};
+
#define ETHTOOL_STATS_NUM ARRAY_SIZE(et_stats)
/* Stats related to multi RX queues: get_stats routine assumes bytes, pkts
@@ -152,6 +153,7 @@
*/
{DRVSTAT_RX_INFO(rx_drops_no_frags)}
};
+
#define ETHTOOL_RXSTATS_NUM (ARRAY_SIZE(et_rx_stats))
/* Stats related to multi TX queues: get_stats routine assumes compl is the
@@ -200,6 +202,7 @@
/* Pkts dropped in the driver's transmit path */
{DRVSTAT_TX_INFO(tx_drv_drops)}
};
+
#define ETHTOOL_TXSTATS_NUM (ARRAY_SIZE(et_tx_stats))
static const char et_self_tests[][ETH_GSTRING_LEN] = {
@@ -274,7 +277,7 @@
while ((total_read_len < buf_len) && !eof) {
chunk_size = min_t(u32, (buf_len - total_read_len),
- LANCER_READ_FILE_CHUNK);
+ LANCER_READ_FILE_CHUNK);
chunk_size = ALIGN(chunk_size, 4);
status = lancer_cmd_read_object(adapter, &read_cmd, chunk_size,
total_read_len, file_name,
@@ -333,7 +336,6 @@
struct be_adapter *adapter = netdev_priv(netdev);
struct be_aic_obj *aic = &adapter->aic_obj[0];
-
et->rx_coalesce_usecs = aic->prev_eqd;
et->rx_coalesce_usecs_high = aic->max_eqd;
et->rx_coalesce_usecs_low = aic->min_eqd;
@@ -475,18 +477,27 @@
}
}
-static u32 be_get_port_type(u32 phy_type, u32 dac_cable_len)
+static u32 be_get_port_type(struct be_adapter *adapter)
{
u32 port;
- switch (phy_type) {
+ switch (adapter->phy.interface_type) {
case PHY_TYPE_BASET_1GB:
case PHY_TYPE_BASEX_1GB:
case PHY_TYPE_SGMII:
port = PORT_TP;
break;
case PHY_TYPE_SFP_PLUS_10GB:
- port = dac_cable_len ? PORT_DA : PORT_FIBRE;
+ if (adapter->phy.cable_type & SFP_PLUS_COPPER_CABLE)
+ port = PORT_DA;
+ else
+ port = PORT_FIBRE;
+ break;
+ case PHY_TYPE_QSFP:
+ if (adapter->phy.cable_type & QSFP_PLUS_CR4_CABLE)
+ port = PORT_DA;
+ else
+ port = PORT_FIBRE;
break;
case PHY_TYPE_XFP_10GB:
case PHY_TYPE_SFP_1GB:
@@ -502,11 +513,11 @@
return port;
}
-static u32 convert_to_et_setting(u32 if_type, u32 if_speeds)
+static u32 convert_to_et_setting(struct be_adapter *adapter, u32 if_speeds)
{
u32 val = 0;
- switch (if_type) {
+ switch (adapter->phy.interface_type) {
case PHY_TYPE_BASET_1GB:
case PHY_TYPE_BASEX_1GB:
case PHY_TYPE_SGMII:
@@ -525,10 +536,38 @@
if (if_speeds & BE_SUPPORTED_SPEED_10GBPS)
val |= SUPPORTED_10000baseKX4_Full;
break;
+ case PHY_TYPE_KR2_20GB:
+ val |= SUPPORTED_Backplane;
+ if (if_speeds & BE_SUPPORTED_SPEED_10GBPS)
+ val |= SUPPORTED_10000baseKR_Full;
+ if (if_speeds & BE_SUPPORTED_SPEED_20GBPS)
+ val |= SUPPORTED_20000baseKR2_Full;
+ break;
case PHY_TYPE_KR_10GB:
val |= SUPPORTED_Backplane |
SUPPORTED_10000baseKR_Full;
break;
+ case PHY_TYPE_KR4_40GB:
+ val |= SUPPORTED_Backplane;
+ if (if_speeds & BE_SUPPORTED_SPEED_10GBPS)
+ val |= SUPPORTED_10000baseKR_Full;
+ if (if_speeds & BE_SUPPORTED_SPEED_40GBPS)
+ val |= SUPPORTED_40000baseKR4_Full;
+ break;
+ case PHY_TYPE_QSFP:
+ if (if_speeds & BE_SUPPORTED_SPEED_40GBPS) {
+ switch (adapter->phy.cable_type) {
+ case QSFP_PLUS_CR4_CABLE:
+ val |= SUPPORTED_40000baseCR4_Full;
+ break;
+ case QSFP_PLUS_LR4_CABLE:
+ val |= SUPPORTED_40000baseLR4_Full;
+ break;
+ default:
+ val |= SUPPORTED_40000baseSR4_Full;
+ break;
+ }
+ }
case PHY_TYPE_SFP_PLUS_10GB:
case PHY_TYPE_XFP_10GB:
case PHY_TYPE_SFP_1GB:
@@ -569,8 +608,6 @@
int status;
u32 auto_speeds;
u32 fixed_speeds;
- u32 dac_cable_len;
- u16 interface_type;
if (adapter->phy.link_speed < 0) {
status = be_cmd_link_status_query(adapter, &link_speed,
@@ -581,21 +618,19 @@
status = be_cmd_get_phy_info(adapter);
if (!status) {
- interface_type = adapter->phy.interface_type;
auto_speeds = adapter->phy.auto_speeds_supported;
fixed_speeds = adapter->phy.fixed_speeds_supported;
- dac_cable_len = adapter->phy.dac_cable_len;
+
+ be_cmd_query_cable_type(adapter);
ecmd->supported =
- convert_to_et_setting(interface_type,
+ convert_to_et_setting(adapter,
auto_speeds |
fixed_speeds);
ecmd->advertising =
- convert_to_et_setting(interface_type,
- auto_speeds);
+ convert_to_et_setting(adapter, auto_speeds);
- ecmd->port = be_get_port_type(interface_type,
- dac_cable_len);
+ ecmd->port = be_get_port_type(adapter);
if (adapter->phy.auto_speeds_supported) {
ecmd->supported |= SUPPORTED_Autoneg;
@@ -649,8 +684,10 @@
{
struct be_adapter *adapter = netdev_priv(netdev);
- ring->rx_max_pending = ring->rx_pending = adapter->rx_obj[0].q.len;
- ring->tx_max_pending = ring->tx_pending = adapter->tx_obj[0].q.len;
+ ring->rx_max_pending = adapter->rx_obj[0].q.len;
+ ring->rx_pending = adapter->rx_obj[0].q.len;
+ ring->tx_max_pending = adapter->tx_obj[0].q.len;
+ ring->tx_pending = adapter->tx_obj[0].q.len;
}
static void
@@ -676,7 +713,7 @@
status = be_cmd_set_flow_control(adapter,
adapter->tx_fc, adapter->rx_fc);
if (status)
- dev_warn(&adapter->pdev->dev, "Pause param set failed.\n");
+ dev_warn(&adapter->pdev->dev, "Pause param set failed\n");
return be_cmd_status(status);
}
@@ -942,8 +979,6 @@
FW_LOG_LEVEL_DEFAULT :
FW_LOG_LEVEL_FATAL);
adapter->msg_enable = level;
-
- return;
}
static u64 be_get_rss_hash_opts(struct be_adapter *adapter, u64 flow_type)
@@ -1162,6 +1197,7 @@
if (indir) {
struct be_rx_obj *rxo;
+
for (i = 0; i < RSS_INDIR_TABLE_LEN; i++) {
j = indir[i];
rxo = &adapter->rx_obj[j];
@@ -1177,8 +1213,8 @@
hkey = adapter->rss_info.rss_hkey;
rc = be_cmd_rss_config(adapter, rsstable,
- adapter->rss_info.rss_flags,
- RSS_INDIR_TABLE_LEN, hkey);
+ adapter->rss_info.rss_flags,
+ RSS_INDIR_TABLE_LEN, hkey);
if (rc) {
adapter->rss_info.rss_flags = RSS_ENABLE_NONE;
return -EIO;
@@ -1189,6 +1225,58 @@
return 0;
}
+static int be_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct be_adapter *adapter = netdev_priv(netdev);
+ u8 page_data[PAGE_DATA_LEN];
+ int status;
+
+ if (!check_privilege(adapter, MAX_PRIVILEGES))
+ return -EOPNOTSUPP;
+
+ status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+ page_data);
+ if (!status) {
+ if (!page_data[SFP_PLUS_SFF_8472_COMP]) {
+ modinfo->type = ETH_MODULE_SFF_8079;
+ modinfo->eeprom_len = PAGE_DATA_LEN;
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = 2 * PAGE_DATA_LEN;
+ }
+ }
+ return be_cmd_status(status);
+}
+
+static int be_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct be_adapter *adapter = netdev_priv(netdev);
+ int status;
+
+ if (!check_privilege(adapter, MAX_PRIVILEGES))
+ return -EOPNOTSUPP;
+
+ status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+ data);
+ if (status)
+ goto err;
+
+ if (eeprom->offset + eeprom->len > PAGE_DATA_LEN) {
+ status = be_cmd_read_port_transceiver_data(adapter,
+ TR_PAGE_A2,
+ data +
+ PAGE_DATA_LEN);
+ if (status)
+ goto err;
+ }
+ if (eeprom->offset)
+ memmove(data, data + eeprom->offset, eeprom->len);
+err:
+ return be_cmd_status(status);
+}
+
const struct ethtool_ops be_ethtool_ops = {
.get_settings = be_get_settings,
.get_drvinfo = be_get_drvinfo,
@@ -1220,5 +1308,7 @@
.get_rxfh = be_get_rxfh,
.set_rxfh = be_set_rxfh,
.get_channels = be_get_channels,
- .set_channels = be_set_channels
+ .set_channels = be_set_channels,
+ .get_module_info = be_get_module_info,
+ .get_module_eeprom = be_get_module_eeprom
};
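The two hooks registered above wire the adapter into ethtool's module-EEPROM interface, so "ethtool -m <iface>" can decode the transceiver. The layout decision in be_get_module_info() follows SFF-8472: if the compliance byte in page A0 is zero, only the 256-byte SFF-8079 page exists; otherwise the diagnostics page A2 is reported as well. A minimal sketch of that decision, assuming PAGE_DATA_LEN is 256 and SFP_PLUS_SFF_8472_COMP indexes the SFF-8472 compliance byte (offset 94 in page A0):

	/* Sketch only -- mirrors be_get_module_info() above; page_a0 is
	 * assumed to hold the 256-byte lower page A0 of the transceiver.
	 */
	static int sff_eeprom_len(const u8 *page_a0)
	{
		if (!page_a0[94])	/* no SFF-8472 compliance */
			return 256;	/* page A0 only (SFF-8079) */
		return 2 * 256;		/* page A0 + diagnostics page A2 */
	}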
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 5b26c4c..9a18e79 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -86,6 +86,7 @@
"JTAG ",
"MPU_INTPEND "
};
+
/* UE Status High CSR */
static const char * const ue_status_hi_desc[] = {
"LPCMEMHOST",
@@ -122,10 +123,10 @@
"Unknown"
};
-
static void be_queue_free(struct be_adapter *adapter, struct be_queue_info *q)
{
struct be_dma_mem *mem = &q->dma_mem;
+
if (mem->va) {
dma_free_coherent(&adapter->pdev->dev, mem->size, mem->va,
mem->dma);
@@ -187,6 +188,7 @@
static void be_rxq_notify(struct be_adapter *adapter, u16 qid, u16 posted)
{
u32 val = 0;
+
val |= qid & DB_RQ_RING_ID_MASK;
val |= posted << DB_RQ_NUM_POSTED_SHIFT;
@@ -198,6 +200,7 @@
u16 posted)
{
u32 val = 0;
+
val |= txo->q.id & DB_TXULP_RING_ID_MASK;
val |= (posted & DB_TXULP_NUM_POSTED_MASK) << DB_TXULP_NUM_POSTED_SHIFT;
@@ -209,6 +212,7 @@
bool arm, bool clear_int, u16 num_popped)
{
u32 val = 0;
+
val |= qid & DB_EQ_RING_ID_MASK;
val |= ((qid & DB_EQ_RING_ID_EXT_MASK) << DB_EQ_RING_ID_EXT_MASK_SHIFT);
@@ -227,6 +231,7 @@
void be_cq_notify(struct be_adapter *adapter, u16 qid, bool arm, u16 num_popped)
{
u32 val = 0;
+
val |= qid & DB_CQ_RING_ID_MASK;
val |= ((qid & DB_CQ_RING_ID_EXT_MASK) <<
DB_CQ_RING_ID_EXT_MASK_SHIFT);
@@ -488,7 +493,6 @@
static void populate_lancer_stats(struct be_adapter *adapter)
{
-
struct be_drv_stats *drvs = &adapter->drv_stats;
struct lancer_pport_stats *pport_stats = pport_stats_from_cmd(adapter);
@@ -588,6 +592,7 @@
for_all_rx_queues(adapter, rxo, i) {
const struct be_rx_stats *rx_stats = rx_stats(rxo);
+
do {
start = u64_stats_fetch_begin_irq(&rx_stats->sync);
pkts = rx_stats(rxo)->rx_pkts;
@@ -602,6 +607,7 @@
for_all_tx_queues(adapter, txo, i) {
const struct be_tx_stats *tx_stats = tx_stats(txo);
+
do {
start = u64_stats_fetch_begin_irq(&tx_stats->sync);
pkts = tx_stats(txo)->tx_pkts;
@@ -807,6 +813,7 @@
if (skb->len > skb->data_len) {
int len = skb_headlen(skb);
+
busaddr = dma_map_single(dev, skb->data, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, busaddr))
goto dma_err;
@@ -820,6 +827,7 @@
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+
busaddr = skb_frag_dma_map(dev, frag, 0,
skb_frag_size(frag), DMA_TO_DEVICE);
if (dma_mapping_error(dev, busaddr))
@@ -910,7 +918,7 @@
if (ip6h->nexthdr != NEXTHDR_TCP &&
ip6h->nexthdr != NEXTHDR_UDP) {
struct ipv6_opt_hdr *ehdr =
- (struct ipv6_opt_hdr *) (skb->data + offset);
+ (struct ipv6_opt_hdr *)(skb->data + offset);
/* offending pkt: 2nd byte following IPv6 hdr is 0xff */
if (ehdr->hdrlen == 0xff)
@@ -974,8 +982,8 @@
* skip HW tagging is not enabled by FW.
*/
if (unlikely(be_ipv6_tx_stall_chk(adapter, skb) &&
- (adapter->pvid || adapter->qnq_vid) &&
- !qnq_async_evt_rcvd(adapter)))
+ (adapter->pvid || adapter->qnq_vid) &&
+ !qnq_async_evt_rcvd(adapter)))
goto tx_drop;
/* Manual VLAN tag insertion to prevent:
@@ -1093,6 +1101,7 @@
*/
static int be_vid_config(struct be_adapter *adapter)
{
+ struct device *dev = &adapter->pdev->dev;
u16 vids[BE_NUM_VLANS_SUPPORTED];
u16 num = 0, i = 0;
int status = 0;
@@ -1114,16 +1123,15 @@
if (addl_status(status) ==
MCC_ADDL_STATUS_INSUFFICIENT_RESOURCES)
goto set_vlan_promisc;
- dev_err(&adapter->pdev->dev,
- "Setting HW VLAN filtering failed.\n");
+ dev_err(dev, "Setting HW VLAN filtering failed\n");
} else {
if (adapter->flags & BE_FLAGS_VLAN_PROMISC) {
/* hw VLAN filtering re-enabled. */
status = be_cmd_rx_filter(adapter,
BE_FLAGS_VLAN_PROMISC, OFF);
if (!status) {
- dev_info(&adapter->pdev->dev,
- "Disabling VLAN Promiscuous mode.\n");
+ dev_info(dev,
+ "Disabling VLAN Promiscuous mode\n");
adapter->flags &= ~BE_FLAGS_VLAN_PROMISC;
}
}
@@ -1137,11 +1145,10 @@
status = be_cmd_rx_filter(adapter, BE_FLAGS_VLAN_PROMISC, ON);
if (!status) {
- dev_info(&adapter->pdev->dev, "Enable VLAN Promiscuous mode\n");
+ dev_info(dev, "Enable VLAN Promiscuous mode\n");
adapter->flags |= BE_FLAGS_VLAN_PROMISC;
} else
- dev_err(&adapter->pdev->dev,
- "Failed to enable VLAN Promiscuous mode.\n");
+ dev_err(dev, "Failed to enable VLAN Promiscuous mode\n");
return status;
}
@@ -1417,6 +1424,7 @@
max_tx_rate, vf);
return be_cmd_status(status);
}
+
static int be_set_vf_link_state(struct net_device *netdev, int vf,
int link_state)
{
@@ -1482,7 +1490,6 @@
tx_pkts = txo->stats.tx_reqs;
} while (u64_stats_fetch_retry_irq(&txo->stats.sync, start));
-
/* Skip, if wrapped around or first calculation */
now = jiffies;
if (!aic->jiffies || time_before(now, aic->jiffies) ||
@@ -1853,7 +1860,7 @@
* Allocate a page, split it to fragments of size rx_frag_size and post as
* receive buffers to BE
*/
-static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp)
+static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp, u32 frags_needed)
{
struct be_adapter *adapter = rxo->adapter;
struct be_rx_page_info *page_info = NULL, *prev_page_info = NULL;
@@ -1862,10 +1869,10 @@
struct device *dev = &adapter->pdev->dev;
struct be_eth_rx_d *rxd;
u64 page_dmaaddr = 0, frag_dmaaddr;
- u32 posted, page_offset = 0;
+ u32 posted, page_offset = 0, notify = 0;
page_info = &rxo->page_info_tbl[rxq->head];
- for (posted = 0; posted < MAX_RX_POST && !page_info->page; posted++) {
+ for (posted = 0; posted < frags_needed && !page_info->page; posted++) {
if (!pagep) {
pagep = be_alloc_pages(adapter->big_page_size, gfp);
if (unlikely(!pagep)) {
@@ -1921,7 +1928,11 @@
atomic_add(posted, &rxq->used);
if (rxo->rx_post_starved)
rxo->rx_post_starved = false;
- be_rxq_notify(adapter, rxq->id, posted);
+ do {
+ notify = min(256u, posted);
+ be_rxq_notify(adapter, rxq->id, notify);
+ posted -= notify;
+ } while (posted);
} else if (atomic_read(&rxq->used) == 0) {
/* Let be_worker replenish when memory is available */
rxo->rx_post_starved = true;
@@ -2050,7 +2061,8 @@
memset(page_info, 0, sizeof(*page_info));
}
BUG_ON(atomic_read(&rxq->used));
- rxq->tail = rxq->head = 0;
+ rxq->tail = 0;
+ rxq->head = 0;
}
static void be_tx_compl_clean(struct be_adapter *adapter)
@@ -2372,6 +2384,7 @@
struct be_queue_info *rx_cq = &rxo->cq;
struct be_rx_compl_info *rxcp;
u32 work_done;
+ u32 frags_consumed = 0;
for (work_done = 0; work_done < budget; work_done++) {
rxcp = be_rx_compl_get(rxo);
@@ -2404,6 +2417,7 @@
be_rx_compl_process(rxo, napi, rxcp);
loop_continue:
+ frags_consumed += rxcp->num_rcvd;
be_rx_stats_update(rxo, rxcp);
}
@@ -2415,7 +2429,9 @@
*/
if (atomic_read(&rxo->q.used) < RX_FRAGS_REFILL_WM &&
!rxo->rx_post_starved)
- be_post_rx_frags(rxo, GFP_ATOMIC);
+ be_post_rx_frags(rxo, GFP_ATOMIC,
+ max_t(u32, MAX_RX_POST,
+ frags_consumed));
}
return work_done;
@@ -2897,7 +2913,7 @@
/* First time posting */
for_all_rx_queues(adapter, rxo, i)
- be_post_rx_frags(rxo, GFP_KERNEL);
+ be_post_rx_frags(rxo, GFP_KERNEL, MAX_RX_POST);
return 0;
}
@@ -3387,7 +3403,7 @@
if (!be_max_vfs(adapter)) {
if (num_vfs)
- dev_warn(dev, "device doesn't support SRIOV\n");
+ dev_warn(dev, "SRIOV is disabled. Ignoring num_vfs\n");
adapter->num_vfs = 0;
return 0;
}
@@ -3661,7 +3677,7 @@
dev_info(dev, "FW version is %s\n", adapter->fw_ver);
if (BE2_chip(adapter) && fw_major_num(adapter->fw_ver) < 4) {
- dev_err(dev, "Firmware on card is old(%s), IRQs may not work.",
+ dev_err(dev, "Firmware on card is old(%s), IRQs may not work",
adapter->fw_ver);
dev_err(dev, "Please upgrade firmware to version >= 4.0\n");
}
@@ -3709,8 +3725,6 @@
be_eq_notify(eqo->adapter, eqo->q.id, false, true, 0);
napi_schedule(&eqo->napi);
}
-
- return;
}
#endif
@@ -4388,7 +4402,6 @@
return;
err:
be_disable_vxlan_offloads(adapter);
- return;
}
static void be_del_vxlan_port(struct net_device *netdev, sa_family_t sa_family,
@@ -4728,7 +4741,6 @@
be_detect_error(adapter);
if (adapter->hw_error && lancer_chip(adapter)) {
-
rtnl_lock();
netif_device_detach(adapter->netdev);
rtnl_unlock();
@@ -4765,7 +4777,7 @@
if (!adapter->stats_cmd_sent) {
if (lancer_chip(adapter))
lancer_cmd_get_pport_stats(adapter,
- &adapter->stats_cmd);
+ &adapter->stats_cmd);
else
be_cmd_get_stats(adapter, &adapter->stats_cmd);
}
@@ -4779,7 +4791,7 @@
* allocation failures.
*/
if (rxo->rx_post_starved)
- be_post_rx_frags(rxo, GFP_KERNEL);
+ be_post_rx_frags(rxo, GFP_KERNEL, MAX_RX_POST);
}
be_eqd_update(adapter);
@@ -4870,11 +4882,9 @@
}
}
- if (be_physfn(adapter)) {
- status = pci_enable_pcie_error_reporting(pdev);
- if (!status)
- dev_info(&pdev->dev, "PCIe error reporting enabled\n");
- }
+ status = pci_enable_pcie_error_reporting(pdev);
+ if (!status)
+ dev_info(&pdev->dev, "PCIe error reporting enabled\n");
status = be_ctrl_init(adapter);
if (status)
@@ -4914,7 +4924,8 @@
INIT_DELAYED_WORK(&adapter->work, be_worker);
INIT_DELAYED_WORK(&adapter->func_recovery_work, be_func_recovery_task);
- adapter->rx_fc = adapter->tx_fc = true;
+ adapter->rx_fc = true;
+ adapter->tx_fc = true;
status = be_setup(adapter);
if (status)
diff --git a/drivers/net/ethernet/emulex/benet/be_roce.c b/drivers/net/ethernet/emulex/benet/be_roce.c
index ef4672d..1328664 100644
--- a/drivers/net/ethernet/emulex/benet/be_roce.c
+++ b/drivers/net/ethernet/emulex/benet/be_roce.c
@@ -174,6 +174,7 @@
ocrdma_drv = drv;
list_for_each_entry(dev, &be_adapter_list, entry) {
struct net_device *netdev;
+
_be_roce_dev_add(dev);
netdev = dev->netdev;
if (netif_running(netdev) && netif_oper_up(netdev))
diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
index ee41d98..354a309 100644
--- a/drivers/net/ethernet/freescale/fec.h
+++ b/drivers/net/ethernet/freescale/fec.h
@@ -27,8 +27,8 @@
*/
#define FEC_IEVENT 0x004 /* Interrupt event reg */
#define FEC_IMASK 0x008 /* Interrupt mask reg */
-#define FEC_R_DES_ACTIVE 0x010 /* Receive descriptor reg */
-#define FEC_X_DES_ACTIVE 0x014 /* Transmit descriptor reg */
+#define FEC_R_DES_ACTIVE_0 0x010 /* Receive descriptor reg */
+#define FEC_X_DES_ACTIVE_0 0x014 /* Transmit descriptor reg */
#define FEC_ECNTRL 0x024 /* Ethernet control reg */
#define FEC_MII_DATA 0x040 /* MII manage frame reg */
#define FEC_MII_SPEED 0x044 /* MII speed control reg */
@@ -38,6 +38,12 @@
#define FEC_ADDR_LOW 0x0e4 /* Low 32bits MAC address */
#define FEC_ADDR_HIGH 0x0e8 /* High 16bits MAC address */
#define FEC_OPD 0x0ec /* Opcode + Pause duration */
+#define FEC_TXIC0 0xF0 /* Tx Interrupt Coalescing for ring 0 */
+#define FEC_TXIC1 0xF4 /* Tx Interrupt Coalescing for ring 1 */
+#define FEC_TXIC2 0xF8 /* Tx Interrupt Coalescing for ring 2 */
+#define FEC_RXIC0 0x100 /* Rx Interrupt Coalescing for ring 0 */
+#define FEC_RXIC1 0x104 /* Rx Interrupt Coalescing for ring 1 */
+#define FEC_RXIC2 0x108 /* Rx Interrupt Coalescing for ring 2 */
#define FEC_HASH_TABLE_HIGH 0x118 /* High 32bits hash table */
#define FEC_HASH_TABLE_LOW 0x11c /* Low 32bits hash table */
#define FEC_GRP_HASH_TABLE_HIGH 0x120 /* High 32bits hash table */
@@ -45,14 +51,27 @@
#define FEC_X_WMRK 0x144 /* FIFO transmit water mark */
#define FEC_R_BOUND 0x14c /* FIFO receive bound reg */
#define FEC_R_FSTART 0x150 /* FIFO receive start reg */
-#define FEC_R_DES_START 0x180 /* Receive descriptor ring */
-#define FEC_X_DES_START 0x184 /* Transmit descriptor ring */
+#define FEC_R_DES_START_1 0x160 /* Receive descriptor ring 1 */
+#define FEC_X_DES_START_1 0x164 /* Transmit descriptor ring 1 */
+#define FEC_R_DES_START_2 0x16c /* Receive descriptor ring 2 */
+#define FEC_X_DES_START_2 0x170 /* Transmit descriptor ring 2 */
+#define FEC_R_DES_START_0 0x180 /* Receive descriptor ring */
+#define FEC_X_DES_START_0 0x184 /* Transmit descriptor ring */
#define FEC_R_BUFF_SIZE 0x188 /* Maximum receive buff size */
#define FEC_R_FIFO_RSFL 0x190 /* Receive FIFO section full threshold */
#define FEC_R_FIFO_RSEM 0x194 /* Receive FIFO section empty threshold */
#define FEC_R_FIFO_RAEM 0x198 /* Receive FIFO almost empty threshold */
#define FEC_R_FIFO_RAFL 0x19c /* Receive FIFO almost full threshold */
#define FEC_RACC 0x1C4 /* Receive Accelerator function */
+#define FEC_RCMR_1 0x1c8 /* Receive classification match ring 1 */
+#define FEC_RCMR_2 0x1cc /* Receive classification match ring 2 */
+#define FEC_DMA_CFG_1 0x1d8 /* DMA class configuration for ring 1 */
+#define FEC_DMA_CFG_2 0x1dc /* DMA class Configuration for ring 2 */
+#define FEC_R_DES_ACTIVE_1 0x1e0 /* Rx descriptor active for ring 1 */
+#define FEC_X_DES_ACTIVE_1 0x1e4 /* Tx descriptor active for ring 1 */
+#define FEC_R_DES_ACTIVE_2 0x1e8 /* Rx descriptor active for ring 2 */
+#define FEC_X_DES_ACTIVE_2 0x1ec /* Tx descriptor active for ring 2 */
+#define FEC_QOS_SCHEME 0x1f0 /* Set multi queues Qos scheme */
#define FEC_MIIGSK_CFGR 0x300 /* MIIGSK Configuration reg */
#define FEC_MIIGSK_ENR 0x308 /* MIIGSK Enable reg */
@@ -121,8 +140,12 @@
#define FEC_IEVENT 0x004 /* Interrupt event reg */
#define FEC_IMASK 0x008 /* Interrupt mask reg */
#define FEC_IVEC 0x00c /* Interrupt vec status reg */
-#define FEC_R_DES_ACTIVE 0x010 /* Receive descriptor reg */
-#define FEC_X_DES_ACTIVE 0x014 /* Transmit descriptor reg */
+#define FEC_R_DES_ACTIVE_0 0x010 /* Receive descriptor reg */
+#define FEC_R_DES_ACTIVE_1 FEC_R_DES_ACTIVE_0
+#define FEC_R_DES_ACTIVE_2 FEC_R_DES_ACTIVE_0
+#define FEC_X_DES_ACTIVE_0 0x014 /* Transmit descriptor reg */
+#define FEC_X_DES_ACTIVE_1 FEC_X_DES_ACTIVE_0
+#define FEC_X_DES_ACTIVE_2 FEC_X_DES_ACTIVE_0
#define FEC_MII_DATA 0x040 /* MII manage frame reg */
#define FEC_MII_SPEED 0x044 /* MII speed control reg */
#define FEC_R_BOUND 0x08c /* FIFO receive bound reg */
@@ -136,11 +159,27 @@
#define FEC_ADDR_HIGH 0x3c4 /* High 16bits MAC address */
#define FEC_GRP_HASH_TABLE_HIGH 0x3c8 /* High 32bits hash table */
#define FEC_GRP_HASH_TABLE_LOW 0x3cc /* Low 32bits hash table */
-#define FEC_R_DES_START 0x3d0 /* Receive descriptor ring */
-#define FEC_X_DES_START 0x3d4 /* Transmit descriptor ring */
+#define FEC_R_DES_START_0 0x3d0 /* Receive descriptor ring */
+#define FEC_R_DES_START_1 FEC_R_DES_START_0
+#define FEC_R_DES_START_2 FEC_R_DES_START_0
+#define FEC_X_DES_START_0 0x3d4 /* Transmit descriptor ring */
+#define FEC_X_DES_START_1 FEC_X_DES_START_0
+#define FEC_X_DES_START_2 FEC_X_DES_START_0
#define FEC_R_BUFF_SIZE 0x3d8 /* Maximum receive buff size */
#define FEC_FIFO_RAM 0x400 /* FIFO RAM buffer */
-
+/* These registers do not exist on the real chip.
+ * They are defined only so that the build passes.
+ */
+#define FEC_RCMR_1 0xFFF
+#define FEC_RCMR_2 0xFFF
+#define FEC_DMA_CFG_1 0xFFF
+#define FEC_DMA_CFG_2 0xFFF
+#define FEC_TXIC0 0xFFF
+#define FEC_TXIC1 0xFFF
+#define FEC_TXIC2 0xFFF
+#define FEC_RXIC0 0xFFF
+#define FEC_RXIC1 0xFFF
+#define FEC_RXIC2 0xFFF
#endif /* CONFIG_M5272 */
@@ -233,6 +272,44 @@
/* This device has up to three irqs on some platforms */
#define FEC_IRQ_NUM 3
+/* Maximum number of queues supported.
+ * ENET with the AVB IP can support up to 3 independent tx queues and rx queues.
+ * Users may select any queue count less than or equal to 3.
+ */
+#define FEC_ENET_MAX_TX_QS 3
+#define FEC_ENET_MAX_RX_QS 3
+
+#define FEC_R_DES_START(X) ((X == 1) ? FEC_R_DES_START_1 : \
+ ((X == 2) ? \
+ FEC_R_DES_START_2 : FEC_R_DES_START_0))
+#define FEC_X_DES_START(X) ((X == 1) ? FEC_X_DES_START_1 : \
+ ((X == 2) ? \
+ FEC_X_DES_START_2 : FEC_X_DES_START_0))
+#define FEC_R_DES_ACTIVE(X) ((X == 1) ? FEC_R_DES_ACTIVE_1 : \
+ ((X == 2) ? \
+ FEC_R_DES_ACTIVE_2 : FEC_R_DES_ACTIVE_0))
+#define FEC_X_DES_ACTIVE(X) ((X == 1) ? FEC_X_DES_ACTIVE_1 : \
+ ((X == 2) ? \
+ FEC_X_DES_ACTIVE_2 : FEC_X_DES_ACTIVE_0))
+
+#define FEC_DMA_CFG(X) ((X == 2) ? FEC_DMA_CFG_2 : FEC_DMA_CFG_1)
+
+#define DMA_CLASS_EN (1 << 16)
+#define FEC_RCMR(X) ((X == 2) ? FEC_RCMR_2 : FEC_RCMR_1)
+#define IDLE_SLOPE_MASK 0xFFFF
+#define IDLE_SLOPE_1 0x200 /* BW fraction: 0.5 */
+#define IDLE_SLOPE_2 0x200 /* BW fraction: 0.5 */
+#define IDLE_SLOPE(X) ((X == 1) ? (IDLE_SLOPE_1 & IDLE_SLOPE_MASK) : \
+ (IDLE_SLOPE_2 & IDLE_SLOPE_MASK))
+#define RCMR_MATCHEN (0x1 << 16)
+#define RCMR_CMP_CFG(v, n) ((v & 0x7) << (n << 2))
+#define RCMR_CMP_1 (RCMR_CMP_CFG(0, 0) | RCMR_CMP_CFG(1, 1) | \
+ RCMR_CMP_CFG(2, 2) | RCMR_CMP_CFG(3, 3))
+#define RCMR_CMP_2 (RCMR_CMP_CFG(4, 0) | RCMR_CMP_CFG(5, 1) | \
+ RCMR_CMP_CFG(6, 2) | RCMR_CMP_CFG(7, 3))
+#define RCMR_CMP(X) ((X == 1) ? RCMR_CMP_1 : RCMR_CMP_2)
+#define FEC_TX_BD_FTYPE(X) ((X & 0xF) << 20)
+
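The selector macros above let the rest of the driver address per-ring registers uniformly. For example, kicking the RX DMA of every enabled ring reduces to the following (a sketch of what fec_enet_active_rxring() in fec_main.c below does):

	for (q = 0; q < fep->num_rx_queues; q++)
		writel(0, fep->hwp + FEC_R_DES_ACTIVE(q));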
/* The number of Tx and Rx buffers. These are allocated from the page
 * pool. The code may assume these are power of two, so it is best
* to keep them that size.
@@ -240,7 +317,7 @@
* the skbuffer directly.
*/
-#define FEC_ENET_RX_PAGES 8
+#define FEC_ENET_RX_PAGES 256
#define FEC_ENET_RX_FRSIZE 2048
#define FEC_ENET_RX_FRPPG (PAGE_SIZE / FEC_ENET_RX_FRSIZE)
#define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)
@@ -256,6 +333,69 @@
#define FLAG_RX_CSUM_ENABLED (BD_ENET_RX_ICE | BD_ENET_RX_PCR)
#define FLAG_RX_CSUM_ERROR (BD_ENET_RX_ICE | BD_ENET_RX_PCR)
+/* Interrupt events/masks. */
+#define FEC_ENET_HBERR ((uint)0x80000000) /* Heartbeat error */
+#define FEC_ENET_BABR ((uint)0x40000000) /* Babbling receiver */
+#define FEC_ENET_BABT ((uint)0x20000000) /* Babbling transmitter */
+#define FEC_ENET_GRA ((uint)0x10000000) /* Graceful stop complete */
+#define FEC_ENET_TXF_0 ((uint)0x08000000) /* Full frame transmitted */
+#define FEC_ENET_TXF_1 ((uint)0x00000008) /* Full frame transmitted */
+#define FEC_ENET_TXF_2 ((uint)0x00000080) /* Full frame transmitted */
+#define FEC_ENET_TXB ((uint)0x04000000) /* A buffer was transmitted */
+#define FEC_ENET_RXF_0 ((uint)0x02000000) /* Full frame received */
+#define FEC_ENET_RXF_1 ((uint)0x00000002) /* Full frame received */
+#define FEC_ENET_RXF_2 ((uint)0x00000020) /* Full frame received */
+#define FEC_ENET_RXB ((uint)0x01000000) /* A buffer was received */
+#define FEC_ENET_MII ((uint)0x00800000) /* MII interrupt */
+#define FEC_ENET_EBERR ((uint)0x00400000) /* SDMA bus error */
+#define FEC_ENET_TXF (FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2)
+#define FEC_ENET_RXF (FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2)
+#define FEC_ENET_TS_AVAIL ((uint)0x00010000)
+#define FEC_ENET_TS_TIMER ((uint)0x00008000)
+
+#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII | FEC_ENET_TS_TIMER)
+#define FEC_RX_DISABLED_IMASK (FEC_DEFAULT_IMASK & (~FEC_ENET_RXF))
+
+/* ENET interrupt coalescing macro define */
+#define FEC_ITR_CLK_SEL (0x1 << 30)
+#define FEC_ITR_EN (0x1 << 31)
+#define FEC_ITR_ICFT(X) ((X & 0xFF) << 20)
+#define FEC_ITR_ICTT(X) ((X) & 0xFFFF)
+#define FEC_ITR_ICFT_DEFAULT 200 /* Set 200 frame count threshold */
+#define FEC_ITR_ICTT_DEFAULT 1000 /* Set 1000us timer threshold */
+
+#define FEC_VLAN_TAG_LEN 0x04
+#define FEC_ETHTYPE_LEN 0x02
+
+struct fec_enet_priv_tx_q {
+ int index;
+ unsigned char *tx_bounce[TX_RING_SIZE];
+ struct sk_buff *tx_skbuff[TX_RING_SIZE];
+
+ dma_addr_t bd_dma;
+ struct bufdesc *tx_bd_base;
+ uint tx_ring_size;
+
+ unsigned short tx_stop_threshold;
+ unsigned short tx_wake_threshold;
+
+ struct bufdesc *cur_tx;
+ struct bufdesc *dirty_tx;
+ char *tso_hdrs;
+ dma_addr_t tso_hdrs_dma;
+};
+
+struct fec_enet_priv_rx_q {
+ int index;
+ struct sk_buff *rx_skbuff[RX_RING_SIZE];
+
+ dma_addr_t bd_dma;
+ struct bufdesc *rx_bd_base;
+ uint rx_ring_size;
+
+ struct bufdesc *cur_rx;
+};
+
/* The FEC buffer descriptors track the ring buffers. The rx_bd_base and
* tx_bd_base always point to the base of the buffer descriptors. The
* cur_rx and cur_tx point to the currently available buffer.
@@ -272,36 +412,28 @@
struct clk *clk_ipg;
struct clk *clk_ahb;
+ struct clk *clk_ref;
struct clk *clk_enet_out;
struct clk *clk_ptp;
bool ptp_clk_on;
struct mutex ptp_clk_mutex;
+ unsigned int num_tx_queues;
+ unsigned int num_rx_queues;
/* The saved address of a sent-in-place packet/buffer, for skfree(). */
- unsigned char *tx_bounce[TX_RING_SIZE];
- struct sk_buff *tx_skbuff[TX_RING_SIZE];
- struct sk_buff *rx_skbuff[RX_RING_SIZE];
+ struct fec_enet_priv_tx_q *tx_queue[FEC_ENET_MAX_TX_QS];
+ struct fec_enet_priv_rx_q *rx_queue[FEC_ENET_MAX_RX_QS];
- /* CPM dual port RAM relative addresses */
- dma_addr_t bd_dma;
- /* Address of Rx and Tx buffers */
- struct bufdesc *rx_bd_base;
- struct bufdesc *tx_bd_base;
- /* The next free ring entry */
- struct bufdesc *cur_rx, *cur_tx;
- /* The ring entries to be free()ed */
- struct bufdesc *dirty_tx;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+
+ unsigned long work_tx;
+ unsigned long work_rx;
+ unsigned long work_ts;
+ unsigned long work_mdio;
unsigned short bufdesc_size;
- unsigned short tx_ring_size;
- unsigned short rx_ring_size;
- unsigned short tx_stop_threshold;
- unsigned short tx_wake_threshold;
-
- /* Software TSO */
- char *tso_hdrs;
- dma_addr_t tso_hdrs_dma;
struct platform_device *pdev;
@@ -340,6 +472,16 @@
int hwts_tx_en;
struct delayed_work time_keep;
struct regulator *reg_phy;
+
+ unsigned int tx_align;
+ unsigned int rx_align;
+
+ /* hw interrupt coalesce */
+ unsigned int rx_pkts_itr;
+ unsigned int rx_time_itr;
+ unsigned int tx_pkts_itr;
+ unsigned int tx_time_itr;
+ unsigned int itr_clk_rate;
};
void fec_ptp_init(struct platform_device *pdev);
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 89355a7..2c73434 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -63,15 +63,12 @@
#include "fec.h"
static void set_multicast_list(struct net_device *ndev);
-
-#if defined(CONFIG_ARM)
-#define FEC_ALIGNMENT 0xf
-#else
-#define FEC_ALIGNMENT 0x3
-#endif
+static void fec_enet_itr_coal_init(struct net_device *ndev);
#define DRIVER_NAME "fec"
+#define FEC_ENET_GET_QUQUE(_x) ((_x == 0) ? 1 : ((_x == 1) ? 2 : 0))
+
/* Pause frame field and FIFO threshold */
#define FEC_ENET_FCE (1 << 5)
#define FEC_ENET_RSEM_V 0x84
@@ -104,6 +101,22 @@
* ENET_TDAR[TDAR].
*/
#define FEC_QUIRK_ERR006358 (1 << 7)
+/* ENET IP hw AVB
+ *
+ * The i.MX6SX ENET IP adds Audio Video Bridging (AVB) support:
+ * - Two class indicators on receive with configurable priority
+ * - Two class indicators and a line speed timer on transmit, allowing
+ * implementation of class credit-based shapers externally
+ * - Additional DMA registers provisioned to allow managing up to 3
+ * independent rings
+ */
+#define FEC_QUIRK_HAS_AVB (1 << 8)
+/* There is a TDAR race condition for multiQ when the software sets TDAR
+ * and the UDMA clears TDAR simultaneously or in a small window (2-4 cycles).
+ * This will cause the udma_tx and udma_tx_arbiter state machines to hang.
+ * The issue exists in the i.MX6SX ENET IP.
+ */
+#define FEC_QUIRK_ERR007885 (1 << 9)
static struct platform_device_id fec_devtype[] = {
{
@@ -128,6 +141,12 @@
.name = "mvf600-fec",
.driver_data = FEC_QUIRK_ENET_MAC,
}, {
+ .name = "imx6sx-fec",
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+ FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+ FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+ FEC_QUIRK_ERR007885,
+ }, {
/* sentinel */
}
};
@@ -139,6 +158,7 @@
IMX28_FEC,
IMX6Q_FEC,
MVF600_FEC,
+ IMX6SX_FEC,
};
static const struct of_device_id fec_dt_ids[] = {
@@ -147,6 +167,7 @@
{ .compatible = "fsl,imx28-fec", .data = &fec_devtype[IMX28_FEC], },
{ .compatible = "fsl,imx6q-fec", .data = &fec_devtype[IMX6Q_FEC], },
{ .compatible = "fsl,mvf600-fec", .data = &fec_devtype[MVF600_FEC], },
+ { .compatible = "fsl,imx6sx-fec", .data = &fec_devtype[IMX6SX_FEC], },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, fec_dt_ids);
@@ -175,21 +196,6 @@
#endif
#endif /* CONFIG_M5272 */
-/* Interrupt events/masks. */
-#define FEC_ENET_HBERR ((uint)0x80000000) /* Heartbeat error */
-#define FEC_ENET_BABR ((uint)0x40000000) /* Babbling receiver */
-#define FEC_ENET_BABT ((uint)0x20000000) /* Babbling transmitter */
-#define FEC_ENET_GRA ((uint)0x10000000) /* Graceful stop complete */
-#define FEC_ENET_TXF ((uint)0x08000000) /* Full frame transmitted */
-#define FEC_ENET_TXB ((uint)0x04000000) /* A buffer was transmitted */
-#define FEC_ENET_RXF ((uint)0x02000000) /* Full frame received */
-#define FEC_ENET_RXB ((uint)0x01000000) /* A buffer was received */
-#define FEC_ENET_MII ((uint)0x00800000) /* MII interrupt */
-#define FEC_ENET_EBERR ((uint)0x00400000) /* SDMA bus error */
-
-#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII)
-#define FEC_RX_DISABLED_IMASK (FEC_DEFAULT_IMASK & (~FEC_ENET_RXF))
-
/* The FEC stores dest/src/type/vlan, data, and checksum for receive packets.
*/
#define PKT_MAXBUF_SIZE 1522
@@ -242,22 +248,26 @@
static int mii_cnt;
static inline
-struct bufdesc *fec_enet_get_nextdesc(struct bufdesc *bdp, struct fec_enet_private *fep)
+struct bufdesc *fec_enet_get_nextdesc(struct bufdesc *bdp,
+ struct fec_enet_private *fep,
+ int queue_id)
{
struct bufdesc *new_bd = bdp + 1;
struct bufdesc_ex *ex_new_bd = (struct bufdesc_ex *)bdp + 1;
+ struct fec_enet_priv_tx_q *txq = fep->tx_queue[queue_id];
+ struct fec_enet_priv_rx_q *rxq = fep->rx_queue[queue_id];
struct bufdesc_ex *ex_base;
struct bufdesc *base;
int ring_size;
- if (bdp >= fep->tx_bd_base) {
- base = fep->tx_bd_base;
- ring_size = fep->tx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->tx_bd_base;
+ if (bdp >= txq->tx_bd_base) {
+ base = txq->tx_bd_base;
+ ring_size = txq->tx_ring_size;
+ ex_base = (struct bufdesc_ex *)txq->tx_bd_base;
} else {
- base = fep->rx_bd_base;
- ring_size = fep->rx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->rx_bd_base;
+ base = rxq->rx_bd_base;
+ ring_size = rxq->rx_ring_size;
+ ex_base = (struct bufdesc_ex *)rxq->rx_bd_base;
}
if (fep->bufdesc_ex)
@@ -269,22 +279,26 @@
}
static inline
-struct bufdesc *fec_enet_get_prevdesc(struct bufdesc *bdp, struct fec_enet_private *fep)
+struct bufdesc *fec_enet_get_prevdesc(struct bufdesc *bdp,
+ struct fec_enet_private *fep,
+ int queue_id)
{
struct bufdesc *new_bd = bdp - 1;
struct bufdesc_ex *ex_new_bd = (struct bufdesc_ex *)bdp - 1;
+ struct fec_enet_priv_tx_q *txq = fep->tx_queue[queue_id];
+ struct fec_enet_priv_rx_q *rxq = fep->rx_queue[queue_id];
struct bufdesc_ex *ex_base;
struct bufdesc *base;
int ring_size;
- if (bdp >= fep->tx_bd_base) {
- base = fep->tx_bd_base;
- ring_size = fep->tx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->tx_bd_base;
+ if (bdp >= txq->tx_bd_base) {
+ base = txq->tx_bd_base;
+ ring_size = txq->tx_ring_size;
+ ex_base = (struct bufdesc_ex *)txq->tx_bd_base;
} else {
- base = fep->rx_bd_base;
- ring_size = fep->rx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->rx_bd_base;
+ base = rxq->rx_bd_base;
+ ring_size = rxq->rx_ring_size;
+ ex_base = (struct bufdesc_ex *)rxq->rx_bd_base;
}
if (fep->bufdesc_ex)
@@ -300,14 +314,15 @@
return ((const char *)bdp - (const char *)base) / fep->bufdesc_size;
}
-static int fec_enet_get_free_txdesc_num(struct fec_enet_private *fep)
+static int fec_enet_get_free_txdesc_num(struct fec_enet_private *fep,
+ struct fec_enet_priv_tx_q *txq)
{
int entries;
- entries = ((const char *)fep->dirty_tx -
- (const char *)fep->cur_tx) / fep->bufdesc_size - 1;
+ entries = ((const char *)txq->dirty_tx -
+ (const char *)txq->cur_tx) / fep->bufdesc_size - 1;
- return entries > 0 ? entries : entries + fep->tx_ring_size;
+ return entries > 0 ? entries : entries + txq->tx_ring_size;
}
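A worked example of the free-count computation above, assuming a 512-entry ring: with dirty_tx at index 10 and cur_tx at index 500, (10 - 500) - 1 = -491, which is negative, so the ring size is added back to give -491 + 512 = 21 free entries -- exactly the slots between cur_tx and dirty_tx, with one descriptor held in reserve.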
static void *swap_buffer(void *bufaddr, int len)
@@ -324,22 +339,26 @@
static void fec_dump(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- struct bufdesc *bdp = fep->tx_bd_base;
- unsigned int index = 0;
+ struct bufdesc *bdp;
+ struct fec_enet_priv_tx_q *txq;
+ int index = 0;
netdev_info(ndev, "TX ring dump\n");
pr_info("Nr SC addr len SKB\n");
+ txq = fep->tx_queue[0];
+ bdp = txq->tx_bd_base;
+
do {
pr_info("%3u %c%c 0x%04x 0x%08lx %4u %p\n",
index,
- bdp == fep->cur_tx ? 'S' : ' ',
- bdp == fep->dirty_tx ? 'H' : ' ',
+ bdp == txq->cur_tx ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
bdp->cbd_sc, bdp->cbd_bufaddr, bdp->cbd_datlen,
- fep->tx_skbuff[index]);
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ txq->tx_skbuff[index]);
+ bdp = fec_enet_get_nextdesc(bdp, fep, 0);
index++;
- } while (bdp != fep->tx_bd_base);
+ } while (bdp != txq->tx_bd_base);
}
static inline bool is_ipv4_pkt(struct sk_buff *skb)
@@ -365,14 +384,17 @@
}
static int
-fec_enet_txq_submit_frag_skb(struct sk_buff *skb, struct net_device *ndev)
+fec_enet_txq_submit_frag_skb(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb,
+ struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
- struct bufdesc *bdp = fep->cur_tx;
+ struct bufdesc *bdp = txq->cur_tx;
struct bufdesc_ex *ebdp;
int nr_frags = skb_shinfo(skb)->nr_frags;
+ unsigned short queue = skb_get_queue_mapping(skb);
int frag, frag_len;
unsigned short status;
unsigned int estatus = 0;
@@ -384,7 +406,7 @@
for (frag = 0; frag < nr_frags; frag++) {
this_frag = &skb_shinfo(skb)->frags[frag];
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
ebdp = (struct bufdesc_ex *)bdp;
status = bdp->cbd_sc;
@@ -404,6 +426,8 @@
}
if (fep->bufdesc_ex) {
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
ebdp->cbd_bdu = 0;
@@ -412,11 +436,11 @@
bufaddr = page_address(this_frag->page.p) + this_frag->page_offset;
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
- if (((unsigned long) bufaddr) & FEC_ALIGNMENT ||
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
+ if (((unsigned long) bufaddr) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], bufaddr, frag_len);
- bufaddr = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], bufaddr, frag_len);
+ bufaddr = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(bufaddr, frag_len);
@@ -436,21 +460,22 @@
bdp->cbd_sc = status;
}
- fep->cur_tx = bdp;
+ txq->cur_tx = bdp;
return 0;
dma_mapping_error:
- bdp = fep->cur_tx;
+ bdp = txq->cur_tx;
for (i = 0; i < frag; i++) {
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
bdp->cbd_datlen, DMA_TO_DEVICE);
}
return NETDEV_TX_OK;
}
-static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
+static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb, struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
@@ -461,12 +486,13 @@
dma_addr_t addr;
unsigned short status;
unsigned short buflen;
+ unsigned short queue;
unsigned int estatus = 0;
unsigned int index;
int entries_free;
int ret;
- entries_free = fec_enet_get_free_txdesc_num(fep);
+ entries_free = fec_enet_get_free_txdesc_num(fep, txq);
if (entries_free < MAX_SKB_FRAGS + 1) {
dev_kfree_skb_any(skb);
if (net_ratelimit())
@@ -481,7 +507,7 @@
}
/* Fill in a Tx ring entry */
- bdp = fep->cur_tx;
+ bdp = txq->cur_tx;
status = bdp->cbd_sc;
status &= ~BD_ENET_TX_STATS;
@@ -489,11 +515,12 @@
bufaddr = skb->data;
buflen = skb_headlen(skb);
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
- if (((unsigned long) bufaddr) & FEC_ALIGNMENT ||
+ queue = skb_get_queue_mapping(skb);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
+ if (((unsigned long) bufaddr) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], skb->data, buflen);
- bufaddr = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], skb->data, buflen);
+ bufaddr = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(bufaddr, buflen);
@@ -509,7 +536,7 @@
}
if (nr_frags) {
- ret = fec_enet_txq_submit_frag_skb(skb, ndev);
+ ret = fec_enet_txq_submit_frag_skb(txq, skb, ndev);
if (ret)
return ret;
} else {
@@ -530,6 +557,9 @@
fep->hwts_tx_en))
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
+
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
@@ -537,10 +567,10 @@
ebdp->cbd_esc = estatus;
}
- last_bdp = fep->cur_tx;
- index = fec_enet_get_bd_index(fep->tx_bd_base, last_bdp, fep);
+ last_bdp = txq->cur_tx;
+ index = fec_enet_get_bd_index(txq->tx_bd_base, last_bdp, fep);
/* Save skb pointer */
- fep->tx_skbuff[index] = skb;
+ txq->tx_skbuff[index] = skb;
bdp->cbd_datlen = buflen;
bdp->cbd_bufaddr = addr;
@@ -552,27 +582,29 @@
bdp->cbd_sc = status;
/* If this was the last BD in the ring, start at the beginning again. */
- bdp = fec_enet_get_nextdesc(last_bdp, fep);
+ bdp = fec_enet_get_nextdesc(last_bdp, fep, queue);
skb_tx_timestamp(skb);
- fep->cur_tx = bdp;
+ txq->cur_tx = bdp;
/* Trigger transmission start */
- writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE(queue));
return 0;
}
static int
-fec_enet_txq_put_data_tso(struct sk_buff *skb, struct net_device *ndev,
- struct bufdesc *bdp, int index, char *data,
- int size, bool last_tcp, bool is_last)
+fec_enet_txq_put_data_tso(struct fec_enet_priv_tx_q *txq, struct sk_buff *skb,
+ struct net_device *ndev,
+ struct bufdesc *bdp, int index, char *data,
+ int size, bool last_tcp, bool is_last)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
- struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ struct bufdesc_ex *ebdp = container_of(bdp, struct bufdesc_ex, desc);
+ unsigned short queue = skb_get_queue_mapping(skb);
unsigned short status;
unsigned int estatus = 0;
dma_addr_t addr;
@@ -582,10 +614,10 @@
status |= (BD_ENET_TX_TC | BD_ENET_TX_READY);
- if (((unsigned long) data) & FEC_ALIGNMENT ||
+ if (((unsigned long) data) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], data, size);
- data = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], data, size);
+ data = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(data, size);
@@ -603,6 +635,8 @@
bdp->cbd_bufaddr = addr;
if (fep->bufdesc_ex) {
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
ebdp->cbd_bdu = 0;
@@ -624,14 +658,16 @@
}
static int
-fec_enet_txq_put_hdr_tso(struct sk_buff *skb, struct net_device *ndev,
- struct bufdesc *bdp, int index)
+fec_enet_txq_put_hdr_tso(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb, struct net_device *ndev,
+ struct bufdesc *bdp, int index)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
- struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ struct bufdesc_ex *ebdp = container_of(bdp, struct bufdesc_ex, desc);
+ unsigned short queue = skb_get_queue_mapping(skb);
void *bufaddr;
unsigned long dmabuf;
unsigned short status;
@@ -641,12 +677,12 @@
status &= ~BD_ENET_TX_STATS;
status |= (BD_ENET_TX_TC | BD_ENET_TX_READY);
- bufaddr = fep->tso_hdrs + index * TSO_HEADER_SIZE;
- dmabuf = fep->tso_hdrs_dma + index * TSO_HEADER_SIZE;
- if (((unsigned long) bufaddr) & FEC_ALIGNMENT ||
+ bufaddr = txq->tso_hdrs + index * TSO_HEADER_SIZE;
+ dmabuf = txq->tso_hdrs_dma + index * TSO_HEADER_SIZE;
+ if (((unsigned long)bufaddr) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], skb->data, hdr_len);
- bufaddr = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], skb->data, hdr_len);
+ bufaddr = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(bufaddr, hdr_len);
@@ -665,6 +701,8 @@
bdp->cbd_datlen = hdr_len;
if (fep->bufdesc_ex) {
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
ebdp->cbd_bdu = 0;
@@ -676,17 +714,22 @@
return 0;
}
-static int fec_enet_txq_submit_tso(struct sk_buff *skb, struct net_device *ndev)
+static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb,
+ struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
int total_len, data_left;
- struct bufdesc *bdp = fep->cur_tx;
+ struct bufdesc *bdp = txq->cur_tx;
+ unsigned short queue = skb_get_queue_mapping(skb);
struct tso_t tso;
unsigned int index = 0;
int ret;
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
- if (tso_count_descs(skb) >= fec_enet_get_free_txdesc_num(fep)) {
+ if (tso_count_descs(skb) >= fec_enet_get_free_txdesc_num(fep, txq)) {
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "NOT enough BD for TSO!\n");
@@ -706,14 +749,14 @@
while (total_len > 0) {
char *hdr;
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
total_len -= data_left;
/* prepare packet headers: MAC + IP + TCP */
- hdr = fep->tso_hdrs + index * TSO_HEADER_SIZE;
+ hdr = txq->tso_hdrs + index * TSO_HEADER_SIZE;
tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);
- ret = fec_enet_txq_put_hdr_tso(skb, ndev, bdp, index);
+ ret = fec_enet_txq_put_hdr_tso(txq, skb, ndev, bdp, index);
if (ret)
goto err_release;
@@ -721,10 +764,13 @@
int size;
size = min_t(int, tso.size, data_left);
- bdp = fec_enet_get_nextdesc(bdp, fep);
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
- ret = fec_enet_txq_put_data_tso(skb, ndev, bdp, index, tso.data,
- size, size == data_left,
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
+ index = fec_enet_get_bd_index(txq->tx_bd_base,
+ bdp, fep);
+ ret = fec_enet_txq_put_data_tso(txq, skb, ndev,
+ bdp, index,
+ tso.data, size,
+ size == data_left,
total_len == 0);
if (ret)
goto err_release;
@@ -733,17 +779,22 @@
tso_build_data(skb, &tso, size);
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
}
/* Save skb pointer */
- fep->tx_skbuff[index] = skb;
+ txq->tx_skbuff[index] = skb;
skb_tx_timestamp(skb);
- fep->cur_tx = bdp;
+ txq->cur_tx = bdp;
/* Trigger transmission start */
- writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+ if (!(id_entry->driver_data & FEC_QUIRK_ERR007885) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)))
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE(queue));
return 0;
@@ -757,18 +808,25 @@
{
struct fec_enet_private *fep = netdev_priv(ndev);
int entries_free;
+ unsigned short queue;
+ struct fec_enet_priv_tx_q *txq;
+ struct netdev_queue *nq;
int ret;
+ queue = skb_get_queue_mapping(skb);
+ txq = fep->tx_queue[queue];
+ nq = netdev_get_tx_queue(ndev, queue);
+
if (skb_is_gso(skb))
- ret = fec_enet_txq_submit_tso(skb, ndev);
+ ret = fec_enet_txq_submit_tso(txq, skb, ndev);
else
- ret = fec_enet_txq_submit_skb(skb, ndev);
+ ret = fec_enet_txq_submit_skb(txq, skb, ndev);
if (ret)
return ret;
- entries_free = fec_enet_get_free_txdesc_num(fep);
- if (entries_free <= fep->tx_stop_threshold)
- netif_stop_queue(ndev);
+ entries_free = fec_enet_get_free_txdesc_num(fep, txq);
+ if (entries_free <= txq->tx_stop_threshold)
+ netif_tx_stop_queue(nq);
return NETDEV_TX_OK;
}
@@ -778,46 +836,111 @@
static void fec_enet_bd_init(struct net_device *dev)
{
struct fec_enet_private *fep = netdev_priv(dev);
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
struct bufdesc *bdp;
unsigned int i;
+ unsigned int q;
- /* Initialize the receive buffer descriptors. */
- bdp = fep->rx_bd_base;
- for (i = 0; i < fep->rx_ring_size; i++) {
+ for (q = 0; q < fep->num_rx_queues; q++) {
+ /* Initialize the receive buffer descriptors. */
+ rxq = fep->rx_queue[q];
+ bdp = rxq->rx_bd_base;
- /* Initialize the BD for every fragment in the page. */
- if (bdp->cbd_bufaddr)
- bdp->cbd_sc = BD_ENET_RX_EMPTY;
- else
- bdp->cbd_sc = 0;
- bdp = fec_enet_get_nextdesc(bdp, fep);
- }
+ for (i = 0; i < rxq->rx_ring_size; i++) {
- /* Set the last buffer to wrap */
- bdp = fec_enet_get_prevdesc(bdp, fep);
- bdp->cbd_sc |= BD_SC_WRAP;
-
- fep->cur_rx = fep->rx_bd_base;
-
- /* ...and the same for transmit */
- bdp = fep->tx_bd_base;
- fep->cur_tx = bdp;
- for (i = 0; i < fep->tx_ring_size; i++) {
-
- /* Initialize the BD for every fragment in the page. */
- bdp->cbd_sc = 0;
- if (fep->tx_skbuff[i]) {
- dev_kfree_skb_any(fep->tx_skbuff[i]);
- fep->tx_skbuff[i] = NULL;
+ /* Initialize the BD for every fragment in the page. */
+ if (bdp->cbd_bufaddr)
+ bdp->cbd_sc = BD_ENET_RX_EMPTY;
+ else
+ bdp->cbd_sc = 0;
+ bdp = fec_enet_get_nextdesc(bdp, fep, q);
}
- bdp->cbd_bufaddr = 0;
- bdp = fec_enet_get_nextdesc(bdp, fep);
+
+ /* Set the last buffer to wrap */
+ bdp = fec_enet_get_prevdesc(bdp, fep, q);
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ rxq->cur_rx = rxq->rx_bd_base;
}
- /* Set the last buffer to wrap */
- bdp = fec_enet_get_prevdesc(bdp, fep);
- bdp->cbd_sc |= BD_SC_WRAP;
- fep->dirty_tx = bdp;
+ for (q = 0; q < fep->num_tx_queues; q++) {
+ /* ...and the same for transmit */
+ txq = fep->tx_queue[q];
+ bdp = txq->tx_bd_base;
+ txq->cur_tx = bdp;
+
+ for (i = 0; i < txq->tx_ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ bdp->cbd_sc = 0;
+ if (txq->tx_skbuff[i]) {
+ dev_kfree_skb_any(txq->tx_skbuff[i]);
+ txq->tx_skbuff[i] = NULL;
+ }
+ bdp->cbd_bufaddr = 0;
+ bdp = fec_enet_get_nextdesc(bdp, fep, q);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = fec_enet_get_prevdesc(bdp, fep, q);
+ bdp->cbd_sc |= BD_SC_WRAP;
+ txq->dirty_tx = bdp;
+ }
+}
+
+static void fec_enet_active_rxring(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int i;
+
+ for (i = 0; i < fep->num_rx_queues; i++)
+ writel(0, fep->hwp + FEC_R_DES_ACTIVE(i));
+}
+
+static void fec_enet_enable_ring(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
+ int i;
+
+ for (i = 0; i < fep->num_rx_queues; i++) {
+ rxq = fep->rx_queue[i];
+ writel(rxq->bd_dma, fep->hwp + FEC_R_DES_START(i));
+
+ /* enable DMA1/2 */
+ if (i)
+ writel(RCMR_MATCHEN | RCMR_CMP(i),
+ fep->hwp + FEC_RCMR(i));
+ }
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = fep->tx_queue[i];
+ writel(txq->bd_dma, fep->hwp + FEC_X_DES_START(i));
+
+ /* enable DMA1/2 */
+ if (i)
+ writel(DMA_CLASS_EN | IDLE_SLOPE(i),
+ fep->hwp + FEC_DMA_CFG(i));
+ }
+}
+
+static void fec_enet_reset_skb(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ struct fec_enet_priv_tx_q *txq;
+ int i, j;
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = fep->tx_queue[i];
+
+ for (j = 0; j < txq->tx_ring_size; j++) {
+ if (txq->tx_skbuff[j]) {
+ dev_kfree_skb_any(txq->tx_skbuff[j]);
+ txq->tx_skbuff[j] = NULL;
+ }
+ }
+ }
}
/*
@@ -831,15 +954,21 @@
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
- int i;
u32 val;
u32 temp_mac[2];
u32 rcntl = OPT_FRAME_SIZE | 0x04;
u32 ecntl = 0x2; /* ETHEREN */
- /* Whack a reset. We should wait for this. */
- writel(1, fep->hwp + FEC_ECNTRL);
- udelay(10);
+ /* Whack a reset. We should wait for this.
+ * For the i.MX6SX SoC, the ENET block sits on the AXI bus, so we
+ * disable the MAC instead of resetting it.
+ */
+ if (id_entry && id_entry->driver_data & FEC_QUIRK_HAS_AVB) {
+ writel(0, fep->hwp + FEC_ECNTRL);
+ } else {
+ writel(1, fep->hwp + FEC_ECNTRL);
+ udelay(10);
+ }
/*
* enet-mac reset will reset mac address registers too,
@@ -859,22 +988,10 @@
fec_enet_bd_init(ndev);
- /* Set receive and transmit descriptor base. */
- writel(fep->bd_dma, fep->hwp + FEC_R_DES_START);
- if (fep->bufdesc_ex)
- writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc_ex)
- * fep->rx_ring_size, fep->hwp + FEC_X_DES_START);
- else
- writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc)
- * fep->rx_ring_size, fep->hwp + FEC_X_DES_START);
+ fec_enet_enable_ring(ndev);
-
- for (i = 0; i <= TX_RING_MOD_MASK; i++) {
- if (fep->tx_skbuff[i]) {
- dev_kfree_skb_any(fep->tx_skbuff[i]);
- fep->tx_skbuff[i] = NULL;
- }
- }
+ /* Reset tx SKB buffers. */
+ fec_enet_reset_skb(ndev);
/* Enable MII mode */
if (fep->full_duplex == DUPLEX_FULL) {
@@ -996,13 +1113,17 @@
/* And last, enable the transmit and receive processing */
writel(ecntl, fep->hwp + FEC_ECNTRL);
- writel(0, fep->hwp + FEC_R_DES_ACTIVE);
+ fec_enet_active_rxring(ndev);
if (fep->bufdesc_ex)
fec_ptp_start_cyclecounter(ndev);
/* Enable interrupts we wish to service */
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
+
+ /* Init the interrupt coalescing */
+ fec_enet_itr_coal_init(ndev);
+
}
static void
@@ -1021,9 +1142,16 @@
netdev_err(ndev, "Graceful transmit stop did not complete!\n");
}
- /* Whack a reset. We should wait for this. */
- writel(1, fep->hwp + FEC_ECNTRL);
- udelay(10);
+ /* Whack a reset. We should wait for this.
+ * For the i.MX6SX SoC, the ENET block sits on the AXI bus, so we
+ * disable the MAC instead of resetting it.
+ */
+ if (id_entry && id_entry->driver_data & FEC_QUIRK_HAS_AVB) {
+ writel(0, fep->hwp + FEC_ECNTRL);
+ } else {
+ writel(1, fep->hwp + FEC_ECNTRL);
+ udelay(10);
+ }
writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
@@ -1081,37 +1209,45 @@
}
static void
-fec_enet_tx(struct net_device *ndev)
+fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
{
struct fec_enet_private *fep;
struct bufdesc *bdp;
unsigned short status;
struct sk_buff *skb;
+ struct fec_enet_priv_tx_q *txq;
+ struct netdev_queue *nq;
int index = 0;
int entries_free;
fep = netdev_priv(ndev);
- bdp = fep->dirty_tx;
+
+ queue_id = FEC_ENET_GET_QUQUE(queue_id);
+
+ txq = fep->tx_queue[queue_id];
+ /* get the netdev queue for this tx queue */
+ nq = netdev_get_tx_queue(ndev, queue_id);
+ bdp = txq->dirty_tx;
/* get next bdp of dirty_tx */
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
while (((status = bdp->cbd_sc) & BD_ENET_TX_READY) == 0) {
/* current queue is empty */
- if (bdp == fep->cur_tx)
+ if (bdp == txq->cur_tx)
break;
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
- skb = fep->tx_skbuff[index];
- fep->tx_skbuff[index] = NULL;
- if (!IS_TSO_HEADER(fep, bdp->cbd_bufaddr))
+ skb = txq->tx_skbuff[index];
+ txq->tx_skbuff[index] = NULL;
+ if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr))
dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
bdp->cbd_datlen, DMA_TO_DEVICE);
bdp->cbd_bufaddr = 0;
if (!skb) {
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
continue;
}
@@ -1153,23 +1289,37 @@
/* Free the sk buffer associated with this last transmit */
dev_kfree_skb_any(skb);
- fep->dirty_tx = bdp;
+ txq->dirty_tx = bdp;
/* Update pointer to next buffer descriptor to be transmitted */
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
/* Since we have freed up a buffer, the ring is no longer full
*/
if (netif_queue_stopped(ndev)) {
- entries_free = fec_enet_get_free_txdesc_num(fep);
- if (entries_free >= fep->tx_wake_threshold)
- netif_wake_queue(ndev);
+ entries_free = fec_enet_get_free_txdesc_num(fep, txq);
+ if (entries_free >= txq->tx_wake_threshold)
+ netif_tx_wake_queue(nq);
}
}
/* ERR006538: Keep the transmitter going */
- if (bdp != fep->cur_tx && readl(fep->hwp + FEC_X_DES_ACTIVE) == 0)
- writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+ if (bdp != txq->cur_tx &&
+ readl(fep->hwp + FEC_X_DES_ACTIVE(queue_id)) == 0)
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE(queue_id));
+}
+
+static void
+fec_enet_tx(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ u16 queue_id;
+ /* First process the class A queue, then class B and best effort queues */
+ for_each_set_bit(queue_id, &fep->work_tx, FEC_ENET_MAX_TX_QS) {
+ clear_bit(queue_id, &fep->work_tx);
+ fec_enet_tx_queue(ndev, queue_id);
+ }
+ return;
}
/* During a receive, the cur_rx points to the current incoming buffer.
@@ -1178,11 +1328,12 @@
* effectively tossing the packet.
*/
static int
-fec_enet_rx(struct net_device *ndev, int budget)
+fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
+ struct fec_enet_priv_rx_q *rxq;
struct bufdesc *bdp;
unsigned short status;
struct sk_buff *skb;
@@ -1197,11 +1348,13 @@
#ifdef CONFIG_M532x
flush_cache_all();
#endif
+ queue_id = FEC_ENET_GET_QUQUE(queue_id);
+ rxq = fep->rx_queue[queue_id];
/* First, grab all of the stats for the incoming packet.
* These get messed up if we get called due to a busy condition.
*/
- bdp = fep->cur_rx;
+ bdp = rxq->cur_rx;
while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) {
@@ -1215,7 +1368,6 @@
if ((status & BD_ENET_RX_LAST) == 0)
netdev_err(ndev, "rcv is not +last\n");
- writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT);
/* Check for errors. */
if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_NO |
@@ -1248,10 +1400,11 @@
pkt_len = bdp->cbd_datlen;
ndev->stats.rx_bytes += pkt_len;
- index = fec_enet_get_bd_index(fep->rx_bd_base, bdp, fep);
- data = fep->rx_skbuff[index]->data;
+ index = fec_enet_get_bd_index(rxq->rx_bd_base, bdp, fep);
+ data = rxq->rx_skbuff[index]->data;
dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(data, pkt_len);
@@ -1264,7 +1417,7 @@
/* If this is a VLAN packet remove the VLAN Tag */
vlan_packet_rcvd = false;
if ((ndev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
- fep->bufdesc_ex && (ebdp->cbd_esc & BD_ENET_RX_VLAN)) {
+ fep->bufdesc_ex && (ebdp->cbd_esc & BD_ENET_RX_VLAN)) {
/* Push and remove the vlan tag */
struct vlan_hdr *vlan_header =
(struct vlan_hdr *) (data + ETH_HLEN);
@@ -1323,7 +1476,8 @@
}
dma_sync_single_for_device(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
rx_processing_done:
/* Clear the status flags for this buffer */
status &= ~BD_ENET_RX_STATS;
@@ -1341,19 +1495,56 @@
}
/* Update BD pointer to next entry */
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
/* Doing this here will keep the FEC running while we process
* incoming frames. On a heavily loaded network, we should be
* able to keep up at the expense of system resources.
*/
- writel(0, fep->hwp + FEC_R_DES_ACTIVE);
+ writel(0, fep->hwp + FEC_R_DES_ACTIVE(queue_id));
}
- fep->cur_rx = bdp;
-
+ rxq->cur_rx = bdp;
return pkt_received;
}
+static int
+fec_enet_rx(struct net_device *ndev, int budget)
+{
+ int pkt_received = 0;
+ u16 queue_id;
+ struct fec_enet_private *fep = netdev_priv(ndev);
+
+ for_each_set_bit(queue_id, &fep->work_rx, FEC_ENET_MAX_RX_QS) {
+ clear_bit(queue_id, &fep->work_rx);
+ pkt_received += fec_enet_rx_queue(ndev,
+ budget - pkt_received, queue_id);
+ }
+ return pkt_received;
+}
+
+static bool
+fec_enet_collect_events(struct fec_enet_private *fep, uint int_events)
+{
+ if (int_events == 0)
+ return false;
+
+ if (int_events & FEC_ENET_RXF)
+ fep->work_rx |= (1 << 2);
+ if (int_events & FEC_ENET_RXF_1)
+ fep->work_rx |= (1 << 0);
+ if (int_events & FEC_ENET_RXF_2)
+ fep->work_rx |= (1 << 1);
+
+ if (int_events & FEC_ENET_TXF)
+ fep->work_tx |= (1 << 2);
+ if (int_events & FEC_ENET_TXF_1)
+ fep->work_tx |= (1 << 0);
+ if (int_events & FEC_ENET_TXF_2)
+ fep->work_tx |= (1 << 1);
+
+ return true;
+}
+
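The bit layout above is deliberate: the class A and class B rings (hardware queues 1 and 2) are recorded in bits 0 and 1, and the best-effort ring (queue 0) in bit 2, so that for_each_set_bit() in fec_enet_tx()/fec_enet_rx() services the AVB classes first; FEC_ENET_GET_QUQUE() then maps the bit index back to the hardware queue number. A sketch of the round trip (illustrative values):

	/* RXF_1 (hw queue 1) and RXF_0 (hw queue 0) both pending */
	unsigned long work = (1 << 0) | (1 << 2);
	u16 bit;

	for_each_set_bit(bit, &work, FEC_ENET_MAX_RX_QS)
		pr_debug("servicing hw rx queue %d\n",
			 FEC_ENET_GET_QUQUE(bit));
	/* services hardware queue 1 first, then queue 0 */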
static irqreturn_t
fec_enet_interrupt(int irq, void *dev_id)
{
@@ -1365,6 +1556,7 @@
int_events = readl(fep->hwp + FEC_IEVENT);
writel(int_events & ~napi_mask, fep->hwp + FEC_IEVENT);
+ fec_enet_collect_events(fep, int_events);
if (int_events & napi_mask) {
ret = IRQ_HANDLED;
@@ -1621,6 +1813,11 @@
}
mutex_unlock(&fep->ptp_clk_mutex);
}
+ if (fep->clk_ref) {
+ ret = clk_prepare_enable(fep->clk_ref);
+ if (ret)
+ goto failed_clk_ref;
+ }
} else {
clk_disable_unprepare(fep->clk_ahb);
clk_disable_unprepare(fep->clk_ipg);
@@ -1632,9 +1829,15 @@
fep->ptp_clk_on = false;
mutex_unlock(&fep->ptp_clk_mutex);
}
+ if (fep->clk_ref)
+ clk_disable_unprepare(fep->clk_ref);
}
return 0;
+
+failed_clk_ref:
+ if (fep->clk_ref)
+ clk_disable_unprepare(fep->clk_ref);
failed_clk_ptp:
if (fep->clk_enet_out)
clk_disable_unprepare(fep->clk_enet_out);
@@ -1674,13 +1877,13 @@
continue;
if (dev_id--)
continue;
- strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE);
+ strlcpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE);
break;
}
if (phy_id >= PHY_MAX_ADDR) {
netdev_info(ndev, "no PHY, assuming direct connection to switch\n");
- strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE);
+ strlcpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE);
phy_id = 0;
}
@@ -2062,12 +2265,141 @@
return genphy_restart_aneg(phydev);
}
+/* ITR clock source is enet system clock (clk_ahb).
+ * TCTT unit is cycle_ns * 64 cycle
+ * So, the ICTT value = X us / (cycle_ns * 64)
+ */
+static int fec_enet_us_to_itr_clock(struct net_device *ndev, int us)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+
+ return us * (fep->itr_clk_rate / 64000) / 1000;
+}
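A quick worked example of the conversion, assuming an illustrative 66 MHz AHB clock (the real rate is read from clk_ahb at probe time): itr_clk_rate / 64000 = 66000000 / 64000 = 1031, so the default 1000 us timer threshold becomes

	int ictt = 1000 * (66000000 / 64000) / 1000;	/* = 1031 cycles */

which fits comfortably in the 16-bit ICTT field validated in fec_enet_set_coalesce() below.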
+
+/* Set threshold for interrupt coalescing */
+static void fec_enet_itr_coal_set(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ int rx_itr, tx_itr;
+
+ if (!(id_entry->driver_data & FEC_QUIRK_HAS_AVB))
+ return;
+
+ /* Must be greater than zero to avoid unpredictable behavior */
+ if (!fep->rx_time_itr || !fep->rx_pkts_itr ||
+ !fep->tx_time_itr || !fep->tx_pkts_itr)
+ return;
+
+ /* Select enet system clock as Interrupt Coalescing
+ * timer Clock Source
+ */
+ rx_itr = FEC_ITR_CLK_SEL;
+ tx_itr = FEC_ITR_CLK_SEL;
+
+ /* set ICFT and ICTT */
+ rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
+ rx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr));
+ tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
+ tx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr));
+
+ rx_itr |= FEC_ITR_EN;
+ tx_itr |= FEC_ITR_EN;
+
+ writel(tx_itr, fep->hwp + FEC_TXIC0);
+ writel(rx_itr, fep->hwp + FEC_RXIC0);
+ writel(tx_itr, fep->hwp + FEC_TXIC1);
+ writel(rx_itr, fep->hwp + FEC_RXIC1);
+ writel(tx_itr, fep->hwp + FEC_TXIC2);
+ writel(rx_itr, fep->hwp + FEC_RXIC2);
+}
+
+static int
+fec_enet_get_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+
+ if (!(id_entry->driver_data & FEC_QUIRK_HAS_AVB))
+ return -EOPNOTSUPP;
+
+ ec->rx_coalesce_usecs = fep->rx_time_itr;
+ ec->rx_max_coalesced_frames = fep->rx_pkts_itr;
+
+ ec->tx_coalesce_usecs = fep->tx_time_itr;
+ ec->tx_max_coalesced_frames = fep->tx_pkts_itr;
+
+ return 0;
+}
+
+static int
+fec_enet_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+
+ unsigned int cycle;
+
+ if (!(id_entry->driver_data & FEC_QUIRK_HAS_AVB))
+ return -EOPNOTSUPP;
+
+ if (ec->rx_max_coalesced_frames > 255) {
+ pr_err("Rx coalesced frames exceed hardware limiation");
+ return -EINVAL;
+ }
+
+ if (ec->tx_max_coalesced_frames > 255) {
+ pr_err("Tx coalesced frame exceed hardware limiation");
+ return -EINVAL;
+ }
+
+ /* validate the requested, not the current, time threshold */
+ cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs);
+ if (cycle > 0xFFFF) {
+ pr_err("Rx coalesced usec exceed hardware limitation\n");
+ return -EINVAL;
+ }
+
+ cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs);
+ if (cycle > 0xFFFF) {
+ pr_err("Tx coalesced usec exceed hardware limitation\n");
+ return -EINVAL;
+ }
+
+ fep->rx_time_itr = ec->rx_coalesce_usecs;
+ fep->rx_pkts_itr = ec->rx_max_coalesced_frames;
+
+ fep->tx_time_itr = ec->tx_coalesce_usecs;
+ fep->tx_pkts_itr = ec->tx_max_coalesced_frames;
+
+ fec_enet_itr_coal_set(ndev);
+
+ return 0;
+}
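+
+/* From userspace these thresholds map onto the standard ethtool
+ * coalescing parameters, e.g. (illustrative invocation):
+ *
+ *	ethtool -C eth0 rx-usecs 1000 rx-frames 200
+ */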
+
+static void fec_enet_itr_coal_init(struct net_device *ndev)
+{
+ struct ethtool_coalesce ec;
+
+ ec.rx_coalesce_usecs = FEC_ITR_ICTT_DEFAULT;
+ ec.rx_max_coalesced_frames = FEC_ITR_ICFT_DEFAULT;
+
+ ec.tx_coalesce_usecs = FEC_ITR_ICTT_DEFAULT;
+ ec.tx_max_coalesced_frames = FEC_ITR_ICFT_DEFAULT;
+
+ fec_enet_set_coalesce(ndev, &ec);
+}
+
static const struct ethtool_ops fec_enet_ethtool_ops = {
.get_settings = fec_enet_get_settings,
.set_settings = fec_enet_set_settings,
.get_drvinfo = fec_enet_get_drvinfo,
.nway_reset = fec_enet_nway_reset,
.get_link = ethtool_op_get_link,
+ .get_coalesce = fec_enet_get_coalesce,
+ .set_coalesce = fec_enet_set_coalesce,
#ifndef CONFIG_M5272
.get_pauseparam = fec_enet_get_pauseparam,
.set_pauseparam = fec_enet_set_pauseparam,
@@ -2105,46 +2437,140 @@
unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
+ unsigned int q;
- bdp = fep->rx_bd_base;
- for (i = 0; i < fep->rx_ring_size; i++) {
- skb = fep->rx_skbuff[i];
- fep->rx_skbuff[i] = NULL;
- if (skb) {
- dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
- dev_kfree_skb(skb);
+ for (q = 0; q < fep->num_rx_queues; q++) {
+ rxq = fep->rx_queue[q];
+ bdp = rxq->rx_bd_base;
+ for (i = 0; i < rxq->rx_ring_size; i++) {
+ skb = rxq->rx_skbuff[i];
+ rxq->rx_skbuff[i] = NULL;
+ if (skb) {
+ dma_unmap_single(&fep->pdev->dev,
+ bdp->cbd_bufaddr,
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ bdp = fec_enet_get_nextdesc(bdp, fep, q);
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
}
- bdp = fep->tx_bd_base;
- for (i = 0; i < fep->tx_ring_size; i++) {
- kfree(fep->tx_bounce[i]);
- fep->tx_bounce[i] = NULL;
- skb = fep->tx_skbuff[i];
- fep->tx_skbuff[i] = NULL;
- dev_kfree_skb(skb);
+ for (q = 0; q < fep->num_tx_queues; q++) {
+ txq = fep->tx_queue[q];
+ bdp = txq->tx_bd_base;
+ for (i = 0; i < txq->tx_ring_size; i++) {
+ kfree(txq->tx_bounce[i]);
+ txq->tx_bounce[i] = NULL;
+ skb = txq->tx_skbuff[i];
+ txq->tx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
}
}
-static int fec_enet_alloc_buffers(struct net_device *ndev)
+static void fec_enet_free_queue(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int i;
+ struct fec_enet_priv_tx_q *txq;
+
+ for (i = 0; i < fep->num_tx_queues; i++)
+ if (fep->tx_queue[i] && fep->tx_queue[i]->tso_hdrs) {
+ txq = fep->tx_queue[i];
+ dma_free_coherent(NULL,
+ txq->tx_ring_size * TSO_HEADER_SIZE,
+ txq->tso_hdrs,
+ txq->tso_hdrs_dma);
+ }
+
+ for (i = 0; i < fep->num_rx_queues; i++)
+ kfree(fep->rx_queue[i]);
+
+ for (i = 0; i < fep->num_tx_queues; i++)
+ kfree(fep->tx_queue[i]);
+}
+
+static int fec_enet_alloc_queue(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int i;
+ int ret = 0;
+ struct fec_enet_priv_tx_q *txq;
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = kzalloc(sizeof(*txq), GFP_KERNEL);
+ if (!txq) {
+ ret = -ENOMEM;
+ goto alloc_failed;
+ }
+
+ fep->tx_queue[i] = txq;
+ txq->tx_ring_size = TX_RING_SIZE;
+ fep->total_tx_ring_size += fep->tx_queue[i]->tx_ring_size;
+
+ txq->tx_stop_threshold = FEC_MAX_SKB_DESCS;
+ txq->tx_wake_threshold =
+ (txq->tx_ring_size - txq->tx_stop_threshold) / 2;
+
+ txq->tso_hdrs = dma_alloc_coherent(NULL,
+ txq->tx_ring_size * TSO_HEADER_SIZE,
+ &txq->tso_hdrs_dma,
+ GFP_KERNEL);
+ if (!txq->tso_hdrs) {
+ ret = -ENOMEM;
+ goto alloc_failed;
+ }
+ }
+
+ for (i = 0; i < fep->num_rx_queues; i++) {
+ fep->rx_queue[i] = kzalloc(sizeof(*fep->rx_queue[i]),
+ GFP_KERNEL);
+ if (!fep->rx_queue[i]) {
+ ret = -ENOMEM;
+ goto alloc_failed;
+ }
+
+ fep->rx_queue[i]->rx_ring_size = RX_RING_SIZE;
+ fep->total_rx_ring_size += fep->rx_queue[i]->rx_ring_size;
+ }
+ return ret;
+
+alloc_failed:
+ fec_enet_free_queue(ndev);
+ return ret;
+}
+
+static int
+fec_enet_alloc_rxq_buffers(struct net_device *ndev, unsigned int queue)
{
struct fec_enet_private *fep = netdev_priv(ndev);
unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
+ struct fec_enet_priv_rx_q *rxq;
+ unsigned int off;
- bdp = fep->rx_bd_base;
- for (i = 0; i < fep->rx_ring_size; i++) {
+ rxq = fep->rx_queue[queue];
+ bdp = rxq->rx_bd_base;
+ for (i = 0; i < rxq->rx_ring_size; i++) {
dma_addr_t addr;
skb = netdev_alloc_skb(ndev, FEC_ENET_RX_FRSIZE);
if (!skb)
goto err_alloc;
+ off = ((unsigned long)skb->data) & fep->rx_align;
+ if (off)
+ skb_reserve(skb, fep->rx_align + 1 - off);
+
addr = dma_map_single(&fep->pdev->dev, skb->data,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ FEC_ENET_RX_FRSIZE - fep->rx_align, DMA_FROM_DEVICE);
+
if (dma_mapping_error(&fep->pdev->dev, addr)) {
dev_kfree_skb(skb);
if (net_ratelimit())
@@ -2152,7 +2578,7 @@
goto err_alloc;
}
- fep->rx_skbuff[i] = skb;
+ rxq->rx_skbuff[i] = skb;
bdp->cbd_bufaddr = addr;
bdp->cbd_sc = BD_ENET_RX_EMPTY;
@@ -2161,17 +2587,32 @@
ebdp->cbd_esc = BD_ENET_RX_INT;
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
}
/* Set the last buffer to wrap. */
- bdp = fec_enet_get_prevdesc(bdp, fep);
+ bdp = fec_enet_get_prevdesc(bdp, fep, queue);
bdp->cbd_sc |= BD_SC_WRAP;
+ return 0;
- bdp = fep->tx_bd_base;
- for (i = 0; i < fep->tx_ring_size; i++) {
- fep->tx_bounce[i] = kmalloc(FEC_ENET_TX_FRSIZE, GFP_KERNEL);
- if (!fep->tx_bounce[i])
+ err_alloc:
+ fec_enet_free_buffers(ndev);
+ return -ENOMEM;
+}
+
+static int
+fec_enet_alloc_txq_buffers(struct net_device *ndev, unsigned int queue)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ unsigned int i;
+ struct bufdesc *bdp;
+ struct fec_enet_priv_tx_q *txq;
+
+ txq = fep->tx_queue[queue];
+ bdp = txq->tx_bd_base;
+ for (i = 0; i < txq->tx_ring_size; i++) {
+ txq->tx_bounce[i] = kmalloc(FEC_ENET_TX_FRSIZE, GFP_KERNEL);
+ if (!txq->tx_bounce[i])
goto err_alloc;
bdp->cbd_sc = 0;
@@ -2182,11 +2623,11 @@
ebdp->cbd_esc = BD_ENET_TX_INT;
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
}
/* Set the last buffer to wrap. */
- bdp = fec_enet_get_prevdesc(bdp, fep);
+ bdp = fec_enet_get_prevdesc(bdp, fep, queue);
bdp->cbd_sc |= BD_SC_WRAP;
return 0;
@@ -2196,6 +2637,21 @@
return -ENOMEM;
}
+static int fec_enet_alloc_buffers(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ unsigned int i;
+
+ for (i = 0; i < fep->num_rx_queues; i++)
+ if (fec_enet_alloc_rxq_buffers(ndev, i))
+ return -ENOMEM;
+
+ for (i = 0; i < fep->num_tx_queues; i++)
+ if (fec_enet_alloc_txq_buffers(ndev, i))
+ return -ENOMEM;
+ return 0;
+}
+
static int
fec_enet_open(struct net_device *ndev)
{
@@ -2219,13 +2675,16 @@
ret = fec_enet_mii_probe(ndev);
if (ret) {
fec_enet_free_buffers(ndev);
+ fec_enet_clk_enable(ndev, false);
+ pinctrl_pm_select_sleep_state(&fep->pdev->dev);
return ret;
}
fec_restart(ndev);
napi_enable(&fep->napi);
phy_start(fep->phy_dev);
- netif_start_queue(ndev);
+ netif_tx_start_all_queues(ndev);
+
return 0;
}
@@ -2399,7 +2858,7 @@
/* Resume the device after updates */
if (netif_running(netdev) && changed & FEATURES_NEED_QUIESCE) {
fec_restart(netdev);
- netif_wake_queue(netdev);
+ netif_tx_wake_all_queues(netdev);
netif_tx_unlock_bh(netdev);
napi_enable(&fep->napi);
}
@@ -2432,39 +2891,38 @@
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
struct bufdesc *cbd_base;
+ dma_addr_t bd_dma;
int bd_size;
+ unsigned int i;
- /* init the tx & rx ring size */
- fep->tx_ring_size = TX_RING_SIZE;
- fep->rx_ring_size = RX_RING_SIZE;
+#if defined(CONFIG_ARM)
+ fep->rx_align = 0xf;
+ fep->tx_align = 0xf;
+#else
+ fep->rx_align = 0x3;
+ fep->tx_align = 0x3;
+#endif
- fep->tx_stop_threshold = FEC_MAX_SKB_DESCS;
- fep->tx_wake_threshold = (fep->tx_ring_size - fep->tx_stop_threshold) / 2;
+ fec_enet_alloc_queue(ndev);
if (fep->bufdesc_ex)
fep->bufdesc_size = sizeof(struct bufdesc_ex);
else
fep->bufdesc_size = sizeof(struct bufdesc);
- bd_size = (fep->tx_ring_size + fep->rx_ring_size) *
+ bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) *
fep->bufdesc_size;
/* Allocate memory for buffer descriptors. */
- cbd_base = dma_alloc_coherent(NULL, bd_size, &fep->bd_dma,
+ cbd_base = dma_alloc_coherent(NULL, bd_size, &bd_dma,
GFP_KERNEL);
- if (!cbd_base)
- return -ENOMEM;
-
- fep->tso_hdrs = dma_alloc_coherent(NULL, fep->tx_ring_size * TSO_HEADER_SIZE,
- &fep->tso_hdrs_dma, GFP_KERNEL);
- if (!fep->tso_hdrs) {
- dma_free_coherent(NULL, bd_size, cbd_base, fep->bd_dma);
+ if (!cbd_base) {
return -ENOMEM;
}
- memset(cbd_base, 0, PAGE_SIZE);
-
- fep->netdev = ndev;
+ memset(cbd_base, 0, bd_size);
/* Get the Ethernet address */
fec_get_mac(ndev);
@@ -2472,12 +2930,36 @@
fec_set_mac_address(ndev, NULL);
/* Set receive and transmit descriptor base. */
- fep->rx_bd_base = cbd_base;
- if (fep->bufdesc_ex)
- fep->tx_bd_base = (struct bufdesc *)
- (((struct bufdesc_ex *)cbd_base) + fep->rx_ring_size);
- else
- fep->tx_bd_base = cbd_base + fep->rx_ring_size;
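+ /* Carve the single coherent allocation into per-queue rings:
+ * all Rx rings first, then all Tx rings, advancing the CPU
+ * pointer and the DMA address in step.
+ */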
+ for (i = 0; i < fep->num_rx_queues; i++) {
+ rxq = fep->rx_queue[i];
+ rxq->index = i;
+ rxq->rx_bd_base = (struct bufdesc *)cbd_base;
+ rxq->bd_dma = bd_dma;
+ if (fep->bufdesc_ex) {
+ bd_dma += sizeof(struct bufdesc_ex) * rxq->rx_ring_size;
+ cbd_base = (struct bufdesc *)
+ (((struct bufdesc_ex *)cbd_base) + rxq->rx_ring_size);
+ } else {
+ bd_dma += sizeof(struct bufdesc) * rxq->rx_ring_size;
+ cbd_base += rxq->rx_ring_size;
+ }
+ }
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = fep->tx_queue[i];
+ txq->index = i;
+ txq->tx_bd_base = (struct bufdesc *)cbd_base;
+ txq->bd_dma = bd_dma;
+ if (fep->bufdesc_ex) {
+ bd_dma += sizeof(struct bufdesc_ex) * txq->tx_ring_size;
+ cbd_base = (struct bufdesc *)
+ (((struct bufdesc_ex *)cbd_base) + txq->tx_ring_size);
+ } else {
+ bd_dma += sizeof(struct bufdesc) * txq->tx_ring_size;
+ cbd_base += txq->tx_ring_size;
+ }
+ }
+
/* The FEC Ethernet specific entries in the device structure */
ndev->watchdog_timeo = TX_TIMEOUT;
@@ -2500,6 +2982,11 @@
fep->csum_flags |= FLAG_RX_CSUM_ENABLED;
}
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB) {
+ fep->tx_align = 0;
+ fep->rx_align = 0x3f;
+ }
+
ndev->hw_features = ndev->features;
fec_restart(ndev);
@@ -2545,6 +3032,42 @@
}
#endif /* CONFIG_OF */
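+
+/* The queue counts come from optional device-tree properties, e.g.
+ * (illustrative values):
+ *
+ *	fsl,num-tx-queues = <3>;
+ *	fsl,num-rx-queues = <3>;
+ *
+ * Missing or out-of-range values fall back to a single queue.
+ */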
+static void
+fec_enet_get_queue_num(struct platform_device *pdev, int *num_tx, int *num_rx)
+{
+ struct device_node *np = pdev->dev.of_node;
+ int err;
+
+ *num_tx = *num_rx = 1;
+
+ if (!np || !of_device_is_available(np))
+ return;
+
+ /* parse the number of tx and rx queues */
+ err = of_property_read_u32(np, "fsl,num-tx-queues", num_tx);
+ if (err)
+ *num_tx = 1;
+
+ err = of_property_read_u32(np, "fsl,num-rx-queues", num_rx);
+ if (err)
+ *num_rx = 1;
+
+ if (*num_tx < 1 || *num_tx > FEC_ENET_MAX_TX_QS) {
+ dev_warn(&pdev->dev, "Invalid num_tx(=%d), fall back to 1\n",
+ *num_tx);
+ *num_tx = 1;
+ return;
+ }
+
+ if (*num_rx < 1 || *num_rx > FEC_ENET_MAX_RX_QS) {
+ dev_warn(&pdev->dev, "Invalid num_rx(=%d), fall back to 1\n",
+ *num_rx);
+ *num_rx = 1;
+ return;
+ }
+}
+
static int
fec_probe(struct platform_device *pdev)
{
@@ -2556,13 +3079,18 @@
const struct of_device_id *of_id;
static int dev_id;
struct device_node *np = pdev->dev.of_node, *phy_node;
+ int num_tx_qs;
+ int num_rx_qs;
of_id = of_match_device(fec_dt_ids, &pdev->dev);
if (of_id)
pdev->id_entry = of_id->data;
+ fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs);
+
/* Init network device */
- ndev = alloc_etherdev(sizeof(struct fec_enet_private));
+ ndev = alloc_etherdev_mqs(sizeof(struct fec_enet_private),
+ num_tx_qs, num_rx_qs);
if (!ndev)
return -ENOMEM;
@@ -2571,6 +3099,9 @@
/* setup board info structure */
fep = netdev_priv(ndev);
+ fep->num_rx_queues = num_rx_qs;
+ fep->num_tx_queues = num_tx_qs;
+
#if !defined(CONFIG_M5272)
/* default enable pause frame auto negotiation */
if (pdev->id_entry &&
@@ -2630,6 +3161,8 @@
goto failed_clk;
}
+ fep->itr_clk_rate = clk_get_rate(fep->clk_ahb);
+
/* enet_out is optional, depends on board */
fep->clk_enet_out = devm_clk_get(&pdev->dev, "enet_out");
if (IS_ERR(fep->clk_enet_out))
@@ -2637,6 +3170,12 @@
fep->ptp_clk_on = false;
mutex_init(&fep->ptp_clk_mutex);
+
+ /* clk_ref is optional, depends on board */
+ fep->clk_ref = devm_clk_get(&pdev->dev, "enet_clk_ref");
+ if (IS_ERR(fep->clk_ref))
+ fep->clk_ref = NULL;
+
fep->clk_ptp = devm_clk_get(&pdev->dev, "ptp");
fep->bufdesc_ex =
pdev->id_entry->driver_data & FEC_QUIRK_HAS_BUFDESC_EX;
@@ -2684,6 +3223,7 @@
goto failed_irq;
}
+ init_completion(&fep->mdio_done);
ret = fec_enet_mii_init(pdev);
if (ret)
goto failed_mii_init;
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index bb9f0ba..6a6d5ee 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -300,4 +300,23 @@
will be called i40evf. MSI-X interrupt support is required
for this driver to work correctly.
+config FM10K
+ tristate "Intel(R) FM10000 Ethernet Switch Host Interface Support"
+ default n
+ depends on PCI_MSI
+ ---help---
+ This driver supports Intel(R) FM10000 Ethernet Switch Host
+ Interface. For more information on how to identify your adapter,
+ go to the Adapter & Driver ID Guide at:
+
+ <http://support.intel.com/support/network/sb/CS-008441.htm>
+
+ For general information and support, go to the Intel support
+ website at:
+
+ <http://support.intel.com>
+
+ To compile this driver as a module, choose M here. The module
+ will be called fm10k. MSI-X interrupt support is required
+ for this driver to work correctly.
+
endif # NET_VENDOR_INTEL
diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile
index cdbbca8..5ea764d 100644
--- a/drivers/net/ethernet/intel/Makefile
+++ b/drivers/net/ethernet/intel/Makefile
@@ -12,3 +12,4 @@
obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IXGB) += ixgb/
obj-$(CONFIG_I40EVF) += i40evf/
+obj-$(CONFIG_FM10K) += fm10k/
diff --git a/drivers/net/ethernet/intel/e1000/e1000.h b/drivers/net/ethernet/intel/e1000/e1000.h
index 10a0f22..6970710 100644
--- a/drivers/net/ethernet/intel/e1000/e1000.h
+++ b/drivers/net/ethernet/intel/e1000/e1000.h
@@ -148,16 +148,23 @@
/* wrapper around a pointer to a socket buffer,
* so a DMA handle can be stored along with the buffer
*/
-struct e1000_buffer {
+struct e1000_tx_buffer {
struct sk_buff *skb;
dma_addr_t dma;
- struct page *page;
unsigned long time_stamp;
u16 length;
u16 next_to_watch;
- unsigned int segs;
+ bool mapped_as_page;
+ unsigned short segs;
unsigned int bytecount;
- u16 mapped_as_page;
+};
+
+struct e1000_rx_buffer {
+ union {
+ struct page *page; /* jumbo: alloc_page */
+ u8 *data; /* else, netdev_alloc_frag */
+ } rxbuf;
+ dma_addr_t dma;
};
struct e1000_tx_ring {
@@ -174,7 +181,7 @@
/* next descriptor to check for DD status bit */
unsigned int next_to_clean;
/* array of buffer information structs */
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
u16 tdh;
u16 tdt;
@@ -195,7 +202,7 @@
/* next descriptor to check for DD status bit */
unsigned int next_to_clean;
/* array of buffer information structs */
- struct e1000_buffer *buffer_info;
+ struct e1000_rx_buffer *buffer_info;
struct sk_buff *rx_skb_top;
/* cpu for rx queue */
diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
index 9b50272..b691eb4 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
@@ -968,10 +968,9 @@
if (rxdr->buffer_info[i].dma)
dma_unmap_single(&pdev->dev,
rxdr->buffer_info[i].dma,
- rxdr->buffer_info[i].length,
+ E1000_RXBUFFER_2048,
DMA_FROM_DEVICE);
- if (rxdr->buffer_info[i].skb)
- dev_kfree_skb(rxdr->buffer_info[i].skb);
+ kfree(rxdr->buffer_info[i].rxbuf.data);
}
}
@@ -1006,7 +1005,7 @@
if (!txdr->count)
txdr->count = E1000_DEFAULT_TXD;
- txdr->buffer_info = kcalloc(txdr->count, sizeof(struct e1000_buffer),
+ txdr->buffer_info = kcalloc(txdr->count, sizeof(struct e1000_tx_buffer),
GFP_KERNEL);
if (!txdr->buffer_info) {
ret_val = 1;
@@ -1065,7 +1064,7 @@
if (!rxdr->count)
rxdr->count = E1000_DEFAULT_RXD;
- rxdr->buffer_info = kcalloc(rxdr->count, sizeof(struct e1000_buffer),
+ rxdr->buffer_info = kcalloc(rxdr->count, sizeof(struct e1000_rx_buffer),
GFP_KERNEL);
if (!rxdr->buffer_info) {
ret_val = 5;
@@ -1095,25 +1094,25 @@
for (i = 0; i < rxdr->count; i++) {
struct e1000_rx_desc *rx_desc = E1000_RX_DESC(*rxdr, i);
- struct sk_buff *skb;
+ u8 *buf;
- skb = alloc_skb(E1000_RXBUFFER_2048 + NET_IP_ALIGN, GFP_KERNEL);
- if (!skb) {
+ buf = kzalloc(E1000_RXBUFFER_2048 + NET_SKB_PAD + NET_IP_ALIGN,
+ GFP_KERNEL);
+ if (!buf) {
ret_val = 7;
goto err_nomem;
}
- skb_reserve(skb, NET_IP_ALIGN);
- rxdr->buffer_info[i].skb = skb;
- rxdr->buffer_info[i].length = E1000_RXBUFFER_2048;
+ rxdr->buffer_info[i].rxbuf.data = buf;
+
rxdr->buffer_info[i].dma =
- dma_map_single(&pdev->dev, skb->data,
+ dma_map_single(&pdev->dev,
+ buf + NET_SKB_PAD + NET_IP_ALIGN,
E1000_RXBUFFER_2048, DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, rxdr->buffer_info[i].dma)) {
ret_val = 8;
goto err_nomem;
}
rx_desc->buffer_addr = cpu_to_le64(rxdr->buffer_info[i].dma);
- memset(skb->data, 0x00, skb->len);
}
return 0;
@@ -1386,13 +1385,13 @@
memset(&skb->data[frame_size / 2 + 12], 0xAF, 1);
}
-static int e1000_check_lbtest_frame(struct sk_buff *skb,
+static int e1000_check_lbtest_frame(const unsigned char *data,
unsigned int frame_size)
{
frame_size &= ~1;
- if (skb->data[3] == 0xFF) {
- if (skb->data[frame_size / 2 + 10] == 0xBE &&
- skb->data[frame_size / 2 + 12] == 0xAF) {
+ if (*(data + 3) == 0xFF) {
+ if ((*(data + frame_size / 2 + 10) == 0xBE) &&
+ (*(data + frame_size / 2 + 12) == 0xAF)) {
return 0;
}
}
@@ -1440,11 +1439,12 @@
do { /* receive the sent packets */
dma_sync_single_for_cpu(&pdev->dev,
rxdr->buffer_info[l].dma,
- rxdr->buffer_info[l].length,
+ E1000_RXBUFFER_2048,
DMA_FROM_DEVICE);
ret_val = e1000_check_lbtest_frame(
- rxdr->buffer_info[l].skb,
+ rxdr->buffer_info[l].rxbuf.data +
+ NET_SKB_PAD + NET_IP_ALIGN,
1024);
if (!ret_val)
good_cnt++;
diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.c b/drivers/net/ethernet/intel/e1000/e1000_hw.c
index 1acf503..45c8c864 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_hw.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_hw.c
@@ -4837,84 +4837,6 @@
}
/**
- * e1000_tbi_adjust_stats
- * @hw: Struct containing variables accessed by shared code
- * @frame_len: The length of the frame in question
- * @mac_addr: The Ethernet destination address of the frame in question
- *
- * Adjusts the statistic counters when a frame is accepted by TBI_ACCEPT
- */
-void e1000_tbi_adjust_stats(struct e1000_hw *hw, struct e1000_hw_stats *stats,
- u32 frame_len, u8 *mac_addr)
-{
- u64 carry_bit;
-
- /* First adjust the frame length. */
- frame_len--;
- /* We need to adjust the statistics counters, since the hardware
- * counters overcount this packet as a CRC error and undercount
- * the packet as a good packet
- */
- /* This packet should not be counted as a CRC error. */
- stats->crcerrs--;
- /* This packet does count as a Good Packet Received. */
- stats->gprc++;
-
- /* Adjust the Good Octets received counters */
- carry_bit = 0x80000000 & stats->gorcl;
- stats->gorcl += frame_len;
- /* If the high bit of Gorcl (the low 32 bits of the Good Octets
- * Received Count) was one before the addition,
- * AND it is zero after, then we lost the carry out,
- * need to add one to Gorch (Good Octets Received Count High).
- * This could be simplified if all environments supported
- * 64-bit integers.
- */
- if (carry_bit && ((stats->gorcl & 0x80000000) == 0))
- stats->gorch++;
- /* Is this a broadcast or multicast? Check broadcast first,
- * since the test for a multicast frame will test positive on
- * a broadcast frame.
- */
- if (is_broadcast_ether_addr(mac_addr))
- /* Broadcast packet */
- stats->bprc++;
- else if (is_multicast_ether_addr(mac_addr))
- /* Multicast packet */
- stats->mprc++;
-
- if (frame_len == hw->max_frame_size) {
- /* In this case, the hardware has overcounted the number of
- * oversize frames.
- */
- if (stats->roc > 0)
- stats->roc--;
- }
-
- /* Adjust the bin counters when the extra byte put the frame in the
- * wrong bin. Remember that the frame_len was adjusted above.
- */
- if (frame_len == 64) {
- stats->prc64++;
- stats->prc127--;
- } else if (frame_len == 127) {
- stats->prc127++;
- stats->prc255--;
- } else if (frame_len == 255) {
- stats->prc255++;
- stats->prc511--;
- } else if (frame_len == 511) {
- stats->prc511++;
- stats->prc1023--;
- } else if (frame_len == 1023) {
- stats->prc1023++;
- stats->prc1522--;
- } else if (frame_len == 1522) {
- stats->prc1522++;
- }
-}
-
-/**
* e1000_get_bus_info
* @hw: Struct containing variables accessed by shared code
*
diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.h b/drivers/net/ethernet/intel/e1000/e1000_hw.h
index 11578c8..5cf7268c 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_hw.h
+++ b/drivers/net/ethernet/intel/e1000/e1000_hw.h
@@ -393,8 +393,6 @@
/* Everything else */
void e1000_reset_adaptive(struct e1000_hw *hw);
void e1000_update_adaptive(struct e1000_hw *hw);
-void e1000_tbi_adjust_stats(struct e1000_hw *hw, struct e1000_hw_stats *stats,
- u32 frame_len, u8 * mac_addr);
void e1000_get_bus_info(struct e1000_hw *hw);
void e1000_pci_set_mwi(struct e1000_hw *hw);
void e1000_pci_clear_mwi(struct e1000_hw *hw);
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index ad3d5d1..5f6aded 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -1497,7 +1497,7 @@
struct pci_dev *pdev = adapter->pdev;
int size;
- size = sizeof(struct e1000_buffer) * txdr->count;
+ size = sizeof(struct e1000_tx_buffer) * txdr->count;
txdr->buffer_info = vzalloc(size);
if (!txdr->buffer_info)
return -ENOMEM;
@@ -1687,7 +1687,7 @@
struct pci_dev *pdev = adapter->pdev;
int size, desc_len;
- size = sizeof(struct e1000_buffer) * rxdr->count;
+ size = sizeof(struct e1000_rx_buffer) * rxdr->count;
rxdr->buffer_info = vzalloc(size);
if (!rxdr->buffer_info)
return -ENOMEM;
@@ -1947,8 +1947,9 @@
e1000_free_tx_resources(adapter, &adapter->tx_ring[i]);
}
-static void e1000_unmap_and_free_tx_resource(struct e1000_adapter *adapter,
- struct e1000_buffer *buffer_info)
+static void
+e1000_unmap_and_free_tx_resource(struct e1000_adapter *adapter,
+ struct e1000_tx_buffer *buffer_info)
{
if (buffer_info->dma) {
if (buffer_info->mapped_as_page)
@@ -1977,7 +1978,7 @@
struct e1000_tx_ring *tx_ring)
{
struct e1000_hw *hw = &adapter->hw;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned long size;
unsigned int i;
@@ -1989,7 +1990,7 @@
}
netdev_reset_queue(adapter->netdev);
- size = sizeof(struct e1000_buffer) * tx_ring->count;
+ size = sizeof(struct e1000_tx_buffer) * tx_ring->count;
memset(tx_ring->buffer_info, 0, size);
/* Zero out the descriptor ring */
@@ -2053,6 +2054,28 @@
e1000_free_rx_resources(adapter, &adapter->rx_ring[i]);
}
+#define E1000_HEADROOM (NET_SKB_PAD + NET_IP_ALIGN)
+static unsigned int e1000_frag_len(const struct e1000_adapter *a)
+{
+ return SKB_DATA_ALIGN(a->rx_buffer_len + E1000_HEADROOM) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+}
+
+static void *e1000_alloc_frag(const struct e1000_adapter *a)
+{
+ unsigned int len = e1000_frag_len(a);
+ u8 *data = netdev_alloc_frag(len);
+
+ if (likely(data))
+ data += E1000_HEADROOM;
+ return data;
+}
+
+static void e1000_free_frag(const void *data)
+{
+ put_page(virt_to_head_page(data));
+}
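+
+/* Resulting frag layout (sketch): E1000_HEADROOM (NET_SKB_PAD plus
+ * NET_IP_ALIGN) of headroom, then rx_buffer_len bytes used as the
+ * DMA target, then room for the skb_shared_info that build_skb()
+ * consumes; the returned pointer already points past the headroom.
+ */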
+
/**
* e1000_clean_rx_ring - Free Rx Buffers per Queue
* @adapter: board private structure
@@ -2062,44 +2085,42 @@
struct e1000_rx_ring *rx_ring)
{
struct e1000_hw *hw = &adapter->hw;
- struct e1000_buffer *buffer_info;
+ struct e1000_rx_buffer *buffer_info;
struct pci_dev *pdev = adapter->pdev;
unsigned long size;
unsigned int i;
- /* Free all the Rx ring sk_buffs */
+ /* Free all the Rx netfrags */
for (i = 0; i < rx_ring->count; i++) {
buffer_info = &rx_ring->buffer_info[i];
- if (buffer_info->dma &&
- adapter->clean_rx == e1000_clean_rx_irq) {
- dma_unmap_single(&pdev->dev, buffer_info->dma,
- buffer_info->length,
- DMA_FROM_DEVICE);
- } else if (buffer_info->dma &&
- adapter->clean_rx == e1000_clean_jumbo_rx_irq) {
- dma_unmap_page(&pdev->dev, buffer_info->dma,
- buffer_info->length,
- DMA_FROM_DEVICE);
+ if (adapter->clean_rx == e1000_clean_rx_irq) {
+ if (buffer_info->dma)
+ dma_unmap_single(&pdev->dev, buffer_info->dma,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
+ if (buffer_info->rxbuf.data) {
+ e1000_free_frag(buffer_info->rxbuf.data);
+ buffer_info->rxbuf.data = NULL;
+ }
+ } else if (adapter->clean_rx == e1000_clean_jumbo_rx_irq) {
+ if (buffer_info->dma)
+ dma_unmap_page(&pdev->dev, buffer_info->dma,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
+ if (buffer_info->rxbuf.page) {
+ put_page(buffer_info->rxbuf.page);
+ buffer_info->rxbuf.page = NULL;
+ }
}
buffer_info->dma = 0;
- if (buffer_info->page) {
- put_page(buffer_info->page);
- buffer_info->page = NULL;
- }
- if (buffer_info->skb) {
- dev_kfree_skb(buffer_info->skb);
- buffer_info->skb = NULL;
- }
}
/* there also may be some cached data from a chained receive */
- if (rx_ring->rx_skb_top) {
- dev_kfree_skb(rx_ring->rx_skb_top);
- rx_ring->rx_skb_top = NULL;
- }
+ napi_free_frags(&adapter->napi);
+ rx_ring->rx_skb_top = NULL;
- size = sizeof(struct e1000_buffer) * rx_ring->count;
+ size = sizeof(struct e1000_rx_buffer) * rx_ring->count;
memset(rx_ring->buffer_info, 0, size);
/* Zero out the descriptor ring */
@@ -2678,7 +2699,7 @@
__be16 protocol)
{
struct e1000_context_desc *context_desc;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int i;
u32 cmd_length = 0;
u16 ipcse = 0, tucse, mss;
@@ -2750,7 +2771,7 @@
__be16 protocol)
{
struct e1000_context_desc *context_desc;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int i;
u8 css;
u32 cmd_len = E1000_TXD_CMD_DEXT;
@@ -2809,7 +2830,7 @@
{
struct e1000_hw *hw = &adapter->hw;
struct pci_dev *pdev = adapter->pdev;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int len = skb_headlen(skb);
unsigned int offset = 0, size, count = 0, i;
unsigned int f, bytecount, segs;
@@ -2955,7 +2976,7 @@
{
struct e1000_hw *hw = &adapter->hw;
struct e1000_tx_desc *tx_desc = NULL;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
u32 txd_upper = 0, txd_lower = E1000_TXD_CMD_IFCS;
unsigned int i;
@@ -3373,7 +3394,7 @@
for (i = 0; tx_ring->desc && (i < tx_ring->count); i++) {
struct e1000_tx_desc *tx_desc = E1000_TX_DESC(*tx_ring, i);
- struct e1000_buffer *buffer_info = &tx_ring->buffer_info[i];
+ struct e1000_tx_buffer *buffer_info = &tx_ring->buffer_info[i];
struct my_u { __le64 a; __le64 b; };
struct my_u *u = (struct my_u *)tx_desc;
const char *type;
@@ -3415,7 +3436,7 @@
for (i = 0; rx_ring->desc && (i < rx_ring->count); i++) {
struct e1000_rx_desc *rx_desc = E1000_RX_DESC(*rx_ring, i);
- struct e1000_buffer *buffer_info = &rx_ring->buffer_info[i];
+ struct e1000_rx_buffer *buffer_info = &rx_ring->buffer_info[i];
struct my_u { __le64 a; __le64 b; };
struct my_u *u = (struct my_u *)rx_desc;
const char *type;
@@ -3429,7 +3450,7 @@
pr_info("R[0x%03X] %016llX %016llX %016llX %p %s\n",
i, le64_to_cpu(u->a), le64_to_cpu(u->b),
- (u64)buffer_info->dma, buffer_info->skb, type);
+ (u64)buffer_info->dma, buffer_info->rxbuf.data, type);
} /* for */
/* dump the descriptor caches */
@@ -3811,7 +3832,7 @@
struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct e1000_tx_desc *tx_desc, *eop_desc;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int i, eop;
unsigned int count = 0;
unsigned int total_tx_bytes=0, total_tx_packets=0;
@@ -3949,12 +3970,12 @@
}
/**
- * e1000_consume_page - helper function
+ * e1000_consume_page - helper function for jumbo Rx path
**/
-static void e1000_consume_page(struct e1000_buffer *bi, struct sk_buff *skb,
+static void e1000_consume_page(struct e1000_rx_buffer *bi, struct sk_buff *skb,
u16 length)
{
- bi->page = NULL;
+ bi->rxbuf.page = NULL;
skb->len += length;
skb->data_len += length;
skb->truesize += PAGE_SIZE;
@@ -3981,6 +4002,113 @@
}
/**
+ * e1000_tbi_adjust_stats
+ * @hw: Struct containing variables accessed by shared code
+ * @frame_len: The length of the frame in question
+ * @mac_addr: The Ethernet destination address of the frame in question
+ *
+ * Adjusts the statistic counters when a frame is accepted by TBI_ACCEPT
+ */
+static void e1000_tbi_adjust_stats(struct e1000_hw *hw,
+ struct e1000_hw_stats *stats,
+ u32 frame_len, const u8 *mac_addr)
+{
+ u64 carry_bit;
+
+ /* First adjust the frame length. */
+ frame_len--;
+ /* We need to adjust the statistics counters, since the hardware
+ * counters overcount this packet as a CRC error and undercount
+ * the packet as a good packet
+ */
+ /* This packet should not be counted as a CRC error. */
+ stats->crcerrs--;
+ /* This packet does count as a Good Packet Received. */
+ stats->gprc++;
+
+ /* Adjust the Good Octets received counters */
+ carry_bit = 0x80000000 & stats->gorcl;
+ stats->gorcl += frame_len;
+ /* If the high bit of Gorcl (the low 32 bits of the Good Octets
+ * Received Count) was one before the addition,
+ * AND it is zero after, then we lost the carry out,
+ * need to add one to Gorch (Good Octets Received Count High).
+ * This could be simplified if all environments supported
+ * 64-bit integers.
+ */
+ if (carry_bit && ((stats->gorcl & 0x80000000) == 0))
+ stats->gorch++;
+ /* Is this a broadcast or multicast? Check broadcast first,
+ * since the test for a multicast frame will test positive on
+ * a broadcast frame.
+ */
+ if (is_broadcast_ether_addr(mac_addr))
+ stats->bprc++;
+ else if (is_multicast_ether_addr(mac_addr))
+ stats->mprc++;
+
+ if (frame_len == hw->max_frame_size) {
+ /* In this case, the hardware has overcounted the number of
+ * oversize frames.
+ */
+ if (stats->roc > 0)
+ stats->roc--;
+ }
+
+ /* Adjust the bin counters when the extra byte put the frame in the
+ * wrong bin. Remember that the frame_len was adjusted above.
+ */
+ if (frame_len == 64) {
+ stats->prc64++;
+ stats->prc127--;
+ } else if (frame_len == 127) {
+ stats->prc127++;
+ stats->prc255--;
+ } else if (frame_len == 255) {
+ stats->prc255++;
+ stats->prc511--;
+ } else if (frame_len == 511) {
+ stats->prc511++;
+ stats->prc1023--;
+ } else if (frame_len == 1023) {
+ stats->prc1023++;
+ stats->prc1522--;
+ } else if (frame_len == 1522) {
+ stats->prc1522++;
+ }
+}
+
+static bool e1000_tbi_should_accept(struct e1000_adapter *adapter,
+ u8 status, u8 errors,
+ u32 length, const u8 *data)
+{
+ struct e1000_hw *hw = &adapter->hw;
+ u8 last_byte = *(data + length - 1);
+
+ if (TBI_ACCEPT(hw, status, errors, length, last_byte)) {
+ unsigned long irq_flags;
+
+ spin_lock_irqsave(&adapter->stats_lock, irq_flags);
+ e1000_tbi_adjust_stats(hw, &adapter->stats, length, data);
+ spin_unlock_irqrestore(&adapter->stats_lock, irq_flags);
+
+ return true;
+ }
+
+ return false;
+}
+
+static struct sk_buff *e1000_alloc_rx_skb(struct e1000_adapter *adapter,
+ unsigned int bufsz)
+{
+ struct sk_buff *skb = netdev_alloc_skb_ip_align(adapter->netdev, bufsz);
+
+ if (unlikely(!skb))
+ adapter->alloc_rx_buff_failed++;
+ return skb;
+}
+
+/**
* e1000_clean_jumbo_rx_irq - Send received data up the network stack; legacy
* @adapter: board private structure
* @rx_ring: ring to clean
@@ -3994,12 +4122,10 @@
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do)
{
- struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc, *next_rxd;
- struct e1000_buffer *buffer_info, *next_buffer;
- unsigned long irq_flags;
+ struct e1000_rx_buffer *buffer_info, *next_buffer;
u32 length;
unsigned int i;
int cleaned_count = 0;
@@ -4020,8 +4146,6 @@
rmb(); /* read descriptor and rx_buffer_info after status DD */
status = rx_desc->status;
- skb = buffer_info->skb;
- buffer_info->skb = NULL;
if (++i == rx_ring->count) i = 0;
next_rxd = E1000_RX_DESC(*rx_ring, i);
@@ -4032,7 +4156,7 @@
cleaned = true;
cleaned_count++;
dma_unmap_page(&pdev->dev, buffer_info->dma,
- buffer_info->length, DMA_FROM_DEVICE);
+ adapter->rx_buffer_len, DMA_FROM_DEVICE);
buffer_info->dma = 0;
length = le16_to_cpu(rx_desc->length);
@@ -4040,25 +4164,15 @@
/* errors is only valid for DD + EOP descriptors */
if (unlikely((status & E1000_RXD_STAT_EOP) &&
(rx_desc->errors & E1000_RXD_ERR_FRAME_ERR_MASK))) {
- u8 *mapped;
- u8 last_byte;
+ u8 *mapped = page_address(buffer_info->rxbuf.page);
- mapped = page_address(buffer_info->page);
- last_byte = *(mapped + length - 1);
- if (TBI_ACCEPT(hw, status, rx_desc->errors, length,
- last_byte)) {
- spin_lock_irqsave(&adapter->stats_lock,
- irq_flags);
- e1000_tbi_adjust_stats(hw, &adapter->stats,
- length, mapped);
- spin_unlock_irqrestore(&adapter->stats_lock,
- irq_flags);
+ if (e1000_tbi_should_accept(adapter, status,
+ rx_desc->errors,
+ length, mapped)) {
length--;
+ } else if (netdev->features & NETIF_F_RXALL) {
+ goto process_skb;
} else {
- if (netdev->features & NETIF_F_RXALL)
- goto process_skb;
- /* recycle both page and skb */
- buffer_info->skb = skb;
/* an error means any chain goes out the window
* too
*/
@@ -4075,16 +4189,18 @@
/* this descriptor is only the beginning (or middle) */
if (!rxtop) {
/* this is the beginning of a chain */
- rxtop = skb;
- skb_fill_page_desc(rxtop, 0, buffer_info->page,
+ rxtop = napi_get_frags(&adapter->napi);
+ if (!rxtop)
+ break;
+
+ skb_fill_page_desc(rxtop, 0,
+ buffer_info->rxbuf.page,
0, length);
} else {
/* this is the middle of a chain */
skb_fill_page_desc(rxtop,
skb_shinfo(rxtop)->nr_frags,
- buffer_info->page, 0, length);
- /* re-use the skb, only consumed the page */
- buffer_info->skb = skb;
+ buffer_info->rxbuf.page, 0, length);
}
e1000_consume_page(buffer_info, rxtop, length);
goto next_desc;
@@ -4093,32 +4209,51 @@
/* end of the chain */
skb_fill_page_desc(rxtop,
skb_shinfo(rxtop)->nr_frags,
- buffer_info->page, 0, length);
- /* re-use the current skb, we only consumed the
- * page
- */
- buffer_info->skb = skb;
+ buffer_info->rxbuf.page, 0, length);
skb = rxtop;
rxtop = NULL;
e1000_consume_page(buffer_info, skb, length);
} else {
+ struct page *p;
/* no chain, got EOP, this buf is the packet
* copybreak to save the put_page/alloc_page
*/
- if (length <= copybreak &&
- skb_tailroom(skb) >= length) {
+ p = buffer_info->rxbuf.page;
+ if (length <= copybreak) {
u8 *vaddr;
- vaddr = kmap_atomic(buffer_info->page);
+
+ if (likely(!(netdev->features & NETIF_F_RXFCS)))
+ length -= 4;
+ skb = e1000_alloc_rx_skb(adapter,
+ length);
+ if (!skb)
+ break;
+
+ vaddr = kmap_atomic(p);
memcpy(skb_tail_pointer(skb), vaddr,
length);
kunmap_atomic(vaddr);
/* re-use the page, so don't erase
- * buffer_info->page
+ * buffer_info->rxbuf.page
*/
skb_put(skb, length);
+ e1000_rx_checksum(adapter,
+ status | rx_desc->errors << 24,
+ le16_to_cpu(rx_desc->csum), skb);
+
+ total_rx_bytes += skb->len;
+ total_rx_packets++;
+
+ e1000_receive_skb(adapter, status,
+ rx_desc->special, skb);
+ goto next_desc;
} else {
- skb_fill_page_desc(skb, 0,
- buffer_info->page, 0,
+ skb = napi_get_frags(&adapter->napi);
+ if (!skb) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
+ skb_fill_page_desc(skb, 0, p, 0,
length);
e1000_consume_page(buffer_info, skb,
length);
@@ -4137,14 +4272,14 @@
pskb_trim(skb, skb->len - 4);
total_rx_packets++;
- /* eth type trans needs skb->data to point to something */
- if (!pskb_may_pull(skb, ETH_HLEN)) {
- e_err(drv, "pskb_may_pull failed.\n");
- dev_kfree_skb(skb);
- goto next_desc;
+ if (status & E1000_RXD_STAT_VP) {
+ __le16 vlan = rx_desc->special;
+ u16 vid = le16_to_cpu(vlan) & E1000_RXD_SPC_VLAN_MASK;
+
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
}
- e1000_receive_skb(adapter, status, rx_desc->special, skb);
+ napi_gro_frags(&adapter->napi);
next_desc:
rx_desc->status = 0;
@@ -4175,25 +4310,25 @@
/* this should improve performance for small packets with large amounts
* of reassembly being done in the stack
*/
-static void e1000_check_copybreak(struct net_device *netdev,
- struct e1000_buffer *buffer_info,
- u32 length, struct sk_buff **skb)
+static struct sk_buff *e1000_copybreak(struct e1000_adapter *adapter,
+ struct e1000_rx_buffer *buffer_info,
+ u32 length, const void *data)
{
- struct sk_buff *new_skb;
+ struct sk_buff *skb;
if (length > copybreak)
- return;
+ return NULL;
- new_skb = netdev_alloc_skb_ip_align(netdev, length);
- if (!new_skb)
- return;
+ skb = e1000_alloc_rx_skb(adapter, length);
+ if (!skb)
+ return NULL;
- skb_copy_to_linear_data_offset(new_skb, -NET_IP_ALIGN,
- (*skb)->data - NET_IP_ALIGN,
- length + NET_IP_ALIGN);
- /* save the skb in buffer_info as good */
- buffer_info->skb = *skb;
- *skb = new_skb;
+ dma_sync_single_for_cpu(&adapter->pdev->dev, buffer_info->dma,
+ length, DMA_FROM_DEVICE);
+
+ memcpy(skb_put(skb, length), data, length);
+
+ return skb;
}
/**
@@ -4207,12 +4342,10 @@
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do)
{
- struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc, *next_rxd;
- struct e1000_buffer *buffer_info, *next_buffer;
- unsigned long flags;
+ struct e1000_rx_buffer *buffer_info, *next_buffer;
u32 length;
unsigned int i;
int cleaned_count = 0;
@@ -4225,6 +4358,7 @@
while (rx_desc->status & E1000_RXD_STAT_DD) {
struct sk_buff *skb;
+ u8 *data;
u8 status;
if (*work_done >= work_to_do)
@@ -4233,10 +4367,27 @@
rmb(); /* read descriptor and rx_buffer_info after status DD */
status = rx_desc->status;
- skb = buffer_info->skb;
- buffer_info->skb = NULL;
+ length = le16_to_cpu(rx_desc->length);
- prefetch(skb->data - NET_IP_ALIGN);
+ data = buffer_info->rxbuf.data;
+ prefetch(data);
+ skb = e1000_copybreak(adapter, buffer_info, length, data);
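+ /* no copybreak copy was made: wrap the existing frag with
+ * build_skb() and release the buffer slot so it gets refilled
+ */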
+ if (!skb) {
+ unsigned int frag_len = e1000_frag_len(adapter);
+
+ skb = build_skb(data - E1000_HEADROOM, frag_len);
+ if (!skb) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
+
+ skb_reserve(skb, E1000_HEADROOM);
+ dma_unmap_single(&pdev->dev, buffer_info->dma,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
+ buffer_info->dma = 0;
+ buffer_info->rxbuf.data = NULL;
+ }
if (++i == rx_ring->count) i = 0;
next_rxd = E1000_RX_DESC(*rx_ring, i);
@@ -4246,11 +4397,7 @@
cleaned = true;
cleaned_count++;
- dma_unmap_single(&pdev->dev, buffer_info->dma,
- buffer_info->length, DMA_FROM_DEVICE);
- buffer_info->dma = 0;
- length = le16_to_cpu(rx_desc->length);
/* !EOP means multiple descriptors were used to store a single
 * packet, if that's the case we need to toss it. In fact, we
 * need to toss every packet with the EOP bit clear and the next
 * frame that _does_ have the EOP bit set, as it is by definition
 * only a frame fragment
 */
@@ -4262,29 +4409,22 @@
if (adapter->discarding) {
/* All receives must fit into a single buffer */
- e_dbg("Receive packet consumed multiple buffers\n");
- /* recycle */
- buffer_info->skb = skb;
+ netdev_dbg(netdev, "Receive packet consumed multiple buffers\n");
+ dev_kfree_skb(skb);
if (status & E1000_RXD_STAT_EOP)
adapter->discarding = false;
goto next_desc;
}
if (unlikely(rx_desc->errors & E1000_RXD_ERR_FRAME_ERR_MASK)) {
- u8 last_byte = *(skb->data + length - 1);
- if (TBI_ACCEPT(hw, status, rx_desc->errors, length,
- last_byte)) {
- spin_lock_irqsave(&adapter->stats_lock, flags);
- e1000_tbi_adjust_stats(hw, &adapter->stats,
- length, skb->data);
- spin_unlock_irqrestore(&adapter->stats_lock,
- flags);
+ if (e1000_tbi_should_accept(adapter, status,
+ rx_desc->errors,
+ length, data)) {
length--;
+ } else if (netdev->features & NETIF_F_RXALL) {
+ goto process_skb;
} else {
- if (netdev->features & NETIF_F_RXALL)
- goto process_skb;
- /* recycle */
- buffer_info->skb = skb;
+ dev_kfree_skb(skb);
goto next_desc;
}
}
@@ -4299,9 +4439,10 @@
*/
length -= 4;
- e1000_check_copybreak(netdev, buffer_info, length, &skb);
-
- skb_put(skb, length);
+ if (buffer_info->rxbuf.data == NULL)
+ skb_put(skb, length);
+ else /* copybreak skb */
+ skb_trim(skb, length);
/* Receive Checksum Offload */
e1000_rx_checksum(adapter,
@@ -4347,38 +4488,19 @@
e1000_alloc_jumbo_rx_buffers(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring, int cleaned_count)
{
- struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc;
- struct e1000_buffer *buffer_info;
- struct sk_buff *skb;
+ struct e1000_rx_buffer *buffer_info;
unsigned int i;
- unsigned int bufsz = 256 - 16 /*for skb_reserve */ ;
i = rx_ring->next_to_use;
buffer_info = &rx_ring->buffer_info[i];
while (cleaned_count--) {
- skb = buffer_info->skb;
- if (skb) {
- skb_trim(skb, 0);
- goto check_page;
- }
-
- skb = netdev_alloc_skb_ip_align(netdev, bufsz);
- if (unlikely(!skb)) {
- /* Better luck next round */
- adapter->alloc_rx_buff_failed++;
- break;
- }
-
- buffer_info->skb = skb;
- buffer_info->length = adapter->rx_buffer_len;
-check_page:
/* allocate a new page if necessary */
- if (!buffer_info->page) {
- buffer_info->page = alloc_page(GFP_ATOMIC);
- if (unlikely(!buffer_info->page)) {
+ if (!buffer_info->rxbuf.page) {
+ buffer_info->rxbuf.page = alloc_page(GFP_ATOMIC);
+ if (unlikely(!buffer_info->rxbuf.page)) {
adapter->alloc_rx_buff_failed++;
break;
}
@@ -4386,17 +4508,15 @@
if (!buffer_info->dma) {
buffer_info->dma = dma_map_page(&pdev->dev,
- buffer_info->page, 0,
- buffer_info->length,
+ buffer_info->rxbuf.page, 0,
+ adapter->rx_buffer_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
- put_page(buffer_info->page);
- dev_kfree_skb(skb);
- buffer_info->page = NULL;
- buffer_info->skb = NULL;
+ put_page(buffer_info->rxbuf.page);
+ buffer_info->rxbuf.page = NULL;
buffer_info->dma = 0;
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
}
@@ -4432,11 +4552,9 @@
int cleaned_count)
{
struct e1000_hw *hw = &adapter->hw;
- struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc;
- struct e1000_buffer *buffer_info;
- struct sk_buff *skb;
+ struct e1000_rx_buffer *buffer_info;
unsigned int i;
unsigned int bufsz = adapter->rx_buffer_len;
@@ -4444,57 +4562,52 @@
buffer_info = &rx_ring->buffer_info[i];
while (cleaned_count--) {
- skb = buffer_info->skb;
- if (skb) {
- skb_trim(skb, 0);
- goto map_skb;
- }
+ void *data;
- skb = netdev_alloc_skb_ip_align(netdev, bufsz);
- if (unlikely(!skb)) {
+ if (buffer_info->rxbuf.data)
+ goto skip;
+
+ data = e1000_alloc_frag(adapter);
+ if (!data) {
/* Better luck next round */
adapter->alloc_rx_buff_failed++;
break;
}
/* Fix for errata 23, can't cross 64kB boundary */
- if (!e1000_check_64k_bound(adapter, skb->data, bufsz)) {
- struct sk_buff *oldskb = skb;
+ if (!e1000_check_64k_bound(adapter, data, bufsz)) {
+ void *olddata = data;
e_err(rx_err, "skb align check failed: %u bytes at "
- "%p\n", bufsz, skb->data);
+ "%p\n", bufsz, data);
/* Try again, without freeing the previous */
- skb = netdev_alloc_skb_ip_align(netdev, bufsz);
+ data = e1000_alloc_frag(adapter);
/* Failed allocation, critical failure */
- if (!skb) {
- dev_kfree_skb(oldskb);
+ if (!data) {
+ e1000_free_frag(olddata);
adapter->alloc_rx_buff_failed++;
break;
}
- if (!e1000_check_64k_bound(adapter, skb->data, bufsz)) {
+ if (!e1000_check_64k_bound(adapter, data, bufsz)) {
/* give up */
- dev_kfree_skb(skb);
- dev_kfree_skb(oldskb);
+ e1000_free_frag(data);
+ e1000_free_frag(olddata);
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
/* Use new allocation */
- dev_kfree_skb(oldskb);
+ e1000_free_frag(olddata);
}
- buffer_info->skb = skb;
- buffer_info->length = adapter->rx_buffer_len;
-map_skb:
buffer_info->dma = dma_map_single(&pdev->dev,
- skb->data,
- buffer_info->length,
+ data,
+ adapter->rx_buffer_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
- dev_kfree_skb(skb);
- buffer_info->skb = NULL;
+ e1000_free_frag(data);
buffer_info->dma = 0;
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
/* XXX if it was allocated cleanly it will never map to a
@@ -4508,17 +4621,20 @@
e_err(rx_err, "dma align check failed: %u bytes at "
"%p\n", adapter->rx_buffer_len,
(void *)(unsigned long)buffer_info->dma);
- dev_kfree_skb(skb);
- buffer_info->skb = NULL;
dma_unmap_single(&pdev->dev, buffer_info->dma,
adapter->rx_buffer_len,
DMA_FROM_DEVICE);
+
+ e1000_free_frag(data);
+ buffer_info->rxbuf.data = NULL;
buffer_info->dma = 0;
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
+ buffer_info->rxbuf.data = data;
+ skip:
rx_desc = E1000_RX_DESC(*rx_ring, i);
rx_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
diff --git a/drivers/net/ethernet/intel/fm10k/Makefile b/drivers/net/ethernet/intel/fm10k/Makefile
new file mode 100644
index 0000000..08859dd
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/Makefile
@@ -0,0 +1,33 @@
+################################################################################
+#
+# Intel Ethernet Switch Host Interface Driver
+# Copyright(c) 2013 - 2014 Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+# more details.
+#
+# The full GNU General Public License is included in this distribution in
+# the file called "COPYING".
+#
+# Contact Information:
+# e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+# Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+#
+################################################################################
+
+#
+# Makefile for the Intel(R) FM10000 Ethernet Switch Host Interface driver
+#
+
+obj-$(CONFIG_FM10K) += fm10k.o
+
+fm10k-objs := fm10k_main.o fm10k_common.o fm10k_pci.o \
+ fm10k_netdev.o fm10k_ethtool.o fm10k_pf.o fm10k_vf.o \
+ fm10k_mbx.o fm10k_iov.o fm10k_tlv.o \
+ fm10k_debugfs.o fm10k_ptp.o fm10k_dcbnl.o
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k.h b/drivers/net/ethernet/intel/fm10k/fm10k.h
new file mode 100644
index 0000000..0565827
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k.h
@@ -0,0 +1,534 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_H_
+#define _FM10K_H_
+
+#include <linux/types.h>
+#include <linux/etherdevice.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_vlan.h>
+#include <linux/pci.h>
+#include <linux/net_tstamp.h>
+#include <linux/clocksource.h>
+#include <linux/ptp_clock_kernel.h>
+
+#include "fm10k_pf.h"
+#include "fm10k_vf.h"
+
+#define FM10K_MAX_JUMBO_FRAME_SIZE 15358 /* Maximum supported size 15K */
+
+#define MAX_QUEUES FM10K_MAX_QUEUES_PF
+
+#define FM10K_MIN_RXD 128
+#define FM10K_MAX_RXD 4096
+#define FM10K_DEFAULT_RXD 256
+
+#define FM10K_MIN_TXD 128
+#define FM10K_MAX_TXD 4096
+#define FM10K_DEFAULT_TXD 256
+#define FM10K_DEFAULT_TX_WORK 256
+
+#define FM10K_RXBUFFER_256 256
+#define FM10K_RXBUFFER_16384 16384
+#define FM10K_RX_HDR_LEN FM10K_RXBUFFER_256
+#if PAGE_SIZE <= FM10K_RXBUFFER_16384
+#define FM10K_RX_BUFSZ (PAGE_SIZE / 2)
+#else
+#define FM10K_RX_BUFSZ FM10K_RXBUFFER_16384
+#endif
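+
+/* On 4 KiB pages this selects a 2 KiB buffer, so each page can be
+ * split in half and recycled by flipping page_offset between halves.
+ */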
+
+/* How many Rx buffers do we bundle into one write to the hardware? */
+#define FM10K_RX_BUFFER_WRITE 16 /* Must be power of 2 */
+
+#define FM10K_MAX_STATIONS 63
+struct fm10k_l2_accel {
+ int size;
+ u16 count;
+ u16 dglort;
+ struct rcu_head rcu;
+ struct net_device *macvlan[0];
+};
+
+enum fm10k_ring_state_t {
+ __FM10K_TX_DETECT_HANG,
+ __FM10K_HANG_CHECK_ARMED,
+};
+
+#define check_for_tx_hang(ring) \
+ test_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
+#define set_check_for_tx_hang(ring) \
+ set_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
+#define clear_check_for_tx_hang(ring) \
+ clear_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
+
+struct fm10k_tx_buffer {
+ struct fm10k_tx_desc *next_to_watch;
+ struct sk_buff *skb;
+ unsigned int bytecount;
+ u16 gso_segs;
+ u16 tx_flags;
+ DEFINE_DMA_UNMAP_ADDR(dma);
+ DEFINE_DMA_UNMAP_LEN(len);
+};
+
+struct fm10k_rx_buffer {
+ dma_addr_t dma;
+ struct page *page;
+ u32 page_offset;
+};
+
+struct fm10k_queue_stats {
+ u64 packets;
+ u64 bytes;
+};
+
+struct fm10k_tx_queue_stats {
+ u64 restart_queue;
+ u64 csum_err;
+ u64 tx_busy;
+ u64 tx_done_old;
+};
+
+struct fm10k_rx_queue_stats {
+ u64 alloc_failed;
+ u64 csum_err;
+ u64 errors;
+};
+
+struct fm10k_ring {
+ struct fm10k_q_vector *q_vector;/* backpointer to host q_vector */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct device *dev; /* device for DMA mapping */
+ struct fm10k_l2_accel __rcu *l2_accel; /* L2 acceleration list */
+ void *desc; /* descriptor ring memory */
+ union {
+ struct fm10k_tx_buffer *tx_buffer;
+ struct fm10k_rx_buffer *rx_buffer;
+ };
+ u32 __iomem *tail;
+ unsigned long state;
+ dma_addr_t dma; /* phys. address of descriptor ring */
+ unsigned int size; /* length in bytes */
+
+ u8 queue_index; /* needed for queue management */
+ u8 reg_idx; /* holds the special value that gets
+ * the hardware register offset
+ * associated with this ring, which is
+ * different for DCB and RSS modes
+ */
+ u8 qos_pc; /* priority class of queue */
+ u16 vid; /* default vlan ID of queue */
+ u16 count; /* amount of descriptors */
+
+ u16 next_to_alloc;
+ u16 next_to_use;
+ u16 next_to_clean;
+
+ struct fm10k_queue_stats stats;
+ struct u64_stats_sync syncp;
+ union {
+ /* Tx */
+ struct fm10k_tx_queue_stats tx_stats;
+ /* Rx */
+ struct {
+ struct fm10k_rx_queue_stats rx_stats;
+ struct sk_buff *skb;
+ };
+ };
+} ____cacheline_internodealigned_in_smp;
+
+struct fm10k_ring_container {
+ struct fm10k_ring *ring; /* pointer to linked list of rings */
+ unsigned int total_bytes; /* total bytes processed this int */
+ unsigned int total_packets; /* total packets processed this int */
+ u16 work_limit; /* total work allowed per interrupt */
+ u16 itr; /* interrupt throttle rate value */
+ u8 count; /* total number of rings in vector */
+};
+
+#define FM10K_ITR_MAX 0x0FFF /* maximum value for ITR */
+#define FM10K_ITR_10K 100 /* 100us */
+#define FM10K_ITR_20K 50 /* 50us */
+#define FM10K_ITR_ADAPTIVE 0x8000 /* adaptive interrupt moderation flag */
+
+#define FM10K_ITR_ENABLE (FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR)
+
+static inline struct netdev_queue *txring_txq(const struct fm10k_ring *ring)
+{
+ return &ring->netdev->_tx[ring->queue_index];
+}
+
+/* iterator for handling rings in ring container */
+#define fm10k_for_each_ring(pos, head) \
+ for (pos = &(head).ring[(head).count]; (--pos) >= (head).ring;)
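+/* (walks the array backwards, from ring[count - 1] down to ring[0]) */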
+
+#define MAX_Q_VECTORS 256
+#define MIN_Q_VECTORS 1
+enum fm10k_non_q_vectors {
+ FM10K_MBX_VECTOR,
+#define NON_Q_VECTORS_VF NON_Q_VECTORS_PF
+ NON_Q_VECTORS_PF
+};
+
+#define NON_Q_VECTORS(hw) (((hw)->mac.type == fm10k_mac_pf) ? \
+ NON_Q_VECTORS_PF : \
+ NON_Q_VECTORS_VF)
+#define MIN_MSIX_COUNT(hw) (MIN_Q_VECTORS + NON_Q_VECTORS(hw))
+
+struct fm10k_q_vector {
+ struct fm10k_intfc *interface;
+ u32 __iomem *itr; /* pointer to ITR register for this vector */
+ u16 v_idx; /* index of q_vector within interface array */
+ struct fm10k_ring_container rx, tx;
+
+ struct napi_struct napi;
+ char name[IFNAMSIZ + 9];
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *dbg_q_vector;
+#endif /* CONFIG_DEBUG_FS */
+ struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ /* for dynamic allocation of rings associated with this q_vector */
+ struct fm10k_ring ring[0] ____cacheline_internodealigned_in_smp;
+};
+
+enum fm10k_ring_f_enum {
+ RING_F_RSS,
+ RING_F_QOS,
+ RING_F_ARRAY_SIZE /* must be last in enum set */
+};
+
+struct fm10k_ring_feature {
+ u16 limit; /* upper limit on feature indices */
+ u16 indices; /* current value of indices */
+ u16 mask; /* Mask used for feature to ring mapping */
+ u16 offset; /* offset to start of feature */
+};
+
+struct fm10k_iov_data {
+ unsigned int num_vfs;
+ unsigned int next_vf_mbx;
+ struct rcu_head rcu;
+ struct fm10k_vf_info vf_info[0];
+};
+
+#define fm10k_vxlan_port_for_each(vp, intfc) \
+ list_for_each_entry(vp, &(intfc)->vxlan_port, list)
+struct fm10k_vxlan_port {
+ struct list_head list;
+ sa_family_t sa_family;
+ __be16 port;
+};
+
+struct fm10k_intfc {
+ unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+ struct net_device *netdev;
+ struct fm10k_l2_accel *l2_accel; /* pointer to L2 acceleration list */
+ struct pci_dev *pdev;
+ unsigned long state;
+
+ u32 flags;
+#define FM10K_FLAG_RESET_REQUESTED (u32)(1 << 0)
+#define FM10K_FLAG_RSS_FIELD_IPV4_UDP (u32)(1 << 1)
+#define FM10K_FLAG_RSS_FIELD_IPV6_UDP (u32)(1 << 2)
+#define FM10K_FLAG_RX_TS_ENABLED (u32)(1 << 3)
+#define FM10K_FLAG_SWPRI_CONFIG (u32)(1 << 4)
+ int xcast_mode;
+
+ /* Tx fast path data */
+ int num_tx_queues;
+ u16 tx_itr;
+
+ /* Rx fast path data */
+ int num_rx_queues;
+ u16 rx_itr;
+
+ /* TX */
+ struct fm10k_ring *tx_ring[MAX_QUEUES] ____cacheline_aligned_in_smp;
+
+ u64 restart_queue;
+ u64 tx_busy;
+ u64 tx_csum_errors;
+ u64 alloc_failed;
+ u64 rx_csum_errors;
+ u64 rx_errors;
+
+ u64 tx_bytes_nic;
+ u64 tx_packets_nic;
+ u64 rx_bytes_nic;
+ u64 rx_packets_nic;
+ u64 rx_drops_nic;
+ u64 rx_overrun_pf;
+ u64 rx_overrun_vf;
+ u32 tx_timeout_count;
+
+ /* RX */
+ struct fm10k_ring *rx_ring[MAX_QUEUES];
+
+ /* Queueing vectors */
+ struct fm10k_q_vector *q_vector[MAX_Q_VECTORS];
+ struct msix_entry *msix_entries;
+ int num_q_vectors; /* current number of q_vectors for device */
+ struct fm10k_ring_feature ring_feature[RING_F_ARRAY_SIZE];
+
+ /* SR-IOV information management structure */
+ struct fm10k_iov_data *iov_data;
+
+ struct fm10k_hw_stats stats;
+ struct fm10k_hw hw;
+ u32 __iomem *uc_addr;
+ u32 __iomem *sw_addr;
+ u16 msg_enable;
+ u16 tx_ring_count;
+ u16 rx_ring_count;
+ struct timer_list service_timer;
+ struct work_struct service_task;
+ unsigned long next_stats_update;
+ unsigned long next_tx_hang_check;
+ unsigned long last_reset;
+ unsigned long link_down_event;
+ bool host_ready;
+
+ u32 reta[FM10K_RETA_SIZE];
+ u32 rssrk[FM10K_RSSRK_SIZE];
+
+ /* VXLAN port tracking information */
+ struct list_head vxlan_port;
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *dbg_intfc;
+
+#endif /* CONFIG_DEBUG_FS */
+ struct ptp_clock_info ptp_caps;
+ struct ptp_clock *ptp_clock;
+
+ struct sk_buff_head ts_tx_skb_queue;
+ u32 tx_hwtstamp_timeouts;
+
+ struct hwtstamp_config ts_config;
+ /* We are unable to actually adjust the clock beyond the frequency
+ * value. Once the clock is started there is no resetting it. As
+ * such we maintain a separate offset from the actual hardware clock
+ * to allow for offset adjustment.
+ */
+ s64 ptp_adjust;
+ rwlock_t systime_lock;
+#ifdef CONFIG_DCB
+ u8 pfc_en;
+#endif
+ u8 rx_pause;
+
+ /* GLORT resources in use by PF */
+ u16 glort;
+ u16 glort_count;
+
+ /* VLAN ID for updating multicast/unicast lists */
+ u16 vid;
+};
+
+enum fm10k_state_t {
+ __FM10K_RESETTING,
+ __FM10K_DOWN,
+ __FM10K_SERVICE_SCHED,
+ __FM10K_SERVICE_DISABLE,
+ __FM10K_MBX_LOCK,
+ __FM10K_LINK_DOWN,
+};
+
+static inline void fm10k_mbx_lock(struct fm10k_intfc *interface)
+{
+ /* busy loop if we cannot obtain the lock, as some calls
+ * such as ndo_set_rx_mode may be made in atomic context
+ */
+ while (test_and_set_bit(__FM10K_MBX_LOCK, &interface->state))
+ udelay(20);
+}
+
+static inline void fm10k_mbx_unlock(struct fm10k_intfc *interface)
+{
+ /* flush memory to make sure state is correct */
+ smp_mb__before_atomic();
+ clear_bit(__FM10K_MBX_LOCK, &interface->state);
+}
+
+static inline int fm10k_mbx_trylock(struct fm10k_intfc *interface)
+{
+ return !test_and_set_bit(__FM10K_MBX_LOCK, &interface->state);
+}
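+
+/* Usage sketch (illustrative): callers bracket mailbox operations with the
+ * helpers above, as fm10k_mbx_test() in fm10k_ethtool.c does:
+ *
+ *	fm10k_mbx_lock(interface);
+ *	mbx->ops.process(hw, mbx);
+ *	fm10k_mbx_unlock(interface);
+ *
+ * fm10k_mbx_trylock() suits callers that would rather defer the work than
+ * busy-wait while the lock is contended.
+ */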
+
+/* fm10k_test_staterr - test bits in Rx descriptor status and error fields */
+static inline __le32 fm10k_test_staterr(union fm10k_rx_desc *rx_desc,
+ const u32 stat_err_bits)
+{
+ return rx_desc->d.staterr & cpu_to_le32(stat_err_bits);
+}
+
+/* fm10k_desc_unused - calculate the number of unused descriptors */
+static inline u16 fm10k_desc_unused(struct fm10k_ring *ring)
+{
+ s16 unused = ring->next_to_clean - ring->next_to_use - 1;
+
+ return likely(unused < 0) ? unused + ring->count : unused;
+}
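+
+/* Worked example (illustrative): with count = 512, next_to_clean = 10 and
+ * next_to_use = 200, unused = 10 - 200 - 1 = -191, and adding the ring
+ * size yields 321 free descriptors. The extra -1 keeps next_to_use from
+ * ever catching next_to_clean on a completely full ring.
+ */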
+
+#define FM10K_TX_DESC(R, i) \
+ (&(((struct fm10k_tx_desc *)((R)->desc))[i]))
+#define FM10K_RX_DESC(R, i) \
+ (&(((union fm10k_rx_desc *)((R)->desc))[i]))
+
+#define FM10K_MAX_TXD_PWR 14
+#define FM10K_MAX_DATA_PER_TXD (1 << FM10K_MAX_TXD_PWR)
+
+/* Tx Descriptors needed, worst case */
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), FM10K_MAX_DATA_PER_TXD)
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+
+enum fm10k_tx_flags {
+ /* Tx offload flags */
+ FM10K_TX_FLAGS_CSUM = 0x01,
+};
+
+/* This structure is stored as little endian values as that is the native
+ * format of the Rx descriptor. The ordering of these fields is reversed
+ * from the actual ftag header to allow for a single bswap to take care
+ * of placing all of the values in network order
+ */
+union fm10k_ftag_info {
+ __le64 ftag;
+ struct {
+ /* dglort and sglort combined into a single 32bit desc read */
+ __le32 glort;
+ /* upper 16 bits of vlan are reserved 0 for swpri_type_user */
+ __le32 vlan;
+ } d;
+ struct {
+ __le16 dglort;
+ __le16 sglort;
+ __le16 vlan;
+ __le16 swpri_type_user;
+ } w;
+};
+
+struct fm10k_cb {
+ union {
+ __le64 tstamp;
+ unsigned long ts_tx_timeout;
+ };
+ union fm10k_ftag_info fi;
+};
+
+#define FM10K_CB(skb) ((struct fm10k_cb *)(skb)->cb)
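+
+/* Usage sketch (illustrative, the field values are hypothetical): per-skb
+ * driver state rides in the skb control buffer, e.g.
+ *
+ *	FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
+ *	FM10K_CB(skb)->ts_tx_timeout = jiffies + timeout;
+ *
+ * where "timeout" stands in for whatever deadline the caller tracks.
+ */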
+
+/* main */
+extern char fm10k_driver_name[];
+extern const char fm10k_driver_version[];
+int fm10k_init_queueing_scheme(struct fm10k_intfc *interface);
+void fm10k_clear_queueing_scheme(struct fm10k_intfc *interface);
+netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
+ struct fm10k_ring *tx_ring);
+void fm10k_tx_timeout_reset(struct fm10k_intfc *interface);
+bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring);
+void fm10k_alloc_rx_buffers(struct fm10k_ring *rx_ring, u16 cleaned_count);
+
+/* PCI */
+void fm10k_mbx_free_irq(struct fm10k_intfc *);
+int fm10k_mbx_request_irq(struct fm10k_intfc *);
+void fm10k_qv_free_irq(struct fm10k_intfc *interface);
+int fm10k_qv_request_irq(struct fm10k_intfc *interface);
+int fm10k_register_pci_driver(void);
+void fm10k_unregister_pci_driver(void);
+void fm10k_up(struct fm10k_intfc *interface);
+void fm10k_down(struct fm10k_intfc *interface);
+void fm10k_update_stats(struct fm10k_intfc *interface);
+void fm10k_service_event_schedule(struct fm10k_intfc *interface);
+void fm10k_update_rx_drop_en(struct fm10k_intfc *interface);
+
+/* Netdev */
+struct net_device *fm10k_alloc_netdev(void);
+int fm10k_setup_rx_resources(struct fm10k_ring *);
+int fm10k_setup_tx_resources(struct fm10k_ring *);
+void fm10k_free_rx_resources(struct fm10k_ring *);
+void fm10k_free_tx_resources(struct fm10k_ring *);
+void fm10k_clean_all_rx_rings(struct fm10k_intfc *);
+void fm10k_clean_all_tx_rings(struct fm10k_intfc *);
+void fm10k_unmap_and_free_tx_resource(struct fm10k_ring *,
+ struct fm10k_tx_buffer *);
+void fm10k_restore_rx_state(struct fm10k_intfc *);
+void fm10k_reset_rx_state(struct fm10k_intfc *);
+int fm10k_setup_tc(struct net_device *dev, u8 tc);
+int fm10k_open(struct net_device *netdev);
+int fm10k_close(struct net_device *netdev);
+
+/* Ethtool */
+void fm10k_set_ethtool_ops(struct net_device *dev);
+
+/* IOV */
+s32 fm10k_iov_event(struct fm10k_intfc *interface);
+s32 fm10k_iov_mbx(struct fm10k_intfc *interface);
+void fm10k_iov_suspend(struct pci_dev *pdev);
+int fm10k_iov_resume(struct pci_dev *pdev);
+void fm10k_iov_disable(struct pci_dev *pdev);
+int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs);
+s32 fm10k_iov_update_pvid(struct fm10k_intfc *interface, u16 glort, u16 pvid);
+int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac);
+int fm10k_ndo_set_vf_vlan(struct net_device *netdev,
+ int vf_idx, u16 vid, u8 qos);
+int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int rate,
+ int unused);
+int fm10k_ndo_get_vf_config(struct net_device *netdev,
+ int vf_idx, struct ifla_vf_info *ivi);
+
+/* DebugFS */
+#ifdef CONFIG_DEBUG_FS
+void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector);
+void fm10k_dbg_q_vector_exit(struct fm10k_q_vector *q_vector);
+void fm10k_dbg_intfc_init(struct fm10k_intfc *interface);
+void fm10k_dbg_intfc_exit(struct fm10k_intfc *interface);
+void fm10k_dbg_init(void);
+void fm10k_dbg_exit(void);
+#else
+static inline void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector) {}
+static inline void fm10k_dbg_q_vector_exit(struct fm10k_q_vector *q_vector) {}
+static inline void fm10k_dbg_intfc_init(struct fm10k_intfc *interface) {}
+static inline void fm10k_dbg_intfc_exit(struct fm10k_intfc *interface) {}
+static inline void fm10k_dbg_init(void) {}
+static inline void fm10k_dbg_exit(void) {}
+#endif /* CONFIG_DEBUG_FS */
+
+/* Time Stamping */
+void fm10k_systime_to_hwtstamp(struct fm10k_intfc *interface,
+ struct skb_shared_hwtstamps *hwtstamp,
+ u64 systime);
+void fm10k_ts_tx_enqueue(struct fm10k_intfc *interface, struct sk_buff *skb);
+void fm10k_ts_tx_hwtstamp(struct fm10k_intfc *interface, __le16 dglort,
+ u64 systime);
+void fm10k_ts_reset(struct fm10k_intfc *interface);
+void fm10k_ts_init(struct fm10k_intfc *interface);
+void fm10k_ts_tx_subtask(struct fm10k_intfc *interface);
+void fm10k_ptp_register(struct fm10k_intfc *interface);
+void fm10k_ptp_unregister(struct fm10k_intfc *interface);
+int fm10k_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
+int fm10k_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
+
+/* DCB */
+void fm10k_dcbnl_set_ops(struct net_device *dev);
+#endif /* _FM10K_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_common.c b/drivers/net/ethernet/intel/fm10k/fm10k_common.c
new file mode 100644
index 0000000..bf19dcc
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_common.c
@@ -0,0 +1,534 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_common.h"
+
+/**
+ * fm10k_get_bus_info_generic - Generic set PCI bus info
+ * @hw: pointer to hardware structure
+ *
+ * Gets the PCI bus info (speed, width, type) then calls helper function to
+ * store this data within the fm10k_hw structure.
+ **/
+s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw)
+{
+ u16 link_cap, link_status, device_cap, device_control;
+
+ /* Get the maximum link width and speed from PCIe config space */
+ link_cap = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_LINK_CAP);
+
+ switch (link_cap & FM10K_PCIE_LINK_WIDTH) {
+ case FM10K_PCIE_LINK_WIDTH_1:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x1;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_2:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x2;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_4:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x4;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_8:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x8;
+ break;
+ default:
+ hw->bus_caps.width = fm10k_bus_width_unknown;
+ break;
+ }
+
+ switch (link_cap & FM10K_PCIE_LINK_SPEED) {
+ case FM10K_PCIE_LINK_SPEED_2500:
+ hw->bus_caps.speed = fm10k_bus_speed_2500;
+ break;
+ case FM10K_PCIE_LINK_SPEED_5000:
+ hw->bus_caps.speed = fm10k_bus_speed_5000;
+ break;
+ case FM10K_PCIE_LINK_SPEED_8000:
+ hw->bus_caps.speed = fm10k_bus_speed_8000;
+ break;
+ default:
+ hw->bus_caps.speed = fm10k_bus_speed_unknown;
+ break;
+ }
+
+ /* Get the PCIe maximum payload size for the PCIe function */
+ device_cap = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_DEV_CAP);
+
+ switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) {
+ case FM10K_PCIE_DEV_CAP_PAYLOAD_128:
+ hw->bus_caps.payload = fm10k_bus_payload_128;
+ break;
+ case FM10K_PCIE_DEV_CAP_PAYLOAD_256:
+ hw->bus_caps.payload = fm10k_bus_payload_256;
+ break;
+ case FM10K_PCIE_DEV_CAP_PAYLOAD_512:
+ hw->bus_caps.payload = fm10k_bus_payload_512;
+ break;
+ default:
+ hw->bus_caps.payload = fm10k_bus_payload_unknown;
+ break;
+ }
+
+ /* Get the negotiated link width and speed from PCIe config space */
+ link_status = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_LINK_STATUS);
+
+ switch (link_status & FM10K_PCIE_LINK_WIDTH) {
+ case FM10K_PCIE_LINK_WIDTH_1:
+ hw->bus.width = fm10k_bus_width_pcie_x1;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_2:
+ hw->bus.width = fm10k_bus_width_pcie_x2;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_4:
+ hw->bus.width = fm10k_bus_width_pcie_x4;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_8:
+ hw->bus.width = fm10k_bus_width_pcie_x8;
+ break;
+ default:
+ hw->bus.width = fm10k_bus_width_unknown;
+ break;
+ }
+
+ switch (link_status & FM10K_PCIE_LINK_SPEED) {
+ case FM10K_PCIE_LINK_SPEED_2500:
+ hw->bus.speed = fm10k_bus_speed_2500;
+ break;
+ case FM10K_PCIE_LINK_SPEED_5000:
+ hw->bus.speed = fm10k_bus_speed_5000;
+ break;
+ case FM10K_PCIE_LINK_SPEED_8000:
+ hw->bus.speed = fm10k_bus_speed_8000;
+ break;
+ default:
+ hw->bus.speed = fm10k_bus_speed_unknown;
+ break;
+ }
+
+ /* Get the negotiated PCIe maximum payload size for the PCIe function */
+ device_control = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_DEV_CTRL);
+
+ switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) {
+ case FM10K_PCIE_DEV_CTRL_PAYLOAD_128:
+ hw->bus.payload = fm10k_bus_payload_128;
+ break;
+ case FM10K_PCIE_DEV_CTRL_PAYLOAD_256:
+ hw->bus.payload = fm10k_bus_payload_256;
+ break;
+ case FM10K_PCIE_DEV_CTRL_PAYLOAD_512:
+ hw->bus.payload = fm10k_bus_payload_512;
+ break;
+ default:
+ hw->bus.payload = fm10k_bus_payload_unknown;
+ break;
+ }
+
+ return 0;
+}
+
+static u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw)
+{
+ u16 msix_count;
+
+ /* read in value from MSI-X capability register */
+ msix_count = fm10k_read_pci_cfg_word(hw, FM10K_PCI_MSIX_MSG_CTRL);
+ msix_count &= FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK;
+
+ /* MSI-X count is zero-based in HW */
+ msix_count++;
+
+ if (msix_count > FM10K_MAX_MSIX_VECTORS)
+ msix_count = FM10K_MAX_MSIX_VECTORS;
+
+ return msix_count;
+}
+
+/**
+ * fm10k_get_invariants_generic - Inits constant values
+ * @hw: pointer to the hardware structure
+ *
+ * Initialize the common invariants for the device.
+ **/
+s32 fm10k_get_invariants_generic(struct fm10k_hw *hw)
+{
+ struct fm10k_mac_info *mac = &hw->mac;
+
+ /* initialize GLORT state to avoid any false hits */
+ mac->dglort_map = FM10K_DGLORTMAP_NONE;
+
+ /* record maximum number of MSI-X vectors */
+ mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw);
+
+ return 0;
+}
+
+/**
+ * fm10k_start_hw_generic - Prepare hardware for Tx/Rx
+ * @hw: pointer to hardware structure
+ *
+ * This function sets the Tx ready flag to indicate that the Tx path has
+ * been initialized.
+ **/
+s32 fm10k_start_hw_generic(struct fm10k_hw *hw)
+{
+ /* set flag indicating we are beginning Tx */
+ hw->mac.tx_ready = true;
+
+ return 0;
+}
+
+/**
+ * fm10k_disable_queues_generic - Stop Tx/Rx queues
+ * @hw: pointer to hardware structure
+ * @q_cnt: number of queues to be disabled
+ *
+ * This function clears the Tx/Rx queue enable bits for the requested
+ * queues and then polls until the hardware reports them all disabled.
+ **/
+s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt)
+{
+ u32 reg;
+ u16 i, time;
+
+ /* clear tx_ready to prevent any false hits for reset */
+ hw->mac.tx_ready = false;
+
+ /* clear the enable bit for all rings */
+ for (i = 0; i < q_cnt; i++) {
+ reg = fm10k_read_reg(hw, FM10K_TXDCTL(i));
+ fm10k_write_reg(hw, FM10K_TXDCTL(i),
+ reg & ~FM10K_TXDCTL_ENABLE);
+ reg = fm10k_read_reg(hw, FM10K_RXQCTL(i));
+ fm10k_write_reg(hw, FM10K_RXQCTL(i),
+ reg & ~FM10K_RXQCTL_ENABLE);
+ }
+
+ fm10k_write_flush(hw);
+ udelay(1);
+
+ /* loop through all queues to verify that they are all disabled */
+ for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) {
+ /* if we are at end of rings all rings are disabled */
+ if (i == q_cnt)
+ return 0;
+
+ /* if queue enables cleared, then move to next ring pair */
+ reg = fm10k_read_reg(hw, FM10K_TXDCTL(i));
+ if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) {
+ reg = fm10k_read_reg(hw, FM10K_RXQCTL(i));
+ if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) {
+ i++;
+ continue;
+ }
+ }
+
+ /* decrement time and wait 1 usec */
+ time--;
+ if (time)
+ udelay(1);
+ }
+
+ return FM10K_ERR_REQUESTS_PENDING;
+}
+
+/**
+ * fm10k_stop_hw_generic - Stop Tx/Rx units
+ * @hw: pointer to hardware structure
+ *
+ * This function stops the Tx/Rx units by disabling all of the device's
+ * queues via fm10k_disable_queues_generic.
+ **/
+s32 fm10k_stop_hw_generic(struct fm10k_hw *hw)
+{
+ return fm10k_disable_queues_generic(hw, hw->mac.max_queues);
+}
+
+/**
+ * fm10k_read_hw_stats_32b - Reads value of 32-bit registers
+ * @hw: pointer to the hardware structure
+ * @addr: address of register containing a 32-bit value
+ * @stat: pointer to the hardware statistic structure
+ *
+ * Function reads the content of the register and returns the delta
+ * between the base and the current value.
+ **/
+u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr,
+ struct fm10k_hw_stat *stat)
+{
+ u32 delta = fm10k_read_reg(hw, addr) - stat->base_l;
+
+ if (FM10K_REMOVED(hw->hw_addr))
+ stat->base_h = 0;
+
+ return delta;
+}
+
+/**
+ * fm10k_read_hw_stats_48b - Reads value of 48-bit registers
+ * @hw: pointer to the hardware structure
+ * @addr: address of register containing the lower 32-bit value
+ * @stat: pointer to the hardware statistic structure
+ *
+ * Function reads the content of 2 registers, combined to represent a 48-bit
+ * statistical value. Extra processing is required to handle overflow of the
+ * lower 32 bits. Finally, a delta value is returned representing the
+ * difference between the values stored in the registers and the values
+ * stored in the statistic counters.
+ **/
+static u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr,
+ struct fm10k_hw_stat *stat)
+{
+ u32 count_l;
+ u32 count_h;
+ u32 count_tmp;
+ u64 delta;
+
+ count_h = fm10k_read_reg(hw, addr + 1);
+
+ /* Re-read the high word after sampling the low word; if it changed,
+ * the low word rolled over between the reads, so sample again until
+ * the high word is stable.
+ */
+ do {
+ count_tmp = count_h;
+ count_l = fm10k_read_reg(hw, addr);
+ count_h = fm10k_read_reg(hw, addr + 1);
+ } while (count_h != count_tmp);
+
+ delta = ((u64)(count_h - stat->base_h) << 32) + count_l;
+ delta -= stat->base_l;
+
+ return delta & FM10K_48_BIT_MASK;
+}
+
+/**
+ * fm10k_update_hw_base_48b - Updates 48-bit statistic base value
+ * @stat: pointer to the hardware statistic structure
+ * @delta: value to be updated into the hardware statistic structure
+ *
+ * Function receives a value and determines if an update is required based on
+ * a delta calculation. Only the base value will be updated.
+ **/
+static void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta)
+{
+ if (!delta)
+ return;
+
+ /* update lower 32 bits */
+ delta += stat->base_l;
+ stat->base_l = (u32)delta;
+
+ /* update upper 32 bits */
+ stat->base_h += (u32)(delta >> 32);
+}
+
+/**
+ * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters
+ * @hw: pointer to the hardware structure
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ *
+ * Function updates the TX queue statistics counters that are related to the
+ * hardware.
+ **/
+static void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u32 idx)
+{
+ u32 id_tx, id_tx_prev, tx_packets;
+ u64 tx_bytes = 0;
+
+ /* Retrieve TX Owner Data */
+ id_tx = fm10k_read_reg(hw, FM10K_TXQCTL(idx));
+
+ /* Process TX Ring */
+ do {
+ tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx),
+ &q->tx_packets);
+
+ if (tx_packets)
+ tx_bytes = fm10k_read_hw_stats_48b(hw,
+ FM10K_QBTC_L(idx),
+ &q->tx_bytes);
+
+ /* Re-Check Owner Data */
+ id_tx_prev = id_tx;
+ id_tx = fm10k_read_reg(hw, FM10K_TXQCTL(idx));
+ } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK);
+
+ /* drop non-ID bits and set VALID ID bit */
+ id_tx &= FM10K_TXQCTL_ID_MASK;
+ id_tx |= FM10K_STAT_VALID;
+
+ /* update packet counts */
+ if (q->tx_stats_idx == id_tx) {
+ q->tx_packets.count += tx_packets;
+ q->tx_bytes.count += tx_bytes;
+ }
+
+ /* update bases and record ID */
+ fm10k_update_hw_base_32b(&q->tx_packets, tx_packets);
+ fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes);
+
+ q->tx_stats_idx = id_tx;
+}
+
+/**
+ * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters
+ * @hw: pointer to the hardware structure
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ *
+ * Function updates the RX queue statistics counters that are related to the
+ * hardware.
+ **/
+static void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u32 idx)
+{
+ u32 id_rx, id_rx_prev, rx_packets, rx_drops;
+ u64 rx_bytes = 0;
+
+ /* Retrieve RX Owner Data */
+ id_rx = fm10k_read_reg(hw, FM10K_RXQCTL(idx));
+
+ /* Process RX Ring */
+ do {
+ rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx),
+ &q->rx_drops);
+
+ rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx),
+ &q->rx_packets);
+
+ if (rx_packets)
+ rx_bytes = fm10k_read_hw_stats_48b(hw,
+ FM10K_QBRC_L(idx),
+ &q->rx_bytes);
+
+ /* Re-Check Owner Data */
+ id_rx_prev = id_rx;
+ id_rx = fm10k_read_reg(hw, FM10K_RXQCTL(idx));
+ } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK);
+
+ /* drop non-ID bits and set VALID ID bit */
+ id_rx &= FM10K_RXQCTL_ID_MASK;
+ id_rx |= FM10K_STAT_VALID;
+
+ /* update packet counts */
+ if (q->rx_stats_idx == id_rx) {
+ q->rx_drops.count += rx_drops;
+ q->rx_packets.count += rx_packets;
+ q->rx_bytes.count += rx_bytes;
+ }
+
+ /* update bases and record ID */
+ fm10k_update_hw_base_32b(&q->rx_drops, rx_drops);
+ fm10k_update_hw_base_32b(&q->rx_packets, rx_packets);
+ fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes);
+
+ q->rx_stats_idx = id_rx;
+}
+
+/**
+ * fm10k_update_hw_stats_q - Updates queue statistics counters
+ * @hw: pointer to the hardware structure
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ * @count: number of queues to iterate over
+ *
+ * Function updates the queue statistics counters that are related to the
+ * hardware.
+ **/
+void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q,
+ u32 idx, u32 count)
+{
+ u32 i;
+
+ for (i = 0; i < count; i++, idx++, q++) {
+ fm10k_update_hw_stats_tx_q(hw, q, idx);
+ fm10k_update_hw_stats_rx_q(hw, q, idx);
+ }
+}
+
+/**
+ * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ * @count: number of queues to iterate over
+ *
+ * Function invalidates the index values for the queues so any updates that
+ * may have happened are ignored and the base for the queue stats is reset.
+ **/
+void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count)
+{
+ u32 i;
+
+ for (i = 0; i < count; i++, idx++, q++) {
+ q->rx_stats_idx = 0;
+ q->tx_stats_idx = 0;
+ }
+}
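+
+/* Because the update helpers above only fold deltas into the counters when
+ * the recorded stats index matches the current owner ID (which always
+ * carries FM10K_STAT_VALID), zeroing the indices here forces the next
+ * update to re-base the stats rather than accumulate into them.
+ */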
+
+/**
+ * fm10k_get_host_state_generic - Returns the state of the host
+ * @hw: pointer to hardware structure
+ * @host_ready: pointer to boolean value that will record host state
+ *
+ * This function will check the health of the mailbox and Tx queue 0
+ * in order to determine if we should report that the link is up or not.
+ **/
+s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ struct fm10k_mac_info *mac = &hw->mac;
+ s32 ret_val = 0;
+ u32 txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(0));
+
+ /* process upstream mailbox in case interrupts were disabled */
+ mbx->ops.process(hw, mbx);
+
+ /* If Tx is no longer enabled link should come down */
+ if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE))
+ mac->get_host_state = true;
+
+ /* exit if not checking for link, or link cannot be changed */
+ if (!mac->get_host_state || !(~txdctl))
+ goto out;
+
+ /* if we somehow dropped the Tx enable we should reset */
+ if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) {
+ ret_val = FM10K_ERR_RESET_REQUESTED;
+ goto out;
+ }
+
+ /* if Mailbox timed out we should request reset */
+ if (!mbx->timeout) {
+ ret_val = FM10K_ERR_RESET_REQUESTED;
+ goto out;
+ }
+
+ /* verify Mailbox is still valid */
+ if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU))
+ goto out;
+
+ /* interface cannot receive traffic without logical ports */
+ if (mac->dglort_map == FM10K_DGLORTMAP_NONE)
+ goto out;
+
+ /* if we passed all the tests above then the switch is ready and we no
+ * longer need to check for link
+ */
+ mac->get_host_state = false;
+
+out:
+ *host_ready = !mac->get_host_state;
+ return ret_val;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_common.h b/drivers/net/ethernet/intel/fm10k/fm10k_common.h
new file mode 100644
index 0000000..45e4e5b
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_common.h
@@ -0,0 +1,65 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_COMMON_H_
+#define _FM10K_COMMON_H_
+
+#include "fm10k_type.h"
+
+#define FM10K_REMOVED(hw_addr) unlikely(!(hw_addr))
+
+/* PCI configuration read */
+u16 fm10k_read_pci_cfg_word(struct fm10k_hw *hw, u32 reg);
+
+/* read operations, indexed using DWORDS */
+u32 fm10k_read_reg(struct fm10k_hw *hw, int reg);
+
+/* write operations, indexed using DWORDS */
+#define fm10k_write_reg(hw, reg, val) \
+do { \
+ u32 __iomem *hw_addr = ACCESS_ONCE((hw)->hw_addr); \
+ if (!FM10K_REMOVED(hw_addr)) \
+ writel((val), &hw_addr[(reg)]); \
+} while (0)
+
+/* Switch register write operations, indexed using DWORDS */
+#define fm10k_write_sw_reg(hw, reg, val) \
+do { \
+ u32 __iomem *sw_addr = ACCESS_ONCE((hw)->sw_addr); \
+ if (!FM10K_REMOVED(sw_addr)) \
+ writel((val), &sw_addr[(reg)]); \
+} while (0)
+
+/* flush pending PCIe writes by reading the CTRL register, which has no
+ * clear-on-read fields
+ */
+#define fm10k_write_flush(hw) fm10k_read_reg((hw), FM10K_CTRL)
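+
+/* Usage sketch (illustrative): posted writes are flushed by a read, as in
+ * the queue-disable path in fm10k_common.c:
+ *
+ *	fm10k_write_reg(hw, FM10K_RXQCTL(i), reg & ~FM10K_RXQCTL_ENABLE);
+ *	fm10k_write_flush(hw);
+ *
+ * The ACCESS_ONCE() snapshot in the write macros keeps a surprise remove
+ * (hw_addr going NULL) from racing between the check and the writel().
+ */
+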
+s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw);
+s32 fm10k_get_invariants_generic(struct fm10k_hw *hw);
+s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt);
+s32 fm10k_start_hw_generic(struct fm10k_hw *hw);
+s32 fm10k_stop_hw_generic(struct fm10k_hw *hw);
+u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr,
+ struct fm10k_hw_stat *stat);
+#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta))
+void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q,
+ u32 idx, u32 count);
+#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0)
+void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count);
+s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready);
+#endif /* _FM10K_COMMON_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c b/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
new file mode 100644
index 0000000..212a92d
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
@@ -0,0 +1,174 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+
+#ifdef CONFIG_DCB
+/**
+ * fm10k_dcbnl_ieee_getets - get the ETS configuration for the device
+ * @dev: netdev interface for the device
+ * @ets: ETS structure to push configuration to
+ **/
+static int fm10k_dcbnl_ieee_getets(struct net_device *dev, struct ieee_ets *ets)
+{
+ int i;
+
+ /* we support 8 TCs in all modes */
+ ets->ets_cap = IEEE_8021QAZ_MAX_TCS;
+ ets->cbs = 0;
+
+ /* we only support strict priority and cannot do traffic shaping */
+ memset(ets->tc_tx_bw, 0, sizeof(ets->tc_tx_bw));
+ memset(ets->tc_rx_bw, 0, sizeof(ets->tc_rx_bw));
+ memset(ets->tc_tsa, IEEE_8021QAZ_TSA_STRICT, sizeof(ets->tc_tsa));
+
+ /* populate the prio map based on the netdev */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ ets->prio_tc[i] = netdev_get_prio_tc_map(dev, i);
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_ieee_setets - set the ETS configuration for the device
+ * @dev: netdev interface for the device
+ * @ets: ETS structure to pull configuration from
+ **/
+static int fm10k_dcbnl_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
+{
+ u8 num_tc = 0;
+ int i, err;
+
+ /* verify type and determine num_tcs needed */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ if (ets->tc_tx_bw[i] || ets->tc_rx_bw[i])
+ return -EINVAL;
+ if (ets->tc_tsa[i] != IEEE_8021QAZ_TSA_STRICT)
+ return -EINVAL;
+ if (ets->prio_tc[i] > num_tc)
+ num_tc = ets->prio_tc[i];
+ }
+
+ /* if any TC above 0 is referenced then num_tc is the highest TC + 1 */
+ if (num_tc)
+ num_tc++;
+
+ if (num_tc > IEEE_8021QAZ_MAX_TCS)
+ return -EINVAL;
+
+ /* update TC hardware mapping if necessary */
+ if (num_tc != netdev_get_num_tc(dev)) {
+ err = fm10k_setup_tc(dev, num_tc);
+ if (err)
+ return err;
+ }
+
+ /* update priority mapping */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ netdev_set_prio_tc_map(dev, i, ets->prio_tc[i]);
+
+ return 0;
+}
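+
+/* Worked example (illustrative): with ets->prio_tc = {0, 0, 1, 2, 0, 0, 0, 0}
+ * the highest referenced TC is 2, so num_tc becomes 3 and the hardware
+ * mapping is rebuilt only if netdev_get_num_tc(dev) currently differs.
+ */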
+
+/**
+ * fm10k_dcbnl_ieee_getpfc - get the PFC configuration for the device
+ * @dev: netdev interface for the device
+ * @pfc: PFC structure to push configuration to
+ **/
+static int fm10k_dcbnl_ieee_getpfc(struct net_device *dev, struct ieee_pfc *pfc)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* record flow control max count and state of TCs */
+ pfc->pfc_cap = IEEE_8021QAZ_MAX_TCS;
+ pfc->pfc_en = interface->pfc_en;
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_ieee_setpfc - set the PFC configuration for the device
+ * @dev: netdev interface for the device
+ * @pfc: PFC structure to pull configuration from
+ **/
+static int fm10k_dcbnl_ieee_setpfc(struct net_device *dev, struct ieee_pfc *pfc)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* record PFC configuration to interface */
+ interface->pfc_en = pfc->pfc_en;
+
+ /* if we are running update the drop_en state for all queues */
+ if (netif_running(dev))
+ fm10k_update_rx_drop_en(interface);
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_getdcbx - get the DCBX configuration for the device
+ * @dev: netdev interface for the device
+ *
+ * Returns that we support only IEEE DCB for this interface
+ **/
+static u8 fm10k_dcbnl_getdcbx(struct net_device *dev)
+{
+ return DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
+}
+
+/**
+ * fm10k_dcbnl_setdcbx - set the DCBX configuration for the device
+ * @dev: netdev interface for the device
+ * @mode: new mode for this device
+ *
+ * Returns error on attempt to enable anything but IEEE DCB for this interface
+ **/
+static u8 fm10k_dcbnl_setdcbx(struct net_device *dev, u8 mode)
+{
+ return (mode != (DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE)) ? 1 : 0;
+}
+
+static const struct dcbnl_rtnl_ops fm10k_dcbnl_ops = {
+ .ieee_getets = fm10k_dcbnl_ieee_getets,
+ .ieee_setets = fm10k_dcbnl_ieee_setets,
+ .ieee_getpfc = fm10k_dcbnl_ieee_getpfc,
+ .ieee_setpfc = fm10k_dcbnl_ieee_setpfc,
+
+ .getdcbx = fm10k_dcbnl_getdcbx,
+ .setdcbx = fm10k_dcbnl_setdcbx,
+};
+
+#endif /* CONFIG_DCB */
+/**
+ * fm10k_dcbnl_set_ops - Configures dcbnl ops pointer for netdev
+ * @dev: netdev interface for the device
+ *
+ * Enables PF for DCB by assigning DCBNL ops pointer.
+ **/
+void fm10k_dcbnl_set_ops(struct net_device *dev)
+{
+#ifdef CONFIG_DCB
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ if (hw->mac.type == fm10k_mac_pf)
+ dev->dcbnl_ops = &fm10k_dcbnl_ops;
+#endif /* CONFIG_DCB */
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c b/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
new file mode 100644
index 0000000..4327f86
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
@@ -0,0 +1,259 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifdef CONFIG_DEBUG_FS
+
+#include "fm10k.h"
+
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+static struct dentry *dbg_root;
+
+/* Descriptor Seq Functions */
+
+static void *fm10k_dbg_desc_seq_start(struct seq_file *s, loff_t *pos)
+{
+ struct fm10k_ring *ring = s->private;
+
+ return (*pos < ring->count) ? pos : NULL;
+}
+
+static void *fm10k_dbg_desc_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+ struct fm10k_ring *ring = s->private;
+
+ return (++(*pos) < ring->count) ? pos : NULL;
+}
+
+static void fm10k_dbg_desc_seq_stop(struct seq_file *s, void *v)
+{
+ /* Do nothing. */
+}
+
+static void fm10k_dbg_desc_break(struct seq_file *s, int i)
+{
+ while (i--)
+ seq_puts(s, "-");
+
+ seq_puts(s, "\n");
+}
+
+static int fm10k_dbg_tx_desc_seq_show(struct seq_file *s, void *v)
+{
+ struct fm10k_ring *ring = s->private;
+ int i = *(loff_t *)v;
+ static const char tx_desc_hdr[] =
+ "DES BUFFER_ADDRESS LENGTH VLAN MSS HDRLEN FLAGS\n";
+
+ /* Generate header */
+ if (!i) {
+ seq_puts(s, tx_desc_hdr);
+ fm10k_dbg_desc_break(s, sizeof(tx_desc_hdr) - 1);
+ }
+
+ /* Validate descriptor allocation */
+ if (!ring->desc) {
+ seq_printf(s, "%03X Descriptor ring not allocated.\n", i);
+ } else {
+ struct fm10k_tx_desc *txd = FM10K_TX_DESC(ring, i);
+
+ seq_printf(s, "%03X %#018llx %#06x %#06x %#06x %#06x %#04x\n",
+ i, txd->buffer_addr, txd->buflen, txd->vlan,
+ txd->mss, txd->hdrlen, txd->flags);
+ }
+
+ return 0;
+}
+
+static int fm10k_dbg_rx_desc_seq_show(struct seq_file *s, void *v)
+{
+ struct fm10k_ring *ring = s->private;
+ int i = *(loff_t *)v;
+ static const char rx_desc_hdr[] =
+ "DES DATA RSS STATERR LENGTH VLAN DGLORT SGLORT TIMESTAMP\n";
+
+ /* Generate header */
+ if (!i) {
+ seq_puts(s, rx_desc_hdr);
+ fm10k_dbg_desc_break(s, sizeof(rx_desc_hdr) - 1);
+ }
+
+ /* Validate descriptor allocation */
+ if (!ring->desc) {
+ seq_printf(s, "%03X Descriptor ring not allocated.\n", i);
+ } else {
+ union fm10k_rx_desc *rxd = FM10K_RX_DESC(ring, i);
+
+ seq_printf(s,
+ "%03X %#010x %#010x %#010x %#06x %#06x %#06x %#06x %#018llx\n",
+ i, rxd->d.data, rxd->d.rss, rxd->d.staterr,
+ rxd->w.length, rxd->w.vlan, rxd->w.dglort,
+ rxd->w.sglort, rxd->q.timestamp);
+ }
+
+ return 0;
+}
+
+static const struct seq_operations fm10k_dbg_tx_desc_seq_ops = {
+ .start = fm10k_dbg_desc_seq_start,
+ .next = fm10k_dbg_desc_seq_next,
+ .stop = fm10k_dbg_desc_seq_stop,
+ .show = fm10k_dbg_tx_desc_seq_show,
+};
+
+static const struct seq_operations fm10k_dbg_rx_desc_seq_ops = {
+ .start = fm10k_dbg_desc_seq_start,
+ .next = fm10k_dbg_desc_seq_next,
+ .stop = fm10k_dbg_desc_seq_stop,
+ .show = fm10k_dbg_rx_desc_seq_show,
+};
+
+static int fm10k_dbg_desc_open(struct inode *inode, struct file *filep)
+{
+ struct fm10k_ring *ring = inode->i_private;
+ struct fm10k_q_vector *q_vector = ring->q_vector;
+ const struct seq_operations *desc_seq_ops;
+ int err;
+
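+ /* Tx rings sit ahead of Rx rings in the q_vector's ring array (an
+ * assumption drawn from the q_vector layout), so a pointer below the
+ * first Rx ring identifies a Tx ring and selects the Tx seq_ops.
+ */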
+ if (ring < q_vector->rx.ring)
+ desc_seq_ops = &fm10k_dbg_tx_desc_seq_ops;
+ else
+ desc_seq_ops = &fm10k_dbg_rx_desc_seq_ops;
+
+ err = seq_open(filep, desc_seq_ops);
+ if (err)
+ return err;
+
+ ((struct seq_file *)filep->private_data)->private = ring;
+
+ return 0;
+}
+
+static const struct file_operations fm10k_dbg_desc_fops = {
+ .owner = THIS_MODULE,
+ .open = fm10k_dbg_desc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+/**
+ * fm10k_dbg_q_vector_init - setup debugfs for the q_vectors
+ * @q_vector: q_vector to allocate directories for
+ *
+ * A folder is created for each q_vector found. In each q_vector
+ * folder, a debugfs file is created for each tx and rx ring
+ * allocated to the q_vector.
+ **/
+void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector)
+{
+ struct fm10k_intfc *interface = q_vector->interface;
+ char name[16];
+ int i;
+
+ if (!interface->dbg_intfc)
+ return;
+
+ /* Generate a folder for each q_vector */
+ sprintf(name, "q_vector.%03d", q_vector->v_idx);
+
+ q_vector->dbg_q_vector = debugfs_create_dir(name, interface->dbg_intfc);
+ if (!q_vector->dbg_q_vector)
+ return;
+
+ /* Generate a file for each tx ring in the q_vector */
+ for (i = 0; i < q_vector->tx.count; i++) {
+ struct fm10k_ring *ring = &q_vector->tx.ring[i];
+
+ sprintf(name, "tx_ring.%03d", ring->queue_index);
+
+ debugfs_create_file(name, 0600,
+ q_vector->dbg_q_vector, ring,
+ &fm10k_dbg_desc_fops);
+ }
+
+ /* Generate a file for each rx ring in the q_vector */
+ for (i = 0; i < q_vector->rx.count; i++) {
+ struct fm10k_ring *ring = &q_vector->rx.ring[i];
+
+ sprintf(name, "rx_ring.%03d", ring->queue_index);
+
+ debugfs_create_file(name, 0600,
+ q_vector->dbg_q_vector, ring,
+ &fm10k_dbg_desc_fops);
+ }
+}
+
+/**
+ * fm10k_dbg_q_vector_exit - remove debugfs entries for the q_vector
+ * @q_vector: q_vector whose debugfs entries are being removed
+ **/
+void fm10k_dbg_q_vector_exit(struct fm10k_q_vector *q_vector)
+{
+ struct fm10k_intfc *interface = q_vector->interface;
+
+ if (interface->dbg_intfc)
+ debugfs_remove_recursive(q_vector->dbg_q_vector);
+ q_vector->dbg_q_vector = NULL;
+}
+
+/**
+ * fm10k_dbg_intfc_init - setup the debugfs directory for the interface
+ * @interface: the interface that is starting up
+ **/
+void fm10k_dbg_intfc_init(struct fm10k_intfc *interface)
+{
+ const char *name = pci_name(interface->pdev);
+
+ if (dbg_root)
+ interface->dbg_intfc = debugfs_create_dir(name, dbg_root);
+}
+
+/**
+ * fm10k_dbg_intfc_exit - clean out the interface's debugfs entries
+ * @interface: the interface that is stopping
+ **/
+void fm10k_dbg_intfc_exit(struct fm10k_intfc *interface)
+{
+ if (dbg_root)
+ debugfs_remove_recursive(interface->dbg_intfc);
+ interface->dbg_intfc = NULL;
+}
+
+/**
+ * fm10k_dbg_init - start up debugfs for the driver
+ **/
+void fm10k_dbg_init(void)
+{
+ dbg_root = debugfs_create_dir(fm10k_driver_name, NULL);
+}
+
+/**
+ * fm10k_dbg_exit - clean out the driver's debugfs entries
+ **/
+void fm10k_dbg_exit(void)
+{
+ debugfs_remove_recursive(dbg_root);
+ dbg_root = NULL;
+}
+
+#endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
new file mode 100644
index 0000000..a9bbe60
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
@@ -0,0 +1,1069 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+
+struct fm10k_stats {
+ char stat_string[ETH_GSTRING_LEN];
+ int sizeof_stat;
+ int stat_offset;
+};
+
+#define FM10K_NETDEV_STAT(_net_stat) { \
+ .stat_string = #_net_stat, \
+ .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \
+ .stat_offset = offsetof(struct net_device_stats, _net_stat) \
+}
+
+static const struct fm10k_stats fm10k_gstrings_net_stats[] = {
+ FM10K_NETDEV_STAT(tx_packets),
+ FM10K_NETDEV_STAT(tx_bytes),
+ FM10K_NETDEV_STAT(tx_errors),
+ FM10K_NETDEV_STAT(rx_packets),
+ FM10K_NETDEV_STAT(rx_bytes),
+ FM10K_NETDEV_STAT(rx_errors),
+ FM10K_NETDEV_STAT(rx_dropped),
+
+ /* detailed Rx errors */
+ FM10K_NETDEV_STAT(rx_length_errors),
+ FM10K_NETDEV_STAT(rx_crc_errors),
+ FM10K_NETDEV_STAT(rx_fifo_errors),
+};
+
+#define FM10K_NETDEV_STATS_LEN ARRAY_SIZE(fm10k_gstrings_net_stats)
+
+#define FM10K_STAT(_name, _stat) { \
+ .stat_string = _name, \
+ .sizeof_stat = FIELD_SIZEOF(struct fm10k_intfc, _stat), \
+ .stat_offset = offsetof(struct fm10k_intfc, _stat) \
+}
+
+static const struct fm10k_stats fm10k_gstrings_stats[] = {
+ FM10K_STAT("tx_restart_queue", restart_queue),
+ FM10K_STAT("tx_busy", tx_busy),
+ FM10K_STAT("tx_csum_errors", tx_csum_errors),
+ FM10K_STAT("rx_alloc_failed", alloc_failed),
+ FM10K_STAT("rx_csum_errors", rx_csum_errors),
+ FM10K_STAT("rx_errors", rx_errors),
+
+ FM10K_STAT("tx_packets_nic", tx_packets_nic),
+ FM10K_STAT("tx_bytes_nic", tx_bytes_nic),
+ FM10K_STAT("rx_packets_nic", rx_packets_nic),
+ FM10K_STAT("rx_bytes_nic", rx_bytes_nic),
+ FM10K_STAT("rx_drops_nic", rx_drops_nic),
+ FM10K_STAT("rx_overrun_pf", rx_overrun_pf),
+ FM10K_STAT("rx_overrun_vf", rx_overrun_vf),
+
+ FM10K_STAT("timeout", stats.timeout.count),
+ FM10K_STAT("ur", stats.ur.count),
+ FM10K_STAT("ca", stats.ca.count),
+ FM10K_STAT("um", stats.um.count),
+ FM10K_STAT("xec", stats.xec.count),
+ FM10K_STAT("vlan_drop", stats.vlan_drop.count),
+ FM10K_STAT("loopback_drop", stats.loopback_drop.count),
+ FM10K_STAT("nodesc_drop", stats.nodesc_drop.count),
+
+ FM10K_STAT("swapi_status", hw.swapi.status),
+ FM10K_STAT("mac_rules_used", hw.swapi.mac.used),
+ FM10K_STAT("mac_rules_avail", hw.swapi.mac.avail),
+
+ FM10K_STAT("mbx_tx_busy", hw.mbx.tx_busy),
+ FM10K_STAT("mbx_tx_dropped", hw.mbx.tx_dropped),
+ FM10K_STAT("mbx_tx_messages", hw.mbx.tx_messages),
+ FM10K_STAT("mbx_tx_dwords", hw.mbx.tx_dwords),
+ FM10K_STAT("mbx_rx_messages", hw.mbx.rx_messages),
+ FM10K_STAT("mbx_rx_dwords", hw.mbx.rx_dwords),
+ FM10K_STAT("mbx_rx_parse_err", hw.mbx.rx_parse_err),
+
+ FM10K_STAT("tx_hwtstamp_timeouts", tx_hwtstamp_timeouts),
+};
+
+#define FM10K_GLOBAL_STATS_LEN ARRAY_SIZE(fm10k_gstrings_stats)
+
+#define FM10K_QUEUE_STATS_LEN \
+ (MAX_QUEUES * 2 * (sizeof(struct fm10k_queue_stats) / sizeof(u64)))
+
+#define FM10K_STATS_LEN (FM10K_GLOBAL_STATS_LEN + \
+ FM10K_NETDEV_STATS_LEN + \
+ FM10K_QUEUE_STATS_LEN)
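+
+/* Illustrative arithmetic: sizeof(struct fm10k_queue_stats) / sizeof(u64)
+ * counts the u64 counters per queue (packets and bytes, matching the
+ * strings emitted below), and the factor of 2 covers Tx and Rx, so each
+ * queue contributes four stat entries.
+ */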
+
+static const char fm10k_gstrings_test[][ETH_GSTRING_LEN] = {
+ "Mailbox test (on/offline)"
+};
+
+#define FM10K_TEST_LEN (sizeof(fm10k_gstrings_test) / ETH_GSTRING_LEN)
+
+enum fm10k_self_test_types {
+ FM10K_TEST_MBX,
+ FM10K_TEST_MAX = FM10K_TEST_LEN
+};
+
+static void fm10k_get_strings(struct net_device *dev, u32 stringset,
+ u8 *data)
+{
+ char *p = (char *)data;
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *fm10k_gstrings_test,
+ FM10K_TEST_LEN * ETH_GSTRING_LEN);
+ break;
+ case ETH_SS_STATS:
+ for (i = 0; i < FM10K_NETDEV_STATS_LEN; i++) {
+ memcpy(p, fm10k_gstrings_net_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < FM10K_GLOBAL_STATS_LEN; i++) {
+ memcpy(p, fm10k_gstrings_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+
+ for (i = 0; i < MAX_QUEUES; i++) {
+ sprintf(p, "tx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+ }
+ break;
+ }
+}
+
+static int fm10k_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_TEST:
+ return FM10K_TEST_LEN;
+ case ETH_SS_STATS:
+ return FM10K_STATS_LEN;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void fm10k_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ const int stat_count = sizeof(struct fm10k_queue_stats) / sizeof(u64);
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct net_device_stats *net_stats = &netdev->stats;
+ char *p;
+ int i, j;
+
+ fm10k_update_stats(interface);
+
+ for (i = 0; i < FM10K_NETDEV_STATS_LEN; i++) {
+ p = (char *)net_stats + fm10k_gstrings_net_stats[i].stat_offset;
+ *(data++) = (fm10k_gstrings_net_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ for (i = 0; i < FM10K_GLOBAL_STATS_LEN; i++) {
+ p = (char *)interface + fm10k_gstrings_stats[i].stat_offset;
+ *(data++) = (fm10k_gstrings_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ for (i = 0; i < MAX_QUEUES; i++) {
+ struct fm10k_ring *ring;
+ u64 *queue_stat;
+
+ ring = interface->tx_ring[i];
+ if (ring)
+ queue_stat = (u64 *)&ring->stats;
+ for (j = 0; j < stat_count; j++)
+ *(data++) = ring ? queue_stat[j] : 0;
+
+ ring = interface->rx_ring[i];
+ if (ring)
+ queue_stat = (u64 *)&ring->stats;
+ for (j = 0; j < stat_count; j++)
+ *(data++) = ring ? queue_stat[j] : 0;
+ }
+}
+
+/* If function below adds more registers this define needs to be updated */
+#define FM10K_REGS_LEN_Q 29
+
+static void fm10k_get_reg_q(struct fm10k_hw *hw, u32 *buff, int i)
+{
+ int idx = 0;
+
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDBAL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDBAH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDLEN(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TPH_RXCTRL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RXQCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RXDCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RXINT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_SRRCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QPRC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QPRDC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBRC_L(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBRC_H(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDBAL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDBAH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDLEN(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TPH_TXCTRL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TXDCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TXQCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TXINT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QPTC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBTC_L(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBTC_H(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TQDLOC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TX_SGLORT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_PFVTCTL(i));
+
+ BUG_ON(idx != FM10K_REGS_LEN_Q);
+}
+
+/* If function above adds more registers this define needs to be updated */
+#define FM10K_REGS_LEN_VSI 43
+
+static void fm10k_get_reg_vsi(struct fm10k_hw *hw, u32 *buff, int i)
+{
+ int idx = 0, j;
+
+ buff[idx++] = fm10k_read_reg(hw, FM10K_MRQC(i));
+ for (j = 0; j < 10; j++)
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RSSRK(i, j));
+ for (j = 0; j < 32; j++)
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RETA(i, j));
+
+ BUG_ON(idx != FM10K_REGS_LEN_VSI);
+}
+
+static void fm10k_get_regs(struct net_device *netdev,
+ struct ethtool_regs *regs, void *p)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ u32 *buff = p;
+ u16 i;
+
+ regs->version = (1 << 24) | (hw->revision_id << 16) | hw->device_id;
+
+ switch (hw->mac.type) {
+ case fm10k_mac_pf:
+ /* General PF Registers */
+ *(buff++) = fm10k_read_reg(hw, FM10K_CTRL);
+ *(buff++) = fm10k_read_reg(hw, FM10K_CTRL_EXT);
+ *(buff++) = fm10k_read_reg(hw, FM10K_GCR);
+ *(buff++) = fm10k_read_reg(hw, FM10K_GCR_EXT);
+
+ for (i = 0; i < 8; i++) {
+ *(buff++) = fm10k_read_reg(hw, FM10K_DGLORTMAP(i));
+ *(buff++) = fm10k_read_reg(hw, FM10K_DGLORTDEC(i));
+ }
+
+ for (i = 0; i < 65; i++) {
+ fm10k_get_reg_vsi(hw, buff, i);
+ buff += FM10K_REGS_LEN_VSI;
+ }
+
+ *(buff++) = fm10k_read_reg(hw, FM10K_DMA_CTRL);
+ *(buff++) = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
+
+ for (i = 0; i < FM10K_MAX_QUEUES_PF; i++) {
+ fm10k_get_reg_q(hw, buff, i);
+ buff += FM10K_REGS_LEN_Q;
+ }
+
+ *(buff++) = fm10k_read_reg(hw, FM10K_TPH_CTRL);
+
+ for (i = 0; i < 8; i++)
+ *(buff++) = fm10k_read_reg(hw, FM10K_INT_MAP(i));
+
+ /* Interrupt Throttling Registers */
+ for (i = 0; i < 130; i++)
+ *(buff++) = fm10k_read_reg(hw, FM10K_ITR(i));
+
+ break;
+ case fm10k_mac_vf:
+ /* General VF registers */
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFCTRL);
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFINT_MAP);
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFSYSTIME);
+
+ /* Interrupt Throttling Registers */
+ for (i = 0; i < 8; i++)
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFITR(i));
+
+ fm10k_get_reg_vsi(hw, buff, 0);
+ buff += FM10K_REGS_LEN_VSI;
+
+ for (i = 0; i < FM10K_MAX_QUEUES_POOL; i++) {
+ if (i < hw->mac.max_queues)
+ fm10k_get_reg_q(hw, buff, i);
+ else
+ memset(buff, 0, sizeof(u32) * FM10K_REGS_LEN_Q);
+ buff += FM10K_REGS_LEN_Q;
+ }
+
+ break;
+ default:
+ return;
+ }
+}
+
+/* If function above adds more registers these defines need to be updated */
+#define FM10K_REGS_LEN_PF \
+(162 + (65 * FM10K_REGS_LEN_VSI) + (FM10K_MAX_QUEUES_PF * FM10K_REGS_LEN_Q))
+#define FM10K_REGS_LEN_VF \
+(11 + FM10K_REGS_LEN_VSI + (FM10K_MAX_QUEUES_POOL * FM10K_REGS_LEN_Q))
+
+static int fm10k_get_regs_len(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ switch (hw->mac.type) {
+ case fm10k_mac_pf:
+ return FM10K_REGS_LEN_PF * sizeof(u32);
+ case fm10k_mac_vf:
+ return FM10K_REGS_LEN_VF * sizeof(u32);
+ default:
+ return 0;
+ }
+}
+
+static void fm10k_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ strncpy(info->driver, fm10k_driver_name,
+ sizeof(info->driver) - 1);
+ strncpy(info->version, fm10k_driver_version,
+ sizeof(info->version) - 1);
+ strncpy(info->bus_info, pci_name(interface->pdev),
+ sizeof(info->bus_info) - 1);
+
+ info->n_stats = FM10K_STATS_LEN;
+
+ info->regdump_len = fm10k_get_regs_len(dev);
+}
+
+static void fm10k_get_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* record fixed values for autoneg and tx pause */
+ pause->autoneg = 0;
+ pause->tx_pause = 1;
+
+ pause->rx_pause = interface->rx_pause ? 1 : 0;
+}
+
+static int fm10k_set_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ if (pause->autoneg || !pause->tx_pause)
+ return -EINVAL;
+
+ /* we can only support pause on the PF to avoid head-of-line blocking */
+ if (hw->mac.type == fm10k_mac_pf)
+ interface->rx_pause = pause->rx_pause ? ~0 : 0;
+ else if (pause->rx_pause)
+ return -EINVAL;
+
+ if (netif_running(dev))
+ fm10k_update_rx_drop_en(interface);
+
+ return 0;
+}
+
+static u32 fm10k_get_msglevel(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ return interface->msg_enable;
+}
+
+static void fm10k_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ interface->msg_enable = data;
+}
+
+static void fm10k_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ ring->rx_max_pending = FM10K_MAX_RXD;
+ ring->tx_max_pending = FM10K_MAX_TXD;
+ ring->rx_mini_max_pending = 0;
+ ring->rx_jumbo_max_pending = 0;
+ ring->rx_pending = interface->rx_ring_count;
+ ring->tx_pending = interface->tx_ring_count;
+ ring->rx_mini_pending = 0;
+ ring->rx_jumbo_pending = 0;
+}
+
+static int fm10k_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_ring *temp_ring;
+ int i, err = 0;
+ u32 new_rx_count, new_tx_count;
+
+ if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+ return -EINVAL;
+
+ new_tx_count = clamp_t(u32, ring->tx_pending,
+ FM10K_MIN_TXD, FM10K_MAX_TXD);
+ new_tx_count = ALIGN(new_tx_count, FM10K_REQ_TX_DESCRIPTOR_MULTIPLE);
+
+ new_rx_count = clamp_t(u32, ring->rx_pending,
+ FM10K_MIN_RXD, FM10K_MAX_RXD);
+ new_rx_count = ALIGN(new_rx_count, FM10K_REQ_RX_DESCRIPTOR_MULTIPLE);
+
+ if ((new_tx_count == interface->tx_ring_count) &&
+ (new_rx_count == interface->rx_ring_count)) {
+ /* nothing to do */
+ return 0;
+ }
+
+ while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
+ usleep_range(1000, 2000);
+
+ if (!netif_running(interface->netdev)) {
+ for (i = 0; i < interface->num_tx_queues; i++)
+ interface->tx_ring[i]->count = new_tx_count;
+ for (i = 0; i < interface->num_rx_queues; i++)
+ interface->rx_ring[i]->count = new_rx_count;
+ interface->tx_ring_count = new_tx_count;
+ interface->rx_ring_count = new_rx_count;
+ goto clear_reset;
+ }
+
+ /* allocate temporary buffer to store rings in */
+ i = max_t(int, interface->num_tx_queues, interface->num_rx_queues);
+ temp_ring = vmalloc(i * sizeof(struct fm10k_ring));
+
+ if (!temp_ring) {
+ err = -ENOMEM;
+ goto clear_reset;
+ }
+
+ fm10k_down(interface);
+
+ /* Setup new Tx resources and free the old Tx resources in that order.
+ * We can then assign the new resources to the rings via a memcpy.
+ * The advantage to this approach is that we are guaranteed to still
+ * have resources even in the case of an allocation failure.
+ */
+ if (new_tx_count != interface->tx_ring_count) {
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ memcpy(&temp_ring[i], interface->tx_ring[i],
+ sizeof(struct fm10k_ring));
+
+ temp_ring[i].count = new_tx_count;
+ err = fm10k_setup_tx_resources(&temp_ring[i]);
+ if (err) {
+ while (i) {
+ i--;
+ fm10k_free_tx_resources(&temp_ring[i]);
+ }
+ goto err_setup;
+ }
+ }
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ fm10k_free_tx_resources(interface->tx_ring[i]);
+
+ memcpy(interface->tx_ring[i], &temp_ring[i],
+ sizeof(struct fm10k_ring));
+ }
+
+ interface->tx_ring_count = new_tx_count;
+ }
+
+ /* Repeat the process for the Rx rings if needed */
+ if (new_rx_count != interface->rx_ring_count) {
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ memcpy(&temp_ring[i], interface->rx_ring[i],
+ sizeof(struct fm10k_ring));
+
+ temp_ring[i].count = new_rx_count;
+ err = fm10k_setup_rx_resources(&temp_ring[i]);
+ if (err) {
+ while (i) {
+ i--;
+ fm10k_free_rx_resources(&temp_ring[i]);
+ }
+ goto err_setup;
+ }
+ }
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ fm10k_free_rx_resources(interface->rx_ring[i]);
+
+ memcpy(interface->rx_ring[i], &temp_ring[i],
+ sizeof(struct fm10k_ring));
+ }
+
+ interface->rx_ring_count = new_rx_count;
+ }
+
+err_setup:
+ fm10k_up(interface);
+ vfree(temp_ring);
+clear_reset:
+ clear_bit(__FM10K_RESETTING, &interface->state);
+ return err;
+}
+
+static int fm10k_get_coalesce(struct net_device *dev,
+ struct ethtool_coalesce *ec)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ ec->use_adaptive_tx_coalesce =
+ !!(interface->tx_itr & FM10K_ITR_ADAPTIVE);
+ ec->tx_coalesce_usecs = interface->tx_itr & ~FM10K_ITR_ADAPTIVE;
+
+ ec->use_adaptive_rx_coalesce =
+ !!(interface->rx_itr & FM10K_ITR_ADAPTIVE);
+ ec->rx_coalesce_usecs = interface->rx_itr & ~FM10K_ITR_ADAPTIVE;
+
+ return 0;
+}
+
+static int fm10k_set_coalesce(struct net_device *dev,
+ struct ethtool_coalesce *ec)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_q_vector *qv;
+ u16 tx_itr, rx_itr;
+ int i;
+
+ /* verify limits */
+ if ((ec->rx_coalesce_usecs > FM10K_ITR_MAX) ||
+ (ec->tx_coalesce_usecs > FM10K_ITR_MAX))
+ return -EINVAL;
+
+ /* record settings */
+ tx_itr = ec->tx_coalesce_usecs;
+ rx_itr = ec->rx_coalesce_usecs;
+
+ /* set initial values for adaptive ITR */
+ if (ec->use_adaptive_tx_coalesce)
+ tx_itr = FM10K_ITR_ADAPTIVE | FM10K_ITR_10K;
+
+ if (ec->use_adaptive_rx_coalesce)
+ rx_itr = FM10K_ITR_ADAPTIVE | FM10K_ITR_20K;
+
+ /* update interface */
+ interface->tx_itr = tx_itr;
+ interface->rx_itr = rx_itr;
+
+ /* update q_vectors */
+ for (i = 0; i < interface->num_q_vectors; i++) {
+ qv = interface->q_vector[i];
+ qv->tx.itr = tx_itr;
+ qv->rx.itr = rx_itr;
+ }
+
+ return 0;
+}
+
+static int fm10k_get_rss_hash_opts(struct fm10k_intfc *interface,
+ struct ethtool_rxnfc *cmd)
+{
+ cmd->data = 0;
+
+ /* Report default options for RSS on fm10k */
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case UDP_V4_FLOW:
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case SCTP_V4_FLOW:
+ case SCTP_V6_FLOW:
+ case AH_ESP_V4_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V4_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V4_FLOW:
+ case ESP_V6_FLOW:
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+ break;
+ case UDP_V6_FLOW:
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int fm10k_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ u32 *rule_locs)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ int ret = -EOPNOTSUPP;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = interface->num_rx_queues;
+ ret = 0;
+ break;
+ case ETHTOOL_GRXFH:
+ ret = fm10k_get_rss_hash_opts(interface, cmd);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+#define UDP_RSS_FLAGS (FM10K_FLAG_RSS_FIELD_IPV4_UDP | \
+ FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+static int fm10k_set_rss_hash_opt(struct fm10k_intfc *interface,
+ struct ethtool_rxnfc *nfc)
+{
+ u32 flags = interface->flags;
+
+ /* RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
+ RXH_L4_B_0_1 | RXH_L4_B_2_3))
+ return -EINVAL;
+
+ switch (nfc->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST) ||
+ !(nfc->data & RXH_L4_B_0_1) ||
+ !(nfc->data & RXH_L4_B_2_3))
+ return -EINVAL;
+ break;
+ case UDP_V4_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST))
+ return -EINVAL;
+ switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ flags &= ~FM10K_FLAG_RSS_FIELD_IPV4_UDP;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ flags |= FM10K_FLAG_RSS_FIELD_IPV4_UDP;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case UDP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST))
+ return -EINVAL;
+ switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ flags &= ~FM10K_FLAG_RSS_FIELD_IPV6_UDP;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ flags |= FM10K_FLAG_RSS_FIELD_IPV6_UDP;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case AH_ESP_V4_FLOW:
+ case AH_V4_FLOW:
+ case ESP_V4_FLOW:
+ case SCTP_V4_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V6_FLOW:
+ case SCTP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST) ||
+ (nfc->data & RXH_L4_B_0_1) ||
+ (nfc->data & RXH_L4_B_2_3))
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* if we changed something we need to update flags */
+ if (flags != interface->flags) {
+ struct fm10k_hw *hw = &interface->hw;
+ u32 mrqc;
+
+ if ((flags & UDP_RSS_FLAGS) &&
+ !(interface->flags & UDP_RSS_FLAGS))
+ netif_warn(interface, drv, interface->netdev,
+ "enabling UDP RSS: fragmented packets may arrive out of order to the stack above\n");
+
+ interface->flags = flags;
+
+ /* Perform hash on these packet types */
+ mrqc = FM10K_MRQC_IPV4 |
+ FM10K_MRQC_TCP_IPV4 |
+ FM10K_MRQC_IPV6 |
+ FM10K_MRQC_TCP_IPV6;
+
+ if (flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV4;
+ if (flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV6;
+
+ fm10k_write_reg(hw, FM10K_MRQC(0), mrqc);
+ }
+
+ return 0;
+}
+
+static int fm10k_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ int ret = -EOPNOTSUPP;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXFH:
+ ret = fm10k_set_rss_hash_opt(interface, cmd);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static int fm10k_mbx_test(struct fm10k_intfc *interface, u64 *data)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 attr_flag, test_msg[6];
+ unsigned long timeout;
+ int err;
+
+ /* For now this is a VF-only feature */
+ if (hw->mac.type != fm10k_mac_vf)
+ return 0;
+
+ /* loop through both nested and unnested attribute types */
+ for (attr_flag = (1 << FM10K_TEST_MSG_UNSET);
+ attr_flag < (1 << (2 * FM10K_TEST_MSG_NESTED));
+ attr_flag += attr_flag) {
+ /* generate message to be tested */
+ fm10k_tlv_msg_test_create(test_msg, attr_flag);
+
+ fm10k_mbx_lock(interface);
+ mbx->test_result = FM10K_NOT_IMPLEMENTED;
+ err = mbx->ops.enqueue_tx(hw, mbx, test_msg);
+ fm10k_mbx_unlock(interface);
+
+ /* wait up to 1 second for response */
+ timeout = jiffies + HZ;
+ do {
+ if (err < 0)
+ goto err_out;
+
+ usleep_range(500, 1000);
+
+ fm10k_mbx_lock(interface);
+ mbx->ops.process(hw, mbx);
+ fm10k_mbx_unlock(interface);
+
+ err = mbx->test_result;
+ if (!err)
+ break;
+ } while (time_is_after_jiffies(timeout));
+
+ /* report any errors */
+ if (err)
+ goto err_out;
+ }
+
+err_out:
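+ /* on a transmit error report the failing attribute type, otherwise
+ * report pass (0) or fail (1) from the mailbox test result
+ */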
+ *data = err < 0 ? (attr_flag) : (err > 0);
+ return err;
+}
+
+static void fm10k_self_test(struct net_device *dev,
+ struct ethtool_test *eth_test, u64 *data)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ memset(data, 0, sizeof(*data) * FM10K_TEST_LEN);
+
+ if (FM10K_REMOVED(hw)) {
+ netif_err(interface, drv, dev,
+ "Interface removed - test blocked\n");
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ return;
+ }
+
+ if (fm10k_mbx_test(interface, &data[FM10K_TEST_MBX]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+}
+
+static u32 fm10k_get_reta_size(struct net_device *netdev)
+{
+ return FM10K_RETA_SIZE * FM10K_RETA_ENTRIES_PER_REG;
+}
+
+static int fm10k_get_reta(struct net_device *netdev, u32 *indir)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ int i;
+
+ if (!indir)
+ return 0;
+
+ for (i = 0; i < FM10K_RETA_SIZE; i++, indir += 4) {
+ u32 reta = interface->reta[i];
+
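+ /* unpack the four byte-wide entries packed into each 32-bit register */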
+ indir[0] = (reta << 24) >> 24;
+ indir[1] = (reta << 16) >> 24;
+ indir[2] = (reta << 8) >> 24;
+ indir[3] = (reta) >> 24;
+ }
+
+ return 0;
+}
+
+static int fm10k_set_reta(struct net_device *netdev, const u32 *indir)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ int i;
+ u16 rss_i;
+
+ if (!indir)
+ return 0;
+
+ /* Verify user input. */
+ rss_i = interface->ring_feature[RING_F_RSS].indices;
+ for (i = fm10k_get_reta_size(netdev); i--;) {
+ if (indir[i] < rss_i)
+ continue;
+ return -EINVAL;
+ }
+
+ /* record entries to reta table */
+ for (i = 0; i < FM10K_RETA_SIZE; i++, indir += 4) {
+ u32 reta = indir[0] |
+ (indir[1] << 8) |
+ (indir[2] << 16) |
+ (indir[3] << 24);
+
+ if (interface->reta[i] == reta)
+ continue;
+
+ interface->reta[i] = reta;
+ fm10k_write_reg(hw, FM10K_RETA(0, i), reta);
+ }
+
+ return 0;
+}
+
+static u32 fm10k_get_rssrk_size(struct net_device *netdev)
+{
+ return FM10K_RSSRK_SIZE * FM10K_RSSRK_ENTRIES_PER_REG;
+}
+
+static int fm10k_get_rssh(struct net_device *netdev, u32 *indir, u8 *key)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ int i, err;
+
+ err = fm10k_get_reta(netdev, indir);
+ if (err || !key)
+ return err;
+
+ for (i = 0; i < FM10K_RSSRK_SIZE; i++, key += 4)
+ *(__le32 *)key = cpu_to_le32(interface->rssrk[i]);
+
+ return 0;
+}
+
+static int fm10k_set_rssh(struct net_device *netdev, const u32 *indir,
+ const u8 *key)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ int i, err;
+
+ err = fm10k_set_reta(netdev, indir);
+ if (err || !key)
+ return err;
+
+ for (i = 0; i < FM10K_RSSRK_SIZE; i++, key += 4) {
+ u32 rssrk = le32_to_cpu(*(__le32 *)key);
+
+ if (interface->rssrk[i] == rssrk)
+ continue;
+
+ interface->rssrk[i] = rssrk;
+ fm10k_write_reg(hw, FM10K_RSSRK(0, i), rssrk);
+ }
+
+ return 0;
+}
+
+static unsigned int fm10k_max_channels(struct net_device *dev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ unsigned int max_combined = interface->hw.mac.max_queues;
+ u8 tcs = netdev_get_num_tc(dev);
+
+ /* For QoS report channels per traffic class, rounded down to a power of 2 */
+ if (tcs > 1)
+ max_combined = 1 << (fls(max_combined / tcs) - 1);
+
+ return max_combined;
+}
+
+static void fm10k_get_channels(struct net_device *dev,
+ struct ethtool_channels *ch)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* report maximum channels */
+ ch->max_combined = fm10k_max_channels(dev);
+
+ /* report info for other vector */
+ ch->max_other = NON_Q_VECTORS(hw);
+ ch->other_count = ch->max_other;
+
+ /* record RSS queues */
+ ch->combined_count = interface->ring_feature[RING_F_RSS].indices;
+}
+
+static int fm10k_set_channels(struct net_device *dev,
+ struct ethtool_channels *ch)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ unsigned int count = ch->combined_count;
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* verify they are not requesting separate vectors */
+ if (!count || ch->rx_count || ch->tx_count)
+ return -EINVAL;
+
+ /* verify other_count has not changed */
+ if (ch->other_count != NON_Q_VECTORS(hw))
+ return -EINVAL;
+
+ /* verify the number of channels does not exceed hardware limits */
+ if (count > fm10k_max_channels(dev))
+ return -EINVAL;
+
+ interface->ring_feature[RING_F_RSS].limit = count;
+
+ /* use setup TC to update any traffic class queue mapping */
+ return fm10k_setup_tc(dev, netdev_get_num_tc(dev));
+}
+
+static int fm10k_get_ts_info(struct net_device *dev,
+ struct ethtool_ts_info *info)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ info->so_timestamping =
+ SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ if (interface->ptp_clock)
+ info->phc_index = ptp_clock_index(interface->ptp_clock);
+ else
+ info->phc_index = -1;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON);
+
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_ALL);
+
+ return 0;
+}
+
+static const struct ethtool_ops fm10k_ethtool_ops = {
+ .get_strings = fm10k_get_strings,
+ .get_sset_count = fm10k_get_sset_count,
+ .get_ethtool_stats = fm10k_get_ethtool_stats,
+ .get_drvinfo = fm10k_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_pauseparam = fm10k_get_pauseparam,
+ .set_pauseparam = fm10k_set_pauseparam,
+ .get_msglevel = fm10k_get_msglevel,
+ .set_msglevel = fm10k_set_msglevel,
+ .get_ringparam = fm10k_get_ringparam,
+ .set_ringparam = fm10k_set_ringparam,
+ .get_coalesce = fm10k_get_coalesce,
+ .set_coalesce = fm10k_set_coalesce,
+ .get_rxnfc = fm10k_get_rxnfc,
+ .set_rxnfc = fm10k_set_rxnfc,
+ .get_regs = fm10k_get_regs,
+ .get_regs_len = fm10k_get_regs_len,
+ .self_test = fm10k_self_test,
+ .get_rxfh_indir_size = fm10k_get_reta_size,
+ .get_rxfh_key_size = fm10k_get_rssrk_size,
+ .get_rxfh = fm10k_get_rssh,
+ .set_rxfh = fm10k_set_rssh,
+ .get_channels = fm10k_get_channels,
+ .set_channels = fm10k_set_channels,
+ .get_ts_info = fm10k_get_ts_info,
+};
+
+void fm10k_set_ethtool_ops(struct net_device *dev)
+{
+ dev->ethtool_ops = &fm10k_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_iov.c b/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
new file mode 100644
index 0000000..0601908
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
@@ -0,0 +1,536 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+#include "fm10k_vf.h"
+#include "fm10k_pf.h"
+
+static s32 fm10k_iov_msg_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ struct fm10k_intfc *interface = hw->back;
+ struct pci_dev *pdev = interface->pdev;
+
+ dev_err(&pdev->dev, "Unknown message ID %u on VF %d\n",
+ **results & FM10K_TLV_ID_MASK, vf_info->vf_idx);
+
+ return fm10k_tlv_msg_error(hw, results, mbx);
+}
+
+static const struct fm10k_msg_data iov_mbx_data[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_iov_msg_error),
+};
+
+s32 fm10k_iov_event(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_iov_data *iov_data;
+ s64 mbicr, vflre;
+ int i;
+
+ /* if there is no iov_data then there are no mailboxes to process */
+ if (!ACCESS_ONCE(interface->iov_data))
+ return 0;
+
+ rcu_read_lock();
+
+ iov_data = interface->iov_data;
+
+ /* check again now that we are in the RCU block */
+ if (!iov_data)
+ goto read_unlock;
+
+ if (!(fm10k_read_reg(hw, FM10K_EICR) & FM10K_EICR_VFLR))
+ goto process_mbx;
+
+ /* read VFLRE to determine if any VFs have been reset */
+ do {
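+ /* assemble the 64-bit VFLRE bitmap with PFVFLRE(1) in the upper half
+ * and PFVFLRE(0) in the lower half; PFVFLRE(0) is sampled twice and
+ * OR'd in so events landing between the two reads are not lost
+ */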
+ vflre = fm10k_read_reg(hw, FM10K_PFVFLRE(0));
+ vflre <<= 32;
+ vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(1));
+ vflre = (vflre << 32) | (vflre >> 32);
+ vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(0));
+
+ i = iov_data->num_vfs;
+
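+ /* shift so that VF (i - 1)'s bit sits in the sign bit; doubling vflre
+ * each pass walks down one VF per iteration, and a negative value
+ * means that VF's reset bit is set
+ */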
+ for (vflre <<= 64 - i; vflre && i--; vflre += vflre) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ if (vflre >= 0)
+ continue;
+
+ hw->iov.ops.reset_resources(hw, vf_info);
+ vf_info->mbx.ops.connect(hw, &vf_info->mbx);
+ }
+ } while (i != iov_data->num_vfs);
+
+process_mbx:
+ /* read MBICR to determine which VFs require attention */
+ mbicr = fm10k_read_reg(hw, FM10K_MBICR(1));
+ mbicr <<= 32;
+ mbicr |= fm10k_read_reg(hw, FM10K_MBICR(0));
+
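+ /* resume from where the previous pass stopped, or start from the last
+ * VF (the GNU "x ? : y" form yields x when it is non-zero)
+ */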
+ i = iov_data->next_vf_mbx ? : iov_data->num_vfs;
+
+ for (mbicr <<= 64 - i; i--; mbicr += mbicr) {
+ struct fm10k_mbx_info *mbx = &iov_data->vf_info[i].mbx;
+
+ if (mbicr >= 0)
+ continue;
+
+ if (!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU))
+ break;
+
+ mbx->ops.process(hw, mbx);
+ }
+
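+ /* if the SM mailbox filled up, remember where to resume; if this pass
+ * was itself a resume, wrap once to cover the VFs above that point
+ */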
+ if (i >= 0) {
+ iov_data->next_vf_mbx = i + 1;
+ } else if (iov_data->next_vf_mbx) {
+ iov_data->next_vf_mbx = 0;
+ goto process_mbx;
+ }
+read_unlock:
+ rcu_read_unlock();
+
+ return 0;
+}
+
+s32 fm10k_iov_mbx(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_iov_data *iov_data;
+ int i;
+
+ /* if there is no iov_data then there are no mailboxes to process */
+ if (!ACCESS_ONCE(interface->iov_data))
+ return 0;
+
+ rcu_read_lock();
+
+ iov_data = interface->iov_data;
+
+ /* check again now that we are in the RCU block */
+ if (!iov_data)
+ goto read_unlock;
+
+ /* lock the mailbox for transmit and receive */
+ fm10k_mbx_lock(interface);
+
+process_mbx:
+ for (i = iov_data->next_vf_mbx ? : iov_data->num_vfs; i--;) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+ struct fm10k_mbx_info *mbx = &vf_info->mbx;
+ u16 glort = vf_info->glort;
+
+ /* verify port mapping is valid, if not reset port */
+ if (vf_info->vf_flags && !fm10k_glort_valid_pf(hw, glort))
+ hw->iov.ops.reset_lport(hw, vf_info);
+
+ /* reset VFs that have mailbox timed out */
+ if (!mbx->timeout) {
+ hw->iov.ops.reset_resources(hw, vf_info);
+ mbx->ops.connect(hw, mbx);
+ }
+
+ /* if no work is pending then just continue */
+ if (mbx->ops.tx_complete(mbx) && !mbx->ops.rx_ready(mbx))
+ continue;
+
+ /* guarantee we have free space in the SM mailbox */
+ if (!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU))
+ break;
+
+ /* cleanup mailbox and process received messages */
+ mbx->ops.process(hw, mbx);
+ }
+
+ if (i >= 0) {
+ iov_data->next_vf_mbx = i + 1;
+ } else if (iov_data->next_vf_mbx) {
+ iov_data->next_vf_mbx = 0;
+ goto process_mbx;
+ }
+
+ /* release the mailbox lock */
+ fm10k_mbx_unlock(interface);
+
+read_unlock:
+ rcu_read_unlock();
+
+ return 0;
+}
+
+void fm10k_iov_suspend(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ int num_vfs, i;
+
+ /* pull out num_vfs from iov_data */
+ num_vfs = iov_data ? iov_data->num_vfs : 0;
+
+ /* shut down queue mapping for VFs */
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(fm10k_dglort_vf_rss),
+ FM10K_DGLORTMAP_NONE);
+
+ /* Stop any active VFs and reset their resources */
+ for (i = 0; i < num_vfs; i++) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ hw->iov.ops.reset_resources(hw, vf_info);
+ hw->iov.ops.reset_lport(hw, vf_info);
+ }
+}
+
+int fm10k_iov_resume(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ int num_vfs, i;
+
+ /* pull out num_vfs from iov_data */
+ num_vfs = iov_data ? iov_data->num_vfs : 0;
+
+ /* return error if iov_data is not already populated */
+ if (!iov_data)
+ return -ENOMEM;
+
+ /* allocate hardware resources for the VFs */
+ hw->iov.ops.assign_resources(hw, num_vfs, num_vfs);
+
+ /* configure DGLORT mapping for RSS */
+ dglort.glort = hw->mac.dglort_map & FM10K_DGLORTMAP_NONE;
+ dglort.idx = fm10k_dglort_vf_rss;
+ dglort.inner_rss = 1;
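+ /* fls(x - 1) computes the number of bits needed to index x entries */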
+ dglort.rss_l = fls(fm10k_queues_per_pool(hw) - 1);
+ dglort.queue_b = fm10k_vf_queue_index(hw, 0);
+ dglort.vsi_l = fls(hw->iov.total_vfs - 1);
+ dglort.vsi_b = 1;
+
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* assign resources to the device */
+ for (i = 0; i < num_vfs; i++) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ /* allocate all but the last GLORT to the VFs */
+ if (i == ((~hw->mac.dglort_map) >> FM10K_DGLORTMAP_MASK_SHIFT))
+ break;
+
+ /* assign GLORT to VF, and restrict it to multicast */
+ hw->iov.ops.set_lport(hw, vf_info, i,
+ FM10K_VF_FLAG_MULTI_CAPABLE);
+
+ /* assign our default vid to the VF following reset */
+ vf_info->sw_vid = hw->mac.default_vid;
+
+ /* mailbox is disconnected so we don't send a message */
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+
+ /* now we are ready so we can connect */
+ vf_info->mbx.ops.connect(hw, &vf_info->mbx);
+ }
+
+ return 0;
+}
+
+s32 fm10k_iov_update_pvid(struct fm10k_intfc *interface, u16 glort, u16 pvid)
+{
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vf_info *vf_info;
+ u16 vf_idx = (glort - hw->mac.dglort_map) & FM10K_DGLORTMAP_NONE;
+
+ /* no IOV support, not our message to process */
+ if (!iov_data)
+ return FM10K_ERR_PARAM;
+
+ /* glort outside our range, not our message to process */
+ if (vf_idx >= iov_data->num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* determine if an update has occurred and if so notify the VF */
+ vf_info = &iov_data->vf_info[vf_idx];
+ if (vf_info->sw_vid != pvid) {
+ vf_info->sw_vid = pvid;
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+ }
+
+ return 0;
+}
+
+static void fm10k_iov_free_data(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+
+ if (!interface->iov_data)
+ return;
+
+ /* reclaim hardware resources */
+ fm10k_iov_suspend(pdev);
+
+ /* drop iov_data from interface */
+ kfree_rcu(interface->iov_data, rcu);
+ interface->iov_data = NULL;
+}
+
+static s32 fm10k_iov_alloc_data(struct pci_dev *pdev, int num_vfs)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ size_t size;
+ int i, err;
+
+ /* return error if iov_data is already populated */
+ if (iov_data)
+ return -EBUSY;
+
+ /* The PF should always be able to assign resources */
+ if (!hw->iov.ops.assign_resources)
+ return -ENODEV;
+
+ /* nothing to do if no VFs are requested */
+ if (!num_vfs)
+ return 0;
+
+ /* allocate memory for VF storage */
+ size = offsetof(struct fm10k_iov_data, vf_info[num_vfs]);
+ iov_data = kzalloc(size, GFP_KERNEL);
+ if (!iov_data)
+ return -ENOMEM;
+
+ /* record number of VFs */
+ iov_data->num_vfs = num_vfs;
+
+ /* loop through vf_info structures initializing each entry */
+ for (i = 0; i < num_vfs; i++) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ /* Record VF VSI value */
+ vf_info->vsi = i + 1;
+ vf_info->vf_idx = i;
+
+ /* initialize mailbox memory */
+ err = fm10k_pfvf_mbx_init(hw, &vf_info->mbx, iov_mbx_data, i);
+ if (err) {
+ dev_err(&pdev->dev,
+ "Unable to initialize SR-IOV mailbox\n");
+ kfree(iov_data);
+ return err;
+ }
+ }
+
+ /* assign iov_data to interface */
+ interface->iov_data = iov_data;
+
+ /* allocate hardware resources for the VFs */
+ fm10k_iov_resume(pdev);
+
+ return 0;
+}
+
+void fm10k_iov_disable(struct pci_dev *pdev)
+{
+ if (pci_num_vf(pdev) && pci_vfs_assigned(pdev))
+ dev_err(&pdev->dev,
+ "Cannot disable SR-IOV while VFs are assigned\n");
+ else
+ pci_disable_sriov(pdev);
+
+ fm10k_iov_free_data(pdev);
+}
+
+static void fm10k_disable_aer_comp_abort(struct pci_dev *pdev)
+{
+ u32 err_sev;
+ int pos;
+
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
+ if (!pos)
+ return;
+
+ pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, &err_sev);
+ err_sev &= ~PCI_ERR_UNC_COMP_ABORT;
+ pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, err_sev);
+}
+
+int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs)
+{
+ int current_vfs = pci_num_vf(pdev);
+ int err = 0;
+
+ if (current_vfs && pci_vfs_assigned(pdev)) {
+ dev_err(&pdev->dev,
+ "Cannot modify SR-IOV while VFs are assigned\n");
+ num_vfs = current_vfs;
+ } else {
+ pci_disable_sriov(pdev);
+ fm10k_iov_free_data(pdev);
+ }
+
+ /* allocate resources for the VFs */
+ err = fm10k_iov_alloc_data(pdev, num_vfs);
+ if (err)
+ return err;
+
+ /* allocate VFs if not already allocated */
+ if (num_vfs && (num_vfs != current_vfs)) {
+ /* Disable completer abort error reporting as
+ * the VFs can trigger this any time they read a queue
+ * that they don't own.
+ */
+ fm10k_disable_aer_comp_abort(pdev);
+
+ err = pci_enable_sriov(pdev, num_vfs);
+ if (err) {
+ dev_err(&pdev->dev,
+ "Enable PCI SR-IOV failed: %d\n", err);
+ return err;
+ }
+ }
+
+ return num_vfs;
+}
+
+int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vf_info *vf_info;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ /* verify MAC addr is valid */
+ if (!is_zero_ether_addr(mac) && !is_valid_ether_addr(mac))
+ return -EINVAL;
+
+ /* record new MAC address */
+ vf_info = &iov_data->vf_info[vf_idx];
+ ether_addr_copy(vf_info->mac, mac);
+
+ /* assigning the MAC will send a mailbox message so lock is needed */
+ fm10k_mbx_lock(interface);
+
+ /* assign MAC address to VF */
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+
+ fm10k_mbx_unlock(interface);
+
+ return 0;
+}
+
+int fm10k_ndo_set_vf_vlan(struct net_device *netdev, int vf_idx, u16 vid,
+ u8 qos)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vf_info *vf_info;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ /* QOS is unsupported and the accepted VLAN ID range is 0-4094 */
+ if (qos || (vid > (VLAN_VID_MASK - 1)))
+ return -EINVAL;
+
+ vf_info = &iov_data->vf_info[vf_idx];
+
+ /* exit if there is nothing to do */
+ if (vf_info->pf_vid == vid)
+ return 0;
+
+ /* record default VLAN ID for VF */
+ vf_info->pf_vid = vid;
+
+ /* assigning the VLAN will send a mailbox message so lock is needed */
+ fm10k_mbx_lock(interface);
+
+ /* Clear the VLAN table for the VF */
+ hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, vf_info->vsi, false);
+
+ /* Update VF assignment and trigger reset */
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+
+ fm10k_mbx_unlock(interface);
+
+ return 0;
+}
+
+int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int unused,
+ int rate)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ /* rate limit cannot be less than 10Mb/s or greater than link speed */
+ if (rate && ((rate < FM10K_VF_TC_MIN) || rate > FM10K_VF_TC_MAX))
+ return -EINVAL;
+
+ /* store values */
+ iov_data->vf_info[vf_idx].rate = rate;
+
+ /* update hardware configuration */
+ hw->iov.ops.configure_tc(hw, vf_idx, rate);
+
+ return 0;
+}
+
+int fm10k_ndo_get_vf_config(struct net_device *netdev,
+ int vf_idx, struct ifla_vf_info *ivi)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_vf_info *vf_info;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ vf_info = &iov_data->vf_info[vf_idx];
+
+ ivi->vf = vf_idx;
+ ivi->max_tx_rate = vf_info->rate;
+ ivi->min_tx_rate = 0;
+ ether_addr_copy(ivi->mac, vf_info->mac);
+ ivi->vlan = vf_info->pf_vid;
+ ivi->qos = 0;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
new file mode 100644
index 0000000..6c800a3
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -0,0 +1,1979 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <net/ipv6.h>
+#include <net/ip.h>
+#include <net/tcp.h>
+#include <linux/if_macvlan.h>
+#include <linux/prefetch.h>
+
+#include "fm10k.h"
+
+#define DRV_VERSION "0.12.2-k"
+const char fm10k_driver_version[] = DRV_VERSION;
+char fm10k_driver_name[] = "fm10k";
+static const char fm10k_driver_string[] =
+ "Intel(R) Ethernet Switch Host Interface Driver";
+static const char fm10k_copyright[] =
+ "Copyright (c) 2013 Intel Corporation.";
+
+MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
+MODULE_DESCRIPTION("Intel(R) Ethernet Switch Host Interface Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRV_VERSION);
+
+/**
+ * fm10k_init_module - Driver Registration Routine
+ *
+ * fm10k_init_module is the first routine called when the driver is
+ * loaded. All it does is register with the PCI subsystem.
+ **/
+static int __init fm10k_init_module(void)
+{
+ pr_info("%s - version %s\n", fm10k_driver_string, fm10k_driver_version);
+ pr_info("%s\n", fm10k_copyright);
+
+ fm10k_dbg_init();
+
+ return fm10k_register_pci_driver();
+}
+module_init(fm10k_init_module);
+
+/**
+ * fm10k_exit_module - Driver Exit Cleanup Routine
+ *
+ * fm10k_exit_module is called just before the driver is removed
+ * from memory.
+ **/
+static void __exit fm10k_exit_module(void)
+{
+ fm10k_unregister_pci_driver();
+
+ fm10k_dbg_exit();
+}
+module_exit(fm10k_exit_module);
+
+static bool fm10k_alloc_mapped_page(struct fm10k_ring *rx_ring,
+ struct fm10k_rx_buffer *bi)
+{
+ struct page *page = bi->page;
+ dma_addr_t dma;
+
+ /* page will only be NULL if the buffer was consumed */
+ if (likely(page))
+ return true;
+
+ /* alloc new page for storage */
+ page = alloc_page(GFP_ATOMIC | __GFP_COLD);
+ if (unlikely(!page)) {
+ rx_ring->rx_stats.alloc_failed++;
+ return false;
+ }
+
+ /* map page for use */
+ dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (dma_mapping_error(rx_ring->dev, dma)) {
+ __free_page(page);
+ bi->page = NULL;
+
+ rx_ring->rx_stats.alloc_failed++;
+ return false;
+ }
+
+ bi->dma = dma;
+ bi->page = page;
+ bi->page_offset = 0;
+
+ return true;
+}
+
+/**
+ * fm10k_alloc_rx_buffers - Replace used receive buffers
+ * @rx_ring: ring to place buffers on
+ * @cleaned_count: number of buffers to replace
+ **/
+void fm10k_alloc_rx_buffers(struct fm10k_ring *rx_ring, u16 cleaned_count)
+{
+ union fm10k_rx_desc *rx_desc;
+ struct fm10k_rx_buffer *bi;
+ u16 i = rx_ring->next_to_use;
+
+ /* nothing to do */
+ if (!cleaned_count)
+ return;
+
+ rx_desc = FM10K_RX_DESC(rx_ring, i);
+ bi = &rx_ring->rx_buffer[i];
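+ /* bias i by the ring size so the wrap check below is a simple test
+ * against zero
+ */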
+ i -= rx_ring->count;
+
+ do {
+ if (!fm10k_alloc_mapped_page(rx_ring, bi))
+ break;
+
+ /* Refresh the desc even if buffer_addrs didn't change
+ * because each write-back erases this info.
+ */
+ rx_desc->q.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
+
+ rx_desc++;
+ bi++;
+ i++;
+ if (unlikely(!i)) {
+ rx_desc = FM10K_RX_DESC(rx_ring, 0);
+ bi = rx_ring->rx_buffer;
+ i -= rx_ring->count;
+ }
+
+ /* clear the hdr_addr for the next_to_use descriptor */
+ rx_desc->q.hdr_addr = 0;
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ i += rx_ring->count;
+
+ if (rx_ring->next_to_use != i) {
+ /* record the next descriptor to use */
+ rx_ring->next_to_use = i;
+
+ /* update next to alloc since we have filled the ring */
+ rx_ring->next_to_alloc = i;
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+ * applicable for weak-ordered memory model archs,
+ * such as IA-64).
+ */
+ wmb();
+
+ /* notify hardware of new descriptors */
+ writel(i, rx_ring->tail);
+ }
+}
+
+/**
+ * fm10k_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the interface
+ **/
+static void fm10k_reuse_rx_page(struct fm10k_ring *rx_ring,
+ struct fm10k_rx_buffer *old_buff)
+{
+ struct fm10k_rx_buffer *new_buff;
+ u16 nta = rx_ring->next_to_alloc;
+
+ new_buff = &rx_ring->rx_buffer[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
+
+ /* transfer page from old buffer to new buffer */
+ memcpy(new_buff, old_buff, sizeof(struct fm10k_rx_buffer));
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rx_ring->dev, old_buff->dma,
+ old_buff->page_offset,
+ FM10K_RX_BUFSZ,
+ DMA_FROM_DEVICE);
+}
+
+static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
+ struct page *page,
+ unsigned int truesize)
+{
+ /* avoid re-using remote pages */
+ if (unlikely(page_to_nid(page) != numa_mem_id()))
+ return false;
+
+#if (PAGE_SIZE < 8192)
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ return false;
+
+ /* flip page offset to other buffer */
+ rx_buffer->page_offset ^= FM10K_RX_BUFSZ;
+
+ /* since we are the only owner of the page and we need to
+ * increment it, just set the value to 2 in order to avoid
+ * an unnecessary locked operation
+ */
+ atomic_set(&page->_count, 2);
+#else
+ /* move offset up to the next cache line */
+ rx_buffer->page_offset += truesize;
+
+ if (rx_buffer->page_offset > (PAGE_SIZE - FM10K_RX_BUFSZ))
+ return false;
+
+ /* bump ref count on page before it is given to the stack */
+ get_page(page);
+#endif
+
+ return true;
+}
+
+/**
+ * fm10k_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the interface.
+ **/
+static bool fm10k_add_rx_frag(struct fm10k_ring *rx_ring,
+ struct fm10k_rx_buffer *rx_buffer,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct page *page = rx_buffer->page;
+ unsigned int size = le16_to_cpu(rx_desc->w.length);
+#if (PAGE_SIZE < 8192)
+ unsigned int truesize = FM10K_RX_BUFSZ;
+#else
+ unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+#endif
+
+ if ((size <= FM10K_RX_HDR_LEN) && !skb_is_nonlinear(skb)) {
+ unsigned char *va = page_address(page) + rx_buffer->page_offset;
+
+ memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+
+ /* we can reuse buffer as-is, just make sure it is local */
+ if (likely(page_to_nid(page) == numa_mem_id()))
+ return true;
+
+ /* this page cannot be reused so discard it */
+ put_page(page);
+ return false;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ rx_buffer->page_offset, size, truesize);
+
+ return fm10k_can_reuse_rx_page(rx_buffer, page, truesize);
+}
+
+static struct sk_buff *fm10k_fetch_rx_buffer(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct fm10k_rx_buffer *rx_buffer;
+ struct page *page;
+
+ rx_buffer = &rx_ring->rx_buffer[rx_ring->next_to_clean];
+
+ page = rx_buffer->page;
+ prefetchw(page);
+
+ if (likely(!skb)) {
+ void *page_addr = page_address(page) +
+ rx_buffer->page_offset;
+
+ /* prefetch first cache line of first page */
+ prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+ prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+ /* allocate a skb to store the frags */
+ skb = netdev_alloc_skb_ip_align(rx_ring->netdev,
+ FM10K_RX_HDR_LEN);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.alloc_failed++;
+ return NULL;
+ }
+
+ /* we will be copying header into skb->data in
+ * pskb_may_pull so it is in our interest to prefetch
+ * it now to avoid a possible cache miss
+ */
+ prefetchw(skb->data);
+ }
+
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->dma,
+ rx_buffer->page_offset,
+ FM10K_RX_BUFSZ,
+ DMA_FROM_DEVICE);
+
+ /* pull page into skb */
+ if (fm10k_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+ /* hand second half of page back to the ring */
+ fm10k_reuse_rx_page(rx_ring, rx_buffer);
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rx_ring->dev, rx_buffer->dma,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ }
+
+ /* clear contents of rx_buffer */
+ rx_buffer->page = NULL;
+
+ return skb;
+}
+
+static inline void fm10k_rx_checksum(struct fm10k_ring *ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ skb_checksum_none_assert(skb);
+
+ /* Rx checksum disabled via ethtool */
+ if (!(ring->netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ /* TCP/UDP checksum error bit is set */
+ if (fm10k_test_staterr(rx_desc,
+ FM10K_RXD_STATUS_L4E |
+ FM10K_RXD_STATUS_L4E2 |
+ FM10K_RXD_STATUS_IPE |
+ FM10K_RXD_STATUS_IPE2)) {
+ ring->rx_stats.csum_err++;
+ return;
+ }
+
+ /* It must be a TCP or UDP packet with a valid checksum */
+ if (fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS2))
+ skb->encapsulation = true;
+ else if (!fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS))
+ return;
+
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+}
+
+#define FM10K_RSS_L4_TYPES_MASK \
+ ((1ul << FM10K_RSSTYPE_IPV4_TCP) | \
+ (1ul << FM10K_RSSTYPE_IPV4_UDP) | \
+ (1ul << FM10K_RSSTYPE_IPV6_TCP) | \
+ (1ul << FM10K_RSSTYPE_IPV6_UDP))
+
+static inline void fm10k_rx_hash(struct fm10k_ring *ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u16 rss_type;
+
+ if (!(ring->netdev->features & NETIF_F_RXHASH))
+ return;
+
+ rss_type = le16_to_cpu(rx_desc->w.pkt_info) & FM10K_RXD_RSSTYPE_MASK;
+ if (!rss_type)
+ return;
+
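+ /* report an L4 hash when the RSS type covers TCP/UDP ports, else L3 */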
+ skb_set_hash(skb, le32_to_cpu(rx_desc->d.rss),
+ (FM10K_RSS_L4_TYPES_MASK & (1ul << rss_type)) ?
+ PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
+}
+
+static void fm10k_rx_hwtstamp(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct fm10k_intfc *interface = rx_ring->q_vector->interface;
+
+ FM10K_CB(skb)->tstamp = rx_desc->q.timestamp;
+
+ if (unlikely(interface->flags & FM10K_FLAG_RX_TS_ENABLED))
+ fm10k_systime_to_hwtstamp(interface, skb_hwtstamps(skb),
+ le64_to_cpu(rx_desc->q.timestamp));
+}
+
+static void fm10k_type_trans(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct net_device *dev = rx_ring->netdev;
+ struct fm10k_l2_accel *l2_accel = rcu_dereference_bh(rx_ring->l2_accel);
+
+ /* check to see if DGLORT belongs to a MACVLAN */
+ if (l2_accel) {
+ u16 idx = le16_to_cpu(FM10K_CB(skb)->fi.w.dglort) - 1;
+
+ idx -= l2_accel->dglort;
+ if (idx < l2_accel->size && l2_accel->macvlan[idx])
+ dev = l2_accel->macvlan[idx];
+ else
+ l2_accel = NULL;
+ }
+
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if (!l2_accel)
+ return;
+
+ /* update MACVLAN statistics */
+ macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, 1,
+ !!(rx_desc->w.hdr_info &
+ cpu_to_le16(FM10K_RXD_HDR_INFO_XC_MASK)));
+}
+
+/**
+ * fm10k_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, timestamp, protocol, and
+ * other fields within the skb.
+ **/
+static unsigned int fm10k_process_skb_fields(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ unsigned int len = skb->len;
+
+ fm10k_rx_hash(rx_ring, rx_desc, skb);
+
+ fm10k_rx_checksum(rx_ring, rx_desc, skb);
+
+ fm10k_rx_hwtstamp(rx_ring, rx_desc, skb);
+
+ FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
+
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+
+ FM10K_CB(skb)->fi.d.glort = rx_desc->d.glort;
+
+ if (rx_desc->w.vlan) {
+ u16 vid = le16_to_cpu(rx_desc->w.vlan);
+
+ if (vid != rx_ring->vid)
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
+ }
+
+ fm10k_type_trans(rx_ring, rx_desc, skb);
+
+ return len;
+}
+
+/**
+ * fm10k_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool fm10k_is_non_eop(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc)
+{
+ u32 ntc = rx_ring->next_to_clean + 1;
+
+ /* fetch, update, and store next to clean */
+ ntc = (ntc < rx_ring->count) ? ntc : 0;
+ rx_ring->next_to_clean = ntc;
+
+ prefetch(FM10K_RX_DESC(rx_ring, ntc));
+
+ if (likely(fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_EOP)))
+ return false;
+
+ return true;
+}
+
+/**
+ * fm10k_pull_tail - fm10k specific version of skb_pull_tail
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an fm10k specific version of __pskb_pull_tail. The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void fm10k_pull_tail(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va;
+ unsigned int pull_len;
+
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the low memory pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+ /* we need the header to contain at least ETH_HLEN, and up to 60
+ * bytes when skb->len is less than 60 so that skb_pad can pad it.
+ */
+ pull_len = eth_get_headlen(va, FM10K_RX_HDR_LEN);
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, pull_len);
+ frag->page_offset += pull_len;
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
+/**
+ * fm10k_cleanup_headers - Correct corrupted or empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being fixed
+ *
+ * Address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool fm10k_cleanup_headers(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ if (unlikely(fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_RXE))) {
+ dev_kfree_skb_any(skb);
+ rx_ring->rx_stats.errors++;
+ return true;
+ }
+
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ fm10k_pull_tail(rx_ring, rx_desc, skb);
+
+ /* if skb_pad returns an error the skb was freed */
+ if (unlikely(skb->len < 60)) {
+ int pad_len = 60 - skb->len;
+
+ if (skb_pad(skb, pad_len))
+ return true;
+ __skb_put(skb, pad_len);
+ }
+
+ return false;
+}
+
+/**
+ * fm10k_receive_skb - helper function to handle rx indications
+ * @q_vector: structure containing interrupt and ring information
+ * @skb: packet to send up
+ **/
+static void fm10k_receive_skb(struct fm10k_q_vector *q_vector,
+ struct sk_buff *skb)
+{
+ napi_gro_receive(&q_vector->napi, skb);
+}
+
+static bool fm10k_clean_rx_irq(struct fm10k_q_vector *q_vector,
+ struct fm10k_ring *rx_ring,
+ int budget)
+{
+ struct sk_buff *skb = rx_ring->skb;
+ unsigned int total_bytes = 0, total_packets = 0;
+ u16 cleaned_count = fm10k_desc_unused(rx_ring);
+
+ do {
+ union fm10k_rx_desc *rx_desc;
+
+ /* return some buffers to hardware, one at a time is too slow */
+ if (cleaned_count >= FM10K_RX_BUFFER_WRITE) {
+ fm10k_alloc_rx_buffers(rx_ring, cleaned_count);
+ cleaned_count = 0;
+ }
+
+ rx_desc = FM10K_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
+ if (!fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_DD))
+ break;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the rx_desc until we know the
+ * RXD_STATUS_DD bit is set
+ */
+ rmb();
+
+ /* retrieve a buffer from the ring */
+ skb = fm10k_fetch_rx_buffer(rx_ring, rx_desc, skb);
+
+ /* exit if we failed to retrieve a buffer */
+ if (!skb)
+ break;
+
+ cleaned_count++;
+
+ /* fetch next buffer in frame if non-eop */
+ if (fm10k_is_non_eop(rx_ring, rx_desc))
+ continue;
+
+ /* verify the packet layout is correct */
+ if (fm10k_cleanup_headers(rx_ring, rx_desc, skb)) {
+ skb = NULL;
+ continue;
+ }
+
+ /* populate checksum, timestamp, VLAN, and protocol */
+ total_bytes += fm10k_process_skb_fields(rx_ring, rx_desc, skb);
+
+ fm10k_receive_skb(q_vector, skb);
+
+ /* reset skb pointer */
+ skb = NULL;
+
+ /* update budget accounting */
+ total_packets++;
+ } while (likely(total_packets < budget));
+
+ /* place incomplete frames back on ring for completion */
+ rx_ring->skb = skb;
+
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->stats.packets += total_packets;
+ rx_ring->stats.bytes += total_bytes;
+ u64_stats_update_end(&rx_ring->syncp);
+ q_vector->rx.total_packets += total_packets;
+ q_vector->rx.total_bytes += total_bytes;
+
+ return total_packets < budget;
+}
+
+#define VXLAN_HLEN (sizeof(struct udphdr) + 8)
+static struct ethhdr *fm10k_port_is_vxlan(struct sk_buff *skb)
+{
+ struct fm10k_intfc *interface = netdev_priv(skb->dev);
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* we can only offload a vxlan if we recognize it as such */
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port, list);
+
+ if (!vxlan_port)
+ return NULL;
+ if (vxlan_port->port != udp_hdr(skb)->dest)
+ return NULL;
+
+ /* return offset of udp_hdr plus 8 bytes for VXLAN header */
+ return (struct ethhdr *)(skb_transport_header(skb) + VXLAN_HLEN);
+}
+
+#define FM10K_NVGRE_RESERVED0_FLAGS htons(0x9FFF)
+#define NVGRE_TNI htons(0x2000)
+struct fm10k_nvgre_hdr {
+ __be16 flags;
+ __be16 proto;
+ __be32 tni;
+};
+
+static struct ethhdr *fm10k_gre_is_nvgre(struct sk_buff *skb)
+{
+ struct fm10k_nvgre_hdr *nvgre_hdr;
+ int hlen = ip_hdrlen(skb);
+
+ /* currently only IPv4 is supported due to hlen above */
+ if (vlan_get_protocol(skb) != htons(ETH_P_IP))
+ return NULL;
+
+ /* our transport header should be NVGRE */
+ nvgre_hdr = (struct fm10k_nvgre_hdr *)(skb_network_header(skb) + hlen);
+
+ /* verify all reserved flags are 0 */
+ if (nvgre_hdr->flags & FM10K_NVGRE_RESERVED0_FLAGS)
+ return NULL;
+
+ /* verify protocol is transparent Ethernet bridging */
+ if (nvgre_hdr->proto != htons(ETH_P_TEB))
+ return NULL;
+
+ /* report start of ethernet header; without the TNI bit the key field
+ * is absent and the inner frame begins at its offset
+ */
+ if (nvgre_hdr->flags & NVGRE_TNI)
+ return (struct ethhdr *)(nvgre_hdr + 1);
+
+ return (struct ethhdr *)(&nvgre_hdr->tni);
+}
+
+static __be16 fm10k_tx_encap_offload(struct sk_buff *skb)
+{
+ struct ethhdr *eth_hdr;
+ u8 l4_hdr = 0;
+
+ switch (vlan_get_protocol(skb)) {
+ case htons(ETH_P_IP):
+ l4_hdr = ip_hdr(skb)->protocol;
+ break;
+ case htons(ETH_P_IPV6):
+ l4_hdr = ipv6_hdr(skb)->nexthdr;
+ break;
+ default:
+ return 0;
+ }
+
+ switch (l4_hdr) {
+ case IPPROTO_UDP:
+ eth_hdr = fm10k_port_is_vxlan(skb);
+ break;
+ case IPPROTO_GRE:
+ eth_hdr = fm10k_gre_is_nvgre(skb);
+ break;
+ default:
+ return 0;
+ }
+
+ if (!eth_hdr)
+ return 0;
+
+ switch (eth_hdr->h_proto) {
+ case htons(ETH_P_IP):
+ case htons(ETH_P_IPV6):
+ break;
+ default:
+ return 0;
+ }
+
+ return eth_hdr->h_proto;
+}
+
+static int fm10k_tso(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct fm10k_tx_desc *tx_desc;
+ unsigned char *th;
+ u8 hdrlen;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ if (!skb_is_gso(skb))
+ return 0;
+
+ /* compute header lengths */
+ if (skb->encapsulation) {
+ if (!fm10k_tx_encap_offload(skb))
+ goto err_vxlan;
+ th = skb_inner_transport_header(skb);
+ } else {
+ th = skb_transport_header(skb);
+ }
+
+ /* compute offset from SOF to transport header and add header len */
+ hdrlen = (th - skb->data) + (((struct tcphdr *)th)->doff << 2);
+
+ first->tx_flags |= FM10K_TX_FLAGS_CSUM;
+
+ /* update gso size and bytecount with header size */
+ first->gso_segs = skb_shinfo(skb)->gso_segs;
+ first->bytecount += (first->gso_segs - 1) * hdrlen;
+
+ /* populate Tx descriptor header size and mss */
+ tx_desc = FM10K_TX_DESC(tx_ring, tx_ring->next_to_use);
+ tx_desc->hdrlen = hdrlen;
+ tx_desc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
+
+ return 1;
+err_vxlan:
+ tx_ring->netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ if (net_ratelimit())
+ netdev_err(tx_ring->netdev,
+ "TSO requested for unsupported tunnel, disabling offload\n");
+ return -1;
+}
+
+static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct fm10k_tx_desc *tx_desc;
+ union {
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ u8 *raw;
+ } network_hdr;
+ __be16 protocol;
+ u8 l4_hdr = 0;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ goto no_csum;
+
+ if (skb->encapsulation) {
+ protocol = fm10k_tx_encap_offload(skb);
+ if (!protocol) {
+ if (skb_checksum_help(skb)) {
+ dev_warn(tx_ring->dev,
+ "failed to offload encap csum!\n");
+ tx_ring->tx_stats.csum_err++;
+ }
+ goto no_csum;
+ }
+ network_hdr.raw = skb_inner_network_header(skb);
+ } else {
+ protocol = vlan_get_protocol(skb);
+ network_hdr.raw = skb_network_header(skb);
+ }
+
+ switch (protocol) {
+ case htons(ETH_P_IP):
+ l4_hdr = network_hdr.ipv4->protocol;
+ break;
+ case htons(ETH_P_IPV6):
+ l4_hdr = network_hdr.ipv6->nexthdr;
+ break;
+ default:
+ if (unlikely(net_ratelimit())) {
+ dev_warn(tx_ring->dev,
+ "partial checksum but ip version=%x!\n",
+ protocol);
+ }
+ tx_ring->tx_stats.csum_err++;
+ goto no_csum;
+ }
+
+ switch (l4_hdr) {
+ case IPPROTO_TCP:
+ case IPPROTO_UDP:
+ break;
+ case IPPROTO_GRE:
+ if (skb->encapsulation)
+ break;
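+ /* fall through */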
+ default:
+ if (unlikely(net_ratelimit())) {
+ dev_warn(tx_ring->dev,
+ "partial checksum but l4 proto=%x!\n",
+ l4_hdr);
+ }
+ tx_ring->tx_stats.csum_err++;
+ goto no_csum;
+ }
+
+ /* update TX checksum flag */
+ first->tx_flags |= FM10K_TX_FLAGS_CSUM;
+
+no_csum:
+ /* populate Tx descriptor header size and mss */
+ tx_desc = FM10K_TX_DESC(tx_ring, tx_ring->next_to_use);
+ tx_desc->hdrlen = 0;
+ tx_desc->mss = 0;
+}
+
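+/* map a flag from the _input bit position to the _result bit position,
+ * scaling up or down depending on which of the two bits is higher
+ */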
+#define FM10K_SET_FLAG(_input, _flag, _result) \
+ ((_flag <= _result) ? \
+ ((u32)(_input & _flag) * (_result / _flag)) : \
+ ((u32)(_input & _flag) / (_flag / _result)))
+
+static u8 fm10k_tx_desc_flags(struct sk_buff *skb, u32 tx_flags)
+{
+ /* set type for advanced descriptor with frame checksum insertion */
+ u32 desc_flags = 0;
+
+ /* set timestamping bits */
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ likely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS))
+ desc_flags |= FM10K_TXD_FLAG_TIME;
+
+ /* set checksum offload bits */
+ desc_flags |= FM10K_SET_FLAG(tx_flags, FM10K_TX_FLAGS_CSUM,
+ FM10K_TXD_FLAG_CSUM);
+
+ return desc_flags;
+}
+
+static bool fm10k_tx_desc_push(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_desc *tx_desc, u16 i,
+ dma_addr_t dma, unsigned int size, u8 desc_flags)
+{
+ /* set RS and INT for last frame in a cache line */
+ if ((++i & (FM10K_TXD_WB_FIFO_SIZE - 1)) == 0)
+ desc_flags |= FM10K_TXD_FLAG_RS | FM10K_TXD_FLAG_INT;
+
+ /* record values to descriptor */
+ tx_desc->buffer_addr = cpu_to_le64(dma);
+ tx_desc->flags = desc_flags;
+ tx_desc->buflen = cpu_to_le16(size);
+
+ /* return true if we just wrapped the ring */
+ return i == tx_ring->count;
+}
+
+static void fm10k_tx_map(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct fm10k_tx_buffer *tx_buffer;
+ struct fm10k_tx_desc *tx_desc;
+ struct skb_frag_struct *frag;
+ unsigned char *data;
+ dma_addr_t dma;
+ unsigned int data_len, size;
+ u32 tx_flags = first->tx_flags;
+ u16 i = tx_ring->next_to_use;
+ u8 flags = fm10k_tx_desc_flags(skb, tx_flags);
+
+ tx_desc = FM10K_TX_DESC(tx_ring, i);
+
+ /* add HW VLAN tag */
+ if (vlan_tx_tag_present(skb))
+ tx_desc->vlan = cpu_to_le16(vlan_tx_tag_get(skb));
+ else
+ tx_desc->vlan = 0;
+
+ size = skb_headlen(skb);
+ data = skb->data;
+
+ dma = dma_map_single(tx_ring->dev, data, size, DMA_TO_DEVICE);
+
+ data_len = skb->data_len;
+ tx_buffer = first;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ if (dma_mapping_error(tx_ring->dev, dma))
+ goto dma_error;
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buffer, len, size);
+ dma_unmap_addr_set(tx_buffer, dma, dma);
+
+ while (unlikely(size > FM10K_MAX_DATA_PER_TXD)) {
+ if (fm10k_tx_desc_push(tx_ring, tx_desc++, i++, dma,
+ FM10K_MAX_DATA_PER_TXD, flags)) {
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+
+ dma += FM10K_MAX_DATA_PER_TXD;
+ size -= FM10K_MAX_DATA_PER_TXD;
+ }
+
+ if (likely(!data_len))
+ break;
+
+ if (fm10k_tx_desc_push(tx_ring, tx_desc++, i++,
+ dma, size, flags)) {
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+
+ size = skb_frag_size(frag);
+ data_len -= size;
+
+ dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+ DMA_TO_DEVICE);
+
+ tx_buffer = &tx_ring->tx_buffer[i];
+ }
+
+ /* write last descriptor with LAST bit set */
+ flags |= FM10K_TXD_FLAG_LAST;
+
+ if (fm10k_tx_desc_push(tx_ring, tx_desc, i++, dma, size, flags))
+ i = 0;
+
+ /* record bytecount for BQL */
+ netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);
+
+ /* record SW timestamp if HW timestamp is not available */
+ skb_tx_timestamp(first->skb);
+
+ /* Force memory writes to complete before letting h/w know there
+ * are new descriptors to fetch. (Only applicable for weak-ordered
+ * memory model archs, such as IA-64).
+ *
+ * We also need this memory barrier to make certain all of the
+ * status bits have been updated before next_to_watch is written.
+ */
+ wmb();
+
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ tx_ring->next_to_use = i;
+
+ /* notify HW of packet */
+ writel(i, tx_ring->tail);
+
+ /* we need this if more than one processor can write to our tail
+ * at a time; it synchronizes IO on IA64/Altix systems
+ */
+ mmiowb();
+
+ return;
+dma_error:
+ dev_err(tx_ring->dev, "TX DMA map failed\n");
+
+ /* clear dma mappings for failed tx_buffer map */
+ for (;;) {
+ tx_buffer = &tx_ring->tx_buffer[i];
+ fm10k_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+ if (tx_buffer == first)
+ break;
+ if (i == 0)
+ i = tx_ring->count;
+ i--;
+ }
+
+ tx_ring->next_to_use = i;
+}
+
+static int __fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
+{
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ smp_mb();
+
+ /* We need to check again in case another CPU has just
+ * made room available. */
+ if (likely(fm10k_desc_unused(tx_ring) < size))
+ return -EBUSY;
+
+ /* A reprieve! - use start_queue because it doesn't call schedule */
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ return 0;
+}
+
+static inline int fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
+{
+ if (likely(fm10k_desc_unused(tx_ring) >= size))
+ return 0;
+ return __fm10k_maybe_stop_tx(tx_ring, size);
+}
+
+netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
+ struct fm10k_ring *tx_ring)
+{
+ struct fm10k_tx_buffer *first;
+ int tso;
+ u32 tx_flags = 0;
+#if PAGE_SIZE > FM10K_MAX_DATA_PER_TXD
+ unsigned short f;
+#endif
+ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+
+ /* need: 1 descriptor per page * PAGE_SIZE/FM10K_MAX_DATA_PER_TXD,
+ * + 1 desc for skb_headlen/FM10K_MAX_DATA_PER_TXD,
+ * + 2 desc gap to keep tail from touching head
+ * otherwise try next time
+ */
+#if PAGE_SIZE > FM10K_MAX_DATA_PER_TXD
+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+ count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);
+#else
+ count += skb_shinfo(skb)->nr_frags;
+#endif
+ if (fm10k_maybe_stop_tx(tx_ring, count + 3)) {
+ tx_ring->tx_stats.tx_busy++;
+ return NETDEV_TX_BUSY;
+ }
+
+ /* record the location of the first descriptor for this packet */
+ first = &tx_ring->tx_buffer[tx_ring->next_to_use];
+ first->skb = skb;
+ first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+ first->gso_segs = 1;
+
+ /* record initial flags and protocol */
+ first->tx_flags = tx_flags;
+
+ tso = fm10k_tso(tx_ring, first);
+ if (tso < 0)
+ goto out_drop;
+ else if (!tso)
+ fm10k_tx_csum(tx_ring, first);
+
+ fm10k_tx_map(tx_ring, first);
+
+ fm10k_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+ return NETDEV_TX_OK;
+
+out_drop:
+ dev_kfree_skb_any(first->skb);
+ first->skb = NULL;
+
+ return NETDEV_TX_OK;
+}
+
+static u64 fm10k_get_tx_completed(struct fm10k_ring *ring)
+{
+ return ring->stats.packets;
+}
+
+static u64 fm10k_get_tx_pending(struct fm10k_ring *ring)
+{
+ /* use SW head and tail until we have real hardware */
+ u32 head = ring->next_to_clean;
+ u32 tail = ring->next_to_use;
+
+ return ((head <= tail) ? tail : tail + ring->count) - head;
+}
+
+bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring)
+{
+ u32 tx_done = fm10k_get_tx_completed(tx_ring);
+ u32 tx_done_old = tx_ring->tx_stats.tx_done_old;
+ u32 tx_pending = fm10k_get_tx_pending(tx_ring);
+
+ clear_check_for_tx_hang(tx_ring);
+
+ /* Check for a hung queue, but be thorough. This verifies
+ * that a transmit has been completed since the previous
+ * check AND there is at least one packet pending. By
+ * requiring this to fail twice we avoid races with
+ * clearing the ARMED bit and conditions where we
+ * run the check_tx_hang logic with a transmit completion
+ * pending but without time to complete it yet.
+ */
+ if (!tx_pending || (tx_done_old != tx_done)) {
+ /* update completed stats and continue */
+ tx_ring->tx_stats.tx_done_old = tx_done;
+ /* reset the countdown */
+ clear_bit(__FM10K_HANG_CHECK_ARMED, &tx_ring->state);
+
+ return false;
+ }
+
+ /* make sure it is true for two checks in a row */
+ return test_and_set_bit(__FM10K_HANG_CHECK_ARMED, &tx_ring->state);
+}
+
+/**
+ * fm10k_tx_timeout_reset - initiate reset due to Tx timeout
+ * @interface: driver private struct
+ **/
+void fm10k_tx_timeout_reset(struct fm10k_intfc *interface)
+{
+ /* Do the reset outside of interrupt context */
+ if (!test_bit(__FM10K_DOWN, &interface->state)) {
+ netdev_err(interface->netdev, "Reset interface\n");
+ interface->tx_timeout_count++;
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+ fm10k_service_event_schedule(interface);
+ }
+}
+
+/**
+ * fm10k_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
+ **/
+static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
+ struct fm10k_ring *tx_ring)
+{
+ struct fm10k_intfc *interface = q_vector->interface;
+ struct fm10k_tx_buffer *tx_buffer;
+ struct fm10k_tx_desc *tx_desc;
+ unsigned int total_bytes = 0, total_packets = 0;
+ unsigned int budget = q_vector->tx.work_limit;
+ unsigned int i = tx_ring->next_to_clean;
+
+ if (test_bit(__FM10K_DOWN, &interface->state))
+ return true;
+
+ tx_buffer = &tx_ring->tx_buffer[i];
+ tx_desc = FM10K_TX_DESC(tx_ring, i);
+ i -= tx_ring->count;
+
+ do {
+ struct fm10k_tx_desc *eop_desc = tx_buffer->next_to_watch;
+
+ /* if next_to_watch is not set then there is no work pending */
+ if (!eop_desc)
+ break;
+
+ /* prevent any other reads prior to eop_desc */
+ read_barrier_depends();
+
+ /* if DD is not set pending work has not been completed */
+ if (!(eop_desc->flags & FM10K_TXD_FLAG_DONE))
+ break;
+
+ /* clear next_to_watch to prevent false hangs */
+ tx_buffer->next_to_watch = NULL;
+
+ /* update the statistics for this packet */
+ total_bytes += tx_buffer->bytecount;
+ total_packets += tx_buffer->gso_segs;
+
+ /* free the skb */
+ dev_consume_skb_any(tx_buffer->skb);
+
+ /* unmap skb header data */
+ dma_unmap_single(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+
+ /* clear tx_buffer data */
+ tx_buffer->skb = NULL;
+ dma_unmap_len_set(tx_buffer, len, 0);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer;
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ }
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buffer, len, 0);
+ }
+ }
+
+ /* move us one more past the eop_desc for start of next pkt */
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer;
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ }
+
+ /* issue prefetch for next Tx descriptor */
+ prefetch(tx_desc);
+
+ /* update budget accounting */
+ budget--;
+ } while (likely(budget));
+
+ i += tx_ring->count;
+ tx_ring->next_to_clean = i;
+ u64_stats_update_begin(&tx_ring->syncp);
+ tx_ring->stats.bytes += total_bytes;
+ tx_ring->stats.packets += total_packets;
+ u64_stats_update_end(&tx_ring->syncp);
+ q_vector->tx.total_bytes += total_bytes;
+ q_vector->tx.total_packets += total_packets;
+
+ if (check_for_tx_hang(tx_ring) && fm10k_check_tx_hang(tx_ring)) {
+ /* schedule immediate reset if we believe we hung */
+ struct fm10k_hw *hw = &interface->hw;
+
+ netif_err(interface, drv, tx_ring->netdev,
+ "Detected Tx Unit Hang\n"
+ " Tx Queue <%d>\n"
+ " TDH, TDT <%x>, <%x>\n"
+ " next_to_use <%x>\n"
+ " next_to_clean <%x>\n",
+ tx_ring->queue_index,
+ fm10k_read_reg(hw, FM10K_TDH(tx_ring->reg_idx)),
+ fm10k_read_reg(hw, FM10K_TDT(tx_ring->reg_idx)),
+ tx_ring->next_to_use, i);
+
+ netif_stop_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+
+ netif_info(interface, probe, tx_ring->netdev,
+ "tx hang %d detected on queue %d, resetting interface\n",
+ interface->tx_timeout_count + 1,
+ tx_ring->queue_index);
+
+ fm10k_tx_timeout_reset(interface);
+
+ /* the netdev is about to reset, no point in enabling stuff */
+ return true;
+ }
+
+ /* notify netdev of completed buffers */
+ netdev_tx_completed_queue(txring_txq(tx_ring),
+ total_packets, total_bytes);
+
+#define TX_WAKE_THRESHOLD min_t(u16, FM10K_MIN_TXD - 1, DESC_NEEDED * 2)
+ if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+ (fm10k_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
+ /* Make sure that anybody stopping the queue after this
+ * sees the new next_to_clean.
+ */
+ smp_mb();
+ if (__netif_subqueue_stopped(tx_ring->netdev,
+ tx_ring->queue_index) &&
+ !test_bit(__FM10K_DOWN, &interface->state)) {
+ netif_wake_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ }
+ }
+
+ return !!budget;
+}
+
+/**
+ * fm10k_update_itr - update the dynamic ITR value based on packet size
+ * @ring_container: Container for rings to have ITR updated
+ *
+ * Stores a new ITR value based strictly on packet size. The
+ * divisors and thresholds used by this function were determined based
+ * on theoretical maximum wire speed and testing data, in order to
+ * minimize response time while increasing bulk throughput.
+ **/
+static void fm10k_update_itr(struct fm10k_ring_container *ring_container)
+{
+ unsigned int avg_wire_size, packets;
+
+ /* Only update ITR if we are using adaptive setting */
+ if (!(ring_container->itr & FM10K_ITR_ADAPTIVE))
+ goto clear_counts;
+
+ packets = ring_container->total_packets;
+ if (!packets)
+ goto clear_counts;
+
+ avg_wire_size = ring_container->total_bytes / packets;
+
+ /* Add 24 bytes to size to account for CRC, preamble, and gap */
+ avg_wire_size += 24;
+
+ /* Don't starve jumbo frames */
+ if (avg_wire_size > 3000)
+ avg_wire_size = 3000;
+
+ /* Give a little boost to mid-size frames */
+ if ((avg_wire_size > 300) && (avg_wire_size < 1200))
+ avg_wire_size /= 3;
+ else
+ avg_wire_size /= 2;
+
+ /* write back value and retain adaptive flag */
+ ring_container->itr = avg_wire_size | FM10K_ITR_ADAPTIVE;
+
+clear_counts:
+ ring_container->total_bytes = 0;
+ ring_container->total_packets = 0;
+}
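+
+/* Worked example (illustrative): 1000 packets totalling 700000 bytes give
+ * avg_wire_size = 700, or 724 once the 24 bytes of overhead are added.
+ * That lands in the mid-size band (300..1200), so it is divided by 3 and
+ * the stored value becomes 241 | FM10K_ITR_ADAPTIVE.
+ */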
+
+static void fm10k_qv_enable(struct fm10k_q_vector *q_vector)
+{
+ /* Enable auto-mask and clear the current mask */
+ u32 itr = FM10K_ITR_ENABLE;
+
+ /* Update Tx ITR */
+ fm10k_update_itr(&q_vector->tx);
+
+ /* Update Rx ITR */
+ fm10k_update_itr(&q_vector->rx);
+
+ /* Store Tx itr in timer slot 0 */
+ itr |= (q_vector->tx.itr & FM10K_ITR_MAX);
+
+ /* Shift Rx itr to timer slot 1 */
+ itr |= (q_vector->rx.itr & FM10K_ITR_MAX) << FM10K_ITR_INTERVAL1_SHIFT;
+
+ /* Write the final value to the ITR register */
+ writel(itr, q_vector->itr);
+}
+
+static int fm10k_poll(struct napi_struct *napi, int budget)
+{
+ struct fm10k_q_vector *q_vector =
+ container_of(napi, struct fm10k_q_vector, napi);
+ struct fm10k_ring *ring;
+ int per_ring_budget;
+ bool clean_complete = true;
+
+ fm10k_for_each_ring(ring, q_vector->tx)
+ clean_complete &= fm10k_clean_tx_irq(q_vector, ring);
+
+ /* attempt to distribute budget to each queue fairly, but don't
+ * allow the budget to go below 1 because we'll exit polling
+ */
+ if (q_vector->rx.count > 1)
+ per_ring_budget = max(budget/q_vector->rx.count, 1);
+ else
+ per_ring_budget = budget;
+
+ fm10k_for_each_ring(ring, q_vector->rx)
+ clean_complete &= fm10k_clean_rx_irq(q_vector, ring,
+ per_ring_budget);
+
+ /* If all work not completed, return budget and keep polling */
+ if (!clean_complete)
+ return budget;
+
+ /* all work done, exit the polling mode */
+ napi_complete(napi);
+
+ /* re-enable the q_vector */
+ fm10k_qv_enable(q_vector);
+
+ return 0;
+}
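+
+/* For illustration: with a NAPI budget of 64 and three Rx rings on this
+ * q_vector, each ring is polled with max(64 / 3, 1) = 21 descriptors of
+ * budget, while a single-ring vector receives the full 64.
+ */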
+
+/**
+ * fm10k_set_qos_queues - Allocate queues for a QoS-enabled device
+ * @interface: board private structure to initialize
+ *
+ * When QoS (Quality of Service) is enabled, allocate queues for
+ * each traffic class. If multiqueue isn't available, then abort QoS
+ * initialization.
+ *
+ * This function handles all combinations of QoS and RSS.
+ *
+ **/
+static bool fm10k_set_qos_queues(struct fm10k_intfc *interface)
+{
+ struct net_device *dev = interface->netdev;
+ struct fm10k_ring_feature *f;
+ int rss_i, i;
+ int pcs;
+
+ /* Map queue offset and counts onto allocated tx queues */
+ pcs = netdev_get_num_tc(dev);
+
+ if (pcs <= 1)
+ return false;
+
+ /* set QoS mask and indices */
+ f = &interface->ring_feature[RING_F_QOS];
+ f->indices = pcs;
+ f->mask = (1 << fls(pcs - 1)) - 1;
+
+ /* determine the upper limit for our current DCB mode */
+ rss_i = interface->hw.mac.max_queues / pcs;
+ rss_i = 1 << (fls(rss_i) - 1);
+
+ /* set RSS mask and indices */
+ f = &interface->ring_feature[RING_F_RSS];
+ rss_i = min_t(u16, rss_i, f->limit);
+ f->indices = rss_i;
+ f->mask = (1 << fls(rss_i - 1)) - 1;
+
+ /* configure pause class to queue mapping */
+ for (i = 0; i < pcs; i++)
+ netdev_set_tc_queue(dev, i, rss_i, rss_i * i);
+
+ interface->num_rx_queues = rss_i * pcs;
+ interface->num_tx_queues = rss_i * pcs;
+
+ return true;
+}
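+
+/* Worked example (illustrative, assuming the RSS limit is not the
+ * constraint): with 4 traffic classes and interface->hw.mac.max_queues =
+ * 128, rss_i = 128 / 4 = 32, already a power of two. Each class is then
+ * mapped with netdev_set_tc_queue(dev, i, 32, 32 * i), the QoS mask is
+ * (1 << fls(3)) - 1 = 3, the RSS mask is 31, and the interface ends up
+ * with 128 Tx and 128 Rx queues.
+ */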
+
+/**
+ * fm10k_set_rss_queues - Allocate queues for RSS
+ * @interface: board private structure to initialize
+ *
+ * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try
+ * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU.
+ *
+ **/
+static bool fm10k_set_rss_queues(struct fm10k_intfc *interface)
+{
+ struct fm10k_ring_feature *f;
+ u16 rss_i;
+
+ f = &interface->ring_feature[RING_F_RSS];
+ rss_i = min_t(u16, interface->hw.mac.max_queues, f->limit);
+
+ /* record indices and power of 2 mask for RSS */
+ f->indices = rss_i;
+ f->mask = (1 << fls(rss_i - 1)) - 1;
+
+ interface->num_rx_queues = rss_i;
+ interface->num_tx_queues = rss_i;
+
+ return true;
+}
+
+/**
+ * fm10k_set_num_queues - Allocate queues for device, feature dependent
+ * @interface: board private structure to initialize
+ *
+ * This is the top level queue allocation routine. The order here is very
+ * important, starting with the largest set of features turned on at once,
+ * and ending with the smallest set of features. This way large combinations
+ * can be allocated if they're turned on, and smaller combinations are the
+ * fallthrough conditions.
+ *
+ **/
+static void fm10k_set_num_queues(struct fm10k_intfc *interface)
+{
+ /* Start with base case */
+ interface->num_rx_queues = 1;
+ interface->num_tx_queues = 1;
+
+ if (fm10k_set_qos_queues(interface))
+ return;
+
+ fm10k_set_rss_queues(interface);
+}
+
+/**
+ * fm10k_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @interface: board private structure to initialize
+ * @v_count: q_vectors allocated on interface, used for ring interleaving
+ * @v_idx: index of vector in interface struct
+ * @txr_count: total number of Tx rings to allocate
+ * @txr_idx: index of first Tx ring to allocate
+ * @rxr_count: total number of Rx rings to allocate
+ * @rxr_idx: index of first Rx ring to allocate
+ *
+ * We allocate one q_vector. If allocation fails we return -ENOMEM.
+ **/
+static int fm10k_alloc_q_vector(struct fm10k_intfc *interface,
+ unsigned int v_count, unsigned int v_idx,
+ unsigned int txr_count, unsigned int txr_idx,
+ unsigned int rxr_count, unsigned int rxr_idx)
+{
+ struct fm10k_q_vector *q_vector;
+ struct fm10k_ring *ring;
+ int ring_count, size;
+
+ ring_count = txr_count + rxr_count;
+ size = sizeof(struct fm10k_q_vector) +
+ (sizeof(struct fm10k_ring) * ring_count);
+
+ /* allocate q_vector and rings */
+ q_vector = kzalloc(size, GFP_KERNEL);
+ if (!q_vector)
+ return -ENOMEM;
+
+ /* initialize NAPI */
+ netif_napi_add(interface->netdev, &q_vector->napi,
+ fm10k_poll, NAPI_POLL_WEIGHT);
+
+ /* tie q_vector and interface together */
+ interface->q_vector[v_idx] = q_vector;
+ q_vector->interface = interface;
+ q_vector->v_idx = v_idx;
+
+ /* initialize pointer to rings */
+ ring = q_vector->ring;
+
+ /* save Tx ring container info */
+ q_vector->tx.ring = ring;
+ q_vector->tx.work_limit = FM10K_DEFAULT_TX_WORK;
+ q_vector->tx.itr = interface->tx_itr;
+ q_vector->tx.count = txr_count;
+
+ while (txr_count) {
+ /* assign generic ring traits */
+ ring->dev = &interface->pdev->dev;
+ ring->netdev = interface->netdev;
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* apply Tx specific ring traits */
+ ring->count = interface->tx_ring_count;
+ ring->queue_index = txr_idx;
+
+ /* assign ring to interface */
+ interface->tx_ring[txr_idx] = ring;
+
+ /* update count and index */
+ txr_count--;
+ txr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ /* save Rx ring container info */
+ q_vector->rx.ring = ring;
+ q_vector->rx.itr = interface->rx_itr;
+ q_vector->rx.count = rxr_count;
+
+ while (rxr_count) {
+ /* assign generic ring traits */
+ ring->dev = &interface->pdev->dev;
+ ring->netdev = interface->netdev;
+ rcu_assign_pointer(ring->l2_accel, interface->l2_accel);
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* apply Rx specific ring traits */
+ ring->count = interface->rx_ring_count;
+ ring->queue_index = rxr_idx;
+
+ /* assign ring to interface */
+ interface->rx_ring[rxr_idx] = ring;
+
+ /* update count and index */
+ rxr_count--;
+ rxr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ fm10k_dbg_q_vector_init(q_vector);
+
+ return 0;
+}
+
+/**
+ * fm10k_free_q_vector - Free memory allocated for specific interrupt vector
+ * @interface: board private structure to initialize
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void fm10k_free_q_vector(struct fm10k_intfc *interface, int v_idx)
+{
+ struct fm10k_q_vector *q_vector = interface->q_vector[v_idx];
+ struct fm10k_ring *ring;
+
+ fm10k_dbg_q_vector_exit(q_vector);
+
+ fm10k_for_each_ring(ring, q_vector->tx)
+ interface->tx_ring[ring->queue_index] = NULL;
+
+ fm10k_for_each_ring(ring, q_vector->rx)
+ interface->rx_ring[ring->queue_index] = NULL;
+
+ interface->q_vector[v_idx] = NULL;
+ netif_napi_del(&q_vector->napi);
+ kfree_rcu(q_vector, rcu);
+}
+
+/**
+ * fm10k_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @interface: board private structure to initialize
+ *
+ * We allocate one q_vector per queue interrupt. If allocation fails we
+ * return -ENOMEM.
+ **/
+static int fm10k_alloc_q_vectors(struct fm10k_intfc *interface)
+{
+ unsigned int q_vectors = interface->num_q_vectors;
+ unsigned int rxr_remaining = interface->num_rx_queues;
+ unsigned int txr_remaining = interface->num_tx_queues;
+ unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0;
+ int err;
+
+ if (q_vectors >= (rxr_remaining + txr_remaining)) {
+ for (; rxr_remaining; v_idx++) {
+ err = fm10k_alloc_q_vector(interface, q_vectors, v_idx,
+ 0, 0, 1, rxr_idx);
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining--;
+ rxr_idx++;
+ }
+ }
+
+ for (; v_idx < q_vectors; v_idx++) {
+ int rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx);
+ int tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx);
+
+ err = fm10k_alloc_q_vector(interface, q_vectors, v_idx,
+ tqpv, txr_idx,
+ rqpv, rxr_idx);
+
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining -= rqpv;
+ txr_remaining -= tqpv;
+ rxr_idx++;
+ txr_idx++;
+ }
+
+ return 0;
+
+err_out:
+ interface->num_tx_queues = 0;
+ interface->num_rx_queues = 0;
+ interface->num_q_vectors = 0;
+
+ while (v_idx--)
+ fm10k_free_q_vector(interface, v_idx);
+
+ return -ENOMEM;
+}
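+
+/* For illustration: with 8 q_vectors and 16 Tx + 16 Rx queues the first
+ * loop is skipped (8 < 32) and every pass computes rqpv = tqpv =
+ * DIV_ROUND_UP(16, 8) = 2, so each vector owns two Tx and two Rx rings.
+ * Since fm10k_alloc_q_vector() strides ring indices by v_count, vector 0
+ * takes rings 0 and 8, vector 1 takes rings 1 and 9, and so on.
+ */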
+
+/**
+ * fm10k_free_q_vectors - Free memory allocated for interrupt vectors
+ * @interface: board private structure to initialize
+ *
+ * This function frees the memory allocated to the q_vectors. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void fm10k_free_q_vectors(struct fm10k_intfc *interface)
+{
+ int v_idx = interface->num_q_vectors;
+
+ interface->num_tx_queues = 0;
+ interface->num_rx_queues = 0;
+ interface->num_q_vectors = 0;
+
+ while (v_idx--)
+ fm10k_free_q_vector(interface, v_idx);
+}
+
+/**
+ * fm10k_reset_msix_capability - reset MSI-X capability
+ * @interface: board private structure to initialize
+ *
+ * Reset the MSI-X capability back to its starting state
+ **/
+static void fm10k_reset_msix_capability(struct fm10k_intfc *interface)
+{
+ pci_disable_msix(interface->pdev);
+ kfree(interface->msix_entries);
+ interface->msix_entries = NULL;
+}
+
+/**
+ * fm10k_init_msix_capability - configure MSI-X capability
+ * @interface: board private structure to initialize
+ *
+ * Attempt to configure the interrupts using the best available
+ * capabilities of the hardware and the kernel.
+ **/
+static int fm10k_init_msix_capability(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ int v_budget, vector;
+
+	/* It's easy to be greedy for MSI-X vectors, but it really
+	 * doesn't do us much good if we have a lot more vectors
+	 * than CPUs. So let's be conservative and only ask for
+	 * (roughly) the same number of vectors as there are CPUs.
+	 * The default is to use pairs of vectors.
+	 */
+ v_budget = max(interface->num_rx_queues, interface->num_tx_queues);
+ v_budget = min_t(u16, v_budget, num_online_cpus());
+
+ /* account for vectors not related to queues */
+ v_budget += NON_Q_VECTORS(hw);
+
+ /* At the same time, hardware can only support a maximum of
+	 * hw->mac.max_msix_vectors vectors. With features
+ * such as RSS and VMDq, we can easily surpass the number of Rx and Tx
+ * descriptor queues supported by our device. Thus, we cap it off in
+ * those rare cases where the cpu count also exceeds our vector limit.
+ */
+ v_budget = min_t(int, v_budget, hw->mac.max_msix_vectors);
+
+ /* A failure in MSI-X entry allocation is fatal. */
+ interface->msix_entries = kcalloc(v_budget, sizeof(struct msix_entry),
+ GFP_KERNEL);
+ if (!interface->msix_entries)
+ return -ENOMEM;
+
+ /* populate entry values */
+ for (vector = 0; vector < v_budget; vector++)
+ interface->msix_entries[vector].entry = vector;
+
+ /* Attempt to enable MSI-X with requested value */
+ v_budget = pci_enable_msix_range(interface->pdev,
+ interface->msix_entries,
+ MIN_MSIX_COUNT(hw),
+ v_budget);
+ if (v_budget < 0) {
+ kfree(interface->msix_entries);
+ interface->msix_entries = NULL;
+ return -ENOMEM;
+ }
+
+ /* record the number of queues available for q_vectors */
+ interface->num_q_vectors = v_budget - NON_Q_VECTORS(hw);
+
+ return 0;
+}
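+
+/* Worked example (illustrative; NON_Q_VECTORS is hardware dependent):
+ * with 16 Tx and 16 Rx queues on an 8-CPU host, v_budget becomes
+ * min(max(16, 16), 8) = 8 queue vectors plus the non-queue vectors, and
+ * it is only trimmed further if hw->mac.max_msix_vectors is smaller
+ * still.
+ */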
+
+/**
+ * fm10k_cache_ring_qos - Descriptor ring to register mapping for QoS
+ * @interface: Interface structure containing rings and devices
+ *
+ * Cache the descriptor ring offsets for QoS
+ **/
+static bool fm10k_cache_ring_qos(struct fm10k_intfc *interface)
+{
+ struct net_device *dev = interface->netdev;
+ int pc, offset, rss_i, i, q_idx;
+ u16 pc_stride = interface->ring_feature[RING_F_QOS].mask + 1;
+ u8 num_pcs = netdev_get_num_tc(dev);
+
+ if (num_pcs <= 1)
+ return false;
+
+ rss_i = interface->ring_feature[RING_F_RSS].indices;
+
+ for (pc = 0, offset = 0; pc < num_pcs; pc++, offset += rss_i) {
+ q_idx = pc;
+ for (i = 0; i < rss_i; i++) {
+ interface->tx_ring[offset + i]->reg_idx = q_idx;
+ interface->tx_ring[offset + i]->qos_pc = pc;
+ interface->rx_ring[offset + i]->reg_idx = q_idx;
+ interface->rx_ring[offset + i]->qos_pc = pc;
+ q_idx += pc_stride;
+ }
+ }
+
+ return true;
+}
+
+/**
+ * fm10k_cache_ring_rss - Descriptor ring to register mapping for RSS
+ * @interface: Interface structure containing rings and devices
+ *
+ * Cache the descriptor ring offsets for RSS
+ **/
+static void fm10k_cache_ring_rss(struct fm10k_intfc *interface)
+{
+ int i;
+
+ for (i = 0; i < interface->num_rx_queues; i++)
+ interface->rx_ring[i]->reg_idx = i;
+
+ for (i = 0; i < interface->num_tx_queues; i++)
+ interface->tx_ring[i]->reg_idx = i;
+}
+
+/**
+ * fm10k_assign_rings - Map rings to network devices
+ * @interface: Interface structure containing rings and devices
+ *
+ * This function is meant to go through and configure both the network
+ * devices so that they contain rings, and configure the rings so that
+ * they function with their network devices.
+ **/
+static void fm10k_assign_rings(struct fm10k_intfc *interface)
+{
+ if (fm10k_cache_ring_qos(interface))
+ return;
+
+ fm10k_cache_ring_rss(interface);
+}
+
+static void fm10k_init_reta(struct fm10k_intfc *interface)
+{
+ u16 i, rss_i = interface->ring_feature[RING_F_RSS].indices;
+ u32 reta, base;
+
+	/* If the netdev is initialized we have to maintain the table if possible */
+ if (interface->netdev->reg_state) {
+ for (i = FM10K_RETA_SIZE; i--;) {
+ reta = interface->reta[i];
+ if ((((reta << 24) >> 24) < rss_i) &&
+ (((reta << 16) >> 24) < rss_i) &&
+ (((reta << 8) >> 24) < rss_i) &&
+ (((reta) >> 24) < rss_i))
+ continue;
+ goto repopulate_reta;
+ }
+
+ /* do nothing if all of the elements are in bounds */
+ return;
+ }
+
+repopulate_reta:
+ /* Populate the redirection table 4 entries at a time. To do this
+ * we are generating the results for n and n+2 and then interleaving
+	 * those with the results for n+1 and n+3.
+ */
+ for (i = FM10K_RETA_SIZE; i--;) {
+ /* first pass generates n and n+2 */
+ base = ((i * 0x00040004) + 0x00020000) * rss_i;
+ reta = (base & 0x3F803F80) >> 7;
+
+ /* second pass generates n+1 and n+3 */
+ base += 0x00010001 * rss_i;
+ reta |= (base & 0x3F803F80) << 1;
+
+ interface->reta[i] = reta;
+ }
+}
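+
+/* The constants above work out to entry j = (j * rss_i) >> 7 for each of
+ * the 128 single-byte entries, four per 32-bit word. With rss_i = 16,
+ * for example, entries 0-7 select queue 0, entries 8-15 queue 1, and so
+ * on through entries 120-127 on queue 15, spreading flows evenly without
+ * a per-entry loop.
+ */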
+
+/**
+ * fm10k_init_queueing_scheme - Determine proper queueing scheme
+ * @interface: board private structure to initialize
+ *
+ * We determine which queueing scheme to use based on...
+ * - Hardware queue count (num_*_queues)
+ * - defined by miscellaneous hardware support/features (RSS, etc.)
+ **/
+int fm10k_init_queueing_scheme(struct fm10k_intfc *interface)
+{
+ int err;
+
+ /* Number of supported queues */
+ fm10k_set_num_queues(interface);
+
+ /* Configure MSI-X capability */
+ err = fm10k_init_msix_capability(interface);
+ if (err) {
+ dev_err(&interface->pdev->dev,
+ "Unable to initialize MSI-X capability\n");
+ return err;
+ }
+
+ /* Allocate memory for queues */
+ err = fm10k_alloc_q_vectors(interface);
+ if (err)
+ return err;
+
+ /* Map rings to devices, and map devices to physical queues */
+ fm10k_assign_rings(interface);
+
+ /* Initialize RSS redirection table */
+ fm10k_init_reta(interface);
+
+ return 0;
+}
+
+/**
+ * fm10k_clear_queueing_scheme - Clear the current queueing scheme settings
+ * @interface: board private structure to clear queueing scheme on
+ *
+ * We go through and clear queueing specific resources and reset the structure
+ * to pre-load conditions
+ **/
+void fm10k_clear_queueing_scheme(struct fm10k_intfc *interface)
+{
+ fm10k_free_q_vectors(interface);
+ fm10k_reset_msix_capability(interface);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
new file mode 100644
index 0000000..14a4ea7
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
@@ -0,0 +1,2125 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_common.h"
+
+/**
+ * fm10k_fifo_init - Initialize a message FIFO
+ * @fifo: pointer to FIFO
+ * @buffer: pointer to memory to be used to store FIFO
+ * @size: maximum message size to store in FIFO, must be a power of 2
+ **/
+static void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size)
+{
+ fifo->buffer = buffer;
+ fifo->size = size;
+ fifo->head = 0;
+ fifo->tail = 0;
+}
+
+/**
+ * fm10k_fifo_used - Retrieve used space in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the number of DWORDs used in the FIFO
+ **/
+static u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo)
+{
+ return fifo->tail - fifo->head;
+}
+
+/**
+ * fm10k_fifo_unused - Retrieve unused space in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the number of unused DWORDs in the FIFO
+ **/
+static u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo)
+{
+ return fifo->size + fifo->head - fifo->tail;
+}
+
+/**
+ * fm10k_fifo_empty - Test to verify if fifo is empty
+ * @fifo: pointer to FIFO
+ *
+ * This function returns true if the FIFO is empty, else false
+ **/
+static bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo)
+{
+ return fifo->head == fifo->tail;
+}
+
+/**
+ * fm10k_fifo_head_offset - returns index of head with given offset
+ * @fifo: pointer to FIFO
+ * @offset: offset to add to head
+ *
+ * This function returns the index into the FIFO based on head + offset
+ **/
+static u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset)
+{
+ return (fifo->head + offset) & (fifo->size - 1);
+}
+
+/**
+ * fm10k_fifo_tail_offset - returns index of tail with given offset
+ * @fifo: pointer to FIFO
+ * @offset: offset to add to tail
+ *
+ * This function returns the index into the FIFO based on tail + offset
+ **/
+static u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset)
+{
+ return (fifo->tail + offset) & (fifo->size - 1);
+}
+
+/**
+ * fm10k_fifo_head_len - Retrieve length of first message in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the size of the first message in the FIFO
+ **/
+static u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo)
+{
+ u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0);
+
+ /* verify there is at least 1 DWORD in the fifo so *head is valid */
+ if (fm10k_fifo_empty(fifo))
+ return 0;
+
+	/* retrieve the message length */
+ return FM10K_TLV_DWORD_LEN(*head);
+}
+
+/**
+ * fm10k_fifo_head_drop - Drop the first message in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the size of the message dropped from the FIFO
+ **/
+static u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo)
+{
+ u16 len = fm10k_fifo_head_len(fifo);
+
+ /* update head so it is at the start of next frame */
+ fifo->head += len;
+
+ return len;
+}
+
+/**
+ * fm10k_mbx_index_len - Convert a head/tail index into a length value
+ * @mbx: pointer to mailbox
+ * @head: head index
+ * @tail: tail index
+ *
+ * This function takes the head and tail index and determines the length
+ * of the data indicated by this pair.
+ **/
+static u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail)
+{
+ u16 len = tail - head;
+
+ /* we wrapped so subtract 2, one for index 0, one for all 1s index */
+ if (len > tail)
+ len -= 2;
+
+ return len & ((mbx->mbmem_len << 1) - 1);
+}
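+
+/* Worked example (illustrative): with mbmem_len = 8 the index space has
+ * 16 values, of which 0 and 0xF are unusable. For head = 14 and
+ * tail = 3 the raw difference is 0xFFF5; it exceeds tail, so 2 is
+ * subtracted for the two skipped indices, and masking with 0xF yields a
+ * length of 3, matching the walk 14 -> 1 -> 2 -> 3.
+ */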
+
+/**
+ * fm10k_mbx_tail_add - Determine new tail value with added offset
+ * @mbx: pointer to mailbox
+ * @offset: length to add to head offset
+ *
+ * This function takes the local tail index and recomputes it for
+ * a given length added as an offset.
+ **/
+static u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* add/sub 1 because we cannot have offset 0 or all 1s */
+ return (tail > mbx->tail) ? --tail : ++tail;
+}
+
+/**
+ * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset
+ * @mbx: pointer to mailbox
+ * @offset: length to add to head offset
+ *
+ * This function takes the local tail index and recomputes it for
+ * a given length added as an offset.
+ **/
+static u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* sub/add 1 because we cannot have offset 0 or all 1s */
+ return (tail < mbx->tail) ? ++tail : --tail;
+}
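+
+/* For illustration, again with mbmem_len = 8 (16-value index space):
+ * fm10k_mbx_tail_add() from tail = 14 with offset = 3 computes
+ * (14 + 3 + 1) & 15 = 2; the wrap is detected (2 < 14), so the result is
+ * bumped to 3, hopping over the reserved 0xF and 0. A subsequent
+ * fm10k_mbx_tail_sub() with offset = 3 returns 14, the exact inverse.
+ */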
+
+/**
+ * fm10k_mbx_head_add - Determine new head value with added offset
+ * @mbx: pointer to mailbox
+ * @offset: length to add to head offset
+ *
+ * This function takes the local head index and recomputes it for
+ * a given length added as an offset.
+ **/
+static u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* add/sub 1 because we cannot have offset 0 or all 1s */
+ return (head > mbx->head) ? --head : ++head;
+}
+
+/**
+ * fm10k_mbx_head_sub - Determine new head value with subtracted offset
+ * @mbx: pointer to mailbox
+ * @offset: length to add to head offset
+ *
+ * This function takes the local head index and recomputes it for
+ * a given length added as an offset.
+ **/
+static u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* sub/add 1 because we cannot have offset 0 or all 1s */
+ return (head < mbx->head) ? ++head : --head;
+}
+
+/**
+ * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed
+ * @mbx: pointer to mailbox
+ *
+ * This function will return the length of the message currently being
+ * pushed onto the tail of the Rx queue.
+ **/
+static u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx)
+{
+ u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0);
+
+ /* pushed tail is only valid if pushed is set */
+ if (!mbx->pushed)
+ return 0;
+
+ return FM10K_TLV_DWORD_LEN(*tail);
+}
+
+/**
+ * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo
+ * @fifo: pointer to FIFO
+ * @msg: message array to copy from
+ * @tail_offset: additional offset to add to tail pointer
+ * @len: length of message to copy in DWORDs
+ *
+ * This function will take a message and copy it into a section of the
+ * FIFO. In order to get something into a location other than just
+ * the tail you can use tail_offset to adjust the pointer.
+ **/
+static void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo,
+ const u32 *msg, u16 tail_offset, u16 len)
+{
+ u16 end = fm10k_fifo_tail_offset(fifo, tail_offset);
+ u32 *tail = fifo->buffer + end;
+
+ /* track when we should cross the end of the FIFO */
+ end = fifo->size - end;
+
+ /* copy end of message before start of message */
+ if (end < len)
+ memcpy(fifo->buffer, msg + end, (len - end) << 2);
+ else
+ end = len;
+
+ /* Copy remaining message into Tx FIFO */
+ memcpy(tail, msg, end << 2);
+}
+
+/**
+ * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO
+ * @fifo: pointer to FIFO
+ * @msg: message array to read
+ *
+ * This function enqueues a message up to the size specified by the length
+ * contained in the first DWORD of the message and will place it at the tail
+ * of the FIFO. It will return 0 on success, or a negative value on error.
+ **/
+static s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg)
+{
+ u16 len = FM10K_TLV_DWORD_LEN(*msg);
+
+ /* verify parameters */
+ if (len > fifo->size)
+ return FM10K_MBX_ERR_SIZE;
+
+ /* verify there is room for the message */
+ if (len > fm10k_fifo_unused(fifo))
+ return FM10K_MBX_ERR_NO_SPACE;
+
+ /* Copy message into FIFO */
+ fm10k_fifo_write_copy(fifo, msg, 0, len);
+
+ /* memory barrier to guarantee FIFO is written before tail update */
+ wmb();
+
+ /* Update Tx FIFO tail */
+ fifo->tail += len;
+
+ return 0;
+}
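+
+/* Usage sketch (illustrative; the exact length encoding lives in the TLV
+ * header): for a message whose first DWORD decodes to a length of 3 via
+ * FM10K_TLV_DWORD_LEN(), fm10k_fifo_enqueue() copies three DWORDs at the
+ * tail, wrapping through fm10k_fifo_write_copy() if needed, advances
+ * fifo->tail by 3, and returns FM10K_MBX_ERR_NO_SPACE instead when fewer
+ * than three DWORDs are unused.
+ */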
+
+/**
+ * fm10k_mbx_validate_msg_size - Validate incoming message based on size
+ * @mbx: pointer to mailbox
+ * @len: length of data pushed onto buffer
+ *
+ * This function analyzes the frame and will return a non-zero value when
+ * the start of a message larger than the mailbox is detected.
+ **/
+static u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u16 total_len = 0, msg_len;
+ u32 *msg;
+
+ /* length should include previous amounts pushed */
+ len += mbx->pushed;
+
+ /* offset in message is based off of current message size */
+ do {
+ msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len);
+ msg_len = FM10K_TLV_DWORD_LEN(*msg);
+ total_len += msg_len;
+ } while (total_len < len);
+
+ /* message extends out of pushed section, but fits in FIFO */
+ if ((len < total_len) && (msg_len <= mbx->rx.size))
+ return 0;
+
+ /* return length of invalid section */
+ return (len < total_len) ? len : (len - total_len);
+}
+
+/**
+ * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will take a section of the Tx FIFO and copy it into the
+ * mailbox memory. The offset in mbmem is based on the lower bits of the
+ * tail and len determines the length to copy.
+ **/
+static void fm10k_mbx_write_copy(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+ u32 mbmem = mbx->mbmem_reg;
+ u32 *head = fifo->buffer;
+ u16 end, len, tail, mask;
+
+ if (!mbx->tail_len)
+ return;
+
+ /* determine data length and mbmem tail index */
+ mask = mbx->mbmem_len - 1;
+ len = mbx->tail_len;
+ tail = fm10k_mbx_tail_sub(mbx, len);
+ if (tail > mask)
+ tail++;
+
+ /* determine offset in the ring */
+ end = fm10k_fifo_head_offset(fifo, mbx->pulled);
+ head += end;
+
+ /* memory barrier to guarantee data is ready to be read */
+ rmb();
+
+ /* Copy message from Tx FIFO */
+ for (end = fifo->size - end; len; head = fifo->buffer) {
+ do {
+ /* adjust tail to match offset for FIFO */
+ tail &= mask;
+ if (!tail)
+ tail++;
+
+ /* write message to hardware FIFO */
+ fm10k_write_reg(hw, mbmem + tail++, *(head++));
+ } while (--len && --end);
+ }
+}
+
+/**
+ * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: acknowledgement number last received
+ *
+ * This function will push the tail index forward based on the remote
+ * head index. It will then pull up to mbmem_len DWORDs off of the
+ * head of the FIFO and will place it in the MBMEM registers
+ * associated with the mailbox.
+ **/
+static void fm10k_mbx_pull_head(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail);
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+
+ /* update number of bytes pulled and update bytes in transit */
+ mbx->pulled += mbx->tail_len - ack;
+
+ /* determine length of data to pull, reserve space for mbmem header */
+ mbmem_len = mbx->mbmem_len - 1;
+ len = fm10k_fifo_used(fifo) - mbx->pulled;
+ if (len > mbmem_len)
+ len = mbmem_len;
+
+ /* update tail and record number of bytes in transit */
+ mbx->tail = fm10k_mbx_tail_add(mbx, len - ack);
+ mbx->tail_len = len;
+
+ /* drop pulled messages from the FIFO */
+ for (len = fm10k_fifo_head_len(fifo);
+ len && (mbx->pulled >= len);
+ len = fm10k_fifo_head_len(fifo)) {
+ mbx->pulled -= fm10k_fifo_head_drop(fifo);
+ mbx->tx_messages++;
+ mbx->tx_dwords += len;
+ }
+
+ /* Copy message out from the Tx FIFO */
+ fm10k_mbx_write_copy(hw, mbx);
+}
+
+/**
+ * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will take a section of the mailbox memory and copy it
+ * into the Rx FIFO. The offset is based on the lower bits of the
+ * head and len determines the length to copy.
+ **/
+static void fm10k_mbx_read_copy(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len;
+ u32 *tail = fifo->buffer;
+ u16 end, len, head;
+
+ /* determine data length and mbmem head index */
+ len = mbx->head_len;
+ head = fm10k_mbx_head_sub(mbx, len);
+ if (head >= mbx->mbmem_len)
+ head++;
+
+ /* determine offset in the ring */
+ end = fm10k_fifo_tail_offset(fifo, mbx->pushed);
+ tail += end;
+
+ /* Copy message into Rx FIFO */
+ for (end = fifo->size - end; len; tail = fifo->buffer) {
+ do {
+ /* adjust head to match offset for FIFO */
+ head &= mbx->mbmem_len - 1;
+ if (!head)
+ head++;
+
+ /* read message from hardware FIFO */
+ *(tail++) = fm10k_read_reg(hw, mbmem + head++);
+ } while (--len && --end);
+ }
+
+ /* memory barrier to guarantee FIFO is written before tail update */
+ wmb();
+}
+
+/**
+ * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @tail: tail index of message
+ *
+ * This function will first validate the tail index and size for the
+ * incoming message. It then updates the acknowledgement number and
+ * copies the data into the FIFO. It will return 0 on success and a
+ * negative value on error.
+ **/
+static s32 fm10k_mbx_push_tail(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx,
+ u16 tail)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail);
+
+ /* determine length of data to push */
+ len = fm10k_fifo_unused(fifo) - mbx->pushed;
+ if (len > seq)
+ len = seq;
+
+ /* update head and record bytes received */
+ mbx->head = fm10k_mbx_head_add(mbx, len);
+ mbx->head_len = len;
+
+ /* nothing to do if there is no data */
+ if (!len)
+ return 0;
+
+ /* Copy msg into Rx FIFO */
+ fm10k_mbx_read_copy(hw, mbx);
+
+ /* determine if there are any invalid lengths in message */
+ if (fm10k_mbx_validate_msg_size(mbx, len))
+ return FM10K_MBX_ERR_SIZE;
+
+ /* Update pushed */
+ mbx->pushed += len;
+
+ /* flush any completed messages */
+ for (len = fm10k_mbx_pushed_tail_len(mbx);
+ len && (mbx->pushed >= len);
+ len = fm10k_mbx_pushed_tail_len(mbx)) {
+ fifo->tail += len;
+ mbx->pushed -= len;
+ mbx->rx_messages++;
+ mbx->rx_dwords += len;
+ }
+
+ return 0;
+}
+
+/* pre-generated data for generating the CRC based on the poly 0xAC9A. */
+static const u16 fm10k_crc_16b_table[256] = {
+ 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797,
+ 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678,
+ 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449,
+ 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6,
+ 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B,
+ 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4,
+ 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5,
+ 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A,
+ 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA,
+ 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035,
+ 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204,
+ 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB,
+ 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666,
+ 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789,
+ 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8,
+ 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457,
+ 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D,
+ 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2,
+ 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3,
+ 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C,
+ 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1,
+ 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E,
+ 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F,
+ 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80,
+ 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40,
+ 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF,
+ 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E,
+ 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71,
+ 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC,
+ 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13,
+ 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922,
+ 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD };
+
+/**
+ * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data
+ * @data: pointer to data to process
+ * @seed: seed value for CRC
+ * @len: length measured in 16-bit words
+ *
+ * This function will generate a CRC based on the polynomial 0xAC9A and
+ * whatever value is stored in the seed variable. Note that this
+ * value inverts the local seed and the result in order to capture all
+ * leading and trailing zeros.
+ */
+static u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len)
+{
+ u32 result = seed;
+
+ while (len--) {
+ result ^= *(data++);
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+
+ if (!(len--))
+ break;
+
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+ }
+
+ return (u16)result;
+}
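+
+/* Note on units: @len counts 16-bit words, so a caller covering n DWORDs
+ * of FIFO data passes len = 2 * n; fm10k_fifo_crc() below does exactly
+ * that by doubling its DWORD counts before calling in.
+ */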
+
+/**
+ * fm10k_fifo_crc - generate a CRC based off of FIFO data
+ * @fifo: pointer to FIFO
+ * @offset: offset point for start of FIFO
+ * @len: number of DWORDs to process
+ * @seed: seed value for CRC
+ *
+ * This function generates a CRC for some region of the FIFO
+ **/
+static u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset,
+ u16 len, u16 seed)
+{
+ u32 *data = fifo->buffer + offset;
+
+ /* track when we should cross the end of the FIFO */
+ offset = fifo->size - offset;
+
+ /* if we are in 2 blocks process the end of the FIFO first */
+ if (offset < len) {
+ seed = fm10k_crc_16b(data, seed, offset * 2);
+ data = fifo->buffer;
+ len -= offset;
+ }
+
+ /* process any remaining bits */
+ return fm10k_crc_16b(data, seed, len * 2);
+}
+
+/**
+ * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data
+ * @mbx: pointer to mailbox
+ * @head: head index provided by remote mailbox
+ *
+ * This function will generate the CRC for all data from the end of the
+ * last head update to the current one. It uses the result of the
+ * previous CRC as the seed for this update. The result is stored in
+ * mbx->local.
+ **/
+static void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head)
+{
+ u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail);
+
+ /* determine the offset for the start of the region to be pulled */
+ head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled);
+
+ /* update local CRC to include all of the pulled data */
+ mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local);
+}
+
+/**
+ * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data
+ * @mbx: pointer to mailbox
+ *
+ * This function will take all data that has been provided from the remote
+ * end and generate a CRC for it. This is stored in mbx->remote. The
+ * CRC for the header is then computed and, if the result is non-zero,
+ * we signal an error, dropping all data and resetting the connection.
+ */
+static s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u16 len = mbx->head_len;
+ u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len;
+ u16 crc;
+
+ /* update the remote CRC if new data has been received */
+ if (len)
+ mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote);
+
+ /* process the full header as we have to validate the CRC */
+ crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1);
+
+ /* notify other end if we have a problem */
+ return crc ? FM10K_MBX_ERR_CRC : 0;
+}
+
+/**
+ * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO
+ * @mbx: pointer to mailbox
+ *
+ * This function returns true if there is a message in the Rx FIFO to dequeue.
+ **/
+static bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx)
+{
+ u16 msg_size = fm10k_fifo_head_len(&mbx->rx);
+
+ return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size);
+}
+
+/**
+ * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx
+ * @mbx: pointer to mailbox
+ * @len: verify free space is >= this value
+ *
+ * This function returns true if the mailbox is in a state ready to transmit.
+ **/
+static bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len)
+{
+ u16 fifo_unused = fm10k_fifo_unused(&mbx->tx);
+
+ return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len);
+}
+
+/**
+ * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied
+ * @mbx: pointer to mailbox
+ *
+ * This function returns true if the Tx FIFO is empty.
+ **/
+static bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx)
+{
+ return fm10k_fifo_empty(&mbx->tx);
+}
+
+/**
+ * fm10k_mbx_dequeue_rx - Dequeues the message from the head in the Rx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function dequeues messages and hands them off to the tlv parser.
+ * It will return the number of messages processed when called.
+ **/
+static u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ s32 err;
+ u16 cnt;
+
+ /* parse Rx messages out of the Rx FIFO to empty it */
+ for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) {
+ err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head,
+ mbx, mbx->msg_data);
+ if (err < 0)
+ mbx->rx_parse_err++;
+
+ fm10k_fifo_head_drop(fifo);
+ }
+
+ /* shift remaining bytes back to start of FIFO */
+ memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2);
+
+ /* shift head and tail based on the memory we moved */
+ fifo->tail -= fifo->head;
+ fifo->head = 0;
+
+ return cnt;
+}
+
+/**
+ * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @msg: message array to read
+ *
+ * This function enqueues a message up to the size specified by the length
+ * contained in the first DWORD of the message and will place it at the tail
+ * of the FIFO. It will return 0 on success, or a negative value on error.
+ **/
+static s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, const u32 *msg)
+{
+ u32 countdown = mbx->timeout;
+ s32 err;
+
+ switch (mbx->state) {
+ case FM10K_STATE_CLOSED:
+ case FM10K_STATE_DISCONNECT:
+ return FM10K_MBX_ERR_NO_MBX;
+ default:
+ break;
+ }
+
+ /* enqueue the message on the Tx FIFO */
+ err = fm10k_fifo_enqueue(&mbx->tx, msg);
+
+ /* if it failed give the FIFO a chance to drain */
+ while (err && countdown) {
+ countdown--;
+ udelay(mbx->udelay);
+ mbx->ops.process(hw, mbx);
+ err = fm10k_fifo_enqueue(&mbx->tx, msg);
+ }
+
+	/* if we failed, flag the error */
+ if (err) {
+ mbx->timeout = 0;
+ mbx->tx_busy++;
+ }
+
+ /* begin processing message, ignore errors as this is just meant
+ * to start the mailbox flow so we are not concerned if there
+ * is a bad error, or the mailbox is already busy with a request
+ */
+ if (!mbx->tail_len)
+ mbx->ops.process(hw, mbx);
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_read - Copies the mbmem to local message buffer
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function copies the message from the mbmem to the message array
+ **/
+static s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ /* only allow one reader in here at a time */
+ if (mbx->mbx_hdr)
+ return FM10K_MBX_ERR_BUSY;
+
+ /* read to capture initial interrupt bits */
+ if (fm10k_read_reg(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT)
+ mbx->mbx_lock = FM10K_MBX_ACK;
+
+ /* write back interrupt bits to clear */
+ fm10k_write_reg(hw, mbx->mbx_reg,
+ FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT);
+
+ /* read remote header */
+ mbx->mbx_hdr = fm10k_read_reg(hw, mbx->mbmem_reg ^ mbx->mbmem_len);
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_write - Copies the local message buffer to mbmem
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function copies the message from the message array to mbmem
+ **/
+static void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ u32 mbmem = mbx->mbmem_reg;
+
+	/* write new msg header to notify recipient of change */
+ fm10k_write_reg(hw, mbmem, mbx->mbx_hdr);
+
+	/* write mailbox to send interrupt */
+ if (mbx->mbx_lock)
+ fm10k_write_reg(hw, mbx->mbx_reg, mbx->mbx_lock);
+
+ /* we no longer are using the header so free it */
+ mbx->mbx_hdr = 0;
+ mbx->mbx_lock = 0;
+}
+
+/**
+ * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a connection mailbox header
+ **/
+static void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx)
+{
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE);
+}
+
+/**
+ * fm10k_mbx_create_data_hdr - Generate a data mailbox header
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a data mailbox header
+ **/
+static void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx)
+{
+ u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD);
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+ u16 crc;
+
+ if (mbx->tail_len)
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ /* generate CRC for data in flight and header */
+ crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled),
+ mbx->tail_len, mbx->local);
+ crc = fm10k_crc_16b(&hdr, crc, 1);
+
+ /* load header to memory to be written */
+ mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC);
+}
+
+/**
+ * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a disconnect mailbox header
+ **/
+static void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx)
+{
+ u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD);
+ u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1);
+
+ mbx->mbx_lock |= FM10K_MBX_ACK;
+
+ /* load header to memory to be written */
+ mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC);
+}
+
+/**
+ * fm10k_mbx_create_error_msg - Generate an error message
+ * @mbx: pointer to mailbox
+ * @err: local error encountered
+ *
+ * This function will interpret the error provided by err, and based on
+ * that it may shift the message by 1 DWORD and then place an error header
+ * at the start of the message.
+ **/
+static void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err)
+{
+ /* only generate an error message for these types */
+ switch (err) {
+ case FM10K_MBX_ERR_TAIL:
+ case FM10K_MBX_ERR_HEAD:
+ case FM10K_MBX_ERR_TYPE:
+ case FM10K_MBX_ERR_SIZE:
+ case FM10K_MBX_ERR_RSVD0:
+ case FM10K_MBX_ERR_CRC:
+ break;
+ default:
+ return;
+ }
+
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD);
+}
+
+/**
+ * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header
+ * @mbx: pointer to mailbox
+ *
+ * This function will parse the fields in the mailbox header and return
+ * an error if the header contains any of a number of invalid configurations
+ * including unrecognized type, invalid route, or a malformed message.
+ **/
+static s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx)
+{
+ u16 type, rsvd0, head, tail, size;
+ const u32 *hdr = &mbx->mbx_hdr;
+
+ type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE);
+ rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0);
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+ size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE);
+
+ if (rsvd0)
+ return FM10K_MBX_ERR_RSVD0;
+
+ switch (type) {
+ case FM10K_MSG_DISCONNECT:
+ /* validate that all data has been received */
+ if (tail != mbx->head)
+ return FM10K_MBX_ERR_TAIL;
+
+ /* fall through */
+ case FM10K_MSG_DATA:
+ /* validate that head is moving correctly */
+ if (!head || (head == FM10K_MSG_HDR_MASK(HEAD)))
+ return FM10K_MBX_ERR_HEAD;
+ if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len)
+ return FM10K_MBX_ERR_HEAD;
+
+ /* validate that tail is moving correctly */
+ if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL)))
+ return FM10K_MBX_ERR_TAIL;
+ if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len)
+ break;
+
+ return FM10K_MBX_ERR_TAIL;
+ case FM10K_MSG_CONNECT:
+ /* validate size is in range and is power of 2 mask */
+ if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1)))
+ return FM10K_MBX_ERR_SIZE;
+
+ /* fall through */
+ case FM10K_MSG_ERROR:
+ if (!head || (head == FM10K_MSG_HDR_MASK(HEAD)))
+ return FM10K_MBX_ERR_HEAD;
+ /* neither create nor error include a tail offset */
+ if (tail)
+ return FM10K_MBX_ERR_TAIL;
+
+ break;
+ default:
+ return FM10K_MBX_ERR_TYPE;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_create_reply - Generate reply based on state and remote head
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: acknowledgement number
+ *
+ * This function will generate an outgoing message based on the current
+ * mailbox state and the remote fifo head. It will return the length
+ * of the outgoing message excluding header on success, and a negative value
+ * on error.
+ **/
+static s32 fm10k_mbx_create_reply(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ switch (mbx->state) {
+ case FM10K_STATE_OPEN:
+ case FM10K_STATE_DISCONNECT:
+ /* update our checksum for the outgoing data */
+ fm10k_mbx_update_local_crc(mbx, head);
+
+ /* as long as other end recognizes us keep sending data */
+ fm10k_mbx_pull_head(hw, mbx, head);
+
+ /* generate new header based on data */
+ if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN))
+ fm10k_mbx_create_data_hdr(mbx);
+ else
+ fm10k_mbx_create_disconnect_hdr(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* send disconnect even if we aren't connected */
+ fm10k_mbx_create_connect_hdr(mbx);
+ break;
+ case FM10K_STATE_CLOSED:
+ /* generate new header based on data */
+ fm10k_mbx_create_disconnect_hdr(mbx);
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_reset_work - Reset internal pointers for any pending work
+ * @mbx: pointer to mailbox
+ *
+ * This function will reset all internal pointers so any work in progress
+ * is dropped. This call should occur every time we transition from the
+ * open state to the connect state.
+ **/
+static void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx)
+{
+ /* reset our outgoing max size back to Rx limits */
+ mbx->max_size = mbx->rx.size - 1;
+
+	/* just do a quick resync to start of message */
+ mbx->pushed = 0;
+ mbx->pulled = 0;
+ mbx->tail_len = 0;
+ mbx->head_len = 0;
+ mbx->rx.tail = 0;
+ mbx->rx.head = 0;
+}
+
+/**
+ * fm10k_mbx_update_max_size - Update the max_size and drop any large messages
+ * @mbx: pointer to mailbox
+ * @size: new value for max_size
+ *
+ * This function will update the max_size value and drop any outgoing messages
+ * from the head of the Tx FIFO that are larger than max_size.
+ **/
+static void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size)
+{
+ u16 len;
+
+ mbx->max_size = size;
+
+ /* flush any oversized messages from the queue */
+ for (len = fm10k_fifo_head_len(&mbx->tx);
+ len > size;
+ len = fm10k_fifo_head_len(&mbx->tx)) {
+ fm10k_fifo_head_drop(&mbx->tx);
+ mbx->tx_dropped++;
+ }
+}
+
+/**
+ * fm10k_mbx_connect_reset - Reset following request for reset
+ * @mbx: pointer to mailbox
+ *
+ * This function resets the mailbox to either a disconnected state
+ * or a connect state depending on the current mailbox state
+ **/
+static void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx)
+{
+	/* just do a quick resync to start of frame */
+ fm10k_mbx_reset_work(mbx);
+
+ /* reset CRC seeds */
+ mbx->local = FM10K_MBX_CRC_SEED;
+ mbx->remote = FM10K_MBX_CRC_SEED;
+
+ /* we cannot exit connect until the size is good */
+ if (mbx->state == FM10K_STATE_OPEN)
+ mbx->state = FM10K_STATE_CONNECT;
+ else
+ mbx->state = FM10K_STATE_CLOSED;
+}
+
+/**
+ * fm10k_mbx_process_connect - Process connect header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming connect header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_connect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 size, head;
+
+ /* we will need to pull all of the fields for verification */
+ size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ case FM10K_STATE_OPEN:
+ /* reset any in-progress work */
+ fm10k_mbx_connect_reset(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* we cannot exit connect until the size is good */
+ if (size > mbx->rx.size) {
+ mbx->max_size = mbx->rx.size - 1;
+ } else {
+ /* record the remote system requesting connection */
+ mbx->state = FM10K_STATE_OPEN;
+
+ fm10k_mbx_update_max_size(mbx, size);
+ }
+ break;
+ default:
+ break;
+ }
+
+ /* align our tail index to remote head index */
+ mbx->tail = head;
+
+ return fm10k_mbx_create_reply(hw, mbx, head);
+}
+
+/**
+ * fm10k_mbx_process_data - Process data header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming data header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_data(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 head, tail;
+ s32 err;
+
+ /* we will need to pull all of the fields for verification */
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL);
+
+ /* if we are in connect just update our data and go */
+ if (mbx->state == FM10K_STATE_CONNECT) {
+ mbx->tail = head;
+ mbx->state = FM10K_STATE_OPEN;
+ }
+
+ /* abort on message size errors */
+ err = fm10k_mbx_push_tail(hw, mbx, tail);
+ if (err < 0)
+ return err;
+
+ /* verify the checksum on the incoming data */
+ err = fm10k_mbx_verify_remote_crc(mbx);
+ if (err)
+ return err;
+
+ /* process messages if we have received any */
+ fm10k_mbx_dequeue_rx(hw, mbx);
+
+ return fm10k_mbx_create_reply(hw, mbx, head);
+}
+
+/**
+ * fm10k_mbx_process_disconnect - Process disconnect header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming disconnect header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 head, tail;
+ s32 err;
+
+ /* we will need to pull all of the fields for verification */
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL);
+
+ /* We should not be receiving disconnect if Rx is incomplete */
+ if (mbx->pushed)
+ return FM10K_MBX_ERR_TAIL;
+
+ /* we have already verified mbx->head == tail so we know this is 0 */
+ mbx->head_len = 0;
+
+ /* verify the checksum on the incoming header is correct */
+ err = fm10k_mbx_verify_remote_crc(mbx);
+ if (err)
+ return err;
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ case FM10K_STATE_OPEN:
+ /* state doesn't change if we still have work to do */
+ if (!fm10k_mbx_tx_complete(mbx))
+ break;
+
+ /* verify the head indicates we completed all transmits */
+ if (head != mbx->tail)
+ return FM10K_MBX_ERR_HEAD;
+
+ /* reset any in-progress work */
+ fm10k_mbx_connect_reset(mbx);
+ break;
+ default:
+ break;
+ }
+
+ return fm10k_mbx_create_reply(hw, mbx, head);
+}
+
+/**
+ * fm10k_mbx_process_error - Process error header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming error header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_error(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ s32 err_no;
+ u16 head;
+
+ /* we will need to pull all of the fields for verification */
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+
+ /* we only have lower 16 bits of the error number so add upper bits */
+ err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO);
+ err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO);
+
+ switch (mbx->state) {
+ case FM10K_STATE_OPEN:
+ case FM10K_STATE_DISCONNECT:
+ /* flush any uncompleted work */
+ fm10k_mbx_reset_work(mbx);
+
+ /* reset CRC seeds */
+ mbx->local = FM10K_MBX_CRC_SEED;
+ mbx->remote = FM10K_MBX_CRC_SEED;
+
+ /* reset tail index and size to prepare for reconnect */
+ mbx->tail = head;
+
+ /* if open then reset max_size and go back to connect */
+ if (mbx->state == FM10K_STATE_OPEN) {
+ mbx->state = FM10K_STATE_CONNECT;
+ break;
+ }
+
+ /* send a connect message to get data flowing again */
+ fm10k_mbx_create_connect_hdr(mbx);
+ return 0;
+ default:
+ break;
+ }
+
+ return fm10k_mbx_create_reply(hw, mbx, mbx->tail);
+}
+
+/**
+ * fm10k_mbx_process - Process mailbox interrupt
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will process incoming mailbox events and generate mailbox
+ * replies. It will return a value indicating the number of DWORDs
+ * transmitted excluding header on success or a negative value on error.
+ **/
+static s32 fm10k_mbx_process(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ s32 err;
+
+ /* we do not read mailbox if closed */
+ if (mbx->state == FM10K_STATE_CLOSED)
+ return 0;
+
+ /* copy data from mailbox */
+ err = fm10k_mbx_read(hw, mbx);
+ if (err)
+ return err;
+
+ /* validate type, source, and destination */
+ err = fm10k_mbx_validate_msg_hdr(mbx);
+ if (err < 0)
+ goto msg_err;
+
+ switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) {
+ case FM10K_MSG_CONNECT:
+ err = fm10k_mbx_process_connect(hw, mbx);
+ break;
+ case FM10K_MSG_DATA:
+ err = fm10k_mbx_process_data(hw, mbx);
+ break;
+ case FM10K_MSG_DISCONNECT:
+ err = fm10k_mbx_process_disconnect(hw, mbx);
+ break;
+ case FM10K_MSG_ERROR:
+ err = fm10k_mbx_process_error(hw, mbx);
+ break;
+ default:
+ err = FM10K_MBX_ERR_TYPE;
+ break;
+ }
+
+msg_err:
+ /* notify partner of errors on our end */
+ if (err < 0)
+ fm10k_mbx_create_error_msg(mbx, err);
+
+ /* copy data to mailbox */
+ fm10k_mbx_write(hw, mbx);
+
+ return err;
+}
+
+/**
+ * fm10k_mbx_disconnect - Shutdown mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will shut down the mailbox. It first places the mailbox
+ * in the disconnect state, then allows up to a predefined timeout for
+ * the mailbox to transition to closed on its own. If this does not occur
+ * then the mailbox will be forced into the closed state.
+ *
+ * Any mailbox transactions not completed before calling this function
+ * are not guaranteed to complete and may be dropped.
+ **/
+static void fm10k_mbx_disconnect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0;
+
+ /* Place mbx in ready to disconnect state */
+ mbx->state = FM10K_STATE_DISCONNECT;
+
+ /* trigger interrupt to start shutdown process */
+ fm10k_write_reg(hw, mbx->mbx_reg, FM10K_MBX_REQ |
+ FM10K_MBX_INTERRUPT_DISABLE);
+ do {
+ udelay(FM10K_MBX_POLL_DELAY);
+ mbx->ops.process(hw, mbx);
+ timeout -= FM10K_MBX_POLL_DELAY;
+ } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED));
+
+ /* in case we didn't close just force the mailbox into shutdown */
+ fm10k_mbx_connect_reset(mbx);
+ fm10k_mbx_update_max_size(mbx, 0);
+
+ fm10k_write_reg(hw, mbx->mbmem_reg, 0);
+}
+
+/**
+ * fm10k_mbx_connect - Start mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will initiate a mailbox connection. It will populate the
+ * mailbox with a broadcast connect message and then initialize the lock.
+ * This is safe since the connect message is a single DWORD so the mailbox
+ * transaction is guaranteed to be atomic.
+ *
+ * This function will return an error if the mailbox has not been initialized
+ * or is currently in use.
+ **/
+static s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ /* we cannot connect an uninitialized mailbox */
+ if (!mbx->rx.buffer)
+ return FM10K_MBX_ERR_NO_SPACE;
+
+ /* we cannot connect an already connected mailbox */
+ if (mbx->state != FM10K_STATE_CLOSED)
+ return FM10K_MBX_ERR_BUSY;
+
+ /* mailbox timeout can now become active */
+ mbx->timeout = FM10K_MBX_INIT_TIMEOUT;
+
+ /* Place mbx in ready to connect state */
+ mbx->state = FM10K_STATE_CONNECT;
+
+ /* initialize header of remote mailbox */
+ fm10k_mbx_create_disconnect_hdr(mbx);
+ fm10k_write_reg(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr);
+
+ /* enable interrupt and notify other party of new message */
+ mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT |
+ FM10K_MBX_INTERRUPT_ENABLE;
+
+ /* generate and load connect header into mailbox */
+ fm10k_mbx_create_connect_hdr(mbx);
+ fm10k_mbx_write(hw, mbx);
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_validate_handlers - Validate layout of message parsing data
+ * @msg_data: handlers for mailbox events
+ *
+ * This function validates the layout of the message parsing data. This
+ * should be mostly static, but it is important to catch any errors that
+ * are made when constructing the parsers.
+ **/
+static s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data)
+{
+ const struct fm10k_tlv_attr *attr;
+ unsigned int id;
+
+ /* Allow NULL mailboxes that transmit but don't receive */
+ if (!msg_data)
+ return 0;
+
+ while (msg_data->id != FM10K_TLV_ERROR) {
+ /* all messages should have a function handler */
+ if (!msg_data->func)
+ return FM10K_ERR_PARAM;
+
+ /* parser is optional */
+ attr = msg_data->attr;
+ if (attr) {
+ while (attr->id != FM10K_TLV_ERROR) {
+ id = attr->id;
+ attr++;
+ /* ID should always be increasing */
+ if (id >= attr->id)
+ return FM10K_ERR_PARAM;
+ /* ID should fit in results array */
+ if (id >= FM10K_TLV_RESULTS_MAX)
+ return FM10K_ERR_PARAM;
+ }
+
+ /* verify terminator is in the list */
+ if (attr->id != FM10K_TLV_ERROR)
+ return FM10K_ERR_PARAM;
+ }
+
+ id = msg_data->id;
+ msg_data++;
+ /* ID should always be increasing */
+ if (id >= msg_data->id)
+ return FM10K_ERR_PARAM;
+ }
+
+ /* verify terminator is in the list */
+ if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func)
+ return FM10K_ERR_PARAM;
+
+ return 0;
+}
+
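+/* Illustrative sketch, not functional code added by this patch: a
+ * handler table accepted by fm10k_mbx_validate_handlers() keeps its
+ * message IDs strictly increasing and ends with an FM10K_TLV_ERROR
+ * entry that still carries a valid handler, e.g.
+ *
+ *	static const struct fm10k_msg_data example_msg_data[] = {
+ *		{ .id = 1, .func = example_msg_one },
+ *		{ .id = 2, .attr = example_attr, .func = example_msg_two },
+ *		{ .id = FM10K_TLV_ERROR, .func = example_msg_err },
+ *	};
+ *
+ * where the example_* names are hypothetical placeholders.
+ */
+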
+/**
+ * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox
+ * @mbx: pointer to mailbox
+ * @msg_data: handlers for mailbox events
+ *
+ * This function associates a set of message handling ops with a mailbox.
+ **/
+static s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *msg_data)
+{
+ /* validate layout of handlers before assigning them */
+ if (fm10k_mbx_validate_handlers(msg_data))
+ return FM10K_ERR_PARAM;
+
+ /* initialize the message handlers */
+ mbx->msg_data = msg_data;
+
+ return 0;
+}
+
+/**
+ * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @msg_data: handlers for mailbox events
+ * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes
+ *
+ * This function initializes the mailbox for use. It will split the
+ * buffer provided and use it to populate both the Tx and Rx FIFOs.
+ * In order to allow for easy masking of head/tail the value reported
+ * in size must be a power of 2 and is reported in DWORDs, not bytes.
+ * Any invalid values will cause the mailbox to return an error.
+ **/
+s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *msg_data, u8 id)
+{
+ /* initialize registers */
+ switch (hw->mac.type) {
+ case fm10k_mac_vf:
+ mbx->mbx_reg = FM10K_VFMBX;
+ mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR);
+ break;
+ case fm10k_mac_pf:
+ /* there are only 64 VF <-> PF mailboxes */
+ if (id < 64) {
+ mbx->mbx_reg = FM10K_MBX(id);
+ mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0);
+ break;
+ }
+ /* fall through */
+ default:
+ return FM10K_MBX_ERR_NO_MBX;
+ }
+
+ /* start out in closed state */
+ mbx->state = FM10K_STATE_CLOSED;
+
+ /* validate layout of handlers before assigning them */
+ if (fm10k_mbx_validate_handlers(msg_data))
+ return FM10K_ERR_PARAM;
+
+ /* initialize the message handlers */
+ mbx->msg_data = msg_data;
+
+ /* start mailbox as timed out and let the reset_hw call
+ * set the timeout value to begin communications
+ */
+ mbx->timeout = 0;
+ mbx->udelay = FM10K_MBX_INIT_DELAY;
+
+ /* initialize tail and head */
+ mbx->tail = 1;
+ mbx->head = 1;
+
+ /* initialize CRC seeds */
+ mbx->local = FM10K_MBX_CRC_SEED;
+ mbx->remote = FM10K_MBX_CRC_SEED;
+
+ /* Split buffer for use by Tx/Rx FIFOs */
+ mbx->max_size = FM10K_MBX_MSG_MAX_SIZE;
+ mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR;
+
+ /* initialize the FIFOs, sizes are in 4 byte increments */
+ fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE);
+ fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE],
+ FM10K_MBX_RX_BUFFER_SIZE);
+
+ /* initialize function pointers */
+ mbx->ops.connect = fm10k_mbx_connect;
+ mbx->ops.disconnect = fm10k_mbx_disconnect;
+ mbx->ops.rx_ready = fm10k_mbx_rx_ready;
+ mbx->ops.tx_ready = fm10k_mbx_tx_ready;
+ mbx->ops.tx_complete = fm10k_mbx_tx_complete;
+ mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx;
+ mbx->ops.process = fm10k_mbx_process;
+ mbx->ops.register_handlers = fm10k_mbx_register_handlers;
+
+ return 0;
+}
+
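+/* Illustrative usage sketch, assumed caller context rather than code
+ * from this patch: the mailbox starts out closed after init, so a
+ * caller would typically initialize and then connect via the ops table:
+ *
+ *	err = fm10k_pfvf_mbx_init(hw, &hw->mbx, example_msg_data, 0);
+ *	if (!err)
+ *		err = hw->mbx.ops.connect(hw, &hw->mbx);
+ *
+ * example_msg_data is a hypothetical handler table, and hw->mbx is
+ * assumed to be the fm10k_mbx_info embedded in the hardware structure.
+ */
+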
+/**
+ * fm10k_sm_mbx_create_data_hdr - Generate a data header for local FIFO
+ * @mbx: pointer to mailbox
+ *
+ * This function generates a data mailbox header
+ **/
+static void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx)
+{
+ if (mbx->tail_len)
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD);
+}
+
+/**
+ * fm10k_sm_mbx_create_connect_hdr - Generate a connect header for local FIFO
+ * @mbx: pointer to mailbox
+ * @err: error flags to report if any
+ *
+ * This function generates a connection mailbox header
+ **/
+static void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err)
+{
+ if (mbx->local)
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) |
+ FM10K_MSG_HDR_FIELD_SET(err, SM_ERR);
+}
+
+/**
+ * fm10k_sm_mbx_connect_reset - Reset following request for reset
+ * @mbx: pointer to mailbox
+ *
+ * This function resets the mailbox to a just connected state
+ **/
+static void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx)
+{
+ /* flush any uncompleted work */
+ fm10k_mbx_reset_work(mbx);
+
+ /* set local version to max and remote version to 0 */
+ mbx->local = FM10K_SM_MBX_VERSION;
+ mbx->remote = 0;
+
+ /* initialize tail and head */
+ mbx->tail = 1;
+ mbx->head = 1;
+
+ /* reset state back to connect */
+ mbx->state = FM10K_STATE_CONNECT;
+}
+
+/**
+ * fm10k_sm_mbx_connect - Start switch manager mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will initiate a mailbox connection with the switch
+ * manager. To do this it will first disconnect the mailbox, and then
+ * reconnect it in order to complete a reset of the mailbox.
+ *
+ * This function will return an error if the mailbox has not been initialized
+ * or is currently in use.
+ **/
+static s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ /* we cannot connect an uninitialized mailbox */
+ if (!mbx->rx.buffer)
+ return FM10K_MBX_ERR_NO_SPACE;
+
+ /* we cannot connect an already connected mailbox */
+ if (mbx->state != FM10K_STATE_CLOSED)
+ return FM10K_MBX_ERR_BUSY;
+
+ /* mailbox timeout can now become active */
+ mbx->timeout = FM10K_MBX_INIT_TIMEOUT;
+
+ /* Place mbx in ready to connect state */
+ mbx->state = FM10K_STATE_CONNECT;
+ mbx->max_size = FM10K_MBX_MSG_MAX_SIZE;
+
+ /* reset interface back to connect */
+ fm10k_sm_mbx_connect_reset(mbx);
+
+ /* enable interrupt and notify other party of new message */
+ mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT |
+ FM10K_MBX_INTERRUPT_ENABLE;
+
+ /* generate and load connect header into mailbox */
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+ fm10k_mbx_write(hw, mbx);
+
+ return 0;
+}
+
+/**
+ * fm10k_sm_mbx_disconnect - Shutdown mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will shut down the mailbox. It first places the mailbox
+ * in the disconnect state, then allows up to a predefined timeout for
+ * the mailbox to transition to closed on its own. If this does not occur
+ * then the mailbox will be forced into the closed state.
+ *
+ * Any mailbox transactions not completed before calling this function
+ * are not guaranteed to complete and may be dropped.
+ **/
+static void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0;
+
+ /* Place mbx in ready to disconnect state */
+ mbx->state = FM10K_STATE_DISCONNECT;
+
+ /* trigger interrupt to start shutdown process */
+ fm10k_write_reg(hw, mbx->mbx_reg, FM10K_MBX_REQ |
+ FM10K_MBX_INTERRUPT_DISABLE);
+ do {
+ udelay(FM10K_MBX_POLL_DELAY);
+ mbx->ops.process(hw, mbx);
+ timeout -= FM10K_MBX_POLL_DELAY;
+ } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED));
+
+ /* in case we didn't close just force the mailbox into shutdown */
+ mbx->state = FM10K_STATE_CLOSED;
+ mbx->remote = 0;
+ fm10k_mbx_reset_work(mbx);
+ fm10k_mbx_update_max_size(mbx, 0);
+
+ fm10k_write_reg(hw, mbx->mbmem_reg, 0);
+}
+
+/**
+ * fm10k_sm_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header
+ * @mbx: pointer to mailbox
+ *
+ * This function will parse the fields in the mailbox header and return
+ * an error if the header contains any of a number of invalid configurations
+ * including unrecognized offsets or version numbers.
+ **/
+static s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 tail, head, ver;
+
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL);
+ ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD);
+
+ switch (ver) {
+ case 0:
+ break;
+ case FM10K_SM_MBX_VERSION:
+ if (!head || head > FM10K_SM_MBX_FIFO_LEN)
+ return FM10K_MBX_ERR_HEAD;
+ if (!tail || tail > FM10K_SM_MBX_FIFO_LEN)
+ return FM10K_MBX_ERR_TAIL;
+ if (mbx->tail < head)
+ head += mbx->mbmem_len - 1;
+ if (tail < mbx->head)
+ tail += mbx->mbmem_len - 1;
+ if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len)
+ return FM10K_MBX_ERR_HEAD;
+ if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len)
+ break;
+ return FM10K_MBX_ERR_TAIL;
+ default:
+ return FM10K_MBX_ERR_SRC;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_sm_mbx_process_error - Process header with error flag set
+ * @mbx: pointer to mailbox
+ *
+ * This function is meant to respond to a request where the error flag
+ * is set. As a result we will terminate a connection if one is present
+ * and fall back into the reset state with a connection header of version
+ * 0 (RESET).
+ **/
+static void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ /* if there is an error just disconnect */
+ mbx->remote = 0;
+ break;
+ case FM10K_STATE_OPEN:
+ /* flush any uncompleted work */
+ fm10k_sm_mbx_connect_reset(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* try connecting at lower version */
+ if (mbx->remote) {
+ while (mbx->local > 1)
+ mbx->local--;
+ mbx->remote = 0;
+ }
+ break;
+ default:
+ break;
+ }
+
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+}
+
+/**
+ * fm10k_sm_mbx_create_error_msg - Process an error in FIFO hdr
+ * @mbx: pointer to mailbox
+ * @err: local error encountered
+ *
+ * This function will interpret the error provided by err, and based on
+ * that it may set the error bit in the local message header
+ **/
+static void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err)
+{
+ /* only generate an error message for these types */
+ switch (err) {
+ case FM10K_MBX_ERR_TAIL:
+ case FM10K_MBX_ERR_HEAD:
+ case FM10K_MBX_ERR_SRC:
+ case FM10K_MBX_ERR_SIZE:
+ case FM10K_MBX_ERR_RSVD0:
+ break;
+ default:
+ return;
+ }
+
+ /* process it as though we received an error, and send error reply */
+ fm10k_sm_mbx_process_error(mbx);
+ fm10k_sm_mbx_create_connect_hdr(mbx, 1);
+}
+
+/**
+ * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @tail: tail index of message
+ *
+ * This function will dequeue one message from the Rx switch manager mailbox
+ * FIFO and place it in the Rx mailbox FIFO for processing by software.
+ **/
+static s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx,
+ u16 tail)
+{
+ /* reduce length by 1 to convert to a mask */
+ u16 mbmem_len = mbx->mbmem_len - 1;
+ s32 err;
+
+ /* push tail in front of head */
+ if (tail < mbx->head)
+ tail += mbmem_len;
+
+ /* copy data to the Rx FIFO */
+ err = fm10k_mbx_push_tail(hw, mbx, tail);
+ if (err < 0)
+ return err;
+
+ /* process messages if we have received any */
+ fm10k_mbx_dequeue_rx(hw, mbx);
+
+ /* guarantee head aligns with the end of the last message */
+ mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed);
+ mbx->pushed = 0;
+
+ /* clear any extra bits left over since index adds 1 extra bit */
+ if (mbx->head > mbmem_len)
+ mbx->head -= mbmem_len;
+
+ return err;
+}
+
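+/* Worked example with hypothetical index values: for the PF/SM mailbox
+ * mbx->mbmem_len is FM10K_MBMEM_PF_XOR (512), so the mask used above is
+ * 511. A head index that reaches 520 wraps to 520 - 511 = 9, keeping
+ * indices in the 1 - 511 range.
+ */
+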
+/**
+ * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: head index of message
+ *
+ * This function will dequeue one message from the Tx mailbox FIFO and place
+ * it in the Tx switch manager mailbox FIFO for processing by hardware.
+ **/
+static void fm10k_sm_mbx_transmit(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+ /* reduce length by 1 to convert to a mask */
+ u16 mbmem_len = mbx->mbmem_len - 1;
+ u16 tail_len, len = 0;
+ u32 *msg;
+
+ /* push head behind tail */
+ if (mbx->tail < head)
+ head += mbmem_len;
+
+ fm10k_mbx_pull_head(hw, mbx, head);
+
+ /* determine msg aligned offset for end of buffer */
+ do {
+ msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len);
+ tail_len = len;
+ len += FM10K_TLV_DWORD_LEN(*msg);
+ } while ((len <= mbx->tail_len) && (len < mbmem_len));
+
+ /* guarantee we stop on a message boundary */
+ if (mbx->tail_len > tail_len) {
+ mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len);
+ mbx->tail_len = tail_len;
+ }
+
+ /* clear any extra bits left over since index adds 1 extra bit */
+ if (mbx->tail > mbmem_len)
+ mbx->tail -= mbmem_len;
+}
+
+/**
+ * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: acknowledgement number
+ *
+ * This function will generate an outgoing message based on the current
+ * mailbox state and the remote FIFO head.
+ **/
+static void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ switch (mbx->state) {
+ case FM10K_STATE_OPEN:
+ case FM10K_STATE_DISCONNECT:
+ /* flush out Tx data */
+ fm10k_sm_mbx_transmit(hw, mbx, head);
+
+ /* generate new header based on data */
+ if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) {
+ fm10k_sm_mbx_create_data_hdr(mbx);
+ } else {
+ mbx->remote = 0;
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+ }
+ break;
+ case FM10K_STATE_CONNECT:
+ case FM10K_STATE_CLOSED:
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET)
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function is meant to respond to a request where the version data
+ * is set to 0. As such we will either terminate the connection or go
+ * into the connect state in order to re-establish the connection. This
+ * function can also be used to respond to an error as the connection
+ * resetting would also be a means of dealing with errors.
+ **/
+static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ /* drop remote connections and disconnect */
+ mbx->state = FM10K_STATE_CLOSED;
+ mbx->remote = 0;
+ mbx->local = 0;
+ break;
+ case FM10K_STATE_OPEN:
+ /* flush any incomplete work */
+ fm10k_sm_mbx_connect_reset(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* Update remote value to match local value */
+ mbx->remote = mbx->local;
+ /* fall through */
+ default:
+ break;
+ }
+
+ fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail);
+}
+
+/**
+ * fm10k_sm_mbx_process_version_1 - Process header with version == 1
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function is meant to process messages received when the remote
+ * mailbox is active.
+ **/
+static s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 head, tail;
+ s32 len;
+
+ /* pull all fields needed for verification */
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD);
+
+ /* if we are in connect and wanting version 1 then start up and go */
+ if (mbx->state == FM10K_STATE_CONNECT) {
+ if (!mbx->remote)
+ goto send_reply;
+ if (mbx->remote != 1)
+ return FM10K_MBX_ERR_SRC;
+
+ mbx->state = FM10K_STATE_OPEN;
+ }
+
+ do {
+ /* abort on message size errors */
+ len = fm10k_sm_mbx_receive(hw, mbx, tail);
+ if (len < 0)
+ return len;
+
+ /* continue until we have flushed the Rx FIFO */
+ } while (len);
+
+send_reply:
+ fm10k_sm_mbx_create_reply(hw, mbx, head);
+
+ return 0;
+}
+
+/**
+ * fm10k_sm_mbx_process - Process switch manager mailbox interrupt
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will process incoming mailbox events and generate mailbox
+ * replies. It will return a value indicating the number of DWORDs
+ * transmitted excluding header on success or a negative value on error.
+ **/
+static s32 fm10k_sm_mbx_process(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ s32 err;
+
+ /* we do not read mailbox if closed */
+ if (mbx->state == FM10K_STATE_CLOSED)
+ return 0;
+
+ /* retrieve data from switch manager */
+ err = fm10k_mbx_read(hw, mbx);
+ if (err)
+ return err;
+
+ err = fm10k_sm_mbx_validate_fifo_hdr(mbx);
+ if (err < 0)
+ goto fifo_err;
+
+ if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) {
+ fm10k_sm_mbx_process_error(mbx);
+ goto fifo_err;
+ }
+
+ switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) {
+ case 0:
+ fm10k_sm_mbx_process_reset(hw, mbx);
+ break;
+ case FM10K_SM_MBX_VERSION:
+ err = fm10k_sm_mbx_process_version_1(hw, mbx);
+ break;
+ }
+
+fifo_err:
+ if (err < 0)
+ fm10k_sm_mbx_create_error_msg(mbx, err);
+
+ /* report data to switch manager */
+ fm10k_mbx_write(hw, mbx);
+
+ return err;
+}
+
+/**
+ * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @msg_data: handlers for mailbox events
+ *
+ * For now this function is used to stub out the PF/SM mailbox
+ **/
+s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *msg_data)
+{
+ mbx->mbx_reg = FM10K_GMBX;
+ mbx->mbmem_reg = FM10K_MBMEM_PF(0);
+ /* start out in closed state */
+ mbx->state = FM10K_STATE_CLOSED;
+
+ /* validate layout of handlers before assigning them */
+ if (fm10k_mbx_validate_handlers(msg_data))
+ return FM10K_ERR_PARAM;
+
+ /* initialize the message handlers */
+ mbx->msg_data = msg_data;
+
+ /* start mailbox as timed out and let the reset_hw call
+ * set the timeout value to begin communications
+ */
+ mbx->timeout = 0;
+ mbx->udelay = FM10K_MBX_INIT_DELAY;
+
+ /* Split buffer for use by Tx/Rx FIFOs */
+ mbx->max_size = FM10K_MBX_MSG_MAX_SIZE;
+ mbx->mbmem_len = FM10K_MBMEM_PF_XOR;
+
+ /* initialize the FIFOs, sizes are in 4 byte increments */
+ fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE);
+ fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE],
+ FM10K_MBX_RX_BUFFER_SIZE);
+
+ /* initialize function pointers */
+ mbx->ops.connect = fm10k_sm_mbx_connect;
+ mbx->ops.disconnect = fm10k_sm_mbx_disconnect;
+ mbx->ops.rx_ready = fm10k_mbx_rx_ready;
+ mbx->ops.tx_ready = fm10k_mbx_tx_ready;
+ mbx->ops.tx_complete = fm10k_mbx_tx_complete;
+ mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx;
+ mbx->ops.process = fm10k_sm_mbx_process;
+ mbx->ops.register_handlers = fm10k_mbx_register_handlers;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
new file mode 100644
index 0000000..0419a7f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
@@ -0,0 +1,307 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_MBX_H_
+#define _FM10K_MBX_H_
+
+/* forward declaration */
+struct fm10k_mbx_info;
+
+#include "fm10k_type.h"
+#include "fm10k_tlv.h"
+
+/* PF Mailbox Registers */
+#define FM10K_MBMEM(_n) ((_n) + 0x18000)
+#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000)
+#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400)
+#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600)
+/* XOR provides means of switching from Tx to Rx FIFO */
+#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0))
+#define FM10K_MBX(_n) ((_n) + 0x18800)
+#define FM10K_MBX_REQ 0x00000002
+#define FM10K_MBX_ACK 0x00000004
+#define FM10K_MBX_REQ_INTERRUPT 0x00000008
+#define FM10K_MBX_ACK_INTERRUPT 0x00000010
+#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020
+#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040
+#define FM10K_MBICR(_n) ((_n) + 0x18840)
+#define FM10K_GMBX 0x18842
+
+/* VF Mailbox Registers */
+#define FM10K_VFMBX 0x00010
+#define FM10K_VFMBMEM(_n) ((_n) + 0x00020)
+#define FM10K_VFMBMEM_LEN 16
+#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2)
+
+/* Delays/timeouts */
+#define FM10K_MBX_DISCONNECT_TIMEOUT 500
+#define FM10K_MBX_POLL_DELAY 19
+#define FM10K_MBX_INT_DELAY 20
+
+/* PF/VF Mailbox state machine
+ *
+ * +----------+ connect() +----------+
+ * | CLOSED | --------------> | CONNECT |
+ * +----------+ +----------+
+ * ^ ^ |
+ * | rcv: rcv: | | rcv:
+ * | Connect Disconnect | | Connect
+ * | Disconnect Error | | Data
+ * | | |
+ * | | V
+ * +----------+ disconnect() +----------+
+ * |DISCONNECT| <-------------- | OPEN |
+ * +----------+ +----------+
+ *
+ * The diagram above describes the PF/VF mailbox state machine. There
+ * are four main states to this machine.
+ * Closed: This state represents a mailbox that is in a standby state
+ * with interrupts disabled. In this state the mailbox should not
+ * read the mailbox or write any data. The only means of exiting
+ * this state is for the system to make the connect() call for the
+ * mailbox, it will then transition to the connect state.
+ * Connect: In this state the mailbox is seeking a connection. It will
+ * post a connect message with no specified destination and will
+ * wait for a reply from the other side of the mailbox. This state
+ * is exited when either a connect with the local mailbox as the
+ * destination is received or when a data message is received with
+ * a valid sequence number.
+ * Open: In this state the mailbox is able to transfer data between the local
+ * entity and the remote. It will fall back to connect in the event of
+ * receiving either an error message, or a disconnect message. It will
+ * transition to disconnect on a call to disconnect();
+ * Disconnect: In this state the mailbox is attempting to gracefully terminate
+ * the connection. It will do so at the first point where it knows
+ * that the remote endpoint is either done sending, or when the
+ * remote endpoint has fallen back into connect.
+ */
+enum fm10k_mbx_state {
+ FM10K_STATE_CLOSED,
+ FM10K_STATE_CONNECT,
+ FM10K_STATE_OPEN,
+ FM10K_STATE_DISCONNECT,
+};
+
+/* PF/VF Mailbox header format
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * The layout above describes the format for the header used in the PF/VF
+ * mailbox. The header is broken out into the following fields:
+ * Type: There are 4 supported message types
+ * 0x8: Data header - used to transport message data
+ * 0xC: Connect header - used to establish connection
+ * 0xD: Disconnect header - used to tear down a connection
+ * 0xE: Error header - used to address message exceptions
+ * Tail: Tail index for local FIFO
+ * Tail index actually consists of two parts. The MSB of
+ * the index is a loop tracker; it is 0 on an even numbered
+ * loop through the FIFO, and 1 on the odd numbered loops.
+ * To get the actual mailbox offset based on the tail it
+ * is necessary to add bit 3 to bit 0 and clear bit 3. This
+ * gives us a valid range of 0x1 - 0xE.
+ * Head: Head index for remote FIFO
+ * Head index follows the same format as the tail index.
+ * Rsvd0: Reserved 0 portion of the mailbox header
+ * CRC: Running CRC for all data since connect plus current message header
+ * Size: Maximum message size - Applies only to connect headers
+ * The maximum message size is provided during connect to avoid
+ * jamming the mailbox with messages that do not fit.
+ * Err_no: Error number - Applies only to error headers
+ * The error number provides an indication of the type of error
+ * experienced.
+ */
+
+/* macros for retrieving and setting header values */
+#define FM10K_MSG_HDR_MASK(name) \
+ ((0x1u << FM10K_MSG_##name##_SIZE) - 1)
+#define FM10K_MSG_HDR_FIELD_SET(value, name) \
+ (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT)
+#define FM10K_MSG_HDR_FIELD_GET(value, name) \
+ ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name))
+
+/* offsets shared between all headers */
+#define FM10K_MSG_TYPE_SHIFT 0
+#define FM10K_MSG_TYPE_SIZE 4
+#define FM10K_MSG_TAIL_SHIFT 4
+#define FM10K_MSG_TAIL_SIZE 4
+#define FM10K_MSG_HEAD_SHIFT 8
+#define FM10K_MSG_HEAD_SIZE 4
+#define FM10K_MSG_RSVD0_SHIFT 12
+#define FM10K_MSG_RSVD0_SIZE 4
+
+/* offsets for data/disconnect headers */
+#define FM10K_MSG_CRC_SHIFT 16
+#define FM10K_MSG_CRC_SIZE 16
+
+/* offsets for connect headers */
+#define FM10K_MSG_CONNECT_SIZE_SHIFT 16
+#define FM10K_MSG_CONNECT_SIZE_SIZE 16
+
+/* offsets for error headers */
+#define FM10K_MSG_ERR_NO_SHIFT 16
+#define FM10K_MSG_ERR_NO_SIZE 16
+
+enum fm10k_msg_type {
+ FM10K_MSG_DATA = 0x8,
+ FM10K_MSG_CONNECT = 0xC,
+ FM10K_MSG_DISCONNECT = 0xD,
+ FM10K_MSG_ERROR = 0xE,
+};
+
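+/* Illustrative only: composing and decoding a connect header with the
+ * accessor macros above; the field values are hypothetical, the driver
+ * builds the real thing in fm10k_mbx_create_connect_hdr():
+ *
+ *	u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) |
+ *		  FM10K_MSG_HDR_FIELD_SET(1, TAIL) |
+ *		  FM10K_MSG_HDR_FIELD_SET(1, HEAD) |
+ *		  FM10K_MSG_HDR_FIELD_SET(0x80, CONNECT_SIZE);
+ *
+ * yields hdr == 0x0080011C, and
+ * FM10K_MSG_HDR_FIELD_GET(hdr, CONNECT_SIZE) recovers 0x80.
+ */
+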
+/* HNI/SM Mailbox FIFO format
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-------+-----------------------+-------+-----------------------+
+ * | Error | Remote Head |Version| Local Tail |
+ * +-------+-----------------------+-------+-----------------------+
+ * | |
+ * . Local FIFO Data .
+ * . .
+ * +-------+-----------------------+-------+-----------------------+
+ *
+ * The layout above describes the format for the FIFOs used by the host
+ * network interface and the switch manager to communicate messages back
+ * and forth. Both the HNI and the switch maintain one such FIFO. The
+ * layout in memory has the switch manager FIFO followed immediately by
+ * the HNI FIFO. For this reason only the pointer to the HNI FIFO is
+ * used by the mailbox ops, as the offset between the two is fixed.
+ *
+ * The header for the FIFO is broken out into the following fields:
+ * Local Tail: Offset into FIFO region for next DWORD to write.
+ * Version: Version info for mailbox, only values of 0/1 are supported.
+ * Remote Head: Offset into remote FIFO to indicate how much we have read.
+ * Error: Error indication, values TBD.
+ */
+
+/* version number for switch manager mailboxes */
+#define FM10K_SM_MBX_VERSION 1
+#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1)
+
+/* offsets shared between all SM FIFO headers */
+#define FM10K_MSG_SM_TAIL_SHIFT 0
+#define FM10K_MSG_SM_TAIL_SIZE 12
+#define FM10K_MSG_SM_VER_SHIFT 12
+#define FM10K_MSG_SM_VER_SIZE 4
+#define FM10K_MSG_SM_HEAD_SHIFT 16
+#define FM10K_MSG_SM_HEAD_SIZE 12
+#define FM10K_MSG_SM_ERR_SHIFT 28
+#define FM10K_MSG_SM_ERR_SIZE 4
+
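+/* Illustrative only: once versions are negotiated (mbx->remote equals
+ * FM10K_SM_MBX_VERSION), fm10k_sm_mbx_create_data_hdr() in fm10k_mbx.c
+ * produces headers of the form (tail/head values hypothetical):
+ *
+ *	hdr = FM10K_MSG_HDR_FIELD_SET(tail, SM_TAIL) |
+ *	      FM10K_MSG_HDR_FIELD_SET(FM10K_SM_MBX_VERSION, SM_VER) |
+ *	      FM10K_MSG_HDR_FIELD_SET(head, SM_HEAD);
+ */
+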
+/* All error messages returned by mailbox functions
+ * The value -511 is 0xFE01 in hex. The idea is to order the errors
+ * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox
+ * messages. This also helps to avoid error number collisions as Linux
+ * doesn't appear to use error numbers 256 - 511.
+ */
+#define FM10K_MBX_ERR(_n) ((_n) - 512)
+#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01)
+#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03)
+#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05)
+#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06)
+#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08)
+#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09)
+#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B)
+#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C)
+#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E)
+#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F)
+
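+/* e.g. FM10K_MBX_ERR_TAIL is 0x05 - 512 = -507, which reads back as
+ * 0xFE05 when only the low 16 bits are visible in a mailbox dump
+ */
+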
+#define FM10K_MBX_CRC_SEED 0xFFFF
+
+struct fm10k_mbx_ops {
+ s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *);
+ void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *);
+ bool (*rx_ready)(struct fm10k_mbx_info *);
+ bool (*tx_ready)(struct fm10k_mbx_info *, u16);
+ bool (*tx_complete)(struct fm10k_mbx_info *);
+ s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *,
+ const u32 *);
+ s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *);
+ s32 (*register_handlers)(struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *);
+};
+
+struct fm10k_mbx_fifo {
+ u32 *buffer;
+ u16 head;
+ u16 tail;
+ u16 size;
+};
+
+/* size of buffer to be stored in mailbox for FIFOs */
+#define FM10K_MBX_TX_BUFFER_SIZE 512
+#define FM10K_MBX_RX_BUFFER_SIZE 128
+#define FM10K_MBX_BUFFER_SIZE \
+ (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE)
+
+/* minimum and maximum message size in dwords */
+#define FM10K_MBX_MSG_MAX_SIZE \
+ ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1))
+#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1)
+
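+/* with the power-of-2 buffer sizes above, the AND in
+ * FM10K_MBX_MSG_MAX_SIZE reduces to the smaller FIFO's usable capacity:
+ * (512 - 1) & (128 - 1) == 127 DWORDs
+ */
+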
+#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */
+#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */
+
+struct fm10k_mbx_info {
+ /* function pointers for mailbox operations */
+ struct fm10k_mbx_ops ops;
+ const struct fm10k_msg_data *msg_data;
+
+ /* message FIFOs */
+ struct fm10k_mbx_fifo rx;
+ struct fm10k_mbx_fifo tx;
+
+ /* delay for handling timeouts */
+ u32 timeout;
+ u32 udelay;
+
+ /* mailbox state info */
+ u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr;
+ u16 max_size, mbmem_len;
+ u16 tail, tail_len, pulled;
+ u16 head, head_len, pushed;
+ u16 local, remote;
+ enum fm10k_mbx_state state;
+
+ /* result of last mailbox test */
+ s32 test_result;
+
+ /* statistics */
+ u64 tx_busy;
+ u64 tx_dropped;
+ u64 tx_messages;
+ u64 tx_dwords;
+ u64 rx_messages;
+ u64 rx_dwords;
+ u64 rx_parse_err;
+
+ /* Buffer to store messages */
+ u32 buffer[FM10K_MBX_BUFFER_SIZE];
+};
+
+s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *, u8);
+s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *);
+
+#endif /* _FM10K_MBX_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
new file mode 100644
index 0000000..dcec000
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
@@ -0,0 +1,1431 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+#include <linux/vmalloc.h>
+#if IS_ENABLED(CONFIG_VXLAN)
+#include <net/vxlan.h>
+#endif /* CONFIG_VXLAN */
+
+/**
+ * fm10k_setup_tx_resources - allocate Tx resources (Descriptors)
+ * @tx_ring: tx descriptor ring (for a specific queue) to setup
+ *
+ * Return 0 on success, negative on failure
+ **/
+int fm10k_setup_tx_resources(struct fm10k_ring *tx_ring)
+{
+ struct device *dev = tx_ring->dev;
+ int size;
+
+ size = sizeof(struct fm10k_tx_buffer) * tx_ring->count;
+
+ tx_ring->tx_buffer = vzalloc(size);
+ if (!tx_ring->tx_buffer)
+ goto err;
+
+ u64_stats_init(&tx_ring->syncp);
+
+ /* round up to nearest 4K */
+ tx_ring->size = tx_ring->count * sizeof(struct fm10k_tx_desc);
+ tx_ring->size = ALIGN(tx_ring->size, 4096);
+
+ tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+ &tx_ring->dma, GFP_KERNEL);
+ if (!tx_ring->desc)
+ goto err;
+
+ return 0;
+
+err:
+ vfree(tx_ring->tx_buffer);
+ tx_ring->tx_buffer = NULL;
+ return -ENOMEM;
+}
+
+/**
+ * fm10k_setup_all_tx_resources - allocate all queues Tx resources
+ * @interface: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int fm10k_setup_all_tx_resources(struct fm10k_intfc *interface)
+{
+ int i, err = 0;
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ err = fm10k_setup_tx_resources(interface->tx_ring[i]);
+ if (!err)
+ continue;
+
+ netif_err(interface, probe, interface->netdev,
+ "Allocation for Tx Queue %u failed\n", i);
+ goto err_setup_tx;
+ }
+
+ return 0;
+err_setup_tx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ fm10k_free_tx_resources(interface->tx_ring[i]);
+ return err;
+}
+
+/**
+ * fm10k_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring: rx descriptor ring (for a specific queue) to setup
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int fm10k_setup_rx_resources(struct fm10k_ring *rx_ring)
+{
+ struct device *dev = rx_ring->dev;
+ int size;
+
+ size = sizeof(struct fm10k_rx_buffer) * rx_ring->count;
+
+ rx_ring->rx_buffer = vzalloc(size);
+ if (!rx_ring->rx_buffer)
+ goto err;
+
+ u64_stats_init(&rx_ring->syncp);
+
+ /* Round up to nearest 4K */
+ rx_ring->size = rx_ring->count * sizeof(union fm10k_rx_desc);
+ rx_ring->size = ALIGN(rx_ring->size, 4096);
+
+ rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+ &rx_ring->dma, GFP_KERNEL);
+ if (!rx_ring->desc)
+ goto err;
+
+ return 0;
+err:
+ vfree(rx_ring->rx_buffer);
+ rx_ring->rx_buffer = NULL;
+ return -ENOMEM;
+}
+
+/**
+ * fm10k_setup_all_rx_resources - allocate all queues Rx resources
+ * @interface: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int fm10k_setup_all_rx_resources(struct fm10k_intfc *interface)
+{
+ int i, err = 0;
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ err = fm10k_setup_rx_resources(interface->rx_ring[i]);
+ if (!err)
+ continue;
+
+ netif_err(interface, probe, interface->netdev,
+ "Allocation for Rx Queue %u failed\n", i);
+ goto err_setup_rx;
+ }
+
+ return 0;
+err_setup_rx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ fm10k_free_rx_resources(interface->rx_ring[i]);
+ return err;
+}
+
+void fm10k_unmap_and_free_tx_resource(struct fm10k_ring *ring,
+ struct fm10k_tx_buffer *tx_buffer)
+{
+ if (tx_buffer->skb) {
+ dev_kfree_skb_any(tx_buffer->skb);
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_single(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ } else if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ }
+ tx_buffer->next_to_watch = NULL;
+ tx_buffer->skb = NULL;
+ dma_unmap_len_set(tx_buffer, len, 0);
+ /* tx_buffer must be completely set up in the transmit path */
+}
+
+/**
+ * fm10k_clean_tx_ring - Free Tx Buffers
+ * @tx_ring: ring to be cleaned
+ **/
+static void fm10k_clean_tx_ring(struct fm10k_ring *tx_ring)
+{
+ struct fm10k_tx_buffer *tx_buffer;
+ unsigned long size;
+ u16 i;
+
+ /* ring already cleared, nothing to do */
+ if (!tx_ring->tx_buffer)
+ return;
+
+ /* Free all the Tx ring sk_buffs */
+ for (i = 0; i < tx_ring->count; i++) {
+ tx_buffer = &tx_ring->tx_buffer[i];
+ fm10k_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+ }
+
+ /* reset BQL values */
+ netdev_tx_reset_queue(txring_txq(tx_ring));
+
+ size = sizeof(struct fm10k_tx_buffer) * tx_ring->count;
+ memset(tx_ring->tx_buffer, 0, size);
+
+ /* Zero out the descriptor ring */
+ memset(tx_ring->desc, 0, tx_ring->size);
+}
+
+/**
+ * fm10k_free_tx_resources - Free Tx Resources per Queue
+ * @tx_ring: Tx descriptor ring for a specific queue
+ *
+ * Free all transmit software resources
+ **/
+void fm10k_free_tx_resources(struct fm10k_ring *tx_ring)
+{
+ fm10k_clean_tx_ring(tx_ring);
+
+ vfree(tx_ring->tx_buffer);
+ tx_ring->tx_buffer = NULL;
+
+ /* if not set, then don't free */
+ if (!tx_ring->desc)
+ return;
+
+ dma_free_coherent(tx_ring->dev, tx_ring->size,
+ tx_ring->desc, tx_ring->dma);
+ tx_ring->desc = NULL;
+}
+
+/**
+ * fm10k_clean_all_tx_rings - Free Tx Buffers for all queues
+ * @interface: board private structure
+ **/
+void fm10k_clean_all_tx_rings(struct fm10k_intfc *interface)
+{
+ int i;
+
+ for (i = 0; i < interface->num_tx_queues; i++)
+ fm10k_clean_tx_ring(interface->tx_ring[i]);
+
+ /* remove any stale timestamp buffers and free them */
+ skb_queue_purge(&interface->ts_tx_skb_queue);
+}
+
+/**
+ * fm10k_free_all_tx_resources - Free Tx Resources for All Queues
+ * @interface: board private structure
+ *
+ * Free all transmit software resources
+ **/
+static void fm10k_free_all_tx_resources(struct fm10k_intfc *interface)
+{
+ int i = interface->num_tx_queues;
+
+ while (i--)
+ fm10k_free_tx_resources(interface->tx_ring[i]);
+}
+
+/**
+ * fm10k_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+static void fm10k_clean_rx_ring(struct fm10k_ring *rx_ring)
+{
+ unsigned long size;
+ u16 i;
+
+ if (!rx_ring->rx_buffer)
+ return;
+
+ if (rx_ring->skb)
+ dev_kfree_skb(rx_ring->skb);
+ rx_ring->skb = NULL;
+
+ /* Free all the Rx ring sk_buffs */
+ for (i = 0; i < rx_ring->count; i++) {
+ struct fm10k_rx_buffer *buffer = &rx_ring->rx_buffer[i];
+ /* clean-up will only set page pointer to NULL */
+ if (!buffer->page)
+ continue;
+
+ dma_unmap_page(rx_ring->dev, buffer->dma,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ __free_page(buffer->page);
+
+ buffer->page = NULL;
+ }
+
+ size = sizeof(struct fm10k_rx_buffer) * rx_ring->count;
+ memset(rx_ring->rx_buffer, 0, size);
+
+ /* Zero out the descriptor ring */
+ memset(rx_ring->desc, 0, rx_ring->size);
+
+ rx_ring->next_to_alloc = 0;
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+}
+
+/**
+ * fm10k_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+void fm10k_free_rx_resources(struct fm10k_ring *rx_ring)
+{
+ fm10k_clean_rx_ring(rx_ring);
+
+ vfree(rx_ring->rx_buffer);
+ rx_ring->rx_buffer = NULL;
+
+ /* if not set, then don't free */
+ if (!rx_ring->desc)
+ return;
+
+ dma_free_coherent(rx_ring->dev, rx_ring->size,
+ rx_ring->desc, rx_ring->dma);
+
+ rx_ring->desc = NULL;
+}
+
+/**
+ * fm10k_clean_all_rx_rings - Free Rx Buffers for all queues
+ * @interface: board private structure
+ **/
+void fm10k_clean_all_rx_rings(struct fm10k_intfc *interface)
+{
+ int i;
+
+ for (i = 0; i < interface->num_rx_queues; i++)
+ fm10k_clean_rx_ring(interface->rx_ring[i]);
+}
+
+/**
+ * fm10k_free_all_rx_resources - Free Rx Resources for All Queues
+ * @interface: board private structure
+ *
+ * Free all receive software resources
+ **/
+static void fm10k_free_all_rx_resources(struct fm10k_intfc *interface)
+{
+ int i = interface->num_rx_queues;
+
+ while (i--)
+ fm10k_free_rx_resources(interface->rx_ring[i]);
+}
+
+/**
+ * fm10k_request_glort_range - Request GLORTs for use in configuring rules
+ * @interface: board private structure
+ *
+ * This function allocates a range of glorts for this interface to use.
+ **/
+static void fm10k_request_glort_range(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u16 mask = (~hw->mac.dglort_map) >> FM10K_DGLORTMAP_MASK_SHIFT;
+
+ /* establish GLORT base */
+ interface->glort = hw->mac.dglort_map & FM10K_DGLORTMAP_NONE;
+ interface->glort_count = 0;
+
+ /* nothing we can do until mask is allocated */
+ if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE)
+ return;
+
+ /* we support 3 possible GLORT configurations.
+ * 1: VFs consume all but the last 1
+ * 2: VFs and PF split glorts with possible gap between
+ * 3: VFs allocated first 64, all others belong to PF
+ */
+ if (mask <= hw->iov.total_vfs) {
+ interface->glort_count = 1;
+ interface->glort += mask;
+ } else if (mask < 64) {
+ interface->glort_count = (mask + 1) / 2;
+ interface->glort += interface->glort_count;
+ } else {
+ interface->glort_count = mask - 63;
+ interface->glort += 64;
+ }
+}
+
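+/* Worked example with hypothetical values: if the mask derived from
+ * dglort_map is 0xFF and hw->iov.total_vfs is 64, the third case above
+ * applies: the VFs own glorts base through base + 63, while the PF keeps
+ * glort_count = 0xFF - 63 = 192 entries starting at base + 64.
+ */
+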
+/**
+ * fm10k_del_vxlan_port_all
+ * @interface: board private structure
+ *
+ * This function frees the entire vxlan_port list
+ **/
+static void fm10k_del_vxlan_port_all(struct fm10k_intfc *interface)
+{
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* flush all entries from list */
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port, list);
+ while (vxlan_port) {
+ list_del(&vxlan_port->list);
+ kfree(vxlan_port);
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port,
+ list);
+ }
+}
+
+/**
+ * fm10k_restore_vxlan_port
+ * @interface: board private structure
+ *
+ * This function restores the value in the tunnel_cfg register after reset
+ **/
+static void fm10k_restore_vxlan_port(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* only the PF supports configuring tunnels */
+ if (hw->mac.type != fm10k_mac_pf)
+ return;
+
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port, list);
+
+ /* restore tunnel configuration register */
+ fm10k_write_reg(hw, FM10K_TUNNEL_CFG,
+ (vxlan_port ? ntohs(vxlan_port->port) : 0) |
+ (ETH_P_TEB << FM10K_TUNNEL_CFG_NVGRE_SHIFT));
+}
+
+/**
+ * fm10k_add_vxlan_port
+ * @dev: network interface device structure
+ * @sa_family: Address family of new port
+ * @port: port number used for VXLAN
+ *
+ * This function is called when a new VXLAN interface has added a new port
+ * number to the range that is currently in use for VXLAN. The new port
+ * number is always added to the tail so that the port number list should
+ * match the order in which the ports were allocated. The head of the list
+ * is always used as the VXLAN port number for offloads.
+ **/
+static void fm10k_add_vxlan_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* only the PF supports configuring tunnels */
+ if (interface->hw.mac.type != fm10k_mac_pf)
+ return;
+
+ /* existing ports are pulled out so our new entry is always last */
+ fm10k_vxlan_port_for_each(vxlan_port, interface) {
+ if ((vxlan_port->port == port) &&
+ (vxlan_port->sa_family == sa_family)) {
+ list_del(&vxlan_port->list);
+ goto insert_tail;
+ }
+ }
+
+ /* allocate memory to track ports */
+ vxlan_port = kmalloc(sizeof(*vxlan_port), GFP_ATOMIC);
+ if (!vxlan_port)
+ return;
+ vxlan_port->port = port;
+ vxlan_port->sa_family = sa_family;
+
+insert_tail:
+ /* add new port value to list */
+ list_add_tail(&vxlan_port->list, &interface->vxlan_port);
+
+ fm10k_restore_vxlan_port(interface);
+}
+
+/**
+ * fm10k_del_vxlan_port
+ * @dev: network interface device structure
+ * @sa_family: Address family of freed port
+ * @port: port number used for VXLAN
+ *
+ * This function is called when a new VXLAN interface has freed a port
+ * number from the range that is currently in use for VXLAN. The freed
+ * port is removed from the list and the new head is used to determine
+ * the port number for offloads.
+ **/
+static void fm10k_del_vxlan_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_vxlan_port *vxlan_port;
+
+ if (interface->hw.mac.type != fm10k_mac_pf)
+ return;
+
+ /* find the port in the list and free it */
+ fm10k_vxlan_port_for_each(vxlan_port, interface) {
+ if ((vxlan_port->port == port) &&
+ (vxlan_port->sa_family == sa_family)) {
+ list_del(&vxlan_port->list);
+ kfree(vxlan_port);
+ break;
+ }
+ }
+
+ fm10k_restore_vxlan_port(interface);
+}
+
+/**
+ * fm10k_open - Called when a network interface is made active
+ * @netdev: network interface device structure
+ *
+ * Returns 0 on success, negative value on failure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP). At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the watchdog timer is started,
+ * and the stack is notified that the interface is ready.
+ **/
+int fm10k_open(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ int err;
+
+ /* allocate transmit descriptors */
+ err = fm10k_setup_all_tx_resources(interface);
+ if (err)
+ goto err_setup_tx;
+
+ /* allocate receive descriptors */
+ err = fm10k_setup_all_rx_resources(interface);
+ if (err)
+ goto err_setup_rx;
+
+ /* allocate interrupt resources */
+ err = fm10k_qv_request_irq(interface);
+ if (err)
+ goto err_req_irq;
+
+ /* setup GLORT assignment for this port */
+ fm10k_request_glort_range(interface);
+
+ /* Notify the stack of the actual queue counts */
+
+ err = netif_set_real_num_rx_queues(netdev,
+ interface->num_rx_queues);
+ if (err)
+ goto err_set_queues;
+
+#if IS_ENABLED(CONFIG_VXLAN)
+ /* update VXLAN port configuration */
+ vxlan_get_rx_port(netdev);
+
+#endif
+ fm10k_up(interface);
+
+ return 0;
+
+err_set_queues:
+ fm10k_qv_free_irq(interface);
+err_req_irq:
+ fm10k_free_all_rx_resources(interface);
+err_setup_rx:
+ fm10k_free_all_tx_resources(interface);
+err_setup_tx:
+ return err;
+}
+
+/**
+ * fm10k_close - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * Returns 0, this is not allowed to fail
+ *
+ * The close entry point is called when an interface is de-activated
+ * by the OS. The hardware is still under the drivers control, but
+ * needs to be disabled. A global MAC reset is issued to stop the
+ * hardware, and all transmit and receive resources are freed.
+ **/
+int fm10k_close(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ fm10k_down(interface);
+
+ fm10k_qv_free_irq(interface);
+
+ fm10k_del_vxlan_port_all(interface);
+
+ fm10k_free_all_tx_resources(interface);
+ fm10k_free_all_rx_resources(interface);
+
+ return 0;
+}
+
+static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ unsigned int r_idx = 0;
+ int err;
+
+ if ((skb->protocol == htons(ETH_P_8021Q)) &&
+ !vlan_tx_tag_present(skb)) {
+ /* FM10K only supports hardware tagging, any tags in frame
+ * are considered 2nd level or "outer" tags
+ */
+ struct vlan_hdr *vhdr;
+ __be16 proto;
+
+ /* make sure skb is not shared */
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ return NETDEV_TX_OK;
+
+ /* make sure there is enough room to move the ethernet header */
+ if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN)))
+ return NETDEV_TX_OK;
+
+ /* verify the skb head is not shared */
+ err = skb_cow_head(skb, 0);
+ if (err)
+ return NETDEV_TX_OK;
+
+ /* locate vlan header */
+ vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN);
+
+ /* pull the 2 key pieces of data out of it */
+ __vlan_hwaccel_put_tag(skb,
+ htons(ETH_P_8021Q),
+ ntohs(vhdr->h_vlan_TCI));
+ proto = vhdr->h_vlan_encapsulated_proto;
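+ /* values of 1536 (0x0600) and above are EtherTypes; anything
+ * smaller is an 802.3 length field, so fall back to 802.2 LLC
+ */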
+ skb->protocol = (ntohs(proto) >= 1536) ? proto :
+ htons(ETH_P_802_2);
+
+ /* squash it by moving the ethernet addresses up 4 bytes */
+ memmove(skb->data + VLAN_HLEN, skb->data, 12);
+ __skb_pull(skb, VLAN_HLEN);
+ skb_reset_mac_header(skb);
+ }
+
+ /* The minimum packet size for a single buffer is 17B so pad the skb
+ * in order to meet this minimum size requirement.
+ */
+ if (unlikely(skb->len < 17)) {
+ int pad_len = 17 - skb->len;
+
+ if (skb_pad(skb, pad_len))
+ return NETDEV_TX_OK;
+ __skb_put(skb, pad_len);
+ }
+
+ /* prepare packet for hardware time stamping */
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+ fm10k_ts_tx_enqueue(interface, skb);
+
+ if (r_idx >= interface->num_tx_queues)
+ r_idx %= interface->num_tx_queues;
+
+ err = fm10k_xmit_frame_ring(skb, interface->tx_ring[r_idx]);
+
+ return err;
+}
+
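+/* 68 bytes is the minimum MTU an IPv4 host must support (RFC 791);
+ * the upper bound is the largest jumbo frame the hardware can pass.
+ */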
+static int fm10k_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if (new_mtu < 68 || new_mtu > FM10K_MAX_JUMBO_FRAME_SIZE)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
+/**
+ * fm10k_tx_timeout - Respond to a Tx Hang
+ * @netdev: network interface device structure
+ **/
+static void fm10k_tx_timeout(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ bool real_tx_hang = false;
+ int i;
+
+#define TX_TIMEO_LIMIT 16000
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ struct fm10k_ring *tx_ring = interface->tx_ring[i];
+
+ if (check_for_tx_hang(tx_ring) && fm10k_check_tx_hang(tx_ring))
+ real_tx_hang = true;
+ }
+
+ if (real_tx_hang) {
+ fm10k_tx_timeout_reset(interface);
+ } else {
+ netif_info(interface, drv, netdev,
+ "Fake Tx hang detected with timeout of %d seconds\n",
+ netdev->watchdog_timeo/HZ);
+
+ /* fake Tx hang - increase the kernel timeout */
+ if (netdev->watchdog_timeo < TX_TIMEO_LIMIT)
+ netdev->watchdog_timeo *= 2;
+ }
+}
+
+static int fm10k_uc_vlan_unsync(struct net_device *netdev,
+ const unsigned char *uc_addr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 glort = interface->glort;
+ u16 vid = interface->vid;
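+ /* fm10k_update_vid() encodes the pending operation in this
+ * field by adding VLAN_N_VID for an add, so dividing by
+ * VLAN_N_VID recovers set (1) vs. clear (0)
+ */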
+ bool set = !!(vid / VLAN_N_VID);
+ int err;
+
+ /* drop any leading bits on the VLAN ID */
+ vid &= VLAN_N_VID - 1;
+
+ err = hw->mac.ops.update_uc_addr(hw, glort, uc_addr, vid, set, 0);
+ if (err)
+ return err;
+
+ /* return non-zero value as we are only doing a partial sync/unsync */
+ return 1;
+}
+
+static int fm10k_mc_vlan_unsync(struct net_device *netdev,
+ const unsigned char *mc_addr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 glort = interface->glort;
+ u16 vid = interface->vid;
+ bool set = !!(vid / VLAN_N_VID);
+ int err;
+
+ /* drop any leading bits on the VLAN ID */
+ vid &= VLAN_N_VID - 1;
+
+ err = hw->mac.ops.update_mc_addr(hw, glort, mc_addr, vid, set);
+ if (err)
+ return err;
+
+ /* return non-zero value as we are only doing a partial sync/unsync */
+ return 1;
+}
+
+static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ s32 err;
+
+ /* updates do not apply to VLAN 0 */
+ if (!vid)
+ return 0;
+
+ if (vid >= VLAN_N_VID)
+ return -EINVAL;
+
+ /* Verify we have permission to add VLANs */
+ if (hw->mac.vlan_override)
+ return -EACCES;
+
+ /* if default VLAN is already present do nothing */
+ if (vid == hw->mac.default_vid)
+ return -EBUSY;
+
+ /* update active_vlans bitmask */
+ set_bit(vid, interface->active_vlans);
+ if (!set)
+ clear_bit(vid, interface->active_vlans);
+
+ fm10k_mbx_lock(interface);
+
+ /* only need to update the VLAN if not in promiscuous mode */
+ if (!(netdev->flags & IFF_PROMISC)) {
+ err = hw->mac.ops.update_vlan(hw, vid, 0, set);
+ if (err)
+ goto err_out;
+ }
+
+ /* update our base MAC address */
+ err = hw->mac.ops.update_uc_addr(hw, interface->glort, hw->mac.addr,
+ vid, set, 0);
+ if (err)
+ goto err_out;
+
+ /* set VID prior to syncing/unsyncing the VLAN */
+ interface->vid = vid + (set ? VLAN_N_VID : 0);
+
+ /* Update the unicast and multicast address list to add/drop VLAN */
+ __dev_uc_unsync(netdev, fm10k_uc_vlan_unsync);
+ __dev_mc_unsync(netdev, fm10k_mc_vlan_unsync);
+
+err_out:
+ fm10k_mbx_unlock(interface);
+
+ return err;
+}
+
+static int fm10k_vlan_rx_add_vid(struct net_device *netdev,
+ __always_unused __be16 proto, u16 vid)
+{
+ /* update VLAN and address table based on changes */
+ return fm10k_update_vid(netdev, vid, true);
+}
+
+static int fm10k_vlan_rx_kill_vid(struct net_device *netdev,
+ __always_unused __be16 proto, u16 vid)
+{
+ /* update VLAN and address table based on changes */
+ return fm10k_update_vid(netdev, vid, false);
+}
+
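+/* Return the next bit set in active_vlans above @vid. Searches below
+ * the default VID are capped at it, so the default VID is always
+ * visited even when its bit is not set in the bitmap.
+ */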
+static u16 fm10k_find_next_vlan(struct fm10k_intfc *interface, u16 vid)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u16 default_vid = hw->mac.default_vid;
+ u16 vid_limit = vid < default_vid ? default_vid : VLAN_N_VID;
+
+ vid = find_next_bit(interface->active_vlans, vid_limit, ++vid);
+
+ return vid;
+}
+
+static void fm10k_clear_unused_vlans(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u32 vid, prev_vid;
+
+ /* loop through and find any gaps in the table */
+ for (vid = 0, prev_vid = 0;
+ prev_vid < VLAN_N_VID;
+ prev_vid = vid + 1, vid = fm10k_find_next_vlan(interface, vid)) {
+ if (prev_vid == vid)
+ continue;
+
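+ /* the update_vlan request packs a (start, length) pair into one
+ * word: the count of consecutive unused VIDs is shifted into the
+ * upper bits by FM10K_VLAN_LENGTH_SHIFT
+ */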
+ /* send request to clear multiple bits at a time */
+ prev_vid += (vid - prev_vid - 1) << FM10K_VLAN_LENGTH_SHIFT;
+ hw->mac.ops.update_vlan(hw, prev_vid, 0, false);
+ }
+}
+
+static int __fm10k_uc_sync(struct net_device *dev,
+ const unsigned char *addr, bool sync)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 vid, glort = interface->glort;
+ s32 err;
+
+ if (!is_valid_ether_addr(addr))
+ return -EADDRNOTAVAIL;
+
+ /* update table with current entries */
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ vid < VLAN_N_VID;
+ vid = fm10k_find_next_vlan(interface, vid)) {
+ err = hw->mac.ops.update_uc_addr(hw, glort, addr,
+ vid, sync, 0);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int fm10k_uc_sync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_uc_sync(dev, addr, true);
+}
+
+static int fm10k_uc_unsync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_uc_sync(dev, addr, false);
+}
+
+static int fm10k_set_mac(struct net_device *dev, void *p)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ struct sockaddr *addr = p;
+ s32 err = 0;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (dev->flags & IFF_UP) {
+ /* setting MAC address requires mailbox */
+ fm10k_mbx_lock(interface);
+
+ err = fm10k_uc_sync(dev, addr->sa_data);
+ if (!err)
+ fm10k_uc_unsync(dev, hw->mac.addr);
+
+ fm10k_mbx_unlock(interface);
+ }
+
+ if (!err) {
+ ether_addr_copy(dev->dev_addr, addr->sa_data);
+ ether_addr_copy(hw->mac.addr, addr->sa_data);
+ dev->addr_assign_type &= ~NET_ADDR_RANDOM;
+ }
+
+ /* if we had a mailbox error suggest trying again */
+ return err ? -EAGAIN : 0;
+}
+
+static int __fm10k_mc_sync(struct net_device *dev,
+ const unsigned char *addr, bool sync)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 vid, glort = interface->glort;
+ s32 err;
+
+ if (!is_multicast_ether_addr(addr))
+ return -EADDRNOTAVAIL;
+
+ /* update table with current entries */
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ vid < VLAN_N_VID;
+ vid = fm10k_find_next_vlan(interface, vid)) {
+ err = hw->mac.ops.update_mc_addr(hw, glort, addr, vid, sync);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int fm10k_mc_sync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_mc_sync(dev, addr, true);
+}
+
+static int fm10k_mc_unsync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_mc_sync(dev, addr, false);
+}
+
+static void fm10k_set_rx_mode(struct net_device *dev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ int xcast_mode;
+
+ /* no need to update the hardware if we are not running */
+ if (!(dev->flags & IFF_UP))
+ return;
+
+ /* determine new mode based on flags */
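+ /* order matters: promiscuous overrides allmulti, which overrides
+ * (broadcast | multicast); anything else only receives frames
+ * addressed directly to the port
+ */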
+ xcast_mode = (dev->flags & IFF_PROMISC) ? FM10K_XCAST_MODE_PROMISC :
+ (dev->flags & IFF_ALLMULTI) ? FM10K_XCAST_MODE_ALLMULTI :
+ (dev->flags & (IFF_BROADCAST | IFF_MULTICAST)) ?
+ FM10K_XCAST_MODE_MULTI : FM10K_XCAST_MODE_NONE;
+
+ fm10k_mbx_lock(interface);
+
+ /* synchronize all of the addresses */
+ if (xcast_mode != FM10K_XCAST_MODE_PROMISC) {
+ __dev_uc_sync(dev, fm10k_uc_sync, fm10k_uc_unsync);
+ if (xcast_mode != FM10K_XCAST_MODE_ALLMULTI)
+ __dev_mc_sync(dev, fm10k_mc_sync, fm10k_mc_unsync);
+ }
+
+ /* if we aren't changing modes there is nothing to do */
+ if (interface->xcast_mode != xcast_mode) {
+ /* update VLAN table */
+ if (xcast_mode == FM10K_XCAST_MODE_PROMISC)
+ hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0, true);
+ if (interface->xcast_mode == FM10K_XCAST_MODE_PROMISC)
+ fm10k_clear_unused_vlans(interface);
+
+ /* update xcast mode */
+ hw->mac.ops.update_xcast_mode(hw, interface->glort, xcast_mode);
+
+ /* record updated xcast mode state */
+ interface->xcast_mode = xcast_mode;
+ }
+
+ fm10k_mbx_unlock(interface);
+}
+
+void fm10k_restore_rx_state(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int xcast_mode;
+ u16 vid, glort;
+
+ /* restore our address if perm_addr is set */
+ if (hw->mac.type == fm10k_mac_vf) {
+ if (is_valid_ether_addr(hw->mac.perm_addr)) {
+ ether_addr_copy(hw->mac.addr, hw->mac.perm_addr);
+ ether_addr_copy(netdev->perm_addr, hw->mac.perm_addr);
+ ether_addr_copy(netdev->dev_addr, hw->mac.perm_addr);
+ netdev->addr_assign_type &= ~NET_ADDR_RANDOM;
+ }
+
+ if (hw->mac.vlan_override)
+ netdev->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
+ else
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_RX;
+ }
+
+ /* record glort for this interface */
+ glort = interface->glort;
+
+ /* convert interface flags to xcast mode */
+ if (netdev->flags & IFF_PROMISC)
+ xcast_mode = FM10K_XCAST_MODE_PROMISC;
+ else if (netdev->flags & IFF_ALLMULTI)
+ xcast_mode = FM10K_XCAST_MODE_ALLMULTI;
+ else if (netdev->flags & (IFF_BROADCAST | IFF_MULTICAST))
+ xcast_mode = FM10K_XCAST_MODE_MULTI;
+ else
+ xcast_mode = FM10K_XCAST_MODE_NONE;
+
+ fm10k_mbx_lock(interface);
+
+ /* Enable logical port */
+ hw->mac.ops.update_lport_state(hw, glort, interface->glort_count, true);
+
+ /* update VLAN table */
+ hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0,
+ xcast_mode == FM10K_XCAST_MODE_PROMISC);
+
+ /* Add filter for VLAN 0 */
+ hw->mac.ops.update_vlan(hw, 0, 0, true);
+
+ /* update table with current entries */
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ vid < VLAN_N_VID;
+ vid = fm10k_find_next_vlan(interface, vid)) {
+ hw->mac.ops.update_vlan(hw, vid, 0, true);
+ hw->mac.ops.update_uc_addr(hw, glort, hw->mac.addr,
+ vid, true, 0);
+ }
+
+ /* synchronize all of the addresses */
+ if (xcast_mode != FM10K_XCAST_MODE_PROMISC) {
+ __dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
+ if (xcast_mode != FM10K_XCAST_MODE_ALLMULTI)
+ __dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
+ }
+
+ /* update xcast mode */
+ hw->mac.ops.update_xcast_mode(hw, glort, xcast_mode);
+
+ fm10k_mbx_unlock(interface);
+
+ /* record updated xcast mode state */
+ interface->xcast_mode = xcast_mode;
+
+ /* Restore tunnel configuration */
+ fm10k_restore_vxlan_port(interface);
+}
+
+void fm10k_reset_rx_state(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+
+ fm10k_mbx_lock(interface);
+
+ /* clear the logical port state on lower device */
+ hw->mac.ops.update_lport_state(hw, interface->glort,
+ interface->glort_count, false);
+
+ fm10k_mbx_unlock(interface);
+
+ /* reset flags to default state */
+ interface->xcast_mode = FM10K_XCAST_MODE_NONE;
+
+ /* clear the sync flag since the lport has been dropped */
+ __dev_uc_unsync(netdev, NULL);
+ __dev_mc_unsync(netdev, NULL);
+}
+
+/**
+ * fm10k_get_stats64 - Get System Network Statistics
+ * @netdev: network interface device structure
+ * @stats: storage space for 64bit statistics
+ *
+ * Returns 64bit statistics, for use in the ndo_get_stats64 callback. This
+ * function replaces fm10k_get_stats for kernels which support it.
+ */
+static struct rtnl_link_stats64 *fm10k_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_ring *ring;
+ unsigned int start, i;
+ u64 bytes, packets;
+
+ rcu_read_lock();
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ ring = ACCESS_ONCE(interface->rx_ring[i]);
+
+ if (!ring)
+ continue;
+
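+ /* u64_stats_fetch_begin_irq()/retry() form a seqcount read loop:
+ * keep re-reading until no writer raced with us, so the 64-bit
+ * counters are tear-free even on 32-bit systems
+ */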
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+
+ stats->rx_packets += packets;
+ stats->rx_bytes += bytes;
+ }
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ ring = ACCESS_ONCE(interface->tx_ring[i]);
+
+ if (!ring)
+ continue;
+
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+
+ stats->tx_packets += packets;
+ stats->tx_bytes += bytes;
+ }
+
+ rcu_read_unlock();
+
+ /* following stats updated by fm10k_service_task() */
+ stats->rx_missed_errors = netdev->stats.rx_missed_errors;
+
+ return stats;
+}
+
+int fm10k_setup_tc(struct net_device *dev, u8 tc)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* Currently only the PF supports priority classes */
+ if (tc && (interface->hw.mac.type != fm10k_mac_pf))
+ return -EINVAL;
+
+ /* Hardware supports up to 8 traffic classes */
+ if (tc > 8)
+ return -EINVAL;
+
+ /* Hardware has to reinitialize queues to match packet
+ * buffer alignment. Unfortunately, the hardware is not
+ * flexible enough to do this dynamically.
+ */
+ if (netif_running(dev))
+ fm10k_close(dev);
+
+ fm10k_mbx_free_irq(interface);
+
+ fm10k_clear_queueing_scheme(interface);
+
+ /* we expect the prio_tc map to be repopulated later */
+ netdev_reset_tc(dev);
+ netdev_set_num_tc(dev, tc);
+
+ fm10k_init_queueing_scheme(interface);
+
+ fm10k_mbx_request_irq(interface);
+
+ if (netif_running(dev))
+ fm10k_open(dev);
+
+ /* flag to indicate SWPRI has yet to be updated */
+ interface->flags |= FM10K_FLAG_SWPRI_CONFIG;
+
+ return 0;
+}
+
+static int fm10k_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+ switch (cmd) {
+ case SIOCGHWTSTAMP:
+ return fm10k_get_ts_config(netdev, ifr);
+ case SIOCSHWTSTAMP:
+ return fm10k_set_ts_config(netdev, ifr);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void fm10k_assign_l2_accel(struct fm10k_intfc *interface,
+ struct fm10k_l2_accel *l2_accel)
+{
+ struct fm10k_ring *ring;
+ int i;
+
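+ /* publish via rcu_assign_pointer() so Rx paths walking
+ * ring->l2_accel under rcu_read_lock() never observe a
+ * partially initialized table
+ */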
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ ring = interface->rx_ring[i];
+ rcu_assign_pointer(ring->l2_accel, l2_accel);
+ }
+
+ interface->l2_accel = l2_accel;
+}
+
+static void *fm10k_dfwd_add_station(struct net_device *dev,
+ struct net_device *sdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_l2_accel *l2_accel = interface->l2_accel;
+ struct fm10k_l2_accel *old_l2_accel = NULL;
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ int size = 0, i;
+ u16 glort;
+
+ /* allocate l2 accel structure if it is not available */
+ if (!l2_accel) {
+ /* verify there are enough free GLORTs to support l2_accel */
+ if (interface->glort_count < 7)
+ return ERR_PTR(-EBUSY);
+
+ size = offsetof(struct fm10k_l2_accel, macvlan[7]);
+ l2_accel = kzalloc(size, GFP_KERNEL);
+ if (!l2_accel)
+ return ERR_PTR(-ENOMEM);
+
+ l2_accel->size = 7;
+ l2_accel->dglort = interface->glort;
+
+ /* update pointers */
+ fm10k_assign_l2_accel(interface, l2_accel);
+ /* do not expand if we are at our limit */
+ } else if ((l2_accel->count == FM10K_MAX_STATIONS) ||
+ (l2_accel->count == (interface->glort_count - 1))) {
+ return ERR_PTR(-EBUSY);
+ /* expand if we have hit the size limit */
+ } else if (l2_accel->count == l2_accel->size) {
+ old_l2_accel = l2_accel;
+ size = offsetof(struct fm10k_l2_accel,
+ macvlan[(l2_accel->size * 2) + 1]);
+ l2_accel = kzalloc(size, GFP_KERNEL);
+ if (!l2_accel)
+ return ERR_PTR(-ENOMEM);
+
+ memcpy(l2_accel, old_l2_accel,
+ offsetof(struct fm10k_l2_accel,
+ macvlan[old_l2_accel->size]));
+
+ l2_accel->size = (old_l2_accel->size * 2) + 1;
+
+ /* update pointers */
+ fm10k_assign_l2_accel(interface, l2_accel);
+ kfree_rcu(old_l2_accel, rcu);
+ }
+
+ /* find the first open slot in the accel table; the index chosen
+ * also determines the GLORT assigned to this station
+ */
+ for (i = 0; i < l2_accel->size; i++) {
+ if (!l2_accel->macvlan[i])
+ break;
+ }
+
+ /* record station */
+ l2_accel->macvlan[i] = sdev;
+ l2_accel->count++;
+
+ /* configure default DGLORT mapping for RSS/DCB */
+ dglort.idx = fm10k_dglort_pf_rss;
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ dglort.glort = interface->glort;
+ dglort.shared_l = fls(l2_accel->size);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* Add rules for this specific dglort to the switch */
+ fm10k_mbx_lock(interface);
+
+ glort = l2_accel->dglort + 1 + i;
+ hw->mac.ops.update_xcast_mode(hw, glort, FM10K_XCAST_MODE_MULTI);
+ hw->mac.ops.update_uc_addr(hw, glort, sdev->dev_addr, 0, true, 0);
+
+ fm10k_mbx_unlock(interface);
+
+ return sdev;
+}
+
+static void fm10k_dfwd_del_station(struct net_device *dev, void *priv)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_l2_accel *l2_accel = ACCESS_ONCE(interface->l2_accel);
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ struct net_device *sdev = priv;
+ int i;
+ u16 glort;
+
+ if (!l2_accel)
+ return;
+
+ /* search table for matching interface */
+ for (i = 0; i < l2_accel->size; i++) {
+ if (l2_accel->macvlan[i] == sdev)
+ break;
+ }
+
+ /* exit if macvlan not found */
+ if (i == l2_accel->size)
+ return;
+
+ /* Remove any rules specific to this dglort */
+ fm10k_mbx_lock(interface);
+
+ glort = l2_accel->dglort + 1 + i;
+ hw->mac.ops.update_xcast_mode(hw, glort, FM10K_XCAST_MODE_NONE);
+ hw->mac.ops.update_uc_addr(hw, glort, sdev->dev_addr, 0, false, 0);
+
+ fm10k_mbx_unlock(interface);
+
+ /* record removal */
+ l2_accel->macvlan[i] = NULL;
+ l2_accel->count--;
+
+ /* configure default DGLORT mapping for RSS/DCB */
+ dglort.idx = fm10k_dglort_pf_rss;
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ dglort.glort = interface->glort;
+ if (l2_accel)
+ dglort.shared_l = fls(l2_accel->size);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* If table is empty remove it */
+ if (l2_accel->count == 0) {
+ fm10k_assign_l2_accel(interface, NULL);
+ kfree_rcu(l2_accel, rcu);
+ }
+}
+
+static const struct net_device_ops fm10k_netdev_ops = {
+ .ndo_open = fm10k_open,
+ .ndo_stop = fm10k_close,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_start_xmit = fm10k_xmit_frame,
+ .ndo_set_mac_address = fm10k_set_mac,
+ .ndo_change_mtu = fm10k_change_mtu,
+ .ndo_tx_timeout = fm10k_tx_timeout,
+ .ndo_vlan_rx_add_vid = fm10k_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = fm10k_vlan_rx_kill_vid,
+ .ndo_set_rx_mode = fm10k_set_rx_mode,
+ .ndo_get_stats64 = fm10k_get_stats64,
+ .ndo_setup_tc = fm10k_setup_tc,
+ .ndo_set_vf_mac = fm10k_ndo_set_vf_mac,
+ .ndo_set_vf_vlan = fm10k_ndo_set_vf_vlan,
+ .ndo_set_vf_rate = fm10k_ndo_set_vf_bw,
+ .ndo_get_vf_config = fm10k_ndo_get_vf_config,
+ .ndo_add_vxlan_port = fm10k_add_vxlan_port,
+ .ndo_del_vxlan_port = fm10k_del_vxlan_port,
+ .ndo_do_ioctl = fm10k_ioctl,
+ .ndo_dfwd_add_station = fm10k_dfwd_add_station,
+ .ndo_dfwd_del_station = fm10k_dfwd_del_station,
+};
+
+#define DEFAULT_DEBUG_LEVEL_SHIFT 3
+
+struct net_device *fm10k_alloc_netdev(void)
+{
+ struct fm10k_intfc *interface;
+ struct net_device *dev;
+
+ dev = alloc_etherdev_mq(sizeof(struct fm10k_intfc), MAX_QUEUES);
+ if (!dev)
+ return NULL;
+
+ /* set net device and ethtool ops */
+ dev->netdev_ops = &fm10k_netdev_ops;
+ fm10k_set_ethtool_ops(dev);
+
+ /* configure default debug level */
+ interface = netdev_priv(dev);
+ interface->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+
+ /* configure default features */
+ dev->features |= NETIF_F_IP_CSUM |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_SG |
+ NETIF_F_TSO |
+ NETIF_F_TSO6 |
+ NETIF_F_TSO_ECN |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_RXHASH |
+ NETIF_F_RXCSUM;
+
+ /* all features defined to this point should be changeable */
+ dev->hw_features |= dev->features;
+
+ /* allow user to enable L2 forwarding acceleration */
+ dev->hw_features |= NETIF_F_HW_L2FW_DOFFLOAD;
+
+ /* configure VLAN features */
+ dev->vlan_features |= dev->features;
+
+ /* configure tunnel offloads */
+ dev->hw_enc_features = NETIF_F_IP_CSUM |
+ NETIF_F_TSO |
+ NETIF_F_TSO6 |
+ NETIF_F_TSO_ECN |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_SG;
+
+ /* we want to leave these both on as we cannot disable VLAN tag
+ * insertion or stripping on the hardware since it is contained
+ * in the FTAG and not in the frame itself.
+ */
+ dev->features |= NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_FILTER;
+
+ dev->priv_flags |= IFF_UNICAST_FLT;
+
+ return dev;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
new file mode 100644
index 0000000..e02036c
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
@@ -0,0 +1,2166 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/module.h>
+#include <linux/aer.h>
+
+#include "fm10k.h"
+
+static const struct fm10k_info *fm10k_info_tbl[] = {
+ [fm10k_device_pf] = &fm10k_pf_info,
+ [fm10k_device_vf] = &fm10k_vf_info,
+};
+
+/**
+ * fm10k_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ * Class, Class Mask, private data (not used) }
+ */
+static const struct pci_device_id fm10k_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, FM10K_DEV_ID_PF), fm10k_device_pf },
+ { PCI_VDEVICE(INTEL, FM10K_DEV_ID_VF), fm10k_device_vf },
+ /* required last entry */
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, fm10k_pci_tbl);
+
+u16 fm10k_read_pci_cfg_word(struct fm10k_hw *hw, u32 reg)
+{
+ struct fm10k_intfc *interface = hw->back;
+ u16 value = 0;
+
+ if (FM10K_REMOVED(hw->hw_addr))
+ return ~value;
+
+ pci_read_config_word(interface->pdev, reg, &value);
+ if (value == 0xFFFF)
+ fm10k_write_flush(hw);
+
+ return value;
+}
+
+u32 fm10k_read_reg(struct fm10k_hw *hw, int reg)
+{
+ u32 __iomem *hw_addr = ACCESS_ONCE(hw->hw_addr);
+ u32 value = 0;
+
+ if (FM10K_REMOVED(hw_addr))
+ return ~value;
+
+ value = readl(&hw_addr[reg]);
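+ /* reads from a surprise-removed PCIe device return all ones;
+ * double-check against register 0 before detaching the netdev
+ */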
+ if (!(~value) && (!reg || !(~readl(hw_addr)))) {
+ struct fm10k_intfc *interface = hw->back;
+ struct net_device *netdev = interface->netdev;
+
+ hw->hw_addr = NULL;
+ netif_device_detach(netdev);
+ netdev_err(netdev, "PCIe link lost, device now detached\n");
+ }
+
+ return value;
+}
+
+static int fm10k_hw_ready(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+
+ fm10k_write_flush(hw);
+
+ return FM10K_REMOVED(hw->hw_addr) ? -ENODEV : 0;
+}
+
+void fm10k_service_event_schedule(struct fm10k_intfc *interface)
+{
+ if (!test_bit(__FM10K_SERVICE_DISABLE, &interface->state) &&
+ !test_and_set_bit(__FM10K_SERVICE_SCHED, &interface->state))
+ schedule_work(&interface->service_task);
+}
+
+static void fm10k_service_event_complete(struct fm10k_intfc *interface)
+{
+ BUG_ON(!test_bit(__FM10K_SERVICE_SCHED, &interface->state));
+
+ /* flush memory to make sure state is correct before next watchdog */
+ smp_mb__before_atomic();
+ clear_bit(__FM10K_SERVICE_SCHED, &interface->state);
+}
+
+/**
+ * fm10k_service_timer - Timer Call-back
+ * @data: pointer to interface cast into an unsigned long
+ **/
+static void fm10k_service_timer(unsigned long data)
+{
+ struct fm10k_intfc *interface = (struct fm10k_intfc *)data;
+
+ /* Re-arm the timer so the service task runs every 2 seconds */
+ mod_timer(&interface->service_timer, (HZ * 2) + jiffies);
+
+ fm10k_service_event_schedule(interface);
+}
+
+static void fm10k_detach_subtask(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+
+ /* do nothing if device is still present or hw_addr is set */
+ if (netif_device_present(netdev) || interface->hw.hw_addr)
+ return;
+
+ rtnl_lock();
+
+ if (netif_running(netdev))
+ dev_close(netdev);
+
+ rtnl_unlock();
+}
+
+static void fm10k_reinit(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ WARN_ON(in_interrupt());
+
+ /* put off any impending NetWatchDogTimeout */
+ netdev->trans_start = jiffies;
+
+ while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
+ usleep_range(1000, 2000);
+
+ rtnl_lock();
+
+ fm10k_iov_suspend(interface->pdev);
+
+ if (netif_running(netdev))
+ fm10k_close(netdev);
+
+ fm10k_mbx_free_irq(interface);
+
+ /* delay any future reset requests */
+ interface->last_reset = jiffies + (10 * HZ);
+
+ /* reset and initialize the hardware so it is in a known state */
+ err = hw->mac.ops.reset_hw(hw) ? : hw->mac.ops.init_hw(hw);
+ if (err)
+ dev_err(&interface->pdev->dev, "init_hw failed: %d\n", err);
+
+ /* reassociate interrupts */
+ fm10k_mbx_request_irq(interface);
+
+ /* reset clock */
+ fm10k_ts_reset(interface);
+
+ if (netif_running(netdev))
+ fm10k_open(netdev);
+
+ fm10k_iov_resume(interface->pdev);
+
+ rtnl_unlock();
+
+ clear_bit(__FM10K_RESETTING, &interface->state);
+}
+
+static void fm10k_reset_subtask(struct fm10k_intfc *interface)
+{
+ if (!(interface->flags & FM10K_FLAG_RESET_REQUESTED))
+ return;
+
+ interface->flags &= ~FM10K_FLAG_RESET_REQUESTED;
+
+ netdev_err(interface->netdev, "Reset interface\n");
+ interface->tx_timeout_count++;
+
+ fm10k_reinit(interface);
+}
+
+/**
+ * fm10k_configure_swpri_map - Configure Receive SWPRI to PC mapping
+ * @interface: board private structure
+ *
+ * Configure the SWPRI to PC mapping for the port.
+ **/
+static void fm10k_configure_swpri_map(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int i;
+
+ /* clear flag indicating update is needed */
+ interface->flags &= ~FM10K_FLAG_SWPRI_CONFIG;
+
+ /* these registers are only available on the PF */
+ if (hw->mac.type != fm10k_mac_pf)
+ return;
+
+ /* configure SWPRI to PC map */
+ for (i = 0; i < FM10K_SWPRI_MAX; i++)
+ fm10k_write_reg(hw, FM10K_SWPRI_MAP(i),
+ netdev_get_prio_tc_map(netdev, i));
+}
+
+/**
+ * fm10k_watchdog_update_host_state - Update the link status based on host.
+ * @interface: board private structure
+ **/
+static void fm10k_watchdog_update_host_state(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ s32 err;
+
+ if (test_bit(__FM10K_LINK_DOWN, &interface->state)) {
+ interface->host_ready = false;
+ if (time_is_after_jiffies(interface->link_down_event))
+ return;
+ clear_bit(__FM10K_LINK_DOWN, &interface->state);
+ }
+
+ if (interface->flags & FM10K_FLAG_SWPRI_CONFIG) {
+ if (rtnl_trylock()) {
+ fm10k_configure_swpri_map(interface);
+ rtnl_unlock();
+ }
+ }
+
+ /* lock the mailbox for transmit and receive */
+ fm10k_mbx_lock(interface);
+
+ err = hw->mac.ops.get_host_state(hw, &interface->host_ready);
+ if (err && time_is_before_jiffies(interface->last_reset))
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ /* free the lock */
+ fm10k_mbx_unlock(interface);
+}
+
+/**
+ * fm10k_mbx_subtask - Process upstream and downstream mailboxes
+ * @interface: board private structure
+ *
+ * This function will process both the upstream and downstream mailboxes.
+ * It is necessary for us to hold the rtnl_lock while doing this as the
+ * mailbox accesses are protected by this lock.
+ **/
+static void fm10k_mbx_subtask(struct fm10k_intfc *interface)
+{
+ /* process upstream mailbox and update device state */
+ fm10k_watchdog_update_host_state(interface);
+
+ /* process downstream mailboxes */
+ fm10k_iov_mbx(interface);
+}
+
+/**
+ * fm10k_watchdog_host_is_ready - Update netdev status based on host ready
+ * @interface: board private structure
+ **/
+static void fm10k_watchdog_host_is_ready(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+
+ /* only continue if link state is currently down */
+ if (netif_carrier_ok(netdev))
+ return;
+
+ netif_info(interface, drv, netdev, "NIC Link is up\n");
+
+ netif_carrier_on(netdev);
+ netif_tx_wake_all_queues(netdev);
+}
+
+/**
+ * fm10k_watchdog_host_not_ready - Update netdev status based on host not ready
+ * @interface: board private structure
+ **/
+static void fm10k_watchdog_host_not_ready(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+
+ /* only continue if link state is currently up */
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ netif_info(interface, drv, netdev, "NIC Link is down\n");
+
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+}
+
+/**
+ * fm10k_update_stats - Update the board statistics counters.
+ * @interface: board private structure
+ **/
+void fm10k_update_stats(struct fm10k_intfc *interface)
+{
+ struct net_device_stats *net_stats = &interface->netdev->stats;
+ struct fm10k_hw *hw = &interface->hw;
+ u64 rx_errors = 0, rx_csum_errors = 0, tx_csum_errors = 0;
+ u64 restart_queue = 0, tx_busy = 0, alloc_failed = 0;
+ u64 rx_bytes_nic = 0, rx_pkts_nic = 0, rx_drops_nic = 0;
+ u64 tx_bytes_nic = 0, tx_pkts_nic = 0;
+ u64 bytes, pkts;
+ int i;
+
+ /* do not allow stats update via service task for next second */
+ interface->next_stats_update = jiffies + HZ;
+
+ /* gather some stats to the interface struct that are per queue */
+ for (bytes = 0, pkts = 0, i = 0; i < interface->num_tx_queues; i++) {
+ struct fm10k_ring *tx_ring = interface->tx_ring[i];
+
+ restart_queue += tx_ring->tx_stats.restart_queue;
+ tx_busy += tx_ring->tx_stats.tx_busy;
+ tx_csum_errors += tx_ring->tx_stats.csum_err;
+ bytes += tx_ring->stats.bytes;
+ pkts += tx_ring->stats.packets;
+ }
+
+ interface->restart_queue = restart_queue;
+ interface->tx_busy = tx_busy;
+ net_stats->tx_bytes = bytes;
+ net_stats->tx_packets = pkts;
+ interface->tx_csum_errors = tx_csum_errors;
+ /* gather some stats to the interface struct that are per queue */
+ for (bytes = 0, pkts = 0, i = 0; i < interface->num_rx_queues; i++) {
+ struct fm10k_ring *rx_ring = interface->rx_ring[i];
+
+ bytes += rx_ring->stats.bytes;
+ pkts += rx_ring->stats.packets;
+ alloc_failed += rx_ring->rx_stats.alloc_failed;
+ rx_csum_errors += rx_ring->rx_stats.csum_err;
+ rx_errors += rx_ring->rx_stats.errors;
+ }
+
+ net_stats->rx_bytes = bytes;
+ net_stats->rx_packets = pkts;
+ interface->alloc_failed = alloc_failed;
+ interface->rx_csum_errors = rx_csum_errors;
+ interface->rx_errors = rx_errors;
+
+ hw->mac.ops.update_hw_stats(hw, &interface->stats);
+
+ for (i = 0; i < FM10K_MAX_QUEUES_PF; i++) {
+ struct fm10k_hw_stats_q *q = &interface->stats.q[i];
+
+ tx_bytes_nic += q->tx_bytes.count;
+ tx_pkts_nic += q->tx_packets.count;
+ rx_bytes_nic += q->rx_bytes.count;
+ rx_pkts_nic += q->rx_packets.count;
+ rx_drops_nic += q->rx_drops.count;
+ }
+
+ interface->tx_bytes_nic = tx_bytes_nic;
+ interface->tx_packets_nic = tx_pkts_nic;
+ interface->rx_bytes_nic = rx_bytes_nic;
+ interface->rx_packets_nic = rx_pkts_nic;
+ interface->rx_drops_nic = rx_drops_nic;
+
+ /* Fill out the OS statistics structure */
+ net_stats->rx_errors = interface->stats.xec.count;
+ net_stats->rx_dropped = interface->stats.nodesc_drop.count;
+}
+
+/**
+ * fm10k_watchdog_flush_tx - flush queues on host not ready
+ * @interface: pointer to the device interface structure
+ **/
+static void fm10k_watchdog_flush_tx(struct fm10k_intfc *interface)
+{
+ int some_tx_pending = 0;
+ int i;
+
+ /* nothing to do if carrier is up */
+ if (netif_carrier_ok(interface->netdev))
+ return;
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ struct fm10k_ring *tx_ring = interface->tx_ring[i];
+
+ if (tx_ring->next_to_use != tx_ring->next_to_clean) {
+ some_tx_pending = 1;
+ break;
+ }
+ }
+
+ /* We've lost link, so the controller stops DMA, but we've got
+ * queued Tx work that's never going to get done, so reset
+ * controller to flush Tx.
+ */
+ if (some_tx_pending)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+}
+
+/**
+ * fm10k_watchdog_subtask - check and bring link up
+ * @interface: pointer to the device interface structure
+ **/
+static void fm10k_watchdog_subtask(struct fm10k_intfc *interface)
+{
+ /* if interface is down do nothing */
+ if (test_bit(__FM10K_DOWN, &interface->state) ||
+ test_bit(__FM10K_RESETTING, &interface->state))
+ return;
+
+ if (interface->host_ready)
+ fm10k_watchdog_host_is_ready(interface);
+ else
+ fm10k_watchdog_host_not_ready(interface);
+
+ /* update stats only once every second */
+ if (time_is_before_jiffies(interface->next_stats_update))
+ fm10k_update_stats(interface);
+
+ /* flush any uncompleted work */
+ fm10k_watchdog_flush_tx(interface);
+}
+
+/**
+ * fm10k_check_hang_subtask - check for hung queues and dropped interrupts
+ * @interface: pointer to the device interface structure
+ *
+ * This function serves two purposes. First it strobes the interrupt lines
+ * in order to make certain interrupts are occurring. Secondly it sets the
+ * bits needed to check for TX hangs. As a result we should immediately
+ * determine if a hang has occurred.
+ */
+static void fm10k_check_hang_subtask(struct fm10k_intfc *interface)
+{
+ int i;
+
+ /* If we're down or resetting, just bail */
+ if (test_bit(__FM10K_DOWN, &interface->state) ||
+ test_bit(__FM10K_RESETTING, &interface->state))
+ return;
+
+ /* rate limit tx hang checks to only once every 2 seconds */
+ if (time_is_after_eq_jiffies(interface->next_tx_hang_check))
+ return;
+ interface->next_tx_hang_check = jiffies + (2 * HZ);
+
+ if (netif_carrier_ok(interface->netdev)) {
+ /* Force detection of hung controller */
+ for (i = 0; i < interface->num_tx_queues; i++)
+ set_check_for_tx_hang(interface->tx_ring[i]);
+
+ /* Rearm all in-use q_vectors for immediate firing */
+ for (i = 0; i < interface->num_q_vectors; i++) {
+ struct fm10k_q_vector *qv = interface->q_vector[i];
+
+ if (!qv->tx.count && !qv->rx.count)
+ continue;
+ writel(FM10K_ITR_ENABLE | FM10K_ITR_PENDING2, qv->itr);
+ }
+ }
+}
+
+/**
+ * fm10k_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void fm10k_service_task(struct work_struct *work)
+{
+ struct fm10k_intfc *interface;
+
+ interface = container_of(work, struct fm10k_intfc, service_task);
+
+ /* tasks always capable of running, but must be rtnl protected */
+ fm10k_mbx_subtask(interface);
+ fm10k_detach_subtask(interface);
+ fm10k_reset_subtask(interface);
+
+ /* tasks only run when interface is up */
+ fm10k_watchdog_subtask(interface);
+ fm10k_check_hang_subtask(interface);
+ fm10k_ts_tx_subtask(interface);
+
+ /* release lock on service events to allow scheduling next event */
+ fm10k_service_event_complete(interface);
+}
+
+/**
+ * fm10k_configure_tx_ring - Configure Tx ring after Reset
+ * @interface: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Tx descriptor ring after a reset.
+ **/
+static void fm10k_configure_tx_ring(struct fm10k_intfc *interface,
+ struct fm10k_ring *ring)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u64 tdba = ring->dma;
+ u32 size = ring->count * sizeof(struct fm10k_tx_desc);
+ u32 txint = FM10K_INT_MAP_DISABLE;
+ u32 txdctl = FM10K_TXDCTL_ENABLE | (1 << FM10K_TXDCTL_MAX_TIME_SHIFT);
+ u8 reg_idx = ring->reg_idx;
+
+ /* disable queue to avoid issues while updating state */
+ fm10k_write_reg(hw, FM10K_TXDCTL(reg_idx), 0);
+ fm10k_write_flush(hw);
+
+ /* possible poll here to verify ring resources have been cleaned */
+
+ /* set location and size for descriptor ring */
+ fm10k_write_reg(hw, FM10K_TDBAL(reg_idx), tdba & DMA_BIT_MASK(32));
+ fm10k_write_reg(hw, FM10K_TDBAH(reg_idx), tdba >> 32);
+ fm10k_write_reg(hw, FM10K_TDLEN(reg_idx), size);
+
+ /* reset head and tail pointers */
+ fm10k_write_reg(hw, FM10K_TDH(reg_idx), 0);
+ fm10k_write_reg(hw, FM10K_TDT(reg_idx), 0);
+
+ /* store tail pointer */
+ ring->tail = &interface->uc_addr[FM10K_TDT(reg_idx)];
+
+ /* reset ntu and ntc to place SW in sync with hardware */
+ ring->next_to_clean = 0;
+ ring->next_to_use = 0;
+
+ /* Map interrupt */
+ if (ring->q_vector) {
+ txint = ring->q_vector->v_idx + NON_Q_VECTORS(hw);
+ txint |= FM10K_INT_MAP_TIMER0;
+ }
+
+ fm10k_write_reg(hw, FM10K_TXINT(reg_idx), txint);
+
+ /* enable use of FTAG bit in Tx descriptor, register is RO for VF */
+ fm10k_write_reg(hw, FM10K_PFVTCTL(reg_idx),
+ FM10K_PFVTCTL_FTAG_DESC_ENABLE);
+
+ /* enable queue */
+ fm10k_write_reg(hw, FM10K_TXDCTL(reg_idx), txdctl);
+}
+
+/**
+ * fm10k_enable_tx_ring - Verify Tx ring is enabled after configuration
+ * @interface: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Verify the Tx descriptor ring is ready for transmit.
+ **/
+static void fm10k_enable_tx_ring(struct fm10k_intfc *interface,
+ struct fm10k_ring *ring)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ int wait_loop = 10;
+ u32 txdctl;
+ u8 reg_idx = ring->reg_idx;
+
+ /* if we are already enabled just exit */
+ if (fm10k_read_reg(hw, FM10K_TXDCTL(reg_idx)) & FM10K_TXDCTL_ENABLE)
+ return;
+
+ /* poll to verify queue is enabled */
+ do {
+ usleep_range(1000, 2000);
+ txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(reg_idx));
+ } while (!(txdctl & FM10K_TXDCTL_ENABLE) && --wait_loop);
+ if (!wait_loop)
+ netif_err(interface, drv, interface->netdev,
+ "Could not enable Tx Queue %d\n", reg_idx);
+}
+
+/**
+ * fm10k_configure_tx - Configure Transmit Unit after Reset
+ * @interface: board private structure
+ *
+ * Configure the Tx unit of the MAC after a reset.
+ **/
+static void fm10k_configure_tx(struct fm10k_intfc *interface)
+{
+ int i;
+
+ /* Setup the HW Tx Head and Tail descriptor pointers */
+ for (i = 0; i < interface->num_tx_queues; i++)
+ fm10k_configure_tx_ring(interface, interface->tx_ring[i]);
+
+ /* poll here to verify that Tx rings are now enabled */
+ for (i = 0; i < interface->num_tx_queues; i++)
+ fm10k_enable_tx_ring(interface, interface->tx_ring[i]);
+}
+
+/**
+ * fm10k_configure_rx_ring - Configure Rx ring after Reset
+ * @interface: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Rx descriptor ring after a reset.
+ **/
+static void fm10k_configure_rx_ring(struct fm10k_intfc *interface,
+ struct fm10k_ring *ring)
+{
+ u64 rdba = ring->dma;
+ struct fm10k_hw *hw = &interface->hw;
+ u32 size = ring->count * sizeof(union fm10k_rx_desc);
+ u32 rxqctl = FM10K_RXQCTL_ENABLE | FM10K_RXQCTL_PF;
+ u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
+ u32 srrctl = FM10K_SRRCTL_BUFFER_CHAINING_EN;
+ u32 rxint = FM10K_INT_MAP_DISABLE;
+ u8 rx_pause = interface->rx_pause;
+ u8 reg_idx = ring->reg_idx;
+
+ /* disable queue to avoid issues while updating state */
+ fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), 0);
+ fm10k_write_flush(hw);
+
+ /* possible poll here to verify ring resources have been cleaned */
+
+ /* set location and size for descriptor ring */
+ fm10k_write_reg(hw, FM10K_RDBAL(reg_idx), rdba & DMA_BIT_MASK(32));
+ fm10k_write_reg(hw, FM10K_RDBAH(reg_idx), rdba >> 32);
+ fm10k_write_reg(hw, FM10K_RDLEN(reg_idx), size);
+
+ /* reset head and tail pointers */
+ fm10k_write_reg(hw, FM10K_RDH(reg_idx), 0);
+ fm10k_write_reg(hw, FM10K_RDT(reg_idx), 0);
+
+ /* store tail pointer */
+ ring->tail = &interface->uc_addr[FM10K_RDT(reg_idx)];
+
+ /* reset ntu and ntc to place SW in sync with hardware */
+ ring->next_to_clean = 0;
+ ring->next_to_use = 0;
+ ring->next_to_alloc = 0;
+
+ /* Configure the Rx buffer size for one buff without split */
+ srrctl |= FM10K_RX_BUFSZ >> FM10K_SRRCTL_BSIZEPKT_SHIFT;
+
+ /* Configure the Rx ring to suppress loopback packets */
+ srrctl |= FM10K_SRRCTL_LOOPBACK_SUPPRESS;
+ fm10k_write_reg(hw, FM10K_SRRCTL(reg_idx), srrctl);
+
+ /* Enable drop on empty */
+#ifdef CONFIG_DCB
+ if (interface->pfc_en)
+ rx_pause = interface->pfc_en;
+#endif
+ if (!(rx_pause & (1 << ring->qos_pc)))
+ rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY;
+
+ fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
+
+ /* assign default VLAN to queue */
+ ring->vid = hw->mac.default_vid;
+
+ /* Map interrupt */
+ if (ring->q_vector) {
+ rxint = ring->q_vector->v_idx + NON_Q_VECTORS(hw);
+ rxint |= FM10K_INT_MAP_TIMER1;
+ }
+
+ fm10k_write_reg(hw, FM10K_RXINT(reg_idx), rxint);
+
+ /* enable queue */
+ fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), rxqctl);
+
+ /* place buffers on ring for receive data */
+ fm10k_alloc_rx_buffers(ring, fm10k_desc_unused(ring));
+}
+
+/**
+ * fm10k_update_rx_drop_en - Configures the drop enable bits for Rx rings
+ * @interface: board private structure
+ *
+ * Configure the drop enable bits for the Rx rings.
+ **/
+void fm10k_update_rx_drop_en(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u8 rx_pause = interface->rx_pause;
+ int i;
+
+#ifdef CONFIG_DCB
+ if (interface->pfc_en)
+ rx_pause = interface->pfc_en;
+
+#endif
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ struct fm10k_ring *ring = interface->rx_ring[i];
+ u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
+ u8 reg_idx = ring->reg_idx;
+
+ if (!(rx_pause & (1 << ring->qos_pc)))
+ rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY;
+
+ fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
+ }
+}
+
+/**
+ * fm10k_configure_dglort - Configure Receive DGLORT after reset
+ * @interface: board private structure
+ *
+ * Configure the DGLORT description and RSS tables.
+ **/
+static void fm10k_configure_dglort(struct fm10k_intfc *interface)
+{
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ int i;
+ u32 mrqc;
+
+ /* Fill out hash function seeds */
+ for (i = 0; i < FM10K_RSSRK_SIZE; i++)
+ fm10k_write_reg(hw, FM10K_RSSRK(0, i), interface->rssrk[i]);
+
+ /* Write RETA table to hardware */
+ for (i = 0; i < FM10K_RETA_SIZE; i++)
+ fm10k_write_reg(hw, FM10K_RETA(0, i), interface->reta[i]);
+
+ /* Generate RSS hash based on packet types, TCP/UDP
+ * port numbers and/or IPv4/v6 src and dst addresses
+ */
+ mrqc = FM10K_MRQC_IPV4 |
+ FM10K_MRQC_TCP_IPV4 |
+ FM10K_MRQC_IPV6 |
+ FM10K_MRQC_TCP_IPV6;
+
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV4;
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV6;
+
+ fm10k_write_reg(hw, FM10K_MRQC(0), mrqc);
+
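+ /* rss_l and pc_l are field widths: fls() of an n - 1 style queue
+ * mask yields log2 of the table size used to carve the DGLORT
+ * into RSS and priority class indices
+ */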
+ /* configure default DGLORT mapping for RSS/DCB */
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* assign GLORT per queue for queue mapped testing */
+ if (interface->glort_count > 64) {
+ memset(&dglort, 0, sizeof(dglort));
+ dglort.inner_rss = 1;
+ dglort.glort = interface->glort + 64;
+ dglort.idx = fm10k_dglort_pf_queue;
+ dglort.queue_l = fls(interface->num_rx_queues - 1);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+ }
+
+ /* assign glort value for RSS/DCB specific to this interface */
+ memset(&dglort, 0, sizeof(dglort));
+ dglort.inner_rss = 1;
+ dglort.glort = interface->glort;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ /* configure DGLORT mapping for RSS/DCB */
+ dglort.idx = fm10k_dglort_pf_rss;
+ if (interface->l2_accel)
+ dglort.shared_l = fls(interface->l2_accel->size);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+}
+
+/**
+ * fm10k_configure_rx - Configure Receive Unit after Reset
+ * @interface: board private structure
+ *
+ * Configure the Rx unit of the MAC after a reset.
+ **/
+static void fm10k_configure_rx(struct fm10k_intfc *interface)
+{
+ int i;
+
+ /* Configure SWPRI to PC map */
+ fm10k_configure_swpri_map(interface);
+
+ /* Configure RSS and DGLORT map */
+ fm10k_configure_dglort(interface);
+
+ /* Setup the HW Rx Head and Tail descriptor pointers */
+ for (i = 0; i < interface->num_rx_queues; i++)
+ fm10k_configure_rx_ring(interface, interface->rx_ring[i]);
+
+ /* possible poll here to verify that Rx rings are now enabled */
+}
+
+static void fm10k_napi_enable_all(struct fm10k_intfc *interface)
+{
+ struct fm10k_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < interface->num_q_vectors; q_idx++) {
+ q_vector = interface->q_vector[q_idx];
+ napi_enable(&q_vector->napi);
+ }
+}
+
+static irqreturn_t fm10k_msix_clean_rings(int irq, void *data)
+{
+ struct fm10k_q_vector *q_vector = data;
+
+ if (q_vector->rx.count || q_vector->tx.count)
+ napi_schedule(&q_vector->napi);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t fm10k_msix_mbx_vf(int irq, void *data)
+{
+ struct fm10k_intfc *interface = data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+
+ /* re-enable mailbox interrupt and indicate 20us delay */
+ fm10k_write_reg(hw, FM10K_VFITR(FM10K_MBX_VECTOR),
+ FM10K_ITR_ENABLE | FM10K_MBX_INT_DELAY);
+
+ /* service upstream mailbox */
+ if (fm10k_mbx_trylock(interface)) {
+ mbx->ops.process(hw, mbx);
+ fm10k_mbx_unlock(interface);
+ }
+
+ hw->mac.get_host_state = 1;
+ fm10k_service_event_schedule(interface);
+
+ return IRQ_HANDLED;
+}
+
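+/* expands into a case label that stringifies the constant, e.g.
+ * FM10K_ERR_MSG(PCA_NO_FAULT) becomes
+ * case (PCA_NO_FAULT): error = "PCA_NO_FAULT"; break
+ */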
+#define FM10K_ERR_MSG(type) case (type): error = #type; break
+static void fm10k_print_fault(struct fm10k_intfc *interface, int type,
+ struct fm10k_fault *fault)
+{
+ struct pci_dev *pdev = interface->pdev;
+ char *error;
+
+ switch (type) {
+ case FM10K_PCA_FAULT:
+ switch (fault->type) {
+ default:
+ error = "Unknown PCA error";
+ break;
+ FM10K_ERR_MSG(PCA_NO_FAULT);
+ FM10K_ERR_MSG(PCA_UNMAPPED_ADDR);
+ FM10K_ERR_MSG(PCA_BAD_QACCESS_PF);
+ FM10K_ERR_MSG(PCA_BAD_QACCESS_VF);
+ FM10K_ERR_MSG(PCA_MALICIOUS_REQ);
+ FM10K_ERR_MSG(PCA_POISONED_TLP);
+ FM10K_ERR_MSG(PCA_TLP_ABORT);
+ }
+ break;
+ case FM10K_THI_FAULT:
+ switch (fault->type) {
+ default:
+ error = "Unknown THI error";
+ break;
+ FM10K_ERR_MSG(THI_NO_FAULT);
+ FM10K_ERR_MSG(THI_MAL_DIS_Q_FAULT);
+ }
+ break;
+ case FM10K_FUM_FAULT:
+ switch (fault->type) {
+ default:
+ error = "Unknown FUM error";
+ break;
+ FM10K_ERR_MSG(FUM_NO_FAULT);
+ FM10K_ERR_MSG(FUM_UNMAPPED_ADDR);
+ FM10K_ERR_MSG(FUM_BAD_VF_QACCESS);
+ FM10K_ERR_MSG(FUM_ADD_DECODE_ERR);
+ FM10K_ERR_MSG(FUM_RO_ERROR);
+ FM10K_ERR_MSG(FUM_QPRC_CRC_ERROR);
+ FM10K_ERR_MSG(FUM_CSR_TIMEOUT);
+ FM10K_ERR_MSG(FUM_INVALID_TYPE);
+ FM10K_ERR_MSG(FUM_INVALID_LENGTH);
+ FM10K_ERR_MSG(FUM_INVALID_BE);
+ FM10K_ERR_MSG(FUM_INVALID_ALIGN);
+ }
+ break;
+ default:
+ error = "Undocumented fault";
+ break;
+ }
+
+ dev_warn(&pdev->dev,
+ "%s Address: 0x%llx SpecInfo: 0x%x Func: %02x.%0x\n",
+ error, fault->address, fault->specinfo,
+ PCI_SLOT(fault->func), PCI_FUNC(fault->func));
+}
+
+static void fm10k_report_fault(struct fm10k_intfc *interface, u32 eicr)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_fault fault = { 0 };
+ int type, err;
+
+ for (eicr &= FM10K_EICR_FAULT_MASK, type = FM10K_PCA_FAULT;
+ eicr;
+ eicr >>= 1, type += FM10K_FAULT_SIZE) {
+ /* only check if there is an error reported */
+ if (!(eicr & 0x1))
+ continue;
+
+ /* retrieve fault info */
+ err = hw->mac.ops.get_fault(hw, type, &fault);
+ if (err) {
+ dev_err(&interface->pdev->dev,
+ "error reading fault\n");
+ continue;
+ }
+
+ fm10k_print_fault(interface, type, &fault);
+ }
+}
+
+static void fm10k_reset_drop_on_empty(struct fm10k_intfc *interface, u32 eicr)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ const u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
+ u32 maxholdq;
+ int q;
+
+ if (!(eicr & FM10K_EICR_MAXHOLDTIME))
+ return;
+
+ maxholdq = fm10k_read_reg(hw, FM10K_MAXHOLDQ(7));
+ if (maxholdq)
+ fm10k_write_reg(hw, FM10K_MAXHOLDQ(7), maxholdq);
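+ /* the eight MAXHOLDQ registers form a 256-bit bitmap, one bit per
+ * queue; walk it from queue 255 downward, counting overruns and
+ * restoring the default RXDCTL value on affected PF queues
+ */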
+ for (q = 255;;) {
+ if (maxholdq & (1 << 31)) {
+ if (q < FM10K_MAX_QUEUES_PF) {
+ interface->rx_overrun_pf++;
+ fm10k_write_reg(hw, FM10K_RXDCTL(q), rxdctl);
+ } else {
+ interface->rx_overrun_vf++;
+ }
+ }
+
+ maxholdq *= 2;
+ if (!maxholdq)
+ q &= ~(32 - 1);
+
+ if (!q)
+ break;
+
+ if (q-- % 32)
+ continue;
+
+ maxholdq = fm10k_read_reg(hw, FM10K_MAXHOLDQ(q / 32));
+ if (maxholdq)
+ fm10k_write_reg(hw, FM10K_MAXHOLDQ(q / 32), maxholdq);
+ }
+}
+
+static irqreturn_t fm10k_msix_mbx_pf(int irq, void *data)
+{
+ struct fm10k_intfc *interface = data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 eicr;
+
+ /* unmask any set bits related to this interrupt */
+ eicr = fm10k_read_reg(hw, FM10K_EICR);
+ fm10k_write_reg(hw, FM10K_EICR, eicr & (FM10K_EICR_MAILBOX |
+ FM10K_EICR_SWITCHREADY |
+ FM10K_EICR_SWITCHNOTREADY));
+
+ /* report any faults found to the message log */
+ fm10k_report_fault(interface, eicr);
+
+ /* reset any queues disabled due to receiver overrun */
+ fm10k_reset_drop_on_empty(interface, eicr);
+
+ /* service mailboxes */
+ if (fm10k_mbx_trylock(interface)) {
+ mbx->ops.process(hw, mbx);
+ fm10k_iov_event(interface);
+ fm10k_mbx_unlock(interface);
+ }
+
+ /* if switch toggled state we should reset GLORTs */
+ if (eicr & FM10K_EICR_SWITCHNOTREADY) {
+ /* force link down for at least 4 seconds */
+ interface->link_down_event = jiffies + (4 * HZ);
+ set_bit(__FM10K_LINK_DOWN, &interface->state);
+
+ /* reset dglort_map back to no config */
+ hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
+ }
+
+ /* we should validate host state after interrupt event */
+ hw->mac.get_host_state = 1;
+ fm10k_service_event_schedule(interface);
+
+ /* re-enable mailbox interrupt and indicate 20us delay */
+ fm10k_write_reg(hw, FM10K_ITR(FM10K_MBX_VECTOR),
+ FM10K_ITR_ENABLE | FM10K_MBX_INT_DELAY);
+
+ return IRQ_HANDLED;
+}
+
+void fm10k_mbx_free_irq(struct fm10k_intfc *interface)
+{
+ struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+ struct fm10k_hw *hw = &interface->hw;
+ int itr_reg;
+
+ /* disconnect the mailbox */
+ hw->mbx.ops.disconnect(hw, &hw->mbx);
+
+ /* disable Mailbox cause */
+ if (hw->mac.type == fm10k_mac_pf) {
+ fm10k_write_reg(hw, FM10K_EIMR,
+ FM10K_EIMR_DISABLE(PCA_FAULT) |
+ FM10K_EIMR_DISABLE(FUM_FAULT) |
+ FM10K_EIMR_DISABLE(MAILBOX) |
+ FM10K_EIMR_DISABLE(SWITCHREADY) |
+ FM10K_EIMR_DISABLE(SWITCHNOTREADY) |
+ FM10K_EIMR_DISABLE(SRAMERROR) |
+ FM10K_EIMR_DISABLE(VFLR) |
+ FM10K_EIMR_DISABLE(MAXHOLDTIME));
+ itr_reg = FM10K_ITR(FM10K_MBX_VECTOR);
+ } else {
+ itr_reg = FM10K_VFITR(FM10K_MBX_VECTOR);
+ }
+
+ fm10k_write_reg(hw, itr_reg, FM10K_ITR_MASK_SET);
+
+ free_irq(entry->vector, interface);
+}
+
+static s32 fm10k_mbx_mac_addr(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ bool vlan_override = hw->mac.vlan_override;
+ u16 default_vid = hw->mac.default_vid;
+ struct fm10k_intfc *interface;
+ s32 err;
+
+ err = fm10k_msg_mac_vlan_vf(hw, results, mbx);
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ /* MAC was changed so we need reset */
+ if (is_valid_ether_addr(hw->mac.perm_addr) &&
+ memcmp(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN))
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ /* VLAN override was changed, or default VLAN changed */
+ if ((vlan_override != hw->mac.vlan_override) ||
+ (default_vid != hw->mac.default_vid))
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ return 0;
+}
+
+static s32 fm10k_1588_msg_vf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ u64 timestamp;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u64(results[FM10K_1588_MSG_TIMESTAMP],
+ &timestamp);
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ fm10k_ts_tx_hwtstamp(interface, 0, timestamp);
+
+ return 0;
+}
+
+/* generic error handler for mailbox issues */
+static s32 fm10k_mbx_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ struct pci_dev *pdev;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+ pdev = interface->pdev;
+
+ dev_err(&pdev->dev, "Unknown message ID %u\n",
+ **results & FM10K_TLV_ID_MASK);
+
+ return 0;
+}
+
+static const struct fm10k_msg_data vf_mbx_data[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_mbx_mac_addr),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf),
+ FM10K_VF_MSG_1588_HANDLER(fm10k_1588_msg_vf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_mbx_error),
+};
+
+static int fm10k_mbx_request_irq_vf(struct fm10k_intfc *interface)
+{
+ struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+ struct net_device *dev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ /* Use timer0 for interrupt moderation on the mailbox */
+ u32 itr = FM10K_INT_MAP_TIMER0 | entry->entry;
+
+ /* register mailbox handlers */
+ err = hw->mbx.ops.register_handlers(&hw->mbx, vf_mbx_data);
+ if (err)
+ return err;
+
+ /* request the IRQ */
+ err = request_irq(entry->vector, fm10k_msix_mbx_vf, 0,
+ dev->name, interface);
+ if (err) {
+ netif_err(interface, probe, dev,
+ "request_irq for msix_mbx failed: %d\n", err);
+ return err;
+ }
+
+ /* map all of the interrupt sources */
+ fm10k_write_reg(hw, FM10K_VFINT_MAP, itr);
+
+ /* enable interrupt */
+ fm10k_write_reg(hw, FM10K_VFITR(entry->entry), FM10K_ITR_ENABLE);
+
+ return 0;
+}
+
+static s32 fm10k_lport_map(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ u32 dglort_map = hw->mac.dglort_map;
+ s32 err;
+
+ err = fm10k_msg_lport_map_pf(hw, results, mbx);
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ /* we need to reset if port count was just updated */
+ if (dglort_map != hw->mac.dglort_map)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ return 0;
+}
+
+static s32 fm10k_update_pvid(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ u16 glort, pvid;
+ u32 pvid_update;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID],
+ &pvid_update);
+ if (err)
+ return err;
+
+ /* extract values from the pvid update */
+ glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT);
+ pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID);
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* verify VID is valid */
+ if (pvid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ /* check to see if this belongs to one of the VFs */
+ err = fm10k_iov_update_pvid(interface, glort, pvid);
+ if (!err)
+ return 0;
+
+ /* we need to reset if default VLAN was just updated */
+ if (pvid != hw->mac.default_vid)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ hw->mac.default_vid = pvid;
+
+ return 0;
+}
+
+static s32 fm10k_1588_msg_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_swapi_1588_timestamp timestamp;
+ struct fm10k_iov_data *iov_data;
+ struct fm10k_intfc *interface;
+ u16 sglort, vf_idx;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_le_struct(
+ results[FM10K_PF_ATTR_ID_1588_TIMESTAMP],
+ &timestamp, sizeof(timestamp));
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ if (timestamp.dglort) {
+ fm10k_ts_tx_hwtstamp(interface, timestamp.dglort,
+ le64_to_cpu(timestamp.egress));
+ return 0;
+ }
+
+ /* either dglort or sglort must be set */
+ if (!timestamp.sglort)
+ return FM10K_ERR_PARAM;
+
+ /* verify GLORT is at least one of the ones we own */
+ sglort = le16_to_cpu(timestamp.sglort);
+ if (!fm10k_glort_valid_pf(hw, sglort))
+ return FM10K_ERR_PARAM;
+
+ if (sglort == interface->glort) {
+ fm10k_ts_tx_hwtstamp(interface, 0,
+ le64_to_cpu(timestamp.ingress));
+ return 0;
+ }
+
+ /* if there is no iov_data then there are no mailboxes to process */
+ if (!ACCESS_ONCE(interface->iov_data))
+ return FM10K_ERR_PARAM;
+
+ rcu_read_lock();
+
+ /* notify VF if this timestamp belongs to it */
+ iov_data = interface->iov_data;
+ vf_idx = (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE) - sglort;
+
+ if (!iov_data || vf_idx >= iov_data->num_vfs) {
+ err = FM10K_ERR_PARAM;
+ goto err_unlock;
+ }
+
+ err = hw->iov.ops.report_timestamp(hw, &iov_data->vf_info[vf_idx],
+ le64_to_cpu(timestamp.ingress));
+
+err_unlock:
+ rcu_read_unlock();
+
+ return err;
+}
+
+static const struct fm10k_msg_data pf_mbx_data[] = {
+ FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_lport_map),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_update_pvid),
+ FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(fm10k_1588_msg_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_mbx_error),
+};
+
+static int fm10k_mbx_request_irq_pf(struct fm10k_intfc *interface)
+{
+ struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+ struct net_device *dev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ /* Use timer0 for interrupt moderation on the mailbox */
+ u32 mbx_itr = FM10K_INT_MAP_TIMER0 | entry->entry;
+ u32 other_itr = FM10K_INT_MAP_IMMEDIATE | entry->entry;
+
+ /* register mailbox handlers */
+ err = hw->mbx.ops.register_handlers(&hw->mbx, pf_mbx_data);
+ if (err)
+ return err;
+
+ /* request the IRQ */
+ err = request_irq(entry->vector, fm10k_msix_mbx_pf, 0,
+ dev->name, interface);
+ if (err) {
+ netif_err(interface, probe, dev,
+ "request_irq for msix_mbx failed: %d\n", err);
+ return err;
+ }
+
+ /* Enable interrupts w/ no moderation for "other" interrupts */
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_SRAM), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_MaxHoldTime), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_VFLR), other_itr);
+
+ /* Enable interrupts w/ moderation for mailbox */
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_Mailbox), mbx_itr);
+
+ /* Enable individual interrupt causes */
+ fm10k_write_reg(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) |
+ FM10K_EIMR_ENABLE(FUM_FAULT) |
+ FM10K_EIMR_ENABLE(MAILBOX) |
+ FM10K_EIMR_ENABLE(SWITCHREADY) |
+ FM10K_EIMR_ENABLE(SWITCHNOTREADY) |
+ FM10K_EIMR_ENABLE(SRAMERROR) |
+ FM10K_EIMR_ENABLE(VFLR) |
+ FM10K_EIMR_ENABLE(MAXHOLDTIME));
+
+ /* enable interrupt */
+ fm10k_write_reg(hw, FM10K_ITR(entry->entry), FM10K_ITR_ENABLE);
+
+ return 0;
+}
+
+int fm10k_mbx_request_irq(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ /* enable Mailbox cause */
+ if (hw->mac.type == fm10k_mac_pf)
+ err = fm10k_mbx_request_irq_pf(interface);
+ else
+ err = fm10k_mbx_request_irq_vf(interface);
+
+ /* connect mailbox */
+ if (!err)
+ err = hw->mbx.ops.connect(hw, &hw->mbx);
+
+ return err;
+}
+
+/**
+ * fm10k_qv_free_irq - release interrupts associated with queue vectors
+ * @interface: board private structure
+ *
+ * Release all interrupts associated with this interface
+ **/
+void fm10k_qv_free_irq(struct fm10k_intfc *interface)
+{
+ int vector = interface->num_q_vectors;
+ struct fm10k_hw *hw = &interface->hw;
+ struct msix_entry *entry;
+
+ entry = &interface->msix_entries[NON_Q_VECTORS(hw) + vector];
+
+ while (vector) {
+ struct fm10k_q_vector *q_vector;
+
+ vector--;
+ entry--;
+ q_vector = interface->q_vector[vector];
+
+ if (!q_vector->tx.count && !q_vector->rx.count)
+ continue;
+
+ /* disable interrupts */
+
+ writel(FM10K_ITR_MASK_SET, q_vector->itr);
+
+ free_irq(entry->vector, q_vector);
+ }
+}
+
+/**
+ * fm10k_qv_request_irq - initialize interrupts for queue vectors
+ * @interface: board private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+int fm10k_qv_request_irq(struct fm10k_intfc *interface)
+{
+ struct net_device *dev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ struct msix_entry *entry;
+ int ri = 0, ti = 0;
+ int vector, err;
+
+ entry = &interface->msix_entries[NON_Q_VECTORS(hw)];
+
+ for (vector = 0; vector < interface->num_q_vectors; vector++) {
+ struct fm10k_q_vector *q_vector = interface->q_vector[vector];
+
+ /* name the vector */
+ if (q_vector->tx.count && q_vector->rx.count) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-TxRx-%d", dev->name, ri++);
+ ti++;
+ } else if (q_vector->rx.count) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-rx-%d", dev->name, ri++);
+ } else if (q_vector->tx.count) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-tx-%d", dev->name, ti++);
+ } else {
+ /* skip this unused q_vector */
+ continue;
+ }
+
+ /* Assign ITR register to q_vector */
+ q_vector->itr = (hw->mac.type == fm10k_mac_pf) ?
+ &interface->uc_addr[FM10K_ITR(entry->entry)] :
+ &interface->uc_addr[FM10K_VFITR(entry->entry)];
+
+ /* request the IRQ */
+ err = request_irq(entry->vector, &fm10k_msix_clean_rings, 0,
+ q_vector->name, q_vector);
+ if (err) {
+ netif_err(interface, probe, dev,
+ "request_irq failed for MSIX interrupt Error: %d\n",
+ err);
+ goto err_out;
+ }
+
+ /* Enable q_vector */
+ writel(FM10K_ITR_ENABLE, q_vector->itr);
+
+ entry++;
+ }
+
+ return 0;
+
+err_out:
+ /* wind through the ring freeing all entries and vectors */
+ while (vector) {
+ struct fm10k_q_vector *q_vector;
+
+ entry--;
+ vector--;
+ q_vector = interface->q_vector[vector];
+
+ if (!q_vector->tx.count && !q_vector->rx.count)
+ continue;
+
+ /* disable interrupts */
+
+ writel(FM10K_ITR_MASK_SET, q_vector->itr);
+
+ free_irq(entry->vector, q_vector);
+ }
+
+ return err;
+}
+
+void fm10k_up(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* Enable Tx/Rx DMA */
+ hw->mac.ops.start_hw(hw);
+
+ /* configure Tx descriptor rings */
+ fm10k_configure_tx(interface);
+
+ /* configure Rx descriptor rings */
+ fm10k_configure_rx(interface);
+
+ /* configure interrupts */
+ hw->mac.ops.update_int_moderator(hw);
+
+ /* clear down bit to indicate we are ready to go */
+ clear_bit(__FM10K_DOWN, &interface->state);
+
+ /* enable polling cleanups */
+ fm10k_napi_enable_all(interface);
+
+ /* re-establish Rx filters */
+ fm10k_restore_rx_state(interface);
+
+ /* enable transmits */
+ netif_tx_start_all_queues(interface->netdev);
+
+ /* kick off the service timer */
+ mod_timer(&interface->service_timer, jiffies);
+}
+
+static void fm10k_napi_disable_all(struct fm10k_intfc *interface)
+{
+ struct fm10k_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < interface->num_q_vectors; q_idx++) {
+ q_vector = interface->q_vector[q_idx];
+ napi_disable(&q_vector->napi);
+ }
+}
+
+void fm10k_down(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* signal that we are down to the interrupt handler and service task */
+ set_bit(__FM10K_DOWN, &interface->state);
+
+ /* call carrier off first to avoid false dev_watchdog timeouts */
+ netif_carrier_off(netdev);
+
+ /* disable transmits */
+ netif_tx_stop_all_queues(netdev);
+ netif_tx_disable(netdev);
+
+ /* reset Rx filters */
+ fm10k_reset_rx_state(interface);
+
+ /* allow 10ms for device to quiesce */
+ usleep_range(10000, 20000);
+
+ /* disable polling routines */
+ fm10k_napi_disable_all(interface);
+
+ del_timer_sync(&interface->service_timer);
+
+ /* capture stats one last time before stopping interface */
+ fm10k_update_stats(interface);
+
+ /* Disable DMA engine for Tx/Rx */
+ hw->mac.ops.stop_hw(hw);
+
+ /* free any buffers still on the rings */
+ fm10k_clean_all_tx_rings(interface);
+}
+
+/**
+ * fm10k_sw_init - Initialize general software structures
+ * @interface: host interface private structure to initialize
+ *
+ * fm10k_sw_init initializes the interface private data structure.
+ * Fields are initialized based on PCI device information and
+ * OS network device settings (MTU size).
+ **/
+static int fm10k_sw_init(struct fm10k_intfc *interface,
+ const struct pci_device_id *ent)
+{
+ static const u32 seed[FM10K_RSSRK_SIZE] = { 0xda565a6d, 0xc20e5b25,
+ 0x3d256741, 0xb08fa343,
+ 0xcb2bcad0, 0xb4307bae,
+ 0xa32dcb77, 0x0cf23080,
+ 0x3bb7426a, 0xfa01acbe };
+ const struct fm10k_info *fi = fm10k_info_tbl[ent->driver_data];
+ struct fm10k_hw *hw = &interface->hw;
+ struct pci_dev *pdev = interface->pdev;
+ struct net_device *netdev = interface->netdev;
+ unsigned int rss;
+ int err;
+
+ /* initialize back pointer */
+ hw->back = interface;
+ hw->hw_addr = interface->uc_addr;
+
+ /* PCI config space info */
+ hw->vendor_id = pdev->vendor;
+ hw->device_id = pdev->device;
+ hw->revision_id = pdev->revision;
+ hw->subsystem_vendor_id = pdev->subsystem_vendor;
+ hw->subsystem_device_id = pdev->subsystem_device;
+
+ /* Setup hw api */
+ memcpy(&hw->mac.ops, fi->mac_ops, sizeof(hw->mac.ops));
+ hw->mac.type = fi->mac;
+
+ /* Setup IOV handlers */
+ if (fi->iov_ops)
+ memcpy(&hw->iov.ops, fi->iov_ops, sizeof(hw->iov.ops));
+
+ /* Set common capability flags and settings */
+ rss = min_t(int, FM10K_MAX_RSS_INDICES, num_online_cpus());
+ interface->ring_feature[RING_F_RSS].limit = rss;
+ fi->get_invariants(hw);
+
+ /* pick up the PCIe bus settings for reporting later */
+ if (hw->mac.ops.get_bus_info)
+ hw->mac.ops.get_bus_info(hw);
+
+ /* limit the usable DMA range */
+ if (hw->mac.ops.set_dma_mask)
+ hw->mac.ops.set_dma_mask(hw, dma_get_mask(&pdev->dev));
+
+ /* update netdev with DMA restrictions */
+ if (dma_get_mask(&pdev->dev) > DMA_BIT_MASK(32)) {
+ netdev->features |= NETIF_F_HIGHDMA;
+ netdev->vlan_features |= NETIF_F_HIGHDMA;
+ }
+
+ /* delay any future reset requests */
+ interface->last_reset = jiffies + (10 * HZ);
+
+ /* reset and initialize the hardware so it is in a known state */
+ err = hw->mac.ops.reset_hw(hw) ? : hw->mac.ops.init_hw(hw);
+ if (err) {
+ dev_err(&pdev->dev, "init_hw failed: %d\n", err);
+ return err;
+ }
+
+ /* initialize hardware statistics */
+ hw->mac.ops.update_hw_stats(hw, &interface->stats);
+
+ /* Set upper limit on IOV VFs that can be allocated */
+ pci_sriov_set_totalvfs(pdev, hw->iov.total_vfs);
+
+ /* Start with random Ethernet address */
+ eth_random_addr(hw->mac.addr);
+
+ /* Initialize MAC address from hardware */
+ err = hw->mac.ops.read_mac_addr(hw);
+ if (err) {
+ dev_warn(&pdev->dev,
+ "Failed to obtain MAC address defaulting to random\n");
+ /* tag address assignment as random */
+ netdev->addr_assign_type |= NET_ADDR_RANDOM;
+ }
+
+ memcpy(netdev->dev_addr, hw->mac.addr, netdev->addr_len);
+ memcpy(netdev->perm_addr, hw->mac.addr, netdev->addr_len);
+
+ if (!is_valid_ether_addr(netdev->perm_addr)) {
+ dev_err(&pdev->dev, "Invalid MAC Address\n");
+ return -EIO;
+ }
+
+ /* assign BAR 4 resources for use with PTP */
+ if (fm10k_read_reg(hw, FM10K_CTRL) & FM10K_CTRL_BAR4_ALLOWED)
+ interface->sw_addr = ioremap(pci_resource_start(pdev, 4),
+ pci_resource_len(pdev, 4));
+ hw->sw_addr = interface->sw_addr;
+
+ /* Only the PF can support VXLAN and NVGRE offloads */
+ if (hw->mac.type != fm10k_mac_pf) {
+ netdev->hw_enc_features = 0;
+ netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ netdev->hw_features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ }
+
+ /* initialize DCBNL interface */
+ fm10k_dcbnl_set_ops(netdev);
+
+ /* Initialize service timer and service task */
+ set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+ setup_timer(&interface->service_timer, &fm10k_service_timer,
+ (unsigned long)interface);
+ INIT_WORK(&interface->service_task, fm10k_service_task);
+
+ /* Initialize timestamp data */
+ fm10k_ts_init(interface);
+
+ /* set default ring sizes */
+ interface->tx_ring_count = FM10K_DEFAULT_TXD;
+ interface->rx_ring_count = FM10K_DEFAULT_RXD;
+
+ /* set default interrupt moderation */
+ interface->tx_itr = FM10K_ITR_10K;
+ interface->rx_itr = FM10K_ITR_ADAPTIVE | FM10K_ITR_20K;
+
+ /* initialize vxlan_port list */
+ INIT_LIST_HEAD(&interface->vxlan_port);
+
+ /* initialize RSS key */
+ memcpy(interface->rssrk, seed, sizeof(seed));
+
+ /* Start off interface as being down */
+ set_bit(__FM10K_DOWN, &interface->state);
+
+ return 0;
+}
+
+static void fm10k_slot_warn(struct fm10k_intfc *interface)
+{
+ struct device *dev = &interface->pdev->dev;
+ struct fm10k_hw *hw = &interface->hw;
+
+ if (hw->mac.ops.is_slot_appropriate(hw))
+ return;
+
+ dev_warn(dev,
+ "For optimal performance, a %s %s slot is recommended.\n",
+ (hw->bus_caps.width == fm10k_bus_width_pcie_x1 ? "x1" :
+ hw->bus_caps.width == fm10k_bus_width_pcie_x4 ? "x4" :
+ "x8"),
+ (hw->bus_caps.speed == fm10k_bus_speed_2500 ? "2.5GT/s" :
+ hw->bus_caps.speed == fm10k_bus_speed_5000 ? "5.0GT/s" :
+ "8.0GT/s"));
+ dev_warn(dev,
+ "A slot with more lanes and/or higher speed is suggested.\n");
+}
+
+/**
+ * fm10k_probe - Device Initialization Routine
+ * @pdev: PCI device information struct
+ * @ent: entry in fm10k_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ *
+ * fm10k_probe initializes an interface identified by a pci_dev structure.
+ * The OS initialization, configuring of the interface private structure,
+ * and a hardware reset occur.
+ **/
+static int fm10k_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *netdev;
+ struct fm10k_intfc *interface;
+ struct fm10k_hw *hw;
+ int err;
+ u64 dma_mask;
+
+ err = pci_enable_device_mem(pdev);
+ if (err)
+ return err;
+
+ /* By default fm10k only supports a 48 bit DMA mask */
+ dma_mask = DMA_BIT_MASK(48) | dma_get_required_mask(&pdev->dev);
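+ /* ORing in the platform's required mask widens the request beyond
+ * 48 bits when the platform needs it; if setting that mask fails we
+ * fall back to a 32-bit mask below
+ */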
+
+ if ((dma_mask <= DMA_BIT_MASK(32)) ||
+ dma_set_mask_and_coherent(&pdev->dev, dma_mask)) {
+ dma_mask &= DMA_BIT_MASK(32);
+
+ err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+ if (err) {
+ err = dma_set_coherent_mask(&pdev->dev,
+ DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(&pdev->dev,
+ "No usable DMA configuration, aborting\n");
+ goto err_dma;
+ }
+ }
+ }
+
+ err = pci_request_selected_regions(pdev,
+ pci_select_bars(pdev,
+ IORESOURCE_MEM),
+ fm10k_driver_name);
+ if (err) {
+ dev_err(&pdev->dev,
+ "pci_request_selected_regions failed 0x%x\n", err);
+ goto err_pci_reg;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+ pci_save_state(pdev);
+
+ netdev = fm10k_alloc_netdev();
+ if (!netdev) {
+ err = -ENOMEM;
+ goto err_alloc_netdev;
+ }
+
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+
+ interface = netdev_priv(netdev);
+ pci_set_drvdata(pdev, interface);
+
+ interface->netdev = netdev;
+ interface->pdev = pdev;
+ hw = &interface->hw;
+
+ interface->uc_addr = ioremap(pci_resource_start(pdev, 0),
+ FM10K_UC_ADDR_SIZE);
+ if (!interface->uc_addr) {
+ err = -EIO;
+ goto err_ioremap;
+ }
+
+ err = fm10k_sw_init(interface, ent);
+ if (err)
+ goto err_sw_init;
+
+ /* enable debugfs support */
+ fm10k_dbg_intfc_init(interface);
+
+ err = fm10k_init_queueing_scheme(interface);
+ if (err)
+ goto err_sw_init;
+
+ err = fm10k_mbx_request_irq(interface);
+ if (err)
+ goto err_mbx_interrupt;
+
+ /* final check of hardware state before registering the interface */
+ err = fm10k_hw_ready(interface);
+ if (err)
+ goto err_register;
+
+ err = register_netdev(netdev);
+ if (err)
+ goto err_register;
+
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+
+ /* stop all the transmit queues from transmitting until link is up */
+ netif_tx_stop_all_queues(netdev);
+
+ /* Register PTP interface */
+ fm10k_ptp_register(interface);
+
+ /* print bus type/speed/width info */
+ dev_info(&pdev->dev, "(PCI Express:%s Width: %s Payload: %s)\n",
+ (hw->bus.speed == fm10k_bus_speed_8000 ? "8.0GT/s" :
+ hw->bus.speed == fm10k_bus_speed_5000 ? "5.0GT/s" :
+ hw->bus.speed == fm10k_bus_speed_2500 ? "2.5GT/s" :
+ "Unknown"),
+ (hw->bus.width == fm10k_bus_width_pcie_x8 ? "x8" :
+ hw->bus.width == fm10k_bus_width_pcie_x4 ? "x4" :
+ hw->bus.width == fm10k_bus_width_pcie_x1 ? "x1" :
+ "Unknown"),
+ (hw->bus.payload == fm10k_bus_payload_128 ? "128B" :
+ hw->bus.payload == fm10k_bus_payload_256 ? "256B" :
+ hw->bus.payload == fm10k_bus_payload_512 ? "512B" :
+ "Unknown"));
+
+ /* print warning for non-optimal configurations */
+ fm10k_slot_warn(interface);
+
+ /* enable SR-IOV after registering netdev to enforce PF/VF ordering */
+ fm10k_iov_configure(pdev, 0);
+
+ /* clear the service task disable bit to allow service task to start */
+ clear_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+
+ return 0;
+
+err_register:
+ fm10k_mbx_free_irq(interface);
+err_mbx_interrupt:
+ fm10k_clear_queueing_scheme(interface);
+err_sw_init:
+ if (interface->sw_addr)
+ iounmap(interface->sw_addr);
+ iounmap(interface->uc_addr);
+err_ioremap:
+ free_netdev(netdev);
+err_alloc_netdev:
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+err_pci_reg:
+err_dma:
+ pci_disable_device(pdev);
+ return err;
+}
+
+/**
+ * fm10k_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+ *
+ * fm10k_remove is called by the PCI subsystem to alert the driver
+ * that it should release a PCI device. This could be caused by a
+ * Hot-Plug event, or because the driver is going to be removed from
+ * memory.
+ **/
+static void fm10k_remove(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+
+ set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+ cancel_work_sync(&interface->service_task);
+
+ /* free netdev, this may bounce the interrupts due to setup_tc */
+ if (netdev->reg_state == NETREG_REGISTERED)
+ unregister_netdev(netdev);
+
+ /* cleanup timestamp handling */
+ fm10k_ptp_unregister(interface);
+
+ /* release VFs */
+ fm10k_iov_disable(pdev);
+
+ /* disable mailbox interrupt */
+ fm10k_mbx_free_irq(interface);
+
+ /* free interrupts */
+ fm10k_clear_queueing_scheme(interface);
+
+ /* remove any debugfs interfaces */
+ fm10k_dbg_intfc_exit(interface);
+
+ if (interface->sw_addr)
+ iounmap(interface->sw_addr);
+ iounmap(interface->uc_addr);
+
+ free_netdev(netdev);
+
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+
+ pci_disable_pcie_error_reporting(pdev);
+
+ pci_disable_device(pdev);
+}
+
+#ifdef CONFIG_PM
+/**
+ * fm10k_resume - Restore device to pre-sleep state
+ * @pdev: PCI device information struct
+ *
+ * fm10k_resume is called after the system has powered back up from a sleep
+ * state and is ready to resume operation. This function is meant to restore
+ * the device back to its pre-sleep state.
+ **/
+static int fm10k_resume(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+
+ /* pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
+
+ err = pci_enable_device_mem(pdev);
+ if (err) {
+ dev_err(&pdev->dev, "Cannot enable PCI device from suspend\n");
+ return err;
+ }
+ pci_set_master(pdev);
+
+ pci_wake_from_d3(pdev, false);
+
+ /* refresh hw_addr in case it was dropped */
+ hw->hw_addr = interface->uc_addr;
+
+ /* reset hardware to known state */
+ err = hw->mac.ops.init_hw(&interface->hw);
+ if (err)
+ return err;
+
+ /* reset statistics starting values */
+ hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
+
+ /* reset clock */
+ fm10k_ts_reset(interface);
+
+ rtnl_lock();
+
+ err = fm10k_init_queueing_scheme(interface);
+ if (!err) {
+ fm10k_mbx_request_irq(interface);
+ if (netif_running(netdev))
+ err = fm10k_open(netdev);
+ }
+
+ rtnl_unlock();
+
+ if (err)
+ return err;
+
+ /* restore SR-IOV interface */
+ fm10k_iov_resume(pdev);
+
+ netif_device_attach(netdev);
+
+ return 0;
+}
+
+/**
+ * fm10k_suspend - Prepare the device for a system sleep state
+ * @pdev: PCI device information struct
+ *
+ * fm10k_suspend is meant to shutdown the device prior to the system entering
+ * a sleep state. The fm10k hardware does not support wake on lan so the
+ * driver simply needs to shut down the device so it is in a low power state.
+ **/
+static int fm10k_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+ int err = 0;
+
+ netif_device_detach(netdev);
+
+ fm10k_iov_suspend(pdev);
+
+ rtnl_lock();
+
+ if (netif_running(netdev))
+ fm10k_close(netdev);
+
+ fm10k_mbx_free_irq(interface);
+
+ fm10k_clear_queueing_scheme(interface);
+
+ rtnl_unlock();
+
+ err = pci_save_state(pdev);
+ if (err)
+ return err;
+
+ pci_disable_device(pdev);
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+
+ return 0;
+}
+
+#endif /* CONFIG_PM */
+/**
+ * fm10k_io_error_detected - called when PCI error is detected
+ * @pdev: Pointer to PCI device
+ * @state: The current pci connection state
+ *
+ * This function is called after a PCI bus error affecting
+ * this device has been detected.
+ */
+static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+
+ netif_device_detach(netdev);
+
+ if (state == pci_channel_io_perm_failure)
+ return PCI_ERS_RESULT_DISCONNECT;
+
+ if (netif_running(netdev))
+ fm10k_close(netdev);
+
+ fm10k_mbx_free_irq(interface);
+
+ pci_disable_device(pdev);
+
+ /* Request a slot reset. */
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+/**
+ * fm10k_io_slot_reset - called after the pci bus has been reset.
+ * @pdev: Pointer to PCI device
+ *
+ * Restart the card from scratch, as if from a cold-boot.
+ */
+static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ pci_ers_result_t result;
+
+ if (pci_enable_device_mem(pdev)) {
+ dev_err(&pdev->dev,
+ "Cannot re-enable PCI device after reset.\n");
+ result = PCI_ERS_RESULT_DISCONNECT;
+ } else {
+ pci_set_master(pdev);
+ pci_restore_state(pdev);
+
+ /* After second error pci->state_saved is false, this
+ * resets it so EEH doesn't break.
+ */
+ pci_save_state(pdev);
+
+ pci_wake_from_d3(pdev, false);
+
+ /* refresh hw_addr in case it was dropped */
+ interface->hw.hw_addr = interface->uc_addr;
+
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+ fm10k_service_event_schedule(interface);
+
+ result = PCI_ERS_RESULT_RECOVERED;
+ }
+
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+
+ return result;
+}
+
+/**
+ * fm10k_io_resume - called when traffic can start flowing again.
+ * @pdev: Pointer to PCI device
+ *
+ * This callback is called when the error recovery driver tells us that
+ * it's OK to resume normal operation.
+ */
+static void fm10k_io_resume(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err = 0;
+
+ /* reset hardware to known state */
+ hw->mac.ops.init_hw(&interface->hw);
+
+ /* reset statistics starting values */
+ hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
+
+ /* reassociate interrupts */
+ fm10k_mbx_request_irq(interface);
+
+ /* reset clock */
+ fm10k_ts_reset(interface);
+
+ if (netif_running(netdev))
+ err = fm10k_open(netdev);
+
+ /* final check of hardware state before registering the interface */
+ err = err ? : fm10k_hw_ready(interface);
+
+ if (!err)
+ netif_device_attach(netdev);
+}
+
+static const struct pci_error_handlers fm10k_err_handler = {
+ .error_detected = fm10k_io_error_detected,
+ .slot_reset = fm10k_io_slot_reset,
+ .resume = fm10k_io_resume,
+};
+
+static struct pci_driver fm10k_driver = {
+ .name = fm10k_driver_name,
+ .id_table = fm10k_pci_tbl,
+ .probe = fm10k_probe,
+ .remove = fm10k_remove,
+#ifdef CONFIG_PM
+ .suspend = fm10k_suspend,
+ .resume = fm10k_resume,
+#endif
+ .sriov_configure = fm10k_iov_configure,
+ .err_handler = &fm10k_err_handler
+};
+
+/**
+ * fm10k_register_pci_driver - register driver interface
+ *
+ * This function is called on module load in order to register the driver.
+ **/
+int fm10k_register_pci_driver(void)
+{
+ return pci_register_driver(&fm10k_driver);
+}
+
+/**
+ * fm10k_unregister_pci_driver - unregister driver interface
+ *
+ * This function is called on module unload in order to remove the driver.
+ **/
+void fm10k_unregister_pci_driver(void)
+{
+ pci_unregister_driver(&fm10k_driver);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
new file mode 100644
index 0000000..275423d
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
@@ -0,0 +1,1880 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_pf.h"
+#include "fm10k_vf.h"
+
+/**
+ * fm10k_reset_hw_pf - PF hardware reset
+ * @hw: pointer to hardware structure
+ *
+ * This function should return the hardware to a state similar to the
+ * one it is in after being powered on.
+ **/
+static s32 fm10k_reset_hw_pf(struct fm10k_hw *hw)
+{
+ s32 err;
+ u32 reg;
+ u16 i;
+
+ /* Disable interrupts */
+ fm10k_write_reg(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL));
+
+ /* Lock ITR2 reg 0 into itself and disable interrupt moderation */
+ fm10k_write_reg(hw, FM10K_ITR2(0), 0);
+ fm10k_write_reg(hw, FM10K_INT_CTRL, 0);
+
+ /* We assume here Tx and Rx queue 0 are owned by the PF */
+
+ /* Shut off VF access to their queues forcing them to queue 0 */
+ for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) {
+ fm10k_write_reg(hw, FM10K_TQMAP(i), 0);
+ fm10k_write_reg(hw, FM10K_RQMAP(i), 0);
+ }
+
+ /* shut down all rings */
+ err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES);
+ if (err)
+ return err;
+
+ /* Verify that DMA is no longer active */
+ reg = fm10k_read_reg(hw, FM10K_DMA_CTRL);
+ if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE))
+ return FM10K_ERR_DMA_PENDING;
+
+ /* Initiate data path reset */
+ reg |= FM10K_DMA_CTRL_DATAPATH_RESET;
+ fm10k_write_reg(hw, FM10K_DMA_CTRL, reg);
+
+ /* Flush write and allow 100us for reset to complete */
+ fm10k_write_flush(hw);
+ udelay(FM10K_RESET_TIMEOUT);
+
+ /* Verify we made it out of reset */
+ reg = fm10k_read_reg(hw, FM10K_IP);
+ if (!(reg & FM10K_IP_NOTINRESET))
+ err = FM10K_ERR_RESET_FAILED;
+
+ return err;
+}
+
+/**
+ * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support
+ * @hw: pointer to hardware structure
+ *
+ * Looks at the ARI hierarchy bit to determine whether ARI is supported or not.
+ **/
+static bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw)
+{
+ u16 sriov_ctrl = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_SRIOV_CTRL);
+
+ return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI);
+}
+
+/**
+ * fm10k_init_hw_pf - PF hardware initialization
+ * @hw: pointer to hardware structure
+ *
+ **/
+static s32 fm10k_init_hw_pf(struct fm10k_hw *hw)
+{
+ u32 dma_ctrl, txqctl;
+ u16 i;
+
+ /* Establish default VSI as valid */
+ fm10k_write_reg(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0);
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(fm10k_dglort_default),
+ FM10K_DGLORTMAP_ANY);
+
+ /* Invalidate all other GLORT entries */
+ for (i = 1; i < FM10K_DGLORT_COUNT; i++)
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE);
+
+ /* reset ITR2(0) to point to itself */
+ fm10k_write_reg(hw, FM10K_ITR2(0), 0);
+
+ /* reset VF ITR2(0) to point to 0 to avoid PF registers */
+ fm10k_write_reg(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0);
+
+ /* loop through all PF ITR2 registers pointing them to the previous */
+ for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++)
+ fm10k_write_reg(hw, FM10K_ITR2(i), i - 1);
+
+ /* Enable interrupt moderator if not already enabled */
+ fm10k_write_reg(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR);
+
+ /* compute the default txqctl configuration */
+ txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW |
+ (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT);
+
+ for (i = 0; i < FM10K_MAX_QUEUES; i++) {
+ /* configure rings for 256 Queue / 32 Descriptor cache mode */
+ fm10k_write_reg(hw, FM10K_TQDLOC(i),
+ (i * FM10K_TQDLOC_BASE_32_DESC) |
+ FM10K_TQDLOC_SIZE_32_DESC);
+ fm10k_write_reg(hw, FM10K_TXQCTL(i), txqctl);
+
+ /* configure rings to provide TPH processing hints */
+ fm10k_write_reg(hw, FM10K_TPH_TXCTRL(i),
+ FM10K_TPH_TXCTRL_DESC_TPHEN |
+ FM10K_TPH_TXCTRL_DESC_RROEN |
+ FM10K_TPH_TXCTRL_DESC_WROEN |
+ FM10K_TPH_TXCTRL_DATA_RROEN);
+ fm10k_write_reg(hw, FM10K_TPH_RXCTRL(i),
+ FM10K_TPH_RXCTRL_DESC_TPHEN |
+ FM10K_TPH_RXCTRL_DESC_RROEN |
+ FM10K_TPH_RXCTRL_DATA_WROEN |
+ FM10K_TPH_RXCTRL_HDR_WROEN);
+ }
+
+ /* set max hold interval to align with 1.024 usec in all modes */
+ switch (hw->bus.speed) {
+ case fm10k_bus_speed_2500:
+ dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1;
+ break;
+ case fm10k_bus_speed_5000:
+ dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2;
+ break;
+ case fm10k_bus_speed_8000:
+ dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3;
+ break;
+ default:
+ dma_ctrl = 0;
+ break;
+ }
+
+ /* Configure TSO flags */
+ fm10k_write_reg(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW);
+ fm10k_write_reg(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI);
+
+ /* Enable DMA engine
+ * Set Rx Descriptor size to 32
+ * Set Minimum MSS to 64
+ * Set Maximum number of Rx queues to 256 / 32 Descriptor
+ */
+ dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE |
+ FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 |
+ FM10K_DMA_CTRL_32_DESC;
+
+ fm10k_write_reg(hw, FM10K_DMA_CTRL, dma_ctrl);
+
+ /* record maximum queue count; we limit ourselves to 128 */
+ hw->mac.max_queues = FM10K_MAX_QUEUES_PF;
+
+ /* We support either 64 VFs or 7 VFs depending on whether we have ARI */
+ hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 64 : 7;
+
+ return 0;
+}
+
+/**
+ * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU
+ * @hw: pointer to hardware structure
+ *
+ * Looks at the PCIe bus info to confirm whether or not this slot can support
+ * the necessary bandwidth for this device.
+ **/
+static bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw)
+{
+ return (hw->bus.speed == hw->bus_caps.speed) &&
+ (hw->bus.width == hw->bus_caps.width);
+}
+
+/**
+ * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vid: VLAN ID to add to table
+ * @vsi: Index indicating VF ID or PF ID in table
+ * @set: Indicates if this is a set or clear operation
+ *
+ * This function adds or removes the corresponding VLAN ID from the VLAN
+ * filter table for the corresponding function. In addition to the
+ * standard set/clear that supports one bit a multi-bit write is
+ * supported to set 64 bits at a time.
+ **/
+static s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
+{
+ u32 vlan_table, reg, mask, bit, len;
+
+ /* verify the VSI index is valid */
+ if (vsi > FM10K_VLAN_TABLE_VSI_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* VLAN multi-bit write:
+ * The multi-bit write has several parts to it.
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | RSVD0 | Length |C|RSVD0| VLAN ID |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * VLAN ID: Vlan Starting value
+ * RSVD0: Reserved section, must be 0
+ * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message)
+ * Length: Number of times to repeat the bit being set
+ */
+ len = vid >> 16;
+ vid = (vid << 17) >> 17;
+
+ /* verify the reserved 0 fields are 0 */
+ if (len >= FM10K_VLAN_TABLE_VID_MAX ||
+ vid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
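+ /* Worked example (illustrative, assuming a cleared table): an input of
+ * ((1 << 16) | 31) decodes to len = 1 and vid = 31, so with set true the
+ * loop below toggles bit 31 of the first table word and bit 0 of the
+ * next one, i.e. a run of len + 1 VLAN IDs starting at VID 31.
+ */
+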
+ /* Loop through the table updating all required VLANs */
+ for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32;
+ len < FM10K_VLAN_TABLE_VID_MAX;
+ len -= 32 - bit, reg++, bit = 0) {
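+ /* note: len is the number of additional bits beyond the first, and
+ * the u32 subtraction in the loop header wrapping past zero is what
+ * terminates the loop once the run is consumed
+ */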
+ /* record the initial state of the register */
+ vlan_table = fm10k_read_reg(hw, reg);
+
+ /* truncate mask if we are at the start or end of the run */
+ mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit;
+
+ /* make necessary modifications to the register */
+ mask &= set ? ~vlan_table : vlan_table;
+ if (mask)
+ fm10k_write_reg(hw, reg, vlan_table ^ mask);
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_read_mac_addr_pf - Read device MAC address
+ * @hw: pointer to the HW structure
+ *
+ * Reads the device MAC address from the SM_AREA and stores the value.
+ **/
+static s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw)
+{
+ u8 perm_addr[ETH_ALEN];
+ u32 serial_num;
+ int i;
+
+ serial_num = fm10k_read_reg(hw, FM10K_SM_AREA(1));
+
+ /* last byte should be all 1's */
+ if ((~serial_num) << 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[0] = (u8)(serial_num >> 24);
+ perm_addr[1] = (u8)(serial_num >> 16);
+ perm_addr[2] = (u8)(serial_num >> 8);
+
+ serial_num = fm10k_read_reg(hw, FM10K_SM_AREA(0));
+
+ /* first byte should be all 1's */
+ if ((~serial_num) >> 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[3] = (u8)(serial_num >> 16);
+ perm_addr[4] = (u8)(serial_num >> 8);
+ perm_addr[5] = (u8)(serial_num);
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ hw->mac.perm_addr[i] = perm_addr[i];
+ hw->mac.addr[i] = perm_addr[i];
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_glort_valid_pf - Validate that the provided glort is valid
+ * @hw: pointer to the HW structure
+ * @glort: base glort to be validated
+ *
+ * This function returns true if the provided glort is valid, false otherwise.
+ **/
+bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort)
+{
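+ /* dglort_map packs the glort mask above FM10K_DGLORTMAP_MASK_SHIFT and
+ * the base glort in the FM10K_DGLORTMAP_NONE bits; a glort belongs to
+ * this PF when the masked value equals the base
+ */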
+ glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT;
+
+ return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE);
+}
+
+/**
+ * fm10k_update_xc_addr_pf - Update device addresses
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ * @flags: flags field to indicate add and secure
+ *
+ * This function generates a message to the Switch API requesting
+ * that the given logical port add/remove the given L2 MAC/VLAN address.
+ **/
+static s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add, u8 flags)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ struct fm10k_mac_update mac_update;
+ u32 msg[5];
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* drop upper 4 bits of VLAN ID */
+ vid = (vid << 4) >> 4;
+
+ /* record fields */
+ mac_update.mac_lower = cpu_to_le32(((u32)mac[2] << 24) |
+ ((u32)mac[3] << 16) |
+ ((u32)mac[4] << 8) |
+ ((u32)mac[5]));
+ mac_update.mac_upper = cpu_to_le16(((u32)mac[0] << 8) |
+ ((u32)mac[1]));
+ mac_update.vlan = cpu_to_le16(vid);
+ mac_update.glort = cpu_to_le16(glort);
+ mac_update.action = add ? 0 : 1;
+ mac_update.flags = flags;
+
+ /* populate mac_update fields */
+ fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE);
+ fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE,
+ &mac_update, sizeof(mac_update));
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_uc_addr_pf - Update device unicast addresses
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ * @flags: flags field to indicate add and secure
+ *
+ * This function is used to add or remove unicast addresses for
+ * the PF.
+ **/
+static s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add, u8 flags)
+{
+ /* verify MAC address is valid */
+ if (!is_valid_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags);
+}
+
+/**
+ * fm10k_update_mc_addr_pf - Update device multicast addresses
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ *
+ * This function is used to add or remove multicast MAC addresses for
+ * the PF.
+ **/
+static s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add)
+{
+ /* verify multicast address is valid */
+ if (!is_multicast_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0);
+}
+
+/**
+ * fm10k_update_xcast_mode_pf - Request update of multicast mode
+ * @hw: pointer to hardware structure
+ * @glort: base resource tag for this request
+ * @mode: integer value indicating mode being requested
+ *
+ * This function will attempt to request a higher mode for the port
+ * so that it can enable either multicast, multicast promiscuous, or
+ * promiscuous mode of operation.
+ **/
+static s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[3], xcast_mode;
+
+ if (mode > FM10K_XCAST_MODE_NONE)
+ return FM10K_ERR_PARAM;
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* write xcast mode as a single u32 value,
+ * lower 16 bits: glort
+ * upper 16 bits: mode
+ */
+ xcast_mode = ((u32)mode << 16) | glort;
+
+ /* generate message requesting to change xcast mode */
+ fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES);
+ fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_int_moderator_pf - Update interrupt moderator linked list
+ * @hw: pointer to hardware structure
+ *
+ * This function walks through the MSI-X vector table to determine the
+ * number of active interrupts and based on that information updates the
+ * interrupt moderator linked list.
+ **/
+static void fm10k_update_int_moderator_pf(struct fm10k_hw *hw)
+{
+ u32 i;
+
+ /* Disable interrupt moderator */
+ fm10k_write_reg(hw, FM10K_INT_CTRL, 0);
+
+ /* loop through PF from last to first looking for enabled vectors */
+ for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) {
+ if (!fm10k_read_reg(hw, FM10K_MSIX_VECTOR_MASK(i)))
+ break;
+ }
+
+ /* always reset VFITR2[0] to point to last enabled PF vector */
+ fm10k_write_reg(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i);
+
+ /* reset ITR2[0] to point to last enabled PF vector */
+ if (!hw->iov.num_vfs)
+ fm10k_write_reg(hw, FM10K_ITR2(0), i);
+
+ /* Enable interrupt moderator */
+ fm10k_write_reg(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR);
+}
+
+/**
+ * fm10k_update_lport_state_pf - Notify the switch of a change in port state
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @count: number of logical ports being updated
+ * @enable: boolean value indicating enable or disable
+ *
+ * This function is used to add/remove a logical port from the switch.
+ **/
+static s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort,
+ u16 count, bool enable)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[3], lport_msg;
+
+ /* do nothing if we are being asked to create or destroy 0 ports */
+ if (!count)
+ return 0;
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* construct the lport message from the 2 pieces of data we have */
+ lport_msg = ((u32)count << 16) | glort;
+
+ /* generate lport create/delete message */
+ fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE :
+ FM10K_PF_MSG_ID_LPORT_DELETE);
+ fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues
+ * @hw: pointer to hardware structure
+ * @dglort: pointer to dglort configuration structure
+ *
+ * Reads the configuration structure contained in dglort_cfg and uses
+ * that information to then populate a DGLORTMAP/DEC entry and the queues
+ * to which it has been assigned.
+ **/
+static s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw,
+ struct fm10k_dglort_cfg *dglort)
+{
+ u16 glort, queue_count, vsi_count, pc_count;
+ u16 vsi, queue, pc, q_idx;
+ u32 txqctl, dglortdec, dglortmap;
+
+ /* verify the dglort pointer */
+ if (!dglort)
+ return FM10K_ERR_PARAM;
+
+ /* verify the dglort values */
+ if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) ||
+ (dglort->vsi_l > 6) || (dglort->vsi_b > 64) ||
+ (dglort->queue_l > 8) || (dglort->queue_b >= 256))
+ return FM10K_ERR_PARAM;
+
+ /* determine count of VSIs and queues */
+ queue_count = 1 << (dglort->rss_l + dglort->pc_l);
+ vsi_count = 1 << (dglort->vsi_l + dglort->queue_l);
+ glort = dglort->glort;
+ q_idx = dglort->queue_b;
+
+ /* configure SGLORT for queues */
+ for (vsi = 0; vsi < vsi_count; vsi++, glort++) {
+ for (queue = 0; queue < queue_count; queue++, q_idx++) {
+ if (q_idx >= FM10K_MAX_QUEUES)
+ break;
+
+ fm10k_write_reg(hw, FM10K_TX_SGLORT(q_idx), glort);
+ fm10k_write_reg(hw, FM10K_RX_SGLORT(q_idx), glort);
+ }
+ }
+
+ /* determine count of PCs and queues */
+ queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l);
+ pc_count = 1 << dglort->pc_l;
+
+ /* configure PC for Tx queues */
+ for (pc = 0; pc < pc_count; pc++) {
+ q_idx = pc + dglort->queue_b;
+ for (queue = 0; queue < queue_count; queue++) {
+ if (q_idx >= FM10K_MAX_QUEUES)
+ break;
+
+ txqctl = fm10k_read_reg(hw, FM10K_TXQCTL(q_idx));
+ txqctl &= ~FM10K_TXQCTL_PC_MASK;
+ txqctl |= pc << FM10K_TXQCTL_PC_SHIFT;
+ fm10k_write_reg(hw, FM10K_TXQCTL(q_idx), txqctl);
+
+ q_idx += pc_count;
+ }
+ }
+
+ /* configure DGLORTDEC */
+ dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) |
+ ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) |
+ ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) |
+ ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) |
+ ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) |
+ ((u32)(dglort->queue_l));
+ if (dglort->inner_rss)
+ dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE;
+
+ /* configure DGLORTMAP */
+ dglortmap = (dglort->idx == fm10k_dglort_default) ?
+ FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO;
+ dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l;
+ dglortmap |= dglort->glort;
+
+ /* write values to hardware */
+ fm10k_write_reg(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec);
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap);
+
+ return 0;
+}
+
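+/* Pools share a fixed block of queues, so the per-pool count shrinks as
+ * the pool count grows: more than 32 pools get 2 queues each, more than
+ * 16 get 4, more than 8 get 8, and 8 or fewer get the
+ * FM10K_MAX_QUEUES_POOL maximum.
+ */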
+u16 fm10k_queues_per_pool(struct fm10k_hw *hw)
+{
+ u16 num_pools = hw->iov.num_pools;
+
+ return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ?
+ 8 : FM10K_MAX_QUEUES_POOL;
+}
+
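+/* VF queues are carved downward from the top of the queue space: the
+ * last VF's block ends at FM10K_MAX_QUEUES and each VF owns a
+ * contiguous run of fm10k_queues_per_pool() queues.
+ */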
+u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx)
+{
+ u16 num_vfs = hw->iov.num_vfs;
+ u16 vf_q_idx = FM10K_MAX_QUEUES;
+
+ vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx);
+
+ return vf_q_idx;
+}
+
+static u16 fm10k_vectors_per_pool(struct fm10k_hw *hw)
+{
+ u16 num_pools = hw->iov.num_pools;
+
+ return (num_pools > 32) ? 8 : (num_pools > 16) ? 16 :
+ FM10K_MAX_VECTORS_POOL;
+}
+
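+/* VF vectors sit directly above the PF's block: VF 0 starts at
+ * FM10K_MAX_VECTORS_PF and each VF owns fm10k_vectors_per_pool()
+ * consecutive vectors.
+ */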
+static u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx)
+{
+ u16 vf_v_idx = FM10K_MAX_VECTORS_PF;
+
+ vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx;
+
+ return vf_v_idx;
+}
+
+/**
+ * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization
+ * @hw: pointer to the HW structure
+ * @num_vfs: number of VFs to be allocated
+ * @num_pools: number of virtualization pools to be allocated
+ *
+ * Allocates queues and traffic classes to virtualization entities to prepare
+ * the PF for SR-IOV and VMDq
+ **/
+static s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs,
+ u16 num_pools)
+{
+ u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx;
+ u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT;
+ int i, j;
+
+ /* hardware only supports up to 64 pools */
+ if (num_pools > 64)
+ return FM10K_ERR_PARAM;
+
+ /* the number of VFs cannot exceed the number of pools */
+ if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs))
+ return FM10K_ERR_PARAM;
+
+ /* record number of virtualization entities */
+ hw->iov.num_vfs = num_vfs;
+ hw->iov.num_pools = num_pools;
+
+ /* determine qmap offsets and counts */
+ qmap_stride = (num_vfs > 8) ? 32 : 256;
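+ /* the stride drops from 256 to 32 map entries once more than 8 VFs are
+ * in play, presumably so that every pool still fits within the
+ * TQMAP/RQMAP tables
+ */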
+ qpp = fm10k_queues_per_pool(hw);
+ vpp = fm10k_vectors_per_pool(hw);
+
+ /* calculate starting index for queues */
+ vf_q_idx = fm10k_vf_queue_index(hw, 0);
+ qmap_idx = 0;
+
+ /* establish TCs with -1 credits and no quanta to prevent transmit */
+ for (i = 0; i < num_vfs; i++) {
+ fm10k_write_reg(hw, FM10K_TC_MAXCREDIT(i), 0);
+ fm10k_write_reg(hw, FM10K_TC_RATE(i), 0);
+ fm10k_write_reg(hw, FM10K_TC_CREDIT(i),
+ FM10K_TC_CREDIT_CREDIT_MASK);
+ }
+
+ /* zero out all mbmem registers */
+ for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;)
+ fm10k_write_reg(hw, FM10K_MBMEM(i), 0);
+
+ /* clear event notification of VF FLR */
+ fm10k_write_reg(hw, FM10K_PFVFLREC(0), ~0);
+ fm10k_write_reg(hw, FM10K_PFVFLREC(1), ~0);
+
+ /* loop through unallocated rings assigning them back to PF */
+ for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) {
+ fm10k_write_reg(hw, FM10K_TXDCTL(i), 0);
+ fm10k_write_reg(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid);
+ fm10k_write_reg(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF);
+ }
+
+ /* PF should have already updated VFITR2[0] */
+
+ /* update all ITR registers to flow to VFITR2[0] */
+ for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) {
+ if (!(i & (vpp - 1)))
+ fm10k_write_reg(hw, FM10K_ITR2(i), i - vpp);
+ else
+ fm10k_write_reg(hw, FM10K_ITR2(i), i - 1);
+ }
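+ /* the net effect is a linked list per pool: each pool's first vector
+ * points back to the previous pool's first vector and the remaining
+ * vectors chain one-by-one within the pool (our reading of the
+ * moderator layout)
+ */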
+
+ /* update PF ITR2[0] to reference the last vector */
+ fm10k_write_reg(hw, FM10K_ITR2(0),
+ fm10k_vf_vector_index(hw, num_vfs - 1));
+
+ /* loop through rings populating rings and TCs */
+ for (i = 0; i < num_vfs; i++) {
+ /* record index for VF queue 0 for use in end of loop */
+ vf_q_idx0 = vf_q_idx;
+
+ for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) {
+ /* assign VF and locked TC to queues */
+ fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
+ fm10k_write_reg(hw, FM10K_TXQCTL(vf_q_idx),
+ (i << FM10K_TXQCTL_TC_SHIFT) | i |
+ FM10K_TXQCTL_VF | vid);
+ fm10k_write_reg(hw, FM10K_RXDCTL(vf_q_idx),
+ FM10K_RXDCTL_WRITE_BACK_MIN_DELAY |
+ FM10K_RXDCTL_DROP_ON_EMPTY);
+ fm10k_write_reg(hw, FM10K_RXQCTL(vf_q_idx),
+ FM10K_RXQCTL_VF |
+ (i << FM10K_RXQCTL_VF_SHIFT));
+
+ /* map queue pair to VF */
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx), vf_q_idx);
+ }
+
+ /* repeat the first ring for all of the remaining VF rings */
+ for (; j < qmap_stride; j++, qmap_idx++) {
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0);
+ }
+ }
+
+ /* loop through remaining indexes assigning all to queue 0 */
+ while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) {
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx), 0);
+ qmap_idx++;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_configure_tc_pf - Configure the shaping group for VF
+ * @hw: pointer to the HW structure
+ * @vf_idx: index of VF receiving GLORT
+ * @rate: Rate indicated in Mb/s
+ *
+ * Configures the TC for a given VF to allow only up to a given number
+ * of Mb/s of outgoing Tx throughput.
+ **/
+static s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate)
+{
+ /* configure defaults */
+ u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3;
+ u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* set interval to align with 4.096 usec in all modes */
+ switch (hw->bus.speed) {
+ case fm10k_bus_speed_2500:
+ interval = FM10K_TC_RATE_INTERVAL_4US_GEN1;
+ break;
+ case fm10k_bus_speed_5000:
+ interval = FM10K_TC_RATE_INTERVAL_4US_GEN2;
+ break;
+ default:
+ break;
+ }
+
+ if (rate) {
+ if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN)
+ return FM10K_ERR_PARAM;
+
+ /* The quanta is measured in Bytes per 4.096 or 8.192 usec
+ * The rate is provided in Mbits per second
+ * To translate from rate to quanta we need to multiply the
+ * rate by 8.192 usec and divide by 8 bits/byte. To avoid
+ * dealing with floating point we can round the values up
+ * to the nearest whole number ratio which gives us 128 / 125.
+ */
+ tc_rate = (rate * 128) / 125;
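+
+ /* e.g. rate = 1000 Mb/s gives tc_rate = 1024, i.e. 1024 bytes per
+ * 8.192 usec, which is exactly 1 Gb/s
+ */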
+
+ /* try to keep the rate limiting accurate by increasing
+ * the number of credits and interval for rates less than 4Gb/s
+ */
+ if (rate < 4000)
+ interval <<= 1;
+ else
+ tc_rate >>= 1;
+ }
+
+ /* update rate limiter with new values */
+ fm10k_write_reg(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval);
+ fm10k_write_reg(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K);
+ fm10k_write_reg(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K);
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list
+ * @hw: pointer to the HW structure
+ * @vf_idx: index of VF receiving GLORT
+ *
+ * Update the interrupt moderator linked list to include any MSI-X
+ * interrupts which the VF has enabled in the MSI-X vector table.
+ **/
+static s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx)
+{
+ u16 vf_v_idx, vf_v_limit, i;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* determine vector offset and count */
+ vf_v_idx = fm10k_vf_vector_index(hw, vf_idx);
+ vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw);
+
+ /* search for first vector that is not masked */
+ for (i = vf_v_limit - 1; i > vf_v_idx; i--) {
+ if (!fm10k_read_reg(hw, FM10K_MSIX_VECTOR_MASK(i)))
+ break;
+ }
+
+ /* reset linked list so it now includes our active vectors */
+ if (vf_idx == (hw->iov.num_vfs - 1))
+ fm10k_write_reg(hw, FM10K_ITR2(0), i);
+ else
+ fm10k_write_reg(hw, FM10K_ITR2(vf_v_limit), i);
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF
+ * @hw: pointer to the HW structure
+ * @vf_info: pointer to VF information structure
+ *
+ * Assign a MAC address and default VLAN to a VF and notify it of the update
+ **/
+static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info)
+{
+ u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i;
+ u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0;
+ s32 err = 0;
+ u16 vf_idx, vf_vid;
+
+ /* verify vf is in range */
+ if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* determine qmap offsets and counts */
+ qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256;
+ queues_per_pool = fm10k_queues_per_pool(hw);
+
+ /* calculate starting index for queues */
+ vf_idx = vf_info->vf_idx;
+ vf_q_idx = fm10k_vf_queue_index(hw, vf_idx);
+ qmap_idx = qmap_stride * vf_idx;
+
+ /* MAP Tx queue back to 0 temporarily, and disable it */
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
+ fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
+
+ /* determine correct default VLAN ID */
+ if (vf_info->pf_vid)
+ vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR;
+ else
+ vf_vid = vf_info->sw_vid;
+
+ /* generate MAC_ADDR request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC,
+ vf_info->mac, vf_vid);
+
+ /* load onto outgoing mailbox, ignore any errors on enqueue */
+ if (vf_info->mbx.ops.enqueue_tx)
+ vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+
+ /* verify the ring has been disabled before modifying base address registers */
+ txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(vf_q_idx));
+ for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) {
+ /* limit ourselves to a 1ms timeout */
+ if (timeout == 10) {
+ err = FM10K_ERR_DMA_PENDING;
+ goto err_out;
+ }
+
+ usleep_range(100, 200);
+ txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(vf_q_idx));
+ }
+
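+ /* Stashing the MAC in the ring's base address registers gives the VF a
+ * way to recover its address before the mailbox is connected; the 0xFF
+ * in the top byte of TDBAH appears to mark the value as valid (our
+ * reading of the design).
+ */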
+ /* Update base address registers to contain MAC address */
+ if (is_valid_ether_addr(vf_info->mac)) {
+ tdbal = (((u32)vf_info->mac[3]) << 24) |
+ (((u32)vf_info->mac[4]) << 16) |
+ (((u32)vf_info->mac[5]) << 8);
+
+ tdbah = (((u32)0xFF) << 24) |
+ (((u32)vf_info->mac[0]) << 16) |
+ (((u32)vf_info->mac[1]) << 8) |
+ ((u32)vf_info->mac[2]);
+ }
+
+ /* Record the base address into queue 0 */
+ fm10k_write_reg(hw, FM10K_TDBAL(vf_q_idx), tdbal);
+ fm10k_write_reg(hw, FM10K_TDBAH(vf_q_idx), tdbah);
+
+err_out:
+ /* configure Queue control register */
+ txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) &
+ FM10K_TXQCTL_VID_MASK;
+ txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) |
+ FM10K_TXQCTL_VF | vf_idx;
+
+ /* assign VID */
+ for (i = 0; i < queues_per_pool; i++)
+ fm10k_write_reg(hw, FM10K_TXQCTL(vf_q_idx + i), txqctl);
+
+ /* restore the queue back to VF ownership */
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx);
+ return err;
+}
+
+/**
+ * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF
+ * @hw: pointer to the HW structure
+ * @vf_info: pointer to VF information structure
+ *
+ * Reassign the interrupts and queues to a VF following an FLR
+ **/
+static s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info)
+{
+ u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx;
+ u32 tdbal = 0, tdbah = 0, txqctl, rxqctl;
+ u16 vf_v_idx, vf_v_limit, vf_vid;
+ u8 vf_idx = vf_info->vf_idx;
+ int i;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* clear event notification of VF FLR */
+ fm10k_write_reg(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32));
+
+ /* force timeout and then disconnect the mailbox */
+ vf_info->mbx.timeout = 0;
+ if (vf_info->mbx.ops.disconnect)
+ vf_info->mbx.ops.disconnect(hw, &vf_info->mbx);
+
+ /* determine vector offset and count */
+ vf_v_idx = fm10k_vf_vector_index(hw, vf_idx);
+ vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw);
+
+ /* determine qmap offsets and counts */
+ qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256;
+ queues_per_pool = fm10k_queues_per_pool(hw);
+ qmap_idx = qmap_stride * vf_idx;
+
+ /* make all the queues inaccessible to the VF */
+ for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) {
+ fm10k_write_reg(hw, FM10K_TQMAP(i), 0);
+ fm10k_write_reg(hw, FM10K_RQMAP(i), 0);
+ }
+
+ /* calculate starting index for queues */
+ vf_q_idx = fm10k_vf_queue_index(hw, vf_idx);
+
+ /* determine correct default VLAN ID */
+ if (vf_info->pf_vid)
+ vf_vid = vf_info->pf_vid;
+ else
+ vf_vid = vf_info->sw_vid;
+
+ /* configure Queue control register */
+ txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) |
+ (vf_idx << FM10K_TXQCTL_TC_SHIFT) |
+ FM10K_TXQCTL_VF | vf_idx;
+ rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT);
+
+ /* stop further DMA and reset queue ownership back to VF */
+ for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) {
+ fm10k_write_reg(hw, FM10K_TXDCTL(i), 0);
+ fm10k_write_reg(hw, FM10K_TXQCTL(i), txqctl);
+ fm10k_write_reg(hw, FM10K_RXDCTL(i),
+ FM10K_RXDCTL_WRITE_BACK_MIN_DELAY |
+ FM10K_RXDCTL_DROP_ON_EMPTY);
+ fm10k_write_reg(hw, FM10K_RXQCTL(i), rxqctl);
+ }
+
+ /* reset TC with -1 credits and no quanta to prevent transmit */
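+	/* (writing the full credit mask leaves the signed credit count at -1) */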
+ fm10k_write_reg(hw, FM10K_TC_MAXCREDIT(vf_idx), 0);
+ fm10k_write_reg(hw, FM10K_TC_RATE(vf_idx), 0);
+ fm10k_write_reg(hw, FM10K_TC_CREDIT(vf_idx),
+ FM10K_TC_CREDIT_CREDIT_MASK);
+
+ /* update our first entry in the table based on previous VF */
+ if (!vf_idx)
+ hw->mac.ops.update_int_moderator(hw);
+ else
+ hw->iov.ops.assign_int_moderator(hw, vf_idx - 1);
+
+ /* reset linked list so it now includes our active vectors */
+ if (vf_idx == (hw->iov.num_vfs - 1))
+ fm10k_write_reg(hw, FM10K_ITR2(0), vf_v_idx);
+ else
+ fm10k_write_reg(hw, FM10K_ITR2(vf_v_limit), vf_v_idx);
+
+ /* link remaining vectors so that next points to previous */
+ for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++)
+ fm10k_write_reg(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1);
+
+ /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */
+ for (i = FM10K_VFMBMEM_LEN; i--;)
+ fm10k_write_reg(hw, FM10K_MBMEM_VF(vf_idx, i), 0);
+ for (i = FM10K_VLAN_TABLE_SIZE; i--;)
+ fm10k_write_reg(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0);
+ for (i = FM10K_RETA_SIZE; i--;)
+ fm10k_write_reg(hw, FM10K_RETA(vf_info->vsi, i), 0);
+ for (i = FM10K_RSSRK_SIZE; i--;)
+ fm10k_write_reg(hw, FM10K_RSSRK(vf_info->vsi, i), 0);
+ fm10k_write_reg(hw, FM10K_MRQC(vf_info->vsi), 0);
+
+ /* Update base address registers to contain MAC address */
+ if (is_valid_ether_addr(vf_info->mac)) {
+ tdbal = (((u32)vf_info->mac[3]) << 24) |
+ (((u32)vf_info->mac[4]) << 16) |
+ (((u32)vf_info->mac[5]) << 8);
+ tdbah = (((u32)0xFF) << 24) |
+ (((u32)vf_info->mac[0]) << 16) |
+ (((u32)vf_info->mac[1]) << 8) |
+ ((u32)vf_info->mac[2]);
+ }
+
+	/* map queue pairs back to VF from last to first */
+ for (i = queues_per_pool; i--;) {
+ fm10k_write_reg(hw, FM10K_TDBAL(vf_q_idx + i), tdbal);
+ fm10k_write_reg(hw, FM10K_TDBAH(vf_q_idx + i), tdbah);
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i);
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF
+ * @hw: pointer to hardware structure
+ * @vf_info: pointer to VF information structure
+ * @lport_idx: Logical port offset from the hardware glort
+ * @flags: Set of capability flags to extend port beyond basic functionality
+ *
+ * This function allows enabling a VF port by assigning it a GLORT and
+ * setting the flags so that it can enable an Rx mode.
+ **/
+static s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info,
+ u16 lport_idx, u8 flags)
+{
+ u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE;
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE;
+ vf_info->glort = glort;
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF
+ * @hw: pointer to hardware structure
+ * @vf_info: pointer to VF information structure
+ *
+ * This function disables a VF port by stripping it of a GLORT and
+ * setting the flags so that it cannot enable any Rx mode.
+ **/
+static void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info)
+{
+ u32 msg[1];
+
+ /* need to disable the port if it is already enabled */
+ if (FM10K_VF_FLAG_ENABLED(vf_info)) {
+ /* notify switch that this port has been disabled */
+ fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false);
+
+ /* generate port state response to notify VF it is not ready */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+ }
+
+ /* clear flags and glort if it exists */
+ vf_info->vf_flags = 0;
+ vf_info->glort = 0;
+}
+
+/**
+ * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs
+ * @hw: pointer to hardware structure
+ * @q: stats for all queues of a VF
+ * @vf_idx: index of VF
+ *
+ * This function collects queue stats for VFs.
+ **/
+static void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u16 vf_idx)
+{
+ u32 idx, qpp;
+
+ /* get stats for all of the queues */
+ qpp = fm10k_queues_per_pool(hw);
+ idx = fm10k_vf_queue_index(hw, vf_idx);
+ fm10k_update_hw_stats_q(hw, q, idx, qpp);
+}
+
+static s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info,
+ u64 timestamp)
+{
+ u32 msg[4];
+
+	/* generate 1588 timestamp message to deliver to the VF */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588);
+ fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp);
+
+ return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+}
+
+/**
+ * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MSI-X requests from the VF. The
+ * assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/
+s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ u8 vf_idx = vf_info->vf_idx;
+
+ return hw->iov.ops.assign_int_moderator(hw, vf_idx);
+}
+
+/**
+ * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MAC/VLAN requests from the VF.
+ * The assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/
+s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ int err = 0;
+ u8 mac[ETH_ALEN];
+ u32 *result;
+ u16 vlan;
+ u32 vid;
+
+ /* we shouldn't be updating rules on a disabled interface */
+ if (!FM10K_VF_FLAG_ENABLED(vf_info))
+ err = FM10K_ERR_PARAM;
+
+ if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) {
+ result = results[FM10K_MAC_VLAN_MSG_VLAN];
+
+ /* record VLAN id requested */
+ err = fm10k_tlv_attr_get_u32(result, &vid);
+ if (err)
+ return err;
+
+ /* if VLAN ID is 0, set the default VLAN ID instead of 0 */
+ if (!vid || (vid == FM10K_VLAN_CLEAR)) {
+ if (vf_info->pf_vid)
+ vid |= vf_info->pf_vid;
+ else
+ vid |= vf_info->sw_vid;
+ } else if (vid != vf_info->pf_vid) {
+ return FM10K_ERR_PARAM;
+ }
+
+ /* update VSI info for VF in regards to VLAN table */
+ err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi,
+ !(vid & FM10K_VLAN_CLEAR));
+ }
+
+ if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) {
+ result = results[FM10K_MAC_VLAN_MSG_MAC];
+
+ /* record unicast MAC address requested */
+ err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan);
+ if (err)
+ return err;
+
+ /* block attempts to set MAC for a locked device */
+ if (is_valid_ether_addr(vf_info->mac) &&
+ memcmp(mac, vf_info->mac, ETH_ALEN))
+ return FM10K_ERR_PARAM;
+
+ /* if VLAN ID is 0, set the default VLAN ID instead of 0 */
+ if (!vlan || (vlan == FM10K_VLAN_CLEAR)) {
+ if (vf_info->pf_vid)
+ vlan |= vf_info->pf_vid;
+ else
+ vlan |= vf_info->sw_vid;
+ } else if (vf_info->pf_vid) {
+ return FM10K_ERR_PARAM;
+ }
+
+ /* notify switch of request for new unicast address */
+ err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan,
+ !(vlan & FM10K_VLAN_CLEAR), 0);
+ }
+
+ if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) {
+ result = results[FM10K_MAC_VLAN_MSG_MULTICAST];
+
+ /* record multicast MAC address requested */
+ err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan);
+ if (err)
+ return err;
+
+ /* verify that the VF is allowed to request multicast */
+ if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED))
+ return FM10K_ERR_PARAM;
+
+ /* if VLAN ID is 0, set the default VLAN ID instead of 0 */
+ if (!vlan || (vlan == FM10K_VLAN_CLEAR)) {
+ if (vf_info->pf_vid)
+ vlan |= vf_info->pf_vid;
+ else
+ vlan |= vf_info->sw_vid;
+ } else if (vf_info->pf_vid) {
+ return FM10K_ERR_PARAM;
+ }
+
+ /* notify switch of request for new multicast address */
+ err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac,
+ !(vlan & FM10K_VLAN_CLEAR), 0);
+ }
+
+ return err;
+}
+
+/**
+ * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode
+ * @vf_info: VF info structure containing capability flags
+ * @mode: Requested xcast mode
+ *
+ * This function outputs the mode that most closely matches the requested
+ * mode. If no modes match, it will request that we disable the port.
+ **/
+static u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info,
+ u8 mode)
+{
+ u8 vf_flags = vf_info->vf_flags;
+
+ /* match up mode to capabilities as best as possible */
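+	/* (each case intentionally falls through so an unsupported request
+	 * degrades to the next less permissive mode the VF is capable of)
+	 */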
+ switch (mode) {
+ case FM10K_XCAST_MODE_PROMISC:
+ if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE)
+ return FM10K_XCAST_MODE_PROMISC;
+		/* fallthrough */
+ case FM10K_XCAST_MODE_ALLMULTI:
+ if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE)
+ return FM10K_XCAST_MODE_ALLMULTI;
+		/* fallthrough */
+ case FM10K_XCAST_MODE_MULTI:
+ if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE)
+ return FM10K_XCAST_MODE_MULTI;
+		/* fallthrough */
+ case FM10K_XCAST_MODE_NONE:
+ if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)
+ return FM10K_XCAST_MODE_NONE;
+		/* fallthrough */
+ default:
+ break;
+ }
+
+ /* disable interface as it should not be able to request any */
+ return FM10K_XCAST_MODE_DISABLE;
+}
+
+/**
+ * fm10k_iov_msg_lport_state_pf - Message handler for port state requests
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for port state requests. The port
+ * state requests for now are basic and consist of enabling or disabling
+ * the port.
+ **/
+s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ u32 *result;
+ s32 err = 0;
+ u32 msg[2];
+ u8 mode = 0;
+
+ /* verify VF is allowed to enable even minimal mode */
+ if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE))
+ return FM10K_ERR_PARAM;
+
+ if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) {
+ result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE];
+
+ /* XCAST mode update requested */
+ err = fm10k_tlv_attr_get_u8(result, &mode);
+ if (err)
+ return FM10K_ERR_PARAM;
+
+ /* prep for possible demotion depending on capabilities */
+ mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode);
+
+ /* if mode is not currently enabled, enable it */
+ if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode)))
+ fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode);
+
+ /* swap mode back to a bit flag */
+ mode = FM10K_VF_FLAG_SET_MODE(mode);
+ } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) {
+ /* need to disable the port if it is already enabled */
+ if (FM10K_VF_FLAG_ENABLED(vf_info))
+ err = fm10k_update_lport_state_pf(hw, vf_info->glort,
+ 1, false);
+
+ /* when enabling the port we should reset the rate limiters */
+ hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate);
+
+ /* set mode for minimal functionality */
+ mode = FM10K_VF_FLAG_SET_MODE_NONE;
+
+ /* generate port state response to notify VF it is ready */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY);
+ mbx->ops.enqueue_tx(hw, mbx, msg);
+ }
+
+ /* if enable state toggled note the update */
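+	/* (the ! on each side reduces the operand to a boolean before comparing) */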
+ if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode))
+ err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1,
+ !!mode);
+
+ /* if state change succeeded, then update our stored state */
+ mode |= FM10K_VF_FLAG_CAPABLE(vf_info);
+ if (!err)
+ vf_info->vf_flags = mode;
+
+ return err;
+}
+
+const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+};
+
+/**
+ * fm10k_update_hw_stats_pf - Updates hardware related statistics of PF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to the stats structure to update
+ *
+ * This function collects and aggregates global and per queue hardware
+ * statistics.
+ **/
+static void fm10k_update_hw_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop;
+ u32 id, id_prev;
+
+ /* Use Tx queue 0 as a canary to detect a reset */
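+	/* (the ID field of TXQCTL(0) is assumed to change across a reset, so
+	 * an unchanged value before and after the reads brackets a consistent
+	 * snapshot of the statistics registers)
+	 */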
+ id = fm10k_read_reg(hw, FM10K_TXQCTL(0));
+
+ /* Read Global Statistics */
+ do {
+ timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT,
+ &stats->timeout);
+ ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur);
+ ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca);
+ um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um);
+ xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec);
+ vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP,
+ &stats->vlan_drop);
+ loopback_drop = fm10k_read_hw_stats_32b(hw,
+ FM10K_STATS_LOOPBACK_DROP,
+ &stats->loopback_drop);
+ nodesc_drop = fm10k_read_hw_stats_32b(hw,
+ FM10K_STATS_NODESC_DROP,
+ &stats->nodesc_drop);
+
+ /* if value has not changed then we have consistent data */
+ id_prev = id;
+ id = fm10k_read_reg(hw, FM10K_TXQCTL(0));
+ } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK);
+
+ /* drop non-ID bits and set VALID ID bit */
+ id &= FM10K_TXQCTL_ID_MASK;
+ id |= FM10K_STAT_VALID;
+
+ /* Update Global Statistics */
+ if (stats->stats_idx == id) {
+ stats->timeout.count += timeout;
+ stats->ur.count += ur;
+ stats->ca.count += ca;
+ stats->um.count += um;
+ stats->xec.count += xec;
+ stats->vlan_drop.count += vlan_drop;
+ stats->loopback_drop.count += loopback_drop;
+ stats->nodesc_drop.count += nodesc_drop;
+ }
+
+ /* Update bases and record current PF id */
+ fm10k_update_hw_base_32b(&stats->timeout, timeout);
+ fm10k_update_hw_base_32b(&stats->ur, ur);
+ fm10k_update_hw_base_32b(&stats->ca, ca);
+ fm10k_update_hw_base_32b(&stats->um, um);
+ fm10k_update_hw_base_32b(&stats->xec, xec);
+ fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop);
+ fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop);
+ fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop);
+ stats->stats_idx = id;
+
+ /* Update Queue Statistics */
+ fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues);
+}
+
+/**
+ * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to the stats structure to update
+ *
+ * This function resets the base for global and per queue hardware
+ * statistics.
+ **/
+static void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ /* Unbind Global Statistics */
+ fm10k_unbind_hw_stats_32b(&stats->timeout);
+ fm10k_unbind_hw_stats_32b(&stats->ur);
+ fm10k_unbind_hw_stats_32b(&stats->ca);
+ fm10k_unbind_hw_stats_32b(&stats->um);
+ fm10k_unbind_hw_stats_32b(&stats->xec);
+ fm10k_unbind_hw_stats_32b(&stats->vlan_drop);
+ fm10k_unbind_hw_stats_32b(&stats->loopback_drop);
+ fm10k_unbind_hw_stats_32b(&stats->nodesc_drop);
+
+ /* Unbind Queue Statistics */
+ fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues);
+
+ /* Reinitialize bases for all stats */
+ fm10k_update_hw_stats_pf(hw, stats);
+}
+
+/**
+ * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system
+ * @hw: pointer to hardware structure
+ * @dma_mask: 64 bit DMA mask required for platform
+ *
+ * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order
+ * to limit the access to memory beyond what is physically in the system.
+ **/
+static void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask)
+{
+ /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */
+ u32 phyaddr = (u32)(dma_mask >> 32);
+
+ fm10k_write_reg(hw, FM10K_PHYADDR, phyaddr);
+}
+
+/**
+ * fm10k_get_fault_pf - Record a fault in one of the interface units
+ * @hw: pointer to hardware structure
+ * @type: fault type register offset
+ * @fault: pointer to memory location to record the fault
+ *
+ * Record the fault register contents to the fault data structure and
+ * clear the entry from the register.
+ *
+ * Returns ERR_PARAM if an invalid register is specified or no error is present.
+ **/
+static s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type,
+ struct fm10k_fault *fault)
+{
+ u32 func;
+
+ /* verify the fault register is in range and is aligned */
+ switch (type) {
+ case FM10K_PCA_FAULT:
+ case FM10K_THI_FAULT:
+ case FM10K_FUM_FAULT:
+ break;
+ default:
+ return FM10K_ERR_PARAM;
+ }
+
+ /* only service faults that are valid */
+ func = fm10k_read_reg(hw, type + FM10K_FAULT_FUNC);
+ if (!(func & FM10K_FAULT_FUNC_VALID))
+ return FM10K_ERR_PARAM;
+
+ /* read remaining fields */
+ fault->address = fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_HI);
+ fault->address <<= 32;
+	fault->address |= fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_LO);
+ fault->specinfo = fm10k_read_reg(hw, type + FM10K_FAULT_SPECINFO);
+
+ /* clear valid bit to allow for next error */
+ fm10k_write_reg(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID);
+
+ /* Record which function triggered the error */
+ if (func & FM10K_FAULT_FUNC_PF)
+ fault->func = 0;
+ else
+ fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >>
+ FM10K_FAULT_FUNC_VF_SHIFT);
+
+ /* record fault type */
+ fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK;
+
+ return 0;
+}
+
+/**
+ * fm10k_request_lport_map_pf - Request LPORT map from the switch API
+ * @hw: pointer to hardware structure
+ *
+ * This function issues a request to the switch manager for the LPORT map
+ * so that the PF can learn the glort and mask assigned to it.
+ **/
+static s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[1];
+
+ /* issue request asking for LPORT map */
+ fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_get_host_state_pf - Returns the state of the switch and mailbox
+ * @hw: pointer to hardware structure
+ * @switch_ready: pointer to boolean value that will record switch state
+ *
+ * This function will check the DMA_CTRL2 register and mailbox in order
+ * to determine if the switch is ready for the PF to begin requesting
+ * addresses and mapping traffic to the local interface.
+ **/
+static s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready)
+{
+ s32 ret_val = 0;
+ u32 dma_ctrl2;
+
+	/* verify the switch is ready for interaction */
+ dma_ctrl2 = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
+ if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY))
+ goto out;
+
+ /* retrieve generic host state info */
+ ret_val = fm10k_get_host_state_generic(hw, switch_ready);
+ if (ret_val)
+ goto out;
+
+ /* interface cannot receive traffic without logical ports */
+ if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE)
+ ret_val = fm10k_request_lport_map_pf(hw);
+
+out:
+ return ret_val;
+}
+
+/* This structure defines the attributes to be parsed below */
+const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = {
+ FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler configures the lport mapping based on the reply from the
+ * switch API.
+ **/
+s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u16 glort, mask;
+ u32 dglort_map;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP],
+ &dglort_map);
+ if (err)
+ return err;
+
+ /* extract values out of the header */
+ glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT);
+ mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK);
+
+ /* verify mask is set and none of the masked bits in glort are set */
+ if (!mask || (glort & ~mask))
+ return FM10K_ERR_PARAM;
+
+ /* verify the mask is contiguous, and that it is 1's followed by 0's */
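+	/* ((~(mask - 1) & mask) isolates the lowest set bit; adding mask back
+	 * carries a contiguous run of 1's out to a single bit above the run,
+	 * while any gap in the run leaves stray bits behind for the test)
+	 */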
+ if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE)
+ return FM10K_ERR_PARAM;
+
+ /* record the glort, mask, and port count */
+ hw->mac.dglort_map = dglort_map;
+
+ return 0;
+}
+
+const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = {
+ FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler configures the default VLAN for the PF
+ **/
+s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u16 glort, pvid;
+ u32 pvid_update;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID],
+ &pvid_update);
+ if (err)
+ return err;
+
+ /* extract values from the pvid update */
+ glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT);
+ pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID);
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* verify VID is valid */
+ if (pvid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* record the port VLAN ID value */
+ hw->mac.default_vid = pvid;
+
+ return 0;
+}
+
+/**
+ * fm10k_record_global_table_data - Move global table data to swapi table info
+ * @from: pointer to source table data structure
+ * @to: pointer to destination table info structure
+ *
+ * This function will copy table_data to the table_info contained in
+ * the hw struct.
+ **/
+static void fm10k_record_global_table_data(struct fm10k_global_table_data *from,
+ struct fm10k_swapi_table_info *to)
+{
+ /* convert from le32 struct to CPU byte ordered values */
+ to->used = le32_to_cpu(from->used);
+ to->avail = le32_to_cpu(from->avail);
+}
+
+const struct fm10k_tlv_attr fm10k_err_msg_attr[] = {
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR,
+ sizeof(struct fm10k_swapi_error)),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_err_pf - Message handler for error reply
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler will capture the data for any error replies to previous
+ * messages that the PF has sent.
+ **/
+s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_swapi_error err_msg;
+ s32 err;
+
+ /* extract structure from message */
+ err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR],
+ &err_msg, sizeof(err_msg));
+ if (err)
+ return err;
+
+ /* record table status */
+ fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac);
+ fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop);
+ fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu);
+
+ /* record SW API status value */
+ hw->swapi.status = le32_to_cpu(err_msg.status);
+
+ return 0;
+}
+
+const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = {
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP,
+ sizeof(struct fm10k_swapi_1588_timestamp)),
+ FM10K_TLV_ATTR_LAST
+};
+
+/* currently there is no shared 1588 timestamp handler */
+
+/**
+ * fm10k_adjust_systime_pf - Adjust systime frequency
+ * @hw: pointer to hardware structure
+ * @ppb: adjustment rate in parts per billion
+ *
+ * This function will adjust the SYSTIME_CFG register contained in BAR 4
+ * if BAR 4 access is supported. The adjustment amount
+ * is based on the parts per billion value provided and adjusted to a
+ * value based on parts per 2^48 clock cycles.
+ *
+ * If adjustment is not supported or the requested value is too large
+ * we will return an error.
+ **/
+static s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb)
+{
+ u64 systime_adjust;
+
+ /* if sw_addr is not set we don't have switch register access */
+ if (!hw->sw_addr)
+ return ppb ? FM10K_ERR_PARAM : 0;
+
+ /* we must convert the value from parts per billion to parts per
+ * 2^48 cycles. In addition I have opted to only use the 30 most
+ * significant bits of the adjustment value as the 8 least
+ * significant bits are located in another register and represent
+	 * a value significantly less than a part per billion. The result
+ * of dropping the 8 least significant bits is that the adjustment
+ * value is effectively multiplied by 2^8 when we write it.
+ *
+ * As a result of all this the math for this breaks down as follows:
+ * ppb / 10^9 == adjust * 2^8 / 2^48
+ * If we solve this for adjust, and simplify it comes out as:
+ * ppb * 2^31 / 5^9 == adjust
+ */
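+	/* e.g. ppb = 1000 yields 1000 * 2^31 / 5^9 = 1099511, well within
+	 * the 30-bit adjustment field
+	 */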
+ systime_adjust = (ppb < 0) ? -ppb : ppb;
+ systime_adjust <<= 31;
+ do_div(systime_adjust, 1953125);
+
+ /* verify the requested adjustment value is in range */
+ if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK)
+ return FM10K_ERR_PARAM;
+
+ if (ppb < 0)
+ systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE;
+
+ fm10k_write_sw_reg(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust);
+
+ return 0;
+}
+
+/**
+ * fm10k_read_systime_pf - Reads value of systime registers
+ * @hw: pointer to the hardware structure
+ *
+ * Function reads the content of 2 registers, combined to represent a 64 bit
+ * value measured in nanoseconds. In order to guarantee the value is accurate
+ * we check the 32 most significant bits both before and after reading the
+ * 32 least significant bits to verify they didn't change as we were reading
+ * the registers.
+ **/
+static u64 fm10k_read_systime_pf(struct fm10k_hw *hw)
+{
+ u32 systime_l, systime_h, systime_tmp;
+
+ systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1);
+
+ do {
+ systime_tmp = systime_h;
+ systime_l = fm10k_read_reg(hw, FM10K_SYSTIME);
+ systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1);
+ } while (systime_tmp != systime_h);
+
+ return ((u64)systime_h << 32) | systime_l;
+}
+
+static const struct fm10k_msg_data fm10k_msg_data_pf[] = {
+ FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+};
+
+static struct fm10k_mac_ops mac_ops_pf = {
+ .get_bus_info = &fm10k_get_bus_info_generic,
+ .reset_hw = &fm10k_reset_hw_pf,
+ .init_hw = &fm10k_init_hw_pf,
+ .start_hw = &fm10k_start_hw_generic,
+ .stop_hw = &fm10k_stop_hw_generic,
+ .is_slot_appropriate = &fm10k_is_slot_appropriate_pf,
+ .update_vlan = &fm10k_update_vlan_pf,
+ .read_mac_addr = &fm10k_read_mac_addr_pf,
+ .update_uc_addr = &fm10k_update_uc_addr_pf,
+ .update_mc_addr = &fm10k_update_mc_addr_pf,
+ .update_xcast_mode = &fm10k_update_xcast_mode_pf,
+ .update_int_moderator = &fm10k_update_int_moderator_pf,
+ .update_lport_state = &fm10k_update_lport_state_pf,
+ .update_hw_stats = &fm10k_update_hw_stats_pf,
+ .rebind_hw_stats = &fm10k_rebind_hw_stats_pf,
+ .configure_dglort_map = &fm10k_configure_dglort_map_pf,
+ .set_dma_mask = &fm10k_set_dma_mask_pf,
+ .get_fault = &fm10k_get_fault_pf,
+ .get_host_state = &fm10k_get_host_state_pf,
+ .adjust_systime = &fm10k_adjust_systime_pf,
+ .read_systime = &fm10k_read_systime_pf,
+};
+
+static struct fm10k_iov_ops iov_ops_pf = {
+ .assign_resources = &fm10k_iov_assign_resources_pf,
+ .configure_tc = &fm10k_iov_configure_tc_pf,
+ .assign_int_moderator = &fm10k_iov_assign_int_moderator_pf,
+ .assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf,
+ .reset_resources = &fm10k_iov_reset_resources_pf,
+ .set_lport = &fm10k_iov_set_lport_pf,
+ .reset_lport = &fm10k_iov_reset_lport_pf,
+ .update_stats = &fm10k_iov_update_stats_pf,
+ .report_timestamp = &fm10k_iov_report_timestamp_pf,
+};
+
+static s32 fm10k_get_invariants_pf(struct fm10k_hw *hw)
+{
+ fm10k_get_invariants_generic(hw);
+
+ return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf);
+}
+
+struct fm10k_info fm10k_pf_info = {
+ .mac = fm10k_mac_pf,
+ .get_invariants = &fm10k_get_invariants_pf,
+ .mac_ops = &mac_ops_pf,
+ .iov_ops = &iov_ops_pf,
+};
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.h b/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
new file mode 100644
index 0000000..7ab1db4
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
@@ -0,0 +1,135 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_PF_H_
+#define _FM10K_PF_H_
+
+#include "fm10k_type.h"
+#include "fm10k_common.h"
+
+bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort);
+u16 fm10k_queues_per_pool(struct fm10k_hw *hw);
+u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx);
+
+enum fm10k_pf_tlv_msg_id_v1 {
+ FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */
+ FM10K_PF_MSG_ID_XCAST_MODES = 0x001,
+ FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002,
+ FM10K_PF_MSG_ID_LPORT_MAP = 0x100,
+ FM10K_PF_MSG_ID_LPORT_CREATE = 0x200,
+ FM10K_PF_MSG_ID_LPORT_DELETE = 0x201,
+ FM10K_PF_MSG_ID_CONFIG = 0x300,
+ FM10K_PF_MSG_ID_UPDATE_PVID = 0x400,
+ FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501,
+ FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502,
+ FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503,
+ FM10K_PF_MSG_ID_DELETE_FLOW = 0x504,
+ FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505,
+ FM10K_PF_MSG_ID_GET_1588_INFO = 0x506,
+ FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701,
+};
+
+enum fm10k_pf_tlv_attr_id_v1 {
+ FM10K_PF_ATTR_ID_ERR = 0x00,
+ FM10K_PF_ATTR_ID_LPORT_MAP = 0x01,
+ FM10K_PF_ATTR_ID_XCAST_MODE = 0x02,
+ FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03,
+ FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04,
+ FM10K_PF_ATTR_ID_CONFIG = 0x05,
+ FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06,
+ FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07,
+ FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08,
+ FM10K_PF_ATTR_ID_FLOW_STATE = 0x09,
+ FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A,
+ FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B,
+ FM10K_PF_ATTR_ID_PORT = 0x0C,
+ FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D,
+ FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10,
+};
+
+#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0
+#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16
+#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16
+#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16
+
+#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0
+#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16
+#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16
+#define FM10K_MSG_UPDATE_PVID_PVID_SIZE 16
+
+struct fm10k_mac_update {
+ __le32 mac_lower;
+ __le16 mac_upper;
+ __le16 vlan;
+ __le16 glort;
+ u8 flags;
+ u8 action;
+};
+
+struct fm10k_global_table_data {
+ __le32 used;
+ __le32 avail;
+};
+
+struct fm10k_swapi_error {
+ __le32 status;
+ struct fm10k_global_table_data mac;
+ struct fm10k_global_table_data nexthop;
+ struct fm10k_global_table_data ffu;
+};
+
+struct fm10k_swapi_1588_timestamp {
+ __le64 egress;
+ __le64 ingress;
+ __le16 dglort;
+ __le16 sglort;
+};
+
+s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[];
+#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \
+ fm10k_lport_map_msg_attr, func)
+s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[];
+#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \
+ fm10k_update_pvid_msg_attr, func)
+
+s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_err_msg_attr[];
+#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func)
+
+extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[];
+#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \
+ fm10k_1588_timestamp_msg_attr, func)
+
+s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[];
+
+extern struct fm10k_info fm10k_pf_info;
+#endif /* _FM10K_PF_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c b/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
new file mode 100644
index 0000000..7822809
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
@@ -0,0 +1,463 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/ptp_classify.h>
+#include <linux/ptp_clock_kernel.h>
+
+#include "fm10k.h"
+
+#define FM10K_TS_TX_TIMEOUT (HZ * 15)
+
+void fm10k_systime_to_hwtstamp(struct fm10k_intfc *interface,
+ struct skb_shared_hwtstamps *hwtstamp,
+ u64 systime)
+{
+ unsigned long flags;
+
+ read_lock_irqsave(&interface->systime_lock, flags);
+ systime += interface->ptp_adjust;
+ read_unlock_irqrestore(&interface->systime_lock, flags);
+
+ hwtstamp->hwtstamp = ns_to_ktime(systime);
+}
+
+static struct sk_buff *fm10k_ts_tx_skb(struct fm10k_intfc *interface,
+ __le16 dglort)
+{
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *skb;
+
+ skb_queue_walk(list, skb) {
+ if (FM10K_CB(skb)->fi.w.dglort == dglort)
+ return skb;
+ }
+
+ return NULL;
+}
+
+void fm10k_ts_tx_enqueue(struct fm10k_intfc *interface, struct sk_buff *skb)
+{
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *clone;
+ unsigned long flags;
+ __le16 dglort;
+
+ /* create clone for us to return on the Tx path */
+ clone = skb_clone_sk(skb);
+ if (!clone)
+ return;
+
+ FM10K_CB(clone)->ts_tx_timeout = jiffies + FM10K_TS_TX_TIMEOUT;
+ dglort = FM10K_CB(clone)->fi.w.dglort;
+
+ spin_lock_irqsave(&list->lock, flags);
+
+ /* attempt to locate any buffers with the same dglort,
+ * if none are present then insert skb in tail of list
+ */
+	skb = fm10k_ts_tx_skb(interface, dglort);
+ if (!skb)
+ __skb_queue_tail(list, clone);
+
+ spin_unlock_irqrestore(&list->lock, flags);
+
+	/* if the list already has one then we just free the clone */
+	if (skb)
+		kfree_skb(clone);
+ else
+ skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+}
+
+void fm10k_ts_tx_hwtstamp(struct fm10k_intfc *interface, __le16 dglort,
+ u64 systime)
+{
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&list->lock, flags);
+
+ /* attempt to locate and pull the sk_buff out of the list */
+ skb = fm10k_ts_tx_skb(interface, dglort);
+ if (skb)
+ __skb_unlink(skb, list);
+
+ spin_unlock_irqrestore(&list->lock, flags);
+
+ /* if not found do nothing */
+ if (!skb)
+ return;
+
+ /* timestamp the sk_buff and return it to the socket */
+ fm10k_systime_to_hwtstamp(interface, &shhwtstamps, systime);
+ skb_complete_tx_timestamp(skb, &shhwtstamps);
+}
+
+void fm10k_ts_tx_subtask(struct fm10k_intfc *interface)
+{
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *skb, *tmp;
+ unsigned long flags;
+
+ /* If we're down or resetting, just bail */
+ if (test_bit(__FM10K_DOWN, &interface->state) ||
+ test_bit(__FM10K_RESETTING, &interface->state))
+ return;
+
+ spin_lock_irqsave(&list->lock, flags);
+
+ /* walk though the list and flush any expired timestamp packets */
+ skb_queue_walk_safe(list, skb, tmp) {
+ if (!time_is_after_jiffies(FM10K_CB(skb)->ts_tx_timeout))
+ continue;
+ __skb_unlink(skb, list);
+ kfree_skb(skb);
+ interface->tx_hwtstamp_timeouts++;
+ }
+
+ spin_unlock_irqrestore(&list->lock, flags);
+}
+
+static u64 fm10k_systime_read(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+
+ return hw->mac.ops.read_systime(hw);
+}
+
+void fm10k_ts_reset(struct fm10k_intfc *interface)
+{
+ s64 ns = ktime_to_ns(ktime_get_real());
+ unsigned long flags;
+
+ /* reinitialize the clock */
+ write_lock_irqsave(&interface->systime_lock, flags);
+ interface->ptp_adjust = fm10k_systime_read(interface) - ns;
+ write_unlock_irqrestore(&interface->systime_lock, flags);
+}
+
+void fm10k_ts_init(struct fm10k_intfc *interface)
+{
+ /* Initialize lock protecting systime access */
+ rwlock_init(&interface->systime_lock);
+
+ /* Initialize skb queue for pending timestamp requests */
+ skb_queue_head_init(&interface->ts_tx_skb_queue);
+
+ /* reset the clock to current kernel time */
+ fm10k_ts_reset(interface);
+}
+
+/**
+ * fm10k_get_ts_config - get current hardware timestamping configuration
+ * @netdev: network interface device structure
+ * @ifr: ioctl data
+ *
+ * This function returns the current timestamping settings. Rather than
+ * attempt to deconstruct registers to fill in the values, simply keep a copy
+ * of the old settings around, and return a copy when requested.
+ */
+int fm10k_get_ts_config(struct net_device *netdev, struct ifreq *ifr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct hwtstamp_config *config = &interface->ts_config;
+
+ return copy_to_user(ifr->ifr_data, config, sizeof(*config)) ?
+ -EFAULT : 0;
+}
+
+/**
+ * fm10k_set_ts_config - control hardware time stamping
+ * @netdev: network interface device structure
+ * @ifr: ioctl data
+ *
+ * Outgoing time stamping can be enabled and disabled. Play nice and
+ * disable it when requested, although it shouldn't cause any overhead
+ * when no packet needs it. At most one packet in the queue may be
+ * marked for time stamping, otherwise it would be impossible to tell
+ * for sure to which packet the hardware time stamp belongs.
+ *
+ * Incoming time stamping has to be configured via the hardware
+ * filters. Not all combinations are supported, in particular event
+ * type has to be specified. Matching the kind of event packet is
+ * not supported, with the exception of "all V2 events regardless of
+ * level 2 or 4".
+ *
+ * Since hardware always timestamps Path delay packets when timestamping V2
+ * packets, regardless of the type specified in the register, only use V2
+ * Event mode. This more accurately tells the user what the hardware is going
+ * to do anyways.
+ */
+int fm10k_set_ts_config(struct net_device *netdev, struct ifreq *ifr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct hwtstamp_config ts_config;
+
+ if (copy_from_user(&ts_config, ifr->ifr_data, sizeof(ts_config)))
+ return -EFAULT;
+
+ /* reserved for future extensions */
+ if (ts_config.flags)
+ return -EINVAL;
+
+ switch (ts_config.tx_type) {
+ case HWTSTAMP_TX_OFF:
+ break;
+ case HWTSTAMP_TX_ON:
+ /* we likely need some check here to see if this is supported */
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ switch (ts_config.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ interface->flags &= ~FM10K_FLAG_RX_TS_ENABLED;
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ case HWTSTAMP_FILTER_ALL:
+ interface->flags |= FM10K_FLAG_RX_TS_ENABLED;
+ ts_config.rx_filter = HWTSTAMP_FILTER_ALL;
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ /* save these settings for future reference */
+ interface->ts_config = ts_config;
+
+ return copy_to_user(ifr->ifr_data, &ts_config, sizeof(ts_config)) ?
+ -EFAULT : 0;
+}
+
+static int fm10k_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+{
+ struct fm10k_intfc *interface;
+ struct fm10k_hw *hw;
+ int err;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+ hw = &interface->hw;
+
+ err = hw->mac.ops.adjust_systime(hw, ppb);
+
+ /* the only error we should see is if the value is out of range */
+ return (err == FM10K_ERR_PARAM) ? -ERANGE : err;
+}
+
+static int fm10k_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct fm10k_intfc *interface;
+ unsigned long flags;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+
+ write_lock_irqsave(&interface->systime_lock, flags);
+ interface->ptp_adjust += delta;
+ write_unlock_irqrestore(&interface->systime_lock, flags);
+
+ return 0;
+}
+
+static int fm10k_ptp_gettime(struct ptp_clock_info *ptp, struct timespec *ts)
+{
+ struct fm10k_intfc *interface;
+ unsigned long flags;
+ u64 now;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+
+ read_lock_irqsave(&interface->systime_lock, flags);
+ now = fm10k_systime_read(interface) + interface->ptp_adjust;
+ read_unlock_irqrestore(&interface->systime_lock, flags);
+
+ *ts = ns_to_timespec(now);
+
+ return 0;
+}
+
+static int fm10k_ptp_settime(struct ptp_clock_info *ptp,
+ const struct timespec *ts)
+{
+ struct fm10k_intfc *interface;
+ unsigned long flags;
+ u64 ns = timespec_to_ns(ts);
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+
+ write_lock_irqsave(&interface->systime_lock, flags);
+ interface->ptp_adjust = fm10k_systime_read(interface) - ns;
+ write_unlock_irqrestore(&interface->systime_lock, flags);
+
+ return 0;
+}
+
+static int fm10k_ptp_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ struct ptp_clock_time *t = &rq->perout.period;
+ struct fm10k_intfc *interface;
+ struct fm10k_hw *hw;
+ u64 period;
+ u32 step;
+
+ /* we can only support periodic output */
+ if (rq->type != PTP_CLK_REQ_PEROUT)
+ return -EINVAL;
+
+ /* verify the requested channel is there */
+ if (rq->perout.index >= ptp->n_per_out)
+ return -EINVAL;
+
+ /* we cannot enforce start time as there is no
+	 * mechanism for that in the hardware; we can only control
+ * the period.
+ */
+
+ /* we cannot support periods greater than 4 seconds due to reg limit */
+ if (t->sec > 4 || t->sec < 0)
+ return -ERANGE;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+ hw = &interface->hw;
+
+ /* we simply cannot support the operation if we don't have BAR4 */
+ if (!hw->sw_addr)
+ return -ENOTSUPP;
+
+ /* convert to unsigned 64b ns, verify we can put it in a 32b register */
+ period = t->sec * 1000000000LL + t->nsec;
+
+ /* determine the minimum size for period */
+ step = 2 * (fm10k_read_reg(hw, FM10K_SYSTIME_CFG) &
+ FM10K_SYSTIME_CFG_STEP_MASK);
+
+ /* verify the value is in range supported by hardware */
+ if ((period && (period < step)) || (period > U32_MAX))
+ return -ERANGE;
+
+	/* notify hardware of request to begin sending pulses */
+ fm10k_write_sw_reg(hw, FM10K_SW_SYSTIME_PULSE(rq->perout.index),
+ (u32)period);
+
+ return 0;
+}
+
+static struct ptp_pin_desc fm10k_ptp_pd[2] = {
+ {
+ .name = "IEEE1588_PULSE0",
+ .index = 0,
+ .func = PTP_PF_PEROUT,
+ .chan = 0
+ },
+ {
+ .name = "IEEE1588_PULSE1",
+ .index = 1,
+ .func = PTP_PF_PEROUT,
+ .chan = 1
+ }
+};
+
+static int fm10k_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
+ enum ptp_pin_function func, unsigned int chan)
+{
+ /* verify the requested pin is there */
+ if (pin >= ptp->n_pins || !ptp->pin_config)
+ return -EINVAL;
+
+ /* enforce locked channels, no changing them */
+ if (chan != ptp->pin_config[pin].chan)
+ return -EINVAL;
+
+ /* we want to keep the functions locked as well */
+ if (func != ptp->pin_config[pin].func)
+ return -EINVAL;
+
+ return 0;
+}
+
+void fm10k_ptp_register(struct fm10k_intfc *interface)
+{
+ struct ptp_clock_info *ptp_caps = &interface->ptp_caps;
+ struct device *dev = &interface->pdev->dev;
+ struct ptp_clock *ptp_clock;
+
+ snprintf(ptp_caps->name, sizeof(ptp_caps->name),
+ "%s", interface->netdev->name);
+ ptp_caps->owner = THIS_MODULE;
+ /* This math is simply the inverse of the math in
+ * fm10k_adjust_systime_pf applied to an adjustment value
+ * of 2^30 - 1 which is the maximum value of the register:
+ * max_ppb == ((2^30 - 1) * 5^9) / 2^31
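+	 *         which truncates to 976562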
+ */
+ ptp_caps->max_adj = 976562;
+ ptp_caps->adjfreq = fm10k_ptp_adjfreq;
+ ptp_caps->adjtime = fm10k_ptp_adjtime;
+ ptp_caps->gettime = fm10k_ptp_gettime;
+ ptp_caps->settime = fm10k_ptp_settime;
+
+ /* provide pins if BAR4 is accessible */
+ if (interface->sw_addr) {
+ /* enable periodic outputs */
+ ptp_caps->n_per_out = 2;
+ ptp_caps->enable = fm10k_ptp_enable;
+
+ /* enable clock pins */
+ ptp_caps->verify = fm10k_ptp_verify;
+ ptp_caps->n_pins = 2;
+ ptp_caps->pin_config = fm10k_ptp_pd;
+ }
+
+ ptp_clock = ptp_clock_register(ptp_caps, dev);
+ if (IS_ERR(ptp_clock)) {
+ ptp_clock = NULL;
+ dev_err(dev, "ptp_clock_register failed\n");
+ } else {
+ dev_info(dev, "registered PHC device %s\n", ptp_caps->name);
+ }
+
+ interface->ptp_clock = ptp_clock;
+}
+
+void fm10k_ptp_unregister(struct fm10k_intfc *interface)
+{
+ struct ptp_clock *ptp_clock = interface->ptp_clock;
+ struct device *dev = &interface->pdev->dev;
+
+ if (!ptp_clock)
+ return;
+
+ interface->ptp_clock = NULL;
+
+ ptp_clock_unregister(ptp_clock);
+ dev_info(dev, "removed PHC %s\n", interface->ptp_caps.name);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
new file mode 100644
index 0000000..fd0a05f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
@@ -0,0 +1,863 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_tlv.h"
+
+/**
+ * fm10k_tlv_msg_init - Initialize message block for TLV data storage
+ * @msg: Pointer to message block
+ * @msg_id: Message ID indicating message type
+ *
+ * This function returns success if provided with a valid message pointer
+ **/
+s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id)
+{
+ /* verify pointer is not NULL */
+ if (!msg)
+ return FM10K_ERR_PARAM;
+
+ *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id;
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_null_string - Place null terminated string on message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @string: Pointer to string to be stored in attribute
+ *
+ * This function will reorder a string to be CPU endian and store it in
+ * the attribute buffer. It will return success if provided with valid
+ * pointers.
+ **/
+s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id,
+ const unsigned char *string)
+{
+ u32 attr_data = 0, len = 0;
+ u32 *attr;
+
+ /* verify pointers are not NULL */
+ if (!string || !msg)
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ /* copy string into local variable and then write to msg */
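+	/* (characters are packed four per 32-bit word, least significant byte
+	 * first, and the terminating NULL is counted in the length)
+	 */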
+ do {
+ /* write data to message */
+ if (len && !(len % 4)) {
+ attr[len / 4] = attr_data;
+ attr_data = 0;
+ }
+
+ /* record character to offset location */
+ attr_data |= (u32)(*string) << (8 * (len % 4));
+ len++;
+
+ /* test for NULL and then increment */
+ } while (*(string++));
+
+ /* write last piece of data to message */
+ attr[(len + 3) / 4] = attr_data;
+
+ /* record attribute header, update message length */
+ len <<= FM10K_TLV_LEN_SHIFT;
+ attr[0] = len | attr_id;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute
+ * @attr: Pointer to attribute
+ * @string: Pointer to location of destination string
+ *
+ * This function pulls the string back out of the attribute and will place
+ * it in the array pointed to by string. It will return success if provided
+ * with valid pointers.
+ **/
+s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string)
+{
+ u32 len;
+
+ /* verify pointers are not NULL */
+ if (!string || !attr)
+ return FM10K_ERR_PARAM;
+
+ len = *attr >> FM10K_TLV_LEN_SHIFT;
+ attr++;
+
+ while (len--)
+ string[len] = (u8)(attr[len / 4] >> (8 * (len % 4)));
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @mac_addr: MAC address to be stored
+ * @vlan: VLAN ID to be stored with the MAC address
+ *
+ * This function will reorder a MAC address to be CPU endian and store it
+ * in the attribute buffer. It will return success if provided with
+ * valid pointers.
+ **/
+s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id,
+ const u8 *mac_addr, u16 vlan)
+{
+ u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT;
+ u32 *attr;
+
+ /* verify pointers are not NULL */
+ if (!msg || !mac_addr)
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ /* record attribute header, update message length */
+ attr[0] = len | attr_id;
+
+ /* copy value into local variable and then write to msg */
+ attr[1] = le32_to_cpu(*(const __le32 *)&mac_addr[0]);
+ attr[2] = le16_to_cpu(*(const __le16 *)&mac_addr[4]);
+ attr[2] |= (u32)vlan << 16;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute
+ * @attr: Pointer to attribute
+ * @mac_addr: location of buffer to store MAC address
+ * @vlan: location of buffer to store VLAN ID
+ *
+ * This function pulls the MAC address back out of the attribute and will
+ * place it in the array pointed to by mac_addr. It will return success
+ * if provided with valid pointers.
+ **/
+s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan)
+{
+ /* verify pointers are not NULL */
+ if (!mac_addr || !attr)
+ return FM10K_ERR_PARAM;
+
+ *(__le32 *)&mac_addr[0] = cpu_to_le32(attr[1]);
+ *(__le16 *)&mac_addr[4] = cpu_to_le16((u16)(attr[2]));
+ *vlan = (u16)(attr[2] >> 16);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_bool - Add header indicating value "true"
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ *
+ * This function will simply add an attribute header; the presence of the
+ * header means the attribute value is true, otherwise it is false. The
+ * function will return success if provided with a valid pointer.
+ **/
+s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id)
+{
+	/* verify pointer is not NULL */
+ if (!msg)
+ return FM10K_ERR_PARAM;
+
+ /* record attribute header */
+ msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id;
+
+ /* add header length to length */
+ *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_value - Store integer value attribute in message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @value: Value to be written
+ * @len: Size of value
+ *
+ * This function will place an integer value of up to 8 bytes in size
+ * in a message attribute. The function will return success provided
+ * that msg is a valid pointer, and len is 1, 2, 4, or 8.
+ **/
+s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len)
+{
+ u32 *attr;
+
+ /* verify non-null msg and len is 1, 2, 4, or 8 */
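+	/* ((len & (len - 1)) is zero only when len is a power of two) */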
+ if (!msg || !len || len > 8 || (len & (len - 1)))
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ if (len < 4) {
+ attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1);
+ } else {
+ attr[1] = (u32)value;
+ if (len > 4)
+ attr[2] = (u32)(value >> 32);
+ }
+
+ /* record attribute header, update message length */
+ len <<= FM10K_TLV_LEN_SHIFT;
+ attr[0] = len | attr_id;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_value - Get integer value stored in attribute
+ * @attr: Pointer to attribute
+ * @value: Pointer to destination buffer
+ * @len: Size of value
+ *
+ * This function will place an integer value of up to 8 bytes in size
+ * in the buffer pointed to by value. The function will return success
+ * provided that pointers are valid and the len value matches the
+ * attribute length.
+ **/
+s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len)
+{
+ /* verify pointers are not NULL */
+ if (!attr || !value)
+ return FM10K_ERR_PARAM;
+
+ if ((*attr >> FM10K_TLV_LEN_SHIFT) != len)
+ return FM10K_ERR_PARAM;
+
+ if (len == 8)
+ *(u64 *)value = ((u64)attr[2] << 32) | attr[1];
+ else if (len == 4)
+ *(u32 *)value = attr[1];
+ else if (len == 2)
+ *(u16 *)value = (u16)attr[1];
+ else
+ *(u8 *)value = (u8)attr[1];
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_le_struct - Store little endian structure in message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @le_struct: Pointer to structure to be written
+ * @len: Size of le_struct
+ *
+ * This function will place a little endian structure value in a message
+ * attribute. The function will return success provided that all pointers
+ * are valid and length is a non-zero multiple of 4.
+ **/
+s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id,
+ const void *le_struct, u32 len)
+{
+ const __le32 *le32_ptr = (const __le32 *)le_struct;
+ u32 *attr;
+ u32 i;
+
+ /* verify non-null msg and len is in 32 bit words */
+ if (!msg || !len || (len % 4))
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ /* copy le32 structure into host byte order at 32b boundaries */
+ for (i = 0; i < (len / 4); i++)
+ attr[i + 1] = le32_to_cpu(le32_ptr[i]);
+
+ /* record attribute header, update message length */
+ len <<= FM10K_TLV_LEN_SHIFT;
+ attr[0] = len | attr_id;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_le_struct - Get little endian struct from attribute
+ * @attr: Pointer to attribute
+ * @le_struct: Pointer to structure to be written
+ * @len: Size of structure
+ *
+ * This function will place a little endian structure in the buffer
+ * pointed to by le_struct. The function will return success
+ * provided that pointers are valid and the len value matches the
+ * attribute length.
+ **/
+s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len)
+{
+ __le32 *le32_ptr = (__le32 *)le_struct;
+ u32 i;
+
+ /* verify pointers are not NULL */
+ if (!le_struct || !attr)
+ return FM10K_ERR_PARAM;
+
+ if ((*attr >> FM10K_TLV_LEN_SHIFT) != len)
+ return FM10K_ERR_PARAM;
+
+ attr++;
+
+ for (i = 0; len; i++, len -= 4)
+ le32_ptr[i] = cpu_to_le32(attr[i]);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_nest_start - Start a set of nested attributes
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ *
+ * This function will mark off a new nested region for encapsulating
+ * a given set of attributes. The idea is that if you wish to place a
+ * secondary structure within the message, this mechanism allows for that. The
+ * function will return NULL on failure, and a pointer to the start
+ * of the nested attributes on success.
+ **/
+u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id)
+{
+ u32 *attr;
+
+ /* verify pointer is not NULL */
+ if (!msg)
+ return NULL;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ attr[0] = attr_id;
+
+ /* return pointer to nest header */
+ return attr;
+}
+
+/**
+ * fm10k_tlv_attr_nest_stop - Stop a set of nested attributes
+ * @msg: Pointer to message block
+ *
+ * This function closes off an existing set of nested attributes. The
+ * message pointer should be pointing to the parent of the nest. So in
+ * the case of a nest within a nest this would be the outer nest pointer.
+ * This function will return success provided all pointers are valid.
+ **/
+s32 fm10k_tlv_attr_nest_stop(u32 *msg)
+{
+ u32 *attr;
+ u32 len;
+
+ /* verify pointer is not NULL */
+ if (!msg)
+ return FM10K_ERR_PARAM;
+
+ /* locate the nested header and retrieve its length */
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+ len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT;
+
+ /* only include nest if data was added to it */
+ if (len) {
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += len;
+ }
+
+ return 0;
+}
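
The intended calling pattern mirrors fm10k_tlv_msg_test_create() further
down: attributes belonging to the nest must be written through the pointer
returned by nest_start, because the parent's length is only advanced when
the nest is closed. Schematically (the SOME_* identifiers are placeholders):

	u32 *nest;

	fm10k_tlv_msg_init(msg, SOME_MSG_ID);
	nest = fm10k_tlv_attr_nest_start(msg, SOME_NESTED_ID);
	/* puts land inside the nest, growing the nest header's length */
	fm10k_tlv_attr_put_u32(nest, SOME_ATTR_ID, value);
	/* folds the nest length, plus its header, back into msg */
	fm10k_tlv_attr_nest_stop(msg);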
+
+/**
+ * fm10k_tlv_attr_validate - Validate attribute metadata
+ * @attr: Pointer to attribute
+ * @tlv_attr: Type and length info for attribute
+ *
+ * This function does some basic validation of the input TLV. It
+ * verifies the length, and in the case of null terminated strings
+ * it verifies that the last byte is null. The function will
+ * return FM10K_ERR_PARAM if any attribute is malformed, otherwise
+ * it returns 0.
+ **/
+static s32 fm10k_tlv_attr_validate(u32 *attr,
+ const struct fm10k_tlv_attr *tlv_attr)
+{
+ u32 attr_id = *attr & FM10K_TLV_ID_MASK;
+ u16 len = *attr >> FM10K_TLV_LEN_SHIFT;
+
+ /* verify this is an attribute and not a message */
+ if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))
+ return FM10K_ERR_PARAM;
+
+ /* search through the list of attributes to find a matching ID */
+ while (tlv_attr->id < attr_id)
+ tlv_attr++;
+
+ /* if we didn't find a match then we should exit */
+ if (tlv_attr->id != attr_id)
+ return FM10K_NOT_IMPLEMENTED;
+
+ /* move to start of attribute data */
+ attr++;
+
+ switch (tlv_attr->type) {
+ case FM10K_TLV_NULL_STRING:
+ if (!len ||
+ (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4)))))
+ return FM10K_ERR_PARAM;
+ if (len > tlv_attr->len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_MAC_ADDR:
+ if (len != ETH_ALEN)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_BOOL:
+ if (len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_UNSIGNED:
+ case FM10K_TLV_SIGNED:
+ if (len != tlv_attr->len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_LE_STRUCT:
+ /* struct must be 4 byte aligned */
+ if ((len % 4) || len != tlv_attr->len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_NESTED:
+ /* nested attributes must be 4 byte aligned */
+ if (len % 4)
+ return FM10K_ERR_PARAM;
+ break;
+ default:
+ /* attribute id is mapped to bad value */
+ return FM10K_ERR_PARAM;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_parse - Parses stream of attribute data
+ * @attr: Pointer to attribute list
+ * @results: Pointer array to store pointers to attributes
+ * @tlv_attr: Type and length info for attributes
+ *
+ * This function validates a stream of attributes and parses them
+ * up into an array of pointers stored in results. Attributes that are
+ * unrecognized or outside of the results array are silently skipped.
+ * The function will return FM10K_ERR_PARAM on any input or message
+ * error, and 0 on success.
+ **/
+s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results,
+ const struct fm10k_tlv_attr *tlv_attr)
+{
+ u32 i, attr_id, offset = 0;
+ s32 err = 0;
+ u16 len;
+
+ /* verify pointers are not NULL */
+ if (!attr || !results)
+ return FM10K_ERR_PARAM;
+
+ /* initialize results to NULL */
+ for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++)
+ results[i] = NULL;
+
+ /* pull length from the message header */
+ len = *attr >> FM10K_TLV_LEN_SHIFT;
+
+ /* no attributes to parse if there is no length */
+ if (!len)
+ return 0;
+
+ /* no attributes to parse, just raw data, message becomes attribute */
+ if (!tlv_attr) {
+ results[0] = attr;
+ return 0;
+ }
+
+ /* move to start of attribute data */
+ attr++;
+
+ /* run through list parsing all attributes */
+ while (offset < len) {
+ attr_id = *attr & FM10K_TLV_ID_MASK;
+
+ if (attr_id < FM10K_TLV_RESULTS_MAX)
+ err = fm10k_tlv_attr_validate(attr, tlv_attr);
+ else
+ err = FM10K_NOT_IMPLEMENTED;
+
+ if (err < 0)
+ return err;
+ if (!err)
+ results[attr_id] = attr;
+
+ /* update offset */
+ offset += FM10K_TLV_DWORD_LEN(*attr) * 4;
+
+ /* move to next attribute */
+ attr = &attr[FM10K_TLV_DWORD_LEN(*attr)];
+ }
+
+ /* we should find ourselves at the end of the list */
+ if (offset != len)
+ return FM10K_ERR_PARAM;
+
+ return 0;
+}
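
Consumers follow a common pattern: declare a results array of
FM10K_TLV_RESULTS_MAX slots, parse once, then test individual slots for
presence, since a NULL slot simply means the attribute was absent
(fm10k_tlv_msg_test() below does exactly this for its nested attributes).
Schematically, with MY_* and my_msg_attr as placeholder identifiers:

	u32 *results[FM10K_TLV_RESULTS_MAX];
	u32 val;
	s32 err;

	err = fm10k_tlv_attr_parse(attr, results, my_msg_attr);
	if (err)
		return err;

	if (results[MY_ATTR_ID])	/* attribute present and validated */
		err = fm10k_tlv_attr_get_u32(results[MY_ATTR_ID], &val);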
+
+/**
+ * fm10k_tlv_msg_parse - Parses message header and calls function handler
+ * @hw: Pointer to hardware structure
+ * @msg: Pointer to message
+ * @mbx: Pointer to mailbox information structure
+ * @data: Pointer to array of message handling function data
+ *
+ * This function should be the first function called upon receiving a
+ * message. The handler will identify the message type and call the correct
+ * handler for the given message. It will return the value from the handler
+ * on a recognized message type; an unrecognized type falls back to the
+ * table's FM10K_TLV_ERROR entry, which by default returns
+ * FM10K_NOT_IMPLEMENTED.
+ **/
+s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg,
+ struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *data)
+{
+ u32 *results[FM10K_TLV_RESULTS_MAX];
+ u32 msg_id;
+ s32 err;
+
+ /* verify pointer is not NULL */
+ if (!msg || !data)
+ return FM10K_ERR_PARAM;
+
+ /* verify this is a message and not an attribute */
+ if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)))
+ return FM10K_ERR_PARAM;
+
+ /* grab message ID */
+ msg_id = *msg & FM10K_TLV_ID_MASK;
+
+ while (data->id < msg_id)
+ data++;
+
+ /* if we didn't find it then pass it up as an error */
+ if (data->id != msg_id) {
+ while (data->id != FM10K_TLV_ERROR)
+ data++;
+ }
+
+ /* parse the attributes into the results list */
+ err = fm10k_tlv_attr_parse(msg, results, data->attr);
+ if (err < 0)
+ return err;
+
+ return data->func(hw, results, mbx);
+}
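
The table handed to fm10k_tlv_msg_parse() must be sorted by ascending
message ID and terminated with an FM10K_TLV_ERROR entry, since both lookup
loops above walk forward until they hit a matching or terminating ID. A
sketch built from the helper macros defined in fm10k_tlv.h below:

	static const struct fm10k_msg_data test_msg_data[] = {
		FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
		FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
	};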
+
+/**
+ * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to attributes; results[0] points to the message
+ * @mbx: Unused mailbox pointer
+ *
+ * This function is a default handler for unrecognized messages. At a
+ * minimum it indicates that the requested message was unimplemented.
+ **/
+s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ return FM10K_NOT_IMPLEMENTED;
+}
+
+static const unsigned char test_str[] = "fm10k";
+static const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56,
+ 0x78, 0x9a, 0xbc };
+static const u16 test_vlan = 0x0FED;
+static const u64 test_u64 = 0xfedcba9876543210ull;
+static const u32 test_u32 = 0x87654321;
+static const u16 test_u16 = 0x8765;
+static const u8 test_u8 = 0x87;
+static const s64 test_s64 = -0x123456789abcdef0ll;
+static const s32 test_s32 = -0x1235678;
+static const s16 test_s16 = -0x1234;
+static const s8 test_s8 = -0x12;
+static const __le32 test_le[2] = { cpu_to_le32(0x12345678),
+ cpu_to_le32(0x9abcdef0)};
+
+/* The message below is meant to be used as a test message to demonstrate
+ * how to use the TLV interface and to test the types. Normally this code
+ * would be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG.
+ */
+const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = {
+ FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR),
+ FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8),
+ FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16),
+ FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32),
+ FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64),
+ FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8),
+ FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16),
+ FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32),
+ FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64),
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8),
+ FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED),
+ FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_tlv_msg_test_generate_data - Stuff message with data
+ * @msg: Pointer to message
+ * @attr_flags: List of flags indicating what attributes to add
+ *
+ * This function is meant to load a message buffer with attribute data
+ **/
+static void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags)
+{
+ if (attr_flags & (1 << FM10K_TEST_MSG_STRING))
+ fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING,
+ test_str);
+ if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR))
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR,
+ test_mac, test_vlan);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U8))
+ fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U16))
+ fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U32))
+ fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U64))
+ fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S8))
+ fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S16))
+ fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S32))
+ fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S64))
+ fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64);
+ if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT))
+ fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT,
+ test_le, 8);
+}
+
+/**
+ * fm10k_tlv_msg_test_create - Create a test message testing all attributes
+ * @msg: Pointer to message
+ * @attr_flags: List of flags indicating what attributes to add
+ *
+ * This function is meant to load a message buffer with all attribute types
+ * including a nested attribute.
+ **/
+void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags)
+{
+ u32 *nest = NULL;
+
+ fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);
+
+ fm10k_tlv_msg_test_generate_data(msg, attr_flags);
+
+ /* check for nested attributes */
+ attr_flags >>= FM10K_TEST_MSG_NESTED;
+
+ if (attr_flags) {
+ nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED);
+
+ fm10k_tlv_msg_test_generate_data(nest, attr_flags);
+
+ fm10k_tlv_attr_nest_stop(msg);
+ }
+}
+
+/**
+ * fm10k_tlv_msg_test - Validate all results on test message receive
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to attributes in the message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function does a check to verify all attributes match what the test
+ * message placed in the message buffer. It is the default handler
+ * for TLV test messages.
+ **/
+s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u32 *nest_results[FM10K_TLV_RESULTS_MAX];
+ unsigned char result_str[80];
+ unsigned char result_mac[ETH_ALEN];
+ s32 err = 0;
+ __le32 result_le[2];
+ u16 result_vlan;
+ u64 result_u64;
+ u32 result_u32;
+ u16 result_u16;
+ u8 result_u8;
+ s64 result_s64;
+ s32 result_s32;
+ s16 result_s16;
+ s8 result_s8;
+ u32 reply[3];
+
+ /* retrieve results of a previous test */
+ if (!!results[FM10K_TEST_MSG_RESULT])
+ return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT],
+ &mbx->test_result);
+
+parse_nested:
+ if (!!results[FM10K_TEST_MSG_STRING]) {
+ err = fm10k_tlv_attr_get_null_string(
+ results[FM10K_TEST_MSG_STRING],
+ result_str);
+ if (!err && memcmp(test_str, result_str, sizeof(test_str)))
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_MAC_ADDR]) {
+ err = fm10k_tlv_attr_get_mac_vlan(
+ results[FM10K_TEST_MSG_MAC_ADDR],
+ result_mac, &result_vlan);
+ if (!err && memcmp(test_mac, result_mac, ETH_ALEN))
+ err = FM10K_ERR_INVALID_VALUE;
+ if (!err && test_vlan != result_vlan)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U8]) {
+ err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8],
+ &result_u8);
+ if (!err && test_u8 != result_u8)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U16]) {
+ err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16],
+ &result_u16);
+ if (!err && test_u16 != result_u16)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U32]) {
+ err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32],
+ &result_u32);
+ if (!err && test_u32 != result_u32)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U64]) {
+ err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64],
+ &result_u64);
+ if (!err && test_u64 != result_u64)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S8]) {
+ err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8],
+ &result_s8);
+ if (!err && test_s8 != result_s8)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S16]) {
+ err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16],
+ &result_s16);
+ if (!err && test_s16 != result_s16)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S32]) {
+ err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32],
+ &result_s32);
+ if (!err && test_s32 != result_s32)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S64]) {
+ err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64],
+ &result_s64);
+ if (!err && test_s64 != result_s64)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_LE_STRUCT]) {
+ err = fm10k_tlv_attr_get_le_struct(
+ results[FM10K_TEST_MSG_LE_STRUCT],
+ result_le,
+ sizeof(result_le));
+ if (!err && memcmp(test_le, result_le, sizeof(test_le)))
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+
+ if (!!results[FM10K_TEST_MSG_NESTED]) {
+ /* clear any pointers */
+ memset(nest_results, 0, sizeof(nest_results));
+
+ /* parse the nested attributes into the nest results list */
+ err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED],
+ nest_results,
+ fm10k_tlv_msg_test_attr);
+ if (err)
+ goto report_result;
+
+ /* loop back through to the start */
+ results = nest_results;
+ goto parse_nested;
+ }
+
+report_result:
+ /* generate reply with test result */
+ fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST);
+ fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, reply);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
new file mode 100644
index 0000000..7e045e8
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
@@ -0,0 +1,186 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_TLV_H_
+#define _FM10K_TLV_H_
+
+/* forward declaration */
+struct fm10k_msg_data;
+
+#include "fm10k_type.h"
+
+/* Message / Argument header format
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Length | Flags | Type / ID |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * The message header format described here is used for messages that are
+ * passed between the PF and the VF. To allow for messages larger than
+ * the mailbox size, we will provide a message with the above header; it
+ * will be segmented and transported through the mailbox to the other side,
+ * where it is reassembled. It contains the following fields:
+ * Len: Length of the message in bytes excluding the message header
+ * Flags: TBD
+ * Type/ID: These will be the message/argument types we pass
+ */
+/* message data header */
+#define FM10K_TLV_ID_SHIFT 0
+#define FM10K_TLV_ID_SIZE 16
+#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1)
+#define FM10K_TLV_FLAGS_SHIFT 16
+#define FM10K_TLV_FLAGS_MSG 0x1
+#define FM10K_TLV_FLAGS_SIZE 4
+#define FM10K_TLV_LEN_SHIFT 20
+#define FM10K_TLV_LEN_SIZE 12
+
+#define FM10K_TLV_HDR_LEN 4ul
+#define FM10K_TLV_LEN_ALIGN_MASK \
+ ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT)
+#define FM10K_TLV_LEN_ALIGN(tlv) \
+ (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK)
+#define FM10K_TLV_DWORD_LEN(tlv) \
+ ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1)
+
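
A worked example of the length macros, which operate on whole header dwords
because the byte length lives in bits 31:20 of the header: an attribute
carrying 6 bytes of data aligns up to 8 bytes (2 dwords), plus 1 dword of
header, so FM10K_TLV_DWORD_LEN() yields 3. A standalone check (constants
copied from above, the program itself is illustrative only):

	#include <stdint.h>
	#include <stdio.h>

	#define FM10K_TLV_LEN_SHIFT	20
	#define FM10K_TLV_HDR_LEN	4ul
	#define FM10K_TLV_LEN_ALIGN_MASK \
		((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT)
	#define FM10K_TLV_LEN_ALIGN(tlv) \
		(((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK)
	#define FM10K_TLV_DWORD_LEN(tlv) \
		((uint16_t)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1)

	int main(void)
	{
		uint32_t hdr = 6ul << FM10K_TLV_LEN_SHIFT; /* 6 bytes of data */

		printf("dwords = %u\n",
		       (unsigned)FM10K_TLV_DWORD_LEN(hdr)); /* prints 3 */
		return 0;
	}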
+#define FM10K_TLV_RESULTS_MAX 32
+
+enum fm10k_tlv_type {
+ FM10K_TLV_NULL_STRING,
+ FM10K_TLV_MAC_ADDR,
+ FM10K_TLV_BOOL,
+ FM10K_TLV_UNSIGNED,
+ FM10K_TLV_SIGNED,
+ FM10K_TLV_LE_STRUCT,
+ FM10K_TLV_NESTED,
+ FM10K_TLV_MAX_TYPE
+};
+
+#define FM10K_TLV_ERROR (~0u)
+
+struct fm10k_tlv_attr {
+ unsigned int id;
+ enum fm10k_tlv_type type;
+ u16 len;
+};
+
+#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len }
+#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 }
+#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 }
+#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 }
+#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 }
+#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 }
+#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 }
+#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 }
+#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 }
+#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 }
+#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 }
+#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len }
+#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED }
+#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR }
+
+struct fm10k_msg_data {
+ unsigned int id;
+ const struct fm10k_tlv_attr *attr;
+ s32 (*func)(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+};
+
+#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func }
+
+s32 fm10k_tlv_msg_init(u32 *, u16);
+s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *);
+s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *);
+s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16);
+s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *);
+s32 fm10k_tlv_attr_put_bool(u32 *, u16);
+s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32);
+#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 1)
+#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 2)
+#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 4)
+#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 8)
+#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 1)
+#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 2)
+#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 4)
+#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 8)
+s32 fm10k_tlv_attr_get_value(u32 *, void *, u32);
+#define fm10k_tlv_attr_get_u8(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8))
+#define fm10k_tlv_attr_get_u16(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16))
+#define fm10k_tlv_attr_get_u32(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32))
+#define fm10k_tlv_attr_get_u64(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64))
+#define fm10k_tlv_attr_get_s8(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8))
+#define fm10k_tlv_attr_get_s16(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16))
+#define fm10k_tlv_attr_get_s32(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32))
+#define fm10k_tlv_attr_get_s64(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64))
+s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32);
+s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32);
+u32 *fm10k_tlv_attr_nest_start(u32 *, u16);
+s32 fm10k_tlv_attr_nest_stop(u32 *);
+s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *);
+s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *);
+s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *);
+
+#define FM10K_TLV_MSG_ID_TEST 0
+
+enum fm10k_tlv_test_attr_id {
+ FM10K_TEST_MSG_UNSET,
+ FM10K_TEST_MSG_STRING,
+ FM10K_TEST_MSG_MAC_ADDR,
+ FM10K_TEST_MSG_U8,
+ FM10K_TEST_MSG_U16,
+ FM10K_TEST_MSG_U32,
+ FM10K_TEST_MSG_U64,
+ FM10K_TEST_MSG_S8,
+ FM10K_TEST_MSG_S16,
+ FM10K_TEST_MSG_S32,
+ FM10K_TEST_MSG_S64,
+ FM10K_TEST_MSG_LE_STRUCT,
+ FM10K_TEST_MSG_NESTED,
+ FM10K_TEST_MSG_RESULT,
+ FM10K_TEST_MSG_MAX
+};
+
+extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[];
+void fm10k_tlv_msg_test_create(u32 *, u32);
+s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+
+#define FM10K_TLV_MSG_TEST_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func)
+#define FM10K_TLV_MSG_ERROR_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func)
+#endif /* _FM10K_TLV_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_type.h b/drivers/net/ethernet/intel/fm10k/fm10k_type.h
new file mode 100644
index 0000000..280296f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_type.h
@@ -0,0 +1,770 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_TYPE_H_
+#define _FM10K_TYPE_H_
+
+/* forward declaration */
+struct fm10k_hw;
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <linux/etherdevice.h>
+
+#include "fm10k_mbx.h"
+
+#define FM10K_DEV_ID_PF 0x15A4
+#define FM10K_DEV_ID_VF 0x15A5
+
+#define FM10K_MAX_QUEUES 256
+#define FM10K_MAX_QUEUES_PF 128
+#define FM10K_MAX_QUEUES_POOL 16
+
+#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull
+#define FM10K_STAT_VALID 0x80000000
+
+/* PCI Bus Info */
+#define FM10K_PCIE_LINK_CAP 0x7C
+#define FM10K_PCIE_LINK_STATUS 0x82
+#define FM10K_PCIE_LINK_WIDTH 0x3F0
+#define FM10K_PCIE_LINK_WIDTH_1 0x10
+#define FM10K_PCIE_LINK_WIDTH_2 0x20
+#define FM10K_PCIE_LINK_WIDTH_4 0x40
+#define FM10K_PCIE_LINK_WIDTH_8 0x80
+#define FM10K_PCIE_LINK_SPEED 0xF
+#define FM10K_PCIE_LINK_SPEED_2500 0x1
+#define FM10K_PCIE_LINK_SPEED_5000 0x2
+#define FM10K_PCIE_LINK_SPEED_8000 0x3
+
+/* PCIe payload size */
+#define FM10K_PCIE_DEV_CAP 0x74
+#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07
+#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00
+#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01
+#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02
+#define FM10K_PCIE_DEV_CTRL 0x78
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40
+
+/* PCIe MSI-X Capability info */
+#define FM10K_PCI_MSIX_MSG_CTRL 0xB2
+#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF
+#define FM10K_MAX_MSIX_VECTORS 256
+#define FM10K_MAX_VECTORS_PF 256
+#define FM10K_MAX_VECTORS_POOL 32
+
+/* PCIe SR-IOV Info */
+#define FM10K_PCIE_SRIOV_CTRL 0x190
+#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10
+
+#define FM10K_ERR_PARAM -2
+#define FM10K_ERR_REQUESTS_PENDING -4
+#define FM10K_ERR_RESET_REQUESTED -5
+#define FM10K_ERR_DMA_PENDING -6
+#define FM10K_ERR_RESET_FAILED -7
+#define FM10K_ERR_INVALID_MAC_ADDR -8
+#define FM10K_ERR_INVALID_VALUE -9
+#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF
+
+/* Start of PF registers */
+#define FM10K_CTRL 0x0000
+#define FM10K_CTRL_BAR4_ALLOWED 0x00000004
+
+#define FM10K_CTRL_EXT 0x0001
+#define FM10K_GCR 0x0003
+#define FM10K_GCR_EXT 0x0005
+
+/* Interrupt control registers */
+#define FM10K_EICR 0x0006
+#define FM10K_EICR_FAULT_MASK 0x0000003F
+#define FM10K_EICR_MAILBOX 0x00000040
+#define FM10K_EICR_SWITCHREADY 0x00000080
+#define FM10K_EICR_SWITCHNOTREADY 0x00000100
+#define FM10K_EICR_SWITCHINTERRUPT 0x00000200
+#define FM10K_EICR_VFLR 0x00000800
+#define FM10K_EICR_MAXHOLDTIME 0x00001000
+#define FM10K_EIMR 0x0007
+#define FM10K_EIMR_PCA_FAULT 0x00000001
+#define FM10K_EIMR_THI_FAULT 0x00000010
+#define FM10K_EIMR_FUM_FAULT 0x00000400
+#define FM10K_EIMR_MAILBOX 0x00001000
+#define FM10K_EIMR_SWITCHREADY 0x00004000
+#define FM10K_EIMR_SWITCHNOTREADY 0x00010000
+#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000
+#define FM10K_EIMR_SRAMERROR 0x00100000
+#define FM10K_EIMR_VFLR 0x00400000
+#define FM10K_EIMR_MAXHOLDTIME 0x01000000
+#define FM10K_EIMR_ALL 0x55555555
+#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0)
+#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1)
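
Judging from the two macros above, each interrupt source appears to occupy a
two-bit field in EIMR: the low bit of the pair masks (disables) the source
and the high bit unmasks (enables) it, which would also explain why
FM10K_EIMR_ALL (0x55555555) is every low bit set at once. For example,
presumably:

	FM10K_EIMR_DISABLE(MAILBOX)	/* 0x00001000 */
	FM10K_EIMR_ENABLE(MAILBOX)	/* 0x00002000 */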
+#define FM10K_FAULT_ADDR_LO 0x0
+#define FM10K_FAULT_ADDR_HI 0x1
+#define FM10K_FAULT_SPECINFO 0x2
+#define FM10K_FAULT_FUNC 0x3
+#define FM10K_FAULT_SIZE 0x4
+#define FM10K_FAULT_FUNC_VALID 0x00008000
+#define FM10K_FAULT_FUNC_PF 0x00004000
+#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00
+#define FM10K_FAULT_FUNC_VF_SHIFT 8
+#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF
+
+#define FM10K_PCA_FAULT 0x0008
+#define FM10K_THI_FAULT 0x0010
+#define FM10K_FUM_FAULT 0x001C
+
+/* Rx queue timeout indicator */
+#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020)
+
+/* Switch Manager info */
+#define FM10K_SM_AREA(_n) ((_n) + 0x0028)
+
+/* GLORT mapping registers */
+#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030)
+#define FM10K_DGLORT_COUNT 8
+#define FM10K_DGLORTMAP_MASK_SHIFT 16
+#define FM10K_DGLORTMAP_ANY 0x00000000
+#define FM10K_DGLORTMAP_NONE 0x0000FFFF
+#define FM10K_DGLORTMAP_ZERO 0xFFFF0000
+#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038)
+#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4
+#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7
+#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14
+#define FM10K_DGLORTDEC_QBASE_SHIFT 16
+#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24
+#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000
+#define FM10K_TUNNEL_CFG 0x0040
+#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16
+#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050)
+#define FM10K_SWPRI_MAX 16
+#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800)
+#define FM10K_RSSRK_SIZE 10
+#define FM10K_RSSRK_ENTRIES_PER_REG 4
+#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000)
+#define FM10K_RETA_SIZE 32
+#define FM10K_RETA_ENTRIES_PER_REG 4
+#define FM10K_MAX_RSS_INDICES 128
+
+/* Rate limiting registers */
+#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000)
+#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF
+#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040)
+#define FM10K_TC_MAXCREDIT_64K 0x00010000
+#define FM10K_TC_RATE(_n) ((_n) + 0x2080)
+#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF
+#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000
+#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000
+#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000
+
+/* DMA control registers */
+#define FM10K_DMA_CTRL 0x20C3
+#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001
+#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008
+#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010
+#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080
+#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100
+#define FM10K_DMA_CTRL_MINMSS_64 0x00008000
+#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000
+#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000
+#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000
+#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000
+#define FM10K_DMA_CTRL_32_DESC 0x00000000
+
+#define FM10K_DMA_CTRL2 0x20C4
+#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000
+
+/* TSO flags configuration
+ * First packet contains all flags except for fin and psh
+ * Middle packet contains only urg and ack
+ * Last packet contains urg, ack, fin, and psh
+ */
+#define FM10K_TSO_FLAGS_LOW 0x00300FF6
+#define FM10K_TSO_FLAGS_HI 0x00000039
+#define FM10K_DTXTCPFLGL 0x20C5
+#define FM10K_DTXTCPFLGH 0x20C6
+
+#define FM10K_TPH_CTRL 0x20C7
+#define FM10K_MRQC(_n) ((_n) + 0x2100)
+#define FM10K_MRQC_TCP_IPV4 0x00000001
+#define FM10K_MRQC_IPV4 0x00000002
+#define FM10K_MRQC_IPV6 0x00000010
+#define FM10K_MRQC_TCP_IPV6 0x00000020
+#define FM10K_MRQC_UDP_IPV4 0x00000040
+#define FM10K_MRQC_UDP_IPV6 0x00000080
+
+#define FM10K_TQMAP(_n) ((_n) + 0x2800)
+#define FM10K_TQMAP_TABLE_SIZE 2048
+#define FM10K_RQMAP(_n) ((_n) + 0x3000)
+
+/* Hardware Statistics */
+#define FM10K_STATS_TIMEOUT 0x3800
+#define FM10K_STATS_UR 0x3801
+#define FM10K_STATS_CA 0x3802
+#define FM10K_STATS_UM 0x3803
+#define FM10K_STATS_XEC 0x3804
+#define FM10K_STATS_VLAN_DROP 0x3805
+#define FM10K_STATS_LOOPBACK_DROP 0x3806
+#define FM10K_STATS_NODESC_DROP 0x3807
+
+/* Timesync registers */
+#define FM10K_SYSTIME 0x3814
+#define FM10K_SYSTIME_CFG 0x3818
+#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F
+
+/* PCIe state registers */
+#define FM10K_PHYADDR 0x381C
+
+/* Rx ring registers */
+#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000)
+#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001)
+#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002)
+#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003)
+#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020
+#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200
+#define FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000
+#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000
+#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004)
+#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005)
+#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006)
+#define FM10K_RXQCTL_ENABLE 0x00000001
+#define FM10K_RXQCTL_PF 0x000000FC
+#define FM10K_RXQCTL_VF_SHIFT 2
+#define FM10K_RXQCTL_VF 0x00000100
+#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF)
+#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007)
+#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001
+#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200
+#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008)
+#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009)
+#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */
+#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000
+#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000
+
+/* Rx Statistics */
+#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A)
+#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B)
+#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C)
+#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D)
+
+/* Rx GLORT register */
+#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E)
+
+/* Tx ring registers */
+#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000)
+#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001)
+#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002)
+#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003)
+#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020
+#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200
+#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800
+#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000
+#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004)
+#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005)
+#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006)
+#define FM10K_TXDCTL_ENABLE 0x00004000
+#define FM10K_TXDCTL_MAX_TIME_SHIFT 16
+#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007)
+#define FM10K_TXQCTL_PF 0x0000003F
+#define FM10K_TXQCTL_VF 0x00000040
+#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF)
+#define FM10K_TXQCTL_PC_SHIFT 7
+#define FM10K_TXQCTL_PC_MASK 0x00000380
+#define FM10K_TXQCTL_TC_SHIFT 10
+#define FM10K_TXQCTL_VID_SHIFT 16
+#define FM10K_TXQCTL_VID_MASK 0x0FFF0000
+#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000
+#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008)
+
+/* Tx Statistics */
+#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009)
+#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A)
+#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B)
+
+/* Tx Push registers */
+#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 0x800C)
+#define FM10K_TQDLOC_BASE_32_DESC 0x08
+#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000
+
+/* Tx GLORT registers */
+#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D)
+#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E)
+#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001
+
+/* Interrupt moderation and control registers */
+#define FM10K_INT_MAP(_n) ((_n) + 0x10080)
+#define FM10K_INT_MAP_TIMER0 0x00000000
+#define FM10K_INT_MAP_TIMER1 0x00000100
+#define FM10K_INT_MAP_IMMEDIATE 0x00000200
+#define FM10K_INT_MAP_DISABLE 0x00000300
+#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003)
+#define FM10K_INT_CTRL 0x12000
+#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400
+#define FM10K_ITR(_n) ((_n) + 0x12400)
+#define FM10K_ITR_INTERVAL1_SHIFT 12
+#define FM10K_ITR_PENDING2 0x10000000
+#define FM10K_ITR_AUTOMASK 0x20000000
+#define FM10K_ITR_MASK_SET 0x40000000
+#define FM10K_ITR_MASK_CLEAR 0x80000000
+#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800)
+#define FM10K_ITR_REG_COUNT 768
+#define FM10K_ITR_REG_COUNT_PF 256
+
+/* Switch manager interrupt registers */
+#define FM10K_IP 0x13000
+#define FM10K_IP_NOTINRESET 0x00000100
+
+/* VLAN registers */
+#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000)
+#define FM10K_VLAN_TABLE_SIZE 128
+
+/* VLAN specific message offsets */
+#define FM10K_VLAN_TABLE_VID_MAX 4096
+#define FM10K_VLAN_TABLE_VSI_MAX 64
+#define FM10K_VLAN_LENGTH_SHIFT 16
+#define FM10K_VLAN_CLEAR (1 << 15)
+#define FM10K_VLAN_ALL \
+ ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT)
+
+/* VF FLR event notification registers */
+#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844)
+#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846)
+
+/* Defines for size of uncacheable memories */
+#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */
+#define FM10K_UC_ADDR_END 0x100000 /* end of standard regs */
+#define FM10K_UC_ADDR_SIZE (FM10K_UC_ADDR_END - FM10K_UC_ADDR_START)
+
+/* Define timeouts for resets and disables */
+#define FM10K_QUEUE_DISABLE_TIMEOUT 100
+#define FM10K_RESET_TIMEOUT 100
+
+/* VF registers */
+#define FM10K_VFCTRL 0x00000
+#define FM10K_VFCTRL_RST 0x00000008
+#define FM10K_VFINT_MAP 0x00030
+#define FM10K_VFSYSTIME 0x00040
+#define FM10K_VFITR(_n) ((_n) + 0x00060)
+
+/* Registers contained in BAR 4 for Switch management */
+#define FM10K_SW_SYSTIME_ADJUST 0x0224D
+#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF
+#define FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE 0x80000000
+#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252)
+
+enum fm10k_int_source {
+ fm10k_int_Mailbox = 0,
+ fm10k_int_PCIeFault = 1,
+ fm10k_int_SwitchUpDown = 2,
+ fm10k_int_SwitchEvent = 3,
+ fm10k_int_SRAM = 4,
+ fm10k_int_VFLR = 5,
+ fm10k_int_MaxHoldTime = 6,
+ fm10k_int_sources_max_pf
+};
+
+/* PCIe bus speeds */
+enum fm10k_bus_speed {
+ fm10k_bus_speed_unknown = 0,
+ fm10k_bus_speed_2500 = 2500,
+ fm10k_bus_speed_5000 = 5000,
+ fm10k_bus_speed_8000 = 8000,
+ fm10k_bus_speed_reserved
+};
+
+/* PCIe bus widths */
+enum fm10k_bus_width {
+ fm10k_bus_width_unknown = 0,
+ fm10k_bus_width_pcie_x1 = 1,
+ fm10k_bus_width_pcie_x2 = 2,
+ fm10k_bus_width_pcie_x4 = 4,
+ fm10k_bus_width_pcie_x8 = 8,
+ fm10k_bus_width_reserved
+};
+
+/* PCIe payload sizes */
+enum fm10k_bus_payload {
+ fm10k_bus_payload_unknown = 0,
+ fm10k_bus_payload_128 = 1,
+ fm10k_bus_payload_256 = 2,
+ fm10k_bus_payload_512 = 3,
+ fm10k_bus_payload_reserved
+};
+
+/* Bus parameters */
+struct fm10k_bus_info {
+ enum fm10k_bus_speed speed;
+ enum fm10k_bus_width width;
+ enum fm10k_bus_payload payload;
+};
+
+/* Statistics related declarations */
+struct fm10k_hw_stat {
+ u64 count;
+ u32 base_l;
+ u32 base_h;
+};
+
+struct fm10k_hw_stats_q {
+ struct fm10k_hw_stat tx_bytes;
+ struct fm10k_hw_stat tx_packets;
+#define tx_stats_idx tx_packets.base_h
+ struct fm10k_hw_stat rx_bytes;
+ struct fm10k_hw_stat rx_packets;
+#define rx_stats_idx rx_packets.base_h
+ struct fm10k_hw_stat rx_drops;
+};
+
+struct fm10k_hw_stats {
+ struct fm10k_hw_stat timeout;
+#define stats_idx timeout.base_h
+ struct fm10k_hw_stat ur;
+ struct fm10k_hw_stat ca;
+ struct fm10k_hw_stat um;
+ struct fm10k_hw_stat xec;
+ struct fm10k_hw_stat vlan_drop;
+ struct fm10k_hw_stat loopback_drop;
+ struct fm10k_hw_stat nodesc_drop;
+ struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF];
+};
+
+/* Establish DGLORT feature priority */
+enum fm10k_dglortdec_idx {
+ fm10k_dglort_default = 0,
+ fm10k_dglort_vf_rsvd0 = 1,
+ fm10k_dglort_vf_rss = 2,
+ fm10k_dglort_pf_rsvd0 = 3,
+ fm10k_dglort_pf_queue = 4,
+ fm10k_dglort_pf_vsi = 5,
+ fm10k_dglort_pf_rsvd1 = 6,
+ fm10k_dglort_pf_rss = 7
+};
+
+struct fm10k_dglort_cfg {
+ u16 glort; /* GLORT base */
+ u16 queue_b; /* Base value for queue */
+ u8 vsi_b; /* Base value for VSI */
+ u8 idx; /* index of DGLORTDEC entry */
+ u8 rss_l; /* RSS indices */
+ u8 pc_l; /* Priority Class indices */
+ u8 vsi_l; /* Number of bits from GLORT used to determine VSI */
+ u8 queue_l; /* Number of bits from GLORT used to determine queue */
+ u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */
+ u8 inner_rss; /* Boolean value if inner header is used for RSS */
+};
+
+enum fm10k_pca_fault {
+ PCA_NO_FAULT,
+ PCA_UNMAPPED_ADDR,
+ PCA_BAD_QACCESS_PF,
+ PCA_BAD_QACCESS_VF,
+ PCA_MALICIOUS_REQ,
+ PCA_POISONED_TLP,
+ PCA_TLP_ABORT,
+ __PCA_MAX
+};
+
+enum fm10k_thi_fault {
+ THI_NO_FAULT,
+ THI_MAL_DIS_Q_FAULT,
+ __THI_MAX
+};
+
+enum fm10k_fum_fault {
+ FUM_NO_FAULT,
+ FUM_UNMAPPED_ADDR,
+ FUM_POISONED_TLP,
+ FUM_BAD_VF_QACCESS,
+ FUM_ADD_DECODE_ERR,
+ FUM_RO_ERROR,
+ FUM_QPRC_CRC_ERROR,
+ FUM_CSR_TIMEOUT,
+ FUM_INVALID_TYPE,
+ FUM_INVALID_LENGTH,
+ FUM_INVALID_BE,
+ FUM_INVALID_ALIGN,
+ __FUM_MAX
+};
+
+struct fm10k_fault {
+ u64 address; /* Address at the time fault was detected */
+ u32 specinfo; /* Extra info on this fault (fault dependent) */
+ u8 type; /* Fault value dependent on subunit */
+ u8 func; /* Function number of the fault */
+};
+
+struct fm10k_mac_ops {
+ /* basic bring-up and tear-down */
+ s32 (*reset_hw)(struct fm10k_hw *);
+ s32 (*init_hw)(struct fm10k_hw *);
+ s32 (*start_hw)(struct fm10k_hw *);
+ s32 (*stop_hw)(struct fm10k_hw *);
+ s32 (*get_bus_info)(struct fm10k_hw *);
+ s32 (*get_host_state)(struct fm10k_hw *, bool *);
+ bool (*is_slot_appropriate)(struct fm10k_hw *);
+ s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool);
+ s32 (*read_mac_addr)(struct fm10k_hw *);
+ s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *,
+ u16, bool, u8);
+ s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool);
+ s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8);
+ void (*update_int_moderator)(struct fm10k_hw *);
+ s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool);
+ void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *);
+ void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *);
+ s32 (*configure_dglort_map)(struct fm10k_hw *,
+ struct fm10k_dglort_cfg *);
+ void (*set_dma_mask)(struct fm10k_hw *, u64);
+ s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *);
+ void (*request_lport_map)(struct fm10k_hw *);
+ s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb);
+ u64 (*read_systime)(struct fm10k_hw *);
+};
+
+enum fm10k_mac_type {
+ fm10k_mac_unknown = 0,
+ fm10k_mac_pf,
+ fm10k_mac_vf,
+ fm10k_num_macs
+};
+
+struct fm10k_mac_info {
+ struct fm10k_mac_ops ops;
+ enum fm10k_mac_type type;
+ u8 addr[ETH_ALEN];
+ u8 perm_addr[ETH_ALEN];
+ u16 default_vid;
+ u16 max_msix_vectors;
+ u16 max_queues;
+ bool vlan_override;
+ bool get_host_state;
+ bool tx_ready;
+ u32 dglort_map;
+};
+
+struct fm10k_swapi_table_info {
+ u32 used;
+ u32 avail;
+};
+
+struct fm10k_swapi_info {
+ u32 status;
+ struct fm10k_swapi_table_info mac;
+ struct fm10k_swapi_table_info nexthop;
+ struct fm10k_swapi_table_info ffu;
+};
+
+enum fm10k_xcast_modes {
+ FM10K_XCAST_MODE_ALLMULTI = 0,
+ FM10K_XCAST_MODE_MULTI = 1,
+ FM10K_XCAST_MODE_PROMISC = 2,
+ FM10K_XCAST_MODE_NONE = 3,
+ FM10K_XCAST_MODE_DISABLE = 4
+};
+
+#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */
+#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */
+
+struct fm10k_vf_info {
+ /* mbx must be first field in struct unless all default IOV message
+ * handlers are redone, as the assumption is that vf_info starts
+ * at the same offset as the mailbox
+ */
+ struct fm10k_mbx_info mbx; /* PF side of VF mailbox */
+ int rate; /* Tx BW cap as defined by OS */
+ u16 glort; /* resource tag for this VF */
+ u16 sw_vid; /* Switch API assigned VLAN */
+ u16 pf_vid; /* PF assigned Default VLAN */
+ u8 mac[ETH_ALEN]; /* PF Default MAC address */
+ u8 vsi; /* VSI identifier */
+ u8 vf_idx; /* which VF this is */
+ u8 vf_flags; /* flags indicating what modes
+ * are supported for the port
+ */
+};
+
+#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI)
+#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI)
+#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC)
+#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE)
+#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF)
+#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4)
+#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode))
+#define FM10K_VF_FLAG_SET_MODE_NONE \
+ FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE)
+#define FM10K_VF_FLAG_MULTI_ENABLED \
+ (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \
+ FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \
+ FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC))
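
The flag byte splits into two nibbles: the low nibble holds capability bits
indexed by xcast mode (what FM10K_VF_FLAG_CAPABLE extracts), and the high
nibble holds the currently enabled mode in the same one-hot encoding shifted
up by 4 (what FM10K_VF_FLAG_ENABLED shifts back down). A hypothetical value,
for illustration only:

	/* VF capable of NONE and MULTI, currently running in MULTI mode */
	u8 vf_flags = FM10K_VF_FLAG_NONE_CAPABLE |	/* 0x08 */
		      FM10K_VF_FLAG_MULTI_CAPABLE |	/* 0x02 */
		      FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI); /* 0x20 */
	/* vf_flags == 0x2a; the enabled nibble, shifted down, is
	 * 0x2 == 1 << FM10K_XCAST_MODE_MULTI
	 */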
+
+struct fm10k_iov_ops {
+ /* IOV related bring-up and tear-down */
+ s32 (*assign_resources)(struct fm10k_hw *, u16, u16);
+ s32 (*configure_tc)(struct fm10k_hw *, u16, int);
+ s32 (*assign_int_moderator)(struct fm10k_hw *, u16);
+ s32 (*assign_default_mac_vlan)(struct fm10k_hw *,
+ struct fm10k_vf_info *);
+ s32 (*reset_resources)(struct fm10k_hw *,
+ struct fm10k_vf_info *);
+ s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8);
+ void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *);
+ void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16);
+ s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64);
+};
+
+struct fm10k_iov_info {
+ struct fm10k_iov_ops ops;
+ u16 total_vfs;
+ u16 num_vfs;
+ u16 num_pools;
+};
+
+enum fm10k_devices {
+ fm10k_device_pf,
+ fm10k_device_vf,
+};
+
+struct fm10k_info {
+ enum fm10k_mac_type mac;
+ s32 (*get_invariants)(struct fm10k_hw *);
+ struct fm10k_mac_ops *mac_ops;
+ struct fm10k_iov_ops *iov_ops;
+};
+
+struct fm10k_hw {
+ u32 __iomem *hw_addr;
+ u32 __iomem *sw_addr;
+ void *back;
+ struct fm10k_mac_info mac;
+ struct fm10k_bus_info bus;
+ struct fm10k_bus_info bus_caps;
+ struct fm10k_iov_info iov;
+ struct fm10k_mbx_info mbx;
+ struct fm10k_swapi_info swapi;
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+};
+
+/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
+#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8
+#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8
+
+/* Transmit Descriptor */
+struct fm10k_tx_desc {
+ __le64 buffer_addr; /* Address of the descriptor's data buffer */
+ __le16 buflen; /* Length of data to be DMAed */
+ __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */
+ __le16 mss; /* MSS for segmentation offload */
+ u8 hdrlen; /* Header size for segmentation offload */
+ u8 flags; /* Status and offload request flags */
+};
+
+/* Transmit Descriptor Cache Structure */
+struct fm10k_tx_desc_cache {
+ struct fm10k_tx_desc tx_desc[256];
+};
+
+#define FM10K_TXD_FLAG_INT 0x01
+#define FM10K_TXD_FLAG_TIME 0x02
+#define FM10K_TXD_FLAG_CSUM 0x04
+#define FM10K_TXD_FLAG_FTAG 0x10
+#define FM10K_TXD_FLAG_RS 0x20
+#define FM10K_TXD_FLAG_LAST 0x40
+#define FM10K_TXD_FLAG_DONE 0x80
+
+/* These macros are meant to enable optimal placement of the RS and INT
+ * bits. It will point us to the last descriptor in the cache for either the
+ * start of the packet, or the end of the packet. If the index is actually
+ * at the start of the FIFO it will point to the offset for the last index
+ * in the FIFO to prevent an unnecessary write.
+ */
+#define FM10K_TXD_WB_FIFO_SIZE 4
+
+/* Receive Descriptor - 32B */
+union fm10k_rx_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 reserved; /* Empty space, RSS hash */
+ __le64 timestamp;
+ } q; /* Read, Writeback, 64b quad-words */
+ struct {
+ __le32 data; /* RSS and header data */
+ __le32 rss; /* RSS Hash */
+ __le32 staterr;
+ __le32 vlan_len;
+ __le32 glort; /* sglort/dglort */
+ } d; /* Writeback, 32b double-words */
+ struct {
+ __le16 pkt_info; /* RSS, Pkt type */
+ __le16 hdr_info; /* Splithdr, hdrlen, xC */
+ __le16 rss_lower;
+ __le16 rss_upper;
+ __le16 status; /* status/error */
+ __le16 csum_err; /* checksum or extended error value */
+ __le16 length; /* Packet length */
+ __le16 vlan; /* VLAN tag */
+ __le16 dglort;
+ __le16 sglort;
+ } w; /* Writeback, 16b words */
+};
+
+#define FM10K_RXD_RSSTYPE_MASK 0x000F
+enum fm10k_rdesc_rss_type {
+ FM10K_RSSTYPE_NONE = 0x0,
+ FM10K_RSSTYPE_IPV4_TCP = 0x1,
+ FM10K_RSSTYPE_IPV4 = 0x2,
+ FM10K_RSSTYPE_IPV6_TCP = 0x3,
+ /* Reserved 0x4 */
+ FM10K_RSSTYPE_IPV6 = 0x5,
+ /* Reserved 0x6 */
+ FM10K_RSSTYPE_IPV4_UDP = 0x7,
+ FM10K_RSSTYPE_IPV6_UDP = 0x8
+ /* Reserved 0x9 - 0xF */
+};
+
+#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006
+enum fm10k_rxdesc_xc {
+ FM10K_XC_UNICAST = 0x0,
+ FM10K_XC_MULTICAST = 0x4,
+ FM10K_XC_BROADCAST = 0x6
+};
+
+#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */
+#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */
+#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */
+#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */
+#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */
+#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */
+#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */
+#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */
+#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */
+
+struct fm10k_ftag {
+ __be16 swpri_type_user;
+ __be16 vlan;
+ __be16 sglort;
+ __be16 dglort;
+};
+
+#endif /* _FM10K_TYPE_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
new file mode 100644
index 0000000..f0aa0f9
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
@@ -0,0 +1,578 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_vf.h"
+
+/**
+ * fm10k_stop_hw_vf - Stop Tx/Rx units
+ * @hw: pointer to hardware structure
+ *
+ * This function stops the queues, then re-seeds the permanent MAC address
+ * into the queue base address registers so that it can be read back later.
+ **/
+static s32 fm10k_stop_hw_vf(struct fm10k_hw *hw)
+{
+ u8 *perm_addr = hw->mac.perm_addr;
+ u32 bal = 0, bah = 0;
+ s32 err;
+ u16 i;
+
+ /* we need to disable the queues before taking further steps */
+ err = fm10k_stop_hw_generic(hw);
+ if (err)
+ return err;
+
+ /* If the permanent address is set then we need to restore it */
+ if (is_valid_ether_addr(perm_addr)) {
+ bal = (((u32)perm_addr[3]) << 24) |
+ (((u32)perm_addr[4]) << 16) |
+ (((u32)perm_addr[5]) << 8);
+ bah = (((u32)0xFF) << 24) |
+ (((u32)perm_addr[0]) << 16) |
+ (((u32)perm_addr[1]) << 8) |
+ ((u32)perm_addr[2]);
+ }
+
+ /* The queues have already been disabled so we just need to
+ * update their base address registers
+ */
+ for (i = 0; i < hw->mac.max_queues; i++) {
+ fm10k_write_reg(hw, FM10K_TDBAL(i), bal);
+ fm10k_write_reg(hw, FM10K_TDBAH(i), bah);
+ fm10k_write_reg(hw, FM10K_RDBAL(i), bal);
+ fm10k_write_reg(hw, FM10K_RDBAH(i), bah);
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_reset_hw_vf - VF hardware reset
+ * @hw: pointer to hardware structure
+ *
+ * This function should return the hardware to a state similar to the
+ * one it is in after just being initialized.
+ **/
+static s32 fm10k_reset_hw_vf(struct fm10k_hw *hw)
+{
+ s32 err;
+
+ /* shut down queues we own and reset DMA configuration */
+ err = fm10k_stop_hw_vf(hw);
+ if (err)
+ return err;
+
+ /* Initiate VF reset */
+ fm10k_write_reg(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST);
+
+ /* Flush write and allow 100us for reset to complete */
+ fm10k_write_flush(hw);
+ udelay(FM10K_RESET_TIMEOUT);
+
+ /* Clear reset bit and verify it was cleared */
+ fm10k_write_reg(hw, FM10K_VFCTRL, 0);
+ if (fm10k_read_reg(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST)
+ err = FM10K_ERR_RESET_FAILED;
+
+ return err;
+}
+
+/**
+ * fm10k_init_hw_vf - VF hardware initialization
+ * @hw: pointer to hardware structure
+ *
+ * This function probes the queue registers to determine how many queues
+ * this VF owns, disables them, and records the count in hw->mac.max_queues.
+ **/
+static s32 fm10k_init_hw_vf(struct fm10k_hw *hw)
+{
+ u32 tqdloc, tqdloc0 = ~fm10k_read_reg(hw, FM10K_TQDLOC(0));
+ s32 err;
+ u16 i;
+
+ /* assume we always have at least 1 queue */
+ for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) {
+ /* verify the Descriptor cache offsets are increasing */
+ tqdloc = ~fm10k_read_reg(hw, FM10K_TQDLOC(i));
+ if (!tqdloc || (tqdloc == tqdloc0))
+ break;
+
+ /* check to verify the PF doesn't own any of our queues */
+ if (!~fm10k_read_reg(hw, FM10K_TXQCTL(i)) ||
+ !~fm10k_read_reg(hw, FM10K_RXQCTL(i)))
+ break;
+ }
+
+ /* shut down queues we own and reset DMA configuration */
+ err = fm10k_disable_queues_generic(hw, i);
+ if (err)
+ return err;
+
+ /* record maximum queue count */
+ hw->mac.max_queues = i;
+
+ return 0;
+}
+
+/**
+ * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU
+ * @hw: pointer to hardware structure
+ *
+ * Looks at the PCIe bus info to confirm whether or not this slot can support
+ * the necessary bandwidth for this device. Since the VF has no control over
+ * the "slot" it is in, always indicate that the slot is appropriate.
+ **/
+static bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw)
+{
+ return true;
+}
+
+/* This structure defines the attributes to be parsed below */
+const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = {
+ FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN),
+ FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vid: VLAN ID to add to table
+ * @vsi: Reserved, should always be 0
+ * @set: Indicates if this is a set or clear operation
+ *
+ * This function adds or removes the corresponding VLAN ID from the VLAN
+ * filter table for this VF.
+ **/
+static s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[4];
+
+ /* verify the index is not set */
+ if (vsi)
+ return FM10K_ERR_PARAM;
+
+ /* verify upper 4 bits of vid and length are 0 */
+ if ((vid << 16 | vid) >> 28)
+ return FM10K_ERR_PARAM;
+
+ /* encode set bit into the VLAN ID */
+ if (!set)
+ vid |= FM10K_VLAN_CLEAR;
+
+ /* generate VLAN request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
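
The combined range check above works because the upper half of the u32 vid
may carry a 12-bit length (see FM10K_VLAN_LENGTH_SHIFT in fm10k_type.h):
(vid << 16 | vid) >> 28 is nonzero exactly when bits 15:12 or 31:28 are set,
i.e. when either the VID or the length needs more than 12 bits. A standalone
demonstration (illustrative only):

	#include <assert.h>
	#include <stdint.h>

	static int vid_len_ok(uint32_t vid)
	{
		/* same test as fm10k_update_vlan_vf() */
		return !((vid << 16 | vid) >> 28);
	}

	int main(void)
	{
		assert(vid_len_ok(100));		/* vid 100, length 0 */
		assert(vid_len_ok((5u << 16) | 100));	/* length 5, vid 100 */
		assert(!vid_len_ok(4096));		/* vid needs bit 12 */
		assert(!vid_len_ok(0x10000000));	/* length out of range */
		return 0;
	}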
+
+/**
+ * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message
+ * @hw: pointer to the HW structure
+ * @results: Attributes for message
+ * @mbx: unused mailbox data
+ *
+ * This function should determine the MAC address for the VF
+ **/
+s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u8 perm_addr[ETH_ALEN];
+ u16 vid;
+ s32 err;
+
+ /* record MAC address requested */
+ err = fm10k_tlv_attr_get_mac_vlan(
+ results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC],
+ perm_addr, &vid);
+ if (err)
+ return err;
+
+ ether_addr_copy(hw->mac.perm_addr, perm_addr);
+ hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1);
+ hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR);
+
+ return 0;
+}
+
+/**
+ * fm10k_read_mac_addr_vf - Read device MAC address
+ * @hw: pointer to the HW structure
+ *
+ * This function should determine the MAC address for the VF
+ **/
+static s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw)
+{
+ u8 perm_addr[ETH_ALEN];
+ u32 base_addr;
+
+ base_addr = fm10k_read_reg(hw, FM10K_TDBAL(0));
+
+ /* last byte should be 0 */
+ if (base_addr << 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[3] = (u8)(base_addr >> 24);
+ perm_addr[4] = (u8)(base_addr >> 16);
+ perm_addr[5] = (u8)(base_addr >> 8);
+
+ base_addr = fm10k_read_reg(hw, FM10K_TDBAH(0));
+
+ /* first byte should be all 1's */
+ if ((~base_addr) >> 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[0] = (u8)(base_addr >> 16);
+ perm_addr[1] = (u8)(base_addr >> 8);
+ perm_addr[2] = (u8)(base_addr);
+
+ ether_addr_copy(hw->mac.perm_addr, perm_addr);
+ ether_addr_copy(hw->mac.addr, perm_addr);
+
+ return 0;
+}
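
fm10k_stop_hw_vf() above and this function are inverses: the permanent MAC
is parked in the queue base address registers, with bytes 3-5 in the top of
TDBAL (low byte zero) and 0xFF followed by bytes 0-2 in TDBAH, so the VF can
recover its address after the queues are torn down. A standalone round-trip
sketch (illustrative only):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const uint8_t mac[6] = { 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc };
		uint8_t out[6];

		/* pack as in fm10k_stop_hw_vf() */
		uint32_t bal = ((uint32_t)mac[3] << 24) |
			       ((uint32_t)mac[4] << 16) |
			       ((uint32_t)mac[5] << 8);
		uint32_t bah = 0xFF000000u |
			       ((uint32_t)mac[0] << 16) |
			       ((uint32_t)mac[1] << 8) |
			       (uint32_t)mac[2];

		/* validate and unpack as in fm10k_read_mac_addr_vf() */
		if ((bal << 24) || ((~bah) >> 24))
			return 1;	/* registers held no MAC */
		out[3] = (uint8_t)(bal >> 24);
		out[4] = (uint8_t)(bal >> 16);
		out[5] = (uint8_t)(bal >> 8);
		out[0] = (uint8_t)(bah >> 16);
		out[1] = (uint8_t)(bah >> 8);
		out[2] = (uint8_t)bah;

		printf("%s\n", memcmp(mac, out, 6) ? "mismatch" : "round trip ok");
		return 0;
	}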
+
+/**
+ * fm10k_update_uc_addr_vf - Update device unicast address
+ * @hw: pointer to the HW structure
+ * @glort: unused
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ * @flags: flags field to indicate add and secure - unused
+ *
+ * This function is used to add or remove unicast MAC addresses for
+ * the VF.
+ **/
+static s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add, u8 flags)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[7];
+
+ /* verify VLAN ID is valid */
+ if (vid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* verify MAC address is valid */
+ if (!is_valid_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ /* verify we are not locked down on the MAC address */
+ if (is_valid_ether_addr(hw->mac.perm_addr) &&
+ memcmp(hw->mac.perm_addr, mac, ETH_ALEN))
+ return FM10K_ERR_PARAM;
+
+ /* add bit to notify us if this is a set or clear operation */
+ if (!add)
+ vid |= FM10K_VLAN_CLEAR;
+
+ /* generate VLAN request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_mc_addr_vf - Update device multicast address
+ * @hw: pointer to the HW structure
+ * @glort: unused
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ *
+ * This function is used to add or remove multicast MAC addresses for
+ * the VF.
+ **/
+static s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[7];
+
+ /* verify VLAN ID is valid */
+ if (vid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* verify multicast address is valid */
+ if (!is_multicast_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ /* add bit to notify us if this is a set or clear operation */
+ if (!add)
+ vid |= FM10K_VLAN_CLEAR;
+
+ /* generate VLAN request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST,
+ mac, vid);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_int_moderator_vf - Request update of interrupt moderator list
+ * @hw: pointer to hardware structure
+ *
+ * This function will issue a request to the PF to rescan our MSI-X table
+ * and to update the interrupt moderator linked list.
+ **/
+static void fm10k_update_int_moderator_vf(struct fm10k_hw *hw)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[1];
+
+ /* generate MSI-X request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX);
+
+ /* load onto outgoing mailbox */
+ mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/* This structure defines the attributes to be parsed below */
+const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = {
+ FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE),
+ FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE),
+ FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler is meant to capture the indication from the PF that we
+ * are ready to bring up the interface.
+ **/
+s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ?
+ FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO;
+
+ return 0;
+}
+
+/**
+ * fm10k_update_lport_state_vf - Update device state in lower device
+ * @hw: pointer to the HW structure
+ * @glort: unused
+ * @count: number of logical ports to enable - unused (always 1)
+ * @enable: boolean value indicating if this is an enable or disable request
+ *
+ * Notify the lower device of a state change. If the lower device is
+ * enabled we can add filters; if it is disabled, all filters for this
+ * logical port are flushed.
+ **/
+static s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort,
+ u16 count, bool enable)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[2];
+
+ /* reset glort mask 0 as we have to wait to be enabled */
+ hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
+
+ /* generate port state request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ if (!enable)
+ fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_xcast_mode_vf - Request update of multicast mode
+ * @hw: pointer to hardware structure
+ * @glort: unused
+ * @mode: integer value indicating mode being requested
+ *
+ * This function will attempt to request a higher mode for the port
+ * so that it can enable either multicast, multicast promiscuous, or
+ * promiscuous mode of operation.
+ **/
+static s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[3];
+
+ if (mode > FM10K_XCAST_MODE_NONE)
+ return FM10K_ERR_PARAM;
+ /* generate message requesting to change xcast mode */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = {
+ FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP),
+ FM10K_TLV_ATTR_LAST
+};
+
+/* currently there is no shared 1588 timestamp handler */
+
+/**
+ * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to statistics structure
+ *
+ * This function collects and aggregates the per-queue hardware statistics.
+ **/
+static void fm10k_update_hw_stats_vf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues);
+}
+
+/**
+ * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to the stats structure to update
+ *
+ * This function resets the base for queue hardware statistics.
+ **/
+static void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ /* Unbind Queue Statistics */
+ fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues);
+
+ /* Reinitialize bases for all stats */
+ fm10k_update_hw_stats_vf(hw, stats);
+}
+
+/**
+ * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues
+ * @hw: pointer to hardware structure
+ * @dglort: pointer to dglort configuration structure
+ *
+ * Reads the configuration structure contained in dglort_cfg and uses
+ * that information to populate a DGLORTMAP/DEC entry and the queues
+ * to which it has been assigned.
+ **/
+static s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw,
+ struct fm10k_dglort_cfg *dglort)
+{
+ /* verify the dglort pointer */
+ if (!dglort)
+ return FM10K_ERR_PARAM;
+
+ /* stub for now until we determine correct message for this */
+
+ return 0;
+}
+
+/**
+ * fm10k_adjust_systime_vf - Adjust systime frequency
+ * @hw: pointer to hardware structure
+ * @ppb: adjustment rate in parts per billion
+ *
+ * This function takes an adjustment rate in parts per billion and will
+ * verify that this value is 0 as the VF cannot support adjusting the
+ * systime clock.
+ *
+ * If the ppb value is non-zero, the return is FM10K_ERR_PARAM; else success.
+ **/
+static s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb)
+{
+ /* The VF cannot adjust the clock frequency, however it should
+ * already have a syntonic clock with whichever host interface is
+ * running as the master for the host interface clock domain, so
+ * no frequency adjustment should be necessary.
+ */
+ return ppb ? FM10K_ERR_PARAM : 0;
+}
+
+/**
+ * fm10k_read_systime_vf - Reads value of systime registers
+ * @hw: pointer to the hardware structure
+ *
+ * This function reads the contents of two registers, combined to represent
+ * a 64-bit value measured in nanoseconds. To guarantee the value is accurate
+ * we check the 32 most significant bits both before and after reading the
+ * 32 least significant bits to verify they didn't change as we were reading
+ * the registers.
+ **/
+static u64 fm10k_read_systime_vf(struct fm10k_hw *hw)
+{
+ u32 systime_l, systime_h, systime_tmp;
+
+ systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1);
+
+ do {
+ systime_tmp = systime_h;
+ systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME);
+ systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1);
+ } while (systime_tmp != systime_h);
+
+ return ((u64)systime_h << 32) | systime_l;
+}
+
+static const struct fm10k_msg_data fm10k_msg_data_vf[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+};
+
+static struct fm10k_mac_ops mac_ops_vf = {
+ .get_bus_info = &fm10k_get_bus_info_generic,
+ .reset_hw = &fm10k_reset_hw_vf,
+ .init_hw = &fm10k_init_hw_vf,
+ .start_hw = &fm10k_start_hw_generic,
+ .stop_hw = &fm10k_stop_hw_vf,
+ .is_slot_appropriate = &fm10k_is_slot_appropriate_vf,
+ .update_vlan = &fm10k_update_vlan_vf,
+ .read_mac_addr = &fm10k_read_mac_addr_vf,
+ .update_uc_addr = &fm10k_update_uc_addr_vf,
+ .update_mc_addr = &fm10k_update_mc_addr_vf,
+ .update_xcast_mode = &fm10k_update_xcast_mode_vf,
+ .update_int_moderator = &fm10k_update_int_moderator_vf,
+ .update_lport_state = &fm10k_update_lport_state_vf,
+ .update_hw_stats = &fm10k_update_hw_stats_vf,
+ .rebind_hw_stats = &fm10k_rebind_hw_stats_vf,
+ .configure_dglort_map = &fm10k_configure_dglort_map_vf,
+ .get_host_state = &fm10k_get_host_state_generic,
+ .adjust_systime = &fm10k_adjust_systime_vf,
+ .read_systime = &fm10k_read_systime_vf,
+};
+
+static s32 fm10k_get_invariants_vf(struct fm10k_hw *hw)
+{
+ fm10k_get_invariants_generic(hw);
+
+ return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0);
+}
+
+struct fm10k_info fm10k_vf_info = {
+ .mac = fm10k_mac_vf,
+ .get_invariants = &fm10k_get_invariants_vf,
+ .mac_ops = &mac_ops_vf,
+};
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.h b/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
new file mode 100644
index 0000000..06a99d7
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
@@ -0,0 +1,78 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_VF_H_
+#define _FM10K_VF_H_
+
+#include "fm10k_type.h"
+#include "fm10k_common.h"
+
+enum fm10k_vf_tlv_msg_id {
+ FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */
+ FM10K_VF_MSG_ID_MSIX,
+ FM10K_VF_MSG_ID_MAC_VLAN,
+ FM10K_VF_MSG_ID_LPORT_STATE,
+ FM10K_VF_MSG_ID_1588,
+ FM10K_VF_MSG_ID_MAX,
+};
+
+enum fm10k_tlv_mac_vlan_attr_id {
+ FM10K_MAC_VLAN_MSG_VLAN,
+ FM10K_MAC_VLAN_MSG_SET,
+ FM10K_MAC_VLAN_MSG_MAC,
+ FM10K_MAC_VLAN_MSG_DEFAULT_MAC,
+ FM10K_MAC_VLAN_MSG_MULTICAST,
+ FM10K_MAC_VLAN_MSG_ID_MAX
+};
+
+enum fm10k_tlv_lport_state_attr_id {
+ FM10K_LPORT_STATE_MSG_DISABLE,
+ FM10K_LPORT_STATE_MSG_XCAST_MODE,
+ FM10K_LPORT_STATE_MSG_READY,
+ FM10K_LPORT_STATE_MSG_MAX
+};
+
+enum fm10k_tlv_1588_attr_id {
+ FM10K_1588_MSG_TIMESTAMP,
+ FM10K_1588_MSG_MAX
+};
+
+#define FM10K_VF_MSG_MSIX_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func)
+
+s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[];
+#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \
+ fm10k_mac_vlan_msg_attr, func)
+
+s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[];
+#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \
+ fm10k_lport_state_msg_attr, func)
+
+extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[];
+#define FM10K_VF_MSG_1588_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func)
+
+extern struct fm10k_info fm10k_vf_info;
+#endif /* _FM10K_VF_H */
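As a reading aid (this is an assumption: FM10K_MSG_HANDLER is defined in the TLV header, which is not shown here), a handler macro such as FM10K_VF_MSG_MAC_VLAN_HANDLER plausibly expands to a struct fm10k_msg_data initializer along these lines, with field names assumed:

        /* Hypothetical expansion sketch: */
        FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf)
                /* ==> { .id   = FM10K_VF_MSG_ID_MAC_VLAN,
                 *       .attr = fm10k_mac_vlan_msg_attr,
                 *       .func = fm10k_msg_mac_vlan_vf } */

which is how the fm10k_msg_data_vf[] table in fm10k_vf.c ties a message ID to its attribute schema and its handler.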
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index ac9f214..673d820 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -386,119 +386,87 @@
char name[IFNAMSIZ + 9];
#ifdef CONFIG_NET_RX_BUSY_POLL
- unsigned int state;
-#define IXGBE_QV_STATE_IDLE 0
-#define IXGBE_QV_STATE_NAPI 1 /* NAPI owns this QV */
-#define IXGBE_QV_STATE_POLL 2 /* poll owns this QV */
-#define IXGBE_QV_STATE_DISABLED 4 /* QV is disabled */
-#define IXGBE_QV_OWNED (IXGBE_QV_STATE_NAPI | IXGBE_QV_STATE_POLL)
-#define IXGBE_QV_LOCKED (IXGBE_QV_OWNED | IXGBE_QV_STATE_DISABLED)
-#define IXGBE_QV_STATE_NAPI_YIELD 8 /* NAPI yielded this QV */
-#define IXGBE_QV_STATE_POLL_YIELD 16 /* poll yielded this QV */
-#define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD)
-#define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD)
- spinlock_t lock;
+ atomic_t state;
#endif /* CONFIG_NET_RX_BUSY_POLL */
/* for dynamic allocation of rings associated with this q_vector */
struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
};
+
#ifdef CONFIG_NET_RX_BUSY_POLL
+enum ixgbe_qv_state_t {
+ IXGBE_QV_STATE_IDLE = 0,
+ IXGBE_QV_STATE_NAPI,
+ IXGBE_QV_STATE_POLL,
+ IXGBE_QV_STATE_DISABLE
+};
+
static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector)
{
-
- spin_lock_init(&q_vector->lock);
- q_vector->state = IXGBE_QV_STATE_IDLE;
+ /* reset state to idle */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);
}
/* called from the device poll routine to get ownership of a q_vector */
static inline bool ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
{
- int rc = true;
- spin_lock_bh(&q_vector->lock);
- if (q_vector->state & IXGBE_QV_LOCKED) {
- WARN_ON(q_vector->state & IXGBE_QV_STATE_NAPI);
- q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
- rc = false;
+ int rc = atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
+ IXGBE_QV_STATE_NAPI);
#ifdef BP_EXTENDED_STATS
+ if (rc != IXGBE_QV_STATE_IDLE)
q_vector->tx.ring->stats.yields++;
#endif
- } else {
- /* we don't care if someone yielded */
- q_vector->state = IXGBE_QV_STATE_NAPI;
- }
- spin_unlock_bh(&q_vector->lock);
- return rc;
+
+ return rc == IXGBE_QV_STATE_IDLE;
}
/* returns true if someone tried to get the qv while napi had it */
-static inline bool ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
+static inline void ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
{
- int rc = false;
- spin_lock_bh(&q_vector->lock);
- WARN_ON(q_vector->state & (IXGBE_QV_STATE_POLL |
- IXGBE_QV_STATE_NAPI_YIELD));
+ WARN_ON(atomic_read(&q_vector->state) != IXGBE_QV_STATE_NAPI);
- if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
- rc = true;
- /* will reset state to idle, unless QV is disabled */
- q_vector->state &= IXGBE_QV_STATE_DISABLED;
- spin_unlock_bh(&q_vector->lock);
- return rc;
+ /* flush any outstanding Rx frames */
+ if (q_vector->napi.gro_list)
+ napi_gro_flush(&q_vector->napi, false);
+
+ /* reset state to idle */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);
}
/* called from ixgbe_low_latency_poll() */
static inline bool ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
{
- int rc = true;
- spin_lock_bh(&q_vector->lock);
- if ((q_vector->state & IXGBE_QV_LOCKED)) {
- q_vector->state |= IXGBE_QV_STATE_POLL_YIELD;
- rc = false;
+ int rc = atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
+ IXGBE_QV_STATE_POLL);
#ifdef BP_EXTENDED_STATS
- q_vector->rx.ring->stats.yields++;
+ if (rc != IXGBE_QV_STATE_IDLE)
+ q_vector->tx.ring->stats.yields++;
#endif
- } else {
- /* preserve yield marks */
- q_vector->state |= IXGBE_QV_STATE_POLL;
- }
- spin_unlock_bh(&q_vector->lock);
- return rc;
+ return rc == IXGBE_QV_STATE_IDLE;
}
/* returns true if someone tried to get the qv while it was locked */
-static inline bool ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
+static inline void ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
{
- int rc = false;
- spin_lock_bh(&q_vector->lock);
- WARN_ON(q_vector->state & (IXGBE_QV_STATE_NAPI));
+ WARN_ON(atomic_read(&q_vector->state) != IXGBE_QV_STATE_POLL);
- if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
- rc = true;
- /* will reset state to idle, unless QV is disabled */
- q_vector->state &= IXGBE_QV_STATE_DISABLED;
- spin_unlock_bh(&q_vector->lock);
- return rc;
+ /* reset state to idle */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);
}
/* true if a socket is polling, even if it did not get the lock */
static inline bool ixgbe_qv_busy_polling(struct ixgbe_q_vector *q_vector)
{
- WARN_ON(!(q_vector->state & IXGBE_QV_OWNED));
- return q_vector->state & IXGBE_QV_USER_PEND;
+ return atomic_read(&q_vector->state) == IXGBE_QV_STATE_POLL;
}
/* false if QV is currently owned */
static inline bool ixgbe_qv_disable(struct ixgbe_q_vector *q_vector)
{
- int rc = true;
- spin_lock_bh(&q_vector->lock);
- if (q_vector->state & IXGBE_QV_OWNED)
- rc = false;
- q_vector->state |= IXGBE_QV_STATE_DISABLED;
- spin_unlock_bh(&q_vector->lock);
+ int rc = atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
+ IXGBE_QV_STATE_DISABLE);
- return rc;
+ return rc == IXGBE_QV_STATE_IDLE;
}
#else /* CONFIG_NET_RX_BUSY_POLL */
@@ -643,9 +611,7 @@
* thus the additional *_CAPABLE flags.
*/
u32 flags;
-#define IXGBE_FLAG_MSI_CAPABLE (u32)(1 << 0)
#define IXGBE_FLAG_MSI_ENABLED (u32)(1 << 1)
-#define IXGBE_FLAG_MSIX_CAPABLE (u32)(1 << 2)
#define IXGBE_FLAG_MSIX_ENABLED (u32)(1 << 3)
#define IXGBE_FLAG_RX_1BUF_CAPABLE (u32)(1 << 4)
#define IXGBE_FLAG_RX_PS_CAPABLE (u32)(1 << 5)
@@ -760,8 +726,6 @@
u8 __iomem *io_addr; /* Mainly for iounmap use */
u32 wol;
- u16 bd_number;
-
u16 eeprom_verh;
u16 eeprom_verl;
u16 eeprom_cap;
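The busy-poll rework earlier in this file replaces a spinlock plus a multi-bit state word with a single atomic state machine. A minimal sketch of the idiom outside the driver (names are illustrative, not ixgbe's; assumes <linux/atomic.h>):

        enum qv_state { QV_IDLE = 0, QV_NAPI, QV_POLL, QV_DISABLE };

        static bool qv_try_lock(atomic_t *state, enum qv_state owner)
        {
                /* succeeds only on the IDLE -> owner transition; any
                 * other current value means the vector is already taken
                 */
                return atomic_cmpxchg(state, QV_IDLE, owner) == QV_IDLE;
        }

        static void qv_unlock(atomic_t *state)
        {
                /* a plain store suffices: only the current owner unlocks */
                atomic_set(state, QV_IDLE);
        }

Because there is no longer any yield bookkeeping to report, the unlock paths can drop their return value, which is why ixgbe_qv_unlock_napi() and ixgbe_qv_unlock_poll() become void above.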
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index e4100b5..cff383b 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1303,7 +1303,7 @@
{ IXGBE_RAL(0), 16, TABLE64_TEST_LO, 0xFFFFFFFF, 0xFFFFFFFF },
{ IXGBE_RAL(0), 16, TABLE64_TEST_HI, 0x8001FFFF, 0x800CFFFF },
{ IXGBE_MTA(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
- { 0, 0, 0, 0 }
+ { .reg = 0 }
};
/* default 82598 register test */
@@ -1331,7 +1331,7 @@
{ IXGBE_RAL(0), 16, TABLE64_TEST_LO, 0xFFFFFFFF, 0xFFFFFFFF },
{ IXGBE_RAL(0), 16, TABLE64_TEST_HI, 0x800CFFFF, 0x800CFFFF },
{ IXGBE_MTA(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
- { 0, 0, 0, 0 }
+ { .reg = 0 }
};
static bool reg_pattern_test(struct ixgbe_adapter *adapter, u64 *data, int reg,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index ae36fd6..ce40c77 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -696,46 +696,83 @@
ixgbe_set_rss_queues(adapter);
}
-static void ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter,
- int vectors)
+/**
+ * ixgbe_acquire_msix_vectors - acquire MSI-X vectors
+ * @adapter: board private structure
+ *
+ * Attempts to acquire a suitable range of MSI-X vector interrupts. Will
+ * return a negative error code if unable to acquire MSI-X vectors for any
+ * reason.
+ */
+static int ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter)
{
- int vector_threshold;
+ struct ixgbe_hw *hw = &adapter->hw;
+ int i, vectors, vector_threshold;
- /* We'll want at least 2 (vector_threshold):
- * 1) TxQ[0] + RxQ[0] handler
- * 2) Other (Link Status Change, etc.)
+ /* We start by asking for one vector per queue pair */
+ vectors = max(adapter->num_rx_queues, adapter->num_tx_queues);
+
+ /* It is easy to be greedy for MSI-X vectors. However, it really
+ * doesn't do much good if we have a lot more vectors than CPUs. We'll
+ * be somewhat conservative and only ask for (roughly) the same number
+ * of vectors as there are CPUs.
+ */
+ vectors = min_t(int, vectors, num_online_cpus());
+
+ /* Some vectors are necessary for non-queue interrupts */
+ vectors += NON_Q_VECTORS;
+
+ /* Hardware can only support a maximum of hw.mac->max_msix_vectors.
+ * With features such as RSS and VMDq, we can easily surpass the
+ * number of Rx and Tx descriptor queues supported by our device.
+ * Thus, we cap the maximum in the rare cases where the CPU count also
+ * exceeds our vector limit.
+ */
+ vectors = min_t(int, vectors, hw->mac.max_msix_vectors);
+
+ /* We want a minimum of two MSI-X vectors for (1) a TxQ[0] + RxQ[0]
+ * handler, and (2) an Other (Link Status Change, etc.) handler.
*/
vector_threshold = MIN_MSIX_COUNT;
- /*
- * The more we get, the more we will assign to Tx/Rx Cleanup
- * for the separate queues...where Rx Cleanup >= Tx Cleanup.
- * Right now, we simply care about how many we'll get; we'll
- * set them up later while requesting irq's.
- */
+ adapter->msix_entries = kcalloc(vectors,
+ sizeof(struct msix_entry),
+ GFP_KERNEL);
+ if (!adapter->msix_entries)
+ return -ENOMEM;
+
+ for (i = 0; i < vectors; i++)
+ adapter->msix_entries[i].entry = i;
+
vectors = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
vector_threshold, vectors);
if (vectors < 0) {
- /* Can't allocate enough MSI-X interrupts? Oh well.
- * This just means we'll go with either a single MSI
- * vector or fall back to legacy interrupts.
+ /* A negative count of allocated vectors indicates an error in
+ * acquiring within the specified range of MSI-X vectors
*/
- netif_printk(adapter, hw, KERN_DEBUG, adapter->netdev,
- "Unable to allocate MSI-X interrupts\n");
+ e_dev_warn("Failed to allocate MSI-X interrupts. Err: %d\n",
+ vectors);
+
adapter->flags &= ~IXGBE_FLAG_MSIX_ENABLED;
kfree(adapter->msix_entries);
adapter->msix_entries = NULL;
- } else {
- adapter->flags |= IXGBE_FLAG_MSIX_ENABLED; /* Woot! */
- /*
- * Adjust for only the vectors we'll use, which is minimum
- * of max_msix_q_vectors + NON_Q_VECTORS, or the number of
- * vectors we were allocated.
- */
- vectors -= NON_Q_VECTORS;
- adapter->num_q_vectors = min(vectors, adapter->max_q_vectors);
+
+ return vectors;
}
+
+ /* we successfully allocated some number of vectors within our
+ * requested range.
+ */
+ adapter->flags |= IXGBE_FLAG_MSIX_ENABLED;
+
+ /* Adjust for only the vectors we'll use, which is minimum
+ * of max_q_vectors, or the number of vectors we were allocated.
+ */
+ vectors -= NON_Q_VECTORS;
+ adapter->num_q_vectors = min_t(int, vectors, adapter->max_q_vectors);
+
+ return 0;
}
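As a worked example with invented numbers: on an 8-CPU box with 16 Rx and 16 Tx queues, and assuming NON_Q_VECTORS is 1, the function requests min(max(16, 16), 8) + 1 = 9 vectors, further capped by hw->mac.max_msix_vectors. pci_enable_msix_range() may then grant anything from MIN_MSIX_COUNT (two, per the comment above) up to 9; if it grants, say, 5, num_q_vectors ends up as min(5 - 1, max_q_vectors) = 4, assuming max_q_vectors is at least 4.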
static void ixgbe_add_ring(struct ixgbe_ring *ring,
@@ -807,6 +844,11 @@
ixgbe_poll, 64);
napi_hash_add(&q_vector->napi);
+#ifdef CONFIG_NET_RX_BUSY_POLL
+ /* initialize busy poll */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_DISABLE);
+
+#endif
/* tie q_vector and adapter together */
adapter->q_vector[v_idx] = q_vector;
q_vector->adapter = adapter;
@@ -1049,51 +1091,20 @@
**/
static void ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
{
- struct ixgbe_hw *hw = &adapter->hw;
- int vector, v_budget, err;
+ int err;
- /*
- * It's easy to be greedy for MSI-X vectors, but it really
- * doesn't do us much good if we have a lot more vectors
- * than CPU's. So let's be conservative and only ask for
- * (roughly) the same number of vectors as there are CPU's.
- * The default is to use pairs of vectors.
- */
- v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
- v_budget = min_t(int, v_budget, num_online_cpus());
- v_budget += NON_Q_VECTORS;
-
- /*
- * At the same time, hardware can only support a maximum of
- * hw.mac->max_msix_vectors vectors. With features
- * such as RSS and VMDq, we can easily surpass the number of Rx and Tx
- * descriptor queues supported by our device. Thus, we cap it off in
- * those rare cases where the cpu count also exceeds our vector limit.
- */
- v_budget = min_t(int, v_budget, hw->mac.max_msix_vectors);
-
- /* A failure in MSI-X entry allocation isn't fatal, but it does
- * mean we disable MSI-X capabilities of the adapter. */
- adapter->msix_entries = kcalloc(v_budget,
- sizeof(struct msix_entry), GFP_KERNEL);
- if (adapter->msix_entries) {
- for (vector = 0; vector < v_budget; vector++)
- adapter->msix_entries[vector].entry = vector;
-
- ixgbe_acquire_msix_vectors(adapter, v_budget);
-
- if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED)
- return;
- }
+ /* We will try to get MSI-X interrupts first */
+ if (!ixgbe_acquire_msix_vectors(adapter))
+ return;
/* At this point, we do not have MSI-X capabilities. We need to
* reconfigure or disable various features which require MSI-X
* capability.
*/
- /* disable DCB if number of TCs exceeds 1 */
+ /* Disable DCB unless we only have a single traffic class */
if (netdev_get_num_tc(adapter->netdev) > 1) {
- e_err(probe, "num TCs exceeds number of queues - disabling DCB\n");
+ e_dev_warn("Number of DCB TCs exceeds number of available queues. Disabling DCB support.\n");
netdev_reset_tc(adapter->netdev);
if (adapter->hw.mac.type == ixgbe_mac_82598EB)
@@ -1103,13 +1114,16 @@
adapter->temp_dcb_cfg.pfc_mode_enable = false;
adapter->dcb_cfg.pfc_mode_enable = false;
}
+
adapter->dcb_cfg.num_tcs.pg_tcs = 1;
adapter->dcb_cfg.num_tcs.pfc_tcs = 1;
- /* disable SR-IOV */
+ /* Disable SR-IOV support */
+ e_dev_warn("Disabling SR-IOV support\n");
ixgbe_disable_sriov(adapter);
- /* disable RSS */
+ /* Disable RSS */
+ e_dev_warn("Disabling RSS support\n");
adapter->ring_feature[RING_F_RSS].limit = 1;
/* recalculate number of queues now that many features have been
@@ -1119,13 +1133,11 @@
adapter->num_q_vectors = 1;
err = pci_enable_msi(adapter->pdev);
- if (err) {
- netif_printk(adapter, hw, KERN_DEBUG, adapter->netdev,
- "Unable to allocate MSI interrupt, falling back to legacy. Error: %d\n",
- err);
- return;
- }
- adapter->flags |= IXGBE_FLAG_MSI_ENABLED;
+ if (err)
+ e_dev_warn("Failed to allocate MSI interrupt, falling back to legacy. Error: %d\n",
+ err);
+ else
+ adapter->flags |= IXGBE_FLAG_MSI_ENABLED;
}
/**
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 166dc00..06ef5a3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -440,7 +440,7 @@
{IXGBE_TXDCTL(0), "TXDCTL"},
/* List Terminator */
- {}
+ { .name = NULL }
};
@@ -2077,9 +2077,6 @@
q_vector->rx.total_packets += total_rx_packets;
q_vector->rx.total_bytes += total_rx_bytes;
- if (cleaned_count)
- ixgbe_alloc_rx_buffers(rx_ring, cleaned_count);
-
return total_rx_packets;
}
@@ -5186,15 +5183,15 @@
{
struct device *dev = tx_ring->dev;
int orig_node = dev_to_node(dev);
- int numa_node = -1;
+ int ring_node = -1;
int size;
size = sizeof(struct ixgbe_tx_buffer) * tx_ring->count;
if (tx_ring->q_vector)
- numa_node = tx_ring->q_vector->numa_node;
+ ring_node = tx_ring->q_vector->numa_node;
- tx_ring->tx_buffer_info = vzalloc_node(size, numa_node);
+ tx_ring->tx_buffer_info = vzalloc_node(size, ring_node);
if (!tx_ring->tx_buffer_info)
tx_ring->tx_buffer_info = vzalloc(size);
if (!tx_ring->tx_buffer_info)
@@ -5206,7 +5203,7 @@
tx_ring->size = tx_ring->count * sizeof(union ixgbe_adv_tx_desc);
tx_ring->size = ALIGN(tx_ring->size, 4096);
- set_dev_node(dev, numa_node);
+ set_dev_node(dev, ring_node);
tx_ring->desc = dma_alloc_coherent(dev,
tx_ring->size,
&tx_ring->dma,
@@ -5270,15 +5267,15 @@
{
struct device *dev = rx_ring->dev;
int orig_node = dev_to_node(dev);
- int numa_node = -1;
+ int ring_node = -1;
int size;
size = sizeof(struct ixgbe_rx_buffer) * rx_ring->count;
if (rx_ring->q_vector)
- numa_node = rx_ring->q_vector->numa_node;
+ ring_node = rx_ring->q_vector->numa_node;
- rx_ring->rx_buffer_info = vzalloc_node(size, numa_node);
+ rx_ring->rx_buffer_info = vzalloc_node(size, ring_node);
if (!rx_ring->rx_buffer_info)
rx_ring->rx_buffer_info = vzalloc(size);
if (!rx_ring->rx_buffer_info)
@@ -5290,7 +5287,7 @@
rx_ring->size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);
rx_ring->size = ALIGN(rx_ring->size, 4096);
- set_dev_node(dev, numa_node);
+ set_dev_node(dev, ring_node);
rx_ring->desc = dma_alloc_coherent(dev,
rx_ring->size,
&rx_ring->dma,
@@ -7111,9 +7108,10 @@
tx_flags |= IXGBE_TX_FLAGS_SW_VLAN;
}
- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP &&
- !test_and_set_bit_lock(__IXGBE_PTP_TX_IN_PROGRESS,
- &adapter->state))) {
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ adapter->ptp_clock &&
+ !test_and_set_bit_lock(__IXGBE_PTP_TX_IN_PROGRESS,
+ &adapter->state)) {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
tx_flags |= IXGBE_TX_FLAGS_TSTAMP;
@@ -7984,7 +7982,6 @@
struct ixgbe_adapter *adapter = NULL;
struct ixgbe_hw *hw;
const struct ixgbe_info *ii = ixgbe_info_tbl[ent->driver_data];
- static int cards_found;
int i, err, pci_using_dac, expected_gts;
unsigned int indices = MAX_TX_QUEUES;
u8 part_str[IXGBE_PBANUM_LENGTH];
@@ -8070,8 +8067,6 @@
netdev->watchdog_timeo = 5 * HZ;
strlcpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
- adapter->bd_number = cards_found;
-
/* Setup hw api */
memcpy(&hw->mac.ops, ii->mac_ops, sizeof(hw->mac.ops));
hw->mac.type = ii->mac;
@@ -8355,7 +8350,6 @@
ixgbe_add_sanmac_netdev(netdev);
e_dev_info("%s\n", ixgbe_default_device_descr);
- cards_found++;
#ifdef CONFIG_IXGBE_HWMON
if (ixgbe_sysfs_init(adapter))
diff --git a/drivers/net/ethernet/intel/ixgbevf/ethtool.c b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
index d420f12..cc0e5b7 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
@@ -523,7 +523,7 @@
{ IXGBE_VFTDBAL(0), 2, PATTERN_TEST, 0xFFFFFF80, 0xFFFFFFFF },
{ IXGBE_VFTDBAH(0), 2, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
{ IXGBE_VFTDLEN(0), 2, PATTERN_TEST, 0x000FFF80, 0x000FFF80 },
- { 0, 0, 0, 0 }
+ { .reg = 0 }
};
static const u32 register_test_patterns[] = {
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
index a0a1de9..ba96cb5 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
@@ -385,7 +385,6 @@
/* structs defined in ixgbe_vf.h */
struct ixgbe_hw hw;
u16 msg_enable;
- u16 bd_number;
/* Interrupt Throttle Rate */
u32 eitr_param;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index c22a00c..030a219 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -3464,7 +3464,6 @@
struct ixgbevf_adapter *adapter = NULL;
struct ixgbe_hw *hw = NULL;
const struct ixgbevf_info *ii = ixgbevf_info_tbl[ent->driver_data];
- static int cards_found;
int err, pci_using_dac;
err = pci_enable_device(pdev);
@@ -3525,8 +3524,6 @@
ixgbevf_assign_netdev_ops(netdev);
- adapter->bd_number = cards_found;
-
/* Setup hw api */
memcpy(&hw->mac.ops, ii->mac_ops, sizeof(hw->mac.ops));
hw->mac.type = ii->mac;
@@ -3601,7 +3598,6 @@
hw_dbg(hw, "MAC: %d\n", hw->mac.type);
hw_dbg(hw, "Intel(R) 82599 Virtual Function\n");
- cards_found++;
return 0;
err_register:
diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
index 65a4a0f..02a2e90 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
@@ -2389,6 +2389,22 @@
}
EXPORT_SYMBOL_GPL(mlx4_phys_to_slaves_pport_actv);
+static int mlx4_slaves_closest_port(struct mlx4_dev *dev, int slave, int port)
+{
+ struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);
+ int min_port = find_first_bit(actv_ports.ports, dev->caps.num_ports)
+ + 1;
+ int max_port = min_port +
+ bitmap_weight(actv_ports.ports, dev->caps.num_ports);
+
+ if (port < min_port)
+ port = min_port;
+ else if (port >= max_port)
+ port = max_port - 1;
+
+ return port;
+}
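For example, if a slave's active-port bitmap covers ports 2 and 3 (min_port = 2, max_port = 4), a caller passing port 1 is clamped up to 2 and a caller passing 4 is clamped down to 3; in-range values pass through unchanged. The callers below apply this before indexing the per-port vport arrays.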
+
int mlx4_set_vf_mac(struct mlx4_dev *dev, int port, int vf, u64 mac)
{
struct mlx4_priv *priv = mlx4_priv(dev);
@@ -2402,6 +2418,7 @@
if (slave < 0)
return -EINVAL;
+ port = mlx4_slaves_closest_port(dev, slave, port);
s_info = &priv->mfunc.master.vf_admin[slave].vport[port];
s_info->mac = mac;
mlx4_info(dev, "default mac on vf %d port %d to %llX will take afect only after vf restart\n",
@@ -2428,6 +2445,7 @@
if (slave < 0)
return -EINVAL;
+ port = mlx4_slaves_closest_port(dev, slave, port);
vf_admin = &priv->mfunc.master.vf_admin[slave].vport[port];
if ((0 == vlan) && (0 == qos))
@@ -2455,6 +2473,7 @@
struct mlx4_priv *priv;
priv = mlx4_priv(dev);
+ port = mlx4_slaves_closest_port(dev, slave, port);
vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port];
if (MLX4_VGT != vp_oper->state.default_vlan) {
@@ -2482,6 +2501,7 @@
if (slave < 0)
return -EINVAL;
+ port = mlx4_slaves_closest_port(dev, slave, port);
s_info = &priv->mfunc.master.vf_admin[slave].vport[port];
s_info->spoofchk = setting;
@@ -2535,6 +2555,7 @@
if (slave < 0)
return -EINVAL;
+ port = mlx4_slaves_closest_port(dev, slave, port);
switch (link_state) {
case IFLA_VF_LINK_STATE_AUTO:
/* get current link state */
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
index e22f24f..35ff292 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
@@ -487,6 +487,9 @@
struct mlx4_en_dev *mdev = priv->mdev;
int err;
+ if (pause->autoneg)
+ return -EINVAL;
+
priv->prof->tx_pause = pause->tx_pause != 0;
priv->prof->rx_pause = pause->rx_pause != 0;
err = mlx4_SET_PORT_general(mdev->dev, priv->port,
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_main.c b/drivers/net/ethernet/mellanox/mlx4/en_main.c
index 3626fdf..2091ae88 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_main.c
@@ -78,27 +78,24 @@
#define MAX_PFC_TX 0xff
#define MAX_PFC_RX 0xff
-int en_print(const char *level, const struct mlx4_en_priv *priv,
- const char *format, ...)
+void en_print(const char *level, const struct mlx4_en_priv *priv,
+ const char *format, ...)
{
va_list args;
struct va_format vaf;
- int i;
va_start(args, format);
vaf.fmt = format;
vaf.va = &args;
if (priv->registered)
- i = printk("%s%s: %s: %pV",
- level, DRV_NAME, priv->dev->name, &vaf);
+ printk("%s%s: %s: %pV",
+ level, DRV_NAME, priv->dev->name, &vaf);
else
- i = printk("%s%s: %s: Port %d: %pV",
- level, DRV_NAME, dev_name(&priv->mdev->pdev->dev),
- priv->port, &vaf);
+ printk("%s%s: %s: Port %d: %pV",
+ level, DRV_NAME, dev_name(&priv->mdev->pdev->dev),
+ priv->port, &vaf);
va_end(args);
-
- return i;
}
void mlx4_en_update_loopback_state(struct net_device *dev,
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index abddcf8..f3032fe 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -2459,6 +2459,7 @@
}
priv->rx_ring_num = prof->rx_ring_num;
priv->cqe_factor = (mdev->dev->caps.cqe_size == 64) ? 1 : 0;
+ priv->cqe_size = mdev->dev->caps.cqe_size;
priv->mac_index = -1;
priv->msg_enable = MLX4_EN_MSG_LEVEL;
spin_lock_init(&priv->stats_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 14686b6..a33048e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -671,7 +671,7 @@
* descriptor offset can be deduced from the CQE index instead of
* reading 'cqe->index' */
index = cq->mcq.cons_index & ring->size_mask;
- cqe = &cq->buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
/* Process all completed CQEs */
while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
@@ -858,7 +858,7 @@
++cq->mcq.cons_index;
index = (cq->mcq.cons_index) & ring->size_mask;
- cqe = &cq->buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
if (++polled == budget)
goto out;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index bc8f51c..adedc47 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -382,7 +382,7 @@
return true;
index = cons_index & size_mask;
- cqe = &buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(buf, index, priv->cqe_size) + factor;
ring_index = ring->cons & size_mask;
stamp_index = ring_index;
@@ -430,7 +430,7 @@
++cons_index;
index = cons_index & size_mask;
- cqe = &buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(buf, index, priv->cqe_size) + factor;
}
@@ -667,6 +667,7 @@
int lso_header_size;
void *fragptr;
bool bounce = false;
+ bool send_doorbell;
if (!priv->port_up)
goto tx_drop;
@@ -878,12 +879,16 @@
skb_tx_timestamp(skb);
- if (ring->bf_enabled && desc_size <= MAX_BF && !bounce && !vlan_tx_tag_present(skb)) {
+ send_doorbell = !skb->xmit_more || netif_xmit_stopped(ring->tx_queue);
+
+ if (ring->bf_enabled && desc_size <= MAX_BF && !bounce &&
+ !vlan_tx_tag_present(skb) && send_doorbell) {
tx_desc->ctrl.bf_qpn |= cpu_to_be32(ring->doorbell_qpn);
op_own |= htonl((bf_index & 0xffff) << 8);
- /* Ensure new descirptor hits memory
- * before setting ownership of this descriptor to HW */
+ /* Ensure new descriptor hits memory
+ * before setting ownership of this descriptor to HW
+ */
wmb();
tx_desc->ctrl.owner_opcode = op_own;
@@ -896,12 +901,16 @@
ring->bf.offset ^= ring->bf.buf_size;
} else {
- /* Ensure new descirptor hits memory
- * before setting ownership of this descriptor to HW */
+ /* Ensure new descriptor hits memory
+ * before setting ownership of this descriptor to HW
+ */
wmb();
tx_desc->ctrl.owner_opcode = op_own;
- wmb();
- iowrite32be(ring->doorbell_qpn, ring->bf.uar->map + MLX4_SEND_DOORBELL);
+ if (send_doorbell) {
+ wmb();
+ iowrite32be(ring->doorbell_qpn,
+ ring->bf.uar->map + MLX4_SEND_DOORBELL);
+ }
}
return NETDEV_TX_OK;
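The xmit_more handling above is an instance of the generic doorbell-coalescing pattern; a sketch with invented helper names (only skb->xmit_more and netif_xmit_stopped() are real here):

        /* Sketch only: batch the expensive MMIO doorbell across a burst
         * of skbs. The doorbell must still be rung when the stack stops
         * the queue, or the final packets of a burst could stall.
         */
        send_doorbell = !skb->xmit_more || netif_xmit_stopped(ring->tx_queue);

        post_tx_descriptor(ring, skb);          /* hypothetical enqueue */
        if (send_doorbell) {
                wmb();                          /* descriptors visible first */
                ring_tx_doorbell(ring);         /* hypothetical MMIO write */
        }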
diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
index 2a004b3..a49c9d1 100644
--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
@@ -101,21 +101,24 @@
mb();
}
-static struct mlx4_eqe *get_eqe(struct mlx4_eq *eq, u32 entry, u8 eqe_factor)
+static struct mlx4_eqe *get_eqe(struct mlx4_eq *eq, u32 entry, u8 eqe_factor,
+ u8 eqe_size)
{
/* (entry & (eq->nent - 1)) gives us a cyclic array */
- unsigned long offset = (entry & (eq->nent - 1)) * (MLX4_EQ_ENTRY_SIZE << eqe_factor);
- /* CX3 is capable of extending the EQE from 32 to 64 bytes.
- * When this feature is enabled, the first (in the lower addresses)
+ unsigned long offset = (entry & (eq->nent - 1)) * eqe_size;
+ /* CX3 is capable of extending the EQE from 32 to 64 bytes with
+ * strides of 64B, 128B and 256B.
+ * When 64B EQE is used, the first (in the lower addresses)
* 32 bytes in the 64 byte EQE are reserved and the next 32 bytes
* contain the legacy EQE information.
+ * In all other cases, the first 32B contains the legacy EQE info.
*/
return eq->page_list[offset / PAGE_SIZE].buf + (offset + (eqe_factor ? MLX4_EQ_ENTRY_SIZE : 0)) % PAGE_SIZE;
}
-static struct mlx4_eqe *next_eqe_sw(struct mlx4_eq *eq, u8 eqe_factor)
+static struct mlx4_eqe *next_eqe_sw(struct mlx4_eq *eq, u8 eqe_factor, u8 size)
{
- struct mlx4_eqe *eqe = get_eqe(eq, eq->cons_index, eqe_factor);
+ struct mlx4_eqe *eqe = get_eqe(eq, eq->cons_index, eqe_factor, size);
return !!(eqe->owner & 0x80) ^ !!(eq->cons_index & eq->nent) ? NULL : eqe;
}
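The XOR in next_eqe_sw() is the usual ownership-parity test: hardware flips the owner bit (0x80) each time it wraps the EQ, and since eq->nent is a power of two, (eq->cons_index & eq->nent) flips in lockstep on the software side. For example, with nent = 512, entries consumed at cons_index 0-511 must show one owner parity and entries at 512-1023 the opposite; when the parities disagree, the entry still belongs to hardware and the NULL return ends the poll loop.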
@@ -459,8 +462,9 @@
enum slave_port_gen_event gen_event;
unsigned long flags;
struct mlx4_vport_state *s_info;
+ int eqe_size = dev->caps.eqe_size;
- while ((eqe = next_eqe_sw(eq, dev->caps.eqe_factor))) {
+ while ((eqe = next_eqe_sw(eq, dev->caps.eqe_factor, eqe_size))) {
/*
* Make sure we read EQ entry contents after we've
* checked the ownership bit.
@@ -894,8 +898,10 @@
eq->dev = dev;
eq->nent = roundup_pow_of_two(max(nent, 2));
- /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes */
- npages = PAGE_ALIGN(eq->nent * (MLX4_EQ_ENTRY_SIZE << dev->caps.eqe_factor)) / PAGE_SIZE;
+ /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
+ * strides of 64B, 128B and 256B.
+ */
+ npages = PAGE_ALIGN(eq->nent * dev->caps.eqe_size) / PAGE_SIZE;
eq->page_list = kmalloc(npages * sizeof *eq->page_list,
GFP_KERNEL);
@@ -997,8 +1003,10 @@
struct mlx4_cmd_mailbox *mailbox;
int err;
int i;
- /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes */
- int npages = PAGE_ALIGN((MLX4_EQ_ENTRY_SIZE << dev->caps.eqe_factor) * eq->nent) / PAGE_SIZE;
+ /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
+ * strides of 64B, 128B and 256B.
+ */
+ int npages = PAGE_ALIGN(dev->caps.eqe_size * eq->nent) / PAGE_SIZE;
mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox))
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index 494753e..13b2e4a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -137,7 +137,9 @@
[8] = "Dynamic QP updates support",
[9] = "Device managed flow steering IPoIB support",
[10] = "TCP/IP offloads/flow-steering for VXLAN support",
- [11] = "MAD DEMUX (Secure-Host) support"
+ [11] = "MAD DEMUX (Secure-Host) support",
+ [12] = "Large cache line (>64B) CQE stride support",
+ [13] = "Large cache line (>64B) EQE stride support"
};
int i;
@@ -557,6 +559,7 @@
#define QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET 0x74
#define QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET 0x76
#define QUERY_DEV_CAP_FLOW_STEERING_MAX_QP_OFFSET 0x77
+#define QUERY_DEV_CAP_CQ_EQ_CACHE_LINE_STRIDE 0x7a
#define QUERY_DEV_CAP_RDMARC_ENTRY_SZ_OFFSET 0x80
#define QUERY_DEV_CAP_QPC_ENTRY_SZ_OFFSET 0x82
#define QUERY_DEV_CAP_AUX_ENTRY_SZ_OFFSET 0x84
@@ -733,6 +736,11 @@
dev_cap->max_rq_sg = field;
MLX4_GET(size, outbox, QUERY_DEV_CAP_MAX_DESC_SZ_RQ_OFFSET);
dev_cap->max_rq_desc_sz = size;
+ MLX4_GET(field, outbox, QUERY_DEV_CAP_CQ_EQ_CACHE_LINE_STRIDE);
+ if (field & (1 << 6))
+ dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ if (field & (1 << 7))
+ dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
MLX4_GET(dev_cap->bmme_flags, outbox,
QUERY_DEV_CAP_BMME_FLAGS_OFFSET);
@@ -1376,6 +1384,7 @@
#define INIT_HCA_CQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x30)
#define INIT_HCA_LOG_CQ_OFFSET (INIT_HCA_QPC_OFFSET + 0x37)
#define INIT_HCA_EQE_CQE_OFFSETS (INIT_HCA_QPC_OFFSET + 0x38)
+#define INIT_HCA_EQE_CQE_STRIDE_OFFSET (INIT_HCA_QPC_OFFSET + 0x3b)
#define INIT_HCA_ALTC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x40)
#define INIT_HCA_AUXC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x50)
#define INIT_HCA_EQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x60)
@@ -1452,11 +1461,25 @@
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_64B_CQE) {
*(inbox + INIT_HCA_EQE_CQE_OFFSETS / 4) |= cpu_to_be32(1 << 30);
dev->caps.cqe_size = 64;
- dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_64B_CQE;
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
} else {
dev->caps.cqe_size = 32;
}
+ /* CX3 is capable of extending CQEs/EQEs to strides larger than 64B */
+ if ((dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_EQE_STRIDE) &&
+ (dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_CQE_STRIDE)) {
+ dev->caps.eqe_size = cache_line_size();
+ dev->caps.cqe_size = cache_line_size();
+ dev->caps.eqe_factor = 0;
+ MLX4_PUT(inbox, (u8)((ilog2(dev->caps.eqe_size) - 5) << 4 |
+ (ilog2(dev->caps.eqe_size) - 5)),
+ INIT_HCA_EQE_CQE_STRIDE_OFFSET);
+
+ /* Userspace still needs to know when the CQE is larger than 32B */
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
+ }
+
/* QPC/EEC/CQC/EQC/RDMARC attributes */
MLX4_PUT(inbox, param->qpc_base, INIT_HCA_QPC_BASE_OFFSET);
@@ -1616,6 +1639,17 @@
if (byte_field & 0x40) /* 64-bytes cqe enabled */
param->dev_cap_enabled |= MLX4_DEV_CAP_64B_CQE_ENABLED;
+ /* CX3 is capable of extending CQEs/EQEs to strides larger than 64B */
+ MLX4_GET(byte_field, outbox, INIT_HCA_EQE_CQE_STRIDE_OFFSET);
+ if (byte_field) {
+ param->dev_cap_enabled |= MLX4_DEV_CAP_64B_EQE_ENABLED;
+ param->dev_cap_enabled |= MLX4_DEV_CAP_64B_CQE_ENABLED;
+ param->cqe_size = 1 << ((byte_field &
+ MLX4_CQE_SIZE_MASK_STRIDE) + 5);
+ param->eqe_size = 1 << (((byte_field &
+ MLX4_EQE_SIZE_MASK_STRIDE) >> 4) + 5);
+ }
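As a worked decode of the stride fields just read (values illustrative): with the masks defined in mlx4.h below (MLX4_CQE_SIZE_MASK_STRIDE 0x3, MLX4_EQE_SIZE_MASK_STRIDE 0x30), a byte_field of 0x12 yields a 128B CQE (1 << (2 + 5)) and a 64B EQE (1 << (1 + 5)), while a zero byte_field skips the block entirely and keeps the legacy 32B layout.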
+
/* TPT attributes */
MLX4_GET(param->dmpt_base, outbox, INIT_HCA_DMPT_BASE_OFFSET);
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
index 1fce03e..9b835ae 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.h
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
@@ -178,6 +178,8 @@
u8 uar_page_sz; /* log pg sz in 4k chunks */
u8 steering_mode; /* for QUERY_HCA */
u64 dev_cap_enabled;
+ u16 cqe_size; /* For use only when CQE stride feature enabled */
+ u16 eqe_size; /* For use only when EQE stride feature enabled */
};
struct mlx4_init_ib_param {
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index 7e2d5d5..1f10023 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -104,7 +104,8 @@
MODULE_PARM_DESC(enable_64b_cqe_eqe,
"Enable 64 byte CQEs/EQEs when the FW supports this (default: True)");
-#define PF_CONTEXT_BEHAVIOUR_MASK MLX4_FUNC_CAP_64B_EQE_CQE
+#define PF_CONTEXT_BEHAVIOUR_MASK (MLX4_FUNC_CAP_64B_EQE_CQE | \
+ MLX4_FUNC_CAP_EQE_CQE_STRIDE)
static char mlx4_version[] =
DRV_NAME ": Mellanox ConnectX core driver v"
@@ -196,6 +197,40 @@
dev->caps.port_mask[i] = dev->caps.port_type[i];
}
+static void mlx4_enable_cqe_eqe_stride(struct mlx4_dev *dev)
+{
+ struct mlx4_caps *dev_cap = &dev->caps;
+
+ /* Not supported by FW, or cancelled by the user */
+ if (!(dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_EQE_STRIDE) ||
+ !(dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_CQE_STRIDE))
+ return;
+
+ /* Must have 64B CQE/EQE enabled by FW to use a bigger stride.
+ * When FW has NCSI it may decide not to report 64B CQE/EQEs
+ */
+ if (!(dev_cap->flags & MLX4_DEV_CAP_FLAG_64B_EQE) ||
+ !(dev_cap->flags & MLX4_DEV_CAP_FLAG_64B_CQE)) {
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
+ return;
+ }
+
+ if (cache_line_size() == 128 || cache_line_size() == 256) {
+ mlx4_dbg(dev, "Enabling CQE stride cacheLine supported\n");
+ /* Changing the real data inside CQE size to 32B */
+ dev_cap->flags &= ~MLX4_DEV_CAP_FLAG_64B_CQE;
+ dev_cap->flags &= ~MLX4_DEV_CAP_FLAG_64B_EQE;
+
+ if (mlx4_is_master(dev))
+ dev_cap->function_caps |= MLX4_FUNC_CAP_EQE_CQE_STRIDE;
+ } else {
+ mlx4_dbg(dev, "Disabling CQE stride cacheLine unsupported\n");
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
+ }
+}
+
static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
{
int err;
@@ -390,6 +425,14 @@
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_64B_CQE;
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_64B_EQE;
}
+
+ if (dev_cap->flags2 &
+ (MLX4_DEV_CAP_FLAG2_CQE_STRIDE |
+ MLX4_DEV_CAP_FLAG2_EQE_STRIDE)) {
+ mlx4_warn(dev, "Disabling EQE/CQE stride per user request\n");
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
+ }
}
if ((dev->caps.flags &
@@ -397,6 +440,9 @@
mlx4_is_master(dev))
dev->caps.function_caps |= MLX4_FUNC_CAP_64B_EQE_CQE;
+ if (!mlx4_is_slave(dev))
+ mlx4_enable_cqe_eqe_stride(dev);
+
return 0;
}
@@ -724,11 +770,22 @@
if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_64B_CQE_ENABLED) {
dev->caps.cqe_size = 64;
- dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_64B_CQE;
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
} else {
dev->caps.cqe_size = 32;
}
+ if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_EQE_STRIDE_ENABLED) {
+ dev->caps.eqe_size = hca_param.eqe_size;
+ dev->caps.eqe_factor = 0;
+ }
+
+ if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_CQE_STRIDE_ENABLED) {
+ dev->caps.cqe_size = hca_param.cqe_size;
+ /* Userspace still needs to know when the CQE is larger than 32B */
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
+ }
+
dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS;
mlx4_warn(dev, "Timestamping is not supported in slave mode\n");
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h
index b508c78..de10dbb 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h
@@ -285,6 +285,9 @@
#define MLX4_MPT_STATUS_SW 0xF0
#define MLX4_MPT_STATUS_HW 0x00
+#define MLX4_CQE_SIZE_MASK_STRIDE 0x3
+#define MLX4_EQE_SIZE_MASK_STRIDE 0x30
+
/*
* Must be packed because mtt_seg is 64 bits but only aligned to 32 bits.
*/
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index 3de41be..6a4fc23 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -542,6 +542,7 @@
unsigned max_mtu;
int base_qpn;
int cqe_factor;
+ int cqe_size;
struct mlx4_en_rss_map rss_map;
__be32 ctrl_flags;
@@ -612,6 +613,11 @@
struct rcu_head rcu;
};
+static inline struct mlx4_cqe *mlx4_en_get_cqe(void *buf, int idx, int cqe_sz)
+{
+ return buf + idx * cqe_sz;
+}
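This helper swaps index arithmetic on a fixed-size CQE array for explicit byte strides. A quick sanity check of the two spellings used in en_rx.c/en_tx.c above, assuming sizeof(struct mlx4_cqe) is 32 and noting that cqe_factor is 1 only when cqe_size == 64:

        /* legacy 64B CQE: cqe_size = 64, cqe_factor = 1
         *   old: &cq->buf[(index << 1) + 1]               -> buf + index*64 + 32
         *   new: mlx4_en_get_cqe(cq->buf, index, 64) + 1  -> buf + index*64 + 32
         *
         * 128B stride: cqe_size = 128, cqe_factor = 0
         *   new: mlx4_en_get_cqe(cq->buf, index, 128)     -> buf + index*128
         * (the old array indexing could not express a 128B stride)
         */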
+
#ifdef CONFIG_NET_RX_BUSY_POLL
static inline void mlx4_en_cq_init_lock(struct mlx4_en_cq *cq)
{
@@ -836,8 +842,8 @@
*/
__printf(3, 4)
-int en_print(const char *level, const struct mlx4_en_priv *priv,
- const char *format, ...);
+void en_print(const char *level, const struct mlx4_en_priv *priv,
+ const char *format, ...);
#define en_dbg(mlevel, priv, format, ...) \
do { \
diff --git a/drivers/net/ethernet/mellanox/mlx4/mr.c b/drivers/net/ethernet/mellanox/mlx4/mr.c
index 7d717ec..193a6ad 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mr.c
+++ b/drivers/net/ethernet/mellanox/mlx4/mr.c
@@ -298,6 +298,7 @@
MLX4_CMD_TIME_CLASS_B, MLX4_CMD_WRAPPED);
}
+/* Must protect against concurrent access */
int mlx4_mr_hw_get_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr,
struct mlx4_mpt_entry ***mpt_entry)
{
@@ -305,13 +306,10 @@
int key = key_to_hw_index(mmr->key) & (dev->caps.num_mpts - 1);
struct mlx4_cmd_mailbox *mailbox = NULL;
- /* Make sure that at this point we have single-threaded access only */
-
if (mmr->enabled != MLX4_MPT_EN_HW)
return -EINVAL;
err = mlx4_HW2SW_MPT(dev, NULL, key);
-
if (err) {
mlx4_warn(dev, "HW2SW_MPT failed (%d).", err);
mlx4_warn(dev, "Most likely the MR has MWs bound to it.\n");
@@ -333,7 +331,6 @@
0, MLX4_CMD_QUERY_MPT,
MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_WRAPPED);
-
if (err)
goto free_mailbox;
@@ -378,9 +375,10 @@
err = mlx4_SW2HW_MPT(dev, mailbox, key);
}
- mmr->pd = be32_to_cpu((*mpt_entry)->pd_flags) & MLX4_MPT_PD_MASK;
- if (!err)
+ if (!err) {
+ mmr->pd = be32_to_cpu((*mpt_entry)->pd_flags) & MLX4_MPT_PD_MASK;
mmr->enabled = MLX4_MPT_EN_HW;
+ }
return err;
}
EXPORT_SYMBOL_GPL(mlx4_mr_hw_write_mpt);
@@ -400,11 +398,12 @@
int mlx4_mr_hw_change_pd(struct mlx4_dev *dev, struct mlx4_mpt_entry *mpt_entry,
u32 pdn)
{
- u32 pd_flags = be32_to_cpu(mpt_entry->pd_flags);
+ u32 pd_flags = be32_to_cpu(mpt_entry->pd_flags) & ~MLX4_MPT_PD_MASK;
/* The wrapper function will put the slave's id here */
if (mlx4_is_mfunc(dev))
pd_flags &= ~MLX4_MPT_PD_VF_MASK;
- mpt_entry->pd_flags = cpu_to_be32((pd_flags & ~MLX4_MPT_PD_MASK) |
+
+ mpt_entry->pd_flags = cpu_to_be32(pd_flags |
(pdn & MLX4_MPT_PD_MASK)
| MLX4_MPT_PD_FLAG_EN_INV);
return 0;
@@ -600,14 +599,18 @@
{
int err;
- mpt_entry->start = cpu_to_be64(mr->iova);
- mpt_entry->length = cpu_to_be64(mr->size);
- mpt_entry->entity_size = cpu_to_be32(mr->mtt.page_shift);
+ mpt_entry->start = cpu_to_be64(iova);
+ mpt_entry->length = cpu_to_be64(size);
+ mpt_entry->entity_size = cpu_to_be32(page_shift);
err = mlx4_mtt_init(dev, npages, page_shift, &mr->mtt);
if (err)
return err;
+ mpt_entry->pd_flags &= cpu_to_be32(MLX4_MPT_PD_MASK |
+ MLX4_MPT_PD_FLAG_EN_INV);
+ mpt_entry->flags &= cpu_to_be32(MLX4_MPT_FLAG_FREE |
+ MLX4_MPT_FLAG_SW_OWNS);
if (mr->mtt.order < 0) {
mpt_entry->flags |= cpu_to_be32(MLX4_MPT_FLAG_PHYSICAL);
mpt_entry->mtt_addr = 0;
@@ -617,6 +620,14 @@
if (mr->mtt.page_shift == 0)
mpt_entry->mtt_sz = cpu_to_be32(1 << mr->mtt.order);
}
+ if (mr->mtt.order >= 0 && mr->mtt.page_shift == 0) {
+ /* fast register MR in free state */
+ mpt_entry->flags |= cpu_to_be32(MLX4_MPT_FLAG_FREE);
+ mpt_entry->pd_flags |= cpu_to_be32(MLX4_MPT_PD_FLAG_FAST_REG |
+ MLX4_MPT_PD_FLAG_RAE);
+ } else {
+ mpt_entry->flags |= cpu_to_be32(MLX4_MPT_FLAG_SW_OWNS);
+ }
mr->enabled = MLX4_MPT_EN_SW;
return 0;
diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c
index 9ba0c1c..94eeb2c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/port.c
+++ b/drivers/net/ethernet/mellanox/mlx4/port.c
@@ -103,7 +103,8 @@
int i;
for (i = 0; i < MLX4_MAX_MAC_NUM; i++) {
- if ((mac & MLX4_MAC_MASK) ==
+ if (table->refs[i] &&
+ (MLX4_MAC_MASK & mac) ==
(MLX4_MAC_MASK & be64_to_cpu(table->entries[i])))
return i;
}
@@ -165,12 +166,14 @@
mutex_lock(&table->mutex);
for (i = 0; i < MLX4_MAX_MAC_NUM; i++) {
- if (free < 0 && !table->entries[i]) {
- free = i;
+ if (!table->refs[i]) {
+ if (free < 0)
+ free = i;
continue;
}
- if (mac == (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
+ if ((MLX4_MAC_MASK & mac) ==
+ (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
/* MAC already registered, increment ref count */
err = i;
++table->refs[i];
diff --git a/drivers/net/ethernet/mellanox/mlx4/qp.c b/drivers/net/ethernet/mellanox/mlx4/qp.c
index 0dc31d8..2301365 100644
--- a/drivers/net/ethernet/mellanox/mlx4/qp.c
+++ b/drivers/net/ethernet/mellanox/mlx4/qp.c
@@ -390,13 +390,14 @@
EXPORT_SYMBOL_GPL(mlx4_qp_alloc);
#define MLX4_UPDATE_QP_SUPPORTED_ATTRS MLX4_UPDATE_QP_SMAC
-int mlx4_update_qp(struct mlx4_dev *dev, struct mlx4_qp *qp,
+int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn,
enum mlx4_update_qp_attr attr,
struct mlx4_update_qp_params *params)
{
struct mlx4_cmd_mailbox *mailbox;
struct mlx4_update_qp_context *cmd;
u64 pri_addr_path_mask = 0;
+ u64 qp_mask = 0;
int err = 0;
mailbox = mlx4_alloc_cmd_mailbox(dev);
@@ -413,9 +414,16 @@
cmd->qp_context.pri_path.grh_mylmc = params->smac_index;
}
- cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask);
+ if (attr & MLX4_UPDATE_QP_VSD) {
+ qp_mask |= 1ULL << MLX4_UPD_QP_MASK_VSD;
+ if (params->flags & MLX4_UPDATE_QP_PARAMS_FLAGS_VSD_ENABLE)
+ cmd->qp_context.param3 |= cpu_to_be32(MLX4_STRIP_VLAN);
+ }
- err = mlx4_cmd(dev, mailbox->dma, qp->qpn & 0xffffff, 0,
+ cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask);
+ cmd->qp_mask = cpu_to_be64(qp_mask);
+
+ err = mlx4_cmd(dev, mailbox->dma, qpn & 0xffffff, 0,
MLX4_CMD_UPDATE_QP, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
index 1089367..5d2498d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
@@ -702,11 +702,13 @@
struct mlx4_qp_context *qpc = inbox->buf + 8;
struct mlx4_vport_oper_state *vp_oper;
struct mlx4_priv *priv;
+ u32 qp_type;
int port;
port = (qpc->pri_path.sched_queue & 0x40) ? 2 : 1;
priv = mlx4_priv(dev);
vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port];
+ qp_type = (be32_to_cpu(qpc->flags) >> 16) & 0xff;
if (MLX4_VGT != vp_oper->state.default_vlan) {
/* the reserved QPs (special, proxy, tunnel)
@@ -715,8 +717,20 @@
if (mlx4_is_qp_reserved(dev, qpn))
return 0;
- /* force strip vlan by clear vsd */
- qpc->param3 &= ~cpu_to_be32(MLX4_STRIP_VLAN);
+ /* force vlan stripping by clearing VSD; an MLX QP here refers to Raw Ethernet */
+ if (qp_type == MLX4_QP_ST_UD ||
+ (qp_type == MLX4_QP_ST_MLX && mlx4_is_eth(dev, port))) {
+ if (dev->caps.bmme_flags & MLX4_BMME_FLAG_VSD_INIT2RTR) {
+ *(__be32 *)inbox->buf =
+ cpu_to_be32(be32_to_cpu(*(__be32 *)inbox->buf) |
+ MLX4_QP_OPTPAR_VLAN_STRIPPING);
+ qpc->param3 &= ~cpu_to_be32(MLX4_STRIP_VLAN);
+ } else {
+ struct mlx4_update_qp_params params = {.flags = 0};
+
+ mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, ¶ms);
+ }
+ }
if (vp_oper->state.link_state == IFLA_VF_LINK_STATE_DISABLE &&
dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_UPDATE_QP) {
@@ -3998,13 +4012,17 @@
}
port = (rqp->sched_queue >> 6 & 1) + 1;
- smac_index = cmd->qp_context.pri_path.grh_mylmc;
- err = mac_find_smac_ix_in_slave(dev, slave, port,
- smac_index, &mac);
- if (err) {
- mlx4_err(dev, "Failed to update qpn 0x%x, MAC is invalid. smac_ix: %d\n",
- qpn, smac_index);
- goto err_mac;
+
+ if (pri_addr_path_mask & (1ULL << MLX4_UPD_QP_PATH_MASK_MAC_INDEX)) {
+ smac_index = cmd->qp_context.pri_path.grh_mylmc;
+ err = mac_find_smac_ix_in_slave(dev, slave, port,
+ smac_index, &mac);
+
+ if (err) {
+ mlx4_err(dev, "Failed to update qpn 0x%x, MAC is invalid. smac_ix: %d\n",
+ qpn, smac_index);
+ goto err_mac;
+ }
}
err = mlx4_cmd(dev, inbox->dma,
@@ -4818,7 +4836,7 @@
MLX4_VLAN_CTRL_ETH_RX_BLOCK_UNTAGGED;
upd_context = mailbox->buf;
- upd_context->qp_mask = cpu_to_be64(MLX4_UPD_QP_MASK_VSD);
+ upd_context->qp_mask = cpu_to_be64(1ULL << MLX4_UPD_QP_MASK_VSD);
spin_lock_irq(mlx4_tlock(dev));
list_for_each_entry_safe(qp, tmp, qp_list, com.list) {
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
index 4f40d7b..cc0485e 100644
--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c
+++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c
@@ -3537,7 +3537,7 @@
vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d", vdev->ndev->name,
__func__, __LINE__);
- strncpy(buf, dev->name, IFNAMSIZ);
+ strlcpy(buf, dev->name, IFNAMSIZ);
flush_work(&vdev->reset_task);
diff --git a/drivers/net/ethernet/octeon/octeon_mgmt.c b/drivers/net/ethernet/octeon/octeon_mgmt.c
index 979c698..a422930 100644
--- a/drivers/net/ethernet/octeon/octeon_mgmt.c
+++ b/drivers/net/ethernet/octeon/octeon_mgmt.c
@@ -290,9 +290,11 @@
/* Read the hardware TX timestamp if one was recorded */
if (unlikely(re.s.tstamp)) {
struct skb_shared_hwtstamps ts;
+ u64 ns;
+
memset(&ts, 0, sizeof(ts));
/* Read the timestamp */
- u64 ns = cvmx_read_csr(CVMX_MIXX_TSTAMP(p->port));
+ ns = cvmx_read_csr(CVMX_MIXX_TSTAMP(p->port));
/* Remove the timestamp from the FIFO */
cvmx_write_csr(CVMX_MIXX_TSCTL(p->port), 0);
/* Tell the kernel about the timestamp */
diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/Kconfig b/drivers/net/ethernet/oki-semi/pch_gbe/Kconfig
index 44c8be1..5f7a352 100644
--- a/drivers/net/ethernet/oki-semi/pch_gbe/Kconfig
+++ b/drivers/net/ethernet/oki-semi/pch_gbe/Kconfig
@@ -7,6 +7,7 @@
depends on PCI && (X86_32 || COMPILE_TEST)
select MII
select PTP_1588_CLOCK_PCH
+ select NET_PTP_CLASSIFY
---help---
This is a gigabit ethernet driver for EG20T PCH.
EG20T PCH is the platform controller hub that is used in Intel's
diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
index 3e96f26..6c904a6 100644
--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
@@ -1922,7 +1922,7 @@
sbq_desc->p.skb = NULL;
skb_reserve(skb, NET_IP_ALIGN);
}
- while (length > 0) {
+ do {
lbq_desc = ql_get_curr_lchunk(qdev, rx_ring);
size = (length < rx_ring->lbq_buf_size) ? length :
rx_ring->lbq_buf_size;
@@ -1939,7 +1939,7 @@
skb->truesize += size;
length -= size;
i++;
- }
+ } while (length > 0);
ql_update_mac_hdr_len(qdev, ib_mac_rsp, lbq_desc->p.pg_chunk.va,
&hlen);
__pskb_pull_tail(skb, hlen);
diff --git a/drivers/net/ethernet/qualcomm/Kconfig b/drivers/net/ethernet/qualcomm/Kconfig
new file mode 100644
index 0000000..f3a4714
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/Kconfig
@@ -0,0 +1,30 @@
+#
+# Qualcomm network device configuration
+#
+
+config NET_VENDOR_QUALCOMM
+ bool "Qualcomm devices"
+ default y
+ depends on SPI_MASTER && OF_GPIO
+ ---help---
+ If you have a network (Ethernet) card belonging to this class, say Y
+ and read the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Qualcomm cards. If you say Y, you will be asked
+ for your specific card in the following questions.
+
+if NET_VENDOR_QUALCOMM
+
+config QCA7000
+ tristate "Qualcomm Atheros QCA7000 support"
+ depends on SPI_MASTER && OF_GPIO
+ ---help---
+ This SPI protocol driver supports the Qualcomm Atheros QCA7000.
+
+ To compile this driver as a module, choose M here. The module
+ will be called qcaspi.
+
+endif # NET_VENDOR_QUALCOMM
diff --git a/drivers/net/ethernet/qualcomm/Makefile b/drivers/net/ethernet/qualcomm/Makefile
new file mode 100644
index 0000000..9da2d75
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/Makefile
@@ -0,0 +1,6 @@
+#
+# Makefile for the Qualcomm network device drivers.
+#
+
+obj-$(CONFIG_QCA7000) += qcaspi.o
+qcaspi-objs := qca_spi.o qca_framing.o qca_7k.o qca_debug.o
diff --git a/drivers/net/ethernet/qualcomm/qca_7k.c b/drivers/net/ethernet/qualcomm/qca_7k.c
new file mode 100644
index 0000000..f0066fb
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_7k.c
@@ -0,0 +1,149 @@
+/*
+ *
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+/* This module implements the Qualcomm Atheros SPI protocol for a
+ * kernel-based SPI device.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spi/spi.h>
+#include <linux/version.h>
+
+#include "qca_7k.h"
+
+void
+qcaspi_spi_error(struct qcaspi *qca)
+{
+ if (qca->sync != QCASPI_SYNC_READY)
+ return;
+
+ netdev_err(qca->net_dev, "spi error\n");
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ qca->stats.spi_err++;
+}
+
+int
+qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result)
+{
+ __be16 rx_data;
+ __be16 tx_data;
+ struct spi_transfer *transfer;
+ struct spi_message *msg;
+ int ret;
+
+ tx_data = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | reg);
+
+ if (qca->legacy_mode) {
+ msg = &qca->spi_msg1;
+ transfer = &qca->spi_xfer1;
+ transfer->tx_buf = &tx_data;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ spi_sync(qca->spi_dev, msg);
+ } else {
+ msg = &qca->spi_msg2;
+ transfer = &qca->spi_xfer2[0];
+ transfer->tx_buf = &tx_data;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ }
+ transfer->tx_buf = NULL;
+ transfer->rx_buf = &rx_data;
+ transfer->len = QCASPI_CMD_LEN;
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (!ret)
+ ret = msg->status;
+
+ if (ret)
+ qcaspi_spi_error(qca);
+ else
+ *result = be16_to_cpu(rx_data);
+
+ return ret;
+}
+
+int
+qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value)
+{
+ __be16 tx_data[2];
+ struct spi_transfer *transfer;
+ struct spi_message *msg;
+ int ret;
+
+ tx_data[0] = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_INTERNAL | reg);
+ tx_data[1] = cpu_to_be16(value);
+
+ if (qca->legacy_mode) {
+ msg = &qca->spi_msg1;
+ transfer = &qca->spi_xfer1;
+ transfer->tx_buf = &tx_data[0];
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ spi_sync(qca->spi_dev, msg);
+ } else {
+ msg = &qca->spi_msg2;
+ transfer = &qca->spi_xfer2[0];
+ transfer->tx_buf = &tx_data[0];
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ }
+ transfer->tx_buf = &tx_data[1];
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (!ret)
+ ret = msg->status;
+
+ if (ret)
+ qcaspi_spi_error(qca);
+
+ return ret;
+}
+
+int
+qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd)
+{
+ __be16 tx_data;
+ struct spi_message *msg = &qca->spi_msg1;
+ struct spi_transfer *transfer = &qca->spi_xfer1;
+ int ret;
+
+ tx_data = cpu_to_be16(cmd);
+ transfer->len = sizeof(tx_data);
+ transfer->tx_buf = &tx_data;
+ transfer->rx_buf = NULL;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (!ret)
+ ret = msg->status;
+
+ if (ret)
+ qcaspi_spi_error(qca);
+
+ return ret;
+}
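
For reference, the register accessors above all build the same 16-bit command
word: bit 15 selects read vs. write, bit 14 selects an internal register vs.
the external packet buffer, and the remaining bits carry the register address
(the constants in qca_7k.h are pre-shifted). A minimal user-space sketch of
that composition, assuming the bit layout from qca_7k.h; it is an
illustration, not part of the patch:

	#include <stdint.h>
	#include <stdio.h>

	#define QCA7K_SPI_READ     (1 << 15)	/* bit 15: 1 = read, 0 = write */
	#define QCA7K_SPI_INTERNAL (1 << 14)	/* bit 14: 1 = register, 0 = buffer */
	#define SPI_REG_SIGNATURE  0x1A00	/* register address, pre-shifted */

	int main(void)
	{
		/* Same composition as the cpu_to_be16() argument in
		 * qcaspi_read_register(); the result goes out big endian.
		 */
		uint16_t cmd = QCA7K_SPI_READ | QCA7K_SPI_INTERNAL |
			       SPI_REG_SIGNATURE;

		printf("command word: 0x%04x\n", cmd);	/* prints 0xda00 */
		return 0;
	}
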
diff --git a/drivers/net/ethernet/qualcomm/qca_7k.h b/drivers/net/ethernet/qualcomm/qca_7k.h
new file mode 100644
index 0000000..1cad851
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_7k.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+/* Qualcomm Atheros SPI register definition.
+ *
+ * This module is designed to define the Qualcomm Atheros SPI
+ * register placeholders.
+ */
+
+#ifndef _QCA_7K_H
+#define _QCA_7K_H
+
+#include <linux/types.h>
+
+#include "qca_spi.h"
+
+#define QCA7K_SPI_READ (1 << 15)
+#define QCA7K_SPI_WRITE (0 << 15)
+#define QCA7K_SPI_INTERNAL (1 << 14)
+#define QCA7K_SPI_EXTERNAL (0 << 14)
+
+#define QCASPI_CMD_LEN 2
+#define QCASPI_HW_PKT_LEN 4
+#define QCASPI_HW_BUF_LEN 0xC5B
+
+/* SPI registers. */
+#define SPI_REG_BFR_SIZE 0x0100
+#define SPI_REG_WRBUF_SPC_AVA 0x0200
+#define SPI_REG_RDBUF_BYTE_AVA 0x0300
+#define SPI_REG_SPI_CONFIG 0x0400
+#define SPI_REG_SPI_STATUS 0x0500
+#define SPI_REG_INTR_CAUSE 0x0C00
+#define SPI_REG_INTR_ENABLE 0x0D00
+#define SPI_REG_RDBUF_WATERMARK 0x1200
+#define SPI_REG_WRBUF_WATERMARK 0x1300
+#define SPI_REG_SIGNATURE 0x1A00
+#define SPI_REG_ACTION_CTRL 0x1B00
+
+/* SPI_CONFIG register definition. */
+#define QCASPI_SLAVE_RESET_BIT (1 << 6)
+
+/* INTR_CAUSE/ENABLE register definition. */
+#define SPI_INT_WRBUF_BELOW_WM (1 << 10)
+#define SPI_INT_CPU_ON (1 << 6)
+#define SPI_INT_ADDR_ERR (1 << 3)
+#define SPI_INT_WRBUF_ERR (1 << 2)
+#define SPI_INT_RDBUF_ERR (1 << 1)
+#define SPI_INT_PKT_AVLBL (1 << 0)
+
+void qcaspi_spi_error(struct qcaspi *qca);
+int qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result);
+int qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value);
+int qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd);
+
+#endif /* _QCA_7K_H */
diff --git a/drivers/net/ethernet/qualcomm/qca_debug.c b/drivers/net/ethernet/qualcomm/qca_debug.c
new file mode 100644
index 0000000..8e28234
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_debug.c
@@ -0,0 +1,311 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* This file contains debugging routines for use in the QCA7K driver.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/ethtool.h>
+#include <linux/seq_file.h>
+#include <linux/types.h>
+
+#include "qca_7k.h"
+#include "qca_debug.h"
+
+#define QCASPI_MAX_REGS 0x20
+
+static const u16 qcaspi_spi_regs[] = {
+ SPI_REG_BFR_SIZE,
+ SPI_REG_WRBUF_SPC_AVA,
+ SPI_REG_RDBUF_BYTE_AVA,
+ SPI_REG_SPI_CONFIG,
+ SPI_REG_SPI_STATUS,
+ SPI_REG_INTR_CAUSE,
+ SPI_REG_INTR_ENABLE,
+ SPI_REG_RDBUF_WATERMARK,
+ SPI_REG_WRBUF_WATERMARK,
+ SPI_REG_SIGNATURE,
+ SPI_REG_ACTION_CTRL
+};
+
+/* The order of these strings must match the order of the fields in
+ * struct qcaspi_stats
+ * See qca_spi.h
+ */
+static const char qcaspi_gstrings_stats[][ETH_GSTRING_LEN] = {
+ "Triggered resets",
+ "Device resets",
+ "Reset timeouts",
+ "Read errors",
+ "Write errors",
+ "Read buffer errors",
+ "Write buffer errors",
+ "Out of memory",
+ "Write buffer misses",
+ "Transmit ring full",
+ "SPI errors",
+};
+
+#ifdef CONFIG_DEBUG_FS
+
+static int
+qcaspi_info_show(struct seq_file *s, void *what)
+{
+ struct qcaspi *qca = s->private;
+
+ seq_printf(s, "RX buffer size : %lu\n",
+ (unsigned long)qca->buffer_size);
+
+ seq_puts(s, "TX ring state : ");
+
+ if (qca->txr.skb[qca->txr.head] == NULL)
+ seq_puts(s, "empty");
+ else if (qca->txr.skb[qca->txr.tail])
+ seq_puts(s, "full");
+ else
+ seq_puts(s, "in use");
+
+ seq_puts(s, "\n");
+
+ seq_printf(s, "TX ring size : %u\n",
+ qca->txr.size);
+
+ seq_printf(s, "Sync state : %u (",
+ (unsigned int)qca->sync);
+ switch (qca->sync) {
+ case QCASPI_SYNC_UNKNOWN:
+ seq_puts(s, "QCASPI_SYNC_UNKNOWN");
+ break;
+ case QCASPI_SYNC_RESET:
+ seq_puts(s, "QCASPI_SYNC_RESET");
+ break;
+ case QCASPI_SYNC_READY:
+ seq_puts(s, "QCASPI_SYNC_READY");
+ break;
+ default:
+ seq_puts(s, "INVALID");
+ break;
+ }
+ seq_puts(s, ")\n");
+
+ seq_printf(s, "IRQ : %d\n",
+ qca->spi_dev->irq);
+ seq_printf(s, "INTR REQ : %u\n",
+ qca->intr_req);
+ seq_printf(s, "INTR SVC : %u\n",
+ qca->intr_svc);
+
+ seq_printf(s, "SPI max speed : %lu\n",
+ (unsigned long)qca->spi_dev->max_speed_hz);
+ seq_printf(s, "SPI mode : %x\n",
+ qca->spi_dev->mode);
+ seq_printf(s, "SPI chip select : %u\n",
+ (unsigned int)qca->spi_dev->chip_select);
+ seq_printf(s, "SPI legacy mode : %u\n",
+ (unsigned int)qca->legacy_mode);
+ seq_printf(s, "SPI burst length : %u\n",
+ (unsigned int)qca->burst_len);
+
+ return 0;
+}
+
+static int
+qcaspi_info_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, qcaspi_info_show, inode->i_private);
+}
+
+static const struct file_operations qcaspi_info_ops = {
+ .open = qcaspi_info_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+void
+qcaspi_init_device_debugfs(struct qcaspi *qca)
+{
+ struct dentry *device_root;
+
+ device_root = debugfs_create_dir(dev_name(&qca->net_dev->dev), NULL);
+ qca->device_root = device_root;
+
+ if (IS_ERR(device_root) || !device_root) {
+ pr_warn("failed to create debugfs directory for %s\n",
+ dev_name(&qca->net_dev->dev));
+ return;
+ }
+ debugfs_create_file("info", S_IFREG | S_IRUGO, device_root, qca,
+ &qcaspi_info_ops);
+}
+
+void
+qcaspi_remove_device_debugfs(struct qcaspi *qca)
+{
+ debugfs_remove_recursive(qca->device_root);
+}
+
+#else /* CONFIG_DEBUG_FS */
+
+void
+qcaspi_init_device_debugfs(struct qcaspi *qca)
+{
+}
+
+void
+qcaspi_remove_device_debugfs(struct qcaspi *qca)
+{
+}
+
+#endif
+
+static void
+qcaspi_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *p)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ strlcpy(p->driver, QCASPI_DRV_NAME, sizeof(p->driver));
+ strlcpy(p->version, QCASPI_DRV_VERSION, sizeof(p->version));
+ strlcpy(p->fw_version, "QCA7000", sizeof(p->fw_version));
+ strlcpy(p->bus_info, dev_name(&qca->spi_dev->dev),
+ sizeof(p->bus_info));
+}
+
+static int
+qcaspi_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ cmd->transceiver = XCVR_INTERNAL;
+ cmd->supported = SUPPORTED_10baseT_Half;
+ ethtool_cmd_speed_set(cmd, SPEED_10);
+ cmd->duplex = DUPLEX_HALF;
+ cmd->port = PORT_OTHER;
+ cmd->autoneg = AUTONEG_DISABLE;
+
+ return 0;
+}
+
+static void
+qcaspi_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *data)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+ struct qcaspi_stats *st = &qca->stats;
+
+ memcpy(data, st, ARRAY_SIZE(qcaspi_gstrings_stats) * sizeof(u64));
+}
+
+static void
+qcaspi_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+{
+ switch (stringset) {
+ case ETH_SS_STATS:
+ memcpy(buf, &qcaspi_gstrings_stats,
+ sizeof(qcaspi_gstrings_stats));
+ break;
+ default:
+ WARN_ON(1);
+ break;
+ }
+}
+
+static int
+qcaspi_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(qcaspi_gstrings_stats);
+ default:
+ return -EINVAL;
+ }
+}
+
+static int
+qcaspi_get_regs_len(struct net_device *dev)
+{
+ return sizeof(u32) * QCASPI_MAX_REGS;
+}
+
+static void
+qcaspi_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *p)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+ u32 *regs_buff = p;
+ unsigned int i;
+
+ regs->version = 1;
+ memset(regs_buff, 0, sizeof(u32) * QCASPI_MAX_REGS);
+
+ for (i = 0; i < ARRAY_SIZE(qcaspi_spi_regs); i++) {
+ u16 offset, value;
+
+ qcaspi_read_register(qca, qcaspi_spi_regs[i], &value);
+ offset = qcaspi_spi_regs[i] >> 8;
+ regs_buff[offset] = value;
+ }
+}
+
+static void
+qcaspi_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ ring->rx_max_pending = 4;
+ ring->tx_max_pending = TX_RING_MAX_LEN;
+ ring->rx_pending = 4;
+ ring->tx_pending = qca->txr.count;
+}
+
+static int
+qcaspi_set_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ if ((ring->rx_pending) ||
+ (ring->rx_mini_pending) ||
+ (ring->rx_jumbo_pending))
+ return -EINVAL;
+
+ if (netif_running(dev))
+ qcaspi_netdev_close(dev);
+
+ qca->txr.count = max_t(u32, ring->tx_pending, TX_RING_MIN_LEN);
+ qca->txr.count = min_t(u16, qca->txr.count, TX_RING_MAX_LEN);
+
+ if (netif_running(dev))
+ qcaspi_netdev_open(dev);
+
+ return 0;
+}
+
+static const struct ethtool_ops qcaspi_ethtool_ops = {
+ .get_drvinfo = qcaspi_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_settings = qcaspi_get_settings,
+ .get_ethtool_stats = qcaspi_get_ethtool_stats,
+ .get_strings = qcaspi_get_strings,
+ .get_sset_count = qcaspi_get_sset_count,
+ .get_regs_len = qcaspi_get_regs_len,
+ .get_regs = qcaspi_get_regs,
+ .get_ringparam = qcaspi_get_ringparam,
+ .set_ringparam = qcaspi_set_ringparam,
+};
+
+void qcaspi_set_ethtool_ops(struct net_device *dev)
+{
+ dev->ethtool_ops = &qcaspi_ethtool_ops;
+}
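
The string table above must stay in lock step with struct qcaspi_stats, as
the comment in this file notes. A hedged sketch of a compile-time guard that
would catch a mismatch -- hypothetical, using C11 _Static_assert and a
stand-in for the real struct; the patch itself relies on the comment alone:

	#include <assert.h>
	#include <stdint.h>

	/* Stand-in mirroring struct qcaspi_stats from qca_spi.h */
	struct qcaspi_stats {
		uint64_t trig_reset, device_reset, reset_timeout, read_err,
			 write_err, read_buf_err, write_buf_err, out_of_mem,
			 write_buf_miss, ring_full, spi_err;
	};

	#define QCASPI_N_STATS_STRINGS 11	/* rows in qcaspi_gstrings_stats */

	/* Fails to compile if a field is added without a matching string */
	static_assert(sizeof(struct qcaspi_stats) ==
		      QCASPI_N_STATS_STRINGS * sizeof(uint64_t),
		      "ethtool stat strings out of sync with struct qcaspi_stats");

	int main(void) { return 0; }
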
diff --git a/drivers/net/ethernet/qualcomm/qca_debug.h b/drivers/net/ethernet/qualcomm/qca_debug.h
new file mode 100644
index 0000000..46a7858
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_debug.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* This file contains debugging routines for use in the QCA7K driver.
+ */
+
+#ifndef _QCA_DEBUG_H
+#define _QCA_DEBUG_H
+
+#include "qca_spi.h"
+
+void qcaspi_init_device_debugfs(struct qcaspi *qca);
+
+void qcaspi_remove_device_debugfs(struct qcaspi *qca);
+
+void qcaspi_set_ethtool_ops(struct net_device *dev);
+
+#endif /* _QCA_DEBUG_H */
diff --git a/drivers/net/ethernet/qualcomm/qca_framing.c b/drivers/net/ethernet/qualcomm/qca_framing.c
new file mode 100644
index 0000000..faa924c
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_framing.c
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) 2011, 2012, Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Atheros Ethernet framing. Every Ethernet frame is surrounded
+ * by an Atheros frame while transmitted over a serial channel.
+ */
+
+#include <linux/kernel.h>
+
+#include "qca_framing.h"
+
+u16
+qcafrm_create_header(u8 *buf, u16 length)
+{
+	if (!buf)
+		return 0;
+
+	buf[0] = 0xAA;
+	buf[1] = 0xAA;
+	buf[2] = 0xAA;
+	buf[3] = 0xAA;
+	/* Emit the length little endian regardless of host byte order;
+	 * shifting a cpu_to_le16() value would double-swap on big endian.
+	 */
+	buf[4] = length & 0xff;
+	buf[5] = (length >> 8) & 0xff;
+ buf[6] = 0;
+ buf[7] = 0;
+
+ return QCAFRM_HEADER_LEN;
+}
+
+u16
+qcafrm_create_footer(u8 *buf)
+{
+ if (!buf)
+ return 0;
+
+ buf[0] = 0x55;
+ buf[1] = 0x55;
+ return QCAFRM_FOOTER_LEN;
+}
+
+/* Gather received bytes and try to extract a full ethernet frame by
+ * following a simple state machine.
+ *
+ * Return: QCAFRM_GATHER No ethernet frame fully received yet.
+ * QCAFRM_NOHEAD Header expected but not found.
+ * QCAFRM_INVLEN Atheros frame length is invalid
+ * QCAFRM_NOTAIL Footer expected but not found.
+ *         > 0            Number of bytes in the fully received
+ * Ethernet frame
+ */
+
+s32
+qcafrm_fsm_decode(struct qcafrm_handle *handle, u8 *buf, u16 buf_len, u8 recv_byte)
+{
+ s32 ret = QCAFRM_GATHER;
+ u16 len;
+
+ switch (handle->state) {
+ case QCAFRM_HW_LEN0:
+ case QCAFRM_HW_LEN1:
+ /* by default, just go to next state */
+ handle->state--;
+
+ if (recv_byte != 0x00) {
+ /* first two bytes of length must be 0 */
+ handle->state = QCAFRM_HW_LEN0;
+ }
+ break;
+ case QCAFRM_HW_LEN2:
+ case QCAFRM_HW_LEN3:
+ handle->state--;
+ break;
+ /* 4 bytes header pattern */
+ case QCAFRM_WAIT_AA1:
+ case QCAFRM_WAIT_AA2:
+ case QCAFRM_WAIT_AA3:
+ case QCAFRM_WAIT_AA4:
+ if (recv_byte != 0xAA) {
+ ret = QCAFRM_NOHEAD;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ handle->state--;
+ }
+ break;
+ /* 2 bytes length. */
+ /* Borrow offset field to hold length for now. */
+ case QCAFRM_WAIT_LEN_BYTE0:
+ handle->offset = recv_byte;
+ handle->state = QCAFRM_WAIT_LEN_BYTE1;
+ break;
+ case QCAFRM_WAIT_LEN_BYTE1:
+ handle->offset = handle->offset | (recv_byte << 8);
+ handle->state = QCAFRM_WAIT_RSVD_BYTE1;
+ break;
+ case QCAFRM_WAIT_RSVD_BYTE1:
+ handle->state = QCAFRM_WAIT_RSVD_BYTE2;
+ break;
+ case QCAFRM_WAIT_RSVD_BYTE2:
+ len = handle->offset;
+ if (len > buf_len || len < QCAFRM_ETHMINLEN) {
+ ret = QCAFRM_INVLEN;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ handle->state = (enum qcafrm_state)(len + 1);
+ /* Remaining number of bytes. */
+ handle->offset = 0;
+ }
+ break;
+ default:
+ /* Receiving Ethernet frame itself. */
+ buf[handle->offset] = recv_byte;
+ handle->offset++;
+ handle->state--;
+ break;
+ case QCAFRM_WAIT_551:
+ if (recv_byte != 0x55) {
+ ret = QCAFRM_NOTAIL;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ handle->state = QCAFRM_WAIT_552;
+ }
+ break;
+ case QCAFRM_WAIT_552:
+ if (recv_byte != 0x55) {
+ ret = QCAFRM_NOTAIL;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ ret = handle->offset;
+ /* Frame is fully received. */
+ handle->state = QCAFRM_HW_LEN0;
+ }
+ break;
+ }
+
+ return ret;
+}
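
Putting qcafrm_create_header() and qcafrm_create_footer() together, the
on-wire layout for a padded minimum Ethernet frame is: 4 x 0xAA, a 16-bit
little-endian length, two reserved zero bytes, the frame itself, then
2 x 0x55. A user-space sketch of that layout (constants copied from
qca_framing.h, payload zeroed for brevity):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define QCAFRM_HEADER_LEN 8
	#define QCAFRM_FOOTER_LEN 2

	int main(void)
	{
		uint16_t len = 60;	/* padded minimum Ethernet frame */
		uint8_t wire[QCAFRM_HEADER_LEN + 60 + QCAFRM_FOOTER_LEN];

		memset(wire, 0xAA, 4);			/* start-of-frame marker */
		wire[4] = len & 0xff;			/* length, little endian */
		wire[5] = (len >> 8) & 0xff;
		wire[6] = 0;				/* reserved */
		wire[7] = 0;
		memset(wire + QCAFRM_HEADER_LEN, 0, len);	/* Ethernet frame */
		wire[QCAFRM_HEADER_LEN + len] = 0x55;		/* end-of-frame marker */
		wire[QCAFRM_HEADER_LEN + len + 1] = 0x55;

		printf("%zu bytes on the wire for a %u byte frame\n",
		       sizeof(wire), len);
		return 0;
	}
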
diff --git a/drivers/net/ethernet/qualcomm/qca_framing.h b/drivers/net/ethernet/qualcomm/qca_framing.h
new file mode 100644
index 0000000..5d96595
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_framing.h
@@ -0,0 +1,134 @@
+/*
+ * Copyright (c) 2011, 2012, Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Atheros Ethernet framing. Every Ethernet frame is surrounded by an
+ * Atheros frame while transmitted over a serial channel.
+ */
+
+#ifndef _QCA_FRAMING_H
+#define _QCA_FRAMING_H
+
+#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
+#include <linux/types.h>
+
+/* Frame is currently being received */
+#define QCAFRM_GATHER 0
+
+/* No header byte while expecting it */
+#define QCAFRM_NOHEAD (QCAFRM_ERR_BASE - 1)
+
+/* No trailer byte while expecting it */
+#define QCAFRM_NOTAIL (QCAFRM_ERR_BASE - 2)
+
+/* Frame length is invalid */
+#define QCAFRM_INVLEN (QCAFRM_ERR_BASE - 3)
+
+/* Frame is invalid */
+#define QCAFRM_INVFRAME (QCAFRM_ERR_BASE - 4)
+
+/* Min/Max Ethernet MTU */
+#define QCAFRM_ETHMINMTU 46
+#define QCAFRM_ETHMAXMTU 1500
+
+/* Min/Max frame lengths */
+#define QCAFRM_ETHMINLEN (QCAFRM_ETHMINMTU + ETH_HLEN)
+#define QCAFRM_ETHMAXLEN (QCAFRM_ETHMAXMTU + VLAN_ETH_HLEN)
+
+/* QCA7K header len */
+#define QCAFRM_HEADER_LEN 8
+
+/* QCA7K footer len */
+#define QCAFRM_FOOTER_LEN 2
+
+/* QCA7K Framing. */
+#define QCAFRM_ERR_BASE -1000
+
+enum qcafrm_state {
+ QCAFRM_HW_LEN0 = 0x8000,
+ QCAFRM_HW_LEN1 = QCAFRM_HW_LEN0 - 1,
+ QCAFRM_HW_LEN2 = QCAFRM_HW_LEN1 - 1,
+ QCAFRM_HW_LEN3 = QCAFRM_HW_LEN2 - 1,
+
+ /* Waiting first 0xAA of header */
+ QCAFRM_WAIT_AA1 = QCAFRM_HW_LEN3 - 1,
+
+ /* Waiting second 0xAA of header */
+ QCAFRM_WAIT_AA2 = QCAFRM_WAIT_AA1 - 1,
+
+ /* Waiting third 0xAA of header */
+ QCAFRM_WAIT_AA3 = QCAFRM_WAIT_AA2 - 1,
+
+ /* Waiting fourth 0xAA of header */
+ QCAFRM_WAIT_AA4 = QCAFRM_WAIT_AA3 - 1,
+
+	/* Waiting byte 0-1 of length (little endian) */
+ QCAFRM_WAIT_LEN_BYTE0 = QCAFRM_WAIT_AA4 - 1,
+ QCAFRM_WAIT_LEN_BYTE1 = QCAFRM_WAIT_AA4 - 2,
+
+ /* Reserved bytes */
+ QCAFRM_WAIT_RSVD_BYTE1 = QCAFRM_WAIT_AA4 - 3,
+ QCAFRM_WAIT_RSVD_BYTE2 = QCAFRM_WAIT_AA4 - 4,
+
+	/* The frame length is used as the state until
+	 * the end of the Ethernet frame.
+	 * Waiting for first 0x55 of footer.
+	 */
+ QCAFRM_WAIT_551 = 1,
+
+ /* Waiting for second 0x55 of footer */
+ QCAFRM_WAIT_552 = QCAFRM_WAIT_551 - 1
+};
+
+/* Structure to maintain the frame decoding during reception. */
+
+struct qcafrm_handle {
+ /* Current decoding state */
+ enum qcafrm_state state;
+
+ /* Offset in buffer (borrowed for length too) */
+ s16 offset;
+
+ /* Frame length as kept by this module */
+ u16 len;
+};
+
+u16 qcafrm_create_header(u8 *buf, u16 len);
+
+u16 qcafrm_create_footer(u8 *buf);
+
+static inline void qcafrm_fsm_init(struct qcafrm_handle *handle)
+{
+ handle->state = QCAFRM_HW_LEN0;
+}
+
+/* Gather received bytes and try to extract a full Ethernet frame
+ * by following a simple state machine.
+ *
+ * Return: QCAFRM_GATHER No Ethernet frame fully received yet.
+ * QCAFRM_NOHEAD Header expected but not found.
+ * QCAFRM_INVLEN QCA7K frame length is invalid
+ * QCAFRM_NOTAIL Footer expected but not found.
+ *         > 0            Number of bytes in the fully received
+ * Ethernet frame
+ */
+
+s32 qcafrm_fsm_decode(struct qcafrm_handle *handle, u8 *buf, u16 buf_len, u8 recv_byte);
+
+#endif /* _QCA_FRAMING_H */
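
The enum above leans on a countdown trick: once the length bytes are parsed,
qcafrm_fsm_decode() sets the state to len + 1 and lets its default case
decrement it once per stored payload byte, so the FSM lands exactly on
QCAFRM_WAIT_551 (== 1) when the footer is due. A small user-space sketch of
that arithmetic, with the frame length assumed:

	#include <stdio.h>

	#define QCAFRM_WAIT_551 1	/* first footer byte expected */

	int main(void)
	{
		int len = 60;		/* length decoded from the header */
		int state = len + 1;	/* as set in QCAFRM_WAIT_RSVD_BYTE2 */
		int stored = 0;

		/* One payload byte stored per step, mirroring the default case */
		while (state > QCAFRM_WAIT_551) {
			stored++;
			state--;
		}

		printf("stored %d payload bytes, state is now %d\n",
		       stored, state);
		return 0;
	}
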
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
new file mode 100644
index 0000000..74eb520
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_spi.c
@@ -0,0 +1,993 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* This module implements the Qualcomm Atheros SPI protocol for a
+ * kernel-based SPI device; it is essentially an Ethernet-to-SPI
+ * serial converter.
+ */
+
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_arp.h>
+#include <linux/if_ether.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/netdevice.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_net.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spi/spi.h>
+#include <linux/types.h>
+#include <linux/version.h>
+
+#include "qca_7k.h"
+#include "qca_debug.h"
+#include "qca_framing.h"
+#include "qca_spi.h"
+
+#define MAX_DMA_BURST_LEN 5000
+
+/* Module parameters */
+#define QCASPI_CLK_SPEED_MIN 1000000
+#define QCASPI_CLK_SPEED_MAX 16000000
+#define QCASPI_CLK_SPEED 8000000
+static int qcaspi_clkspeed;
+module_param(qcaspi_clkspeed, int, 0);
+MODULE_PARM_DESC(qcaspi_clkspeed, "SPI bus clock speed (Hz). Use 1000000-16000000.");
+
+#define QCASPI_BURST_LEN_MIN 1
+#define QCASPI_BURST_LEN_MAX MAX_DMA_BURST_LEN
+static int qcaspi_burst_len = MAX_DMA_BURST_LEN;
+module_param(qcaspi_burst_len, int, 0);
+MODULE_PARM_DESC(qcaspi_burst_len, "Number of data bytes per burst. Use 1-5000.");
+
+#define QCASPI_PLUGGABLE_MIN 0
+#define QCASPI_PLUGGABLE_MAX 1
+static int qcaspi_pluggable = QCASPI_PLUGGABLE_MIN;
+module_param(qcaspi_pluggable, int, 0);
+MODULE_PARM_DESC(qcaspi_pluggable, "Pluggable SPI connection (yes/no).");
+
+#define QCASPI_MTU QCAFRM_ETHMAXMTU
+#define QCASPI_TX_TIMEOUT (1 * HZ)
+#define QCASPI_QCA7K_REBOOT_TIME_MS 1000
+
+static void
+start_spi_intr_handling(struct qcaspi *qca, u16 *intr_cause)
+{
+ *intr_cause = 0;
+
+ qcaspi_write_register(qca, SPI_REG_INTR_ENABLE, 0);
+ qcaspi_read_register(qca, SPI_REG_INTR_CAUSE, intr_cause);
+ netdev_dbg(qca->net_dev, "interrupts: 0x%04x\n", *intr_cause);
+}
+
+static void
+end_spi_intr_handling(struct qcaspi *qca, u16 intr_cause)
+{
+ u16 intr_enable = (SPI_INT_CPU_ON |
+ SPI_INT_PKT_AVLBL |
+ SPI_INT_RDBUF_ERR |
+ SPI_INT_WRBUF_ERR);
+
+ qcaspi_write_register(qca, SPI_REG_INTR_CAUSE, intr_cause);
+ qcaspi_write_register(qca, SPI_REG_INTR_ENABLE, intr_enable);
+ netdev_dbg(qca->net_dev, "acking int: 0x%04x\n", intr_cause);
+}
+
+static u32
+qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+{
+ __be16 cmd;
+ struct spi_message *msg = &qca->spi_msg2;
+ struct spi_transfer *transfer = &qca->spi_xfer2[0];
+ int ret;
+
+ cmd = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+ transfer->tx_buf = &cmd;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ transfer->tx_buf = src;
+ transfer->rx_buf = NULL;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static u32
+qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+{
+ struct spi_message *msg = &qca->spi_msg1;
+ struct spi_transfer *transfer = &qca->spi_xfer1;
+ int ret;
+
+ transfer->tx_buf = src;
+ transfer->rx_buf = NULL;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static u32
+qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+{
+ struct spi_message *msg = &qca->spi_msg2;
+ __be16 cmd;
+ struct spi_transfer *transfer = &qca->spi_xfer2[0];
+ int ret;
+
+ cmd = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+ transfer->tx_buf = &cmd;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ transfer->tx_buf = NULL;
+ transfer->rx_buf = dst;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static u32
+qcaspi_read_legacy(struct qcaspi *qca, u8 *dst, u32 len)
+{
+ struct spi_message *msg = &qca->spi_msg1;
+ struct spi_transfer *transfer = &qca->spi_xfer1;
+ int ret;
+
+ transfer->tx_buf = NULL;
+ transfer->rx_buf = dst;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static int
+qcaspi_tx_frame(struct qcaspi *qca, struct sk_buff *skb)
+{
+ u32 count;
+ u32 written;
+ u32 offset;
+ u32 len;
+
+ len = skb->len;
+
+ qcaspi_write_register(qca, SPI_REG_BFR_SIZE, len);
+ if (qca->legacy_mode)
+ qcaspi_tx_cmd(qca, QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+
+ offset = 0;
+ while (len) {
+ count = len;
+ if (count > qca->burst_len)
+ count = qca->burst_len;
+
+ if (qca->legacy_mode) {
+ written = qcaspi_write_legacy(qca,
+ skb->data + offset,
+ count);
+ } else {
+ written = qcaspi_write_burst(qca,
+ skb->data + offset,
+ count);
+ }
+
+ if (written != count)
+ return -1;
+
+ offset += count;
+ len -= count;
+ }
+
+ return 0;
+}
+
+static int
+qcaspi_transmit(struct qcaspi *qca)
+{
+ struct net_device_stats *n_stats = &qca->net_dev->stats;
+ u16 available = 0;
+ u32 pkt_len;
+ u16 new_head;
+ u16 packets = 0;
+
+ if (qca->txr.skb[qca->txr.head] == NULL)
+ return 0;
+
+ qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA, &available);
+
+ while (qca->txr.skb[qca->txr.head]) {
+ pkt_len = qca->txr.skb[qca->txr.head]->len + QCASPI_HW_PKT_LEN;
+
+ if (available < pkt_len) {
+ if (packets == 0)
+ qca->stats.write_buf_miss++;
+ break;
+ }
+
+ if (qcaspi_tx_frame(qca, qca->txr.skb[qca->txr.head]) == -1) {
+ qca->stats.write_err++;
+ return -1;
+ }
+
+ packets++;
+ n_stats->tx_packets++;
+ n_stats->tx_bytes += qca->txr.skb[qca->txr.head]->len;
+ available -= pkt_len;
+
+ /* remove the skb from the queue */
+		/* XXX: netif_tx_lock() triggered inconsistent lock state
+		 * warnings here, so it has been replaced by
+		 * netif_tx_lock_bh() and friends.
+		 */
+ netif_tx_lock_bh(qca->net_dev);
+ dev_kfree_skb(qca->txr.skb[qca->txr.head]);
+ qca->txr.skb[qca->txr.head] = NULL;
+ qca->txr.size -= pkt_len;
+ new_head = qca->txr.head + 1;
+ if (new_head >= qca->txr.count)
+ new_head = 0;
+ qca->txr.head = new_head;
+ if (netif_queue_stopped(qca->net_dev))
+ netif_wake_queue(qca->net_dev);
+ netif_tx_unlock_bh(qca->net_dev);
+ }
+
+ return 0;
+}
+
+static int
+qcaspi_receive(struct qcaspi *qca)
+{
+ struct net_device *net_dev = qca->net_dev;
+ struct net_device_stats *n_stats = &net_dev->stats;
+ u16 available = 0;
+ u32 bytes_read;
+ u8 *cp;
+
+ /* Allocate rx SKB if we don't have one available. */
+ if (!qca->rx_skb) {
+ qca->rx_skb = netdev_alloc_skb(net_dev,
+ net_dev->mtu + VLAN_ETH_HLEN);
+ if (!qca->rx_skb) {
+ netdev_dbg(net_dev, "out of RX resources\n");
+ qca->stats.out_of_mem++;
+ return -1;
+ }
+ }
+
+ /* Read the packet size. */
+ qcaspi_read_register(qca, SPI_REG_RDBUF_BYTE_AVA, &available);
+ netdev_dbg(net_dev, "qcaspi_receive: SPI_REG_RDBUF_BYTE_AVA: Value: %08x\n",
+ available);
+
+ if (available == 0) {
+ netdev_dbg(net_dev, "qcaspi_receive called without any data being available!\n");
+ return -1;
+ }
+
+ qcaspi_write_register(qca, SPI_REG_BFR_SIZE, available);
+
+ if (qca->legacy_mode)
+ qcaspi_tx_cmd(qca, QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+
+ while (available) {
+ u32 count = available;
+
+ if (count > qca->burst_len)
+ count = qca->burst_len;
+
+ if (qca->legacy_mode) {
+ bytes_read = qcaspi_read_legacy(qca, qca->rx_buffer,
+ count);
+ } else {
+ bytes_read = qcaspi_read_burst(qca, qca->rx_buffer,
+ count);
+ }
+
+ netdev_dbg(net_dev, "available: %d, byte read: %d\n",
+ available, bytes_read);
+
+ if (bytes_read) {
+ available -= bytes_read;
+ } else {
+ qca->stats.read_err++;
+ return -1;
+ }
+
+ cp = qca->rx_buffer;
+
+ while ((bytes_read--) && (qca->rx_skb)) {
+ s32 retcode;
+
+ retcode = qcafrm_fsm_decode(&qca->frm_handle,
+ qca->rx_skb->data,
+ skb_tailroom(qca->rx_skb),
+ *cp);
+ cp++;
+ switch (retcode) {
+ case QCAFRM_GATHER:
+ case QCAFRM_NOHEAD:
+ break;
+ case QCAFRM_NOTAIL:
+ netdev_dbg(net_dev, "no RX tail\n");
+ n_stats->rx_errors++;
+ n_stats->rx_dropped++;
+ break;
+ case QCAFRM_INVLEN:
+ netdev_dbg(net_dev, "invalid RX length\n");
+ n_stats->rx_errors++;
+ n_stats->rx_dropped++;
+ break;
+ default:
+ qca->rx_skb->dev = qca->net_dev;
+ n_stats->rx_packets++;
+ n_stats->rx_bytes += retcode;
+ skb_put(qca->rx_skb, retcode);
+ qca->rx_skb->protocol = eth_type_trans(
+ qca->rx_skb, qca->rx_skb->dev);
+ qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
+ netif_rx_ni(qca->rx_skb);
+ qca->rx_skb = netdev_alloc_skb(net_dev,
+ net_dev->mtu + VLAN_ETH_HLEN);
+ if (!qca->rx_skb) {
+ netdev_dbg(net_dev, "out of RX resources\n");
+ n_stats->rx_errors++;
+ qca->stats.out_of_mem++;
+ break;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+/* Check that the tx ring only stores as many bytes as
+ * fit into the internal QCA buffer.
+ */
+
+static int
+qcaspi_tx_ring_has_space(struct tx_ring *txr)
+{
+ if (txr->skb[txr->tail])
+ return 0;
+
+ return (txr->size + QCAFRM_ETHMAXLEN < QCASPI_HW_BUF_LEN) ? 1 : 0;
+}
+
+/* Flush the tx ring. This function is only safe to
+ * call from the qcaspi_spi_thread.
+ */
+
+static void
+qcaspi_flush_tx_ring(struct qcaspi *qca)
+{
+ int i;
+
+	/* XXX: netif_tx_lock() triggered inconsistent lock state
+	 * warnings here, so it has been replaced by
+	 * netif_tx_lock_bh() and friends.
+	 */
+ netif_tx_lock_bh(qca->net_dev);
+ for (i = 0; i < TX_RING_MAX_LEN; i++) {
+ if (qca->txr.skb[i]) {
+ dev_kfree_skb(qca->txr.skb[i]);
+ qca->txr.skb[i] = NULL;
+ qca->net_dev->stats.tx_dropped++;
+ }
+ }
+ qca->txr.tail = 0;
+ qca->txr.head = 0;
+ qca->txr.size = 0;
+ netif_tx_unlock_bh(qca->net_dev);
+}
+
+static void
+qcaspi_qca7k_sync(struct qcaspi *qca, int event)
+{
+ u16 signature = 0;
+ u16 spi_config;
+ u16 wrbuf_space = 0;
+ static u16 reset_count;
+
+ if (event == QCASPI_EVENT_CPUON) {
+ /* Read signature twice, if not valid
+ * go back to unknown state.
+ */
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ netdev_dbg(qca->net_dev, "sync: got CPU on, but signature was invalid, restart\n");
+ } else {
+ /* ensure that the WRBUF is empty */
+ qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA,
+ &wrbuf_space);
+ if (wrbuf_space != QCASPI_HW_BUF_LEN) {
+ netdev_dbg(qca->net_dev, "sync: got CPU on, but wrbuf not empty. reset!\n");
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ } else {
+ netdev_dbg(qca->net_dev, "sync: got CPU on, now in sync\n");
+ qca->sync = QCASPI_SYNC_READY;
+ return;
+ }
+ }
+ }
+
+ switch (qca->sync) {
+ case QCASPI_SYNC_READY:
+ /* Read signature, if not valid go to unknown state. */
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ netdev_dbg(qca->net_dev, "sync: bad signature, restart\n");
+ /* don't reset right away */
+ return;
+ }
+ break;
+ case QCASPI_SYNC_UNKNOWN:
+ /* Read signature, if not valid stay in unknown state */
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ netdev_dbg(qca->net_dev, "sync: could not read signature to reset device, retry.\n");
+ return;
+ }
+
+		/* TODO: use GPIO to reset QCA7000 in legacy mode */
+ netdev_dbg(qca->net_dev, "sync: resetting device.\n");
+ qcaspi_read_register(qca, SPI_REG_SPI_CONFIG, &spi_config);
+ spi_config |= QCASPI_SLAVE_RESET_BIT;
+ qcaspi_write_register(qca, SPI_REG_SPI_CONFIG, spi_config);
+
+ qca->sync = QCASPI_SYNC_RESET;
+ qca->stats.trig_reset++;
+ reset_count = 0;
+ break;
+ case QCASPI_SYNC_RESET:
+ reset_count++;
+ netdev_dbg(qca->net_dev, "sync: waiting for CPU on, count %u.\n",
+ reset_count);
+ if (reset_count >= QCASPI_RESET_TIMEOUT) {
+ /* reset did not seem to take place, try again */
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ qca->stats.reset_timeout++;
+ netdev_dbg(qca->net_dev, "sync: reset timeout, restarting process.\n");
+ }
+ break;
+ }
+}
+
+static int
+qcaspi_spi_thread(void *data)
+{
+ struct qcaspi *qca = data;
+ u16 intr_cause = 0;
+
+ netdev_info(qca->net_dev, "SPI thread created\n");
+ while (!kthread_should_stop()) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if ((qca->intr_req == qca->intr_svc) &&
+ (qca->txr.skb[qca->txr.head] == NULL) &&
+ (qca->sync == QCASPI_SYNC_READY))
+ schedule();
+
+ set_current_state(TASK_RUNNING);
+
+ netdev_dbg(qca->net_dev, "have work to do. int: %d, tx_skb: %p\n",
+ qca->intr_req - qca->intr_svc,
+ qca->txr.skb[qca->txr.head]);
+
+ qcaspi_qca7k_sync(qca, QCASPI_EVENT_UPDATE);
+
+ if (qca->sync != QCASPI_SYNC_READY) {
+ netdev_dbg(qca->net_dev, "sync: not ready %u, turn off carrier and flush\n",
+ (unsigned int)qca->sync);
+ netif_stop_queue(qca->net_dev);
+ netif_carrier_off(qca->net_dev);
+ qcaspi_flush_tx_ring(qca);
+ msleep(QCASPI_QCA7K_REBOOT_TIME_MS);
+ }
+
+ if (qca->intr_svc != qca->intr_req) {
+ qca->intr_svc = qca->intr_req;
+ start_spi_intr_handling(qca, &intr_cause);
+
+ if (intr_cause & SPI_INT_CPU_ON) {
+ qcaspi_qca7k_sync(qca, QCASPI_EVENT_CPUON);
+
+ /* not synced. */
+ if (qca->sync != QCASPI_SYNC_READY)
+ continue;
+
+ qca->stats.device_reset++;
+ netif_wake_queue(qca->net_dev);
+ netif_carrier_on(qca->net_dev);
+ }
+
+ if (intr_cause & SPI_INT_RDBUF_ERR) {
+ /* restart sync */
+ netdev_dbg(qca->net_dev, "===> rdbuf error!\n");
+ qca->stats.read_buf_err++;
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ continue;
+ }
+
+ if (intr_cause & SPI_INT_WRBUF_ERR) {
+ /* restart sync */
+ netdev_dbg(qca->net_dev, "===> wrbuf error!\n");
+ qca->stats.write_buf_err++;
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ continue;
+ }
+
+			/* can only handle other interrupts
+			 * if sync has occurred
+			 */
+ if (qca->sync == QCASPI_SYNC_READY) {
+ if (intr_cause & SPI_INT_PKT_AVLBL)
+ qcaspi_receive(qca);
+ }
+
+ end_spi_intr_handling(qca, intr_cause);
+ }
+
+ if (qca->sync == QCASPI_SYNC_READY)
+ qcaspi_transmit(qca);
+ }
+ set_current_state(TASK_RUNNING);
+ netdev_info(qca->net_dev, "SPI thread exit\n");
+
+ return 0;
+}
+
+static irqreturn_t
+qcaspi_intr_handler(int irq, void *data)
+{
+ struct qcaspi *qca = data;
+
+ qca->intr_req++;
+ if (qca->spi_thread &&
+ qca->spi_thread->state != TASK_RUNNING)
+ wake_up_process(qca->spi_thread);
+
+ return IRQ_HANDLED;
+}
+
+int
+qcaspi_netdev_open(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+ int ret = 0;
+
+ if (!qca)
+ return -EINVAL;
+
+ qca->intr_req = 1;
+ qca->intr_svc = 0;
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ qcafrm_fsm_init(&qca->frm_handle);
+
+	qca->spi_thread = kthread_run(qcaspi_spi_thread,
+				      qca, "%s", dev->name);
+
+ if (IS_ERR(qca->spi_thread)) {
+ netdev_err(dev, "%s: unable to start kernel thread.\n",
+ QCASPI_DRV_NAME);
+ return PTR_ERR(qca->spi_thread);
+ }
+
+ ret = request_irq(qca->spi_dev->irq, qcaspi_intr_handler, 0,
+ dev->name, qca);
+ if (ret) {
+ netdev_err(dev, "%s: unable to get IRQ %d (irqval=%d).\n",
+ QCASPI_DRV_NAME, qca->spi_dev->irq, ret);
+ kthread_stop(qca->spi_thread);
+ return ret;
+ }
+
+ netif_start_queue(qca->net_dev);
+
+ return 0;
+}
+
+int
+qcaspi_netdev_close(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ netif_stop_queue(dev);
+
+ qcaspi_write_register(qca, SPI_REG_INTR_ENABLE, 0);
+ free_irq(qca->spi_dev->irq, qca);
+
+ kthread_stop(qca->spi_thread);
+ qca->spi_thread = NULL;
+ qcaspi_flush_tx_ring(qca);
+
+ return 0;
+}
+
+static netdev_tx_t
+qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ u32 frame_len;
+ u8 *ptmp;
+ struct qcaspi *qca = netdev_priv(dev);
+ u16 new_tail;
+ struct sk_buff *tskb;
+ u8 pad_len = 0;
+
+ if (skb->len < QCAFRM_ETHMINLEN)
+ pad_len = QCAFRM_ETHMINLEN - skb->len;
+
+ if (qca->txr.skb[qca->txr.tail]) {
+ netdev_warn(qca->net_dev, "queue was unexpectedly full!\n");
+ netif_stop_queue(qca->net_dev);
+ qca->stats.ring_full++;
+ return NETDEV_TX_BUSY;
+ }
+
+ if ((skb_headroom(skb) < QCAFRM_HEADER_LEN) ||
+ (skb_tailroom(skb) < QCAFRM_FOOTER_LEN + pad_len)) {
+ tskb = skb_copy_expand(skb, QCAFRM_HEADER_LEN,
+ QCAFRM_FOOTER_LEN + pad_len, GFP_ATOMIC);
+ if (!tskb) {
+ netdev_dbg(qca->net_dev, "could not allocate tx_buff\n");
+ qca->stats.out_of_mem++;
+ return NETDEV_TX_BUSY;
+ }
+ dev_kfree_skb(skb);
+ skb = tskb;
+ }
+
+ frame_len = skb->len + pad_len;
+
+ ptmp = skb_push(skb, QCAFRM_HEADER_LEN);
+ qcafrm_create_header(ptmp, frame_len);
+
+ if (pad_len) {
+ ptmp = skb_put(skb, pad_len);
+ memset(ptmp, 0, pad_len);
+ }
+
+ ptmp = skb_put(skb, QCAFRM_FOOTER_LEN);
+ qcafrm_create_footer(ptmp);
+
+ netdev_dbg(qca->net_dev, "Tx-ing packet: Size: 0x%08x\n",
+ skb->len);
+
+ qca->txr.size += skb->len + QCASPI_HW_PKT_LEN;
+
+ new_tail = qca->txr.tail + 1;
+ if (new_tail >= qca->txr.count)
+ new_tail = 0;
+
+ qca->txr.skb[qca->txr.tail] = skb;
+ qca->txr.tail = new_tail;
+
+ if (!qcaspi_tx_ring_has_space(&qca->txr)) {
+ netif_stop_queue(qca->net_dev);
+ qca->stats.ring_full++;
+ }
+
+ dev->trans_start = jiffies;
+
+ if (qca->spi_thread &&
+ qca->spi_thread->state != TASK_RUNNING)
+ wake_up_process(qca->spi_thread);
+
+ return NETDEV_TX_OK;
+}
+
+static void
+qcaspi_netdev_tx_timeout(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
+ jiffies, jiffies - dev->trans_start);
+ qca->net_dev->stats.tx_errors++;
+ /* wake the queue if there is room */
+ if (qcaspi_tx_ring_has_space(&qca->txr))
+ netif_wake_queue(dev);
+}
+
+static int
+qcaspi_netdev_init(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ dev->mtu = QCASPI_MTU;
+ dev->type = ARPHRD_ETHER;
+ qca->clkspeed = qcaspi_clkspeed;
+ qca->burst_len = qcaspi_burst_len;
+ qca->spi_thread = NULL;
+ qca->buffer_size = (dev->mtu + VLAN_ETH_HLEN + QCAFRM_HEADER_LEN +
+ QCAFRM_FOOTER_LEN + 4) * 4;
+
+ memset(&qca->stats, 0, sizeof(struct qcaspi_stats));
+
+ qca->rx_buffer = kmalloc(qca->buffer_size, GFP_KERNEL);
+ if (!qca->rx_buffer)
+ return -ENOBUFS;
+
+ qca->rx_skb = netdev_alloc_skb(dev, qca->net_dev->mtu + VLAN_ETH_HLEN);
+ if (!qca->rx_skb) {
+ kfree(qca->rx_buffer);
+ netdev_info(qca->net_dev, "Failed to allocate RX sk_buff.\n");
+ return -ENOBUFS;
+ }
+
+ return 0;
+}
+
+static void
+qcaspi_netdev_uninit(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ kfree(qca->rx_buffer);
+ qca->buffer_size = 0;
+ if (qca->rx_skb)
+ dev_kfree_skb(qca->rx_skb);
+}
+
+static int
+qcaspi_netdev_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if ((new_mtu < QCAFRM_ETHMINMTU) || (new_mtu > QCAFRM_ETHMAXMTU))
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
+static const struct net_device_ops qcaspi_netdev_ops = {
+ .ndo_init = qcaspi_netdev_init,
+ .ndo_uninit = qcaspi_netdev_uninit,
+ .ndo_open = qcaspi_netdev_open,
+ .ndo_stop = qcaspi_netdev_close,
+ .ndo_start_xmit = qcaspi_netdev_xmit,
+ .ndo_change_mtu = qcaspi_netdev_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_tx_timeout = qcaspi_netdev_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+static void
+qcaspi_netdev_setup(struct net_device *dev)
+{
+ struct qcaspi *qca = NULL;
+
+ ether_setup(dev);
+
+ dev->netdev_ops = &qcaspi_netdev_ops;
+ qcaspi_set_ethtool_ops(dev);
+ dev->watchdog_timeo = QCASPI_TX_TIMEOUT;
+ dev->flags = IFF_MULTICAST;
+ dev->tx_queue_len = 100;
+
+ qca = netdev_priv(dev);
+ memset(qca, 0, sizeof(struct qcaspi));
+
+ memset(&qca->spi_xfer1, 0, sizeof(struct spi_transfer));
+ memset(&qca->spi_xfer2, 0, sizeof(struct spi_transfer) * 2);
+
+ spi_message_init(&qca->spi_msg1);
+ spi_message_add_tail(&qca->spi_xfer1, &qca->spi_msg1);
+
+ spi_message_init(&qca->spi_msg2);
+ spi_message_add_tail(&qca->spi_xfer2[0], &qca->spi_msg2);
+ spi_message_add_tail(&qca->spi_xfer2[1], &qca->spi_msg2);
+
+ memset(&qca->txr, 0, sizeof(qca->txr));
+ qca->txr.count = TX_RING_MAX_LEN;
+}
+
+static const struct of_device_id qca_spi_of_match[] = {
+ { .compatible = "qca,qca7000" },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, qca_spi_of_match);
+
+static int
+qca_spi_probe(struct spi_device *spi_device)
+{
+ struct qcaspi *qca = NULL;
+ struct net_device *qcaspi_devs = NULL;
+ u8 legacy_mode = 0;
+ u16 signature;
+ const char *mac;
+
+ if (!spi_device->dev.of_node) {
+ dev_err(&spi_device->dev, "Missing device tree\n");
+ return -EINVAL;
+ }
+
+ legacy_mode = of_property_read_bool(spi_device->dev.of_node,
+ "qca,legacy-mode");
+
+ if (qcaspi_clkspeed == 0) {
+ if (spi_device->max_speed_hz)
+ qcaspi_clkspeed = spi_device->max_speed_hz;
+ else
+ qcaspi_clkspeed = QCASPI_CLK_SPEED;
+ }
+
+ if ((qcaspi_clkspeed < QCASPI_CLK_SPEED_MIN) ||
+ (qcaspi_clkspeed > QCASPI_CLK_SPEED_MAX)) {
+ dev_info(&spi_device->dev, "Invalid clkspeed: %d\n",
+ qcaspi_clkspeed);
+ return -EINVAL;
+ }
+
+ if ((qcaspi_burst_len < QCASPI_BURST_LEN_MIN) ||
+ (qcaspi_burst_len > QCASPI_BURST_LEN_MAX)) {
+ dev_info(&spi_device->dev, "Invalid burst len: %d\n",
+ qcaspi_burst_len);
+ return -EINVAL;
+ }
+
+ if ((qcaspi_pluggable < QCASPI_PLUGGABLE_MIN) ||
+ (qcaspi_pluggable > QCASPI_PLUGGABLE_MAX)) {
+ dev_info(&spi_device->dev, "Invalid pluggable: %d\n",
+ qcaspi_pluggable);
+ return -EINVAL;
+ }
+
+ dev_info(&spi_device->dev, "ver=%s, clkspeed=%d, burst_len=%d, pluggable=%d\n",
+ QCASPI_DRV_VERSION,
+ qcaspi_clkspeed,
+ qcaspi_burst_len,
+ qcaspi_pluggable);
+
+ spi_device->mode = SPI_MODE_3;
+ spi_device->max_speed_hz = qcaspi_clkspeed;
+ if (spi_setup(spi_device) < 0) {
+ dev_err(&spi_device->dev, "Unable to setup SPI device\n");
+ return -EFAULT;
+ }
+
+ qcaspi_devs = alloc_etherdev(sizeof(struct qcaspi));
+ if (!qcaspi_devs)
+ return -ENOMEM;
+
+ qcaspi_netdev_setup(qcaspi_devs);
+
+ qca = netdev_priv(qcaspi_devs);
+ if (!qca) {
+ free_netdev(qcaspi_devs);
+ dev_err(&spi_device->dev, "Fail to retrieve private structure\n");
+ return -ENOMEM;
+ }
+ qca->net_dev = qcaspi_devs;
+ qca->spi_dev = spi_device;
+ qca->legacy_mode = legacy_mode;
+
+ mac = of_get_mac_address(spi_device->dev.of_node);
+
+ if (mac)
+ ether_addr_copy(qca->net_dev->dev_addr, mac);
+
+ if (!is_valid_ether_addr(qca->net_dev->dev_addr)) {
+ eth_hw_addr_random(qca->net_dev);
+ dev_info(&spi_device->dev, "Using random MAC address: %pM\n",
+ qca->net_dev->dev_addr);
+ }
+
+ netif_carrier_off(qca->net_dev);
+
+ if (!qcaspi_pluggable) {
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ dev_err(&spi_device->dev, "Invalid signature (0x%04X)\n",
+ signature);
+ free_netdev(qcaspi_devs);
+ return -EFAULT;
+ }
+ }
+
+ if (register_netdev(qcaspi_devs)) {
+ dev_info(&spi_device->dev, "Unable to register net device %s\n",
+ qcaspi_devs->name);
+ free_netdev(qcaspi_devs);
+ return -EFAULT;
+ }
+
+ spi_set_drvdata(spi_device, qcaspi_devs);
+
+ qcaspi_init_device_debugfs(qca);
+
+ return 0;
+}
+
+static int
+qca_spi_remove(struct spi_device *spi_device)
+{
+ struct net_device *qcaspi_devs = spi_get_drvdata(spi_device);
+ struct qcaspi *qca = netdev_priv(qcaspi_devs);
+
+ qcaspi_remove_device_debugfs(qca);
+
+ unregister_netdev(qcaspi_devs);
+ free_netdev(qcaspi_devs);
+
+ return 0;
+}
+
+static const struct spi_device_id qca_spi_id[] = {
+ { "qca7000", 0 },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(spi, qca_spi_id);
+
+static struct spi_driver qca_spi_driver = {
+ .driver = {
+ .name = QCASPI_DRV_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = qca_spi_of_match,
+ },
+ .id_table = qca_spi_id,
+ .probe = qca_spi_probe,
+ .remove = qca_spi_remove,
+};
+module_spi_driver(qca_spi_driver);
+
+MODULE_DESCRIPTION("Qualcomm Atheros SPI Driver");
+MODULE_AUTHOR("Qualcomm Atheros Communications");
+MODULE_AUTHOR("Stefan Wahren <stefan.wahren@i2se.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION(QCASPI_DRV_VERSION);
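
qcaspi_tx_ring_has_space() above keeps the queued byte count low enough that
one more worst-case frame still fits into the QCA7000's 0xC5B-byte write
buffer; qcaspi_netdev_xmit() charges each queued skb its framed length plus
QCASPI_HW_PKT_LEN. A user-space sketch of that byte budget (constants copied
from the driver, per-frame size assumed; the ring is additionally capped at
TX_RING_MAX_LEN slots, which this sketch ignores):

	#include <stdio.h>

	#define QCASPI_HW_BUF_LEN  0xC5B	/* QCA7000 write buffer size */
	#define QCASPI_HW_PKT_LEN  4		/* per-packet hardware overhead */
	#define QCAFRM_ETHMAXLEN   (1500 + 18)	/* worst-case Ethernet frame */

	int main(void)
	{
		/* A padded minimum frame: 60 bytes + 8 header + 2 footer,
		 * plus the hardware packet overhead.
		 */
		int per_pkt = 60 + 8 + 2 + QCASPI_HW_PKT_LEN;
		int size = 0, queued = 0;

		/* Same admission test as qcaspi_tx_ring_has_space() */
		while (size + QCAFRM_ETHMAXLEN < QCASPI_HW_BUF_LEN) {
			size += per_pkt;
			queued++;
		}

		printf("minimum-size frames accepted before the queue stops: %d\n",
		       queued);
		return 0;
	}
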
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
new file mode 100644
index 0000000..6e31a0e
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_spi.h
@@ -0,0 +1,114 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Qualcomm Atheros SPI driver definitions.
+ *
+ * This module defines the common data structures and constants used
+ * by the Qualcomm Atheros SPI driver.
+ */
+
+#ifndef _QCA_SPI_H
+#define _QCA_SPI_H
+
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spi/spi.h>
+#include <linux/types.h>
+
+#include "qca_framing.h"
+
+#define QCASPI_DRV_VERSION "0.2.7-i"
+#define QCASPI_DRV_NAME "qcaspi"
+
+#define QCASPI_GOOD_SIGNATURE 0xAA55
+
+#define TX_RING_MAX_LEN 10
+#define TX_RING_MIN_LEN 2
+
+/* sync related constants */
+#define QCASPI_SYNC_UNKNOWN 0
+#define QCASPI_SYNC_RESET 1
+#define QCASPI_SYNC_READY 2
+
+#define QCASPI_RESET_TIMEOUT 10
+
+/* sync events */
+#define QCASPI_EVENT_UPDATE 0
+#define QCASPI_EVENT_CPUON 1
+
+struct tx_ring {
+ struct sk_buff *skb[TX_RING_MAX_LEN];
+ u16 head;
+ u16 tail;
+ u16 size;
+ u16 count;
+};
+
+struct qcaspi_stats {
+ u64 trig_reset;
+ u64 device_reset;
+ u64 reset_timeout;
+ u64 read_err;
+ u64 write_err;
+ u64 read_buf_err;
+ u64 write_buf_err;
+ u64 out_of_mem;
+ u64 write_buf_miss;
+ u64 ring_full;
+ u64 spi_err;
+};
+
+struct qcaspi {
+ struct net_device *net_dev;
+ struct spi_device *spi_dev;
+ struct task_struct *spi_thread;
+
+ struct tx_ring txr;
+ struct qcaspi_stats stats;
+
+ struct spi_message spi_msg1;
+ struct spi_message spi_msg2;
+ struct spi_transfer spi_xfer1;
+ struct spi_transfer spi_xfer2[2];
+
+ u8 *rx_buffer;
+ u32 buffer_size;
+ u8 sync;
+
+ struct qcafrm_handle frm_handle;
+ struct sk_buff *rx_skb;
+
+ unsigned int intr_req;
+ unsigned int intr_svc;
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *device_root;
+#endif
+
+ /* user configurable options */
+ u32 clkspeed;
+ u8 legacy_mode;
+ u16 burst_len;
+};
+
+int qcaspi_netdev_open(struct net_device *dev);
+int qcaspi_netdev_close(struct net_device *dev);
+
+#endif /* _QCA_SPI_H */
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 02dd92a..1d81238 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -1847,33 +1847,31 @@
netdev_features_t features)
{
struct rtl8169_private *tp = netdev_priv(dev);
- netdev_features_t changed = features ^ dev->features;
void __iomem *ioaddr = tp->mmio_addr;
+ u32 rx_config;
- if (!(changed & (NETIF_F_RXALL | NETIF_F_RXCSUM |
- NETIF_F_HW_VLAN_CTAG_RX)))
- return;
+ rx_config = RTL_R32(RxConfig);
+ if (features & NETIF_F_RXALL)
+ rx_config |= (AcceptErr | AcceptRunt);
+ else
+ rx_config &= ~(AcceptErr | AcceptRunt);
- if (changed & (NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX)) {
- if (features & NETIF_F_RXCSUM)
- tp->cp_cmd |= RxChkSum;
- else
- tp->cp_cmd &= ~RxChkSum;
+ RTL_W32(RxConfig, rx_config);
- if (dev->features & NETIF_F_HW_VLAN_CTAG_RX)
- tp->cp_cmd |= RxVlan;
- else
- tp->cp_cmd &= ~RxVlan;
+ if (features & NETIF_F_RXCSUM)
+ tp->cp_cmd |= RxChkSum;
+ else
+ tp->cp_cmd &= ~RxChkSum;
- RTL_W16(CPlusCmd, tp->cp_cmd);
- RTL_R16(CPlusCmd);
- }
- if (changed & NETIF_F_RXALL) {
- int tmp = (RTL_R32(RxConfig) & ~(AcceptErr | AcceptRunt));
- if (features & NETIF_F_RXALL)
- tmp |= (AcceptErr | AcceptRunt);
- RTL_W32(RxConfig, tmp);
- }
+ if (features & NETIF_F_HW_VLAN_CTAG_RX)
+ tp->cp_cmd |= RxVlan;
+ else
+ tp->cp_cmd &= ~RxVlan;
+
+ tp->cp_cmd |= RTL_R16(CPlusCmd) & ~(RxVlan | RxChkSum);
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
}
static int rtl8169_set_features(struct net_device *dev,
@@ -1881,8 +1879,11 @@
{
struct rtl8169_private *tp = netdev_priv(dev);
+ features &= NETIF_F_RXALL | NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX;
+
rtl_lock_work(tp);
- __rtl8169_set_features(dev, features);
+ if (features ^ dev->features)
+ __rtl8169_set_features(dev, features);
rtl_unlock_work(tp);
return 0;
@@ -7531,8 +7532,7 @@
}
}
-static int
-rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
const unsigned int region = cfg->region;
@@ -7607,7 +7607,7 @@
goto err_out_mwi_2;
}
- tp->cp_cmd = RxChkSum;
+ tp->cp_cmd = 0;
if ((sizeof(dma_addr_t) > 4) &&
!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) && use_dac) {
@@ -7648,13 +7648,6 @@
pci_set_master(pdev);
- /*
- * Pretend we are using VLANs; This bypasses a nasty bug where
- * Interrupts stop flowing on high load on 8110SCd controllers.
- */
- if (tp->mac_version == RTL_GIGA_MAC_VER_05)
- tp->cp_cmd |= RxVlan;
-
rtl_init_mdio_ops(tp);
rtl_init_pll_power_ops(tp);
rtl_init_jumbo_ops(tp);
@@ -7738,8 +7731,14 @@
dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
NETIF_F_HIGHDMA;
+ tp->cp_cmd |= RxChkSum | RxVlan;
+
+ /*
+	 * Pretend we are using VLANs; this bypasses a nasty bug where
+	 * interrupts stop flowing on high load on 8110SCd controllers.
+ */
if (tp->mac_version == RTL_GIGA_MAC_VER_05)
- /* 8110SCd requires hardware Rx VLAN - disallow toggling */
+ /* Disallow toggling */
dev->hw_features &= ~NETIF_F_HW_VLAN_CTAG_RX;
if (tp->txd_version == RTL_TD_0)
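
The r8169 rework above follows a simple pattern: rtl8169_set_features() masks the request down to the three bits the handler manages and only calls into the hardware path on a change, while __rtl8169_set_features() derives RxConfig and CPlusCmd from the absolute feature word instead of from a "changed" delta. A small user-space model of the pattern (register names are stand-ins, and the comparison here is slightly idealized in that it masks both sides):

    #include <stdint.h>

    #define F_RXALL   (1u << 0)
    #define F_RXCSUM  (1u << 1)
    #define F_VLANRX  (1u << 2)
    #define F_HANDLED (F_RXALL | F_RXCSUM | F_VLANRX)

    static uint32_t hw_rx_config;          /* stand-in for RxConfig */
    static uint32_t dev_features;

    /* program every managed bit from the absolute state; no history */
    static void apply_features(uint32_t features)
    {
        uint32_t cfg = 0;

        if (features & F_RXALL)
            cfg |= 0x01;                   /* e.g. AcceptErr | AcceptRunt */
        if (features & F_RXCSUM)
            cfg |= 0x02;                   /* e.g. RxChkSum */
        if (features & F_VLANRX)
            cfg |= 0x04;                   /* e.g. RxVlan */
        hw_rx_config = cfg;
    }

    static void set_features(uint32_t features)
    {
        features &= F_HANDLED;             /* ignore bits we do not manage */
        if (features ^ (dev_features & F_HANDLED))
            apply_features(features);      /* touch hardware only on change */
        dev_features = (dev_features & ~F_HANDLED) | features;
    }
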
diff --git a/drivers/net/ethernet/sfc/farch.c b/drivers/net/ethernet/sfc/farch.c
index 0537381..6859437 100644
--- a/drivers/net/ethernet/sfc/farch.c
+++ b/drivers/net/ethernet/sfc/farch.c
@@ -2933,6 +2933,9 @@
u32 crc;
int bit;
+ if (!efx_dev_registered(efx))
+ return;
+
netif_addr_lock_bh(net_dev);
efx->unicast_filter = !(net_dev->flags & IFF_PROMISC);
diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
index 2d09c11..b02d4a3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
+++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
@@ -26,6 +26,16 @@
If unsure, say N.
+config DWMAC_MESON
+ bool "Amlogic Meson dwmac support"
+ depends on STMMAC_PLATFORM && ARCH_MESON
+ help
+ Support for Ethernet controller on Amlogic Meson SoCs.
+
+ This selects the Amlogic Meson SoC glue layer support for
+ the stmmac device driver. This driver is used for Meson6 and
+ Meson8 SoCs.
+
config DWMAC_SOCFPGA
bool "SOCFPGA dwmac support"
depends on STMMAC_PLATFORM && MFD_SYSCON && (ARCH_SOCFPGA || COMPILE_TEST)
diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index 18695eb..0533d0b 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -1,6 +1,7 @@
obj-$(CONFIG_STMMAC_ETH) += stmmac.o
stmmac-$(CONFIG_STMMAC_PLATFORM) += stmmac_platform.o
stmmac-$(CONFIG_STMMAC_PCI) += stmmac_pci.o
+stmmac-$(CONFIG_DWMAC_MESON) += dwmac-meson.o
stmmac-$(CONFIG_DWMAC_SUNXI) += dwmac-sunxi.o
stmmac-$(CONFIG_DWMAC_STI) += dwmac-sti.o
stmmac-$(CONFIG_DWMAC_SOCFPGA) += dwmac-socfpga.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
new file mode 100644
index 0000000..d225a60
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
@@ -0,0 +1,67 @@
+/*
+ * Amlogic Meson DWMAC glue layer
+ *
+ * Copyright (C) 2014 Beniamino Galvani <b.galvani@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/io.h>
+#include <linux/ioport.h>
+#include <linux/platform_device.h>
+#include <linux/stmmac.h>
+
+#define ETHMAC_SPEED_100 BIT(1)
+
+struct meson_dwmac {
+ struct device *dev;
+ void __iomem *reg;
+};
+
+static void meson6_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+{
+ struct meson_dwmac *dwmac = priv;
+ unsigned int val;
+
+ val = readl(dwmac->reg);
+
+ switch (speed) {
+ case SPEED_10:
+ val &= ~ETHMAC_SPEED_100;
+ break;
+ case SPEED_100:
+ val |= ETHMAC_SPEED_100;
+ break;
+ }
+
+ writel(val, dwmac->reg);
+}
+
+static void *meson6_dwmac_setup(struct platform_device *pdev)
+{
+ struct meson_dwmac *dwmac;
+ struct resource *res;
+
+ dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
+ if (!dwmac)
+ return ERR_PTR(-ENOMEM);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ dwmac->reg = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(dwmac->reg))
+ return dwmac->reg;
+
+ return dwmac;
+}
+
+const struct stmmac_of_data meson6_dwmac_data = {
+ .setup = meson6_dwmac_setup,
+ .fix_mac_speed = meson6_dwmac_fix_mac_speed,
+};
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
index ddc6115..3aad413 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
@@ -120,9 +120,9 @@
}
dwmac->splitter_base = devm_ioremap_resource(dev, &res_splitter);
- if (!dwmac->splitter_base) {
+ if (IS_ERR(dwmac->splitter_base)) {
 dev_info(dev, "Failed to map the emac splitter\n");
- return -EINVAL;
+ return PTR_ERR(dwmac->splitter_base);
}
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 58097c0..4452889 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -137,6 +137,9 @@
bool stmmac_eee_init(struct stmmac_priv *priv);
#ifdef CONFIG_STMMAC_PLATFORM
+#ifdef CONFIG_DWMAC_MESON
+extern const struct stmmac_of_data meson6_dwmac_data;
+#endif
#ifdef CONFIG_DWMAC_SUNXI
extern const struct stmmac_of_data sun7i_gmac_data;
#endif
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index bb524a9..6521717 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -30,6 +30,9 @@
#include "stmmac.h"
static const struct of_device_id stmmac_dt_ids[] = {
+#ifdef CONFIG_DWMAC_MESON
+ { .compatible = "amlogic,meson6-dwmac", .data = &meson6_dwmac_data},
+#endif
#ifdef CONFIG_DWMAC_SUNXI
{ .compatible = "allwinner,sun7i-a20-gmac", .data = &sun7i_gmac_data},
#endif
diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
index a4657a4..edb8609 100644
--- a/drivers/net/ethernet/sun/sunvnet.c
+++ b/drivers/net/ethernet/sun/sunvnet.c
@@ -37,6 +37,8 @@
*/
#define VNET_MAX_RETRIES 10
+static int __vnet_tx_trigger(struct vnet_port *port, u32 start);
+
/* Ordered from largest major to lowest */
static struct vio_version vnet_versions[] = {
{ .major = 1, .minor = 0 },
@@ -283,10 +285,18 @@
port->raddr[0], port->raddr[1],
port->raddr[2], port->raddr[3],
port->raddr[4], port->raddr[5]);
- err = -ECONNRESET;
+ break;
}
} while (err == -EAGAIN);
+ if (err <= 0 && vio_dring_state == VIO_DRING_STOPPED) {
+ port->stop_rx_idx = end;
+ port->stop_rx = true;
+ } else {
+ port->stop_rx_idx = 0;
+ port->stop_rx = false;
+ }
+
return err;
}
@@ -350,14 +360,17 @@
if (IS_ERR(desc))
return PTR_ERR(desc);
+ if (desc->hdr.state != VIO_DESC_READY)
+ return 1;
+
+ rmb();
+
viodbg(DATA, "vio_walk_rx_one desc[%02x:%02x:%08x:%08x:%llx:%llx]\n",
desc->hdr.state, desc->hdr.ack,
desc->size, desc->ncookies,
desc->cookies[0].cookie_addr,
desc->cookies[0].cookie_size);
- if (desc->hdr.state != VIO_DESC_READY)
- return 1;
err = vnet_rx_one(port, desc->size, desc->cookies, desc->ncookies);
if (err == -ECONNRESET)
return err;
@@ -448,7 +461,7 @@
struct net_device *dev;
struct vnet *vp;
u32 end;
-
+ struct vio_net_desc *desc;
if (unlikely(pkt->tag.stype_env != VIO_DRING_DATA))
return 0;
@@ -456,7 +469,24 @@
if (unlikely(!idx_is_pending(dr, end)))
return 0;
+ /* sync for race conditions with vnet_start_xmit() and tell xmit it
+ * is time to send a trigger.
+ */
dr->cons = next_idx(end, dr);
+ desc = vio_dring_entry(dr, dr->cons);
+ if (desc->hdr.state == VIO_DESC_READY && port->start_cons) {
+ /* vnet_start_xmit() just populated this dring but missed
+ * sending the "start" LDC message to the consumer.
+ * Send a "start" trigger on its behalf.
+ */
+ if (__vnet_tx_trigger(port, dr->cons) > 0)
+ port->start_cons = false;
+ else
+ port->start_cons = true;
+ } else {
+ port->start_cons = true;
+ }
+
vp = port->vp;
dev = vp->dev;
@@ -597,7 +627,7 @@
local_irq_restore(flags);
}
-static int __vnet_tx_trigger(struct vnet_port *port)
+static int __vnet_tx_trigger(struct vnet_port *port, u32 start)
{
struct vio_dring_state *dr = &port->vio.drings[VIO_DRIVER_TX_RING];
struct vio_dring_data hdr = {
@@ -608,12 +638,21 @@
.sid = vio_send_sid(&port->vio),
},
.dring_ident = dr->ident,
- .start_idx = dr->prod,
+ .start_idx = start,
.end_idx = (u32) -1,
};
int err, delay;
int retries = 0;
+ if (port->stop_rx) {
+ err = vnet_send_ack(port,
+ &port->vio.drings[VIO_DRIVER_RX_RING],
+ port->stop_rx_idx, -1,
+ VIO_DRING_STOPPED);
+ if (err <= 0)
+ return err;
+ }
+
hdr.seq = dr->snd_nxt;
delay = 1;
do {
@@ -734,7 +773,30 @@
d->hdr.state = VIO_DESC_READY;
- err = __vnet_tx_trigger(port);
+ /* Exactly one ldc "start" trigger (for dr->cons) needs to be sent
+ * to notify the consumer that some descriptors are READY.
+ * After that "start" trigger, no additional triggers are needed until
+ * a DRING_STOPPED is received from the consumer. The dr->cons field
+ * (set up by vnet_ack()) has the value of the next dring index
+ * that has not yet been ack-ed. We send a "start" trigger here
+ * if, and only if, start_cons is true (reset it afterward). Conversely,
+ * vnet_ack() should check if the dring corresponding to cons
+ * is marked READY, but start_cons was false.
+ * If so, vnet_ack() should send out the missed "start" trigger.
+ *
+	 * Note that the wmb() above makes sure the cookies et al. are
+	 * globally visible before the VIO_DESC_READY store, and that the
+	 * stores are ordered correctly by the compiler. The consumer will
+	 * not proceed until the VIO_DESC_READY is visible, ensuring that
+	 * the consumer does not observe anything related to descriptors
+	 * out of order. The HV trap from the LDC start trigger is the
+	 * producer-to-consumer announcement that work is available to the
+	 * consumer.
+ */
+ if (!port->start_cons)
+ goto ldc_start_done; /* previous trigger suffices */
+
+ err = __vnet_tx_trigger(port, dr->cons);
if (unlikely(err < 0)) {
netdev_info(dev, "TX trigger error %d\n", err);
d->hdr.state = VIO_DESC_FREE;
@@ -742,6 +804,9 @@
goto out_dropped_unlock;
}
+ldc_start_done:
+ port->start_cons = false;
+
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
@@ -1035,6 +1100,7 @@
(sizeof(struct ldc_trans_cookie) * 2));
dr->num_entries = VNET_TX_RING_SIZE;
dr->prod = dr->cons = 0;
+ port->start_cons = true; /* need an initial trigger */
dr->pending = VNET_TX_RING_SIZE;
dr->ncookies = ncookies;
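
Condensed, the protocol the sunvnet comments describe is a trigger-elision handshake: the producer sends at most one LDC "start" per stopped/ack cycle, and vnet_ack() covers the race window where vnet_start_xmit() published a READY descriptor but skipped its trigger. A single-threaded sketch of the state machine (hypothetical names; send_start_trigger() is deliberately left as a prototype standing in for the LDC trap, and the real code serializes these paths via the netif tx lock and LDC shared memory):

    #include <stdbool.h>

    #define RING 8

    enum desc_state { DESC_FREE, DESC_READY };

    static enum desc_state ring[RING];
    static unsigned int cons;          /* next index the peer has not acked */
    static bool start_cons = true;     /* true: the next xmit owes a trigger */

    int send_start_trigger(unsigned int idx);  /* LDC trap, >0 on success */

    /* producer: publish one descriptor */
    static void xmit_one(unsigned int idx)
    {
        ring[idx] = DESC_READY;        /* preceded by wmb() in the real code */
        if (!start_cons)
            return;                    /* an earlier trigger still covers us */
        if (send_start_trigger(cons) > 0)
            start_cons = false;
    }

    /* ack path: the peer consumed everything up to 'end' */
    static void ack_one(unsigned int end)
    {
        cons = (end + 1) % RING;
        if (ring[cons] == DESC_READY && start_cons) {
            /* xmit raced us and skipped its trigger; send it on its behalf */
            start_cons = !(send_start_trigger(cons) > 0);
        } else {
            start_cons = true;         /* the next xmit must trigger again */
        }
    }
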
diff --git a/drivers/net/ethernet/sun/sunvnet.h b/drivers/net/ethernet/sun/sunvnet.h
index de5c2c6..da49337 100644
--- a/drivers/net/ethernet/sun/sunvnet.h
+++ b/drivers/net/ethernet/sun/sunvnet.h
@@ -40,6 +40,10 @@
struct vnet_tx_entry tx_bufs[VNET_TX_RING_SIZE];
struct list_head list;
+
+ u32 stop_rx_idx;
+ bool stop_rx;
+ bool start_cons;
};
static inline struct vnet_port *to_vnet_port(struct vio_driver_state *vio)
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 5c3f1f3..45ba50e 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -701,6 +701,28 @@
cpsw_dual_emac_src_port_detect(status, priv, ndev, skb);
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
+ bool ndev_status = false;
+ struct cpsw_slave *slave = priv->slaves;
+ int n;
+
+ if (priv->data.dual_emac) {
+ /* In dual emac mode check for all interfaces */
+ for (n = priv->data.slaves; n; n--, slave++)
+ if (netif_running(slave->ndev))
+ ndev_status = true;
+ }
+
+ if (ndev_status && (status >= 0)) {
+			/* The packet was received for an interface which is
+			 * already down while the other interface is up and
+			 * running. Instead of freeing the skb, which would
+			 * reduce the number of rx descriptors available to the
+			 * DMA engine, requeue it back to cpdma.
+ */
+ new_skb = skb;
+ goto requeue;
+ }
+
/* the interface is going down, skbs are purged */
dev_kfree_skb_any(skb);
return;
@@ -719,6 +741,7 @@
new_skb = skb;
}
+requeue:
ret = cpdma_chan_submit(priv->rxch, new_skb, new_skb->data,
skb_tailroom(new_skb), 0);
if (WARN_ON(ret < 0))
@@ -2354,10 +2377,19 @@
struct net_device *ndev = platform_get_drvdata(pdev);
struct cpsw_priv *priv = netdev_priv(ndev);
- if (netif_running(ndev))
- cpsw_ndo_stop(ndev);
+ if (priv->data.dual_emac) {
+ int i;
- for_each_slave(priv, soft_reset_slave);
+ for (i = 0; i < priv->data.slaves; i++) {
+ if (netif_running(priv->slaves[i].ndev))
+ cpsw_ndo_stop(priv->slaves[i].ndev);
+ soft_reset_slave(priv->slaves + i);
+ }
+ } else {
+ if (netif_running(ndev))
+ cpsw_ndo_stop(ndev);
+ for_each_slave(priv, soft_reset_slave);
+ }
pm_runtime_put_sync(&pdev->dev);
@@ -2371,14 +2403,24 @@
{
struct platform_device *pdev = to_platform_device(dev);
struct net_device *ndev = platform_get_drvdata(pdev);
+ struct cpsw_priv *priv = netdev_priv(ndev);
pm_runtime_get_sync(&pdev->dev);
/* Select default pin state */
pinctrl_pm_select_default_state(&pdev->dev);
- if (netif_running(ndev))
- cpsw_ndo_open(ndev);
+ if (priv->data.dual_emac) {
+ int i;
+
+ for (i = 0; i < priv->data.slaves; i++) {
+ if (netif_running(priv->slaves[i].ndev))
+ cpsw_ndo_open(priv->slaves[i].ndev);
+ }
+ } else {
+ if (netif_running(ndev))
+ cpsw_ndo_open(ndev);
+ }
return 0;
}
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index c8fd941..4ea2d4e 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -1485,7 +1485,6 @@
if (!ndev)
return -ENOMEM;
- ether_setup(ndev);
platform_set_drvdata(op, ndev);
SET_NETDEV_DEV(ndev, &op->dev);
diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
index c44eaf0..caed6ee 100644
--- a/drivers/net/fddi/defxx.c
+++ b/drivers/net/fddi/defxx.c
@@ -466,7 +466,8 @@
*bar_len = (bar | PI_MEM_ADD_MASK_M) + 1;
} else {
*bar_start = base_addr;
- *bar_len = PI_ESIC_K_CSR_IO_LEN;
+ *bar_len = PI_ESIC_K_CSR_IO_LEN +
+ PI_ESIC_K_BURST_HOLDOFF_LEN;
}
}
if (dfx_bus_tc) {
@@ -683,6 +684,9 @@
if (dfx_bus_eisa) {
unsigned long base_addr = to_eisa_device(bdev)->base_addr;
+ /* Disable the board before fiddling with the decoders. */
+ outb(0, base_addr + PI_ESIC_K_SLOT_CNTRL);
+
/* Get the interrupt level from the ESIC chip. */
val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
val &= PI_CONFIG_STAT_0_M_IRQ;
@@ -709,38 +713,46 @@
/*
* Enable memory decoding (MEMCS0) and/or port decoding
* (IOCS1/IOCS0) as appropriate in Function Control
- * Register. One of the port chip selects seems to be
- * used for the Burst Holdoff register, but this bit of
- * documentation is missing and as yet it has not been
- * determined which of the two. This is also the reason
- * the size of the decoded port range is twice as large
- * as one required by the PDQ.
+ * Register. IOCS0 is used for PDQ registers, taking 16
+ * 32-bit words, while IOCS1 is used for the Burst Holdoff
+ * register, taking a single 32-bit word only. We use the
+ * slot-specific I/O range as per the ESIC spec, that is
+ * set bits 15:12 in the mask registers to mask them out.
*/
/* Set the decode range of the board. */
- val = ((bp->base.port >> 12) << PI_IO_CMP_V_SLOT);
- outb(base_addr + PI_ESIC_K_IO_ADD_CMP_0_1, val);
- outb(base_addr + PI_ESIC_K_IO_ADD_CMP_0_0, 0);
- outb(base_addr + PI_ESIC_K_IO_ADD_CMP_1_1, val);
- outb(base_addr + PI_ESIC_K_IO_ADD_CMP_1_0, 0);
- val = PI_ESIC_K_CSR_IO_LEN - 1;
- outb(base_addr + PI_ESIC_K_IO_ADD_MASK_0_1, (val >> 8) & 0xff);
- outb(base_addr + PI_ESIC_K_IO_ADD_MASK_0_0, val & 0xff);
- outb(base_addr + PI_ESIC_K_IO_ADD_MASK_1_1, (val >> 8) & 0xff);
- outb(base_addr + PI_ESIC_K_IO_ADD_MASK_1_0, val & 0xff);
+ val = 0;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_CMP_0_1);
+ val = PI_DEFEA_K_CSR_IO;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_CMP_0_0);
+
+ val = PI_IO_CMP_M_SLOT;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_MASK_0_1);
+ val = (PI_ESIC_K_CSR_IO_LEN - 1) & ~3;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_MASK_0_0);
+
+ val = 0;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_CMP_1_1);
+ val = PI_DEFEA_K_BURST_HOLDOFF;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_CMP_1_0);
+
+ val = PI_IO_CMP_M_SLOT;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_MASK_1_1);
+ val = (PI_ESIC_K_BURST_HOLDOFF_LEN - 1) & ~3;
+ outb(val, base_addr + PI_ESIC_K_IO_ADD_MASK_1_0);
/* Enable the decoders. */
val = PI_FUNCTION_CNTRL_M_IOCS1 | PI_FUNCTION_CNTRL_M_IOCS0;
if (dfx_use_mmio)
val |= PI_FUNCTION_CNTRL_M_MEMCS0;
- outb(base_addr + PI_ESIC_K_FUNCTION_CNTRL, val);
+ outb(val, base_addr + PI_ESIC_K_FUNCTION_CNTRL);
/*
* Enable access to the rest of the module
* (including PDQ and packet memory).
*/
val = PI_SLOT_CNTRL_M_ENB;
- outb(base_addr + PI_ESIC_K_SLOT_CNTRL, val);
+ outb(val, base_addr + PI_ESIC_K_SLOT_CNTRL);
/*
* Map PDQ registers into memory or port space. This is
@@ -748,15 +760,15 @@
*/
val = inb(base_addr + PI_DEFEA_K_BURST_HOLDOFF);
if (dfx_use_mmio)
- val |= PI_BURST_HOLDOFF_V_MEM_MAP;
+ val |= PI_BURST_HOLDOFF_M_MEM_MAP;
else
- val &= ~PI_BURST_HOLDOFF_V_MEM_MAP;
- outb(base_addr + PI_DEFEA_K_BURST_HOLDOFF, val);
+ val &= ~PI_BURST_HOLDOFF_M_MEM_MAP;
+ outb(val, base_addr + PI_DEFEA_K_BURST_HOLDOFF);
/* Enable interrupts at EISA bus interface chip (ESIC) */
val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
val |= PI_CONFIG_STAT_0_M_INT_ENB;
- outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, val);
+ outb(val, base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
}
if (dfx_bus_pci) {
struct pci_dev *pdev = to_pci_dev(bdev);
@@ -825,7 +837,7 @@
/* Disable interrupts at EISA bus interface chip (ESIC) */
val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
val &= ~PI_CONFIG_STAT_0_M_INT_ENB;
- outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, val);
+ outb(val, base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
}
if (dfx_bus_pci) {
/* Disable interrupts at PCI bus interface chip (PFI) */
@@ -1917,7 +1929,7 @@
/* Disable interrupts at the ESIC */
status &= ~PI_CONFIG_STAT_0_M_INT_ENB;
- outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, status);
+ outb(status, base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
/* Call interrupt service routine for this adapter */
dfx_int_common(dev);
@@ -1925,7 +1937,7 @@
/* Reenable interrupts at the ESIC */
status = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
status |= PI_CONFIG_STAT_0_M_INT_ENB;
- outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, status);
+ outb(status, base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
spin_unlock(&bp->lock);
}
diff --git a/drivers/net/fddi/defxx.h b/drivers/net/fddi/defxx.h
index adb63f3..9527f01 100644
--- a/drivers/net/fddi/defxx.h
+++ b/drivers/net/fddi/defxx.h
@@ -1479,8 +1479,10 @@
/* Define EISA controller register offsets */
-#define PI_ESIC_K_CSR_IO_LEN 0x80 /* 128 bytes */
+#define PI_ESIC_K_CSR_IO_LEN 0x40 /* 64 bytes */
+#define PI_ESIC_K_BURST_HOLDOFF_LEN 0x04 /* 4 bytes */
+#define PI_DEFEA_K_CSR_IO 0x000
#define PI_DEFEA_K_BURST_HOLDOFF 0x040
#define PI_ESIC_K_SLOT_ID 0xC80
@@ -1558,11 +1560,9 @@
#define PI_MEM_ADD_MASK_M 0x3ff
-/*
- * Define the fields in the IO Compare registers.
- * The driver must initialize the slot field with the slot ID shifted by the
- * amount shown below.
- */
+/* Define the fields in the I/O Address Compare and Mask registers. */
+
+#define PI_IO_CMP_M_SLOT 0xf0
#define PI_IO_CMP_V_SLOT 4
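
Most of the defxx.c churn above is one recurring fix: the ESIC accesses had the outb() operands transposed. On Linux the port accessors take the value first and the port second, so the old calls wrote the low byte of the port address to whatever port the value happened to name, and the compiler cannot catch it because both operands are plain integers. A tiny stand-alone demonstration (the offsets are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* stub with the Linux accessor's operand order: value first, then port */
    static void outb(uint8_t value, uint16_t port)
    {
        printf("port 0x%04x <- 0x%02x\n", port, value);
    }

    int main(void)
    {
        uint16_t base_addr = 0x1000;
        uint8_t enable = 0x80;

        outb(enable, base_addr + 0x44);    /* correct */
        outb(base_addr + 0x44, enable);    /* transposed: still compiles, */
                                           /* but writes 0x44 to port 0x80 */
        return 0;
    }
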
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index a969555..726edab 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -36,6 +36,7 @@
#include <linux/netpoll.h>
#define MACVLAN_HASH_SIZE (1 << BITS_PER_BYTE)
+#define MACVLAN_BC_QUEUE_LEN 1000
struct macvlan_port {
struct net_device *dev;
@@ -248,7 +249,7 @@
goto err;
spin_lock(&port->bc_queue.lock);
- if (skb_queue_len(&port->bc_queue) < skb->dev->tx_queue_len) {
+ if (skb_queue_len(&port->bc_queue) < MACVLAN_BC_QUEUE_LEN) {
__skb_queue_tail(&port->bc_queue, nskb);
err = 0;
}
@@ -806,6 +807,7 @@
features,
mask);
features |= ALWAYS_ON_FEATURES;
+ features &= ~NETIF_F_NETNS_LOCAL;
return features;
}
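
Two independent macvlan fixes sit in this diff: broadcast queueing no longer keys off the lower device's tx_queue_len, which is legitimately 0 on many virtual devices and would therefore drop every queued broadcast, and NETIF_F_NETNS_LOCAL is cleared so the flag is not inherited from the lower device. The enqueue policy reduces to a fixed-cap check, sketched below with hypothetical types:

    #include <stdbool.h>

    #define BC_QUEUE_LEN 1000          /* fixed cap, as MACVLAN_BC_QUEUE_LEN */

    struct bc_queue { unsigned int len; };

    /* called with the queue lock held in the real code */
    static bool bc_enqueue(struct bc_queue *q)
    {
        if (q->len >= BC_QUEUE_LEN)
            return false;              /* backlog full: drop the broadcast */
        q->len++;
        return true;
    }
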
diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
index 09dd6e1..daae699 100644
--- a/drivers/net/phy/bcm7xxx.c
+++ b/drivers/net/phy/bcm7xxx.c
@@ -196,13 +196,22 @@
static int bcm7xxx_28nm_config_init(struct phy_device *phydev)
{
- int ret;
+ u8 rev = PHY_BRCM_7XXX_REV(phydev->dev_flags);
+ u8 patch = PHY_BRCM_7XXX_PATCH(phydev->dev_flags);
+ int ret = 0;
- ret = bcm7445_config_init(phydev);
- if (ret)
- return ret;
+ dev_info(&phydev->dev, "PHY revision: 0x%02x, patch: %d\n", rev, patch);
- ret = bcm7xxx_28nm_afe_config_init(phydev);
+ switch (rev) {
+ case 0xa0:
+ case 0xb0:
+ ret = bcm7445_config_init(phydev);
+ break;
+ default:
+ ret = bcm7xxx_28nm_afe_config_init(phydev);
+ break;
+ }
+
if (ret)
return ret;
@@ -257,8 +266,8 @@
phy_write(phydev, MII_BCM7XXX_AUX_MODE, MII_BCM7XX_64CLK_MDIO);
phy_read(phydev, MII_BCM7XXX_AUX_MODE);
- /* Workaround only required for 100Mbits/sec */
- if (!(phydev->dev_flags & PHY_BRCM_100MBPS_WAR))
+ /* Workaround only required for 100Mbits/sec capable PHYs */
+ if (phydev->supported & PHY_GBIT_FEATURES)
return 0;
/* set shadow mode 2 */
diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
index fd0ea7c..011dbda 100644
--- a/drivers/net/phy/micrel.c
+++ b/drivers/net/phy/micrel.c
@@ -592,8 +592,7 @@
.phy_id = PHY_ID_KSZ9031,
.phy_id_mask = 0x00fffff0,
.name = "Micrel KSZ9031 Gigabit PHY",
- .features = (PHY_GBIT_FEATURES | SUPPORTED_Pause
- | SUPPORTED_Asym_Pause),
+ .features = (PHY_GBIT_FEATURES | SUPPORTED_Pause),
.flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
.config_init = ksz9031_config_init,
.config_aneg = genphy_config_aneg,
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index 2130c75..a4d4c4a 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -22,6 +22,8 @@
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <net/ip6_checksum.h>
+#include <uapi/linux/mdio.h>
+#include <linux/mdio.h>
/* Version Information */
#define DRIVER_VERSION "v1.06.0 (2014/03/03)"
@@ -129,7 +131,9 @@
#define OCP_SRAM_ADDR 0xa436
#define OCP_SRAM_DATA 0xa438
#define OCP_DOWN_SPEED 0xa442
-#define OCP_EEE_CFG2 0xa5d0
+#define OCP_EEE_ABLE 0xa5c4
+#define OCP_EEE_ADV 0xa5d0
+#define OCP_EEE_LPABLE 0xa5d2
#define OCP_ADC_CFG 0xbc06
/* SRAM Register */
@@ -361,7 +365,8 @@
#define EEE_NWAY_EN 0x1000
#define TX_QUIET_EN 0x0200
#define RX_QUIET_EN 0x0100
-#define SDRISETIME 0x0010 /* bit 4 ~ 6 */
+#define sd_rise_time_mask 0x0070
+#define sd_rise_time(x) (min(x, 7) << 4) /* bit 4 ~ 6 */
#define RG_RXLPI_MSK_HFDUP 0x0008
#define SDFALLTIME 0x0007 /* bit 0 ~ 2 */
@@ -373,7 +378,8 @@
#define RG_EEEPRG_EN 0x0010
/* OCP_EEE_CONFIG3 */
-#define FST_SNR_EYE_R 0x1500 /* bit 7 ~ 15 */
+#define fast_snr_mask 0xff80
+#define fast_snr(x) (min(x, 0x1ff) << 7) /* bit 7 ~ 15 */
#define RG_LFS_SEL 0x0060 /* bit 6 ~ 5 */
#define MSK_PH 0x0006 /* bit 0 ~ 3 */
@@ -382,11 +388,6 @@
#define FUN_ADDR 0x0000
#define FUN_DATA 0x4000
/* bit[4:0] device addr */
-#define DEVICE_ADDR 0x0007
-
-/* OCP_EEE_DATA */
-#define EEE_ADDR 0x003C
-#define EEE_DATA 0x0002
/* OCP_EEE_CFG */
#define CTAP_SHORT_EN 0x0040
@@ -395,10 +396,6 @@
/* OCP_DOWN_SPEED */
#define EN_10M_BGOFF 0x0080
-/* OCP_EEE_CFG2 */
-#define MY1000_EEE 0x0004
-#define MY100_EEE 0x0002
-
/* OCP_ADC_CFG */
#define CKADSEL_L 0x0100
#define ADC_EN 0x0080
@@ -506,6 +503,7 @@
#define IPF (1 << 23) /* IP checksum fail */
#define UDPF (1 << 22) /* UDP checksum fail */
#define TCPF (1 << 21) /* TCP checksum fail */
+#define RX_VLAN_TAG (1 << 16)
__le32 opts4;
__le32 opts5;
@@ -531,6 +529,7 @@
#define MSS_MAX 0x7ffU
#define TCPHO_SHIFT 17
#define TCPHO_MAX 0x7ffU
+#define TX_VLAN_TAG (1 << 16)
};
struct r8152;
@@ -575,6 +574,8 @@
void (*up)(struct r8152 *);
void (*down)(struct r8152 *);
void (*unload)(struct r8152 *);
+ int (*eee_get)(struct r8152 *, struct ethtool_eee *);
+ int (*eee_set)(struct r8152 *, struct ethtool_eee *);
} rtl_ops;
int intr_interval;
@@ -1423,6 +1424,25 @@
return ret;
}
+static inline void rtl_tx_vlan_tag(struct tx_desc *desc, struct sk_buff *skb)
+{
+ if (vlan_tx_tag_present(skb)) {
+ u32 opts2;
+
+ opts2 = TX_VLAN_TAG | swab16(vlan_tx_tag_get(skb));
+ desc->opts2 |= cpu_to_le32(opts2);
+ }
+}
+
+static inline void rtl_rx_vlan_tag(struct rx_desc *desc, struct sk_buff *skb)
+{
+ u32 opts2 = le32_to_cpu(desc->opts2);
+
+ if (opts2 & RX_VLAN_TAG)
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ swab16(opts2 & 0xffff));
+}
+
static int r8152_tx_csum(struct r8152 *tp, struct tx_desc *desc,
struct sk_buff *skb, u32 len, u32 transport_offset)
{
@@ -1550,6 +1570,8 @@
continue;
}
+ rtl_tx_vlan_tag(tx_desc, skb);
+
tx_data += sizeof(*tx_desc);
len = skb->len;
@@ -1691,6 +1713,7 @@
memcpy(skb->data, rx_data, pkt_len);
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, netdev);
+ rtl_rx_vlan_tag(rx_desc, skb);
netif_receive_skb(skb);
stats->rx_packets++;
stats->rx_bytes += pkt_len;
@@ -2026,7 +2049,7 @@
return rtl_enable(tp);
}
-static void rtl8152_disable(struct r8152 *tp)
+static void rtl_disable(struct r8152 *tp)
{
u32 ocp_data;
int i;
@@ -2082,6 +2105,34 @@
ocp_write_word(tp, MCU_TYPE_USB, USB_PM_CTRL_STATUS, ocp_data);
}
+static void rtl_rx_vlan_en(struct r8152 *tp, bool enable)
+{
+ u32 ocp_data;
+
+ ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CPCR);
+ if (enable)
+ ocp_data |= CPCR_RX_VLAN;
+ else
+ ocp_data &= ~CPCR_RX_VLAN;
+ ocp_write_word(tp, MCU_TYPE_PLA, PLA_CPCR, ocp_data);
+}
+
+static int rtl8152_set_features(struct net_device *dev,
+ netdev_features_t features)
+{
+ netdev_features_t changed = features ^ dev->features;
+ struct r8152 *tp = netdev_priv(dev);
+
+ if (changed & NETIF_F_HW_VLAN_CTAG_RX) {
+ if (features & NETIF_F_HW_VLAN_CTAG_RX)
+ rtl_rx_vlan_en(tp, true);
+ else
+ rtl_rx_vlan_en(tp, false);
+ }
+
+ return 0;
+}
+
#define WAKE_ANY (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)
static u32 __rtl_get_wol(struct r8152 *tp)
@@ -2239,6 +2290,13 @@
LINKENA | DIS_SDSAVE);
}
+static void rtl8152_disable(struct r8152 *tp)
+{
+ r8152b_disable_aldps(tp);
+ rtl_disable(tp);
+ r8152b_enable_aldps(tp);
+}
+
static void r8152b_hw_phy_cfg(struct r8152 *tp)
{
u16 data;
@@ -2249,11 +2307,8 @@
r8152_mdio_write(tp, MII_BMCR, data);
}
- r8152b_disable_aldps(tp);
-
rtl_clear_bp(tp);
- r8152b_enable_aldps(tp);
set_bit(PHY_RESET, &tp->flags);
}
@@ -2262,9 +2317,6 @@
u32 ocp_data;
int i;
- if (test_bit(RTL8152_UNPLUG, &tp->flags))
- return;
-
ocp_data = ocp_read_dword(tp, MCU_TYPE_PLA, PLA_RCR);
ocp_data &= ~RCR_ACPT_ALL;
ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data);
@@ -2330,9 +2382,7 @@
ocp_write_dword(tp, MCU_TYPE_USB, USB_TX_DMA,
TEST_MODE_DISABLE | TX_SIZE_ADJUST1);
- ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CPCR);
- ocp_data &= ~CPCR_RX_VLAN;
- ocp_write_word(tp, MCU_TYPE_PLA, PLA_CPCR, ocp_data);
+ rtl_rx_vlan_en(tp, tp->netdev->features & NETIF_F_HW_VLAN_CTAG_RX);
ocp_write_word(tp, MCU_TYPE_PLA, PLA_RMS, RTL8152_RMS);
@@ -2354,7 +2404,7 @@
ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL1, RXFIFO_THR2_OOB);
ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL2, RXFIFO_THR3_OOB);
- rtl8152_disable(tp);
+ rtl_disable(tp);
for (i = 0; i < 1000; i++) {
ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
@@ -2376,9 +2426,7 @@
ocp_write_word(tp, MCU_TYPE_PLA, PLA_RMS, RTL8152_RMS);
- ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CPCR);
- ocp_data |= CPCR_RX_VLAN;
- ocp_write_word(tp, MCU_TYPE_PLA, PLA_CPCR, ocp_data);
+ rtl_rx_vlan_en(tp, true);
ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PAL_BDC_CR);
ocp_data |= ALDPS_PROXY_MODE;
@@ -2492,9 +2540,6 @@
u32 ocp_data;
int i;
- if (test_bit(RTL8152_UNPLUG, &tp->flags))
- return;
-
rxdy_gated_en(tp, true);
r8153_teredo_off(tp);
@@ -2532,9 +2577,7 @@
usleep_range(1000, 2000);
}
- ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CPCR);
- ocp_data &= ~CPCR_RX_VLAN;
- ocp_write_word(tp, MCU_TYPE_PLA, PLA_CPCR, ocp_data);
+ rtl_rx_vlan_en(tp, tp->netdev->features & NETIF_F_HW_VLAN_CTAG_RX);
ocp_write_word(tp, MCU_TYPE_PLA, PLA_RMS, RTL8153_RMS);
ocp_write_byte(tp, MCU_TYPE_PLA, PLA_MTPS, MTPS_JUMBO);
@@ -2567,7 +2610,7 @@
ocp_data &= ~NOW_IS_OOB;
ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data);
- rtl8152_disable(tp);
+ rtl_disable(tp);
for (i = 0; i < 1000; i++) {
ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
@@ -2593,9 +2636,7 @@
ocp_data &= ~TEREDO_WAKE_MASK;
ocp_write_word(tp, MCU_TYPE_PLA, PLA_TEREDO_CFG, ocp_data);
- ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CPCR);
- ocp_data |= CPCR_RX_VLAN;
- ocp_write_word(tp, MCU_TYPE_PLA, PLA_CPCR, ocp_data);
+ rtl_rx_vlan_en(tp, true);
ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PAL_BDC_CR);
ocp_data |= ALDPS_PROXY_MODE;
@@ -2631,6 +2672,13 @@
ocp_reg_write(tp, OCP_POWER_CFG, data);
}
+static void rtl8153_disable(struct r8152 *tp)
+{
+ r8153_disable_aldps(tp);
+ rtl_disable(tp);
+ r8153_enable_aldps(tp);
+}
+
static int rtl8152_set_speed(struct r8152 *tp, u8 autoneg, u16 speed, u8 duplex)
{
u16 bmcr, anar, gbcr;
@@ -2721,6 +2769,16 @@
return ret;
}
+static void rtl8152_up(struct r8152 *tp)
+{
+ if (test_bit(RTL8152_UNPLUG, &tp->flags))
+ return;
+
+ r8152b_disable_aldps(tp);
+ r8152b_exit_oob(tp);
+ r8152b_enable_aldps(tp);
+}
+
static void rtl8152_down(struct r8152 *tp)
{
if (test_bit(RTL8152_UNPLUG, &tp->flags)) {
@@ -2734,6 +2792,16 @@
r8152b_enable_aldps(tp);
}
+static void rtl8153_up(struct r8152 *tp)
+{
+ if (test_bit(RTL8152_UNPLUG, &tp->flags))
+ return;
+
+ r8153_disable_aldps(tp);
+ r8153_first_init(tp);
+ r8153_enable_aldps(tp);
+}
+
static void rtl8153_down(struct r8152 *tp)
{
if (test_bit(RTL8152_UNPLUG, &tp->flags)) {
@@ -2888,43 +2956,92 @@
return res;
}
-static void r8152b_enable_eee(struct r8152 *tp)
+static inline void r8152_mmd_indirect(struct r8152 *tp, u16 dev, u16 reg)
{
+ ocp_reg_write(tp, OCP_EEE_AR, FUN_ADDR | dev);
+ ocp_reg_write(tp, OCP_EEE_DATA, reg);
+ ocp_reg_write(tp, OCP_EEE_AR, FUN_DATA | dev);
+}
+
+static u16 r8152_mmd_read(struct r8152 *tp, u16 dev, u16 reg)
+{
+ u16 data;
+
+ r8152_mmd_indirect(tp, dev, reg);
+ data = ocp_reg_read(tp, OCP_EEE_DATA);
+ ocp_reg_write(tp, OCP_EEE_AR, 0x0000);
+
+ return data;
+}
+
+static void r8152_mmd_write(struct r8152 *tp, u16 dev, u16 reg, u16 data)
+{
+ r8152_mmd_indirect(tp, dev, reg);
+ ocp_reg_write(tp, OCP_EEE_DATA, data);
+ ocp_reg_write(tp, OCP_EEE_AR, 0x0000);
+}
+
+static void r8152_eee_en(struct r8152 *tp, bool enable)
+{
+ u16 config1, config2, config3;
u32 ocp_data;
ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EEE_CR);
- ocp_data |= EEE_RX_EN | EEE_TX_EN;
+ config1 = ocp_reg_read(tp, OCP_EEE_CONFIG1) & ~sd_rise_time_mask;
+ config2 = ocp_reg_read(tp, OCP_EEE_CONFIG2);
+ config3 = ocp_reg_read(tp, OCP_EEE_CONFIG3) & ~fast_snr_mask;
+
+ if (enable) {
+ ocp_data |= EEE_RX_EN | EEE_TX_EN;
+ config1 |= EEE_10_CAP | EEE_NWAY_EN | TX_QUIET_EN | RX_QUIET_EN;
+ config1 |= sd_rise_time(1);
+ config2 |= RG_DACQUIET_EN | RG_LDVQUIET_EN;
+ config3 |= fast_snr(42);
+ } else {
+ ocp_data &= ~(EEE_RX_EN | EEE_TX_EN);
+ config1 &= ~(EEE_10_CAP | EEE_NWAY_EN | TX_QUIET_EN |
+ RX_QUIET_EN);
+ config1 |= sd_rise_time(7);
+ config2 &= ~(RG_DACQUIET_EN | RG_LDVQUIET_EN);
+ config3 |= fast_snr(511);
+ }
+
ocp_write_word(tp, MCU_TYPE_PLA, PLA_EEE_CR, ocp_data);
- ocp_reg_write(tp, OCP_EEE_CONFIG1, RG_TXLPI_MSK_HFDUP | RG_MATCLR_EN |
- EEE_10_CAP | EEE_NWAY_EN |
- TX_QUIET_EN | RX_QUIET_EN |
- SDRISETIME | RG_RXLPI_MSK_HFDUP |
- SDFALLTIME);
- ocp_reg_write(tp, OCP_EEE_CONFIG2, RG_LPIHYS_NUM | RG_DACQUIET_EN |
- RG_LDVQUIET_EN | RG_CKRSEL |
- RG_EEEPRG_EN);
- ocp_reg_write(tp, OCP_EEE_CONFIG3, FST_SNR_EYE_R | RG_LFS_SEL | MSK_PH);
- ocp_reg_write(tp, OCP_EEE_AR, FUN_ADDR | DEVICE_ADDR);
- ocp_reg_write(tp, OCP_EEE_DATA, EEE_ADDR);
- ocp_reg_write(tp, OCP_EEE_AR, FUN_DATA | DEVICE_ADDR);
- ocp_reg_write(tp, OCP_EEE_DATA, EEE_DATA);
- ocp_reg_write(tp, OCP_EEE_AR, 0x0000);
+ ocp_reg_write(tp, OCP_EEE_CONFIG1, config1);
+ ocp_reg_write(tp, OCP_EEE_CONFIG2, config2);
+ ocp_reg_write(tp, OCP_EEE_CONFIG3, config3);
+}
+
+static void r8152b_enable_eee(struct r8152 *tp)
+{
+ r8152_eee_en(tp, true);
+ r8152_mmd_write(tp, MDIO_MMD_AN, MDIO_AN_EEE_ADV, MDIO_EEE_100TX);
+}
+
+static void r8153_eee_en(struct r8152 *tp, bool enable)
+{
+ u32 ocp_data;
+ u16 config;
+
+ ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EEE_CR);
+ config = ocp_reg_read(tp, OCP_EEE_CFG);
+
+ if (enable) {
+ ocp_data |= EEE_RX_EN | EEE_TX_EN;
+ config |= EEE10_EN;
+ } else {
+ ocp_data &= ~(EEE_RX_EN | EEE_TX_EN);
+ config &= ~EEE10_EN;
+ }
+
+ ocp_write_word(tp, MCU_TYPE_PLA, PLA_EEE_CR, ocp_data);
+ ocp_reg_write(tp, OCP_EEE_CFG, config);
}
static void r8153_enable_eee(struct r8152 *tp)
{
- u32 ocp_data;
- u16 data;
-
- ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EEE_CR);
- ocp_data |= EEE_RX_EN | EEE_TX_EN;
- ocp_write_word(tp, MCU_TYPE_PLA, PLA_EEE_CR, ocp_data);
- data = ocp_reg_read(tp, OCP_EEE_CFG);
- data |= EEE10_EN;
- ocp_reg_write(tp, OCP_EEE_CFG, data);
- data = ocp_reg_read(tp, OCP_EEE_CFG2);
- data |= MY1000_EEE | MY100_EEE;
- ocp_reg_write(tp, OCP_EEE_CFG2, data);
+ r8153_eee_en(tp, true);
+ ocp_reg_write(tp, OCP_EEE_ADV, MDIO_EEE_1000T | MDIO_EEE_100TX);
}
static void r8152b_enable_fc(struct r8152 *tp)
@@ -2952,6 +3069,8 @@
if (test_bit(RTL8152_UNPLUG, &tp->flags))
return;
+ r8152b_disable_aldps(tp);
+
if (tp->version == RTL_VER_01) {
ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_LED_FEATURE);
ocp_data &= ~LED_MODE_MASK;
@@ -2990,6 +3109,7 @@
if (test_bit(RTL8152_UNPLUG, &tp->flags))
return;
+ r8153_disable_aldps(tp);
r8153_u1u2en(tp, false);
for (i = 0; i < 500; i++) {
@@ -3250,6 +3370,122 @@
}
}
+static int r8152_get_eee(struct r8152 *tp, struct ethtool_eee *eee)
+{
+ u32 ocp_data, lp, adv, supported = 0;
+ u16 val;
+
+ val = r8152_mmd_read(tp, MDIO_MMD_PCS, MDIO_PCS_EEE_ABLE);
+ supported = mmd_eee_cap_to_ethtool_sup_t(val);
+
+ val = r8152_mmd_read(tp, MDIO_MMD_AN, MDIO_AN_EEE_ADV);
+ adv = mmd_eee_adv_to_ethtool_adv_t(val);
+
+ val = r8152_mmd_read(tp, MDIO_MMD_AN, MDIO_AN_EEE_LPABLE);
+ lp = mmd_eee_adv_to_ethtool_adv_t(val);
+
+ ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EEE_CR);
+ ocp_data &= EEE_RX_EN | EEE_TX_EN;
+
+ eee->eee_enabled = !!ocp_data;
+ eee->eee_active = !!(supported & adv & lp);
+ eee->supported = supported;
+ eee->advertised = adv;
+ eee->lp_advertised = lp;
+
+ return 0;
+}
+
+static int r8152_set_eee(struct r8152 *tp, struct ethtool_eee *eee)
+{
+ u16 val = ethtool_adv_to_mmd_eee_adv_t(eee->advertised);
+
+ r8152_eee_en(tp, eee->eee_enabled);
+
+ if (!eee->eee_enabled)
+ val = 0;
+
+ r8152_mmd_write(tp, MDIO_MMD_AN, MDIO_AN_EEE_ADV, val);
+
+ return 0;
+}
+
+static int r8153_get_eee(struct r8152 *tp, struct ethtool_eee *eee)
+{
+ u32 ocp_data, lp, adv, supported = 0;
+ u16 val;
+
+ val = ocp_reg_read(tp, OCP_EEE_ABLE);
+ supported = mmd_eee_cap_to_ethtool_sup_t(val);
+
+ val = ocp_reg_read(tp, OCP_EEE_ADV);
+ adv = mmd_eee_adv_to_ethtool_adv_t(val);
+
+ val = ocp_reg_read(tp, OCP_EEE_LPABLE);
+ lp = mmd_eee_adv_to_ethtool_adv_t(val);
+
+ ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EEE_CR);
+ ocp_data &= EEE_RX_EN | EEE_TX_EN;
+
+ eee->eee_enabled = !!ocp_data;
+ eee->eee_active = !!(supported & adv & lp);
+ eee->supported = supported;
+ eee->advertised = adv;
+ eee->lp_advertised = lp;
+
+ return 0;
+}
+
+static int r8153_set_eee(struct r8152 *tp, struct ethtool_eee *eee)
+{
+ u16 val = ethtool_adv_to_mmd_eee_adv_t(eee->advertised);
+
+ r8153_eee_en(tp, eee->eee_enabled);
+
+ if (!eee->eee_enabled)
+ val = 0;
+
+ ocp_reg_write(tp, OCP_EEE_ADV, val);
+
+ return 0;
+}
+
+static int
+rtl_ethtool_get_eee(struct net_device *net, struct ethtool_eee *edata)
+{
+ struct r8152 *tp = netdev_priv(net);
+ int ret;
+
+ ret = usb_autopm_get_interface(tp->intf);
+ if (ret < 0)
+ goto out;
+
+ ret = tp->rtl_ops.eee_get(tp, edata);
+
+ usb_autopm_put_interface(tp->intf);
+
+out:
+ return ret;
+}
+
+static int
+rtl_ethtool_set_eee(struct net_device *net, struct ethtool_eee *edata)
+{
+ struct r8152 *tp = netdev_priv(net);
+ int ret;
+
+ ret = usb_autopm_get_interface(tp->intf);
+ if (ret < 0)
+ goto out;
+
+ ret = tp->rtl_ops.eee_set(tp, edata);
+
+ usb_autopm_put_interface(tp->intf);
+
+out:
+ return ret;
+}
+
static struct ethtool_ops ops = {
.get_drvinfo = rtl8152_get_drvinfo,
.get_settings = rtl8152_get_settings,
@@ -3262,6 +3498,8 @@
.get_strings = rtl8152_get_strings,
.get_sset_count = rtl8152_get_sset_count,
.get_ethtool_stats = rtl8152_get_ethtool_stats,
+ .get_eee = rtl_ethtool_get_eee,
+ .set_eee = rtl_ethtool_set_eee,
};
static int rtl8152_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd)
@@ -3330,6 +3568,7 @@
.ndo_do_ioctl = rtl8152_ioctl,
.ndo_start_xmit = rtl8152_start_xmit,
.ndo_tx_timeout = rtl8152_tx_timeout,
+ .ndo_set_features = rtl8152_set_features,
.ndo_set_rx_mode = rtl8152_set_rx_mode,
.ndo_set_mac_address = rtl8152_set_mac_address,
.ndo_change_mtu = rtl8152_change_mtu,
@@ -3399,18 +3638,22 @@
ops->init = r8152b_init;
ops->enable = rtl8152_enable;
ops->disable = rtl8152_disable;
- ops->up = r8152b_exit_oob;
+ ops->up = rtl8152_up;
ops->down = rtl8152_down;
ops->unload = rtl8152_unload;
+ ops->eee_get = r8152_get_eee;
+ ops->eee_set = r8152_set_eee;
ret = 0;
break;
case PRODUCT_ID_RTL8153:
ops->init = r8153_init;
ops->enable = rtl8153_enable;
- ops->disable = rtl8152_disable;
- ops->up = r8153_first_init;
+ ops->disable = rtl8153_disable;
+ ops->up = rtl8153_up;
ops->down = rtl8153_down;
ops->unload = rtl8153_unload;
+ ops->eee_get = r8153_get_eee;
+ ops->eee_set = r8153_set_eee;
ret = 0;
break;
default:
@@ -3423,10 +3666,12 @@
case PRODUCT_ID_SAMSUNG:
ops->init = r8153_init;
ops->enable = rtl8153_enable;
- ops->disable = rtl8152_disable;
- ops->up = r8153_first_init;
+ ops->disable = rtl8153_disable;
+ ops->up = rtl8153_up;
ops->down = rtl8153_down;
ops->unload = rtl8153_unload;
+ ops->eee_get = r8153_get_eee;
+ ops->eee_set = r8153_set_eee;
ret = 0;
break;
default:
@@ -3484,10 +3729,16 @@
netdev->features |= NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_FRAGLIST | NETIF_F_IPV6_CSUM |
- NETIF_F_TSO6;
+ NETIF_F_TSO6 | NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX;
netdev->hw_features = NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_FRAGLIST |
- NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
+ NETIF_F_IPV6_CSUM | NETIF_F_TSO6 |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX;
+ netdev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
+ NETIF_F_HIGHDMA | NETIF_F_FRAGLIST |
+ NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
netdev->ethtool_ops = &ops;
netif_set_gso_max_size(netdev, RTL_LIMITED_TSO_SIZE);
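
The new r8152_mmd_read()/r8152_mmd_write() helpers implement the usual indirect MMD access sequence: program the device address with the "address" opcode, latch the register number through the data port, then flip to the "data" opcode so the next data access hits the selected MMD register. The sketch below mirrors that sequence against a fake register file (the register addresses are illustrative only); in the driver the ocp_reg_* accessors go out over USB vendor requests.

    #include <stdint.h>

    #define OCP_EEE_AR   0xa41a        /* illustrative addresses only */
    #define OCP_EEE_DATA 0xa41c
    #define FUN_ADDR     0x0000
    #define FUN_DATA     0x4000

    static uint16_t fake_bus[0x10000]; /* stands in for the OCP register space */

    static void ocp_reg_write(uint16_t reg, uint16_t val) { fake_bus[reg] = val; }
    static uint16_t ocp_reg_read(uint16_t reg) { return fake_bus[reg]; }

    static void mmd_indirect(uint16_t dev, uint16_t reg)
    {
        ocp_reg_write(OCP_EEE_AR, FUN_ADDR | dev);  /* 1: device, address mode */
        ocp_reg_write(OCP_EEE_DATA, reg);           /* 2: latch register number */
        ocp_reg_write(OCP_EEE_AR, FUN_DATA | dev);  /* 3: switch to data mode */
    }

    static uint16_t mmd_read(uint16_t dev, uint16_t reg)
    {
        uint16_t data;

        mmd_indirect(dev, reg);
        data = ocp_reg_read(OCP_EEE_DATA);          /* data cycle hits the MMD reg */
        ocp_reg_write(OCP_EEE_AR, 0x0000);          /* leave indirect mode */
        return data;
    }
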
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 9359a13..3d0ce446 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -546,8 +546,8 @@
skb_put(skb, GOOD_PACKET_LEN);
hdr = skb_vnet_hdr(skb);
+ sg_init_table(rq->sg, MAX_SKB_FRAGS + 2);
sg_set_buf(rq->sg, &hdr->hdr, sizeof hdr->hdr);
-
skb_to_sgvec(skb, rq->sg + 1, 0, skb->len);
err = virtqueue_add_inbuf(rq->vq, rq->sg, 2, skb, gfp);
@@ -563,6 +563,8 @@
char *p;
int i, err, offset;
+ sg_init_table(rq->sg, MAX_SKB_FRAGS + 2);
+
/* page in rq->sg[MAX_SKB_FRAGS + 1] is list tail */
for (i = MAX_SKB_FRAGS + 1; i > 1; --i) {
first = get_a_page(rq, gfp);
@@ -899,6 +901,7 @@
if (vi->mergeable_rx_bufs)
hdr->mhdr.num_buffers = 0;
+ sg_init_table(sq->sg, MAX_SKB_FRAGS + 2);
if (can_push) {
__skb_push(skb, hdr_len);
num_sg = skb_to_sgvec(skb, sq->sg, 0, skb->len);
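
The virtio_net hunks add sg_init_table() before each reuse of the per-queue scatterlist. The hazard this guards against (stated here from general scatterlist semantics rather than this changelog): sg_set_buf() leaves any previously set end marker in place and skb_to_sgvec() marks the last entry it fills as the chain end, so a short mapping leaves a stale end marker that truncates a longer mapping later when a consumer walks the table with sg_next(). A user-space model of the failure mode and the fix:

    #include <stdbool.h>
    #include <stddef.h>

    struct sg_ent { void *buf; bool is_end; };

    static void sg_init_tbl(struct sg_ent *tbl, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            tbl[i] = (struct sg_ent){ .buf = NULL, .is_end = false };
        tbl[n - 1].is_end = true;      /* terminator, as sg_init_table() sets */
    }

    /* fill n entries; like sg_set_buf(), intermediate end bits are untouched */
    static void sg_fill(struct sg_ent *tbl, size_t n, void *buf)
    {
        for (size_t i = 0; i < n; i++)
            tbl[i].buf = buf;
        tbl[n - 1].is_end = true;      /* like sg_mark_end() */
    }

    static size_t sg_walk_len(struct sg_ent *s)
    {
        size_t n = 1;
        while (!s->is_end) {           /* a stale end bit stops the walk early */
            s++;
            n++;
        }
        return n;
    }

    int main(void)
    {
        struct sg_ent tbl[8];
        char b;

        sg_init_tbl(tbl, 8);
        sg_fill(tbl, 2, &b);           /* short mapping: end bit at entry 1 */
        sg_fill(tbl, 5, &b);           /* reuse without re-init... */
        if (sg_walk_len(tbl) != 5)     /* ...walk truncates at the stale end */
            sg_init_tbl(tbl, 8);       /* the fix: re-init before each use */
        sg_fill(tbl, 5, &b);
        return sg_walk_len(tbl) == 5 ? 0 : 1;   /* exits 0 */
    }
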
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 53c3ec1..34e102e 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -42,6 +42,7 @@
#include <net/netns/generic.h>
#include <net/vxlan.h>
#include <net/protocol.h>
+#include <net/udp_tunnel.h>
#if IS_ENABLED(CONFIG_IPV6)
#include <net/ipv6.h>
#include <net/addrconf.h>
@@ -1062,7 +1063,6 @@
spin_lock(&vn->sock_lock);
hlist_del_rcu(&vs->hlist);
- rcu_assign_sk_user_data(vs->sock->sk, NULL);
vxlan_notify_del_rx_port(vs);
spin_unlock(&vn->sock_lock);
@@ -1336,7 +1336,6 @@
}
#if IS_ENABLED(CONFIG_IPV6)
-
static struct sk_buff *vxlan_na_create(struct sk_buff *request,
struct neighbour *n, bool isrouter)
{
@@ -1570,13 +1569,6 @@
return false;
}
-static inline struct sk_buff *vxlan_handle_offloads(struct sk_buff *skb,
- bool udp_csum)
-{
- int type = udp_csum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
- return iptunnel_handle_offloads(skb, udp_csum, type);
-}
-
#if IS_ENABLED(CONFIG_IPV6)
static int vxlan6_xmit_skb(struct vxlan_sock *vs,
struct dst_entry *dst, struct sk_buff *skb,
@@ -1585,13 +1577,12 @@
__be16 src_port, __be16 dst_port, __be32 vni,
bool xnet)
{
- struct ipv6hdr *ip6h;
struct vxlanhdr *vxh;
- struct udphdr *uh;
int min_headroom;
int err;
+ bool udp_sum = !udp_get_no_check6_tx(vs->sock->sk);
- skb = vxlan_handle_offloads(skb, !udp_get_no_check6_tx(vs->sock->sk));
+ skb = udp_tunnel_handle_offloads(skb, udp_sum);
if (IS_ERR(skb))
return -EINVAL;
@@ -1619,38 +1610,8 @@
vxh->vx_flags = htonl(VXLAN_FLAGS);
vxh->vx_vni = vni;
- __skb_push(skb, sizeof(*uh));
- skb_reset_transport_header(skb);
- uh = udp_hdr(skb);
-
- uh->dest = dst_port;
- uh->source = src_port;
-
- uh->len = htons(skb->len);
-
- memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
- IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED |
- IPSKB_REROUTED);
- skb_dst_set(skb, dst);
-
- udp6_set_csum(udp_get_no_check6_tx(vs->sock->sk), skb,
- saddr, daddr, skb->len);
-
- __skb_push(skb, sizeof(*ip6h));
- skb_reset_network_header(skb);
- ip6h = ipv6_hdr(skb);
- ip6h->version = 6;
- ip6h->priority = prio;
- ip6h->flow_lbl[0] = 0;
- ip6h->flow_lbl[1] = 0;
- ip6h->flow_lbl[2] = 0;
- ip6h->payload_len = htons(skb->len);
- ip6h->nexthdr = IPPROTO_UDP;
- ip6h->hop_limit = ttl;
- ip6h->daddr = *daddr;
- ip6h->saddr = *saddr;
-
- ip6tunnel_xmit(skb, dev);
+ udp_tunnel6_xmit_skb(vs->sock, dst, skb, dev, saddr, daddr, prio,
+ ttl, src_port, dst_port);
return 0;
}
#endif
@@ -1661,11 +1622,11 @@
__be16 src_port, __be16 dst_port, __be32 vni, bool xnet)
{
struct vxlanhdr *vxh;
- struct udphdr *uh;
int min_headroom;
int err;
+ bool udp_sum = !vs->sock->sk->sk_no_check_tx;
- skb = vxlan_handle_offloads(skb, !vs->sock->sk->sk_no_check_tx);
+ skb = udp_tunnel_handle_offloads(skb, udp_sum);
if (IS_ERR(skb))
return -EINVAL;
@@ -1691,20 +1652,8 @@
vxh->vx_flags = htonl(VXLAN_FLAGS);
vxh->vx_vni = vni;
- __skb_push(skb, sizeof(*uh));
- skb_reset_transport_header(skb);
- uh = udp_hdr(skb);
-
- uh->dest = dst_port;
- uh->source = src_port;
-
- uh->len = htons(skb->len);
-
- udp_set_csum(vs->sock->sk->sk_no_check_tx, skb,
- src, dst, skb->len);
-
- return iptunnel_xmit(vs->sock->sk, rt, skb, src, dst, IPPROTO_UDP,
- tos, ttl, df, xnet);
+ return udp_tunnel_xmit_skb(vs->sock, rt, skb, src, dst, tos,
+ ttl, df, src_port, dst_port, xnet);
}
EXPORT_SYMBOL_GPL(vxlan_xmit_skb);
@@ -2333,8 +2282,7 @@
static void vxlan_del_work(struct work_struct *work)
{
struct vxlan_sock *vs = container_of(work, struct vxlan_sock, del_work);
-
- sk_release_kernel(vs->sock->sk);
+ udp_tunnel_sock_release(vs->sock);
kfree_rcu(vs, rcu);
}
@@ -2367,11 +2315,6 @@
if (err < 0)
return ERR_PTR(err);
- /* Disable multicast loopback */
- inet_sk(sock->sk)->mc_loop = 0;
-
- udp_set_convert_csum(sock->sk, true);
-
return sock;
}
@@ -2383,9 +2326,9 @@
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
struct vxlan_sock *vs;
struct socket *sock;
- struct sock *sk;
unsigned int h;
bool ipv6 = !!(flags & VXLAN_F_IPV6);
+ struct udp_tunnel_sock_cfg tunnel_cfg;
vs = kzalloc(sizeof(*vs), GFP_KERNEL);
if (!vs)
@@ -2403,11 +2346,9 @@
}
vs->sock = sock;
- sk = sock->sk;
atomic_set(&vs->refcnt, 1);
vs->rcv = rcv;
vs->data = data;
- rcu_assign_sk_user_data(vs->sock->sk, vs);
/* Initialize the vxlan udp offloads structure */
vs->udp_offloads.port = port;
@@ -2420,14 +2361,12 @@
spin_unlock(&vn->sock_lock);
/* Mark socket as an encapsulation socket. */
- udp_sk(sk)->encap_type = 1;
- udp_sk(sk)->encap_rcv = vxlan_udp_encap_recv;
-#if IS_ENABLED(CONFIG_IPV6)
- if (ipv6)
- ipv6_stub->udpv6_encap_enable();
- else
-#endif
- udp_encap_enable();
+ tunnel_cfg.sk_user_data = vs;
+ tunnel_cfg.encap_type = 1;
+ tunnel_cfg.encap_rcv = vxlan_udp_encap_recv;
+ tunnel_cfg.encap_destroy = NULL;
+
+ setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
return vs;
}
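
The vxlan conversion above deletes two open-coded copies of "push the UDP header, fill ports and length, pick a checksum policy, hand off to the IP layer" in favor of udp_tunnel_xmit_skb()/udp_tunnel6_xmit_skb(), and replaces the hand-rolled encap-socket setup with setup_udp_tunnel_sock(). The duplicated work being centralized looks roughly like the following simplified sketch; every helper here is hypothetical, and this is not the real body of the udp_tunnel helper:

    #include <stdbool.h>
    #include <stdint.h>

    struct udphdr { uint16_t source, dest, len, check; };
    struct pkt;                                 /* opaque stand-in for sk_buff */

    /* hypothetical helpers standing in for skb/IP-layer primitives */
    uint16_t htons16(uint16_t x);
    struct udphdr *push_header(struct pkt *skb, unsigned int len);
    unsigned int pkt_len(const struct pkt *skb);
    uint16_t udp_checksum(const struct pkt *skb);
    void ip_xmit(struct pkt *skb);

    static void udp_tunnel_xmit_sketch(struct pkt *skb, uint16_t sport,
                                       uint16_t dport, bool no_check)
    {
        struct udphdr *uh = push_header(skb, sizeof(*uh));

        uh->source = htons16(sport);
        uh->dest = htons16(dport);
        uh->len = htons16(pkt_len(skb));
        uh->check = no_check ? 0 : udp_checksum(skb);  /* one checksum policy */
        ip_xmit(skb);
    }
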
diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
index c1a4ade..a3b6e27 100644
--- a/drivers/net/wireless/ath/ath.h
+++ b/drivers/net/wireless/ath/ath.h
@@ -234,6 +234,7 @@
* AR9462.
 * @ATH_DBG_DFS: radar detection
* @ATH_DBG_WOW: Wake on Wireless
+ * @ATH_DBG_DYNACK: dynack handling
* @ATH_DBG_ANY: enable all debugging
*
* The debug level is used to control the amount and type of debugging output
@@ -262,6 +263,7 @@
ATH_DBG_DFS = 0x00010000,
ATH_DBG_WOW = 0x00020000,
ATH_DBG_CHAN_CTX = 0x00040000,
+ ATH_DBG_DYNACK = 0x00080000,
ATH_DBG_ANY = 0xffffffff
};
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index b858c82..1f35bd1 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -4838,7 +4838,6 @@
IEEE80211_HW_MFP_CAPABLE |
IEEE80211_HW_REPORTS_TX_ACK_STATUS |
IEEE80211_HW_HAS_RATE_CONTROL |
- IEEE80211_HW_SUPPORTS_STATIC_SMPS |
IEEE80211_HW_AP_LINK_PS |
IEEE80211_HW_SPECTRUM_MGMT;
@@ -4846,8 +4845,10 @@
* bytes is used for padding/alignment if necessary. */
ar->hw->extra_tx_headroom += sizeof(struct htt_data_tx_desc_frag)*2 + 4;
+ ar->hw->wiphy->features |= NL80211_FEATURE_STATIC_SMPS;
+
if (ar->ht_cap_info & WMI_HT_CAP_DYNAMIC_SMPS)
- ar->hw->flags |= IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS;
+ ar->hw->wiphy->features |= NL80211_FEATURE_DYNAMIC_SMPS;
if (ar->ht_cap_info & WMI_HT_CAP_ENABLED) {
ar->hw->flags |= IEEE80211_HW_AMPDU_AGGREGATION;
diff --git a/drivers/net/wireless/ath/ath5k/mac80211-ops.c b/drivers/net/wireless/ath/ath5k/mac80211-ops.c
index b65c38f..ab2709a 100644
--- a/drivers/net/wireless/ath/ath5k/mac80211-ops.c
+++ b/drivers/net/wireless/ath/ath5k/mac80211-ops.c
@@ -704,7 +704,7 @@
* reset.
*/
static void
-ath5k_set_coverage_class(struct ieee80211_hw *hw, u8 coverage_class)
+ath5k_set_coverage_class(struct ieee80211_hw *hw, s16 coverage_class)
{
struct ath5k_hw *ah = hw->priv;
diff --git a/drivers/net/wireless/ath/ath9k/Kconfig b/drivers/net/wireless/ath/ath9k/Kconfig
index b8f570e..896e632 100644
--- a/drivers/net/wireless/ath/ath9k/Kconfig
+++ b/drivers/net/wireless/ath/ath9k/Kconfig
@@ -92,6 +92,15 @@
developed. At this point enabling this option won't do anything
except increase code size.
+config ATH9K_DYNACK
+ bool "Atheros ath9k ACK timeout estimation algorithm (EXPERIMENTAL)"
+ depends on ATH9K
+ default n
+ ---help---
+	  This option enables the ath9k dynamic ACK timeout estimation
+	  algorithm, based on the ACK frame RX timestamp, the TX frame
+	  timestamp and the frame duration.
+
config ATH9K_TX99
bool "Atheros ath9k TX99 testing support"
depends on ATH9K_DEBUGFS && CFG80211_CERTIFICATION_ONUS
diff --git a/drivers/net/wireless/ath/ath9k/Makefile b/drivers/net/wireless/ath/ath9k/Makefile
index 6b4020a..73704c1 100644
--- a/drivers/net/wireless/ath/ath9k/Makefile
+++ b/drivers/net/wireless/ath/ath9k/Makefile
@@ -49,6 +49,9 @@
ath9k_hw-$(CONFIG_ATH9K_BTCOEX_SUPPORT) += btcoex.o \
ar9003_mci.o
+
+ath9k_hw-$(CONFIG_ATH9K_DYNACK) += dynack.o
+
obj-$(CONFIG_ATH9K_HW) += ath9k_hw.o
obj-$(CONFIG_ATH9K_COMMON) += ath9k_common.o
diff --git a/drivers/net/wireless/ath/ath9k/ar9002_mac.c b/drivers/net/wireless/ath/ath9k/ar9002_mac.c
index 59af9f9..669cb37 100644
--- a/drivers/net/wireless/ath/ath9k/ar9002_mac.c
+++ b/drivers/net/wireless/ath/ath9k/ar9002_mac.c
@@ -381,6 +381,13 @@
ts->evm1 = ads->AR_TxEVM1;
ts->evm2 = ads->AR_TxEVM2;
+ status = ACCESS_ONCE(ads->ds_ctl4);
+ ts->duration[0] = MS(status, AR_PacketDur0);
+ ts->duration[1] = MS(status, AR_PacketDur1);
+ status = ACCESS_ONCE(ads->ds_ctl5);
+ ts->duration[2] = MS(status, AR_PacketDur2);
+ ts->duration[3] = MS(status, AR_PacketDur3);
+
return 0;
}
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mac.c b/drivers/net/wireless/ath/ath9k/ar9003_mac.c
index 71e38e8..e5f7c11 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_mac.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_mac.c
@@ -355,9 +355,11 @@
struct ath_tx_status *ts)
{
struct ar9003_txs *ads;
+ struct ar9003_txc *adc;
u32 status;
ads = &ah->ts_ring[ah->ts_tail];
+ adc = (struct ar9003_txc *)ads;
status = ACCESS_ONCE(ads->status8);
if ((status & AR_TxDone) == 0)
@@ -426,6 +428,13 @@
ts->ts_rssi_ext1 = MS(status, AR_TxRSSIAnt11);
ts->ts_rssi_ext2 = MS(status, AR_TxRSSIAnt12);
+ status = ACCESS_ONCE(adc->ctl15);
+ ts->duration[0] = MS(status, AR_PacketDur0);
+ ts->duration[1] = MS(status, AR_PacketDur1);
+ status = ACCESS_ONCE(adc->ctl16);
+ ts->duration[2] = MS(status, AR_PacketDur2);
+ ts->duration[3] = MS(status, AR_PacketDur3);
+
memset(ads, 0, sizeof(*ads));
return 0;
diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
index c690601..8cd116e 100644
--- a/drivers/net/wireless/ath/ath9k/ath9k.h
+++ b/drivers/net/wireless/ath/ath9k/ath9k.h
@@ -274,6 +274,9 @@
struct ath_rx_rate_stats rx_rate_stats;
#endif
u8 key_idx[4];
+
+ u32 ackto;
+ struct list_head list;
};
struct ath_tx_control {
@@ -314,7 +317,6 @@
bool discard_next;
u32 *rxlink;
u32 num_pkts;
- unsigned int rxfilter;
struct list_head rxbuf;
struct ath_descdma rxdma;
struct ath_rx_edma rx_edma[ATH9K_RX_QUEUE_MAX];
@@ -350,6 +352,9 @@
bool active;
bool assigned;
bool switch_after_beacon;
+
+ short nvifs;
+ unsigned int rxfilter;
};
enum ath_chanctx_event {
@@ -376,6 +381,9 @@
struct ath_chanctx_sched {
bool beacon_pending;
bool offchannel_pending;
+ bool wait_switch;
+ bool force_noa_update;
+ bool extend_absence;
enum ath_chanctx_state state;
u8 beacon_miss;
@@ -449,7 +457,7 @@
void ath9k_chanctx_wake_queues(struct ath_softc *sc);
void ath_chanctx_check_active(struct ath_softc *sc, struct ath_chanctx *ctx);
-void ath_chanctx_beacon_recv_ev(struct ath_softc *sc, u32 ts,
+void ath_chanctx_beacon_recv_ev(struct ath_softc *sc,
enum ath_chanctx_event ev);
void ath_chanctx_beacon_sent_ev(struct ath_softc *sc,
enum ath_chanctx_event ev);
@@ -478,7 +486,7 @@
static inline void ath9k_deinit_channel_context(struct ath_softc *sc)
{
}
-static inline void ath_chanctx_beacon_recv_ev(struct ath_softc *sc, u32 ts,
+static inline void ath_chanctx_beacon_recv_ev(struct ath_softc *sc,
enum ath_chanctx_event ev)
{
}
@@ -527,7 +535,7 @@
#endif /* CONFIG_ATH9K_CHANNEL_CONTEXT */
int ath_reset_internal(struct ath_softc *sc, struct ath9k_channel *hchan);
-int ath_startrecv(struct ath_softc *sc);
+void ath_startrecv(struct ath_softc *sc);
bool ath_stoprecv(struct ath_softc *sc);
u32 ath_calcrxfilter(struct ath_softc *sc);
int ath_rx_init(struct ath_softc *sc, int nbufs);
@@ -572,6 +580,8 @@
/* VIFs */
/********/
+#define P2P_DEFAULT_CTWIN 10
+
struct ath_vif {
struct list_head list;
@@ -590,8 +600,10 @@
u32 offchannel_start;
u32 offchannel_duration;
- u32 periodic_noa_start;
- u32 periodic_noa_duration;
+ /* These are used for both periodic and one-shot */
+ u32 noa_start;
+ u32 noa_duration;
+ bool periodic_noa;
};
struct ath9k_vif_iter_data {
@@ -960,7 +972,6 @@
bool ps_enabled;
bool ps_idle;
short nbcnvifs;
- short nvifs;
unsigned long ps_usecount;
struct ath_rx rx;
diff --git a/drivers/net/wireless/ath/ath9k/beacon.c b/drivers/net/wireless/ath/ath9k/beacon.c
index b2f56d8..a6af855 100644
--- a/drivers/net/wireless/ath/ath9k/beacon.c
+++ b/drivers/net/wireless/ath/ath9k/beacon.c
@@ -183,7 +183,7 @@
spin_unlock_bh(&cabq->axq_lock);
if (skb && cabq_depth) {
- if (sc->nvifs > 1) {
+ if (sc->cur_chan->nvifs > 1) {
ath_dbg(common, BEACON,
"Flushing previous cabq traffic\n");
ath_draintxq(sc, cabq);
@@ -514,6 +514,18 @@
struct ieee80211_vif *vif)
{
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
+ struct ath_vif *avp = (void *)vif->drv_priv;
+
+ if (ath9k_is_chanctx_enabled()) {
+ /*
+ * If the VIF is not present in the current channel context,
+ * then we can't do the usual opmode checks. Allow the
+ * beacon config for the VIF to be updated in this case and
+ * return immediately.
+ */
+ if (sc->cur_chan != avp->chanctx)
+ return true;
+ }
if (sc->sc_ah->opmode == NL80211_IFTYPE_AP) {
if ((vif->type != NL80211_IFTYPE_AP) ||
diff --git a/drivers/net/wireless/ath/ath9k/channel.c b/drivers/net/wireless/ath/ath9k/channel.c
index 409f912..77c99eb5 100644
--- a/drivers/net/wireless/ath/ath9k/channel.c
+++ b/drivers/net/wireless/ath/ath9k/channel.c
@@ -83,8 +83,6 @@
if (hw->conf.radar_enabled) {
u32 rxfilter;
- /* set HW specific DFS configuration */
- ath9k_hw_set_radar_params(ah);
rxfilter = ath9k_hw_getrxfilter(ah);
rxfilter |= ATH9K_RX_FILTER_PHYRADAR |
ATH9K_RX_FILTER_PHYERR;
@@ -262,6 +260,9 @@
cur = sc->cur_chan;
prev = ath_chanctx_get_next(sc, cur);
+ if (!prev->switch_after_beacon)
+ return;
+
getrawmonotonic(&ts);
cur_tsf = (u32) cur->tsf_val +
ath9k_hw_get_tsf_offset(&cur->tsf_ts, &ts);
@@ -310,7 +311,6 @@
struct ath_chanctx *ctx;
u32 tsf_time;
u32 beacon_int;
- bool noa_changed = false;
if (vif)
avp = (struct ath_vif *) vif->drv_priv;
@@ -333,7 +333,7 @@
break;
}
- if (sc->sched.offchannel_pending) {
+ if (sc->sched.offchannel_pending && !sc->sched.wait_switch) {
sc->sched.offchannel_pending = false;
sc->next_chan = &sc->offchannel.chan;
sc->sched.state = ATH_CHANCTX_STATE_WAIT_FOR_BEACON;
@@ -372,44 +372,91 @@
sc->sched.switch_start_time = tsf_time;
sc->cur_chan->last_beacon = sc->sched.next_tbtt;
- /* Prevent wrap-around issues */
- if (avp->periodic_noa_duration &&
- tsf_time - avp->periodic_noa_start > BIT(30))
- avp->periodic_noa_duration = 0;
+ /*
+ * If an offchannel switch is scheduled to happen after
+ * a beacon transmission, update the NoA with one-shot
+ * values and increment the index.
+ */
+ if (sc->next_chan == &sc->offchannel.chan) {
+ avp->noa_index++;
+ avp->offchannel_start = tsf_time;
+ avp->offchannel_duration = sc->sched.offchannel_duration;
- if (ctx->active && !avp->periodic_noa_duration) {
- avp->periodic_noa_start = tsf_time;
- avp->periodic_noa_duration =
- TU_TO_USEC(cur_conf->beacon_interval) / 2 -
- sc->sched.channel_switch_time;
- noa_changed = true;
- } else if (!ctx->active && avp->periodic_noa_duration) {
- avp->periodic_noa_duration = 0;
- noa_changed = true;
+ ath_dbg(common, CHAN_CTX,
+ "offchannel noa_duration: %d, noa_start: %d, noa_index: %d\n",
+ avp->offchannel_duration,
+ avp->offchannel_start,
+ avp->noa_index);
+
+ /*
+ * When multiple contexts are active, the NoA
+ * has to be recalculated and advertised after
+ * an offchannel operation.
+ */
+ if (ctx->active && avp->noa_duration)
+ avp->noa_duration = 0;
+
+ break;
+ }
+
+ /*
+ * Clear the extend_absence flag if it had been
+ * set during the previous beacon transmission,
+ * since we need to revert to the normal NoA
+ * schedule.
+ */
+ if (ctx->active && sc->sched.extend_absence) {
+ avp->noa_duration = 0;
+ sc->sched.extend_absence = false;
}
/* If at least two consecutive beacons were missed on the STA
* chanctx, stay on the STA channel for one extra beacon period,
* to resync the timer properly.
*/
- if (ctx->active && sc->sched.beacon_miss >= 2)
- sc->sched.offchannel_duration = 3 * beacon_int / 2;
-
- if (sc->sched.offchannel_duration) {
- noa_changed = true;
- avp->offchannel_start = tsf_time;
- avp->offchannel_duration =
- sc->sched.offchannel_duration;
+ if (ctx->active && sc->sched.beacon_miss >= 2) {
+ avp->noa_duration = 0;
+ sc->sched.extend_absence = true;
}
- if (noa_changed)
- avp->noa_index++;
+ /* Prevent wrap-around issues */
+ if (avp->noa_duration && tsf_time - avp->noa_start > BIT(30))
+ avp->noa_duration = 0;
- ath_dbg(common, CHAN_CTX,
- "periodic_noa_duration: %d, periodic_noa_start: %d, noa_index: %d\n",
- avp->periodic_noa_duration,
- avp->periodic_noa_start,
- avp->noa_index);
+ /*
+ * If multiple contexts are active, start periodic
+ * NoA and increment the index for the first
+ * announcement.
+ */
+ if (ctx->active &&
+ (!avp->noa_duration || sc->sched.force_noa_update)) {
+ avp->noa_index++;
+ avp->noa_start = tsf_time;
+
+ if (sc->sched.extend_absence)
+ avp->noa_duration = (3 * beacon_int / 2) +
+ sc->sched.channel_switch_time;
+ else
+ avp->noa_duration =
+ TU_TO_USEC(cur_conf->beacon_interval) / 2 +
+ sc->sched.channel_switch_time;
+
+ if (test_bit(ATH_OP_SCANNING, &common->op_flags) ||
+ sc->sched.extend_absence)
+ avp->periodic_noa = false;
+ else
+ avp->periodic_noa = true;
+
+ ath_dbg(common, CHAN_CTX,
+ "noa_duration: %d, noa_start: %d, noa_index: %d, periodic: %d\n",
+ avp->noa_duration,
+ avp->noa_start,
+ avp->noa_index,
+ avp->periodic_noa);
+ }
+
+ if (ctx->active && sc->sched.force_noa_update)
+ sc->sched.force_noa_update = false;
break;
case ATH_CHANCTX_EVENT_BEACON_SENT:
@@ -490,9 +537,11 @@
"Move chanctx state to WAIT_FOR_TIMER (event SWITCH)\n");
sc->sched.state = ATH_CHANCTX_STATE_WAIT_FOR_TIMER;
+ sc->sched.wait_switch = false;
tsf_time = TU_TO_USEC(cur_conf->beacon_interval) / 2;
- if (sc->sched.beacon_miss >= 2) {
+
+ if (sc->sched.extend_absence) {
sc->sched.beacon_miss = 0;
tsf_time *= 3;
}
@@ -560,10 +609,9 @@
ath_chanctx_event(sc, NULL, ev);
}
-void ath_chanctx_beacon_recv_ev(struct ath_softc *sc, u32 ts,
+void ath_chanctx_beacon_recv_ev(struct ath_softc *sc,
enum ath_chanctx_event ev)
{
- sc->sched.next_tbtt = ts;
ath_chanctx_event(sc, NULL, ev);
}
@@ -587,8 +635,18 @@
if (test_bit(ATH_OP_MULTI_CHANNEL, &common->op_flags) &&
(sc->cur_chan != ctx) && (ctx == &sc->offchannel.chan)) {
+ if (chandef)
+ ctx->chandef = *chandef;
+
sc->sched.offchannel_pending = true;
+ sc->sched.wait_switch = true;
+ sc->sched.offchannel_duration =
+ jiffies_to_usecs(sc->offchannel.duration) +
+ sc->sched.channel_switch_time;
+
spin_unlock_bh(&sc->chan_lock);
+ ath_dbg(common, CHAN_CTX,
+ "Set offchannel_pending to true\n");
return;
}
@@ -601,7 +659,7 @@
if (sc->next_chan == &sc->offchannel.chan) {
sc->sched.offchannel_duration =
- TU_TO_USEC(sc->offchannel.duration) +
+ jiffies_to_usecs(sc->offchannel.duration) +
sc->sched.channel_switch_time;
if (chandef) {
@@ -688,7 +746,8 @@
} else if (sc->offchannel.roc_vif) {
vif = sc->offchannel.roc_vif;
sc->offchannel.chan.txpower = vif->bss_conf.txpower;
- sc->offchannel.duration = sc->offchannel.roc_duration;
+ sc->offchannel.duration =
+ msecs_to_jiffies(sc->offchannel.roc_duration);
sc->offchannel.state = ATH_OFFCHANNEL_ROC_START;
ath_chanctx_offchan_switch(sc, sc->offchannel.roc_chan);
} else {
@@ -724,6 +783,10 @@
sc->offchannel.state = ATH_OFFCHANNEL_IDLE;
ieee80211_scan_completed(sc->hw, abort);
clear_bit(ATH_OP_SCANNING, &common->op_flags);
+ spin_lock_bh(&sc->chan_lock);
+ if (test_bit(ATH_OP_MULTI_CHANNEL, &common->op_flags))
+ sc->sched.force_noa_update = true;
+ spin_unlock_bh(&sc->chan_lock);
ath_offchannel_next(sc);
ath9k_ps_restore(sc);
}
@@ -959,8 +1022,8 @@
break;
sc->offchannel.state = ATH_OFFCHANNEL_ROC_WAIT;
- mod_timer(&sc->offchannel.timer, jiffies +
- msecs_to_jiffies(sc->offchannel.duration));
+ mod_timer(&sc->offchannel.timer,
+ jiffies + sc->offchannel.duration);
ieee80211_ready_on_channel(sc->hw);
break;
case ATH_OFFCHANNEL_ROC_DONE:
@@ -1022,7 +1085,10 @@
sc->cur_chan = sc->next_chan;
sc->cur_chan->stopped = false;
sc->next_chan = NULL;
- sc->sched.offchannel_duration = 0;
+
+ if (!sc->sched.offchannel_pending)
+ sc->sched.offchannel_duration = 0;
+
if (sc->sched.state != ATH_CHANCTX_STATE_FORCE_ACTIVE)
sc->sched.state = ATH_CHANCTX_STATE_IDLE;
@@ -1165,6 +1231,30 @@
ath9k_update_p2p_ps_timer(sc, avp);
}
+static u8 ath9k_get_ctwin(struct ath_softc *sc, struct ath_vif *avp)
+{
+ struct ath_beacon_config *cur_conf = &sc->cur_chan->beacon;
+ u8 switch_time, ctwin;
+
+ /*
+ * Channel switch in multi-channel mode is deferred
+ * by a quarter beacon interval when handling
+ * ATH_CHANCTX_EVENT_BEACON_PREPARE, so the P2P-GO
+ * interface is guaranteed to be discoverable
+ * for that duration after a TBTT.
+ */
+ switch_time = cur_conf->beacon_interval / 4;
+
+ ctwin = avp->vif->bss_conf.p2p_noa_attr.oppps_ctwindow;
+ if (ctwin && (ctwin < switch_time))
+ return ctwin;
+
+ if (switch_time < P2P_DEFAULT_CTWIN)
+ return 0;
+
+ return P2P_DEFAULT_CTWIN;
+}
+
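/*
 * Worked example (editor's note, not part of this patch): with a
 * beacon interval of 100 TU, switch_time is 25 TU.  A configured
 * oppps_ctwindow of, say, 10 TU is below that and is advertised
 * as-is; with no CTWindow configured, the driver falls back to
 * P2P_DEFAULT_CTWIN (defined elsewhere in ath9k) as long as it
 * still fits within switch_time, and advertises no CTWindow
 * otherwise.
 */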
void ath9k_beacon_add_noa(struct ath_softc *sc, struct ath_vif *avp,
struct sk_buff *skb)
{
@@ -1182,10 +1272,10 @@
int noa_len, noa_desc, i = 0;
u8 *hdr;
- if (!avp->offchannel_duration && !avp->periodic_noa_duration)
+ if (!avp->offchannel_duration && !avp->noa_duration)
return;
- noa_desc = !!avp->offchannel_duration + !!avp->periodic_noa_duration;
+ noa_desc = !!avp->offchannel_duration + !!avp->noa_duration;
noa_len = 2 + sizeof(struct ieee80211_p2p_noa_desc) * noa_desc;
hdr = skb_put(skb, sizeof(noa_ie_hdr));
@@ -1197,13 +1287,19 @@
memset(noa, 0, noa_len);
noa->index = avp->noa_index;
- if (avp->periodic_noa_duration) {
- u32 interval = TU_TO_USEC(sc->cur_chan->beacon.beacon_interval);
+ noa->oppps_ctwindow = ath9k_get_ctwin(sc, avp);
- noa->desc[i].count = 255;
- noa->desc[i].start_time = cpu_to_le32(avp->periodic_noa_start);
- noa->desc[i].duration = cpu_to_le32(avp->periodic_noa_duration);
- noa->desc[i].interval = cpu_to_le32(interval);
+ if (avp->noa_duration) {
+ if (avp->periodic_noa) {
+ u32 interval = TU_TO_USEC(sc->cur_chan->beacon.beacon_interval);
+ noa->desc[i].count = 255;
+ noa->desc[i].interval = cpu_to_le32(interval);
+ } else {
+ noa->desc[i].count = 1;
+ }
+
+ noa->desc[i].start_time = cpu_to_le32(avp->noa_start);
+ noa->desc[i].duration = cpu_to_le32(avp->noa_duration);
i++;
}
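/*
 * Editor's note: the two NoA flavours above differ only in the
 * descriptor header.  A periodic absence is encoded with count = 255
 * (continuous) and interval = beacon interval in usec, while a
 * one-shot absence uses count = 1 and leaves the interval at zero;
 * start_time and duration come from avp->noa_start / avp->noa_duration
 * in both cases.
 */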
diff --git a/drivers/net/wireless/ath/ath9k/common-beacon.c b/drivers/net/wireless/ath/ath9k/common-beacon.c
index 733be51..6ad4447 100644
--- a/drivers/net/wireless/ath/ath9k/common-beacon.c
+++ b/drivers/net/wireless/ath/ath9k/common-beacon.c
@@ -57,7 +57,7 @@
struct ath9k_beacon_state *bs)
{
struct ath_common *common = ath9k_hw_common(ah);
- int dtim_intval, sleepduration;
+ int dtim_intval;
u64 tsf;
/* No need to configure beacon if we are not associated */
@@ -75,7 +75,6 @@
* last beacon we received (which may be none).
*/
dtim_intval = conf->intval * conf->dtim_period;
- sleepduration = ah->hw->conf.listen_interval * conf->intval;
/*
* Pull nexttbtt forward to reflect the current
@@ -113,7 +112,7 @@
*/
bs->bs_sleepduration = TU_TO_USEC(roundup(IEEE80211_MS_TO_TU(100),
- sleepduration));
+ conf->intval));
if (bs->bs_sleepduration > bs->bs_dtimperiod)
bs->bs_sleepduration = bs->bs_dtimperiod;
diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
index d227936..46f20a3 100644
--- a/drivers/net/wireless/ath/ath9k/debug.c
+++ b/drivers/net/wireless/ath/ath9k/debug.c
@@ -838,7 +838,7 @@
iter_data.nmeshes, iter_data.nwds);
len += scnprintf(buf + len, sizeof(buf) - len,
" ADHOC: %i TOTAL: %hi BEACON-VIF: %hi\n",
- iter_data.nadhocs, sc->nvifs, sc->nbcnvifs);
+ iter_data.nadhocs, sc->cur_chan->nvifs, sc->nbcnvifs);
}
if (len > sizeof(buf))
@@ -1169,6 +1169,29 @@
};
#endif
+#ifdef CONFIG_ATH9K_DYNACK
+static ssize_t read_file_ackto(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct ath_softc *sc = file->private_data;
+ struct ath_hw *ah = sc->sc_ah;
+ char buf[32];
+ unsigned int len;
+
+ len = sprintf(buf, "%u %c\n", ah->dynack.ackto,
+ (ah->dynack.enabled) ? 'A' : 'S');
+
+ return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+}
+
+static const struct file_operations fops_ackto = {
+ .read = read_file_ackto,
+ .open = simple_open,
+ .owner = THIS_MODULE,
+ .llseek = default_llseek,
+};
+#endif
+
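/*
 * Editor's note: the "ack_to" debugfs file backed by these fops prints
 * the current ACK timeout followed by 'A' when dynack estimation is
 * active or 'S' when a static timeout is in use, e.g. "89 S" for the
 * default 89 us timeout.
 */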
/* Ethtool support for get-stats */
#define AMKSTR(nm) #nm "_BE", #nm "_BK", #nm "_VI", #nm "_VO"
@@ -1374,5 +1397,10 @@
&fops_btcoex);
#endif
+#ifdef CONFIG_ATH9K_DYNACK
+ debugfs_create_file("ack_to", S_IRUSR | S_IWUSR, sc->debug.debugfs_phy,
+ sc, &fops_ackto);
+#endif
+
return 0;
}
diff --git a/drivers/net/wireless/ath/ath9k/dynack.c b/drivers/net/wireless/ath/ath9k/dynack.c
new file mode 100644
index 0000000..6ae8e0b
--- /dev/null
+++ b/drivers/net/wireless/ath/ath9k/dynack.c
@@ -0,0 +1,351 @@
+/*
+ * Copyright (c) 2014, Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include "ath9k.h"
+#include "hw.h"
+#include "dynack.h"
+
+#define COMPUTE_TO (5 * HZ)
+#define LATEACK_DELAY (10 * HZ)
+#define LATEACK_TO 256
+#define MAX_DELAY 300
+#define EWMA_LEVEL 96
+#define EWMA_DIV 128
+
+/**
+ * ath_dynack_ewma - exponentially weighted moving average (EWMA) computation
+ * @old: previous EWMA value
+ * @new: new sample
+ */
+static inline u32 ath_dynack_ewma(u32 old, u32 new)
+{
+ return (new * (EWMA_DIV - EWMA_LEVEL) + old * EWMA_LEVEL) / EWMA_DIV;
+}
+
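/*
 * Worked example (editor's note, not part of this patch): with
 * EWMA_LEVEL = 96 and EWMA_DIV = 128, a new sample contributes
 * (128 - 96) / 128 = 25% and the old average 96 / 128 = 75%, e.g.
 *
 *	ath_dynack_ewma(100, 20) = (20 * 32 + 100 * 96) / 128 = 80
 *
 * so the estimate moves a quarter of the way toward each new sample.
 */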
+/**
+ * ath_dynack_get_sifs - get SIFS time based on the PHY in use
+ * @ah: ath hw
+ * @phy: phy in use
+ */
+static inline u32 ath_dynack_get_sifs(struct ath_hw *ah, int phy)
+{
+ u32 sifs = CCK_SIFS_TIME;
+
+ if (phy == WLAN_RC_PHY_OFDM) {
+ if (IS_CHAN_QUARTER_RATE(ah->curchan))
+ sifs = OFDM_SIFS_TIME_QUARTER;
+ else if (IS_CHAN_HALF_RATE(ah->curchan))
+ sifs = OFDM_SIFS_TIME_HALF;
+ else
+ sifs = OFDM_SIFS_TIME;
+ }
+ return sifs;
+}
+
+/**
+ * ath_dynack_bssidmask - filter out ACK frames based on BSSID mask
+ * @ah: ath hw
+ * @mac: receiver address
+ */
+static inline bool ath_dynack_bssidmask(struct ath_hw *ah, const u8 *mac)
+{
+ int i;
+ struct ath_common *common = ath9k_hw_common(ah);
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ if ((common->macaddr[i] & common->bssidmask[i]) !=
+ (mac[i] & common->bssidmask[i]))
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * ath_dynack_compute_ackto - compute ACK timeout as the maximum STA timeout
+ * @ah: ath hw
+ *
+ * Should be called while holding qlock.
+ */
+static void ath_dynack_compute_ackto(struct ath_hw *ah)
+{
+ struct ath_node *an;
+ u32 to = 0;
+ struct ath_dynack *da = &ah->dynack;
+ struct ath_common *common = ath9k_hw_common(ah);
+
+ list_for_each_entry(an, &da->nodes, list)
+ if (an->ackto > to)
+ to = an->ackto;
+
+ if (to && da->ackto != to) {
+ u32 slottime;
+
+ slottime = (to - 3) / 2;
+ da->ackto = to;
+ ath_dbg(common, DYNACK, "ACK timeout %u slottime %u\n",
+ da->ackto, slottime);
+ ath9k_hw_setslottime(ah, slottime);
+ ath9k_hw_set_ack_timeout(ah, da->ackto);
+ ath9k_hw_set_cts_timeout(ah, da->ackto);
+ }
+}
+
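/*
 * Worked example (editor's note, not part of this patch): the maximum
 * per-node timeout drives both HW timeouts and the slot time.  With
 * the default ackto of 9 + 16 + 64 = 89 us (see ath_dynack_reset()),
 * slottime = (89 - 3) / 2 = 43 us is programmed alongside the 89 us
 * ACK/CTS timeouts.
 */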
+/**
+ * ath_dynack_compute_to - compute STA ACK timeout
+ * @ah: ath hw
+ *
+ * Should be called while holding qlock.
+ */
+static void ath_dynack_compute_to(struct ath_hw *ah)
+{
+ u32 ackto, ack_ts;
+ u8 *dst, *src;
+ struct ieee80211_sta *sta;
+ struct ath_node *an;
+ struct ts_info *st_ts;
+ struct ath_dynack *da = &ah->dynack;
+
+ rcu_read_lock();
+
+ while (da->st_rbf.h_rb != da->st_rbf.t_rb &&
+ da->ack_rbf.h_rb != da->ack_rbf.t_rb) {
+ ack_ts = da->ack_rbf.tstamp[da->ack_rbf.h_rb];
+ st_ts = &da->st_rbf.ts[da->st_rbf.h_rb];
+ dst = da->st_rbf.addr[da->st_rbf.h_rb].h_dest;
+ src = da->st_rbf.addr[da->st_rbf.h_rb].h_src;
+
+ ath_dbg(ath9k_hw_common(ah), DYNACK,
+ "ack_ts %u st_ts %u st_dur %u [%u-%u]\n",
+ ack_ts, st_ts->tstamp, st_ts->dur,
+ da->ack_rbf.h_rb, da->st_rbf.h_rb);
+
+ if (ack_ts > st_ts->tstamp + st_ts->dur) {
+ ackto = ack_ts - st_ts->tstamp - st_ts->dur;
+
+ if (ackto < MAX_DELAY) {
+ sta = ieee80211_find_sta_by_ifaddr(ah->hw, dst,
+ src);
+ if (sta) {
+ an = (struct ath_node *)sta->drv_priv;
+ an->ackto = ath_dynack_ewma(an->ackto,
+ ackto);
+ ath_dbg(ath9k_hw_common(ah), DYNACK,
+ "%pM to %u\n", dst, an->ackto);
+ if (time_is_before_jiffies(da->lto)) {
+ ath_dynack_compute_ackto(ah);
+ da->lto = jiffies + COMPUTE_TO;
+ }
+ }
+ INCR(da->ack_rbf.h_rb, ATH_DYN_BUF);
+ }
+ INCR(da->st_rbf.h_rb, ATH_DYN_BUF);
+ } else {
+ INCR(da->ack_rbf.h_rb, ATH_DYN_BUF);
+ }
+ }
+
+ rcu_read_unlock();
+}
+
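/*
 * Worked example (editor's note): the loop pairs each TX status sample
 * with the next ACK timestamp.  If a frame's status reports
 * tstamp = 1000 and dur = 300 and the matching ACK arrives at 1350,
 * then ackto = 1350 - 1000 - 300 = 50 us, which is below MAX_DELAY and
 * is folded into the per-station estimate via ath_dynack_ewma().
 */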
+/**
+ * ath_dynack_sample_tx_ts - status timestamp sampling method
+ * @ah: ath hw
+ * @skb: socket buffer
+ * @ts: tx status info
+ */
+void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb,
+ struct ath_tx_status *ts)
+{
+ u8 ridx;
+ struct ieee80211_hdr *hdr;
+ struct ath_dynack *da = &ah->dynack;
+ struct ath_common *common = ath9k_hw_common(ah);
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+
+ if ((info->flags & IEEE80211_TX_CTL_NO_ACK) || !da->enabled)
+ return;
+
+ spin_lock_bh(&da->qlock);
+
+ hdr = (struct ieee80211_hdr *)skb->data;
+
+ /* late ACK */
+ if (ts->ts_status & ATH9K_TXERR_XRETRY) {
+ if (ieee80211_is_assoc_req(hdr->frame_control) ||
+ ieee80211_is_assoc_resp(hdr->frame_control)) {
+ ath_dbg(common, DYNACK, "late ack\n");
+ ath9k_hw_setslottime(ah, (LATEACK_TO - 3) / 2);
+ ath9k_hw_set_ack_timeout(ah, LATEACK_TO);
+ ath9k_hw_set_cts_timeout(ah, LATEACK_TO);
+ da->lto = jiffies + LATEACK_DELAY;
+ }
+
+ spin_unlock_bh(&da->qlock);
+ return;
+ }
+
+ ridx = ts->ts_rateindex;
+
+ da->st_rbf.ts[da->st_rbf.t_rb].tstamp = ts->ts_tstamp;
+ da->st_rbf.ts[da->st_rbf.t_rb].dur = ts->duration[ts->ts_rateindex];
+ ether_addr_copy(da->st_rbf.addr[da->st_rbf.t_rb].h_dest, hdr->addr1);
+ ether_addr_copy(da->st_rbf.addr[da->st_rbf.t_rb].h_src, hdr->addr2);
+
+ if (!(info->status.rates[ridx].flags & IEEE80211_TX_RC_MCS)) {
+ u32 phy, sifs;
+ const struct ieee80211_rate *rate;
+ struct ieee80211_tx_rate *rates = info->status.rates;
+
+ rate = &common->sbands[info->band].bitrates[rates[ridx].idx];
+ if (info->band == IEEE80211_BAND_2GHZ &&
+ !(rate->flags & IEEE80211_RATE_ERP_G))
+ phy = WLAN_RC_PHY_CCK;
+ else
+ phy = WLAN_RC_PHY_OFDM;
+
+ sifs = ath_dynack_get_sifs(ah, phy);
+ da->st_rbf.ts[da->st_rbf.t_rb].dur -= sifs;
+ }
+
+ ath_dbg(common, DYNACK, "{%pM} tx sample %u [dur %u][h %u-t %u]\n",
+ hdr->addr1, da->st_rbf.ts[da->st_rbf.t_rb].tstamp,
+ da->st_rbf.ts[da->st_rbf.t_rb].dur, da->st_rbf.h_rb,
+ (da->st_rbf.t_rb + 1) % ATH_DYN_BUF);
+
+ INCR(da->st_rbf.t_rb, ATH_DYN_BUF);
+ if (da->st_rbf.t_rb == da->st_rbf.h_rb)
+ INCR(da->st_rbf.h_rb, ATH_DYN_BUF);
+
+ ath_dynack_compute_to(ah);
+
+ spin_unlock_bh(&da->qlock);
+}
+EXPORT_SYMBOL(ath_dynack_sample_tx_ts);
+
+/**
+ * ath_dynack_sample_ack_ts - ACK timestamp sampling method
+ * @ah: ath hw
+ * @skb: socket buffer
+ * @ts: rx timestamp
+ */
+void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb,
+ u32 ts)
+{
+ struct ath_dynack *da = &ah->dynack;
+ struct ath_common *common = ath9k_hw_common(ah);
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+
+ if (!ath_dynack_bssidmask(ah, hdr->addr1) || !da->enabled)
+ return;
+
+ spin_lock_bh(&da->qlock);
+ da->ack_rbf.tstamp[da->ack_rbf.t_rb] = ts;
+
+ ath_dbg(common, DYNACK, "rx sample %u [h %u-t %u]\n",
+ da->ack_rbf.tstamp[da->ack_rbf.t_rb],
+ da->ack_rbf.h_rb, (da->ack_rbf.t_rb + 1) % ATH_DYN_BUF);
+
+ INCR(da->ack_rbf.t_rb, ATH_DYN_BUF);
+ if (da->ack_rbf.t_rb == da->ack_rbf.h_rb)
+ INCR(da->ack_rbf.h_rb, ATH_DYN_BUF);
+
+ ath_dynack_compute_to(ah);
+
+ spin_unlock_bh(&da->qlock);
+}
+EXPORT_SYMBOL(ath_dynack_sample_ack_ts);
+
+/**
+ * ath_dynack_node_init - init ath_node related info
+ * @ah: ath hw
+ * @an: ath node
+ */
+void ath_dynack_node_init(struct ath_hw *ah, struct ath_node *an)
+{
+ /* ackto = slottime + sifs + air delay */
+ u32 ackto = ATH9K_SLOT_TIME_9 + 16 + 64;
+ struct ath_dynack *da = &ah->dynack;
+
+ an->ackto = ackto;
+
+ spin_lock(&da->qlock);
+ list_add_tail(&an->list, &da->nodes);
+ spin_unlock(&da->qlock);
+}
+EXPORT_SYMBOL(ath_dynack_node_init);
+
+/**
+ * ath_dynack_node_deinit - deinit ath_node related info
+ * @ah: ath hw
+ * @an: ath node
+ */
+void ath_dynack_node_deinit(struct ath_hw *ah, struct ath_node *an)
+{
+ struct ath_dynack *da = &ah->dynack;
+
+ spin_lock(&da->qlock);
+ list_del(&an->list);
+ spin_unlock(&da->qlock);
+}
+EXPORT_SYMBOL(ath_dynack_node_deinit);
+
+/**
+ * ath_dynack_reset - reset dynack processing
+ * @ah: ath hw
+ */
+void ath_dynack_reset(struct ath_hw *ah)
+{
+ /* ackto = slottime + sifs + air delay */
+ u32 ackto = ATH9K_SLOT_TIME_9 + 16 + 64;
+ struct ath_dynack *da = &ah->dynack;
+
+ da->lto = jiffies;
+ da->ackto = ackto;
+
+ da->st_rbf.t_rb = 0;
+ da->st_rbf.h_rb = 0;
+ da->ack_rbf.t_rb = 0;
+ da->ack_rbf.h_rb = 0;
+
+ /* init acktimeout */
+ ath9k_hw_setslottime(ah, (ackto - 3) / 2);
+ ath9k_hw_set_ack_timeout(ah, ackto);
+ ath9k_hw_set_cts_timeout(ah, ackto);
+}
+EXPORT_SYMBOL(ath_dynack_reset);
+
+/**
+ * ath_dynack_init - init dynack data structure
+ * @ah: ath hw
+ */
+void ath_dynack_init(struct ath_hw *ah)
+{
+ struct ath_dynack *da = &ah->dynack;
+
+ memset(da, 0, sizeof(struct ath_dynack));
+
+ spin_lock_init(&da->qlock);
+ INIT_LIST_HEAD(&da->nodes);
+
+ ah->hw->wiphy->features |= NL80211_FEATURE_ACKTO_ESTIMATION;
+}
diff --git a/drivers/net/wireless/ath/ath9k/dynack.h b/drivers/net/wireless/ath/ath9k/dynack.h
new file mode 100644
index 0000000..6d7bef9
--- /dev/null
+++ b/drivers/net/wireless/ath/ath9k/dynack.h
@@ -0,0 +1,103 @@
+/*
+ * Copyright (c) 2014, Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef DYNACK_H
+#define DYNACK_H
+
+#define ATH_DYN_BUF 64
+
+struct ath_hw;
+struct ath_node;
+
+/**
+ * struct ath_dyn_rxbuf - ACK frame ring buffer
+ * @h_rb: ring buffer head
+ * @t_rb: ring buffer tail
+ * @tstamp: ACK RX timestamp buffer
+ */
+struct ath_dyn_rxbuf {
+ u16 h_rb, t_rb;
+ u32 tstamp[ATH_DYN_BUF];
+};
+
+struct ts_info {
+ u32 tstamp;
+ u32 dur;
+};
+
+struct haddr_pair {
+ u8 h_dest[ETH_ALEN];
+ u8 h_src[ETH_ALEN];
+};
+
+/**
+ * struct ath_dyn_txbuf - tx frame ring buffer
+ * @h_rb: ring buffer head
+ * @t_rb: ring buffer tail
+ * @addr: dest/src address pair for a given TX frame
+ * @ts: TX frame timestamp buffer
+ */
+struct ath_dyn_txbuf {
+ u16 h_rb, t_rb;
+ struct haddr_pair addr[ATH_DYN_BUF];
+ struct ts_info ts[ATH_DYN_BUF];
+};
+
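/*
 * Editor's note: both rings are fixed at ATH_DYN_BUF (64) entries and
 * indexed by head (h_rb) and tail (t_rb).  Producers append at t_rb;
 * when the incremented tail catches up with the head, the oldest
 * sample is dropped by advancing h_rb as well -- see the INCR() users
 * in dynack.c.
 */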
+/**
+ * struct ath_dynack - dynack processing info
+ * @enabled: enable dyn ack processing
+ * @ackto: current ACK timeout
+ * @lto: last ACK timeout computation
+ * @nodes: ath_node linked list
+ * @qlock: ts queue spinlock
+ * @ack_rbf: ACK ts ring buffer
+ * @st_rbf: status ts ring buffer
+ */
+struct ath_dynack {
+ bool enabled;
+ int ackto;
+ unsigned long lto;
+
+ struct list_head nodes;
+
+ /* protect timestamp queue access */
+ spinlock_t qlock;
+ struct ath_dyn_rxbuf ack_rbf;
+ struct ath_dyn_txbuf st_rbf;
+};
+
+#if defined(CONFIG_ATH9K_DYNACK)
+void ath_dynack_reset(struct ath_hw *ah);
+void ath_dynack_node_init(struct ath_hw *ah, struct ath_node *an);
+void ath_dynack_node_deinit(struct ath_hw *ah, struct ath_node *an);
+void ath_dynack_init(struct ath_hw *ah);
+void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb, u32 ts);
+void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb,
+ struct ath_tx_status *ts);
+#else
+static inline void ath_dynack_init(struct ath_hw *ah) {}
+static inline void ath_dynack_node_init(struct ath_hw *ah,
+ struct ath_node *an) {}
+static inline void ath_dynack_node_deinit(struct ath_hw *ah,
+ struct ath_node *an) {}
+static inline void ath_dynack_sample_ack_ts(struct ath_hw *ah,
+ struct sk_buff *skb, u32 ts) {}
+static inline void ath_dynack_sample_tx_ts(struct ath_hw *ah,
+ struct sk_buff *skb,
+ struct ath_tx_status *ts) {}
+#endif
+
+#endif /* DYNACK_H */
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_main.c b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
index 5627917..994fff1 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_main.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
@@ -1722,7 +1722,7 @@
}
static void ath9k_htc_set_coverage_class(struct ieee80211_hw *hw,
- u8 coverage_class)
+ s16 coverage_class)
{
struct ath9k_htc_priv *priv = hw->priv;
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
index bb86eb2..f0484b1 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
@@ -978,7 +978,7 @@
struct ath_hw *ah = common->ah;
struct ath_htc_rx_status *rxstatus;
struct ath_rx_status rx_stats;
- bool decrypt_error;
+ bool decrypt_error = false;
if (skb->len < HTC_RX_FRAME_HEADER_SIZE) {
ath_err(common, "Corrupted RX frame, dropping (len: %d)\n",
diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
index 69bbea1..3aed729 100644
--- a/drivers/net/wireless/ath/ath9k/hw.c
+++ b/drivers/net/wireless/ath/ath9k/hw.c
@@ -647,6 +647,8 @@
return ret;
}
+ ath_dynack_init(ah);
+
return 0;
}
EXPORT_SYMBOL(ath9k_hw_init);
@@ -935,21 +937,21 @@
REG_WRITE(ah, AR_D_GBL_IFS_SIFS, val);
}
-static void ath9k_hw_setslottime(struct ath_hw *ah, u32 us)
+void ath9k_hw_setslottime(struct ath_hw *ah, u32 us)
{
u32 val = ath9k_hw_mac_to_clks(ah, us);
val = min(val, (u32) 0xFFFF);
REG_WRITE(ah, AR_D_GBL_IFS_SLOT, val);
}
-static void ath9k_hw_set_ack_timeout(struct ath_hw *ah, u32 us)
+void ath9k_hw_set_ack_timeout(struct ath_hw *ah, u32 us)
{
u32 val = ath9k_hw_mac_to_clks(ah, us);
val = min(val, (u32) MS(0xFFFFFFFF, AR_TIME_OUT_ACK));
REG_RMW_FIELD(ah, AR_TIME_OUT, AR_TIME_OUT_ACK, val);
}
-static void ath9k_hw_set_cts_timeout(struct ath_hw *ah, u32 us)
+void ath9k_hw_set_cts_timeout(struct ath_hw *ah, u32 us)
{
u32 val = ath9k_hw_mac_to_clks(ah, us);
val = min(val, (u32) MS(0xFFFFFFFF, AR_TIME_OUT_CTS));
@@ -1053,6 +1055,14 @@
ctstimeout += 48 - sifstime - ah->slottime;
}
+ if (ah->dynack.enabled) {
+ acktimeout = ah->dynack.ackto;
+ ctstimeout = acktimeout;
+ slottime = (acktimeout - 3) / 2;
+ } else {
+ ah->dynack.ackto = acktimeout;
+ }
+
ath9k_hw_set_sifs_time(ah, sifstime);
ath9k_hw_setslottime(ah, slottime);
ath9k_hw_set_ack_timeout(ah, acktimeout);
@@ -1954,6 +1964,12 @@
if (AR_SREV_9565(ah) && common->bt_ant_diversity)
REG_SET_BIT(ah, AR_BTCOEX_WL_LNADIV, AR_BTCOEX_WL_LNADIV_FORCE_ON);
+ if (ah->hw->conf.radar_enabled) {
+ /* set HW specific DFS configuration */
+ ah->radar_conf.ext_channel = IS_CHAN_HT40(chan);
+ ath9k_hw_set_radar_params(ah);
+ }
+
return 0;
}
EXPORT_SYMBOL(ath9k_hw_reset);
diff --git a/drivers/net/wireless/ath/ath9k/hw.h b/drivers/net/wireless/ath/ath9k/hw.h
index 51b4ebe..b9eef33 100644
--- a/drivers/net/wireless/ath/ath9k/hw.h
+++ b/drivers/net/wireless/ath/ath9k/hw.h
@@ -29,6 +29,7 @@
#include "reg.h"
#include "phy.h"
#include "btcoex.h"
+#include "dynack.h"
#include "../regd.h"
@@ -924,6 +925,8 @@
int (*external_reset)(void);
const struct firmware *eeprom_blob;
+
+ struct ath_dynack dynack;
};
struct ath_bus_ops {
@@ -1080,6 +1083,10 @@
void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning);
void ath9k_hw_ani_monitor(struct ath_hw *ah, struct ath9k_channel *chan);
+void ath9k_hw_set_ack_timeout(struct ath_hw *ah, u32 us);
+void ath9k_hw_set_cts_timeout(struct ath_hw *ah, u32 us);
+void ath9k_hw_setslottime(struct ath_hw *ah, u32 us);
+
#ifdef CONFIG_ATH9K_BTCOEX_SUPPORT
static inline bool ath9k_hw_btcoex_is_enabled(struct ath_hw *ah)
{
diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
index ca10a8b..156a944 100644
--- a/drivers/net/wireless/ath/ath9k/init.c
+++ b/drivers/net/wireless/ath/ath9k/init.c
@@ -763,8 +763,9 @@
if (AR_SREV_9160_10_OR_LATER(sc->sc_ah) || ath9k_modparam_nohwcrypt)
hw->flags |= IEEE80211_HW_MFP_CAPABLE;
- hw->wiphy->features |= (NL80211_FEATURE_ACTIVE_MONITOR |
- NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE);
+ hw->wiphy->features |= NL80211_FEATURE_ACTIVE_MONITOR |
+ NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE |
+ NL80211_FEATURE_P2P_GO_CTWIN;
if (!config_enabled(CONFIG_ATH9K_TX99)) {
hw->wiphy->interface_modes =
@@ -810,7 +811,7 @@
/* allow 4 queues per channel context +
* 1 cab queue + 1 offchannel tx queue
*/
- hw->queues = 10;
+ hw->queues = ATH9K_NUM_TX_QUEUES;
/* last queue for offchannel */
hw->offchannel_tx_hw_queue = hw->queues - 1;
hw->max_rates = 4;
diff --git a/drivers/net/wireless/ath/ath9k/mac.h b/drivers/net/wireless/ath/ath9k/mac.h
index 6c56caf..cd05a77 100644
--- a/drivers/net/wireless/ath/ath9k/mac.h
+++ b/drivers/net/wireless/ath/ath9k/mac.h
@@ -121,6 +121,7 @@
u32 evm0;
u32 evm1;
u32 evm2;
+ u32 duration[4];
};
struct ath_rx_status {
diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
index d9be831..fbf23ac 100644
--- a/drivers/net/wireless/ath/ath9k/main.c
+++ b/drivers/net/wireless/ath/ath9k/main.c
@@ -224,16 +224,11 @@
struct ath_common *common = ath9k_hw_common(ah);
unsigned long flags;
- if (ath_startrecv(sc) != 0) {
- ath_err(common, "Unable to restart recv logic\n");
- return false;
- }
-
+ ath9k_calculate_summary_state(sc, sc->cur_chan);
+ ath_startrecv(sc);
ath9k_cmn_update_txpow(ah, sc->curtxpow,
sc->cur_chan->txpower, &sc->curtxpow);
-
clear_bit(ATH_OP_HW_RESET, &common->op_flags);
- ath9k_calculate_summary_state(sc, sc->cur_chan);
if (!sc->cur_chan->offchannel && start) {
/* restore per chanctx TSF timer */
@@ -350,12 +345,16 @@
memset(&an->key_idx, 0, sizeof(an->key_idx));
ath_tx_node_init(sc, an);
+
+ ath_dynack_node_init(sc->sc_ah, an);
}
static void ath_node_detach(struct ath_softc *sc, struct ieee80211_sta *sta)
{
struct ath_node *an = (struct ath_node *)sta->drv_priv;
ath_tx_node_cleanup(sc, an);
+
+ ath_dynack_node_deinit(sc->sc_ah, an);
}
void ath9k_tasklet(unsigned long data)
@@ -505,7 +504,7 @@
* touch anything. Note this can happen early
* on if the IRQ is shared.
*/
- if (test_bit(ATH_OP_INVALID, &common->op_flags))
+ if (!ah || test_bit(ATH_OP_INVALID, &common->op_flags))
return IRQ_NONE;
/* shared irq, not for us */
@@ -916,8 +915,6 @@
switch (vif->type) {
case NL80211_IFTYPE_AP:
iter_data->naps++;
- if (vif->bss_conf.enable_beacon)
- iter_data->beacons = true;
break;
case NL80211_IFTYPE_STATION:
iter_data->nstations++;
@@ -960,21 +957,6 @@
list_for_each_entry(avp, &ctx->vifs, list)
ath9k_vif_iter(iter_data, avp->vif->addr, avp->vif);
-
-#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
- if (ctx == &sc->offchannel.chan) {
- struct ieee80211_vif *vif;
-
- if (sc->offchannel.state < ATH_OFFCHANNEL_ROC_START)
- vif = sc->offchannel.scan_vif;
- else
- vif = sc->offchannel.roc_vif;
-
- if (vif)
- ath9k_vif_iter(iter_data, vif->addr, vif);
- iter_data->beacons = false;
- }
-#endif
}
static void ath9k_set_assoc_state(struct ath_softc *sc,
@@ -985,13 +967,6 @@
unsigned long flags;
set_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags);
- /* Set the AID, BSSID and do beacon-sync only when
- * the HW opmode is STATION.
- *
- * But the primary bit is set above in any case.
- */
- if (sc->sc_ah->opmode != NL80211_IFTYPE_STATION)
- return;
ether_addr_copy(common->curbssid, bss_conf->bssid);
common->curaid = bss_conf->aid;
@@ -1014,6 +989,43 @@
vif->addr, common->curbssid);
}
+#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
+static void ath9k_set_offchannel_state(struct ath_softc *sc)
+{
+ struct ath_hw *ah = sc->sc_ah;
+ struct ath_common *common = ath9k_hw_common(ah);
+ struct ieee80211_vif *vif = NULL;
+
+ ath9k_ps_wakeup(sc);
+
+ if (sc->offchannel.state < ATH_OFFCHANNEL_ROC_START)
+ vif = sc->offchannel.scan_vif;
+ else
+ vif = sc->offchannel.roc_vif;
+
+ if (WARN_ON(!vif))
+ goto exit;
+
+ eth_zero_addr(common->curbssid);
+ eth_broadcast_addr(common->bssidmask);
+ ether_addr_copy(common->macaddr, vif->addr);
+ common->curaid = 0;
+ ah->opmode = vif->type;
+ ah->imask &= ~ATH9K_INT_SWBA;
+ ah->imask &= ~ATH9K_INT_TSFOOR;
+ ah->slottime = ATH9K_SLOT_TIME_9;
+
+ ath_hw_setbssidmask(common);
+ ath9k_hw_setopmode(ah);
+ ath9k_hw_write_associd(sc->sc_ah);
+ ath9k_hw_set_interrupts(ah);
+ ath9k_hw_init_global_settings(ah);
+
+exit:
+ ath9k_ps_restore(sc);
+}
+#endif
+
/* Called with sc->mutex held. */
void ath9k_calculate_summary_state(struct ath_softc *sc,
struct ath_chanctx *ctx)
@@ -1021,12 +1033,18 @@
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
struct ath9k_vif_iter_data iter_data;
+ struct ath_beacon_config *cur_conf;
ath_chanctx_check_active(sc, ctx);
if (ctx != sc->cur_chan)
return;
+#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
+ if (ctx == &sc->offchannel.chan)
+ return ath9k_set_offchannel_state(sc);
+#endif
+
ath9k_ps_wakeup(sc);
ath9k_calculate_iter_data(sc, ctx, &iter_data);
@@ -1037,8 +1055,11 @@
ath_hw_setbssidmask(common);
if (iter_data.naps > 0) {
+ cur_conf = &ctx->beacon;
ath9k_hw_set_tsfadjust(ah, true);
ah->opmode = NL80211_IFTYPE_AP;
+ if (cur_conf->enable_beacon)
+ iter_data.beacons = true;
} else {
ath9k_hw_set_tsfadjust(ah, false);
@@ -1067,13 +1088,11 @@
if (ah->opmode == NL80211_IFTYPE_STATION) {
bool changed = (iter_data.primary_sta != ctx->primary_sta);
- iter_data.beacons = true;
if (iter_data.primary_sta) {
+ iter_data.beacons = true;
ath9k_set_assoc_state(sc, iter_data.primary_sta,
changed);
- if (!ctx->primary_sta ||
- !ctx->primary_sta->bss_conf.assoc)
- ctx->primary_sta = iter_data.primary_sta;
+ ctx->primary_sta = iter_data.primary_sta;
} else {
ctx->primary_sta = NULL;
memset(common->curbssid, 0, ETH_ALEN);
@@ -1102,11 +1121,23 @@
else
clear_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags);
- ctx->primary_sta = iter_data.primary_sta;
-
ath9k_ps_restore(sc);
}
+static void ath9k_assign_hw_queues(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ int i;
+
+ for (i = 0; i < IEEE80211_NUM_ACS; i++)
+ vif->hw_queue[i] = i;
+
+ if (vif->type == NL80211_IFTYPE_AP)
+ vif->cab_queue = hw->queues - 2;
+ else
+ vif->cab_queue = IEEE80211_INVAL_HW_QUEUE;
+}
+
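/*
 * Worked example (editor's note, assuming ATH9K_NUM_TX_QUEUES keeps
 * the previous literal value of 10 from init.c): the four ACs map 1:1
 * onto hw queues 0-3, an AP interface gets CAB queue
 * hw->queues - 2 = 8, and the offchannel TX queue remains
 * hw->queues - 1 = 9.
 */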
static int ath9k_add_interface(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
@@ -1115,12 +1146,11 @@
struct ath_common *common = ath9k_hw_common(ah);
struct ath_vif *avp = (void *)vif->drv_priv;
struct ath_node *an = &avp->mcast_node;
- int i;
mutex_lock(&sc->mutex);
if (config_enabled(CONFIG_ATH9K_TX99)) {
- if (sc->nvifs >= 1) {
+ if (sc->cur_chan->nvifs >= 1) {
mutex_unlock(&sc->mutex);
return -EOPNOTSUPP;
}
@@ -1128,7 +1158,7 @@
}
ath_dbg(common, CONFIG, "Attach a VIF of type: %d\n", vif->type);
- sc->nvifs++;
+ sc->cur_chan->nvifs++;
if (ath9k_uses_beacons(vif->type))
ath9k_beacon_assign_slot(sc, vif);
@@ -1138,12 +1168,8 @@
avp->chanctx = sc->cur_chan;
list_add_tail(&avp->list, &avp->chanctx->vifs);
}
- for (i = 0; i < IEEE80211_NUM_ACS; i++)
- vif->hw_queue[i] = i;
- if (vif->type == NL80211_IFTYPE_AP)
- vif->cab_queue = hw->queues - 2;
- else
- vif->cab_queue = IEEE80211_INVAL_HW_QUEUE;
+
+ ath9k_assign_hw_queues(hw, vif);
an->sc = sc;
an->sta = NULL;
@@ -1163,7 +1189,6 @@
struct ath_softc *sc = hw->priv;
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
struct ath_vif *avp = (void *)vif->drv_priv;
- int i;
mutex_lock(&sc->mutex);
@@ -1183,14 +1208,7 @@
if (ath9k_uses_beacons(vif->type))
ath9k_beacon_assign_slot(sc, vif);
- for (i = 0; i < IEEE80211_NUM_ACS; i++)
- vif->hw_queue[i] = i;
-
- if (vif->type == NL80211_IFTYPE_AP)
- vif->cab_queue = hw->queues - 2;
- else
- vif->cab_queue = IEEE80211_INVAL_HW_QUEUE;
-
+ ath9k_assign_hw_queues(hw, vif);
ath9k_calculate_summary_state(sc, avp->chanctx);
mutex_unlock(&sc->mutex);
@@ -1210,7 +1228,7 @@
ath9k_p2p_remove_vif(sc, vif);
- sc->nvifs--;
+ sc->cur_chan->nvifs--;
sc->tx99_vif = NULL;
if (!ath9k_is_chanctx_enabled())
list_del(&avp->list);
@@ -1430,7 +1448,10 @@
changed_flags &= SUPPORTED_FILTERS;
*total_flags &= SUPPORTED_FILTERS;
- sc->rx.rxfilter = *total_flags;
+ spin_lock_bh(&sc->chan_lock);
+ sc->cur_chan->rxfilter = *total_flags;
+ spin_unlock_bh(&sc->chan_lock);
+
ath9k_ps_wakeup(sc);
rfilt = ath_calcrxfilter(sc);
ath9k_hw_setrxfilter(sc->sc_ah, rfilt);
@@ -1695,9 +1716,9 @@
if ((changed & BSS_CHANGED_BEACON_ENABLED) ||
(changed & BSS_CHANGED_BEACON_INT) ||
(changed & BSS_CHANGED_BEACON_INFO)) {
+ ath9k_beacon_config(sc, vif, changed);
if (changed & BSS_CHANGED_BEACON_ENABLED)
ath9k_calculate_summary_state(sc, avp->chanctx);
- ath9k_beacon_config(sc, vif, changed);
}
if ((avp->chanctx == sc->cur_chan) &&
@@ -1859,7 +1880,22 @@
return 0;
}
-static void ath9k_set_coverage_class(struct ieee80211_hw *hw, u8 coverage_class)
+static void ath9k_enable_dynack(struct ath_softc *sc)
+{
+#ifdef CONFIG_ATH9K_DYNACK
+ u32 rfilt;
+ struct ath_hw *ah = sc->sc_ah;
+
+ ath_dynack_reset(ah);
+
+ ah->dynack.enabled = true;
+ rfilt = ath_calcrxfilter(sc);
+ ath9k_hw_setrxfilter(ah, rfilt);
+#endif
+}
+
+static void ath9k_set_coverage_class(struct ieee80211_hw *hw,
+ s16 coverage_class)
{
struct ath_softc *sc = hw->priv;
struct ath_hw *ah = sc->sc_ah;
@@ -1868,11 +1904,22 @@
return;
mutex_lock(&sc->mutex);
- ah->coverage_class = coverage_class;
- ath9k_ps_wakeup(sc);
- ath9k_hw_init_global_settings(ah);
- ath9k_ps_restore(sc);
+ if (coverage_class >= 0) {
+ ah->coverage_class = coverage_class;
+ if (ah->dynack.enabled) {
+ u32 rfilt;
+
+ ah->dynack.enabled = false;
+ rfilt = ath_calcrxfilter(sc);
+ ath9k_hw_setrxfilter(ah, rfilt);
+ }
+ ath9k_ps_wakeup(sc);
+ ath9k_hw_init_global_settings(ah);
+ ath9k_ps_restore(sc);
+ } else if (!ah->dynack.enabled) {
+ ath9k_enable_dynack(sc);
+ }
mutex_unlock(&sc->mutex);
}
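/*
 * Editor's note: widening coverage_class to s16 makes negative values
 * representable, and they act as the "enable dynack" request: any
 * value >= 0 programs a static coverage class (switching dynack off if
 * it was on), while a negative value with dynack currently disabled
 * calls ath9k_enable_dynack().
 */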
diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
index 2aaf233..6914e21 100644
--- a/drivers/net/wireless/ath/ath9k/recv.c
+++ b/drivers/net/wireless/ath/ath9k/recv.c
@@ -387,7 +387,9 @@
if (sc->hw->conf.radar_enabled)
rfilt |= ATH9K_RX_FILTER_PHYRADAR | ATH9K_RX_FILTER_PHYERR;
- if (sc->rx.rxfilter & FIF_PROBE_REQ)
+ spin_lock_bh(&sc->chan_lock);
+
+ if (sc->cur_chan->rxfilter & FIF_PROBE_REQ)
rfilt |= ATH9K_RX_FILTER_PROBEREQ;
/*
@@ -398,24 +400,25 @@
if (sc->sc_ah->is_monitoring)
rfilt |= ATH9K_RX_FILTER_PROM;
- if (sc->rx.rxfilter & FIF_CONTROL)
+ if ((sc->cur_chan->rxfilter & FIF_CONTROL) ||
+ sc->sc_ah->dynack.enabled)
rfilt |= ATH9K_RX_FILTER_CONTROL;
if ((sc->sc_ah->opmode == NL80211_IFTYPE_STATION) &&
- (sc->nvifs <= 1) &&
- !(sc->rx.rxfilter & FIF_BCN_PRBRESP_PROMISC))
+ (sc->cur_chan->nvifs <= 1) &&
+ !(sc->cur_chan->rxfilter & FIF_BCN_PRBRESP_PROMISC))
rfilt |= ATH9K_RX_FILTER_MYBEACON;
else
rfilt |= ATH9K_RX_FILTER_BEACON;
if ((sc->sc_ah->opmode == NL80211_IFTYPE_AP) ||
- (sc->rx.rxfilter & FIF_PSPOLL))
+ (sc->cur_chan->rxfilter & FIF_PSPOLL))
rfilt |= ATH9K_RX_FILTER_PSPOLL;
- if (conf_is_ht(&sc->hw->conf))
+ if (sc->cur_chandef.width != NL80211_CHAN_WIDTH_20_NOHT)
rfilt |= ATH9K_RX_FILTER_COMP_BAR;
- if (sc->nvifs > 1 || (sc->rx.rxfilter & FIF_OTHER_BSS)) {
+ if (sc->cur_chan->nvifs > 1 || (sc->cur_chan->rxfilter & FIF_OTHER_BSS)) {
/* This is needed for older chips */
if (sc->sc_ah->hw_version.macVersion <= AR_SREV_VERSION_9160)
rfilt |= ATH9K_RX_FILTER_PROM;
@@ -429,18 +432,20 @@
test_bit(ATH_OP_SCANNING, &common->op_flags))
rfilt |= ATH9K_RX_FILTER_BEACON;
+ spin_unlock_bh(&sc->chan_lock);
+
return rfilt;
}
-int ath_startrecv(struct ath_softc *sc)
+void ath_startrecv(struct ath_softc *sc)
{
struct ath_hw *ah = sc->sc_ah;
struct ath_rxbuf *bf, *tbf;
if (ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
ath_edma_start_recv(sc);
- return 0;
+ return;
}
if (list_empty(&sc->rx.rxbuf))
@@ -463,8 +468,6 @@
start_recv:
ath_opmode_init(sc);
ath9k_hw_startpcureceive(ah, sc->cur_chan->offchannel);
-
- return 0;
}
static void ath_flushrecv(struct ath_softc *sc)
@@ -535,6 +538,7 @@
static void ath_rx_ps_beacon(struct ath_softc *sc, struct sk_buff *skb)
{
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
+ bool skip_beacon = false;
if (skb->len < 24 + 8 + 2 + 2)
return;
@@ -545,7 +549,16 @@
sc->ps_flags &= ~PS_BEACON_SYNC;
ath_dbg(common, PS,
"Reconfigure beacon timers based on synchronized timestamp\n");
- if (!(WARN_ON_ONCE(sc->cur_chan->beacon.beacon_interval == 0)))
+
+#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
+ if (ath9k_is_chanctx_enabled()) {
+ if (sc->cur_chan == &sc->offchannel.chan)
+ skip_beacon = true;
+ }
+#endif
+
+ if (!skip_beacon &&
+ !(WARN_ON_ONCE(sc->cur_chan->beacon.beacon_interval == 0)))
ath9k_set_beacon(sc);
ath9k_p2p_beacon_sync(sc);
@@ -867,8 +880,13 @@
* everything but the rate is checked here, the rate check is done
* separately to avoid doing two lookups for a rate for each frame.
*/
- if (!ath9k_cmn_rx_accept(common, hdr, rx_status, rx_stats, decrypt_error, sc->rx.rxfilter))
+ spin_lock_bh(&sc->chan_lock);
+ if (!ath9k_cmn_rx_accept(common, hdr, rx_status, rx_stats, decrypt_error,
+ sc->cur_chan->rxfilter)) {
+ spin_unlock_bh(&sc->chan_lock);
return -EINVAL;
+ }
+ spin_unlock_bh(&sc->chan_lock);
if (ath_is_mybeacon(common, hdr)) {
RX_STAT_INC(rx_beacons);
@@ -894,7 +912,7 @@
if (ath9k_is_chanctx_enabled()) {
if (rx_stats->is_mybeacon)
- ath_chanctx_beacon_recv_ev(sc, rx_stats->rs_tstamp,
+ ath_chanctx_beacon_recv_ev(sc,
ATH_CHANCTX_EVENT_BEACON_RECEIVED);
}
@@ -992,6 +1010,7 @@
unsigned long flags;
dma_addr_t new_buf_addr;
unsigned int budget = 512;
+ struct ieee80211_hdr *hdr;
if (edma)
dma_type = DMA_BIDIRECTIONAL;
@@ -1121,6 +1140,10 @@
ath9k_apply_ampdu_details(sc, &rs, rxs);
ath_debug_rate_stats(sc, &rs, skb);
+ hdr = (struct ieee80211_hdr *)skb->data;
+ if (ieee80211_is_ack(hdr->frame_control))
+ ath_dynack_sample_ack_ts(sc->sc_ah, skb, rs.rs_tstamp);
+
ieee80211_rx(hw, skb);
requeue_drop_frag:
diff --git a/drivers/net/wireless/ath/ath9k/tx99.c b/drivers/net/wireless/ath/ath9k/tx99.c
index 2397292..8a69d08 100644
--- a/drivers/net/wireless/ath/ath9k/tx99.c
+++ b/drivers/net/wireless/ath/ath9k/tx99.c
@@ -174,7 +174,7 @@
ssize_t len;
int r;
- if (sc->nvifs > 1)
+ if (sc->cur_chan->nvifs > 1)
return -EOPNOTSUPP;
len = min(count, sizeof(buf) - 1);
diff --git a/drivers/net/wireless/ath/ath9k/wow.c b/drivers/net/wireless/ath/ath9k/wow.c
index 33531d9..5f30e58 100644
--- a/drivers/net/wireless/ath/ath9k/wow.c
+++ b/drivers/net/wireless/ath/ath9k/wow.c
@@ -232,7 +232,7 @@
goto fail_wow;
}
- if (sc->nvifs > 1) {
+ if (sc->cur_chan->nvifs > 1) {
ath_dbg(common, WOW, "WoW for multivif is not yet supported\n");
ret = 1;
goto fail_wow;
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index 2819866..93ad31b 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -587,6 +587,10 @@
memcpy(tx_info->control.rates, rates, sizeof(rates));
ath_tx_rc_status(sc, bf, ts, nframes, nbad, txok);
rc_update = false;
+ if (bf == bf->bf_lastbf)
+ ath_dynack_sample_tx_ts(sc->sc_ah,
+ bf->bf_mpdu,
+ ts);
}
ath_tx_complete_buf(sc, bf, txq, &bf_head, ts,
@@ -687,6 +691,7 @@
memcpy(info->control.rates, bf->rates,
sizeof(info->control.rates));
ath_tx_rc_status(sc, bf, ts, 1, txok ? 0 : 1, txok);
+ ath_dynack_sample_tx_ts(sc->sc_ah, bf->bf_mpdu, ts);
}
ath_tx_complete_buf(sc, bf, txq, bf_head, ts, txok);
} else
diff --git a/drivers/net/wireless/ath/wil6210/Kconfig b/drivers/net/wireless/ath/wil6210/Kconfig
index ce8c038..481680a 100644
--- a/drivers/net/wireless/ath/wil6210/Kconfig
+++ b/drivers/net/wireless/ath/wil6210/Kconfig
@@ -39,3 +39,12 @@
option if you are interested in debugging the driver.
If unsure, say Y to make it easier to debug problems.
+
+config WIL6210_PLATFORM_MSM
+ bool "wil6210 MSM platform specific support"
+ depends on WIL6210
+ depends on ARCH_MSM
+ default y
+ ---help---
+ Say Y here to enable wil6210 driver support for MSM
+ platform-specific features.
diff --git a/drivers/net/wireless/ath/wil6210/Makefile b/drivers/net/wireless/ath/wil6210/Makefile
index c7a3465..a471d74 100644
--- a/drivers/net/wireless/ath/wil6210/Makefile
+++ b/drivers/net/wireless/ath/wil6210/Makefile
@@ -10,7 +10,10 @@
wil6210-y += txrx.o
wil6210-y += debug.o
wil6210-y += rx_reorder.o
+wil6210-y += fw.o
wil6210-$(CONFIG_WIL6210_TRACING) += trace.o
+wil6210-y += wil_platform.o
+wil6210-$(CONFIG_WIL6210_PLATFORM_MSM) += wil_platform_msm.o
# for tracing framework to find trace.h
CFLAGS_trace.o := -I$(src)
diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
index a00f318..f3a31e8 100644
--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
+++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
@@ -296,6 +296,7 @@
n = min(request->n_channels, 4U);
for (i = 0; i < n; i++) {
int ch = request->channels[i]->hw_value;
+
if (ch == 0) {
wil_err(wil,
"Scan requested for unknown frequency %dMhz\n",
@@ -308,9 +309,23 @@
request->channels[i]->center_freq);
}
+ if (request->ie_len)
+ print_hex_dump_bytes("Scan IE ", DUMP_PREFIX_OFFSET,
+ request->ie, request->ie_len);
+ else
+ wil_dbg_misc(wil, "Scan has no IE's\n");
+
+ rc = wmi_set_ie(wil, WMI_FRAME_PROBE_REQ, request->ie_len,
+ request->ie);
+ if (rc) {
+ wil_err(wil, "Aborting scan, set_ie failed: %d\n", rc);
+ goto out;
+ }
+
rc = wmi_send(wil, WMI_START_SCAN_CMDID, &cmd, sizeof(cmd.cmd) +
cmd.cmd.num_channels * sizeof(cmd.cmd.channel_list[0]));
+out:
if (rc) {
del_timer_sync(&wil->scan_timer);
wil->scan_request = NULL;
@@ -319,6 +334,22 @@
return rc;
}
+static void wil_print_connect_params(struct wil6210_priv *wil,
+ struct cfg80211_connect_params *sme)
+{
+ wil_info(wil, "Connecting to:\n");
+ if (sme->channel) {
+ wil_info(wil, " Channel: %d freq %d\n",
+ sme->channel->hw_value, sme->channel->center_freq);
+ }
+ if (sme->bssid)
+ wil_info(wil, " BSSID: %pM\n", sme->bssid);
+ if (sme->ssid)
+ print_hex_dump(KERN_INFO, " SSID: ", DUMP_PREFIX_OFFSET,
+ 16, 1, sme->ssid, sme->ssid_len, true);
+ wil_info(wil, " Privacy: %s\n", sme->privacy ? "secure" : "open");
+}
+
static int wil_cfg80211_connect(struct wiphy *wiphy,
struct net_device *ndev,
struct cfg80211_connect_params *sme)
@@ -335,6 +366,8 @@
test_bit(wil_status_fwconnected, &wil->status))
return -EALREADY;
+ wil_print_connect_params(wil, sme);
+
bss = cfg80211_get_bss(wiphy, sme->channel, sme->bssid,
sme->ssid, sme->ssid_len,
WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS);
@@ -360,22 +393,22 @@
sme->ie_len);
goto out;
}
- /*
- * For secure assoc, send:
- * (1) WMI_DELETE_CIPHER_KEY_CMD
- * (2) WMI_SET_APPIE_CMD
- */
+ /* For secure assoc, send WMI_DELETE_CIPHER_KEY_CMD */
rc = wmi_del_cipher_key(wil, 0, bss->bssid);
if (rc) {
wil_err(wil, "WMI_DELETE_CIPHER_KEY_CMD failed\n");
goto out;
}
- /* WMI_SET_APPIE_CMD */
- rc = wmi_set_ie(wil, WMI_FRAME_ASSOC_REQ, sme->ie_len, sme->ie);
- if (rc) {
- wil_err(wil, "WMI_SET_APPIE_CMD failed\n");
- goto out;
- }
+ }
+
+ /* WMI_SET_APPIE_CMD. The IE buffer may contain RSN info as well as
+ * other info elements. Send it even when empty, to erase previously
+ * set IEs in the FW.
+ */
+ rc = wmi_set_ie(wil, WMI_FRAME_ASSOC_REQ, sme->ie_len, sme->ie);
+ if (rc) {
+ wil_err(wil, "WMI_SET_APPIE_CMD failed\n");
+ goto out;
}
/* WMI_CONNECT_CMD */
@@ -621,6 +654,45 @@
return rc;
}
+static int wil_cfg80211_change_beacon(struct wiphy *wiphy,
+ struct net_device *ndev,
+ struct cfg80211_beacon_data *bcon)
+{
+ struct wil6210_priv *wil = wiphy_to_wil(wiphy);
+ int rc;
+
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
+ if (wil_fix_bcon(wil, bcon)) {
+ wil_dbg_misc(wil, "Fixed bcon\n");
+ wil_print_bcon_data(bcon);
+ }
+
+ /* The FW does not form regular beacons, so beacon IEs are not set.
+ * Once the DMG beacon is supported, its IEs will be reused here;
+ * add something like:
+ * wmi_set_ie(wil, WMI_FRAME_BEACON, bcon->beacon_ies_len,
+ * bcon->beacon_ies);
+ */
+ rc = wmi_set_ie(wil, WMI_FRAME_PROBE_RESP,
+ bcon->proberesp_ies_len,
+ bcon->proberesp_ies);
+ if (rc) {
+ wil_err(wil, "set_ie(PROBE_RESP) failed\n");
+ return rc;
+ }
+
+ rc = wmi_set_ie(wil, WMI_FRAME_ASSOC_RESP,
+ bcon->assocresp_ies_len,
+ bcon->assocresp_ies);
+ if (rc) {
+ wil_err(wil, "set_ie(ASSOC_RESP) failed\n");
+ return rc;
+ }
+
+ return 0;
+}
+
static int wil_cfg80211_start_ap(struct wiphy *wiphy,
struct net_device *ndev,
struct cfg80211_ap_settings *info)
@@ -658,12 +730,8 @@
mutex_lock(&wil->mutex);
- rc = wil_reset(wil);
- if (rc)
- goto out;
-
- /* Rx VRING. */
- rc = wil_rx_init(wil);
+ __wil_down(wil);
+ rc = __wil_up(wil);
if (rc)
goto out;
@@ -671,9 +739,6 @@
if (rc)
goto out;
- /* MAC address - pre-requisite for other commands */
- wmi_set_mac_address(wil, ndev->dev_addr);
-
/* IE's */
 /* bcon 'head' IEs are not relevant for the 60g band */
/*
@@ -695,7 +760,6 @@
if (rc)
goto out;
-
netif_carrier_on(ndev);
out:
@@ -706,7 +770,7 @@
static int wil_cfg80211_stop_ap(struct wiphy *wiphy,
struct net_device *ndev)
{
- int rc = 0;
+ int rc, rc1;
struct wil6210_priv *wil = wiphy_to_wil(wiphy);
wil_dbg_misc(wil, "%s()\n", __func__);
@@ -715,8 +779,12 @@
rc = wmi_pcp_stop(wil);
+ __wil_down(wil);
+ rc1 = __wil_up(wil);
+
mutex_unlock(&wil->mutex);
- return rc;
+
+ return min(rc, rc1);
}
static int wil_cfg80211_del_station(struct wiphy *wiphy,
@@ -746,6 +814,7 @@
.del_key = wil_cfg80211_del_key,
.set_default_key = wil_cfg80211_set_default_key,
/* AP mode */
+ .change_beacon = wil_cfg80211_change_beacon,
.start_ap = wil_cfg80211_start_ap,
.stop_ap = wil_cfg80211_stop_ap,
.del_station = wil_cfg80211_del_station,
@@ -755,6 +824,7 @@
{
/* TODO: set real value */
wiphy->max_scan_ssids = 10;
+ wiphy->max_scan_ie_len = WMI_MAX_IE_LEN;
wiphy->max_num_pmkids = 0 /* TODO: */;
wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
BIT(NL80211_IFTYPE_AP) |
@@ -764,8 +834,8 @@
*/
wiphy->flags |= WIPHY_FLAG_HAVE_AP_SME |
WIPHY_FLAG_AP_PROBE_RESP_OFFLOAD;
- dev_warn(wiphy_dev(wiphy), "%s : flags = 0x%08x\n",
- __func__, wiphy->flags);
+ dev_dbg(wiphy_dev(wiphy), "%s : flags = 0x%08x\n",
+ __func__, wiphy->flags);
wiphy->probe_resp_offload =
NL80211_PROBE_RESP_OFFLOAD_SUPPORT_WPS |
NL80211_PROBE_RESP_OFFLOAD_SUPPORT_WPS2 |
@@ -786,7 +856,9 @@
int rc = 0;
struct wireless_dev *wdev;
- wdev = kzalloc(sizeof(struct wireless_dev), GFP_KERNEL);
+ dev_dbg(dev, "%s()\n", __func__);
+
+ wdev = kzalloc(sizeof(*wdev), GFP_KERNEL);
if (!wdev)
return ERR_PTR(-ENOMEM);
@@ -818,6 +890,8 @@
{
struct wireless_dev *wdev = wil_to_wdev(wil);
+ dev_dbg(wil_to_dev(wil), "%s()\n", __func__);
+
if (!wdev)
return;
diff --git a/drivers/net/wireless/ath/wil6210/debug.c b/drivers/net/wireless/ath/wil6210/debug.c
index 9eeabf4..8d99021 100644
--- a/drivers/net/wireless/ath/wil6210/debug.c
+++ b/drivers/net/wireless/ath/wil6210/debug.c
@@ -17,43 +17,37 @@
#include "wil6210.h"
#include "trace.h"
-int wil_err(struct wil6210_priv *wil, const char *fmt, ...)
+void wil_err(struct wil6210_priv *wil, const char *fmt, ...)
{
struct net_device *ndev = wil_to_ndev(wil);
struct va_format vaf = {
.fmt = fmt,
};
va_list args;
- int ret;
va_start(args, fmt);
vaf.va = &args;
- ret = netdev_err(ndev, "%pV", &vaf);
+ netdev_err(ndev, "%pV", &vaf);
trace_wil6210_log_err(&vaf);
va_end(args);
-
- return ret;
}
-int wil_info(struct wil6210_priv *wil, const char *fmt, ...)
+void wil_info(struct wil6210_priv *wil, const char *fmt, ...)
{
struct net_device *ndev = wil_to_ndev(wil);
struct va_format vaf = {
.fmt = fmt,
};
va_list args;
- int ret;
va_start(args, fmt);
vaf.va = &args;
- ret = netdev_info(ndev, "%pV", &vaf);
+ netdev_info(ndev, "%pV", &vaf);
trace_wil6210_log_info(&vaf);
va_end(args);
-
- return ret;
}
-int wil_dbg_trace(struct wil6210_priv *wil, const char *fmt, ...)
+void wil_dbg_trace(struct wil6210_priv *wil, const char *fmt, ...)
{
struct va_format vaf = {
.fmt = fmt,
@@ -64,6 +58,4 @@
vaf.va = &args;
trace_wil6210_log_dbg(&vaf);
va_end(args);
-
- return 0;
}
diff --git a/drivers/net/wireless/ath/wil6210/debugfs.c b/drivers/net/wireless/ath/wil6210/debugfs.c
index b1c6a72..eb2204e 100644
--- a/drivers/net/wireless/ath/wil6210/debugfs.c
+++ b/drivers/net/wireless/ath/wil6210/debugfs.c
@@ -61,20 +61,22 @@
if (x)
seq_printf(s, "0x%08x\n", ioread32(x));
else
- seq_printf(s, "???\n");
+ seq_puts(s, "???\n");
if (vring->va && (vring->size < 1025)) {
uint i;
+
for (i = 0; i < vring->size; i++) {
volatile struct vring_tx_desc *d = &vring->va[i].tx;
+
if ((i % 64) == 0 && (i != 0))
- seq_printf(s, "\n");
+ seq_puts(s, "\n");
seq_printf(s, "%c", (d->dma.status & BIT(0)) ?
_s : (vring->ctx[i].skb ? _h : 'h'));
}
- seq_printf(s, "\n");
+ seq_puts(s, "\n");
}
- seq_printf(s, "}\n");
+ seq_puts(s, "}\n");
}
static int wil_vring_debugfs_show(struct seq_file *s, void *data)
@@ -85,7 +87,7 @@
wil_print_vring(s, wil, "rx", &wil->vring_rx, 'S', '_');
for (i = 0; i < ARRAY_SIZE(wil->vring_tx); i++) {
- struct vring *vring = &(wil->vring_tx[i]);
+ struct vring *vring = &wil->vring_tx[i];
struct vring_tx_data *txdata = &wil->vring_tx_data[i];
if (vring->va) {
@@ -163,7 +165,7 @@
if (!wmi_addr(wil, r.base) ||
!wmi_addr(wil, r.tail) ||
!wmi_addr(wil, r.head)) {
- seq_printf(s, " ??? pointers are garbage?\n");
+ seq_puts(s, " ??? pointers are garbage?\n");
goto out;
}
@@ -182,6 +184,7 @@
le32_to_cpu(d.addr));
if (0 == wmi_read_hdr(wil, d.addr, &hdr)) {
u16 len = le16_to_cpu(hdr.len);
+
seq_printf(s, " -> %04x %04x %04x %02x\n",
le16_to_cpu(hdr.seq), len,
le16_to_cpu(hdr.type), hdr.flags);
@@ -199,6 +202,7 @@
wil_memcpy_fromio_32(databuf, src, len);
while (n < len) {
int l = min(len - n, 16);
+
hex_dump_to_buffer(databuf + n, l,
16, 1, printbuf,
sizeof(printbuf),
@@ -208,11 +212,11 @@
}
}
} else {
- seq_printf(s, "\n");
+ seq_puts(s, "\n");
}
}
out:
- seq_printf(s, "}\n");
+ seq_puts(s, "}\n");
}
static int wil_mbox_debugfs_show(struct seq_file *s, void *data)
@@ -271,11 +275,13 @@
*(ulong *)data = val;
return 0;
}
+
static int wil_debugfs_ulong_get(void *data, u64 *val)
{
*val = *(ulong *)data;
return 0;
}
+
DEFINE_SIMPLE_ATTRIBUTE(wil_fops_ulong, wil_debugfs_ulong_get,
wil_debugfs_ulong_set, "%llu\n");
@@ -302,7 +308,7 @@
int i;
for (i = 0; tbl[i].name; i++) {
- struct dentry *f = NULL;
+ struct dentry *f;
switch (tbl[i].type) {
case doff_u32:
@@ -322,6 +328,8 @@
tbl[i].mode, dbg,
base + tbl[i].off);
break;
+ default:
+ f = ERR_PTR(-EINVAL);
}
if (IS_ERR_OR_NULL(f))
wil_err(wil, "Create file \"%s\": err %ld\n",
@@ -339,6 +347,7 @@
{"IMC", S_IWUSR, offsetof(struct RGF_ICR, IMC), doff_io32},
{},
};
+
static int wil6210_debugfs_create_ISR(struct wil6210_priv *wil,
const char *name,
struct dentry *parent, u32 off)
@@ -422,7 +431,7 @@
};
static ssize_t wil_read_file_ioblob(struct file *file, char __user *user_buf,
- size_t count, loff_t *ppos)
+ size_t count, loff_t *ppos)
{
enum { max_count = 4096 };
struct debugfs_blob_wrapper *blob = file->private_data;
@@ -474,6 +483,7 @@
{
return debugfs_create_file(name, mode, parent, blob, &fops_ioblob);
}
+
/*---reset---*/
static ssize_t wil_write_file_reset(struct file *file, const char __user *buf,
size_t len, loff_t *ppos)
@@ -499,6 +509,7 @@
.write = wil_write_file_reset,
.open = simple_open,
};
+
/*---write channel 1..4 to rxon for it, 0 to rxoff---*/
static ssize_t wil_write_file_rxon(struct file *file, const char __user *buf,
size_t len, loff_t *ppos)
@@ -509,6 +520,7 @@
bool on;
char *kbuf = kmalloc(len + 1, GFP_KERNEL);
+
if (!kbuf)
return -ENOMEM;
if (copy_from_user(kbuf, buf, len)) {
@@ -545,6 +557,7 @@
.write = wil_write_file_rxon,
.open = simple_open,
};
+
/*---tx_mgmt---*/
/* Write mgmt frame to this file to send it */
static ssize_t wil_write_file_txmgmt(struct file *file, const char __user *buf,
@@ -555,8 +568,8 @@
struct wireless_dev *wdev = wil_to_wdev(wil);
struct cfg80211_mgmt_tx_params params;
int rc;
-
void *frame = kmalloc(len, GFP_KERNEL);
+
if (!frame)
return -ENOMEM;
@@ -625,8 +638,10 @@
{
char printbuf[16 * 3 + 2];
int i = 0;
+
while (i < len) {
int l = min(len - i, 16);
+
hex_dump_to_buffer(p + i, l, 16, 1, printbuf,
sizeof(printbuf), false);
seq_printf(s, "%s%s\n", prefix, printbuf);
@@ -664,10 +679,8 @@
struct wil6210_priv *wil = s->private;
struct vring *vring;
bool tx = (dbg_vring_index < WIL6210_MAX_TX_RINGS);
- if (tx)
- vring = &(wil->vring_tx[dbg_vring_index]);
- else
- vring = &wil->vring_rx;
+
+ vring = tx ? &wil->vring_tx[dbg_vring_index] : &wil->vring_rx;
if (!vring->va) {
if (tx)
@@ -682,7 +695,7 @@
* only field used, .dma.length, is the same
*/
volatile struct vring_tx_desc *d =
- &(vring->va[dbg_txdesc_index].tx);
+ &vring->va[dbg_txdesc_index].tx;
volatile u32 *u = (volatile u32 *)d;
struct sk_buff *skb = vring->ctx[dbg_txdesc_index].skb;
@@ -702,7 +715,7 @@
wil_seq_print_skb(s, skb);
kfree_skb(skb);
}
- seq_printf(s, "}\n");
+ seq_puts(s, "}\n");
} else {
if (tx)
seq_printf(s, "[%2d] TxDesc index (%d) >= size (%d)\n",
@@ -816,6 +829,7 @@
.read = seq_read,
.llseek = seq_lseek,
};
+
/*---------SSID------------*/
static ssize_t wil_read_file_ssid(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
@@ -878,10 +892,10 @@
{
struct wil6210_priv *wil = s->private;
u32 t_m, t_r;
-
int rc = wmi_get_temperature(wil, &t_m, &t_r);
+
if (rc) {
- seq_printf(s, "Failed\n");
+ seq_puts(s, "Failed\n");
return 0;
}
@@ -937,6 +951,7 @@
for (i = 0; i < ARRAY_SIZE(wil->sta); i++) {
struct wil_sta_info *p = &wil->sta[i];
char *status = "unknown";
+
switch (p->status) {
case wil_sta_unused:
status = "unused ";
@@ -997,7 +1012,6 @@
rxf_old = rxf;
txf_old = txf;
-
#define CHECK_QSTATE(x) (state & BIT(__QUEUE_STATE_ ## x)) ? \
" " __stringify(x) : ""
@@ -1032,6 +1046,7 @@
{
int i;
u16 index = ((r->head_seq_num - r->ssn) & 0xfff) % r->buf_size;
+
seq_printf(s, "0x%03x [", r->head_seq_num);
for (i = 0; i < r->buf_size; i++) {
if (i == index)
@@ -1046,10 +1061,12 @@
{
struct wil6210_priv *wil = s->private;
int i, tid;
+ unsigned long flags;
for (i = 0; i < ARRAY_SIZE(wil->sta); i++) {
struct wil_sta_info *p = &wil->sta[i];
char *status = "unknown";
+
switch (p->status) {
case wil_sta_unused:
status = "unused ";
@@ -1065,13 +1082,16 @@
(p->data_port_open ? " data_port_open" : ""));
if (p->status == wil_sta_connected) {
+ spin_lock_irqsave(&p->tid_rx_lock, flags);
for (tid = 0; tid < WIL_STA_TID_NUM; tid++) {
struct wil_tid_ampdu_rx *r = p->tid_rx[tid];
+
if (r) {
seq_printf(s, "[%2d] ", tid);
wil_print_rxtid(s, r);
}
}
+ spin_unlock_irqrestore(&p->tid_rx_lock, flags);
}
}
diff --git a/drivers/net/wireless/ath/wil6210/fw.c b/drivers/net/wireless/ath/wil6210/fw.c
new file mode 100644
index 0000000..8c6f3b0
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/fw.c
@@ -0,0 +1,45 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/crc32.h>
+#include "wil6210.h"
+#include "fw.h"
+
+MODULE_FIRMWARE(WIL_FW_NAME);
+
+/* target operations */
+/* register read */
+#define R(a) ioread32(wil->csr + HOSTADDR(a))
+/* register write. wmb() to make sure it is completed */
+#define W(a, v) do { iowrite32(v, wil->csr + HOSTADDR(a)); wmb(); } while (0)
+/* register set = read, OR, write */
+#define S(a, v) W(a, R(a) | v)
+/* register clear = read, AND with inverted, write */
+#define C(a, v) W(a, R(a) & ~v)
+
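/*
 * Usage sketch (editor's note, not part of this patch): with these
 * helpers a read-modify-write is a one-liner.  For a hypothetical
 * register offset REG_X understood by HOSTADDR():
 *
 *	S(REG_X, BIT(0));	set bit 0
 *	C(REG_X, BIT(0));	clear bit 0
 */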
+static
+void wil_memset_toio_32(volatile void __iomem *dst, u32 val,
+ size_t count)
+{
+ volatile u32 __iomem *d = dst;
+
+ for (count += 4; count > 4; count -= 4)
+ __raw_writel(val, d++);
+}
+
+#include "fw_inc.c"
diff --git a/drivers/net/wireless/ath/wil6210/fw.h b/drivers/net/wireless/ath/wil6210/fw.h
new file mode 100644
index 0000000..7a2c6c1
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/fw.h
@@ -0,0 +1,149 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#define WIL_FW_SIGNATURE (0x36323130) /* '0126' */
+#define WIL_FW_FMT_VERSION (1) /* format version driver supports */
+
+enum wil_fw_record_type {
+ wil_fw_type_comment = 1,
+ wil_fw_type_data = 2,
+ wil_fw_type_fill = 3,
+ wil_fw_type_action = 4,
+ wil_fw_type_verify = 5,
+ wil_fw_type_file_header = 6,
+ wil_fw_type_direct_write = 7,
+ wil_fw_type_gateway_data = 8,
+ wil_fw_type_gateway_data4 = 9,
+};
+
+struct wil_fw_record_head {
+ __le16 type; /* enum wil_fw_record_type */
+ __le16 flags; /* to be defined */
+ __le32 size; /* whole record, bytes after head */
+} __packed;
+
+/* data block. write starting from @addr
+ * data_size inferred from the @head.size. For this case,
+ * data_size = @head.size - offsetof(struct wil_fw_record_data, data)
+ */
+struct wil_fw_record_data { /* type == wil_fw_type_data */
+ __le32 addr;
+ __le32 data[0]; /* [data_size], see above */
+} __packed;
+
+/* fill with constant @value, @size bytes starting from @addr */
+struct wil_fw_record_fill { /* type == wil_fw_type_fill */
+ __le32 addr;
+ __le32 value;
+ __le32 size;
+} __packed;
+
+/* free-form comment
+ * for informational purpose, data_size is @head.size from record header
+ */
+struct wil_fw_record_comment { /* type == wil_fw_type_comment */
+ u8 data[0]; /* free-form data [data_size], see above */
+} __packed;
+
+/* perform action
+ * data_size = @head.size - offsetof(struct wil_fw_record_action, data)
+ */
+struct wil_fw_record_action { /* type == wil_fw_type_action */
+ __le32 action; /* action to perform: reset, wait for fw ready etc. */
+ __le32 data[0]; /* action specific, [data_size], see above */
+} __packed;
+
+/* data block for struct wil_fw_record_direct_write */
+struct wil_fw_data_dwrite {
+ __le32 addr;
+ __le32 value;
+ __le32 mask;
+} __packed;
+
+/* write @value to the @addr,
+ * preserve original bits accordingly to the @mask
+ * data_size is @head.size where @head is record header
+ */
+struct wil_fw_record_direct_write { /* type == wil_fw_type_direct_write */
+ struct wil_fw_data_dwrite data[0];
+} __packed;
+
+/* verify condition: [@addr] & @mask == @value
+ * if condition not met, firmware download fails
+ */
+struct wil_fw_record_verify { /* type == wil_fw_type_verify */
+ __le32 addr; /* read from this address */
+ __le32 value; /* reference value */
+ __le32 mask; /* mask for verification */
+} __packed;
+
+/* file header
+ * First record of every file
+ */
+struct wil_fw_record_file_header {
+ __le32 signature; /* Wilocity signature */
+ __le32 reserved;
+ __le32 crc; /* crc32 of the following data */
+ __le32 version; /* format version */
+ __le32 data_len; /* total data in file, including this record */
+ u8 comment[32]; /* short description */
+} __packed;
+
+/* 1-dword gateway */
+/* data block for the struct wil_fw_record_gateway_data */
+struct wil_fw_data_gw {
+ __le32 addr;
+ __le32 value;
+} __packed;
+
+/* gateway write block.
+ * write starting address and values from the data buffer
+ * through the gateway
+ * data_size inferred from the @head.size. For this case,
+ * data_size = @head.size - offsetof(struct wil_fw_record_gateway_data, data)
+ */
+struct wil_fw_record_gateway_data { /* type == wil_fw_type_gateway_data */
+ __le32 gateway_addr_addr;
+ __le32 gateway_value_addr;
+ __le32 gateway_cmd_addr;
+ __le32 gateway_ctrl_address;
+#define WIL_FW_GW_CTL_BUSY BIT(29) /* gateway busy performing operation */
+#define WIL_FW_GW_CTL_RUN BIT(30) /* start gateway operation */
+ __le32 command;
+ struct wil_fw_data_gw data[0]; /* total size [data_size], see above */
+} __packed;
+
+/* 4-dword gateway */
+/* data block for the struct wil_fw_record_gateway_data4 */
+struct wil_fw_data_gw4 {
+ __le32 addr;
+ __le32 value[4];
+} __packed;
+
+/* gateway write block.
+ * write starting address and values from the data buffer
+ * through the gateway
+ * data_size inferred from the @head.size. For this case,
+ * data_size = @head.size - offsetof(struct wil_fw_record_gateway_data4, data)
+ */
+struct wil_fw_record_gateway_data4 { /* type == wil_fw_type_gateway_data4 */
+ __le32 gateway_addr_addr;
+ __le32 gateway_value_addr[4];
+ __le32 gateway_cmd_addr;
+ __le32 gateway_ctrl_address; /* same logic as for 1-dword gw */
+ __le32 command;
+ struct wil_fw_data_gw4 data[0]; /* total size [data_size], see above */
+} __packed;
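The header above defines a simple TLV stream: every record starts with a
wil_fw_record_head carrying head.size payload bytes, and the first record
must be the file header. As a host-side sketch (not driver code; layouts
mirror fw.h, and a little-endian host is assumed so raw writes produce the
__le fields directly), a tool emitting the mandatory first record could look
like this:

	#include <stdio.h>
	#include <stdint.h>

	struct rec_head { uint16_t type, flags; uint32_t size; };
	struct file_hdr {
		uint32_t signature, reserved, crc, version, data_len;
		uint8_t comment[32];
	};

	static void emit_file_header(FILE *out, uint32_t data_len)
	{
		/* type 6 == wil_fw_type_file_header */
		struct rec_head h = { .type = 6,
				      .size = sizeof(struct file_hdr) };
		struct file_hdr fh = { .signature = 0x36323130 /* '0126' */,
				       .version = 1, .data_len = data_len };

		fwrite(&h, sizeof(h), 1, out);
		fwrite(&fh, sizeof(fh), 1, out); /* crc patched in later */
	}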
diff --git a/drivers/net/wireless/ath/wil6210/fw_inc.c b/drivers/net/wireless/ath/wil6210/fw_inc.c
new file mode 100644
index 0000000..44cb71f
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/fw_inc.c
@@ -0,0 +1,495 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Algorithmic part of the firmware download.
+ * To be included in the container file that provides the framework
+ */
+
+#define wil_err_fw(wil, fmt, arg...) wil_err(wil, "ERR[ FW ]" fmt, ##arg)
+#define wil_dbg_fw(wil, fmt, arg...) wil_dbg(wil, "DBG[ FW ]" fmt, ##arg)
+#define wil_hex_dump_fw(prefix_str, prefix_type, rowsize, \
+ groupsize, buf, len, ascii) \
+ print_hex_dump_debug("DBG[ FW ]" prefix_str, \
+ prefix_type, rowsize, \
+ groupsize, buf, len, ascii)
+
+#define FW_ADDR_CHECK(ioaddr, val, msg) do { \
+ ioaddr = wmi_buffer(wil, val); \
+ if (!ioaddr) { \
+ wil_err_fw(wil, "bad " msg ": 0x%08x\n", \
+ le32_to_cpu(val)); \
+ return -EINVAL; \
+ } \
+ } while (0)
+
+/**
+ * wil_fw_verify - verify firmware file validity
+ *
+ * perform various checks for the firmware file header.
+ * records are not validated.
+ *
+ * Return file size or negative error
+ */
+static int wil_fw_verify(struct wil6210_priv *wil, const u8 *data, size_t size)
+{
+ const struct wil_fw_record_head *hdr = (const void *)data;
+ struct wil_fw_record_file_header fh;
+ const struct wil_fw_record_file_header *fh_;
+ u32 crc;
+ u32 dlen;
+
+ if (size % 4) {
+ wil_err_fw(wil, "image size not aligned: %zu\n", size);
+ return -EINVAL;
+ }
+ /* have enough data for the file header? */
+ if (size < sizeof(*hdr) + sizeof(fh)) {
+ wil_err_fw(wil, "file too short: %zu bytes\n", size);
+ return -EINVAL;
+ }
+
+ /* start with the file header? */
+ if (le16_to_cpu(hdr->type) != wil_fw_type_file_header) {
+ wil_err_fw(wil, "no file header\n");
+ return -EINVAL;
+ }
+
+ /* data_len */
+ fh_ = (struct wil_fw_record_file_header *)&hdr[1];
+ dlen = le32_to_cpu(fh_->data_len);
+ if (dlen % 4) {
+ wil_err_fw(wil, "data length not aligned: %lu\n", (ulong)dlen);
+ return -EINVAL;
+ }
+ if (size < dlen) {
+ wil_err_fw(wil, "file truncated at %zu/%lu\n",
+ size, (ulong)dlen);
+ return -EINVAL;
+ }
+ if (dlen < sizeof(*hdr) + sizeof(fh)) {
+ wil_err_fw(wil, "data length too short: %lu\n", (ulong)dlen);
+ return -EINVAL;
+ }
+
+ /* signature */
+ if (le32_to_cpu(fh_->signature) != WIL_FW_SIGNATURE) {
+ wil_err_fw(wil, "bad header signature: 0x%08x\n",
+ le32_to_cpu(fh_->signature));
+ return -EINVAL;
+ }
+
+ /* version */
+ if (le32_to_cpu(fh_->version) > WIL_FW_FMT_VERSION) {
+ wil_err_fw(wil, "unsupported header version: %d\n",
+ le32_to_cpu(fh_->version));
+ return -EINVAL;
+ }
+
+ /* checksum. ~crc32(~0, data, size) when fh.crc set to 0 */
+ fh = *fh_;
+ fh.crc = 0;
+
+ crc = crc32_le(~0, (unsigned char const *)hdr, sizeof(*hdr));
+ crc = crc32_le(crc, (unsigned char const *)&fh, sizeof(fh));
+ crc = crc32_le(crc, (unsigned char const *)&fh_[1],
+ dlen - sizeof(*hdr) - sizeof(fh));
+ crc = ~crc;
+
+ if (crc != le32_to_cpu(fh_->crc)) {
+ wil_err_fw(wil, "checksum mismatch:"
+ " calculated for %lu bytes 0x%08x != 0x%08x\n",
+ (ulong)dlen, crc, le32_to_cpu(fh_->crc));
+ return -EINVAL;
+ }
+
+ return (int)dlen;
+}
+
+static int fw_handle_comment(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ wil_hex_dump_fw("", DUMP_PREFIX_OFFSET, 16, 1, data, size, true);
+
+ return 0;
+}
+
+static int fw_handle_data(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ const struct wil_fw_record_data *d = data;
+ void __iomem *dst;
+ size_t s = size - sizeof(*d);
+
+ if (size < sizeof(*d) + sizeof(u32)) {
+ wil_err_fw(wil, "data record too short: %zu\n", size);
+ return -EINVAL;
+ }
+
+ FW_ADDR_CHECK(dst, d->addr, "address");
+ wil_dbg_fw(wil, "write [0x%08x] <== %zu bytes\n", le32_to_cpu(d->addr),
+ s);
+ wil_memcpy_toio_32(dst, d->data, s);
+ wmb(); /* finish before processing next record */
+
+ return 0;
+}
+
+static int fw_handle_fill(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ const struct wil_fw_record_fill *d = data;
+ void __iomem *dst;
+ u32 v;
+ size_t s = (size_t)le32_to_cpu(d->size);
+
+ if (size != sizeof(*d)) {
+ wil_err_fw(wil, "bad size for fill record: %zu\n", size);
+ return -EINVAL;
+ }
+
+ if (s < sizeof(u32)) {
+ wil_err_fw(wil, "fill size too short: %zu\n", s);
+ return -EINVAL;
+ }
+
+ if (s % sizeof(u32)) {
+ wil_err_fw(wil, "fill size not aligned: %zu\n", s);
+ return -EINVAL;
+ }
+
+ FW_ADDR_CHECK(dst, d->addr, "address");
+
+ v = le32_to_cpu(d->value);
+ wil_dbg_fw(wil, "fill [0x%08x] <== 0x%08x, %zu bytes\n",
+ le32_to_cpu(d->addr), v, s);
+ wil_memset_toio_32(dst, v, s);
+ wmb(); /* finish before processing next record */
+
+ return 0;
+}
+
+static int fw_handle_file_header(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ const struct wil_fw_record_file_header *d = data;
+
+ if (size != sizeof(*d)) {
+ wil_err_fw(wil, "file header length incorrect: %zu\n", size);
+ return -EINVAL;
+ }
+
+ wil_dbg_fw(wil, "new file, ver. %d, %i bytes\n",
+ d->version, d->data_len);
+ wil_hex_dump_fw("", DUMP_PREFIX_OFFSET, 16, 1, d->comment,
+ sizeof(d->comment), true);
+
+ return 0;
+}
+
+static int fw_handle_direct_write(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ const struct wil_fw_record_direct_write *d = data;
+ const struct wil_fw_data_dwrite *block = d->data;
+ int n, i;
+
+ if (size % sizeof(*block)) {
+ wil_err_fw(wil, "record size not aligned on %zu: %zu\n",
+ sizeof(*block), size);
+ return -EINVAL;
+ }
+ n = size / sizeof(*block);
+
+ for (i = 0; i < n; i++) {
+ void __iomem *dst;
+ u32 m = le32_to_cpu(block[i].mask);
+ u32 v = le32_to_cpu(block[i].value);
+ u32 x, y;
+
+ FW_ADDR_CHECK(dst, block[i].addr, "address");
+
+ x = ioread32(dst);
+ y = (x & m) | (v & ~m);
+ wil_dbg_fw(wil, "write [0x%08x] <== 0x%08x "
+ "(old 0x%08x val 0x%08x mask 0x%08x)\n",
+ le32_to_cpu(block[i].addr), y, x, v, m);
+ iowrite32(y, dst);
+ wmb(); /* finish before processing next record */
+ }
+
+ return 0;
+}
+
+static int gw_write(struct wil6210_priv *wil, void __iomem *gwa_addr,
+ void __iomem *gwa_cmd, void __iomem *gwa_ctl, u32 gw_cmd,
+ u32 a)
+{
+ unsigned delay = 0;
+
+ iowrite32(a, gwa_addr);
+ iowrite32(gw_cmd, gwa_cmd);
+ wmb(); /* finish before activate gw */
+
+ iowrite32(WIL_FW_GW_CTL_RUN, gwa_ctl); /* activate gw */
+ do {
+ udelay(1); /* typical time is few usec */
+ if (delay++ > 100) {
+ wil_err_fw(wil, "gw timeout\n");
+ return -EINVAL;
+ }
+ } while (ioread32(gwa_ctl) & WIL_FW_GW_CTL_BUSY); /* gw done? */
+
+ return 0;
+}
+
+static int fw_handle_gateway_data(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ const struct wil_fw_record_gateway_data *d = data;
+ const struct wil_fw_data_gw *block = d->data;
+ void __iomem *gwa_addr;
+ void __iomem *gwa_val;
+ void __iomem *gwa_cmd;
+ void __iomem *gwa_ctl;
+ u32 gw_cmd;
+ int n, i;
+
+ if (size < sizeof(*d) + sizeof(*block)) {
+ wil_err_fw(wil, "gateway record too short: %zu\n", size);
+ return -EINVAL;
+ }
+
+ if ((size - sizeof(*d)) % sizeof(*block)) {
+ wil_err_fw(wil, "gateway record data size"
+ " not aligned on %zu: %zu\n",
+ sizeof(*block), size - sizeof(*d));
+ return -EINVAL;
+ }
+ n = (size - sizeof(*d)) / sizeof(*block);
+
+ gw_cmd = le32_to_cpu(d->command);
+
+ wil_dbg_fw(wil, "gw write record [%3d] blocks, cmd 0x%08x\n",
+ n, gw_cmd);
+
+ FW_ADDR_CHECK(gwa_addr, d->gateway_addr_addr, "gateway_addr_addr");
+ FW_ADDR_CHECK(gwa_val, d->gateway_value_addr, "gateway_value_addr");
+ FW_ADDR_CHECK(gwa_cmd, d->gateway_cmd_addr, "gateway_cmd_addr");
+ FW_ADDR_CHECK(gwa_ctl, d->gateway_ctrl_address, "gateway_ctrl_address");
+
+ wil_dbg_fw(wil, "gw addresses: addr 0x%08x val 0x%08x"
+ " cmd 0x%08x ctl 0x%08x\n",
+ le32_to_cpu(d->gateway_addr_addr),
+ le32_to_cpu(d->gateway_value_addr),
+ le32_to_cpu(d->gateway_cmd_addr),
+ le32_to_cpu(d->gateway_ctrl_address));
+
+ for (i = 0; i < n; i++) {
+ int rc;
+ u32 a = le32_to_cpu(block[i].addr);
+ u32 v = le32_to_cpu(block[i].value);
+
+ wil_dbg_fw(wil, " gw write[%3d] [0x%08x] <== 0x%08x\n",
+ i, a, v);
+
+ iowrite32(v, gwa_val);
+ rc = gw_write(wil, gwa_addr, gwa_cmd, gwa_ctl, gw_cmd, a);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static int fw_handle_gateway_data4(struct wil6210_priv *wil, const void *data,
+ size_t size)
+{
+ const struct wil_fw_record_gateway_data4 *d = data;
+ const struct wil_fw_data_gw4 *block = d->data;
+ void __iomem *gwa_addr;
+ void __iomem *gwa_val[ARRAY_SIZE(block->value)];
+ void __iomem *gwa_cmd;
+ void __iomem *gwa_ctl;
+ u32 gw_cmd;
+ int n, i, k;
+
+ if (size < sizeof(*d) + sizeof(*block)) {
+ wil_err_fw(wil, "gateway4 record too short: %zu\n", size);
+ return -EINVAL;
+ }
+
+ if ((size - sizeof(*d)) % sizeof(*block)) {
+ wil_err_fw(wil, "gateway4 record data size"
+ " not aligned on %zu: %zu\n",
+ sizeof(*block), size - sizeof(*d));
+ return -EINVAL;
+ }
+ n = (size - sizeof(*d)) / sizeof(*block);
+
+ gw_cmd = le32_to_cpu(d->command);
+
+ wil_dbg_fw(wil, "gw4 write record [%3d] blocks, cmd 0x%08x\n",
+ n, gw_cmd);
+
+ FW_ADDR_CHECK(gwa_addr, d->gateway_addr_addr, "gateway_addr_addr");
+ for (k = 0; k < ARRAY_SIZE(block->value); k++)
+ FW_ADDR_CHECK(gwa_val[k], d->gateway_value_addr[k],
+ "gateway_value_addr");
+ FW_ADDR_CHECK(gwa_cmd, d->gateway_cmd_addr, "gateway_cmd_addr");
+ FW_ADDR_CHECK(gwa_ctl, d->gateway_ctrl_address, "gateway_ctrl_address");
+
+ wil_dbg_fw(wil, "gw4 addresses: addr 0x%08x cmd 0x%08x ctl 0x%08x\n",
+ le32_to_cpu(d->gateway_addr_addr),
+ le32_to_cpu(d->gateway_cmd_addr),
+ le32_to_cpu(d->gateway_ctrl_address));
+ wil_hex_dump_fw("val addresses: ", DUMP_PREFIX_NONE, 16, 4,
+ d->gateway_value_addr, sizeof(d->gateway_value_addr),
+ false);
+
+ for (i = 0; i < n; i++) {
+ int rc;
+ u32 a = le32_to_cpu(block[i].addr);
+ u32 v[ARRAY_SIZE(block->value)];
+
+ for (k = 0; k < ARRAY_SIZE(block->value); k++)
+ v[k] = le32_to_cpu(block[i].value[k]);
+
+ wil_dbg_fw(wil, " gw4 write[%3d] [0x%08x] <==\n", i, a);
+ wil_hex_dump_fw(" val ", DUMP_PREFIX_NONE, 16, 4, v,
+ sizeof(v), false);
+
+ for (k = 0; k < ARRAY_SIZE(block->value); k++)
+ iowrite32(v[k], gwa_val[k]);
+ rc = gw_write(wil, gwa_addr, gwa_cmd, gwa_ctl, gw_cmd, a);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static const struct {
+ int type;
+ int (*handler)(struct wil6210_priv *wil, const void *data, size_t size);
+} wil_fw_handlers[] = {
+ {wil_fw_type_comment, fw_handle_comment},
+ {wil_fw_type_data, fw_handle_data},
+ {wil_fw_type_fill, fw_handle_fill},
+ /* wil_fw_type_action */
+ /* wil_fw_type_verify */
+ {wil_fw_type_file_header, fw_handle_file_header},
+ {wil_fw_type_direct_write, fw_handle_direct_write},
+ {wil_fw_type_gateway_data, fw_handle_gateway_data},
+ {wil_fw_type_gateway_data4, fw_handle_gateway_data4},
+};
+
+static int wil_fw_handle_record(struct wil6210_priv *wil, int type,
+ const void *data, size_t size)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wil_fw_handlers); i++) {
+ if (wil_fw_handlers[i].type == type)
+ return wil_fw_handlers[i].handler(wil, data, size);
+ }
+
+ wil_err_fw(wil, "unknown record type: %d\n", type);
+ return -EINVAL;
+}
+
+/**
+ * wil_fw_load - load FW into device
+ *
+ * Load the FW and uCode code and data to the corresponding device
+ * memory regions
+ *
+ * Return error code
+ */
+static int wil_fw_load(struct wil6210_priv *wil, const void *data, size_t size)
+{
+ int rc = 0;
+ const struct wil_fw_record_head *hdr;
+ size_t s, hdr_sz;
+
+ for (hdr = data;; hdr = (const void *)hdr + s, size -= s) {
+ if (size < sizeof(*hdr))
+ break;
+ hdr_sz = le32_to_cpu(hdr->size);
+ s = sizeof(*hdr) + hdr_sz;
+ if (s > size)
+ break;
+ if (hdr_sz % 4) {
+ wil_err_fw(wil, "unaligned record size: %zu\n",
+ hdr_sz);
+ return -EINVAL;
+ }
+ rc = wil_fw_handle_record(wil, le16_to_cpu(hdr->type),
+ &hdr[1], hdr_sz);
+ if (rc)
+ return rc;
+ }
+ if (size) {
+ wil_err_fw(wil, "unprocessed bytes: %zu\n", size);
+ if (size >= sizeof(*hdr)) {
+ wil_err_fw(wil, "Stop at offset %ld"
+ " record type %d [%zd bytes]\n",
+ (const void *)hdr - data,
+ le16_to_cpu(hdr->type), hdr_sz);
+ }
+ return -EINVAL;
+ }
+ /* Mark FW as loaded from host */
+ S(RGF_USER_USAGE_6, 1);
+
+ return rc;
+}
+
+/**
+ * wil_request_firmware - Request firmware and load to device
+ *
+ * Request firmware image from the file and load it to device
+ *
+ * Return error code
+ */
+int wil_request_firmware(struct wil6210_priv *wil, const char *name)
+{
+ int rc, rc1;
+ const struct firmware *fw;
+ size_t sz;
+ const void *d;
+
+ rc = request_firmware(&fw, name, wil_to_pcie_dev(wil));
+ if (rc) {
+ wil_err_fw(wil, "Failed to load firmware %s\n", name);
+ return rc;
+ }
+ wil_dbg_fw(wil, "Loading <%s>, %zu bytes\n", name, fw->size);
+
+ for (sz = fw->size, d = fw->data; sz; sz -= rc1, d += rc1) {
+ rc1 = wil_fw_verify(wil, d, sz);
+ if (rc1 < 0) {
+ rc = rc1;
+ goto out;
+ }
+ rc = wil_fw_load(wil, d, rc1);
+ if (rc < 0)
+ goto out;
+ }
+
+out:
+ release_firmware(fw);
+ return rc;
+}
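wil_fw_verify() above chains crc32_le() over the record head, a copy of the
file header with its crc field zeroed, and the remaining data. A host-side
sketch of the same rule (not driver code): the CRC covers the first data_len
bytes with the fh.crc dword, at file offset 16, forced to zero. zlib's
crc32() uses the same reflected polynomial with implicit ~0 seed and final
inversion, so it matches ~crc32_le(~0, buf, len); a little-endian host is
assumed when comparing against the stored value.

	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>
	#include <zlib.h>

	static int check_fw_crc(const uint8_t *img, size_t data_len,
				uint32_t stored)
	{
		uint8_t *copy = malloc(data_len);
		uint32_t crc;

		if (!copy)
			return -1;
		memcpy(copy, img, data_len);
		memset(copy + 16, 0, 4); /* zero fh.crc before hashing */
		crc = (uint32_t)crc32(0L, copy, data_len);
		free(copy);
		return crc == stored ? 0 : -1;
	}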
diff --git a/drivers/net/wireless/ath/wil6210/interrupt.c b/drivers/net/wireless/ath/wil6210/interrupt.c
index 98bfbb6..7269bac 100644
--- a/drivers/net/wireless/ath/wil6210/interrupt.c
+++ b/drivers/net/wireless/ath/wil6210/interrupt.c
@@ -135,7 +135,7 @@
HOSTADDR(RGF_DMA_PSEUDO_CAUSE_MASK_SW));
}
-void wil6210_disable_irq(struct wil6210_priv *wil)
+void wil_mask_irq(struct wil6210_priv *wil)
{
wil_dbg_irq(wil, "%s()\n", __func__);
@@ -145,7 +145,7 @@
wil6210_mask_irq_pseudo(wil);
}
-void wil6210_enable_irq(struct wil6210_priv *wil)
+void wil_unmask_irq(struct wil6210_priv *wil)
{
wil_dbg_irq(wil, "%s()\n", __func__);
@@ -196,8 +196,13 @@
wil_dbg_irq(wil, "RX done\n");
isr &= ~BIT_DMA_EP_RX_ICR_RX_DONE;
if (test_bit(wil_status_reset_done, &wil->status)) {
- wil_dbg_txrx(wil, "NAPI(Rx) schedule\n");
- napi_schedule(&wil->napi_rx);
+ if (test_bit(wil_status_napi_en, &wil->status)) {
+ wil_dbg_txrx(wil, "NAPI(Rx) schedule\n");
+ napi_schedule(&wil->napi_rx);
+ } else {
+ wil_err(wil, "Got Rx interrupt while "
+ "stopping interface\n");
+ }
} else {
wil_err(wil, "Got Rx interrupt while in reset\n");
}
@@ -506,7 +511,8 @@
return rc;
}
-/* can't use wil_ioread32_and_clear because ICC value is not ser yet */
+
+/* can't use wil_ioread32_and_clear because ICC value is not set yet */
static inline void wil_clear32(void __iomem *addr)
{
u32 x = ioread32(addr);
@@ -522,11 +528,15 @@
offsetof(struct RGF_ICR, ICR));
wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_MISC_ICR) +
offsetof(struct RGF_ICR, ICR));
+ wmb(); /* make sure write completed */
}
int wil6210_init_irq(struct wil6210_priv *wil, int irq)
{
int rc;
+
+ wil_dbg_misc(wil, "%s() n_msi=%d\n", __func__, wil->n_msi);
+
if (wil->n_msi == 3)
rc = wil6210_request_3msi(wil, irq);
else
@@ -534,17 +544,14 @@
wil6210_thread_irq,
wil->n_msi ? 0 : IRQF_SHARED,
WIL_NAME, wil);
- if (rc)
- return rc;
-
- wil6210_enable_irq(wil);
-
- return 0;
+ return rc;
}
void wil6210_fini_irq(struct wil6210_priv *wil, int irq)
{
- wil6210_disable_irq(wil);
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
+ wil_mask_irq(wil);
free_irq(irq, wil);
if (wil->n_msi == 3) {
free_irq(irq + 1, wil);
diff --git a/drivers/net/wireless/ath/wil6210/main.c b/drivers/net/wireless/ath/wil6210/main.c
index b69d90f..21667e0 100644
--- a/drivers/net/wireless/ath/wil6210/main.c
+++ b/drivers/net/wireless/ath/wil6210/main.c
@@ -20,11 +20,19 @@
#include "wil6210.h"
#include "txrx.h"
+#include "wmi.h"
+
+#define WAIT_FOR_DISCONNECT_TIMEOUT_MS 2000
+#define WAIT_FOR_DISCONNECT_INTERVAL_MS 10
static bool no_fw_recovery;
module_param(no_fw_recovery, bool, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(no_fw_recovery, " disable FW error recovery");
+static bool no_fw_load = true;
+module_param(no_fw_load, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(no_fw_load, " do not download FW, use the one in on-card flash.");
+
#define RST_DELAY (20) /* msec, for loop in @wil_target_reset */
#define RST_COUNT (1 + 1000/RST_DELAY) /* round up to be above 1 sec total */
@@ -67,6 +75,7 @@
struct net_device *ndev = wil_to_ndev(wil);
struct wireless_dev *wdev = wil->wdev;
struct wil_sta_info *sta = &wil->sta[cid];
+
wil_dbg_misc(wil, "%s(CID %d, status %d)\n", __func__, cid,
sta->status);
@@ -86,9 +95,16 @@
}
for (i = 0; i < WIL_STA_TID_NUM; i++) {
- struct wil_tid_ampdu_rx *r = sta->tid_rx[i];
+ struct wil_tid_ampdu_rx *r;
+ unsigned long flags;
+
+ spin_lock_irqsave(&sta->tid_rx_lock, flags);
+
+ r = sta->tid_rx[i];
sta->tid_rx[i] = NULL;
wil_tid_ampdu_rx_free(wil, r);
+
+ spin_unlock_irqrestore(&sta->tid_rx_lock, flags);
}
for (i = 0; i < ARRAY_SIZE(wil->vring_tx); i++) {
if (wil->vring2cid_tid[i][0] == cid)
@@ -205,10 +221,8 @@
case NL80211_IFTYPE_MONITOR:
wil_info(wil, "fw error recovery started (try %d)...\n",
wil->recovery_count);
- wil_reset(wil);
-
- /* need to re-allocate Rx ring after reset */
- wil_rx_init(wil);
+ __wil_down(wil);
+ __wil_up(wil);
break;
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
@@ -223,6 +237,7 @@
static int wil_find_free_vring(struct wil6210_priv *wil)
{
int i;
+
for (i = 0; i < WIL6210_MAX_TX_RINGS; i++) {
if (!wil->vring_tx[i].va)
return i;
@@ -257,14 +272,19 @@
int wil_priv_init(struct wil6210_priv *wil)
{
+ uint i;
+
wil_dbg_misc(wil, "%s()\n", __func__);
memset(wil->sta, 0, sizeof(wil->sta));
+ for (i = 0; i < WIL6210_MAX_CID; i++)
+ spin_lock_init(&wil->sta[i].tid_rx_lock);
mutex_init(&wil->mutex);
mutex_init(&wil->wmi_mutex);
init_completion(&wil->wmi_ready);
+ init_completion(&wil->wmi_call);
wil->pending_connect_cid = -1;
setup_timer(&wil->connect_timer, wil_connect_timer_fn, (ulong)wil);
@@ -295,12 +315,16 @@
void wil6210_disconnect(struct wil6210_priv *wil, const u8 *bssid)
{
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
del_timer_sync(&wil->connect_timer);
_wil6210_disconnect(wil, bssid);
}
void wil_priv_deinit(struct wil6210_priv *wil)
{
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
del_timer_sync(&wil->scan_timer);
cancel_work_sync(&wil->disconnect_worker);
cancel_work_sync(&wil->fw_error_worker);
@@ -312,6 +336,28 @@
destroy_workqueue(wil->wmi_wq);
}
+/* target operations */
+/* register read */
+#define R(a) ioread32(wil->csr + HOSTADDR(a))
+/* register write. wmb() to make sure it is completed */
+#define W(a, v) do { iowrite32(v, wil->csr + HOSTADDR(a)); wmb(); } while (0)
+/* register set = read, OR, write */
+#define S(a, v) W(a, R(a) | v)
+/* register clear = read, AND with inverted, write */
+#define C(a, v) W(a, R(a) & ~v)
+
+static inline void wil_halt_cpu(struct wil6210_priv *wil)
+{
+ W(RGF_USER_USER_CPU_0, BIT_USER_USER_CPU_MAN_RST);
+ W(RGF_USER_MAC_CPU_0, BIT_USER_MAC_CPU_MAN_RST);
+}
+
+static inline void wil_release_cpu(struct wil6210_priv *wil)
+{
+ /* Start CPU */
+ W(RGF_USER_USER_CPU_0, 1);
+}
+
static int wil_target_reset(struct wil6210_priv *wil)
{
int delay = 0;
@@ -321,60 +367,41 @@
wil_dbg_misc(wil, "Resetting \"%s\"...\n", wil->board->name);
- /* register read */
-#define R(a) ioread32(wil->csr + HOSTADDR(a))
- /* register write */
-#define W(a, v) iowrite32(v, wil->csr + HOSTADDR(a))
- /* register set = read, OR, write */
-#define S(a, v) W(a, R(a) | v)
- /* register clear = read, AND with inverted, write */
-#define C(a, v) W(a, R(a) & ~v)
-
- wmb(); /* If host reorder writes here -> race in NIC */
- W(RGF_USER_MAC_CPU_0, BIT(1)); /* mac_cpu_man_rst */
wil->hw_version = R(RGF_USER_FW_REV_ID);
rev_id = wil->hw_version & 0xff;
/* Clear MAC link up */
S(RGF_HP_CTRL, BIT(15));
- /* hpal_perst_from_pad_src_n_mask */
- S(RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT(6));
- /* car_perst_rst_src_n_mask */
- S(RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT(7));
- wmb(); /* order is important here */
+ S(RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT_HPAL_PERST_FROM_PAD);
+ S(RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT_CAR_PERST_RST);
+
+ wil_halt_cpu(wil);
+ C(RGF_USER_CLKS_CTL_0, BIT_USER_CLKS_CAR_AHB_SW_SEL); /* 40 MHz */
if (is_sparrow) {
W(RGF_USER_CLKS_CTL_EXT_SW_RST_VEC_0, 0x3ff81f);
- wmb(); /* order is important here */
+ W(RGF_USER_CLKS_CTL_EXT_SW_RST_VEC_1, 0xf);
}
- W(RGF_USER_USER_CPU_0, BIT(1)); /* user_cpu_man_rst */
- wmb(); /* If host reorder writes here -> race in NIC */
- W(RGF_USER_MAC_CPU_0, BIT(1)); /* mac_cpu_man_rst */
- wmb(); /* order is important here */
-
W(RGF_USER_CLKS_CTL_SW_RST_VEC_2, 0xFE000000);
W(RGF_USER_CLKS_CTL_SW_RST_VEC_1, 0x0000003F);
- W(RGF_USER_CLKS_CTL_SW_RST_VEC_3, is_sparrow ? 0x000000B0 : 0x00000170);
- W(RGF_USER_CLKS_CTL_SW_RST_VEC_0, 0xFFE7FC00);
- wmb(); /* order is important here */
+ W(RGF_USER_CLKS_CTL_SW_RST_VEC_3, is_sparrow ? 0x000000f0 : 0x00000170);
+ W(RGF_USER_CLKS_CTL_SW_RST_VEC_0, 0xFFE7FE00);
if (is_sparrow) {
W(RGF_USER_CLKS_CTL_EXT_SW_RST_VEC_0, 0x0);
- wmb(); /* order is important here */
+ W(RGF_USER_CLKS_CTL_EXT_SW_RST_VEC_1, 0x0);
}
W(RGF_USER_CLKS_CTL_SW_RST_VEC_3, 0);
W(RGF_USER_CLKS_CTL_SW_RST_VEC_2, 0);
W(RGF_USER_CLKS_CTL_SW_RST_VEC_1, 0);
W(RGF_USER_CLKS_CTL_SW_RST_VEC_0, 0);
- wmb(); /* order is important here */
if (is_sparrow) {
W(RGF_USER_CLKS_CTL_SW_RST_VEC_3, 0x00000003);
/* reset A2 PCIE AHB */
W(RGF_USER_CLKS_CTL_SW_RST_VEC_2, 0x00008000);
-
} else {
W(RGF_USER_CLKS_CTL_SW_RST_VEC_3, 0x00000001);
if (rev_id == 1) {
@@ -384,12 +411,10 @@
W(RGF_PCIE_LOS_COUNTER_CTL, BIT(6) | BIT(8));
W(RGF_USER_CLKS_CTL_SW_RST_VEC_2, 0x00008000);
}
-
}
/* TODO: check order here!!! Erez code is different */
W(RGF_USER_CLKS_CTL_SW_RST_VEC_0, 0);
- wmb(); /* order is important here */
/* wait until device ready. typical time is 200..250 msec */
do {
@@ -407,16 +432,15 @@
W(RGF_PCIE_LOS_COUNTER_CTL, BIT(8));
C(RGF_USER_CLKS_CTL_0, BIT_USER_CLKS_RST_PWGD);
- wmb(); /* order is important here */
wil_dbg_misc(wil, "Reset completed in %d ms\n", delay * RST_DELAY);
return 0;
+}
#undef R
#undef W
#undef S
#undef C
-}
void wil_mbox_ring_le2cpus(struct wil6210_mbox_ring *r)
{
@@ -431,6 +455,7 @@
{
ulong to = msecs_to_jiffies(1000);
ulong left = wait_for_completion_timeout(&wil->wmi_ready, to);
+
if (0 == left) {
wil_err(wil, "Firmware not ready\n");
return -ETIME;
@@ -450,15 +475,15 @@
{
int rc;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
WARN_ON(!mutex_is_locked(&wil->mutex));
+ WARN_ON(test_bit(wil_status_napi_en, &wil->status));
cancel_work_sync(&wil->disconnect_worker);
wil6210_disconnect(wil, NULL);
wil->status = 0; /* prevent NAPI from being scheduled */
- if (test_bit(wil_status_napi_en, &wil->status)) {
- napi_synchronize(&wil->napi_rx);
- }
if (wil->scan_request) {
wil_dbg_misc(wil, "Abort scan_request 0x%p\n",
@@ -468,7 +493,7 @@
wil->scan_request = NULL;
}
- wil6210_disable_irq(wil);
+ wil_mask_irq(wil);
wmi_event_flush(wil);
@@ -480,13 +505,38 @@
if (rc)
return rc;
+ if (!no_fw_load) {
+ wil_info(wil, "Use firmware <%s>\n", WIL_FW_NAME);
+ wil_halt_cpu(wil);
+ /* Loading f/w from the file */
+ rc = wil_request_firmware(wil, WIL_FW_NAME);
+ if (rc)
+ return rc;
+
+ /* clear any interrupts which on-card-firmware may have set */
+ wil6210_clear_irq(wil);
+ { /* CAF_ICR - clear and mask */
+ u32 a = HOSTADDR(RGF_CAF_ICR) +
+ offsetof(struct RGF_ICR, ICR);
+ u32 m = HOSTADDR(RGF_CAF_ICR) +
+ offsetof(struct RGF_ICR, IMV);
+ u32 icr = ioread32(wil->csr + a);
+
+ iowrite32(icr, wil->csr + a); /* W1C */
+ iowrite32(~0, wil->csr + m);
+ wmb(); /* wait for completion */
+ }
+ wil_release_cpu(wil);
+ } else {
+ wil_info(wil, "Use firmware from on-card flash\n");
+ }
/* init after reset */
wil->pending_connect_cid = -1;
reinit_completion(&wil->wmi_ready);
+ reinit_completion(&wil->wmi_call);
- /* TODO: release MAC reset */
- wil6210_enable_irq(wil);
+ wil_unmask_irq(wil);
/* we just started MAC, wait for FW ready */
rc = wil_wait_for_fw_ready(wil);
@@ -522,7 +572,7 @@
netif_carrier_off(ndev);
}
-static int __wil_up(struct wil6210_priv *wil)
+int __wil_up(struct wil6210_priv *wil)
{
struct net_device *ndev = wil_to_ndev(wil);
struct wireless_dev *wdev = wil->wdev;
@@ -568,11 +618,15 @@
/* MAC address - pre-requisite for other commands */
wmi_set_mac_address(wil, ndev->dev_addr);
-
+ wil_dbg_misc(wil, "NAPI enable\n");
napi_enable(&wil->napi_rx);
napi_enable(&wil->napi_tx);
set_bit(wil_status_napi_en, &wil->status);
+ if (wil->platform_ops.bus_request)
+ wil->platform_ops.bus_request(wil->platform_handle,
+ WIL_MAX_BUS_REQUEST_KBPS);
+
return 0;
}
@@ -580,6 +634,8 @@
{
int rc;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
mutex_lock(&wil->mutex);
rc = __wil_up(wil);
mutex_unlock(&wil->mutex);
@@ -587,13 +643,23 @@
return rc;
}
-static int __wil_down(struct wil6210_priv *wil)
+int __wil_down(struct wil6210_priv *wil)
{
+ int iter = WAIT_FOR_DISCONNECT_TIMEOUT_MS /
+ WAIT_FOR_DISCONNECT_INTERVAL_MS;
+
WARN_ON(!mutex_is_locked(&wil->mutex));
- clear_bit(wil_status_napi_en, &wil->status);
- napi_disable(&wil->napi_rx);
- napi_disable(&wil->napi_tx);
+ if (wil->platform_ops.bus_request)
+ wil->platform_ops.bus_request(wil->platform_handle, 0);
+
+ wil_disable_irq(wil);
+ if (test_and_clear_bit(wil_status_napi_en, &wil->status)) {
+ napi_disable(&wil->napi_rx);
+ napi_disable(&wil->napi_tx);
+ wil_dbg_misc(wil, "NAPI disable\n");
+ }
+ wil_enable_irq(wil);
if (wil->scan_request) {
wil_dbg_misc(wil, "Abort scan_request 0x%p\n",
@@ -603,7 +669,24 @@
wil->scan_request = NULL;
}
- wil6210_disconnect(wil, NULL);
+ if (test_bit(wil_status_fwconnected, &wil->status) ||
+ test_bit(wil_status_fwconnecting, &wil->status))
+ wmi_send(wil, WMI_DISCONNECT_CMDID, NULL, 0);
+
+ /* make sure wil is idle (not connected) */
+ mutex_unlock(&wil->mutex);
+ while (iter--) {
+ int idle = !test_bit(wil_status_fwconnected, &wil->status) &&
+ !test_bit(wil_status_fwconnecting, &wil->status);
+ if (idle)
+ break;
+ msleep(WAIT_FOR_DISCONNECT_INTERVAL_MS);
+ }
+ mutex_lock(&wil->mutex);
+
+ if (!iter)
+ wil_err(wil, "timeout waiting for idle FW/HW\n");
+
wil_rx_fini(wil);
return 0;
@@ -613,6 +696,8 @@
{
int rc;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
mutex_lock(&wil->mutex);
rc = __wil_down(wil);
mutex_unlock(&wil->mutex);
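The new shutdown path in __wil_down() polls for FW idle rather than blocking
on an event. A sketch of the same bounded-poll pattern, with fw_is_idle() as
a hypothetical predicate standing in for the two status-bit tests; note that
while (iter--) leaves iter at -1 on exhaustion, so the sketch detects the
timeout with iter < 0:

	int iter = WAIT_FOR_DISCONNECT_TIMEOUT_MS /
		   WAIT_FOR_DISCONNECT_INTERVAL_MS;

	while (iter--) {
		if (fw_is_idle())
			break;
		msleep(WAIT_FOR_DISCONNECT_INTERVAL_MS);
	}
	if (iter < 0)
		wil_err(wil, "timeout waiting for idle FW/HW\n");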
diff --git a/drivers/net/wireless/ath/wil6210/netdev.c b/drivers/net/wireless/ath/wil6210/netdev.c
index a44c2b6..1c0c77d 100644
--- a/drivers/net/wireless/ath/wil6210/netdev.c
+++ b/drivers/net/wireless/ath/wil6210/netdev.c
@@ -17,11 +17,14 @@
#include <linux/etherdevice.h>
#include "wil6210.h"
+#include "txrx.h"
static int wil_open(struct net_device *ndev)
{
struct wil6210_priv *wil = ndev_to_wil(ndev);
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
return wil_up(wil);
}
@@ -29,6 +32,8 @@
{
struct wil6210_priv *wil = ndev_to_wil(ndev);
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
return wil_down(wil);
}
@@ -36,8 +41,10 @@
{
struct wil6210_priv *wil = ndev_to_wil(ndev);
- if (new_mtu < 68 || new_mtu > IEEE80211_MAX_DATA_LEN_DMG)
+ if (new_mtu < 68 || new_mtu > (TX_BUF_LEN - ETH_HLEN)) {
+ wil_err(wil, "invalid MTU %d\n", new_mtu);
return -EINVAL;
+ }
wil_dbg_misc(wil, "change MTU %d -> %d\n", ndev->mtu, new_mtu);
ndev->mtu = new_mtu;
@@ -121,6 +128,8 @@
wil->csr = csr;
wil->wdev = wdev;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
rc = wil_priv_init(wil);
if (rc) {
dev_err(dev, "wil_priv_init failed\n");
@@ -169,6 +178,8 @@
{
struct net_device *ndev = wil_to_ndev(wil);
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
if (!ndev)
return;
@@ -185,6 +196,8 @@
struct net_device *ndev = wil_to_ndev(wil);
int rc;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
rc = register_netdev(ndev);
if (rc < 0) {
dev_err(&ndev->dev, "Failed to register netdev: %d\n", rc);
@@ -200,5 +213,7 @@
{
struct net_device *ndev = wil_to_ndev(wil);
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
unregister_netdev(ndev);
}
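With TX_BUF_LEN raised to 2242 (see the txrx.h hunk below), the MTU check in
wil_change_mtu() accepts 68..2228 inclusive. An illustrative restatement of
the window, not code from the driver:

	static inline bool wil_mtu_in_range(int mtu)
	{
		/* 2242 - 14 (ETH_HLEN) = 2228 */
		return mtu >= 68 && mtu <= TX_BUF_LEN - ETH_HLEN;
	}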
diff --git a/drivers/net/wireless/ath/wil6210/pcie_bus.c b/drivers/net/wireless/ath/wil6210/pcie_bus.c
index 38dcbea..66626a8 100644
--- a/drivers/net/wireless/ath/wil6210/pcie_bus.c
+++ b/drivers/net/wireless/ath/wil6210/pcie_bus.c
@@ -17,6 +17,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/moduleparam.h>
+#include <linux/interrupt.h>
#include "wil6210.h"
@@ -30,6 +31,28 @@
module_param(debug_fw, bool, S_IRUGO);
MODULE_PARM_DESC(debug_fw, " load driver if FW not ready. For FW debug");
+void wil_disable_irq(struct wil6210_priv *wil)
+{
+ int irq = wil->pdev->irq;
+
+ disable_irq(irq);
+ if (wil->n_msi == 3) {
+ disable_irq(irq + 1);
+ disable_irq(irq + 2);
+ }
+}
+
+void wil_enable_irq(struct wil6210_priv *wil)
+{
+ int irq = wil->pdev->irq;
+
+ enable_irq(irq);
+ if (wil->n_msi == 3) {
+ enable_irq(irq + 1);
+ enable_irq(irq + 2);
+ }
+}
+
/* Bus ops */
static int wil_if_pcie_enable(struct wil6210_priv *wil)
{
@@ -41,6 +64,8 @@
*/
int msi_only = pdev->msi_enabled;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
pdev->msi_enabled = 0;
pci_set_master(pdev);
@@ -107,6 +132,8 @@
{
struct pci_dev *pdev = wil->pdev;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
pci_clear_master(pdev);
/* disable and release IRQ */
wil6210_fini_irq(wil, pdev->irq);
@@ -180,6 +207,10 @@
wil->board = board;
wil6210_clear_irq(wil);
+
+ wil->platform_handle =
+ wil_platform_init(&pdev->dev, &wil->platform_ops);
+
/* FW should raise IRQ when ready */
rc = wil_if_pcie_enable(wil);
if (rc) {
@@ -204,6 +235,8 @@
bus_disable:
wil_if_pcie_disable(wil);
if_free:
+ if (wil->platform_ops.uninit)
+ wil->platform_ops.uninit(wil->platform_handle);
wil_if_free(wil);
err_iounmap:
pci_iounmap(pdev, csr);
@@ -220,9 +253,13 @@
struct wil6210_priv *wil = pci_get_drvdata(pdev);
void __iomem *csr = wil->csr;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
wil6210_debugfs_remove(wil);
- wil_if_pcie_disable(wil);
wil_if_remove(wil);
+ wil_if_pcie_disable(wil);
+ if (wil->platform_ops.uninit)
+ wil->platform_ops.uninit(wil->platform_handle);
wil_if_free(wil);
pci_iounmap(pdev, csr);
pci_release_region(pdev, 0);
diff --git a/drivers/net/wireless/ath/wil6210/rx_reorder.c b/drivers/net/wireless/ath/wil6210/rx_reorder.c
index 97c6a24..489cb73 100644
--- a/drivers/net/wireless/ath/wil6210/rx_reorder.c
+++ b/drivers/net/wireless/ath/wil6210/rx_reorder.c
@@ -98,22 +98,25 @@
int mid = wil_rxdesc_mid(d);
u16 seq = wil_rxdesc_seq(d);
struct wil_sta_info *sta = &wil->sta[cid];
- struct wil_tid_ampdu_rx *r = sta->tid_rx[tid];
+ struct wil_tid_ampdu_rx *r;
u16 hseq;
int index;
+ unsigned long flags;
wil_dbg_txrx(wil, "MID %d CID %d TID %d Seq 0x%03x\n",
mid, cid, tid, seq);
+ spin_lock_irqsave(&sta->tid_rx_lock, flags);
+
+ r = sta->tid_rx[tid];
if (!r) {
+ spin_unlock_irqrestore(&sta->tid_rx_lock, flags);
wil_netif_rx_any(skb, ndev);
return;
}
hseq = r->head_seq_num;
- spin_lock(&r->reorder_lock);
-
/** Due to the race between WMI events, where BACK establishment
* is reported, and data Rx, a few packets may be passed up before the
* reorder buffer gets allocated. Catch up by pretending SSN is what we
@@ -176,13 +179,14 @@
wil_reorder_release(wil, r);
out:
- spin_unlock(&r->reorder_lock);
+ spin_unlock_irqrestore(&sta->tid_rx_lock, flags);
}
struct wil_tid_ampdu_rx *wil_tid_ampdu_rx_alloc(struct wil6210_priv *wil,
int size, u16 ssn)
{
struct wil_tid_ampdu_rx *r = kzalloc(sizeof(*r), GFP_KERNEL);
+
if (!r)
return NULL;
@@ -197,7 +201,6 @@
return NULL;
}
- spin_lock_init(&r->reorder_lock);
r->ssn = ssn;
r->head_seq_num = ssn;
r->buf_size = size;
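The per-TID reorder_lock is gone; the tid_rx[] slot itself is now guarded by
the per-STA tid_rx_lock, taken with irqsave since the Rx path runs in softirq
context. Minimal sketch of the detach-and-free pattern the patch applies in
_wil6210_disconnect():

	unsigned long flags;
	struct wil_tid_ampdu_rx *r;

	spin_lock_irqsave(&sta->tid_rx_lock, flags);
	r = sta->tid_rx[tid];
	sta->tid_rx[tid] = NULL;
	wil_tid_ampdu_rx_free(wil, r); /* freed under the lock, as in the patch */
	spin_unlock_irqrestore(&sta->tid_rx_lock, flags);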
diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
index 9bd920d..2936ef0 100644
--- a/drivers/net/wireless/ath/wil6210/txrx.c
+++ b/drivers/net/wireless/ath/wil6210/txrx.c
@@ -52,6 +52,7 @@
{
return wil_vring_next_tail(vring) == vring->swhead;
}
+
/*
* Available space in Tx Vring
*/
@@ -86,6 +87,8 @@
size_t sz = vring->size * sizeof(vring->va[0]);
uint i;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
BUILD_BUG_ON(sizeof(vring->va[0]) != 32);
vring->swhead = 0;
@@ -110,7 +113,8 @@
* we can use any
*/
for (i = 0; i < vring->size; i++) {
- volatile struct vring_tx_desc *_d = &(vring->va[i].tx);
+ volatile struct vring_tx_desc *_d = &vring->va[i].tx;
+
_d->dma.status = TX_DMA_STATUS_DU;
}
@@ -125,6 +129,7 @@
{
dma_addr_t pa = wil_desc_addr(&d->dma.addr);
u16 dmalen = le16_to_cpu(d->dma.length);
+
switch (ctx->mapped_as) {
case wil_mapped_as_single:
dma_unmap_single(dev, pa, dmalen, DMA_TO_DEVICE);
@@ -143,6 +148,18 @@
struct device *dev = wil_to_dev(wil);
size_t sz = vring->size * sizeof(vring->va[0]);
+ if (tx) {
+ int vring_index = vring - wil->vring_tx;
+
+ wil_dbg_misc(wil, "free Tx vring %d [%d] 0x%p:%pad 0x%p\n",
+ vring_index, vring->size, vring->va,
+ &vring->pa, vring->ctx);
+ } else {
+ wil_dbg_misc(wil, "free Rx vring [%d] 0x%p:%pad 0x%p\n",
+ vring->size, vring->va,
+ &vring->pa, vring->ctx);
+ }
+
while (!wil_vring_is_empty(vring)) {
dma_addr_t pa;
u16 dmalen;
@@ -191,11 +208,12 @@
struct device *dev = wil_to_dev(wil);
unsigned int sz = RX_BUF_LEN;
struct vring_rx_desc dd, *d = ⅆ
- volatile struct vring_rx_desc *_d = &(vring->va[i].rx);
+ volatile struct vring_rx_desc *_d = &vring->va[i].rx;
dma_addr_t pa;
/* TODO align */
struct sk_buff *skb = dev_alloc_skb(sz + headroom);
+
if (unlikely(!skb))
return -ENOMEM;
@@ -274,9 +292,11 @@
*/
int len = min_t(int, 8 + sizeof(phy_data),
wil_rxdesc_phy_length(d));
+
if (len > 8) {
void *p = skb_tail_pointer(skb);
void *pa = PTR_ALIGN(p, 8);
+
if (skb_tailroom(skb) >= len + (pa - p)) {
phy_length = len - 8;
memcpy(phy_data, pa, phy_length);
@@ -372,13 +392,12 @@
int cid;
struct wil_net_stats *stats;
-
BUILD_BUG_ON(sizeof(struct vring_rx_desc) > sizeof(skb->cb));
if (wil_vring_is_empty(vring))
return NULL;
- _d = &(vring->va[vring->swhead].rx);
+ _d = &vring->va[vring->swhead].rx;
if (!(_d->dma.status & RX_DMA_STATUS_DU)) {
/* it is not error, we just reached end of Rx done area */
return NULL;
@@ -532,7 +551,7 @@
[GRO_NORMAL] = "GRO_NORMAL",
[GRO_DROP] = "GRO_DROP",
};
- wil_dbg_txrx(wil, "Rx complete %d bytes => %s,\n",
+ wil_dbg_txrx(wil, "Rx complete %d bytes => %s\n",
len, gro_res_str[rc]);
}
}
@@ -573,7 +592,6 @@
else
wil_netif_rx_any(skb, ndev);
}
-
}
wil_rx_refill(wil, v->size);
}
@@ -583,6 +601,8 @@
struct vring *vring = &wil->vring_rx;
int rc;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
if (vring->va) {
wil_err(wil, "Rx ring already allocated\n");
return -EINVAL;
@@ -612,6 +632,8 @@
{
struct vring *vring = &wil->vring_rx;
+ wil_dbg_misc(wil, "%s()\n", __func__);
+
if (vring->va)
wil_vring_free(wil, vring, 0);
}
@@ -646,6 +668,9 @@
struct vring *vring = &wil->vring_tx[id];
struct vring_tx_data *txdata = &wil->vring_tx_data[id];
+ wil_dbg_misc(wil, "%s() max_mpdu_size %d\n", __func__,
+ cmd.vring_cfg.tx_sw_ring.max_mpdu_size);
+
if (vring->va) {
wil_err(wil, "Tx ring [%d] already allocated\n", id);
rc = -EINVAL;
@@ -695,6 +720,8 @@
if (!vring->va)
return;
+ wil_dbg_misc(wil, "%s() id=%d\n", __func__, id);
+
/* make sure NAPI won't touch this vring */
wil->vring_tx_data[id].enabled = 0;
if (test_bit(wil_status_napi_en, &wil->status))
@@ -721,6 +748,7 @@
for (i = 0; i < ARRAY_SIZE(wil->vring2cid_tid); i++) {
if (wil->vring2cid_tid[i][0] == cid) {
struct vring *v = &wil->vring_tx[i];
+
wil_dbg_txrx(wil, "%s(%pM) -> [%d]\n",
__func__, eth->h_dest, i);
if (v->va) {
@@ -740,6 +768,7 @@
{
struct ethhdr *eth = (void *)skb->data;
int cid = wil->vring2cid_tid[vring_index][0];
+
memcpy(eth->h_dest, wil->sta[cid].addr, ETH_ALEN);
}
@@ -750,7 +779,7 @@
* duplicate skb and send it to other active vrings
*/
static struct vring *wil_tx_bcast(struct wil6210_priv *wil,
- struct sk_buff *skb)
+ struct sk_buff *skb)
{
struct vring *v, *v2;
struct sk_buff *skb2;
@@ -833,8 +862,8 @@
}
static int wil_tx_desc_offload_cksum_set(struct wil6210_priv *wil,
- struct vring_tx_desc *d,
- struct sk_buff *skb)
+ struct vring_tx_desc *d,
+ struct sk_buff *skb)
{
int protocol;
@@ -902,10 +931,9 @@
1 + nr_frags);
return -ENOMEM;
}
- _d = &(vring->va[i].tx);
+ _d = &vring->va[i].tx;
- pa = dma_map_single(dev, skb->data,
- skb_headlen(skb), DMA_TO_DEVICE);
+ pa = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
wil_dbg_txrx(wil, "Tx skb %d bytes 0x%p -> %pad\n", skb_headlen(skb),
skb->data, &pa);
@@ -934,10 +962,11 @@
const struct skb_frag_struct *frag =
&skb_shinfo(skb)->frags[f];
int len = skb_frag_size(frag);
+
i = (swhead + f + 1) % vring->size;
- _d = &(vring->va[i].tx);
+ _d = &vring->va[i].tx;
pa = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
- DMA_TO_DEVICE);
+ DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, pa)))
goto dma_error;
vring->ctx[i].mapped_as = wil_mapped_as_page;
@@ -982,7 +1011,7 @@
i = (swhead + f) % vring->size;
ctx = &vring->ctx[i];
- _d = &(vring->va[i].tx);
+ _d = &vring->va[i].tx;
*d = *_d;
_d->dma.status = TX_DMA_STATUS_DU;
wil_txdesc_unmap(dev, d, ctx);
@@ -996,7 +1025,6 @@
return -EINVAL;
}
-
netdev_tx_t wil_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
struct wil6210_priv *wil = ndev_to_wil(ndev);
@@ -1024,15 +1052,15 @@
pr_once_fw = false;
/* find vring */
- if (is_unicast_ether_addr(eth->h_dest)) {
+ if (is_unicast_ether_addr(eth->h_dest))
vring = wil_find_tx_vring(wil, skb);
- } else {
+ else
vring = wil_tx_bcast(wil, skb);
- }
if (!vring) {
wil_dbg_txrx(wil, "No Tx VRING found for %pM\n", eth->h_dest);
goto drop;
}
+
/* set up vring entry */
rc = wil_tx_vring(wil, vring, skb);
diff --git a/drivers/net/wireless/ath/wil6210/txrx.h b/drivers/net/wireless/ath/wil6210/txrx.h
index a1ac4f8..de04671 100644
--- a/drivers/net/wireless/ath/wil6210/txrx.h
+++ b/drivers/net/wireless/ath/wil6210/txrx.h
@@ -20,9 +20,9 @@
#define BUF_SW_OWNED (1)
#define BUF_HW_OWNED (0)
-/* size of max. Rx packet */
-#define RX_BUF_LEN (2048)
-#define TX_BUF_LEN (2048)
+/* size of max. Tx/Rx buffers, as supported by FW */
+#define RX_BUF_LEN (2242)
+#define TX_BUF_LEN (2242)
/* how many bytes to reserve for rtap header? */
#define WIL6210_RTAP_SIZE (128)
@@ -237,7 +237,6 @@
#define DMA_CFG_DESC_TX_0_L4_TYPE_LEN 2
#define DMA_CFG_DESC_TX_0_L4_TYPE_MSK 0xC0000000 /* L4 type: 0-UDP, 2-TCP */
-
#define DMA_CFG_DESC_TX_OFFLOAD_CFG_MAC_LEN_POS 0
#define DMA_CFG_DESC_TX_OFFLOAD_CFG_MAC_LEN_LEN 7
#define DMA_CFG_DESC_TX_OFFLOAD_CFG_MAC_LEN_MSK 0x7F /* MAC hdr len */
@@ -246,7 +245,6 @@
#define DMA_CFG_DESC_TX_OFFLOAD_CFG_L3T_IPV4_LEN 1
#define DMA_CFG_DESC_TX_OFFLOAD_CFG_L3T_IPV4_MSK 0x80 /* 1-IPv4, 0-IPv6 */
-
#define TX_DMA_STATUS_DU BIT(0)
struct vring_tx_dma {
@@ -347,7 +345,6 @@
#define RX_DMA_ERROR_L3_ERR BIT(4)
#define RX_DMA_ERROR_L4_ERR BIT(5)
-
/* Status field */
#define RX_DMA_STATUS_DU BIT(0)
#define RX_DMA_STATUS_ERROR BIT(2)
diff --git a/drivers/net/wireless/ath/wil6210/wil6210.h b/drivers/net/wireless/ath/wil6210/wil6210.h
index f8718fe..41aa793 100644
--- a/drivers/net/wireless/ath/wil6210/wil6210.h
+++ b/drivers/net/wireless/ath/wil6210/wil6210.h
@@ -21,8 +21,13 @@
#include <linux/wireless.h>
#include <net/cfg80211.h>
#include <linux/timex.h>
+#include "wil_platform.h"
+
#define WIL_NAME "wil6210"
+#define WIL_FW_NAME "wil6210.fw"
+
+#define WIL_MAX_BUS_REQUEST_KBPS 800000 /* ~6.1Gbps */
struct wil_board {
int board;
@@ -86,22 +91,29 @@
/* registers - FW addresses */
#define RGF_USER_USAGE_1 (0x880004)
+#define RGF_USER_USAGE_6 (0x880018)
#define RGF_USER_HW_MACHINE_STATE (0x8801dc)
#define HW_MACHINE_BOOT_DONE (0x3fffffd)
#define RGF_USER_USER_CPU_0 (0x8801e0)
+ #define BIT_USER_USER_CPU_MAN_RST BIT(1) /* user_cpu_man_rst */
#define RGF_USER_MAC_CPU_0 (0x8801fc)
+ #define BIT_USER_MAC_CPU_MAN_RST BIT(1) /* mac_cpu_man_rst */
#define RGF_USER_USER_SCRATCH_PAD (0x8802bc)
#define RGF_USER_FW_REV_ID (0x880a8c) /* chip revision */
#define RGF_USER_CLKS_CTL_0 (0x880abc)
+ #define BIT_USER_CLKS_CAR_AHB_SW_SEL BIT(1) /* ref clk/PLL */
#define BIT_USER_CLKS_RST_PWGD BIT(11) /* reset on "power good" */
#define RGF_USER_CLKS_CTL_SW_RST_VEC_0 (0x880b04)
#define RGF_USER_CLKS_CTL_SW_RST_VEC_1 (0x880b08)
#define RGF_USER_CLKS_CTL_SW_RST_VEC_2 (0x880b0c)
#define RGF_USER_CLKS_CTL_SW_RST_VEC_3 (0x880b10)
#define RGF_USER_CLKS_CTL_SW_RST_MASK_0 (0x880b14)
+ #define BIT_HPAL_PERST_FROM_PAD BIT(6)
+ #define BIT_CAR_PERST_RST BIT(7)
#define RGF_USER_USER_ICR (0x880b4c) /* struct RGF_ICR */
#define BIT_USER_USER_ICR_SW_INT_2 BIT(18)
#define RGF_USER_CLKS_CTL_EXT_SW_RST_VEC_0 (0x880c18)
+#define RGF_USER_CLKS_CTL_EXT_SW_RST_VEC_1 (0x880c2c)
#define RGF_DMA_EP_TX_ICR (0x881bb4) /* struct RGF_ICR */
#define BIT_DMA_EP_TX_ICR_TX_DONE BIT(0)
@@ -136,6 +148,8 @@
/* MAC timer, usec, for packet lifetime */
#define RGF_MAC_MTRL_COUNTER_0 (0x886aa8)
+#define RGF_CAF_ICR (0x88946c) /* struct RGF_ICR */
+
/* popular locations */
#define HOST_MBOX HOSTADDR(RGF_USER_USER_SCRATCH_PAD)
#define HOST_SW_INT (HOSTADDR(RGF_USER_USER_ICR) + \
@@ -154,6 +168,7 @@
u32 host; /* PCI/Host address - BAR0 + 0x880000 */
const char *name; /* for debugfs */
};
+
/* array size should be in sync with actual definition in the wmi.c */
extern const struct fw_map fw_mapping[7];
@@ -303,18 +318,12 @@
* @timeout: reset timer value (in TUs).
* @dialog_token: dialog token for aggregation session
* @rcu_head: RCU head used for freeing this struct
- * @reorder_lock: serializes access to reorder buffer, see below.
*
* This structure's lifetime is managed by RCU, assignments to
* the array holding it must hold the aggregation mutex.
*
- * The @reorder_lock is used to protect the members of this
- * struct, except for @timeout, @buf_size and @dialog_token,
- * which are constant across the lifetime of the struct (the
- * dialog token being used only for debugging).
*/
struct wil_tid_ampdu_rx {
- spinlock_t reorder_lock; /* see above */
struct sk_buff **reorder_buf;
unsigned long *reorder_time;
struct timer_list session_timer;
@@ -363,6 +372,7 @@
bool data_port_open; /* can send any data, not only EAPOL */
/* Rx BACK */
struct wil_tid_ampdu_rx *tid_rx[WIL_STA_TID_NUM];
+ spinlock_t tid_rx_lock; /* guarding tid_rx array */
unsigned long tid_rx_timer_expired[BITS_TO_LONGS(WIL_STA_TID_NUM)];
unsigned long tid_rx_stop_requested[BITS_TO_LONGS(WIL_STA_TID_NUM)];
};
@@ -389,6 +399,7 @@
struct mutex wmi_mutex;
struct wil6210_mbox_ctl mbox_ctl;
struct completion wmi_ready;
+ struct completion wmi_call;
u16 wmi_seq;
u16 reply_id; /**< wait for this WMI event */
void *reply_buf;
@@ -426,6 +437,9 @@
/* debugfs */
struct dentry *debug;
struct debugfs_blob_wrapper blobs[ARRAY_SIZE(fw_mapping)];
+
+ void *platform_handle;
+ struct wil_platform_ops platform_ops;
};
#define wil_to_wiphy(i) (i->wdev->wiphy)
@@ -435,10 +449,11 @@
#define wdev_to_wil(w) (struct wil6210_priv *)(wdev_priv(w))
#define wil_to_ndev(i) (wil_to_wdev(i)->netdev)
#define ndev_to_wil(n) (wdev_to_wil(n->ieee80211_ptr))
+#define wil_to_pcie_dev(i) (&i->pdev->dev)
-int wil_dbg_trace(struct wil6210_priv *wil, const char *fmt, ...);
-int wil_err(struct wil6210_priv *wil, const char *fmt, ...);
-int wil_info(struct wil6210_priv *wil, const char *fmt, ...);
+void wil_dbg_trace(struct wil6210_priv *wil, const char *fmt, ...);
+void wil_err(struct wil6210_priv *wil, const char *fmt, ...);
+void wil_info(struct wil6210_priv *wil, const char *fmt, ...);
#define wil_dbg(wil, fmt, arg...) do { \
netdev_dbg(wil_to_ndev(wil), fmt, ##arg); \
wil_dbg_trace(wil, fmt, ##arg); \
@@ -449,6 +464,7 @@
#define wil_dbg_wmi(wil, fmt, arg...) wil_dbg(wil, "DBG[ WMI]" fmt, ##arg)
#define wil_dbg_misc(wil, fmt, arg...) wil_dbg(wil, "DBG[MISC]" fmt, ##arg)
+#if defined(CONFIG_DYNAMIC_DEBUG)
#define wil_hex_dump_txrx(prefix_str, prefix_type, rowsize, \
groupsize, buf, len, ascii) \
print_hex_dump_debug("DBG[TXRX]" prefix_str,\
@@ -460,6 +476,19 @@
print_hex_dump_debug("DBG[ WMI]" prefix_str,\
prefix_type, rowsize, \
groupsize, buf, len, ascii)
+#else /* defined(CONFIG_DYNAMIC_DEBUG) */
+static inline
+void wil_hex_dump_txrx(const char *prefix_str, int prefix_type, int rowsize,
+ int groupsize, const void *buf, size_t len, bool ascii)
+{
+}
+
+static inline
+void wil_hex_dump_wmi(const char *prefix_str, int prefix_type, int rowsize,
+ int groupsize, const void *buf, size_t len, bool ascii)
+{
+}
+#endif /* defined(CONFIG_DYNAMIC_DEBUG) */
void wil_memcpy_fromio_32(void *dst, const volatile void __iomem *src,
size_t count);
@@ -477,7 +506,9 @@
void wil_link_on(struct wil6210_priv *wil);
void wil_link_off(struct wil6210_priv *wil);
int wil_up(struct wil6210_priv *wil);
+int __wil_up(struct wil6210_priv *wil);
int wil_down(struct wil6210_priv *wil);
+int __wil_down(struct wil6210_priv *wil);
void wil_mbox_ring_le2cpus(struct wil6210_mbox_ring *r);
int wil_find_cid(struct wil6210_priv *wil, const u8 *mac);
@@ -510,8 +541,10 @@
void wil6210_clear_irq(struct wil6210_priv *wil);
int wil6210_init_irq(struct wil6210_priv *wil, int irq);
void wil6210_fini_irq(struct wil6210_priv *wil, int irq);
-void wil6210_disable_irq(struct wil6210_priv *wil);
-void wil6210_enable_irq(struct wil6210_priv *wil);
+void wil_mask_irq(struct wil6210_priv *wil);
+void wil_unmask_irq(struct wil6210_priv *wil);
+void wil_disable_irq(struct wil6210_priv *wil);
+void wil_enable_irq(struct wil6210_priv *wil);
int wil_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
struct cfg80211_mgmt_tx_params *params,
u64 *cookie);
@@ -547,4 +580,5 @@
int wil_iftype_nl2wmi(enum nl80211_iftype type);
+int wil_request_firmware(struct wil6210_priv *wil, const char *name);
#endif /* __WIL6210_H__ */
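The header now compiles the hex-dump helpers to empty static inlines when
CONFIG_DYNAMIC_DEBUG is off, so callers need no #ifdefs and arguments remain
type-checked. Sketch of the same compile-out pattern with a hypothetical
dbg_dump() helper:

	#if defined(CONFIG_DYNAMIC_DEBUG)
	#define dbg_dump(buf, len) \
		print_hex_dump_debug("DBG", DUMP_PREFIX_OFFSET, 16, 1, \
				     buf, len, true)
	#else
	static inline void dbg_dump(const void *buf, size_t len) { }
	#endif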
diff --git a/drivers/net/wireless/ath/wil6210/wil_platform.c b/drivers/net/wireless/ath/wil6210/wil_platform.c
new file mode 100644
index 0000000..8f1d78f
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/wil_platform.c
@@ -0,0 +1,49 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include "linux/device.h"
+#include "wil_platform.h"
+
+#ifdef CONFIG_WIL6210_PLATFORM_MSM
+#include "wil_platform_msm.h"
+#endif
+
+/**
+ * wil_platform_init() - wil6210 platform module init
+ *
+ * The function must be called before all other functions in this module.
+ * It returns a handle which is used with the rest of the API
+ *
+ */
+void *wil_platform_init(struct device *dev, struct wil_platform_ops *ops)
+{
+ void *handle = NULL;
+
+ if (!ops) {
+ dev_err(dev, "Invalid parameter. Cannot init platform module\n");
+ return NULL;
+ }
+
+#ifdef CONFIG_WIL6210_PLATFORM_MSM
+ handle = wil_platform_msm_init(dev, ops);
+ if (handle)
+ return handle;
+#endif
+
+ /* other platform specific init functions should be called here */
+
+ return handle;
+}
diff --git a/drivers/net/wireless/ath/wil6210/wil_platform.h b/drivers/net/wireless/ath/wil6210/wil_platform.h
new file mode 100644
index 0000000..158c73b
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/wil_platform.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef __WIL_PLATFORM_H__
+#define __WIL_PLATFORM_H__
+
+struct device;
+
+/**
+ * struct wil_platform_ops - wil platform module callbacks
+ */
+struct wil_platform_ops {
+ int (*bus_request)(void *handle, uint32_t kbps /* KBytes/Sec */);
+ int (*suspend)(void *handle);
+ int (*resume)(void *handle);
+ void (*uninit)(void *handle);
+};
+
+void *wil_platform_init(struct device *dev, struct wil_platform_ops *ops);
+
+#endif /* __WIL_PLATFORM_H__ */
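A backend plugs into the driver by filling this ops table from its init
function. Hypothetical skeleton (the my_* names are illustrative, not driver
symbols); wil_platform_init() calls such an init and later passes the
returned handle to every callback:

	static int my_bus_request(void *handle, uint32_t kbps)
	{
		return 0; /* vote for bus bandwidth here */
	}

	static void my_uninit(void *handle)
	{
		/* release whatever init allocated */
	}

	void *my_platform_init(struct device *dev,
			       struct wil_platform_ops *ops)
	{
		ops->bus_request = my_bus_request;
		ops->uninit = my_uninit; /* suspend/resume may stay NULL */
		return dev; /* opaque handle handed back to the callbacks */
	}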
diff --git a/drivers/net/wireless/ath/wil6210/wil_platform_msm.c b/drivers/net/wireless/ath/wil6210/wil_platform_msm.c
new file mode 100644
index 0000000..b354a74
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/wil_platform_msm.c
@@ -0,0 +1,257 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/of.h>
+#include <linux/slab.h>
+#include <linux/msm-bus.h>
+
+#include "wil_platform.h"
+#include "wil_platform_msm.h"
+
+/**
+ * struct wil_platform_msm - wil6210 msm platform module info
+ *
+ * @dev: device object
+ * @msm_bus_handle: handle for using msm_bus API
+ * @pdata: bus scale info retrieved from DT
+ */
+struct wil_platform_msm {
+ struct device *dev;
+ uint32_t msm_bus_handle;
+ struct msm_bus_scale_pdata *pdata;
+};
+
+#define KBTOB(a) (a * 1000ULL)
+
+/**
+ * wil_platform_get_pdata() - Generate bus client data from device tree
+ * provided by clients.
+ *
+ * dev: device object
+ * of_node: Device tree node to extract information from
+ *
+ * The function returns a valid pointer to the allocated bus-scale-pdata
+ * if the vectors were correctly read from the client's device node.
+ * Any error in reading or parsing the device node will return NULL
+ * to the caller.
+ */
+static struct msm_bus_scale_pdata *wil_platform_get_pdata(
+ struct device *dev,
+ struct device_node *of_node)
+{
+ struct msm_bus_scale_pdata *pdata;
+ struct msm_bus_paths *usecase;
+ int i, j, ret, len;
+ unsigned int num_usecases, num_paths, mem_size;
+ const uint32_t *vec_arr;
+ struct msm_bus_vectors *vectors;
+
+ /* first read num_usecases and num_paths so we can calculate
+ * the amount of memory to allocate
+ */
+ ret = of_property_read_u32(of_node, "qcom,msm-bus,num-cases",
+ &num_usecases);
+ if (ret) {
+ dev_err(dev, "Error: num-usecases not found\n");
+ return NULL;
+ }
+
+ ret = of_property_read_u32(of_node, "qcom,msm-bus,num-paths",
+ &num_paths);
+ if (ret) {
+ dev_err(dev, "Error: num_paths not found\n");
+ return NULL;
+ }
+
+ /* pdata memory layout:
+ * msm_bus_scale_pdata
+ * msm_bus_paths[num_usecases]
+ * msm_bus_vectors[num_usecases][num_paths]
+ */
+ mem_size = sizeof(struct msm_bus_scale_pdata) +
+ sizeof(struct msm_bus_paths) * num_usecases +
+ sizeof(struct msm_bus_vectors) * num_usecases * num_paths;
+
+ pdata = kzalloc(mem_size, GFP_KERNEL);
+ if (!pdata)
+ return NULL;
+
+ ret = of_property_read_string(of_node, "qcom,msm-bus,name",
+ &pdata->name);
+ if (ret) {
+ dev_err(dev, "Error: Client name not found\n");
+ goto err;
+ }
+
+ if (of_property_read_bool(of_node, "qcom,msm-bus,active-only")) {
+ pdata->active_only = 1;
+ } else {
+ dev_info(dev, "active_only flag absent.\n");
+ dev_info(dev, "Using dual context by default\n");
+ }
+
+ pdata->num_usecases = num_usecases;
+ pdata->usecase = (struct msm_bus_paths *)(pdata + 1);
+
+ vec_arr = of_get_property(of_node, "qcom,msm-bus,vectors-KBps", &len);
+ if (vec_arr == NULL) {
+ dev_err(dev, "Error: Vector array not found\n");
+ goto err;
+ }
+
+ if (len != num_usecases * num_paths * sizeof(uint32_t) * 4) {
+ dev_err(dev, "Error: Length-error on getting vectors\n");
+ goto err;
+ }
+
+ vectors = (struct msm_bus_vectors *)(pdata->usecase + num_usecases);
+ for (i = 0; i < num_usecases; i++) {
+ usecase = &pdata->usecase[i];
+ usecase->num_paths = num_paths;
+ usecase->vectors = &vectors[i];
+
+ for (j = 0; j < num_paths; j++) {
+ int index = ((i * num_paths) + j) * 4;
+
+ usecase->vectors[j].src = be32_to_cpu(vec_arr[index]);
+ usecase->vectors[j].dst =
+ be32_to_cpu(vec_arr[index + 1]);
+ usecase->vectors[j].ab = (uint64_t)
+ KBTOB(be32_to_cpu(vec_arr[index + 2]));
+ usecase->vectors[j].ib = (uint64_t)
+ KBTOB(be32_to_cpu(vec_arr[index + 3]));
+ }
+ }
+
+ return pdata;
+
+err:
+ kfree(pdata);
+
+ return NULL;
+}
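
The allocator above packs the pdata header, the usecase array, and all vectors into one kzalloc() block, so a single kfree() in the error path (and in uninit) releases everything. A minimal userspace sketch of the same layout trick, using hypothetical stand-in types for the msm_bus structures:

#include <stdlib.h>

struct vec  { unsigned int src, dst; };
struct path { int num_vec; struct vec *vectors; };
struct hdr  { int num_paths; struct path *paths; };

static struct hdr *alloc_layout(int num_paths, int num_vec)
{
	/* one block: hdr, then paths[], then vectors[][] */
	size_t sz = sizeof(struct hdr) +
		    sizeof(struct path) * num_paths +
		    sizeof(struct vec) * num_paths * num_vec;
	struct hdr *h = calloc(1, sz);
	struct vec *v;
	int i;

	if (!h)
		return NULL;
	h->num_paths = num_paths;
	h->paths = (struct path *)(h + 1);	  /* paths[] follows hdr */
	v = (struct vec *)(h->paths + num_paths); /* vectors follow paths[] */
	for (i = 0; i < num_paths; i++) {
		h->paths[i].num_vec = num_vec;
		h->paths[i].vectors = &v[i * num_vec];
	}
	return h;	/* a single free(h) releases all of it */
}
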
+
+/* wil_platform API (callbacks) */
+
+static int wil_platform_bus_request(void *handle,
+ uint32_t kbps /* KBytes/Sec */)
+{
+ int rc, i;
+ struct wil_platform_msm *msm = (struct wil_platform_msm *)handle;
+ int vote = 0; /* vote 0 in case requested kbps cannot be satisfied */
+ struct msm_bus_paths *usecase;
+ uint32_t usecase_kbps;
+ uint32_t min_kbps = ~0;
+
+ /* find the lowest usecase that satisfies the requested kbps */
+ for (i = 0; i < msm->pdata->num_usecases; i++) {
+ usecase = &msm->pdata->usecase[i];
+ /* assume we have a single path (vectors[0]). If we ever
+ * have multiple paths, the behavior needs to be defined */
+ usecase_kbps = div64_u64(usecase->vectors[0].ib, 1000);
+ if (usecase_kbps >= kbps && usecase_kbps < min_kbps) {
+ min_kbps = usecase_kbps;
+ vote = i;
+ }
+ }
+
+ rc = msm_bus_scale_client_update_request(msm->msm_bus_handle, vote);
+ if (rc)
+ dev_err(msm->dev, "Failed msm_bus voting. kbps=%d vote=%d, rc=%d\n",
+ kbps, vote, rc);
+ else
+ /* TODO: remove */
+ dev_info(msm->dev, "msm_bus_scale_client_update_request succeeded. kbps=%d vote=%d\n",
+ kbps, vote);
+
+ return rc;
+}
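
The loop above votes for the smallest table entry that still satisfies the request, defaulting to index 0 (no bandwidth) when nothing fits. The same selection logic in a standalone form, against a hypothetical kbps table:

/* pick the index of the smallest entry >= the requested kbps;
 * index 0 is the fallback when the request cannot be satisfied */
static int pick_vote(const unsigned int *tbl_kbps, int n, unsigned int kbps)
{
	unsigned int min_kbps = ~0u;
	int vote = 0, i;

	for (i = 0; i < n; i++) {
		if (tbl_kbps[i] >= kbps && tbl_kbps[i] < min_kbps) {
			min_kbps = tbl_kbps[i];
			vote = i;
		}
	}
	return vote;
}

Given the table {0, 100000, 800000}, a request of 200000 returns 2, while a request of 900000 falls back to 0.
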
+
+static void wil_platform_uninit(void *handle)
+{
+ struct wil_platform_msm *msm = (struct wil_platform_msm *)handle;
+
+ dev_info(msm->dev, "wil_platform_uninit\n");
+
+ if (msm->msm_bus_handle)
+ msm_bus_scale_unregister_client(msm->msm_bus_handle);
+
+ kfree(msm->pdata);
+ kfree(msm);
+}
+
+static int wil_platform_msm_bus_register(struct wil_platform_msm *msm,
+ struct device_node *node)
+{
+ msm->pdata = wil_platform_get_pdata(msm->dev, node);
+ if (!msm->pdata) {
+ dev_err(msm->dev, "Failed getting DT info\n");
+ return -EINVAL;
+ }
+
+ msm->msm_bus_handle = msm_bus_scale_register_client(msm->pdata);
+ if (!msm->msm_bus_handle) {
+ dev_err(msm->dev, "Failed msm_bus registration\n");
+ return -EINVAL;
+ }
+
+ dev_info(msm->dev, "msm_bus registration succeeded! handle 0x%x\n",
+ msm->msm_bus_handle);
+
+ return 0;
+}
+
+/**
+ * wil_platform_msm_init() - wil6210 msm platform module init
+ *
+ * The function must be called before all other functions in this module.
+ * It returns a handle which is used with the rest of the API.
+ */
+void *wil_platform_msm_init(struct device *dev, struct wil_platform_ops *ops)
+{
+ struct device_node *of_node;
+ struct wil_platform_msm *msm;
+ int rc;
+
+ of_node = of_find_compatible_node(NULL, NULL, "qcom,wil6210");
+ if (!of_node) {
+ /* this could mean non-msm platform */
+ dev_err(dev, "DT node not found\n");
+ return NULL;
+ }
+
+ msm = kzalloc(sizeof(*msm), GFP_KERNEL);
+ if (!msm)
+ return NULL;
+
+ msm->dev = dev;
+
+ /* register with msm_bus module for scaling requests */
+ rc = wil_platform_msm_bus_register(msm, of_node);
+ if (rc)
+ goto cleanup;
+
+ memset(ops, 0, sizeof(*ops));
+ ops->bus_request = wil_platform_bus_request;
+ ops->uninit = wil_platform_uninit;
+
+ return (void *)msm;
+
+cleanup:
+ kfree(msm);
+ return NULL;
+}
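
For reference, a hypothetical caller of this module (not part of the patch, assuming wil_platform_msm.h is included): the returned handle is opaque and must be passed back to every callback, and callbacks the platform does not implement stay NULL and must be checked.

static int example_bus_vote(struct device *dev)
{
	struct wil_platform_ops ops;
	void *handle = wil_platform_msm_init(dev, &ops);

	if (!handle)
		return -ENODEV;	/* e.g. not an msm platform */

	if (ops.bus_request)
		ops.bus_request(handle, 100000 /* KBytes/sec */);

	if (ops.uninit)
		ops.uninit(handle);
	return 0;
}
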
diff --git a/drivers/net/wireless/ath/wil6210/wil_platform_msm.h b/drivers/net/wireless/ath/wil6210/wil_platform_msm.h
new file mode 100644
index 0000000..2f2229e
--- /dev/null
+++ b/drivers/net/wireless/ath/wil6210/wil_platform_msm.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) 2014 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef __WIL_PLATFORM_MSM_H__
+#define __WIL_PLATFORM_MSM_H__
+
+#include "wil_platform.h"
+
+void *wil_platform_msm_init(struct device *dev, struct wil_platform_ops *ops);
+
+#endif /* __WIL_PLATFORM_MSM_H__ */
diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
index b1aaaee..bd781c7 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.c
+++ b/drivers/net/wireless/ath/wil6210/wmi.c
@@ -157,6 +157,7 @@
struct wil6210_mbox_hdr *hdr)
{
void __iomem *src = wmi_buffer(wil, ptr);
+
if (!src)
return -EINVAL;
@@ -278,6 +279,7 @@
struct net_device *ndev = wil_to_ndev(wil);
struct wireless_dev *wdev = wil->wdev;
struct wmi_ready_event *evt = d;
+
wil->fw_version = le32_to_cpu(evt->sw_version);
wil->n_mids = evt->numof_additional_mids;
@@ -298,7 +300,7 @@
wil_dbg_wmi(wil, "WMI: got FW ready event\n");
set_bit(wil_status_fwready, &wil->status);
- /* reuse wmi_ready for the firmware ready indication */
+ /* let the reset sequence continue */
complete(&wil->wmi_ready);
}
@@ -595,27 +597,40 @@
return;
}
+ mutex_lock(&wil->mutex);
+
cid = wil->vring2cid_tid[evt->ringid][0];
if (cid >= WIL6210_MAX_CID) {
wil_err(wil, "invalid CID %d for vring %d\n", cid, evt->ringid);
- return;
+ goto out;
}
sta = &wil->sta[cid];
if (sta->status == wil_sta_unused) {
wil_err(wil, "CID %d unused\n", cid);
- return;
+ goto out;
}
wil_dbg_wmi(wil, "BACK for CID %d %pM\n", cid, sta->addr);
for (i = 0; i < WIL_STA_TID_NUM; i++) {
- struct wil_tid_ampdu_rx *r = sta->tid_rx[i];
+ struct wil_tid_ampdu_rx *r;
+ unsigned long flags;
+
+ spin_lock_irqsave(&sta->tid_rx_lock, flags);
+
+ r = sta->tid_rx[i];
sta->tid_rx[i] = NULL;
wil_tid_ampdu_rx_free(wil, r);
+
+ spin_unlock_irqrestore(&sta->tid_rx_lock, flags);
+
if ((evt->status == WMI_BA_AGREED) && evt->agg_wsize)
sta->tid_rx[i] = wil_tid_ampdu_rx_alloc(wil,
evt->agg_wsize, 0);
}
+
+out:
+ mutex_unlock(&wil->mutex);
}
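
The hunk above takes wil->mutex to serialize against reset and a per-STA spinlock so the RX path never observes a reorder buffer mid-free. The generic shape of that pattern, sketched with hypothetical names (struct item and item_free() are stand-ins):

struct item;				/* hypothetical payload */
void item_free(struct item *it);	/* hypothetical destructor */

struct ctx {
	spinlock_t lock;
	struct item *slot;	/* read from softirq context */
};

static void replace_slot(struct ctx *c, struct item *new_item)
{
	struct item *old;
	unsigned long flags;

	spin_lock_irqsave(&c->lock, flags);
	old = c->slot;
	c->slot = NULL;		/* readers now see "no item" */
	item_free(old);		/* freed under the lock, as in the patch */
	spin_unlock_irqrestore(&c->lock, flags);

	if (new_item)
		c->slot = new_item;	/* publish the replacement */
}
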
static const struct {
@@ -653,7 +668,7 @@
unsigned n;
if (!test_bit(wil_status_reset_done, &wil->status)) {
- wil_err(wil, "Reset not completed\n");
+ wil_err(wil, "Reset in progress. Cannot handle WMI event\n");
return;
}
@@ -708,6 +723,7 @@
struct wil6210_mbox_hdr_wmi *wmi = &evt->event.wmi;
u16 id = le16_to_cpu(wmi->id);
u32 tstamp = le32_to_cpu(wmi->timestamp);
+
wil_dbg_wmi(wil, "WMI event 0x%04x MID %d @%d msec\n",
id, wmi->mid, tstamp);
trace_wil6210_wmi_event(wmi, &wmi[1],
@@ -748,8 +764,8 @@
wil->reply_id = reply_id;
wil->reply_buf = reply;
wil->reply_size = reply_size;
- remain = wait_for_completion_timeout(&wil->wmi_ready,
- msecs_to_jiffies(to_msec));
+ remain = wait_for_completion_timeout(&wil->wmi_call,
+ msecs_to_jiffies(to_msec));
if (0 == remain) {
wil_err(wil, "wmi_call(0x%04x->0x%04x) timeout %d msec\n",
cmdid, reply_id, to_msec);
@@ -953,8 +969,11 @@
int rc;
u16 len = sizeof(struct wmi_set_appie_cmd) + ie_len;
struct wmi_set_appie_cmd *cmd = kzalloc(len, GFP_KERNEL);
+
if (!cmd)
return -ENOMEM;
+ if (!ie)
+ ie_len = 0;
cmd->mgmt_frm_type = type;
/* BUG: FW API defines ieLen as u8. Will fix in FW */
@@ -1128,6 +1147,9 @@
struct wil6210_mbox_hdr_wmi *wmi = (void *)(&hdr[1]);
void *evt_data = (void *)(&wmi[1]);
u16 id = le16_to_cpu(wmi->id);
+
+ wil_dbg_wmi(wil, "Handle WMI 0x%04x (reply_id 0x%04x)\n",
+ id, wil->reply_id);
/* check if someone waits for this event */
if (wil->reply_id && wil->reply_id == id) {
if (wil->reply_buf) {
@@ -1138,7 +1160,7 @@
len - sizeof(*wmi));
}
wil_dbg_wmi(wil, "Complete WMI 0x%04x\n", id);
- complete(&wil->wmi_ready);
+ complete(&wil->wmi_call);
return;
}
/* unsolicited event */
@@ -1184,9 +1206,11 @@
struct pending_wmi_event *evt;
struct list_head *lh;
+ wil_dbg_wmi(wil, "Start %s\n", __func__);
while ((lh = next_wmi_ev(wil)) != NULL) {
evt = list_entry(lh, struct pending_wmi_event, list);
wmi_event_handle(wil, &evt->event.hdr);
kfree(evt);
}
+ wil_dbg_wmi(wil, "Finished %s\n", __func__);
}
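
This file's changes split the single wmi_ready completion in two: wmi_ready still signals the firmware-ready event, while the new wmi_call signals command replies, so the two waiters can no longer complete each other by accident. A minimal sketch of the per-command half, with hypothetical field names:

struct dev_state {
	struct completion fw_ready;	/* fired once by the ready event */
	struct completion cmd_done;	/* fired for each command reply */
};

static int send_cmd_and_wait(struct dev_state *ds, unsigned long to)
{
	reinit_completion(&ds->cmd_done);
	/* ... post the command to the mailbox here ... */
	if (!wait_for_completion_timeout(&ds->cmd_done, to))
		return -ETIMEDOUT;
	return 0;
}
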
diff --git a/drivers/net/wireless/ath/wil6210/wmi.h b/drivers/net/wireless/ath/wil6210/wmi.h
index 061618c..27b9743 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.h
+++ b/drivers/net/wireless/ath/wil6210/wmi.h
@@ -179,7 +179,6 @@
WMI_CRYPT_AES_GCMP = 0x20,
};
-
enum wmi_connect_ctrl_flag_bits {
WMI_CONNECT_ASSOC_POLICY_USER = 0x0001,
WMI_CONNECT_SEND_REASSOC = 0x0002,
@@ -219,7 +218,6 @@
__le16 disconnect_reason;
} __packed;
-
/*
* WMI_SET_PMK_CMDID
*/
@@ -234,7 +232,6 @@
u8 pmk[WMI_PMK_LEN];
} __packed;
-
/*
* WMI_SET_PASSPHRASE_CMDID
*/
@@ -273,7 +270,6 @@
u8 mac[WMI_MAC_LEN];
} __packed;
-
/*
* WMI_START_SCAN_CMDID
*
@@ -325,7 +321,6 @@
u8 ssid[WMI_MAX_SSID_LEN];
} __packed;
-
/*
* WMI_SET_APPIE_CMDID
* Add Application specified IE to a management frame
@@ -351,7 +346,6 @@
u8 ie_info[0];
} __packed;
-
/*
* WMI_PXMT_RANGE_CFG_CMDID
*/
@@ -380,7 +374,6 @@
__le32 rf_mgmt_type;
} __packed;
-
/*
* WMI_RF_RX_TEST_CMDID
*/
@@ -426,7 +419,6 @@
u8 disable_sec;
} __packed;
-
/******* P2P ***********/
/*
@@ -797,7 +789,6 @@
__le32 measure_marlon_r_en;
} __packed;
-
/*
* WMI Events
*/
@@ -887,7 +878,6 @@
* Events data structures
*/
-
enum wmi_fw_status {
WMI_FW_STATUS_SUCCESS,
WMI_FW_STATUS_FAILURE,
@@ -1038,8 +1028,8 @@
__le16 protocol_reason_status; /* reason code, see 802.11 spec. */
u8 bssid[WMI_MAC_LEN]; /* set if known */
u8 disconnect_reason; /* see wmi_disconnect_reason */
- u8 assoc_resp_len; /* not in use */
- u8 assoc_info[0]; /* not in use */
+ u8 assoc_resp_len; /* not used */
+ u8 assoc_info[0]; /* not used */
} __packed;
/*
@@ -1081,7 +1071,6 @@
__le16 reason;
} __packed;
-
/*
* WMI_VRING_CFG_DONE_EVENTID
*/
@@ -1147,7 +1136,6 @@
u8 reserved[3];
} __packed;
-
/*
* WMI_GET_PCP_CHANNEL_EVENTID
*/
@@ -1156,7 +1144,6 @@
u8 reserved[3];
} __packed;
-
/*
* WMI_PORT_ALLOCATED_EVENTID
*/
@@ -1260,7 +1247,6 @@
u8 channel; /* From Radio MNGR */
} __packed;
-
/*
* WMI_TX_MGMT_PACKET_EVENTID
*/
diff --git a/drivers/net/wireless/b43/b43.h b/drivers/net/wireless/b43/b43.h
index 95a9433..bb12586 100644
--- a/drivers/net/wireless/b43/b43.h
+++ b/drivers/net/wireless/b43/b43.h
@@ -45,6 +45,7 @@
#define B43_MMIO_RAM_DATA 0x134
#define B43_MMIO_PS_STATUS 0x140
#define B43_MMIO_RADIO_HWENABLED_HI 0x158
+#define B43_MMIO_MAC_HW_CAP 0x15C /* MAC capabilities (corerev >= 13) */
#define B43_MMIO_SHM_CONTROL 0x160
#define B43_MMIO_SHM_DATA 0x164
#define B43_MMIO_SHM_DATA_UNALIGNED 0x166
@@ -253,6 +254,8 @@
#define B43_SHM_SH_CHAN 0x00A0 /* Current channel (low 8bit only) */
#define B43_SHM_SH_CHAN_5GHZ 0x0100 /* Bit set if 5 GHz channel */
#define B43_SHM_SH_CHAN_40MHZ 0x0200 /* Bit set if 40 MHz channel width */
+#define B43_SHM_SH_MACHW_L 0x00C0 /* Location where the ucode expects the MAC capabilities, low 16 bits */
+#define B43_SHM_SH_MACHW_H 0x00C2 /* Location where the ucode expects the MAC capabilities, high 16 bits */
#define B43_SHM_SH_HOSTF5 0x00D4 /* Hostflags 5 for ucode options */
#define B43_SHM_SH_BCMCFIFOID 0x0108 /* Last posted cookie to the bcast/mcast FIFO */
/* TSSI information */
@@ -297,6 +300,7 @@
#define B43_SHM_SH_LFFBLIM 0x0046 /* Long frame fallback retry limit */
#define B43_SHM_SH_BEACPHYCTL 0x0054 /* Beacon PHY TX control word (see PHY TX control) */
#define B43_SHM_SH_EXTNPHYCTL 0x00B0 /* Extended bytes for beacon PHY control (N) */
+#define B43_SHM_SH_BCN_LI 0x00B6 /* beacon listen interval */
/* SHM_SHARED ACK/CTS control */
#define B43_SHM_SH_ACKCTSPHYCTL 0x0022 /* ACK/CTS PHY control word (see PHY TX control) */
/* SHM_SHARED probe response variables */
@@ -476,6 +480,11 @@
#define B43_MACCMD_CCA 0x00000008 /* Clear channel assessment */
#define B43_MACCMD_BGNOISE 0x00000010 /* Background noise */
+/* B43_MMIO_PSM_PHY_HDR bits */
+#define B43_PSM_HDR_MAC_PHY_RESET 0x00000001
+#define B43_PSM_HDR_MAC_PHY_CLOCK_EN 0x00000002
+#define B43_PSM_HDR_MAC_PHY_FORCE_CLK 0x00000004
+
/* See BCMA_CLKCTLST_EXTRESREQ and BCMA_CLKCTLST_EXTRESST */
#define B43_BCMA_CLKCTLST_80211_PLL_REQ 0x00000100
#define B43_BCMA_CLKCTLST_PHY_PLL_REQ 0x00000200
diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c
index 66ff718..5d4173e 100644
--- a/drivers/net/wireless/b43/main.c
+++ b/drivers/net/wireless/b43/main.c
@@ -1204,6 +1204,36 @@
}
}
+/* http://bcm-v4.sipsolutions.net/802.11/PHY/BmacCorePllReset */
+void b43_wireless_core_phy_pll_reset(struct b43_wldev *dev)
+{
+ struct bcma_drv_cc *bcma_cc __maybe_unused;
+ struct ssb_chipcommon *ssb_cc __maybe_unused;
+
+ switch (dev->dev->bus_type) {
+#ifdef CONFIG_B43_BCMA
+ case B43_BUS_BCMA:
+ bcma_cc = &dev->dev->bdev->bus->drv_cc;
+
+ bcma_cc_write32(bcma_cc, BCMA_CC_CHIPCTL_ADDR, 0);
+ bcma_cc_mask32(bcma_cc, BCMA_CC_CHIPCTL_DATA, ~0x4);
+ bcma_cc_set32(bcma_cc, BCMA_CC_CHIPCTL_DATA, 0x4);
+ bcma_cc_mask32(bcma_cc, BCMA_CC_CHIPCTL_DATA, ~0x4);
+ break;
+#endif
+#ifdef CONFIG_B43_SSB
+ case B43_BUS_SSB:
+ ssb_cc = &dev->dev->sdev->bus->chipco;
+
+ chipco_write32(ssb_cc, SSB_CHIPCO_CHIPCTL_ADDR, 0);
+ chipco_mask32(ssb_cc, SSB_CHIPCO_CHIPCTL_DATA, ~0x4);
+ chipco_set32(ssb_cc, SSB_CHIPCO_CHIPCTL_DATA, 0x4);
+ chipco_mask32(ssb_cc, SSB_CHIPCO_CHIPCTL_DATA, ~0x4);
+ break;
+#endif
+ }
+}
+
#ifdef CONFIG_B43_BCMA
static void b43_bcma_phy_reset(struct b43_wldev *dev)
{
@@ -2985,7 +3015,22 @@
{
u16 chip_id = dev->dev->chip_id;
- if (chip_id == BCMA_CHIP_ID_BCM43131 ||
+ if (chip_id == BCMA_CHIP_ID_BCM4331) {
+ switch (spurmode) {
+ case 2: /* 168 MHz: 2^26/168 = 0x61862 */
+ b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_LOW, 0x1862);
+ b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_HIGH, 0x6);
+ break;
+ case 1: /* 164 MHz: 2^26/164 = 0x63e70 */
+ b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_LOW, 0x3e70);
+ b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_HIGH, 0x6);
+ break;
+ default: /* 160 MHz: 2^26/160 = 0x66666 */
+ b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_LOW, 0x6666);
+ b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_HIGH, 0x6);
+ break;
+ }
+ } else if (chip_id == BCMA_CHIP_ID_BCM43131 ||
chip_id == BCMA_CHIP_ID_BCM43217 ||
chip_id == BCMA_CHIP_ID_BCM43222 ||
chip_id == BCMA_CHIP_ID_BCM43224 ||
@@ -3106,6 +3151,7 @@
case B43_PHYTYPE_HT:
case B43_PHYTYPE_LCN:
b43_rate_memory_write(dev, B43_OFDM_RATE_6MB, 1);
+ b43_rate_memory_write(dev, B43_OFDM_RATE_9MB, 1);
b43_rate_memory_write(dev, B43_OFDM_RATE_12MB, 1);
b43_rate_memory_write(dev, B43_OFDM_RATE_18MB, 1);
b43_rate_memory_write(dev, B43_OFDM_RATE_24MB, 1);
@@ -3884,6 +3930,12 @@
return 0;
}
+static void b43_set_beacon_listen_interval(struct b43_wldev *dev, u16 interval)
+{
+ interval = min_t(u16, interval, (u16)0xFF);
+ b43_shm_write16(dev, B43_SHM_SHARED, B43_SHM_SH_BCN_LI, interval);
+}
+
/* Write the short and long frame retry limit values. */
static void b43_set_retry_limits(struct b43_wldev *dev,
unsigned int short_retry,
@@ -3912,6 +3964,9 @@
mutex_lock(&wl->mutex);
b43_mac_suspend(dev);
+ if (changed & IEEE80211_CONF_CHANGE_LISTEN_INTERVAL)
+ b43_set_beacon_listen_interval(dev, conf->listen_interval);
+
if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
phy->chandef = &conf->chandef;
phy->channel = conf->chandef.chan->hw_value;
@@ -4812,6 +4867,16 @@
hf &= ~B43_HF_SKCFPUP;
b43_hf_write(dev, hf);
+ /* tell the ucode the MAC capabilities */
+ if (dev->dev->core_rev >= 13) {
+ u32 mac_hw_cap = b43_read32(dev, B43_MMIO_MAC_HW_CAP);
+
+ b43_shm_write16(dev, B43_SHM_SHARED, B43_SHM_SH_MACHW_L,
+ mac_hw_cap & 0xffff);
+ b43_shm_write16(dev, B43_SHM_SHARED, B43_SHM_SH_MACHW_H,
+ (mac_hw_cap >> 16) & 0xffff);
+ }
+
b43_set_retry_limits(dev, B43_DEFAULT_SHORT_RETRY_LIMIT,
B43_DEFAULT_LONG_RETRY_LIMIT);
b43_shm_write16(dev, B43_SHM_SHARED, B43_SHM_SH_SFFBLIM, 3);
@@ -4834,6 +4899,10 @@
/* Maximum Contention Window */
b43_shm_write16(dev, B43_SHM_SCRATCH, B43_SHM_SC_MAXCONT, 0x3FF);
+ /* write phytype and phyvers */
+ b43_shm_write16(dev, B43_SHM_SHARED, B43_SHM_SH_PHYTYPE, phy->type);
+ b43_shm_write16(dev, B43_SHM_SHARED, B43_SHM_SH_PHYVER, phy->rev);
+
if (b43_bus_host_is_pcmcia(dev->dev) ||
b43_bus_host_is_sdio(dev->dev)) {
dev->__using_pio_transfers = true;
diff --git a/drivers/net/wireless/b43/main.h b/drivers/net/wireless/b43/main.h
index 9f22e4b..c46430c 100644
--- a/drivers/net/wireless/b43/main.h
+++ b/drivers/net/wireless/b43/main.h
@@ -96,6 +96,8 @@
#define B43_PS_ASLEEP (1 << 3) /* Force device asleep */
void b43_power_saving_ctl_bits(struct b43_wldev *dev, unsigned int ps_flags);
+void b43_wireless_core_phy_pll_reset(struct b43_wldev *dev);
+
void b43_mac_suspend(struct b43_wldev *dev);
void b43_mac_enable(struct b43_wldev *dev);
void b43_mac_phy_clock_set(struct b43_wldev *dev, bool on);
diff --git a/drivers/net/wireless/b43/phy_ht.c b/drivers/net/wireless/b43/phy_ht.c
index c4dc8b0..bd68945 100644
--- a/drivers/net/wireless/b43/phy_ht.c
+++ b/drivers/net/wireless/b43/phy_ht.c
@@ -81,80 +81,104 @@
udelay(50);
/* Calibration */
- b43_radio_mask(dev, 0x2b, ~0x1);
- b43_radio_mask(dev, 0x2e, ~0x4);
- b43_radio_set(dev, 0x2e, 0x4);
- b43_radio_set(dev, 0x2b, 0x1);
+ b43_radio_mask(dev, R2059_RFPLL_MISC_EN, ~0x1);
+ b43_radio_mask(dev, R2059_RFPLL_MISC_CAL_RESETN, ~0x4);
+ b43_radio_set(dev, R2059_RFPLL_MISC_CAL_RESETN, 0x4);
+ b43_radio_set(dev, R2059_RFPLL_MISC_EN, 0x1);
udelay(300);
}
+/* Calibrate resistors in LPF of PLL? */
+static void b43_radio_2059_rcal(struct b43_wldev *dev)
+{
+ /* Enable */
+ b43_radio_set(dev, R2059_C3 | R2059_RCAL_CONFIG, 0x1);
+ usleep_range(10, 20);
+
+ b43_radio_set(dev, R2059_C3 | 0x0BF, 0x1);
+ b43_radio_maskset(dev, R2059_C3 | 0x19B, 0x3, 0x2);
+
+ /* Start */
+ b43_radio_set(dev, R2059_C3 | R2059_RCAL_CONFIG, 0x2);
+ usleep_range(100, 200);
+
+ /* Stop */
+ b43_radio_mask(dev, R2059_C3 | R2059_RCAL_CONFIG, ~0x2);
+
+ if (!b43_radio_wait_value(dev, R2059_C3 | R2059_RCAL_STATUS, 1, 1, 100,
+ 1000000))
+ b43err(dev->wl, "Radio 0x2059 rcal timeout\n");
+
+ /* Disable */
+ b43_radio_mask(dev, R2059_C3 | R2059_RCAL_CONFIG, ~0x1);
+
+ b43_radio_set(dev, 0xa, 0x60);
+}
+
+/* Calibrate the internal RC oscillator? */
+static void b43_radio_2057_rccal(struct b43_wldev *dev)
+{
+ const u16 radio_values[3][2] = {
+ { 0x61, 0xE9 }, { 0x69, 0xD5 }, { 0x73, 0x99 },
+ };
+ int i;
+
+ for (i = 0; i < 3; i++) {
+ b43_radio_write(dev, R2059_RCCAL_MASTER, radio_values[i][0]);
+ b43_radio_write(dev, R2059_RCCAL_X1, 0x6E);
+ b43_radio_write(dev, R2059_RCCAL_TRC0, radio_values[i][1]);
+
+ /* Start */
+ b43_radio_write(dev, R2059_RCCAL_START_R1_Q1_P1, 0x55);
+
+ /* Wait */
+ if (!b43_radio_wait_value(dev, R2059_RCCAL_DONE_OSCCAP, 2, 2,
+ 500, 5000000))
+ b43err(dev->wl, "Radio 0x2059 rccal timeout\n");
+
+ /* Stop */
+ b43_radio_write(dev, R2059_RCCAL_START_R1_Q1_P1, 0x15);
+ }
+
+ b43_radio_mask(dev, R2059_RCCAL_MASTER, ~0x1);
+}
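
Both calibration routines above replace the old open-coded 10000-iteration polling loops with b43_radio_wait_value(). Its general shape, sketched here with a hypothetical reg_read callback standing in for the radio register accessor:

/* poll until (read(reg) & mask) == value, sleeping delay_us between
 * reads, and give up after roughly timeout_us */
static bool wait_reg_value(u16 (*reg_read)(u16 reg), u16 reg, u16 mask,
			   u16 value, int delay_us, int timeout_us)
{
	int elapsed;

	for (elapsed = 0; elapsed <= timeout_us; elapsed += delay_us) {
		if ((reg_read(reg) & mask) == value)
			return true;
		udelay(delay_us);
	}
	return false;
}
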
+
+static void b43_radio_2059_init_pre(struct b43_wldev *dev)
+{
+ b43_phy_mask(dev, B43_PHY_HT_RF_CTL_CMD, ~B43_PHY_HT_RF_CTL_CMD_CHIP0_PU);
+ b43_phy_set(dev, B43_PHY_HT_RF_CTL_CMD, B43_PHY_HT_RF_CTL_CMD_FORCE);
+ b43_phy_mask(dev, B43_PHY_HT_RF_CTL_CMD, ~B43_PHY_HT_RF_CTL_CMD_FORCE);
+ b43_phy_set(dev, B43_PHY_HT_RF_CTL_CMD, B43_PHY_HT_RF_CTL_CMD_CHIP0_PU);
+}
+
static void b43_radio_2059_init(struct b43_wldev *dev)
{
const u16 routing[] = { R2059_C1, R2059_C2, R2059_C3 };
- const u16 radio_values[3][2] = {
- { 0x61, 0xE9 }, { 0x69, 0xD5 }, { 0x73, 0x99 },
- };
- u16 i, j;
+ int i;
- b43_radio_write(dev, R2059_ALL | 0x51, 0x0070);
- b43_radio_write(dev, R2059_ALL | 0x5a, 0x0003);
+ /* Prepare (reset?) radio */
+ b43_radio_2059_init_pre(dev);
+
+ r2059_upload_inittabs(dev);
for (i = 0; i < ARRAY_SIZE(routing); i++)
b43_radio_set(dev, routing[i] | 0x146, 0x3);
- b43_radio_set(dev, 0x2e, 0x0078);
- b43_radio_set(dev, 0xc0, 0x0080);
+ /* Post init starts below */
+
+ b43_radio_set(dev, R2059_RFPLL_MISC_CAL_RESETN, 0x0078);
+ b43_radio_set(dev, R2059_XTAL_CONFIG2, 0x0080);
msleep(2);
- b43_radio_mask(dev, 0x2e, ~0x0078);
- b43_radio_mask(dev, 0xc0, ~0x0080);
+ b43_radio_mask(dev, R2059_RFPLL_MISC_CAL_RESETN, ~0x0078);
+ b43_radio_mask(dev, R2059_XTAL_CONFIG2, ~0x0080);
if (1) { /* FIXME */
- b43_radio_set(dev, R2059_C3 | 0x4, 0x1);
- udelay(10);
- b43_radio_set(dev, R2059_C3 | 0x0BF, 0x1);
- b43_radio_maskset(dev, R2059_C3 | 0x19B, 0x3, 0x2);
-
- b43_radio_set(dev, R2059_C3 | 0x4, 0x2);
- udelay(100);
- b43_radio_mask(dev, R2059_C3 | 0x4, ~0x2);
-
- for (i = 0; i < 10000; i++) {
- if (b43_radio_read(dev, R2059_C3 | 0x145) & 1) {
- i = 0;
- break;
- }
- udelay(100);
- }
- if (i)
- b43err(dev->wl, "radio 0x945 timeout\n");
-
- b43_radio_mask(dev, R2059_C3 | 0x4, ~0x1);
- b43_radio_set(dev, 0xa, 0x60);
-
- for (i = 0; i < 3; i++) {
- b43_radio_write(dev, 0x17F, radio_values[i][0]);
- b43_radio_write(dev, 0x13D, 0x6E);
- b43_radio_write(dev, 0x13E, radio_values[i][1]);
- b43_radio_write(dev, 0x13C, 0x55);
-
- for (j = 0; j < 10000; j++) {
- if (b43_radio_read(dev, 0x140) & 2) {
- j = 0;
- break;
- }
- udelay(500);
- }
- if (j)
- b43err(dev->wl, "radio 0x140 timeout\n");
-
- b43_radio_write(dev, 0x13C, 0x15);
- }
-
- b43_radio_mask(dev, 0x17F, ~0x1);
+ b43_radio_2059_rcal(dev);
+ b43_radio_2057_rccal(dev);
}
- b43_radio_mask(dev, 0x11, ~0x0008);
+ b43_radio_mask(dev, R2059_RFPLL_MASTER, ~0x0008);
}
/**************************************************
@@ -297,6 +321,26 @@
b43_phy_write(dev, B43_PHY_N_BMODE(0x38), 0x668);
}
+static void b43_phy_ht_bphy_reset(struct b43_wldev *dev, bool reset)
+{
+ u16 tmp;
+
+ tmp = b43_read16(dev, B43_MMIO_PSM_PHY_HDR);
+ b43_write16(dev, B43_MMIO_PSM_PHY_HDR,
+ tmp | B43_PSM_HDR_MAC_PHY_FORCE_CLK);
+
+ /* Put BPHY in or take it out of the reset */
+ if (reset)
+ b43_phy_set(dev, B43_PHY_B_BBCFG,
+ B43_PHY_B_BBCFG_RSTCCA | B43_PHY_B_BBCFG_RSTRX);
+ else
+ b43_phy_mask(dev, B43_PHY_B_BBCFG,
+ (u16)~(B43_PHY_B_BBCFG_RSTCCA |
+ B43_PHY_B_BBCFG_RSTRX));
+
+ b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp);
+}
+
/**************************************************
* Samples
**************************************************/
@@ -704,7 +748,6 @@
{
struct bcma_device *core = dev->dev->bdev;
int spuravoid = 0;
- u16 tmp;
/* Checking for 13 and 14 is just a guess; we don't have enough logs. */
if (new_channel->hw_value == 13 || new_channel->hw_value == 14)
@@ -717,22 +760,9 @@
B43_BCMA_CLKCTLST_80211_PLL_ST |
B43_BCMA_CLKCTLST_PHY_PLL_ST, false);
- /* Values has been taken from wlc_bmac_switch_macfreq comments */
- switch (spuravoid) {
- case 2: /* 126MHz */
- tmp = 0x2082;
- break;
- case 1: /* 123MHz */
- tmp = 0x5341;
- break;
- default: /* 120MHz */
- tmp = 0x8889;
- }
+ b43_mac_switch_freq(dev, spuravoid);
- b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_LOW, tmp);
- b43_write16(dev, B43_MMIO_TSF_CLK_FRAC_HIGH, 0x8);
-
- /* TODO: reset PLL */
+ b43_wireless_core_phy_pll_reset(dev);
if (spuravoid)
b43_phy_set(dev, B43_PHY_HT_BBCFG, B43_PHY_HT_BBCFG_RSTRX);
@@ -747,13 +777,19 @@
const struct b43_phy_ht_channeltab_e_phy *e,
struct ieee80211_channel *new_channel)
{
- bool old_band_5ghz;
+ if (new_channel->band == IEEE80211_BAND_5GHZ) {
+ /* Switch to 2 GHz for a moment to access B-PHY regs */
+ b43_phy_mask(dev, B43_PHY_HT_BANDCTL, ~B43_PHY_HT_BANDCTL_5GHZ);
- old_band_5ghz = b43_phy_read(dev, B43_PHY_HT_BANDCTL) & 0; /* FIXME */
- if (new_channel->band == IEEE80211_BAND_5GHZ && !old_band_5ghz) {
- /* TODO */
- } else if (new_channel->band == IEEE80211_BAND_2GHZ && old_band_5ghz) {
- /* TODO */
+ b43_phy_ht_bphy_reset(dev, true);
+
+ /* Switch to 5 GHz */
+ b43_phy_set(dev, B43_PHY_HT_BANDCTL, B43_PHY_HT_BANDCTL_5GHZ);
+ } else {
+ /* Switch to 2 GHz */
+ b43_phy_mask(dev, B43_PHY_HT_BANDCTL, ~B43_PHY_HT_BANDCTL_5GHZ);
+
+ b43_phy_ht_bphy_reset(dev, false);
}
b43_phy_write(dev, B43_PHY_HT_BW1, e->bw1);
@@ -1002,19 +1038,10 @@
if (b43_read32(dev, B43_MMIO_MACCTL) & B43_MACCTL_ENABLED)
b43err(dev->wl, "MAC not suspended\n");
- /* In the following PHY ops we copy wl's dummy behaviour.
- * TODO: Find out if reads (currently hidden in masks/masksets) are
- * needed and replace following ops with just writes or w&r.
- * Note: B43_PHY_HT_RF_CTL1 register is tricky, wrong operation can
- * cause delayed (!) machine lock up. */
if (blocked) {
- b43_phy_mask(dev, B43_PHY_HT_RF_CTL1, 0);
+ b43_phy_mask(dev, B43_PHY_HT_RF_CTL_CMD,
+ ~B43_PHY_HT_RF_CTL_CMD_CHIP0_PU);
} else {
- b43_phy_mask(dev, B43_PHY_HT_RF_CTL1, 0);
- b43_phy_maskset(dev, B43_PHY_HT_RF_CTL1, 0, 0x1);
- b43_phy_mask(dev, B43_PHY_HT_RF_CTL1, 0);
- b43_phy_maskset(dev, B43_PHY_HT_RF_CTL1, 0, 0x2);
-
if (dev->phy.radio_ver == 0x2059)
b43_radio_2059_init(dev);
else
diff --git a/drivers/net/wireless/b43/phy_ht.h b/drivers/net/wireless/b43/phy_ht.h
index 6cae370..c086f56 100644
--- a/drivers/net/wireless/b43/phy_ht.h
+++ b/drivers/net/wireless/b43/phy_ht.h
@@ -81,7 +81,9 @@
#define B43_PHY_HT_RF_SEQ_STATUS B43_PHY_EXTG(0x004)
/* Values for the status are the same as for the trigger */
-#define B43_PHY_HT_RF_CTL1 B43_PHY_EXTG(0x010)
+#define B43_PHY_HT_RF_CTL_CMD 0x810
+#define B43_PHY_HT_RF_CTL_CMD_FORCE 0x0001
+#define B43_PHY_HT_RF_CTL_CMD_CHIP0_PU 0x0002
#define B43_PHY_HT_RF_CTL_INT_C1 B43_PHY_EXTG(0x04c)
#define B43_PHY_HT_RF_CTL_INT_C2 B43_PHY_EXTG(0x06c)
@@ -104,6 +106,9 @@
#define B43_PHY_HT_TXPCTL_TARG_PWR2_C3_SHIFT 0
#define B43_PHY_HT_TX_PCTL_STATUS_C3 B43_PHY_EXTG(0x169)
+#define B43_PHY_B_BBCFG B43_PHY_N_BMODE(0x001)
+#define B43_PHY_B_BBCFG_RSTCCA 0x4000 /* Reset CCA */
+#define B43_PHY_B_BBCFG_RSTRX 0x8000 /* Reset RX */
#define B43_PHY_HT_TEST B43_PHY_N_BMODE(0x00A)
diff --git a/drivers/net/wireless/b43/phy_n.c b/drivers/net/wireless/b43/phy_n.c
index cf625d8..9f0bcf3 100644
--- a/drivers/net/wireless/b43/phy_n.c
+++ b/drivers/net/wireless/b43/phy_n.c
@@ -6369,7 +6369,7 @@
b43_mac_switch_freq(dev, spuravoid);
if (dev->phy.rev == 3 || dev->phy.rev == 4)
- ; /* TODO: reset PLL */
+ b43_wireless_core_phy_pll_reset(dev);
if (spuravoid)
b43_phy_set(dev, B43_NPHY_BBCFG, B43_NPHY_BBCFG_RSTRX);
diff --git a/drivers/net/wireless/b43/radio_2059.c b/drivers/net/wireless/b43/radio_2059.c
index 38e31d8..a3cf9ef 100644
--- a/drivers/net/wireless/b43/radio_2059.c
+++ b/drivers/net/wireless/b43/radio_2059.c
@@ -25,6 +25,13 @@
#include "b43.h"
#include "radio_2059.h"
+/* Extracted from MMIO dump of 6.30.223.141 */
+static u16 r2059_phy_rev1_init[][2] = {
+ { 0x051, 0x70 }, { 0x05a, 0x03 }, { 0x079, 0x01 }, { 0x082, 0x70 },
+ { 0x083, 0x00 }, { 0x084, 0x70 }, { 0x09a, 0x7f }, { 0x0b6, 0x10 },
+ { 0x188, 0x05 },
+};
+
#define RADIOREGS(r00, r01, r02, r03, r04, r05, r06, r07, r08, r09, \
r10, r11, r12, r13, r14, r15, r16, r17, r18, r19, \
r20) \
@@ -58,73 +65,87 @@
.phy_regs.bw5 = r4, \
.phy_regs.bw6 = r5
+/* Extracted from MMIO dump of 6.30.223.141
+ * TODO: Values for channels 12 & 13 are outdated (from some old 5.x driver)!
+ */
static const struct b43_phy_ht_channeltab_e_radio2059 b43_phy_ht_channeltab_radio2059[] = {
- { .freq = 2412,
- RADIOREGS(0x48, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x6c,
- 0x09, 0x0f, 0x0a, 0x00, 0x0a, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03c9, 0x03c5, 0x03c1, 0x043a, 0x043f, 0x0443),
- },
- { .freq = 2417,
- RADIOREGS(0x4b, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x71,
- 0x09, 0x0f, 0x0a, 0x00, 0x0a, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03cb, 0x03c7, 0x03c3, 0x0438, 0x043d, 0x0441),
- },
- { .freq = 2422,
- RADIOREGS(0x4e, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x76,
- 0x09, 0x0f, 0x09, 0x00, 0x09, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03cd, 0x03c9, 0x03c5, 0x0436, 0x043a, 0x043f),
- },
- { .freq = 2427,
- RADIOREGS(0x52, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x7b,
- 0x09, 0x0f, 0x09, 0x00, 0x09, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03cf, 0x03cb, 0x03c7, 0x0434, 0x0438, 0x043d),
- },
- { .freq = 2432,
- RADIOREGS(0x55, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x80,
- 0x09, 0x0f, 0x08, 0x00, 0x08, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03d1, 0x03cd, 0x03c9, 0x0431, 0x0436, 0x043a),
- },
- { .freq = 2437,
- RADIOREGS(0x58, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x85,
- 0x09, 0x0f, 0x08, 0x00, 0x08, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03d3, 0x03cf, 0x03cb, 0x042f, 0x0434, 0x0438),
- },
- { .freq = 2442,
- RADIOREGS(0x5c, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x8a,
- 0x09, 0x0f, 0x07, 0x00, 0x07, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03d5, 0x03d1, 0x03cd, 0x042d, 0x0431, 0x0436),
- },
- { .freq = 2447,
- RADIOREGS(0x5f, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x8f,
- 0x09, 0x0f, 0x07, 0x00, 0x07, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03d7, 0x03d3, 0x03cf, 0x042b, 0x042f, 0x0434),
- },
- { .freq = 2452,
- RADIOREGS(0x62, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x94,
- 0x09, 0x0f, 0x07, 0x00, 0x07, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03d9, 0x03d5, 0x03d1, 0x0429, 0x042d, 0x0431),
- },
- { .freq = 2457,
- RADIOREGS(0x66, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x99,
- 0x09, 0x0f, 0x06, 0x00, 0x06, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03db, 0x03d7, 0x03d3, 0x0427, 0x042b, 0x042f),
- },
- { .freq = 2462,
- RADIOREGS(0x69, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x9e,
- 0x09, 0x0f, 0x06, 0x00, 0x06, 0x00, 0x61, 0x03,
- 0x00, 0x00, 0x00, 0xf0, 0x00),
- PHYREGS(0x03dd, 0x03d9, 0x03d5, 0x0424, 0x0429, 0x042d),
- },
+ {
+ .freq = 2412,
+ RADIOREGS(0x48, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x6c,
+ 0x09, 0x0f, 0x0a, 0x00, 0x0a, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0xd0, 0x00),
+ PHYREGS(0x03c9, 0x03c5, 0x03c1, 0x043a, 0x043f, 0x0443),
+ },
+ {
+ .freq = 2417,
+ RADIOREGS(0x4b, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x71,
+ 0x09, 0x0f, 0x0a, 0x00, 0x0a, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0xd0, 0x00),
+ PHYREGS(0x03cb, 0x03c7, 0x03c3, 0x0438, 0x043d, 0x0441),
+ },
+ {
+ .freq = 2422,
+ RADIOREGS(0x4e, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x76,
+ 0x09, 0x0f, 0x09, 0x00, 0x09, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0xd0, 0x00),
+ PHYREGS(0x03cd, 0x03c9, 0x03c5, 0x0436, 0x043a, 0x043f),
+ },
+ {
+ .freq = 2427,
+ RADIOREGS(0x52, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x7b,
+ 0x09, 0x0f, 0x09, 0x00, 0x09, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0xa0, 0x00),
+ PHYREGS(0x03cf, 0x03cb, 0x03c7, 0x0434, 0x0438, 0x043d),
+ },
+ {
+ .freq = 2432,
+ RADIOREGS(0x55, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x80,
+ 0x09, 0x0f, 0x08, 0x00, 0x08, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0xa0, 0x00),
+ PHYREGS(0x03d1, 0x03cd, 0x03c9, 0x0431, 0x0436, 0x043a),
+ },
+ {
+ .freq = 2437,
+ RADIOREGS(0x58, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x85,
+ 0x09, 0x0f, 0x08, 0x00, 0x08, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0xa0, 0x00),
+ PHYREGS(0x03d3, 0x03cf, 0x03cb, 0x042f, 0x0434, 0x0438),
+ },
+ {
+ .freq = 2442,
+ RADIOREGS(0x5c, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x8a,
+ 0x09, 0x0f, 0x07, 0x00, 0x07, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0x80, 0x00),
+ PHYREGS(0x03d5, 0x03d1, 0x03cd, 0x042d, 0x0431, 0x0436),
+ },
+ {
+ .freq = 2447,
+ RADIOREGS(0x5f, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x8f,
+ 0x09, 0x0f, 0x07, 0x00, 0x07, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0x80, 0x00),
+ PHYREGS(0x03d7, 0x03d3, 0x03cf, 0x042b, 0x042f, 0x0434),
+ },
+ {
+ .freq = 2452,
+ RADIOREGS(0x62, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x94,
+ 0x09, 0x0f, 0x07, 0x00, 0x07, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0x80, 0x00),
+ PHYREGS(0x03d9, 0x03d5, 0x03d1, 0x0429, 0x042d, 0x0431),
+ },
+ {
+ .freq = 2457,
+ RADIOREGS(0x66, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x99,
+ 0x09, 0x0f, 0x06, 0x00, 0x06, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0x60, 0x00),
+ PHYREGS(0x03db, 0x03d7, 0x03d3, 0x0427, 0x042b, 0x042f),
+ },
+ {
+ .freq = 2462,
+ RADIOREGS(0x69, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0x9e,
+ 0x09, 0x0f, 0x06, 0x00, 0x06, 0x00, 0x61, 0x73,
+ 0x00, 0x00, 0x00, 0x60, 0x00),
+ PHYREGS(0x03dd, 0x03d9, 0x03d5, 0x0424, 0x0429, 0x042d),
+ },
{ .freq = 2467,
RADIOREGS(0x6c, 0x16, 0x30, 0x1b, 0x0a, 0x0a, 0x30, 0xa3,
0x09, 0x0f, 0x05, 0x00, 0x05, 0x00, 0x61, 0x03,
@@ -137,8 +158,196 @@
0x00, 0x00, 0x00, 0xf0, 0x00),
PHYREGS(0x03e1, 0x03dd, 0x03d9, 0x0420, 0x0424, 0x0429),
},
+ {
+ .freq = 5180,
+ RADIOREGS(0xbe, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x06,
+ 0x02, 0x0c, 0x00, 0x0c, 0x00, 0x0c, 0x00, 0x00,
+ 0x0f, 0x4f, 0xa3, 0x00, 0xfc),
+ PHYREGS(0x081c, 0x0818, 0x0814, 0x01f9, 0x01fa, 0x01fb),
+ },
+ {
+ .freq = 5200,
+ RADIOREGS(0xc5, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x08,
+ 0x02, 0x0c, 0x00, 0x0c, 0x00, 0x0c, 0x00, 0x00,
+ 0x0f, 0x4f, 0x93, 0x00, 0xfb),
+ PHYREGS(0x0824, 0x0820, 0x081c, 0x01f7, 0x01f8, 0x01f9),
+ },
+ {
+ .freq = 5220,
+ RADIOREGS(0xcc, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x0a,
+ 0x02, 0x0c, 0x00, 0x0c, 0x00, 0x0c, 0x00, 0x00,
+ 0x0f, 0x4f, 0x93, 0x00, 0xea),
+ PHYREGS(0x082c, 0x0828, 0x0824, 0x01f5, 0x01f6, 0x01f7),
+ },
+ {
+ .freq = 5240,
+ RADIOREGS(0xd2, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x0c,
+ 0x02, 0x0c, 0x00, 0x0c, 0x00, 0x0c, 0x00, 0x00,
+ 0x0f, 0x4f, 0x93, 0x00, 0xda),
+ PHYREGS(0x0834, 0x0830, 0x082c, 0x01f3, 0x01f4, 0x01f5),
+ },
+ {
+ .freq = 5260,
+ RADIOREGS(0xd9, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x0e,
+ 0x02, 0x0b, 0x00, 0x0b, 0x00, 0x0b, 0x00, 0x00,
+ 0x0f, 0x4f, 0x93, 0x00, 0xca),
+ PHYREGS(0x083c, 0x0838, 0x0834, 0x01f1, 0x01f2, 0x01f3),
+ },
+ {
+ .freq = 5280,
+ RADIOREGS(0xe0, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x10,
+ 0x02, 0x0b, 0x00, 0x0b, 0x00, 0x0b, 0x00, 0x00,
+ 0x0f, 0x4f, 0x93, 0x00, 0xb9),
+ PHYREGS(0x0844, 0x0840, 0x083c, 0x01f0, 0x01f0, 0x01f1),
+ },
+ {
+ .freq = 5300,
+ RADIOREGS(0xe6, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x12,
+ 0x02, 0x0b, 0x00, 0x0b, 0x00, 0x0b, 0x00, 0x00,
+ 0x0f, 0x4c, 0x83, 0x00, 0xb8),
+ PHYREGS(0x084c, 0x0848, 0x0844, 0x01ee, 0x01ef, 0x01f0),
+ },
+ {
+ .freq = 5320,
+ RADIOREGS(0xed, 0x16, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x14,
+ 0x02, 0x0b, 0x00, 0x0b, 0x00, 0x0b, 0x00, 0x00,
+ 0x0f, 0x4c, 0x83, 0x00, 0xa8),
+ PHYREGS(0x0854, 0x0850, 0x084c, 0x01ec, 0x01ed, 0x01ee),
+ },
+ {
+ .freq = 5500,
+ RADIOREGS(0x29, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x26,
+ 0x02, 0x09, 0x00, 0x09, 0x00, 0x09, 0x00, 0x00,
+ 0x0a, 0x46, 0x43, 0x00, 0x75),
+ PHYREGS(0x089c, 0x0898, 0x0894, 0x01dc, 0x01dd, 0x01dd),
+ },
+ {
+ .freq = 5520,
+ RADIOREGS(0x30, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x28,
+ 0x02, 0x08, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
+ 0x0a, 0x46, 0x43, 0x00, 0x75),
+ PHYREGS(0x08a4, 0x08a0, 0x089c, 0x01da, 0x01db, 0x01dc),
+ },
+ {
+ .freq = 5540,
+ RADIOREGS(0x36, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x2a,
+ 0x02, 0x08, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
+ 0x0a, 0x46, 0x43, 0x00, 0x75),
+ PHYREGS(0x08ac, 0x08a8, 0x08a4, 0x01d8, 0x01d9, 0x01da),
+ },
+ {
+ .freq = 5560,
+ RADIOREGS(0x3d, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x2c,
+ 0x02, 0x08, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
+ 0x0a, 0x46, 0x43, 0x00, 0x75),
+ PHYREGS(0x08b4, 0x08b0, 0x08ac, 0x01d7, 0x01d7, 0x01d8),
+ },
+ {
+ .freq = 5580,
+ RADIOREGS(0x44, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x2e,
+ 0x02, 0x08, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
+ 0x0a, 0x46, 0x43, 0x00, 0x74),
+ PHYREGS(0x08bc, 0x08b8, 0x08b4, 0x01d5, 0x01d6, 0x01d7),
+ },
+ {
+ .freq = 5600,
+ RADIOREGS(0x4a, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x30,
+ 0x02, 0x08, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
+ 0x09, 0x44, 0x23, 0x00, 0x54),
+ PHYREGS(0x08c4, 0x08c0, 0x08bc, 0x01d3, 0x01d4, 0x01d5),
+ },
+ {
+ .freq = 5620,
+ RADIOREGS(0x51, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x32,
+ 0x02, 0x07, 0x00, 0x07, 0x00, 0x07, 0x00, 0x00,
+ 0x09, 0x44, 0x23, 0x00, 0x54),
+ PHYREGS(0x08cc, 0x08c8, 0x08c4, 0x01d2, 0x01d2, 0x01d3),
+ },
+ {
+ .freq = 5640,
+ RADIOREGS(0x58, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x34,
+ 0x02, 0x07, 0x00, 0x07, 0x00, 0x07, 0x00, 0x00,
+ 0x09, 0x44, 0x23, 0x00, 0x43),
+ PHYREGS(0x08d4, 0x08d0, 0x08cc, 0x01d0, 0x01d1, 0x01d2),
+ },
+ {
+ .freq = 5660,
+ RADIOREGS(0x5e, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x36,
+ 0x02, 0x07, 0x00, 0x07, 0x00, 0x07, 0x00, 0x00,
+ 0x09, 0x43, 0x23, 0x00, 0x43),
+ PHYREGS(0x08dc, 0x08d8, 0x08d4, 0x01ce, 0x01cf, 0x01d0),
+ },
+ {
+ .freq = 5680,
+ RADIOREGS(0x65, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x38,
+ 0x02, 0x07, 0x00, 0x07, 0x00, 0x07, 0x00, 0x00,
+ 0x09, 0x42, 0x23, 0x00, 0x43),
+ PHYREGS(0x08e4, 0x08e0, 0x08dc, 0x01cd, 0x01ce, 0x01ce),
+ },
+ {
+ .freq = 5700,
+ RADIOREGS(0x6c, 0x17, 0x10, 0x1f, 0x08, 0x08, 0x3f, 0x3a,
+ 0x02, 0x07, 0x00, 0x07, 0x00, 0x07, 0x00, 0x00,
+ 0x08, 0x42, 0x13, 0x00, 0x32),
+ PHYREGS(0x08ec, 0x08e8, 0x08e4, 0x01cb, 0x01cc, 0x01cd),
+ },
+ {
+ .freq = 5745,
+ RADIOREGS(0x7b, 0x17, 0x20, 0x1f, 0x08, 0x08, 0x3f, 0x7d,
+ 0x04, 0x06, 0x00, 0x06, 0x00, 0x06, 0x00, 0x00,
+ 0x08, 0x42, 0x13, 0x00, 0x21),
+ PHYREGS(0x08fe, 0x08fa, 0x08f6, 0x01c8, 0x01c8, 0x01c9),
+ },
+ {
+ .freq = 5765,
+ RADIOREGS(0x81, 0x17, 0x20, 0x1f, 0x08, 0x08, 0x3f, 0x81,
+ 0x04, 0x06, 0x00, 0x06, 0x00, 0x06, 0x00, 0x00,
+ 0x08, 0x42, 0x13, 0x00, 0x11),
+ PHYREGS(0x0906, 0x0902, 0x08fe, 0x01c6, 0x01c7, 0x01c8),
+ },
+ {
+ .freq = 5785,
+ RADIOREGS(0x88, 0x17, 0x20, 0x1f, 0x08, 0x08, 0x3f, 0x85,
+ 0x04, 0x05, 0x00, 0x05, 0x00, 0x05, 0x00, 0x00,
+ 0x08, 0x42, 0x13, 0x00, 0x00),
+ PHYREGS(0x090e, 0x090a, 0x0906, 0x01c4, 0x01c5, 0x01c6),
+ },
+ {
+ .freq = 5805,
+ RADIOREGS(0x8f, 0x17, 0x20, 0x1f, 0x08, 0x08, 0x3f, 0x89,
+ 0x04, 0x05, 0x00, 0x05, 0x00, 0x05, 0x00, 0x00,
+ 0x06, 0x41, 0x03, 0x00, 0x00),
+ PHYREGS(0x0916, 0x0912, 0x090e, 0x01c3, 0x01c4, 0x01c4),
+ },
+ {
+ .freq = 5825,
+ RADIOREGS(0x95, 0x17, 0x20, 0x1f, 0x08, 0x08, 0x3f, 0x8d,
+ 0x04, 0x05, 0x00, 0x05, 0x00, 0x05, 0x00, 0x00,
+ 0x06, 0x41, 0x03, 0x00, 0x00),
+ PHYREGS(0x091e, 0x091a, 0x0916, 0x01c1, 0x01c2, 0x01c3),
+ },
};
+void r2059_upload_inittabs(struct b43_wldev *dev)
+{
+ struct b43_phy *phy = &dev->phy;
+ u16 *table = NULL;
+ u16 size, i;
+
+ switch (phy->rev) {
+ case 1:
+ table = r2059_phy_rev1_init[0];
+ size = ARRAY_SIZE(r2059_phy_rev1_init);
+ break;
+ default:
+ B43_WARN_ON(1);
+ return;
+ }
+
+ for (i = 0; i < size; i++, table += 2)
+ b43_radio_write(dev, R2059_ALL | table[0], table[1]);
+}
+
const struct b43_phy_ht_channeltab_e_radio2059
*b43_phy_ht_get_channeltab_e_r2059(struct b43_wldev *dev, u16 freq)
{
diff --git a/drivers/net/wireless/b43/radio_2059.h b/drivers/net/wireless/b43/radio_2059.h
index 40a82d7..9e22fb6 100644
--- a/drivers/net/wireless/b43/radio_2059.h
+++ b/drivers/net/wireless/b43/radio_2059.h
@@ -10,6 +10,18 @@
#define R2059_C3 0x800
#define R2059_ALL 0xC00
+#define R2059_RCAL_CONFIG 0x004
+#define R2059_RFPLL_MASTER 0x011
+#define R2059_RFPLL_MISC_EN 0x02b
+#define R2059_RFPLL_MISC_CAL_RESETN 0x02e
+#define R2059_XTAL_CONFIG2 0x0c0
+#define R2059_RCCAL_START_R1_Q1_P1 0x13c
+#define R2059_RCCAL_X1 0x13d
+#define R2059_RCCAL_TRC0 0x13e
+#define R2059_RCCAL_DONE_OSCCAP 0x140
+#define R2059_RCAL_STATUS 0x145
+#define R2059_RCCAL_MASTER 0x17f
+
/* Values for various registers uploaded on channel switching */
struct b43_phy_ht_channeltab_e_radio2059 {
/* The channel frequency in MHz */
@@ -40,6 +52,8 @@
struct b43_phy_ht_channeltab_e_phy phy_regs;
};
+void r2059_upload_inittabs(struct b43_wldev *dev);
+
const struct b43_phy_ht_channeltab_e_radio2059
*b43_phy_ht_get_channeltab_e_r2059(struct b43_wldev *dev, u16 freq);
diff --git a/drivers/net/wireless/b43/xmit.h b/drivers/net/wireless/b43/xmit.h
index 98d9074..ba61153 100644
--- a/drivers/net/wireless/b43/xmit.h
+++ b/drivers/net/wireless/b43/xmit.h
@@ -97,9 +97,13 @@
};
/* MAC TX control */
+#define B43_TXH_MAC_RTS_FB_SHORTPRMBL 0x80000000 /* RTS fallback preamble */
+#define B43_TXH_MAC_RTS_SHORTPRMBL 0x40000000 /* RTS main rate preamble */
+#define B43_TXH_MAC_FB_SHORTPRMBL 0x20000000 /* Main fallback preamble */
#define B43_TXH_MAC_USEFBR 0x10000000 /* Use fallback rate for this AMPDU */
#define B43_TXH_MAC_KEYIDX 0x0FF00000 /* Security key index */
#define B43_TXH_MAC_KEYIDX_SHIFT 20
+#define B43_TXH_MAC_ALT_TXPWR 0x00080000 /* Use alternate txpwr defined at loc. M_ALT_TXPWR_IDX */
#define B43_TXH_MAC_KEYALG 0x00070000 /* Security key algorithm */
#define B43_TXH_MAC_KEYALG_SHIFT 16
#define B43_TXH_MAC_AMIC 0x00008000 /* AMIC */
@@ -126,25 +130,25 @@
#define B43_TXH_EFT_FB 0x03 /* Data frame fallback encoding */
#define B43_TXH_EFT_FB_CCK 0x00 /* CCK */
#define B43_TXH_EFT_FB_OFDM 0x01 /* OFDM */
-#define B43_TXH_EFT_FB_EWC 0x02 /* EWC */
-#define B43_TXH_EFT_FB_N 0x03 /* N */
+#define B43_TXH_EFT_FB_HT 0x02 /* HT */
+#define B43_TXH_EFT_FB_VHT 0x03 /* VHT */
#define B43_TXH_EFT_RTS 0x0C /* RTS/CTS encoding */
#define B43_TXH_EFT_RTS_CCK 0x00 /* CCK */
#define B43_TXH_EFT_RTS_OFDM 0x04 /* OFDM */
-#define B43_TXH_EFT_RTS_EWC 0x08 /* EWC */
-#define B43_TXH_EFT_RTS_N 0x0C /* N */
+#define B43_TXH_EFT_RTS_HT 0x08 /* HT */
+#define B43_TXH_EFT_RTS_VHT 0x0C /* VHT */
#define B43_TXH_EFT_RTSFB 0x30 /* RTS/CTS fallback encoding */
#define B43_TXH_EFT_RTSFB_CCK 0x00 /* CCK */
#define B43_TXH_EFT_RTSFB_OFDM 0x10 /* OFDM */
-#define B43_TXH_EFT_RTSFB_EWC 0x20 /* EWC */
-#define B43_TXH_EFT_RTSFB_N 0x30 /* N */
+#define B43_TXH_EFT_RTSFB_HT 0x20 /* HT */
+#define B43_TXH_EFT_RTSFB_VHT 0x30 /* VHT */
/* PHY TX control word */
#define B43_TXH_PHY_ENC 0x0003 /* Data frame encoding */
#define B43_TXH_PHY_ENC_CCK 0x0000 /* CCK */
#define B43_TXH_PHY_ENC_OFDM 0x0001 /* OFDM */
-#define B43_TXH_PHY_ENC_EWC 0x0002 /* EWC */
-#define B43_TXH_PHY_ENC_N 0x0003 /* N */
+#define B43_TXH_PHY_ENC_HT 0x0002 /* HT */
+#define B43_TXH_PHY_ENC_VHT 0x0003 /* VHT */
#define B43_TXH_PHY_SHORTPRMBL 0x0010 /* Use short preamble */
#define B43_TXH_PHY_ANT 0x03C0 /* Antenna selection */
#define B43_TXH_PHY_ANT0 0x0000 /* Use antenna 0 */
@@ -162,7 +166,7 @@
#define B43_TXH_PHY1_BW_20 0x0002 /* 20 MHz */
#define B43_TXH_PHY1_BW_20U 0x0003 /* 20 MHz upper */
#define B43_TXH_PHY1_BW_40 0x0004 /* 40 MHz */
-#define B43_TXH_PHY1_BW_40DUP 0x0005 /* 50 MHz duplicate */
+#define B43_TXH_PHY1_BW_40DUP 0x0005 /* 40 MHz duplicate */
#define B43_TXH_PHY1_MODE 0x0038 /* Mode */
#define B43_TXH_PHY1_MODE_SISO 0x0000 /* SISO */
#define B43_TXH_PHY1_MODE_CDD 0x0008 /* CDD */
diff --git a/drivers/net/wireless/brcm80211/Kconfig b/drivers/net/wireless/brcm80211/Kconfig
index b8e2561..fe3dc12 100644
--- a/drivers/net/wireless/brcm80211/Kconfig
+++ b/drivers/net/wireless/brcm80211/Kconfig
@@ -27,10 +27,17 @@
one of the bus interface support. If you choose to build a module,
it'll be called brcmfmac.ko.
+config BRCMFMAC_PROTO_BCDC
+ bool
+
+config BRCMFMAC_PROTO_MSGBUF
+ bool
+
config BRCMFMAC_SDIO
bool "SDIO bus interface support for FullMAC driver"
depends on (MMC = y || MMC = BRCMFMAC)
depends on BRCMFMAC
+ select BRCMFMAC_PROTO_BCDC
select FW_LOADER
default y
---help---
@@ -42,6 +49,7 @@
bool "USB bus interface support for FullMAC driver"
depends on (USB = y || USB = BRCMFMAC)
depends on BRCMFMAC
+ select BRCMFMAC_PROTO_BCDC
select FW_LOADER
---help---
This option enables the USB bus interface support for Broadcom
@@ -52,6 +60,8 @@
bool "PCIE bus interface support for FullMAC driver"
depends on BRCMFMAC
depends on PCI
+ depends on HAS_DMA
+ select BRCMFMAC_PROTO_MSGBUF
select FW_LOADER
---help---
This option enables the PCIE bus interface support for Broadcom
diff --git a/drivers/net/wireless/brcm80211/brcmfmac/Makefile b/drivers/net/wireless/brcm80211/brcmfmac/Makefile
index c35adf4..90a977f 100644
--- a/drivers/net/wireless/brcm80211/brcmfmac/Makefile
+++ b/drivers/net/wireless/brcm80211/brcmfmac/Makefile
@@ -30,16 +30,18 @@
fwsignal.o \
p2p.o \
proto.o \
- bcdc.o \
- commonring.o \
- flowring.o \
- msgbuf.o \
dhd_common.o \
dhd_linux.o \
firmware.o \
feature.o \
btcoex.o \
vendor.o
+brcmfmac-$(CONFIG_BRCMFMAC_PROTO_BCDC) += \
+ bcdc.o
+brcmfmac-$(CONFIG_BRCMFMAC_PROTO_MSGBUF) += \
+ commonring.o \
+ flowring.o \
+ msgbuf.o
brcmfmac-$(CONFIG_BRCMFMAC_SDIO) += \
dhd_sdio.o \
bcmsdh.o
diff --git a/drivers/net/wireless/brcm80211/brcmfmac/bcdc.h b/drivers/net/wireless/brcm80211/brcmfmac/bcdc.h
index 17e8c03..6003179 100644
--- a/drivers/net/wireless/brcm80211/brcmfmac/bcdc.h
+++ b/drivers/net/wireless/brcm80211/brcmfmac/bcdc.h
@@ -16,9 +16,12 @@
#ifndef BRCMFMAC_BCDC_H
#define BRCMFMAC_BCDC_H
-
+#ifdef CONFIG_BRCMFMAC_PROTO_BCDC
int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr);
void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr);
-
+#else
+static inline int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr) { return 0; }
+static inline void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr) {}
+#endif
#endif /* BRCMFMAC_BCDC_H */
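
The header now follows the usual compiled-out stub pattern: when the Kconfig symbol is off, static inline no-ops replace the extern declarations so callers compile and link unchanged. The generic form, with a hypothetical FOO subsystem:

struct foo_dev;

#ifdef CONFIG_FOO
int foo_attach(struct foo_dev *dev);
void foo_detach(struct foo_dev *dev);
#else
static inline int foo_attach(struct foo_dev *dev) { return 0; }
static inline void foo_detach(struct foo_dev *dev) {}
#endif
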
diff --git a/drivers/net/wireless/brcm80211/brcmfmac/fweh.c b/drivers/net/wireless/brcm80211/brcmfmac/fweh.c
index 4f1daab..44fc85f 100644
--- a/drivers/net/wireless/brcm80211/brcmfmac/fweh.c
+++ b/drivers/net/wireless/brcm80211/brcmfmac/fweh.c
@@ -185,7 +185,13 @@
ifevent->action, ifevent->ifidx, ifevent->bssidx,
ifevent->flags, ifevent->role);
- if (ifevent->flags & BRCMF_E_IF_FLAG_NOIF) {
+ /* The P2P Device interface event must not be ignored
+ * contrary to what firmware tells us. The only way to
+ * distinguish the P2P Device is by looking at the ifidx
+ * and bssidx received.
+ */
+ if (!(ifevent->ifidx == 0 && ifevent->bssidx == 1) &&
+ (ifevent->flags & BRCMF_E_IF_FLAG_NOIF)) {
brcmf_dbg(EVENT, "event can be ignored\n");
return;
}
@@ -210,12 +216,12 @@
return;
}
- if (ifevent->action == BRCMF_E_IF_CHANGE)
+ if (ifp && ifevent->action == BRCMF_E_IF_CHANGE)
brcmf_fws_reset_interface(ifp);
err = brcmf_fweh_call_event_handler(ifp, emsg->event_code, emsg, data);
- if (ifevent->action == BRCMF_E_IF_DEL) {
+ if (ifp && ifevent->action == BRCMF_E_IF_DEL) {
brcmf_fws_del_interface(ifp);
brcmf_del_if(drvr, ifevent->bssidx);
}
diff --git a/drivers/net/wireless/brcm80211/brcmfmac/fweh.h b/drivers/net/wireless/brcm80211/brcmfmac/fweh.h
index dd20b18..cbf033f 100644
--- a/drivers/net/wireless/brcm80211/brcmfmac/fweh.h
+++ b/drivers/net/wireless/brcm80211/brcmfmac/fweh.h
@@ -172,6 +172,8 @@
#define BRCMF_E_IF_ROLE_STA 0
#define BRCMF_E_IF_ROLE_AP 1
#define BRCMF_E_IF_ROLE_WDS 2
+#define BRCMF_E_IF_ROLE_P2P_GO 3
+#define BRCMF_E_IF_ROLE_P2P_CLIENT 4
/**
* definitions for event packet validation.
diff --git a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.h b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.h
index f901ae5..77a51b8 100644
--- a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.h
+++ b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.h
@@ -15,6 +15,7 @@
#ifndef BRCMFMAC_MSGBUF_H
#define BRCMFMAC_MSGBUF_H
+#ifdef CONFIG_BRCMFMAC_PROTO_MSGBUF
#define BRCMF_H2D_MSGRING_CONTROL_SUBMIT_MAX_ITEM 20
#define BRCMF_H2D_MSGRING_RXPOST_SUBMIT_MAX_ITEM 256
@@ -32,9 +33,15 @@
int brcmf_proto_msgbuf_rx_trigger(struct device *dev);
+void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid);
int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr);
void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr);
-void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid);
-
+#else
+static inline int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr)
+{
+ return 0;
+}
+static inline void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr) {}
+#endif
#endif /* BRCMFMAC_MSGBUF_H */
diff --git a/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c b/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
index 12a60ca..1db11b0 100644
--- a/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
+++ b/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
@@ -497,8 +497,11 @@
static void
brcmf_cfg80211_update_proto_addr_mode(struct wireless_dev *wdev)
{
- struct net_device *ndev = wdev->netdev;
- struct brcmf_if *ifp = netdev_priv(ndev);
+ struct brcmf_cfg80211_vif *vif;
+ struct brcmf_if *ifp;
+
+ vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev);
+ ifp = vif->ifp;
if ((wdev->iftype == NL80211_IFTYPE_ADHOC) ||
(wdev->iftype == NL80211_IFTYPE_AP) ||
@@ -4924,7 +4927,7 @@
struct brcmu_chan ch;
int i;
- for (i = 0; i <= total; i++) {
+ for (i = 0; i < total; i++) {
ch.chspec = (u16)le32_to_cpu(chlist->element[i]);
cfg->d11inf.decchspec(&ch);
@@ -5149,6 +5152,7 @@
ch.band = BRCMU_CHAN_BAND_2G;
ch.bw = BRCMU_CHAN_BW_40;
+ ch.sb = BRCMU_CHAN_SB_NONE;
ch.chnum = 0;
cfg->d11inf.encchspec(&ch);
@@ -5182,6 +5186,7 @@
brcmf_update_bw40_channel_flag(&band->channels[j], &ch);
}
+ kfree(pbuf);
}
return err;
}
diff --git a/drivers/net/wireless/brcm80211/brcmsmac/dma.c b/drivers/net/wireless/brcm80211/brcmsmac/dma.c
index 4fb9635..796f5f9 100644
--- a/drivers/net/wireless/brcm80211/brcmsmac/dma.c
+++ b/drivers/net/wireless/brcm80211/brcmsmac/dma.c
@@ -746,7 +746,7 @@
/* !! may be called with core in reset */
void dma_detach(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
brcms_dbg_dma(di->core, "%s:\n", di->name);
@@ -842,7 +842,7 @@
void dma_rxinit(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
brcms_dbg_dma(di->core, "%s:\n", di->name);
@@ -924,7 +924,7 @@
*/
int dma_rx(struct dma_pub *pub, struct sk_buff_head *skb_list)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct sk_buff_head dma_frames;
struct sk_buff *p, *next;
uint len;
@@ -1022,7 +1022,7 @@
*/
bool dma_rxfill(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct sk_buff *p;
u16 rxin, rxout;
u32 flags = 0;
@@ -1106,7 +1106,7 @@
void dma_rxreclaim(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct sk_buff *p;
brcms_dbg_dma(di->core, "%s:\n", di->name);
@@ -1126,7 +1126,7 @@
/* get the address of the var in order to change later */
unsigned long dma_getvar(struct dma_pub *pub, const char *name)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
if (!strcmp(name, "&txavail"))
return (unsigned long)&(di->dma.txavail);
@@ -1137,7 +1137,7 @@
void dma_txinit(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
u32 control = D64_XC_XE;
brcms_dbg_dma(di->core, "%s:\n", di->name);
@@ -1170,7 +1170,7 @@
void dma_txsuspend(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
brcms_dbg_dma(di->core, "%s:\n", di->name);
@@ -1182,7 +1182,7 @@
void dma_txresume(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
brcms_dbg_dma(di->core, "%s:\n", di->name);
@@ -1194,7 +1194,7 @@
bool dma_txsuspended(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
return (di->ntxd == 0) ||
((bcma_read32(di->core,
@@ -1204,7 +1204,7 @@
void dma_txreclaim(struct dma_pub *pub, enum txd_range range)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct sk_buff *p;
brcms_dbg_dma(di->core, "%s: %s\n",
@@ -1225,7 +1225,7 @@
bool dma_txreset(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
u32 status;
if (di->ntxd == 0)
@@ -1252,7 +1252,7 @@
bool dma_rxreset(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
u32 status;
if (di->nrxd == 0)
@@ -1377,7 +1377,7 @@
int dma_txfast(struct brcms_c_info *wlc, struct dma_pub *pub,
struct sk_buff *p)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct brcms_ampdu_session *session = &di->ampdu_session;
struct ieee80211_tx_info *tx_info;
bool is_ampdu;
@@ -1427,7 +1427,7 @@
void dma_txflush(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct brcms_ampdu_session *session = &di->ampdu_session;
if (!skb_queue_empty(&session->skb_list))
@@ -1436,7 +1436,7 @@
int dma_txpending(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
return ntxdactive(di, di->txin, di->txout);
}
@@ -1446,7 +1446,7 @@
*/
void dma_kick_tx(struct dma_pub *pub)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
struct brcms_ampdu_session *session = &di->ampdu_session;
if (!skb_queue_empty(&session->skb_list) && dma64_txidle(di))
@@ -1465,7 +1465,7 @@
*/
struct sk_buff *dma_getnexttxp(struct dma_pub *pub, enum txd_range range)
{
- struct dma_info *di = (struct dma_info *)pub;
+ struct dma_info *di = container_of(pub, struct dma_info, dma);
u16 start, end, i;
u16 active_desc;
struct sk_buff *txp;
@@ -1547,7 +1547,7 @@
void dma_walk_packets(struct dma_pub *dmah, void (*callback_fnc)
(void *pkt, void *arg_a), void *arg_a)
{
- struct dma_info *di = (struct dma_info *) dmah;
+ struct dma_info *di = container_of(dmah, struct dma_info, dma);
uint i = di->txin;
uint end = di->txout;
struct sk_buff *skb;
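
Every conversion in this file swaps a raw cast for container_of(). With dma as the first member of struct dma_info the cast happened to work, but container_of() stays correct if the layout ever changes and documents the relationship. A self-contained userspace illustration with hypothetical types where the public member is deliberately not first:

#include <stddef.h>

struct pub  { int txavail; };
struct info { int private_state; struct pub dma; };	/* pub NOT first */

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct info *to_info(struct pub *p)
{
	/* a plain (struct info *)p would treat &dma as the start of
	 * struct info; container_of subtracts the member offset and
	 * always recovers the enclosing structure */
	return container_of(p, struct info, dma);
}
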
diff --git a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c
index 57ecc05..941b1e4 100644
--- a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c
+++ b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c
@@ -128,19 +128,19 @@
void wlc_phyreg_enter(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
wlapi_bmac_ucode_wake_override_phyreg_set(pi->sh->physhim);
}
void wlc_phyreg_exit(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
wlapi_bmac_ucode_wake_override_phyreg_clear(pi->sh->physhim);
}
void wlc_radioreg_enter(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
wlapi_bmac_mctrl(pi->sh->physhim, MCTL_LOCK_RADIO, MCTL_LOCK_RADIO);
udelay(10);
@@ -148,7 +148,7 @@
void wlc_radioreg_exit(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
(void)bcma_read16(pi->d11core, D11REGOFFS(phyversion));
pi->phy_wreg = 0;
@@ -586,7 +586,7 @@
void wlc_phy_detach(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (pih) {
if (--pi->refcnt)
@@ -613,7 +613,7 @@
wlc_phy_get_phyversion(struct brcms_phy_pub *pih, u16 *phytype, u16 *phyrev,
u16 *radioid, u16 *radiover)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
*phytype = (u16) pi->pubpi.phy_type;
*phyrev = (u16) pi->pubpi.phy_rev;
*radioid = pi->pubpi.radioid;
@@ -624,19 +624,19 @@
bool wlc_phy_get_encore(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
return pi->pubpi.abgphy_encore;
}
u32 wlc_phy_get_coreflags(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
return pi->pubpi.coreflags;
}
void wlc_phy_anacore(struct brcms_phy_pub *pih, bool on)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (ISNPHY(pi)) {
if (on) {
@@ -673,7 +673,7 @@
u32 wlc_phy_clk_bwbits(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
u32 phy_bw_clkbits = 0;
@@ -698,14 +698,14 @@
void wlc_phy_por_inform(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->phy_init_por = true;
}
void wlc_phy_edcrs_lock(struct brcms_phy_pub *pih, bool lock)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
pi->edcrs_threshold_lock = lock;
@@ -717,14 +717,14 @@
void wlc_phy_initcal_enable(struct brcms_phy_pub *pih, bool initcal)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
pi->do_initcal = initcal;
}
void wlc_phy_hw_clk_state_upd(struct brcms_phy_pub *pih, bool newstate)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (!pi || !pi->sh)
return;
@@ -734,7 +734,7 @@
void wlc_phy_hw_state_upd(struct brcms_phy_pub *pih, bool newstate)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (!pi || !pi->sh)
return;
@@ -746,7 +746,7 @@
{
u32 mc;
void (*phy_init)(struct brcms_phy *) = NULL;
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (pi->init_in_progress)
return;
@@ -798,7 +798,7 @@
void wlc_phy_cal_init(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
void (*cal_init)(struct brcms_phy *) = NULL;
if (WARN((bcma_read32(pi->d11core, D11REGOFFS(maccontrol)) &
@@ -816,7 +816,7 @@
int wlc_phy_down(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
int callbacks = 0;
if (pi->phycal_timer
@@ -1070,7 +1070,7 @@
void wlc_phy_hold_upd(struct brcms_phy_pub *pih, u32 id, bool set)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (set)
mboolset(pi->measure_hold, id);
@@ -1082,7 +1082,7 @@
void wlc_phy_mute_upd(struct brcms_phy_pub *pih, bool mute, u32 flags)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (mute)
mboolset(pi->measure_hold, PHY_HOLD_FOR_MUTE);
@@ -1096,7 +1096,7 @@
void wlc_phy_clear_tssi(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (ISNPHY(pi)) {
return;
@@ -1115,7 +1115,7 @@
void wlc_phy_switch_radio(struct brcms_phy_pub *pih, bool on)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
(void)bcma_read32(pi->d11core, D11REGOFFS(maccontrol));
if (ISNPHY(pi)) {
@@ -1149,35 +1149,35 @@
u16 wlc_phy_bw_state_get(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
return pi->bw;
}
void wlc_phy_bw_state_set(struct brcms_phy_pub *ppi, u16 bw)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->bw = bw;
}
void wlc_phy_chanspec_radio_set(struct brcms_phy_pub *ppi, u16 newch)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->radio_chanspec = newch;
}
u16 wlc_phy_chanspec_get(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
return pi->radio_chanspec;
}
void wlc_phy_chanspec_set(struct brcms_phy_pub *ppi, u16 chanspec)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
u16 m_cur_channel;
void (*chanspec_set)(struct brcms_phy *, u16) = NULL;
m_cur_channel = CHSPEC_CHANNEL(chanspec);
@@ -1226,7 +1226,7 @@
void wlc_phy_chanspec_ch14_widefilter_set(struct brcms_phy_pub *ppi,
bool wide_filter)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->channel_14_wide_filter = wide_filter;
@@ -1246,7 +1246,7 @@
wlc_phy_chanspec_band_validch(struct brcms_phy_pub *ppi, uint band,
struct brcms_chanvec *channels)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
uint i;
uint channel;
@@ -1267,7 +1267,7 @@
u16 wlc_phy_chanspec_band_firstch(struct brcms_phy_pub *ppi, uint band)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
uint i;
uint channel;
u16 chspec;
@@ -1311,7 +1311,7 @@
int wlc_phy_txpower_get(struct brcms_phy_pub *ppi, uint *qdbm, bool *override)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
*qdbm = pi->tx_user_target[0];
if (override != NULL)
@@ -1323,7 +1323,7 @@
struct txpwr_limits *txpwr)
{
bool mac_enabled = false;
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
memcpy(&pi->tx_user_target[TXP_FIRST_CCK],
&txpwr->cck[0], BRCMS_NUM_RATES_CCK);
@@ -1371,7 +1371,7 @@
int wlc_phy_txpower_set(struct brcms_phy_pub *ppi, uint qdbm, bool override)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
int i;
if (qdbm > 127)
@@ -1407,7 +1407,7 @@
wlc_phy_txpower_sromlimit(struct brcms_phy_pub *ppi, uint channel, u8 *min_pwr,
u8 *max_pwr, int txp_rate_idx)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
uint i;
*min_pwr = pi->min_txpower * BRCMS_TXPWR_DB_FACTOR;
@@ -1456,7 +1456,7 @@
wlc_phy_txpower_sromlimit_max_get(struct brcms_phy_pub *ppi, uint chan,
u8 *max_txpwr, u8 *min_txpwr)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
u8 tx_pwr_max = 0;
u8 tx_pwr_min = 255;
u8 max_num_rate;
@@ -1493,14 +1493,14 @@
u8 wlc_phy_txpower_get_target_min(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
return pi->tx_power_min;
}
u8 wlc_phy_txpower_get_target_max(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
return pi->tx_power_max;
}
@@ -1812,21 +1812,21 @@
void wlc_phy_txpwr_percent_set(struct brcms_phy_pub *ppi, u8 txpwr_percent)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->txpwr_percent = txpwr_percent;
}
void wlc_phy_machwcap_set(struct brcms_phy_pub *ppi, u32 machwcap)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->sh->machwcap = machwcap;
}
void wlc_phy_runbist_config(struct brcms_phy_pub *ppi, bool start_end)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
u16 rxc;
rxc = 0;
@@ -1857,7 +1857,7 @@
wlc_phy_txpower_limit_set(struct brcms_phy_pub *ppi, struct txpwr_limits *txpwr,
u16 chanspec)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
wlc_phy_txpower_reg_limit_calc(pi, txpwr, chanspec);
@@ -1881,14 +1881,14 @@
void wlc_phy_ofdm_rateset_war(struct brcms_phy_pub *pih, bool war)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
pi->ofdm_rateset_war = war;
}
void wlc_phy_bf_preempt_enable(struct brcms_phy_pub *pih, bool bf_preempt)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
pi->bf_preempt_4306 = bf_preempt;
}
@@ -1945,7 +1945,7 @@
bool wlc_phy_txpower_hw_ctrl_get(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
if (ISNPHY(pi))
return pi->nphy_txpwrctrl;
@@ -1955,7 +1955,7 @@
void wlc_phy_txpower_hw_ctrl_set(struct brcms_phy_pub *ppi, bool hwpwrctrl)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
bool suspend;
if (!pi->hwpwrctrl_capable)
@@ -2038,7 +2038,7 @@
wlc_phy_txpower_get_current(struct brcms_phy_pub *ppi, struct tx_power *power,
uint channel)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
uint rate, num_rates;
u8 min_pwr, max_pwr;
@@ -2136,21 +2136,21 @@
void wlc_phy_antsel_type_set(struct brcms_phy_pub *ppi, u8 antsel_type)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
pi->antsel_type = antsel_type;
}
bool wlc_phy_test_ison(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
return pi->phytest_on;
}
void wlc_phy_ant_rxdiv_set(struct brcms_phy_pub *ppi, u8 val)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
bool suspend;
pi->sh->rx_antdiv = val;
@@ -2283,7 +2283,7 @@
void wlc_phy_noise_sample_intr(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
u16 jssi_aux;
u8 channel = 0;
s8 noise_dbm = PHY_NOISE_FIXED_VAL_NPHY;
@@ -2339,7 +2339,7 @@
static void
wlc_phy_noise_sample_request(struct brcms_phy_pub *pih, u8 reason, u8 ch)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
s8 noise_dbm = PHY_NOISE_FIXED_VAL_NPHY;
bool sampling_in_progress = (pi->phynoise_state != 0);
bool wait_for_intr = true;
@@ -2531,7 +2531,7 @@
{
int rssi = rxh->PhyRxStatus_1 & PRXS1_JSSI_MASK;
uint radioid = pih->radioid;
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if ((pi->sh->corerev >= 11)
&& !(rxh->RxStatus2 & RXS_PHYRXST_VALID)) {
@@ -2591,7 +2591,7 @@
void wlc_phy_watchdog(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
bool delay_phy_cal = false;
pi->sh->now++;
@@ -2651,7 +2651,7 @@
void wlc_phy_BSSinit(struct brcms_phy_pub *pih, bool bonlyap, int rssi)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
uint i;
uint k;
@@ -2711,7 +2711,7 @@
s16 nphy_currtemp = 0;
s16 delta_temp = 0;
bool do_periodic_cal = true;
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
if (!ISNPHY(pi))
return;
@@ -2804,7 +2804,7 @@
void wlc_phy_stf_chain_init(struct brcms_phy_pub *pih, u8 txchain, u8 rxchain)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
pi->sh->hw_phytxchain = txchain;
pi->sh->hw_phyrxchain = rxchain;
@@ -2815,7 +2815,7 @@
void wlc_phy_stf_chain_set(struct brcms_phy_pub *pih, u8 txchain, u8 rxchain)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
pi->sh->phytxchain = txchain;
@@ -2827,7 +2827,7 @@
void wlc_phy_stf_chain_get(struct brcms_phy_pub *pih, u8 *txchain, u8 *rxchain)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
*txchain = pi->sh->phytxchain;
*rxchain = pi->sh->phyrxchain;
@@ -2837,7 +2837,7 @@
{
s16 nphy_currtemp;
u8 active_bitmap;
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
active_bitmap = (pi->phy_txcore_heatedup) ? 0x31 : 0x33;
@@ -2867,7 +2867,7 @@
s8 wlc_phy_stf_ssmode_get(struct brcms_phy_pub *pih, u16 chanspec)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
u8 siso_mcs_id, cdd_mcs_id;
siso_mcs_id =
@@ -2944,7 +2944,7 @@
bool wlc_phy_txpower_ipa_ison(struct brcms_phy_pub *ppi)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
if (ISNPHY(pi))
return wlc_phy_n_txpower_ipa_ison(pi);
diff --git a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_lcn.c
index b2d6d6d..5f13662 100644
--- a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_lcn.c
+++ b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_lcn.c
@@ -2865,7 +2865,7 @@
{
bool suspend, tx_gain_override_old;
struct lcnphy_txgains old_gains;
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
u16 idleTssi, idleTssi0_2C, idleTssi0_OB, idleTssi0_regvalue_OB,
idleTssi0_regvalue_2C;
u16 SAVE_txpwrctrl = wlc_lcnphy_get_tx_pwr_ctrl(pi);
@@ -3084,7 +3084,7 @@
s32 a1, b0, b1;
s32 tssi, pwr, maxtargetpwr, mintargetpwr;
bool suspend;
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
suspend = (0 == (bcma_read32(pi->d11core, D11REGOFFS(maccontrol)) &
MCTL_EN_MAC));
@@ -4348,7 +4348,7 @@
{
s8 index;
u16 index2;
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
struct brcms_phy_lcnphy *pi_lcn = pi->u.pi_lcnphy;
u16 SAVE_txpwrctrl = wlc_lcnphy_get_tx_pwr_ctrl(pi);
if (wlc_lcnphy_tempsense_based_pwr_ctrl_enabled(pi) &&
diff --git a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c
index 93869e8..084f18f 100644
--- a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c
+++ b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c
@@ -14121,7 +14121,7 @@
bool wlc_phy_bist_check_phy(struct brcms_phy_pub *pih)
{
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
u32 phybist0, phybist1, phybist2, phybist3, phybist4;
if (NREV_GE(pi->pubpi.phy_rev, 16))
@@ -19734,7 +19734,7 @@
u16 regval;
u16 tbl_buf[16];
uint i;
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
u16 tbl_opcode;
bool suspend;
@@ -19812,7 +19812,7 @@
u8 wlc_phy_rxcore_getstate_nphy(struct brcms_phy_pub *pih)
{
u16 regval, rxen_bits;
- struct brcms_phy *pi = (struct brcms_phy *) pih;
+ struct brcms_phy *pi = container_of(pih, struct brcms_phy, pubpi_ro);
regval = read_phy_reg(pi, 0xa2);
rxen_bits = (regval >> 4) & 0xf;
@@ -21342,7 +21342,7 @@
void wlc_phy_antsel_init(struct brcms_phy_pub *ppi, bool lut_init)
{
- struct brcms_phy *pi = (struct brcms_phy *) ppi;
+ struct brcms_phy *pi = container_of(ppi, struct brcms_phy, pubpi_ro);
u16 mask = 0xfc00;
u32 mc = 0;
diff --git a/drivers/net/wireless/hostap/hostap_proc.c b/drivers/net/wireless/hostap/hostap_proc.c
index 4e5c0f8..8efd17c 100644
--- a/drivers/net/wireless/hostap/hostap_proc.c
+++ b/drivers/net/wireless/hostap/hostap_proc.c
@@ -186,11 +186,9 @@
bss->ssid[i] : '_');
seq_putc(m, '\t');
- for (i = 0; i < bss->ssid_len; i++)
- seq_printf(m, "%02x", bss->ssid[i]);
+ seq_printf(m, "%*phN", (int)bss->ssid_len, bss->ssid);
seq_putc(m, '\t');
- for (i = 0; i < bss->wpa_ie_len; i++)
- seq_printf(m, "%02x", bss->wpa_ie[i]);
+ seq_printf(m, "%*phN", (int)bss->wpa_ie_len, bss->wpa_ie);
seq_putc(m, '\n');
return 0;
}
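
The %*ph extension used here is a kernel-only printf format that hex-dumps a small buffer (up to 64 bytes) in a single call, taking the length as the field-width argument; the N suffix prints the bytes with no separator. A hedged sketch of the variants (buffer contents are illustrative):

	u8 buf[4] = { 0xde, 0xad, 0xbe, 0xef };

	seq_printf(m, "%*ph\n",  (int)sizeof(buf), buf);	/* de ad be ef */
	seq_printf(m, "%*phC\n", (int)sizeof(buf), buf);	/* de:ad:be:ef */
	seq_printf(m, "%*phN\n", (int)sizeof(buf), buf);	/* deadbeef */
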
diff --git a/drivers/net/wireless/iwlegacy/4965-mac.c b/drivers/net/wireless/iwlegacy/4965-mac.c
index 3dcbe2c..26fec54 100644
--- a/drivers/net/wireless/iwlegacy/4965-mac.c
+++ b/drivers/net/wireless/iwlegacy/4965-mac.c
@@ -4633,7 +4633,7 @@
else {
ret = il_set_tx_power(il, val, false);
if (ret)
- IL_ERR("failed setting tx power (0x%d).\n", ret);
+ IL_ERR("failed setting tx power (0x%08x).\n", ret);
else
ret = count;
}
@@ -5757,9 +5757,8 @@
IEEE80211_HW_REPORTS_TX_ACK_STATUS | IEEE80211_HW_SUPPORTS_PS |
IEEE80211_HW_SUPPORTS_DYNAMIC_PS;
if (il->cfg->sku & IL_SKU_N)
- hw->flags |=
- IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS |
- IEEE80211_HW_SUPPORTS_STATIC_SMPS;
+ hw->wiphy->features |= NL80211_FEATURE_DYNAMIC_SMPS |
+ NL80211_FEATURE_STATIC_SMPS;
hw->sta_data_size = sizeof(struct il_station_priv);
hw->vif_data_size = sizeof(struct il_vif_priv);
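
These SMPS capability bits move from mac80211-internal hw->flags to NL80211 feature flags on the wiphy. The design point is visibility: hw->flags never reaches userspace, while wiphy->features is exposed through nl80211, so tools can discover SMPS support. A sketch of the new advertisement pattern (the condition is hypothetical):

	if (hw_supports_ht)	/* hypothetical capability check */
		hw->wiphy->features |= NL80211_FEATURE_DYNAMIC_SMPS |
				       NL80211_FEATURE_STATIC_SMPS;
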
diff --git a/drivers/net/wireless/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/iwlwifi/dvm/mac80211.c
index afb98f4..2364a3c 100644
--- a/drivers/net/wireless/iwlwifi/dvm/mac80211.c
+++ b/drivers/net/wireless/iwlwifi/dvm/mac80211.c
@@ -125,8 +125,8 @@
*/
if (priv->nvm_data->sku_cap_11n_enable)
- hw->flags |= IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS |
- IEEE80211_HW_SUPPORTS_STATIC_SMPS;
+ hw->wiphy->features |= NL80211_FEATURE_DYNAMIC_SMPS |
+ NL80211_FEATURE_STATIC_SMPS;
/*
* Enable 11w if advertised by firmware and software crypto
diff --git a/drivers/net/wireless/iwlwifi/dvm/power.c b/drivers/net/wireless/iwlwifi/dvm/power.c
index 760c45c..1513dbc 100644
--- a/drivers/net/wireless/iwlwifi/dvm/power.c
+++ b/drivers/net/wireless/iwlwifi/dvm/power.c
@@ -40,7 +40,7 @@
#include "commands.h"
#include "power.h"
-static bool force_cam;
+static bool force_cam = true;
module_param(force_cam, bool, 0644);
MODULE_PARM_DESC(force_cam, "force continuously aware mode (no power saving at all)");
diff --git a/drivers/net/wireless/iwlwifi/iwl-7000.c b/drivers/net/wireless/iwlwifi/iwl-7000.c
index 7e26d0d..b04b885 100644
--- a/drivers/net/wireless/iwlwifi/iwl-7000.c
+++ b/drivers/net/wireless/iwlwifi/iwl-7000.c
@@ -85,6 +85,8 @@
#define IWL7260_TX_POWER_VERSION 0xffff /* meaningless */
#define IWL3160_NVM_VERSION 0x709
#define IWL3160_TX_POWER_VERSION 0xffff /* meaningless */
+#define IWL3165_NVM_VERSION 0x709
+#define IWL3165_TX_POWER_VERSION 0xffff /* meaningless */
#define IWL7265_NVM_VERSION 0x0a1d
#define IWL7265_TX_POWER_VERSION 0xffff /* meaningless */
@@ -94,6 +96,9 @@
#define IWL3160_FW_PRE "iwlwifi-3160-"
#define IWL3160_MODULE_FIRMWARE(api) IWL3160_FW_PRE __stringify(api) ".ucode"
+#define IWL3165_FW_PRE "iwlwifi-3165-"
+#define IWL3165_MODULE_FIRMWARE(api) IWL3165_FW_PRE __stringify(api) ".ucode"
+
#define IWL7265_FW_PRE "iwlwifi-7265-"
#define IWL7265_MODULE_FIRMWARE(api) IWL7265_FW_PRE __stringify(api) ".ucode"
@@ -126,7 +131,8 @@
.max_data_size = IWL60_RTC_DATA_SIZE, \
.base_params = &iwl7000_base_params, \
.led_mode = IWL_LED_RF_STATE, \
- .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_7000
+ .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_7000, \
+ .non_shared_ant = ANT_A
const struct iwl_cfg iwl7260_2ac_cfg = {
@@ -215,11 +221,27 @@
{0},
};
+static const struct iwl_ht_params iwl7265_ht_params = {
+ .stbc = true,
+ .ldpc = true,
+ .ht40_bands = BIT(IEEE80211_BAND_2GHZ) | BIT(IEEE80211_BAND_5GHZ),
+};
+
+const struct iwl_cfg iwl3165_2ac_cfg = {
+ .name = "Intel(R) Dual Band Wireless AC 3165",
+ .fw_name_pre = IWL3165_FW_PRE,
+ IWL_DEVICE_7000,
+ .ht_params = &iwl7000_ht_params,
+ .nvm_ver = IWL3165_NVM_VERSION,
+ .nvm_calib_ver = IWL3165_TX_POWER_VERSION,
+ .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs,
+};
+
const struct iwl_cfg iwl7265_2ac_cfg = {
.name = "Intel(R) Dual Band Wireless AC 7265",
.fw_name_pre = IWL7265_FW_PRE,
IWL_DEVICE_7000,
- .ht_params = &iwl7000_ht_params,
+ .ht_params = &iwl7265_ht_params,
.nvm_ver = IWL7265_NVM_VERSION,
.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
.pwr_tx_backoffs = iwl7265_pwr_tx_backoffs,
@@ -229,7 +251,7 @@
.name = "Intel(R) Dual Band Wireless N 7265",
.fw_name_pre = IWL7265_FW_PRE,
IWL_DEVICE_7000,
- .ht_params = &iwl7000_ht_params,
+ .ht_params = &iwl7265_ht_params,
.nvm_ver = IWL7265_NVM_VERSION,
.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
.pwr_tx_backoffs = iwl7265_pwr_tx_backoffs,
@@ -239,7 +261,7 @@
.name = "Intel(R) Wireless N 7265",
.fw_name_pre = IWL7265_FW_PRE,
IWL_DEVICE_7000,
- .ht_params = &iwl7000_ht_params,
+ .ht_params = &iwl7265_ht_params,
.nvm_ver = IWL7265_NVM_VERSION,
.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
.pwr_tx_backoffs = iwl7265_pwr_tx_backoffs,
@@ -247,4 +269,5 @@
MODULE_FIRMWARE(IWL7260_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
MODULE_FIRMWARE(IWL3160_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
+MODULE_FIRMWARE(IWL3165_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
MODULE_FIRMWARE(IWL7265_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
diff --git a/drivers/net/wireless/iwlwifi/iwl-8000.c b/drivers/net/wireless/iwlwifi/iwl-8000.c
index 23a67bf..4ae8ba6 100644
--- a/drivers/net/wireless/iwlwifi/iwl-8000.c
+++ b/drivers/net/wireless/iwlwifi/iwl-8000.c
@@ -103,6 +103,7 @@
};
static const struct iwl_ht_params iwl8000_ht_params = {
+ .ldpc = true,
.ht40_bands = BIT(IEEE80211_BAND_2GHZ) | BIT(IEEE80211_BAND_5GHZ),
};
@@ -115,7 +116,17 @@
.max_data_size = IWL60_RTC_DATA_SIZE, \
.base_params = &iwl8000_base_params, \
.led_mode = IWL_LED_RF_STATE, \
- .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_8000
+ .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_8000, \
+ .non_shared_ant = ANT_A
+
+const struct iwl_cfg iwl8260_2n_cfg = {
+ .name = "Intel(R) Dual Band Wireless N 8260",
+ .fw_name_pre = IWL8000_FW_PRE,
+ IWL_DEVICE_8000,
+ .ht_params = &iwl8000_ht_params,
+ .nvm_ver = IWL8000_NVM_VERSION,
+ .nvm_calib_ver = IWL8000_TX_POWER_VERSION,
+};
const struct iwl_cfg iwl8260_2ac_cfg = {
.name = "Intel(R) Dual Band Wireless AC 8260",
@@ -135,6 +146,7 @@
.nvm_calib_ver = IWL8000_TX_POWER_VERSION,
.default_nvm_file = DEFAULT_NVM_FILE_FAMILY_8000,
.max_rx_agg_size = MAX_RX_AGG_SIZE_8260_SDIO,
+ .disable_dummy_notification = true,
};
MODULE_FIRMWARE(IWL8000_MODULE_FIRMWARE(IWL8000_UCODE_API_OK));
diff --git a/drivers/net/wireless/iwlwifi/iwl-config.h b/drivers/net/wireless/iwlwifi/iwl-config.h
index 8da596d..2ef83a3 100644
--- a/drivers/net/wireless/iwlwifi/iwl-config.h
+++ b/drivers/net/wireless/iwlwifi/iwl-config.h
@@ -120,6 +120,8 @@
#define IWL_LONG_WD_TIMEOUT 10000
#define IWL_MAX_WD_TIMEOUT 120000
+#define IWL_DEFAULT_MAX_TX_POWER 22
+
/* Antenna presence definitions */
#define ANT_NONE 0x0
#define ANT_A BIT(0)
@@ -169,6 +171,7 @@
/*
* @stbc: support Tx STBC and 1*SS Rx STBC
+ * @ldpc: support Tx/Rx with LDPC
* @use_rts_for_aggregation: use rts/cts protection for HT traffic
* @ht40_bands: bitmap of bands (using %IEEE80211_BAND_*) that support HT40
*/
@@ -176,6 +179,7 @@
enum ieee80211_smps_mode smps_mode;
const bool ht_greenfield_support; /* if used set to true */
const bool stbc;
+ const bool ldpc;
bool use_rts_for_aggregation;
u8 ht40_bands;
};
@@ -226,6 +230,7 @@
* @max_data_size: The maximal length of the fw data section
* @valid_tx_ant: valid transmit antenna
* @valid_rx_ant: valid receive antenna
+ * @non_shared_ant: the antenna that is for WiFi only
* @nvm_ver: NVM version
* @nvm_calib_ver: NVM calibration version
* @lib: pointer to the lib ops
@@ -258,6 +263,7 @@
const u32 max_inst_size;
u8 valid_tx_ant;
u8 valid_rx_ant;
+ u8 non_shared_ant;
bool bt_shared_single_ant;
u16 nvm_ver;
u16 nvm_calib_ver;
@@ -278,6 +284,7 @@
bool no_power_up_nic_in_init;
const char *default_nvm_file;
unsigned int max_rx_agg_size;
+ bool disable_dummy_notification;
};
/*
@@ -335,9 +342,11 @@
extern const struct iwl_cfg iwl3160_2ac_cfg;
extern const struct iwl_cfg iwl3160_2n_cfg;
extern const struct iwl_cfg iwl3160_n_cfg;
+extern const struct iwl_cfg iwl3165_2ac_cfg;
extern const struct iwl_cfg iwl7265_2ac_cfg;
extern const struct iwl_cfg iwl7265_2n_cfg;
extern const struct iwl_cfg iwl7265_n_cfg;
+extern const struct iwl_cfg iwl8260_2n_cfg;
extern const struct iwl_cfg iwl8260_2ac_cfg;
extern const struct iwl_cfg iwl8260_2ac_sdio_cfg;
#endif /* CONFIG_IWLMVM */
diff --git a/drivers/net/wireless/iwlwifi/iwl-csr.h b/drivers/net/wireless/iwlwifi/iwl-csr.h
index 23d059a..3f6f015 100644
--- a/drivers/net/wireless/iwlwifi/iwl-csr.h
+++ b/drivers/net/wireless/iwlwifi/iwl-csr.h
@@ -295,6 +295,16 @@
#define CSR_HW_REV_DASH(_val) (((_val) & 0x0000003) >> 0)
#define CSR_HW_REV_STEP(_val) (((_val) & 0x000000C) >> 2)
+
+/**
+ * hw_rev values
+ */
+enum {
+ SILICON_A_STEP = 0,
+ SILICON_B_STEP,
+};
+
+
#define CSR_HW_REV_TYPE_MSK (0x000FFF0)
#define CSR_HW_REV_TYPE_5300 (0x0000020)
#define CSR_HW_REV_TYPE_5350 (0x0000030)
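
The new enum gives names to the two-bit step field that CSR_HW_REV_STEP() extracts, replacing open-coded tests such as (hw_rev & 0xc) == 0x0 elsewhere in this series. A short illustrative use (the register value is made up):

	u32 hw_rev = 0x0000144;	/* hypothetical CSR hw_rev readout */

	/* CSR_HW_REV_STEP: (val & 0x000000C) >> 2  ->  1 here */
	if (CSR_HW_REV_STEP(hw_rev) == SILICON_A_STEP)
		pr_info("A-step silicon\n");
	else
		pr_info("B-step or later silicon\n");
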
diff --git a/drivers/net/wireless/iwlwifi/iwl-drv.c b/drivers/net/wireless/iwlwifi/iwl-drv.c
index aefd94c..ed673ba 100644
--- a/drivers/net/wireless/iwlwifi/iwl-drv.c
+++ b/drivers/net/wireless/iwlwifi/iwl-drv.c
@@ -1363,7 +1363,7 @@
module_param_named(antenna_coupling, iwlwifi_mod_params.ant_coupling,
int, S_IRUGO);
MODULE_PARM_DESC(antenna_coupling,
- "specify antenna coupling in dB (defualt: 0 dB)");
+ "specify antenna coupling in dB (default: 0 dB)");
module_param_named(wd_disable, iwlwifi_mod_params.wd_disable, int, S_IRUGO);
MODULE_PARM_DESC(wd_disable,
diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c b/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c
index 07ff7e0..74b796d 100644
--- a/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c
+++ b/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c
@@ -758,6 +758,9 @@
ht_info->cap |= IEEE80211_HT_CAP_TX_STBC;
}
+ if (cfg->ht_params->ldpc)
+ ht_info->cap |= IEEE80211_HT_CAP_LDPC_CODING;
+
if (iwlwifi_mod_params.amsdu_size_8K)
ht_info->cap |= IEEE80211_HT_CAP_MAX_AMSDU;
diff --git a/drivers/net/wireless/iwlwifi/iwl-fw.h b/drivers/net/wireless/iwlwifi/iwl-fw.h
index f68cba4e0..62c46eb 100644
--- a/drivers/net/wireless/iwlwifi/iwl-fw.h
+++ b/drivers/net/wireless/iwlwifi/iwl-fw.h
@@ -127,6 +127,7 @@
* @IWL_UCODE_TLV_API_CSA_FLOW: ucode can do unbind-bind flow for CSA.
* @IWL_UCODE_TLV_API_DISABLE_STA_TX: ucode supports tx_disable bit.
* @IWL_UCODE_TLV_API_LMAC_SCAN: This ucode uses LMAC unified scan API.
+ * @IWL_UCODE_TLV_API_SF_NO_DUMMY_NOTIF: ucode supports disabling dummy notif.
* @IWL_UCODE_TLV_API_FRAGMENTED_SCAN: This ucode supports active dwell time
* longer than the passive one, which is essential for fragmented scan.
*/
@@ -137,6 +138,7 @@
IWL_UCODE_TLV_API_CSA_FLOW = BIT(4),
IWL_UCODE_TLV_API_DISABLE_STA_TX = BIT(5),
IWL_UCODE_TLV_API_LMAC_SCAN = BIT(6),
+ IWL_UCODE_TLV_API_SF_NO_DUMMY_NOTIF = BIT(7),
IWL_UCODE_TLV_API_FRAGMENTED_SCAN = BIT(8),
};
diff --git a/drivers/net/wireless/iwlwifi/iwl-io.c b/drivers/net/wireless/iwlwifi/iwl-io.c
index 5eef4ae..7a2cbf6 100644
--- a/drivers/net/wireless/iwlwifi/iwl-io.c
+++ b/drivers/net/wireless/iwlwifi/iwl-io.c
@@ -193,7 +193,7 @@
* DEVICE_SET_NMI_8000B_REG - is used.
*/
if ((trans->cfg->device_family != IWL_DEVICE_FAMILY_8000) ||
- ((trans->hw_rev & 0xc) == 0x0))
+ (CSR_HW_REV_STEP(trans->hw_rev) == SILICON_A_STEP))
iwl_write_prph(trans, DEVICE_SET_NMI_REG, DEVICE_SET_NMI_VAL);
else
iwl_write_prph(trans, DEVICE_SET_NMI_8000B_REG,
diff --git a/drivers/net/wireless/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/iwlwifi/iwl-nvm-parse.c
index 8e7af79..c302e74 100644
--- a/drivers/net/wireless/iwlwifi/iwl-nvm-parse.c
+++ b/drivers/net/wireless/iwlwifi/iwl-nvm-parse.c
@@ -148,8 +148,6 @@
#define LAST_2GHZ_HT_PLUS 9
#define LAST_5GHZ_HT 161
-#define DEFAULT_MAX_TX_POWER 16
-
/* rate data (static) */
static struct ieee80211_rate iwl_cfg80211_rates[] = {
{ .bitrate = 1 * 10, .hw_value = 0, .hw_value_short = 0, },
@@ -297,7 +295,7 @@
* Default value - highest tx power value. max_power
* is not used in mvm, and is used for backwards compatibility
*/
- channel->max_power = DEFAULT_MAX_TX_POWER;
+ channel->max_power = IWL_DEFAULT_MAX_TX_POWER;
is_5ghz = channel->band == IEEE80211_BAND_5GHZ;
IWL_DEBUG_EEPROM(dev,
"Ch. %d [%sGHz] %s%s%s%s%s%s%s(0x%02x %ddBm): Ad-Hoc %ssupported\n",
@@ -336,6 +334,9 @@
3 << IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT |
7 << IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT;
+ if (cfg->ht_params->ldpc)
+ vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC;
+
if (num_tx_ants > 1)
vht_cap->cap |= IEEE80211_VHT_CAP_TXSTBC;
else
diff --git a/drivers/net/wireless/iwlwifi/iwl-trans.h b/drivers/net/wireless/iwlwifi/iwl-trans.h
index c89985a..9eb8524 100644
--- a/drivers/net/wireless/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/iwlwifi/iwl-trans.h
@@ -377,6 +377,7 @@
* if unset 4k will be the RX buffer size
* @bc_table_dword: set to true if the BC table expects the byte count to be
* in DWORD (as opposed to bytes)
+ * @scd_set_active: should the transport configure the SCD for HCMD queue
* @queue_watchdog_timeout: time (in ms) after which queues
* are considered stuck and will trigger device restart
* @command_names: array of command names, must be 256 entries
@@ -392,6 +393,7 @@
bool rx_buf_size_8k;
bool bc_table_dword;
+ bool scd_set_active;
unsigned int queue_watchdog_timeout;
const char *const *command_names;
};
@@ -826,12 +828,6 @@
iwl_trans_txq_enable_cfg(trans, queue, 0, &cfg);
}
-static inline void
-iwl_trans_txq_enable_no_scd(struct iwl_trans *trans, int queue, u16 ssn)
-{
- iwl_trans_txq_enable_cfg(trans, queue, ssn, NULL);
-}
-
static inline int iwl_trans_wait_tx_queue_empty(struct iwl_trans *trans,
u32 txq_bm)
{
diff --git a/drivers/net/wireless/iwlwifi/mvm/Makefile b/drivers/net/wireless/iwlwifi/mvm/Makefile
index a282359..2d7c3ea 100644
--- a/drivers/net/wireless/iwlwifi/mvm/Makefile
+++ b/drivers/net/wireless/iwlwifi/mvm/Makefile
@@ -3,7 +3,7 @@
iwlmvm-y += utils.o rx.o tx.o binding.o quota.o sta.o sf.o
iwlmvm-y += scan.o time-event.o rs.o
iwlmvm-y += power.o coex.o coex_legacy.o
-iwlmvm-y += tt.o offloading.o
+iwlmvm-y += tt.o offloading.o tdls.o
iwlmvm-$(CONFIG_IWLWIFI_DEBUGFS) += debugfs.o debugfs-vif.o
iwlmvm-$(CONFIG_IWLWIFI_LEDS) += led.o
iwlmvm-$(CONFIG_PM_SLEEP) += d3.o
diff --git a/drivers/net/wireless/iwlwifi/mvm/coex.c b/drivers/net/wireless/iwlwifi/mvm/coex.c
index 2262d6d..8df2021 100644
--- a/drivers/net/wireless/iwlwifi/mvm/coex.c
+++ b/drivers/net/wireless/iwlwifi/mvm/coex.c
@@ -587,8 +587,6 @@
lockdep_assert_held(&mvm->mutex);
if (unlikely(mvm->bt_force_ant_mode != BT_FORCE_ANT_DIS)) {
- u32 mode;
-
switch (mvm->bt_force_ant_mode) {
case BT_FORCE_ANT_BT:
mode = BT_COEX_BT;
@@ -758,7 +756,8 @@
struct iwl_bt_iterator_data *data = _data;
struct iwl_mvm *mvm = data->mvm;
struct ieee80211_chanctx_conf *chanctx_conf;
- enum ieee80211_smps_mode smps_mode;
+ /* default smps_mode is AUTOMATIC - only used for client modes */
+ enum ieee80211_smps_mode smps_mode = IEEE80211_SMPS_AUTOMATIC;
u32 bt_activity_grading;
int ave_rssi;
@@ -766,8 +765,6 @@
switch (vif->type) {
case NL80211_IFTYPE_STATION:
- /* default smps_mode for BSS / P2P client is AUTOMATIC */
- smps_mode = IEEE80211_SMPS_AUTOMATIC;
break;
case NL80211_IFTYPE_AP:
if (!mvmvif->ap_ibss_active)
@@ -799,7 +796,7 @@
else if (bt_activity_grading >= BT_LOW_TRAFFIC)
smps_mode = IEEE80211_SMPS_DYNAMIC;
- /* relax SMPS contraints for next association */
+ /* relax SMPS constraints for next association */
if (!vif->bss_conf.assoc)
smps_mode = IEEE80211_SMPS_AUTOMATIC;
@@ -1149,6 +1146,10 @@
bool iwl_mvm_bt_coex_is_shared_ant_avail(struct iwl_mvm *mvm)
{
+ /* there is no other antenna, shared antenna is always available */
+ if (mvm->cfg->bt_shared_single_ant)
+ return true;
+
if (!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_BT_COEX_SPLIT))
return iwl_mvm_bt_coex_is_shared_ant_avail_old(mvm);
diff --git a/drivers/net/wireless/iwlwifi/mvm/constants.h b/drivers/net/wireless/iwlwifi/mvm/constants.h
index dd00e8f..a355788 100644
--- a/drivers/net/wireless/iwlwifi/mvm/constants.h
+++ b/drivers/net/wireless/iwlwifi/mvm/constants.h
@@ -65,12 +65,18 @@
#ifndef __MVM_CONSTANTS_H
#define __MVM_CONSTANTS_H
+#include <linux/ieee80211.h>
+
#define IWL_MVM_DEFAULT_PS_TX_DATA_TIMEOUT (100 * USEC_PER_MSEC)
#define IWL_MVM_DEFAULT_PS_RX_DATA_TIMEOUT (100 * USEC_PER_MSEC)
#define IWL_MVM_WOWLAN_PS_TX_DATA_TIMEOUT (10 * USEC_PER_MSEC)
#define IWL_MVM_WOWLAN_PS_RX_DATA_TIMEOUT (10 * USEC_PER_MSEC)
#define IWL_MVM_UAPSD_RX_DATA_TIMEOUT (50 * USEC_PER_MSEC)
#define IWL_MVM_UAPSD_TX_DATA_TIMEOUT (50 * USEC_PER_MSEC)
+#define IWL_MVM_UAPSD_QUEUES (IEEE80211_WMM_IE_STA_QOSINFO_AC_VO |\
+ IEEE80211_WMM_IE_STA_QOSINFO_AC_VI |\
+ IEEE80211_WMM_IE_STA_QOSINFO_AC_BK |\
+ IEEE80211_WMM_IE_STA_QOSINFO_AC_BE)
#define IWL_MVM_PS_HEAVY_TX_THLD_PACKETS 20
#define IWL_MVM_PS_HEAVY_RX_THLD_PACKETS 8
#define IWL_MVM_PS_SNOOZE_HEAVY_TX_THLD_PACKETS 30
@@ -86,5 +92,7 @@
#define IWL_MVM_BT_COEX_SYNC2SCO 1
#define IWL_MVM_BT_COEX_CORUNNING 1
#define IWL_MVM_BT_COEX_MPLUT 1
+#define IWL_MVM_FW_MCAST_FILTER_PASS_ALL 0
+#define IWL_MVM_QUOTA_THRESHOLD 8
#endif /* __MVM_CONSTANTS_H */
diff --git a/drivers/net/wireless/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/iwlwifi/mvm/debugfs-vif.c
index d919b4e..9aa2311 100644
--- a/drivers/net/wireless/iwlwifi/mvm/debugfs-vif.c
+++ b/drivers/net/wireless/iwlwifi/mvm/debugfs-vif.c
@@ -76,8 +76,7 @@
switch (param) {
case MVM_DEBUGFS_PM_KEEP_ALIVE: {
- struct ieee80211_hw *hw = mvm->hw;
- int dtimper = hw->conf.ps_dtim_period ?: 1;
+ int dtimper = vif->bss_conf.dtim_period ?: 1;
int dtimper_msec = dtimper * vif->bss_conf.beacon_int;
IWL_DEBUG_POWER(mvm, "debugfs: set keep_alive= %d sec\n", val);
diff --git a/drivers/net/wireless/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/iwlwifi/mvm/debugfs.c
index d98ee10..95eb9a5 100644
--- a/drivers/net/wireless/iwlwifi/mvm/debugfs.c
+++ b/drivers/net/wireless/iwlwifi/mvm/debugfs.c
@@ -288,6 +288,9 @@
{
int temperature;
+ if (!mvm->ucode_loaded && !mvm->temperature_test)
+ return -EIO;
+
if (kstrtoint(buf, 10, &temperature))
return -EINVAL;
/* not a legal temperature */
@@ -1256,6 +1259,18 @@
PRINT_MVM_REF(IWL_MVM_REF_P2P_CLIENT);
PRINT_MVM_REF(IWL_MVM_REF_AP_IBSS);
PRINT_MVM_REF(IWL_MVM_REF_USER);
+ PRINT_MVM_REF(IWL_MVM_REF_TX);
+ PRINT_MVM_REF(IWL_MVM_REF_TX_AGG);
+ PRINT_MVM_REF(IWL_MVM_REF_ADD_IF);
+ PRINT_MVM_REF(IWL_MVM_REF_START_AP);
+ PRINT_MVM_REF(IWL_MVM_REF_BSS_CHANGED);
+ PRINT_MVM_REF(IWL_MVM_REF_PREPARE_TX);
+ PRINT_MVM_REF(IWL_MVM_REF_PROTECT_TDLS);
+ PRINT_MVM_REF(IWL_MVM_REF_CHECK_CTKILL);
+ PRINT_MVM_REF(IWL_MVM_REF_PRPH_READ);
+ PRINT_MVM_REF(IWL_MVM_REF_PRPH_WRITE);
+ PRINT_MVM_REF(IWL_MVM_REF_NMI);
+ PRINT_MVM_REF(IWL_MVM_REF_TM_CMD);
PRINT_MVM_REF(IWL_MVM_REF_EXIT_WORK);
return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
diff --git a/drivers/net/wireless/iwlwifi/mvm/fw-api.h b/drivers/net/wireless/iwlwifi/mvm/fw-api.h
index 9c975f9..a2c6628 100644
--- a/drivers/net/wireless/iwlwifi/mvm/fw-api.h
+++ b/drivers/net/wireless/iwlwifi/mvm/fw-api.h
@@ -205,6 +205,10 @@
REPLY_SF_CFG_CMD = 0xd1,
REPLY_BEACON_FILTERING_CMD = 0xd2,
+ /* DTS measurements */
+ CMD_DTS_MEASUREMENT_TRIGGER = 0xdc,
+ DTS_MEASUREMENT_NOTIFICATION = 0xdd,
+
REPLY_DEBUG_CMD = 0xf0,
DEBUG_LOG_MSG = 0xf7,
@@ -550,7 +554,7 @@
TE_WIDI_TX_SYNC,
/* Channel Switch NoA */
- TE_P2P_GO_CSA_NOA,
+ TE_CHANNEL_SWITCH_PERIOD,
TE_MAX
}; /* MAC_EVENT_TYPE_API_E_VER_1 */
@@ -1601,19 +1605,49 @@
#define SF_LONG_DELAY_AGING_TIMER 1000000 /* 1 Sec */
+#define SF_CFG_DUMMY_NOTIF_OFF BIT(16)
+
/**
* Smart Fifo configuration command.
- * @state: smart fifo state, types listed in iwl_sf_sate.
+ * @state: smart fifo state, types listed in enum %iwl_sf_state.
 * @watermark: Minimum allowed available free space in RXF for transient state.
* @long_delay_timeouts: aging and idle timer values for each scenario
* in long delay state.
* @full_on_timeouts: timer values for each scenario in full on state.
*/
struct iwl_sf_cfg_cmd {
- enum iwl_sf_state state;
+ __le32 state;
__le32 watermark[SF_TRANSIENT_STATES_NUMBER];
__le32 long_delay_timeouts[SF_NUM_SCENARIO][SF_NUM_TIMEOUT_TYPES];
__le32 full_on_timeouts[SF_NUM_SCENARIO][SF_NUM_TIMEOUT_TYPES];
} __packed; /* SF_CFG_API_S_VER_2 */
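
Replacing the enum member with an explicit __le32 pins down both the size and the byte order of this host-firmware structure; an enum's underlying type is compiler-dependent, which is unsafe in a __packed wire format. A sketch of filling the command under the new layout (the flag usage is an assumption based on the SF_CFG_DUMMY_NOTIF_OFF definition above):

	struct iwl_sf_cfg_cmd sf_cmd = {
		/* enum iwl_sf_state value, converted explicitly */
		.state = cpu_to_le32(SF_FULL_ON),
	};

	if (disable_dummy_notif)	/* hypothetical condition */
		sf_cmd.state |= cpu_to_le32(SF_CFG_DUMMY_NOTIF_OFF);
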
+/* DTS measurements */
+
+enum iwl_dts_measurement_flags {
+ DTS_TRIGGER_CMD_FLAGS_TEMP = BIT(0),
+ DTS_TRIGGER_CMD_FLAGS_VOLT = BIT(1),
+};
+
+/**
+ * iwl_dts_measurement_cmd - request DTS temperature and/or voltage measurements
+ *
+ * @flags: indicates which measurements we want as specified in &enum
+ * iwl_dts_measurement_flags
+ */
+struct iwl_dts_measurement_cmd {
+ __le32 flags;
+} __packed; /* TEMPERATURE_MEASUREMENT_TRIGGER_CMD_S */
+
+/**
+ * iwl_dts_measurement_notif - notification received with the measurements
+ *
+ * @temp: the measured temperature
+ * @voltage: the measured voltage
+ */
+struct iwl_dts_measurement_notif {
+ __le32 temp;
+ __le32 voltage;
+} __packed; /* TEMPERATURE_MEASUREMENT_TRIGGER_NTFY_S */
+
#endif /* __fw_api_h__ */
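
A minimal sketch of driving the new DTS API: request a temperature reading with the trigger command, then handle the DTS_MEASUREMENT_NOTIFICATION carrying the result. Helper names follow the mvm conventions seen elsewhere in this diff; error handling is trimmed:

	struct iwl_dts_measurement_cmd cmd = {
		.flags = cpu_to_le32(DTS_TRIGGER_CMD_FLAGS_TEMP),
	};
	int ret;

	ret = iwl_mvm_send_cmd_pdu(mvm, CMD_DTS_MEASUREMENT_TRIGGER, 0,
				   sizeof(cmd), &cmd);
	if (ret)
		IWL_ERR(mvm, "failed to trigger DTS measurement: %d\n", ret);
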
diff --git a/drivers/net/wireless/iwlwifi/mvm/fw.c b/drivers/net/wireless/iwlwifi/mvm/fw.c
index 21d60602..23fd711 100644
--- a/drivers/net/wireless/iwlwifi/mvm/fw.c
+++ b/drivers/net/wireless/iwlwifi/mvm/fw.c
@@ -454,6 +454,9 @@
for (i = 0; i < IWL_MVM_STATION_COUNT; i++)
RCU_INIT_POINTER(mvm->fw_id_to_mac_id[i], NULL);
+ /* reset quota debouncing buffer - 0xff will yield invalid data */
+ memset(&mvm->last_quota_cmd, 0xff, sizeof(mvm->last_quota_cmd));
+
/* Add auxiliary station for scanning */
ret = iwl_mvm_add_aux_sta(mvm);
if (ret)
diff --git a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c
index 9cbb192..8342671 100644
--- a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c
+++ b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c
@@ -727,11 +727,6 @@
!force_assoc_off) {
u32 dtim_offs;
- /* Allow beacons to pass through as long as we are not
- * associated, or we do not have dtim period information.
- */
- cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON);
-
/*
* The DTIM count counts down, so when it is N that means N
* more beacon intervals happen until the DTIM TBTT. Therefore
@@ -765,6 +760,11 @@
ctxt_sta->is_assoc = cpu_to_le32(1);
} else {
ctxt_sta->is_assoc = cpu_to_le32(0);
+
+ /* Allow beacons to pass through as long as we are not
+ * associated, or we do not have dtim period information.
+ */
+ cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON);
}
ctxt_sta->bi = cpu_to_le32(vif->bss_conf.beacon_int);
@@ -1234,13 +1234,13 @@
!iwl_mvm_te_scheduled(&mvmvif->time_event_data) && gp2) {
u32 rel_time = (c + 1) *
csa_vif->bss_conf.beacon_int -
- IWL_MVM_CHANNEL_SWITCH_TIME;
+ IWL_MVM_CHANNEL_SWITCH_TIME_GO;
u32 apply_time = gp2 + rel_time * 1024;
- iwl_mvm_schedule_csa_noa(mvm, csa_vif,
- IWL_MVM_CHANNEL_SWITCH_TIME -
- IWL_MVM_CHANNEL_SWITCH_MARGIN,
- apply_time);
+ iwl_mvm_schedule_csa_period(mvm, csa_vif,
+ IWL_MVM_CHANNEL_SWITCH_TIME_GO -
+ IWL_MVM_CHANNEL_SWITCH_MARGIN,
+ apply_time);
}
} else if (!iwl_mvm_te_scheduled(&mvmvif->time_event_data)) {
/* we don't have CSA NoA scheduled yet, switch now */
diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c
index 8d1d4b4..4c21210 100644
--- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c
+++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c
@@ -303,9 +303,7 @@
IEEE80211_HW_AMPDU_AGGREGATION |
IEEE80211_HW_TIMING_BEACON_ONLY |
IEEE80211_HW_CONNECTION_MONITOR |
- IEEE80211_HW_CHANCTX_STA_CSA |
- IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS |
- IEEE80211_HW_SUPPORTS_STATIC_SMPS;
+ IEEE80211_HW_CHANCTX_STA_CSA;
hw->queues = mvm->first_agg_queue;
hw->offchannel_tx_hw_queue = IWL_MVM_OFFCHANNEL_QUEUE;
@@ -327,7 +325,7 @@
IWL_UCODE_API(mvm->fw->ucode_ver) >= 9 &&
!iwlwifi_mod_params.uapsd_disable) {
hw->flags |= IEEE80211_HW_SUPPORTS_UAPSD;
- hw->uapsd_queues = IWL_UAPSD_AC_INFO;
+ hw->uapsd_queues = IWL_MVM_UAPSD_QUEUES;
hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP;
}
@@ -398,16 +396,20 @@
else
hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT;
- /* TODO: enable that only for firmwares that don't crash */
- /* hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN; */
- hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX;
- hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES;
- /* we create the 802.11 header and zero length SSID IE. */
- hw->wiphy->max_sched_scan_ie_len = SCAN_OFFLOAD_PROBE_REQ_SIZE - 24 - 2;
+ if (IWL_UCODE_API(mvm->fw->ucode_ver) >= 10) {
+ hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN;
+ hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX;
+ hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES;
+ /* we create the 802.11 header and zero length SSID IE. */
+ hw->wiphy->max_sched_scan_ie_len =
+ SCAN_OFFLOAD_PROBE_REQ_SIZE - 24 - 2;
+ }
hw->wiphy->features |= NL80211_FEATURE_P2P_GO_CTWIN |
NL80211_FEATURE_LOW_PRIORITY_SCAN |
- NL80211_FEATURE_P2P_GO_OPPPS;
+ NL80211_FEATURE_P2P_GO_OPPPS |
+ NL80211_FEATURE_DYNAMIC_SMPS |
+ NL80211_FEATURE_STATIC_SMPS;
mvm->rts_threshold = IEEE80211_MAX_RTS_THRESHOLD;
@@ -668,8 +670,9 @@
}
#ifdef CONFIG_IWLWIFI_DEBUGFS
-static void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm)
+void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm)
{
+ static char *env[] = { "DRIVER=iwlwifi", "EVENT=error_dump", NULL };
struct iwl_fw_error_dump_file *dump_file;
struct iwl_fw_error_dump_data *dump_data;
struct iwl_fw_error_dump_info *dump_info;
@@ -761,20 +764,16 @@
file_len += fw_error_dump->trans_ptr->len;
dump_file->file_len = cpu_to_le32(file_len);
mvm->fw_error_dump = fw_error_dump;
+
+ /* notify the userspace about the error we had */
+ kobject_uevent_env(&mvm->hw->wiphy->dev.kobj, KOBJ_CHANGE, env);
}
#endif
static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm)
{
-#ifdef CONFIG_IWLWIFI_DEBUGFS
- static char *env[] = { "DRIVER=iwlwifi", "EVENT=error_dump", NULL };
-
iwl_mvm_fw_error_dump(mvm);
- /* notify the userspace about the error we had */
- kobject_uevent_env(&mvm->hw->wiphy->dev.kobj, KOBJ_CHANGE, env);
-#endif
-
iwl_trans_stop_device(mvm->trans);
mvm->scan_status = IWL_MVM_SCAN_NONE;
@@ -813,12 +812,11 @@
mvm->rx_ba_sessions = 0;
}
-static int iwl_mvm_mac_start(struct ieee80211_hw *hw)
+int __iwl_mvm_mac_start(struct iwl_mvm *mvm)
{
- struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
- mutex_lock(&mvm->mutex);
+ lockdep_assert_held(&mvm->mutex);
/* Clean up some internal and mac80211 state on restart */
if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
@@ -835,6 +833,16 @@
iwl_mvm_d0i3_enable_tx(mvm, NULL);
}
+ return ret;
+}
+
+static int iwl_mvm_mac_start(struct ieee80211_hw *hw)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+ ret = __iwl_mvm_mac_start(mvm);
mutex_unlock(&mvm->mutex);
return ret;
@@ -860,14 +868,9 @@
mutex_unlock(&mvm->mutex);
}
-static void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
+void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
{
- struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
-
- flush_work(&mvm->d0i3_exit_work);
- flush_work(&mvm->async_handlers_wk);
-
- mutex_lock(&mvm->mutex);
+ lockdep_assert_held(&mvm->mutex);
/* disallow low power states when the FW is down */
iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);
@@ -888,6 +891,19 @@
/* the fw is stopped, the aux sta is dead: clean up driver state */
iwl_mvm_del_aux_sta(mvm);
+ mvm->ucode_loaded = false;
+}
+
+static void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ flush_work(&mvm->d0i3_exit_work);
+ flush_work(&mvm->async_handlers_wk);
+ flush_work(&mvm->fw_error_dump_wk);
+
+ mutex_lock(&mvm->mutex);
+ __iwl_mvm_mac_stop(mvm);
mutex_unlock(&mvm->mutex);
/*
@@ -1196,14 +1212,15 @@
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mcast_filter_cmd *cmd;
struct netdev_hw_addr *addr;
- int addr_count = netdev_hw_addr_list_count(mc_list);
- bool pass_all = false;
+ int addr_count;
+ bool pass_all;
int len;
- if (addr_count > MAX_MCAST_FILTERING_ADDRESSES) {
- pass_all = true;
+ addr_count = netdev_hw_addr_list_count(mc_list);
+ pass_all = addr_count > MAX_MCAST_FILTERING_ADDRESSES ||
+ IWL_MVM_FW_MCAST_FILTER_PASS_ALL;
+ if (pass_all)
addr_count = 0;
- }
len = roundup(sizeof(*cmd) + addr_count * ETH_ALEN, 4);
cmd = kzalloc(len, GFP_ATOMIC);
@@ -1403,28 +1420,6 @@
}
#endif
-static void iwl_mvm_teardown_tdls_peers(struct iwl_mvm *mvm)
-{
- struct ieee80211_sta *sta;
- struct iwl_mvm_sta *mvmsta;
- int i;
-
- lockdep_assert_held(&mvm->mutex);
-
- for (i = 0; i < IWL_MVM_STATION_COUNT; i++) {
- sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[i],
- lockdep_is_held(&mvm->mutex));
- if (!sta || IS_ERR(sta) || !sta->tdls)
- continue;
-
- mvmsta = iwl_mvm_sta_from_mac80211(sta);
- ieee80211_tdls_oper_request(mvmsta->vif, sta->addr,
- NL80211_TDLS_TEARDOWN,
- WLAN_REASON_TDLS_TEARDOWN_UNSPECIFIED,
- GFP_KERNEL);
- }
-}
-
static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *bss_conf,
@@ -1544,11 +1539,6 @@
*/
iwl_mvm_remove_time_event(mvm, mvmvif,
&mvmvif->time_event_data);
- } else if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS |
- BSS_CHANGED_QOS)) {
- ret = iwl_mvm_power_update_mac(mvm);
- if (ret)
- IWL_ERR(mvm, "failed to update power mode\n");
}
if (changes & BSS_CHANGED_BEACON_INFO) {
@@ -1556,6 +1546,12 @@
WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
}
+ if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS | BSS_CHANGED_QOS)) {
+ ret = iwl_mvm_power_update_mac(mvm);
+ if (ret)
+ IWL_ERR(mvm, "failed to update power mode\n");
+ }
+
if (changes & BSS_CHANGED_TXPOWER) {
IWL_DEBUG_CALIB(mvm, "Changing TX Power to %d\n",
bss_conf->txpower);
@@ -1721,7 +1717,7 @@
return;
if (changes & (BSS_CHANGED_ERP_CTS_PROT | BSS_CHANGED_HT |
- BSS_CHANGED_BANDWIDTH) &&
+ BSS_CHANGED_BANDWIDTH | BSS_CHANGED_QOS) &&
iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL))
IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
@@ -1952,48 +1948,6 @@
mutex_unlock(&mvm->mutex);
}
-int iwl_mvm_tdls_sta_count(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
-{
- struct ieee80211_sta *sta;
- struct iwl_mvm_sta *mvmsta;
- int count = 0;
- int i;
-
- lockdep_assert_held(&mvm->mutex);
-
- for (i = 0; i < IWL_MVM_STATION_COUNT; i++) {
- sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[i],
- lockdep_is_held(&mvm->mutex));
- if (!sta || IS_ERR(sta) || !sta->tdls)
- continue;
-
- if (vif) {
- mvmsta = iwl_mvm_sta_from_mac80211(sta);
- if (mvmsta->vif != vif)
- continue;
- }
-
- count++;
- }
-
- return count;
-}
-
-static void iwl_mvm_recalc_tdls_state(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- bool sta_added)
-{
- int tdls_sta_cnt = iwl_mvm_tdls_sta_count(mvm, vif);
-
- /*
- * Disable ps when the first TDLS sta is added and re-enable it
- * when the last TDLS sta is removed
- */
- if ((tdls_sta_cnt == 1 && sta_added) ||
- (tdls_sta_cnt == 0 && !sta_added))
- iwl_mvm_power_update_mac(mvm);
-}
-
static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
@@ -2167,27 +2121,6 @@
iwl_mvm_unref(mvm, IWL_MVM_REF_PREPARE_TX);
}
-static void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
-{
- struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
- u32 duration = 2 * vif->bss_conf.dtim_period * vif->bss_conf.beacon_int;
-
- /*
- * iwl_mvm_protect_session() reads directly from the device
- * (the system time), so make sure it is available.
- */
- if (iwl_mvm_ref_sync(mvm, IWL_MVM_REF_PROTECT_TDLS))
- return;
-
- mutex_lock(&mvm->mutex);
- /* Protect the session to hear the TDLS setup response on the channel */
- iwl_mvm_protect_session(mvm, vif, duration, duration, 100, true);
- mutex_unlock(&mvm->mutex);
-
- iwl_mvm_unref(mvm, IWL_MVM_REF_PROTECT_TDLS);
-}
-
static int iwl_mvm_mac_sched_scan_start(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct cfg80211_sched_scan_request *req,
diff --git a/drivers/net/wireless/iwlwifi/mvm/mvm.h b/drivers/net/wireless/iwlwifi/mvm/mvm.h
index e292de9..5529958 100644
--- a/drivers/net/wireless/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/iwlwifi/mvm/mvm.h
@@ -87,11 +87,11 @@
/* A TimeUnit is 1024 microsecond */
#define MSEC_TO_TU(_msec) (_msec*1000/1024)
-/*
- * The CSA NoA is scheduled IWL_MVM_CHANNEL_SWITCH_TIME TUs before "beacon 0"
- * TBTT. This value should be big enough to ensure that we switch in time.
+/* This value represents the number of TUs before CSA "beacon 0" TBTT
+ * when the CSA time-event needs to be scheduled to start. It must be
+ * big enough to ensure that we switch in time.
*/
-#define IWL_MVM_CHANNEL_SWITCH_TIME 40
+#define IWL_MVM_CHANNEL_SWITCH_TIME_GO 40
/*
* This value (in TUs) is used to fine tune the CSA NoA end time which should
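
Since a TimeUnit is 1024 microseconds, the MSEC_TO_TU() conversion above rounds down under integer division. A worked example with illustrative values:

	u32 lead_tu = MSEC_TO_TU(100);	/* 100 * 1000 / 1024 = 97 TU */
	/* IWL_MVM_CHANNEL_SWITCH_TIME_GO = 40 TU ~= 40 * 1024 us ~= 41 ms */
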
@@ -180,10 +180,6 @@
};
#define IWL_CONN_MAX_LISTEN_INTERVAL 10
-#define IWL_UAPSD_AC_INFO (IEEE80211_WMM_IE_STA_QOSINFO_AC_VO |\
- IEEE80211_WMM_IE_STA_QOSINFO_AC_VI |\
- IEEE80211_WMM_IE_STA_QOSINFO_AC_BK |\
- IEEE80211_WMM_IE_STA_QOSINFO_AC_BE)
#define IWL_UAPSD_MAX_SP IEEE80211_WMM_IE_STA_QOSINFO_SP_2
#ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -274,6 +270,8 @@
IWL_MVM_REF_TM_CMD,
IWL_MVM_REF_EXIT_WORK,
+ /* update debugfs.c when changing this */
+
IWL_MVM_REF_COUNT,
};
@@ -649,6 +647,7 @@
/* -1 for always, 0 for never, >0 for that many times */
s8 restart_fw;
+ struct work_struct fw_error_dump_wk;
struct iwl_mvm_dump_ptrs *fw_error_dump;
#ifdef CONFIG_IWLWIFI_LEDS
@@ -709,6 +708,8 @@
*/
bool temperature_test; /* Debug test temperature is enabled */
+ struct iwl_time_quota_cmd last_quota_cmd;
+
#ifdef CONFIG_NL80211_TESTMODE
u32 noa_duration;
struct ieee80211_vif *noa_vif;
@@ -788,6 +789,9 @@
u8 ieee; /* MAC header: IWL_RATE_6M_IEEE, etc. */
};
+void __iwl_mvm_mac_stop(struct iwl_mvm *mvm);
+int __iwl_mvm_mac_start(struct iwl_mvm *mvm);
+
/******************
* MVM Methods
******************/
@@ -1153,7 +1157,17 @@
/* TDLS */
int iwl_mvm_tdls_sta_count(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+void iwl_mvm_teardown_tdls_peers(struct iwl_mvm *mvm);
+void iwl_mvm_recalc_tdls_state(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ bool sta_added);
+void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error);
+#ifdef CONFIG_IWLWIFI_DEBUGFS
+void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm);
+#else
+static inline void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm) {}
+#endif
#endif /* __IWL_MVM_H__ */
diff --git a/drivers/net/wireless/iwlwifi/mvm/nvm.c b/drivers/net/wireless/iwlwifi/mvm/nvm.c
index 4fafd4b..af07456 100644
--- a/drivers/net/wireless/iwlwifi/mvm/nvm.c
+++ b/drivers/net/wireless/iwlwifi/mvm/nvm.c
@@ -64,6 +64,7 @@
*****************************************************************************/
#include <linux/firmware.h>
#include "iwl-trans.h"
+#include "iwl-csr.h"
#include "mvm.h"
#include "iwl-eeprom-parse.h"
#include "iwl-eeprom-read.h"
@@ -349,7 +350,7 @@
/* Maximal size depends on HW family and step */
if (mvm->trans->cfg->device_family != IWL_DEVICE_FAMILY_8000)
max_section_size = IWL_MAX_NVM_SECTION_SIZE;
- else if ((mvm->trans->hw_rev & 0xc) == 0) /* Family 8000 A-step */
+ else if (CSR_HW_REV_STEP(mvm->trans->hw_rev) == SILICON_A_STEP)
max_section_size = IWL_MAX_NVM_8000A_SECTION_SIZE;
else /* Family 8000 B-step */
max_section_size = IWL_MAX_NVM_8000B_SECTION_SIZE;
diff --git a/drivers/net/wireless/iwlwifi/mvm/ops.c b/drivers/net/wireless/iwlwifi/mvm/ops.c
index 87f278c..f887779 100644
--- a/drivers/net/wireless/iwlwifi/mvm/ops.c
+++ b/drivers/net/wireless/iwlwifi/mvm/ops.c
@@ -332,6 +332,8 @@
CMD(BCAST_FILTER_CMD),
CMD(REPLY_SF_CFG_CMD),
CMD(REPLY_BEACON_FILTERING_CMD),
+ CMD(CMD_DTS_MEASUREMENT_TRIGGER),
+ CMD(DTS_MEASUREMENT_NOTIFICATION),
CMD(REPLY_THERMAL_MNG_BACKOFF),
CMD(MAC_PM_POWER_TABLE),
CMD(BT_COEX_CI),
@@ -364,6 +366,8 @@
return 0;
}
+static void iwl_mvm_fw_error_dump_wk(struct work_struct *work);
+
static struct iwl_op_mode *
iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
const struct iwl_fw *fw, struct dentry *dbgfs_dir)
@@ -431,6 +435,7 @@
INIT_WORK(&mvm->roc_done_wk, iwl_mvm_roc_done_wk);
INIT_WORK(&mvm->sta_drained_wk, iwl_mvm_sta_drained_wk);
INIT_WORK(&mvm->d0i3_exit_work, iwl_mvm_d0i3_exit_work);
+ INIT_WORK(&mvm->fw_error_dump_wk, iwl_mvm_fw_error_dump_wk);
spin_lock_init(&mvm->d0i3_tx_lock);
spin_lock_init(&mvm->refs_lock);
@@ -460,6 +465,7 @@
trans_cfg.cmd_queue = IWL_MVM_CMD_QUEUE;
trans_cfg.cmd_fifo = IWL_MVM_TX_FIFO_CMD;
+ trans_cfg.scd_set_active = true;
snprintf(mvm->hw->wiphy->fw_version,
sizeof(mvm->hw->wiphy->fw_version),
@@ -781,6 +787,16 @@
module_put(THIS_MODULE);
}
+static void iwl_mvm_fw_error_dump_wk(struct work_struct *work)
+{
+ struct iwl_mvm *mvm =
+ container_of(work, struct iwl_mvm, fw_error_dump_wk);
+
+ mutex_lock(&mvm->mutex);
+ iwl_mvm_fw_error_dump(mvm);
+ mutex_unlock(&mvm->mutex);
+}
+
void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
{
iwl_abort_notification_waits(&mvm->notif_wait);
@@ -846,6 +862,8 @@
if (fw_error && mvm->restart_fw > 0)
mvm->restart_fw--;
ieee80211_restart_hw(mvm->hw);
+ } else if (fw_error) {
+ schedule_work(&mvm->fw_error_dump_wk);
}
}
diff --git a/drivers/net/wireless/iwlwifi/mvm/power.c b/drivers/net/wireless/iwlwifi/mvm/power.c
index e7a6626..5b85b0c 100644
--- a/drivers/net/wireless/iwlwifi/mvm/power.c
+++ b/drivers/net/wireless/iwlwifi/mvm/power.c
@@ -286,13 +286,28 @@
return true;
}
+static bool iwl_mvm_power_is_radar(struct ieee80211_vif *vif)
+{
+ struct ieee80211_chanctx_conf *chanctx_conf;
+ struct ieee80211_channel *chan;
+ bool radar_detect = false;
+
+ rcu_read_lock();
+ chanctx_conf = rcu_dereference(vif->chanctx_conf);
+ WARN_ON(!chanctx_conf);
+ if (chanctx_conf) {
+ chan = chanctx_conf->def.chan;
+ radar_detect = chan->flags & IEEE80211_CHAN_RADAR;
+ }
+ rcu_read_unlock();
+
+ return radar_detect;
+}
+
static void iwl_mvm_power_build_cmd(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct iwl_mac_power_cmd *cmd)
{
- struct ieee80211_hw *hw = mvm->hw;
- struct ieee80211_chanctx_conf *chanctx_conf;
- struct ieee80211_channel *chan;
int dtimper, dtimper_msec;
int keep_alive;
bool radar_detect = false;
@@ -301,7 +316,7 @@
cmd->id_and_color = cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
mvmvif->color));
- dtimper = hw->conf.ps_dtim_period ?: 1;
+ dtimper = vif->bss_conf.dtim_period;
/*
* Regardless of power management state the driver must set
@@ -321,7 +336,7 @@
cmd->flags |= cpu_to_le16(POWER_FLAGS_POWER_SAVE_ENA_MSK);
if (!vif->bss_conf.ps || iwl_mvm_vif_low_latency(mvmvif) ||
- !mvmvif->pm_enabled)
+ !mvmvif->pm_enabled || iwl_mvm_tdls_sta_count(mvm, vif))
return;
cmd->flags |= cpu_to_le16(POWER_FLAGS_POWER_MANAGEMENT_ENA_MSK);
@@ -334,14 +349,7 @@
}
/* Check if radar detection is required on current channel */
- rcu_read_lock();
- chanctx_conf = rcu_dereference(vif->chanctx_conf);
- WARN_ON(!chanctx_conf);
- if (chanctx_conf) {
- chan = chanctx_conf->def.chan;
- radar_detect = chan->flags & IEEE80211_CHAN_RADAR;
- }
- rcu_read_unlock();
+ radar_detect = iwl_mvm_power_is_radar(vif);
/* Check skip over DTIM conditions */
if (!radar_detect && (dtimper <= 10) &&
@@ -502,8 +510,6 @@
bool bss_active;
bool ap_active;
bool monitor_active;
- bool bss_tdls;
- bool p2p_tdls;
};
static void iwl_mvm_power_disable_pm_iterator(void *_data, u8* mac,
@@ -558,8 +564,6 @@
/* only a single MAC of the same type */
WARN_ON(power_iterator->p2p_vif);
power_iterator->p2p_vif = vif;
- power_iterator->p2p_tdls =
- !!iwl_mvm_tdls_sta_count(power_iterator->mvm, vif);
if (mvmvif->phy_ctxt)
if (mvmvif->phy_ctxt->id < MAX_PHYS)
power_iterator->p2p_active = true;
@@ -569,8 +573,6 @@
/* only a single MAC of the same type */
WARN_ON(power_iterator->bss_vif);
power_iterator->bss_vif = vif;
- power_iterator->bss_tdls =
- !!iwl_mvm_tdls_sta_count(power_iterator->mvm, vif);
if (mvmvif->phy_ctxt)
if (mvmvif->phy_ctxt->id < MAX_PHYS)
power_iterator->bss_active = true;
@@ -613,15 +615,13 @@
ap_mvmvif = iwl_mvm_vif_from_mac80211(vifs->ap_vif);
/* enable PM on bss if bss stand alone */
- if (vifs->bss_active && !vifs->p2p_active && !vifs->ap_active &&
- !vifs->bss_tdls) {
+ if (vifs->bss_active && !vifs->p2p_active && !vifs->ap_active) {
bss_mvmvif->pm_enabled = true;
return;
}
/* enable PM on p2p if p2p stand alone */
- if (vifs->p2p_active && !vifs->bss_active && !vifs->ap_active &&
- !vifs->p2p_tdls) {
+ if (vifs->p2p_active && !vifs->bss_active && !vifs->ap_active) {
if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_P2P_PM)
p2p_mvmvif->pm_enabled = true;
return;
@@ -962,17 +962,22 @@
iwl_mvm_power_build_cmd(mvm, vif, &cmd);
if (enable) {
- /* configure skip over dtim up to 300 msec */
- int dtimper = mvm->hw->conf.ps_dtim_period ?: 1;
- int dtimper_msec = dtimper * vif->bss_conf.beacon_int;
+ /* configure skip over dtim up to 306TU - 314 msec */
+ int dtimper = vif->bss_conf.dtim_period ?: 1;
+ int dtimper_tu = dtimper * vif->bss_conf.beacon_int;
+ bool radar_detect = iwl_mvm_power_is_radar(vif);
- if (WARN_ON(!dtimper_msec))
+ if (WARN_ON(!dtimper_tu))
return 0;
- cmd.skip_dtim_periods = 300 / dtimper_msec;
- if (cmd.skip_dtim_periods)
- cmd.flags |=
- cpu_to_le16(POWER_FLAGS_SKIP_OVER_DTIM_MSK);
+ /* Check skip over DTIM conditions */
+ /* TODO: check that multicast wake lock is off */
+ if (!radar_detect && (dtimper < 10)) {
+ cmd.skip_dtim_periods = 306 / dtimper_tu;
+ if (cmd.skip_dtim_periods)
+ cmd.flags |= cpu_to_le16(
+ POWER_FLAGS_SKIP_OVER_DTIM_MSK);
+ }
}
iwl_mvm_power_log(mvm, &cmd);
#ifdef CONFIG_IWLWIFI_DEBUGFS
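The power.c hunks above move the skip-over-DTIM math from milliseconds into
TU (1 TU = 1024 usec), capping the skipped span at 306 TU (roughly 313 msec)
and gating it on no radar detection and a DTIM period below 10. A minimal
standalone sketch of that arithmetic, using example AP parameters rather
than anything taken from the driver:

#include <stdio.h>

int main(void)
{
	int beacon_int = 100;			/* TU; a common AP default */
	int dtim_periods[] = { 1, 2, 3 };
	int i;

	for (i = 0; i < 3; i++) {
		int dtimper_tu = dtim_periods[i] * beacon_int;
		int skip = 306 / dtimper_tu;	/* same arithmetic as the hunk */
		printf("DTIM=%d: %d TU per DTIM, skip %d period(s)\n",
		       dtim_periods[i], dtimper_tu, skip);
	}
	return 0;
}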
diff --git a/drivers/net/wireless/iwlwifi/mvm/quota.c b/drivers/net/wireless/iwlwifi/mvm/quota.c
index 5fd502d..dbb2594 100644
--- a/drivers/net/wireless/iwlwifi/mvm/quota.c
+++ b/drivers/net/wireless/iwlwifi/mvm/quota.c
@@ -175,12 +175,14 @@
struct ieee80211_vif *disabled_vif)
{
struct iwl_time_quota_cmd cmd = {};
- int i, idx, ret, num_active_macs, quota, quota_rem, n_non_lowlat;
+ int i, idx, err, num_active_macs, quota, quota_rem, n_non_lowlat;
struct iwl_mvm_quota_iterator_data data = {
.n_interfaces = {},
.colors = { -1, -1, -1, -1 },
.disabled_vif = disabled_vif,
};
+ struct iwl_time_quota_cmd *last = &mvm->last_quota_cmd;
+ bool send = false;
lockdep_assert_held(&mvm->mutex);
@@ -293,15 +295,33 @@
/* check that we have non-zero quota for all valid bindings */
for (i = 0; i < MAX_BINDINGS; i++) {
+ if (cmd.quotas[i].id_and_color != last->quotas[i].id_and_color)
+ send = true;
+ if (cmd.quotas[i].max_duration != last->quotas[i].max_duration)
+ send = true;
+ if (abs((int)le32_to_cpu(cmd.quotas[i].quota) -
+ (int)le32_to_cpu(last->quotas[i].quota))
+ > IWL_MVM_QUOTA_THRESHOLD)
+ send = true;
if (cmd.quotas[i].id_and_color == cpu_to_le32(FW_CTXT_INVALID))
continue;
WARN_ONCE(cmd.quotas[i].quota == 0,
"zero quota on binding %d\n", i);
}
- ret = iwl_mvm_send_cmd_pdu(mvm, TIME_QUOTA_CMD, 0,
- sizeof(cmd), &cmd);
- if (ret)
- IWL_ERR(mvm, "Failed to send quota: %d\n", ret);
- return ret;
+ if (!send) {
+ /* don't send a practically unchanged command, the firmware has
+ * to re-initialize a lot of state and that can have an adverse
+ * impact on it
+ */
+ return 0;
+ }
+
+ err = iwl_mvm_send_cmd_pdu(mvm, TIME_QUOTA_CMD, 0, sizeof(cmd), &cmd);
+
+ if (err)
+ IWL_ERR(mvm, "Failed to send quota: %d\n", err);
+ else
+ mvm->last_quota_cmd = cmd;
+ return err;
}
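The quota.c change above only resends TIME_QUOTA_CMD when a binding's
id_and_color or max_duration changed, or when a quota moved by more than
IWL_MVM_QUOTA_THRESHOLD, since reprogramming quotas forces the firmware to
re-initialize a lot of state. A standalone sketch of that send/suppress
decision (the threshold value here is a stand-in, not the driver's
constant):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define QUOTA_THRESHOLD 8	/* stand-in for IWL_MVM_QUOTA_THRESHOLD */

static bool quota_cmd_changed(int new_id, int old_id,
			      int new_quota, int old_quota)
{
	if (new_id != old_id)
		return true;	/* binding changed: always send */
	return abs(new_quota - old_quota) > QUOTA_THRESHOLD;
}

int main(void)
{
	printf("%d\n", quota_cmd_changed(1, 1, 100, 104)); /* 0: suppress */
	printf("%d\n", quota_cmd_changed(1, 1, 100, 120)); /* 1: send */
	return 0;
}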
diff --git a/drivers/net/wireless/iwlwifi/mvm/rs.c b/drivers/net/wireless/iwlwifi/mvm/rs.c
index 17002cf..f77dfe4 100644
--- a/drivers/net/wireless/iwlwifi/mvm/rs.c
+++ b/drivers/net/wireless/iwlwifi/mvm/rs.c
@@ -505,10 +505,10 @@
static inline void rs_dump_rate(struct iwl_mvm *mvm, const struct rs_rate *rate,
const char *prefix)
{
- IWL_DEBUG_RATE(mvm, "%s: (%s: %d) ANT: %s BW: %d SGI: %d\n",
+ IWL_DEBUG_RATE(mvm, "%s: (%s: %d) ANT: %s BW: %d SGI: %d LDPC: %d\n",
prefix, rs_pretty_lq_type(rate->type),
rate->index, rs_pretty_ant(rate->ant),
- rate->bw, rate->sgi);
+ rate->bw, rate->sgi, rate->ldpc);
}
static void rs_rate_scale_clear_window(struct iwl_rate_scale_data *window)
@@ -672,8 +672,10 @@
return -EINVAL;
if (tbl->column != RS_COLUMN_INVALID) {
- lq_sta->tx_stats[tbl->column][scale_index].total += attempts;
- lq_sta->tx_stats[tbl->column][scale_index].success += successes;
+ struct lq_sta_pers *pers = &lq_sta->pers;
+
+ pers->tx_stats[tbl->column][scale_index].total += attempts;
+ pers->tx_stats[tbl->column][scale_index].success += successes;
}
/* Select window for current tx bit rate */
@@ -742,6 +744,8 @@
ucode_rate |= rate->bw;
if (rate->sgi)
ucode_rate |= RATE_MCS_SGI_MSK;
+ if (rate->ldpc)
+ ucode_rate |= RATE_MCS_LDPC_MSK;
return ucode_rate;
}
@@ -779,6 +783,8 @@
/* HT or VHT */
if (ucode_rate & RATE_MCS_SGI_MSK)
rate->sgi = true;
+ if (ucode_rate & RATE_MCS_LDPC_MSK)
+ rate->ldpc = true;
rate->bw = ucode_rate & RATE_MCS_CHAN_WIDTH_MSK;
@@ -965,13 +971,13 @@
rate->index > IWL_RATE_MCS_9_INDEX);
rate->index = rs_ht_to_legacy[rate->index];
+ rate->ldpc = false;
} else {
/* Downgrade to SISO with same MCS if in MIMO */
rate->type = is_vht_mimo2(rate) ?
LQ_VHT_SISO : LQ_HT_SISO;
}
-
if (num_of_ant(rate->ant) > 1)
rate->ant = first_antenna(mvm->fw->valid_tx_ant);
@@ -1621,6 +1627,7 @@
}
rate->bw = rs_bw_from_sta_bw(sta);
+ rate->ldpc = lq_sta->ldpc;
search_tbl->column = col_id;
rs_set_expected_tpt_table(lq_sta, search_tbl);
@@ -2031,18 +2038,7 @@
return;
}
- /* force user max rate if set by user */
- if ((lq_sta->max_rate_idx != -1) &&
- (lq_sta->max_rate_idx < index)) {
- index = lq_sta->max_rate_idx;
- update_lq = 1;
- window = &(tbl->win[index]);
- IWL_DEBUG_RATE(mvm,
- "Forcing user max rate %d\n",
- index);
- goto lq_update;
- }
-
+ /* TODO: handle rate_idx_mask and rate_idx_mcs_mask */
window = &(tbl->win[index]);
/*
@@ -2130,10 +2126,7 @@
low = high_low & 0xff;
high = (high_low >> 8) & 0xff;
- /* If user set max rate, dont allow higher than user constrain */
- if ((lq_sta->max_rate_idx != -1) &&
- (lq_sta->max_rate_idx < high))
- high = IWL_RATE_INVALID;
+ /* TODO: handle rate_idx_mask and rate_idx_mcs_mask */
sr = window->success_ratio;
@@ -2342,6 +2335,7 @@
rate->index = i;
rate->ant = first_antenna(valid_tx_ant);
rate->sgi = false;
+ rate->ldpc = false;
rate->bw = RATE_MCS_CHAN_WIDTH_20;
if (band == IEEE80211_BAND_5GHZ)
rate->type = LQ_LEGACY_A;
@@ -2364,23 +2358,13 @@
struct ieee80211_tx_rate_control *txrc)
{
struct sk_buff *skb = txrc->skb;
- struct ieee80211_supported_band *sband = txrc->sband;
struct iwl_op_mode *op_mode __maybe_unused =
(struct iwl_op_mode *)mvm_r;
struct iwl_mvm *mvm __maybe_unused = IWL_OP_MODE_GET_MVM(op_mode);
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct iwl_lq_sta *lq_sta = mvm_sta;
- /* Get max rate if user set max rate */
- if (lq_sta) {
- lq_sta->max_rate_idx = txrc->max_rate_idx;
- if ((sband->band == IEEE80211_BAND_5GHZ) &&
- (lq_sta->max_rate_idx != -1))
- lq_sta->max_rate_idx += IWL_FIRST_OFDM_RATE;
- if ((lq_sta->max_rate_idx < 0) ||
- (lq_sta->max_rate_idx >= IWL_RATE_COUNT))
- lq_sta->max_rate_idx = -1;
- }
+ /* TODO: handle rate_idx_mask and rate_idx_mcs_mask */
/* Treat uninitialized rate scaling data same as non-existing. */
if (lq_sta && !lq_sta->pers.drv) {
@@ -2581,7 +2565,6 @@
* previous packets? Need to have IEEE 802.1X auth succeed immediately
* after assoc.. */
- lq_sta->max_rate_idx = -1;
lq_sta->missed_rate_counter = IWL_MISSED_RATE_MAX;
lq_sta->band = sband->band;
/*
@@ -2610,9 +2593,16 @@
lq_sta->active_mimo2_rate <<= IWL_FIRST_OFDM_RATE;
lq_sta->is_vht = false;
+ if (mvm->cfg->ht_params->ldpc &&
+ (ht_cap->cap & IEEE80211_HT_CAP_LDPC_CODING))
+ lq_sta->ldpc = true;
} else {
rs_vht_set_enabled_rates(sta, vht_cap, lq_sta);
lq_sta->is_vht = true;
+
+ if (mvm->cfg->ht_params->ldpc &&
+ (vht_cap->cap & IEEE80211_VHT_CAP_RXLDPC))
+ lq_sta->ldpc = true;
}
lq_sta->max_legacy_rate_idx = find_last_bit(&lq_sta->active_legacy_rate,
@@ -2622,11 +2612,12 @@
lq_sta->max_mimo2_rate_idx = find_last_bit(&lq_sta->active_mimo2_rate,
BITS_PER_LONG);
- IWL_DEBUG_RATE(mvm, "RATE MASK: LEGACY=%lX SISO=%lX MIMO2=%lX VHT=%d\n",
+ IWL_DEBUG_RATE(mvm,
+ "RATE MASK: LEGACY=%lX SISO=%lX MIMO2=%lX VHT=%d LDPC=%d\n",
lq_sta->active_legacy_rate,
lq_sta->active_siso_rate,
lq_sta->active_mimo2_rate,
- lq_sta->is_vht);
+ lq_sta->is_vht, lq_sta->ldpc);
IWL_DEBUG_RATE(mvm, "MAX RATE: LEGACY=%d SISO=%d MIMO2=%d\n",
lq_sta->max_legacy_rate_idx,
lq_sta->max_siso_rate_idx,
@@ -3032,8 +3023,9 @@
(is_ht20(rate)) ? "20MHz" :
(is_ht40(rate)) ? "40MHz" :
(is_ht80(rate)) ? "80Mhz" : "BAD BW");
- desc += sprintf(buff+desc, " %s %s\n",
+ desc += sprintf(buff+desc, " %s %s %s\n",
(rate->sgi) ? "SGI" : "NGI",
+ (rate->ldpc) ? "LDPC" : "BCC",
(lq_sta->is_agg) ? "AGG on" : "");
}
desc += sprintf(buff+desc, "last tx rate=0x%X\n",
@@ -3181,7 +3173,7 @@
"%s,", column_name[col]);
for (rate = 0; rate < IWL_RATE_COUNT; rate++) {
- stats = &(lq_sta->tx_stats[col][rate]);
+ stats = &(lq_sta->pers.tx_stats[col][rate]);
pos += scnprintf(pos, endpos - pos,
"%llu/%llu,",
stats->success,
@@ -3200,7 +3192,7 @@
size_t count, loff_t *ppos)
{
struct iwl_lq_sta *lq_sta = file->private_data;
- memset(lq_sta->tx_stats, 0, sizeof(lq_sta->tx_stats));
+ memset(lq_sta->pers.tx_stats, 0, sizeof(lq_sta->pers.tx_stats));
return count;
}
diff --git a/drivers/net/wireless/iwlwifi/mvm/rs.h b/drivers/net/wireless/iwlwifi/mvm/rs.h
index f27b9d6..95c4b96 100644
--- a/drivers/net/wireless/iwlwifi/mvm/rs.h
+++ b/drivers/net/wireless/iwlwifi/mvm/rs.h
@@ -207,6 +207,7 @@
u8 ant;
u32 bw;
bool sgi;
+ bool ldpc;
};
@@ -329,10 +330,9 @@
*/
u64 last_tx;
bool is_vht;
+ bool ldpc; /* LDPC Rx is supported by the STA */
enum ieee80211_band band;
- struct rs_rate_stats tx_stats[RS_COLUMN_COUNT][IWL_RATE_COUNT];
-
/* The following are bitmaps of rates; IWL_RATE_6M_MASK, etc. */
unsigned long active_legacy_rate;
unsigned long active_siso_rate;
@@ -343,7 +343,6 @@
u8 max_siso_rate_idx;
u8 max_mimo2_rate_idx;
- s8 max_rate_idx; /* Max rate set by user */
u8 missed_rate_counter;
struct iwl_lq_cmd lq;
@@ -361,11 +360,14 @@
int tpc_reduce;
/* persistent fields - initialized only once - keep last! */
- struct {
+ struct lq_sta_pers {
#ifdef CONFIG_MAC80211_DEBUGFS
u32 dbg_fixed_rate;
u8 dbg_fixed_txp_reduction;
#endif
+ u8 chains;
+ s8 chain_signal[IEEE80211_MAX_CHAINS];
+ struct rs_rate_stats tx_stats[RS_COLUMN_COUNT][IWL_RATE_COUNT];
struct iwl_mvm *drv;
} pers;
};
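The rs.h hunk above moves tx_stats into the now-named struct lq_sta_pers at
the tail of struct iwl_lq_sta, so the per-column rate statistics survive the
wipe done on (re)association. A standalone sketch of that "persistent tail"
pattern (field names here are simplified, not the driver's):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct lq_sta {
	int missed_rate_counter;	/* reset on every association */
	/* persistent fields - initialized only once - keep last! */
	struct {
		unsigned long long tx_total;
	} pers;
};

int main(void)
{
	struct lq_sta sta = { 5, { 1234 } };

	/* wipe everything up to, but not including, .pers */
	memset(&sta, 0, offsetof(struct lq_sta, pers));
	printf("counter=%d tx_total=%llu\n",
	       sta.missed_rate_counter, sta.pers.tx_total); /* 0 1234 */
	return 0;
}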
diff --git a/drivers/net/wireless/iwlwifi/mvm/rx.c b/drivers/net/wireless/iwlwifi/mvm/rx.c
index 48144e3..a6cb84e 100644
--- a/drivers/net/wireless/iwlwifi/mvm/rx.c
+++ b/drivers/net/wireless/iwlwifi/mvm/rx.c
@@ -151,13 +151,13 @@
le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_ENERGY_ANT_ABC_IDX]);
energy_a = (val & IWL_RX_INFO_ENERGY_ANT_A_MSK) >>
IWL_RX_INFO_ENERGY_ANT_A_POS;
- energy_a = energy_a ? -energy_a : -256;
+ energy_a = energy_a ? -energy_a : S8_MIN;
energy_b = (val & IWL_RX_INFO_ENERGY_ANT_B_MSK) >>
IWL_RX_INFO_ENERGY_ANT_B_POS;
- energy_b = energy_b ? -energy_b : -256;
+ energy_b = energy_b ? -energy_b : S8_MIN;
energy_c = (val & IWL_RX_INFO_ENERGY_ANT_C_MSK) >>
IWL_RX_INFO_ENERGY_ANT_C_POS;
- energy_c = energy_c ? -energy_c : -256;
+ energy_c = energy_c ? -energy_c : S8_MIN;
max_energy = max(energy_a, energy_b);
max_energy = max(max_energy, energy_c);
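In the rx.c hunk above, the "no measurement" sentinel for the per-antenna
energy moves from -256 to S8_MIN (-128): the value has to fit a signed
8-bit field, and -256 does not. A standalone illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* converting -256 to an s8 is implementation-defined; on common
	 * two's-complement targets it wraps to 0, i.e. "no signal" would
	 * read as 0 dBm instead of a floor value
	 */
	int8_t bad  = (int8_t)-256;
	int8_t good = INT8_MIN;		/* -128 fits and is unambiguous */

	printf("bad=%d good=%d\n", bad, good);
	return 0;
}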
diff --git a/drivers/net/wireless/iwlwifi/mvm/scan.c b/drivers/net/wireless/iwlwifi/mvm/scan.c
index bf9c63d..09545f2 100644
--- a/drivers/net/wireless/iwlwifi/mvm/scan.c
+++ b/drivers/net/wireless/iwlwifi/mvm/scan.c
@@ -160,8 +160,8 @@
static u16 iwl_mvm_get_active_dwell(enum ieee80211_band band, int n_ssids)
{
if (band == IEEE80211_BAND_2GHZ)
- return 30 + 3 * (n_ssids + 1);
- return 20 + 2 * (n_ssids + 1);
+ return 20 + 3 * (n_ssids + 1);
+ return 10 + 2 * (n_ssids + 1);
}
static u16 iwl_mvm_get_passive_dwell(enum ieee80211_band band)
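The scan.c hunk trims the active dwell base time by 10 TU on both bands.
Worked out for a scan with two SSIDs (a sketch, not driver code):

#include <stdio.h>

static int active_dwell_2ghz(int n_ssids) { return 20 + 3 * (n_ssids + 1); }
static int active_dwell_5ghz(int n_ssids) { return 10 + 2 * (n_ssids + 1); }

int main(void)
{
	printf("2.4 GHz, 2 SSIDs: %d TU (was 39)\n", active_dwell_2ghz(2));
	printf("5 GHz,   2 SSIDs: %d TU (was 26)\n", active_dwell_5ghz(2));
	return 0;
}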
diff --git a/drivers/net/wireless/iwlwifi/mvm/sf.c b/drivers/net/wireless/iwlwifi/mvm/sf.c
index d1922af..7eb78e2 100644
--- a/drivers/net/wireless/iwlwifi/mvm/sf.c
+++ b/drivers/net/wireless/iwlwifi/mvm/sf.c
@@ -174,11 +174,15 @@
enum iwl_sf_state new_state)
{
struct iwl_sf_cfg_cmd sf_cmd = {
- .state = new_state,
+ .state = cpu_to_le32(new_state),
};
struct ieee80211_sta *sta;
int ret = 0;
+ if (mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_SF_NO_DUMMY_NOTIF &&
+ mvm->cfg->disable_dummy_notification)
+ sf_cmd.state |= cpu_to_le32(SF_CFG_DUMMY_NOTIF_OFF);
+
/*
* If an associated AP sta changed its antenna configuration, the state
* will remain FULL_ON but SF parameters need to be reconsidered.
diff --git a/drivers/net/wireless/iwlwifi/mvm/sta.c b/drivers/net/wireless/iwlwifi/mvm/sta.c
index dd9f3a4..666f16b 100644
--- a/drivers/net/wireless/iwlwifi/mvm/sta.c
+++ b/drivers/net/wireless/iwlwifi/mvm/sta.c
@@ -948,8 +948,16 @@
}
tid_data->ssn = 0xffff;
+ tid_data->state = IWL_AGG_OFF;
+ mvm->queue_to_mac80211[txq_id] = IWL_INVALID_MAC80211_QUEUE;
+ spin_unlock_bh(&mvmsta->lock);
+
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+
+ iwl_mvm_sta_tx_agg(mvm, sta, tid, txq_id, false);
+
iwl_trans_txq_disable(mvm->trans, txq_id, true);
- /* fall through */
+ return 0;
case IWL_AGG_STARTING:
case IWL_EMPTYING_HW_QUEUE_ADDBA:
/*
@@ -1003,6 +1011,8 @@
if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true))
IWL_ERR(mvm, "Couldn't flush the AGG queue\n");
+ iwl_mvm_sta_tx_agg(mvm, sta, tid, txq_id, false);
+
iwl_trans_txq_disable(mvm->trans, tid_data->txq_id, true);
}
diff --git a/drivers/net/wireless/iwlwifi/mvm/tdls.c b/drivers/net/wireless/iwlwifi/mvm/tdls.c
new file mode 100644
index 0000000..66c82df
--- /dev/null
+++ b/drivers/net/wireless/iwlwifi/mvm/tdls.c
@@ -0,0 +1,149 @@
+/******************************************************************************
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2014 Intel Mobile Communications GmbH
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
+ * USA
+ *
+ * The full GNU General Public License is included in this distribution
+ * in the file called COPYING.
+ *
+ * Contact Information:
+ * Intel Linux Wireless <ilw@linux.intel.com>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2014 Intel Mobile Communications GmbH
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *****************************************************************************/
+
+#include "mvm.h"
+#include "time-event.h"
+
+void iwl_mvm_teardown_tdls_peers(struct iwl_mvm *mvm)
+{
+ struct ieee80211_sta *sta;
+ struct iwl_mvm_sta *mvmsta;
+ int i;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for (i = 0; i < IWL_MVM_STATION_COUNT; i++) {
+ sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[i],
+ lockdep_is_held(&mvm->mutex));
+ if (!sta || IS_ERR(sta) || !sta->tdls)
+ continue;
+
+ mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ ieee80211_tdls_oper_request(mvmsta->vif, sta->addr,
+ NL80211_TDLS_TEARDOWN,
+ WLAN_REASON_TDLS_TEARDOWN_UNSPECIFIED,
+ GFP_KERNEL);
+ }
+}
+
+int iwl_mvm_tdls_sta_count(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+{
+ struct ieee80211_sta *sta;
+ struct iwl_mvm_sta *mvmsta;
+ int count = 0;
+ int i;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for (i = 0; i < IWL_MVM_STATION_COUNT; i++) {
+ sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[i],
+ lockdep_is_held(&mvm->mutex));
+ if (!sta || IS_ERR(sta) || !sta->tdls)
+ continue;
+
+ if (vif) {
+ mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ if (mvmsta->vif != vif)
+ continue;
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+void iwl_mvm_recalc_tdls_state(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ bool sta_added)
+{
+ int tdls_sta_cnt = iwl_mvm_tdls_sta_count(mvm, vif);
+
+ /*
+ * Disable ps when the first TDLS sta is added and re-enable it
+ * when the last TDLS sta is removed
+ */
+ if ((tdls_sta_cnt == 1 && sta_added) ||
+ (tdls_sta_cnt == 0 && !sta_added))
+ iwl_mvm_power_update_mac(mvm);
+}
+
+void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ u32 duration = 2 * vif->bss_conf.dtim_period * vif->bss_conf.beacon_int;
+
+ /*
+ * iwl_mvm_protect_session() reads directly from the device
+ * (the system time), so make sure it is available.
+ */
+ if (iwl_mvm_ref_sync(mvm, IWL_MVM_REF_PROTECT_TDLS))
+ return;
+
+ mutex_lock(&mvm->mutex);
+ /* Protect the session to hear the TDLS setup response on the channel */
+ iwl_mvm_protect_session(mvm, vif, duration, duration, 100, true);
+ mutex_unlock(&mvm->mutex);
+
+ iwl_mvm_unref(mvm, IWL_MVM_REF_PROTECT_TDLS);
+}
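iwl_mvm_mac_mgd_protect_tdls_discover() above protects the channel for
twice the DTIM interval so the TDLS setup response is not missed while the
station dozes. With example AP parameters (nothing here is taken from the
patch):

#include <stdio.h>

int main(void)
{
	unsigned int dtim_period = 2, beacon_int = 100;	/* TU */
	unsigned int duration = 2 * dtim_period * beacon_int;

	/* 1 TU = 1024 usec, so 400 TU is roughly 410 msec on channel */
	printf("protect session for %u TU (~%u msec)\n",
	       duration, duration * 1024 / 1000);
	return 0;
}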
diff --git a/drivers/net/wireless/iwlwifi/mvm/time-event.c b/drivers/net/wireless/iwlwifi/mvm/time-event.c
index 447d3b1..b7f9e61 100644
--- a/drivers/net/wireless/iwlwifi/mvm/time-event.c
+++ b/drivers/net/wireless/iwlwifi/mvm/time-event.c
@@ -700,9 +700,9 @@
iwl_mvm_roc_finished(mvm);
}
-int iwl_mvm_schedule_csa_noa(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- u32 duration, u32 apply_time)
+int iwl_mvm_schedule_csa_period(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 duration, u32 apply_time)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_time_event_data *te_data = &mvmvif->time_event_data;
@@ -711,14 +711,14 @@
lockdep_assert_held(&mvm->mutex);
if (te_data->running) {
- IWL_DEBUG_TE(mvm, "CS NOA is already scheduled\n");
+ IWL_DEBUG_TE(mvm, "CS period is already scheduled\n");
return -EBUSY;
}
time_cmd.action = cpu_to_le32(FW_CTXT_ACTION_ADD);
time_cmd.id_and_color =
cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id, mvmvif->color));
- time_cmd.id = cpu_to_le32(TE_P2P_GO_CSA_NOA);
+ time_cmd.id = cpu_to_le32(TE_CHANNEL_SWITCH_PERIOD);
time_cmd.apply_time = cpu_to_le32(apply_time);
time_cmd.max_frags = TE_V2_FRAG_NONE;
time_cmd.duration = cpu_to_le32(duration);
diff --git a/drivers/net/wireless/iwlwifi/mvm/time-event.h b/drivers/net/wireless/iwlwifi/mvm/time-event.h
index bee3b24..b350e47 100644
--- a/drivers/net/wireless/iwlwifi/mvm/time-event.h
+++ b/drivers/net/wireless/iwlwifi/mvm/time-event.h
@@ -219,7 +219,7 @@
void iwl_mvm_roc_done_wk(struct work_struct *wk);
/**
- * iwl_mvm_schedule_csa_noa - request NoA for channel switch
+ * iwl_mvm_schedule_csa_period - request channel switch absence period
* @mvm: the mvm component
* @vif: the virtual interface for which the channel switch is issued
* @duration: the duration of the NoA in TU.
@@ -228,9 +228,9 @@
* This function is used to schedule NoA time event and is used to perform
* the channel switch flow.
*/
-int iwl_mvm_schedule_csa_noa(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- u32 duration, u32 apply_time);
+int iwl_mvm_schedule_csa_period(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 duration, u32 apply_time);
/**
* iwl_mvm_te_scheduled - check if the fw received the TE cmd
diff --git a/drivers/net/wireless/iwlwifi/mvm/tt.c b/drivers/net/wireless/iwlwifi/mvm/tt.c
index c3e1fe4..c750ca7 100644
--- a/drivers/net/wireless/iwlwifi/mvm/tt.c
+++ b/drivers/net/wireless/iwlwifi/mvm/tt.c
@@ -69,248 +69,7 @@
#include "iwl-csr.h"
#include "iwl-prph.h"
-#define OTP_DTS_DIODE_DEVIATION 96 /*in words*/
-/* VBG - Voltage Band Gap error data (temperature offset) */
-#define OTP_WP_DTS_VBG (OTP_DTS_DIODE_DEVIATION + 2)
-#define MEAS_VBG_MIN_VAL 2300
-#define MEAS_VBG_MAX_VAL 3000
-#define MEAS_VBG_DEFAULT_VAL 2700
-#define DTS_DIODE_VALID(flags) (flags & DTS_DIODE_REG_FLAGS_PASS_ONCE)
-#define MIN_TEMPERATURE 0
-#define MAX_TEMPERATURE 125
-#define TEMPERATURE_ERROR (MAX_TEMPERATURE + 1)
-#define PTAT_DIGITAL_VALUE_MIN_VALUE 0
-#define PTAT_DIGITAL_VALUE_MAX_VALUE 0xFF
-#define DTS_VREFS_NUM 5
-static inline u32 DTS_DIODE_GET_VREFS_ID(u32 flags)
-{
- return (flags & DTS_DIODE_REG_FLAGS_VREFS_ID) >>
- DTS_DIODE_REG_FLAGS_VREFS_ID_POS;
-}
-
-#define CALC_VREFS_MIN_DIFF 43
-#define CALC_VREFS_MAX_DIFF 51
-#define CALC_LUT_SIZE (1 + CALC_VREFS_MAX_DIFF - CALC_VREFS_MIN_DIFF)
-#define CALC_LUT_INDEX_OFFSET CALC_VREFS_MIN_DIFF
-#define CALC_TEMPERATURE_RESULT_SHIFT_OFFSET 23
-
-/*
- * @digital_value: The diode's digital-value sampled (temperature/voltage)
- * @vref_low: The lower voltage-reference (the vref just below the diode's
- * sampled digital-value)
- * @vref_high: The higher voltage-reference (the vref just above the diode's
- * sampled digital-value)
- * @flags: bits[1:0]: The ID of the Vrefs pair (lowVref,highVref)
- * bits[6:2]: Reserved.
- * bits[7:7]: Indicates completion of at least 1 successful sample
- * since last DTS reset.
- */
-struct iwl_mvm_dts_diode_bits {
- u8 digital_value;
- u8 vref_low;
- u8 vref_high;
- u8 flags;
-} __packed;
-
-union dts_diode_results {
- u32 reg_value;
- struct iwl_mvm_dts_diode_bits bits;
-} __packed;
-
-static s16 iwl_mvm_dts_get_volt_band_gap(struct iwl_mvm *mvm)
-{
- struct iwl_nvm_section calib_sec;
- const __le16 *calib;
- u16 vbg;
-
- /* TODO: move parsing to NVM code */
- calib_sec = mvm->nvm_sections[NVM_SECTION_TYPE_CALIBRATION];
- calib = (__le16 *)calib_sec.data;
-
- vbg = le16_to_cpu(calib[OTP_WP_DTS_VBG]);
-
- if (vbg < MEAS_VBG_MIN_VAL || vbg > MEAS_VBG_MAX_VAL)
- vbg = MEAS_VBG_DEFAULT_VAL;
-
- return vbg;
-}
-
-static u16 iwl_mvm_dts_get_ptat_deviation_offset(struct iwl_mvm *mvm)
-{
- const u8 *calib;
- u8 ptat, pa1, pa2, median;
-
- /* TODO: move parsing to NVM code */
- calib = mvm->nvm_sections[NVM_SECTION_TYPE_CALIBRATION].data;
- ptat = calib[OTP_DTS_DIODE_DEVIATION * 2];
- pa1 = calib[OTP_DTS_DIODE_DEVIATION * 2 + 1];
- pa2 = calib[OTP_DTS_DIODE_DEVIATION * 2 + 2];
-
- /* get the median: */
- if (ptat > pa1) {
- if (ptat > pa2)
- median = (pa1 > pa2) ? pa1 : pa2;
- else
- median = ptat;
- } else {
- if (pa1 > pa2)
- median = (ptat > pa2) ? ptat : pa2;
- else
- median = pa1;
- }
-
- return ptat - median;
-}
-
-static u8 iwl_mvm_dts_calibrate_ptat_deviation(struct iwl_mvm *mvm, u8 value)
-{
- /* Calibrate the PTAT digital value, based on PTAT deviation data: */
- s16 new_val = value - iwl_mvm_dts_get_ptat_deviation_offset(mvm);
-
- if (new_val > PTAT_DIGITAL_VALUE_MAX_VALUE)
- new_val = PTAT_DIGITAL_VALUE_MAX_VALUE;
- else if (new_val < PTAT_DIGITAL_VALUE_MIN_VALUE)
- new_val = PTAT_DIGITAL_VALUE_MIN_VALUE;
-
- return new_val;
-}
-
-static bool dts_get_adjacent_vrefs(struct iwl_mvm *mvm,
- union dts_diode_results *avg_ptat)
-{
- u8 vrefs_results[DTS_VREFS_NUM];
- u8 low_vref_index = 0, flags;
- u32 reg;
-
- reg = iwl_read_prph(mvm->trans, DTSC_VREF_AVG);
- memcpy(vrefs_results, &reg, sizeof(reg));
- reg = iwl_read_prph(mvm->trans, DTSC_VREF5_AVG);
- vrefs_results[4] = reg & 0xff;
-
- if (avg_ptat->bits.digital_value < vrefs_results[0] ||
- avg_ptat->bits.digital_value > vrefs_results[4])
- return false;
-
- if (avg_ptat->bits.digital_value > vrefs_results[3])
- low_vref_index = 3;
- else if (avg_ptat->bits.digital_value > vrefs_results[2])
- low_vref_index = 2;
- else if (avg_ptat->bits.digital_value > vrefs_results[1])
- low_vref_index = 1;
-
- avg_ptat->bits.vref_low = vrefs_results[low_vref_index];
- avg_ptat->bits.vref_high = vrefs_results[low_vref_index + 1];
- flags = avg_ptat->bits.flags;
- avg_ptat->bits.flags =
- (flags & ~DTS_DIODE_REG_FLAGS_VREFS_ID) |
- (low_vref_index & DTS_DIODE_REG_FLAGS_VREFS_ID);
- return true;
-}
-
-/*
- * return true it the results are valid, and false otherwise.
- */
-static bool dts_read_ptat_avg_results(struct iwl_mvm *mvm,
- union dts_diode_results *avg_ptat)
-{
- u32 reg;
- u8 tmp;
-
- /* fill the diode value and pass_once with avg-reg results */
- reg = iwl_read_prph(mvm->trans, DTSC_PTAT_AVG);
- reg &= DTS_DIODE_REG_DIG_VAL | DTS_DIODE_REG_PASS_ONCE;
- avg_ptat->reg_value = reg;
-
- /* calibrate the PTAT digital value */
- tmp = avg_ptat->bits.digital_value;
- tmp = iwl_mvm_dts_calibrate_ptat_deviation(mvm, tmp);
- avg_ptat->bits.digital_value = tmp;
-
- /*
- * fill vrefs fields, based on the avgVrefs results
- * and the diode value
- */
- return dts_get_adjacent_vrefs(mvm, avg_ptat) &&
- DTS_DIODE_VALID(avg_ptat->bits.flags);
-}
-
-static s32 calculate_nic_temperature(union dts_diode_results avg_ptat,
- u16 volt_band_gap)
-{
- u32 tmp_result;
- u8 vrefs_diff;
- /*
- * For temperature calculation (at the end, shift right by 23)
- * LUT[(D2-D1)] = ROUND{ 2^23 / ((D2-D1)*9*10) }
- * (D2-D1) == 43 44 45 46 47 48 49 50 51
- */
- static const u16 calc_lut[CALC_LUT_SIZE] = {
- 2168, 2118, 2071, 2026, 1983, 1942, 1902, 1864, 1828,
- };
-
- /*
- * The diff between the high and low voltage-references is assumed
- * to be strictly be in range of [60,68]
- */
- vrefs_diff = avg_ptat.bits.vref_high - avg_ptat.bits.vref_low;
-
- if (vrefs_diff < CALC_VREFS_MIN_DIFF ||
- vrefs_diff > CALC_VREFS_MAX_DIFF)
- return TEMPERATURE_ERROR;
-
- /* calculate the result: */
- tmp_result =
- vrefs_diff * (DTS_DIODE_GET_VREFS_ID(avg_ptat.bits.flags) + 9);
- tmp_result += avg_ptat.bits.digital_value;
- tmp_result -= avg_ptat.bits.vref_high;
-
- /* multiply by the LUT value (based on the diff) */
- tmp_result *= calc_lut[vrefs_diff - CALC_LUT_INDEX_OFFSET];
-
- /*
- * Get the BandGap (the voltage refereces source) error data
- * (temperature offset)
- */
- tmp_result *= volt_band_gap;
-
- /*
- * here, tmp_result value can be up to 32-bits. We want to right-shift
- * it *without* sign-extend.
- */
- tmp_result = tmp_result >> CALC_TEMPERATURE_RESULT_SHIFT_OFFSET;
-
- /*
- * at this point, tmp_result should be in the range:
- * 200 <= tmp_result <= 365
- */
- return (s16)tmp_result - 240;
-}
-
-static s32 check_nic_temperature(struct iwl_mvm *mvm)
-{
- u16 volt_band_gap;
- union dts_diode_results avg_ptat;
-
- volt_band_gap = iwl_mvm_dts_get_volt_band_gap(mvm);
-
- /* disable DTS */
- iwl_write_prph(mvm->trans, SHR_MISC_WFM_DTS_EN, 0);
-
- /* SV initialization */
- iwl_write_prph(mvm->trans, SHR_MISC_WFM_DTS_EN, 1);
- iwl_write_prph(mvm->trans, DTSC_CFG_MODE,
- DTSC_CFG_MODE_PERIODIC);
-
- /* wait for results */
- msleep(100);
- if (!dts_read_ptat_avg_results(mvm, &avg_ptat))
- return TEMPERATURE_ERROR;
-
- /* disable DTS */
- iwl_write_prph(mvm->trans, SHR_MISC_WFM_DTS_EN, 0);
-
- return calculate_nic_temperature(avg_ptat, volt_band_gap);
-}
+#define IWL_MVM_TEMP_NOTIF_WAIT_TIMEOUT HZ
static void iwl_mvm_enter_ctkill(struct iwl_mvm *mvm)
{
@@ -340,6 +99,71 @@
iwl_mvm_set_hw_ctkill_state(mvm, false);
}
+static bool iwl_mvm_temp_notif(struct iwl_notif_wait_data *notif_wait,
+ struct iwl_rx_packet *pkt, void *data)
+{
+ struct iwl_mvm *mvm =
+ container_of(notif_wait, struct iwl_mvm, notif_wait);
+ int *temp = data;
+ struct iwl_dts_measurement_notif *notif;
+ int len = iwl_rx_packet_payload_len(pkt);
+
+ if (WARN_ON_ONCE(len != sizeof(*notif))) {
+ IWL_ERR(mvm, "Invalid DTS_MEASUREMENT_NOTIFICATION\n");
+ return true;
+ }
+
+ notif = (void *)pkt->data;
+
+ *temp = le32_to_cpu(notif->temp);
+
+ /* shouldn't be negative, but since it's s32, make sure it isn't */
+ if (WARN_ON_ONCE(*temp < 0))
+ *temp = 0;
+
+ IWL_DEBUG_TEMP(mvm, "DTS_MEASUREMENT_NOTIFICATION - %d\n", *temp);
+ return true;
+}
+
+static int iwl_mvm_get_temp_cmd(struct iwl_mvm *mvm)
+{
+ struct iwl_dts_measurement_cmd cmd = {
+ .flags = cpu_to_le32(DTS_TRIGGER_CMD_FLAGS_TEMP),
+ };
+
+ return iwl_mvm_send_cmd_pdu(mvm, CMD_DTS_MEASUREMENT_TRIGGER, 0,
+ sizeof(cmd), &cmd);
+}
+
+static int iwl_mvm_get_temp(struct iwl_mvm *mvm)
+{
+ struct iwl_notification_wait wait_temp_notif;
+ static const u8 temp_notif[] = { DTS_MEASUREMENT_NOTIFICATION };
+ int ret, temp;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ iwl_init_notification_wait(&mvm->notif_wait, &wait_temp_notif,
+ temp_notif, ARRAY_SIZE(temp_notif),
+ iwl_mvm_temp_notif, &temp);
+
+ ret = iwl_mvm_get_temp_cmd(mvm);
+ if (ret) {
+ IWL_ERR(mvm, "Failed to get the temperature (err=%d)\n", ret);
+ iwl_remove_notification(&mvm->notif_wait, &wait_temp_notif);
+ return ret;
+ }
+
+ ret = iwl_wait_notification(&mvm->notif_wait, &wait_temp_notif,
+ IWL_MVM_TEMP_NOTIF_WAIT_TIMEOUT);
+ if (ret) {
+ IWL_ERR(mvm, "Getting the temperature timed out\n");
+ return ret;
+ }
+
+ return temp;
+}
+
static void check_exit_ctkill(struct work_struct *work)
{
struct iwl_mvm_tt_mgmt *tt;
@@ -352,28 +176,36 @@
duration = tt->params->ct_kill_duration;
- /* make sure the device is available for direct read/writes */
- if (iwl_mvm_ref_sync(mvm, IWL_MVM_REF_CHECK_CTKILL))
+ mutex_lock(&mvm->mutex);
+
+ if (__iwl_mvm_mac_start(mvm))
goto reschedule;
- iwl_trans_start_hw(mvm->trans);
- temp = check_nic_temperature(mvm);
- iwl_trans_stop_device(mvm->trans);
+ /* make sure the device is available for direct read/writes */
+ if (iwl_mvm_ref_sync(mvm, IWL_MVM_REF_CHECK_CTKILL)) {
+ __iwl_mvm_mac_stop(mvm);
+ goto reschedule;
+ }
+
+ temp = iwl_mvm_get_temp(mvm);
iwl_mvm_unref(mvm, IWL_MVM_REF_CHECK_CTKILL);
- if (temp < MIN_TEMPERATURE || temp > MAX_TEMPERATURE) {
- IWL_DEBUG_TEMP(mvm, "Failed to measure NIC temperature\n");
+ __iwl_mvm_mac_stop(mvm);
+
+ if (temp < 0)
goto reschedule;
- }
+
IWL_DEBUG_TEMP(mvm, "NIC temperature: %d\n", temp);
if (temp <= tt->params->ct_kill_exit) {
+ mutex_unlock(&mvm->mutex);
iwl_mvm_exit_ctkill(mvm);
return;
}
reschedule:
+ mutex_unlock(&mvm->mutex);
schedule_delayed_work(&mvm->thermal_throttle.ct_kill_exit,
round_jiffies(duration * HZ));
}
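The tt.c rewrite above drops the direct DTS register reads in favour of
asking the firmware: register a waiter, send the measurement trigger with
DTS_TRIGGER_CMD_FLAGS_TEMP, and block until DTS_MEASUREMENT_NOTIFICATION
arrives or the 1*HZ timeout fires. A minimal single-threaded sketch of that
request/notify shape (the kernel's notification-wait helpers are simulated
here, not reproduced):

#include <stdbool.h>
#include <stdio.h>

struct notif_wait {
	bool (*fn)(int payload, void *data);	/* true when satisfied */
	void *data;
};

static bool temp_notif(int payload, void *data)
{
	*(int *)data = payload;		/* copy the reported value out */
	return true;
}

int main(void)
{
	int temp = 0;
	struct notif_wait wait = { temp_notif, &temp };

	/* 1) register the waiter, 2) send the trigger command,
	 * 3) the RX path fires the callback with the notification:
	 */
	wait.fn(45, wait.data);		/* firmware reports 45 degrees */
	printf("temperature: %d\n", temp);
	return 0;
}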
diff --git a/drivers/net/wireless/iwlwifi/mvm/tx.c b/drivers/net/wireless/iwlwifi/mvm/tx.c
index 963edb8..c67296e 100644
--- a/drivers/net/wireless/iwlwifi/mvm/tx.c
+++ b/drivers/net/wireless/iwlwifi/mvm/tx.c
@@ -170,10 +170,14 @@
/*
* for data packets, rate info comes from the table inside the fw. This
- * table is controlled by LINK_QUALITY commands
+ * table is controlled by LINK_QUALITY commands. Exclude ctrl port
+ * frames like EAPOLs which should be treated as mgmt frames. This
+ * avoids them being sent initially in high rates which increases the
+ * chances for completion of the 4-Way handshake.
*/
- if (ieee80211_is_data(fc) && sta) {
+ if (ieee80211_is_data(fc) && sta &&
+ !(info->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO)) {
tx_cmd->initial_rate_index = 0;
tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE);
return;
@@ -209,7 +213,7 @@
if (info->band == IEEE80211_BAND_2GHZ &&
!iwl_mvm_bt_coex_is_shared_ant_avail(mvm))
- rate_flags = BIT(ANT_A) << RATE_MCS_ANT_POS;
+ rate_flags = BIT(mvm->cfg->non_shared_ant) << RATE_MCS_ANT_POS;
else
rate_flags =
BIT(mvm->mgmt_last_antenna_idx) << RATE_MCS_ANT_POS;
diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c
index dbbbf23..ca68c3c 100644
--- a/drivers/net/wireless/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
@@ -354,11 +354,17 @@
{IWL_PCI_DEVICE(0x08B3, 0x8060, iwl3160_2n_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8062, iwl3160_n_cfg)},
{IWL_PCI_DEVICE(0x08B4, 0x8270, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B4, 0x8370, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B4, 0x8272, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8470, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8570, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x1070, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x1170, iwl3160_2ac_cfg)},
+/* 3165 Series */
+ {IWL_PCI_DEVICE(0x3165, 0x4010, iwl3165_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x3165, 0x4210, iwl3165_2ac_cfg)},
+
/* 7265 Series */
{IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)},
@@ -380,6 +386,7 @@
{IWL_PCI_DEVICE(0x095B, 0x5202, iwl7265_n_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9010, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9012, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x900A, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)},
@@ -398,6 +405,7 @@
/* 8000 Series */
{IWL_PCI_DEVICE(0x24F3, 0x0010, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0004, iwl8260_2n_cfg)},
{IWL_PCI_DEVICE(0x24F4, 0x0030, iwl8260_2ac_cfg)},
#endif /* CONFIG_IWLMVM */
diff --git a/drivers/net/wireless/iwlwifi/pcie/internal.h b/drivers/net/wireless/iwlwifi/pcie/internal.h
index a4fedc4..1aea6b6 100644
--- a/drivers/net/wireless/iwlwifi/pcie/internal.h
+++ b/drivers/net/wireless/iwlwifi/pcie/internal.h
@@ -257,6 +257,7 @@
* @cmd_queue - command queue number
* @rx_buf_size_8k: 8 kB RX buffer size
* @bc_table_dword: true if the BC table expects DWORD (as opposed to bytes)
+ * @scd_set_active: should the transport configure the SCD for HCMD queue
* @rx_page_order: page order for receive buffer size
* @wd_timeout: queue watchdog timeout (jiffies)
* @reg_lock: protect hw register access
@@ -306,6 +307,7 @@
bool rx_buf_size_8k;
bool bc_table_dword;
+ bool scd_set_active;
u32 rx_page_order;
const char *const *command_names;
diff --git a/drivers/net/wireless/iwlwifi/pcie/rx.c b/drivers/net/wireless/iwlwifi/pcie/rx.c
index 702f47f..7b7e2f2 100644
--- a/drivers/net/wireless/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/iwlwifi/pcie/rx.c
@@ -640,7 +640,7 @@
err = iwl_op_mode_rx(trans->op_mode, &rxcb, cmd);
if (reclaim) {
- kfree(txq->entries[cmd_index].free_buf);
+ kzfree(txq->entries[cmd_index].free_buf);
txq->entries[cmd_index].free_buf = NULL;
}
diff --git a/drivers/net/wireless/iwlwifi/pcie/trans.c b/drivers/net/wireless/iwlwifi/pcie/trans.c
index 3076e0e..ae99240 100644
--- a/drivers/net/wireless/iwlwifi/pcie/trans.c
+++ b/drivers/net/wireless/iwlwifi/pcie/trans.c
@@ -1171,6 +1171,7 @@
trans_pcie->command_names = trans_cfg->command_names;
trans_pcie->bc_table_dword = trans_cfg->bc_table_dword;
+ trans_pcie->scd_set_active = trans_cfg->scd_set_active;
/* Initialize NAPI here - it should be before registering to mac80211
* in the opmode but after the HW struct is allocated.
@@ -2189,7 +2190,7 @@
*/
if (trans->cfg->device_family == IWL_DEVICE_FAMILY_8000)
trans->hw_rev = (trans->hw_rev & 0xfff0) |
- ((trans->hw_rev << 2) & 0xc);
+ (CSR_HW_REV_STEP(trans->hw_rev << 2));
trans->hw_id = (pdev->device << 16) + pdev->subsystem_device;
snprintf(trans->hw_id_str, sizeof(trans->hw_id_str),
diff --git a/drivers/net/wireless/iwlwifi/pcie/tx.c b/drivers/net/wireless/iwlwifi/pcie/tx.c
index a6336b4..eb8e298 100644
--- a/drivers/net/wireless/iwlwifi/pcie/tx.c
+++ b/drivers/net/wireless/iwlwifi/pcie/tx.c
@@ -620,8 +620,8 @@
/* De-alloc array of command/tx buffers */
if (txq_id == trans_pcie->cmd_queue)
for (i = 0; i < txq->q.n_window; i++) {
- kfree(txq->entries[i].cmd);
- kfree(txq->entries[i].free_buf);
+ kzfree(txq->entries[i].cmd);
+ kzfree(txq->entries[i].free_buf);
}
/* De-alloc circular buffer of TFDs */
@@ -1080,7 +1080,8 @@
fifo = cfg->fifo;
/* Disable the scheduler prior configuring the cmd queue */
- if (txq_id == trans_pcie->cmd_queue)
+ if (txq_id == trans_pcie->cmd_queue &&
+ trans_pcie->scd_set_active)
iwl_scd_enable_set_active(trans, 0);
/* Stop this Tx queue before configuring it */
@@ -1142,7 +1143,8 @@
SCD_QUEUE_STTS_REG_MSK);
/* enable the scheduler for this queue (only) */
- if (txq_id == trans_pcie->cmd_queue)
+ if (txq_id == trans_pcie->cmd_queue &&
+ trans_pcie->scd_set_active)
iwl_scd_enable_set_active(trans, BIT(txq_id));
}
@@ -1407,7 +1409,7 @@
out_meta->flags = cmd->flags;
if (WARN_ON_ONCE(txq->entries[idx].free_buf))
- kfree(txq->entries[idx].free_buf);
+ kzfree(txq->entries[idx].free_buf);
txq->entries[idx].free_buf = dup_buf;
trace_iwlwifi_dev_hcmd(trans->dev, cmd, cmd_size, &out_cmd->hdr);
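The kfree() -> kzfree() switches in pcie/rx.c and pcie/tx.c clear
host-command buffers before freeing them, so payloads such as key material
do not linger in freed memory. A userspace approximation of the idea (in
the kernel, kzfree() obtains the length from the allocator; in userspace a
plain memset before free can be elided by the optimizer, so a real
hardening pass would use explicit_bzero or similar):

#include <stdlib.h>
#include <string.h>

static void zfree(void *p, size_t len)
{
	if (!p)
		return;
	memset(p, 0, len);	/* wipe contents before releasing */
	free(p);
}

int main(void)
{
	char *cmd = malloc(64);

	if (!cmd)
		return 1;
	strcpy(cmd, "sensitive command payload");
	zfree(cmd, 64);
	return 0;
}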
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index 1326f61..babbdc1 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -2045,8 +2045,6 @@
hw->flags = IEEE80211_HW_MFP_CAPABLE |
IEEE80211_HW_SIGNAL_DBM |
- IEEE80211_HW_SUPPORTS_STATIC_SMPS |
- IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS |
IEEE80211_HW_AMPDU_AGGREGATION |
IEEE80211_HW_WANT_MONITOR_VIF |
IEEE80211_HW_QUEUE_CONTROL |
@@ -2059,8 +2057,10 @@
WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL |
WIPHY_FLAG_AP_UAPSD |
WIPHY_FLAG_HAS_CHANNEL_SWITCH;
- hw->wiphy->features |= NL80211_FEATURE_ACTIVE_MONITOR;
- hw->wiphy->features |= NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE;
+ hw->wiphy->features |= NL80211_FEATURE_ACTIVE_MONITOR |
+ NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE |
+ NL80211_FEATURE_STATIC_SMPS |
+ NL80211_FEATURE_DYNAMIC_SMPS;
/* ask mac80211 to reserve space for magic */
hw->vif_data_size = sizeof(struct hwsim_vif_priv);
diff --git a/drivers/net/wireless/mwifiex/11n_rxreorder.c b/drivers/net/wireless/mwifiex/11n_rxreorder.c
index 06a2c21..4005707 100644
--- a/drivers/net/wireless/mwifiex/11n_rxreorder.c
+++ b/drivers/net/wireless/mwifiex/11n_rxreorder.c
@@ -183,6 +183,15 @@
if (!tbl)
return;
+ spin_lock_irqsave(&priv->adapter->rx_proc_lock, flags);
+ priv->adapter->rx_locked = true;
+ if (priv->adapter->rx_processing) {
+ spin_unlock_irqrestore(&priv->adapter->rx_proc_lock, flags);
+ flush_workqueue(priv->adapter->rx_workqueue);
+ } else {
+ spin_unlock_irqrestore(&priv->adapter->rx_proc_lock, flags);
+ }
+
start_win = (tbl->start_win + tbl->win_size) & (MAX_TID_VALUE - 1);
mwifiex_11n_dispatch_pkt_until_start_win(priv, tbl, start_win);
@@ -194,6 +203,11 @@
kfree(tbl->rx_reorder_ptr);
kfree(tbl);
+
+ spin_lock_irqsave(&priv->adapter->rx_proc_lock, flags);
+ priv->adapter->rx_locked = false;
+ spin_unlock_irqrestore(&priv->adapter->rx_proc_lock, flags);
+
}
/*
diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c
index c4723b0..0dd6729 100644
--- a/drivers/net/wireless/mwifiex/cfg80211.c
+++ b/drivers/net/wireless/mwifiex/cfg80211.c
@@ -1936,13 +1936,6 @@
wiphy_dbg(wiphy, "info: received scan request on %s\n", dev->name);
- if ((request->flags & NL80211_SCAN_FLAG_LOW_PRIORITY) &&
- atomic_read(&priv->wmm.tx_pkts_queued) >=
- MWIFIEX_MIN_TX_PENDING_TO_CANCEL_SCAN) {
- dev_dbg(priv->adapter->dev, "scan rejected due to traffic\n");
- return -EBUSY;
- }
-
/* Block scan request if scan operation or scan cleanup when interface
* is disabled is in process
*/
@@ -1981,7 +1974,7 @@
user_scan_cfg->chan_list[i].chan_number = chan->hw_value;
user_scan_cfg->chan_list[i].radio_type = chan->band;
- if (chan->flags & IEEE80211_CHAN_NO_IR)
+ if ((chan->flags & IEEE80211_CHAN_NO_IR) || !request->n_ssids)
user_scan_cfg->chan_list[i].scan_type =
MWIFIEX_SCAN_TYPE_PASSIVE;
else
@@ -1991,6 +1984,11 @@
user_scan_cfg->chan_list[i].scan_time = 0;
}
+ if (priv->adapter->scan_chan_gap_enabled &&
+ mwifiex_is_any_intf_active(priv))
+ user_scan_cfg->scan_chan_gap =
+ priv->adapter->scan_chan_gap_time;
+
ret = mwifiex_scan_networks(priv, user_scan_cfg);
kfree(user_scan_cfg);
if (ret) {
@@ -2915,7 +2913,6 @@
wiphy->features |= NL80211_FEATURE_HT_IBSS |
NL80211_FEATURE_INACTIVITY_TIMER |
- NL80211_FEATURE_LOW_PRIORITY_SCAN |
NL80211_FEATURE_NEED_OBSS_SCAN;
/* Reserve space for mwifiex specific private data for BSS */
diff --git a/drivers/net/wireless/mwifiex/cmdevt.c b/drivers/net/wireless/mwifiex/cmdevt.c
index 985f6c2..8559720 100644
--- a/drivers/net/wireless/mwifiex/cmdevt.c
+++ b/drivers/net/wireless/mwifiex/cmdevt.c
@@ -1508,6 +1508,7 @@
}
adapter->fw_release_number = le32_to_cpu(hw_spec->fw_release_number);
+ adapter->fw_api_ver = (adapter->fw_release_number >> 16) & 0xff;
adapter->number_of_antenna = le16_to_cpu(hw_spec->number_of_antenna);
if (le32_to_cpu(hw_spec->dot_11ac_dev_cap)) {
@@ -1612,5 +1613,8 @@
adapter->if_ops.update_mp_end_port(adapter,
le16_to_cpu(hw_spec->mp_end_port));
+ if (adapter->fw_api_ver == MWIFIEX_FW_V15)
+ adapter->scan_chan_gap_enabled = true;
+
return 0;
}
diff --git a/drivers/net/wireless/mwifiex/decl.h b/drivers/net/wireless/mwifiex/decl.h
index 0e03fe3..e0d00a7 100644
--- a/drivers/net/wireless/mwifiex/decl.h
+++ b/drivers/net/wireless/mwifiex/decl.h
@@ -48,8 +48,8 @@
#define MWIFIEX_UAP_AMPDU_DEF_RXWINSIZE 16
#define MWIFIEX_11AC_STA_AMPDU_DEF_TXWINSIZE 64
#define MWIFIEX_11AC_STA_AMPDU_DEF_RXWINSIZE 64
-#define MWIFIEX_11AC_UAP_AMPDU_DEF_TXWINSIZE 48
-#define MWIFIEX_11AC_UAP_AMPDU_DEF_RXWINSIZE 32
+#define MWIFIEX_11AC_UAP_AMPDU_DEF_TXWINSIZE 64
+#define MWIFIEX_11AC_UAP_AMPDU_DEF_RXWINSIZE 64
#define MWIFIEX_DEFAULT_BLOCK_ACK_TIMEOUT 0xffff
diff --git a/drivers/net/wireless/mwifiex/fw.h b/drivers/net/wireless/mwifiex/fw.h
index 6a703ea..1eb6173 100644
--- a/drivers/net/wireless/mwifiex/fw.h
+++ b/drivers/net/wireless/mwifiex/fw.h
@@ -170,7 +170,8 @@
#define TLV_TYPE_COALESCE_RULE (PROPRIETARY_TLV_BASE_ID + 154)
#define TLV_TYPE_KEY_PARAM_V2 (PROPRIETARY_TLV_BASE_ID + 156)
#define TLV_TYPE_TDLS_IDLE_TIMEOUT (PROPRIETARY_TLV_BASE_ID + 194)
-#define TLV_TYPE_API_REV (PROPRIETARY_TLV_BASE_ID + 199)
+#define TLV_TYPE_SCAN_CHANNEL_GAP (PROPRIETARY_TLV_BASE_ID + 197)
+#define TLV_TYPE_API_REV (PROPRIETARY_TLV_BASE_ID + 199)
#define MWIFIEX_TX_DATA_BUF_SIZE_2K 2048
@@ -653,6 +654,12 @@
__le16 num_probes;
} __packed;
+struct mwifiex_ie_types_scan_chan_gap {
+ struct mwifiex_ie_types_header header;
+ /* time gap in TUs to be used between two consecutive channels scan */
+ __le16 chan_gap;
+} __packed;
+
struct mwifiex_ie_types_wildcard_ssid_params {
struct mwifiex_ie_types_header header;
u8 max_ssid_length;
@@ -1249,6 +1256,7 @@
u8 num_ssids;
/* Variable number (fixed maximum) of channels to scan up */
struct mwifiex_user_scan_chan chan_list[MWIFIEX_USER_SCAN_CHAN_MAX];
+ u16 scan_chan_gap;
} __packed;
struct ie_body {
diff --git a/drivers/net/wireless/mwifiex/init.c b/drivers/net/wireless/mwifiex/init.c
index 80bda80..f7c97cf 100644
--- a/drivers/net/wireless/mwifiex/init.c
+++ b/drivers/net/wireless/mwifiex/init.c
@@ -212,6 +212,7 @@
adapter->specific_scan_time = MWIFIEX_SPECIFIC_SCAN_CHAN_TIME;
adapter->active_scan_time = MWIFIEX_ACTIVE_SCAN_CHAN_TIME;
adapter->passive_scan_time = MWIFIEX_PASSIVE_SCAN_CHAN_TIME;
+ adapter->scan_chan_gap_time = MWIFIEX_DEF_SCAN_CHAN_GAP_TIME;
adapter->scan_probes = 1;
@@ -280,7 +281,6 @@
memset(&adapter->arp_filter, 0, sizeof(adapter->arp_filter));
adapter->arp_filter_size = 0;
adapter->max_mgmt_ie_index = MAX_MGMT_IE_INDEX;
- adapter->empty_tx_q_cnt = 0;
adapter->ext_scan = true;
adapter->key_api_major_ver = 0;
adapter->key_api_minor_ver = 0;
@@ -447,8 +447,11 @@
spin_lock_init(&adapter->cmd_free_q_lock);
spin_lock_init(&adapter->cmd_pending_q_lock);
spin_lock_init(&adapter->scan_pending_q_lock);
+ spin_lock_init(&adapter->rx_q_lock);
+ spin_lock_init(&adapter->rx_proc_lock);
skb_queue_head_init(&adapter->usb_rx_data_q);
+ skb_queue_head_init(&adapter->rx_data_q);
for (i = 0; i < adapter->priv_num; ++i) {
INIT_LIST_HEAD(&adapter->bss_prio_tbl[i].bss_prio_head);
@@ -614,6 +617,7 @@
int ret = -EINPROGRESS;
struct mwifiex_private *priv;
s32 i;
+ unsigned long flags;
struct sk_buff *skb;
/* mwifiex already shutdown */
@@ -648,6 +652,21 @@
}
}
+ spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+
+ while ((skb = skb_dequeue(&adapter->rx_data_q))) {
+ struct mwifiex_rxinfo *rx_info = MWIFIEX_SKB_RXCB(skb);
+
+ atomic_dec(&adapter->rx_pending);
+ priv = adapter->priv[rx_info->bss_num];
+ if (priv)
+ priv->stats.rx_dropped++;
+
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+
spin_lock(&adapter->mwifiex_lock);
if (adapter->if_ops.data_complete) {
diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c
index dfa37ea..b522f7c 100644
--- a/drivers/net/wireless/mwifiex/main.c
+++ b/drivers/net/wireless/mwifiex/main.c
@@ -28,91 +28,6 @@
static char *cal_data_cfg;
module_param(cal_data_cfg, charp, 0);
-static void scan_delay_timer_fn(unsigned long data)
-{
- struct mwifiex_private *priv = (struct mwifiex_private *)data;
- struct mwifiex_adapter *adapter = priv->adapter;
- struct cmd_ctrl_node *cmd_node, *tmp_node;
- spinlock_t *scan_q_lock = &adapter->scan_pending_q_lock;
- unsigned long flags;
-
- if (adapter->surprise_removed)
- return;
-
- if (adapter->scan_delay_cnt == MWIFIEX_MAX_SCAN_DELAY_CNT ||
- !adapter->scan_processing) {
- /*
- * Abort scan operation by cancelling all pending scan
- * commands
- */
- spin_lock_irqsave(scan_q_lock, flags);
- list_for_each_entry_safe(cmd_node, tmp_node,
- &adapter->scan_pending_q, list) {
- list_del(&cmd_node->list);
- mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
- }
- spin_unlock_irqrestore(scan_q_lock, flags);
-
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
- adapter->scan_processing = false;
- adapter->scan_delay_cnt = 0;
- adapter->empty_tx_q_cnt = 0;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
-
- if (priv->scan_request) {
- dev_dbg(adapter->dev, "info: aborting scan\n");
- cfg80211_scan_done(priv->scan_request, 1);
- priv->scan_request = NULL;
- } else {
- priv->scan_aborting = false;
- dev_dbg(adapter->dev, "info: scan already aborted\n");
- }
- goto done;
- }
-
- if (!atomic_read(&priv->adapter->is_tx_received)) {
- adapter->empty_tx_q_cnt++;
- if (adapter->empty_tx_q_cnt == MWIFIEX_MAX_EMPTY_TX_Q_CNT) {
- /*
- * No Tx traffic for 200msec. Get scan command from
- * scan pending queue and put to cmd pending queue to
- * resume scan operation
- */
- adapter->scan_delay_cnt = 0;
- adapter->empty_tx_q_cnt = 0;
- spin_lock_irqsave(scan_q_lock, flags);
-
- if (list_empty(&adapter->scan_pending_q)) {
- spin_unlock_irqrestore(scan_q_lock, flags);
- goto done;
- }
-
- cmd_node = list_first_entry(&adapter->scan_pending_q,
- struct cmd_ctrl_node, list);
- list_del(&cmd_node->list);
- spin_unlock_irqrestore(scan_q_lock, flags);
-
- mwifiex_insert_cmd_to_pending_q(adapter, cmd_node,
- true);
- queue_work(adapter->workqueue, &adapter->main_work);
- goto done;
- }
- } else {
- adapter->empty_tx_q_cnt = 0;
- }
-
- /* Delay scan operation further by 20msec */
- mod_timer(&priv->scan_delay_timer, jiffies +
- msecs_to_jiffies(MWIFIEX_SCAN_DELAY_MSEC));
- adapter->scan_delay_cnt++;
-
-done:
- if (atomic_read(&priv->adapter->is_tx_received))
- atomic_set(&priv->adapter->is_tx_received, false);
-
- return;
-}
-
/*
* This function registers the device and performs all the necessary
* initializations.
@@ -160,10 +75,6 @@
adapter->priv[i]->adapter = adapter;
adapter->priv_num++;
-
- setup_timer(&adapter->priv[i]->scan_delay_timer,
- scan_delay_timer_fn,
- (unsigned long)adapter->priv[i]);
}
mwifiex_init_lock_list(adapter);
@@ -207,7 +118,6 @@
for (i = 0; i < adapter->priv_num; i++) {
if (adapter->priv[i]) {
mwifiex_free_curr_bcn(adapter->priv[i]);
- del_timer_sync(&adapter->priv[i]->scan_delay_timer);
kfree(adapter->priv[i]);
}
}
@@ -216,6 +126,42 @@
return 0;
}
+static int mwifiex_process_rx(struct mwifiex_adapter *adapter)
+{
+ unsigned long flags;
+ struct sk_buff *skb;
+ bool delay_main_work = adapter->delay_main_work;
+
+ spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ if (adapter->rx_processing || adapter->rx_locked) {
+ spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ goto exit_rx_proc;
+ } else {
+ adapter->rx_processing = true;
+ spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ }
+
+ /* Check for Rx data */
+ while ((skb = skb_dequeue(&adapter->rx_data_q))) {
+ atomic_dec(&adapter->rx_pending);
+ if (adapter->delay_main_work &&
+ (atomic_dec_return(&adapter->rx_pending) <
+ LOW_RX_PENDING)) {
+ adapter->delay_main_work = false;
+ queue_work(adapter->rx_workqueue, &adapter->rx_work);
+ }
+ mwifiex_handle_rx_packet(adapter, skb);
+ }
+ spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ adapter->rx_processing = false;
+ spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+
+ if (delay_main_work)
+ queue_work(adapter->workqueue, &adapter->main_work);
+exit_rx_proc:
+ return 0;
+}
+
/*
* The main process.
*
@@ -253,6 +199,19 @@
(adapter->hw_status == MWIFIEX_HW_STATUS_NOT_READY))
break;
+ /* If we process interrupts first, it would increase RX pending
+ * even further. Avoid this by checking if rx_pending has
+ * crossed high threshold and schedule rx work queue
+ * and then process interrupts
+ */
+ if (atomic_read(&adapter->rx_pending) >= HIGH_RX_PENDING) {
+ adapter->delay_main_work = true;
+ if (!adapter->rx_processing)
+ queue_work(adapter->rx_workqueue,
+ &adapter->rx_work);
+ break;
+ }
+
/* Handle pending interrupt if any */
if (adapter->int_status) {
if (adapter->hs_activated)
@@ -261,6 +220,9 @@
adapter->if_ops.process_int_status(adapter);
}
+ if (adapter->rx_work_enabled && adapter->data_received)
+ queue_work(adapter->rx_workqueue, &adapter->rx_work);
+
/* Need to wake up the card ? */
if ((adapter->ps_state == PS_STATE_SLEEP) &&
(adapter->pm_wakeup_card_req &&
@@ -273,6 +235,7 @@
}
if (IS_CARD_RX_RCVD(adapter)) {
+ adapter->data_received = false;
adapter->pm_wakeup_fw_try = false;
if (adapter->ps_state == PS_STATE_SLEEP)
adapter->ps_state = PS_STATE_AWAKE;
@@ -284,8 +247,8 @@
adapter->tx_lock_flag)
break;
- if ((adapter->scan_processing &&
- !adapter->scan_delay_cnt) || adapter->data_sent ||
+ if ((!adapter->scan_chan_gap_enabled &&
+ adapter->scan_processing) || adapter->data_sent ||
mwifiex_wmm_lists_empty(adapter)) {
if (adapter->cmd_sent || adapter->curr_cmd ||
(!is_command_pending(adapter)))
@@ -339,7 +302,8 @@
}
}
- if ((!adapter->scan_processing || adapter->scan_delay_cnt) &&
+ if ((adapter->scan_chan_gap_enabled ||
+ !adapter->scan_processing) &&
!adapter->data_sent && !mwifiex_wmm_lists_empty(adapter)) {
mwifiex_wmm_process_tx(adapter);
if (adapter->hs_activated) {
@@ -407,6 +371,12 @@
flush_workqueue(adapter->workqueue);
destroy_workqueue(adapter->workqueue);
adapter->workqueue = NULL;
+
+ if (adapter->rx_workqueue) {
+ flush_workqueue(adapter->rx_workqueue);
+ destroy_workqueue(adapter->rx_workqueue);
+ adapter->rx_workqueue = NULL;
+ }
}
/*
@@ -598,9 +568,6 @@
atomic_inc(&priv->adapter->tx_pending);
mwifiex_wmm_add_buf_txqueue(priv, skb);
- if (priv->adapter->scan_delay_cnt)
- atomic_set(&priv->adapter->is_tx_received, true);
-
queue_work(priv->adapter->workqueue, &priv->adapter->main_work);
return 0;
@@ -824,6 +791,21 @@
}
/*
+ * This is the RX work queue function.
+ *
+ * It handles the RX operations.
+ */
+static void mwifiex_rx_work_queue(struct work_struct *work)
+{
+ struct mwifiex_adapter *adapter =
+ container_of(work, struct mwifiex_adapter, rx_work);
+
+ if (adapter->surprise_removed)
+ return;
+ mwifiex_process_rx(adapter);
+}
+
+/*
* This is the main work queue function.
*
* It handles the main process, which in turn handles the complete
@@ -879,6 +861,11 @@
adapter->cmd_wait_q.status = 0;
adapter->scan_wait_q_woken = false;
+ if (num_possible_cpus() > 1) {
+ adapter->rx_work_enabled = true;
+ pr_notice("rx work enabled, cpus %d\n", num_possible_cpus());
+ }
+
adapter->workqueue =
alloc_workqueue("MWIFIEX_WORK_QUEUE",
WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_UNBOUND, 1);
@@ -886,6 +873,18 @@
goto err_kmalloc;
INIT_WORK(&adapter->main_work, mwifiex_main_work_queue);
+
+ if (adapter->rx_work_enabled) {
+ adapter->rx_workqueue = alloc_workqueue("MWIFIEX_RX_WORK_QUEUE",
+ WQ_HIGHPRI |
+ WQ_MEM_RECLAIM |
+ WQ_UNBOUND, 1);
+ if (!adapter->rx_workqueue)
+ goto err_kmalloc;
+
+ INIT_WORK(&adapter->rx_work, mwifiex_rx_work_queue);
+ }
+
if (adapter->if_ops.iface_work)
INIT_WORK(&adapter->iface_work, adapter->if_ops.iface_work);
diff --git a/drivers/net/wireless/mwifiex/main.h b/drivers/net/wireless/mwifiex/main.h
index 5439963..1a99999 100644
--- a/drivers/net/wireless/mwifiex/main.h
+++ b/drivers/net/wireless/mwifiex/main.h
@@ -58,6 +58,9 @@
#define MAX_TX_PENDING 100
#define LOW_TX_PENDING 80
+#define HIGH_RX_PENDING 50
+#define LOW_RX_PENDING 20
+
#define MWIFIEX_UPLD_SIZE (2312)
#define MAX_EVENT_SIZE 2048
@@ -84,17 +87,12 @@
#define MWIFIEX_PASSIVE_SCAN_CHAN_TIME 110
#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME 30
#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME 30
+#define MWIFIEX_DEF_SCAN_CHAN_GAP_TIME 50
#define SCAN_RSSI(RSSI) (0x100 - ((u8)(RSSI)))
#define MWIFIEX_MAX_TOTAL_SCAN_TIME (MWIFIEX_TIMER_10S - MWIFIEX_TIMER_1S)
-#define MWIFIEX_MAX_SCAN_DELAY_CNT 50
-#define MWIFIEX_MAX_EMPTY_TX_Q_CNT 10
-#define MWIFIEX_SCAN_DELAY_MSEC 20
-
-#define MWIFIEX_MIN_TX_PENDING_TO_CANCEL_SCAN 2
-
#define RSN_GTK_OUI_OFFSET 2
#define MWIFIEX_OUI_NOT_PRESENT 0
@@ -547,7 +545,6 @@
u8 nick_name[16];
u16 current_key_index;
struct semaphore async_sem;
- u8 report_scan_result;
struct cfg80211_scan_request *scan_request;
u8 cfg_bssid[6];
struct wps wps;
@@ -561,7 +558,6 @@
u16 proberesp_idx;
u16 assocresp_idx;
u16 rsn_idx;
- struct timer_list scan_delay_timer;
u8 ap_11n_enabled;
u8 ap_11ac_enabled;
u32 mgmt_frame_mask;
@@ -721,6 +717,12 @@
atomic_t cmd_pending;
struct workqueue_struct *workqueue;
struct work_struct main_work;
+ struct workqueue_struct *rx_workqueue;
+ struct work_struct rx_work;
+ bool rx_work_enabled;
+ bool rx_processing;
+ bool delay_main_work;
+ bool rx_locked;
struct mwifiex_bss_prio_tbl bss_prio_tbl[MWIFIEX_MAX_BSS_NUM];
/* spin lock for init/shutdown */
spinlock_t mwifiex_lock;
@@ -761,6 +763,10 @@
struct list_head scan_pending_q;
/* spin lock for scan_pending_q */
spinlock_t scan_pending_q_lock;
+ /* spin lock for RX queue */
+ spinlock_t rx_q_lock;
+ /* spin lock for RX processing routine */
+ spinlock_t rx_proc_lock;
struct sk_buff_head usb_rx_data_q;
u32 scan_processing;
u16 region_code;
@@ -770,6 +776,7 @@
u16 specific_scan_time;
u16 active_scan_time;
u16 passive_scan_time;
+ u16 scan_chan_gap_time;
u8 fw_bands;
u8 adhoc_start_band;
u8 config_bands;
@@ -815,8 +822,6 @@
spinlock_t queue_lock; /* lock for tx queues */
u8 country_code[IEEE80211_COUNTRY_STRING_LEN];
u16 max_mgmt_ie_index;
- u8 scan_delay_cnt;
- u8 empty_tx_q_cnt;
const struct firmware *cal_data;
struct device_node *dt_node;
@@ -828,7 +833,6 @@
u32 usr_dot_11ac_dev_cap_a;
u32 usr_dot_11ac_mcs_support;
- atomic_t is_tx_received;
atomic_t pending_bridged_pkts;
struct semaphore *card_sem;
bool ext_scan;
@@ -839,6 +843,8 @@
struct memory_type_mapping *mem_type_mapping_tbl;
u8 num_mem_types;
u8 curr_mem_idx;
+ bool scan_chan_gap_enabled;
+ struct sk_buff_head rx_data_q;
};
int mwifiex_init_lock_list(struct mwifiex_adapter *adapter);
@@ -1139,6 +1145,25 @@
return priv->csa_chan;
}
+static inline u8 mwifiex_is_any_intf_active(struct mwifiex_private *priv)
+{
+ struct mwifiex_private *priv_num;
+ int i;
+
+ for (i = 0; i < priv->adapter->priv_num; i++) {
+ priv_num = priv->adapter->priv[i];
+ if (priv_num) {
+ if ((GET_BSS_ROLE(priv_num) == MWIFIEX_BSS_ROLE_UAP &&
+ priv_num->bss_started) ||
+ (GET_BSS_ROLE(priv_num) == MWIFIEX_BSS_ROLE_STA &&
+ priv_num->media_connected))
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
int mwifiex_init_shutdown_fw(struct mwifiex_private *priv,
u32 func_init_shutdown);
int mwifiex_add_card(void *, struct semaphore *, struct mwifiex_if_ops *, u8);
@@ -1274,6 +1299,7 @@
bool mwifiex_is_bss_in_11ac_mode(struct mwifiex_private *priv);
u8 mwifiex_get_center_freq_index(struct mwifiex_private *priv, u8 band,
u32 pri_chan, u8 chan_bw);
+int mwifiex_init_channel_scan_gap(struct mwifiex_adapter *adapter);
#ifdef CONFIG_DEBUG_FS
void mwifiex_debugfs_init(void);
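The main.h/main.c changes hand RX processing to a dedicated work queue and
throttle the main work loop with a high/low watermark pair on rx_pending
(HIGH_RX_PENDING/LOW_RX_PENDING above): once the backlog crosses the high
mark, main work defers until the RX worker drains it below the low mark. A
standalone sketch of that hysteresis:

#include <stdbool.h>
#include <stdio.h>

#define HIGH_RX_PENDING 50
#define LOW_RX_PENDING  20

int main(void)
{
	int rx_pending = 0;
	bool delay_main_work = false;
	int i;

	for (i = 0; i < 60; i++) {		/* packets arriving */
		rx_pending++;
		if (rx_pending >= HIGH_RX_PENDING)
			delay_main_work = true;	/* defer to RX worker */
	}
	while (rx_pending > 0) {		/* RX worker draining */
		rx_pending--;
		if (delay_main_work && rx_pending < LOW_RX_PENDING)
			delay_main_work = false; /* resume main work */
	}
	printf("pending=%d delayed=%d\n", rx_pending, delay_main_work);
	return 0;
}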
diff --git a/drivers/net/wireless/mwifiex/pcie.c b/drivers/net/wireless/mwifiex/pcie.c
index ff05458..1504b16 100644
--- a/drivers/net/wireless/mwifiex/pcie.c
+++ b/drivers/net/wireless/mwifiex/pcie.c
@@ -1233,6 +1233,7 @@
struct sk_buff *skb_tmp = NULL;
struct mwifiex_pcie_buf_desc *desc;
struct mwifiex_pfu_buf_desc *desc2;
+ unsigned long flags;
if (!mwifiex_pcie_ok_to_access_hw(adapter))
mwifiex_pm_wakeup_card(adapter);
@@ -1271,12 +1272,29 @@
*/
pkt_len = *((__le16 *)skb_data->data);
rx_len = le16_to_cpu(pkt_len);
- skb_put(skb_data, rx_len);
- dev_dbg(adapter->dev,
- "info: RECV DATA: Rd=%#x, Wr=%#x, Len=%d\n",
- card->rxbd_rdptr, wrptr, rx_len);
- skb_pull(skb_data, INTF_HEADER_LEN);
- mwifiex_handle_rx_packet(adapter, skb_data);
+ if (WARN_ON(rx_len <= INTF_HEADER_LEN ||
+ rx_len > MWIFIEX_RX_DATA_BUF_SIZE)) {
+ dev_err(adapter->dev,
+ "Invalid RX len %d, Rd=%#x, Wr=%#x\n",
+ rx_len, card->rxbd_rdptr, wrptr);
+ dev_kfree_skb_any(skb_data);
+ } else {
+ skb_put(skb_data, rx_len);
+ dev_dbg(adapter->dev,
+ "info: RECV DATA: Rd=%#x, Wr=%#x, Len=%d\n",
+ card->rxbd_rdptr, wrptr, rx_len);
+ skb_pull(skb_data, INTF_HEADER_LEN);
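+ /* When the RX work thread is enabled, defer processing:
+ * queue the skb on rx_data_q and let the worker drain it;
+ * otherwise handle the packet inline in this context.
+ */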
+ if (adapter->rx_work_enabled) {
+ spin_lock_irqsave(&adapter->rx_q_lock, flags);
+ skb_queue_tail(&adapter->rx_data_q, skb_data);
+ spin_unlock_irqrestore(&adapter->rx_q_lock,
+ flags);
+ adapter->data_received = true;
+ atomic_inc(&adapter->rx_pending);
+ } else {
+ mwifiex_handle_rx_packet(adapter, skb_data);
+ }
+ }
skb_tmp = dev_alloc_skb(MWIFIEX_RX_DATA_BUF_SIZE);
if (!skb_tmp) {
@@ -1718,6 +1736,13 @@
buffer is released. This is just to make things simpler;
we need to find a better method of managing these buffers.
*/
+ } else {
+ if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
+ CPU_INTR_EVENT_DONE)) {
+ dev_warn(adapter->dev,
+ "Write register failed\n");
+ return -1;
+ }
}
return 0;
diff --git a/drivers/net/wireless/mwifiex/pcie.h b/drivers/net/wireless/mwifiex/pcie.h
index a1a8fd3..200e8b0 100644
--- a/drivers/net/wireless/mwifiex/pcie.h
+++ b/drivers/net/wireless/mwifiex/pcie.h
@@ -40,8 +40,8 @@
#define MWIFIEX_TXBD_MASK 0x3F
#define MWIFIEX_RXBD_MASK 0x3F
-#define MWIFIEX_MAX_EVT_BD 0x04
-#define MWIFIEX_EVTBD_MASK 0x07
+#define MWIFIEX_MAX_EVT_BD 0x08
+#define MWIFIEX_EVTBD_MASK 0x0f
/* PCIE INTERNAL REGISTERS */
#define PCIE_SCRATCH_0_REG 0xC10
@@ -69,6 +69,7 @@
#define CPU_INTR_DOOR_BELL BIT(1)
#define CPU_INTR_SLEEP_CFM_DONE BIT(2)
#define CPU_INTR_RESET BIT(3)
+#define CPU_INTR_EVENT_DONE BIT(5)
#define HOST_INTR_DNLD_DONE BIT(0)
#define HOST_INTR_UPLD_RDY BIT(1)
diff --git a/drivers/net/wireless/mwifiex/scan.c b/drivers/net/wireless/mwifiex/scan.c
index 195ef0ca..c09ebee 100644
--- a/drivers/net/wireless/mwifiex/scan.c
+++ b/drivers/net/wireless/mwifiex/scan.c
@@ -799,6 +799,7 @@
{
struct mwifiex_adapter *adapter = priv->adapter;
struct mwifiex_ie_types_num_probes *num_probes_tlv;
+ struct mwifiex_ie_types_scan_chan_gap *chan_gap_tlv;
struct mwifiex_ie_types_wildcard_ssid_params *wildcard_ssid_tlv;
struct mwifiex_ie_types_bssid_list *bssid_tlv;
u8 *tlv_pos;
@@ -939,6 +940,22 @@
else
*max_chan_per_scan = MWIFIEX_DEF_CHANNELS_PER_SCAN_CMD;
+ if (user_scan_in->scan_chan_gap) {
+ *max_chan_per_scan = MWIFIEX_MAX_CHANNELS_PER_SPECIFIC_SCAN;
+ dev_dbg(adapter->dev, "info: scan: channel gap = %d\n",
+ user_scan_in->scan_chan_gap);
+
+ chan_gap_tlv = (void *)tlv_pos;
+ chan_gap_tlv->header.type =
+ cpu_to_le16(TLV_TYPE_SCAN_CHANNEL_GAP);
+ chan_gap_tlv->header.len =
+ cpu_to_le16(sizeof(chan_gap_tlv->chan_gap));
+ chan_gap_tlv->chan_gap =
+ cpu_to_le16((user_scan_in->scan_chan_gap));
+
+ tlv_pos += sizeof(struct mwifiex_ie_types_scan_chan_gap);
+ }
+
/* If the input config or adapter has the number of Probes set,
add tlv */
if (num_probes) {
@@ -1050,12 +1067,6 @@
*filtered_scan);
}
- /*
- * In associated state we will reduce the number of channels scanned per
- * scan command to 1 to avoid any traffic delay/loss.
- */
- if (priv->media_connected)
- *max_chan_per_scan = 1;
}
/*
@@ -1755,7 +1766,7 @@
static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
{
struct mwifiex_adapter *adapter = priv->adapter;
- struct cmd_ctrl_node *cmd_node;
+ struct cmd_ctrl_node *cmd_node, *tmp_node;
unsigned long flags;
spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
@@ -1768,9 +1779,6 @@
if (!adapter->ext_scan)
mwifiex_complete_scan(priv);
- if (priv->report_scan_result)
- priv->report_scan_result = false;
-
if (priv->scan_request) {
dev_dbg(adapter->dev, "info: notifying scan done\n");
cfg80211_scan_done(priv->scan_request, 0);
@@ -1779,37 +1787,36 @@
priv->scan_aborting = false;
dev_dbg(adapter->dev, "info: scan already aborted\n");
}
- } else {
- if ((priv->scan_aborting && !priv->scan_request) ||
- priv->scan_block) {
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
- flags);
- adapter->scan_delay_cnt = MWIFIEX_MAX_SCAN_DELAY_CNT;
- mod_timer(&priv->scan_delay_timer, jiffies);
- dev_dbg(priv->adapter->dev,
- "info: %s: triggerring scan abort\n", __func__);
- } else if (!mwifiex_wmm_lists_empty(adapter) &&
- (priv->scan_request && (priv->scan_request->flags &
- NL80211_SCAN_FLAG_LOW_PRIORITY))) {
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
- flags);
- adapter->scan_delay_cnt = 1;
- mod_timer(&priv->scan_delay_timer, jiffies +
- msecs_to_jiffies(MWIFIEX_SCAN_DELAY_MSEC));
- dev_dbg(priv->adapter->dev,
- "info: %s: deferring scan\n", __func__);
- } else {
- /* Get scan command from scan_pending_q and put to
- * cmd_pending_q
- */
- cmd_node = list_first_entry(&adapter->scan_pending_q,
- struct cmd_ctrl_node, list);
+ } else if ((priv->scan_aborting && !priv->scan_request) ||
+ priv->scan_block) {
+ list_for_each_entry_safe(cmd_node, tmp_node,
+ &adapter->scan_pending_q, list) {
list_del(&cmd_node->list);
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
- flags);
- mwifiex_insert_cmd_to_pending_q(adapter, cmd_node,
- true);
+ mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
}
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+
+ spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ adapter->scan_processing = false;
+ spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+
+ if (priv->scan_request) {
+ dev_dbg(adapter->dev, "info: aborting scan\n");
+ cfg80211_scan_done(priv->scan_request, 1);
+ priv->scan_request = NULL;
+ } else {
+ priv->scan_aborting = false;
+ dev_dbg(adapter->dev, "info: scan already aborted\n");
+ }
+ } else {
+ /* Get scan command from scan_pending_q and put to
+ * cmd_pending_q
+ */
+ cmd_node = list_first_entry(&adapter->scan_pending_q,
+ struct cmd_ctrl_node, list);
+ list_del(&cmd_node->list);
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+ mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true);
}
return;
@@ -1971,9 +1978,34 @@
/* This function handles the command response of extended scan */
int mwifiex_ret_802_11_scan_ext(struct mwifiex_private *priv)
{
+ struct mwifiex_adapter *adapter = priv->adapter;
+ struct host_cmd_ds_command *cmd_ptr;
+ struct cmd_ctrl_node *cmd_node;
+ unsigned long cmd_flags, scan_flags;
+ bool complete_scan = false;
+
dev_dbg(priv->adapter->dev, "info: EXT scan returns successfully\n");
- mwifiex_complete_scan(priv);
+ spin_lock_irqsave(&adapter->cmd_pending_q_lock, cmd_flags);
+ spin_lock_irqsave(&adapter->scan_pending_q_lock, scan_flags);
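+ /* Complete the scan only if no further EXT scan command is
+ * queued in either list; if one is still pending, its response
+ * will complete the scan later.
+ */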
+ if (list_empty(&adapter->scan_pending_q)) {
+ complete_scan = true;
+ list_for_each_entry(cmd_node, &adapter->cmd_pending_q, list) {
+ cmd_ptr = (void *)cmd_node->cmd_skb->data;
+ if (le16_to_cpu(cmd_ptr->command) ==
+ HostCmd_CMD_802_11_SCAN_EXT) {
+ dev_dbg(priv->adapter->dev,
+ "Scan pending in command pending list");
+ complete_scan = false;
+ break;
+ }
+ }
+ }
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock, scan_flags);
+ spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, cmd_flags);
+
+ if (complete_scan)
+ mwifiex_complete_scan(priv);
return 0;
}
diff --git a/drivers/net/wireless/mwifiex/sdio.c b/drivers/net/wireless/mwifiex/sdio.c
index 1770fa3..ea8fc58 100644
--- a/drivers/net/wireless/mwifiex/sdio.c
+++ b/drivers/net/wireless/mwifiex/sdio.c
@@ -622,22 +622,15 @@
dev_dbg(adapter->dev, "data: mp_wr_bitmap=0x%08x\n", wr_bitmap);
- if (card->supports_sdio_new_mode &&
- !(wr_bitmap & reg->data_port_mask)) {
+ if (!(wr_bitmap & card->mp_data_port_mask)) {
adapter->data_sent = true;
return -EBUSY;
- } else if (!card->supports_sdio_new_mode &&
- !(wr_bitmap & card->mp_data_port_mask)) {
- return -1;
}
if (card->mp_wr_bitmap & (1 << card->curr_wr_port)) {
card->mp_wr_bitmap &= (u32) (~(1 << card->curr_wr_port));
*port = card->curr_wr_port;
- if (((card->supports_sdio_new_mode) &&
- (++card->curr_wr_port == card->max_ports)) ||
- ((!card->supports_sdio_new_mode) &&
- (++card->curr_wr_port == card->mp_end_port)))
+ if (++card->curr_wr_port == card->mp_end_port)
card->curr_wr_port = reg->start_wr_port;
} else {
adapter->data_sent = true;
@@ -1046,6 +1039,7 @@
struct sk_buff *skb, u32 upld_typ)
{
u8 *cmd_buf;
+ unsigned long flags;
__le16 *curr_ptr = (__le16 *)skb->data;
u16 pkt_len = le16_to_cpu(*curr_ptr);
@@ -1055,7 +1049,15 @@
switch (upld_typ) {
case MWIFIEX_TYPE_DATA:
dev_dbg(adapter->dev, "info: --- Rx: Data packet ---\n");
- mwifiex_handle_rx_packet(adapter, skb);
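+ /* Same deferral as on PCIe: queue to rx_data_q for the RX
+ * worker when rx_work is enabled, else process inline.
+ */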
+ if (adapter->rx_work_enabled) {
+ spin_lock_irqsave(&adapter->rx_q_lock, flags);
+ skb_queue_tail(&adapter->rx_data_q, skb);
+ spin_unlock_irqrestore(&adapter->rx_q_lock, flags);
+ adapter->data_received = true;
+ atomic_inc(&adapter->rx_pending);
+ } else {
+ mwifiex_handle_rx_packet(adapter, skb);
+ }
break;
case MWIFIEX_TYPE_CMD:
@@ -1527,8 +1529,7 @@
__func__);
if (MP_TX_AGGR_IN_PROGRESS(card)) {
- if (!mp_tx_aggr_port_limit_reached(card) &&
- MP_TX_AGGR_BUF_HAS_ROOM(card, pkt_len)) {
+ if (MP_TX_AGGR_BUF_HAS_ROOM(card, pkt_len)) {
f_precopy_cur_buf = 1;
if (!(card->mp_wr_bitmap &
@@ -1540,8 +1541,7 @@
/* No room in Aggr buf, send it */
f_send_aggr_buf = 1;
- if (mp_tx_aggr_port_limit_reached(card) ||
- !(card->mp_wr_bitmap &
+ if (!(card->mp_wr_bitmap &
(1 << card->curr_wr_port)))
f_send_cur_buf = 1;
else
diff --git a/drivers/net/wireless/mwifiex/sta_cmdresp.c b/drivers/net/wireless/mwifiex/sta_cmdresp.c
index 62866b0..4aad446 100644
--- a/drivers/net/wireless/mwifiex/sta_cmdresp.c
+++ b/drivers/net/wireless/mwifiex/sta_cmdresp.c
@@ -85,8 +85,6 @@
spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
adapter->scan_processing = false;
spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
- if (priv->report_scan_result)
- priv->report_scan_result = false;
break;
case HostCmd_CMD_MAC_CONTROL:
diff --git a/drivers/net/wireless/mwifiex/sta_ioctl.c b/drivers/net/wireless/mwifiex/sta_ioctl.c
index b95a29b..92f3eb8 100644
--- a/drivers/net/wireless/mwifiex/sta_ioctl.c
+++ b/drivers/net/wireless/mwifiex/sta_ioctl.c
@@ -287,10 +287,13 @@
return -1;
if (mwifiex_band_to_radio_type(bss_desc->bss_band) ==
- HostCmd_SCAN_RADIO_TYPE_BG)
+ HostCmd_SCAN_RADIO_TYPE_BG) {
config_bands = BAND_B | BAND_G | BAND_GN;
- else
- config_bands = BAND_A | BAND_AN | BAND_AAC;
+ } else {
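+ /* Advertise 11ac rates on the A band only when the
+ * firmware reports support for them.
+ */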
+ config_bands = BAND_A | BAND_AN;
+ if (adapter->fw_bands & BAND_AAC)
+ config_bands |= BAND_AAC;
+ }
if (!((config_bands | adapter->fw_bands) & ~adapter->fw_bands))
adapter->config_bands = config_bands;
diff --git a/drivers/net/wireless/mwifiex/tdls.c b/drivers/net/wireless/mwifiex/tdls.c
index 4c5fd95..e294907 100644
--- a/drivers/net/wireless/mwifiex/tdls.c
+++ b/drivers/net/wireless/mwifiex/tdls.c
@@ -871,7 +871,9 @@
break;
case WLAN_EID_RSN:
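+ /* Clamp the copy length so a malformed (oversized)
+ * RSN IE cannot overflow tdls_cap.rsn_ie.
+ */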
memcpy((u8 *)&sta_ptr->tdls_cap.rsn_ie, pos,
- sizeof(struct ieee_types_header) + pos[1]);
+ sizeof(struct ieee_types_header) +
+ min_t(u8, pos[1], IEEE_MAX_IE_SIZE -
+ sizeof(struct ieee_types_header)));
break;
case WLAN_EID_QOS_CAPA:
sta_ptr->tdls_cap.qos_info = pos[2];
diff --git a/drivers/net/wireless/p54/main.c b/drivers/net/wireless/p54/main.c
index 7be3a48..97aeff0 100644
--- a/drivers/net/wireless/p54/main.c
+++ b/drivers/net/wireless/p54/main.c
@@ -696,7 +696,8 @@
WARN(total, "tx flush timeout, unresponsive firmware");
}
-static void p54_set_coverage_class(struct ieee80211_hw *dev, u8 coverage_class)
+static void p54_set_coverage_class(struct ieee80211_hw *dev,
+ s16 coverage_class)
{
struct p54_common *priv = dev->priv;
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbt_precomp.h b/drivers/net/wireless/rtlwifi/btcoexist/halbt_precomp.h
index d76684e..39b9a33 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/halbt_precomp.h
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbt_precomp.h
@@ -37,7 +37,13 @@
#include "halbtcoutsrc.h"
+#include "halbtc8192e2ant.h"
+#include "halbtc8723b1ant.h"
#include "halbtc8723b2ant.h"
+#include "halbtc8821a2ant.h"
+#include "halbtc8821a1ant.h"
+
+#define GetDefaultAdapter(padapter) padapter
#define BIT0 0x00000001
#define BIT1 0x00000002
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8192e2ant.c b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8192e2ant.c
new file mode 100644
index 0000000..53261d6
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8192e2ant.c
@@ -0,0 +1,3849 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+/**************************************************************
+ * Description:
+ *
+ * This file is for RTL8192E Co-exist mechanism
+ *
+ * History
+ * 2012/11/15 Cosa first check in.
+ *
+ **************************************************************/
+
+/**************************************************************
+ * include files
+ **************************************************************/
+#include "halbt_precomp.h"
+/**************************************************************
+ * Global variables, these are static variables
+ **************************************************************/
+static struct coex_dm_8192e_2ant glcoex_dm_8192e_2ant;
+static struct coex_dm_8192e_2ant *coex_dm = &glcoex_dm_8192e_2ant;
+static struct coex_sta_8192e_2ant glcoex_sta_8192e_2ant;
+static struct coex_sta_8192e_2ant *coex_sta = &glcoex_sta_8192e_2ant;
+
+static const char *const GLBtInfoSrc8192e2Ant[] = {
+ "BT Info[wifi fw]",
+ "BT Info[bt rsp]",
+ "BT Info[bt auto report]",
+};
+
+static u32 glcoex_ver_date_8192e_2ant = 20130902;
+static u32 glcoex_ver_8192e_2ant = 0x34;
+
+/**************************************************************
+ * local function proto type if needed
+ **************************************************************/
+/**************************************************************
+ * local function start with halbtc8192e2ant_
+ **************************************************************/
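+/* Map the BT RSSI onto a low/(medium)/high state with hysteresis: moving
+ * up requires crossing the threshold plus a tolerance
+ * (BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT), while moving down only requires
+ * dropping below the threshold, so the state does not flap at the
+ * boundary.
+ */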
+static u8 halbtc8192e2ant_btrssi_state(u8 level_num, u8 rssi_thresh,
+ u8 rssi_thresh1)
+{
+ int btrssi = 0;
+ u8 btrssi_state = coex_sta->pre_bt_rssi_state;
+
+ btrssi = coex_sta->bt_rssi;
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi pre state = LOW\n");
+ if (btrssi >= (rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT)) {
+ btrssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state switch to High\n");
+ } else {
+ btrssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state stay at Low\n");
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi pre state = HIGH\n");
+ if (btrssi < rssi_thresh) {
+ btrssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state switch to Low\n");
+ } else {
+ btrssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi thresh error!!\n");
+ return coex_sta->pre_bt_rssi_state;
+ }
+
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi pre state = LOW\n");
+ if (btrssi >= (rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT)) {
+ btrssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state switch to Medium\n");
+ } else {
+ btrssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi pre state = MEDIUM\n");
+ if (btrssi >= (rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT)) {
+ btrssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state switch to High\n");
+ } else if (btrssi < rssi_thresh) {
+ btrssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state switch to Low\n");
+ } else {
+ btrssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state stay at Medium\n");
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi pre state = HIGH\n");
+ if (btrssi < rssi_thresh1) {
+ btrssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state switch to Medium\n");
+ } else {
+ btrssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "BT Rssi state stay at High\n");
+ }
+ }
+ }
+
+ coex_sta->pre_bt_rssi_state = btrssi_state;
+
+ return btrssi_state;
+}
+
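+/* Same hysteresis scheme as above, applied to the Wi-Fi RSSI and tracked
+ * per index.
+ */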
+static u8 halbtc8192e2ant_wifirssi_state(struct btc_coexist *btcoexist,
+ u8 index, u8 level_num, u8 rssi_thresh,
+ u8 rssi_thresh1)
+{
+ int wifirssi = 0;
+ u8 wifirssi_state = coex_sta->pre_wifi_rssi_state[index];
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifirssi);
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifirssi >= (rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT)) {
+ wifirssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state switch to High\n");
+ } else {
+ wifirssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state stay at Low\n");
+ }
+ } else {
+ if (wifirssi < rssi_thresh) {
+ wifirssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state switch to Low\n");
+ } else {
+ wifirssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI thresh error!!\n");
+ return coex_sta->pre_wifi_rssi_state[index];
+ }
+
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifirssi >= (rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT)) {
+ wifirssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state switch to Medium\n");
+ } else {
+ wifirssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (wifirssi >= (rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT)) {
+ wifirssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state switch to High\n");
+ } else if (wifirssi < rssi_thresh) {
+ wifirssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state switch to Low\n");
+ } else {
+ wifirssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state stay at Medium\n");
+ }
+ } else {
+ if (wifirssi < rssi_thresh1) {
+ wifirssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state switch to Medium\n");
+ } else {
+ wifirssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "wifi RSSI state stay at High\n");
+ }
+ }
+ }
+
+ coex_sta->pre_wifi_rssi_state[index] = wifirssi_state;
+
+ return wifirssi_state;
+}
+
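+/* Infer whether BT is enabled from the high/low priority traffic
+ * counters: all-zero (or all-saturated 0xffff) counters on two
+ * consecutive polls are treated as "BT disabled".
+ */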
+static void btc8192e2ant_monitor_bt_enable_dis(struct btc_coexist *btcoexist)
+{
+ static bool pre_bt_disabled;
+ static u32 bt_disable_cnt;
+ bool bt_active = true, bt_disabled = false;
+
+ /* This function checks if BT is disabled */
+
+ if (coex_sta->high_priority_tx == 0 &&
+ coex_sta->high_priority_rx == 0 &&
+ coex_sta->low_priority_tx == 0 &&
+ coex_sta->low_priority_rx == 0)
+ bt_active = false;
+
+ if (coex_sta->high_priority_tx == 0xffff &&
+ coex_sta->high_priority_rx == 0xffff &&
+ coex_sta->low_priority_tx == 0xffff &&
+ coex_sta->low_priority_rx == 0xffff)
+ bt_active = false;
+
+ if (bt_active) {
+ bt_disable_cnt = 0;
+ bt_disabled = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_DISABLE,
+ &bt_disabled);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is enabled !!\n");
+ } else {
+ bt_disable_cnt++;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], bt all counters = 0, %d times!!\n",
+ bt_disable_cnt);
+ if (bt_disable_cnt >= 2) {
+ bt_disabled = true;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_DISABLE,
+ &bt_disabled);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is disabled !!\n");
+ }
+ }
+ if (pre_bt_disabled != bt_disabled) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is from %s to %s!!\n",
+ (pre_bt_disabled ? "disabled" : "enabled"),
+ (bt_disabled ? "disabled" : "enabled"));
+ pre_bt_disabled = bt_disabled;
+ }
+}
+
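+/* Build the mask of rates to disable: ra_masktype selects how many of
+ * the low CCK/OFDM/MCS rates to drop, and sstype decides whether the
+ * 2SS rates stay usable.
+ */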
+static u32 halbtc8192e2ant_decidera_mask(struct btc_coexist *btcoexist,
+ u8 sstype, u32 ra_masktype)
+{
+ u32 disra_mask = 0x0;
+
+ switch (ra_masktype) {
+ case 0: /* normal mode */
+ if (sstype == 2)
+ disra_mask = 0x0; /* enable 2ss */
+ else
+ disra_mask = 0xfff00000;/* disable 2ss */
+ break;
+ case 1: /* disable cck 1/2 */
+ if (sstype == 2)
+ disra_mask = 0x00000003;/* enable 2ss */
+ else
+ disra_mask = 0xfff00003;/* disable 2ss */
+ break;
+ case 2: /* disable cck 1/2/5.5, ofdm 6/9/12/18/24, mcs 0/1/2/3/4 */
+ if (sstype == 2)
+ disra_mask = 0x0001f1f7;/* enable 2ss */
+ else
+ disra_mask = 0xfff1f1f7;/* disable 2ss */
+ break;
+ default:
+ break;
+ }
+
+ return disra_mask;
+}
+
+static void halbtc8192e2ant_Updatera_mask(struct btc_coexist *btcoexist,
+ bool force_exec, u32 dis_ratemask)
+{
+ coex_dm->curra_mask = dis_ratemask;
+
+ if (force_exec || (coex_dm->prera_mask != coex_dm->curra_mask))
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_UPDATE_ra_mask,
+ &coex_dm->curra_mask);
+ coex_dm->prera_mask = coex_dm->curra_mask;
+}
+
+static void btc8192e2ant_autorate_fallback_retry(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ bool wifi_under_bmode = false;
+
+ coex_dm->cur_arfrtype = type;
+
+ if (force_exec || (coex_dm->pre_arfrtype != coex_dm->cur_arfrtype)) {
+ switch (coex_dm->cur_arfrtype) {
+ case 0: /* normal mode */
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ coex_dm->backup_arfr_cnt1);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ coex_dm->backup_arfr_cnt2);
+ break;
+ case 1:
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_UNDER_B_MODE,
+ &wifi_under_bmode);
+ if (wifi_under_bmode) {
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ 0x0);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ 0x01010101);
+ } else {
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ 0x0);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ 0x04030201);
+ }
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_arfrtype = coex_dm->cur_arfrtype;
+}
+
+static void halbtc8192e2ant_retrylimit(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ coex_dm->cur_retrylimit_type = type;
+
+ if (force_exec || (coex_dm->pre_retrylimit_type !=
+ coex_dm->cur_retrylimit_type)) {
+ switch (coex_dm->cur_retrylimit_type) {
+ case 0: /* normal mode */
+ btcoexist->btc_write_2byte(btcoexist, 0x42a,
+ coex_dm->backup_retrylimit);
+ break;
+ case 1: /* retry limit = 8 */
+ btcoexist->btc_write_2byte(btcoexist, 0x42a,
+ 0x0808);
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_retrylimit_type = coex_dm->cur_retrylimit_type;
+}
+
+static void halbtc8192e2ant_ampdu_maxtime(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ coex_dm->cur_ampdutime_type = type;
+
+ if (force_exec || (coex_dm->pre_ampdutime_type !=
+ coex_dm->cur_ampdutime_type)) {
+ switch (coex_dm->cur_ampdutime_type) {
+ case 0: /* normal mode */
+ btcoexist->btc_write_1byte(btcoexist, 0x456,
+ coex_dm->backup_ampdu_maxtime);
+ break;
+ case 1: /* AMPDU time = 0x38 * 32us */
+ btcoexist->btc_write_1byte(btcoexist, 0x456, 0x38);
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_ampdutime_type = coex_dm->cur_ampdutime_type;
+}
+
+static void halbtc8192e2ant_limited_tx(struct btc_coexist *btcoexist,
+ bool force_exec, u8 ra_masktype,
+ u8 arfr_type, u8 retrylimit_type,
+ u8 ampdutime_type)
+{
+ u32 disra_mask = 0x0;
+
+ coex_dm->curra_masktype = ra_masktype;
+ disra_mask = halbtc8192e2ant_decidera_mask(btcoexist,
+ coex_dm->cur_sstype,
+ ra_masktype);
+ halbtc8192e2ant_Updatera_mask(btcoexist, force_exec, disra_mask);
+ btc8192e2ant_autorate_fallback_retry(btcoexist, force_exec, arfr_type);
+ halbtc8192e2ant_retrylimit(btcoexist, force_exec, retrylimit_type);
+ halbtc8192e2ant_ampdu_maxtime(btcoexist, force_exec, ampdutime_type);
+}
+
+static void halbtc8192e2ant_limited_rx(struct btc_coexist *btcoexist,
+ bool force_exec, bool rej_ap_agg_pkt,
+ bool bt_ctrl_agg_buf_size,
+ u8 agg_buf_size)
+{
+ bool reject_rx_agg = rej_ap_agg_pkt;
+ bool bt_ctrl_rx_agg_size = bt_ctrl_agg_buf_size;
+ u8 rx_agg_size = agg_buf_size;
+
+ /*********************************************
+ * Rx Aggregation related setting
+ *********************************************/
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_TO_REJ_AP_AGG_PKT,
+ &reject_rx_agg);
+ /* decide BT control aggregation buf size or not */
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_CTRL_AGG_SIZE,
+ &bt_ctrl_rx_agg_size);
+ /* aggregation buf size, only work
+ * when BT control Rx aggregation size.
+ */
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_AGG_BUF_SIZE, &rx_agg_size);
+ /* real update aggregation setting */
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_AGGREGATE_CTRL, NULL);
+}
+
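+/* Sample the BT high/low priority TX/RX counters (regs 0x770/0x774)
+ * into coex_sta, then reset the hardware counters via 0x76e.
+ */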
+static void halbtc8192e2ant_monitor_bt_ctr(struct btc_coexist *btcoexist)
+{
+ u32 reg_hp_txrx, reg_lp_txrx, u32tmp;
+ u32 reg_hp_tx = 0, reg_hp_rx = 0, reg_lp_tx = 0, reg_lp_rx = 0;
+
+ reg_hp_txrx = 0x770;
+ reg_lp_txrx = 0x774;
+
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, reg_hp_txrx);
+ reg_hp_tx = u32tmp & MASKLWORD;
+ reg_hp_rx = (u32tmp & MASKHWORD)>>16;
+
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, reg_lp_txrx);
+ reg_lp_tx = u32tmp & MASKLWORD;
+ reg_lp_rx = (u32tmp & MASKHWORD)>>16;
+
+ coex_sta->high_priority_tx = reg_hp_tx;
+ coex_sta->high_priority_rx = reg_hp_rx;
+ coex_sta->low_priority_tx = reg_lp_tx;
+ coex_sta->low_priority_rx = reg_lp_rx;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex] High Priority Tx/Rx (reg 0x%x) = 0x%x(%d)/0x%x(%d)\n",
+ reg_hp_txrx, reg_hp_tx, reg_hp_tx, reg_hp_rx, reg_hp_rx);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex] Low Priority Tx/Rx (reg 0x%x) = 0x%x(%d)/0x%x(%d)\n",
+ reg_lp_txrx, reg_lp_tx, reg_lp_tx, reg_lp_rx, reg_lp_rx);
+
+ /* reset counter */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+}
+
+static void halbtc8192e2ant_querybt_info(struct btc_coexist *btcoexist)
+{
+ u8 h2c_parameter[1] = {0};
+
+ coex_sta->c2h_bt_info_req_sent = true;
+
+ h2c_parameter[0] |= BIT0; /* trigger */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Query Bt Info, FW write 0x61 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x61, 1, h2c_parameter);
+}
+
+static void halbtc8192e2ant_update_btlink_info(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool bt_hson = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hson);
+
+ bt_link_info->bt_link_exist = coex_sta->bt_link_exist;
+ bt_link_info->sco_exist = coex_sta->sco_exist;
+ bt_link_info->a2dp_exist = coex_sta->a2dp_exist;
+ bt_link_info->pan_exist = coex_sta->pan_exist;
+ bt_link_info->hid_exist = coex_sta->hid_exist;
+
+ /* workaround for HS mode */
+ if (bt_hson) {
+ bt_link_info->pan_exist = true;
+ bt_link_info->bt_link_exist = true;
+ }
+
+ /* check if Sco only */
+ if (bt_link_info->sco_exist &&
+ !bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist &&
+ !bt_link_info->hid_exist)
+ bt_link_info->sco_only = true;
+ else
+ bt_link_info->sco_only = false;
+
+ /* check if A2dp only */
+ if (!bt_link_info->sco_exist &&
+ bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist &&
+ !bt_link_info->hid_exist)
+ bt_link_info->a2dp_only = true;
+ else
+ bt_link_info->a2dp_only = false;
+
+ /* check if Pan only */
+ if (!bt_link_info->sco_exist &&
+ !bt_link_info->a2dp_exist &&
+ bt_link_info->pan_exist &&
+ !bt_link_info->hid_exist)
+ bt_link_info->pan_only = true;
+ else
+ bt_link_info->pan_only = false;
+
+ /* check if Hid only */
+ if (!bt_link_info->sco_exist &&
+ !bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist &&
+ bt_link_info->hid_exist)
+ bt_link_info->hid_only = true;
+ else
+ bt_link_info->hid_only = false;
+}
+
+static u8 halbtc8192e2ant_action_algorithm(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ bool bt_hson = false;
+ u8 algorithm = BT_8192E_2ANT_COEX_ALGO_UNDEFINED;
+ u8 numdiffprofile = 0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hson);
+
+ if (!bt_link_info->bt_link_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "No BT link exists!!!\n");
+ return algorithm;
+ }
+
+ if (bt_link_info->sco_exist)
+ numdiffprofile++;
+ if (bt_link_info->hid_exist)
+ numdiffprofile++;
+ if (bt_link_info->pan_exist)
+ numdiffprofile++;
+ if (bt_link_info->a2dp_exist)
+ numdiffprofile++;
+
+ if (numdiffprofile == 1) {
+ if (bt_link_info->sco_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO only\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_SCO;
+ } else {
+ if (bt_link_info->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID only\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "A2DP only\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_A2DP;
+ } else if (bt_link_info->pan_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "PAN(HS) only\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "PAN(EDR) only\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_PANEDR;
+ }
+ }
+ }
+ } else if (numdiffprofile == 2) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + HID\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_SCO;
+ } else if (bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + A2DP ==> SCO\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_PANEDR_HID;
+ } else if (bt_link_info->pan_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + PAN(HS)\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_SCO_PAN;
+ }
+ }
+ } else {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) {
+ if (stack_info->num_of_hid >= 2) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID*2 + A2DP\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_HID_A2DP_PANEDR;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID + A2DP\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_HID_A2DP;
+ }
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID + PAN(HS)\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_HID;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID + PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "A2DP + PAN(HS)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_A2DP_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "A2DP + PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_PANEDR_A2DP;
+ }
+ }
+ }
+ } else if (numdiffprofile == 3) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + HID + A2DP ==> HID\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_PANEDR_HID;
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + HID + PAN(HS)\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + HID + PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_SCO_PAN;
+ }
+ } else if (bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + A2DP + PAN(HS)\n");
+ algorithm = BT_8192E_2ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO + A2DP + PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID + A2DP + PAN(HS)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "HID + A2DP + PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_HID_A2DP_PANEDR;
+ }
+ }
+ }
+ } else if (numdiffprofile >= 3) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hson) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "ErrorSCO+HID+A2DP+PAN(HS)\n");
+
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "SCO+HID+A2DP+PAN(EDR)\n");
+ algorithm =
+ BT_8192E_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ }
+ }
+
+ return algorithm;
+}
+
+static void halbtc8192e2ant_setfw_dac_swinglevel(struct btc_coexist *btcoexist,
+ u8 dac_swinglvl)
+{
+ u8 h2c_parameter[1] = {0};
+
+ /* There are several types of DAC swing:
+ * 0x18/ 0x10/ 0xc/ 0x8/ 0x4/ 0x6
+ */
+ h2c_parameter[0] = dac_swinglvl;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Set Dac Swing Level = 0x%x\n", dac_swinglvl);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x64 = 0x%x\n", h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x64, 1, h2c_parameter);
+}
+
+static void halbtc8192e2ant_set_fwdec_btpwr(struct btc_coexist *btcoexist,
+ u8 dec_btpwr_lvl)
+{
+ u8 h2c_parameter[1] = {0};
+
+ h2c_parameter[0] = dec_btpwr_lvl;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex] decrease Bt Power level = %d, FW write 0x62 = 0x%x\n",
+ dec_btpwr_lvl, h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x62, 1, h2c_parameter);
+}
+
+static void halbtc8192e2ant_dec_btpwr(struct btc_coexist *btcoexist,
+ bool force_exec, u8 dec_btpwr_lvl)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s Dec BT power level = %d\n",
+ (force_exec ? "force to" : ""), dec_btpwr_lvl);
+ coex_dm->cur_dec_bt_pwr = dec_btpwr_lvl;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], preBtDecPwrLvl=%d, curBtDecPwrLvl=%d\n",
+ coex_dm->pre_dec_bt_pwr, coex_dm->cur_dec_bt_pwr);
+ }
+ halbtc8192e2ant_set_fwdec_btpwr(btcoexist, coex_dm->cur_dec_bt_pwr);
+
+ coex_dm->pre_dec_bt_pwr = coex_dm->cur_dec_bt_pwr;
+}
+
+static void halbtc8192e2ant_set_bt_autoreport(struct btc_coexist *btcoexist,
+ bool enable_autoreport)
+{
+ u8 h2c_parameter[1] = {0};
+
+ h2c_parameter[0] = 0;
+
+ if (enable_autoreport)
+ h2c_parameter[0] |= BIT0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], BT FW auto report : %s, FW write 0x68 = 0x%x\n",
+ (enable_autoreport ? "Enabled!!" : "Disabled!!"),
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x68, 1, h2c_parameter);
+}
+
+static void halbtc8192e2ant_bt_autoreport(struct btc_coexist *btcoexist,
+ bool force_exec,
+ bool enable_autoreport)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s BT Auto report = %s\n",
+ (force_exec ? "force to" : ""),
+ ((enable_autoreport) ? "Enabled" : "Disabled"));
+ coex_dm->cur_bt_auto_report = enable_autoreport;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex] bPreBtAutoReport=%d, bCurBtAutoReport=%d\n",
+ coex_dm->pre_bt_auto_report,
+ coex_dm->cur_bt_auto_report);
+
+ if (coex_dm->pre_bt_auto_report == coex_dm->cur_bt_auto_report)
+ return;
+ }
+ halbtc8192e2ant_set_bt_autoreport(btcoexist,
+ coex_dm->cur_bt_auto_report);
+
+ coex_dm->pre_bt_auto_report = coex_dm->cur_bt_auto_report;
+}
+
+static void halbtc8192e2ant_fw_dac_swinglvl(struct btc_coexist *btcoexist,
+ bool force_exec, u8 fw_dac_swinglvl)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s set FW Dac Swing level = %d\n",
+ (force_exec ? "force to" : ""), fw_dac_swinglvl);
+ coex_dm->cur_fw_dac_swing_lvl = fw_dac_swinglvl;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex] preFwDacSwingLvl=%d, curFwDacSwingLvl=%d\n",
+ coex_dm->pre_fw_dac_swing_lvl,
+ coex_dm->cur_fw_dac_swing_lvl);
+
+ if (coex_dm->pre_fw_dac_swing_lvl ==
+ coex_dm->cur_fw_dac_swing_lvl)
+ return;
+ }
+
+ halbtc8192e2ant_setfw_dac_swinglevel(btcoexist,
+ coex_dm->cur_fw_dac_swing_lvl);
+
+ coex_dm->pre_fw_dac_swing_lvl = coex_dm->cur_fw_dac_swing_lvl;
+}
+
+static void btc8192e2ant_set_sw_rf_rx_lpf_corner(struct btc_coexist *btcoexist,
+ bool rx_rf_shrink_on)
+{
+ if (rx_rf_shrink_on) {
+ /* Shrink RF Rx LPF corner */
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], Shrink RF Rx LPF corner!!\n");
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1e,
+ 0xfffff, 0xffffc);
+ } else {
+ /* Resume RF Rx LPF corner
+ * After initialized, we can use coex_dm->btRf0x1eBackup
+ */
+ if (btcoexist->initilized) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], Resume RF Rx LPF corner!!\n");
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1e,
+ 0xfffff,
+ coex_dm->bt_rf0x1e_backup);
+ }
+ }
+}
+
+static void halbtc8192e2ant_rf_shrink(struct btc_coexist *btcoexist,
+ bool force_exec, bool rx_rf_shrink_on)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s turn Rx RF Shrink = %s\n",
+ (force_exec ? "force to" : ""),
+ ((rx_rf_shrink_on) ? "ON" : "OFF"));
+ coex_dm->cur_rf_rx_lpf_shrink = rx_rf_shrink_on;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex]bPreRfRxLpfShrink=%d,bCurRfRxLpfShrink=%d\n",
+ coex_dm->pre_rf_rx_lpf_shrink,
+ coex_dm->cur_rf_rx_lpf_shrink);
+
+ if (coex_dm->pre_rf_rx_lpf_shrink ==
+ coex_dm->cur_rf_rx_lpf_shrink)
+ return;
+ }
+ btc8192e2ant_set_sw_rf_rx_lpf_corner(btcoexist,
+ coex_dm->cur_rf_rx_lpf_shrink);
+
+ coex_dm->pre_rf_rx_lpf_shrink = coex_dm->cur_rf_rx_lpf_shrink;
+}
+
+static void halbtc8192e2ant_set_dac_swingreg(struct btc_coexist *btcoexist,
+ u32 level)
+{
+ u8 val = (u8)level;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], Write SwDacSwing = 0x%x\n", level);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x883, 0x3e, val);
+}
+
+static void btc8192e2ant_setsw_full_swing(struct btc_coexist *btcoexist,
+ bool sw_dac_swingon,
+ u32 sw_dac_swinglvl)
+{
+ if (sw_dac_swingon)
+ halbtc8192e2ant_set_dac_swingreg(btcoexist, sw_dac_swinglvl);
+ else
+ halbtc8192e2ant_set_dac_swingreg(btcoexist, 0x18);
+}
+
+static void halbtc8192e2ant_DacSwing(struct btc_coexist *btcoexist,
+ bool force_exec, bool dac_swingon,
+ u32 dac_swinglvl)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s turn DacSwing=%s, dac_swinglvl = 0x%x\n",
+ (force_exec ? "force to" : ""),
+ ((dac_swingon) ? "ON" : "OFF"), dac_swinglvl);
+ coex_dm->cur_dac_swing_on = dac_swingon;
+ coex_dm->cur_dac_swing_lvl = dac_swinglvl;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], bPreDacSwingOn=%d, preDacSwingLvl = 0x%x, ",
+ coex_dm->pre_dac_swing_on,
+ coex_dm->pre_dac_swing_lvl);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "bCurDacSwingOn=%d, curDacSwingLvl = 0x%x\n",
+ coex_dm->cur_dac_swing_on,
+ coex_dm->cur_dac_swing_lvl);
+
+ if ((coex_dm->pre_dac_swing_on == coex_dm->cur_dac_swing_on) &&
+ (coex_dm->pre_dac_swing_lvl == coex_dm->cur_dac_swing_lvl))
+ return;
+ }
+ mdelay(30);
+ btc8192e2ant_setsw_full_swing(btcoexist, dac_swingon, dac_swinglvl);
+
+ coex_dm->pre_dac_swing_on = coex_dm->cur_dac_swing_on;
+ coex_dm->pre_dac_swing_lvl = coex_dm->cur_dac_swing_lvl;
+}
+
+static void halbtc8192e2ant_set_agc_table(struct btc_coexist *btcoexist,
+ bool agc_table_en)
+{
+ /* BB AGC Gain Table */
+ if (agc_table_en) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], BB Agc Table On!\n");
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0x0a1A0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0x091B0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0x081C0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0x071D0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0x061E0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0x051F0001);
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], BB Agc Table Off!\n");
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xaa1A0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xa91B0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xa81C0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xa71D0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xa61E0001);
+ btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xa51F0001);
+ }
+}
+
+static void halbtc8192e2ant_AgcTable(struct btc_coexist *btcoexist,
+ bool force_exec, bool agc_table_en)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s %s Agc Table\n",
+ (force_exec ? "force to" : ""),
+ ((agc_table_en) ? "Enable" : "Disable"));
+ coex_dm->cur_agc_table_en = agc_table_en;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], bPreAgcTableEn=%d, bCurAgcTableEn=%d\n",
+ coex_dm->pre_agc_table_en, coex_dm->cur_agc_table_en);
+
+ if (coex_dm->pre_agc_table_en == coex_dm->cur_agc_table_en)
+ return;
+ }
+ halbtc8192e2ant_set_agc_table(btcoexist, agc_table_en);
+
+ coex_dm->pre_agc_table_en = coex_dm->cur_agc_table_en;
+}
+
+static void halbtc8192e2ant_set_coex_table(struct btc_coexist *btcoexist,
+ u32 val0x6c0, u32 val0x6c4,
+ u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c0 = 0x%x\n", val0x6c0);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c0, val0x6c0);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c4 = 0x%x\n", val0x6c4);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c4, val0x6c4);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c8 = 0x%x\n", val0x6c8);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c8, val0x6c8);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6cc = 0x%x\n", val0x6cc);
+ btcoexist->btc_write_1byte(btcoexist, 0x6cc, val0x6cc);
+}
+
+static void halbtc8192e2ant_coex_table(struct btc_coexist *btcoexist,
+ bool force_exec,
+ u32 val0x6c0, u32 val0x6c4,
+ u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s write Coex Table 0x6c0 = 0x%x, ",
+ (force_exec ? "force to" : ""), val0x6c0);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "0x6c4 = 0x%x, 0x6c8 = 0x%x, 0x6cc = 0x%x\n",
+ val0x6c4, val0x6c8, val0x6cc);
+ coex_dm->cur_val0x6c0 = val0x6c0;
+ coex_dm->cur_val0x6c4 = val0x6c4;
+ coex_dm->cur_val0x6c8 = val0x6c8;
+ coex_dm->cur_val0x6cc = val0x6cc;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], preVal0x6c0 = 0x%x, preVal0x6c4 = 0x%x, ",
+ coex_dm->pre_val0x6c0, coex_dm->pre_val0x6c4);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "preVal0x6c8 = 0x%x, preVal0x6cc = 0x%x !!\n",
+ coex_dm->pre_val0x6c8, coex_dm->pre_val0x6cc);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], curVal0x6c0 = 0x%x, curVal0x6c4 = 0x%x,\n",
+ coex_dm->cur_val0x6c0, coex_dm->cur_val0x6c4);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "curVal0x6c8 = 0x%x, curVal0x6cc = 0x%x !!\n",
+ coex_dm->cur_val0x6c8, coex_dm->cur_val0x6cc);
+
+ if ((coex_dm->pre_val0x6c0 == coex_dm->cur_val0x6c0) &&
+ (coex_dm->pre_val0x6c4 == coex_dm->cur_val0x6c4) &&
+ (coex_dm->pre_val0x6c8 == coex_dm->cur_val0x6c8) &&
+ (coex_dm->pre_val0x6cc == coex_dm->cur_val0x6cc))
+ return;
+ }
+ halbtc8192e2ant_set_coex_table(btcoexist, val0x6c0, val0x6c4,
+ val0x6c8, val0x6cc);
+
+ coex_dm->pre_val0x6c0 = coex_dm->cur_val0x6c0;
+ coex_dm->pre_val0x6c4 = coex_dm->cur_val0x6c4;
+ coex_dm->pre_val0x6c8 = coex_dm->cur_val0x6c8;
+ coex_dm->pre_val0x6cc = coex_dm->cur_val0x6cc;
+}
+
+static void btc8192e2ant_coex_tbl_w_type(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ switch (type) {
+ case 0:
+ halbtc8192e2ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0x5a5a5a5a, 0xffffff, 0x3);
+ break;
+ case 1:
+ halbtc8192e2ant_coex_table(btcoexist, force_exec, 0x5a5a5a5a,
+ 0x5a5a5a5a, 0xffffff, 0x3);
+ break;
+ case 2:
+ halbtc8192e2ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0x5ffb5ffb, 0xffffff, 0x3);
+ break;
+ case 3:
+ halbtc8192e2ant_coex_table(btcoexist, force_exec, 0xdfffdfff,
+ 0x5fdb5fdb, 0xffffff, 0x3);
+ break;
+ case 4:
+ halbtc8192e2ant_coex_table(btcoexist, force_exec, 0xdfffdfff,
+ 0x5ffb5ffb, 0xffffff, 0x3);
+ break;
+ default:
+ break;
+ }
+}
+
+static void halbtc8192e2ant_set_fw_ignore_wlanact(struct btc_coexist *btcoexist,
+ bool enable)
+{
+ u8 h2c_parameter[1] = {0};
+
+ if (enable)
+ h2c_parameter[0] |= BIT0; /* function enable */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex]set FW for BT Ignore Wlan_Act, FW write 0x63 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x63, 1, h2c_parameter);
+}
+
+static void halbtc8192e2ant_IgnoreWlanAct(struct btc_coexist *btcoexist,
+ bool force_exec, bool enable)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s turn Ignore WlanAct %s\n",
+ (force_exec ? "force to" : ""), (enable ? "ON" : "OFF"));
+ coex_dm->cur_ignore_wlan_act = enable;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], bPreIgnoreWlanAct = %d ",
+ coex_dm->pre_ignore_wlan_act);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "bCurIgnoreWlanAct = %d!!\n",
+ coex_dm->cur_ignore_wlan_act);
+
+ if (coex_dm->pre_ignore_wlan_act ==
+ coex_dm->cur_ignore_wlan_act)
+ return;
+ }
+ halbtc8192e2ant_set_fw_ignore_wlanact(btcoexist, enable);
+
+ coex_dm->pre_ignore_wlan_act = coex_dm->cur_ignore_wlan_act;
+}
+
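+/* Push a 5-byte PS-TDMA parameter set to the firmware through H2C
+ * command 0x60, caching the bytes in coex_dm->ps_tdma_para.
+ */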
+static void halbtc8192e2ant_SetFwPstdma(struct btc_coexist *btcoexist, u8 byte1,
+ u8 byte2, u8 byte3, u8 byte4, u8 byte5)
+{
+ u8 h2c_parameter[5] = {0};
+
+ h2c_parameter[0] = byte1;
+ h2c_parameter[1] = byte2;
+ h2c_parameter[2] = byte3;
+ h2c_parameter[3] = byte4;
+ h2c_parameter[4] = byte5;
+
+ coex_dm->ps_tdma_para[0] = byte1;
+ coex_dm->ps_tdma_para[1] = byte2;
+ coex_dm->ps_tdma_para[2] = byte3;
+ coex_dm->ps_tdma_para[3] = byte4;
+ coex_dm->ps_tdma_para[4] = byte5;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x60(5bytes) = 0x%x%08x\n",
+ h2c_parameter[0],
+ h2c_parameter[1] << 24 | h2c_parameter[2] << 16 |
+ h2c_parameter[3] << 8 | h2c_parameter[4]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x60, 5, h2c_parameter);
+}
+
+static void btc8192e2ant_sw_mec1(struct btc_coexist *btcoexist,
+ bool shrink_rx_lpf, bool low_penalty_ra,
+ bool limited_dig, bool btlan_constrain)
+{
+ halbtc8192e2ant_rf_shrink(btcoexist, NORMAL_EXEC, shrink_rx_lpf);
+}
+
+static void btc8192e2ant_sw_mec2(struct btc_coexist *btcoexist,
+ bool agc_table_shift, bool adc_backoff,
+ bool sw_dac_swing, u32 dac_swinglvl)
+{
+ halbtc8192e2ant_AgcTable(btcoexist, NORMAL_EXEC, agc_table_shift);
+ halbtc8192e2ant_DacSwing(btcoexist, NORMAL_EXEC, sw_dac_swing,
+ dac_swinglvl);
+}
+
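+/* Select one of the predefined PS-TDMA patterns (or turn TDMA off).
+ * With NORMAL_EXEC the firmware write is skipped when neither the
+ * on/off state nor the pattern type has changed.
+ */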
+static void halbtc8192e2ant_ps_tdma(struct btc_coexist *btcoexist,
+ bool force_exec, bool turn_on, u8 type)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s turn %s PS TDMA, type=%d\n",
+ (force_exec ? "force to" : ""),
+ (turn_on ? "ON" : "OFF"), type);
+ coex_dm->cur_ps_tdma_on = turn_on;
+ coex_dm->cur_ps_tdma = type;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], bPrePsTdmaOn = %d, bCurPsTdmaOn = %d!!\n",
+ coex_dm->pre_ps_tdma_on, coex_dm->cur_ps_tdma_on);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], prePsTdma = %d, curPsTdma = %d!!\n",
+ coex_dm->pre_ps_tdma, coex_dm->cur_ps_tdma);
+
+ if ((coex_dm->pre_ps_tdma_on == coex_dm->cur_ps_tdma_on) &&
+ (coex_dm->pre_ps_tdma == coex_dm->cur_ps_tdma))
+ return;
+ }
+ if (turn_on) {
+ switch (type) {
+ case 1:
+ default:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe1, 0x90);
+ break;
+ case 2:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0xe1, 0x90);
+ break;
+ case 3:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1c,
+ 0x3, 0xf1, 0x90);
+ break;
+ case 4:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x10,
+ 0x3, 0xf1, 0x90);
+ break;
+ case 5:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0x60, 0x90);
+ break;
+ case 6:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0x60, 0x90);
+ break;
+ case 7:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1c,
+ 0x3, 0x70, 0x90);
+ break;
+ case 8:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xa3, 0x10,
+ 0x3, 0x70, 0x90);
+ break;
+ case 9:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe1, 0x10);
+ break;
+ case 10:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0xe1, 0x10);
+ break;
+ case 11:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1c,
+ 0x3, 0xf1, 0x10);
+ break;
+ case 12:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x10,
+ 0x3, 0xf1, 0x10);
+ break;
+ case 13:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe0, 0x10);
+ break;
+ case 14:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0xe0, 0x10);
+ break;
+ case 15:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1c,
+ 0x3, 0xf0, 0x10);
+ break;
+ case 16:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x12,
+ 0x3, 0xf0, 0x10);
+ break;
+ case 17:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0x61, 0x20,
+ 0x03, 0x10, 0x10);
+ break;
+ case 18:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x5,
+ 0x5, 0xe1, 0x90);
+ break;
+ case 19:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x25,
+ 0x25, 0xe1, 0x90);
+ break;
+ case 20:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x25,
+ 0x25, 0x60, 0x90);
+ break;
+ case 21:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x15,
+ 0x03, 0x70, 0x90);
+ break;
+ case 71:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe1, 0x90);
+ break;
+ }
+ } else {
+ /* disable PS tdma */
+ switch (type) {
+ default:
+ case 0:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0x8, 0x0, 0x0,
+ 0x0, 0x0);
+ btcoexist->btc_write_1byte(btcoexist, 0x92c, 0x4);
+ break;
+ case 1:
+ halbtc8192e2ant_SetFwPstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x8, 0x0);
+ mdelay(5);
+ btcoexist->btc_write_1byte(btcoexist, 0x92c, 0x20);
+ break;
+ }
+ }
+
+ /* update pre state */
+ coex_dm->pre_ps_tdma_on = coex_dm->cur_ps_tdma_on;
+ coex_dm->pre_ps_tdma = coex_dm->cur_ps_tdma;
+}
+
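+/* Switch between 1SS and 2SS operation: refresh the rate mask, adjust
+ * PS-TDMA, reconfigure the OFDM/CCK RX paths, and report the matching
+ * MIMO power-save mode to the stack.
+ */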
+static void halbtc8192e2ant_set_switch_sstype(struct btc_coexist *btcoexist,
+ u8 sstype)
+{
+ u8 mimops = BTC_MIMO_PS_DYNAMIC;
+ u32 disra_mask = 0x0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], REAL set SS Type = %d\n", sstype);
+
+ disra_mask = halbtc8192e2ant_decidera_mask(btcoexist, sstype,
+ coex_dm->curra_masktype);
+ halbtc8192e2ant_Updatera_mask(btcoexist, FORCE_EXEC, disra_mask);
+
+ if (sstype == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, FORCE_EXEC, false, 1);
+ /* switch ofdm path */
+ btcoexist->btc_write_1byte(btcoexist, 0xc04, 0x11);
+ btcoexist->btc_write_1byte(btcoexist, 0xd04, 0x1);
+ btcoexist->btc_write_4byte(btcoexist, 0x90c, 0x81111111);
+ /* switch cck path */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xe77, 0x4, 0x1);
+ btcoexist->btc_write_1byte(btcoexist, 0xa07, 0x81);
+ mimops = BTC_MIMO_PS_STATIC;
+ } else if (sstype == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, FORCE_EXEC, false, 0);
+ btcoexist->btc_write_1byte(btcoexist, 0xc04, 0x33);
+ btcoexist->btc_write_1byte(btcoexist, 0xd04, 0x3);
+ btcoexist->btc_write_4byte(btcoexist, 0x90c, 0x81121313);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xe77, 0x4, 0x0);
+ btcoexist->btc_write_1byte(btcoexist, 0xa07, 0x41);
+ mimops = BTC_MIMO_PS_DYNAMIC;
+ }
+ /* set rx 1ss or 2ss */
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_SEND_MIMO_PS, &mimops);
+}
+
+static void halbtc8192e2ant_switch_sstype(struct btc_coexist *btcoexist,
+ bool force_exec, u8 new_sstype)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], %s Switch SS Type = %d\n",
+ (force_exec ? "force to" : ""), new_sstype);
+ coex_dm->cur_sstype = new_sstype;
+
+ if (!force_exec) {
+ if (coex_dm->pre_sstype == coex_dm->cur_sstype)
+ return;
+ }
+ halbtc8192e2ant_set_switch_sstype(btcoexist, coex_dm->cur_sstype);
+
+ coex_dm->pre_sstype = coex_dm->cur_sstype;
+}
+
+static void halbtc8192e2ant_coex_alloff(struct btc_coexist *btcoexist)
+{
+ /* fw all off */
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+
+ /* sw all off */
+ btc8192e2ant_sw_mec1(btcoexist, false, false, false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false, false, 0x18);
+
+ /* hw all off */
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 0);
+}
+
+static void halbtc8192e2ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ /* force to reset coex mechanism */
+
+ halbtc8192e2ant_ps_tdma(btcoexist, FORCE_EXEC, false, 1);
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, FORCE_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist, FORCE_EXEC, 0);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, FORCE_EXEC, 0);
+ halbtc8192e2ant_switch_sstype(btcoexist, FORCE_EXEC, 2);
+
+ btc8192e2ant_sw_mec1(btcoexist, false, false, false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false, false, 0x18);
+}
+
+static void halbtc8192e2ant_action_bt_inquiry(struct btc_coexist *btcoexist)
+{
+ bool low_pwr_disable = true;
+
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+
+ btc8192e2ant_sw_mec1(btcoexist, false, false, false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false, false, 0x18);
+}
+
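+/* Handle the "common" coexistence cases (Wi-Fi not connected, or one
+ * side idle) with fixed settings. Returns true when the case was
+ * handled here, so the per-profile algorithms can be skipped.
+ */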
+static bool halbtc8192e2ant_is_common_action(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool common = false, wifi_connected = false, wifi_busy = false;
+ bool bt_hson = false, low_pwr_disable = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hson);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+
+ if (bt_link_info->sco_exist || bt_link_info->hid_exist)
+ halbtc8192e2ant_limited_tx(btcoexist, NORMAL_EXEC, 1, 0, 0, 0);
+ else
+ halbtc8192e2ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+
+ if (!wifi_connected) {
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi non-connected idle!!\n");
+
+ if ((BT_8192E_2ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status) ||
+ (BT_8192E_2ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ halbtc8192e2ant_switch_sstype(btcoexist,
+ NORMAL_EXEC, 2);
+ btc8192e2ant_coex_tbl_w_type(btcoexist,
+ NORMAL_EXEC, 1);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 0);
+ } else {
+ halbtc8192e2ant_switch_sstype(btcoexist,
+ NORMAL_EXEC, 1);
+ btc8192e2ant_coex_tbl_w_type(btcoexist,
+ NORMAL_EXEC, 0);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ }
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+
+ btc8192e2ant_sw_mec1(btcoexist, false, false, false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false, false, 0x18);
+
+ common = true;
+ } else {
+ if (BT_8192E_2ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status) {
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Wifi connected + BT non connected-idle!!\n");
+
+ halbtc8192e2ant_switch_sstype(btcoexist,
+ NORMAL_EXEC, 2);
+ btc8192e2ant_coex_tbl_w_type(btcoexist,
+ NORMAL_EXEC, 1);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 0);
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist,
+ NORMAL_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+
+ common = true;
+ } else if (BT_8192E_2ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status) {
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ if (bt_hson)
+ return false;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Wifi connected + BT connected-idle!!\n");
+
+ halbtc8192e2ant_switch_sstype(btcoexist,
+ NORMAL_EXEC, 2);
+ btc8192e2ant_coex_tbl_w_type(btcoexist,
+ NORMAL_EXEC, 1);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 0);
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist,
+ NORMAL_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+
+ common = true;
+ } else {
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ if (wifi_busy) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Wifi Connected-Busy + BT Busy!!\n");
+ common = false;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Wifi Connected-Idle + BT Busy!!\n");
+
+ halbtc8192e2ant_switch_sstype(btcoexist,
+ NORMAL_EXEC, 1);
+ btc8192e2ant_coex_tbl_w_type(btcoexist,
+ NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 21);
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist,
+ NORMAL_EXEC, 6);
+ halbtc8192e2ant_dec_btpwr(btcoexist,
+ NORMAL_EXEC, 0);
+ btc8192e2ant_sw_mec1(btcoexist, false,
+ false, false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false,
+ false, false, 0x18);
+ common = true;
+ }
+ }
+ }
+ return common;
+}
+
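+/* TDMA step ladder for max_interval == 1. Cases 1-4 (and 9-12 for the
+ * sco_hid set) run with TxPause = 0, cases 5-8 (and 13-16) are their
+ * TxPause = 1 counterparts; within a row a higher case number means a
+ * shorter WiFi slot. result == 1 steps toward a longer WiFi duration,
+ * result == -1 toward a shorter one; case 71 is the long endpoint
+ * used only when TxPause = 0.
+ */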
+static void btc8192e_int1(struct btc_coexist *btcoexist, bool tx_pause,
+ int result)
+{
+ if (tx_pause) {
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 1\n");
+
+ if (coex_dm->cur_ps_tdma == 71) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 13);
+ coex_dm->tdma_adj_type = 13;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ } else if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 13);
+ coex_dm->tdma_adj_type = 13;
+ }
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 0\n");
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 71);
+ coex_dm->tdma_adj_type = 71;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 71) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 71);
+ coex_dm->tdma_adj_type = 71;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ }
+ }
+ }
+}
+
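+/* Same ladder for max_interval == 2: the walk toward a longer WiFi
+ * duration is clamped at cases 6/14 (TxPause = 1) and 2/10
+ * (TxPause = 0), one step short of the interval-1 endpoints.
+ */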
+static void btc8192e_int2(struct btc_coexist *btcoexist, bool tx_pause,
+ int result)
+{
+ if (tx_pause) {
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 1\n");
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ } else if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ }
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 0\n");
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ }
+ }
+ }
+}
+
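+/* Same ladder for max_interval == 3, clamped one step earlier still:
+ * cases 7/15 (TxPause = 1) and 3/11 (TxPause = 0).
+ */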
+static void btc8192e_int3(struct btc_coexist *btcoexist, bool tx_pause,
+ int result)
+{
+ if (tx_pause) {
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 1\n");
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ } else if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ }
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 0\n");
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ }
+ }
+ }
+}
+
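+/* Retune the PS-TDMA duty cycle from the BT retry count reported in
+ * BT_Info. up/dn/m/n/wait_cnt implement a simple hysteresis: n
+ * consecutive retry-free polls widen the WiFi slot (result = 1),
+ * while retries narrow it (result = -1) and raise the threshold via
+ * n = 3 * m, making it progressively harder to widen again.
+ */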
+static void halbtc8192e2ant_tdma_duration_adjust(struct btc_coexist *btcoexist,
+ bool sco_hid, bool tx_pause,
+ u8 max_interval)
+{
+ static int up, dn, m, n, wait_cnt;
+ /* 0: no change, +1: increase WiFi duration,
+ * -1: decrease WiFi duration
+ */
+ int result;
+ u8 retry_cnt = 0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], TdmaDurationAdjust()\n");
+
+ if (!coex_dm->auto_tdma_adjust) {
+ coex_dm->auto_tdma_adjust = true;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], first run TdmaDurationAdjust()!!\n");
+ if (sco_hid) {
+ if (tx_pause) {
+ if (max_interval == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 13);
+ coex_dm->tdma_adj_type = 13;
+ } else if (max_interval == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (max_interval == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ }
+ } else {
+ if (max_interval == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (max_interval == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (max_interval == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ }
+ }
+ } else {
+ if (tx_pause) {
+ if (max_interval == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (max_interval == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (max_interval == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ }
+ } else {
+ if (max_interval == 1) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (max_interval == 2) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (max_interval == 3) {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else {
+ halbtc8192e2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ }
+ }
+ }
+
+ up = 0;
+ dn = 0;
+ m = 1;
+ n = 3;
+ result = 0;
+ wait_cnt = 0;
+ } else {
+		/* acquire the BT TRx retry count from BT_Info byte2 */
+ retry_cnt = coex_sta->bt_retry_cnt;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], retry_cnt = %d\n", retry_cnt);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], up=%d, dn=%d, m=%d, n=%d, wait_cnt=%d\n",
+ up, dn, m, n, wait_cnt);
+ result = 0;
+ wait_cnt++;
+ /* no retry in the last 2-second duration */
+ if (retry_cnt == 0) {
+ up++;
+ dn--;
+
+ if (dn <= 0)
+ dn = 0;
+
+ if (up >= n) {
+ wait_cnt = 0;
+ n = 3;
+ up = 0;
+ dn = 0;
+ result = 1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex]Increase wifi duration!!\n");
+ }
+ } else if (retry_cnt <= 3) {
+ up--;
+ dn++;
+
+ if (up <= 0)
+ up = 0;
+
+ if (dn == 2) {
+ if (wait_cnt <= 2)
+ m++;
+ else
+ m = 1;
+
+ if (m >= 20)
+ m = 20;
+
+ n = 3 * m;
+ up = 0;
+ dn = 0;
+ wait_cnt = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "Reduce wifi duration for retry<3\n");
+ }
+ } else {
+ if (wait_cnt == 1)
+ m++;
+ else
+ m = 1;
+
+ if (m >= 20)
+ m = 20;
+
+			n = 3 * m;
+ up = 0;
+ dn = 0;
+ wait_cnt = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "Decrease wifi duration for retryCounter>3!!\n");
+ }
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], max Interval = %d\n", max_interval);
+ if (max_interval == 1)
+ btc8192e_int1(btcoexist, tx_pause, result);
+ else if (max_interval == 2)
+ btc8192e_int2(btcoexist, tx_pause, result);
+ else if (max_interval == 3)
+ btc8192e_int3(btcoexist, tx_pause, result);
+ }
+
+	/* if the current PsTdma does not match the recorded one
+	 * (when scanning, dhcp, ...), adjust it back to the
+	 * recorded value.
+	 */
+ if (coex_dm->cur_ps_tdma != coex_dm->tdma_adj_type) {
+ bool scan = false, link = false, roam = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], PsTdma type dismatch!!!, ");
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "curPsTdma=%d, recordPsTdma=%d\n",
+ coex_dm->cur_ps_tdma, coex_dm->tdma_adj_type);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+
+ if (!scan && !link && !roam)
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true,
+ coex_dm->tdma_adj_type);
+ else
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], roaming/link/scan is under progress, will adjust next time!!!\n");
+ }
+}
+
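+/* The per-profile handlers below share one shape: choose SS type and
+ * rx aggregation limits, pick a coex table, scale the BT power
+ * decrease (0/2/4) and the PS-TDMA case with the BT RSSI tier, then
+ * set the software mechanisms from WiFi bandwidth and WiFi RSSI.
+ */
+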
+/* SCO only or SCO+PAN(HS) */
+static void halbtc8192e2ant_action_sco(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_STAY_LOW;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 4);
+
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 13);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 9);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 9);
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x6);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x6);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x6);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x6);
+ }
+ }
+}
+
+static void halbtc8192e2ant_action_sco_pan(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_STAY_LOW;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 4);
+
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 14);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 10);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 10);
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x6);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x6);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x6);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x6);
+ }
+ }
+}
+
+static void halbtc8192e2ant_action_hid(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 3);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 13);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 9);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 9);
+ }
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
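+/* A2DP adds a long-distance mode: when both WiFi and BT RSSI are in
+ * the LOW tier, switch to 2SS with tighter rx aggregation, coex
+ * table 0 and a fixed PS-TDMA case 17 instead of the adaptive
+ * duration adjustment.
+ */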
+/* A2DP only / PAN(EDR) only / A2DP+PAN(HS) */
+static void halbtc8192e2ant_action_a2dp(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+ bool long_dist = false;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW ||
+ btrssi_state == BTC_RSSI_STATE_STAY_LOW) &&
+ (wifirssi_state == BTC_RSSI_STATE_LOW ||
+ wifirssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], A2dp, wifi/bt rssi both LOW!!\n");
+ long_dist = true;
+ }
+ if (long_dist) {
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, true,
+ 0x4);
+ } else {
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false,
+ 0x8);
+ }
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ if (long_dist)
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 0);
+ else
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 2);
+
+ if (long_dist) {
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 17);
+ coex_dm->auto_tdma_adjust = false;
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ } else {
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ true, 1);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ false, 1);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ false, 1);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ }
+ }
+
+ /* sw mechanism */
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8192e2ant_action_a2dp_pan_hs(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 2);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false, true, 2);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ false, 2);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ false, 2);
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ }
+
+ /* sw mechanism */
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ true, 0x6);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ true, 0x6);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ true, 0x6);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ true, 0x6);
+ }
+ }
+}
+
+static void halbtc8192e2ant_action_pan_edr(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 2);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 1);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 1);
+ }
+
+ /* sw mechanism */
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* PAN(HS) only */
+static void halbtc8192e2ant_action_pan_hs(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 2);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ }
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* PAN(EDR)+A2DP */
+static void halbtc8192e2ant_action_pan_edr_a2dp(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 2);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false, true, 3);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ false, 3);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, false,
+ false, 3);
+ }
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, false,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8192e2ant_action_pan_edr_hid(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 3);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 14);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ }
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* HID+A2DP+PAN(EDR) */
+static void btc8192e2ant_action_hid_a2dp_pan_edr(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ halbtc8192e2ant_fw_dac_swinglvl(btcoexist, NORMAL_EXEC, 6);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 3);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, true, true, 3);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, true, false, 3);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, true, false, 3);
+ }
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8192e2ant_action_hid_a2dp(struct btc_coexist *btcoexist)
+{
+ u8 wifirssi_state, btrssi_state = BTC_RSSI_STATE_HIGH;
+ u32 wifi_bw;
+
+ wifirssi_state = halbtc8192e2ant_wifirssi_state(btcoexist, 0, 2, 15, 0);
+ btrssi_state = halbtc8192e2ant_btrssi_state(3, 34, 42);
+
+ halbtc8192e2ant_switch_sstype(btcoexist, NORMAL_EXEC, 1);
+ halbtc8192e2ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 0x8);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, NORMAL_EXEC, 3);
+
+ if ((btrssi_state == BTC_RSSI_STATE_LOW) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 0);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, true, true, 2);
+ } else if ((btrssi_state == BTC_RSSI_STATE_MEDIUM) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_MEDIUM)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 2);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, true, false, 2);
+ } else if ((btrssi_state == BTC_RSSI_STATE_HIGH) ||
+ (btrssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8192e2ant_dec_btpwr(btcoexist, NORMAL_EXEC, 4);
+ halbtc8192e2ant_tdma_duration_adjust(btcoexist, true, false, 2);
+ }
+
+ /* sw mechanism */
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, true, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ if ((wifirssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifirssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8192e2ant_sw_mec1(btcoexist, false, true,
+ false, false);
+ btc8192e2ant_sw_mec2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
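+/* Top-level dispatcher: bail out under manual control or IPS, handle
+ * BT inquiry/page scan specially, try the common-action path first,
+ * then route to the per-profile handler for the detected algorithm.
+ * auto_tdma_adjust is cleared whenever the algorithm changes so the
+ * duration tuning restarts from its first-run defaults.
+ */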
+static void halbtc8192e2ant_run_coexist_mechanism(struct btc_coexist *btcoexist)
+{
+ u8 algorithm = 0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism()===>\n");
+
+ if (btcoexist->manual_control) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], return for Manual CTRL <===\n");
+ return;
+ }
+
+ if (coex_sta->under_ips) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], wifi is under IPS !!!\n");
+ return;
+ }
+
+ algorithm = halbtc8192e2ant_action_algorithm(btcoexist);
+ if (coex_sta->c2h_bt_inquiry_page &&
+ (BT_8192E_2ANT_COEX_ALGO_PANHS != algorithm)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT is under inquiry/page scan !!\n");
+ halbtc8192e2ant_action_bt_inquiry(btcoexist);
+ return;
+ }
+
+ coex_dm->cur_algorithm = algorithm;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Algorithm = %d\n", coex_dm->cur_algorithm);
+
+ if (halbtc8192e2ant_is_common_action(btcoexist)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant common.\n");
+ coex_dm->auto_tdma_adjust = false;
+ } else {
+ if (coex_dm->cur_algorithm != coex_dm->pre_algorithm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex] preAlgorithm=%d, curAlgorithm=%d\n",
+ coex_dm->pre_algorithm,
+ coex_dm->cur_algorithm);
+ coex_dm->auto_tdma_adjust = false;
+ }
+ switch (coex_dm->cur_algorithm) {
+ case BT_8192E_2ANT_COEX_ALGO_SCO:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = SCO.\n");
+ halbtc8192e2ant_action_sco(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_SCO_PAN:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = SCO+PAN(EDR).\n");
+ halbtc8192e2ant_action_sco_pan(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = HID.\n");
+ halbtc8192e2ant_action_hid(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = A2DP.\n");
+ halbtc8192e2ant_action_a2dp(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_A2DP_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = A2DP+PAN(HS).\n");
+ halbtc8192e2ant_action_a2dp_pan_hs(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = PAN(EDR).\n");
+ halbtc8192e2ant_action_pan_edr(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = HS mode.\n");
+ halbtc8192e2ant_action_pan_hs(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_PANEDR_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = PAN+A2DP.\n");
+ halbtc8192e2ant_action_pan_edr_a2dp(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_PANEDR_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = PAN(EDR)+HID.\n");
+ halbtc8192e2ant_action_pan_edr_hid(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_HID_A2DP_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = HID+A2DP+PAN.\n");
+ btc8192e2ant_action_hid_a2dp_pan_edr(btcoexist);
+ break;
+ case BT_8192E_2ANT_COEX_ALGO_HID_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = HID+A2DP.\n");
+ halbtc8192e2ant_action_hid_a2dp(btcoexist);
+ break;
+ default:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "Action 2-Ant, algorithm = unknown!!\n");
+ /* halbtc8192e2ant_coex_alloff(btcoexist); */
+ break;
+ }
+ coex_dm->pre_algorithm = coex_dm->cur_algorithm;
+ }
+}
+
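+/* One-time hardware setup: optionally back up the registers the coex
+ * logic will touch (RF 0x1e, 0x430/0x434, the retry limit and AMPDU
+ * max time), route the antenna, load coex table 0, and enable the
+ * PTA, mailbox and counter-statistics blocks plus the BT clock
+ * overrides for WiFi-off and suspend.
+ */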
+static void halbtc8192e2ant_init_hwconfig(struct btc_coexist *btcoexist,
+ bool backup)
+{
+ u16 u16tmp = 0;
+ u8 u8tmp = 0;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], 2Ant Init HW Config!!\n");
+
+ if (backup) {
+ /* backup rf 0x1e value */
+ coex_dm->bt_rf0x1e_backup =
+ btcoexist->btc_get_rf_reg(btcoexist, BTC_RF_A,
+ 0x1e, 0xfffff);
+
+ coex_dm->backup_arfr_cnt1 = btcoexist->btc_read_4byte(btcoexist,
+ 0x430);
+ coex_dm->backup_arfr_cnt2 = btcoexist->btc_read_4byte(btcoexist,
+ 0x434);
+ coex_dm->backup_retrylimit = btcoexist->btc_read_2byte(
+ btcoexist,
+ 0x42a);
+ coex_dm->backup_ampdu_maxtime = btcoexist->btc_read_1byte(
+ btcoexist,
+ 0x456);
+ }
+
+ /* antenna sw ctrl to bt */
+ btcoexist->btc_write_1byte(btcoexist, 0x4f, 0x6);
+ btcoexist->btc_write_1byte(btcoexist, 0x944, 0x24);
+ btcoexist->btc_write_4byte(btcoexist, 0x930, 0x700700);
+ btcoexist->btc_write_1byte(btcoexist, 0x92c, 0x20);
+ if (btcoexist->chip_interface == BTC_INTF_USB)
+ btcoexist->btc_write_4byte(btcoexist, 0x64, 0x30430004);
+ else
+ btcoexist->btc_write_4byte(btcoexist, 0x64, 0x30030004);
+
+ btc8192e2ant_coex_tbl_w_type(btcoexist, FORCE_EXEC, 0);
+
+ /* antenna switch control parameter */
+ btcoexist->btc_write_4byte(btcoexist, 0x858, 0x55555555);
+
+ /* coex parameters */
+ btcoexist->btc_write_1byte(btcoexist, 0x778, 0x3);
+ /* 0x790[5:0] = 0x5 */
+ u8tmp = btcoexist->btc_read_1byte(btcoexist, 0x790);
+ u8tmp &= 0xc0;
+ u8tmp |= 0x5;
+ btcoexist->btc_write_1byte(btcoexist, 0x790, u8tmp);
+
+ /* enable counter statistics */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0x4);
+
+ /* enable PTA */
+ btcoexist->btc_write_1byte(btcoexist, 0x40, 0x20);
+ /* enable mailbox interface */
+ u16tmp = btcoexist->btc_read_2byte(btcoexist, 0x40);
+ u16tmp |= BIT9;
+ btcoexist->btc_write_2byte(btcoexist, 0x40, u16tmp);
+
+ /* enable PTA I2C mailbox */
+ u8tmp = btcoexist->btc_read_1byte(btcoexist, 0x101);
+ u8tmp |= BIT4;
+ btcoexist->btc_write_1byte(btcoexist, 0x101, u8tmp);
+
+ /* enable bt clock when wifi is disabled. */
+ u8tmp = btcoexist->btc_read_1byte(btcoexist, 0x93);
+ u8tmp |= BIT0;
+ btcoexist->btc_write_1byte(btcoexist, 0x93, u8tmp);
+ /* enable bt clock when suspend. */
+ u8tmp = btcoexist->btc_read_1byte(btcoexist, 0x7);
+ u8tmp |= BIT0;
+ btcoexist->btc_write_1byte(btcoexist, 0x7, u8tmp);
+}
+
+/*************************************************************
+ * workaround functions start with wa_halbtc8192e2ant_
+ *************************************************************/
+
+/************************************************************
+ * extern functions start with ex_halbtc8192e2ant_
+ ************************************************************/
+
+void ex_halbtc8192e2ant_init_hwconfig(struct btc_coexist *btcoexist)
+{
+ halbtc8192e2ant_init_hwconfig(btcoexist, true);
+}
+
+void ex_halbtc8192e2ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Coex Mechanism Init!!\n");
+ halbtc8192e2ant_init_coex_dm(btcoexist);
+}
+
+void ex_halbtc8192e2ant_display_coex_info(struct btc_coexist *btcoexist)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ struct rtl_priv *rtlpriv = btcoexist->adapter;
+ u8 u8tmp[4], i, bt_info_ext, ps_tdma_case = 0;
+ u16 u16tmp[4];
+ u32 u32tmp[4];
+ bool roam = false, scan = false, link = false, wifi_under_5g = false;
+ bool bt_hson = false, wifi_busy = false;
+ int wifirssi = 0, bt_hs_rssi = 0;
+ u32 wifi_bw, wifi_traffic_dir;
+ u8 wifi_dot11_chnl, wifi_hs_chnl;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[BT Coexist info]============");
+
+ if (btcoexist->manual_control) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ===========[Under Manual Control]===========");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ==========================================");
+ }
+
+ if (!board_info->bt_exist) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n BT does not exist!!!");
+ return;
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d ", "Ant PG number/ Ant mechanism:",
+ board_info->pg_ant_num, board_info->btdm_ant_num);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s / %d",
+ "BT stack/ hci ext ver",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER, &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d_%d/ 0x%x/ 0x%x(%d)",
+ "CoexVer/ FwVer/ PatchVer",
+ glcoex_ver_date_8192e_2ant, glcoex_ver_8192e_2ant,
+ fw_ver, bt_patch_ver, bt_patch_ver);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hson);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_DOT11_CHNL,
+ &wifi_dot11_chnl);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_HS_CHNL, &wifi_hs_chnl);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d / %d(%d)",
+ "Dot11 channel / HsMode(HsChnl)",
+ wifi_dot11_chnl, bt_hson, wifi_hs_chnl);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %02x %02x %02x ",
+ "H2C Wifi inform bt chnl Info", coex_dm->wifi_chnl_info[0],
+ coex_dm->wifi_chnl_info[1], coex_dm->wifi_chnl_info[2]);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifirssi);
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "Wifi rssi/ HS rssi", wifirssi, bt_hs_rssi);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d ",
+ "Wifi link/ roam/ scan", link, roam, scan);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_TRAFFIC_DIRECTION,
+ &wifi_traffic_dir);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s / %s/ %s ",
+ "Wifi status", (wifi_under_5g ? "5G" : "2.4G"),
+ ((BTC_WIFI_BW_LEGACY == wifi_bw) ? "Legacy" :
+ (((BTC_WIFI_BW_HT40 == wifi_bw) ? "HT40" : "HT20"))),
+ ((!wifi_busy) ? "idle" :
+ ((BTC_WIFI_TRAFFIC_TX == wifi_traffic_dir) ?
+ "uplink" : "downlink")));
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = [%s/ %d/ %d] ",
+ "BT [status/ rssi/ retryCnt]",
+ ((btcoexist->bt_info.bt_disabled) ? ("disabled") :
+ ((coex_sta->c2h_bt_inquiry_page) ?
+ ("inquiry/page scan") :
+ ((BT_8192E_2ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status) ? "non-connected idle" :
+ ((BT_8192E_2ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status) ? "connected-idle" : "busy")))),
+ coex_sta->bt_rssi, coex_sta->bt_retry_cnt);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d / %d / %d / %d",
+ "SCO/HID/PAN/A2DP", stack_info->sco_exist,
+ stack_info->hid_exist, stack_info->pan_exist,
+ stack_info->a2dp_exist);
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_BT_LINK_INFO);
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s",
+ "BT Info A2DP rate",
+ (bt_info_ext & BIT0) ? "Basic rate" : "EDR rate");
+
+ for (i = 0; i < BT_INFO_SRC_8192E_2ANT_MAX; i++) {
+ if (coex_sta->bt_info_c2h_cnt[i]) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x ",
+ GLBtInfoSrc8192e2Ant[i],
+ coex_sta->bt_info_c2h[i][0],
+ coex_sta->bt_info_c2h[i][1],
+ coex_sta->bt_info_c2h[i][2],
+ coex_sta->bt_info_c2h[i][3]);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "%02x %02x %02x(%d)",
+ coex_sta->bt_info_c2h[i][4],
+ coex_sta->bt_info_c2h[i][5],
+ coex_sta->bt_info_c2h[i][6],
+ coex_sta->bt_info_c2h_cnt[i]);
+ }
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s/%s",
+ "PS state, IPS/LPS",
+ ((coex_sta->under_ips ? "IPS ON" : "IPS OFF")),
+ ((coex_sta->under_lps ? "LPS ON" : "LPS OFF")));
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_FW_PWR_MODE_CMD);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x ", "SS Type",
+ coex_dm->cur_sstype);
+
+ /* Sw mechanism */
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Sw mechanism]============");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d ",
+ "SM1[ShRf/ LpRA/ LimDig]", coex_dm->cur_rf_rx_lpf_shrink,
+ coex_dm->cur_low_penalty_ra, coex_dm->limited_dig);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d(0x%x) ",
+ "SM2[AgcT/ AdcB/ SwDacSwing(lvl)]",
+ coex_dm->cur_agc_table_en, coex_dm->cur_adc_back_off,
+ coex_dm->cur_dac_swing_on, coex_dm->cur_dac_swing_lvl);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x ", "Rate Mask",
+ btcoexist->bt_info.ra_mask);
+
+ /* Fw mechanism */
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Fw mechanism]============");
+
+ ps_tdma_case = coex_dm->cur_ps_tdma;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x case-%d (auto:%d)",
+ "PS TDMA", coex_dm->ps_tdma_para[0],
+ coex_dm->ps_tdma_para[1], coex_dm->ps_tdma_para[2],
+ coex_dm->ps_tdma_para[3], coex_dm->ps_tdma_para[4],
+ ps_tdma_case, coex_dm->auto_tdma_adjust);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d ",
+ "DecBtPwr/ IgnWlanAct",
+ coex_dm->cur_dec_bt_pwr, coex_dm->cur_ignore_wlan_act);
+
+ /* Hw setting */
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Hw setting]============");
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x",
+ "RF-A, 0x1e initVal", coex_dm->bt_rf0x1e_backup);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/0x%x/0x%x/0x%x",
+ "backup ARFR1/ARFR2/RL/AMaxTime", coex_dm->backup_arfr_cnt1,
+ coex_dm->backup_arfr_cnt2, coex_dm->backup_retrylimit,
+ coex_dm->backup_ampdu_maxtime);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x430);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x434);
+ u16tmp[0] = btcoexist->btc_read_2byte(btcoexist, 0x42a);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x456);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/0x%x/0x%x/0x%x",
+ "0x430/0x434/0x42a/0x456",
+ u32tmp[0], u32tmp[1], u16tmp[0], u8tmp[0]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc04);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0xd04);
+ u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x90c);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0xc04/ 0xd04/ 0x90c", u32tmp[0], u32tmp[1], u32tmp[2]);
+
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x778);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x", "0x778",
+ u8tmp[0]);
+
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x92c);
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x930);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0x92c/ 0x930", (u8tmp[0]), u32tmp[0]);
+
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x40);
+ u8tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x4f);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0x40/ 0x4f", u8tmp[0], u8tmp[1]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x550);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x522);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0x550(bcn ctrl)/0x522", u32tmp[0], u8tmp[0]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc50);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x", "0xc50(dig)",
+ u32tmp[0]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x6c0);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x6c4);
+ u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x6c8);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x6cc);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x/ 0x%x/ 0x%x",
+ "0x6c0/0x6c4/0x6c8/0x6cc(coexTable)",
+ u32tmp[0], u32tmp[1], u32tmp[2], u8tmp[0]);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "0x770(hp rx[31:16]/tx[15:0])",
+ coex_sta->high_priority_rx, coex_sta->high_priority_tx);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "0x774(lp rx[31:16]/tx[15:0])",
+ coex_sta->low_priority_rx, coex_sta->low_priority_tx);
+#if (BT_AUTO_REPORT_ONLY_8192E_2ANT == 1)
+ halbtc8192e2ant_monitor_bt_ctr(btcoexist);
+#endif
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_COEX_STATISTICS);
+}
+
+void ex_halbtc8192e2ant_ips_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_IPS_ENTER == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS ENTER notify\n");
+ coex_sta->under_ips = true;
+ halbtc8192e2ant_coex_alloff(btcoexist);
+ } else if (BTC_IPS_LEAVE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS LEAVE notify\n");
+ coex_sta->under_ips = false;
+ }
+}
+
+void ex_halbtc8192e2ant_lps_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_LPS_ENABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS ENABLE notify\n");
+ coex_sta->under_lps = true;
+ } else if (BTC_LPS_DISABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS DISABLE notify\n");
+ coex_sta->under_lps = false;
+ }
+}
+
+void ex_halbtc8192e2ant_scan_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_SCAN_START == type)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN START notify\n");
+ else if (BTC_SCAN_FINISH == type)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN FINISH notify\n");
+}
+
+void ex_halbtc8192e2ant_connect_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_ASSOCIATE_START == type)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT START notify\n");
+ else if (BTC_ASSOCIATE_FINISH == type)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT FINISH notify\n");
+}
+
+void ex_halbtc8192e2ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ u8 h2c_parameter[3] = {0};
+ u32 wifi_bw;
+ u8 wifi_center_chnl;
+
+ if (btcoexist->manual_control ||
+ btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
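+ /* H2C 0x66 payload: byte0 = connected flag, byte1 = wifi center
+ * channel, byte2 = bandwidth code (0x30 for HT40, else 0x20);
+ * all bytes stay 0 on disconnect or on a 5G channel.
+ */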
+ if (BTC_MEDIA_CONNECT == type)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA connect notify\n");
+ else
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA disconnect notify\n");
+
+ /* only for 2.4G do we need to inform bt of the chnl mask */
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_CENTRAL_CHNL,
+ &wifi_center_chnl);
+ if ((BTC_MEDIA_CONNECT == type) &&
+ (wifi_center_chnl <= 14)) {
+ h2c_parameter[0] = 0x1;
+ h2c_parameter[1] = wifi_center_chnl;
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw)
+ h2c_parameter[2] = 0x30;
+ else
+ h2c_parameter[2] = 0x20;
+ }
+
+ coex_dm->wifi_chnl_info[0] = h2c_parameter[0];
+ coex_dm->wifi_chnl_info[1] = h2c_parameter[1];
+ coex_dm->wifi_chnl_info[2] = h2c_parameter[2];
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x66 = 0x%x\n",
+ h2c_parameter[0] << 16 | h2c_parameter[1] << 8 |
+ h2c_parameter[2]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x66, 3, h2c_parameter);
+}
+
+void ex_halbtc8192e2ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ if (type == BTC_PACKET_DHCP)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], DHCP Packet notify\n");
+}
+
+void ex_halbtc8192e2ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmp_buf, u8 length)
+{
+ u8 bt_info = 0;
+ u8 i, rsp_source = 0;
+ bool bt_busy = false, limited_dig = false;
+ bool wifi_connected = false;
+
+ coex_sta->c2h_bt_info_req_sent = false;
+
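+ /* C2H BT-info layout as parsed below: byte0[3:0] = response
+ * source, byte1 = bt_info flags, byte2[3:0] = retry count,
+ * byte3 = raw RSSI, byte4 = extended info bits.
+ */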
+ rsp_source = tmp_buf[0] & 0xf;
+ if (rsp_source >= BT_INFO_SRC_8192E_2ANT_MAX)
+ rsp_source = BT_INFO_SRC_8192E_2ANT_WIFI_FW;
+ coex_sta->bt_info_c2h_cnt[rsp_source]++;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Bt info[%d], length=%d, hex data = [",
+ rsp_source, length);
+ for (i = 0; i < length; i++) {
+ coex_sta->bt_info_c2h[rsp_source][i] = tmp_buf[i];
+ if (i == 1)
+ bt_info = tmp_buf[i];
+ if (i == length - 1)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x]\n", tmp_buf[i]);
+ else
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x, ", tmp_buf[i]);
+ }
+
+ if (BT_INFO_SRC_8192E_2ANT_WIFI_FW != rsp_source) {
+ coex_sta->bt_retry_cnt = /* [3:0] */
+ coex_sta->bt_info_c2h[rsp_source][2] & 0xf;
+
+ coex_sta->bt_rssi =
+ coex_sta->bt_info_c2h[rsp_source][3] * 2 + 10;
+
+ coex_sta->bt_info_ext =
+ coex_sta->bt_info_c2h[rsp_source][4];
+
+ /* Here we need to resend some wifi info to BT
+ * because BT was reset and lost the info.
+ */
+ if ((coex_sta->bt_info_ext & BIT1)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "bit1, send wifi BW&Chnl to BT!!\n");
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ if (wifi_connected)
+ ex_halbtc8192e2ant_media_status_notify(
+ btcoexist,
+ BTC_MEDIA_CONNECT);
+ else
+ ex_halbtc8192e2ant_media_status_notify(
+ btcoexist,
+ BTC_MEDIA_DISCONNECT);
+ }
+
+ if ((coex_sta->bt_info_ext & BIT3)) {
+ if (!btcoexist->manual_control &&
+ !btcoexist->stop_coex_dm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "bit3, BT NOT ignore Wlan active!\n");
+ halbtc8192e2ant_IgnoreWlanAct(btcoexist,
+ FORCE_EXEC,
+ false);
+ }
+ } else {
+ /* BT already does NOT ignore WLAN activity,
+ * so do nothing here.
+ */
+ }
+
+#if (BT_AUTO_REPORT_ONLY_8192E_2ANT == 0)
+ if ((coex_sta->bt_info_ext & BIT4)) {
+ /* BT auto report already enabled, do nothing */
+ } else {
+ halbtc8192e2ant_bt_autoreport(btcoexist, FORCE_EXEC,
+ true);
+ }
+#endif
+ }
+
+ /* check BIT2 first ==> check if bt is under inquiry or page scan */
+ if (bt_info & BT_INFO_8192E_2ANT_B_INQ_PAGE)
+ coex_sta->c2h_bt_inquiry_page = true;
+ else
+ coex_sta->c2h_bt_inquiry_page = false;
+
+ /* set link exist status */
+ if (!(bt_info & BT_INFO_8192E_2ANT_B_CONNECTION)) {
+ coex_sta->bt_link_exist = false;
+ coex_sta->pan_exist = false;
+ coex_sta->a2dp_exist = false;
+ coex_sta->hid_exist = false;
+ coex_sta->sco_exist = false;
+ } else {/* connection exists */
+ coex_sta->bt_link_exist = true;
+ if (bt_info & BT_INFO_8192E_2ANT_B_FTP)
+ coex_sta->pan_exist = true;
+ else
+ coex_sta->pan_exist = false;
+ if (bt_info & BT_INFO_8192E_2ANT_B_A2DP)
+ coex_sta->a2dp_exist = true;
+ else
+ coex_sta->a2dp_exist = false;
+ if (bt_info & BT_INFO_8192E_2ANT_B_HID)
+ coex_sta->hid_exist = true;
+ else
+ coex_sta->hid_exist = false;
+ if (bt_info & BT_INFO_8192E_2ANT_B_SCO_ESCO)
+ coex_sta->sco_exist = true;
+ else
+ coex_sta->sco_exist = false;
+ }
+
+ halbtc8192e2ant_update_btlink_info(btcoexist);
+
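+ /* classify the BT status, in order: no connection bit ->
+ * non-connected idle; connection bit only -> connected idle;
+ * SCO/eSCO or SCO busy -> SCO busy; ACL busy -> ACL busy;
+ * anything else is a non-defined state.
+ */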
+ if (!(bt_info & BT_INFO_8192E_2ANT_B_CONNECTION)) {
+ coex_dm->bt_status = BT_8192E_2ANT_BT_STATUS_NON_CONNECTED_IDLE;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Non-Connected idle!!!\n");
+ } else if (bt_info == BT_INFO_8192E_2ANT_B_CONNECTION) {
+ coex_dm->bt_status = BT_8192E_2ANT_BT_STATUS_CONNECTED_IDLE;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], bt_infoNotify(), BT Connected-idle!!!\n");
+ } else if ((bt_info & BT_INFO_8192E_2ANT_B_SCO_ESCO) ||
+ (bt_info & BT_INFO_8192E_2ANT_B_SCO_BUSY)) {
+ coex_dm->bt_status = BT_8192E_2ANT_BT_STATUS_SCO_BUSY;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], bt_infoNotify(), BT SCO busy!!!\n");
+ } else if (bt_info & BT_INFO_8192E_2ANT_B_ACL_BUSY) {
+ coex_dm->bt_status = BT_8192E_2ANT_BT_STATUS_ACL_BUSY;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], bt_infoNotify(), BT ACL busy!!!\n");
+ } else {
+ coex_dm->bt_status = BT_8192E_2ANT_BT_STATUS_MAX;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex]bt_infoNotify(), BT Non-Defined state!!!\n");
+ }
+
+ if ((BT_8192E_2ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) ||
+ (BT_8192E_2ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8192E_2ANT_BT_STATUS_ACL_SCO_BUSY == coex_dm->bt_status)) {
+ bt_busy = true;
+ limited_dig = true;
+ } else {
+ bt_busy = false;
+ limited_dig = false;
+ }
+
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_TRAFFIC_BUSY, &bt_busy);
+
+ coex_dm->limited_dig = limited_dig;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_LIMITED_DIG, &limited_dig);
+
+ halbtc8192e2ant_run_coexist_mechanism(btcoexist);
+}
+
+void ex_halbtc8192e2ant_stack_operation_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+}
+
+void ex_halbtc8192e2ant_halt_notify(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY, "[BTCoex], Halt notify\n");
+
+ halbtc8192e2ant_IgnoreWlanAct(btcoexist, FORCE_EXEC, true);
+ ex_halbtc8192e2ant_media_status_notify(btcoexist, BTC_MEDIA_DISCONNECT);
+}
+
+void ex_halbtc8192e2ant_periodical(struct btc_coexist *btcoexist)
+{
+ static u8 dis_ver_info_cnt;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "=======================Periodical=======================\n");
+ if (dis_ver_info_cnt <= 5) {
+ dis_ver_info_cnt += 1;
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "************************************************\n");
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "Ant PG Num/ Ant Mech/ Ant Pos = %d/ %d/ %d\n",
+ board_info->pg_ant_num, board_info->btdm_ant_num,
+ board_info->btdm_ant_pos);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "BT stack/ hci ext ver = %s / %d\n",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER,
+ &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "CoexVer/ FwVer/ PatchVer = %d_%x/ 0x%x/ 0x%x(%d)\n",
+ glcoex_ver_date_8192e_2ant, glcoex_ver_8192e_2ant,
+ fw_ver, bt_patch_ver, bt_patch_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "************************************************\n");
+ }
+
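+ /* without BT auto report we must actively poll the BT info and
+ * traffic counters; with it, re-run the coex mechanism only
+ * when the wifi status changed or TDMA auto adjustment is on.
+ */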
+#if (BT_AUTO_REPORT_ONLY_8192E_2ANT == 0)
+ halbtc8192e2ant_querybt_info(btcoexist);
+ halbtc8192e2ant_monitor_bt_ctr(btcoexist);
+ btc8192e2ant_monitor_bt_enable_dis(btcoexist);
+#else
+ if (halbtc8192e2ant_iswifi_status_changed(btcoexist) ||
+ coex_dm->auto_tdma_adjust)
+ halbtc8192e2ant_run_coexist_mechanism(btcoexist);
+#endif
+}
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8192e2ant.h b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8192e2ant.h
new file mode 100644
index 0000000..75e1f7d
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8192e2ant.h
@@ -0,0 +1,185 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+/*****************************************************************
+ * The following are the 8192E 2-antenna BT coexistence definitions
+ *****************************************************************/
+#define BT_AUTO_REPORT_ONLY_8192E_2ANT 0
+
+#define BT_INFO_8192E_2ANT_B_FTP BIT7
+#define BT_INFO_8192E_2ANT_B_A2DP BIT6
+#define BT_INFO_8192E_2ANT_B_HID BIT5
+#define BT_INFO_8192E_2ANT_B_SCO_BUSY BIT4
+#define BT_INFO_8192E_2ANT_B_ACL_BUSY BIT3
+#define BT_INFO_8192E_2ANT_B_INQ_PAGE BIT2
+#define BT_INFO_8192E_2ANT_B_SCO_ESCO BIT1
+#define BT_INFO_8192E_2ANT_B_CONNECTION BIT0
+
+#define BTC_RSSI_COEX_THRESH_TOL_8192E_2ANT 2
+
+enum bt_info_src_8192e_2ant {
+ BT_INFO_SRC_8192E_2ANT_WIFI_FW = 0x0,
+ BT_INFO_SRC_8192E_2ANT_BT_RSP = 0x1,
+ BT_INFO_SRC_8192E_2ANT_BT_ACTIVE_SEND = 0x2,
+ BT_INFO_SRC_8192E_2ANT_MAX
+};
+
+enum bt_8192e_2ant_bt_status {
+ BT_8192E_2ANT_BT_STATUS_NON_CONNECTED_IDLE = 0x0,
+ BT_8192E_2ANT_BT_STATUS_CONNECTED_IDLE = 0x1,
+ BT_8192E_2ANT_BT_STATUS_INQ_PAGE = 0x2,
+ BT_8192E_2ANT_BT_STATUS_ACL_BUSY = 0x3,
+ BT_8192E_2ANT_BT_STATUS_SCO_BUSY = 0x4,
+ BT_8192E_2ANT_BT_STATUS_ACL_SCO_BUSY = 0x5,
+ BT_8192E_2ANT_BT_STATUS_MAX
+};
+
+enum bt_8192e_2ant_coex_algo {
+ BT_8192E_2ANT_COEX_ALGO_UNDEFINED = 0x0,
+ BT_8192E_2ANT_COEX_ALGO_SCO = 0x1,
+ BT_8192E_2ANT_COEX_ALGO_SCO_PAN = 0x2,
+ BT_8192E_2ANT_COEX_ALGO_HID = 0x3,
+ BT_8192E_2ANT_COEX_ALGO_A2DP = 0x4,
+ BT_8192E_2ANT_COEX_ALGO_A2DP_PANHS = 0x5,
+ BT_8192E_2ANT_COEX_ALGO_PANEDR = 0x6,
+ BT_8192E_2ANT_COEX_ALGO_PANHS = 0x7,
+ BT_8192E_2ANT_COEX_ALGO_PANEDR_A2DP = 0x8,
+ BT_8192E_2ANT_COEX_ALGO_PANEDR_HID = 0x9,
+ BT_8192E_2ANT_COEX_ALGO_HID_A2DP_PANEDR = 0xa,
+ BT_8192E_2ANT_COEX_ALGO_HID_A2DP = 0xb,
+ BT_8192E_2ANT_COEX_ALGO_MAX = 0xc
+};
+
+struct coex_dm_8192e_2ant {
+ /* fw mechanism */
+ u8 pre_dec_bt_pwr;
+ u8 cur_dec_bt_pwr;
+ u8 pre_fw_dac_swing_lvl;
+ u8 cur_fw_dac_swing_lvl;
+ bool cur_ignore_wlan_act;
+ bool pre_ignore_wlan_act;
+ u8 pre_ps_tdma;
+ u8 cur_ps_tdma;
+ u8 ps_tdma_para[5];
+ u8 tdma_adj_type;
+ bool reset_tdma_adjust;
+ bool auto_tdma_adjust;
+ bool pre_ps_tdma_on;
+ bool cur_ps_tdma_on;
+ bool pre_bt_auto_report;
+ bool cur_bt_auto_report;
+
+ /* sw mechanism */
+ bool pre_rf_rx_lpf_shrink;
+ bool cur_rf_rx_lpf_shrink;
+ u32 bt_rf0x1e_backup;
+ bool pre_low_penalty_ra;
+ bool cur_low_penalty_ra;
+ bool pre_dac_swing_on;
+ u32 pre_dac_swing_lvl;
+ bool cur_dac_swing_on;
+ u32 cur_dac_swing_lvl;
+ bool pre_adc_back_off;
+ bool cur_adc_back_off;
+ bool pre_agc_table_en;
+ bool cur_agc_table_en;
+ u32 pre_val0x6c0;
+ u32 cur_val0x6c0;
+ u32 pre_val0x6c4;
+ u32 cur_val0x6c4;
+ u32 pre_val0x6c8;
+ u32 cur_val0x6c8;
+ u8 pre_val0x6cc;
+ u8 cur_val0x6cc;
+ bool limited_dig;
+
+ u32 backup_arfr_cnt1; /* Auto Rate Fallback Retry cnt */
+ u32 backup_arfr_cnt2; /* Auto Rate Fallback Retry cnt */
+ u16 backup_retrylimit;
+ u8 backup_ampdu_maxtime;
+
+ /* algorithm related */
+ u8 pre_algorithm;
+ u8 cur_algorithm;
+ u8 bt_status;
+ u8 wifi_chnl_info[3];
+
+ u8 pre_sstype;
+ u8 cur_sstype;
+
+ u32 prera_mask;
+ u32 curra_mask;
+ u8 curra_masktype;
+ u8 pre_arfrtype;
+ u8 cur_arfrtype;
+ u8 pre_retrylimit_type;
+ u8 cur_retrylimit_type;
+ u8 pre_ampdutime_type;
+ u8 cur_ampdutime_type;
+};
+
+struct coex_sta_8192e_2ant {
+ bool bt_link_exist;
+ bool sco_exist;
+ bool a2dp_exist;
+ bool hid_exist;
+ bool pan_exist;
+
+ bool under_lps;
+ bool under_ips;
+ u32 high_priority_tx;
+ u32 high_priority_rx;
+ u32 low_priority_tx;
+ u32 low_priority_rx;
+ u8 bt_rssi;
+ u8 pre_bt_rssi_state;
+ u8 pre_wifi_rssi_state[4];
+ bool c2h_bt_info_req_sent;
+ u8 bt_info_c2h[BT_INFO_SRC_8192E_2ANT_MAX][10];
+ u32 bt_info_c2h_cnt[BT_INFO_SRC_8192E_2ANT_MAX];
+ bool c2h_bt_inquiry_page;
+ u8 bt_retry_cnt;
+ u8 bt_info_ext;
+};
+
+/****************************************************************
+ * The following are the interfaces that notify the coex module.
+ ****************************************************************/
+void ex_halbtc8192e2ant_init_hwconfig(struct btc_coexist *btcoexist);
+void ex_halbtc8192e2ant_init_coex_dm(struct btc_coexist *btcoexist);
+void ex_halbtc8192e2ant_ips_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8192e2ant_lps_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8192e2ant_scan_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8192e2ant_connect_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8192e2ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8192e2ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8192e2ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmpbuf, u8 length);
+void ex_halbtc8192e2ant_stack_operation_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8192e2ant_halt_notify(struct btc_coexist *btcoexist);
+void ex_halbtc8192e2ant_periodical(struct btc_coexist *btcoexist);
+void ex_halbtc8192e2ant_display_coex_info(struct btc_coexist *btcoexist);
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b1ant.c b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b1ant.c
new file mode 100644
index 0000000..c4acd40
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b1ant.c
@@ -0,0 +1,3199 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+
+/***************************************************************
+ * Description:
+ *
+ * This file implements the RTL8723B 1-antenna coexistence mechanism.
+ *
+ * History
+ * 2012/11/15 Cosa first check in.
+ *
+ ***************************************************************/
+
+/***************************************************************
+ * include files
+ ***************************************************************/
+#include "halbt_precomp.h"
+/***************************************************************
+ * Global variables; all of them are static
+ ***************************************************************/
+static struct coex_dm_8723b_1ant glcoex_dm_8723b_1ant;
+static struct coex_dm_8723b_1ant *coex_dm = &glcoex_dm_8723b_1ant;
+static struct coex_sta_8723b_1ant glcoex_sta_8723b_1ant;
+static struct coex_sta_8723b_1ant *coex_sta = &glcoex_sta_8723b_1ant;
+
+static const char *const GLBtInfoSrc8723b1Ant[] = {
+ "BT Info[wifi fw]",
+ "BT Info[bt rsp]",
+ "BT Info[bt auto report]",
+};
+
+static u32 glcoex_ver_date_8723b_1ant = 20130918;
+static u32 glcoex_ver_8723b_1ant = 0x47;
+
+/***************************************************************
+ * local function prototypes if needed
+ ***************************************************************/
+/***************************************************************
+ * local functions start with halbtc8723b1ant_
+ ***************************************************************/
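+/* Map the BT RSSI onto a coarse LOW/MEDIUM/HIGH state machine.
+ * BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT adds hysteresis: a LOW or
+ * MEDIUM state is only left upwards once the RSSI exceeds the
+ * threshold by that tolerance, so the state does not flap while
+ * the RSSI hovers around a threshold.
+ */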
+static u8 halbtc8723b1ant_bt_rssi_state(u8 level_num, u8 rssi_thresh,
+ u8 rssi_thresh1)
+{
+ s32 bt_rssi = 0;
+ u8 bt_rssi_state = coex_sta->pre_bt_rssi_state;
+
+ bt_rssi = coex_sta->bt_rssi;
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ if (bt_rssi >= rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT) {
+ bt_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to High\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Low\n");
+ }
+ } else {
+ if (bt_rssi < rssi_thresh) {
+ bt_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Low\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi thresh error!!\n");
+ return coex_sta->pre_bt_rssi_state;
+ }
+
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ if (bt_rssi >= rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT) {
+ bt_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Medium\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (bt_rssi >= rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT) {
+ bt_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to High\n");
+ } else if (bt_rssi < rssi_thresh) {
+ bt_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Low\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Medium\n");
+ }
+ } else {
+ if (bt_rssi < rssi_thresh1) {
+ bt_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Medium\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at High\n");
+ }
+ }
+ }
+
+ coex_sta->pre_bt_rssi_state = bt_rssi_state;
+
+ return bt_rssi_state;
+}
+
+static u8 halbtc8723b1ant_wifi_rssi_state(struct btc_coexist *btcoexist,
+ u8 index, u8 level_num,
+ u8 rssi_thresh, u8 rssi_thresh1)
+{
+ s32 wifi_rssi = 0;
+ u8 wifi_rssi_state = coex_sta->pre_wifi_rssi_state[index];
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifi_rssi >= rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT) {
+ wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to High\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Low\n");
+ }
+ } else {
+ if (wifi_rssi < rssi_thresh) {
+ wifi_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Low\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI thresh error!!\n");
+ return coex_sta->pre_wifi_rssi_state[index];
+ }
+
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifi_rssi >= rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT) {
+ wifi_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Medium\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (wifi_rssi >= rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT) {
+ wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to High\n");
+ } else if (wifi_rssi < rssi_thresh) {
+ wifi_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Low\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Medium\n");
+ }
+ } else {
+ if (wifi_rssi < rssi_thresh1) {
+ wifi_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Medium\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at High\n");
+ }
+ }
+ }
+
+ coex_sta->pre_wifi_rssi_state[index] = wifi_rssi_state;
+
+ return wifi_rssi_state;
+}
+
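+/* The setters below share one pattern: the new value is cached as
+ * cur_*, the hardware/firmware write is skipped when it equals
+ * pre_* unless force_exec is set, and cur_* is then saved as
+ * pre_*.
+ */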
+static void halbtc8723b1ant_updatera_mask(struct btc_coexist *btcoexist,
+ bool force_exec, u32 dis_rate_mask)
+{
+ coex_dm->curra_mask = dis_rate_mask;
+
+ if (force_exec || (coex_dm->prera_mask != coex_dm->curra_mask))
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_UPDATE_ra_mask,
+ &coex_dm->curra_mask);
+
+ coex_dm->prera_mask = coex_dm->curra_mask;
+}
+
+static void btc8723b1ant_auto_rate_fb_retry(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ bool wifi_under_bmode = false;
+
+ coex_dm->cur_arfr_type = type;
+
+ if (force_exec || (coex_dm->pre_arfr_type != coex_dm->cur_arfr_type)) {
+ switch (coex_dm->cur_arfr_type) {
+ case 0: /* normal mode */
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ coex_dm->backup_arfr_cnt1);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ coex_dm->backup_arfr_cnt2);
+ break;
+ case 1:
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_UNDER_B_MODE,
+ &wifi_under_bmode);
+ if (wifi_under_bmode) {
+ btcoexist->btc_write_4byte(btcoexist,
+ 0x430, 0x0);
+ btcoexist->btc_write_4byte(btcoexist,
+ 0x434, 0x01010101);
+ } else {
+ btcoexist->btc_write_4byte(btcoexist,
+ 0x430, 0x0);
+ btcoexist->btc_write_4byte(btcoexist,
+ 0x434, 0x04030201);
+ }
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_arfr_type = coex_dm->cur_arfr_type;
+}
+
+static void halbtc8723b1ant_retry_limit(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ coex_dm->cur_retry_limit_type = type;
+
+ if (force_exec || (coex_dm->pre_retry_limit_type !=
+ coex_dm->cur_retry_limit_type)) {
+ switch (coex_dm->cur_retry_limit_type) {
+ case 0: /* normal mode */
+ btcoexist->btc_write_2byte(btcoexist, 0x42a,
+ coex_dm->backup_retry_limit);
+ break;
+ case 1: /* retry limit = 8 */
+ btcoexist->btc_write_2byte(btcoexist, 0x42a, 0x0808);
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_retry_limit_type = coex_dm->cur_retry_limit_type;
+}
+
+static void halbtc8723b1ant_ampdu_maxtime(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ coex_dm->cur_ampdu_time_type = type;
+
+ if (force_exec || (coex_dm->pre_ampdu_time_type !=
+ coex_dm->cur_ampdu_time_type)) {
+ switch (coex_dm->cur_ampdu_time_type) {
+ case 0: /* normal mode */
+ btcoexist->btc_write_1byte(btcoexist, 0x456,
+ coex_dm->backup_ampdu_max_time);
+ break;
+ case 1: /* AMPDU time = 0x38 * 32us */
+ btcoexist->btc_write_1byte(btcoexist,
+ 0x456, 0x38);
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_ampdu_time_type = coex_dm->cur_ampdu_time_type;
+}
+
+static void halbtc8723b1ant_limited_tx(struct btc_coexist *btcoexist,
+ bool force_exec, u8 ra_masktype,
+ u8 arfr_type, u8 retry_limit_type,
+ u8 ampdu_time_type)
+{
+ switch (ra_masktype) {
+ case 0: /* normal mode */
+ halbtc8723b1ant_updatera_mask(btcoexist, force_exec, 0x0);
+ break;
+ case 1: /* disable cck 1/2 */
+ halbtc8723b1ant_updatera_mask(btcoexist, force_exec,
+ 0x00000003);
+ break;
+ /* disable cck 1/2/5.5, ofdm 6/9/12/18/24, mcs 0/1/2/3/4 */
+ case 2:
+ halbtc8723b1ant_updatera_mask(btcoexist, force_exec,
+ 0x0001f1f7);
+ break;
+ default:
+ break;
+ }
+
+ btc8723b1ant_auto_rate_fb_retry(btcoexist, force_exec, arfr_type);
+ halbtc8723b1ant_retry_limit(btcoexist, force_exec, retry_limit_type);
+ halbtc8723b1ant_ampdu_maxtime(btcoexist, force_exec, ampdu_time_type);
+}
+
+static void halbtc8723b1ant_limited_rx(struct btc_coexist *btcoexist,
+ bool force_exec, bool rej_ap_agg_pkt,
+ bool bt_ctrl_agg_buf_size,
+ u8 agg_buf_size)
+{
+ bool reject_rx_agg = rej_ap_agg_pkt;
+ bool bt_ctrl_rx_agg_size = bt_ctrl_agg_buf_size;
+ u8 rxaggsize = agg_buf_size;
+
+ /**********************************************
+ * Rx Aggregation related setting
+ **********************************************/
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_TO_REJ_AP_AGG_PKT,
+ &reject_rx_agg);
+ /* decide whether BT controls the aggregation buf size */
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_CTRL_AGG_SIZE,
+ &bt_ctrl_rx_agg_size);
+ /* aggregation buf size; only takes effect
+ * when BT controls the Rx aggregation size.
+ */
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_AGG_BUF_SIZE, &rxaggsize);
+ /* real update aggregation setting */
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_AGGREGATE_CTRL, NULL);
+}
+
+static void halbtc8723b1ant_monitor_bt_ctr(struct btc_coexist *btcoexist)
+{
+ u32 reg_hp_txrx, reg_lp_txrx, u32tmp;
+ u32 reg_hp_tx = 0, reg_hp_rx = 0;
+ u32 reg_lp_tx = 0, reg_lp_rx = 0;
+
+ reg_hp_txrx = 0x770;
+ reg_lp_txrx = 0x774;
+
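+ /* 0x770/0x774 hold the high/low priority BT request counters:
+ * tx in the low word, rx in the high word.
+ */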
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, reg_hp_txrx);
+ reg_hp_tx = u32tmp & MASKLWORD;
+ reg_hp_rx = (u32tmp & MASKHWORD) >> 16;
+
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, reg_lp_txrx);
+ reg_lp_tx = u32tmp & MASKLWORD;
+ reg_lp_rx = (u32tmp & MASKHWORD) >> 16;
+
+ coex_sta->high_priority_tx = reg_hp_tx;
+ coex_sta->high_priority_rx = reg_hp_rx;
+ coex_sta->low_priority_tx = reg_lp_tx;
+ coex_sta->low_priority_rx = reg_lp_rx;
+
+ /* reset counter */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+}
+
+static void halbtc8723b1ant_query_bt_info(struct btc_coexist *btcoexist)
+{
+ u8 h2c_parameter[1] = {0};
+
+ coex_sta->c2h_bt_info_req_sent = true;
+
+ h2c_parameter[0] |= BIT0; /* trigger */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Query Bt Info, FW write 0x61 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x61, 1, h2c_parameter);
+}
+
+static bool btc8723b1ant_is_wifi_status_changed(struct btc_coexist *btcoexist)
+{
+ static bool pre_wifi_busy;
+ static bool pre_under_4way, pre_bt_hs_on;
+ bool wifi_busy = false, under_4way = false, bt_hs_on = false;
+ bool wifi_connected = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_4_WAY_PROGRESS,
+ &under_4way);
+
+ if (wifi_connected) {
+ if (wifi_busy != pre_wifi_busy) {
+ pre_wifi_busy = wifi_busy;
+ return true;
+ }
+ if (under_4way != pre_under_4way) {
+ pre_under_4way = under_4way;
+ return true;
+ }
+ if (bt_hs_on != pre_bt_hs_on) {
+ pre_bt_hs_on = bt_hs_on;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+static void halbtc8723b1ant_update_bt_link_info(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool bt_hs_on = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+
+ bt_link_info->bt_link_exist = coex_sta->bt_link_exist;
+ bt_link_info->sco_exist = coex_sta->sco_exist;
+ bt_link_info->a2dp_exist = coex_sta->a2dp_exist;
+ bt_link_info->pan_exist = coex_sta->pan_exist;
+ bt_link_info->hid_exist = coex_sta->hid_exist;
+
+ /* workaround for HS mode */
+ if (bt_hs_on) {
+ bt_link_info->pan_exist = true;
+ bt_link_info->bt_link_exist = true;
+ }
+
+ /* check if Sco only */
+ if (bt_link_info->sco_exist && !bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist && !bt_link_info->hid_exist)
+ bt_link_info->sco_only = true;
+ else
+ bt_link_info->sco_only = false;
+
+ /* check if A2dp only */
+ if (!bt_link_info->sco_exist && bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist && !bt_link_info->hid_exist)
+ bt_link_info->a2dp_only = true;
+ else
+ bt_link_info->a2dp_only = false;
+
+ /* check if Pan only */
+ if (!bt_link_info->sco_exist && !bt_link_info->a2dp_exist &&
+ bt_link_info->pan_exist && !bt_link_info->hid_exist)
+ bt_link_info->pan_only = true;
+ else
+ bt_link_info->pan_only = false;
+
+ /* check if Hid only */
+ if (!bt_link_info->sco_exist && !bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist && bt_link_info->hid_exist)
+ bt_link_info->hid_only = true;
+ else
+ bt_link_info->hid_only = false;
+}
+
+static u8 halbtc8723b1ant_action_algorithm(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool bt_hs_on = false;
+ u8 algorithm = BT_8723B_1ANT_COEX_ALGO_UNDEFINED;
+ u8 numdiffprofile = 0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+
+ if (!bt_link_info->bt_link_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], No BT link exists!!!\n");
+ return algorithm;
+ }
+
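+ /* count the active profiles; the coex algorithm is then picked
+ * from the combination, with the PAN(HS) cases handled
+ * separately in each branch.
+ */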
+ if (bt_link_info->sco_exist)
+ numdiffprofile++;
+ if (bt_link_info->hid_exist)
+ numdiffprofile++;
+ if (bt_link_info->pan_exist)
+ numdiffprofile++;
+ if (bt_link_info->a2dp_exist)
+ numdiffprofile++;
+
+ if (numdiffprofile == 1) {
+ if (bt_link_info->sco_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO only\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_SCO;
+ } else {
+ if (bt_link_info->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID only\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = A2DP only\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_A2DP;
+ } else if (bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = PAN(HS) only\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = PAN(EDR) only\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR;
+ }
+ }
+ }
+ } else if (numdiffprofile == 2) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + A2DP ==> SCO\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_SCO;
+ } else if (bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + PAN(HS)\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + PAN(EDR)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + A2DP\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_HID_A2DP;
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + PAN(HS)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + PAN(EDR)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = A2DP + PAN(HS)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_A2DP_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = A2DP + PAN(EDR)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_A2DP;
+ }
+ }
+ }
+ } else if (numdiffprofile == 3) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + A2DP ==> HID\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + PAN(HS)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + PAN(EDR)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + A2DP + PAN(HS)\n");
+ algorithm = BT_8723B_1ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + A2DP + PAN(EDR) ==> HID\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + A2DP + PAN(HS)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + A2DP + PAN(EDR)\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_HID_A2DP_PANEDR;
+ }
+ }
+ }
+ } else if (numdiffprofile >= 3) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Error!!! BT Profile = SCO + HID + A2DP + PAN(HS)\n");
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + A2DP + PAN(EDR)==>PAN(EDR)+HID\n");
+ algorithm =
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ }
+ }
+
+ return algorithm;
+}
+
+static void btc8723b1ant_set_sw_pen_tx_rate_adapt(struct btc_coexist *btcoexist,
+ bool low_penalty_ra)
+{
+ u8 h2c_parameter[6] = {0};
+
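+ /* opcode 0x6 tunes the firmware retry penalty; with low penalty
+ * RA enabled, bytes 3-5 select MCS7/6/5 (or OFDM 54/48/36) as
+ * noted below. The exact byte encoding is vendor defined.
+ */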
+ h2c_parameter[0] = 0x6; /* opCode, 0x6= Retry_Penalty */
+
+ if (low_penalty_ra) {
+ h2c_parameter[1] |= BIT0;
+ /* normal rate except MCS7/6/5, OFDM 54/48/36 */
+ h2c_parameter[2] = 0x00;
+ h2c_parameter[3] = 0xf7; /*MCS7 or OFDM54 */
+ h2c_parameter[4] = 0xf8; /*MCS6 or OFDM48 */
+ h2c_parameter[5] = 0xf9; /*MCS5 or OFDM36 */
+ }
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set WiFi Low-Penalty Retry: %s",
+ (low_penalty_ra ? "ON!!" : "OFF!!"));
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x69, 6, h2c_parameter);
+}
+
+static void halbtc8723b1ant_low_penalty_ra(struct btc_coexist *btcoexist,
+ bool force_exec, bool low_penalty_ra)
+{
+ coex_dm->cur_low_penalty_ra = low_penalty_ra;
+
+ if (!force_exec) {
+ if (coex_dm->pre_low_penalty_ra == coex_dm->cur_low_penalty_ra)
+ return;
+ }
+ btc8723b1ant_set_sw_pen_tx_rate_adapt(btcoexist,
+ coex_dm->cur_low_penalty_ra);
+
+ coex_dm->pre_low_penalty_ra = coex_dm->cur_low_penalty_ra;
+}
+
+static void halbtc8723b1ant_set_coex_table(struct btc_coexist *btcoexist,
+ u32 val0x6c0, u32 val0x6c4,
+ u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c0 = 0x%x\n", val0x6c0);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c0, val0x6c0);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c4 = 0x%x\n", val0x6c4);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c4, val0x6c4);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c8 = 0x%x\n", val0x6c8);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c8, val0x6c8);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6cc = 0x%x\n", val0x6cc);
+ btcoexist->btc_write_1byte(btcoexist, 0x6cc, val0x6cc);
+}
+
+static void halbtc8723b1ant_coex_table(struct btc_coexist *btcoexist,
+ bool force_exec, u32 val0x6c0,
+ u32 val0x6c4, u32 val0x6c8,
+ u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s write Coex Table 0x6c0 = 0x%x, 0x6c4 = 0x%x, 0x6cc = 0x%x\n",
+ (force_exec ? "force to" : ""),
+ val0x6c0, val0x6c4, val0x6cc);
+ coex_dm->cur_val0x6c0 = val0x6c0;
+ coex_dm->cur_val0x6c4 = val0x6c4;
+ coex_dm->cur_val0x6c8 = val0x6c8;
+ coex_dm->cur_val0x6cc = val0x6cc;
+
+ if (!force_exec) {
+ if ((coex_dm->pre_val0x6c0 == coex_dm->cur_val0x6c0) &&
+ (coex_dm->pre_val0x6c4 == coex_dm->cur_val0x6c4) &&
+ (coex_dm->pre_val0x6c8 == coex_dm->cur_val0x6c8) &&
+ (coex_dm->pre_val0x6cc == coex_dm->cur_val0x6cc))
+ return;
+ }
+ halbtc8723b1ant_set_coex_table(btcoexist, val0x6c0, val0x6c4,
+ val0x6c8, val0x6cc);
+
+ coex_dm->pre_val0x6c0 = coex_dm->cur_val0x6c0;
+ coex_dm->pre_val0x6c4 = coex_dm->cur_val0x6c4;
+ coex_dm->pre_val0x6c8 = coex_dm->cur_val0x6c8;
+ coex_dm->pre_val0x6cc = coex_dm->cur_val0x6cc;
+}
+
+static void halbtc8723b1ant_coex_table_with_type(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ switch (type) {
+ case 0:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0x55555555, 0xffffff, 0x3);
+ break;
+ case 1:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0x5a5a5a5a, 0xffffff, 0x3);
+ break;
+ case 2:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x5a5a5a5a,
+ 0x5a5a5a5a, 0xffffff, 0x3);
+ break;
+ case 3:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0xaaaaaaaa, 0xffffff, 0x3);
+ break;
+ case 4:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0x5aaa5aaa, 0xffffff, 0x3);
+ break;
+ case 5:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x5a5a5a5a,
+ 0xaaaa5a5a, 0xffffff, 0x3);
+ break;
+ case 6:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0xaaaa5a5a, 0xffffff, 0x3);
+ break;
+ case 7:
+ halbtc8723b1ant_coex_table(btcoexist, force_exec, 0xaaaaaaaa,
+ 0xaaaaaaaa, 0xffffff, 0x3);
+ break;
+ default:
+ break;
+ }
+}
+
+static void halbtc8723b1ant_SetFwIgnoreWlanAct(struct btc_coexist *btcoexist,
+ bool enable)
+{
+ u8 h2c_parameter[1] = {0};
+
+ if (enable)
+ h2c_parameter[0] |= BIT0; /* function enable */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set FW for BT Ignore Wlan_Act, FW write 0x63 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x63, 1, h2c_parameter);
+}
+
+static void halbtc8723b1ant_ignore_wlan_act(struct btc_coexist *btcoexist,
+ bool force_exec, bool enable)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s turn Ignore WlanAct %s\n",
+ (force_exec ? "force to" : ""), (enable ? "ON" : "OFF"));
+ coex_dm->cur_ignore_wlan_act = enable;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], bPreIgnoreWlanAct = %d, bCurIgnoreWlanAct = %d!!\n",
+ coex_dm->pre_ignore_wlan_act,
+ coex_dm->cur_ignore_wlan_act);
+
+ if (coex_dm->pre_ignore_wlan_act ==
+ coex_dm->cur_ignore_wlan_act)
+ return;
+ }
+ halbtc8723b1ant_SetFwIgnoreWlanAct(btcoexist, enable);
+
+ coex_dm->pre_ignore_wlan_act = coex_dm->cur_ignore_wlan_act;
+}
+
+static void halbtc8723b1ant_set_fw_ps_tdma(struct btc_coexist *btcoexist,
+ u8 byte1, u8 byte2, u8 byte3,
+ u8 byte4, u8 byte5)
+{
+ u8 h2c_parameter[5] = {0};
+ u8 real_byte1 = byte1, real_byte5 = byte5;
+ bool ap_enable = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_AP_MODE_ENABLE,
+ &ap_enable);
+
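+ /* AP mode needs a different TDMA flavour: switch byte1 from
+ * the BIT4 to the BIT5 variant and adjust byte5 to match.
+ */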
+ if (ap_enable) {
+ if ((byte1 & BIT4) && !(byte1 & BIT5)) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], FW for 1Ant AP mode\n");
+ real_byte1 &= ~BIT4;
+ real_byte1 |= BIT5;
+
+ real_byte5 |= BIT5;
+ real_byte5 &= ~BIT6;
+ }
+ }
+
+ h2c_parameter[0] = real_byte1;
+ h2c_parameter[1] = byte2;
+ h2c_parameter[2] = byte3;
+ h2c_parameter[3] = byte4;
+ h2c_parameter[4] = real_byte5;
+
+ coex_dm->ps_tdma_para[0] = real_byte1;
+ coex_dm->ps_tdma_para[1] = byte2;
+ coex_dm->ps_tdma_para[2] = byte3;
+ coex_dm->ps_tdma_para[3] = byte4;
+ coex_dm->ps_tdma_para[4] = real_byte5;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], PS-TDMA H2C cmd =0x%x%08x\n",
+ h2c_parameter[0],
+ h2c_parameter[1] << 24 |
+ h2c_parameter[2] << 16 |
+ h2c_parameter[3] << 8 |
+ h2c_parameter[4]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x60, 5, h2c_parameter);
+}
+
+static void halbtc8723b1ant_set_lps_rpwm(struct btc_coexist *btcoexist,
+ u8 lps_val, u8 rpwm_val)
+{
+ u8 lps = lps_val;
+ u8 rpwm = rpwm_val;
+
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_LPS_VAL, &lps);
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_RPWM_VAL, &rpwm);
+}
+
+static void halbtc8723b1ant_LpsRpwm(struct btc_coexist *btcoexist,
+ bool force_exec,
+ u8 lps_val, u8 rpwm_val)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s set lps/rpwm = 0x%x/0x%x\n",
+ (force_exec ? "force to" : ""), lps_val, rpwm_val);
+ coex_dm->cur_lps = lps_val;
+ coex_dm->cur_rpwm = rpwm_val;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], LPS-RxBeaconMode = 0x%x , LPS-RPWM = 0x%x!!\n",
+ coex_dm->cur_lps, coex_dm->cur_rpwm);
+
+ if ((coex_dm->pre_lps == coex_dm->cur_lps) &&
+ (coex_dm->pre_rpwm == coex_dm->cur_rpwm)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], LPS-RPWM_Last = 0x%x , LPS-RPWM_Now = 0x%x!!\n",
+ coex_dm->pre_rpwm, coex_dm->cur_rpwm);
+
+ return;
+ }
+ }
+ halbtc8723b1ant_set_lps_rpwm(btcoexist, lps_val, rpwm_val);
+
+ coex_dm->pre_lps = coex_dm->cur_lps;
+ coex_dm->pre_rpwm = coex_dm->cur_rpwm;
+}
+
+static void halbtc8723b1ant_sw_mechanism(struct btc_coexist *btcoexist,
+ bool low_penalty_ra)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], SM[LpRA] = %d\n", low_penalty_ra);
+
+ halbtc8723b1ant_low_penalty_ra(btcoexist, NORMAL_EXEC, low_penalty_ra);
+}
+
+static void halbtc8723b1ant_SetAntPath(struct btc_coexist *btcoexist,
+ u8 ant_pos_type, bool init_hw_cfg,
+ bool wifi_off)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ u32 fw_ver = 0, u32tmp = 0;
+ bool pg_ext_switch = false;
+ bool use_ext_switch = false;
+ u8 h2c_parameter[2] = {0};
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_EXT_SWITCH, &pg_ext_switch);
+ /* [31:16] = fw ver, [15:0] = fw sub ver */
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+
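+ /* old firmware (< 0xc0000) or a set PG flag selects the
+ * external antenna switch; otherwise the internal switch
+ * is used.
+ */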
+ if ((fw_ver < 0xc0000) || pg_ext_switch)
+ use_ext_switch = true;
+
+ if (init_hw_cfg) {
+ /*BT select s0/s1 is controlled by WiFi */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x67, 0x20, 0x1);
+
+ /*Force GNT_BT to Normal */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x765, 0x18, 0x0);
+ } else if (wifi_off) {
+ /*Force GNT_BT to High */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x765, 0x18, 0x3);
+ /*BT select s0/s1 is controlled by BT */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x67, 0x20, 0x0);
+
+ /* 0x4c[24:23] = 00, Set Antenna control by BT_RFE_CTRL
+ * BT Vendor 0xac = 0xf002
+ */
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u32tmp &= ~BIT23;
+ u32tmp &= ~BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u32tmp);
+ }
+
+ if (use_ext_switch) {
+ if (init_hw_cfg) {
+ /* 0x4c[23] = 0, 0x4c[24] = 1
+ * Antenna control by WL/BT
+ */
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u32tmp &= ~BIT23;
+ u32tmp |= BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u32tmp);
+
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT) {
+ /* Main Ant to BT for IPS case 0x4c[23] = 1 */
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x64, 0x1,
+ 0x1);
+
+ /*tell firmware "no antenna inverse"*/
+ h2c_parameter[0] = 0;
+ h2c_parameter[1] = 1; /*ext switch type*/
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ } else {
+ /*Aux Ant to BT for IPS case 0x4c[23] = 1 */
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x64, 0x1,
+ 0x0);
+
+ /*tell firmware "antenna inverse"*/
+ h2c_parameter[0] = 1;
+ h2c_parameter[1] = 1; /*ext switch type*/
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ }
+ }
+
+ /* fix the internal switch first:
+ * S1->WiFi, S0->BT */
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_2byte(btcoexist, 0x948, 0x0);
+ else /* S0->WiFi, S1->BT */
+ btcoexist->btc_write_2byte(btcoexist, 0x948, 0x280);
+
+ /* ext switch setting */
+ switch (ant_pos_type) {
+ case BTC_ANT_PATH_WIFI:
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3,
+ 0x1);
+ else
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3,
+ 0x2);
+ break;
+ case BTC_ANT_PATH_BT:
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3,
+ 0x2);
+ else
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3,
+ 0x1);
+ break;
+ default:
+ case BTC_ANT_PATH_PTA:
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3,
+ 0x1);
+ else
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3,
+ 0x2);
+ break;
+ }
+
+ } else {
+ if (init_hw_cfg) {
+ /* 0x4c[23] = 1, 0x4c[24] = 0 Antenna control by 0x64*/
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u32tmp |= BIT23;
+ u32tmp &= ~BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u32tmp);
+
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT) {
+ /*Main Ant to WiFi for IPS case 0x4c[23] = 1*/
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x64, 0x1,
+ 0x0);
+
+ /*tell firmware "no antenna inverse"*/
+ h2c_parameter[0] = 0;
+ h2c_parameter[1] = 0; /*internal switch type*/
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ } else {
+ /*Aux Ant to BT for IPS case 0x4c[23] = 1*/
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x64, 0x1,
+ 0x1);
+
+ /*tell firmware "antenna inverse"*/
+ h2c_parameter[0] = 1;
+ h2c_parameter[1] = 0; /*internal switch type*/
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ }
+ }
+
+ /* set the external switch to a fixed position first */
+ /* Main->WiFi, Aux->BT */
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c,
+ 0x3, 0x1);
+ else /* Main->BT, Aux->WiFi */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c,
+ 0x3, 0x2);
+
+ /* internal switch setting*/
+ switch (ant_pos_type) {
+ case BTC_ANT_PATH_WIFI:
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_2byte(btcoexist, 0x948,
+ 0x0);
+ else
+ btcoexist->btc_write_2byte(btcoexist, 0x948,
+ 0x280);
+ break;
+ case BTC_ANT_PATH_BT:
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_2byte(btcoexist, 0x948,
+ 0x280);
+ else
+ btcoexist->btc_write_2byte(btcoexist, 0x948,
+ 0x0);
+ break;
+ default:
+ case BTC_ANT_PATH_PTA:
+ if (board_info->btdm_ant_pos ==
+ BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_2byte(btcoexist, 0x948,
+ 0x200);
+ else
+ btcoexist->btc_write_2byte(btcoexist, 0x948,
+ 0x80);
+ break;
+ }
+ }
+}
+
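+/* Program one of the numbered firmware PS-TDMA cases (each a fixed
+ * 5-byte H2C parameter set) or, when turning TDMA off, restore PTA /
+ * software antenna control.  Like the other *_EXEC helpers, repeated
+ * requests with unchanged parameters are skipped unless force_exec.
+ */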
+static void halbtc8723b1ant_ps_tdma(struct btc_coexist *btcoexist,
+ bool force_exec, bool turn_on, u8 type)
+{
+ bool wifi_busy = false;
+ u8 rssi_adjust_val = 0;
+
+ coex_dm->cur_ps_tdma_on = turn_on;
+ coex_dm->cur_ps_tdma = type;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+
+ if (!force_exec) {
+ if (coex_dm->cur_ps_tdma_on)
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], ******** TDMA(on, %d) *********\n",
+ coex_dm->cur_ps_tdma);
+ else
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], ******** TDMA(off, %d) ********\n",
+ coex_dm->cur_ps_tdma);
+
+ if ((coex_dm->pre_ps_tdma_on == coex_dm->cur_ps_tdma_on) &&
+ (coex_dm->pre_ps_tdma == coex_dm->cur_ps_tdma))
+ return;
+ }
+ if (turn_on) {
+ switch (type) {
+ default:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x1a,
+ 0x1a, 0x0, 0x50);
+ break;
+ case 1:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x3a,
+ 0x03, 0x10, 0x50);
+
+ rssi_adjust_val = 11;
+ break;
+ case 2:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x2b,
+ 0x03, 0x10, 0x50);
+ rssi_adjust_val = 14;
+ break;
+ case 3:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x1d,
+ 0x1d, 0x0, 0x52);
+ break;
+ case 4:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x93, 0x15,
+ 0x3, 0x14, 0x0);
+ rssi_adjust_val = 17;
+ break;
+ case 5:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x61, 0x15,
+ 0x3, 0x11, 0x10);
+ break;
+ case 6:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x61, 0x20,
+ 0x3, 0x11, 0x13);
+ break;
+ case 7:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x13, 0xc,
+ 0x5, 0x0, 0x0);
+ break;
+ case 8:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x93, 0x25,
+ 0x3, 0x10, 0x0);
+ break;
+ case 9:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x21,
+ 0x3, 0x10, 0x50);
+ rssi_adjust_val = 18;
+ break;
+ case 10:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x13, 0xa,
+ 0xa, 0x0, 0x40);
+ break;
+ case 11:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x15,
+ 0x03, 0x10, 0x50);
+ rssi_adjust_val = 20;
+ break;
+ case 12:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x0a,
+ 0x0a, 0x0, 0x50);
+ break;
+ case 13:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x15,
+ 0x15, 0x0, 0x50);
+ break;
+ case 14:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x21,
+ 0x3, 0x10, 0x52);
+ break;
+ case 15:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x13, 0xa,
+ 0x3, 0x8, 0x0);
+ break;
+ case 16:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x93, 0x15,
+ 0x3, 0x10, 0x0);
+ rssi_adjust_val = 18;
+ break;
+ case 18:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x93, 0x25,
+ 0x3, 0x10, 0x0);
+ rssi_adjust_val = 14;
+ break;
+ case 20:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x61, 0x35,
+ 0x03, 0x11, 0x10);
+ break;
+ case 21:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x61, 0x25,
+ 0x03, 0x11, 0x11);
+ break;
+ case 22:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x61, 0x25,
+ 0x03, 0x11, 0x10);
+ break;
+ case 23:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xe3, 0x25,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 24:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xe3, 0x15,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 25:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xe3, 0xa,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 26:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xe3, 0xa,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 27:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xe3, 0x25,
+ 0x3, 0x31, 0x98);
+ rssi_adjust_val = 22;
+ break;
+ case 28:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x69, 0x25,
+ 0x3, 0x31, 0x0);
+ break;
+ case 29:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xab, 0x1a,
+ 0x1a, 0x1, 0x10);
+ break;
+ case 30:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x51, 0x14,
+ 0x3, 0x10, 0x50);
+ break;
+ case 31:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xd3, 0x1a,
+ 0x1a, 0, 0x58);
+ break;
+ case 32:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x61, 0xa,
+ 0x3, 0x10, 0x0);
+ break;
+ case 33:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xa3, 0x25,
+ 0x3, 0x30, 0x90);
+ break;
+ case 34:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x53, 0x1a,
+ 0x1a, 0x0, 0x10);
+ break;
+ case 35:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x63, 0x1a,
+ 0x1a, 0x0, 0x10);
+ break;
+ case 36:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0xd3, 0x12,
+ 0x3, 0x14, 0x50);
+ break;
+ /* SoftAP only with no STA associated and BT disabled:
+ * TDMA mode for power saving.
+ * In SoftAP mode with the screen off this costs 70-80 mA on a phone.
+ */
+ case 40:
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x23, 0x18,
+ 0x00, 0x10, 0x24);
+ break;
+ }
+ } else {
+ switch (type) {
+ case 8: /*PTA Control */
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x8, 0x0,
+ 0x0, 0x0, 0x0);
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_PTA,
+ false, false);
+ break;
+ case 0:
+ default: /*Software control, Antenna at BT side */
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x0, 0x0,
+ 0x0, 0x0, 0x0);
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_BT,
+ false, false);
+ break;
+ case 9: /*Software control, Antenna at WiFi side */
+ halbtc8723b1ant_set_fw_ps_tdma(btcoexist, 0x0, 0x0,
+ 0x0, 0x0, 0x0);
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_WIFI,
+ false, false);
+ break;
+ }
+ }
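+ /* note: this unconditional reset means the per-case
+ * rssi_adjust_val values assigned above never reach the
+ * btc_set() call below
+ */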
+ rssi_adjust_val = 0;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_U1_RSSI_ADJ_VAL_FOR_1ANT_COEX_TYPE,
+ &rssi_adjust_val);
+
+ /* update pre state */
+ coex_dm->pre_ps_tdma_on = coex_dm->cur_ps_tdma_on;
+ coex_dm->pre_ps_tdma = coex_dm->cur_ps_tdma;
+}
+
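+/* Handle the simple idle WiFi/BT combinations in one place and return
+ * true if one applied, so the caller can skip the per-profile
+ * algorithms.
+ */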
+static bool halbtc8723b1ant_is_common_action(struct btc_coexist *btcoexist)
+{
+ bool common = false, wifi_connected = false;
+ bool wifi_busy = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+
+ if (!wifi_connected &&
+ BT_8723B_1ANT_BT_STATUS_NON_CONNECTED_IDLE == coex_dm->bt_status) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi non connected-idle + BT non connected-idle!!\n");
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+ common = true;
+ } else if (wifi_connected &&
+ (BT_8723B_1ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi connected + BT non connected-idle!!\n");
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+ common = true;
+ } else if (!wifi_connected &&
+ (BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi non connected-idle + BT connected-idle!!\n");
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+ common = true;
+ } else if (wifi_connected &&
+ (BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi connected + BT connected-idle!!\n");
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+ common = true;
+ } else if (!wifi_connected &&
+ (BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE !=
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ ("[BTCoex], Wifi non connected-idle + BT Busy!!\n"));
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+ common = true;
+ } else {
+ if (wifi_busy)
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Connected-Busy + BT Busy!!\n");
+ else
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Connected-Idle + BT Busy!!\n");
+
+ common = false;
+ }
+
+ return common;
+}
+
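+/* Adaptively tune the WiFi TDMA slot for BT ACL traffic.  The static
+ * up/dn/m/n counters implement a damped walk between TDMA cases
+ * 1 <-> 2 <-> 9 <-> 11: no BT retries gradually widen the WiFi slot,
+ * retries shrink it, with n growing to slow down repeated decreases.
+ */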
+static void btc8723b1ant_tdma_dur_adj_for_acl(struct btc_coexist *btcoexist,
+ u8 wifi_status)
+{
+ static s32 up, dn, m, n, wait_count;
+ /* 0: no change, +1: increase WiFi duration,
+ * -1: decrease WiFi duration
+ */
+ s32 result;
+ u8 retry_count = 0, bt_info_ext;
+ bool wifi_busy = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], TdmaDurationAdjustForAcl()\n");
+
+ if (BT_8723B_1ANT_WIFI_STATUS_CONNECTED_BUSY == wifi_status)
+ wifi_busy = true;
+ else
+ wifi_busy = false;
+
+ if ((BT_8723B_1ANT_WIFI_STATUS_NON_CONNECTED_ASSO_AUTH_SCAN ==
+ wifi_status) ||
+ (BT_8723B_1ANT_WIFI_STATUS_CONNECTED_SCAN == wifi_status) ||
+ (BT_8723B_1ANT_WIFI_STATUS_CONNECTED_SPECIAL_PKT == wifi_status)) {
+ if (coex_dm->cur_ps_tdma != 1 && coex_dm->cur_ps_tdma != 2 &&
+ coex_dm->cur_ps_tdma != 3 && coex_dm->cur_ps_tdma != 9) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+
+ up = 0;
+ dn = 0;
+ m = 1;
+ n = 3;
+ result = 0;
+ wait_count = 0;
+ }
+ return;
+ }
+
+ if (!coex_dm->auto_tdma_adjust) {
+ coex_dm->auto_tdma_adjust = true;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], first run TdmaDurationAdjust()!!\n");
+
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 2);
+ coex_dm->tdma_adj_type = 2;
+
+ up = 0;
+ dn = 0;
+ m = 1;
+ n = 3;
+ result = 0;
+ wait_count = 0;
+ } else {
+ /* acquire the BT TRx retry count from BT_Info byte2 */
+ retry_count = coex_sta->bt_retry_cnt;
+ bt_info_ext = coex_sta->bt_info_ext;
+ result = 0;
+ wait_count++;
+ /* no retry in the last 2-second duration */
+ if (retry_count == 0) {
+ up++;
+ dn--;
+
+ if (dn <= 0)
+ dn = 0;
+
+ if (up >= n) {
+ wait_count = 0;
+ n = 3;
+ up = 0;
+ dn = 0;
+ result = 1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Increase wifi duration!!\n");
+ }
+ } else if (retry_count <= 3) {
+ up--;
+ dn++;
+
+ if (up <= 0)
+ up = 0;
+
+ if (dn == 2) {
+ if (wait_count <= 2)
+ m++;
+ else
+ m = 1;
+
+ if (m >= 20)
+ m = 20;
+
+ n = 3 * m;
+ up = 0;
+ dn = 0;
+ wait_count = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Decrease wifi duration for retryCounter<3!!\n");
+ }
+ } else {
+ if (wait_count == 1)
+ m++;
+ else
+ m = 1;
+
+ if (m >= 20)
+ m = 20;
+
+ n = 3 * m;
+ up = 0;
+ dn = 0;
+ wait_count = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Decrease wifi duration for retryCounter>3!!\n");
+ }
+
+ if (result == -1) {
+ if ((BT_INFO_8723B_1ANT_A2DP_BASIC_RATE(bt_info_ext)) &&
+ ((coex_dm->cur_ps_tdma == 1) ||
+ (coex_dm->cur_ps_tdma == 2))) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ }
+ } else if (result == 1) {
+ if ((BT_INFO_8723B_1ANT_A2DP_BASIC_RATE(bt_info_ext)) &&
+ ((coex_dm->cur_ps_tdma == 1) ||
+ (coex_dm->cur_ps_tdma == 2))) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ }
+ } else { /*no change */
+ /*if busy / idle change */
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex],********* TDMA(on, %d) ********\n",
+ coex_dm->cur_ps_tdma);
+ }
+
+ if (coex_dm->cur_ps_tdma != 1 && coex_dm->cur_ps_tdma != 2 &&
+ coex_dm->cur_ps_tdma != 9 && coex_dm->cur_ps_tdma != 11) {
+ /* recover to previous adjust type */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true,
+ coex_dm->tdma_adj_type);
+ }
+ }
+}
+
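+/* Turn psTdma off before any LPS state transition; when the power
+ * save state is not changing there is nothing to do.
+ */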
+static void btc8723b1ant_pstdmachkpwrsave(struct btc_coexist *btcoexist,
+ bool new_ps_state)
+{
+ u8 lps_mode = 0x0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_LPS_MODE, &lps_mode);
+
+ if (lps_mode) { /* already under LPS state */
+ if (new_ps_state) {
+ /* keep state under LPS, do nothing. */
+ } else {
+ /* will leave LPS state, turn off psTdma first */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 0);
+ }
+ } else { /* NO PS state */
+ if (new_ps_state) {
+ /* will enter LPS state, turn off psTdma first */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 0);
+ } else {
+ /* keep state under NO PS state, do nothing. */
+ }
+ }
+}
+
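+/* Switch between native power save, forced LPS (with the given lps /
+ * rpwm values) and LPS off, keeping the 32k low-power setting
+ * consistent with the selected mode.
+ */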
+static void halbtc8723b1ant_power_save_state(struct btc_coexist *btcoexist,
+ u8 ps_type, u8 lps_val,
+ u8 rpwm_val)
+{
+ bool low_pwr_disable = false;
+
+ switch (ps_type) {
+ case BTC_PS_WIFI_NATIVE:
+ /* recover to original 32k low power setting */
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_NORMAL_LPS, NULL);
+ break;
+ case BTC_PS_LPS_ON:
+ btc8723b1ant_pstdmachkpwrsave(btcoexist, true);
+ halbtc8723b1ant_LpsRpwm(btcoexist, NORMAL_EXEC, lps_val,
+ rpwm_val);
+ /* when coex forces LPS entry, do not enter 32k low power. */
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+ /* power save must be executed before psTdma. */
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_ENTER_LPS, NULL);
+ break;
+ case BTC_PS_LPS_OFF:
+ btc8723b1ant_pstdmachkpwrsave(btcoexist, false);
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_LEAVE_LPS, NULL);
+ break;
+ default:
+ break;
+ }
+}
+
+/***************************************************
+ *
+ * Software Coex Mechanism start
+ *
+ ***************************************************/
+/* SCO only or SCO+PAN(HS) */
+static void halbtc8723b1ant_action_sco(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, true);
+}
+
+static void halbtc8723b1ant_action_hid(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, true);
+}
+
+/*A2DP only / PAN(EDR) only/ A2DP+PAN(HS) */
+static void halbtc8723b1ant_action_a2dp(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+}
+
+static void halbtc8723b1ant_action_a2dp_pan_hs(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+}
+
+static void halbtc8723b1ant_action_pan_edr(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+}
+
+/* PAN(HS) only */
+static void halbtc8723b1ant_action_pan_hs(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+}
+
+/*PAN(EDR)+A2DP */
+static void halbtc8723b1ant_action_pan_edr_a2dp(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+}
+
+static void halbtc8723b1ant_action_pan_edr_hid(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, true);
+}
+
+/* HID+A2DP+PAN(EDR) */
+static void btc8723b1ant_action_hid_a2dp_pan_edr(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, true);
+}
+
+static void halbtc8723b1ant_action_hid_a2dp(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_sw_mechanism(btcoexist, true);
+}
+
+/*****************************************************
+ *
+ * Non-Software Coex Mechanism start
+ *
+ *****************************************************/
+static void halbtc8723b1ant_action_wifi_multiport(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 2);
+}
+
+static void halbtc8723b1ant_action_hs(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 2);
+}
+
+static void halbtc8723b1ant_action_bt_inquiry(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool wifi_connected = false, ap_enable = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_AP_MODE_ENABLE,
+ &ap_enable);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+
+ if (!wifi_connected) {
+ halbtc8723b1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 2);
+ } else if (bt_link_info->sco_exist || bt_link_info->hid_only) {
+ /* SCO/HID-only busy */
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 32);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ } else {
+ if (ap_enable)
+ halbtc8723b1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ else
+ halbtc8723b1ant_power_save_state(btcoexist,
+ BTC_PS_LPS_ON,
+ 0x50, 0x4);
+
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 30);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ }
+}
+
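+/* SCO gets TDMA case 5 with coex table 2, plain HID case 6 with
+ * table 5.  Note that the wifi_status argument is currently unused.
+ */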
+static void btc8723b1ant_act_bt_sco_hid_only_busy(struct btc_coexist *btcoexist,
+ u8 wifi_status)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool wifi_connected = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+
+ /* tdma and coex table */
+
+ if (bt_link_info->sco_exist) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 2);
+ } else { /* HID */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 6);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 5);
+ }
+}
+
+static void halbtc8723b1ant_action_wifi_connected_bt_acl_busy(
+ struct btc_coexist *btcoexist,
+ u8 wifi_status)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ u8 bt_rssi_state;
+
+ bt_rssi_state = halbtc8723b1ant_bt_rssi_state(2, 28, 0);
+
+ if (bt_link_info->hid_only) { /*HID */
+ btc8723b1ant_act_bt_sco_hid_only_busy(btcoexist, wifi_status);
+ coex_dm->auto_tdma_adjust = false;
+ return;
+ } else if (bt_link_info->a2dp_only) { /*A2DP */
+ if (BT_8723B_1ANT_WIFI_STATUS_CONNECTED_IDLE == wifi_status) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 2);
+ coex_dm->auto_tdma_adjust = false;
+ } else if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8723b1ant_tdma_dur_adj_for_acl(btcoexist,
+ wifi_status);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ } else { /*for low BT RSSI */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ coex_dm->auto_tdma_adjust = false;
+ }
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) { /* HID + A2DP */
+ /* the same TDMA case is used for high and low BT RSSI */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 14);
+ coex_dm->auto_tdma_adjust = false;
+
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 6);
+ /*PAN(OPP,FTP), HID+PAN(OPP,FTP) */
+ } else if (bt_link_info->pan_only ||
+ (bt_link_info->hid_exist && bt_link_info->pan_exist)) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 6);
+ coex_dm->auto_tdma_adjust = false;
+ /*A2DP+PAN(OPP,FTP), HID+A2DP+PAN(OPP,FTP)*/
+ } else if ((bt_link_info->a2dp_exist && bt_link_info->pan_exist) ||
+ (bt_link_info->hid_exist && bt_link_info->a2dp_exist &&
+ bt_link_info->pan_exist)) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 13);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ coex_dm->auto_tdma_adjust = false;
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 11);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ coex_dm->auto_tdma_adjust = false;
+ }
+}
+
+static void btc8723b1ant_action_wifi_not_conn(struct btc_coexist *btcoexist)
+{
+ /* power save state */
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ /* tdma and coex table */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 0);
+}
+
+static void btc8723b1ant_action_wifi_not_conn_scan(struct btc_coexist *btcoex)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoex->bt_link_info;
+
+ halbtc8723b1ant_power_save_state(btcoex, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ /* tdma and coex table */
+ if (BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ if (bt_link_info->a2dp_exist && bt_link_info->pan_exist) {
+ halbtc8723b1ant_ps_tdma(btcoex, NORMAL_EXEC,
+ true, 22);
+ halbtc8723b1ant_coex_table_with_type(btcoex,
+ NORMAL_EXEC, 1);
+ } else if (bt_link_info->pan_only) {
+ halbtc8723b1ant_ps_tdma(btcoex, NORMAL_EXEC,
+ true, 20);
+ halbtc8723b1ant_coex_table_with_type(btcoex,
+ NORMAL_EXEC, 2);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoex, NORMAL_EXEC,
+ true, 20);
+ halbtc8723b1ant_coex_table_with_type(btcoex,
+ NORMAL_EXEC, 1);
+ }
+ } else if ((BT_8723B_1ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)){
+ btc8723b1ant_act_bt_sco_hid_only_busy(btcoex,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_SCAN);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoex, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoex, NORMAL_EXEC, 2);
+ }
+}
+
+static void btc8723b1ant_act_wifi_not_conn_asso_auth(struct btc_coexist *btcoex)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoex->bt_link_info;
+
+ halbtc8723b1ant_power_save_state(btcoex, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ if ((BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE == coex_dm->bt_status) ||
+ (bt_link_info->sco_exist) || (bt_link_info->hid_only) ||
+ (bt_link_info->a2dp_only) || (bt_link_info->pan_only)) {
+ halbtc8723b1ant_ps_tdma(btcoex, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoex, NORMAL_EXEC, 7);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoex, NORMAL_EXEC, true, 20);
+ halbtc8723b1ant_coex_table_with_type(btcoex, NORMAL_EXEC, 1);
+ }
+}
+
+static void btc8723b1ant_action_wifi_conn_scan(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ /* tdma and coex table */
+ if (BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ if (bt_link_info->a2dp_exist && bt_link_info->pan_exist) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 22);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ } else if (bt_link_info->pan_only) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 20);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 2);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 20);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ }
+ } else if ((BT_8723B_1ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)) {
+ btc8723b1ant_act_bt_sco_hid_only_busy(btcoexist,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_SCAN);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 2);
+ }
+}
+
+static void halbtc8723b1ant_action_wifi_connected_special_packet(
+ struct btc_coexist *btcoexist)
+{
+ bool hs_connecting = false;
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_CONNECTING, &hs_connecting);
+
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ /* tdma and coex table */
+ if ((BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE == coex_dm->bt_status) ||
+ (bt_link_info->sco_exist) || (bt_link_info->hid_only) ||
+ (bt_link_info->a2dp_only) || (bt_link_info->pan_only)) {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 7);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 20);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ }
+}
+
+static void halbtc8723b1ant_action_wifi_connected(struct btc_coexist *btcoexist)
+{
+ bool wifi_busy = false;
+ bool scan = false, link = false, roam = false;
+ bool under_4way = false, ap_enable = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], CoexForWifiConnect()===>\n");
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_4_WAY_PROGRESS,
+ &under_4way);
+ if (under_4way) {
+ halbtc8723b1ant_action_wifi_connected_special_packet(btcoexist);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], CoexForWifiConnect(), return for wifi is under 4way<===\n");
+ return;
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+
+ if (scan || link || roam) {
+ if (scan)
+ btc8723b1ant_action_wifi_conn_scan(btcoexist);
+ else
+ halbtc8723b1ant_action_wifi_connected_special_packet(
+ btcoexist);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], CoexForWifiConnect(), return for wifi is under scan<===\n");
+ return;
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_AP_MODE_ENABLE,
+ &ap_enable);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+ /* power save state */
+ if (!ap_enable &&
+ BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status &&
+ !btcoexist->bt_link_info.hid_only) {
+ if (!wifi_busy && btcoexist->bt_link_info.a2dp_only)
+ halbtc8723b1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ else
+ halbtc8723b1ant_power_save_state(btcoexist,
+ BTC_PS_LPS_ON,
+ 0x50, 0x4);
+ } else {
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ }
+ /* tdma and coex table */
+ if (!wifi_busy) {
+ if (BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ halbtc8723b1ant_action_wifi_connected_bt_acl_busy(btcoexist,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_IDLE);
+ } else if ((BT_8723B_1ANT_BT_STATUS_SCO_BUSY ==
+ coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)) {
+ btc8723b1ant_act_bt_sco_hid_only_busy(btcoexist,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_IDLE);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 2);
+ }
+ } else {
+ if (BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ halbtc8723b1ant_action_wifi_connected_bt_acl_busy(btcoexist,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_BUSY);
+ } else if ((BT_8723B_1ANT_BT_STATUS_SCO_BUSY ==
+ coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)) {
+ btc8723b1ant_act_bt_sco_hid_only_busy(btcoexist,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_BUSY);
+ } else {
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 2);
+ }
+ }
+}
+
+static void btc8723b1ant_run_sw_coex_mech(struct btc_coexist *btcoexist)
+{
+ u8 algorithm = 0;
+
+ algorithm = halbtc8723b1ant_action_algorithm(btcoexist);
+ coex_dm->cur_algorithm = algorithm;
+
+ if (!halbtc8723b1ant_is_common_action(btcoexist)) {
+ switch (coex_dm->cur_algorithm) {
+ case BT_8723B_1ANT_COEX_ALGO_SCO:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = SCO.\n");
+ halbtc8723b1ant_action_sco(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HID.\n");
+ halbtc8723b1ant_action_hid(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = A2DP.\n");
+ halbtc8723b1ant_action_a2dp(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_A2DP_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = A2DP+PAN(HS).\n");
+ halbtc8723b1ant_action_a2dp_pan_hs(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = PAN(EDR).\n");
+ halbtc8723b1ant_action_pan_edr(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HS mode.\n");
+ halbtc8723b1ant_action_pan_hs(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_PANEDR_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = PAN+A2DP.\n");
+ halbtc8723b1ant_action_pan_edr_a2dp(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_PANEDR_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = PAN(EDR)+HID.\n");
+ halbtc8723b1ant_action_pan_edr_hid(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_HID_A2DP_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HID+A2DP+PAN.\n");
+ btc8723b1ant_action_hid_a2dp_pan_edr(btcoexist);
+ break;
+ case BT_8723B_1ANT_COEX_ALGO_HID_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HID+A2DP.\n");
+ halbtc8723b1ant_action_hid_a2dp(btcoexist);
+ break;
+ default:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = coexist All Off!!\n");
+ break;
+ }
+ coex_dm->pre_algorithm = coex_dm->cur_algorithm;
+ }
+}
+
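+/* Main coexistence entry point: bail out early under manual control,
+ * stopped DM or IPS, then apply tx power / rx aggregation limits and
+ * dispatch to the inquiry, HS, non-connected or connected actions.
+ */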
+static void halbtc8723b1ant_run_coexist_mechanism(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool wifi_connected = false, bt_hs_on = false;
+ bool increase_scan_dev_num = false;
+ bool bt_ctrl_agg_buf_size = false;
+ u8 agg_buf_size = 5;
+ u32 wifi_link_status = 0;
+ u32 num_of_wifi_link = 0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism()===>\n");
+
+ if (btcoexist->manual_control) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism(), return for Manual CTRL <===\n");
+ return;
+ }
+
+ if (btcoexist->stop_coex_dm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism(), return for Stop Coex DM <===\n");
+ return;
+ }
+
+ if (coex_sta->under_ips) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], wifi is under IPS !!!\n");
+ return;
+ }
+
+ if ((BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY == coex_dm->bt_status)) {
+ increase_scan_dev_num = true;
+ }
+
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_INC_SCAN_DEV_NUM,
+ &increase_scan_dev_num);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_LINK_STATUS,
+ &wifi_link_status);
+ num_of_wifi_link = wifi_link_status >> 16;
+ if (num_of_wifi_link >= 2) {
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+ halbtc8723b1ant_limited_rx(btcoexist, NORMAL_EXEC, false,
+ bt_ctrl_agg_buf_size,
+ agg_buf_size);
+ halbtc8723b1ant_action_wifi_multiport(btcoexist);
+ return;
+ }
+
+ if (!bt_link_info->sco_exist && !bt_link_info->hid_exist) {
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+ } else {
+ if (wifi_connected) {
+ /* keep the RSSI state tracking up to date; both
+ * the high and low RSSI branches used identical
+ * tx limits
+ */
+ halbtc8723b1ant_wifi_rssi_state(btcoexist,
+ 1, 2, 30, 0);
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC,
+ 1, 1, 1, 1);
+ } else {
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC,
+ 0, 0, 0, 0);
+ }
+ }
+
+ if (bt_link_info->sco_exist) {
+ bt_ctrl_agg_buf_size = true;
+ agg_buf_size = 0x3;
+ } else if (bt_link_info->hid_exist) {
+ bt_ctrl_agg_buf_size = true;
+ agg_buf_size = 0x5;
+ } else if (bt_link_info->a2dp_exist || bt_link_info->pan_exist) {
+ bt_ctrl_agg_buf_size = true;
+ agg_buf_size = 0x8;
+ }
+ halbtc8723b1ant_limited_rx(btcoexist, NORMAL_EXEC, false,
+ bt_ctrl_agg_buf_size, agg_buf_size);
+
+ btc8723b1ant_run_sw_coex_mech(btcoexist);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8723b1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8723b1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (!wifi_connected) {
+ bool scan = false, link = false, roam = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], wifi is non connected-idle !!!\n");
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+
+ if (scan || link || roam) {
+ if (scan)
+ btc8723b1ant_action_wifi_not_conn_scan(
+ btcoexist);
+ else
+ btc8723b1ant_act_wifi_not_conn_asso_auth(
+ btcoexist);
+ } else {
+ btc8723b1ant_action_wifi_not_conn(btcoexist);
+ }
+ } else { /* wifi LPS/Busy */
+ halbtc8723b1ant_action_wifi_connected(btcoexist);
+ }
+}
+
+static void halbtc8723b1ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ /* sw all off */
+ halbtc8723b1ant_sw_mechanism(btcoexist, false);
+
+ halbtc8723b1ant_ps_tdma(btcoexist, FORCE_EXEC, false, 8);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, FORCE_EXEC, 0);
+}
+
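+/* One-time HW setup: optionally back up the rate/retry registers
+ * (0x430/0x434/0x42a/0x456), program the GNT_BT standby behaviour in
+ * RF 0x1/0x2, wait up to ~1s for a pending BT calibration and then
+ * hand the antenna to the PTA with the default coex table.
+ */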
+static void halbtc8723b1ant_init_hw_config(struct btc_coexist *btcoexist,
+ bool backup)
+{
+ u32 u32tmp = 0;
+ u8 u8tmp = 0;
+ u32 cnt_bt_cal_chk = 0;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], 1Ant Init HW Config!!\n");
+
+ if (backup) {/* backup rf 0x1e value */
+ coex_dm->backup_arfr_cnt1 =
+ btcoexist->btc_read_4byte(btcoexist, 0x430);
+ coex_dm->backup_arfr_cnt2 =
+ btcoexist->btc_read_4byte(btcoexist, 0x434);
+ coex_dm->backup_retry_limit =
+ btcoexist->btc_read_2byte(btcoexist, 0x42a);
+ coex_dm->backup_ampdu_max_time =
+ btcoexist->btc_read_1byte(btcoexist, 0x456);
+ }
+
+ /* WiFi goto standby while GNT_BT 0-->1 */
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x780);
+ /* BT goto standby while GNT_BT 1-->0 */
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x2, 0xfffff, 0x500);
+
+ btcoexist->btc_write_1byte(btcoexist, 0x974, 0xff);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x944, 0x3, 0x3);
+ btcoexist->btc_write_1byte(btcoexist, 0x930, 0x77);
+
+ /* BT calibration check */
+ while (cnt_bt_cal_chk <= 20) {
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, 0x49d);
+ cnt_bt_cal_chk++;
+ if (u32tmp & BIT0) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ########### BT calibration(cnt=%d) ###########\n",
+ cnt_bt_cal_chk);
+ mdelay(50);
+ } else {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ********** BT NOT calibration (cnt=%d)**********\n",
+ cnt_bt_cal_chk);
+ break;
+ }
+ }
+
+ /* 0x790[5:0] = 0x5 */
+ u8tmp = btcoexist->btc_read_1byte(btcoexist, 0x790);
+ u8tmp &= 0xc0;
+ u8tmp |= 0x5;
+ btcoexist->btc_write_1byte(btcoexist, 0x790, u8tmp);
+
+ /* Enable counter statistics */
+ /*0x76e[3] =1, WLAN_Act control by PTA */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+ btcoexist->btc_write_1byte(btcoexist, 0x778, 0x1);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x40, 0x20, 0x1);
+
+ /*Antenna config */
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_PTA, true, false);
+ /* PTA parameter */
+ halbtc8723b1ant_coex_table_with_type(btcoexist, FORCE_EXEC, 0);
+}
+
+static void halbtc8723b1ant_wifi_off_hw_cfg(struct btc_coexist *btcoexist)
+{
+ /* set wlan_act to low */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0x4);
+}
+
+/**************************************************************
+ * work-around functions start with wa_halbtc8723b1ant_
+ **************************************************************/
+/**************************************************************
+ * extern functions start with ex_halbtc8723b1ant_
+ **************************************************************/
+
+void ex_halbtc8723b1ant_init_hwconfig(struct btc_coexist *btcoexist)
+{
+ halbtc8723b1ant_init_hw_config(btcoexist, true);
+}
+
+void ex_halbtc8723b1ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Coex Mechanism Init!!\n");
+
+ btcoexist->stop_coex_dm = false;
+
+ halbtc8723b1ant_init_coex_dm(btcoexist);
+
+ halbtc8723b1ant_query_bt_info(btcoexist);
+}
+
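+/* Dump the current coexistence state (versions, WiFi/BT link status,
+ * selected HW registers and per-source BT_Info events) to the debug
+ * log.
+ */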
+void ex_halbtc8723b1ant_display_coex_info(struct btc_coexist *btcoexist)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ struct rtl_priv *rtlpriv = btcoexist->adapter;
+ u8 u8tmp[4], i, bt_info_ext, pstdmacase = 0;
+ u16 u16tmp[4];
+ u32 u32tmp[4];
+ bool roam = false, scan = false;
+ bool link = false, wifi_under_5g = false;
+ bool bt_hs_on = false, wifi_busy = false;
+ s32 wifi_rssi = 0, bt_hs_rssi = 0;
+ u32 wifi_bw, wifi_traffic_dir, fa_ofdm, fa_cck, wifi_link_status;
+ u8 wifi_dot11_chnl, wifi_hs_chnl;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[BT Coexist info]============");
+
+ if (btcoexist->manual_control) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[Under Manual Control]==========");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ==========================================");
+ }
+ if (btcoexist->stop_coex_dm) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[Coex is STOPPED]============");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ==========================================");
+ }
+
+ if (!board_info->bt_exist) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n BT does not exist !!!");
+ return;
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d",
+ "Ant PG Num/ Ant Mech/ Ant Pos:",
+ board_info->pg_ant_num, board_info->btdm_ant_num,
+ board_info->btdm_ant_pos);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s / %d",
+ "BT stack/ hci ext ver",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER, &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d_%x/ 0x%x/ 0x%x(%d)",
+ "CoexVer/ FwVer/ PatchVer",
+ glcoex_ver_date_8723b_1ant, glcoex_ver_8723b_1ant,
+ fw_ver, bt_patch_ver, bt_patch_ver);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_DOT11_CHNL,
+ &wifi_dot11_chnl);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_HS_CHNL, &wifi_hs_chnl);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d / %d(%d)",
+ "Dot11 channel / HsChnl(HsMode)",
+ wifi_dot11_chnl, wifi_hs_chnl, bt_hs_on);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %02x %02x %02x ",
+ "H2C Wifi inform bt chnl Info",
+ coex_dm->wifi_chnl_info[0], coex_dm->wifi_chnl_info[1],
+ coex_dm->wifi_chnl_info[2]);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "Wifi rssi/ HS rssi", wifi_rssi, bt_hs_rssi);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d ",
+ "Wifi link/ roam/ scan", link, roam, scan);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G,
+ &wifi_under_5g);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_TRAFFIC_DIRECTION,
+ &wifi_traffic_dir);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s / %s/ %s ",
+ "Wifi status", (wifi_under_5g ? "5G" : "2.4G"),
+ ((BTC_WIFI_BW_LEGACY == wifi_bw) ? "Legacy" :
+ (((BTC_WIFI_BW_HT40 == wifi_bw) ? "HT40" : "HT20"))),
+ ((!wifi_busy) ? "idle" :
+ ((BTC_WIFI_TRAFFIC_TX == wifi_traffic_dir) ?
+ "uplink" : "downlink")));
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_LINK_STATUS,
+ &wifi_link_status);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d/ %d/ %d",
+ "sta/vwifi/hs/p2pGo/p2pGc",
+ ((wifi_link_status & WIFI_STA_CONNECTED) ? 1 : 0),
+ ((wifi_link_status & WIFI_AP_CONNECTED) ? 1 : 0),
+ ((wifi_link_status & WIFI_HS_CONNECTED) ? 1 : 0),
+ ((wifi_link_status & WIFI_P2P_GO_CONNECTED) ? 1 : 0),
+ ((wifi_link_status & WIFI_P2P_GC_CONNECTED) ? 1 : 0));
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = [%s/ %d/ %d] ",
+ "BT [status/ rssi/ retryCnt]",
+ ((btcoexist->bt_info.bt_disabled) ? ("disabled") :
+ ((coex_sta->c2h_bt_inquiry_page) ? ("inquiry/page scan") :
+ ((BT_8723B_1ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status) ?
+ "non-connected idle" :
+ ((BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status) ?
+ "connected-idle" : "busy")))),
+ coex_sta->bt_rssi, coex_sta->bt_retry_cnt);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d / %d / %d / %d",
+ "SCO/HID/PAN/A2DP", bt_link_info->sco_exist,
+ bt_link_info->hid_exist, bt_link_info->pan_exist,
+ bt_link_info->a2dp_exist);
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_BT_LINK_INFO);
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s",
+ "BT Info A2DP rate",
+ (bt_info_ext & BIT0) ? "Basic rate" : "EDR rate");
+
+ for (i = 0; i < BT_INFO_SRC_8723B_1ANT_MAX; i++) {
+ if (coex_sta->bt_info_c2h_cnt[i]) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x %02x %02x(%d)",
+ GLBtInfoSrc8723b1Ant[i],
+ coex_sta->bt_info_c2h[i][0],
+ coex_sta->bt_info_c2h[i][1],
+ coex_sta->bt_info_c2h[i][2],
+ coex_sta->bt_info_c2h[i][3],
+ coex_sta->bt_info_c2h[i][4],
+ coex_sta->bt_info_c2h[i][5],
+ coex_sta->bt_info_c2h[i][6],
+ coex_sta->bt_info_c2h_cnt[i]);
+ }
+ }
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s/%s, (0x%x/0x%x)",
+ "PS state, IPS/LPS, (lps/rpwm)",
+ ((coex_sta->under_ips ? "IPS ON" : "IPS OFF")),
+ ((coex_sta->under_lps ? "LPS ON" : "LPS OFF")),
+ btcoexist->bt_info.lps_val,
+ btcoexist->bt_info.rpwm_val);
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_FW_PWR_MODE_CMD);
+
+ if (!btcoexist->manual_control) {
+ /* Sw mechanism */
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Sw mechanism]============");
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/",
+ "SM[LowPenaltyRA]", coex_dm->cur_low_penalty_ra);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s/ %s/ %d ",
+ "DelBA/ BtCtrlAgg/ AggSize",
+ (btcoexist->bt_info.reject_agg_pkt ? "Yes" : "No"),
+ (btcoexist->bt_info.bt_ctrl_buf_size ? "Yes" : "No"),
+ btcoexist->bt_info.agg_buf_size);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x ",
+ "Rate Mask", btcoexist->bt_info.ra_mask);
+
+ /* Fw mechanism */
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Fw mechanism]============");
+
+ pstdmacase = coex_dm->cur_ps_tdma;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x case-%d (auto:%d)",
+ "PS TDMA", coex_dm->ps_tdma_para[0],
+ coex_dm->ps_tdma_para[1], coex_dm->ps_tdma_para[2],
+ coex_dm->ps_tdma_para[3], coex_dm->ps_tdma_para[4],
+ pstdmacase, coex_dm->auto_tdma_adjust);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d ",
+ "IgnWlanAct", coex_dm->cur_ignore_wlan_act);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x ",
+ "Latest error condition(should be 0)",
+ coex_dm->error_condition);
+ }
+
+ /* Hw setting */
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Hw setting]============");
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/0x%x/0x%x/0x%x",
+ "backup ARFR1/ARFR2/RL/AMaxTime", coex_dm->backup_arfr_cnt1,
+ coex_dm->backup_arfr_cnt2, coex_dm->backup_retry_limit,
+ coex_dm->backup_ampdu_max_time);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x430);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x434);
+ u16tmp[0] = btcoexist->btc_read_2byte(btcoexist, 0x42a);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x456);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/0x%x/0x%x/0x%x",
+ "0x430/0x434/0x42a/0x456",
+ u32tmp[0], u32tmp[1], u16tmp[0], u8tmp[0]);
+
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x778);
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x6cc);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x880);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x778/0x6cc/0x880[29:25]", u8tmp[0], u32tmp[0],
+ (u32tmp[1] & 0x3e000000) >> 25);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x948);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x67);
+ u8tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x765);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x948/ 0x67[5] / 0x765",
+ u32tmp[0], ((u8tmp[0] & 0x20) >> 5), u8tmp[1]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x92c);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x930);
+ u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x944);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x92c[1:0]/ 0x930[7:0]/0x944[1:0]",
+ u32tmp[0] & 0x3, u32tmp[1] & 0xff, u32tmp[2] & 0x3);
+
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x39);
+ u8tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x40);
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u8tmp[2] = btcoexist->btc_read_1byte(btcoexist, 0x64);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x/ 0x%x/ 0x%x",
+ "0x38[11]/0x40/0x4c[24:23]/0x64[0]",
+ ((u8tmp[0] & 0x8)>>3), u8tmp[1],
+ ((u32tmp[0] & 0x01800000) >> 23), u8tmp[2] & 0x1);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x550);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x522);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0x550(bcn ctrl)/0x522", u32tmp[0], u8tmp[0]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc50);
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x49c);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0xc50(dig)/0x49c(null-drop)", u32tmp[0] & 0xff, u8tmp[0]);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xda0);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0xda4);
+ u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0xda8);
+ u32tmp[3] = btcoexist->btc_read_4byte(btcoexist, 0xcf0);
+
+ u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0xa5b);
+ u8tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0xa5c);
+
+ fa_ofdm = ((u32tmp[0] & 0xffff0000) >> 16) +
+ ((u32tmp[1] & 0xffff0000) >> 16) +
+ (u32tmp[1] & 0xffff) +
+ (u32tmp[2] & 0xffff) +
+ ((u32tmp[3] & 0xffff0000) >> 16) +
+ (u32tmp[3] & 0xffff);
+ fa_cck = (u8tmp[0] << 8) + u8tmp[1];
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "OFDM-CCA/OFDM-FA/CCK-FA",
+ u32tmp[0] & 0xffff, fa_ofdm, fa_cck);
+
+ u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x6c0);
+ u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x6c4);
+ u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x6c8);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x6c0/0x6c4/0x6c8(coexTable)",
+ u32tmp[0], u32tmp[1], u32tmp[2]);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "0x770(high-pri rx/tx)", coex_sta->high_priority_rx,
+ coex_sta->high_priority_tx);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "0x774(low-pri rx/tx)", coex_sta->low_priority_rx,
+ coex_sta->low_priority_tx);
+#if (BT_AUTO_REPORT_ONLY_8723B_1ANT == 1)
+ halbtc8723b1ant_monitor_bt_ctr(btcoexist);
+#endif
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_COEX_STATISTICS);
+}
+
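+/* On IPS entry hand the antenna to BT, stop TDMA and apply the
+ * wifi-off register setting; on IPS leave redo the HW init (without
+ * the register backup) and re-query BT.
+ */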
+void ex_halbtc8723b1ant_ips_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm)
+ return;
+
+ if (BTC_IPS_ENTER == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS ENTER notify\n");
+ coex_sta->under_ips = true;
+
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_BT,
+ false, true);
+ /* set PTA control */
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 0);
+ halbtc8723b1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 0);
+ halbtc8723b1ant_wifi_off_hw_cfg(btcoexist);
+ } else if (BTC_IPS_LEAVE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS LEAVE notify\n");
+ coex_sta->under_ips = false;
+
+ halbtc8723b1ant_init_hw_config(btcoexist, false);
+ halbtc8723b1ant_init_coex_dm(btcoexist);
+ halbtc8723b1ant_query_bt_info(btcoexist);
+ }
+}
+
+void ex_halbtc8723b1ant_lps_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm)
+ return;
+
+ if (BTC_LPS_ENABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS ENABLE notify\n");
+ coex_sta->under_lps = true;
+ } else if (BTC_LPS_DISABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS DISABLE notify\n");
+ coex_sta->under_lps = false;
+ }
+}
+
+void ex_halbtc8723b1ant_scan_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ bool wifi_connected = false, bt_hs_on = false;
+ u32 wifi_link_status = 0;
+ u32 num_of_wifi_link = 0;
+ bool bt_ctrl_agg_buf_size = false;
+ u8 agg_buf_size = 5;
+
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+
+ halbtc8723b1ant_query_bt_info(btcoexist);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_LINK_STATUS,
+ &wifi_link_status);
+ num_of_wifi_link = wifi_link_status >> 16;
+ if (num_of_wifi_link >= 2) {
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+ halbtc8723b1ant_limited_rx(btcoexist, NORMAL_EXEC, false,
+ bt_ctrl_agg_buf_size, agg_buf_size);
+ halbtc8723b1ant_action_wifi_multiport(btcoexist);
+ return;
+ }
+
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8723b1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8723b1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (BTC_SCAN_START == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN START notify\n");
+ if (!wifi_connected) /* non-connected scan */
+ btc8723b1ant_action_wifi_not_conn_scan(btcoexist);
+ else /* wifi is connected */
+ btc8723b1ant_action_wifi_conn_scan(btcoexist);
+ } else if (BTC_SCAN_FINISH == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN FINISH notify\n");
+ if (!wifi_connected) /* non-connected scan */
+ btc8723b1ant_action_wifi_not_conn(btcoexist);
+ else
+ halbtc8723b1ant_action_wifi_connected(btcoexist);
+ }
+}
+
+void ex_halbtc8723b1ant_connect_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ bool wifi_connected = false, bt_hs_on = false;
+ u32 wifi_link_status = 0;
+ u32 num_of_wifi_link = 0;
+ bool bt_ctrl_agg_buf_size = false;
+ u8 agg_buf_size = 5;
+
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_LINK_STATUS,
+ &wifi_link_status);
+ num_of_wifi_link = wifi_link_status >> 16;
+ if (num_of_wifi_link >= 2) {
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+ halbtc8723b1ant_limited_rx(btcoexist, NORMAL_EXEC, false,
+ bt_ctrl_agg_buf_size, agg_buf_size);
+ halbtc8723b1ant_action_wifi_multiport(btcoexist);
+ return;
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8723b1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8723b1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (BTC_ASSOCIATE_START == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT START notify\n");
+ btc8723b1ant_act_wifi_not_conn_asso_auth(btcoexist);
+ } else if (BTC_ASSOCIATE_FINISH == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT FINISH notify\n");
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ if (!wifi_connected) /* not connected */
+ btc8723b1ant_action_wifi_not_conn(btcoexist);
+ else
+ halbtc8723b1ant_action_wifi_connected(btcoexist);
+ }
+}
+
+void ex_halbtc8723b1ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ u8 h2c_parameter[3] = {0};
+ u32 wifi_bw;
+ u8 wifi_central_chnl;
+
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ if (BTC_MEDIA_CONNECT == type)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA connect notify\n");
+ else
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA disconnect notify\n");
+
+ /* only for 2.4G do we need to inform BT of the channel mask */
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_CENTRAL_CHNL,
+ &wifi_central_chnl);
+
+ if ((BTC_MEDIA_CONNECT == type) &&
+ (wifi_central_chnl <= 14)) {
+ h2c_parameter[0] = 0x0;
+ h2c_parameter[1] = wifi_central_chnl;
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw)
+ h2c_parameter[2] = 0x30;
+ else
+ h2c_parameter[2] = 0x20;
+ }
+
+ coex_dm->wifi_chnl_info[0] = h2c_parameter[0];
+ coex_dm->wifi_chnl_info[1] = h2c_parameter[1];
+ coex_dm->wifi_chnl_info[2] = h2c_parameter[2];
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x66 = 0x%x\n",
+ h2c_parameter[0] << 16 | h2c_parameter[1] << 8 |
+ h2c_parameter[2]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x66, 3, h2c_parameter);
+}
+
+void ex_halbtc8723b1ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ bool bt_hs_on = false;
+ u32 wifi_link_status = 0;
+ u32 num_of_wifi_link = 0;
+ bool bt_ctrl_agg_buf_size = false;
+ u8 agg_buf_size = 5;
+
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_LINK_STATUS,
+ &wifi_link_status);
+ num_of_wifi_link = wifi_link_status >> 16;
+ if (num_of_wifi_link >= 2) {
+ halbtc8723b1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+ halbtc8723b1ant_limited_rx(btcoexist, NORMAL_EXEC, false,
+ bt_ctrl_agg_buf_size, agg_buf_size);
+ halbtc8723b1ant_action_wifi_multiport(btcoexist);
+ return;
+ }
+
+ coex_sta->special_pkt_period_cnt = 0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8723b1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8723b1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (BTC_PACKET_DHCP == type ||
+ BTC_PACKET_EAPOL == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], special Packet(%d) notify\n", type);
+ halbtc8723b1ant_action_wifi_connected_special_packet(btcoexist);
+ }
+}
+
+void ex_halbtc8723b1ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmp_buf, u8 length)
+{
+ u8 bt_info = 0;
+ u8 i, rsp_source = 0;
+ bool wifi_connected = false;
+ bool bt_busy = false;
+
+ coex_sta->c2h_bt_info_req_sent = false;
+
+ rsp_source = tmp_buf[0] & 0xf;
+ if (rsp_source >= BT_INFO_SRC_8723B_1ANT_MAX)
+ rsp_source = BT_INFO_SRC_8723B_1ANT_WIFI_FW;
+ coex_sta->bt_info_c2h_cnt[rsp_source]++;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Bt info[%d], length=%d, hex data = [",
+ rsp_source, length);
+ for (i = 0; i < length; i++) {
+ coex_sta->bt_info_c2h[rsp_source][i] = tmp_buf[i];
+ if (i == 1)
+ bt_info = tmp_buf[i];
+ if (i == length - 1)
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x]\n", tmp_buf[i]);
+ else
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x, ", tmp_buf[i]);
+ }
+
+ if (BT_INFO_SRC_8723B_1ANT_WIFI_FW != rsp_source) {
+ coex_sta->bt_retry_cnt = /* [3:0] */
+ coex_sta->bt_info_c2h[rsp_source][2] & 0xf;
+
+ coex_sta->bt_rssi =
+ coex_sta->bt_info_c2h[rsp_source][3] * 2 + 10;
+
+ coex_sta->bt_info_ext =
+ coex_sta->bt_info_c2h[rsp_source][4];
+
+ /* Here we need to resend some wifi info to BT
+ * because BT was reset and lost the info.
+ */
+ if (coex_sta->bt_info_ext & BIT1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT ext info bit1 check, send wifi BW&Chnl to BT!!\n");
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ if (wifi_connected)
+ ex_halbtc8723b1ant_media_status_notify(btcoexist,
+ BTC_MEDIA_CONNECT);
+ else
+ ex_halbtc8723b1ant_media_status_notify(btcoexist,
+ BTC_MEDIA_DISCONNECT);
+ }
+
+ if (coex_sta->bt_info_ext & BIT3) {
+ if (!btcoexist->manual_control &&
+ !btcoexist->stop_coex_dm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT ext info bit3 check, set BT NOT ignore Wlan active!!\n");
+ halbtc8723b1ant_ignore_wlan_act(btcoexist,
+ FORCE_EXEC,
+ false);
+ }
+ } else {
+ /* BT already does not ignore WLAN activity; do nothing here. */
+ }
+#if (BT_AUTO_REPORT_ONLY_8723B_1ANT == 0)
+ if (coex_sta->bt_info_ext & BIT4) {
+ /* BT auto report already enabled, do nothing */
+ } else {
+ halbtc8723b1ant_bt_auto_report(btcoexist, FORCE_EXEC,
+ true);
+ }
+#endif
+ }
+
+ /* check BIT2 first ==> check if bt is under inquiry or page scan */
+ if (bt_info & BT_INFO_8723B_1ANT_B_INQ_PAGE)
+ coex_sta->c2h_bt_inquiry_page = true;
+ else
+ coex_sta->c2h_bt_inquiry_page = false;
+
+ /* set link exist status */
+ if (!(bt_info & BT_INFO_8723B_1ANT_B_CONNECTION)) {
+ coex_sta->bt_link_exist = false;
+ coex_sta->pan_exist = false;
+ coex_sta->a2dp_exist = false;
+ coex_sta->hid_exist = false;
+ coex_sta->sco_exist = false;
+ } else { /* connection exists */
+ coex_sta->bt_link_exist = true;
+ if (bt_info & BT_INFO_8723B_1ANT_B_FTP)
+ coex_sta->pan_exist = true;
+ else
+ coex_sta->pan_exist = false;
+ if (bt_info & BT_INFO_8723B_1ANT_B_A2DP)
+ coex_sta->a2dp_exist = true;
+ else
+ coex_sta->a2dp_exist = false;
+ if (bt_info & BT_INFO_8723B_1ANT_B_HID)
+ coex_sta->hid_exist = true;
+ else
+ coex_sta->hid_exist = false;
+ if (bt_info & BT_INFO_8723B_1ANT_B_SCO_ESCO)
+ coex_sta->sco_exist = true;
+ else
+ coex_sta->sco_exist = false;
+ }
+
+ halbtc8723b1ant_update_bt_link_info(btcoexist);
+
+ if (!(bt_info & BT_INFO_8723B_1ANT_B_CONNECTION)) {
+ coex_dm->bt_status = BT_8723B_1ANT_BT_STATUS_NON_CONNECTED_IDLE;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT Non-Connected idle!\n");
+ /* connection exists but is not busy */
+ } else if (bt_info == BT_INFO_8723B_1ANT_B_CONNECTION) {
+ coex_dm->bt_status = BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT Connected-idle!!!\n");
+ } else if ((bt_info & BT_INFO_8723B_1ANT_B_SCO_ESCO) ||
+ (bt_info & BT_INFO_8723B_1ANT_B_SCO_BUSY)) {
+ coex_dm->bt_status = BT_8723B_1ANT_BT_STATUS_SCO_BUSY;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT SCO busy!!!\n");
+ } else if (bt_info & BT_INFO_8723B_1ANT_B_ACL_BUSY) {
+ if (BT_8723B_1ANT_BT_STATUS_ACL_BUSY != coex_dm->bt_status)
+ coex_dm->auto_tdma_adjust = false;
+
+ coex_dm->bt_status = BT_8723B_1ANT_BT_STATUS_ACL_BUSY;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT ACL busy!!!\n");
+ } else {
+ coex_dm->bt_status = BT_8723B_1ANT_BT_STATUS_MAX;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT Non-Defined state!!\n");
+ }
+
+ if ((BT_8723B_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY == coex_dm->bt_status))
+ bt_busy = true;
+ else
+ bt_busy = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_TRAFFIC_BUSY, &bt_busy);
+
+ halbtc8723b1ant_run_coexist_mechanism(btcoexist);
+}
+
+void ex_halbtc8723b1ant_halt_notify(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY, "[BTCoex], Halt notify\n");
+
+ btcoexist->stop_coex_dm = true;
+
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_BT, false, true);
+
+ halbtc8723b1ant_wifi_off_hw_cfg(btcoexist);
+ halbtc8723b1ant_ignore_wlan_act(btcoexist, FORCE_EXEC, true);
+
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ halbtc8723b1ant_ps_tdma(btcoexist, FORCE_EXEC, false, 0);
+
+ ex_halbtc8723b1ant_media_status_notify(btcoexist, BTC_MEDIA_DISCONNECT);
+}
+
+void ex_halbtc8723b1ant_pnp_notify(struct btc_coexist *btcoexist, u8 pnp_state)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY, "[BTCoex], Pnp notify\n");
+
+ if (BTC_WIFI_PNP_SLEEP == pnp_state) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Pnp notify to SLEEP\n");
+ btcoexist->stop_coex_dm = true;
+ halbtc8723b1ant_SetAntPath(btcoexist, BTC_ANT_PATH_BT, false,
+ true);
+ halbtc8723b1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ halbtc8723b1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 0);
+ halbtc8723b1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 2);
+ halbtc8723b1ant_wifi_off_hw_cfg(btcoexist);
+ } else if (BTC_WIFI_PNP_WAKE_UP == pnp_state) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Pnp notify to WAKE UP\n");
+ btcoexist->stop_coex_dm = false;
+ halbtc8723b1ant_init_hw_config(btcoexist, false);
+ halbtc8723b1ant_init_coex_dm(btcoexist);
+ halbtc8723b1ant_query_bt_info(btcoexist);
+ }
+}
+
+void ex_halbtc8723b1ant_coex_dm_reset(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], *****************Coex DM Reset****************\n");
+
+ halbtc8723b1ant_init_hw_config(btcoexist, false);
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x2, 0xfffff, 0x0);
+ halbtc8723b1ant_init_coex_dm(btcoexist);
+}
+
+void ex_halbtc8723b1ant_periodical(struct btc_coexist *btcoexist)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ static u8 dis_ver_info_cnt;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], ==========================Periodical===========================\n");
+
+ if (dis_ver_info_cnt <= 5) {
+ dis_ver_info_cnt += 1;
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ****************************************************************\n");
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Ant PG Num/ Ant Mech/ Ant Pos = %d/ %d/ %d\n",
+ board_info->pg_ant_num, board_info->btdm_ant_num,
+ board_info->btdm_ant_pos);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], BT stack/ hci ext ver = %s / %d\n",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER,
+ &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], CoexVer/ FwVer/ PatchVer = %d_%x/ 0x%x/ 0x%x(%d)\n",
+ glcoex_ver_date_8723b_1ant,
+ glcoex_ver_8723b_1ant, fw_ver,
+ bt_patch_ver, bt_patch_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ****************************************************************\n");
+ }
+
+#if (BT_AUTO_REPORT_ONLY_8723B_1ANT == 0)
+ halbtc8723b1ant_query_bt_info(btcoexist);
+ halbtc8723b1ant_monitor_bt_ctr(btcoexist);
+ halbtc8723b1ant_monitor_bt_enable_disable(btcoexist);
+#else
+ if (btc8723b1ant_is_wifi_status_changed(btcoexist) ||
+ coex_dm->auto_tdma_adjust) {
+ halbtc8723b1ant_run_coexist_mechanism(btcoexist);
+ }
+
+ coex_sta->special_pkt_period_cnt++;
+#endif
+}
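For reference, the media-status notify above reduces to a small fixed packing rule for the 3-byte H2C 0x66 payload. The following is a minimal standalone sketch, not driver code; it assumes nothing beyond the byte layout visible above (byte 0 stays zero, byte 1 carries the 2.4G central channel, byte 2 is 0x30 for HT40 and 0x20 otherwise), and the function name is illustrative.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: pack the wifi channel info the same way
 * ex_halbtc8723b1ant_media_status_notify() does before handing
 * the buffer to btc_fill_h2c(..., 0x66, 3, h2c_parameter).
 */
static void pack_h2c_0x66(uint8_t h2c[3], int connected,
			  uint8_t central_chnl, int is_ht40)
{
	h2c[0] = 0x0;	/* byte 0 is left at zero by the driver */
	h2c[1] = 0x0;
	h2c[2] = 0x0;
	if (connected && central_chnl <= 14) {	/* 2.4G channels only */
		h2c[1] = central_chnl;		/* central channel */
		h2c[2] = is_ht40 ? 0x30 : 0x20;	/* bandwidth code */
	}
}

int main(void)
{
	uint8_t h2c[3];

	pack_h2c_0x66(h2c, 1, 6, 1);	/* connected, channel 6, HT40 */
	/* prints "FW write 0x66 = 0x630", matching the driver trace format */
	printf("FW write 0x66 = 0x%x\n",
	       h2c[0] << 16 | h2c[1] << 8 | h2c[2]);
	return 0;
}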
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b1ant.h b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b1ant.h
new file mode 100644
index 0000000..75f8094
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b1ant.h
@@ -0,0 +1,184 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+/**********************************************************************
+ * The following is for 8723B 1ANT BT Co-exist definition
+ **********************************************************************/
+#define BT_AUTO_REPORT_ONLY_8723B_1ANT 1
+
+#define BT_INFO_8723B_1ANT_B_FTP BIT7
+#define BT_INFO_8723B_1ANT_B_A2DP BIT6
+#define BT_INFO_8723B_1ANT_B_HID BIT5
+#define BT_INFO_8723B_1ANT_B_SCO_BUSY BIT4
+#define BT_INFO_8723B_1ANT_B_ACL_BUSY BIT3
+#define BT_INFO_8723B_1ANT_B_INQ_PAGE BIT2
+#define BT_INFO_8723B_1ANT_B_SCO_ESCO BIT1
+#define BT_INFO_8723B_1ANT_B_CONNECTION BIT0
+
+#define BT_INFO_8723B_1ANT_A2DP_BASIC_RATE(_BT_INFO_EXT_) \
+ ((_BT_INFO_EXT_ & BIT0) ? true : false)
+
+#define BTC_RSSI_COEX_THRESH_TOL_8723B_1ANT 2
+
+enum _BT_INFO_SRC_8723B_1ANT {
+ BT_INFO_SRC_8723B_1ANT_WIFI_FW = 0x0,
+ BT_INFO_SRC_8723B_1ANT_BT_RSP = 0x1,
+ BT_INFO_SRC_8723B_1ANT_BT_ACTIVE_SEND = 0x2,
+ BT_INFO_SRC_8723B_1ANT_MAX
+};
+
+enum _BT_8723B_1ANT_BT_STATUS {
+ BT_8723B_1ANT_BT_STATUS_NON_CONNECTED_IDLE = 0x0,
+ BT_8723B_1ANT_BT_STATUS_CONNECTED_IDLE = 0x1,
+ BT_8723B_1ANT_BT_STATUS_INQ_PAGE = 0x2,
+ BT_8723B_1ANT_BT_STATUS_ACL_BUSY = 0x3,
+ BT_8723B_1ANT_BT_STATUS_SCO_BUSY = 0x4,
+ BT_8723B_1ANT_BT_STATUS_ACL_SCO_BUSY = 0x5,
+ BT_8723B_1ANT_BT_STATUS_MAX
+};
+
+enum _BT_8723B_1ANT_WIFI_STATUS {
+ BT_8723B_1ANT_WIFI_STATUS_NON_CONNECTED_IDLE = 0x0,
+ BT_8723B_1ANT_WIFI_STATUS_NON_CONNECTED_ASSO_AUTH_SCAN = 0x1,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_SCAN = 0x2,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_SPECIAL_PKT = 0x3,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_IDLE = 0x4,
+ BT_8723B_1ANT_WIFI_STATUS_CONNECTED_BUSY = 0x5,
+ BT_8723B_1ANT_WIFI_STATUS_MAX
+};
+
+enum _BT_8723B_1ANT_COEX_ALGO {
+ BT_8723B_1ANT_COEX_ALGO_UNDEFINED = 0x0,
+ BT_8723B_1ANT_COEX_ALGO_SCO = 0x1,
+ BT_8723B_1ANT_COEX_ALGO_HID = 0x2,
+ BT_8723B_1ANT_COEX_ALGO_A2DP = 0x3,
+ BT_8723B_1ANT_COEX_ALGO_A2DP_PANHS = 0x4,
+ BT_8723B_1ANT_COEX_ALGO_PANEDR = 0x5,
+ BT_8723B_1ANT_COEX_ALGO_PANHS = 0x6,
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_A2DP = 0x7,
+ BT_8723B_1ANT_COEX_ALGO_PANEDR_HID = 0x8,
+ BT_8723B_1ANT_COEX_ALGO_HID_A2DP_PANEDR = 0x9,
+ BT_8723B_1ANT_COEX_ALGO_HID_A2DP = 0xa,
+ BT_8723B_1ANT_COEX_ALGO_MAX = 0xb,
+};
+
+struct coex_dm_8723b_1ant {
+ /* fw mechanism */
+ bool cur_ignore_wlan_act;
+ bool pre_ignore_wlan_act;
+ u8 pre_ps_tdma;
+ u8 cur_ps_tdma;
+ u8 ps_tdma_para[5];
+ u8 tdma_adj_type;
+ bool auto_tdma_adjust;
+ bool pre_ps_tdma_on;
+ bool cur_ps_tdma_on;
+ bool pre_bt_auto_report;
+ bool cur_bt_auto_report;
+ u8 pre_lps;
+ u8 cur_lps;
+ u8 pre_rpwm;
+ u8 cur_rpwm;
+
+ /* sw mechanism */
+ bool pre_low_penalty_ra;
+ bool cur_low_penalty_ra;
+ u32 pre_val0x6c0;
+ u32 cur_val0x6c0;
+ u32 pre_val0x6c4;
+ u32 cur_val0x6c4;
+ u32 pre_val0x6c8;
+ u32 cur_val0x6c8;
+ u8 pre_val0x6cc;
+ u8 cur_val0x6cc;
+ bool limited_dig;
+
+ u32 backup_arfr_cnt1; /* Auto Rate Fallback Retry cnt */
+ u32 backup_arfr_cnt2; /* Auto Rate Fallback Retry cnt */
+ u16 backup_retry_limit;
+ u8 backup_ampdu_max_time;
+
+ /* algorithm related */
+ u8 pre_algorithm;
+ u8 cur_algorithm;
+ u8 bt_status;
+ u8 wifi_chnl_info[3];
+
+ u32 prera_mask;
+ u32 curra_mask;
+ u8 pre_arfr_type;
+ u8 cur_arfr_type;
+ u8 pre_retry_limit_type;
+ u8 cur_retry_limit_type;
+ u8 pre_ampdu_time_type;
+ u8 cur_ampdu_time_type;
+
+ u8 error_condition;
+};
+
+struct coex_sta_8723b_1ant {
+ bool bt_link_exist;
+ bool sco_exist;
+ bool a2dp_exist;
+ bool hid_exist;
+ bool pan_exist;
+
+ bool under_lps;
+ bool under_ips;
+ u32 special_pkt_period_cnt;
+ u32 high_priority_tx;
+ u32 high_priority_rx;
+ u32 low_priority_tx;
+ u32 low_priority_rx;
+ u8 bt_rssi;
+ u8 pre_bt_rssi_state;
+ u8 pre_wifi_rssi_state[4];
+ bool c2h_bt_info_req_sent;
+ u8 bt_info_c2h[BT_INFO_SRC_8723B_1ANT_MAX][10];
+ u32 bt_info_c2h_cnt[BT_INFO_SRC_8723B_1ANT_MAX];
+ bool c2h_bt_inquiry_page;
+ u8 bt_retry_cnt;
+ u8 bt_info_ext;
+};
+
+/*************************************************************************
+ * The following is the interface which notifies the coex module.
+ *************************************************************************/
+void ex_halbtc8723b1ant_init_hwconfig(struct btc_coexist *btcoexist);
+void ex_halbtc8723b1ant_init_coex_dm(struct btc_coexist *btcoexist);
+void ex_halbtc8723b1ant_ips_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8723b1ant_lps_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8723b1ant_scan_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8723b1ant_connect_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8723b1ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8723b1ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8723b1ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmpbuf, u8 length);
+void ex_halbtc8723b1ant_halt_notify(struct btc_coexist *btcoexist);
+void ex_halbtc8723b1ant_pnp_notify(struct btc_coexist *btcoexist, u8 pnpstate);
+void ex_halbtc8723b1ant_coex_dm_reset(struct btc_coexist *btcoexist);
+void ex_halbtc8723b1ant_periodical(struct btc_coexist *btcoexist);
+void ex_halbtc8723b1ant_display_coex_info(struct btc_coexist *btcoexist);
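The BT_INFO_8723B_1ANT_B_* masks above are consumed by ex_halbtc8723b1ant_bt_info_notify() when it fills the *_exist flags in struct coex_sta_8723b_1ant. A condensed decoding sketch follows, assuming only the bit assignments defined above; the BIT() macro and struct are illustrative, not driver types.

#include <stdbool.h>
#include <stdint.h>

#define BIT(n)	(1U << (n))

struct bt_links {
	bool connection, sco, hid, a2dp, pan;
};

/* Decode one bt_info status byte the way the notify handler does:
 * the profile bits only count once BIT0 (connection) is set, and
 * the FTP bit is what feeds pan_exist.
 */
static void decode_bt_info(uint8_t bt_info, struct bt_links *l)
{
	l->connection = bt_info & BIT(0);	/* B_CONNECTION */
	if (!l->connection) {
		l->sco = l->hid = l->a2dp = l->pan = false;
		return;
	}
	l->sco  = bt_info & BIT(1);	/* B_SCO_ESCO */
	l->hid  = bt_info & BIT(5);	/* B_HID */
	l->a2dp = bt_info & BIT(6);	/* B_A2DP */
	l->pan  = bt_info & BIT(7);	/* B_FTP maps to pan_exist */
}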
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.c b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.c
index d916ab9..cefe269 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.c
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.c
@@ -49,8 +49,8 @@
"BT Info[bt auto report]",
};
-static u32 glcoex_ver_date_8723b_2ant = 20130731;
-static u32 glcoex_ver_8723b_2ant = 0x3b;
+static u32 glcoex_ver_date_8723b_2ant = 20131113;
+static u32 glcoex_ver_8723b_2ant = 0x3f;
/**************************************************************
* local function proto type if needed
@@ -303,6 +303,21 @@
btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
}
+static void btc8723b2ant_query_bt_info(struct btc_coexist *btcoexist)
+{
+ u8 h2c_parameter[1] = {0};
+
+ coex_sta->c2h_bt_info_req_sent = true;
+
+ h2c_parameter[0] |= BIT0; /* trigger */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Query Bt Info, FW write 0x61 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x61, 1, h2c_parameter);
+}
+
static bool btc8723b2ant_is_wifi_status_changed(struct btc_coexist *btcoexist)
{
static bool pre_wifi_busy;
@@ -604,7 +619,7 @@
if (!btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi))
return false;
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
if (wifi_connected) {
if (bt_hs_on) {
@@ -824,7 +839,6 @@
btc8723b2ant_set_dac_swing_reg(btcoex, 0x18);
}
-
static void btc8723b2ant_dac_swing(struct btc_coexist *btcoexist,
bool force_exec, bool dac_swing_on,
u32 dac_swing_lvl)
@@ -884,7 +898,6 @@
btcoexist->btc_write_4byte(btcoexist, 0xc78, 0xa4200001);
}
-
/* RF Gain */
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0xef, 0xfffff, 0x02000);
if (agc_table_en) {
@@ -1160,8 +1173,87 @@
dac_swing_lvl);
}
+static void btc8723b2ant_set_ant_path(struct btc_coexist *btcoexist,
+ u8 antpos_type, bool init_hwcfg,
+ bool wifi_off)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ u32 fw_ver = 0, u32tmp = 0;
+ bool pg_ext_switch = false;
+ bool use_ext_switch = false;
+ u8 h2c_parameter[2] = {0};
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_EXT_SWITCH, &pg_ext_switch);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+
+ if ((fw_ver < 0xc0000) || pg_ext_switch)
+ use_ext_switch = true;
+
+ if (init_hwcfg) {
+ /* 0x4c[23] = 0, 0x4c[24] = 1 Antenna control by WL/BT */
+ u32tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u32tmp &= ~BIT23;
+ u32tmp |= BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u32tmp);
+
+ btcoexist->btc_write_1byte(btcoexist, 0x974, 0xff);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x944, 0x3, 0x3);
+ btcoexist->btc_write_1byte(btcoexist, 0x930, 0x77);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x67, 0x20, 0x1);
+
+ /* Force GNT_BT to low */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x765, 0x18, 0x0);
+ btcoexist->btc_write_2byte(btcoexist, 0x948, 0x0);
+
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT) {
+ /* tell firmware "no antenna inverse" */
+ h2c_parameter[0] = 0;
+ h2c_parameter[1] = 1; /* ext switch type */
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ } else {
+ /* tell firmware "antenna inverse" */
+ h2c_parameter[0] = 1;
+ h2c_parameter[1] = 1; /* ext switch type */
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ }
+ }
+
+ /* ext switch setting */
+ if (use_ext_switch) {
+ /* fixed internal switch S1->WiFi, S0->BT */
+ btcoexist->btc_write_2byte(btcoexist, 0x948, 0x0);
+ switch (antpos_type) {
+ case BTC_ANT_WIFI_AT_MAIN:
+ /* ext switch main at wifi */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c,
+ 0x3, 0x1);
+ break;
+ case BTC_ANT_WIFI_AT_AUX:
+ /* ext switch aux at wifi */
+ btcoexist->btc_write_1byte_bitmask(btcoexist,
+ 0x92c, 0x3, 0x2);
+ break;
+ }
+ } else { /* internal switch */
+ /* fixed ext switch */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c, 0x3, 0x1);
+ switch (antpos_type) {
+ case BTC_ANT_WIFI_AT_MAIN:
+ /* fixed internal switch S1->WiFi, S0->BT */
+ btcoexist->btc_write_2byte(btcoexist, 0x948, 0x0);
+ break;
+ case BTC_ANT_WIFI_AT_AUX:
+ /* fixed internal switch S0->WiFi, S1->BT */
+ btcoexist->btc_write_2byte(btcoexist, 0x948, 0x280);
+ break;
+ }
+ }
+}
+
static void btc8723b2ant_ps_tdma(struct btc_coexist *btcoexist, bool force_exec,
- bool turn_on, u8 type)
+ bool turn_on, u8 type)
{
BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
"[BTCoex], %s turn %s PS TDMA, type=%d\n",
@@ -1351,7 +1443,8 @@
coex_dm->need_recover_0x948 = true;
coex_dm->backup_0x948 = btcoexist->btc_read_2byte(btcoexist, 0x948);
- btcoexist->btc_write_2byte(btcoexist, 0x948, 0x280);
+ btc8723b2ant_set_ant_path(btcoexist, BTC_ANT_WIFI_AT_AUX,
+ false, false);
}
static bool btc8723b2ant_is_common_action(struct btc_coexist *btcoexist)
@@ -1520,7 +1613,9 @@
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 8);
coex_dm->tdma_adj_type = 8;
- } else if (coex_dm->cur_ps_tdma == 9) {
+ }
+
+ if (coex_dm->cur_ps_tdma == 9) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 13);
coex_dm->tdma_adj_type = 13;
@@ -1607,7 +1702,9 @@
} else if (coex_dm->cur_ps_tdma == 8) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 4);
coex_dm->tdma_adj_type = 4;
- } else if (coex_dm->cur_ps_tdma == 13) {
+ }
+
+ if (coex_dm->cur_ps_tdma == 13) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 9);
coex_dm->tdma_adj_type = 9;
} else if (coex_dm->cur_ps_tdma == 14) {
@@ -1652,23 +1749,34 @@
coex_dm->tdma_adj_type = 12;
}
} else if (result == 1) {
- int tmp = coex_dm->cur_ps_tdma;
- switch (tmp) {
- case 4:
- case 3:
- case 2:
- case 12:
- case 11:
- case 10:
+ if (coex_dm->cur_ps_tdma == 4) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
- true, tmp - 1);
- coex_dm->tdma_adj_type = tmp - 1;
- break;
- case 1:
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (coex_dm->cur_ps_tdma == 1) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 71);
coex_dm->tdma_adj_type = 71;
- break;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
}
}
}
@@ -1694,7 +1802,8 @@
} else if (coex_dm->cur_ps_tdma == 4) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 8);
coex_dm->tdma_adj_type = 8;
- } else if (coex_dm->cur_ps_tdma == 9) {
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 14);
coex_dm->tdma_adj_type = 14;
} else if (coex_dm->cur_ps_tdma == 10) {
@@ -1776,7 +1885,8 @@
} else if (coex_dm->cur_ps_tdma == 8) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 4);
coex_dm->tdma_adj_type = 4;
- } else if (coex_dm->cur_ps_tdma == 13) {
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 10);
coex_dm->tdma_adj_type = 10;
} else if (coex_dm->cur_ps_tdma == 14) {
@@ -1865,7 +1975,8 @@
} else if (coex_dm->cur_ps_tdma == 4) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 8);
coex_dm->tdma_adj_type = 8;
- } else if (coex_dm->cur_ps_tdma == 9) {
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 15);
coex_dm->tdma_adj_type = 15;
} else if (coex_dm->cur_ps_tdma == 10) {
@@ -1935,101 +2046,80 @@
BTC_PRINT(BTC_MSG_ALGORITHM,
ALGO_TRACE_FW_DETAIL,
"[BTCoex], TxPause = 0\n");
- switch (coex_dm->cur_ps_tdma) {
- case 5:
+ if (coex_dm->cur_ps_tdma == 5) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 6:
+ } else if (coex_dm->cur_ps_tdma == 6) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 7:
+ } else if (coex_dm->cur_ps_tdma == 7) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 8:
+ } else if (coex_dm->cur_ps_tdma == 8) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 4);
coex_dm->tdma_adj_type = 4;
- break;
- case 13:
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 14:
+ } else if (coex_dm->cur_ps_tdma == 14) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 15:
+ } else if (coex_dm->cur_ps_tdma == 15) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 16:
+ } else if (coex_dm->cur_ps_tdma == 16) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 12);
coex_dm->tdma_adj_type = 12;
- break;
}
if (result == -1) {
- switch (coex_dm->cur_ps_tdma) {
- case 1:
+ if (coex_dm->cur_ps_tdma == 1) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 2:
+ } else if (coex_dm->cur_ps_tdma == 2) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 3:
+ } else if (coex_dm->cur_ps_tdma == 3) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 4);
coex_dm->tdma_adj_type = 4;
- break;
- case 9:
+ } else if (coex_dm->cur_ps_tdma == 9) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 10:
+ } else if (coex_dm->cur_ps_tdma == 10) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 11:
+ } else if (coex_dm->cur_ps_tdma == 11) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 12);
coex_dm->tdma_adj_type = 12;
- break;
}
} else if (result == 1) {
- switch (coex_dm->cur_ps_tdma) {
- case 4:
+ if (coex_dm->cur_ps_tdma == 4) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 3:
+ } else if (coex_dm->cur_ps_tdma == 3) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 2:
+ } else if (coex_dm->cur_ps_tdma == 2) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 3);
coex_dm->tdma_adj_type = 3;
- break;
- case 12:
+ } else if (coex_dm->cur_ps_tdma == 12) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 11:
+ } else if (coex_dm->cur_ps_tdma == 11) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 11);
coex_dm->tdma_adj_type = 11;
- break;
- case 10:
+ } else if (coex_dm->cur_ps_tdma == 10) {
btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC,
true, 11);
coex_dm->tdma_adj_type = 11;
@@ -2328,7 +2418,7 @@
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
@@ -2385,12 +2475,43 @@
/*A2DP only / PAN(EDR) only/ A2DP+PAN(HS)*/
static void btc8723b2ant_action_a2dp(struct btc_coexist *btcoexist)
{
- u8 wifi_rssi_state, bt_rssi_state;
+ u8 wifi_rssi_state, wifi_rssi_state1, bt_rssi_state;
u32 wifi_bw;
+ u8 ap_num = 0;
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ wifi_rssi_state1 = btc8723b2ant_wifi_rssi_state(btcoexist,
+ 1, 2, 40, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_AP_NUM, &ap_num);
+
+ /* the "office environment" case: many APs visible and high wifi RSSI */
+ /* the driver does not know the AP count in Linux, so we never enter this if */
+ if (ap_num >= 10 && BTC_RSSI_HIGH(wifi_rssi_state1)) {
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff,
+ 0x0);
+ btc8723b2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ btc8723b2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+ btc8723b_coex_tbl_type(btcoexist, NORMAL_EXEC, 0);
+ btc8723b2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+
+ /* sw mechanism */
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ btc8723b2ant_sw_mechanism1(btcoexist, true, false,
+ false, false);
+ btc8723b2ant_sw_mechanism2(btcoexist, true, false,
+ true, 0x18);
+ } else {
+ btc8723b2ant_sw_mechanism1(btcoexist, false, false,
+ false, false);
+ btc8723b2ant_sw_mechanism2(btcoexist, true, false,
+ true, 0x18);
+ }
+ return;
+ }
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
@@ -2501,7 +2622,7 @@
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
@@ -2612,7 +2733,7 @@
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
@@ -2676,7 +2797,7 @@
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
if (btc8723b_need_dec_pwr(btcoexist))
@@ -2746,7 +2867,7 @@
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
@@ -2809,8 +2930,8 @@
u32 wifi_bw;
wifi_rssi_state = btc8723b2ant_wifi_rssi_state(btcoexist,
- 0, 2, 15, 0);
- bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 35, 0);
+ 0, 2, 15, 0);
+ bt_rssi_state = btc8723b2ant_bt_rssi_state(2, 29, 0);
btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1, 0xfffff, 0x0);
@@ -2982,7 +3103,15 @@
}
}
-
+static void btc8723b2ant_wifioff_hwcfg(struct btc_coexist *btcoexist)
+{
+ /* set wlan_act to low */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0x4);
+ /* Force GNT_BT to High */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x765, 0x18, 0x3);
+ /* BT select s0/s1 is controlled by BT */
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x67, 0x20, 0x0);
+}
/*********************************************************************
* work around function start with wa_btc8723b2ant_
@@ -2990,98 +3119,24 @@
/*********************************************************************
* extern function start with EXbtc8723b2ant_
*********************************************************************/
-void ex_halbtc8723b2ant_init_hwconfig(struct btc_coexist *btcoexist)
+void ex_btc8723b2ant_init_hwconfig(struct btc_coexist *btcoexist)
{
- struct btc_board_info *board_info = &btcoexist->board_info;
- u32 u32tmp = 0, fw_ver;
u8 u8tmp = 0;
- u8 h2c_parameter[2] = {0};
-
BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
"[BTCoex], 2Ant Init HW Config!!\n");
+ coex_dm->bt_rf0x1e_backup =
+ btcoexist->btc_get_rf_reg(btcoexist, BTC_RF_A, 0x1e, 0xfffff);
- /* backup rf 0x1e value */
- coex_dm->bt_rf0x1e_backup = btcoexist->btc_get_rf_reg(btcoexist,
- BTC_RF_A, 0x1e,
- 0xfffff);
-
- /* 0x4c[23]=0, 0x4c[24]=1 Antenna control by WL/BT */
- u32tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
- u32tmp &= ~BIT23;
- u32tmp |= BIT24;
- btcoexist->btc_write_4byte(btcoexist, 0x4c, u32tmp);
-
- btcoexist->btc_write_1byte(btcoexist, 0x974, 0xff);
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x944, 0x3, 0x3);
- btcoexist->btc_write_1byte(btcoexist, 0x930, 0x77);
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x67, 0x20, 0x1);
-
- /* Antenna switch control parameter */
- /* btcoexist->btc_write_4byte(btcoexist, 0x858, 0x55555555);*/
-
- /*Force GNT_BT to low*/
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x765, 0x18, 0x0);
- btcoexist->btc_write_2byte(btcoexist, 0x948, 0x0);
-
- /* 0x790[5:0]=0x5 */
+ /* 0x790[5:0] = 0x5 */
u8tmp = btcoexist->btc_read_1byte(btcoexist, 0x790);
u8tmp &= 0xc0;
u8tmp |= 0x5;
btcoexist->btc_write_1byte(btcoexist, 0x790, u8tmp);
-
- /*Antenna config */
- btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
-
- /*ext switch for fw ver < 0xc */
- if (fw_ver < 0xc00) {
- if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT) {
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c,
- 0x3, 0x1);
- /*Main Ant to BT for IPS case 0x4c[23]=1*/
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x64, 0x1,
- 0x1);
-
- /*tell firmware "no antenna inverse"*/
- h2c_parameter[0] = 0;
- h2c_parameter[1] = 1; /* ext switch type */
- btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
- h2c_parameter);
- } else {
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c,
- 0x3, 0x2);
- /*Aux Ant to BT for IPS case 0x4c[23]=1*/
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x64, 0x1,
- 0x0);
-
- /*tell firmware "antenna inverse"*/
- h2c_parameter[0] = 1;
- h2c_parameter[1] = 1; /*ext switch type*/
- btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
- h2c_parameter);
- }
- } else {
- /*ext switch always at s1 (if exist) */
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x92c, 0x3, 0x1);
- /*Main Ant to BT for IPS case 0x4c[23]=1*/
- btcoexist->btc_write_1byte_bitmask(btcoexist, 0x64, 0x1, 0x1);
-
- if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT) {
- /*tell firmware "no antenna inverse"*/
- h2c_parameter[0] = 0;
- h2c_parameter[1] = 0; /*ext switch type*/
- btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
- h2c_parameter);
- } else {
- /*tell firmware "antenna inverse"*/
- h2c_parameter[0] = 1;
- h2c_parameter[1] = 0; /*ext switch type*/
- btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
- h2c_parameter);
- }
- }
-
+ /*Antenna config */
+ btc8723b2ant_set_ant_path(btcoexist, BTC_ANT_WIFI_AT_MAIN,
+ true, false);
/* PTA parameter */
btc8723b_coex_tbl_type(btcoexist, FORCE_EXEC, 0);
@@ -3092,19 +3147,19 @@
btcoexist->btc_write_1byte_bitmask(btcoexist, 0x40, 0x20, 0x1);
}
-void ex_halbtc8723b2ant_init_coex_dm(struct btc_coexist *btcoexist)
+void ex_btc8723b2ant_init_coex_dm(struct btc_coexist *btcoexist)
{
BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
"[BTCoex], Coex Mechanism Init!!\n");
btc8723b2ant_init_coex_dm(btcoexist);
}
-void ex_halbtc8723b2ant_display_coex_info(struct btc_coexist *btcoexist)
+void ex_btc8723b2ant_display_coex_info(struct btc_coexist *btcoexist)
{
struct btc_board_info *board_info = &btcoexist->board_info;
struct btc_stack_info *stack_info = &btcoexist->stack_info;
struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
- u8 *cli_buf = btcoexist->cli_buf;
+ struct rtl_priv *rtlpriv = btcoexist->adapter;
u8 u8tmp[4], i, bt_info_ext, ps_tdma_case = 0;
u32 u32tmp[4];
bool roam = false, scan = false;
@@ -3114,106 +3169,93 @@
u32 wifi_bw, wifi_traffic_dir, fa_ofdm, fa_cck;
u8 wifi_dot11_chnl, wifi_hs_chnl;
u32 fw_ver = 0, bt_patch_ver = 0;
+ u8 ap_num = 0;
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n ============[BT Coexist info]============");
- CL_PRINTF(cli_buf);
if (btcoexist->manual_control) {
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n ==========[Under Manual Control]============");
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n ==========================================");
- CL_PRINTF(cli_buf);
}
if (!board_info->bt_exist) {
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n BT not exists !!!");
- CL_PRINTF(cli_buf);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n BT not exists !!!");
return;
}
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d ",
"Ant PG number/ Ant mechanism:",
board_info->pg_ant_num, board_info->btdm_ant_num);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %s / %d",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s / %d",
"BT stack/ hci ext ver",
((stack_info->profile_notified) ? "Yes" : "No"),
stack_info->hci_version);
- CL_PRINTF(cli_buf);
btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER, &bt_patch_ver);
btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n %-35s = %d_%x/ 0x%x/ 0x%x(%d)",
"CoexVer/ FwVer/ PatchVer",
glcoex_ver_date_8723b_2ant, glcoex_ver_8723b_2ant,
fw_ver, bt_patch_ver, bt_patch_ver);
- CL_PRINTF(cli_buf);
btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_DOT11_CHNL,
&wifi_dot11_chnl);
btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_HS_CHNL, &wifi_hs_chnl);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d / %d(%d)",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d / %d(%d)",
"Dot11 channel / HsChnl(HsMode)",
wifi_dot11_chnl, wifi_hs_chnl, bt_hs_on);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %02x %02x %02x ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %02x %02x %02x ",
"H2C Wifi inform bt chnl Info", coex_dm->wifi_chnl_info[0],
coex_dm->wifi_chnl_info[1], coex_dm->wifi_chnl_info[2]);
- CL_PRINTF(cli_buf);
btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d",
- "Wifi rssi/ HS rssi", wifi_rssi, bt_hs_rssi);
- CL_PRINTF(cli_buf);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_AP_NUM, &ap_num);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d",
+ "Wifi rssi/ HS rssi/ AP#", wifi_rssi, bt_hs_rssi, ap_num);
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d/ %d ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d ",
"Wifi link/ roam/ scan", link, roam, scan);
- CL_PRINTF(cli_buf);
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_TRAFFIC_DIRECTION,
&wifi_traffic_dir);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %s / %s/ %s ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s / %s/ %s ",
"Wifi status", (wifi_under_5g ? "5G" : "2.4G"),
((BTC_WIFI_BW_LEGACY == wifi_bw) ? "Legacy" :
(((BTC_WIFI_BW_HT40 == wifi_bw) ? "HT40" : "HT20"))),
((!wifi_busy) ? "idle" :
((BTC_WIFI_TRAFFIC_TX == wifi_traffic_dir) ?
"uplink" : "downlink")));
- CL_PRINTF(cli_buf);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d / %d / %d / %d",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d / %d / %d / %d",
"SCO/HID/PAN/A2DP",
bt_link_info->sco_exist, bt_link_info->hid_exist,
bt_link_info->pan_exist, bt_link_info->a2dp_exist);
- CL_PRINTF(cli_buf);
btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_BT_LINK_INFO);
bt_info_ext = coex_sta->bt_info_ext;
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %s",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s",
"BT Info A2DP rate",
(bt_info_ext&BIT0) ? "Basic rate" : "EDR rate");
- CL_PRINTF(cli_buf);
for (i = 0; i < BT_INFO_SRC_8723B_2ANT_MAX; i++) {
if (coex_sta->bt_info_c2h_cnt[i]) {
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n %-35s = %02x %02x %02x "
"%02x %02x %02x %02x(%d)",
glbt_info_src_8723b_2ant[i],
@@ -3225,105 +3267,88 @@
coex_sta->bt_info_c2h[i][5],
coex_sta->bt_info_c2h[i][6],
coex_sta->bt_info_c2h_cnt[i]);
- CL_PRINTF(cli_buf);
}
}
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %s/%s",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s/%s",
"PS state, IPS/LPS",
((coex_sta->under_ips ? "IPS ON" : "IPS OFF")),
((coex_sta->under_lps ? "LPS ON" : "LPS OFF")));
- CL_PRINTF(cli_buf);
btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_FW_PWR_MODE_CMD);
/* Sw mechanism */
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n %-35s", "============[Sw mechanism]============");
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d/ %d ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d ",
"SM1[ShRf/ LpRA/ LimDig]", coex_dm->cur_rf_rx_lpf_shrink,
coex_dm->cur_low_penalty_ra, coex_dm->limited_dig);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d/ %d(0x%x) ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d/ %d(0x%x) ",
"SM2[AgcT/ AdcB/ SwDacSwing(lvl)]",
coex_dm->cur_agc_table_en, coex_dm->cur_adc_back_off,
coex_dm->cur_dac_swing_on, coex_dm->cur_dac_swing_lvl);
- CL_PRINTF(cli_buf);
/* Fw mechanism */
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
"============[Fw mechanism]============");
- CL_PRINTF(cli_buf);
ps_tdma_case = coex_dm->cur_ps_tdma;
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n %-35s = %02x %02x %02x %02x %02x case-%d (auto:%d)",
"PS TDMA", coex_dm->ps_tdma_para[0],
coex_dm->ps_tdma_para[1], coex_dm->ps_tdma_para[2],
coex_dm->ps_tdma_para[3], coex_dm->ps_tdma_para[4],
ps_tdma_case, coex_dm->auto_tdma_adjust);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d ",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d ",
"DecBtPwr/ IgnWlanAct", coex_dm->cur_dec_bt_pwr,
coex_dm->cur_ignore_wlan_act);
- CL_PRINTF(cli_buf);
/* Hw setting */
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
"============[Hw setting]============");
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x",
"RF-A, 0x1e initVal", coex_dm->bt_rf0x1e_backup);
- CL_PRINTF(cli_buf);
u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x778);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x880);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x/ 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
"0x778/0x880[29:25]", u8tmp[0],
(u32tmp[0]&0x3e000000) >> 25);
- CL_PRINTF(cli_buf);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x948);
u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x67);
u8tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x765);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
"0x948/ 0x67[5] / 0x765",
u32tmp[0], ((u8tmp[0]&0x20) >> 5), u8tmp[1]);
- CL_PRINTF(cli_buf);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x92c);
u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x930);
u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x944);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
"0x92c[1:0]/ 0x930[7:0]/0x944[1:0]",
u32tmp[0]&0x3, u32tmp[1]&0xff, u32tmp[2]&0x3);
- CL_PRINTF(cli_buf);
-
u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x39);
u8tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x40);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x4c);
u8tmp[2] = btcoexist->btc_read_1byte(btcoexist, 0x64);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n %-35s = 0x%x/ 0x%x/ 0x%x/ 0x%x",
"0x38[11]/0x40/0x4c[24:23]/0x64[0]",
((u8tmp[0] & 0x8)>>3), u8tmp[1],
((u32tmp[0]&0x01800000)>>23), u8tmp[2]&0x1);
- CL_PRINTF(cli_buf);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x550);
u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x522);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x/ 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
"0x550(bcn ctrl)/0x522", u32tmp[0], u8tmp[0]);
- CL_PRINTF(cli_buf);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc50);
u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x49c);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x/ 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
"0xc50(dig)/0x49c(null-drop)", u32tmp[0]&0xff, u8tmp[0]);
- CL_PRINTF(cli_buf);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xda0);
u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0xda4);
@@ -3341,29 +3366,25 @@
(u32tmp[3] & 0xffff);
fa_cck = (u8tmp[0] << 8) + u8tmp[1];
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
"OFDM-CCA/OFDM-FA/CCK-FA",
u32tmp[0]&0xffff, fa_ofdm, fa_cck);
- CL_PRINTF(cli_buf);
u32tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x6c0);
u32tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x6c4);
u32tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x6c8);
u8tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x6cc);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE,
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
"\r\n %-35s = 0x%x/ 0x%x/ 0x%x/ 0x%x",
"0x6c0/0x6c4/0x6c8/0x6cc(coexTable)",
u32tmp[0], u32tmp[1], u32tmp[2], u8tmp[0]);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
"0x770(high-pri rx/tx)",
coex_sta->high_priority_rx, coex_sta->high_priority_tx);
- CL_PRINTF(cli_buf);
- CL_SPRINTF(cli_buf, BT_TMP_BUF_SIZE, "\r\n %-35s = %d/ %d",
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
"0x774(low-pri rx/tx)", coex_sta->low_priority_rx,
coex_sta->low_priority_tx);
- CL_PRINTF(cli_buf);
#if (BT_AUTO_REPORT_ONLY_8723B_2ANT == 1)
btc8723b2ant_monitor_bt_ctr(btcoexist);
#endif
@@ -3371,22 +3392,26 @@
BTC_DBG_DISP_COEX_STATISTICS);
}
-
-void ex_halbtc8723b2ant_ips_notify(struct btc_coexist *btcoexist, u8 type)
+void ex_btc8723b2ant_ips_notify(struct btc_coexist *btcoexist, u8 type)
{
if (BTC_IPS_ENTER == type) {
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
"[BTCoex], IPS ENTER notify\n");
coex_sta->under_ips = true;
+ btc8723b2ant_wifioff_hwcfg(btcoexist);
+ btc8723b2ant_ignore_wlan_act(btcoexist, FORCE_EXEC, true);
btc8723b2ant_coex_alloff(btcoexist);
} else if (BTC_IPS_LEAVE == type) {
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
"[BTCoex], IPS LEAVE notify\n");
coex_sta->under_ips = false;
+ ex_btc8723b2ant_init_hwconfig(btcoexist);
+ btc8723b2ant_init_coex_dm(btcoexist);
+ btc8723b2ant_query_bt_info(btcoexist);
}
}
-void ex_halbtc8723b2ant_lps_notify(struct btc_coexist *btcoexist, u8 type)
+void ex_btc8723b2ant_lps_notify(struct btc_coexist *btcoexist, u8 type)
{
if (BTC_LPS_ENABLE == type) {
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
@@ -3399,7 +3424,7 @@
}
}
-void ex_halbtc8723b2ant_scan_notify(struct btc_coexist *btcoexist, u8 type)
+void ex_btc8723b2ant_scan_notify(struct btc_coexist *btcoexist, u8 type)
{
if (BTC_SCAN_START == type)
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
@@ -3409,7 +3434,7 @@
"[BTCoex], SCAN FINISH notify\n");
}
-void ex_halbtc8723b2ant_connect_notify(struct btc_coexist *btcoexist, u8 type)
+void ex_btc8723b2ant_connect_notify(struct btc_coexist *btcoexist, u8 type)
{
if (BTC_ASSOCIATE_START == type)
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
@@ -3419,8 +3444,8 @@
"[BTCoex], CONNECT FINISH notify\n");
}
-void btc8723b_med_stat_notify(struct btc_coexist *btcoexist,
- u8 type)
+void ex_btc8723b2ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type)
{
u8 h2c_parameter[3] = {0};
u32 wifi_bw;
@@ -3460,16 +3485,16 @@
btcoexist->btc_fill_h2c(btcoexist, 0x66, 3, h2c_parameter);
}
-void ex_halbtc8723b2ant_special_packet_notify(struct btc_coexist *btcoexist,
- u8 type)
+void ex_btc8723b2ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type)
{
if (type == BTC_PACKET_DHCP)
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
"[BTCoex], DHCP Packet notify\n");
}
-void ex_halbtc8723b2ant_bt_info_notify(struct btc_coexist *btcoexist,
- u8 *tmpbuf, u8 length)
+void ex_btc8723b2ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmpbuf, u8 length)
{
u8 bt_info = 0;
u8 i, rsp_source = 0;
@@ -3516,7 +3541,7 @@
coex_sta->bt_info_c2h[rsp_source][4];
/* Here we need to resend some wifi info to BT
- * because bt is reset and loss of the info.
+ * because BT was reset and lost the info.
*/
if ((coex_sta->bt_info_ext & BIT1)) {
BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
@@ -3525,11 +3550,13 @@
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
&wifi_connected);
if (wifi_connected)
- btc8723b_med_stat_notify(btcoexist,
- BTC_MEDIA_CONNECT);
+ ex_btc8723b2ant_media_status_notify(
+ btcoexist,
+ BTC_MEDIA_CONNECT);
else
- btc8723b_med_stat_notify(btcoexist,
- BTC_MEDIA_DISCONNECT);
+ ex_btc8723b2ant_media_status_notify(
+ btcoexist,
+ BTC_MEDIA_DISCONNECT);
}
if ((coex_sta->bt_info_ext & BIT3)) {
@@ -3564,7 +3591,7 @@
coex_sta->a2dp_exist = false;
coex_sta->hid_exist = false;
coex_sta->sco_exist = false;
- } else { /* connection exists */
+ } else { /* connection exists */
coex_sta->bt_link_exist = true;
if (bt_info & BT_INFO_8723B_2ANT_B_FTP)
coex_sta->pan_exist = true;
@@ -3601,7 +3628,7 @@
coex_dm->bt_status = BT_8723B_2ANT_BT_STATUS_SCO_BUSY;
BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
"[BTCoex], BtInfoNotify(), BT SCO busy!!!\n");
- } else if (bt_info & BT_INFO_8723B_2ANT_B_ACL_BUSY) {
+ } else if (bt_info&BT_INFO_8723B_2ANT_B_ACL_BUSY) {
coex_dm->bt_status = BT_8723B_2ANT_BT_STATUS_ACL_BUSY;
BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
"[BTCoex], BtInfoNotify(), BT ACL busy!!!\n");
@@ -3630,26 +3657,16 @@
btc8723b2ant_run_coexist_mechanism(btcoexist);
}
-void ex_halbtc8723b2ant_stack_operation_notify(struct btc_coexist *btcoexist,
- u8 type)
-{
- if (BTC_STACK_OP_INQ_PAGE_PAIR_START == type)
- BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
- "[BTCoex],StackOP Inquiry/page/pair start notify\n");
- else if (BTC_STACK_OP_INQ_PAGE_PAIR_FINISH == type)
- BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
- "[BTCoex],StackOP Inquiry/page/pair finish notify\n");
-}
-
-void ex_halbtc8723b2ant_halt_notify(struct btc_coexist *btcoexist)
+void ex_btc8723b2ant_halt_notify(struct btc_coexist *btcoexist)
{
BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY, "[BTCoex], Halt notify\n");
+ btc8723b2ant_wifioff_hwcfg(btcoexist);
btc8723b2ant_ignore_wlan_act(btcoexist, FORCE_EXEC, true);
- btc8723b_med_stat_notify(btcoexist, BTC_MEDIA_DISCONNECT);
+ ex_btc8723b2ant_media_status_notify(btcoexist, BTC_MEDIA_DISCONNECT);
}
-void ex_halbtc8723b2ant_periodical(struct btc_coexist *btcoexist)
+void ex_btc8723b2ant_periodical(struct btc_coexist *btcoexist)
{
struct btc_board_info *board_info = &btcoexist->board_info;
struct btc_stack_info *stack_info = &btcoexist->stack_info;
@@ -3677,8 +3694,7 @@
&bt_patch_ver);
btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
- "[BTCoex], CoexVer/ FwVer/ PatchVer = "
- "%d_%x/ 0x%x/ 0x%x(%d)\n",
+ "[BTCoex], CoexVer/ fw_ver/ PatchVer = %d_%x/ 0x%x/ 0x%x(%d)\n",
glcoex_ver_date_8723b_2ant, glcoex_ver_8723b_2ant,
fw_ver, bt_patch_ver, bt_patch_ver);
BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.h b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.h
index e0ad8e5..567f354 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.h
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8723b2ant.h
@@ -153,21 +153,20 @@
/*********************************************************************
* The following is interface which will notify coex module.
*********************************************************************/
-void ex_halbtc8723b2ant_init_hwconfig(struct btc_coexist *btcoexist);
-void ex_halbtc8723b2ant_init_coex_dm(struct btc_coexist *btcoexist);
-void ex_halbtc8723b2ant_ips_notify(struct btc_coexist *btcoexist, u8 type);
-void ex_halbtc8723b2ant_lps_notify(struct btc_coexist *btcoexist, u8 type);
-void ex_halbtc8723b2ant_scan_notify(struct btc_coexist *btcoexist, u8 type);
-void ex_halbtc8723b2ant_connect_notify(struct btc_coexist *btcoexist, u8 type);
-void btc8723b_med_stat_notify(struct btc_coexist *btcoexist, u8 type);
-void ex_halbtc8723b2ant_special_packet_notify(struct btc_coexist *btcoexist,
- u8 type);
-void ex_halbtc8723b2ant_bt_info_notify(struct btc_coexist *btcoexist,
- u8 *tmpbuf, u8 length);
-void ex_halbtc8723b2ant_stack_operation_notify(struct btc_coexist *btcoexist,
- u8 type);
-void ex_halbtc8723b2ant_halt_notify(struct btc_coexist *btcoexist);
-void ex_halbtc8723b2ant_periodical(struct btc_coexist *btcoexist);
-void ex_halbtc8723b2ant_display_coex_info(struct btc_coexist *btcoexist);
+void ex_btc8723b2ant_init_hwconfig(struct btc_coexist *btcoexist);
+void ex_btc8723b2ant_init_coex_dm(struct btc_coexist *btcoexist);
+void ex_btc8723b2ant_ips_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_btc8723b2ant_lps_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_btc8723b2ant_scan_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_btc8723b2ant_connect_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_btc8723b2ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_btc8723b2ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_btc8723b2ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmpbuf, u8 length);
+void ex_btc8723b2ant_halt_notify(struct btc_coexist *btcoexist);
+void ex_btc8723b2ant_periodical(struct btc_coexist *btcoexist);
+void ex_btc8723b2ant_display_coex_info(struct btc_coexist *btcoexist);
#endif
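Several hunks above lower the BT RSSI threshold passed to btc8723b2ant_bt_rssi_state() from 35 to 29. That helper implements a two-level hysteresis (the full version is visible in the 8821A copy below): moving low-to-high requires the threshold plus a tolerance (2, per the *_THRESH_TOL defines), while moving back only requires the RSSI to drop below the bare threshold. A condensed sketch, with the STAY_* states collapsed into two; names here are illustrative, not driver identifiers.

/* Two-level RSSI hysteresis as used by *_bt_rssi_state(2, thresh, 0):
 * the asymmetric thresholds keep the state from flapping when the
 * RSSI hovers near the threshold.
 */
enum rssi_state { RSSI_LOW, RSSI_HIGH };

static enum rssi_state rssi_2level(enum rssi_state prev, long rssi,
				   long thresh, long tol)
{
	if (prev == RSSI_LOW)
		return (rssi >= thresh + tol) ? RSSI_HIGH : RSSI_LOW;
	return (rssi < thresh) ? RSSI_LOW : RSSI_HIGH;
}

With thresh = 29 and tol = 2, an RSSI hovering at 30 stays LOW if it was LOW (30 < 31) and stays HIGH if it was HIGH (30 >= 29), so a 1-2 dB wobble never toggles the coex decision.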
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a1ant.c b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a1ant.c
new file mode 100644
index 0000000..b72e537
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a1ant.c
@@ -0,0 +1,2970 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+
+/*============================================================
+ * Description:
+ *
+ * This file is for RTL8821A Co-exist mechanism
+ *
+ * History
+ * 2012/11/15 Cosa first check in.
+ *
+ *============================================================
+*/
+/*============================================================
+ * include files
+ *============================================================
+ */
+#include "halbt_precomp.h"
+/*============================================================
+ * Global variables, these are static variables
+ *============================================================
+ */
+static struct coex_dm_8821a_1ant glcoex_dm_8821a_1ant;
+static struct coex_dm_8821a_1ant *coex_dm = &glcoex_dm_8821a_1ant;
+static struct coex_sta_8821a_1ant glcoex_sta_8821a_1ant;
+static struct coex_sta_8821a_1ant *coex_sta = &glcoex_sta_8821a_1ant;
+
+static const char *const glbt_info_src_8821a_1ant[] = {
+ "BT Info[wifi fw]",
+ "BT Info[bt rsp]",
+ "BT Info[bt auto report]",
+};
+
+static u32 glcoex_ver_date_8821a_1ant = 20130816;
+static u32 glcoex_ver_8821a_1ant = 0x41;
+
+/*============================================================
+ * local function proto type if needed
+ *
+ * local function start with halbtc8821a1ant_
+ *============================================================
+ */
+static u8 halbtc8821a1ant_bt_rssi_state(u8 level_num, u8 rssi_thresh,
+ u8 rssi_thresh1)
+{
+ long bt_rssi = 0;
+ u8 bt_rssi_state = coex_sta->pre_bt_rssi_state;
+
+ bt_rssi = coex_sta->bt_rssi;
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ if (bt_rssi >= (rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT)) {
+ bt_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to High\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Low\n");
+ }
+ } else {
+ if (bt_rssi < rssi_thresh) {
+ bt_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Low\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi thresh error!!\n");
+ return coex_sta->pre_bt_rssi_state;
+ }
+
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ if (bt_rssi >= (rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT)) {
+ bt_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Medium\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (bt_rssi >= (rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT)) {
+ bt_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to High\n");
+ } else if (bt_rssi < rssi_thresh) {
+ bt_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Low\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Medium\n");
+ }
+ } else {
+ if (bt_rssi < rssi_thresh1) {
+ bt_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Medium\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at High\n");
+ }
+ }
+ }
+ coex_sta->pre_bt_rssi_state = bt_rssi_state;
+
+ return bt_rssi_state;
+}
+
+static u8 halbtc8821a1ant_wifi_rssi_state(struct btc_coexist *btcoexist,
+					  u8 index, u8 level_num,
+					  u8 rssi_thresh, u8 rssi_thresh1)
+{
+ long wifi_rssi = 0;
+ u8 wifi_rssi_state = coex_sta->pre_wifi_rssi_state[index];
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifi_rssi >=
+ (rssi_thresh+BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT)) {
+ wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to High\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Low\n");
+ }
+ } else {
+ if (wifi_rssi < rssi_thresh) {
+ wifi_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Low\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI thresh error!!\n");
+ return coex_sta->pre_wifi_rssi_state[index];
+ }
+
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifi_rssi >=
+ (rssi_thresh+BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT)) {
+ wifi_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Medium\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (wifi_rssi >=
+ (rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT)) {
+ wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to High\n");
+ } else if (wifi_rssi < rssi_thresh) {
+ wifi_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Low\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Medium\n");
+ }
+ } else {
+ if (wifi_rssi < rssi_thresh1) {
+ wifi_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Medium\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at High\n");
+ }
+ }
+ }
+ coex_sta->pre_wifi_rssi_state[index] = wifi_rssi_state;
+
+ return wifi_rssi_state;
+}
+
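+/* Most setters below share one caching pattern: the requested value is
+ * stored in coex_dm->cur_*, the hardware/firmware write is skipped when
+ * it matches the previously programmed coex_dm->pre_*, and force_exec
+ * requests an unconditional reprogram.
+ */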
+static void halbtc8821a1ant_update_ra_mask(struct btc_coexist *btcoexist,
+ bool force_exec, u32 dis_rate_mask)
+{
+ coex_dm->cur_ra_mask = dis_rate_mask;
+
+ if (force_exec ||
+ (coex_dm->pre_ra_mask != coex_dm->cur_ra_mask)) {
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_UPDATE_ra_mask,
+ &coex_dm->cur_ra_mask);
+ }
+ coex_dm->pre_ra_mask = coex_dm->cur_ra_mask;
+}
+
+static void btc8821a1ant_auto_rate_fb_retry(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ bool wifi_under_b_mode = false;
+
+ coex_dm->cur_arfr_type = type;
+
+ if (force_exec ||
+ (coex_dm->pre_arfr_type != coex_dm->cur_arfr_type)) {
+ switch (coex_dm->cur_arfr_type) {
+ case 0: /* normal mode*/
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ coex_dm->backup_arfr_cnt1);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ coex_dm->backup_arfr_cnt2);
+ break;
+ case 1:
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_UNDER_B_MODE,
+ &wifi_under_b_mode);
+ if (wifi_under_b_mode) {
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ 0x0);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ 0x01010101);
+ } else {
+ btcoexist->btc_write_4byte(btcoexist, 0x430,
+ 0x0);
+ btcoexist->btc_write_4byte(btcoexist, 0x434,
+ 0x04030201);
+ }
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_arfr_type = coex_dm->cur_arfr_type;
+}
+
+static void halbtc8821a1ant_retry_limit(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ coex_dm->cur_retry_limit_type = type;
+
+ if (force_exec ||
+ (coex_dm->pre_retry_limit_type != coex_dm->cur_retry_limit_type)) {
+ switch (coex_dm->cur_retry_limit_type) {
+ case 0: /* normal mode*/
+ btcoexist->btc_write_2byte(btcoexist, 0x42a,
+ coex_dm->backup_retry_limit);
+ break;
+ case 1: /* retry limit = 8*/
+ btcoexist->btc_write_2byte(btcoexist, 0x42a, 0x0808);
+ break;
+ default:
+ break;
+ }
+ }
+ coex_dm->pre_retry_limit_type = coex_dm->cur_retry_limit_type;
+}
+
+static void halbtc8821a1ant_ampdu_max_time(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ coex_dm->cur_ampdu_time_type = type;
+
+ if (force_exec ||
+ (coex_dm->pre_ampdu_time_type != coex_dm->cur_ampdu_time_type)) {
+ switch (coex_dm->cur_ampdu_time_type) {
+ case 0: /* normal mode*/
+ btcoexist->btc_write_1byte(btcoexist, 0x456,
+ coex_dm->backup_ampdu_max_time);
+ break;
+		case 1: /* AMPDU time = 0x38 * 32us*/
+ btcoexist->btc_write_1byte(btcoexist, 0x456, 0x38);
+ break;
+ default:
+ break;
+ }
+ }
+
+ coex_dm->pre_ampdu_time_type = coex_dm->cur_ampdu_time_type;
+}
+
+static void halbtc8821a1ant_limited_tx(struct btc_coexist *btcoexist,
+ bool force_exec, u8 ra_mask_type,
+ u8 arfr_type, u8 retry_limit_type,
+ u8 ampdu_time_type)
+{
+ switch (ra_mask_type) {
+ case 0: /* normal mode*/
+ halbtc8821a1ant_update_ra_mask(btcoexist, force_exec, 0x0);
+ break;
+ case 1: /* disable cck 1/2*/
+ halbtc8821a1ant_update_ra_mask(btcoexist, force_exec,
+ 0x00000003);
+ break;
+ case 2: /* disable cck 1/2/5.5, ofdm 6/9/12/18/24, mcs 0/1/2/3/4*/
+ halbtc8821a1ant_update_ra_mask(btcoexist, force_exec,
+ 0x0001f1f7);
+ break;
+ default:
+ break;
+ }
+
+ btc8821a1ant_auto_rate_fb_retry(btcoexist, force_exec, arfr_type);
+ halbtc8821a1ant_retry_limit(btcoexist, force_exec, retry_limit_type);
+ halbtc8821a1ant_ampdu_max_time(btcoexist, force_exec, ampdu_time_type);
+}
+
+static void halbtc8821a1ant_limited_rx(struct btc_coexist *btcoexist,
+ bool force_exec, bool rej_ap_agg_pkt,
+ bool bt_ctrl_agg_buf_size,
+ u8 agg_buf_size)
+{
+ bool reject_rx_agg = rej_ap_agg_pkt;
+ bool bt_ctrl_rx_agg_size = bt_ctrl_agg_buf_size;
+ u8 rx_agg_size = agg_buf_size;
+
+ /*============================================*/
+ /* Rx Aggregation related setting*/
+ /*============================================*/
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_BL_TO_REJ_AP_AGG_PKT, &reject_rx_agg);
+	/* decide whether BT controls the aggregation buf size*/
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_CTRL_AGG_SIZE,
+ &bt_ctrl_rx_agg_size);
+	/* aggregation buf size, only works when BT controls Rx agg size.*/
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_AGG_BUF_SIZE, &rx_agg_size);
+ /* real update aggregation setting*/
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_AGGREGATE_CTRL, NULL);
+}
+
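+/* Snapshot the BT high/low-priority TX/RX activity counters: registers
+ * 0x770 (high priority) and 0x774 (low priority) pack TX in the low word
+ * and RX in the high word, and writing 0xc to 0x76e resets the counters
+ * so each poll observes a fresh interval.
+ */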
+static void halbtc8821a1ant_monitor_bt_ctr(struct btc_coexist *btcoexist)
+{
+ u32 reg_hp_tx_rx, reg_lp_tx_rx, u4_tmp;
+ u32 reg_hp_tx = 0, reg_hp_rx = 0, reg_lp_tx = 0, reg_lp_rx = 0;
+
+ reg_hp_tx_rx = 0x770;
+ reg_lp_tx_rx = 0x774;
+
+ u4_tmp = btcoexist->btc_read_4byte(btcoexist, reg_hp_tx_rx);
+ reg_hp_tx = u4_tmp & MASKLWORD;
+ reg_hp_rx = (u4_tmp & MASKHWORD)>>16;
+
+ u4_tmp = btcoexist->btc_read_4byte(btcoexist, reg_lp_tx_rx);
+ reg_lp_tx = u4_tmp & MASKLWORD;
+ reg_lp_rx = (u4_tmp & MASKHWORD)>>16;
+
+ coex_sta->high_priority_tx = reg_hp_tx;
+ coex_sta->high_priority_rx = reg_hp_rx;
+ coex_sta->low_priority_tx = reg_lp_tx;
+ coex_sta->low_priority_rx = reg_lp_rx;
+
+ /* reset counter*/
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+}
+
+static void halbtc8821a1ant_query_bt_info(struct btc_coexist *btcoexist)
+{
+ u8 h2c_parameter[1] = {0};
+
+ coex_sta->c2h_bt_info_req_sent = true;
+
+ h2c_parameter[0] |= BIT0; /* trigger*/
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Query Bt Info, FW write 0x61 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x61, 1, h2c_parameter);
+}
+
+static void halbtc8821a1ant_update_bt_link_info(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool bt_hs_on = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+
+ bt_link_info->bt_link_exist = coex_sta->bt_link_exist;
+ bt_link_info->sco_exist = coex_sta->sco_exist;
+ bt_link_info->a2dp_exist = coex_sta->a2dp_exist;
+ bt_link_info->pan_exist = coex_sta->pan_exist;
+ bt_link_info->hid_exist = coex_sta->hid_exist;
+
+	/* workaround for HS mode.*/
+ if (bt_hs_on) {
+ bt_link_info->pan_exist = true;
+ bt_link_info->bt_link_exist = true;
+ }
+
+ /* check if Sco only*/
+ if (bt_link_info->sco_exist &&
+ !bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist &&
+ !bt_link_info->hid_exist)
+ bt_link_info->sco_only = true;
+ else
+ bt_link_info->sco_only = false;
+
+ /* check if A2dp only*/
+ if (!bt_link_info->sco_exist &&
+ bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist &&
+ !bt_link_info->hid_exist)
+ bt_link_info->a2dp_only = true;
+ else
+ bt_link_info->a2dp_only = false;
+
+ /* check if Pan only*/
+ if (!bt_link_info->sco_exist &&
+ !bt_link_info->a2dp_exist &&
+ bt_link_info->pan_exist &&
+ !bt_link_info->hid_exist)
+ bt_link_info->pan_only = true;
+ else
+ bt_link_info->pan_only = false;
+
+ /* check if Hid only*/
+ if (!bt_link_info->sco_exist &&
+ !bt_link_info->a2dp_exist &&
+ !bt_link_info->pan_exist &&
+ bt_link_info->hid_exist)
+ bt_link_info->hid_only = true;
+ else
+ bt_link_info->hid_only = false;
+}
+
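+/* Map the set of active BT profiles (SCO/HID/A2DP/PAN) onto one of the
+ * BT_8821A_1ANT_COEX_ALGO_* policies: count the distinct live profiles,
+ * then resolve each combination, distinguishing PAN(EDR) from PAN(HS)
+ * when a BT high-speed (AMP) link is up.
+ */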
+static u8 halbtc8821a1ant_action_algorithm(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool bt_hs_on = false;
+ u8 algorithm = BT_8821A_1ANT_COEX_ALGO_UNDEFINED;
+ u8 num_of_diff_profile = 0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+
+ if (!bt_link_info->bt_link_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], No BT link exists!!!\n");
+ return algorithm;
+ }
+
+ if (bt_link_info->sco_exist)
+ num_of_diff_profile++;
+ if (bt_link_info->hid_exist)
+ num_of_diff_profile++;
+ if (bt_link_info->pan_exist)
+ num_of_diff_profile++;
+ if (bt_link_info->a2dp_exist)
+ num_of_diff_profile++;
+
+ if (num_of_diff_profile == 1) {
+ if (bt_link_info->sco_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO only\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_SCO;
+ } else {
+ if (bt_link_info->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID only\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = A2DP only\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_A2DP;
+ } else if (bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = PAN(HS) only\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = PAN(EDR) only\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR;
+ }
+ }
+ }
+ } else if (num_of_diff_profile == 2) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + A2DP ==> SCO\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_SCO;
+ } else if (bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + PAN(HS)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + PAN(EDR)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + A2DP\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID_A2DP;
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + PAN(HS)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + PAN(EDR)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = A2DP + PAN(HS)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_A2DP_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = A2DP + PAN(EDR)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR_A2DP;
+ }
+ }
+ }
+ } else if (num_of_diff_profile == 3) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + A2DP ==> HID\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID;
+ } else if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + PAN(HS)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + PAN(EDR)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + A2DP + PAN(HS)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + A2DP + PAN(EDR) ==> HID\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + A2DP + PAN(HS)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = HID + A2DP + PAN(EDR)\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_HID_A2DP_PANEDR;
+ }
+ }
+ }
+ } else if (num_of_diff_profile >= 3) {
+ if (bt_link_info->sco_exist) {
+ if (bt_link_info->hid_exist &&
+ bt_link_info->pan_exist &&
+ bt_link_info->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Error!!! BT Profile = SCO + HID + A2DP + PAN(HS)\n");
+
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT Profile = SCO + HID + A2DP + PAN(EDR)==>PAN(EDR)+HID\n");
+ algorithm = BT_8821A_1ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ }
+ }
+ return algorithm;
+}
+
+static void halbtc8821a1ant_set_bt_auto_report(struct btc_coexist *btcoexist,
+ bool enable_auto_report)
+{
+ u8 h2c_parameter[1] = {0};
+
+ h2c_parameter[0] = 0;
+
+ if (enable_auto_report)
+ h2c_parameter[0] |= BIT0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], BT FW auto report : %s, FW write 0x68 = 0x%x\n",
+ (enable_auto_report ? "Enabled!!" : "Disabled!!"),
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x68, 1, h2c_parameter);
+}
+
+static void halbtc8821a1ant_bt_auto_report(struct btc_coexist *btcoexist,
+ bool force_exec,
+ bool enable_auto_report)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW, "[BTCoex], %s BT Auto report = %s\n",
+ (force_exec ? "force to" : ""), ((enable_auto_report) ?
+ "Enabled" : "Disabled"));
+ coex_dm->cur_bt_auto_report = enable_auto_report;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_bt_auto_report = %d, cur_bt_auto_report = %d\n",
+ coex_dm->pre_bt_auto_report,
+ coex_dm->cur_bt_auto_report);
+
+ if (coex_dm->pre_bt_auto_report == coex_dm->cur_bt_auto_report)
+ return;
+ }
+ halbtc8821a1ant_set_bt_auto_report(btcoexist, coex_dm->cur_bt_auto_report);
+
+ coex_dm->pre_bt_auto_report = coex_dm->cur_bt_auto_report;
+}
+
+static void btc8821a1ant_set_sw_pen_tx_rate(struct btc_coexist *btcoexist,
+ bool low_penalty_ra)
+{
+ u8 h2c_parameter[6] = {0};
+
+ h2c_parameter[0] = 0x6; /* opCode, 0x6= Retry_Penalty*/
+
+ if (low_penalty_ra) {
+ h2c_parameter[1] |= BIT0;
+ /*normal rate except MCS7/6/5, OFDM54/48/36*/
+ h2c_parameter[2] = 0x00;
+ h2c_parameter[3] = 0xf7; /*MCS7 or OFDM54*/
+ h2c_parameter[4] = 0xf8; /*MCS6 or OFDM48*/
+ h2c_parameter[5] = 0xf9; /*MCS5 or OFDM36*/
+ }
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set WiFi Low-Penalty Retry: %s",
+ (low_penalty_ra ? "ON!!" : "OFF!!"));
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x69, 6, h2c_parameter);
+}
+
+static void halbtc8821a1ant_low_penalty_ra(struct btc_coexist *btcoexist,
+ bool force_exec, bool low_penalty_ra)
+{
+ coex_dm->cur_low_penalty_ra = low_penalty_ra;
+
+ if (!force_exec) {
+ if (coex_dm->pre_low_penalty_ra == coex_dm->cur_low_penalty_ra)
+ return;
+ }
+ btc8821a1ant_set_sw_pen_tx_rate(btcoexist, coex_dm->cur_low_penalty_ra);
+
+ coex_dm->pre_low_penalty_ra = coex_dm->cur_low_penalty_ra;
+}
+
+static void halbtc8821a1ant_set_coex_table(struct btc_coexist *btcoexist,
+ u32 val0x6c0, u32 val0x6c4,
+ u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c0 = 0x%x\n", val0x6c0);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c0, val0x6c0);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c4 = 0x%x\n", val0x6c4);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c4, val0x6c4);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c8 = 0x%x\n", val0x6c8);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c8, val0x6c8);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6cc = 0x%x\n", val0x6cc);
+ btcoexist->btc_write_1byte(btcoexist, 0x6cc, val0x6cc);
+}
+
+static void halbtc8821a1ant_coex_table(struct btc_coexist *btcoexist,
+ bool force_exec, u32 val0x6c0,
+ u32 val0x6c4, u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s write Coex Table 0x6c0 = 0x%x, 0x6c4 = 0x%x, 0x6c8 = 0x%x, 0x6cc = 0x%x\n",
+ (force_exec ? "force to" : ""), val0x6c0, val0x6c4,
+ val0x6c8, val0x6cc);
+ coex_dm->cur_val_0x6c0 = val0x6c0;
+ coex_dm->cur_val_0x6c4 = val0x6c4;
+ coex_dm->cur_val_0x6c8 = val0x6c8;
+ coex_dm->cur_val_0x6cc = val0x6cc;
+
+ if (!force_exec) {
+ if ((coex_dm->pre_val_0x6c0 == coex_dm->cur_val_0x6c0) &&
+ (coex_dm->pre_val_0x6c4 == coex_dm->cur_val_0x6c4) &&
+ (coex_dm->pre_val_0x6c8 == coex_dm->cur_val_0x6c8) &&
+ (coex_dm->pre_val_0x6cc == coex_dm->cur_val_0x6cc))
+ return;
+ }
+ halbtc8821a1ant_set_coex_table(btcoexist, val0x6c0, val0x6c4,
+ val0x6c8, val0x6cc);
+
+ coex_dm->pre_val_0x6c0 = coex_dm->cur_val_0x6c0;
+ coex_dm->pre_val_0x6c4 = coex_dm->cur_val_0x6c4;
+ coex_dm->pre_val_0x6c8 = coex_dm->cur_val_0x6c8;
+ coex_dm->pre_val_0x6cc = coex_dm->cur_val_0x6cc;
+}
+
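+/* Preset coexistence tables: each type selects a fixed 0x6c0/0x6c4
+ * break-table pair, while 0x6c8 and 0x6cc are identical across all the
+ * presets used here.
+ */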
+static void halbtc8821a1ant_coex_table_with_type(struct btc_coexist *btcoexist,
+ bool force_exec, u8 type)
+{
+ switch (type) {
+ case 0:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0x55555555, 0xffffff, 0x3);
+ break;
+ case 1:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec,
+ 0x55555555, 0x5a5a5a5a,
+ 0xffffff, 0x3);
+ break;
+ case 2:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0x5a5a5a5a,
+ 0x5a5a5a5a, 0xffffff, 0x3);
+ break;
+ case 3:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0x55555555,
+ 0xaaaaaaaa, 0xffffff, 0x3);
+ break;
+ case 4:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0xffffffff,
+ 0xffffffff, 0xffffff, 0x3);
+ break;
+ case 5:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0x5fff5fff,
+ 0x5fff5fff, 0xffffff, 0x3);
+ break;
+ case 6:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0x55ff55ff,
+ 0x5a5a5a5a, 0xffffff, 0x3);
+ break;
+ case 7:
+ halbtc8821a1ant_coex_table(btcoexist, force_exec, 0x5afa5afa,
+ 0x5afa5afa, 0xffffff, 0x3);
+ break;
+ default:
+ break;
+ }
+}
+
+static void btc8821a1ant_set_fw_ignore_wlan_act(struct btc_coexist *btcoexist,
+ bool enable)
+{
+ u8 h2c_parameter[1] = {0};
+
+ if (enable)
+ h2c_parameter[0] |= BIT0; /* function enable*/
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set FW for BT Ignore Wlan_Act, FW write 0x63 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x63, 1, h2c_parameter);
+}
+
+static void halbtc8821a1ant_ignore_wlan_act(struct btc_coexist *btcoexist,
+ bool force_exec, bool enable)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s turn Ignore WlanAct %s\n",
+ (force_exec ? "force to" : ""), (enable ? "ON" : "OFF"));
+ coex_dm->cur_ignore_wlan_act = enable;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_ignore_wlan_act = %d, cur_ignore_wlan_act = %d!!\n",
+ coex_dm->pre_ignore_wlan_act,
+ coex_dm->cur_ignore_wlan_act);
+
+ if (coex_dm->pre_ignore_wlan_act ==
+ coex_dm->cur_ignore_wlan_act)
+ return;
+ }
+ btc8821a1ant_set_fw_ignore_wlan_act(btcoexist, enable);
+
+ coex_dm->pre_ignore_wlan_act = coex_dm->cur_ignore_wlan_act;
+}
+
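+/* Send the 5-byte PS-TDMA recipe to the firmware via H2C command 0x60
+ * and mirror it into coex_dm->ps_tdma_para so the active pattern can be
+ * dumped and compared later.
+ */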
+static void halbtc8821a1ant_set_fw_pstdma(struct btc_coexist *btcoexist,
+ u8 byte1, u8 byte2, u8 byte3,
+ u8 byte4, u8 byte5)
+{
+ u8 h2c_parameter[5] = {0};
+
+ h2c_parameter[0] = byte1;
+ h2c_parameter[1] = byte2;
+ h2c_parameter[2] = byte3;
+ h2c_parameter[3] = byte4;
+ h2c_parameter[4] = byte5;
+
+ coex_dm->ps_tdma_para[0] = byte1;
+ coex_dm->ps_tdma_para[1] = byte2;
+ coex_dm->ps_tdma_para[2] = byte3;
+ coex_dm->ps_tdma_para[3] = byte4;
+ coex_dm->ps_tdma_para[4] = byte5;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], PS-TDMA H2C cmd =0x%x%08x\n",
+ h2c_parameter[0],
+ h2c_parameter[1]<<24 |
+ h2c_parameter[2]<<16 |
+ h2c_parameter[3]<<8 |
+ h2c_parameter[4]);
+ btcoexist->btc_fill_h2c(btcoexist, 0x60, 5, h2c_parameter);
+}
+
+static void halbtc8821a1ant_set_lps_rpwm(struct btc_coexist *btcoexist,
+ u8 lps_val, u8 rpwm_val)
+{
+ u8 lps = lps_val;
+ u8 rpwm = rpwm_val;
+
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_LPS_VAL, &lps);
+ btcoexist->btc_set(btcoexist, BTC_SET_U1_RPWM_VAL, &rpwm);
+}
+
+static void halbtc8821a1ant_lps_rpwm(struct btc_coexist *btcoexist,
+ bool force_exec, u8 lps_val, u8 rpwm_val)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s set lps/rpwm = 0x%x/0x%x\n",
+ (force_exec ? "force to" : ""), lps_val, rpwm_val);
+ coex_dm->cur_lps = lps_val;
+ coex_dm->cur_rpwm = rpwm_val;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], LPS-RxBeaconMode = 0x%x, LPS-RPWM = 0x%x!!\n",
+ coex_dm->cur_lps, coex_dm->cur_rpwm);
+
+ if ((coex_dm->pre_lps == coex_dm->cur_lps) &&
+ (coex_dm->pre_rpwm == coex_dm->cur_rpwm)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], LPS-RPWM_Last = 0x%x, LPS-RPWM_Now = 0x%x!!\n",
+ coex_dm->pre_rpwm, coex_dm->cur_rpwm);
+
+ return;
+ }
+ }
+ halbtc8821a1ant_set_lps_rpwm(btcoexist, lps_val, rpwm_val);
+
+ coex_dm->pre_lps = coex_dm->cur_lps;
+ coex_dm->pre_rpwm = coex_dm->cur_rpwm;
+}
+
+static void halbtc8821a1ant_sw_mechanism(struct btc_coexist *btcoexist,
+ bool low_penalty_ra)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], SM[LpRA] = %d\n", low_penalty_ra);
+
+ halbtc8821a1ant_low_penalty_ra(btcoexist, NORMAL_EXEC, low_penalty_ra);
+}
+
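+/* Antenna routing: 0x4c[23]/0x4c[24] choose whether WL/BT or BT_RFE_CTRL
+ * owns the shared antenna, the 0x30 bitmask of 0xcb7 steers the external
+ * switch towards WiFi, BT or PTA, and H2C 0x65 tells the firmware whether
+ * the board wiring is inverted (flagged below as needing a firmware fix).
+ */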
+static void halbtc8821a1ant_set_ant_path(struct btc_coexist *btcoexist,
+ u8 ant_pos_type, bool init_hw_cfg,
+ bool wifi_off)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ u32 u4_tmp = 0;
+ u8 h2c_parameter[2] = {0};
+
+ if (init_hw_cfg) {
+ /* 0x4c[23] = 0, 0x4c[24] = 1 Antenna control by WL/BT*/
+ u4_tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u4_tmp &= ~BIT23;
+ u4_tmp |= BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u4_tmp);
+
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x975, 0x3, 0x3);
+ btcoexist->btc_write_1byte(btcoexist, 0xcb4, 0x77);
+
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT) {
+ /*tell firmware "antenna inverse" ==>
+ * WRONG firmware antenna control code.==>need fw to fix
+ */
+ h2c_parameter[0] = 1;
+ h2c_parameter[1] = 1;
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ /*Main Ant to BT for IPS case 0x4c[23] = 1*/
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x64,
+ 0x1, 0x1);
+ } else {
+ /*tell firmware "no antenna inverse" ==>
+ * WRONG firmware antenna control code.==>need fw to fix
+ */
+ h2c_parameter[0] = 0;
+ h2c_parameter[1] = 1;
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ /*Aux Ant to BT for IPS case 0x4c[23] = 1*/
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x64,
+ 0x1, 0x0);
+ }
+ } else if (wifi_off) {
+ /* 0x4c[24:23] = 00, Set Antenna control
+ * by BT_RFE_CTRL BT Vendor 0xac = 0xf002
+ */
+ u4_tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u4_tmp &= ~BIT23;
+ u4_tmp &= ~BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u4_tmp);
+ }
+
+ /* ext switch setting*/
+ switch (ant_pos_type) {
+ case BTC_ANT_PATH_WIFI:
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7,
+ 0x30, 0x1);
+ else
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7,
+ 0x30, 0x2);
+ break;
+ case BTC_ANT_PATH_BT:
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7,
+ 0x30, 0x2);
+ else
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7,
+ 0x30, 0x1);
+ break;
+ default:
+ case BTC_ANT_PATH_PTA:
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT)
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7,
+ 0x30, 0x1);
+ else
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7,
+ 0x30, 0x2);
+ break;
+ }
+}
+
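+/* Apply one of the numbered PS-TDMA patterns (turn_on) or drop back to
+ * PTA/software antenna control (turn_off); each case is a 5-byte H2C
+ * recipe for halbtc8821a1ant_set_fw_pstdma().
+ */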
+static void halbtc8821a1ant_ps_tdma(struct btc_coexist *btcoexist,
+ bool force_exec, bool turn_on, u8 type)
+{
+ u8 rssi_adjust_val = 0;
+
+ coex_dm->cur_ps_tdma_on = turn_on;
+ coex_dm->cur_ps_tdma = type;
+
+ if (!force_exec) {
+ if (coex_dm->cur_ps_tdma_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], ********** TDMA(on, %d) **********\n",
+ coex_dm->cur_ps_tdma);
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], ********** TDMA(off, %d) **********\n",
+ coex_dm->cur_ps_tdma);
+ }
+ if ((coex_dm->pre_ps_tdma_on == coex_dm->cur_ps_tdma_on) &&
+ (coex_dm->pre_ps_tdma == coex_dm->cur_ps_tdma))
+ return;
+ }
+ if (turn_on) {
+ switch (type) {
+ default:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x1a,
+ 0x1a, 0x0, 0x50);
+ break;
+ case 1:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x3a,
+ 0x03, 0x10, 0x50);
+ rssi_adjust_val = 11;
+ break;
+ case 2:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x2b,
+ 0x03, 0x10, 0x50);
+ rssi_adjust_val = 14;
+ break;
+ case 3:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x1d,
+ 0x1d, 0x0, 0x10);
+ break;
+ case 4:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x93, 0x15,
+ 0x3, 0x14, 0x0);
+ rssi_adjust_val = 17;
+ break;
+ case 5:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x61, 0x15,
+ 0x3, 0x11, 0x10);
+ break;
+ case 6:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x13, 0xa,
+ 0x3, 0x0, 0x0);
+ break;
+ case 7:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x13, 0xc,
+ 0x5, 0x0, 0x0);
+ break;
+ case 8:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x93, 0x25,
+ 0x3, 0x10, 0x0);
+ break;
+ case 9:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x21,
+ 0x3, 0x10, 0x50);
+ rssi_adjust_val = 18;
+ break;
+ case 10:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x13, 0xa,
+ 0xa, 0x0, 0x40);
+ break;
+ case 11:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x14,
+ 0x03, 0x10, 0x10);
+ rssi_adjust_val = 20;
+ break;
+ case 12:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x0a,
+ 0x0a, 0x0, 0x50);
+ break;
+ case 13:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x18,
+ 0x18, 0x0, 0x10);
+ break;
+ case 14:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x21,
+ 0x3, 0x10, 0x10);
+ break;
+ case 15:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x13, 0xa,
+ 0x3, 0x8, 0x0);
+ break;
+ case 16:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x93, 0x15,
+ 0x3, 0x10, 0x0);
+ rssi_adjust_val = 18;
+ break;
+ case 18:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x93, 0x25,
+ 0x3, 0x10, 0x0);
+ rssi_adjust_val = 14;
+ break;
+ case 20:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x61, 0x35,
+ 0x03, 0x11, 0x10);
+ break;
+ case 21:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x61, 0x15,
+ 0x03, 0x11, 0x10);
+ break;
+ case 22:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x61, 0x25,
+ 0x03, 0x11, 0x10);
+ break;
+ case 23:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xe3, 0x25,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 24:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xe3, 0x15,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 25:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xe3, 0xa,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 26:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xe3, 0xa,
+ 0x3, 0x31, 0x18);
+ rssi_adjust_val = 22;
+ break;
+ case 27:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xe3, 0x25,
+ 0x3, 0x31, 0x98);
+ rssi_adjust_val = 22;
+ break;
+ case 28:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x69, 0x25,
+ 0x3, 0x31, 0x0);
+ break;
+ case 29:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xab, 0x1a,
+ 0x1a, 0x1, 0x10);
+ break;
+ case 30:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x51, 0x14,
+ 0x3, 0x10, 0x50);
+ break;
+ case 31:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xd3, 0x1a,
+ 0x1a, 0, 0x58);
+ break;
+ case 32:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x61, 0xa,
+ 0x3, 0x10, 0x0);
+ break;
+ case 33:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xa3, 0x25,
+ 0x3, 0x30, 0x90);
+ break;
+ case 34:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x53, 0x1a,
+ 0x1a, 0x0, 0x10);
+ break;
+ case 35:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x63, 0x1a,
+ 0x1a, 0x0, 0x10);
+ break;
+ case 36:
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0xd3, 0x12,
+ 0x3, 0x14, 0x50);
+ break;
+ }
+ } else {
+ /* disable PS tdma*/
+ switch (type) {
+ case 8: /*PTA Control*/
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x8, 0x0, 0x0,
+ 0x0, 0x0);
+ halbtc8821a1ant_set_ant_path(btcoexist, BTC_ANT_PATH_PTA,
+ false, false);
+ break;
+ case 0:
+ default: /*Software control, Antenna at BT side*/
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x0, 0x0);
+ halbtc8821a1ant_set_ant_path(btcoexist, BTC_ANT_PATH_BT,
+ false, false);
+ break;
+ case 9: /*Software control, Antenna at WiFi side*/
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x0, 0x0);
+ halbtc8821a1ant_set_ant_path(btcoexist, BTC_ANT_PATH_WIFI,
+ false, false);
+ break;
+ case 10: /* under 5G*/
+ halbtc8821a1ant_set_fw_pstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x8, 0x0);
+ halbtc8821a1ant_set_ant_path(btcoexist, BTC_ANT_PATH_BT,
+ false, false);
+ break;
+ }
+ }
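+	/* note: the unconditional reset below means the per-case
+	 * rssi_adjust_val values above never reach the driver as this
+	 * function is currently written
+	 */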
+ rssi_adjust_val = 0;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_U1_RSSI_ADJ_VAL_FOR_1ANT_COEX_TYPE, &rssi_adjust_val);
+
+ /* update pre state*/
+ coex_dm->pre_ps_tdma_on = coex_dm->cur_ps_tdma_on;
+ coex_dm->pre_ps_tdma = coex_dm->cur_ps_tdma;
+}
+
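+/* "Common" situations need no profile-specific action: whenever either
+ * side is idle (non-connected or connected-idle), the plain software
+ * mechanism is enough. Only WiFi connected + BT busy returns false here
+ * and falls through to the per-profile handling.
+ */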
+static bool halbtc8821a1ant_is_common_action(struct btc_coexist *btcoexist)
+{
+ bool common = false, wifi_connected = false, wifi_busy = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+
+ if (!wifi_connected &&
+ BT_8821A_1ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi non connected-idle + BT non connected-idle!!\n");
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+
+ common = true;
+ } else if (wifi_connected &&
+ (BT_8821A_1ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi connected + BT non connected-idle!!\n");
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+
+ common = true;
+ } else if (!wifi_connected &&
+ (BT_8821A_1ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi non connected-idle + BT connected-idle!!\n");
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+
+ common = true;
+ } else if (wifi_connected &&
+ (BT_8821A_1ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi connected + BT connected-idle!!\n");
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+
+ common = true;
+ } else if (!wifi_connected &&
+ (BT_8821A_1ANT_BT_STATUS_CONNECTED_IDLE !=
+ coex_dm->bt_status)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi non connected-idle + BT Busy!!\n");
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+
+ common = true;
+ } else {
+ if (wifi_busy) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Connected-Busy + BT Busy!!\n");
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Connected-Idle + BT Busy!!\n");
+ }
+
+ common = false;
+ }
+
+ return common;
+}
+
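+/* Adaptive TDMA duration tuning: 'up' counts consecutive clean 2-second
+ * intervals and widens the WiFi slot after n of them, while 'dn' counts
+ * retry-heavy intervals and shrinks it. After each shrink, n is scaled by
+ * m (capped at 20, i.e. roughly 120 s) so a marginal link settles on one
+ * duration instead of bouncing between two.
+ */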
+static void btc8821a1ant_tdma_dur_adj(struct btc_coexist *btcoexist,
+ u8 wifi_status)
+{
+ static long up, dn, m, n, wait_count;
+ /*0: no change, +1: increase WiFi duration, -1: decrease WiFi duration*/
+ long result;
+ u8 retry_count = 0, bt_info_ext;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], TdmaDurationAdjustForAcl()\n");
+
+ if ((BT_8821A_1ANT_WIFI_STATUS_NON_CONNECTED_ASSO_AUTH_SCAN ==
+ wifi_status) ||
+ (BT_8821A_1ANT_WIFI_STATUS_CONNECTED_SCAN ==
+ wifi_status) ||
+ (BT_8821A_1ANT_WIFI_STATUS_CONNECTED_SPECIAL_PKT ==
+ wifi_status)) {
+ if (coex_dm->cur_ps_tdma != 1 &&
+ coex_dm->cur_ps_tdma != 2 &&
+ coex_dm->cur_ps_tdma != 3 &&
+ coex_dm->cur_ps_tdma != 9) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+
+ up = 0;
+ dn = 0;
+ m = 1;
+ n = 3;
+ result = 0;
+ wait_count = 0;
+ }
+ return;
+ }
+
+ if (!coex_dm->auto_tdma_adjust) {
+ coex_dm->auto_tdma_adjust = true;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], first run TdmaDurationAdjust()!!\n");
+
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 2);
+ coex_dm->tdma_adj_type = 2;
+ /*============*/
+ up = 0;
+ dn = 0;
+ m = 1;
+ n = 3;
+ result = 0;
+ wait_count = 0;
+ } else {
+		/* acquire the BT TRx retry count from BT_Info byte2 */
+ retry_count = coex_sta->bt_retry_cnt;
+ bt_info_ext = coex_sta->bt_info_ext;
+ result = 0;
+ wait_count++;
+
+ if (retry_count == 0) {
+ /* no retry in the last 2-second duration*/
+ up++;
+ dn--;
+
+ if (dn <= 0)
+ dn = 0;
+
+ if (up >= n) {
+				/* if (retry count == 0) for 2*n seconds,
+				 * make the WiFi duration wider
+				 */
+ wait_count = 0;
+ n = 3;
+ up = 0;
+ dn = 0;
+ result = 1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Increase wifi duration!!\n");
+ }
+ } else if (retry_count <= 3) {
+			/* <= 3 retries in the last 2-second duration */
+ up--;
+ dn++;
+
+ if (up <= 0)
+ up = 0;
+
+			if (dn == 2) {
+				/* if retry count <= 3 for 2*2 seconds,
+				 * shrink the wifi duration
+				 */
+ if (wait_count <= 2)
+ m++; /* avoid bounce in two levels */
+ else
+ m = 1;
+
+ if (m >= 20) {
+ /* m max value is 20, max time is 120 s,
+ * recheck if adjust WiFi duration.
+ */
+ m = 20;
+ }
+ n = 3*m;
+ up = 0;
+ dn = 0;
+ wait_count = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Decrease wifi duration for retryCounter<3!!\n");
+ }
+ } else {
+ /* retry count > 3, if retry count > 3 happens once,
+ * shrink WiFi duration
+ */
+ if (wait_count == 1)
+ m++; /* avoid bounce in two levels */
+ else
+ m = 1;
+ /* m max value is 20, max time is 120 second,
+ * recheck if adjust WiFi duration.
+ */
+ if (m >= 20)
+ m = 20;
+
+ n = 3*m;
+ up = 0;
+ dn = 0;
+ wait_count = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Decrease wifi duration for retryCounter>3!!\n");
+ }
+
+ if (result == -1) {
+ if ((BT_INFO_8821A_1ANT_A2DP_BASIC_RATE(bt_info_ext)) &&
+ ((coex_dm->cur_ps_tdma == 1) ||
+ (coex_dm->cur_ps_tdma == 2))) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ }
+ } else if (result == 1) {
+ if ((BT_INFO_8821A_1ANT_A2DP_BASIC_RATE(bt_info_ext)) &&
+ ((coex_dm->cur_ps_tdma == 1) ||
+ (coex_dm->cur_ps_tdma == 2))) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ }
+ } else {
+ /*no change*/
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], ********** TDMA(on, %d) **********\n",
+ coex_dm->cur_ps_tdma);
+ }
+
+ if (coex_dm->cur_ps_tdma != 1 &&
+ coex_dm->cur_ps_tdma != 2 &&
+ coex_dm->cur_ps_tdma != 9 &&
+ coex_dm->cur_ps_tdma != 11) {
+ /* recover to previous adjust type*/
+ halbtc8821a1ant_ps_tdma(btcoexist,
+ NORMAL_EXEC, true,
+ coex_dm->tdma_adj_type);
+ }
+ }
+}
+
+static void btc8821a1ant_ps_tdma_check_for_pwr_save(struct btc_coexist *btcoex,
+ bool new_ps_state)
+{
+ u8 lps_mode = 0x0;
+
+ btcoex->btc_get(btcoex, BTC_GET_U1_LPS_MODE, &lps_mode);
+
+ if (lps_mode) {
+ /* already under LPS state*/
+ if (new_ps_state) {
+ /* keep state under LPS, do nothing.*/
+ } else {
+ /* will leave LPS state, turn off psTdma first*/
+ halbtc8821a1ant_ps_tdma(btcoex, NORMAL_EXEC, false, 0);
+ }
+ } else {
+ /* NO PS state*/
+ if (new_ps_state) {
+ /* will enter LPS state, turn off psTdma first*/
+ halbtc8821a1ant_ps_tdma(btcoex, NORMAL_EXEC, false, 0);
+ } else {
+ /* keep state under NO PS state, do nothing.*/
+ }
+ }
+}
+
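+/* Power-save glue: PS-TDMA must be turned off before LPS is entered or
+ * left (handled by the helper above), and coex-forced LPS also disables
+ * the 32k low-power mode.
+ */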
+static void halbtc8821a1ant_power_save_state(struct btc_coexist *btcoexist,
+ u8 ps_type, u8 lps_val,
+ u8 rpwm_val)
+{
+ bool low_pwr_disable = false;
+
+ switch (ps_type) {
+ case BTC_PS_WIFI_NATIVE:
+ /* recover to original 32k low power setting*/
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_NORMAL_LPS, NULL);
+ break;
+ case BTC_PS_LPS_ON:
+ btc8821a1ant_ps_tdma_check_for_pwr_save(btcoexist,
+ true);
+ halbtc8821a1ant_lps_rpwm(btcoexist,
+ NORMAL_EXEC, lps_val, rpwm_val);
+		/* when coex forces LPS entry, do not drop into 32k low power.*/
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+		/* power save must be executed before psTdma.*/
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_ENTER_LPS, NULL);
+ break;
+ case BTC_PS_LPS_OFF:
+ btc8821a1ant_ps_tdma_check_for_pwr_save(btcoexist, false);
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_LEAVE_LPS, NULL);
+ break;
+ default:
+ break;
+ }
+}
+
+static void halbtc8821a1ant_coex_under_5g(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ halbtc8821a1ant_ignore_wlan_act(btcoexist, NORMAL_EXEC, true);
+
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 10);
+
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 0);
+
+ halbtc8821a1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+
+ halbtc8821a1ant_limited_rx(btcoexist, NORMAL_EXEC, false, false, 5);
+}
+
+static void halbtc8821a1ant_action_wifi_only(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 0);
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 9);
+}
+
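+/* Debounced BT-disable detection: all-zero counters (no BT activity) or
+ * all-0xffff counters (most likely a dead read) on two consecutive polls
+ * marks BT as disabled and drops coex back to WiFi-only behaviour.
+ */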
+static void btc8821a1ant_mon_bt_en_dis(struct btc_coexist *btcoexist)
+{
+ static bool pre_bt_disabled;
+ static u32 bt_disable_cnt;
+ bool bt_active = true, bt_disabled = false;
+
+	/* This function checks whether BT is disabled */
+
+ if (coex_sta->high_priority_tx == 0 &&
+ coex_sta->high_priority_rx == 0 &&
+ coex_sta->low_priority_tx == 0 &&
+ coex_sta->low_priority_rx == 0) {
+ bt_active = false;
+ }
+ if (coex_sta->high_priority_tx == 0xffff &&
+ coex_sta->high_priority_rx == 0xffff &&
+ coex_sta->low_priority_tx == 0xffff &&
+ coex_sta->low_priority_rx == 0xffff) {
+ bt_active = false;
+ }
+ if (bt_active) {
+ bt_disable_cnt = 0;
+ bt_disabled = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_DISABLE,
+ &bt_disabled);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is enabled !!\n");
+ } else {
+ bt_disable_cnt++;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], bt all counters = 0, %d times!!\n",
+ bt_disable_cnt);
+ if (bt_disable_cnt >= 2) {
+ bt_disabled = true;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_DISABLE,
+ &bt_disabled);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is disabled !!\n");
+ halbtc8821a1ant_action_wifi_only(btcoexist);
+ }
+ }
+ if (pre_bt_disabled != bt_disabled) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is from %s to %s!!\n",
+ (pre_bt_disabled ? "disabled" : "enabled"),
+ (bt_disabled ? "disabled" : "enabled"));
+ pre_bt_disabled = bt_disabled;
+ if (bt_disabled) {
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_LEAVE_LPS,
+ NULL);
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_NORMAL_LPS,
+ NULL);
+ }
+ }
+}
+
+/*=============================================*/
+/**/
+/* Software Coex Mechanism start*/
+/**/
+/*=============================================*/
+
+/* SCO only or SCO+PAN(HS)*/
+static void halbtc8821a1ant_action_sco(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, true);
+}
+
+static void halbtc8821a1ant_action_hid(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, true);
+}
+
+/*A2DP only / PAN(EDR) only/ A2DP+PAN(HS)*/
+static void halbtc8821a1ant_action_a2dp(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+}
+
+static void halbtc8821a1ant_action_a2dp_pan_hs(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+}
+
+static void halbtc8821a1ant_action_pan_edr(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+}
+
+/*PAN(HS) only*/
+static void halbtc8821a1ant_action_pan_hs(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+}
+
+/*PAN(EDR)+A2DP*/
+static void halbtc8821a1ant_action_pan_edr_a2dp(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+}
+
+static void halbtc8821a1ant_action_pan_edr_hid(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, true);
+}
+
+/* HID+A2DP+PAN(EDR)*/
+static void btc8821a1ant_action_hid_a2dp_pan_edr(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, true);
+}
+
+static void halbtc8821a1ant_action_hid_a2dp(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_sw_mechanism(btcoexist, true);
+}
+
+/*=============================================*/
+/**/
+/* Non-Software Coex Mechanism start*/
+/**/
+/*=============================================*/
+
+static void halbtc8821a1ant_action_hs(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, FORCE_EXEC, 2);
+}
+
+static void halbtc8821a1ant_action_bt_inquiry(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool wifi_connected = false;
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_CONNECTED, &wifi_connected);
+
+ if (!wifi_connected) {
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ } else if ((bt_link_info->sco_exist) ||
+ (bt_link_info->hid_only)) {
+ /* SCO/HID-only busy*/
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 32);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ } else {
+ halbtc8821a1ant_power_save_state(btcoexist, BTC_PS_LPS_ON,
+ 0x50, 0x4);
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 30);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ }
+}
+
+static void btc8821a1ant_act_bt_sco_hid_only_busy(struct btc_coexist *btcoexist,
+						  u8 wifi_status)
+{
+ /* tdma and coex table*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+
+	/* both the assoc/auth/scan state and every other wifi_status
+	 * currently use coex table 1
+	 */
+	halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+}
+
+static void btc8821a1ant_act_wifi_con_bt_acl_busy(struct btc_coexist *btcoexist,
+ u8 wifi_status)
+{
+ u8 bt_rssi_state;
+
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+
+ bt_rssi_state = halbtc8821a1ant_bt_rssi_state(2, 28, 0);
+
+ if (bt_link_info->hid_only) {
+ /*HID*/
+ btc8821a1ant_act_bt_sco_hid_only_busy(btcoexist,
+ wifi_status);
+ coex_dm->auto_tdma_adjust = false;
+ return;
+ } else if (bt_link_info->a2dp_only) {
+ /*A2DP*/
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a1ant_tdma_dur_adj(btcoexist, wifi_status);
+ } else {
+ /*for low BT RSSI*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->auto_tdma_adjust = false;
+ }
+
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ } else if (bt_link_info->hid_exist && bt_link_info->a2dp_exist) {
+ /*HID+A2DP*/
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->auto_tdma_adjust = false;
+ } else {
+ /*for low BT RSSI*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->auto_tdma_adjust = false;
+ }
+
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ } else if ((bt_link_info->pan_only) ||
+ (bt_link_info->hid_exist && bt_link_info->pan_exist)) {
+ /*PAN(OPP, FTP), HID+PAN(OPP, FTP)*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ coex_dm->auto_tdma_adjust = false;
+ } else if (((bt_link_info->a2dp_exist) && (bt_link_info->pan_exist)) ||
+ (bt_link_info->hid_exist && bt_link_info->a2dp_exist &&
+ bt_link_info->pan_exist)) {
+ /*A2DP+PAN(OPP, FTP), HID+A2DP+PAN(OPP, FTP)*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 13);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ coex_dm->auto_tdma_adjust = false;
+ } else {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 11);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ coex_dm->auto_tdma_adjust = false;
+ }
+}
+
+static void halbtc8821a1ant_action_wifi_not_connected(
+ struct btc_coexist *btcoexist)
+{
+ /* power save state*/
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+
+ /* tdma and coex table*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 0);
+}
+
+static void btc8821a1ant_act_wifi_not_conn_scan(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 22);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+}
+
+static void halbtc8821a1ant_action_wifi_connected_scan(
+	struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+
+ /* power save state*/
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+
+ /* tdma and coex table*/
+ if (BT_8821A_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ if (bt_link_info->a2dp_exist && bt_link_info->pan_exist) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 22);
+ halbtc8821a1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ } else {
+			halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+						true, 20);
+			halbtc8821a1ant_coex_table_with_type(btcoexist,
+							     NORMAL_EXEC, 1);
+ }
+ } else if ((BT_8821A_1ANT_BT_STATUS_SCO_BUSY ==
+ coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)) {
+ btc8821a1ant_act_bt_sco_hid_only_busy(btcoexist,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_SCAN);
+ } else {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 20);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ }
+}
+
+static void btc8821a1ant_act_wifi_conn_sp_pkt(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool hs_connecting = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_CONNECTING, &hs_connecting);
+
+ halbtc8821a1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ /* tdma and coex table*/
+ if (BT_8821A_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ if (bt_link_info->a2dp_exist && bt_link_info->pan_exist) {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 22);
+ halbtc8821a1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ } else {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 20);
+ halbtc8821a1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 1);
+ }
+ } else {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 20);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, NORMAL_EXEC, 1);
+ }
+}
+
+static void halbtc8821a1ant_action_wifi_connected(struct btc_coexist *btcoexist)
+{
+ bool wifi_busy = false;
+ bool scan = false, link = false, roam = false;
+ bool under_4way = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], CoexForWifiConnect()===>\n");
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_4_WAY_PROGRESS, &under_4way);
+ if (under_4way) {
+ btc8821a1ant_act_wifi_conn_sp_pkt(btcoexist);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], CoexForWifiConnect(), return for wifi is under 4way<===\n");
+ return;
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+ if (scan || link || roam) {
+ halbtc8821a1ant_action_wifi_connected_scan(btcoexist);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], CoexForWifiConnect(), return for wifi is under scan<===\n");
+ return;
+ }
+
+ /* power save state*/
+ if (BT_8821A_1ANT_BT_STATUS_ACL_BUSY ==
+ coex_dm->bt_status && !btcoexist->bt_link_info.hid_only)
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_LPS_ON, 0x50, 0x4);
+ else
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+
+ /* tdma and coex table*/
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+ if (!wifi_busy) {
+ if (BT_8821A_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ btc8821a1ant_act_wifi_con_bt_acl_busy(btcoexist,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_IDLE);
+ } else if ((BT_8821A_1ANT_BT_STATUS_SCO_BUSY ==
+ coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)) {
+ btc8821a1ant_act_bt_sco_hid_only_busy(btcoexist,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_IDLE);
+ } else {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ halbtc8821a1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 2);
+ }
+ } else {
+ if (BT_8821A_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) {
+ btc8821a1ant_act_wifi_con_bt_acl_busy(btcoexist,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_BUSY);
+ } else if ((BT_8821A_1ANT_BT_STATUS_SCO_BUSY ==
+ coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_ACL_SCO_BUSY ==
+ coex_dm->bt_status)) {
+ btc8821a1ant_act_bt_sco_hid_only_busy(btcoexist,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_BUSY);
+ } else {
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ halbtc8821a1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 2);
+ }
+ }
+}
+
+static void btc8821a1ant_run_sw_coex_mech(struct btc_coexist *btcoexist)
+{
+ u8 algorithm = 0;
+
+ algorithm = halbtc8821a1ant_action_algorithm(btcoexist);
+ coex_dm->cur_algorithm = algorithm;
+
+ if (!halbtc8821a1ant_is_common_action(btcoexist)) {
+ switch (coex_dm->cur_algorithm) {
+ case BT_8821A_1ANT_COEX_ALGO_SCO:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = SCO.\n");
+ halbtc8821a1ant_action_sco(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HID.\n");
+ halbtc8821a1ant_action_hid(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = A2DP.\n");
+ halbtc8821a1ant_action_a2dp(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_A2DP_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = A2DP+PAN(HS).\n");
+ halbtc8821a1ant_action_a2dp_pan_hs(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = PAN(EDR).\n");
+ halbtc8821a1ant_action_pan_edr(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HS mode.\n");
+ halbtc8821a1ant_action_pan_hs(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_PANEDR_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = PAN+A2DP.\n");
+ halbtc8821a1ant_action_pan_edr_a2dp(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_PANEDR_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = PAN(EDR)+HID.\n");
+ halbtc8821a1ant_action_pan_edr_hid(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_HID_A2DP_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HID+A2DP+PAN.\n");
+ btc8821a1ant_action_hid_a2dp_pan_edr(btcoexist);
+ break;
+ case BT_8821A_1ANT_COEX_ALGO_HID_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = HID+A2DP.\n");
+ halbtc8821a1ant_action_hid_a2dp(btcoexist);
+ break;
+ default:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action algorithm = coexist All Off!!\n");
+ /*halbtc8821a1ant_coex_all_off(btcoexist);*/
+ break;
+ }
+ coex_dm->pre_algorithm = coex_dm->cur_algorithm;
+ }
+}
+
+static void halbtc8821a1ant_run_coexist_mechanism(struct btc_coexist *btcoexist)
+{
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ bool wifi_connected = false, bt_hs_on = false;
+ bool increase_scan_dev_num = false;
+ bool bt_ctrl_agg_buf_size = false;
+ u8 agg_buf_size = 5;
+ u8 wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ bool wifi_under_5g = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism()===>\n");
+
+ if (btcoexist->manual_control) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism(), return for Manual CTRL <===\n");
+ return;
+ }
+
+ if (btcoexist->stop_coex_dm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism(), return for Stop Coex DM <===\n");
+ return;
+ }
+
+ if (coex_sta->under_ips) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], wifi is under IPS !!!\n");
+ return;
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
+ if (wifi_under_5g) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism(), return for 5G <===\n");
+ halbtc8821a1ant_coex_under_5g(btcoexist);
+ return;
+ }
+
+ if ((BT_8821A_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_ACL_SCO_BUSY == coex_dm->bt_status))
+ increase_scan_dev_num = true;
+
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_INC_SCAN_DEV_NUM,
+ &increase_scan_dev_num);
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_CONNECTED, &wifi_connected);
+
+ if (!bt_link_info->sco_exist && !bt_link_info->hid_exist) {
+ halbtc8821a1ant_limited_tx(btcoexist, NORMAL_EXEC, 0, 0, 0, 0);
+ } else {
+ if (wifi_connected) {
+ wifi_rssi_state =
+ halbtc8821a1ant_WifiRssiState(btcoexist, 1, 2,
+ 30, 0);
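+			/* note: both rssi branches below currently apply
+			 * the same limited-tx parameters
+			 */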
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a1ant_limited_tx(btcoexist,
+ NORMAL_EXEC, 1, 1,
+ 1, 1);
+ } else {
+ halbtc8821a1ant_limited_tx(btcoexist,
+ NORMAL_EXEC, 1, 1,
+ 1, 1);
+ }
+ } else {
+ halbtc8821a1ant_limited_tx(btcoexist, NORMAL_EXEC,
+ 0, 0, 0, 0);
+ }
+ }
+
+ if (bt_link_info->sco_exist) {
+ bt_ctrl_agg_buf_size = true;
+ agg_buf_size = 0x3;
+ } else if (bt_link_info->hid_exist) {
+ bt_ctrl_agg_buf_size = true;
+ agg_buf_size = 0x5;
+ } else if (bt_link_info->a2dp_exist || bt_link_info->pan_exist) {
+ bt_ctrl_agg_buf_size = true;
+ agg_buf_size = 0x8;
+ }
+ halbtc8821a1ant_limited_rx(btcoexist, NORMAL_EXEC, false,
+ bt_ctrl_agg_buf_size, agg_buf_size);
+
+ btc8821a1ant_run_sw_coex_mech(btcoexist);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8821a1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8821a1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (!wifi_connected) {
+ bool scan = false, link = false, roam = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+			  "[BTCoex], wifi is non-connected idle !!!\n");
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+
+ if (scan || link || roam)
+ btc8821a1ant_act_wifi_not_conn_scan(btcoexist);
+ else
+ halbtc8821a1ant_action_wifi_not_connected(btcoexist);
+ } else {
+ /* wifi LPS/Busy*/
+ halbtc8821a1ant_action_wifi_connected(btcoexist);
+ }
+}
+
+static void halbtc8821a1ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ /* force to reset coex mechanism*/
+ /* sw all off*/
+ halbtc8821a1ant_sw_mechanism(btcoexist, false);
+
+ halbtc8821a1ant_ps_tdma(btcoexist, FORCE_EXEC, false, 8);
+ halbtc8821a1ant_coex_table_with_type(btcoexist, FORCE_EXEC, 0);
+}
+
+static void halbtc8821a1ant_init_hw_config(struct btc_coexist *btcoexist,
+ bool back_up)
+{
+ u8 u1_tmp = 0;
+ bool wifi_under_5g = false;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], 1Ant Init HW Config!!\n");
+
+ if (back_up) {
+ coex_dm->backup_arfr_cnt1 = btcoexist->btc_read_4byte(btcoexist,
+ 0x430);
+ coex_dm->backup_arfr_cnt2 = btcoexist->btc_read_4byte(btcoexist,
+ 0x434);
+ coex_dm->backup_retry_limit =
+ btcoexist->btc_read_2byte(btcoexist, 0x42a);
+ coex_dm->backup_ampdu_max_time =
+ btcoexist->btc_read_1byte(btcoexist, 0x456);
+ }
+
+ /* 0x790[5:0] = 0x5*/
+ u1_tmp = btcoexist->btc_read_1byte(btcoexist, 0x790);
+ u1_tmp &= 0xc0;
+ u1_tmp |= 0x5;
+ btcoexist->btc_write_1byte(btcoexist, 0x790, u1_tmp);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
+
+ /*Antenna config*/
+ if (wifi_under_5g)
+ halbtc8821a1ant_set_ant_path(btcoexist, BTC_ANT_PATH_BT,
+ true, false);
+ else
+ halbtc8821a1ant_set_ant_path(btcoexist, BTC_ANT_PATH_PTA,
+ true, false);
+ /* PTA parameter*/
+ halbtc8821a1ant_coex_table_with_type(btcoexist, FORCE_EXEC, 0);
+
+ /* Enable counter statistics*/
+	/* 0x76e[3] = 1, WLAN_Act controlled by PTA */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+ btcoexist->btc_write_1byte(btcoexist, 0x778, 0x3);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x40, 0x20, 0x1);
+}
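For illustration, a standalone sketch (not part of this patch) of the
read-modify-write idiom used above for 0x790[5:0] = 0x5: clear the field
bits, keep the rest (mask 0xc0), then OR in the new value. reg_read() and
reg_write() are hypothetical stand-ins for the btc_read_1byte and
btc_write_1byte callbacks.

#include <stdint.h>
#include <stdio.h>

static uint8_t regs[0x800];	/* toy register space */

static uint8_t reg_read(uint16_t addr) { return regs[addr]; }
static void reg_write(uint16_t addr, uint8_t val) { regs[addr] = val; }

/* update only the bits selected by mask, leaving the others untouched */
static void set_reg_field(uint16_t addr, uint8_t mask, uint8_t val)
{
	uint8_t tmp = reg_read(addr);

	tmp &= ~mask;
	tmp |= (val & mask);
	reg_write(addr, tmp);
}

int main(void)
{
	regs[0x790] = 0xff;
	set_reg_field(0x790, 0x3f, 0x5);	/* 0x790[5:0] = 0x5 */
	printf("0x790 = 0x%02x\n", reg_read(0x790));	/* prints 0xc5 */
	return 0;
}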
+
+/*============================================================*/
+/* workaround functions start with wa_halbtc8821a1ant_ */
+/*============================================================*/
+/*============================================================*/
+/* extern functions start with ex_halbtc8821a1ant_ */
+/*============================================================*/
+void ex_halbtc8821a1ant_init_hwconfig(struct btc_coexist *btcoexist)
+{
+ halbtc8821a1ant_init_hw_config(btcoexist, true);
+}
+
+void ex_halbtc8821a1ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Coex Mechanism Init!!\n");
+
+ btcoexist->stop_coex_dm = false;
+
+ halbtc8821a1ant_init_coex_dm(btcoexist);
+
+ halbtc8821a1ant_query_bt_info(btcoexist);
+}
+
+void ex_halbtc8821a1ant_display_coex_info(struct btc_coexist *btcoexist)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ struct btc_bt_link_info *bt_link_info = &btcoexist->bt_link_info;
+ struct rtl_priv *rtlpriv = btcoexist->adapter;
+ u8 u1_tmp[4], i, bt_info_ext, ps_tdma_case = 0;
+ u16 u2_tmp[4];
+ u32 u4_tmp[4];
+ bool roam = false, scan = false, link = false, wifi_under_5g = false;
+ bool bt_hs_on = false, wifi_busy = false;
+ long wifi_rssi = 0, bt_hs_rssi = 0;
+ u32 wifi_bw, wifi_traffic_dir;
+ u8 wifi_dot11_chnl, wifi_hs_chnl;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[BT Coexist info]============");
+
+ if (btcoexist->manual_control) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[Under Manual Control]============");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ==========================================");
+ }
+ if (btcoexist->stop_coex_dm) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[Coex is STOPPED]============");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ==========================================");
+ }
+
+ if (!board_info->bt_exist) {
+		RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n BT does not exist !!!");
+ return;
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d/ %d",
+ "Ant PG Num/ Ant Mech/ Ant Pos:",
+ board_info->pg_ant_num,
+ board_info->btdm_ant_num,
+ board_info->btdm_ant_pos);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s / %d", "BT stack/ hci ext ver",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER,
+ &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d_%x/ 0x%x/ 0x%x(%d)",
+ "CoexVer/ FwVer/ PatchVer",
+ glcoex_ver_date_8821a_1ant,
+ glcoex_ver_8821a_1ant,
+ fw_ver, bt_patch_ver,
+ bt_patch_ver);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION,
+ &bt_hs_on);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_DOT11_CHNL,
+ &wifi_dot11_chnl);
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_HS_CHNL,
+ &wifi_hs_chnl);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d / %d(%d)",
+ "Dot11 channel / HsChnl(HsMode)",
+ wifi_dot11_chnl, wifi_hs_chnl, bt_hs_on);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x ",
+ "H2C Wifi inform bt chnl Info",
+ coex_dm->wifi_chnl_info[0], coex_dm->wifi_chnl_info[1],
+ coex_dm->wifi_chnl_info[2]);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d", "Wifi rssi/ HS rssi",
+ (int)wifi_rssi, (int)bt_hs_rssi);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d/ %d ", "Wifi link/ roam/ scan",
+ link, roam, scan);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G,
+ &wifi_under_5g);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW,
+ &wifi_bw);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY,
+ &wifi_busy);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_TRAFFIC_DIRECTION,
+ &wifi_traffic_dir);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s / %s/ %s ", "Wifi status",
+ (wifi_under_5g ? "5G" : "2.4G"),
+ ((BTC_WIFI_BW_LEGACY == wifi_bw) ? "Legacy" :
+ (((BTC_WIFI_BW_HT40 == wifi_bw) ? "HT40" : "HT20"))),
+ ((!wifi_busy) ? "idle" :
+ ((BTC_WIFI_TRAFFIC_TX == wifi_traffic_dir) ?
+ "uplink" : "downlink")));
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = [%s/ %d/ %d] ", "BT [status/ rssi/ retryCnt]",
+ ((btcoexist->bt_info.bt_disabled) ? ("disabled") :
+ ((coex_sta->c2h_bt_inquiry_page) ? ("inquiry/page scan") :
+ ((BT_8821A_1ANT_BT_STATUS_NON_CONNECTED_IDLE ==
+ coex_dm->bt_status) ?
+ "non-connected idle" :
+ ((BT_8821A_1ANT_BT_STATUS_CONNECTED_IDLE ==
+ coex_dm->bt_status) ?
+ "connected-idle" : "busy")))),
+ coex_sta->bt_rssi, coex_sta->bt_retry_cnt);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d / %d / %d / %d", "SCO/HID/PAN/A2DP",
+ bt_link_info->sco_exist,
+ bt_link_info->hid_exist,
+ bt_link_info->pan_exist,
+ bt_link_info->a2dp_exist);
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_BT_LINK_INFO);
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s",
+ "BT Info A2DP rate",
+ (bt_info_ext&BIT0) ?
+ "Basic rate" : "EDR rate");
+
+ for (i = 0; i < BT_INFO_SRC_8821A_1ANT_MAX; i++) {
+ if (coex_sta->bt_info_c2h_cnt[i]) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x %02x %02x(%d)",
+ glbt_info_src_8821a_1ant[i],
+ coex_sta->bt_info_c2h[i][0],
+ coex_sta->bt_info_c2h[i][1],
+ coex_sta->bt_info_c2h[i][2],
+ coex_sta->bt_info_c2h[i][3],
+ coex_sta->bt_info_c2h[i][4],
+ coex_sta->bt_info_c2h[i][5],
+ coex_sta->bt_info_c2h[i][6],
+ coex_sta->bt_info_c2h_cnt[i]);
+ }
+ }
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s/%s, (0x%x/0x%x)",
+ "PS state, IPS/LPS, (lps/rpwm)",
+ ((coex_sta->under_ips ? "IPS ON" : "IPS OFF")),
+ ((coex_sta->under_Lps ? "LPS ON" : "LPS OFF")),
+ btcoexist->bt_info.lps_val,
+ btcoexist->bt_info.rpwm_val);
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_FW_PWR_MODE_CMD);
+
+ if (!btcoexist->manual_control) {
+ /* Sw mechanism*/
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s", "============[Sw mechanism]============");
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d", "SM[LowPenaltyRA]",
+ coex_dm->cur_low_penalty_ra);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s/ %s/ %d ",
+ "DelBA/ BtCtrlAgg/ AggSize",
+ (btcoexist->bt_info.reject_agg_pkt ? "Yes" : "No"),
+ (btcoexist->bt_info.bt_ctrl_buf_size ? "Yes" : "No"),
+ btcoexist->bt_info.agg_buf_size);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x ", "Rate Mask",
+ btcoexist->bt_info.ra_mask);
+
+ /* Fw mechanism*/
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Fw mechanism]============");
+
+ ps_tdma_case = coex_dm->cur_ps_tdma;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x case-%d (auto:%d)",
+ "PS TDMA",
+ coex_dm->ps_tdma_para[0],
+ coex_dm->ps_tdma_para[1],
+ coex_dm->ps_tdma_para[2],
+ coex_dm->ps_tdma_para[3],
+ coex_dm->ps_tdma_para[4],
+ ps_tdma_case,
+ coex_dm->auto_tdma_adjust);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x ",
+ "Latest error condition(should be 0)",
+ coex_dm->error_condition);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d ", "IgnWlanAct",
+ coex_dm->cur_ignore_wlan_act);
+ }
+
+ /* Hw setting*/
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s", "============[Hw setting]============");
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/0x%x/0x%x/0x%x",
+ "backup ARFR1/ARFR2/RL/AMaxTime",
+ coex_dm->backup_arfr_cnt1,
+ coex_dm->backup_arfr_cnt2,
+ coex_dm->backup_retry_limit,
+ coex_dm->backup_ampdu_max_time);
+
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x430);
+ u4_tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x434);
+ u2_tmp[0] = btcoexist->btc_read_2byte(btcoexist, 0x42a);
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x456);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/0x%x/0x%x/0x%x",
+ "0x430/0x434/0x42a/0x456",
+ u4_tmp[0], u4_tmp[1], u2_tmp[0], u1_tmp[0]);
+
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x778);
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc58);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x", "0x778/ 0xc58[29:25]",
+ u1_tmp[0], (u4_tmp[0]&0x3e000000) >> 25);
+
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x8db);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x", "0x8db[6:5]",
+ ((u1_tmp[0]&0x60)>>5));
+
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x975);
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xcb4);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0xcb4[29:28]/0xcb4[7:0]/0x974[9:8]",
+ (u4_tmp[0] & 0x30000000)>>28,
+ u4_tmp[0] & 0xff,
+ u1_tmp[0] & 0x3);
+
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x40);
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u1_tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x64);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x40/0x4c[24:23]/0x64[0]",
+ u1_tmp[0], ((u4_tmp[0]&0x01800000)>>23), u1_tmp[1]&0x1);
+
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x550);
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x522);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x", "0x550(bcn ctrl)/0x522",
+ u4_tmp[0], u1_tmp[0]);
+
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc50);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x", "0xc50(dig)",
+ u4_tmp[0]&0xff);
+
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xf48);
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0xa5d);
+ u1_tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0xa5c);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x", "OFDM-FA/ CCK-FA",
+ u4_tmp[0], (u1_tmp[0]<<8) + u1_tmp[1]);
+
+ u4_tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x6c0);
+ u4_tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x6c4);
+ u4_tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x6c8);
+ u1_tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x6cc);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x/ 0x%x/ 0x%x/ 0x%x",
+ "0x6c0/0x6c4/0x6c8/0x6cc(coexTable)",
+ u4_tmp[0], u4_tmp[1], u4_tmp[2], u1_tmp[0]);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d", "0x770(high-pri rx/tx)",
+ coex_sta->high_priority_rx, coex_sta->high_priority_tx);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d", "0x774(low-pri rx/tx)",
+ coex_sta->low_priority_rx, coex_sta->low_priority_tx);
+#if (BT_AUTO_REPORT_ONLY_8821A_1ANT == 1)
+ halbtc8821a1ant_monitor_bt_ctr(btcoexist);
+#endif
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_COEX_STATISTICS);
+}
+
+void ex_halbtc8821a1ant_ips_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm)
+ return;
+
+ if (BTC_IPS_ENTER == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS ENTER notify\n");
+ coex_sta->under_ips = true;
+ halbtc8821a1ant_set_ant_path(btcoexist,
+ BTC_ANT_PATH_BT, false, true);
+ /*set PTA control*/
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 8);
+ halbtc8821a1ant_coex_table_with_type(btcoexist,
+ NORMAL_EXEC, 0);
+ } else if (BTC_IPS_LEAVE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS LEAVE notify\n");
+ coex_sta->under_ips = false;
+
+ halbtc8821a1ant_run_coexist_mechanism(btcoexist);
+ }
+}
+
+void ex_halbtc8821a1ant_lps_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (btcoexist->manual_control || btcoexist->stop_coex_dm)
+ return;
+
+ if (BTC_LPS_ENABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS ENABLE notify\n");
+ coex_sta->under_Lps = true;
+ } else if (BTC_LPS_DISABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS DISABLE notify\n");
+ coex_sta->under_Lps = false;
+ }
+}
+
+void ex_halbtc8821a1ant_scan_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ bool wifi_connected = false, bt_hs_on = false;
+
+ if (btcoexist->manual_control ||
+ btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_CONNECTED, &wifi_connected);
+
+ halbtc8821a1ant_query_bt_info(btcoexist);
+
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8821a1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8821a1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (BTC_SCAN_START == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN START notify\n");
+ if (!wifi_connected) {
+ /* non-connected scan*/
+ btc8821a1ant_act_wifi_not_conn_scan(btcoexist);
+ } else {
+ /* wifi is connected*/
+ halbtc8821a1ant_action_wifi_connected_scan(btcoexist);
+ }
+ } else if (BTC_SCAN_FINISH == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN FINISH notify\n");
+ if (!wifi_connected) {
+ /* non-connected scan*/
+ halbtc8821a1ant_action_wifi_not_connected(btcoexist);
+ } else {
+ halbtc8821a1ant_action_wifi_connected(btcoexist);
+ }
+ }
+}
+
+void ex_halbtc8821a1ant_connect_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ bool wifi_connected = false, bt_hs_on = false;
+
+ if (btcoexist->manual_control ||
+ btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8821a1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8821a1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (BTC_ASSOCIATE_START == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT START notify\n");
+ btc8821a1ant_act_wifi_not_conn_scan(btcoexist);
+ } else if (BTC_ASSOCIATE_FINISH == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT FINISH notify\n");
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_CONNECTED, &wifi_connected);
+ if (!wifi_connected) {
+ /* non-connected scan*/
+ halbtc8821a1ant_action_wifi_not_connected(btcoexist);
+ } else {
+ halbtc8821a1ant_action_wifi_connected(btcoexist);
+ }
+ }
+}
+
+void ex_halbtc8821a1ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ u8 h2c_parameter[3] = {0};
+ u32 wifi_bw;
+ u8 wifi_central_chnl;
+
+ if (btcoexist->manual_control ||
+ btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ if (BTC_MEDIA_CONNECT == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA connect notify\n");
+ } else {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA disconnect notify\n");
+ }
+
+	/* only in 2.4G do we need to inform bt of the channel mask */
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_U1_WIFI_CENTRAL_CHNL,
+ &wifi_central_chnl);
+ if ((BTC_MEDIA_CONNECT == type) &&
+ (wifi_central_chnl <= 14)) {
+ /*h2c_parameter[0] = 0x1;*/
+ h2c_parameter[0] = 0x0;
+ h2c_parameter[1] = wifi_central_chnl;
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw)
+ h2c_parameter[2] = 0x30;
+ else
+ h2c_parameter[2] = 0x20;
+ }
+
+ coex_dm->wifi_chnl_info[0] = h2c_parameter[0];
+ coex_dm->wifi_chnl_info[1] = h2c_parameter[1];
+ coex_dm->wifi_chnl_info[2] = h2c_parameter[2];
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x66 = 0x%x\n",
+ h2c_parameter[0]<<16|h2c_parameter[1]<<8|h2c_parameter[2]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x66, 3, h2c_parameter);
+}
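For illustration, a standalone sketch (not part of this patch) of how the
three h2c_parameter bytes above combine into the value shown in the debug
print. The field meanings (enable flag, central channel, bandwidth code
0x30 for HT40 and 0x20 otherwise) come from the function body; the helper
name is made up.

#include <stdint.h>
#include <stdio.h>

static uint32_t pack_h2c_0x66(uint8_t en, uint8_t chnl, uint8_t bw)
{
	/* same byte order as the driver's debug print:
	 * param[0] << 16 | param[1] << 8 | param[2]
	 */
	return (uint32_t)en << 16 | (uint32_t)chnl << 8 | bw;
}

int main(void)
{
	/* e.g. connected on channel 6 in HT40 */
	printf("FW write 0x66 = 0x%x\n", pack_h2c_0x66(0x0, 6, 0x30));
	return 0;
}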
+
+void ex_halbtc8821a1ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ bool bt_hs_on = false;
+
+ if (btcoexist->manual_control ||
+ btcoexist->stop_coex_dm ||
+ btcoexist->bt_info.bt_disabled)
+ return;
+
+ coex_sta->special_pkt_period_cnt = 0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ if (coex_sta->c2h_bt_inquiry_page) {
+ halbtc8821a1ant_action_bt_inquiry(btcoexist);
+ return;
+ } else if (bt_hs_on) {
+ halbtc8821a1ant_action_hs(btcoexist);
+ return;
+ }
+
+ if (BTC_PACKET_DHCP == type ||
+ BTC_PACKET_EAPOL == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], special Packet(%d) notify\n", type);
+ btc8821a1ant_act_wifi_conn_sp_pkt(btcoexist);
+ }
+}
+
+void ex_halbtc8821a1ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmp_buf, u8 length)
+{
+ u8 bt_info = 0;
+ u8 i, rsp_source = 0;
+ bool wifi_connected = false;
+ bool bt_busy = false;
+ bool wifi_under_5g = false;
+
+ coex_sta->c2h_bt_info_req_sent = false;
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
+
+ rsp_source = tmp_buf[0]&0xf;
+ if (rsp_source >= BT_INFO_SRC_8821A_1ANT_MAX)
+ rsp_source = BT_INFO_SRC_8821A_1ANT_WIFI_FW;
+ coex_sta->bt_info_c2h_cnt[rsp_source]++;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Bt info[%d], length = %d, hex data = [",
+ rsp_source, length);
+ for (i = 0; i < length; i++) {
+ coex_sta->bt_info_c2h[rsp_source][i] = tmp_buf[i];
+ if (i == 1)
+ bt_info = tmp_buf[i];
+ if (i == length-1) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x]\n", tmp_buf[i]);
+ } else {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x, ", tmp_buf[i]);
+ }
+ }
+
+ if (BT_INFO_SRC_8821A_1ANT_WIFI_FW != rsp_source) {
+ coex_sta->bt_retry_cnt = /* [3:0]*/
+ coex_sta->bt_info_c2h[rsp_source][2]&0xf;
+
+ coex_sta->bt_rssi =
+ coex_sta->bt_info_c2h[rsp_source][3]*2+10;
+
+ coex_sta->bt_info_ext =
+ coex_sta->bt_info_c2h[rsp_source][4];
+
+		/* Here we need to resend some wifi info to BT
+		 * because bt was reset and lost the info.
+		 */
+ if (coex_sta->bt_info_ext & BIT1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT ext info bit1 check, send wifi BW&Chnl to BT!!\n");
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ if (wifi_connected) {
+ ex_halbtc8821a1ant_media_status_notify(btcoexist,
+ BTC_MEDIA_CONNECT);
+ } else {
+ ex_halbtc8821a1ant_media_status_notify(btcoexist,
+ BTC_MEDIA_DISCONNECT);
+ }
+ }
+
+ if ((coex_sta->bt_info_ext & BIT3) && !wifi_under_5g) {
+ if (!btcoexist->manual_control &&
+ !btcoexist->stop_coex_dm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT ext info bit3 check, set BT NOT to ignore Wlan active!!\n");
+ halbtc8821a1ant_ignore_wlan_act(btcoexist,
+ FORCE_EXEC,
+ false);
+ }
+ }
+#if (BT_AUTO_REPORT_ONLY_8821A_1ANT == 0)
+ if (!(coex_sta->bt_info_ext & BIT4)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT ext info bit4 check, set BT to enable Auto Report!!\n");
+ halbtc8821a1ant_bt_auto_report(btcoexist,
+ FORCE_EXEC, true);
+ }
+#endif
+ }
+
+ /* check BIT2 first ==> check if bt is under inquiry or page scan*/
+ if (bt_info & BT_INFO_8821A_1ANT_B_INQ_PAGE)
+ coex_sta->c2h_bt_inquiry_page = true;
+ else
+ coex_sta->c2h_bt_inquiry_page = false;
+
+ /* set link exist status*/
+ if (!(bt_info&BT_INFO_8821A_1ANT_B_CONNECTION)) {
+ coex_sta->bt_link_exist = false;
+ coex_sta->pan_exist = false;
+ coex_sta->a2dp_exist = false;
+ coex_sta->hid_exist = false;
+ coex_sta->sco_exist = false;
+ } else {
+ /* connection exists*/
+ coex_sta->bt_link_exist = true;
+ if (bt_info & BT_INFO_8821A_1ANT_B_FTP)
+ coex_sta->pan_exist = true;
+ else
+ coex_sta->pan_exist = false;
+ if (bt_info & BT_INFO_8821A_1ANT_B_A2DP)
+ coex_sta->a2dp_exist = true;
+ else
+ coex_sta->a2dp_exist = false;
+ if (bt_info & BT_INFO_8821A_1ANT_B_HID)
+ coex_sta->hid_exist = true;
+ else
+ coex_sta->hid_exist = false;
+ if (bt_info & BT_INFO_8821A_1ANT_B_SCO_ESCO)
+ coex_sta->sco_exist = true;
+ else
+ coex_sta->sco_exist = false;
+ }
+
+ halbtc8821a1ant_update_bt_link_info(btcoexist);
+
+ if (!(bt_info&BT_INFO_8821A_1ANT_B_CONNECTION)) {
+ coex_dm->bt_status = BT_8821A_1ANT_BT_STATUS_NON_CONNECTED_IDLE;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT Non-Connected idle!!!\n");
+ } else if (bt_info == BT_INFO_8821A_1ANT_B_CONNECTION) {
+		/* connection exists but not busy */
+ coex_dm->bt_status = BT_8821A_1ANT_BT_STATUS_CONNECTED_IDLE;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT Connected-idle!!!\n");
+ } else if ((bt_info&BT_INFO_8821A_1ANT_B_SCO_ESCO) ||
+ (bt_info&BT_INFO_8821A_1ANT_B_SCO_BUSY)) {
+ coex_dm->bt_status = BT_8821A_1ANT_BT_STATUS_SCO_BUSY;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT SCO busy!!!\n");
+ } else if (bt_info&BT_INFO_8821A_1ANT_B_ACL_BUSY) {
+ if (BT_8821A_1ANT_BT_STATUS_ACL_BUSY != coex_dm->bt_status)
+ coex_dm->auto_tdma_adjust = false;
+ coex_dm->bt_status = BT_8821A_1ANT_BT_STATUS_ACL_BUSY;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT ACL busy!!!\n");
+ } else {
+ coex_dm->bt_status = BT_8821A_1ANT_BT_STATUS_MAX;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BtInfoNotify(), BT Non-Defined state!!!\n");
+ }
+
+ if ((BT_8821A_1ANT_BT_STATUS_ACL_BUSY == coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_SCO_BUSY == coex_dm->bt_status) ||
+ (BT_8821A_1ANT_BT_STATUS_ACL_SCO_BUSY == coex_dm->bt_status))
+ bt_busy = true;
+ else
+ bt_busy = false;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_BL_BT_TRAFFIC_BUSY, &bt_busy);
+
+ halbtc8821a1ant_run_coexist_mechanism(btcoexist);
+}
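For illustration, a standalone sketch (not part of this patch) of the
link-exist decoding above: one status byte carries a connection bit plus
one bit per profile, and all profile flags are cleared when no connection
exists. The bit positions mirror the BT_INFO_8821A_1ANT_B_* defines in the
header below; the struct is a simplified stand-in for coex_sta.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define B_FTP		(1 << 7)	/* pan */
#define B_A2DP		(1 << 6)
#define B_HID		(1 << 5)
#define B_SCO_ESCO	(1 << 1)
#define B_CONNECTION	(1 << 0)

struct link_info {
	bool pan_exist, a2dp_exist, hid_exist, sco_exist;
};

static void parse_bt_info(uint8_t bt_info, struct link_info *li)
{
	if (!(bt_info & B_CONNECTION)) {
		li->pan_exist = false;
		li->a2dp_exist = false;
		li->hid_exist = false;
		li->sco_exist = false;
		return;
	}
	li->pan_exist = bt_info & B_FTP;
	li->a2dp_exist = bt_info & B_A2DP;
	li->hid_exist = bt_info & B_HID;
	li->sco_exist = bt_info & B_SCO_ESCO;
}

int main(void)
{
	struct link_info li = { 0 };

	parse_bt_info(B_CONNECTION | B_A2DP, &li);
	printf("a2dp=%d hid=%d\n", li.a2dp_exist, li.hid_exist);
	return 0;
}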
+
+void ex_halbtc8821a1ant_halt_notify(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Halt notify\n");
+
+ btcoexist->stop_coex_dm = true;
+
+ halbtc8821a1ant_set_ant_path(btcoexist,
+ BTC_ANT_PATH_BT, false, true);
+ halbtc8821a1ant_ignore_wlan_act(btcoexist, FORCE_EXEC, true);
+
+ halbtc8821a1ant_power_save_state(btcoexist,
+ BTC_PS_WIFI_NATIVE, 0x0, 0x0);
+ halbtc8821a1ant_ps_tdma(btcoexist, FORCE_EXEC, false, 0);
+
+ ex_halbtc8821a1ant_media_status_notify(btcoexist,
+ BTC_MEDIA_DISCONNECT);
+}
+
+void ex_halbtc8821a1ant_pnp_notify(struct btc_coexist *btcoexist, u8 pnp_state)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Pnp notify\n");
+
+ if (BTC_WIFI_PNP_SLEEP == pnp_state) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Pnp notify to SLEEP\n");
+ btcoexist->stop_coex_dm = true;
+ halbtc8821a1ant_ignore_wlan_act(btcoexist, FORCE_EXEC, true);
+ halbtc8821a1ant_power_save_state(btcoexist, BTC_PS_WIFI_NATIVE,
+ 0x0, 0x0);
+ halbtc8821a1ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 9);
+ } else if (BTC_WIFI_PNP_WAKE_UP == pnp_state) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Pnp notify to WAKE UP\n");
+ btcoexist->stop_coex_dm = false;
+ halbtc8821a1ant_init_hw_config(btcoexist, false);
+ halbtc8821a1ant_init_coex_dm(btcoexist);
+ halbtc8821a1ant_query_bt_info(btcoexist);
+ }
+}
+
+void ex_halbtc8821a1ant_periodical(struct btc_coexist *btcoexist)
+{
+ static u8 dis_ver_info_cnt;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], ==========================Periodical===========================\n");
+
+ if (dis_ver_info_cnt <= 5) {
+ dis_ver_info_cnt += 1;
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ****************************************************************\n");
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Ant PG Num/ Ant Mech/ Ant Pos = %d/ %d/ %d\n",
+ board_info->pg_ant_num,
+ board_info->btdm_ant_num,
+ board_info->btdm_ant_pos);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], BT stack/ hci ext ver = %s / %d\n",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER,
+ &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], CoexVer/ FwVer/ PatchVer = %d_%x/ 0x%x/ 0x%x(%d)\n",
+ glcoex_ver_date_8821a_1ant,
+ glcoex_ver_8821a_1ant,
+ fw_ver, bt_patch_ver,
+ bt_patch_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ****************************************************************\n");
+ }
+
+#if (BT_AUTO_REPORT_ONLY_8821A_1ANT == 0)
+ halbtc8821a1ant_query_bt_info(btcoexist);
+ halbtc8821a1ant_monitor_bt_ctr(btcoexist);
+ btc8821a1ant_mon_bt_en_dis(btcoexist);
+#else
+ if (halbtc8821a1ant_Is_wifi_status_changed(btcoexist) ||
+ coex_dm->auto_tdma_adjust) {
+ if (coex_sta->special_pkt_period_cnt > 2)
+ halbtc8821a1ant_run_coexist_mechanism(btcoexist);
+ }
+
+ coex_sta->special_pkt_period_cnt++;
+#endif
+}
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a1ant.h b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a1ant.h
new file mode 100644
index 0000000..20e9048
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a1ant.h
@@ -0,0 +1,188 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+
+/*===========================================
+ * The following is for 8821A 1ANT BT Co-exist definition
+ *===========================================
+ */
+#define BT_AUTO_REPORT_ONLY_8821A_1ANT 0
+
+#define BT_INFO_8821A_1ANT_B_FTP BIT7
+#define BT_INFO_8821A_1ANT_B_A2DP BIT6
+#define BT_INFO_8821A_1ANT_B_HID BIT5
+#define BT_INFO_8821A_1ANT_B_SCO_BUSY BIT4
+#define BT_INFO_8821A_1ANT_B_ACL_BUSY BIT3
+#define BT_INFO_8821A_1ANT_B_INQ_PAGE BIT2
+#define BT_INFO_8821A_1ANT_B_SCO_ESCO BIT1
+#define BT_INFO_8821A_1ANT_B_CONNECTION BIT0
+
+#define BT_INFO_8821A_1ANT_A2DP_BASIC_RATE(_BT_INFO_EXT_) \
+ (((_BT_INFO_EXT_&BIT0)) ? true : false)
+
+#define BTC_RSSI_COEX_THRESH_TOL_8821A_1ANT 2
+
+enum _BT_INFO_SRC_8821A_1ANT {
+ BT_INFO_SRC_8821A_1ANT_WIFI_FW = 0x0,
+ BT_INFO_SRC_8821A_1ANT_BT_RSP = 0x1,
+ BT_INFO_SRC_8821A_1ANT_BT_ACTIVE_SEND = 0x2,
+ BT_INFO_SRC_8821A_1ANT_MAX
+};
+
+enum _BT_8821A_1ANT_BT_STATUS {
+ BT_8821A_1ANT_BT_STATUS_NON_CONNECTED_IDLE = 0x0,
+ BT_8821A_1ANT_BT_STATUS_CONNECTED_IDLE = 0x1,
+ BT_8821A_1ANT_BT_STATUS_INQ_PAGE = 0x2,
+ BT_8821A_1ANT_BT_STATUS_ACL_BUSY = 0x3,
+ BT_8821A_1ANT_BT_STATUS_SCO_BUSY = 0x4,
+ BT_8821A_1ANT_BT_STATUS_ACL_SCO_BUSY = 0x5,
+ BT_8821A_1ANT_BT_STATUS_MAX
+};
+
+enum _BT_8821A_1ANT_WIFI_STATUS {
+ BT_8821A_1ANT_WIFI_STATUS_NON_CONNECTED_IDLE = 0x0,
+ BT_8821A_1ANT_WIFI_STATUS_NON_CONNECTED_ASSO_AUTH_SCAN = 0x1,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_SCAN = 0x2,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_SPECIAL_PKT = 0x3,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_IDLE = 0x4,
+ BT_8821A_1ANT_WIFI_STATUS_CONNECTED_BUSY = 0x5,
+ BT_8821A_1ANT_WIFI_STATUS_MAX
+};
+
+enum BT_8821A_1ANT_COEX_ALGO {
+ BT_8821A_1ANT_COEX_ALGO_UNDEFINED = 0x0,
+ BT_8821A_1ANT_COEX_ALGO_SCO = 0x1,
+ BT_8821A_1ANT_COEX_ALGO_HID = 0x2,
+ BT_8821A_1ANT_COEX_ALGO_A2DP = 0x3,
+ BT_8821A_1ANT_COEX_ALGO_A2DP_PANHS = 0x4,
+ BT_8821A_1ANT_COEX_ALGO_PANEDR = 0x5,
+ BT_8821A_1ANT_COEX_ALGO_PANHS = 0x6,
+ BT_8821A_1ANT_COEX_ALGO_PANEDR_A2DP = 0x7,
+ BT_8821A_1ANT_COEX_ALGO_PANEDR_HID = 0x8,
+ BT_8821A_1ANT_COEX_ALGO_HID_A2DP_PANEDR = 0x9,
+ BT_8821A_1ANT_COEX_ALGO_HID_A2DP = 0xa,
+ BT_8821A_1ANT_COEX_ALGO_MAX = 0xb,
+};
+
+struct coex_dm_8821a_1ant {
+ /* fw mechanism */
+ bool cur_ignore_wlan_act;
+ bool pre_ignore_wlan_act;
+ u8 pre_ps_tdma;
+ u8 cur_ps_tdma;
+ u8 ps_tdma_para[5];
+ u8 tdma_adj_type;
+ bool auto_tdma_adjust;
+ bool pre_ps_tdma_on;
+ bool cur_ps_tdma_on;
+ bool pre_bt_auto_report;
+ bool cur_bt_auto_report;
+ u8 pre_lps;
+ u8 cur_lps;
+ u8 pre_rpwm;
+ u8 cur_rpwm;
+
+ /* sw mechanism */
+ bool pre_low_penalty_ra;
+ bool cur_low_penalty_ra;
+ u32 pre_val_0x6c0;
+ u32 cur_val_0x6c0;
+ u32 pre_val_0x6c4;
+ u32 cur_val_0x6c4;
+ u32 pre_val_0x6c8;
+ u32 cur_val_0x6c8;
+ u8 pre_val_0x6cc;
+ u8 cur_val_0x6cc;
+ /* Auto Rate Fallback Retry cnt */
+ u32 backup_arfr_cnt1;
+ /* Auto Rate Fallback Retry cnt */
+ u32 backup_arfr_cnt2;
+ u16 backup_retry_limit;
+ u8 backup_ampdu_max_time;
+
+ /* algorithm related */
+ u8 pre_algorithm;
+ u8 cur_algorithm;
+ u8 bt_status;
+ u8 wifi_chnl_info[3];
+
+ u32 pre_ra_mask;
+ u32 cur_ra_mask;
+ u8 pre_arfr_type;
+ u8 cur_arfr_type;
+ u8 pre_retry_limit_type;
+ u8 cur_retry_limit_type;
+ u8 pre_ampdu_time_type;
+ u8 cur_ampdu_time_type;
+
+ u8 error_condition;
+};
+
+struct coex_sta_8821a_1ant {
+ bool bt_link_exist;
+ bool sco_exist;
+ bool a2dp_exist;
+ bool hid_exist;
+ bool pan_exist;
+
+ bool under_Lps;
+ bool under_ips;
+ u32 special_pkt_period_cnt;
+ u32 high_priority_tx;
+ u32 high_priority_rx;
+ u32 low_priority_tx;
+ u32 low_priority_rx;
+ u8 bt_rssi;
+ u8 pre_bt_rssi_state;
+ u8 pre_wifi_rssi_state[4];
+ bool c2h_bt_info_req_sent;
+ u8 bt_info_c2h[BT_INFO_SRC_8821A_1ANT_MAX][10];
+ u32 bt_info_c2h_cnt[BT_INFO_SRC_8821A_1ANT_MAX];
+ bool c2h_bt_inquiry_page;
+ u8 bt_retry_cnt;
+ u8 bt_info_ext;
+};
+
+/*===========================================
+ * The following is interface which will notify coex module.
+ *===========================================
+ */
+void ex_halbtc8821a1ant_init_hwconfig(struct btc_coexist *btcoexist);
+void ex_halbtc8821a1ant_init_coex_dm(struct btc_coexist *btcoexist);
+void ex_halbtc8821a1ant_ips_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a1ant_lps_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a1ant_scan_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a1ant_connect_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a1ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8821a1ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8821a1ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmpbuf, u8 length);
+void ex_halbtc8821a1ant_halt_notify(struct btc_coexist *btcoexist);
+void ex_halbtc8821a1ant_pnp_notify(struct btc_coexist *btcoexist, u8 pnpstate);
+void ex_halbtc8821a1ant_periodical(struct btc_coexist *btcoexist);
+void ex_halbtc8821a1ant_display_coex_info(struct btc_coexist *btcoexist);
+void ex_halbtc8821a1ant_dbg_control(struct btc_coexist *btcoexist, u8 op_code,
+ u8 op_len, u8 *data);
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a2ant.c b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a2ant.c
new file mode 100644
index 0000000..cf819f0
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a2ant.c
@@ -0,0 +1,3879 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+
+/*============================================================
+ * Description:
+ *
+ * This file is for RTL8821A Co-exist mechanism
+ *
+ * History
+ * 2012/08/22 Cosa first check in.
+ * 2012/11/14 Cosa Revise for 8821A 2Ant out sourcing.
+ *
+ *============================================================
+ */
+
+/*============================================================
+ * include files
+ *============================================================
+ */
+#include "halbt_precomp.h"
+/*============================================================
+ * Global variables, these are static variables
+ *============================================================
+ */
+static struct coex_dm_8821a_2ant glcoex_dm_8821a_2ant;
+static struct coex_dm_8821a_2ant *coex_dm = &glcoex_dm_8821a_2ant;
+static struct coex_sta_8821a_2ant glcoex_sta_8821a_2ant;
+static struct coex_sta_8821a_2ant *coex_sta = &glcoex_sta_8821a_2ant;
+
+static const char *const glbt_info_src_8821a_2ant[] = {
+ "BT Info[wifi fw]",
+ "BT Info[bt rsp]",
+ "BT Info[bt auto report]",
+};
+
+static u32 glcoex_ver_date_8821a_2ant = 20130618;
+static u32 glcoex_ver_8821a_2ant = 0x5050;
+
+/*============================================================
+ * local function prototypes if needed
+ *============================================================
+ * local functions start with halbtc8821a2ant_
+ *============================================================
+ */
+static u8 halbtc8821a2ant_bt_rssi_state(u8 level_num, u8 rssi_thresh,
+ u8 rssi_thresh1)
+{
+ long bt_rssi = 0;
+ u8 bt_rssi_state = coex_sta->pre_bt_rssi_state;
+
+ bt_rssi = coex_sta->bt_rssi;
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ long tmp = rssi_thresh +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT;
+ if (bt_rssi >= tmp) {
+ bt_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to High\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Low\n");
+ }
+ } else {
+ if (bt_rssi < rssi_thresh) {
+ bt_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Low\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi thresh error!!\n");
+ return coex_sta->pre_bt_rssi_state;
+ }
+
+ if ((coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_bt_rssi_state == BTC_RSSI_STATE_STAY_LOW)) {
+ if (bt_rssi >=
+ (rssi_thresh+BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT)) {
+ bt_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Medium\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_bt_rssi_state ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (bt_rssi >=
+ (rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT)) {
+ bt_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to High\n");
+ } else if (bt_rssi < rssi_thresh) {
+ bt_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Low\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at Medium\n");
+ }
+ } else {
+ if (bt_rssi < rssi_thresh1) {
+ bt_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state switch to Medium\n");
+ } else {
+ bt_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_RSSI_STATE,
+ "[BTCoex], BT Rssi state stay at High\n");
+ }
+ }
+ }
+
+ coex_sta->pre_bt_rssi_state = bt_rssi_state;
+
+ return bt_rssi_state;
+}
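For illustration, a standalone sketch (not part of this patch) of the
two-level hysteresis implemented above: the
BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT tolerance (2 dB) is only added when
leaving the LOW state, so an RSSI hovering exactly at the threshold cannot
flap between states on every sample.

#include <stdio.h>

enum state { LOW, HIGH };

static enum state rssi_step(enum state prev, long rssi, long thresh,
			    long tol)
{
	if (prev == LOW)
		return (rssi >= thresh + tol) ? HIGH : LOW;	/* go up */
	return (rssi < thresh) ? LOW : HIGH;			/* go down */
}

int main(void)
{
	long samples[] = { 34, 36, 37, 36, 34 };
	enum state s = LOW;
	int i;

	for (i = 0; i < 5; i++) {
		s = rssi_step(s, samples[i], 35, 2);
		printf("rssi=%ld -> %s\n", samples[i],
		       s == HIGH ? "HIGH" : "LOW");
	}
	return 0;
}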
+
+static u8 halbtc8821a2ant_wifi_rssi_state(struct btc_coexist *btcoexist,
+ u8 index, u8 level_num,
+ u8 rssi_thresh, u8 rssi_thresh1)
+{
+ long wifi_rssi = 0;
+ u8 wifi_rssi_state = coex_sta->pre_wifi_rssi_state[index];
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
+
+ if (level_num == 2) {
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifi_rssi >=
+ (rssi_thresh+BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT)) {
+ wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to High\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Low\n");
+ }
+ } else {
+ if (wifi_rssi < rssi_thresh) {
+ wifi_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Low\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at High\n");
+ }
+ }
+ } else if (level_num == 3) {
+ if (rssi_thresh > rssi_thresh1) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI thresh error!!\n");
+ return coex_sta->pre_wifi_rssi_state[index];
+ }
+
+ if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_LOW) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_LOW)) {
+ if (wifi_rssi >=
+ (rssi_thresh+BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT)) {
+ wifi_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Medium\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Low\n");
+ }
+ } else if ((coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_MEDIUM) ||
+ (coex_sta->pre_wifi_rssi_state[index] ==
+ BTC_RSSI_STATE_STAY_MEDIUM)) {
+ if (wifi_rssi >= (rssi_thresh1 +
+ BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT)) {
+ wifi_rssi_state = BTC_RSSI_STATE_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to High\n");
+ } else if (wifi_rssi < rssi_thresh) {
+ wifi_rssi_state = BTC_RSSI_STATE_LOW;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Low\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at Medium\n");
+ }
+ } else {
+ if (wifi_rssi < rssi_thresh1) {
+ wifi_rssi_state = BTC_RSSI_STATE_MEDIUM;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state switch to Medium\n");
+ } else {
+ wifi_rssi_state = BTC_RSSI_STATE_STAY_HIGH;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_WIFI_RSSI_STATE,
+ "[BTCoex], wifi RSSI state stay at High\n");
+ }
+ }
+ }
+ coex_sta->pre_wifi_rssi_state[index] = wifi_rssi_state;
+
+ return wifi_rssi_state;
+}
+
+static void btc8821a2ant_mon_bt_en_dis(struct btc_coexist *btcoexist)
+{
+ static bool pre_bt_disabled;
+ static u32 bt_disable_cnt;
+ bool bt_active = true, bt_disabled = false;
+
+	/* This function checks whether bt is disabled */
+
+ if (coex_sta->high_priority_tx == 0 &&
+ coex_sta->high_priority_rx == 0 &&
+ coex_sta->low_priority_tx == 0 &&
+ coex_sta->low_priority_rx == 0)
+ bt_active = false;
+ if (coex_sta->high_priority_tx == 0xffff &&
+ coex_sta->high_priority_rx == 0xffff &&
+ coex_sta->low_priority_tx == 0xffff &&
+ coex_sta->low_priority_rx == 0xffff)
+ bt_active = false;
+ if (bt_active) {
+ bt_disable_cnt = 0;
+ bt_disabled = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_DISABLE,
+ &bt_disabled);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is enabled !!\n");
+ } else {
+ bt_disable_cnt++;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], bt all counters = 0, %d times!!\n",
+ bt_disable_cnt);
+ if (bt_disable_cnt >= 2) {
+ bt_disabled = true;
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_DISABLE,
+ &bt_disabled);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is disabled !!\n");
+ }
+ }
+ if (pre_bt_disabled != bt_disabled) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], BT is from %s to %s!!\n",
+ (pre_bt_disabled ? "disabled" : "enabled"),
+ (bt_disabled ? "disabled" : "enabled"));
+ pre_bt_disabled = bt_disabled;
+ }
+}
+
+static void halbtc8821a2ant_monitor_bt_ctr(struct btc_coexist *btcoexist)
+{
+ u32 reg_hp_txrx, reg_lp_txrx, u4tmp;
+ u32 reg_hp_tx = 0, reg_hp_rx = 0, reg_lp_tx = 0, reg_lp_rx = 0;
+
+ reg_hp_txrx = 0x770;
+ reg_lp_txrx = 0x774;
+
+ u4tmp = btcoexist->btc_read_4byte(btcoexist, reg_hp_txrx);
+ reg_hp_tx = u4tmp & MASKLWORD;
+ reg_hp_rx = (u4tmp & MASKHWORD)>>16;
+
+ u4tmp = btcoexist->btc_read_4byte(btcoexist, reg_lp_txrx);
+ reg_lp_tx = u4tmp & MASKLWORD;
+ reg_lp_rx = (u4tmp & MASKHWORD)>>16;
+
+ coex_sta->high_priority_tx = reg_hp_tx;
+ coex_sta->high_priority_rx = reg_hp_rx;
+ coex_sta->low_priority_tx = reg_lp_tx;
+ coex_sta->low_priority_rx = reg_lp_rx;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], High Priority Tx/Rx (reg 0x%x) = 0x%x(%d)/0x%x(%d)\n",
+ reg_hp_txrx, reg_hp_tx, reg_hp_tx, reg_hp_rx, reg_hp_rx);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_BT_MONITOR,
+ "[BTCoex], Low Priority Tx/Rx (reg 0x%x) = 0x%x(%d)/0x%x(%d)\n",
+ reg_lp_txrx, reg_lp_tx, reg_lp_tx, reg_lp_rx, reg_lp_rx);
+
+ /* reset counter */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+}
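For illustration, a standalone sketch (not part of this patch) of the
counter unpacking above: each 32-bit read of 0x770/0x774 carries the rx
count in the high word and the tx count in the low word, split with
MASKLWORD/MASKHWORD.

#include <stdint.h>
#include <stdio.h>

#define MASKLWORD 0x0000ffff
#define MASKHWORD 0xffff0000

int main(void)
{
	uint32_t reg = 0x00120034;	/* example: rx = 0x12, tx = 0x34 */
	uint32_t tx = reg & MASKLWORD;
	uint32_t rx = (reg & MASKHWORD) >> 16;

	printf("tx=%u rx=%u\n", tx, rx);
	return 0;
}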
+
+static void halbtc8821a2ant_query_bt_info(struct btc_coexist *btcoexist)
+{
+ u8 h2c_parameter[1] = {0};
+
+ coex_sta->c2h_bt_info_req_sent = true;
+
+ h2c_parameter[0] |= BIT0; /* trigger */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Query Bt Info, FW write 0x61 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x61, 1, h2c_parameter);
+}
+
+static u8 halbtc8821a2ant_action_algorithm(struct btc_coexist *btcoexist)
+{
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ bool bt_hs_on = false;
+ u8 algorithm = BT_8821A_2ANT_COEX_ALGO_UNDEFINED;
+ u8 num_of_diff_profile = 0;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+
+	/* for the win-8 stack HID report error:
+	 * sync BTInfo with BT firmware and stack
+	 */
+	if (!stack_info->hid_exist)
+		stack_info->hid_exist = coex_sta->hid_exist;
+	/* when the stack HID report is wrong, use the info from bt fw */
+ if (!stack_info->bt_link_exist)
+ stack_info->bt_link_exist = coex_sta->bt_link_exist;
+
+ if (!coex_sta->bt_link_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], No profile exists!!!\n");
+ return algorithm;
+ }
+
+ if (coex_sta->sco_exist)
+ num_of_diff_profile++;
+ if (coex_sta->hid_exist)
+ num_of_diff_profile++;
+ if (coex_sta->pan_exist)
+ num_of_diff_profile++;
+ if (coex_sta->a2dp_exist)
+ num_of_diff_profile++;
+
+ if (num_of_diff_profile == 1) {
+ if (coex_sta->sco_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO only\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_SCO;
+ } else {
+ if (coex_sta->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], HID only\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_HID;
+ } else if (coex_sta->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], A2DP only\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_A2DP;
+ } else if (coex_sta->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], PAN(HS) only\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], PAN(EDR) only\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR;
+ }
+ }
+ }
+ } else if (num_of_diff_profile == 2) {
+ if (coex_sta->sco_exist) {
+ if (coex_sta->hid_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + HID\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ } else if (coex_sta->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + A2DP ==> SCO\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ } else if (coex_sta->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + PAN(HS)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_SCO;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + PAN(EDR)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (coex_sta->hid_exist &&
+ coex_sta->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], HID + A2DP\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_HID_A2DP;
+ } else if (coex_sta->hid_exist &&
+ coex_sta->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], HID + PAN(HS)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_HID;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], HID + PAN(EDR)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (coex_sta->pan_exist &&
+ coex_sta->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], A2DP + PAN(HS)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_A2DP_PANHS;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], A2DP + PAN(EDR)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_A2DP;
+ }
+ }
+ }
+ } else if (num_of_diff_profile == 3) {
+ if (coex_sta->sco_exist) {
+ if (coex_sta->hid_exist &&
+ coex_sta->a2dp_exist) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + HID + A2DP ==> HID\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ } else if (coex_sta->hid_exist &&
+ coex_sta->pan_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + HID + PAN(HS)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + HID + PAN(EDR)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ } else if (coex_sta->pan_exist &&
+ coex_sta->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + A2DP + PAN(HS)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + A2DP + PAN(EDR) ==> HID\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ } else {
+ if (coex_sta->hid_exist &&
+ coex_sta->pan_exist &&
+ coex_sta->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], HID + A2DP + PAN(HS)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_HID_A2DP;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], HID + A2DP + PAN(EDR)\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_HID_A2DP_PANEDR;
+ }
+ }
+ }
+ } else if (num_of_diff_profile >= 3) {
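+		/* num_of_diff_profile can only be 4 here (all profiles) */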
+ if (coex_sta->sco_exist) {
+ if (coex_sta->hid_exist &&
+ coex_sta->pan_exist &&
+ coex_sta->a2dp_exist) {
+ if (bt_hs_on) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Error!!! SCO + HID + A2DP + PAN(HS)\n");
+
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], SCO + HID + A2DP + PAN(EDR)==>PAN(EDR)+HID\n");
+ algorithm = BT_8821A_2ANT_COEX_ALGO_PANEDR_HID;
+ }
+ }
+ }
+ }
+ return algorithm;
+}
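For illustration, a standalone sketch (not part of this patch) of the
selection shape above: count the active profiles, then branch on the
combination. Only the single-profile leg is shown and the enum values are
local stand-ins, not the driver's.

#include <stdbool.h>
#include <stdio.h>

enum algo {
	ALGO_UNDEFINED, ALGO_SCO, ALGO_HID, ALGO_A2DP, ALGO_PANHS,
	ALGO_PANEDR
};

static enum algo pick_single(bool sco, bool hid, bool a2dp, bool pan,
			     bool bt_hs_on)
{
	if (sco)
		return ALGO_SCO;	/* SCO wins when present */
	if (hid)
		return ALGO_HID;
	if (a2dp)
		return ALGO_A2DP;
	if (pan)
		return bt_hs_on ? ALGO_PANHS : ALGO_PANEDR;
	return ALGO_UNDEFINED;
}

int main(void)
{
	/* PAN only while BT acts as hotspot -> PAN(HS) */
	printf("algo = %d\n", pick_single(false, false, false, true, true));
	return 0;
}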
+
+static bool halbtc8821a2ant_need_to_dec_bt_pwr(struct btc_coexist *btcoexist)
+{
+ bool ret = false;
+ bool bt_hs_on = false, wifi_connected = false;
+ long bt_hs_rssi = 0;
+ u8 bt_rssi_state;
+
+ if (!btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on))
+ return false;
+ if (!btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected))
+ return false;
+ if (!btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi))
+ return false;
+
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ if (wifi_connected) {
+ if (bt_hs_on) {
+ if (bt_hs_rssi > 37) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], Need to decrease bt power for HS mode!!\n");
+ ret = true;
+ }
+ } else {
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], Need to decrease bt power for Wifi is connected!!\n");
+ ret = true;
+ }
+ }
+ }
+ return ret;
+}
+
+static void btc8821a2ant_set_fw_dac_swing_lev(struct btc_coexist *btcoexist,
+ u8 dac_swing_lvl)
+{
+ u8 h2c_parameter[1] = {0};
+
+	/* There are several types of dac swing:
+	 * 0x18/ 0x10/ 0xc/ 0x8/ 0x4/ 0x6
+	 */
+ h2c_parameter[0] = dac_swing_lvl;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], Set Dac Swing Level = 0x%x\n", dac_swing_lvl);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x64 = 0x%x\n", h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x64, 1, h2c_parameter);
+}
+
+static void halbtc8821a2ant_set_fw_dec_bt_pwr(struct btc_coexist *btcoexist,
+ bool dec_bt_pwr)
+{
+ u8 h2c_parameter[1] = {0};
+
+ h2c_parameter[0] = 0;
+
+ if (dec_bt_pwr)
+ h2c_parameter[0] |= BIT1;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], decrease Bt Power : %s, FW write 0x62 = 0x%x\n",
+ (dec_bt_pwr ? "Yes!!" : "No!!"), h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x62, 1, h2c_parameter);
+}
+
+static void halbtc8821a2ant_dec_bt_pwr(struct btc_coexist *btcoexist,
+ bool force_exec, bool dec_bt_pwr)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s Dec BT power = %s\n",
+ (force_exec ? "force to" : ""),
+ ((dec_bt_pwr) ? "ON" : "OFF"));
+ coex_dm->cur_dec_bt_pwr = dec_bt_pwr;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_dec_bt_pwr = %d, cur_dec_bt_pwr = %d\n",
+ coex_dm->pre_dec_bt_pwr, coex_dm->cur_dec_bt_pwr);
+
+ if (coex_dm->pre_dec_bt_pwr == coex_dm->cur_dec_bt_pwr)
+ return;
+ }
+ halbtc8821a2ant_set_fw_dec_bt_pwr(btcoexist, coex_dm->cur_dec_bt_pwr);
+
+ coex_dm->pre_dec_bt_pwr = coex_dm->cur_dec_bt_pwr;
+}
+
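+/* Every wrapper from here on follows the same caching idiom as
+ * halbtc8821a2ant_dec_bt_pwr() above; as a sketch of the pattern:
+ *
+ *	coex_dm->cur_x = x;
+ *	if (!force_exec && coex_dm->pre_x == coex_dm->cur_x)
+ *		return;		// skip redundant register/fw writes
+ *	set_fw_x(btcoexist, coex_dm->cur_x);
+ *	coex_dm->pre_x = coex_dm->cur_x;
+ *
+ * i.e. unless called with FORCE_EXEC, the write only happens when the
+ * requested value actually changed.
+ */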
+static void btc8821a2ant_set_fw_bt_lna_constr(struct btc_coexist *btcoexist,
+ bool bt_lna_cons_on)
+{
+ u8 h2c_parameter[2] = {0};
+
+ h2c_parameter[0] = 0x3; /* opCode, 0x3 = BT_SET_LNA_CONSTRAIN */
+
+ if (bt_lna_cons_on)
+ h2c_parameter[1] |= BIT0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set BT LNA Constrain: %s, FW write 0x69 = 0x%x\n",
+ (bt_lna_cons_on ? "ON!!" : "OFF!!"),
+		  h2c_parameter[0] << 8 | h2c_parameter[1]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x69, 2, h2c_parameter);
+}
+
+static void btc8821a2_set_bt_lna_const(struct btc_coexist *btcoexist,
+ bool force_exec, bool bt_lna_cons_on)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s BT Constrain = %s\n",
+ (force_exec ? "force" : ""),
+ ((bt_lna_cons_on) ? "ON" : "OFF"));
+ coex_dm->cur_bt_lna_constrain = bt_lna_cons_on;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+			  "[BTCoex], pre_bt_lna_constrain = %d, cur_bt_lna_constrain = %d\n",
+ coex_dm->pre_bt_lna_constrain,
+ coex_dm->cur_bt_lna_constrain);
+
+ if (coex_dm->pre_bt_lna_constrain ==
+ coex_dm->cur_bt_lna_constrain)
+ return;
+ }
+ btc8821a2ant_set_fw_bt_lna_constr(btcoexist,
+ coex_dm->cur_bt_lna_constrain);
+
+ coex_dm->pre_bt_lna_constrain = coex_dm->cur_bt_lna_constrain;
+}
+
+static void halbtc8821a2ant_set_fw_bt_psd_mode(struct btc_coexist *btcoexist,
+ u8 bt_psd_mode)
+{
+ u8 h2c_parameter[2] = {0};
+
+ h2c_parameter[0] = 0x2; /* opCode, 0x2 = BT_SET_PSD_MODE */
+
+ h2c_parameter[1] = bt_psd_mode;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set BT PSD mode = 0x%x, FW write 0x69 = 0x%x\n",
+ h2c_parameter[1],
+		  h2c_parameter[0] << 8 | h2c_parameter[1]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x69, 2, h2c_parameter);
+}
+
+static void halbtc8821a2ant_set_bt_psd_mode(struct btc_coexist *btcoexist,
+ bool force_exec, u8 bt_psd_mode)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s BT PSD mode = 0x%x\n",
+ (force_exec ? "force" : ""), bt_psd_mode);
+ coex_dm->cur_bt_psd_mode = bt_psd_mode;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_bt_psd_mode = 0x%x, cur_bt_psd_mode = 0x%x\n",
+ coex_dm->pre_bt_psd_mode, coex_dm->cur_bt_psd_mode);
+
+ if (coex_dm->pre_bt_psd_mode == coex_dm->cur_bt_psd_mode)
+ return;
+ }
+ halbtc8821a2ant_set_fw_bt_psd_mode(btcoexist,
+ coex_dm->cur_bt_psd_mode);
+
+ coex_dm->pre_bt_psd_mode = coex_dm->cur_bt_psd_mode;
+}
+
+static void halbtc8821a2ant_set_bt_auto_report(struct btc_coexist *btcoexist,
+ bool enable_auto_report)
+{
+ u8 h2c_parameter[1] = {0};
+
+ h2c_parameter[0] = 0;
+
+ if (enable_auto_report)
+ h2c_parameter[0] |= BIT0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], BT FW auto report : %s, FW write 0x68 = 0x%x\n",
+ (enable_auto_report ? "Enabled!!" : "Disabled!!"),
+ h2c_parameter[0]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x68, 1, h2c_parameter);
+}
+
+static void halbtc8821a2ant_bt_auto_report(struct btc_coexist *btcoexist,
+ bool force_exec,
+ bool enable_auto_report)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s BT Auto report = %s\n",
+ (force_exec ? "force to" : ""),
+ ((enable_auto_report) ? "Enabled" : "Disabled"));
+ coex_dm->cur_bt_auto_report = enable_auto_report;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_bt_auto_report = %d, cur_bt_auto_report = %d\n",
+ coex_dm->pre_bt_auto_report,
+ coex_dm->cur_bt_auto_report);
+
+ if (coex_dm->pre_bt_auto_report == coex_dm->cur_bt_auto_report)
+ return;
+ }
+ halbtc8821a2ant_set_bt_auto_report(btcoexist,
+ coex_dm->cur_bt_auto_report);
+
+ coex_dm->pre_bt_auto_report = coex_dm->cur_bt_auto_report;
+}
+
+static void halbtc8821a2ant_fw_dac_swing_lvl(struct btc_coexist *btcoexist,
+ bool force_exec,
+ u8 fw_dac_swing_lvl)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s set FW Dac Swing level = %d\n",
+ (force_exec ? "force to" : ""), fw_dac_swing_lvl);
+ coex_dm->cur_fw_dac_swing_lvl = fw_dac_swing_lvl;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_fw_dac_swing_lvl = %d, cur_fw_dac_swing_lvl = %d\n",
+ coex_dm->pre_fw_dac_swing_lvl,
+ coex_dm->cur_fw_dac_swing_lvl);
+
+ if (coex_dm->pre_fw_dac_swing_lvl ==
+ coex_dm->cur_fw_dac_swing_lvl)
+ return;
+ }
+
+ btc8821a2ant_set_fw_dac_swing_lev(btcoexist,
+ coex_dm->cur_fw_dac_swing_lvl);
+
+ coex_dm->pre_fw_dac_swing_lvl = coex_dm->cur_fw_dac_swing_lvl;
+}
+
+static void btc8821a2ant_set_sw_rf_rx_lpf_corner(struct btc_coexist *btcoexist,
+ bool rx_rf_shrink_on)
+{
+ if (rx_rf_shrink_on) {
+ /* Shrink RF Rx LPF corner */
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], Shrink RF Rx LPF corner!!\n");
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A, 0x1e,
+ 0xfffff, 0xffffc);
+ } else {
+		/* Resume the RF Rx LPF corner.
+		 * Once initialization is done, coex_dm->bt_rf0x1e_backup
+		 * holds the value to restore.
+		 */
+ if (btcoexist->initilized) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], Resume RF Rx LPF corner!!\n");
+ btcoexist->btc_set_rf_reg(btcoexist, BTC_RF_A,
+ 0x1e, 0xfffff,
+ coex_dm->bt_rf0x1e_backup);
+ }
+ }
+}
+
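+/* Writing 0xffffc to RF register 0x1e narrows the Rx low-pass filter
+ * corner, presumably to reject adjacent BT energy; the original value
+ * saved in coex_dm->bt_rf0x1e_backup during init is what gets restored
+ * above when the shrink is turned off.
+ */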
+static void halbtc8821a2ant_RfShrink(struct btc_coexist *btcoexist,
+ bool force_exec, bool rx_rf_shrink_on)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s turn Rx RF Shrink = %s\n",
+ (force_exec ? "force to" : ""),
+ ((rx_rf_shrink_on) ? "ON" : "OFF"));
+ coex_dm->cur_rf_rx_lpf_shrink = rx_rf_shrink_on;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], pre_rf_rx_lpf_shrink = %d, cur_rf_rx_lpf_shrink = %d\n",
+ coex_dm->pre_rf_rx_lpf_shrink,
+ coex_dm->cur_rf_rx_lpf_shrink);
+
+ if (coex_dm->pre_rf_rx_lpf_shrink ==
+ coex_dm->cur_rf_rx_lpf_shrink)
+ return;
+ }
+ btc8821a2ant_set_sw_rf_rx_lpf_corner(btcoexist,
+ coex_dm->cur_rf_rx_lpf_shrink);
+
+ coex_dm->pre_rf_rx_lpf_shrink = coex_dm->cur_rf_rx_lpf_shrink;
+}
+
+static void btc8821a2ant_SetSwPenTxRateAdapt(struct btc_coexist *btcoexist,
+ bool low_penalty_ra)
+{
+ u8 h2c_parameter[6] = {0};
+
+ h2c_parameter[0] = 0x6; /* opCode, 0x6 = Retry_Penalty */
+
+ if (low_penalty_ra) {
+ h2c_parameter[1] |= BIT0;
+		/* normal rate except MCS7/6/5, OFDM54/48/36 */
+		h2c_parameter[2] = 0x00;
+		/* MCS7 or OFDM54 */
+		h2c_parameter[3] = 0xf7;
+		/* MCS6 or OFDM48 */
+		h2c_parameter[4] = 0xf8;
+		/* MCS5 or OFDM36 */
+ h2c_parameter[5] = 0xf9;
+ }
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+		  "[BTCoex], set WiFi Low-Penalty Retry: %s\n",
+ (low_penalty_ra ? "ON!!" : "OFF!!"));
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x69, 6, h2c_parameter);
+}
+
+static void halbtc8821a2ant_low_penalty_ra(struct btc_coexist *btcoexist,
+ bool force_exec, bool low_penalty_ra)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s turn LowPenaltyRA = %s\n",
+ (force_exec ? "force to" : ""),
+ ((low_penalty_ra) ? "ON" : "OFF"));
+ coex_dm->cur_low_penalty_ra = low_penalty_ra;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], pre_low_penalty_ra = %d, cur_low_penalty_ra = %d\n",
+ coex_dm->pre_low_penalty_ra,
+ coex_dm->cur_low_penalty_ra);
+
+ if (coex_dm->pre_low_penalty_ra == coex_dm->cur_low_penalty_ra)
+ return;
+ }
+ btc8821a2ant_SetSwPenTxRateAdapt(btcoexist,
+ coex_dm->cur_low_penalty_ra);
+
+ coex_dm->pre_low_penalty_ra = coex_dm->cur_low_penalty_ra;
+}
+
+static void halbtc8821a2ant_set_dac_swing_reg(struct btc_coexist *btcoexist,
+ u32 level)
+{
+ u8 val = (u8)level;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], Write SwDacSwing = 0x%x\n", level);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xc5b, 0x3e, val);
+}
+
+static void btc8821a2ant_set_sw_full_dac_swing(struct btc_coexist *btcoexist,
+ bool sw_dac_swing_on,
+ u32 sw_dac_swing_lvl)
+{
+ if (sw_dac_swing_on)
+ halbtc8821a2ant_set_dac_swing_reg(btcoexist, sw_dac_swing_lvl);
+ else
+ halbtc8821a2ant_set_dac_swing_reg(btcoexist, 0x18);
+}
+
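+/* The software DAC swing lives in register 0xc5b, bits [5:1] (mask 0x3e);
+ * 0x18 appears to be the default full-swing value restored when the
+ * feature is switched off.
+ */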
+static void halbtc8821a2ant_dac_swing(struct btc_coexist *btcoexist,
+ bool force_exec, bool dac_swing_on,
+ u32 dac_swing_lvl)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s turn DacSwing = %s, dac_swing_lvl = 0x%x\n",
+ (force_exec ? "force to" : ""),
+ ((dac_swing_on) ? "ON" : "OFF"),
+ dac_swing_lvl);
+ coex_dm->cur_dac_swing_on = dac_swing_on;
+ coex_dm->cur_dac_swing_lvl = dac_swing_lvl;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], pre_dac_swing_on = %d, pre_dac_swing_lvl = 0x%x, cur_dac_swing_on = %d, cur_dac_swing_lvl = 0x%x\n",
+ coex_dm->pre_dac_swing_on,
+ coex_dm->pre_dac_swing_lvl,
+ coex_dm->cur_dac_swing_on,
+ coex_dm->cur_dac_swing_lvl);
+
+ if ((coex_dm->pre_dac_swing_on == coex_dm->cur_dac_swing_on) &&
+ (coex_dm->pre_dac_swing_lvl ==
+ coex_dm->cur_dac_swing_lvl))
+ return;
+ }
+ mdelay(30);
+ btc8821a2ant_set_sw_full_dac_swing(btcoexist, dac_swing_on,
+ dac_swing_lvl);
+
+ coex_dm->pre_dac_swing_on = coex_dm->cur_dac_swing_on;
+ coex_dm->pre_dac_swing_lvl = coex_dm->cur_dac_swing_lvl;
+}
+
+static void halbtc8821a2ant_set_adc_back_off(struct btc_coexist *btcoexist,
+ bool adc_back_off)
+{
+ if (adc_back_off) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], BB BackOff Level On!\n");
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x8db, 0x60, 0x3);
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], BB BackOff Level Off!\n");
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x8db, 0x60, 0x1);
+ }
+}
+
+static void halbtc8821a2ant_adc_back_off(struct btc_coexist *btcoexist,
+ bool force_exec, bool adc_back_off)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s turn AdcBackOff = %s\n",
+ (force_exec ? "force to" : ""),
+ ((adc_back_off) ? "ON" : "OFF"));
+ coex_dm->cur_adc_back_off = adc_back_off;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], pre_adc_back_off = %d, cur_adc_back_off = %d\n",
+ coex_dm->pre_adc_back_off, coex_dm->cur_adc_back_off);
+
+ if (coex_dm->pre_adc_back_off == coex_dm->cur_adc_back_off)
+ return;
+ }
+ halbtc8821a2ant_set_adc_back_off(btcoexist, coex_dm->cur_adc_back_off);
+
+ coex_dm->pre_adc_back_off = coex_dm->cur_adc_back_off;
+}
+
+static void halbtc8821a2ant_set_coex_table(struct btc_coexist *btcoexist,
+ u32 val0x6c0, u32 val0x6c4,
+ u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c0 = 0x%x\n", val0x6c0);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c0, val0x6c0);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c4 = 0x%x\n", val0x6c4);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c4, val0x6c4);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6c8 = 0x%x\n", val0x6c8);
+ btcoexist->btc_write_4byte(btcoexist, 0x6c8, val0x6c8);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_EXEC,
+ "[BTCoex], set coex table, set 0x6cc = 0x%x\n", val0x6cc);
+ btcoexist->btc_write_1byte(btcoexist, 0x6cc, val0x6cc);
+}
+
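+/* Registers 0x6c0/0x6c4/0x6c8 (32-bit) and 0x6cc (8-bit) form the
+ * hardware coexistence priority table; the magic values written by the
+ * handlers below (0x55555555, 0x5a5a5a5a, 0x55ff55ff, ...) are pre-tuned
+ * per-traffic-type priority masks whose exact bit meaning is chip-defined.
+ */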
+static void halbtc8821a2ant_coex_table(struct btc_coexist *btcoexist,
+ bool force_exec, u32 val0x6c0,
+ u32 val0x6c4, u32 val0x6c8, u8 val0x6cc)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW,
+ "[BTCoex], %s write Coex Table 0x6c0 = 0x%x, 0x6c4 = 0x%x, 0x6c8 = 0x%x, 0x6cc = 0x%x\n",
+ (force_exec ? "force to" : ""),
+ val0x6c0, val0x6c4, val0x6c8, val0x6cc);
+ coex_dm->cur_val0x6c0 = val0x6c0;
+ coex_dm->cur_val0x6c4 = val0x6c4;
+ coex_dm->cur_val0x6c8 = val0x6c8;
+ coex_dm->cur_val0x6cc = val0x6cc;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], pre_val0x6c0 = 0x%x, pre_val0x6c4 = 0x%x, pre_val0x6c8 = 0x%x, pre_val0x6cc = 0x%x !!\n",
+ coex_dm->pre_val0x6c0,
+ coex_dm->pre_val0x6c4,
+ coex_dm->pre_val0x6c8,
+ coex_dm->pre_val0x6cc);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_SW_DETAIL,
+ "[BTCoex], cur_val0x6c0 = 0x%x, cur_val0x6c4 = 0x%x, cur_val0x6c8 = 0x%x, cur_val0x6cc = 0x%x !!\n",
+ coex_dm->cur_val0x6c0,
+ coex_dm->cur_val0x6c4,
+ coex_dm->cur_val0x6c8,
+ coex_dm->cur_val0x6cc);
+
+ if ((coex_dm->pre_val0x6c0 == coex_dm->cur_val0x6c0) &&
+ (coex_dm->pre_val0x6c4 == coex_dm->cur_val0x6c4) &&
+ (coex_dm->pre_val0x6c8 == coex_dm->cur_val0x6c8) &&
+ (coex_dm->pre_val0x6cc == coex_dm->cur_val0x6cc))
+ return;
+ }
+ halbtc8821a2ant_set_coex_table(btcoexist, val0x6c0, val0x6c4, val0x6c8,
+ val0x6cc);
+
+ coex_dm->pre_val0x6c0 = coex_dm->cur_val0x6c0;
+ coex_dm->pre_val0x6c4 = coex_dm->cur_val0x6c4;
+ coex_dm->pre_val0x6c8 = coex_dm->cur_val0x6c8;
+ coex_dm->pre_val0x6cc = coex_dm->cur_val0x6cc;
+}
+
+static void halbtc8821a2ant_set_fw_ignore_wlan_act(struct btc_coexist *btcoex,
+ bool enable)
+{
+ u8 h2c_parameter[1] = {0};
+
+ if (enable)
+ h2c_parameter[0] |= BIT0;/* function enable */
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], set FW for BT Ignore Wlan_Act, FW write 0x63 = 0x%x\n",
+ h2c_parameter[0]);
+
+ btcoex->btc_fill_h2c(btcoex, 0x63, 1, h2c_parameter);
+}
+
+static void halbtc8821a2ant_ignore_wlan_act(struct btc_coexist *btcoexist,
+ bool force_exec, bool enable)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s turn Ignore WlanAct %s\n",
+ (force_exec ? "force to" : ""), (enable ? "ON" : "OFF"));
+ coex_dm->cur_ignore_wlan_act = enable;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_ignore_wlan_act = %d, cur_ignore_wlan_act = %d!!\n",
+ coex_dm->pre_ignore_wlan_act,
+ coex_dm->cur_ignore_wlan_act);
+
+ if (coex_dm->pre_ignore_wlan_act ==
+ coex_dm->cur_ignore_wlan_act)
+ return;
+ }
+ halbtc8821a2ant_set_fw_ignore_wlan_act(btcoexist, enable);
+
+ coex_dm->pre_ignore_wlan_act = coex_dm->cur_ignore_wlan_act;
+}
+
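+/* halbtc8821a2ant_set_fw_pstdma() below programs the firmware PS-TDMA
+ * engine through H2C command 0x60. The five bytes are firmware-defined;
+ * judging from the tables in halbtc8821a2ant_ps_tdma(), byte1 selects the
+ * TDMA mode, byte2/byte3 look like slot durations and byte4/byte5 carry
+ * control flags, but treat that reading as an educated guess rather than
+ * a spec.
+ */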
+static void halbtc8821a2ant_set_fw_pstdma(struct btc_coexist *btcoexist,
+ u8 byte1, u8 byte2, u8 byte3,
+ u8 byte4, u8 byte5)
+{
+ u8 h2c_parameter[5];
+
+ h2c_parameter[0] = byte1;
+ h2c_parameter[1] = byte2;
+ h2c_parameter[2] = byte3;
+ h2c_parameter[3] = byte4;
+ h2c_parameter[4] = byte5;
+
+ coex_dm->ps_tdma_para[0] = byte1;
+ coex_dm->ps_tdma_para[1] = byte2;
+ coex_dm->ps_tdma_para[2] = byte3;
+ coex_dm->ps_tdma_para[3] = byte4;
+ coex_dm->ps_tdma_para[4] = byte5;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x60(5bytes) = 0x%x%08x\n",
+ h2c_parameter[0],
+		  h2c_parameter[1] << 24 |
+		  h2c_parameter[2] << 16 |
+		  h2c_parameter[3] << 8 |
+		  h2c_parameter[4]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x60, 5, h2c_parameter);
+}
+
+static void btc8821a2ant_sw_mech1(struct btc_coexist *btcoexist,
+ bool shrink_rx_lpf,
+ bool low_penalty_ra, bool limited_dig,
+ bool bt_lna_constrain)
+{
+ u32 wifi_bw;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_HT40 != wifi_bw) {
+		/* only shrink RF Rx LPF for HT40 */
+ if (shrink_rx_lpf)
+ shrink_rx_lpf = false;
+ }
+
+ halbtc8821a2ant_RfShrink(btcoexist, NORMAL_EXEC, shrink_rx_lpf);
+ halbtc8821a2ant_low_penalty_ra(btcoexist,
+ NORMAL_EXEC, low_penalty_ra);
+
+	/* no limited DIG
+	 * btc8821a2_set_bt_lna_const(btcoexist, NORMAL_EXEC,
+	 *			      bt_lna_constrain);
+	 */
+}
+
+static void btc8821a2ant_sw_mech2(struct btc_coexist *btcoexist,
+ bool agc_table_shift,
+ bool adc_back_off, bool sw_dac_swing,
+ u32 dac_swing_lvl)
+{
+	/* halbtc8821a2ant_AgcTable(btcoexist, NORMAL_EXEC, agc_table_shift); */
+	halbtc8821a2ant_adc_back_off(btcoexist, NORMAL_EXEC, adc_back_off);
+	/* pass the requested level (not the on/off flag) as the swing level */
+	halbtc8821a2ant_dac_swing(btcoexist, NORMAL_EXEC, sw_dac_swing,
+				  dac_swing_lvl);
+}
+
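+/* sw_mech1/sw_mech2 bundle the software-side coexistence knobs that the
+ * action handlers toggle together: RF Rx LPF shrink (HT40 only) and
+ * low-penalty rate adaptation in mech1, ADC back-off and DAC swing in
+ * mech2; the remaining parameters are currently unused placeholders for
+ * the commented-out mechanisms.
+ */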
+static void halbtc8821a2ant_set_ant_path(struct btc_coexist *btcoexist,
+ u8 ant_pos_type, bool init_hw_cfg,
+ bool wifi_off)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ u32 u4tmp = 0;
+ u8 h2c_parameter[2] = {0};
+
+ if (init_hw_cfg) {
+ /* 0x4c[23] = 0, 0x4c[24] = 1 Antenna control by WL/BT */
+ u4tmp = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u4tmp &= ~BIT23;
+ u4tmp |= BIT24;
+ btcoexist->btc_write_4byte(btcoexist, 0x4c, u4tmp);
+
+ btcoexist->btc_write_4byte(btcoexist, 0x974, 0x3ff);
+ btcoexist->btc_write_1byte(btcoexist, 0xcb4, 0x77);
+
+ if (board_info->btdm_ant_pos == BTC_ANTENNA_AT_MAIN_PORT) {
+			/* tell firmware "antenna inverse"; the firmware
+			 * antenna control code is wrong and needs a
+			 * firmware fix
+			 */
+ h2c_parameter[0] = 1;
+ h2c_parameter[1] = 1;
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ } else {
+			/* tell firmware "no antenna inverse"; the firmware
+			 * antenna control code is wrong and needs a
+			 * firmware fix
+			 */
+ h2c_parameter[0] = 0;
+ h2c_parameter[1] = 1;
+ btcoexist->btc_fill_h2c(btcoexist, 0x65, 2,
+ h2c_parameter);
+ }
+ }
+
+ /* ext switch setting */
+ switch (ant_pos_type) {
+ case BTC_ANT_WIFI_AT_MAIN:
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7, 0x30, 0x1);
+ break;
+ case BTC_ANT_WIFI_AT_AUX:
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0xcb7, 0x30, 0x2);
+ break;
+ }
+}
+
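+/* The external antenna switch is driven through 0xcb7 bits [5:4]
+ * (mask 0x30): 0x1 routes WiFi to the main antenna, 0x2 to the aux
+ * antenna.
+ */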
+static void halbtc8821a2ant_ps_tdma(struct btc_coexist *btcoexist,
+ bool force_exec, bool turn_on, u8 type)
+{
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], %s turn %s PS TDMA, type = %d\n",
+ (force_exec ? "force to" : ""), (turn_on ? "ON" : "OFF"),
+ type);
+ coex_dm->cur_ps_tdma_on = turn_on;
+ coex_dm->cur_ps_tdma = type;
+
+ if (!force_exec) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_ps_tdma_on = %d, cur_ps_tdma_on = %d!!\n",
+ coex_dm->pre_ps_tdma_on, coex_dm->cur_ps_tdma_on);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], pre_ps_tdma = %d, cur_ps_tdma = %d!!\n",
+ coex_dm->pre_ps_tdma, coex_dm->cur_ps_tdma);
+
+ if ((coex_dm->pre_ps_tdma_on == coex_dm->cur_ps_tdma_on) &&
+ (coex_dm->pre_ps_tdma == coex_dm->cur_ps_tdma))
+ return;
+ }
+ if (turn_on) {
+ switch (type) {
+ case 1:
+ default:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe1, 0x90);
+ break;
+ case 2:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0xe1, 0x90);
+ break;
+ case 3:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1c,
+ 0x3, 0xf1, 0x90);
+ break;
+ case 4:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x10,
+ 0x03, 0xf1, 0x90);
+ break;
+ case 5:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0x60, 0x90);
+ break;
+ case 6:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0x60, 0x90);
+ break;
+ case 7:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1c,
+ 0x3, 0x70, 0x90);
+ break;
+ case 8:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xa3, 0x10,
+ 0x3, 0x70, 0x90);
+ break;
+ case 9:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe1, 0x90);
+ break;
+ case 10:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x12,
+ 0x12, 0xe1, 0x90);
+ break;
+ case 11:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0xa,
+ 0xa, 0xe1, 0x90);
+ break;
+ case 12:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x5,
+ 0x5, 0xe1, 0x90);
+ break;
+ case 13:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0x60, 0x90);
+ break;
+ case 14:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3,
+ 0x12, 0x12, 0x60, 0x90);
+ break;
+ case 15:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0xa,
+ 0xa, 0x60, 0x90);
+ break;
+ case 16:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x5,
+ 0x5, 0x60, 0x90);
+ break;
+ case 17:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xa3, 0x2f,
+ 0x2f, 0x60, 0x90);
+ break;
+ case 18:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x5,
+ 0x5, 0xe1, 0x90);
+ break;
+ case 19:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x25,
+ 0x25, 0xe1, 0x90);
+ break;
+ case 20:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x25,
+ 0x25, 0x60, 0x90);
+ break;
+ case 21:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x15,
+ 0x03, 0x70, 0x90);
+ break;
+ case 71:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0xe3, 0x1a,
+ 0x1a, 0xe1, 0x90);
+ break;
+ }
+ } else {
+ /* disable PS tdma */
+ switch (type) {
+ case 0:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x40, 0x0);
+ break;
+ case 1:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x48, 0x0);
+ break;
+ default:
+ halbtc8821a2ant_set_fw_pstdma(btcoexist, 0x0, 0x0, 0x0,
+ 0x40, 0x0);
+ break;
+ }
+ }
+
+ /* update pre state */
+ coex_dm->pre_ps_tdma_on = coex_dm->cur_ps_tdma_on;
+ coex_dm->pre_ps_tdma = coex_dm->cur_ps_tdma;
+}
+
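+/* The "type" argument of halbtc8821a2ant_ps_tdma() indexes a table of
+ * pre-tuned TDMA patterns: when turning TDMA on, types 1-21 and 71 select
+ * fixed five-byte parameter sets (unknown types fall back to type 1);
+ * when turning it off, the type only picks which idle parameter set is
+ * written.
+ */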
+static void halbtc8821a2ant_coex_all_off(struct btc_coexist *btcoexist)
+{
+ /* fw all off */
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ /* sw all off */
+ btc8821a2ant_sw_mech1(btcoexist, false, false, false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false, false, 0x18);
+
+ /* hw all off */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC,
+ 0x55555555, 0x55555555, 0xffff, 0x3);
+}
+
+static void halbtc8821a2ant_coex_under_5g(struct btc_coexist *btcoexist)
+{
+ halbtc8821a2ant_coex_all_off(btcoexist);
+}
+
+static void halbtc8821a2ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ /* force to reset coex mechanism */
+ halbtc8821a2ant_coex_table(btcoexist, FORCE_EXEC, 0x55555555,
+ 0x55555555, 0xffff, 0x3);
+
+ halbtc8821a2ant_ps_tdma(btcoexist, FORCE_EXEC, false, 1);
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, FORCE_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, FORCE_EXEC, false);
+
+ btc8821a2ant_sw_mech1(btcoexist, false, false, false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false, false, 0x18);
+}
+
+static void halbtc8821a2ant_bt_inquiry_page(struct btc_coexist *btcoexist)
+{
+ bool low_pwr_disable = true;
+
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5afa5afa, 0xffff, 0x3);
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 3);
+}
+
+static bool halbtc8821a2ant_is_common_action(struct btc_coexist *btcoexist)
+{
+ bool common = false, wifi_connected = false, wifi_busy = false;
+ bool low_pwr_disable = false;
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_CONNECTED,
+ &wifi_connected);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5afa5afa, 0xffff, 0x3);
+
+ if (!wifi_connected &&
+ BT_8821A_2ANT_BT_STATUS_IDLE == coex_dm->bt_status) {
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi IPS + BT IPS!!\n");
+
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btc8821a2ant_sw_mech1(btcoexist, false, false, false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false, false, 0x18);
+
+ common = true;
+ } else if (wifi_connected &&
+ (BT_8821A_2ANT_BT_STATUS_IDLE == coex_dm->bt_status)) {
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ if (wifi_busy) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Busy + BT IPS!!\n");
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi LPS + BT IPS!!\n");
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ }
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btc8821a2ant_sw_mech1(btcoexist, false, false, false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false, false, 0x18);
+
+ common = true;
+ } else if (!wifi_connected &&
+ (BT_8821A_2ANT_BT_STATUS_CON_IDLE == coex_dm->bt_status)) {
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist, BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi IPS + BT LPS!!\n");
+
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btc8821a2ant_sw_mech1(btcoexist, false, false, false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false, false, 0x18);
+ common = true;
+ } else if (wifi_connected &&
+ (BT_8821A_2ANT_BT_STATUS_CON_IDLE == coex_dm->bt_status)) {
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_ACT_DISABLE_LOW_POWER, &low_pwr_disable);
+
+ if (wifi_busy) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Busy + BT LPS!!\n");
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi LPS + BT LPS!!\n");
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ }
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btc8821a2ant_sw_mech1(btcoexist, true, true, true, true);
+ btc8821a2ant_sw_mech2(btcoexist, false, false, false, 0x18);
+
+ common = true;
+ } else if (!wifi_connected &&
+ (BT_8821A_2ANT_BT_STATUS_NON_IDLE ==
+ coex_dm->bt_status)) {
+ low_pwr_disable = false;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_ACT_DISABLE_LOW_POWER, &low_pwr_disable);
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi IPS + BT Busy!!\n");
+
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+
+ common = true;
+ } else {
+ low_pwr_disable = true;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_ACT_DISABLE_LOW_POWER,
+ &low_pwr_disable);
+
+ if (wifi_busy) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi Busy + BT Busy!!\n");
+ common = false;
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Wifi LPS + BT Busy!!\n");
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC, true, 21);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist,
+ NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist,
+ NORMAL_EXEC, false);
+
+ common = true;
+ }
+ btc8821a2ant_sw_mech1(btcoexist, true, true, true, true);
+ }
+ return common;
+}
+
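+/* is_common_action() handles the "easy" corners of the
+ * (wifi_connected x bt_status) matrix: whenever either side is idle it
+ * parks the coex mechanism in a quiet default (TDMA off, DAC swing 6,
+ * no BT power decrease, sw mechanisms mostly off) and returns true;
+ * only the WiFi-busy + BT-busy case is left to the per-profile handlers.
+ */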
+static void btc8821a2_int1(struct btc_coexist *btcoexist, bool tx_pause,
+ int result)
+{
+ if (tx_pause) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 1\n");
+
+ if (coex_dm->cur_ps_tdma == 71) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 13);
+ coex_dm->tdma_adj_type = 13;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ } else if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 13);
+ coex_dm->tdma_adj_type = 13;
+ }
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 0\n");
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 71);
+ coex_dm->tdma_adj_type = 71;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 71) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 71);
+ coex_dm->tdma_adj_type = 71;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ }
+ }
+ }
+}
+
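+/* btc8821a2_int1/2/3 are three instances of the same TDMA-duration state
+ * machine, specialized for max_interval 1/2/3: each maps the current
+ * ps_tdma type to a neighbouring type in its own table when "result" asks
+ * to widen (+1) or shrink (-1) the WiFi slot, and mirrors the choice in
+ * coex_dm->tdma_adj_type.
+ */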
+static void btc8821a2_int2(struct btc_coexist *btcoexist, bool tx_pause,
+ int result)
+{
+ if (tx_pause) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 1\n");
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ } else if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ }
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 0\n");
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ }
+ }
+ }
+}
+
+static void btc8821a2_int3(struct btc_coexist *btcoexist, bool tx_pause,
+ int result)
+{
+ if (tx_pause) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 1\n");
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ }
+ if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 8);
+ coex_dm->tdma_adj_type = 8;
+ } else if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 16);
+ coex_dm->tdma_adj_type = 16;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ }
+ }
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], TxPause = 0\n");
+ if (coex_dm->cur_ps_tdma == 5) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 6) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 7) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 8) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ }
+ if (coex_dm->cur_ps_tdma == 13) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 14) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 15) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 16) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ if (result == -1) {
+ if (coex_dm->cur_ps_tdma == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 4);
+ coex_dm->tdma_adj_type = 4;
+ } else if (coex_dm->cur_ps_tdma == 9) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 12);
+ coex_dm->tdma_adj_type = 12;
+ }
+ } else if (result == 1) {
+ if (coex_dm->cur_ps_tdma == 4) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else if (coex_dm->cur_ps_tdma == 12) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 11) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else if (coex_dm->cur_ps_tdma == 10) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ }
+ }
+ }
+}
+
+static void btc8821a2ant_tdma_dur_adj(struct btc_coexist *btcoexist,
+ bool sco_hid, bool tx_pause,
+ u8 max_interval)
+{
+ static long up, dn, m, n, wait_count;
+ /* 0: no change, +1: increase WiFi duration,
+ * -1: decrease WiFi duration
+ */
+ int result;
+ u8 retry_count = 0;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW,
+ "[BTCoex], TdmaDurationAdjust()\n");
+
+ if (coex_dm->reset_tdma_adjust) {
+ coex_dm->reset_tdma_adjust = false;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], first run TdmaDurationAdjust()!!\n");
+ if (sco_hid) {
+ if (tx_pause) {
+ if (max_interval == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 13);
+ coex_dm->tdma_adj_type = 13;
+ } else if (max_interval == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 14);
+ coex_dm->tdma_adj_type = 14;
+ } else if (max_interval == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 15);
+ coex_dm->tdma_adj_type = 15;
+ }
+ } else {
+ if (max_interval == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 9);
+ coex_dm->tdma_adj_type = 9;
+ } else if (max_interval == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 10);
+ coex_dm->tdma_adj_type = 10;
+ } else if (max_interval == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 11);
+ coex_dm->tdma_adj_type = 11;
+ }
+ }
+ } else {
+ if (tx_pause) {
+ if (max_interval == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 5);
+ coex_dm->tdma_adj_type = 5;
+ } else if (max_interval == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 6);
+ coex_dm->tdma_adj_type = 6;
+ } else if (max_interval == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 7);
+ coex_dm->tdma_adj_type = 7;
+ }
+ } else {
+ if (max_interval == 1) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 1);
+ coex_dm->tdma_adj_type = 1;
+ } else if (max_interval == 2) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 2);
+ coex_dm->tdma_adj_type = 2;
+ } else if (max_interval == 3) {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist,
+ NORMAL_EXEC,
+ true, 3);
+ coex_dm->tdma_adj_type = 3;
+ }
+ }
+ }
+
+ up = 0;
+ dn = 0;
+ m = 1;
+ n = 3;
+ result = 0;
+ wait_count = 0;
+ } else {
+		/* acquire the BT TRx retry count from BT_Info byte2 */
+ retry_count = coex_sta->bt_retry_cnt;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], retry_count = %d\n", retry_count);
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], up = %d, dn = %d, m = %d, n = %d, wait_count = %d\n",
+ (int)up, (int)dn, (int)m, (int)n, (int)wait_count);
+ result = 0;
+ wait_count++;
+
+ if (retry_count == 0) {
+ /* no retry in the last 2-second duration */
+ up++;
+ dn--;
+
+ if (dn <= 0)
+ dn = 0;
+
+ if (up >= n) {
+ /* if (retry count == 0) for 2*n seconds,
+				/* retry count has been 0 for 2*n seconds,
+				 * widen the WiFi duration
+				 */
+ wait_count = 0;
+ n = 3;
+ up = 0;
+ dn = 0;
+ result = 1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Increase wifi duration!!\n");
+ }
+ } else if (retry_count <= 3) {
+			/* <= 3 retries in the last 2-second duration */
+ up--;
+ dn++;
+
+ if (up <= 0)
+ up = 0;
+
+ if (dn == 2) {
+				/* if retry count < 3 for 2*2 seconds,
+				 * shrink the WiFi duration
+				 */
+ if (wait_count <= 2)
+ m++; /* avoid bounce in two levels */
+ else
+ m = 1;
+				/* m caps at 20, i.e. at most 120 seconds
+				 * before rechecking whether to adjust the
+				 * WiFi duration.
+				 */
+ if (m >= 20)
+ m = 20;
+
+ n = 3*m;
+ up = 0;
+ dn = 0;
+ wait_count = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM,
+ ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Decrease wifi duration for retryCounter<3!!\n");
+ }
+ } else {
+			/* if retry count > 3 happens even once,
+			 * shrink the WiFi duration
+			 */
+ if (wait_count == 1)
+ m++; /* avoid bounce in two levels */
+ else
+ m = 1;
+			/* m caps at 20, i.e. at most 120 seconds before
+			 * rechecking whether to adjust the WiFi duration.
+			 */
+ if (m >= 20)
+ m = 20;
+
+ n = 3*m;
+ up = 0;
+ dn = 0;
+ wait_count = 0;
+ result = -1;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], Decrease wifi duration for retryCounter>3!!\n");
+ }
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+ "[BTCoex], max Interval = %d\n", max_interval);
+ if (max_interval == 1)
+ btc8821a2_int1(btcoexist, tx_pause, result);
+ else if (max_interval == 2)
+ btc8821a2_int2(btcoexist, tx_pause, result);
+ else if (max_interval == 3)
+ btc8821a2_int3(btcoexist, tx_pause, result);
+ }
+
+	/* if the current PS-TDMA setting does not match the recorded one
+	 * (e.g. during scan or DHCP), adjust it back to the previously
+	 * recorded setting.
+	 */
+ if (coex_dm->cur_ps_tdma != coex_dm->tdma_adj_type) {
+ bool scan = false, link = false, roam = false;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+			  "[BTCoex], PsTdma type mismatch!!!, cur_ps_tdma = %d, recordPsTdma = %d\n",
+ coex_dm->cur_ps_tdma, coex_dm->tdma_adj_type);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+
+ if (!scan && !link && !roam) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, true,
+ coex_dm->tdma_adj_type);
+ } else {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_DETAIL,
+				  "[BTCoex], roaming/link/scan is in progress, will adjust next time!!!\n");
+ }
+ }
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 0x6);
+}
+
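+/* A worked example of the hysteresis above, starting from n = 3, m = 1:
+ * three consecutive clean 2-second windows (retry_count == 0) push "up"
+ * to n and widen the WiFi slot (result = 1); two windows with retries
+ * drive "dn" to 2 and shrink it (result = -1), each bounce lengthening
+ * the observation window via n = 3 * m, with m growing by one per bounce
+ * and capped at 20, i.e. about 120 seconds.
+ */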
+/* SCO only or SCO+PAN(HS) */
+static void halbtc8821a2ant_action_sco(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state;
+ u32 wifi_bw;
+
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 4);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for SCO quality at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC,
+ 0x5a5a5a5a, 0x5a5a5a5a, 0xffff, 0x3);
+ } else {
+ /* for SCO quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC,
+ 0x5aea5aea, 0x5aea5aea, 0xffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism
+ * halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ */
+
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+						false, 0); /* for voice quality */
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+						false, 0); /* for voice quality */
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism
+ * halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, true, 5);
+ */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+						false, 0); /* for voice quality */
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+						false, 0); /* for voice quality */
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8821a2ant_action_hid(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state;
+ u32 wifi_bw;
+
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist,
+ 0, 2, 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for HID at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5a5a5a5a, 0xffff, 0x3);
+ } else {
+ /* for HID quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5aea5aea, 0xffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 13);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 9);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 13);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* A2DP only / PAN(EDR) only / A2DP+PAN(HS) */
+static void halbtc8821a2ant_action_a2dp(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state;
+ u32 wifi_bw;
+
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ /* fw dac swing is called in btc8821a2ant_tdma_dur_adj()
+ * halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ */
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, false, 1);
+ } else {
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, true, 1);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, false, 1);
+ } else {
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, true, 1);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8821a2ant_action_a2dp_pan_hs(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state, bt_info_ext;
+ u32 wifi_bw;
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ /* fw dac swing is called in btc8821a2ant_tdma_dur_adj()
+ * halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ */
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, true, 2);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, true, 1);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, true, 2);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false, true, 1);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8821a2ant_action_pan_edr(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state;
+ u32 wifi_bw;
+
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for HID at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5aff5aff, 0xffff, 0x3);
+ } else {
+ /* for HID quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5aff5aff, 0xffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 1);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 5);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* PAN(HS) only */
+static void halbtc8821a2ant_action_pan_hs(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state;
+ u32 wifi_bw;
+
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist,
+ 0, 2, 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC,
+ true);
+ } else {
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC,
+ false);
+ }
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC, false, 1);
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_dec_bt_pwr(btcoexist,
+ NORMAL_EXEC, true);
+ } else {
+ halbtc8821a2ant_dec_bt_pwr(btcoexist,
+ NORMAL_EXEC, false);
+ }
+
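+ /* note: both RSSI branches below currently request the same
+ * TDMA case 1
+ */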
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ false, 1);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* PAN(EDR)+A2DP */
+static void halbtc8821a2ant_action_pan_edr_a2dp(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state, bt_info_ext;
+ u32 wifi_bw;
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for HID at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5afa5afa, 0xffff, 0x3);
+ } else {
+ /* for HID quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5afa5afa, 0xffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
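+ /* the basic-rate and EDR-rate branches below are identical
+ * today; presumably kept separate for future rate-specific
+ * tuning
+ */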
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ false, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ false, 3);
+ }
+ } else {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ true, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ true, 3);
+ }
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ false, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ false, 3);
+ }
+ } else {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ true, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, false,
+ true, 3);
+ }
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, false,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8821a2ant_action_pan_edr_hid(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state;
+ u32 wifi_bw;
+
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for HID at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5a5f5a5f, 0xffff, 0x3);
+ } else {
+ /* for HID quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5a5f5a5f, 0xffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 3);
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 10);
+ } else {
+ halbtc8821a2ant_ps_tdma(btcoexist, NORMAL_EXEC,
+ true, 14);
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+/* HID+A2DP+PAN(EDR) */
+static void btc8821a2ant_act_hid_a2dp_pan_edr(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state, bt_info_ext;
+ u32 wifi_bw;
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist,
+ 0, 2, 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ halbtc8821a2ant_fw_dac_swing_lvl(btcoexist, NORMAL_EXEC, 6);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for HID at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5a5a5a5a, 0xffff, 0x3);
+ } else {
+ /* for HID quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5a5a5a5a, 0xffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ true, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ true, 3);
+ }
+ } else {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ true, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ true, 3);
+ }
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ false, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ false, 3);
+ }
+ } else {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ true, 3);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist, true,
+ true, 3);
+ }
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8821a2ant_action_hid_a2dp(struct btc_coexist *btcoexist)
+{
+ u8 wifi_rssi_state, bt_rssi_state, bt_info_ext;
+ u32 wifi_bw;
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ wifi_rssi_state = halbtc8821a2ant_wifi_rssi_state(btcoexist, 0, 2,
+ 15, 0);
+ bt_rssi_state = halbtc8821a2ant_bt_rssi_state(2, 35, 0);
+
+ if (halbtc8821a2ant_need_to_dec_bt_pwr(btcoexist))
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, true);
+ else
+ halbtc8821a2ant_dec_bt_pwr(btcoexist, NORMAL_EXEC, false);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+
+ if (BTC_WIFI_BW_LEGACY == wifi_bw) {
+ /* for HID at 11b/g mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5f5b5f5b, 0xffffff, 0x3);
+ } else {
+ /* for HID quality & wifi performance balance at 11n mode */
+ halbtc8821a2ant_coex_table(btcoexist, NORMAL_EXEC, 0x55ff55ff,
+ 0x5f5b5f5b, 0xffffff, 0x3);
+ }
+
+ if (BTC_WIFI_BW_HT40 == wifi_bw) {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ }
+ } else {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ }
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, true, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ } else {
+ /* fw mechanism */
+ if ((bt_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (bt_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ }
+ } else {
+ if (bt_info_ext & BIT0) {
+ /* a2dp basic rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ } else {
+ /* a2dp edr rate */
+ btc8821a2ant_tdma_dur_adj(btcoexist,
+ true, true, 2);
+ }
+ }
+
+ /* sw mechanism */
+ if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
+ (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, true, false,
+ false, 0x18);
+ } else {
+ btc8821a2ant_sw_mech1(btcoexist, false, true,
+ false, false);
+ btc8821a2ant_sw_mech2(btcoexist, false, false,
+ false, 0x18);
+ }
+ }
+}
+
+static void halbtc8821a2ant_run_coexist_mechanism(struct btc_coexist *btcoexist)
+{
+ bool wifi_under_5g = false;
+ u8 algorithm = 0;
+
+ if (btcoexist->manual_control) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Manual control!!!\n");
+ return;
+ }
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
+
+ if (wifi_under_5g) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], RunCoexistMechanism(), run 5G coex setting!!<===\n");
+ halbtc8821a2ant_coex_under_5g(btcoexist);
+ return;
+ }
+
+ algorithm = halbtc8821a2ant_action_algorithm(btcoexist);
+ if (coex_sta->c2h_bt_inquiry_page &&
+ (BT_8821A_2ANT_COEX_ALGO_PANHS != algorithm)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], BT is under inquiry/page scan !!\n");
+ halbtc8821a2ant_bt_inquiry_page(btcoexist);
+ return;
+ }
+
+ coex_dm->cur_algorithm = algorithm;
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Algorithm = %d\n", coex_dm->cur_algorithm);
+
+ if (halbtc8821a2ant_is_common_action(btcoexist)) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant common.\n");
+ coex_dm->reset_tdma_adjust = true;
+ } else {
+ if (coex_dm->cur_algorithm != coex_dm->pre_algorithm) {
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], pre_algorithm = %d, cur_algorithm = %d\n",
+ coex_dm->pre_algorithm, coex_dm->cur_algorithm);
+ coex_dm->reset_tdma_adjust = true;
+ }
+ switch (coex_dm->cur_algorithm) {
+ case BT_8821A_2ANT_COEX_ALGO_SCO:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = SCO.\n");
+ halbtc8821a2ant_action_sco(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = HID.\n");
+ halbtc8821a2ant_action_hid(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = A2DP.\n");
+ halbtc8821a2ant_action_a2dp(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_A2DP_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = A2DP+PAN(HS).\n");
+ halbtc8821a2ant_action_a2dp_pan_hs(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = PAN(EDR).\n");
+ halbtc8821a2ant_action_pan_edr(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_PANHS:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = HS mode.\n");
+ halbtc8821a2ant_action_pan_hs(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_PANEDR_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = PAN+A2DP.\n");
+ halbtc8821a2ant_action_pan_edr_a2dp(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_PANEDR_HID:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = PAN(EDR)+HID.\n");
+ halbtc8821a2ant_action_pan_edr_hid(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_HID_A2DP_PANEDR:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = HID+A2DP+PAN.\n");
+ btc8821a2ant_act_hid_a2dp_pan_edr(btcoexist);
+ break;
+ case BT_8821A_2ANT_COEX_ALGO_HID_A2DP:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = HID+A2DP.\n");
+ halbtc8821a2ant_action_hid_a2dp(btcoexist);
+ break;
+ default:
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], Action 2-Ant, algorithm = coexist All Off!!\n");
+ halbtc8821a2ant_coex_all_off(btcoexist);
+ break;
+ }
+ coex_dm->pre_algorithm = coex_dm->cur_algorithm;
+ }
+}
+
+/*============================================================
+ * work-around functions start with wa_halbtc8821a2ant_
+ *============================================================
+ */
+
+/*============================================================
+ * extern functions start with ex_halbtc8821a2ant_
+ *============================================================
+ */
+void ex_halbtc8821a2ant_init_hwconfig(struct btc_coexist *btcoexist)
+{
+ u8 u1tmp = 0;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], 2Ant Init HW Config!!\n");
+
+ /* backup rf 0x1e value */
+ coex_dm->bt_rf0x1e_backup =
+ btcoexist->btc_get_rf_reg(btcoexist, BTC_RF_A, 0x1e, 0xfffff);
+
+ /* 0x790[5:0] = 0x5 */
+ u1tmp = btcoexist->btc_read_1byte(btcoexist, 0x790);
+ u1tmp &= 0xc0;
+ u1tmp |= 0x5;
+ btcoexist->btc_write_1byte(btcoexist, 0x790, u1tmp);
+
+ /* Antenna config */
+ halbtc8821a2ant_set_ant_path(btcoexist,
+ BTC_ANT_WIFI_AT_MAIN, true, false);
+
+ /* PTA parameter */
+ halbtc8821a2ant_coex_table(btcoexist,
+ FORCE_EXEC, 0x55555555, 0x55555555,
+ 0xffff, 0x3);
+
+ /* Enable counter statistics */
+ /* 0x76e[3] = 1, WLAN_Act control by PTA */
+ btcoexist->btc_write_1byte(btcoexist, 0x76e, 0xc);
+ btcoexist->btc_write_1byte(btcoexist, 0x778, 0x3);
+ btcoexist->btc_write_1byte_bitmask(btcoexist, 0x40, 0x20, 0x1);
+}
+
+void ex_halbtc8821a2ant_init_coex_dm(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Coex Mechanism Init!!\n");
+
+ halbtc8821a2ant_init_coex_dm(btcoexist);
+}
+
+void ex_halbtc8821a2ant_display_coex_info(struct btc_coexist *btcoexist)
+{
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+ struct rtl_priv *rtlpriv = btcoexist->adapter;
+ u8 u1tmp[4], i, bt_info_ext, ps_tdma_case = 0;
+ u32 u4tmp[4];
+ bool roam = false, scan = false, link = false, wifi_under_5g = false;
+ bool bt_hs_on = false, wifi_busy = false;
+ long wifi_rssi = 0, bt_hs_rssi = 0;
+ u32 wifi_bw, wifi_traffic_dir;
+ u8 wifi_dot_11_chnl, wifi_hs_chnl;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n ============[BT Coexist info]============");
+
+ if (!board_info->bt_exist) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n BT does not exist!!!");
+ return;
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d ", "Ant PG number/ Ant mechanism:",
+ board_info->pg_ant_num, board_info->btdm_ant_num);
+
+ if (btcoexist->manual_control) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s", "[Action Manual control]!!");
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s / %d", "BT stack/ hci ext ver",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER, &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d_%d/ 0x%x/ 0x%x(%d)",
+ "CoexVer/ FwVer/ PatchVer",
+ glcoex_ver_date_8821a_2ant, glcoex_ver_8821a_2ant,
+ fw_ver, bt_patch_ver, bt_patch_ver);
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_U1_WIFI_DOT11_CHNL, &wifi_dot_11_chnl);
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_U1_WIFI_HS_CHNL, &wifi_hs_chnl);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d / %d(%d)",
+ "Dot11 channel / HsMode(HsChnl)",
+ wifi_dot_11_chnl, bt_hs_on, wifi_hs_chnl);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x ",
+ "H2C Wifi inform bt chnl Info",
+ coex_dm->wifi_chnl_info[0], coex_dm->wifi_chnl_info[1],
+ coex_dm->wifi_chnl_info[2]);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_WIFI_RSSI, &wifi_rssi);
+ btcoexist->btc_get(btcoexist, BTC_GET_S4_HS_RSSI, &bt_hs_rssi);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %ld/ %ld", "Wifi rssi/ HS rssi",
+ wifi_rssi, bt_hs_rssi);
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_SCAN, &scan);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_LINK, &link);
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_ROAM, &roam);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d/ %d ", "Wifi link/ roam/ scan",
+ link, roam, scan);
+
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_U4_WIFI_BW, &wifi_bw);
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_BUSY, &wifi_busy);
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_U4_WIFI_TRAFFIC_DIRECTION, &wifi_traffic_dir);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %s / %s/ %s ", "Wifi status",
+ (wifi_under_5g ? "5G" : "2.4G"),
+ ((BTC_WIFI_BW_LEGACY == wifi_bw) ? "Legacy" :
+ (((BTC_WIFI_BW_HT40 == wifi_bw) ? "HT40" : "HT20"))),
+ ((!wifi_busy) ? "idle" :
+ ((BTC_WIFI_TRAFFIC_TX == wifi_traffic_dir) ?
+ "uplink" : "downlink")));
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = [%s/ %d/ %d] ", "BT [status/ rssi/ retryCnt]",
+ ((coex_sta->c2h_bt_inquiry_page) ? ("inquiry/page scan") :
+ ((BT_8821A_2ANT_BT_STATUS_IDLE == coex_dm->bt_status)
+ ? "idle" : ((BT_8821A_2ANT_BT_STATUS_CON_IDLE ==
+ coex_dm->bt_status) ? "connected-idle" : "busy"))),
+ coex_sta->bt_rssi, coex_sta->bt_retry_cnt);
+
+ if (stack_info->profile_notified) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d / %d / %d / %d", "SCO/HID/PAN/A2DP",
+ stack_info->sco_exist, stack_info->hid_exist,
+ stack_info->pan_exist, stack_info->a2dp_exist);
+
+ btcoexist->btc_disp_dbg_msg(btcoexist,
+ BTC_DBG_DISP_BT_LINK_INFO);
+ }
+
+ bt_info_ext = coex_sta->bt_info_ext;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s",
+ "BT Info A2DP rate",
+ (bt_info_ext & BIT0) ? "Basic rate" : "EDR rate");
+
+ for (i = 0; i < BT_INFO_SRC_8821A_2ANT_MAX; i++) {
+ if (coex_sta->bt_info_c2h_cnt[i]) {
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x %02x %02x(%d)",
+ glbt_info_src_8821a_2ant[i],
+ coex_sta->bt_info_c2h[i][0],
+ coex_sta->bt_info_c2h[i][1],
+ coex_sta->bt_info_c2h[i][2],
+ coex_sta->bt_info_c2h[i][3],
+ coex_sta->bt_info_c2h[i][4],
+ coex_sta->bt_info_c2h[i][5],
+ coex_sta->bt_info_c2h[i][6],
+ coex_sta->bt_info_c2h_cnt[i]);
+ }
+ }
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %s/%s",
+ "PS state, IPS/LPS",
+ ((coex_sta->under_ips ? "IPS ON" : "IPS OFF")),
+ ((coex_sta->under_lps ? "LPS ON" : "LPS OFF")));
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_FW_PWR_MODE_CMD);
+
+ /* Sw mechanism*/
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Sw mechanism]============");
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d/ %d/ %d ",
+ "SM1[ShRf/ LpRA/ LimDig/ btLna]",
+ coex_dm->cur_rf_rx_lpf_shrink, coex_dm->cur_low_penalty_ra,
+ coex_dm->limited_dig, coex_dm->cur_bt_lna_constrain);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d/ %d(0x%x) ",
+ "SM2[AgcT/ AdcB/ SwDacSwing(lvl)]",
+ coex_dm->cur_agc_table_en, coex_dm->cur_adc_back_off,
+ coex_dm->cur_dac_swing_on, coex_dm->cur_dac_swing_lvl);
+
+ /* Fw mechanism*/
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s",
+ "============[Fw mechanism]============");
+
+ if (!btcoexist->manual_control) {
+ ps_tdma_case = coex_dm->cur_ps_tdma;
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %02x %02x %02x %02x %02x case-%d",
+ "PS TDMA",
+ coex_dm->ps_tdma_para[0], coex_dm->ps_tdma_para[1],
+ coex_dm->ps_tdma_para[2], coex_dm->ps_tdma_para[3],
+ coex_dm->ps_tdma_para[4], ps_tdma_case);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = %d/ %d ", "DecBtPwr/ IgnWlanAct",
+ coex_dm->cur_dec_bt_pwr,
+ coex_dm->cur_ignore_wlan_act);
+ }
+
+ /* Hw setting*/
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s", "============[Hw setting]============");
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
+ "\r\n %-35s = 0x%x", "RF-A, 0x1e initVal",
+ coex_dm->bt_rf0x1e_backup);
+
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x778);
+ u1tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0x6cc);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x ",
+ "0x778 (W_Act)/ 0x6cc (CoTab Sel)",
+ u1tmp[0], u1tmp[1]);
+
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x8db);
+ u1tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0xc5b);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0x8db(ADC)/0xc5b[29:25](DAC)",
+ ((u1tmp[0]&0x60)>>5), ((u1tmp[1]&0x3e)>>1));
+
+ u4tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xcb4);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0xcb4[7:0](ctrl)/ 0xcb4[29:28](val)",
+ u4tmp[0]&0xff, ((u4tmp[0]&0x30000000)>>28));
+
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x40);
+ u4tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x4c);
+ u4tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x974);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x40/ 0x4c[24:23]/ 0x974",
+ u1tmp[0], ((u4tmp[0]&0x01800000)>>23), u4tmp[1]);
+
+ u4tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x550);
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x522);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0x550(bcn ctrl)/0x522",
+ u4tmp[0], u1tmp[0]);
+
+ u4tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xc50);
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0xa0a);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "0xc50(DIG)/0xa0a(CCK-TH)",
+ u4tmp[0], u1tmp[0]);
+
+ u4tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0xf48);
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0xa5b);
+ u1tmp[1] = btcoexist->btc_read_1byte(btcoexist, 0xa5c);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x",
+ "OFDM-FA/ CCK-FA",
+ u4tmp[0], (u1tmp[0]<<8) + u1tmp[1]);
+
+ u4tmp[0] = btcoexist->btc_read_4byte(btcoexist, 0x6c0);
+ u4tmp[1] = btcoexist->btc_read_4byte(btcoexist, 0x6c4);
+ u4tmp[2] = btcoexist->btc_read_4byte(btcoexist, 0x6c8);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x/ 0x%x/ 0x%x",
+ "0x6c0/0x6c4/0x6c8",
+ u4tmp[0], u4tmp[1], u4tmp[2]);
+
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "0x770 (hi-pri Rx/Tx)",
+ coex_sta->high_priority_rx, coex_sta->high_priority_tx);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = %d/ %d",
+ "0x774(low-pri Rx/Tx)",
+ coex_sta->low_priority_rx, coex_sta->low_priority_tx);
+
+ /* Tx mgnt queue hang or not: 0x41b should be 0xf, e.g. 0xd ==> hang */
+ u1tmp[0] = btcoexist->btc_read_1byte(btcoexist, 0x41b);
+ RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "\r\n %-35s = 0x%x",
+ "0x41b (mgntQ hang chk == 0xf)",
+ u1tmp[0]);
+
+ btcoexist->btc_disp_dbg_msg(btcoexist, BTC_DBG_DISP_COEX_STATISTICS);
+}
+
+void ex_halbtc8821a2ant_ips_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_IPS_ENTER == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS ENTER notify\n");
+ coex_sta->under_ips = true;
+ halbtc8821a2ant_coex_all_off(btcoexist);
+ } else if (BTC_IPS_LEAVE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], IPS LEAVE notify\n");
+ coex_sta->under_ips = false;
+ /* halbtc8821a2ant_init_coex_dm(btcoexist); */
+ }
+}
+
+void ex_halbtc8821a2ant_lps_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_LPS_ENABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS ENABLE notify\n");
+ coex_sta->under_lps = true;
+ } else if (BTC_LPS_DISABLE == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], LPS DISABLE notify\n");
+ coex_sta->under_lps = false;
+ }
+}
+
+void ex_halbtc8821a2ant_scan_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_SCAN_START == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN START notify\n");
+ } else if (BTC_SCAN_FINISH == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], SCAN FINISH notify\n");
+ }
+}
+
+void ex_halbtc8821a2ant_connect_notify(struct btc_coexist *btcoexist, u8 type)
+{
+ if (BTC_ASSOCIATE_START == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT START notify\n");
+ } else if (BTC_ASSOCIATE_FINISH == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], CONNECT FINISH notify\n");
+ }
+}
+
+void ex_halbtc8821a2ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ u8 h2c_parameter[3] = {0};
+ u32 wifi_bw;
+ u8 wifi_central_chnl;
+
+ if (BTC_MEDIA_CONNECT == type) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA connect notify\n");
+ } else {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], MEDIA disconnect notify\n");
+ }
+
+ /* only 2.4G we need to inform bt the chnl mask*/
+ btcoexist->btc_get(btcoexist, BTC_GET_U1_WIFI_CENTRAL_CHNL,
+ &wifi_central_chnl);
+ if ((BTC_MEDIA_CONNECT == type) &&
+ (wifi_central_chnl <= 14)) {
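+ /* H2C 0x66 payload: [0] = connected flag, [1] = centre
+ * channel, [2] = bandwidth code (0x30 for HT40, 0x20
+ * otherwise)
+ */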
+ h2c_parameter[0] = 0x1;
+ h2c_parameter[1] = wifi_central_chnl;
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
+ if (BTC_WIFI_BW_HT40 == wifi_bw)
+ h2c_parameter[2] = 0x30;
+ else
+ h2c_parameter[2] = 0x20;
+ }
+
+ coex_dm->wifi_chnl_info[0] = h2c_parameter[0];
+ coex_dm->wifi_chnl_info[1] = h2c_parameter[1];
+ coex_dm->wifi_chnl_info[2] = h2c_parameter[2];
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE_FW_EXEC,
+ "[BTCoex], FW write 0x66 = 0x%x\n",
+ h2c_parameter[0] << 16 | h2c_parameter[1] << 8 | h2c_parameter[2]);
+
+ btcoexist->btc_fill_h2c(btcoexist, 0x66, 3, h2c_parameter);
+}
+
+void ex_halbtc8821a2ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type)
+{
+ if (type == BTC_PACKET_DHCP) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], DHCP Packet notify\n");
+ }
+}
+
+void ex_halbtc8821a2ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmp_buf, u8 length)
+{
+ u8 bt_info = 0;
+ u8 i, rsp_source = 0;
+ static u32 set_bt_lna_cnt, set_bt_psd_mode;
+ bool bt_busy = false, limited_dig = false;
+ bool wifi_connected = false, bt_hs_on = false;
+
+ coex_sta->c2h_bt_info_req_sent = false;
+
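+ /* low nibble of byte 0 identifies the C2H report source */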
+ rsp_source = tmp_buf[0] & 0xf;
+ if (rsp_source >= BT_INFO_SRC_8821A_2ANT_MAX)
+ rsp_source = BT_INFO_SRC_8821A_2ANT_WIFI_FW;
+ coex_sta->bt_info_c2h_cnt[rsp_source]++;
+
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Bt info[%d], length = %d, hex data = [",
+ rsp_source, length);
+ for (i = 0; i < length; i++) {
+ coex_sta->bt_info_c2h[rsp_source][i] = tmp_buf[i];
+ if (i == 1)
+ bt_info = tmp_buf[i];
+ if (i == length - 1) {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x]\n", tmp_buf[i]);
+ } else {
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "0x%02x, ", tmp_buf[i]);
+ }
+ }
+
+ if (BT_INFO_SRC_8821A_2ANT_WIFI_FW != rsp_source) {
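+ /* C2H BT info layout, as parsed here: byte1 = link/profile
+ * bits, byte2[3:0] = retry count, byte3 = raw RSSI (rescaled
+ * below), byte4 = extended info bits
+ */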
+ coex_sta->bt_retry_cnt =
+ coex_sta->bt_info_c2h[rsp_source][2] & 0xf;
+
+ coex_sta->bt_rssi =
+ coex_sta->bt_info_c2h[rsp_source][3] * 2 + 10;
+
+ coex_sta->bt_info_ext =
+ coex_sta->bt_info_c2h[rsp_source][4];
+
+ /* Here we need to resend some wifi info to BT
+ * because bt was reset and lost the info.
+ */
+ if (coex_sta->bt_info_ext & BIT1) {
+ btcoexist->btc_get(btcoexist,
+ BTC_GET_BL_WIFI_CONNECTED, &wifi_connected);
+ if (wifi_connected) {
+ ex_halbtc8821a2ant_media_status_notify(btcoexist,
+ BTC_MEDIA_CONNECT);
+ } else {
+ ex_halbtc8821a2ant_media_status_notify(btcoexist,
+ BTC_MEDIA_DISCONNECT);
+ }
+
+ set_bt_psd_mode = 0;
+ }
+ if (set_bt_psd_mode <= 3) {
+ halbtc8821a2ant_set_bt_psd_mode(btcoexist, FORCE_EXEC,
+ 0x0); /* fix CH-BW mode */
+ set_bt_psd_mode++;
+ }
+
+ if (coex_dm->cur_bt_lna_constrain) {
+ if (!(coex_sta->bt_info_ext & BIT2)) {
+ if (set_bt_lna_cnt <= 3) {
+ btc8821a2_set_bt_lna_const(btcoexist,
+ FORCE_EXEC,
+ true);
+ set_bt_lna_cnt++;
+ }
+ }
+ } else {
+ set_bt_lna_cnt = 0;
+ }
+
+ if (coex_sta->bt_info_ext & BIT3)
+ halbtc8821a2ant_ignore_wlan_act(btcoexist,
+ FORCE_EXEC, false);
+ /* else BT already does not ignore Wlan activity, nothing to do */
+
+ if (!(coex_sta->bt_info_ext & BIT4))
+ halbtc8821a2ant_bt_auto_report(btcoexist,
+ FORCE_EXEC, true);
+ /* else BT auto report is already enabled, nothing to do */
+ }
+
+ btcoexist->btc_get(btcoexist, BTC_GET_BL_HS_OPERATION, &bt_hs_on);
+ /* check BIT2 first ==> check if bt is under inquiry or page scan */
+ if (bt_info & BT_INFO_8821A_2ANT_B_INQ_PAGE) {
+ coex_sta->c2h_bt_inquiry_page = true;
+ coex_dm->bt_status = BT_8821A_2ANT_BT_STATUS_NON_IDLE;
+ } else {
+ coex_sta->c2h_bt_inquiry_page = false;
+ if (bt_info == 0x1) {
+ /* connection exists but not busy*/
+ coex_sta->bt_link_exist = true;
+ coex_dm->bt_status = BT_8821A_2ANT_BT_STATUS_CON_IDLE;
+ } else if (bt_info & BT_INFO_8821A_2ANT_B_CONNECTION) {
+ /* connection exists and some link is busy*/
+ coex_sta->bt_link_exist = true;
+ coex_sta->pan_exist =
+ !!(bt_info & BT_INFO_8821A_2ANT_B_FTP);
+ coex_sta->a2dp_exist =
+ !!(bt_info & BT_INFO_8821A_2ANT_B_A2DP);
+ coex_sta->hid_exist =
+ !!(bt_info & BT_INFO_8821A_2ANT_B_HID);
+ coex_sta->sco_exist =
+ !!(bt_info & BT_INFO_8821A_2ANT_B_SCO_ESCO);
+ coex_dm->bt_status = BT_8821A_2ANT_BT_STATUS_NON_IDLE;
+ } else {
+ coex_sta->bt_link_exist = false;
+ coex_sta->pan_exist = false;
+ coex_sta->a2dp_exist = false;
+ coex_sta->hid_exist = false;
+ coex_sta->sco_exist = false;
+ coex_dm->bt_status = BT_8821A_2ANT_BT_STATUS_IDLE;
+ }
+
+ if (bt_hs_on)
+ coex_dm->bt_status = BT_8821A_2ANT_BT_STATUS_NON_IDLE;
+ }
+
+ bt_busy = (coex_dm->bt_status == BT_8821A_2ANT_BT_STATUS_NON_IDLE);
+ btcoexist->btc_set(btcoexist, BTC_SET_BL_BT_TRAFFIC_BUSY, &bt_busy);
+
+ limited_dig = (coex_dm->bt_status != BT_8821A_2ANT_BT_STATUS_IDLE);
+ coex_dm->limited_dig = limited_dig;
+ btcoexist->btc_set(btcoexist,
+ BTC_SET_BL_BT_LIMITED_DIG, &limited_dig);
+
+ halbtc8821a2ant_run_coexist_mechanism(btcoexist);
+}
+
+void ex_halbtc8821a2ant_halt_notify(struct btc_coexist *btcoexist)
+{
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_NOTIFY,
+ "[BTCoex], Halt notify\n");
+
+ halbtc8821a2ant_ignore_wlan_act(btcoexist, FORCE_EXEC, true);
+ ex_halbtc8821a2ant_media_status_notify(btcoexist, BTC_MEDIA_DISCONNECT);
+}
+
+void ex_halbtc8821a2ant_periodical(struct btc_coexist *btcoexist)
+{
+ static u8 dis_ver_info_cnt;
+ u32 fw_ver = 0, bt_patch_ver = 0;
+ struct btc_board_info *board_info = &btcoexist->board_info;
+ struct btc_stack_info *stack_info = &btcoexist->stack_info;
+
+ BTC_PRINT(BTC_MSG_ALGORITHM, ALGO_TRACE,
+ "[BTCoex], ==========================Periodical===========================\n");
+
+ if (dis_ver_info_cnt <= 5) {
+ dis_ver_info_cnt += 1;
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ****************************************************************\n");
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], Ant PG Num/ Ant Mech/ Ant Pos = %d/ %d/ %d\n",
+ board_info->pg_ant_num,
+ board_info->btdm_ant_num,
+ board_info->btdm_ant_pos);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], BT stack/ hci ext ver = %s / %d\n",
+ ((stack_info->profile_notified) ? "Yes" : "No"),
+ stack_info->hci_version);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_BT_PATCH_VER,
+ &bt_patch_ver);
+ btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_FW_VER, &fw_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], CoexVer/ FwVer/ PatchVer = %d_%x/ 0x%x/ 0x%x(%d)\n",
+ glcoex_ver_date_8821a_2ant, glcoex_ver_8821a_2ant,
+ fw_ver, bt_patch_ver, bt_patch_ver);
+ BTC_PRINT(BTC_MSG_INTERFACE, INTF_INIT,
+ "[BTCoex], ****************************************************************\n");
+ }
+
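+ /* per-cycle work: poll BT info, sample the priority traffic
+ * counters, and watch for BT enable/disable transitions
+ */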
+ halbtc8821a2ant_query_bt_info(btcoexist);
+ halbtc8821a2ant_monitor_bt_ctr(btcoexist);
+ btc8821a2ant_mon_bt_en_dis(btcoexist);
+}
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a2ant.h b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a2ant.h
new file mode 100644
index 0000000..b4cf1f5
--- /dev/null
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtc8821a2ant.h
@@ -0,0 +1,205 @@
+/******************************************************************************
+ *
+ * Copyright(c) 2012 Realtek Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+ * Contact Information:
+ * wlanfae <wlanfae@realtek.com>
+ * Realtek Corporation, No. 2, Innovation Road II, Hsinchu Science Park,
+ * Hsinchu 300, Taiwan.
+ *
+ * Larry Finger <Larry.Finger@lwfinger.net>
+ *
+ *****************************************************************************/
+
+/*===========================================
+ * The following is for 8821A 2Ant BT Co-exist definition
+ *===========================================
+ */
+#define BT_INFO_8821A_2ANT_B_FTP BIT7
+#define BT_INFO_8821A_2ANT_B_A2DP BIT6
+#define BT_INFO_8821A_2ANT_B_HID BIT5
+#define BT_INFO_8821A_2ANT_B_SCO_BUSY BIT4
+#define BT_INFO_8821A_2ANT_B_ACL_BUSY BIT3
+#define BT_INFO_8821A_2ANT_B_INQ_PAGE BIT2
+#define BT_INFO_8821A_2ANT_B_SCO_ESCO BIT1
+#define BT_INFO_8821A_2ANT_B_CONNECTION BIT0
+
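+/* tolerance, in dB, presumably applied as hysteresis around the RSSI
+ * thresholds
+ */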
+#define BTC_RSSI_COEX_THRESH_TOL_8821A_2ANT 2
+
+enum _BT_INFO_SRC_8821A_2ANT {
+ BT_INFO_SRC_8821A_2ANT_WIFI_FW = 0x0,
+ BT_INFO_SRC_8821A_2ANT_BT_RSP = 0x1,
+ BT_INFO_SRC_8821A_2ANT_BT_ACTIVE_SEND = 0x2,
+ BT_INFO_SRC_8821A_2ANT_MAX
+};
+
+enum _BT_8821A_2ANT_BT_STATUS {
+ BT_8821A_2ANT_BT_STATUS_IDLE = 0x0,
+ BT_8821A_2ANT_BT_STATUS_CON_IDLE = 0x1,
+ BT_8821A_2ANT_BT_STATUS_NON_IDLE = 0x2,
+ BT_8821A_2ANT_BT_STATUS_MAX
+};
+
+enum _BT_8821A_2ANT_COEX_ALGO {
+ BT_8821A_2ANT_COEX_ALGO_UNDEFINED = 0x0,
+ BT_8821A_2ANT_COEX_ALGO_SCO = 0x1,
+ BT_8821A_2ANT_COEX_ALGO_HID = 0x2,
+ BT_8821A_2ANT_COEX_ALGO_A2DP = 0x3,
+ BT_8821A_2ANT_COEX_ALGO_A2DP_PANHS = 0x4,
+ BT_8821A_2ANT_COEX_ALGO_PANEDR = 0x5,
+ BT_8821A_2ANT_COEX_ALGO_PANHS = 0x6,
+ BT_8821A_2ANT_COEX_ALGO_PANEDR_A2DP = 0x7,
+ BT_8821A_2ANT_COEX_ALGO_PANEDR_HID = 0x8,
+ BT_8821A_2ANT_COEX_ALGO_HID_A2DP_PANEDR = 0x9,
+ BT_8821A_2ANT_COEX_ALGO_HID_A2DP = 0xa,
+ BT_8821A_2ANT_COEX_ALGO_MAX = 0xb,
+};
+
+struct coex_dm_8821a_2ant {
+ /* fw mechanism */
+ bool pre_dec_bt_pwr;
+ bool cur_dec_bt_pwr;
+ bool pre_bt_lna_constrain;
+ bool cur_bt_lna_constrain;
+ u8 pre_bt_psd_mode;
+ u8 cur_bt_psd_mode;
+ u8 pre_fw_dac_swing_lvl;
+ u8 cur_fw_dac_swing_lvl;
+ bool cur_ignore_wlan_act;
+ bool pre_ignore_wlan_act;
+ u8 pre_ps_tdma;
+ u8 cur_ps_tdma;
+ u8 ps_tdma_para[5];
+ u8 tdma_adj_type;
+ bool reset_tdma_adjust;
+ bool pre_ps_tdma_on;
+ bool cur_ps_tdma_on;
+ bool pre_bt_auto_report;
+ bool cur_bt_auto_report;
+
+ /* sw mechanism */
+ bool pre_rf_rx_lpf_shrink;
+ bool cur_rf_rx_lpf_shrink;
+ u32 bt_rf0x1e_backup;
+ bool pre_low_penalty_ra;
+ bool cur_low_penalty_ra;
+ bool pre_dac_swing_on;
+ u32 pre_dac_swing_lvl;
+ bool cur_dac_swing_on;
+ u32 cur_dac_swing_lvl;
+ bool pre_adc_back_off;
+ bool cur_adc_back_off;
+ bool pre_agc_table_en;
+ bool cur_agc_table_en;
+ u32 pre_val0x6c0;
+ u32 cur_val0x6c0;
+ u32 pre_val0x6c4;
+ u32 cur_val0x6c4;
+ u32 pre_val0x6c8;
+ u32 cur_val0x6c8;
+ u8 pre_val0x6cc;
+ u8 cur_val0x6cc;
+ bool limited_dig;
+
+ /* algorithm related */
+ u8 pre_algorithm;
+ u8 cur_algorithm;
+ u8 bt_status;
+ u8 wifi_chnl_info[3];
+};
+
+struct coex_sta_8821a_2ant {
+ bool bt_link_exist;
+ bool sco_exist;
+ bool a2dp_exist;
+ bool hid_exist;
+ bool pan_exist;
+ bool under_lps;
+ bool under_ips;
+ u32 high_priority_tx;
+ u32 high_priority_rx;
+ u32 low_priority_tx;
+ u32 low_priority_rx;
+ u8 bt_rssi;
+ u8 pre_bt_rssi_state;
+ u8 pre_wifi_rssi_state[4];
+ bool c2h_bt_info_req_sent;
+ u8 bt_info_c2h[BT_INFO_SRC_8821A_2ANT_MAX][10];
+ u32 bt_info_c2h_cnt[BT_INFO_SRC_8821A_2ANT_MAX];
+ bool c2h_bt_inquiry_page;
+ u8 bt_retry_cnt;
+ u8 bt_info_ext;
+};
+
+/*===========================================
+ * The following are the interfaces that notify the coex module.
+ *===========================================
+ */
+void ex_halbtc8821a2ant_init_hwconfig(struct btc_coexist *btcoexist);
+void ex_halbtc8821a2ant_init_coex_dm(struct btc_coexist *btcoexist);
+void ex_halbtc8821a2ant_ips_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a2ant_lps_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a2ant_scan_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a2ant_connect_notify(struct btc_coexist *btcoexist, u8 type);
+void ex_halbtc8821a2ant_media_status_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8821a2ant_special_packet_notify(struct btc_coexist *btcoexist,
+ u8 type);
+void ex_halbtc8821a2ant_bt_info_notify(struct btc_coexist *btcoexist,
+ u8 *tmp_buf, u8 length);
+void ex_halbtc8821a2ant_halt_notify(struct btc_coexist *btcoexist);
+void ex_halbtc8821a2ant_periodical(struct btc_coexist *btcoexist);
+void ex_halbtc8821a2ant_display_coex_info(struct btc_coexist *btcoexist);
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.c b/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.c
index d4bd550..fcf7459 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.c
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.c
@@ -32,7 +32,6 @@
struct btc_coexist gl_bt_coexist;
u32 btc_dbg_type[BTC_MSG_MAX];
-static u8 btc_dbg_buf[100];
/***************************************************
* Debug related function
@@ -389,7 +388,7 @@
btcoexist->bt_info.reject_agg_pkt = *bool_tmp;
break;
case BTC_SET_BL_BT_CTRL_AGG_SIZE:
- btcoexist->bt_info.b_bt_ctrl_buf_size = *bool_tmp;
+ btcoexist->bt_info.bt_ctrl_buf_size = *bool_tmp;
break;
case BTC_SET_BL_INC_SCAN_DEV_NUM:
btcoexist->bt_info.increase_scan_dev_num = *bool_tmp;
@@ -417,10 +416,10 @@
/* rtlpriv->mlmepriv.scan_compensation = *u8_tmp; */
break;
case BTC_SET_U1_1ANT_LPS:
- btcoexist->bt_info.lps_1ant = *u8_tmp;
+ btcoexist->bt_info.lps_val = *u8_tmp;
break;
case BTC_SET_U1_1ANT_RPWM:
- btcoexist->bt_info.rpwm_1ant = *u8_tmp;
+ btcoexist->bt_info.rpwm_val = *u8_tmp;
break;
/* the following are some action which will be triggered */
case BTC_SET_ACT_LEAVE_LPS:
@@ -497,7 +496,7 @@
return rtl_read_dword(rtlpriv, reg_addr);
}
-static void halbtc_write_1byte(void *bt_context, u32 reg_addr, u8 data)
+static void halbtc_write_1byte(void *bt_context, u32 reg_addr, u32 data)
{
struct btc_coexist *btcoexist = (struct btc_coexist *)bt_context;
struct rtl_priv *rtlpriv = btcoexist->adapter;
@@ -506,7 +505,7 @@
}
static void halbtc_bitmask_write_1byte(void *bt_context, u32 reg_addr,
- u32 bit_mask, u8 data)
+ u8 bit_mask, u8 data)
{
struct btc_coexist *btcoexist = (struct btc_coexist *)bt_context;
struct rtl_priv *rtlpriv = btcoexist->adapter;
@@ -652,9 +651,7 @@
btcoexist->btc_get = halbtc_get;
btcoexist->btc_set = halbtc_set;
- btcoexist->cli_buf = &btc_dbg_buf[0];
-
- btcoexist->bt_info.b_bt_ctrl_buf_size = false;
+ btcoexist->bt_info.bt_ctrl_buf_size = false;
btcoexist->bt_info.agg_buf_size = 5;
btcoexist->bt_info.increase_scan_dev_num = false;
@@ -672,7 +669,7 @@
btcoexist->statistics.cnt_init_hw_config++;
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_init_hwconfig(btcoexist);
+ ex_btc8723b2ant_init_hwconfig(btcoexist);
}
void exhalbtc_init_coex_dm(struct btc_coexist *btcoexist)
@@ -686,7 +683,7 @@
btcoexist->statistics.cnt_init_coex_dm++;
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_init_coex_dm(btcoexist);
+ ex_btc8723b2ant_init_coex_dm(btcoexist);
btcoexist->initilized = true;
}
@@ -711,7 +708,7 @@
halbtc_leave_low_power();
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_ips_notify(btcoexist, ips_type);
+ ex_btc8723b2ant_ips_notify(btcoexist, ips_type);
halbtc_nomal_low_power();
}
@@ -734,7 +731,7 @@
lps_type = BTC_LPS_ENABLE;
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_lps_notify(btcoexist, lps_type);
+ ex_btc8723b2ant_lps_notify(btcoexist, lps_type);
}
void exhalbtc_scan_notify(struct btc_coexist *btcoexist, u8 type)
@@ -757,7 +754,7 @@
halbtc_leave_low_power();
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_scan_notify(btcoexist, scan_type);
+ ex_btc8723b2ant_scan_notify(btcoexist, scan_type);
halbtc_nomal_low_power();
}
@@ -782,14 +779,12 @@
halbtc_leave_low_power();
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_connect_notify(btcoexist, asso_type);
+ ex_btc8723b2ant_connect_notify(btcoexist, asso_type);
}
void exhalbtc_mediastatus_notify(struct btc_coexist *btcoexist,
- enum _RT_MEDIA_STATUS media_status)
+ enum rt_media_status media_status)
{
- struct rtl_priv *rtlpriv = btcoexist->adapter;
- struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
u8 status;
if (!halbtc_is_bt_coexist_available(btcoexist))
@@ -805,9 +800,6 @@
halbtc_leave_low_power();
- if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- btc8723b_med_stat_notify(btcoexist, status);
-
halbtc_nomal_low_power();
}
@@ -828,8 +820,8 @@
halbtc_leave_low_power();
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_special_packet_notify(btcoexist,
- packet_type);
+ ex_btc8723b2ant_special_packet_notify(btcoexist,
+ packet_type);
halbtc_nomal_low_power();
}
@@ -844,13 +836,11 @@
btcoexist->statistics.cnt_bt_info_notify++;
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_bt_info_notify(btcoexist, tmp_buf, length);
+ ex_btc8723b2ant_bt_info_notify(btcoexist, tmp_buf, length);
}
void exhalbtc_stack_operation_notify(struct btc_coexist *btcoexist, u8 type)
{
- struct rtl_priv *rtlpriv = btcoexist->adapter;
- struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
u8 stack_op_type;
if (!halbtc_is_bt_coexist_available(btcoexist))
@@ -863,10 +853,6 @@
halbtc_leave_low_power();
- if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_stack_operation_notify(btcoexist,
- stack_op_type);
-
halbtc_nomal_low_power();
}
@@ -878,7 +864,7 @@
return;
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_halt_notify(btcoexist);
+ ex_btc8723b2ant_halt_notify(btcoexist);
}
void exhalbtc_pnp_notify(struct btc_coexist *btcoexist, u8 pnp_state)
@@ -898,7 +884,7 @@
halbtc_leave_low_power();
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_periodical(btcoexist);
+ ex_btc8723b2ant_periodical(btcoexist);
halbtc_nomal_low_power();
}
@@ -997,5 +983,5 @@
return;
if (rtlhal->hw_type == HARDWARE_TYPE_RTL8723BE)
- ex_halbtc8723b2ant_display_coex_info(btcoexist);
+ ex_btc8723b2ant_display_coex_info(btcoexist);
}
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.h b/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.h
index 049f4c8..1345545 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.h
+++ b/drivers/net/wireless/rtlwifi/btcoexist/halbtcoutsrc.h
@@ -55,9 +55,16 @@
#define BTC_RATE_DISABLE 0
#define BTC_RATE_ENABLE 1
+/* single Antenna definition */
#define BTC_ANT_PATH_WIFI 0
#define BTC_ANT_PATH_BT 1
#define BTC_ANT_PATH_PTA 2
+/* dual Antenna definition */
+#define BTC_ANT_WIFI_AT_MAIN 0
+#define BTC_ANT_WIFI_AT_AUX 1
+/* coupler Antenna definition */
+#define BTC_ANT_WIFI_AT_CPL_MAIN 0
+#define BTC_ANT_WIFI_AT_CPL_AUX 1
enum btc_chip_interface {
BTC_INTF_UNKNOWN = 0,
@@ -68,7 +75,7 @@
BTC_INTF_MAX
};
-enum BTC_CHIP_TYPE {
+enum btc_chip_type {
BTC_CHIP_UNDEF = 0,
BTC_CHIP_CSR_BC4 = 1,
BTC_CHIP_CSR_BC8 = 2,
@@ -78,11 +85,12 @@
BTC_CHIP_MAX
};
-enum BTC_MSG_TYPE {
+enum btc_msg_type {
BTC_MSG_INTERFACE = 0x0,
BTC_MSG_ALGORITHM = 0x1,
BTC_MSG_MAX
};
+
extern u32 btc_dbg_type[];
/* following is for BTC_MSG_INTERFACE */
@@ -101,20 +109,12 @@
#define ALGO_TRACE_SW_DETAIL BIT8
#define ALGO_TRACE_SW_EXEC BIT9
-#define BT_COEX_ANT_TYPE_PG 0
-#define BT_COEX_ANT_TYPE_ANTDIV 1
-#define BT_COEX_ANT_TYPE_DETECTED 2
-#define BTC_MIMO_PS_STATIC 0
-#define BTC_MIMO_PS_DYNAMIC 1
-#define BTC_RATE_DISABLE 0
-#define BTC_RATE_ENABLE 1
-#define BTC_ANT_PATH_WIFI 0
-#define BTC_ANT_PATH_BT 1
-#define BTC_ANT_PATH_PTA 2
-
-
-#define CL_SPRINTF snprintf
-#define CL_PRINTF(buf) printk("%s", buf)
+/* following is for wifi link status */
+#define WIFI_STA_CONNECTED BIT0
+#define WIFI_AP_CONNECTED BIT1
+#define WIFI_HS_CONNECTED BIT2
+#define WIFI_P2P_GO_CONNECTED BIT3
+#define WIFI_P2P_GC_CONNECTED BIT4
#define BTC_PRINT(dbgtype, dbgflag, printstr, ...) \
do { \
@@ -123,46 +123,15 @@
} \
} while (0)
-#define BTC_PRINT_F(dbgtype, dbgflag, printstr, ...) \
- do { \
- if (unlikely(btc_dbg_type[dbgtype] & dbgflag)) {\
- pr_info("%s: ", __func__); \
- printk(printstr, ##__VA_ARGS__); \
- } \
- } while (0)
-
-#define BTC_PRINT_ADDR(dbgtype, dbgflag, printstr, _ptr) \
- do { \
- if (unlikely(btc_dbg_type[dbgtype] & dbgflag)) { \
- int __i; \
- u8 *__ptr = (u8 *)_ptr; \
- printk printstr; \
- for (__i = 0; __i < 6; __i++) \
- printk("%02X%s", __ptr[__i], (__i == 5) ? \
- "" : "-"); \
- pr_info("\n"); \
- } \
- } while (0)
-
-#define BTC_PRINT_DATA(dbgtype, dbgflag, _titlestring, _hexdata, _hexdatalen) \
- do { \
- if (unlikely(btc_dbg_type[dbgtype] & dbgflag)) { \
- int __i; \
- u8 *__ptr = (u8 *)_hexdata; \
- printk(_titlestring); \
- for (__i = 0; __i < (int)_hexdatalen; __i++) { \
- printk("%02X%s", __ptr[__i], (((__i + 1) % 4) \
- == 0) ? " " : " ");\
- if (((__i + 1) % 16) == 0) \
- printk("\n"); \
- } \
- pr_debug("\n"); \
- } \
- } while (0)
-
-#define BTC_ANT_PATH_WIFI 0
-#define BTC_ANT_PATH_BT 1
-#define BTC_ANT_PATH_PTA 2
+#define BTC_RSSI_HIGH(_rssi_) \
+ (((_rssi_) == BTC_RSSI_STATE_HIGH || \
+ (_rssi_) == BTC_RSSI_STATE_STAY_HIGH) ? true : false)
+#define BTC_RSSI_MEDIUM(_rssi_) \
+ (((_rssi_) == BTC_RSSI_STATE_MEDIUM || \
+ (_rssi_) == BTC_RSSI_STATE_STAY_MEDIUM) ? true : false)
+#define BTC_RSSI_LOW(_rssi_) \
+ (((_rssi_) == BTC_RSSI_STATE_LOW || \
+ (_rssi_) == BTC_RSSI_STATE_STAY_LOW) ? true : false)
enum btc_power_save_type {
BTC_PS_WIFI_NATIVE = 0,
@@ -224,7 +193,6 @@
BTC_WIFI_PNP_MAX
};
-
enum btc_get_type {
/* type bool */
BTC_GET_BL_HS_OPERATION,
@@ -253,6 +221,7 @@
BTC_GET_U4_WIFI_BW,
BTC_GET_U4_WIFI_TRAFFIC_DIRECTION,
BTC_GET_U4_WIFI_FW_VER,
+ BTC_GET_U4_WIFI_LINK_STATUS,
BTC_GET_U4_BT_PATCH_VER,
/* type u1Byte */
@@ -260,6 +229,7 @@
BTC_GET_U1_WIFI_CENTRAL_CHNL,
BTC_GET_U1_WIFI_HS_CHNL,
BTC_GET_U1_MAC_PHY_MODE,
+ BTC_GET_U1_AP_NUM,
/* for 1Ant */
BTC_GET_U1_LPS_MODE,
@@ -270,7 +240,6 @@
BTC_GET_MAX
};
-
enum btc_set_type {
/* type bool */
BTC_SET_BL_BT_DISABLE,
@@ -283,7 +252,6 @@
/* type u1Byte */
BTC_SET_U1_RSSI_ADJ_VAL_FOR_AGC_TABLE_ON,
- BTC_SET_U1_RSSI_ADJ_VAL_FOR_1ANT_COEX_TYPE,
BTC_SET_UI_SCAN_SIG_COMPENSATION,
BTC_SET_U1_AGG_BUF_SIZE,
@@ -295,6 +263,9 @@
/* type bool */
BTC_SET_BL_BT_SCO_BUSY,
/* type u1Byte */
+ BTC_SET_U1_RSSI_ADJ_VAL_FOR_1ANT_COEX_TYPE,
+ BTC_SET_U1_LPS_VAL,
+ BTC_SET_U1_RPWM_VAL,
BTC_SET_U1_1ANT_LPS,
BTC_SET_U1_1ANT_RPWM,
/* type trigger some action */
@@ -358,6 +329,20 @@
BTC_PACKET_MAX
};
+enum hci_ext_bt_operation {
+ HCI_BT_OP_NONE = 0x0,
+ HCI_BT_OP_INQUIRY_START = 0x1,
+ HCI_BT_OP_INQUIRY_FINISH = 0x2,
+ HCI_BT_OP_PAGING_START = 0x3,
+ HCI_BT_OP_PAGING_SUCCESS = 0x4,
+ HCI_BT_OP_PAGING_UNSUCCESS = 0x5,
+ HCI_BT_OP_PAIRING_START = 0x6,
+ HCI_BT_OP_PAIRING_FINISH = 0x7,
+ HCI_BT_OP_BT_DEV_ENABLE = 0x8,
+ HCI_BT_OP_BT_DEV_DISABLE = 0x9,
+ HCI_BT_OP_MAX
+};
+
enum btc_notify_type_stack_operation {
BTC_STACK_OP_NONE = 0x0,
BTC_STACK_OP_INQ_PAGE_PAIR_START = 0x1,
@@ -365,17 +350,16 @@
BTC_STACK_OP_MAX
};
-
typedef u8 (*bfp_btc_r1)(void *btc_context, u32 reg_addr);
typedef u16 (*bfp_btc_r2)(void *btc_context, u32 reg_addr);
typedef u32 (*bfp_btc_r4)(void *btc_context, u32 reg_addr);
-typedef void (*bfp_btc_w1)(void *btc_context, u32 reg_addr, u8 data);
+typedef void (*bfp_btc_w1)(void *btc_context, u32 reg_addr, u32 data);
typedef void (*bfp_btc_w1_bit_mak)(void *btc_context, u32 reg_addr,
- u32 bit_mask, u8 data1b);
+ u8 bit_mask, u8 data1b);
typedef void (*bfp_btc_w2)(void *btc_context, u32 reg_addr, u16 data);
@@ -413,20 +397,22 @@
u8 agg_buf_size;
bool limited_dig;
bool reject_agg_pkt;
- bool b_bt_ctrl_buf_size;
+ bool bt_ctrl_buf_size;
bool increase_scan_dev_num;
u16 bt_hci_ver;
u16 bt_real_fw_ver;
u8 bt_fw_ver;
+ bool bt_disable_low_pwr;
+
/* the following is for 1Ant solution */
bool bt_ctrl_lps;
bool bt_pwr_save_mode;
bool bt_lps_on;
bool force_to_roam;
u8 force_exec_pwr_cmd_cnt;
- u8 lps_1ant;
- u8 rpwm_1ant;
+ u8 lps_val;
+ u8 rpwm_val;
u32 ra_mask;
};
@@ -457,6 +443,7 @@
u32 cnt_special_packet_notify;
u32 cnt_bt_info_notify;
u32 cnt_periodical;
+ u32 cnt_coex_dm_switch;
u32 cnt_stack_operation_notify;
u32 cnt_dbg_ctrl;
};
@@ -493,7 +480,6 @@
bool initilized;
bool stop_coex_dm;
bool manual_control;
- u8 *cli_buf;
struct btc_statistics statistics;
u8 pwr_mode_val[10];
@@ -509,7 +495,6 @@
bfp_btc_set_bb_reg btc_set_bb_reg;
bfp_btc_get_bb_reg btc_get_bb_reg;
-
bfp_btc_set_rf_reg btc_set_rf_reg;
bfp_btc_get_rf_reg btc_get_rf_reg;
@@ -533,13 +518,14 @@
void exhalbtc_scan_notify(struct btc_coexist *btcoexist, u8 type);
void exhalbtc_connect_notify(struct btc_coexist *btcoexist, u8 action);
void exhalbtc_mediastatus_notify(struct btc_coexist *btcoexist,
- enum _RT_MEDIA_STATUS media_status);
+ enum rt_media_status media_status);
void exhalbtc_special_packet_notify(struct btc_coexist *btcoexist, u8 pkt_type);
void exhalbtc_bt_info_notify(struct btc_coexist *btcoexist, u8 *tmp_buf,
u8 length);
void exhalbtc_stack_operation_notify(struct btc_coexist *btcoexist, u8 type);
void exhalbtc_halt_notify(struct btc_coexist *btcoexist);
void exhalbtc_pnp_notify(struct btc_coexist *btcoexist, u8 pnp_state);
+void exhalbtc_coex_dm_switch(struct btc_coexist *btcoexist);
void exhalbtc_periodical(struct btc_coexist *btcoexist);
void exhalbtc_dbg_control(struct btc_coexist *btcoexist, u8 code, u8 len,
u8 *data);
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.c b/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.c
index 0ab94fe..b9b0cb7 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.c
+++ b/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.c
@@ -22,19 +22,19 @@
* Larry Finger <Larry.Finger@lwfinger.net>
*
*****************************************************************************/
-
#include "../wifi.h"
-#include "rtl_btc.h"
-#include "halbt_precomp.h"
-
#include <linux/vmalloc.h>
#include <linux/module.h>
+#include "rtl_btc.h"
+#include "halbt_precomp.h"
+
static struct rtl_btc_ops rtl_btc_operation = {
.btc_init_variables = rtl_btc_init_variables,
.btc_init_hal_vars = rtl_btc_init_hal_vars,
.btc_init_hw_config = rtl_btc_init_hw_config,
.btc_ips_notify = rtl_btc_ips_notify,
+ .btc_lps_notify = rtl_btc_lps_notify,
.btc_scan_notify = rtl_btc_scan_notify,
.btc_connect_notify = rtl_btc_connect_notify,
.btc_mediastatus_notify = rtl_btc_mediastatus_notify,
@@ -44,6 +44,7 @@
.btc_is_limited_dig = rtl_btc_is_limited_dig,
.btc_is_disable_edca_turbo = rtl_btc_is_disable_edca_turbo,
.btc_is_bt_disabled = rtl_btc_is_bt_disabled,
+ .btc_special_packet_notify = rtl_btc_special_packet_notify,
};
void rtl_btc_init_variables(struct rtl_priv *rtlpriv)
@@ -85,6 +86,11 @@
exhalbtc_ips_notify(&gl_bt_coexist, type);
}
+void rtl_btc_lps_notify(struct rtl_priv *rtlpriv, u8 type)
+{
+ exhalbtc_lps_notify(&gl_bt_coexist, type);
+}
+
void rtl_btc_scan_notify(struct rtl_priv *rtlpriv, u8 scantype)
{
exhalbtc_scan_notify(&gl_bt_coexist, scantype);
@@ -96,13 +102,14 @@
}
void rtl_btc_mediastatus_notify(struct rtl_priv *rtlpriv,
- enum _RT_MEDIA_STATUS mstatus)
+ enum rt_media_status mstatus)
{
exhalbtc_mediastatus_notify(&gl_bt_coexist, mstatus);
}
void rtl_btc_periodical(struct rtl_priv *rtlpriv)
{
+ /*rtl_bt_dm_monitor();*/
exhalbtc_periodical(&gl_bt_coexist);
}
@@ -150,12 +157,18 @@
bool rtl_btc_is_bt_disabled(struct rtl_priv *rtlpriv)
{
+	/* It seems 'bt_disabled' is never initialized or set. */
if (gl_bt_coexist.bt_info.bt_disabled)
return true;
else
return false;
}
+void rtl_btc_special_packet_notify(struct rtl_priv *rtlpriv, u8 pkt_type)
+{
+ return exhalbtc_special_packet_notify(&gl_bt_coexist, pkt_type);
+}
+
struct rtl_btc_ops *rtl_btc_get_ops_pointer(void)
{
return &rtl_btc_operation;
@@ -174,11 +187,11 @@
return num;
}
-enum _RT_MEDIA_STATUS mgnt_link_status_query(struct ieee80211_hw *hw)
+enum rt_media_status mgnt_link_status_query(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
- enum _RT_MEDIA_STATUS m_status = RT_MEDIA_DISCONNECT;
+ enum rt_media_status m_status = RT_MEDIA_DISCONNECT;
u8 bibss = (mac->opmode == NL80211_IFTYPE_ADHOC) ? 1 : 0;
diff --git a/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.h b/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.h
index 805b22c..ccd5a0f 100644
--- a/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.h
+++ b/drivers/net/wireless/rtlwifi/btcoexist/rtl_btc.h
@@ -31,22 +31,24 @@
void rtl_btc_init_hal_vars(struct rtl_priv *rtlpriv);
void rtl_btc_init_hw_config(struct rtl_priv *rtlpriv);
void rtl_btc_ips_notify(struct rtl_priv *rtlpriv, u8 type);
+void rtl_btc_lps_notify(struct rtl_priv *rtlpriv, u8 type);
void rtl_btc_scan_notify(struct rtl_priv *rtlpriv, u8 scantype);
void rtl_btc_connect_notify(struct rtl_priv *rtlpriv, u8 action);
void rtl_btc_mediastatus_notify(struct rtl_priv *rtlpriv,
- enum _RT_MEDIA_STATUS mstatus);
+ enum rt_media_status mstatus);
void rtl_btc_periodical(struct rtl_priv *rtlpriv);
void rtl_btc_halt_notify(void);
void rtl_btc_btinfo_notify(struct rtl_priv *rtlpriv, u8 *tmpbuf, u8 length);
bool rtl_btc_is_limited_dig(struct rtl_priv *rtlpriv);
bool rtl_btc_is_disable_edca_turbo(struct rtl_priv *rtlpriv);
bool rtl_btc_is_bt_disabled(struct rtl_priv *rtlpriv);
+void rtl_btc_special_packet_notify(struct rtl_priv *rtlpriv, u8 pkt_type);
struct rtl_btc_ops *rtl_btc_get_ops_pointer(void);
u8 rtl_get_hwpg_ant_num(struct rtl_priv *rtlpriv);
u8 rtl_get_hwpg_bt_exist(struct rtl_priv *rtlpriv);
u8 rtl_get_hwpg_bt_type(struct rtl_priv *rtlpriv);
-enum _RT_MEDIA_STATUS mgnt_link_status_query(struct ieee80211_hw *hw);
+enum rt_media_status mgnt_link_status_query(struct ieee80211_hw *hw);
#endif
diff --git a/drivers/net/wireless/rtlwifi/pci.c b/drivers/net/wireless/rtlwifi/pci.c
index 67d1ee6..74a8ba4 100644
--- a/drivers/net/wireless/rtlwifi/pci.c
+++ b/drivers/net/wireless/rtlwifi/pci.c
@@ -646,7 +646,7 @@
== 2) {
RT_TRACE(rtlpriv, COMP_ERR, DBG_LOUD,
- "more desc left, wake skb_queue@%d, ring->idx = %d, skb_queue_len = 0x%d\n",
+ "more desc left, wake skb_queue@%d, ring->idx = %d, skb_queue_len = 0x%x\n",
prio, ring->idx,
skb_queue_len(&ring->queue));
@@ -1469,7 +1469,7 @@
if ((own == 1) && (hw_queue != BEACON_QUEUE)) {
RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
- "No more TX desc@%d, ring->idx = %d, idx = %d, skb_queue_len = 0x%d\n",
+ "No more TX desc@%d, ring->idx = %d, idx = %d, skb_queue_len = 0x%x\n",
hw_queue, ring->idx, idx,
skb_queue_len(&ring->queue));
@@ -1511,7 +1511,7 @@
if ((ring->entries - skb_queue_len(&ring->queue)) < 2 &&
hw_queue != BEACON_QUEUE) {
RT_TRACE(rtlpriv, COMP_ERR, DBG_LOUD,
- "less desc left, stop skb_queue@%d, ring->idx = %d, idx = %d, skb_queue_len = 0x%d\n",
+ "less desc left, stop skb_queue@%d, ring->idx = %d, idx = %d, skb_queue_len = 0x%x\n",
hw_queue, ring->idx, idx,
skb_queue_len(&ring->queue));
diff --git a/drivers/net/wireless/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/rtlwifi/rtl8192de/phy.c
index 592125a..1961b8e 100644
--- a/drivers/net/wireless/rtlwifi/rtl8192de/phy.c
+++ b/drivers/net/wireless/rtlwifi/rtl8192de/phy.c
@@ -677,7 +677,7 @@
rtlphy->mcs_offset[rtlphy->pwrgroup_cnt][index] = data;
RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
- "MCSTxPowerLevelOriginalOffset[%d][%d] = 0x%ulx\n",
+ "MCSTxPowerLevelOriginalOffset[%d][%d] = 0x%x\n",
rtlphy->pwrgroup_cnt, index,
rtlphy->mcs_offset[rtlphy->pwrgroup_cnt][index]);
if (index == 13)
@@ -2531,7 +2531,7 @@
if (rtlpriv->rtlhal.current_bandtype == BAND_ON_5G) {/* Path-A for 5G */
u4tmp = curveindex_5g[channel-1];
RTPRINT(rtlpriv, FINIT, INIT_IQK,
- "ver 1 set RF-A, 5G, 0x28 = 0x%ulx !!\n", u4tmp);
+ "ver 1 set RF-A, 5G, 0x28 = 0x%x !!\n", u4tmp);
if (rtlpriv->rtlhal.macphymode == DUALMAC_DUALPHY &&
rtlpriv->rtlhal.interfaceindex == 1) {
bneed_powerdown_radio =
@@ -2550,7 +2550,7 @@
} else if (rtlpriv->rtlhal.current_bandtype == BAND_ON_2_4G) {
u4tmp = curveindex_2g[channel-1];
RTPRINT(rtlpriv, FINIT, INIT_IQK,
- "ver 3 set RF-B, 2G, 0x28 = 0x%ulx !!\n", u4tmp);
+ "ver 3 set RF-B, 2G, 0x28 = 0x%x !!\n", u4tmp);
if (rtlpriv->rtlhal.macphymode == DUALMAC_DUALPHY &&
rtlpriv->rtlhal.interfaceindex == 0) {
bneed_powerdown_radio =
@@ -2562,7 +2562,7 @@
}
rtl_set_rfreg(hw, erfpath, RF_SYN_G4, 0x3f800, u4tmp);
RTPRINT(rtlpriv, FINIT, INIT_IQK,
- "ver 3 set RF-B, 2G, 0x28 = 0x%ulx !!\n",
+ "ver 3 set RF-B, 2G, 0x28 = 0x%x !!\n",
rtl_get_rfreg(hw, erfpath, RF_SYN_G4, 0x3f800));
if (bneed_powerdown_radio)
_rtl92d_phy_restore_rf_env(hw, erfpath, &u4regvalue);
diff --git a/drivers/net/wireless/rtlwifi/rtl8723ae/hal_btc.c b/drivers/net/wireless/rtlwifi/rtl8723ae/hal_btc.c
index 5d534df..f76c50f 100644
--- a/drivers/net/wireless/rtlwifi/rtl8723ae/hal_btc.c
+++ b/drivers/net/wireless/rtlwifi/rtl8723ae/hal_btc.c
@@ -56,11 +56,11 @@
}
}
-static enum _RT_MEDIA_STATUS mgnt_link_status_query(struct ieee80211_hw *hw)
+static enum rt_media_status mgnt_link_status_query(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
- enum _RT_MEDIA_STATUS m_status = RT_MEDIA_DISCONNECT;
+ enum rt_media_status m_status = RT_MEDIA_DISCONNECT;
u8 bibss = (mac->opmode == NL80211_IFTYPE_ADHOC) ? 1 : 0;
diff --git a/drivers/net/wireless/rtlwifi/wifi.h b/drivers/net/wireless/rtlwifi/wifi.h
index 407a793..541b077 100644
--- a/drivers/net/wireless/rtlwifi/wifi.h
+++ b/drivers/net/wireless/rtlwifi/wifi.h
@@ -170,6 +170,11 @@
RF_TX_NUM_NONIMPLEMENT,
};
+#define PACKET_NORMAL 0
+#define PACKET_DHCP 1
+#define PACKET_ARP 2
+#define PACKET_EAPOL 3
+
struct txpower_info_2g {
u8 index_cck_base[MAX_RF_PATH][MAX_CHNL_GROUP_24G];
u8 index_bw40_base[MAX_RF_PATH][MAX_CHNL_GROUP_24G];
@@ -234,8 +239,9 @@
HARDWARE_TYPE_RTL8192DU,
HARDWARE_TYPE_RTL8723AE,
HARDWARE_TYPE_RTL8723U,
- HARDWARE_TYPE_RTL8723BE,
HARDWARE_TYPE_RTL8188EE,
+ HARDWARE_TYPE_RTL8723BE,
+ HARDWARE_TYPE_RTL8192EE,
HARDWARE_TYPE_RTL8821AE,
HARDWARE_TYPE_RTL8812AE,
@@ -428,7 +434,7 @@
HW_VAR_DATA_FILTER,
};
-enum _RT_MEDIA_STATUS {
+enum rt_media_status {
RT_MEDIA_DISCONNECT = 0,
RT_MEDIA_CONNECT = 1
};
@@ -2312,10 +2318,11 @@
void (*btc_init_hal_vars) (struct rtl_priv *rtlpriv);
void (*btc_init_hw_config) (struct rtl_priv *rtlpriv);
void (*btc_ips_notify) (struct rtl_priv *rtlpriv, u8 type);
+ void (*btc_lps_notify)(struct rtl_priv *rtlpriv, u8 type);
void (*btc_scan_notify) (struct rtl_priv *rtlpriv, u8 scantype);
void (*btc_connect_notify) (struct rtl_priv *rtlpriv, u8 action);
void (*btc_mediastatus_notify) (struct rtl_priv *rtlpriv,
- enum _RT_MEDIA_STATUS mstatus);
+ enum rt_media_status mstatus);
void (*btc_periodical) (struct rtl_priv *rtlpriv);
void (*btc_halt_notify) (void);
void (*btc_btinfo_notify) (struct rtl_priv *rtlpriv,
@@ -2323,6 +2330,8 @@
bool (*btc_is_limited_dig) (struct rtl_priv *rtlpriv);
bool (*btc_is_disable_edca_turbo) (struct rtl_priv *rtlpriv);
bool (*btc_is_bt_disabled) (struct rtl_priv *rtlpriv);
+ void (*btc_special_packet_notify)(struct rtl_priv *rtlpriv,
+ u8 pkt_type);
};
struct proxim {
diff --git a/drivers/nfc/microread/microread.c b/drivers/nfc/microread/microread.c
index f868333..963a4a5 100644
--- a/drivers/nfc/microread/microread.c
+++ b/drivers/nfc/microread/microread.c
@@ -501,9 +501,13 @@
targets->sens_res =
be16_to_cpu(*(u16 *)&skb->data[MICROREAD_EMCF_A_ATQA]);
targets->sel_res = skb->data[MICROREAD_EMCF_A_SAK];
- memcpy(targets->nfcid1, &skb->data[MICROREAD_EMCF_A_UID],
- skb->data[MICROREAD_EMCF_A_LEN]);
targets->nfcid1_len = skb->data[MICROREAD_EMCF_A_LEN];
+ if (targets->nfcid1_len > sizeof(targets->nfcid1)) {
+ r = -EINVAL;
+ goto exit_free;
+ }
+ memcpy(targets->nfcid1, &skb->data[MICROREAD_EMCF_A_UID],
+ targets->nfcid1_len);
break;
case MICROREAD_GATE_ID_MREAD_ISO_A_3:
targets->supported_protocols =
@@ -511,9 +515,13 @@
targets->sens_res =
be16_to_cpu(*(u16 *)&skb->data[MICROREAD_EMCF_A3_ATQA]);
targets->sel_res = skb->data[MICROREAD_EMCF_A3_SAK];
- memcpy(targets->nfcid1, &skb->data[MICROREAD_EMCF_A3_UID],
- skb->data[MICROREAD_EMCF_A3_LEN]);
targets->nfcid1_len = skb->data[MICROREAD_EMCF_A3_LEN];
+ if (targets->nfcid1_len > sizeof(targets->nfcid1)) {
+ r = -EINVAL;
+ goto exit_free;
+ }
+ memcpy(targets->nfcid1, &skb->data[MICROREAD_EMCF_A3_UID],
+ targets->nfcid1_len);
break;
case MICROREAD_GATE_ID_MREAD_ISO_B:
targets->supported_protocols = NFC_PROTO_ISO14443_B_MASK;
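
Both microread hunks above apply the same fix: validate the device-reported
length against the destination buffer before the memcpy(). A minimal userspace
sketch of the pattern; the names and the buffer size are illustrative, not the
driver's actual definitions:

#include <string.h>

struct target {
	unsigned char nfcid1[10];	/* illustrative size */
	unsigned char nfcid1_len;
};

/* 'len' comes from the device and must never exceed the destination
 * buffer, so oversized values are rejected before the copy. */
static int copy_nfcid(struct target *t, const unsigned char *src,
		      unsigned char len)
{
	if (len > sizeof(t->nfcid1))
		return -1;	/* -EINVAL in the kernel code */
	t->nfcid1_len = len;
	memcpy(t->nfcid1, src, len);
	return 0;
}
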
diff --git a/drivers/nfc/st21nfca/Makefile b/drivers/nfc/st21nfca/Makefile
index db7a38a..7d688f9 100644
--- a/drivers/nfc/st21nfca/Makefile
+++ b/drivers/nfc/st21nfca/Makefile
@@ -2,7 +2,8 @@
# Makefile for ST21NFCA HCI based NFC driver
#
-st21nfca_i2c-objs = i2c.o
+st21nfca_hci-objs = st21nfca.o st21nfca_dep.o
+obj-$(CONFIG_NFC_ST21NFCA) += st21nfca_hci.o
-obj-$(CONFIG_NFC_ST21NFCA) += st21nfca.o st21nfca_dep.o
+st21nfca_i2c-objs = i2c.o
obj-$(CONFIG_NFC_ST21NFCA_I2C) += st21nfca_i2c.o
diff --git a/drivers/nfc/st21nfcb/Makefile b/drivers/nfc/st21nfcb/Makefile
index 13d9f03..f4d835d 100644
--- a/drivers/nfc/st21nfcb/Makefile
+++ b/drivers/nfc/st21nfcb/Makefile
@@ -2,7 +2,8 @@
# Makefile for ST21NFCB NCI based NFC driver
#
-st21nfcb_i2c-objs = i2c.o
+st21nfcb_nci-objs = ndlc.o st21nfcb.o
+obj-$(CONFIG_NFC_ST21NFCB) += st21nfcb_nci.o
-obj-$(CONFIG_NFC_ST21NFCB) += st21nfcb.o ndlc.o
+st21nfcb_i2c-objs = i2c.o
obj-$(CONFIG_NFC_ST21NFCB_I2C) += st21nfcb_i2c.o
diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
index 9dd63b8..e9bf2f4 100644
--- a/drivers/ntb/ntb_transport.c
+++ b/drivers/ntb/ntb_transport.c
@@ -510,7 +510,7 @@
WARN_ON(nt->mw[mw_num].virt_addr == NULL);
- if (nt->max_qps % mw_max && mw_num < nt->max_qps % mw_max)
+ if (nt->max_qps % mw_max && mw_num + 1 < nt->max_qps / mw_max)
num_qps_mw = nt->max_qps / mw_max + 1;
else
num_qps_mw = nt->max_qps / mw_max;
@@ -576,6 +576,19 @@
return -ENOMEM;
}
+ /*
+	 * We must ensure that the allocated memory address is aligned to the
+	 * BAR size in order for the XLAT register to take the value. This is
+	 * a hardware requirement. It is recommended to set up CMA for BAR
+	 * sizes equal to or greater than 4MB.
+ */
+ if (!IS_ALIGNED(mw->dma_addr, mw->size)) {
+ dev_err(&pdev->dev, "DMA memory %pad not aligned to BAR size\n",
+ &mw->dma_addr);
+ ntb_free_mw(nt, num_mw);
+ return -ENOMEM;
+ }
+
/* Notify HW the memory location of the receive buffer */
ntb_set_mw_addr(nt->ndev, num_mw, mw->dma_addr);
@@ -856,7 +869,7 @@
qp->client_ready = NTB_LINK_DOWN;
qp->event_handler = NULL;
- if (nt->max_qps % mw_max && mw_num < nt->max_qps % mw_max)
+ if (nt->max_qps % mw_max && mw_num + 1 < nt->max_qps / mw_max)
num_qps_mw = nt->max_qps / mw_max + 1;
else
num_qps_mw = nt->max_qps / mw_max;
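
The alignment check added above enforces a hardware requirement: the XLAT
register only accepts addresses aligned to the BAR size. For power-of-two
sizes, which BAR sizes are, IS_ALIGNED() reduces to a mask test. A small
userspace illustration; the macro below mirrors the kernel's definition:

#include <stdio.h>

/* Same test the kernel's IS_ALIGNED() performs; only valid when
 * 'a' is a power of two. */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned long bar_size = 4UL << 20;	/* 4MB BAR */

	/* 0x10000000 is 4MB-aligned, 0x10001000 is not. */
	printf("%d\n", IS_ALIGNED(0x10000000UL, bar_size));	/* prints 1 */
	printf("%d\n", IS_ALIGNED(0x10001000UL, bar_size));	/* prints 0 */
	return 0;
}
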
diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c
index 401b245..a85d800 100644
--- a/drivers/of/of_mdio.c
+++ b/drivers/of/of_mdio.c
@@ -224,6 +224,8 @@
if (!phy)
return NULL;
+ phy->dev_flags = flags;
+
return phy_connect_direct(dev, phy, hndlr, iface) ? NULL : phy;
}
EXPORT_SYMBOL(of_phy_connect);
diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
index 9eae983..a0580af 100644
--- a/drivers/parisc/dino.c
+++ b/drivers/parisc/dino.c
@@ -913,7 +913,7 @@
printk("%s version %s found at 0x%lx\n", name, version, hpa);
if (!request_mem_region(hpa, PAGE_SIZE, name)) {
- printk(KERN_ERR "DINO: Hey! Someone took my MMIO space (0x%ld)!\n",
+ printk(KERN_ERR "DINO: Hey! Someone took my MMIO space (0x%lx)!\n",
hpa);
return 1;
}
diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c
index 0f54ab6..3651c38 100644
--- a/drivers/parisc/pdc_stable.c
+++ b/drivers/parisc/pdc_stable.c
@@ -278,7 +278,7 @@
{
struct hardware_path hwpath;
unsigned short i;
- char in[count+1], *temp;
+ char in[64], *temp;
struct device *dev;
int ret;
@@ -286,8 +286,9 @@
return -EINVAL;
/* We'll use a local copy of buf */
- memset(in, 0, count+1);
+ count = min_t(size_t, count, sizeof(in)-1);
strncpy(in, buf, count);
+ in[count] = '\0';
/* Let's clean up the target. 0xff is a blank pattern */
memset(&hwpath, 0xff, sizeof(hwpath));
@@ -393,14 +394,15 @@
{
unsigned int layers[6]; /* device-specific info (ctlr#, unit#, ...) */
unsigned short i;
- char in[count+1], *temp;
+ char in[64], *temp;
if (!entry || !buf || !count)
return -EINVAL;
/* We'll use a local copy of buf */
- memset(in, 0, count+1);
+ count = min_t(size_t, count, sizeof(in)-1);
strncpy(in, buf, count);
+ in[count] = '\0';
/* Let's clean up the target. 0 is a blank pattern */
memset(&layers, 0, sizeof(layers));
@@ -755,7 +757,7 @@
{
struct pdcspath_entry *pathentry;
unsigned char flags;
- char in[count+1], *temp;
+ char in[8], *temp;
char c;
if (!capable(CAP_SYS_ADMIN))
@@ -765,8 +767,9 @@
return -EINVAL;
/* We'll use a local copy of buf */
- memset(in, 0, count+1);
+ count = min_t(size_t, count, sizeof(in)-1);
strncpy(in, buf, count);
+ in[count] = '\0';
/* Current flags are stored in primary boot path entry */
pathentry = &pdcspath_entry_primary;
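
All three pdc_stable hunks above replace a stack VLA sized by the
user-controlled 'count' with a fixed buffer plus a clamped, NUL-terminated
copy. A hedged userspace sketch of the resulting pattern; the buffer size is
illustrative:

#include <string.h>

#define IN_MAX	64	/* illustrative fixed size */

/* Before: 'char in[count+1]' put a caller-sized VLA on the kernel
 * stack. After: clamp the copy to a fixed buffer and terminate it. */
static void store_input(const char *buf, size_t count)
{
	char in[IN_MAX];

	count = count < sizeof(in) - 1 ? count : sizeof(in) - 1;
	strncpy(in, buf, count);
	in[count] = '\0';
	/* ... parse 'in' ... */
}
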
diff --git a/drivers/pci/host/pci-imx6.c b/drivers/pci/host/pci-imx6.c
index a568efa..35fc73a 100644
--- a/drivers/pci/host/pci-imx6.c
+++ b/drivers/pci/host/pci-imx6.c
@@ -49,6 +49,9 @@
/* PCIe Port Logic registers (memory-mapped) */
#define PL_OFFSET 0x700
+#define PCIE_PL_PFLR (PL_OFFSET + 0x08)
+#define PCIE_PL_PFLR_LINK_STATE_MASK (0x3f << 16)
+#define PCIE_PL_PFLR_FORCE_LINK (1 << 15)
#define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28)
#define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c)
#define PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING (1 << 29)
@@ -214,6 +217,32 @@
static int imx6_pcie_assert_core_reset(struct pcie_port *pp)
{
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
+ u32 val, gpr1, gpr12;
+
+ /*
+ * If the bootloader already enabled the link we need some special
+ * handling to get the core back into a state where it is safe to
+ * touch it for configuration. As there is no dedicated reset signal
+ * wired up for MX6QDL, we need to manually force LTSSM into "detect"
+ * state before completely disabling LTSSM, which is a prerequisite
+ * for core configuration.
+ *
+ * If both LTSSM_ENABLE and REF_SSP_ENABLE are active we have a strong
+ * indication that the bootloader activated the link.
+ */
+ regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, &gpr1);
+ regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, &gpr12);
+
+ if ((gpr1 & IMX6Q_GPR1_PCIE_REF_CLK_EN) &&
+ (gpr12 & IMX6Q_GPR12_PCIE_CTL_2)) {
+ val = readl(pp->dbi_base + PCIE_PL_PFLR);
+ val &= ~PCIE_PL_PFLR_LINK_STATE_MASK;
+ val |= PCIE_PL_PFLR_FORCE_LINK;
+ writel(val, pp->dbi_base + PCIE_PL_PFLR);
+
+ regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+ IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
+ }
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
@@ -589,6 +618,14 @@
return 0;
}
+static void imx6_pcie_shutdown(struct platform_device *pdev)
+{
+ struct imx6_pcie *imx6_pcie = platform_get_drvdata(pdev);
+
+	/* bring the link down so the bootloader gets a clean state on reboot */
+ imx6_pcie_assert_core_reset(&imx6_pcie->pp);
+}
+
static const struct of_device_id imx6_pcie_of_match[] = {
{ .compatible = "fsl,imx6q-pcie", },
{},
@@ -601,6 +638,7 @@
.owner = THIS_MODULE,
.of_match_table = imx6_pcie_of_match,
},
+ .shutdown = imx6_pcie_shutdown,
};
/* Freescale PCIe driver does not allow module unload */
diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
index 70741c8..6cd5160 100644
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -560,19 +560,15 @@
slot->flags &= (~SLOT_ENABLED);
}
-static bool acpiphp_no_hotplug(struct acpi_device *adev)
-{
- return adev && adev->flags.no_hotplug;
-}
-
static bool slot_no_hotplug(struct acpiphp_slot *slot)
{
- struct acpiphp_func *func;
+ struct pci_bus *bus = slot->bus;
+ struct pci_dev *dev;
- list_for_each_entry(func, &slot->funcs, sibling)
- if (acpiphp_no_hotplug(func_to_acpi_device(func)))
+ list_for_each_entry(dev, &bus->devices, bus_list) {
+ if (PCI_SLOT(dev->devfn) == slot->device && dev->ignore_hotplug)
return true;
-
+ }
return false;
}
@@ -645,7 +641,7 @@
status = acpi_evaluate_integer(adev->handle, "_STA", NULL, &sta);
alive = (ACPI_SUCCESS(status) && device_status_valid(sta))
- || acpiphp_no_hotplug(adev);
+ || dev->ignore_hotplug;
}
if (!alive)
alive = pci_device_is_present(dev);
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index 9da84b8..2a412fa 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -160,7 +160,7 @@
ctrl->slot_ctrl & PCI_EXP_SLTCTL_CCIE)
rc = wait_event_timeout(ctrl->queue, !ctrl->cmd_busy, timeout);
else
- rc = pcie_poll_cmd(ctrl, timeout);
+ rc = pcie_poll_cmd(ctrl, jiffies_to_msecs(timeout));
/*
* Controllers with errata like Intel CF118 don't generate
@@ -506,6 +506,8 @@
{
struct controller *ctrl = (struct controller *)dev_id;
struct pci_dev *pdev = ctrl_dev(ctrl);
+ struct pci_bus *subordinate = pdev->subordinate;
+ struct pci_dev *dev;
struct slot *slot = ctrl->slot;
u16 detected, intr_loc;
@@ -539,6 +541,16 @@
wake_up(&ctrl->queue);
}
+ if (subordinate) {
+ list_for_each_entry(dev, &subordinate->devices, bus_list) {
+ if (dev->ignore_hotplug) {
+ ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n",
+ intr_loc, pci_name(dev));
+ return IRQ_HANDLED;
+ }
+ }
+ }
+
if (!(intr_loc & ~PCI_EXP_SLTSTA_CC))
return IRQ_HANDLED;
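
The pcie_poll_cmd() change above is a units fix: the caller holds the timeout
in jiffies while the polling helper expects milliseconds. A self-contained
illustration of why passing raw jiffies misbehaves; the HZ value and the
simplified conversion are illustrative, the kernel's jiffies_to_msecs() is the
real helper used in the hunk:

#include <stdio.h>

#define HZ	250	/* illustrative; the kernel value is config-dependent */

static unsigned int jiffies_to_msecs(unsigned long j)
{
	return (unsigned int)(j * 1000 / HZ);	/* simplified */
}

int main(void)
{
	unsigned long timeout = 5 * HZ;	/* 5 seconds, in jiffies */

	/* Passing 'timeout' (1250 here) as milliseconds would poll for
	 * 1.25s instead of 5s; convert at the call boundary instead. */
	printf("%u ms\n", jiffies_to_msecs(timeout));	/* 5000 ms */
	return 0;
}
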
diff --git a/drivers/pci/hotplug/pcihp_slot.c b/drivers/pci/hotplug/pcihp_slot.c
index e246a10..3e36ec8 100644
--- a/drivers/pci/hotplug/pcihp_slot.c
+++ b/drivers/pci/hotplug/pcihp_slot.c
@@ -46,7 +46,6 @@
*/
if (pci_is_pcie(dev))
return;
- dev_info(&dev->dev, "using default PCI settings\n");
hpp = &pci_default_type0;
}
@@ -153,7 +152,6 @@
{
struct pci_dev *cdev;
struct hotplug_params hpp;
- int ret;
if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL ||
(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
@@ -163,9 +161,7 @@
pcie_bus_configure_settings(dev->bus);
memset(&hpp, 0, sizeof(hpp));
- ret = pci_get_hp_params(dev, &hpp);
- if (ret)
- dev_warn(&dev->dev, "no hotplug settings from platform\n");
+ pci_get_hp_params(dev, &hpp);
program_hpp_type2(dev, hpp.t2);
program_hpp_type1(dev, hpp.t1);
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index e3cf8a2..4170113 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -775,7 +775,7 @@
/* Check if setup is sensible at all */
if (!pass &&
(primary != bus->number || secondary <= bus->number ||
- secondary > subordinate || subordinate > bus->busn_res.end)) {
+ secondary > subordinate)) {
dev_info(&dev->dev, "bridge configuration invalid ([bus %02x-%02x]), reconfiguring\n",
secondary, subordinate);
broken = 1;
@@ -838,23 +838,18 @@
goto out;
}
- if (max >= bus->busn_res.end) {
- dev_warn(&dev->dev, "can't allocate child bus %02x from %pR\n",
- max, &bus->busn_res);
- goto out;
- }
-
/* Clear errors */
pci_write_config_word(dev, PCI_STATUS, 0xffff);
- /* The bus will already exist if we are rescanning */
+ /* Prevent assigning a bus number that already exists.
+ * This can happen when a bridge is hot-plugged, so in
+ * this case we only re-scan this bus. */
child = pci_find_bus(pci_domain_nr(bus), max+1);
if (!child) {
child = pci_add_new_bus(bus, dev, max+1);
if (!child)
goto out;
- pci_bus_insert_busn_res(child, max+1,
- bus->busn_res.end);
+ pci_bus_insert_busn_res(child, max+1, 0xff);
}
max++;
buses = (buses & 0xff000000)
@@ -913,11 +908,6 @@
/*
* Set the subordinate bus number to its real value.
*/
- if (max > bus->busn_res.end) {
- dev_warn(&dev->dev, "max busn %02x is outside %pR\n",
- max, &bus->busn_res);
- max = bus->busn_res.end;
- }
pci_bus_update_busn_res_end(child, max);
pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
}
diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig
index 0dd7427..f833aa2 100644
--- a/drivers/phy/Kconfig
+++ b/drivers/phy/Kconfig
@@ -41,9 +41,9 @@
config PHY_MIPHY365X
tristate "STMicroelectronics MIPHY365X PHY driver for STiH41x series"
depends on ARCH_STI
- depends on GENERIC_PHY
depends on HAS_IOMEM
depends on OF
+ select GENERIC_PHY
help
Enable this to support the miphy transceiver (for SATA/PCIE)
that is part of STMicroelectronics STiH41x SoC series.
@@ -214,12 +214,14 @@
config PHY_ST_SPEAR1310_MIPHY
tristate "ST SPEAR1310-MIPHY driver"
select GENERIC_PHY
+ depends on MACH_SPEAR1310 || COMPILE_TEST
help
Support for ST SPEAr1310 MIPHY which can be used for PCIe and SATA.
config PHY_ST_SPEAR1340_MIPHY
tristate "ST SPEAR1340-MIPHY driver"
select GENERIC_PHY
+ depends on MACH_SPEAR1340 || COMPILE_TEST
help
Support for ST SPEAr1340 MIPHY which can be used for PCIe and SATA.
diff --git a/drivers/phy/phy-exynos5-usbdrd.c b/drivers/phy/phy-exynos5-usbdrd.c
index b05302b..392101c 100644
--- a/drivers/phy/phy-exynos5-usbdrd.c
+++ b/drivers/phy/phy-exynos5-usbdrd.c
@@ -542,6 +542,7 @@
},
{ },
};
+MODULE_DEVICE_TABLE(of, exynos5_usbdrd_phy_of_match);
static int exynos5_usbdrd_phy_probe(struct platform_device *pdev)
{
diff --git a/drivers/phy/phy-miphy365x.c b/drivers/phy/phy-miphy365x.c
index e111baf..e0fb7a1 100644
--- a/drivers/phy/phy-miphy365x.c
+++ b/drivers/phy/phy-miphy365x.c
@@ -163,6 +163,7 @@
};
static u8 rx_tx_spd[] = {
+ 0, /* GEN0 doesn't exist. */
TX_SPDSEL_GEN1_VAL | RX_SPDSEL_GEN1_VAL,
TX_SPDSEL_GEN2_VAL | RX_SPDSEL_GEN2_VAL,
TX_SPDSEL_GEN3_VAL | RX_SPDSEL_GEN3_VAL
diff --git a/drivers/phy/phy-twl4030-usb.c b/drivers/phy/phy-twl4030-usb.c
index e1a6623..9cd33a4 100644
--- a/drivers/phy/phy-twl4030-usb.c
+++ b/drivers/phy/phy-twl4030-usb.c
@@ -34,6 +34,7 @@
#include <linux/delay.h>
#include <linux/usb/otg.h>
#include <linux/phy/phy.h>
+#include <linux/pm_runtime.h>
#include <linux/usb/musb-omap.h>
#include <linux/usb/ulpi.h>
#include <linux/i2c/twl.h>
@@ -422,37 +423,55 @@
}
}
-static int twl4030_phy_power_off(struct phy *phy)
+static int twl4030_usb_runtime_suspend(struct device *dev)
{
- struct twl4030_usb *twl = phy_get_drvdata(phy);
+ struct twl4030_usb *twl = dev_get_drvdata(dev);
+ dev_dbg(twl->dev, "%s\n", __func__);
if (twl->asleep)
return 0;
twl4030_phy_power(twl, 0);
twl->asleep = 1;
- dev_dbg(twl->dev, "%s\n", __func__);
+
return 0;
}
-static void __twl4030_phy_power_on(struct twl4030_usb *twl)
+static int twl4030_usb_runtime_resume(struct device *dev)
{
+ struct twl4030_usb *twl = dev_get_drvdata(dev);
+
+ dev_dbg(twl->dev, "%s\n", __func__);
+ if (!twl->asleep)
+ return 0;
+
twl4030_phy_power(twl, 1);
- twl4030_i2c_access(twl, 1);
- twl4030_usb_set_mode(twl, twl->usb_mode);
- if (twl->usb_mode == T2_USB_MODE_ULPI)
- twl4030_i2c_access(twl, 0);
+ twl->asleep = 0;
+
+ return 0;
+}
+
+static int twl4030_phy_power_off(struct phy *phy)
+{
+ struct twl4030_usb *twl = phy_get_drvdata(phy);
+
+ dev_dbg(twl->dev, "%s\n", __func__);
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+
+ return 0;
}
static int twl4030_phy_power_on(struct phy *phy)
{
struct twl4030_usb *twl = phy_get_drvdata(phy);
- if (!twl->asleep)
- return 0;
- __twl4030_phy_power_on(twl);
- twl->asleep = 0;
dev_dbg(twl->dev, "%s\n", __func__);
+ pm_runtime_get_sync(twl->dev);
+ twl4030_i2c_access(twl, 1);
+ twl4030_usb_set_mode(twl, twl->usb_mode);
+ if (twl->usb_mode == T2_USB_MODE_ULPI)
+ twl4030_i2c_access(twl, 0);
/*
* XXX When VBUS gets driven after musb goes to A mode,
@@ -558,32 +577,16 @@
* USB_LINK_VBUS state. musb_hdrc won't care until it
* starts to handle softconnect right.
*/
- omap_musb_mailbox(status);
- }
- sysfs_notify(&twl->dev->kobj, NULL, "vbus");
-
- return IRQ_HANDLED;
-}
-
-static void twl4030_id_workaround_work(struct work_struct *work)
-{
- struct twl4030_usb *twl = container_of(work, struct twl4030_usb,
- id_workaround_work.work);
- enum omap_musb_vbus_id_status status;
- bool status_changed = false;
-
- status = twl4030_usb_linkstat(twl);
-
- spin_lock_irq(&twl->lock);
- if (status >= 0 && status != twl->linkstat) {
- twl->linkstat = status;
- status_changed = true;
- }
- spin_unlock_irq(&twl->lock);
-
- if (status_changed) {
- dev_dbg(twl->dev, "handle missing status change to %d\n",
- status);
+ if ((status == OMAP_MUSB_VBUS_VALID) ||
+ (status == OMAP_MUSB_ID_GROUND)) {
+ if (twl->asleep)
+ pm_runtime_get_sync(twl->dev);
+ } else {
+ if (!twl->asleep) {
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+ }
+ }
omap_musb_mailbox(status);
}
@@ -592,6 +595,19 @@
cancel_delayed_work(&twl->id_workaround_work);
schedule_delayed_work(&twl->id_workaround_work, HZ);
}
+
+ if (irq)
+ sysfs_notify(&twl->dev->kobj, NULL, "vbus");
+
+ return IRQ_HANDLED;
+}
+
+static void twl4030_id_workaround_work(struct work_struct *work)
+{
+ struct twl4030_usb *twl = container_of(work, struct twl4030_usb,
+ id_workaround_work.work);
+
+ twl4030_usb_irq(0, twl);
}
static int twl4030_phy_init(struct phy *phy)
@@ -599,22 +615,17 @@
struct twl4030_usb *twl = phy_get_drvdata(phy);
enum omap_musb_vbus_id_status status;
- /*
- * Start in sleep state, we'll get called through set_suspend()
- * callback when musb is runtime resumed and it's time to start.
- */
- __twl4030_phy_power(twl, 0);
- twl->asleep = 1;
-
+ pm_runtime_get_sync(twl->dev);
status = twl4030_usb_linkstat(twl);
twl->linkstat = status;
- if (status == OMAP_MUSB_ID_GROUND || status == OMAP_MUSB_VBUS_VALID) {
+ if (status == OMAP_MUSB_ID_GROUND || status == OMAP_MUSB_VBUS_VALID)
omap_musb_mailbox(twl->linkstat);
- twl4030_phy_power_on(phy);
- }
sysfs_notify(&twl->dev->kobj, NULL, "vbus");
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+
return 0;
}
@@ -650,6 +661,11 @@
.owner = THIS_MODULE,
};
+static const struct dev_pm_ops twl4030_usb_pm_ops = {
+ SET_RUNTIME_PM_OPS(twl4030_usb_runtime_suspend,
+ twl4030_usb_runtime_resume, NULL)
+};
+
static int twl4030_usb_probe(struct platform_device *pdev)
{
struct twl4030_usb_data *pdata = dev_get_platdata(&pdev->dev);
@@ -726,6 +742,11 @@
ATOMIC_INIT_NOTIFIER_HEAD(&twl->phy.notifier);
+ pm_runtime_use_autosuspend(&pdev->dev);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
+
/* Our job is to use irqs and status from the power module
* to keep the transceiver disabled when nothing's connected.
*
@@ -744,6 +765,9 @@
return status;
}
+ pm_runtime_mark_last_busy(&pdev->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+
dev_info(&pdev->dev, "Initialized TWL4030 USB module\n");
return 0;
}
@@ -753,6 +777,7 @@
struct twl4030_usb *twl = platform_get_drvdata(pdev);
int val;
+ pm_runtime_get_sync(twl->dev);
cancel_delayed_work(&twl->id_workaround_work);
device_remove_file(twl->dev, &dev_attr_vbus);
@@ -772,9 +797,8 @@
/* disable complete OTG block */
twl4030_usb_clear_bits(twl, POWER_CTRL, POWER_CTRL_OTG_ENAB);
-
- if (!twl->asleep)
- twl4030_phy_power(twl, 0);
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put(twl->dev);
return 0;
}
@@ -792,6 +816,7 @@
.remove = twl4030_usb_remove,
.driver = {
.name = "twl4030_usb",
+ .pm = &twl4030_usb_pm_ops,
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(twl4030_usb_id_table),
},
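
The twl4030-usb diff above converts open-coded PHY power handling to runtime
PM with autosuspend. A condensed sketch of the idiom, using the same (real)
pm_runtime helpers the patch calls; the mydev_* names are illustrative:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int mydev_probe(struct platform_device *pdev)
{
	pm_runtime_use_autosuspend(&pdev->dev);
	pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);	/* 2s idle */
	pm_runtime_enable(&pdev->dev);
	pm_runtime_get_sync(&pdev->dev);	/* power up for init */

	/* ... hardware init ... */

	pm_runtime_mark_last_busy(&pdev->dev);
	pm_runtime_put_autosuspend(&pdev->dev);	/* may suspend after 2s */
	return 0;
}
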
diff --git a/drivers/pinctrl/pinctrl-baytrail.c b/drivers/pinctrl/pinctrl-baytrail.c
index 9ca59a0..e12e5b0 100644
--- a/drivers/pinctrl/pinctrl-baytrail.c
+++ b/drivers/pinctrl/pinctrl-baytrail.c
@@ -461,6 +461,7 @@
.irq_mask = byt_irq_mask,
.irq_unmask = byt_irq_unmask,
.irq_set_type = byt_irq_type,
+ .flags = IRQCHIP_SKIP_SET_WAKE,
};
static void byt_gpio_irq_init_hw(struct byt_gpio *vg)
diff --git a/drivers/regulator/88pm8607.c b/drivers/regulator/88pm8607.c
index 337634a..6d77dcd 100644
--- a/drivers/regulator/88pm8607.c
+++ b/drivers/regulator/88pm8607.c
@@ -319,7 +319,7 @@
struct regulator_config *config)
{
struct device_node *nproot, *np;
- nproot = of_node_get(pdev->dev.parent->of_node);
+ nproot = pdev->dev.parent->of_node;
if (!nproot)
return -ENODEV;
nproot = of_get_child_by_name(nproot, "regulators");
diff --git a/drivers/regulator/da9052-regulator.c b/drivers/regulator/da9052-regulator.c
index fdb6ea8..0003362 100644
--- a/drivers/regulator/da9052-regulator.c
+++ b/drivers/regulator/da9052-regulator.c
@@ -422,9 +422,9 @@
config.init_data = pdata->regulators[pdev->id];
} else {
#ifdef CONFIG_OF
- struct device_node *nproot, *np;
+ struct device_node *nproot = da9052->dev->of_node;
+ struct device_node *np;
- nproot = of_node_get(da9052->dev->of_node);
if (!nproot)
return -ENODEV;
diff --git a/drivers/regulator/max8907-regulator.c b/drivers/regulator/max8907-regulator.c
index 9623e9e..3426be8 100644
--- a/drivers/regulator/max8907-regulator.c
+++ b/drivers/regulator/max8907-regulator.c
@@ -226,7 +226,7 @@
struct device_node *np, *regulators;
int ret;
- np = of_node_get(pdev->dev.parent->of_node);
+ np = pdev->dev.parent->of_node;
if (!np)
return 0;
diff --git a/drivers/regulator/max8925-regulator.c b/drivers/regulator/max8925-regulator.c
index dad2bcd..7770777 100644
--- a/drivers/regulator/max8925-regulator.c
+++ b/drivers/regulator/max8925-regulator.c
@@ -250,7 +250,7 @@
struct device_node *nproot, *np;
int rcount;
- nproot = of_node_get(pdev->dev.parent->of_node);
+ nproot = pdev->dev.parent->of_node;
if (!nproot)
return -ENODEV;
np = of_get_child_by_name(nproot, "regulators");
diff --git a/drivers/regulator/max8997.c b/drivers/regulator/max8997.c
index 90b4c53..9c31e21 100644
--- a/drivers/regulator/max8997.c
+++ b/drivers/regulator/max8997.c
@@ -917,7 +917,7 @@
struct max8997_regulator_data *rdata;
unsigned int i, dvs_voltage_nr = 1, ret;
- pmic_np = of_node_get(iodev->dev->of_node);
+ pmic_np = iodev->dev->of_node;
if (!pmic_np) {
dev_err(&pdev->dev, "could not find pmic sub-node\n");
return -ENODEV;
diff --git a/drivers/regulator/palmas-regulator.c b/drivers/regulator/palmas-regulator.c
index a7ce34d..1878e5b 100644
--- a/drivers/regulator/palmas-regulator.c
+++ b/drivers/regulator/palmas-regulator.c
@@ -1427,7 +1427,6 @@
u32 prop;
int idx, ret;
- node = of_node_get(node);
regulators = of_get_child_by_name(node, "regulators");
if (!regulators) {
dev_info(dev, "regulator node not found\n");
diff --git a/drivers/regulator/tps65910-regulator.c b/drivers/regulator/tps65910-regulator.c
index fa7db88..e584c99 100644
--- a/drivers/regulator/tps65910-regulator.c
+++ b/drivers/regulator/tps65910-regulator.c
@@ -1014,7 +1014,7 @@
if (!pmic_plat_data)
return NULL;
- np = of_node_get(pdev->dev.parent->of_node);
+ np = pdev->dev.parent->of_node;
regulators = of_get_child_by_name(np, "regulators");
if (!regulators) {
dev_err(&pdev->dev, "regulator node not found\n");
diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c
index 2ead7e7..14ba80b 100644
--- a/drivers/s390/block/dasd_devmap.c
+++ b/drivers/s390/block/dasd_devmap.c
@@ -77,7 +77,7 @@
* strings when running as a module.
*/
static char *dasd[256];
-module_param_array(dasd, charp, NULL, 0);
+module_param_array(dasd, charp, NULL, S_IRUGO);
/*
* Single spinlock to protect devmap and servermap structures and lists.
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 18a3358..bd85fb4 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -43,7 +43,7 @@
config SCSI_NETLINK
bool
default n
- select NET
+ depends on NET
config SCSI_PROC_FS
bool "legacy /proc/scsi/ support"
@@ -257,7 +257,7 @@
config SCSI_FC_ATTRS
tristate "FiberChannel Transport Attributes"
- depends on SCSI
+ depends on SCSI && NET
select SCSI_NETLINK
help
If you wish to export transport-specific information about
@@ -585,28 +585,28 @@
config LIBFC
tristate "LibFC module"
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
select CRC32
---help---
Fibre Channel library module
config LIBFCOE
tristate "LibFCoE module"
- select LIBFC
+ depends on LIBFC
---help---
Library for Fibre Channel over Ethernet module
config FCOE
tristate "FCoE module"
depends on PCI
- select LIBFCOE
+ depends on LIBFCOE
---help---
Fibre Channel over Ethernet module
config FCOE_FNIC
tristate "Cisco FNIC Driver"
depends on PCI && X86
- select LIBFCOE
+ depends on LIBFCOE
help
This is support for the Cisco PCI-Express FCoE HBA.
@@ -816,7 +816,7 @@
config SCSI_IBMVFC
tristate "IBM Virtual FC support"
depends on PPC_PSERIES && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
help
This is the IBM POWER Virtual FC Client
@@ -1266,7 +1266,7 @@
config SCSI_LPFC
tristate "Emulex LightPulse Fibre Channel Support"
depends on PCI && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
select CRC_T10DIF
help
This lpfc driver supports the Emulex LightPulse
@@ -1676,7 +1676,7 @@
config ZFCP
tristate "FCP host bus adapter driver for IBM eServer zSeries"
depends on S390 && QDIO && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
help
If you want to access SCSI devices attached to your IBM eServer
zSeries by means of Fibre Channel interfaces say Y.
@@ -1704,7 +1704,7 @@
config SCSI_BFA_FC
tristate "Brocade BFA Fibre Channel Support"
depends on PCI && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
help
This bfa driver supports all Brocade PCIe FC/FCOE host adapters.
diff --git a/drivers/scsi/bnx2fc/Kconfig b/drivers/scsi/bnx2fc/Kconfig
index f245d54..0978828 100644
--- a/drivers/scsi/bnx2fc/Kconfig
+++ b/drivers/scsi/bnx2fc/Kconfig
@@ -1,11 +1,12 @@
config SCSI_BNX2X_FCOE
tristate "QLogic NetXtreme II FCoE support"
depends on PCI
+ depends on (IPV6 || IPV6=n)
+ depends on LIBFC
+ depends on LIBFCOE
select NETDEVICES
select ETHERNET
select NET_VENDOR_BROADCOM
- select LIBFC
- select LIBFCOE
select CNIC
---help---
This driver supports FCoE offload for the QLogic NetXtreme II
diff --git a/drivers/scsi/bnx2i/Kconfig b/drivers/scsi/bnx2i/Kconfig
index 44ce54e..ba30ff8 100644
--- a/drivers/scsi/bnx2i/Kconfig
+++ b/drivers/scsi/bnx2i/Kconfig
@@ -2,6 +2,7 @@
tristate "QLogic NetXtreme II iSCSI support"
depends on NET
depends on PCI
+ depends on (IPV6 || IPV6=n)
select SCSI_ISCSI_ATTRS
select NETDEVICES
select ETHERNET
diff --git a/drivers/scsi/csiostor/Kconfig b/drivers/scsi/csiostor/Kconfig
index 4d03b03..7c7e508 100644
--- a/drivers/scsi/csiostor/Kconfig
+++ b/drivers/scsi/csiostor/Kconfig
@@ -1,7 +1,7 @@
config SCSI_CHELSIO_FCOE
tristate "Chelsio Communications FCoE support"
depends on PCI && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
select FW_LOADER
help
This driver supports FCoE Offload functionality over
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index ea025e4..191b597 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -717,11 +717,21 @@
return NULL;
}
+ if (data_size > ISCSI_DEF_MAX_RECV_SEG_LEN) {
+ iscsi_conn_printk(KERN_ERR, conn, "Invalid buffer len of %u for login task. Max len is %u\n", data_size, ISCSI_DEF_MAX_RECV_SEG_LEN);
+ return NULL;
+ }
+
task = conn->login_task;
} else {
if (session->state != ISCSI_STATE_LOGGED_IN)
return NULL;
+ if (data_size != 0) {
+			iscsi_conn_printk(KERN_ERR, conn, "Cannot send data buffer of len %u for op 0x%x\n", data_size, opcode);
+ return NULL;
+ }
+
BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE);
BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED);
diff --git a/drivers/scsi/qla2xxx/Kconfig b/drivers/scsi/qla2xxx/Kconfig
index 23d6072..113e6c9 100644
--- a/drivers/scsi/qla2xxx/Kconfig
+++ b/drivers/scsi/qla2xxx/Kconfig
@@ -1,7 +1,7 @@
config SCSI_QLA_FC
tristate "QLogic QLA2XXX Fibre Channel Support"
depends on PCI && SCSI
- select SCSI_FC_ATTRS
+ depends on SCSI_FC_ATTRS
select FW_LOADER
---help---
This qla2xxx driver supports all QLogic Fibre Channel
@@ -31,7 +31,7 @@
config TCM_QLA2XXX
tristate "TCM_QLA2XXX fabric module for Qlogic 2xxx series target mode HBAs"
depends on SCSI_QLA_FC && TARGET_CORE
- select LIBFC
+ depends on LIBFC
select BTREE
default n
---help---
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index d837dc1..aaea4b9 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -733,12 +733,13 @@
} else {
unsigned long flags;
+ if (bidi_bytes)
+ scsi_release_bidi_buffers(cmd);
+
spin_lock_irqsave(q->queue_lock, flags);
blk_finish_request(req, error);
spin_unlock_irqrestore(q->queue_lock, flags);
- if (bidi_bytes)
- scsi_release_bidi_buffers(cmd);
scsi_release_buffers(cmd);
scsi_next_command(cmd);
}
diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
index 48f1d26..134fb6e 100644
--- a/drivers/spi/spi-davinci.c
+++ b/drivers/spi/spi-davinci.c
@@ -397,24 +397,21 @@
struct spi_master *master = spi->master;
struct device_node *np = spi->dev.of_node;
bool internal_cs = true;
- unsigned long flags = GPIOF_DIR_OUT;
dspi = spi_master_get_devdata(spi->master);
pdata = &dspi->pdata;
- flags |= (spi->mode & SPI_CS_HIGH) ? GPIOF_INIT_LOW : GPIOF_INIT_HIGH;
-
if (!(spi->mode & SPI_NO_CS)) {
if (np && (master->cs_gpios != NULL) && (spi->cs_gpio >= 0)) {
- retval = gpio_request_one(spi->cs_gpio,
- flags, dev_name(&spi->dev));
+ retval = gpio_direction_output(
+ spi->cs_gpio, !(spi->mode & SPI_CS_HIGH));
internal_cs = false;
} else if (pdata->chip_sel &&
spi->chip_select < pdata->num_chipselect &&
pdata->chip_sel[spi->chip_select] != SPI_INTERN_CS) {
spi->cs_gpio = pdata->chip_sel[spi->chip_select];
- retval = gpio_request_one(spi->cs_gpio,
- flags, dev_name(&spi->dev));
+ retval = gpio_direction_output(
+ spi->cs_gpio, !(spi->mode & SPI_CS_HIGH));
internal_cs = false;
}
@@ -439,12 +436,6 @@
return retval;
}
-static void davinci_spi_cleanup(struct spi_device *spi)
-{
- if (spi->cs_gpio >= 0)
- gpio_free(spi->cs_gpio);
-}
-
static int davinci_spi_check_error(struct davinci_spi *dspi, int int_status)
{
struct device *sdev = dspi->bitbang.master->dev.parent;
@@ -956,7 +947,6 @@
master->num_chipselect = pdata->num_chipselect;
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16);
master->setup = davinci_spi_setup;
- master->cleanup = davinci_spi_cleanup;
dspi->bitbang.chipselect = davinci_spi_chipselect;
dspi->bitbang.setup_transfer = davinci_spi_setup_transfer;
@@ -967,6 +957,27 @@
if (dspi->version == SPI_VERSION_2)
dspi->bitbang.flags |= SPI_READY;
+ if (pdev->dev.of_node) {
+ int i;
+
+ for (i = 0; i < pdata->num_chipselect; i++) {
+ int cs_gpio = of_get_named_gpio(pdev->dev.of_node,
+ "cs-gpios", i);
+
+ if (cs_gpio == -EPROBE_DEFER) {
+ ret = cs_gpio;
+ goto free_clk;
+ }
+
+ if (gpio_is_valid(cs_gpio)) {
+ ret = devm_gpio_request(&pdev->dev, cs_gpio,
+ dev_name(&pdev->dev));
+ if (ret)
+ goto free_clk;
+ }
+ }
+ }
+
r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
if (r)
dma_rx_chan = r->start;
diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
index 670f062..0dd0623 100644
--- a/drivers/spi/spi-dw.c
+++ b/drivers/spi/spi-dw.c
@@ -547,8 +547,7 @@
/* Only alloc on first setup */
chip = spi_get_ctldata(spi);
if (!chip) {
- chip = devm_kzalloc(&spi->dev, sizeof(struct chip_data),
- GFP_KERNEL);
+ chip = kzalloc(sizeof(struct chip_data), GFP_KERNEL);
if (!chip)
return -ENOMEM;
spi_set_ctldata(spi, chip);
@@ -606,6 +605,14 @@
return 0;
}
+static void dw_spi_cleanup(struct spi_device *spi)
+{
+ struct chip_data *chip = spi_get_ctldata(spi);
+
+ kfree(chip);
+ spi_set_ctldata(spi, NULL);
+}
+
/* Restart the controller, disable all interrupts, clean rx fifo */
static void spi_hw_init(struct dw_spi *dws)
{
@@ -661,6 +668,7 @@
master->bus_num = dws->bus_num;
master->num_chipselect = dws->num_cs;
master->setup = dw_spi_setup;
+ master->cleanup = dw_spi_cleanup;
master->transfer_one_message = dw_spi_transfer_one_message;
master->max_speed_hz = dws->max_freq;
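
The spi-dw change above (and the fsl-espi/fsl-spi changes that follow) moves
per-spi-device state from devm_kzalloc(), whose lifetime tracks the
controller, to plain kzalloc() paired with a cleanup() hook, so the allocation
is freed when the SPI device itself goes away. A sketch of the pattern;
'struct chip_data' stands in for the driver's private state:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/spi/spi.h>

struct chip_data { u32 cr0; };	/* illustrative private state */

static int my_spi_setup(struct spi_device *spi)
{
	struct chip_data *chip = spi_get_ctldata(spi);

	/* Only allocate on first setup; reuse afterwards. */
	if (!chip) {
		chip = kzalloc(sizeof(*chip), GFP_KERNEL);
		if (!chip)
			return -ENOMEM;
		spi_set_ctldata(spi, chip);
	}
	/* ... program 'chip' from spi->mode etc. ... */
	return 0;
}

static void my_spi_cleanup(struct spi_device *spi)
{
	kfree(spi_get_ctldata(spi));
	spi_set_ctldata(spi, NULL);
}
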
diff --git a/drivers/spi/spi-fsl-espi.c b/drivers/spi/spi-fsl-espi.c
index 8ebd724..429e111 100644
--- a/drivers/spi/spi-fsl-espi.c
+++ b/drivers/spi/spi-fsl-espi.c
@@ -452,16 +452,16 @@
int retval;
u32 hw_mode;
u32 loop_mode;
- struct spi_mpc8xxx_cs *cs = spi->controller_state;
+ struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
if (!spi->max_speed_hz)
return -EINVAL;
if (!cs) {
- cs = devm_kzalloc(&spi->dev, sizeof(*cs), GFP_KERNEL);
+ cs = kzalloc(sizeof(*cs), GFP_KERNEL);
if (!cs)
return -ENOMEM;
- spi->controller_state = cs;
+ spi_set_ctldata(spi, cs);
}
mpc8xxx_spi = spi_master_get_devdata(spi->master);
@@ -496,6 +496,14 @@
return 0;
}
+static void fsl_espi_cleanup(struct spi_device *spi)
+{
+ struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
+
+ kfree(cs);
+ spi_set_ctldata(spi, NULL);
+}
+
void fsl_espi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events)
{
struct fsl_espi_reg *reg_base = mspi->reg_base;
@@ -605,6 +613,7 @@
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16);
master->setup = fsl_espi_setup;
+ master->cleanup = fsl_espi_cleanup;
mpc8xxx_spi = spi_master_get_devdata(master);
mpc8xxx_spi->spi_do_one_msg = fsl_espi_do_one_msg;
diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
index 9452f674..590f31b 100644
--- a/drivers/spi/spi-fsl-spi.c
+++ b/drivers/spi/spi-fsl-spi.c
@@ -425,16 +425,16 @@
struct fsl_spi_reg *reg_base;
int retval;
u32 hw_mode;
- struct spi_mpc8xxx_cs *cs = spi->controller_state;
+ struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
if (!spi->max_speed_hz)
return -EINVAL;
if (!cs) {
- cs = devm_kzalloc(&spi->dev, sizeof(*cs), GFP_KERNEL);
+ cs = kzalloc(sizeof(*cs), GFP_KERNEL);
if (!cs)
return -ENOMEM;
- spi->controller_state = cs;
+ spi_set_ctldata(spi, cs);
}
mpc8xxx_spi = spi_master_get_devdata(spi->master);
@@ -496,9 +496,13 @@
static void fsl_spi_cleanup(struct spi_device *spi)
{
struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master);
+ struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
if (mpc8xxx_spi->type == TYPE_GRLIB && gpio_is_valid(spi->cs_gpio))
gpio_free(spi->cs_gpio);
+
+ kfree(cs);
+ spi_set_ctldata(spi, NULL);
}
static void fsl_spi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events)
diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c
index 1189cfd..f1f0a58 100644
--- a/drivers/spi/spi-pl022.c
+++ b/drivers/spi/spi-pl022.c
@@ -2136,7 +2136,7 @@
cs_gpio);
else if (gpio_direction_output(cs_gpio, 1))
dev_err(&adev->dev,
- "could set gpio %d as output\n",
+ "could not set gpio %d as output\n",
cs_gpio);
}
}
diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
index cd0e08b0..3afc266 100644
--- a/drivers/spi/spi-rockchip.c
+++ b/drivers/spi/spi-rockchip.c
@@ -220,7 +220,7 @@
do {
if (!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY))
return;
- } while (time_before(jiffies, timeout));
+ } while (!time_after(jiffies, timeout));
dev_warn(rs->dev, "spi controller is in busy state!\n");
}
@@ -529,7 +529,8 @@
int ret = 0;
struct rockchip_spi *rs = spi_master_get_devdata(master);
- WARN_ON((readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY));
+ WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
+ (readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY));
if (!xfer->tx_buf && !xfer->rx_buf) {
dev_err(rs->dev, "No buffer for transfer\n");
diff --git a/drivers/spi/spi-sirf.c b/drivers/spi/spi-sirf.c
index 95ac276..6f0602f 100644
--- a/drivers/spi/spi-sirf.c
+++ b/drivers/spi/spi-sirf.c
@@ -312,6 +312,8 @@
u32 cmd;
sspi = spi_master_get_devdata(spi->master);
+ writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_TXFIFO_OP);
+ writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_TXFIFO_OP);
memcpy(&cmd, sspi->tx, t->len);
if (sspi->word_width == 1 && !(spi->mode & SPI_LSB_FIRST))
cmd = cpu_to_be32(cmd) >>
@@ -438,7 +440,8 @@
sspi->tx_word(sspi);
writel(SIRFSOC_SPI_TXFIFO_EMPTY_INT_EN |
SIRFSOC_SPI_TX_UFLOW_INT_EN |
- SIRFSOC_SPI_RX_OFLOW_INT_EN,
+ SIRFSOC_SPI_RX_OFLOW_INT_EN |
+ SIRFSOC_SPI_RX_IO_DMA_INT_EN,
sspi->base + SIRFSOC_SPI_INT_EN);
writel(SIRFSOC_SPI_RX_EN | SIRFSOC_SPI_TX_EN,
sspi->base + SIRFSOC_SPI_TX_RX_EN);
diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
index e7b2e02..69139ce 100644
--- a/drivers/staging/android/sync.c
+++ b/drivers/staging/android/sync.c
@@ -199,7 +199,6 @@
fence->num_fences = 1;
atomic_set(&fence->status, 1);
- fence_get(&pt->base);
fence->cbs[0].sync_pt = &pt->base;
fence->cbs[0].fence = fence;
if (fence_add_callback(&pt->base, &fence->cbs[0].cb,
diff --git a/drivers/staging/iio/meter/ade7758_trigger.c b/drivers/staging/iio/meter/ade7758_trigger.c
index ea01b8f..6f45ce0 100644
--- a/drivers/staging/iio/meter/ade7758_trigger.c
+++ b/drivers/staging/iio/meter/ade7758_trigger.c
@@ -85,7 +85,7 @@
ret = iio_trigger_register(st->trig);
/* select default trigger */
- indio_dev->trig = st->trig;
+ indio_dev->trig = iio_trigger_get(st->trig);
if (ret)
goto error_free_irq;
diff --git a/drivers/staging/imx-drm/imx-ldb.c b/drivers/staging/imx-drm/imx-ldb.c
index 7e3f019..4662e00 100644
--- a/drivers/staging/imx-drm/imx-ldb.c
+++ b/drivers/staging/imx-drm/imx-ldb.c
@@ -574,6 +574,9 @@
for (i = 0; i < 2; i++) {
struct imx_ldb_channel *channel = &imx_ldb->channel[i];
+ if (!channel->connector.funcs)
+ continue;
+
channel->connector.funcs->destroy(&channel->connector);
channel->encoder.funcs->destroy(&channel->encoder);
}
diff --git a/drivers/staging/imx-drm/ipuv3-plane.c b/drivers/staging/imx-drm/ipuv3-plane.c
index 6f393a1..50de10a 100644
--- a/drivers/staging/imx-drm/ipuv3-plane.c
+++ b/drivers/staging/imx-drm/ipuv3-plane.c
@@ -281,7 +281,8 @@
ipu_idmac_put(ipu_plane->ipu_ch);
ipu_dmfc_put(ipu_plane->dmfc);
- ipu_dp_put(ipu_plane->dp);
+ if (ipu_plane->dp)
+ ipu_dp_put(ipu_plane->dp);
}
}
diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
index 0367f5a..0c59e26 100644
--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
@@ -568,7 +568,7 @@
if (sb->s_root == NULL) {
CERROR("%s: can't make root dentry\n",
ll_get_fsname(sb, NULL, 0));
- GOTO(out_root, err = -ENOMEM);
+ GOTO(out_lock_cn_cb, err = -ENOMEM);
}
sbi->ll_sdev_orig = sb->s_dev;
diff --git a/drivers/staging/vt6655/hostap.c b/drivers/staging/vt6655/hostap.c
index f105c2a..164136b 100644
--- a/drivers/staging/vt6655/hostap.c
+++ b/drivers/staging/vt6655/hostap.c
@@ -350,6 +350,9 @@
{
PSMgmtObject pMgmt = pDevice->pMgmt;
+ if (param->u.generic_elem.len > sizeof(pMgmt->abyWPAIE))
+ return -EINVAL;
+
memcpy(pMgmt->abyWPAIE,
param->u.generic_elem.data,
param->u.generic_elem.len
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 1f4c794f..260c3e1 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -4540,6 +4540,7 @@
{
struct iscsi_conn *l_conn;
struct iscsi_session *sess = conn->sess;
+ bool conn_found = false;
if (!sess)
return;
@@ -4548,12 +4549,13 @@
list_for_each_entry(l_conn, &sess->sess_conn_list, conn_list) {
if (l_conn->cid == cid) {
iscsit_inc_conn_usage_count(l_conn);
+ conn_found = true;
break;
}
}
spin_unlock_bh(&sess->conn_lock);
- if (!l_conn)
+ if (!conn_found)
return;
if (l_conn->sock)
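
The conn_found flag above works around a classic list_for_each_entry()
pitfall: after a loop that completes without break, the cursor is not NULL
but points at a bogus container_of() of the list head, so testing the cursor
pointer can never fail. A condensed restatement of the fixed pattern:

	struct iscsi_conn *l_conn;
	bool conn_found = false;

	list_for_each_entry(l_conn, &sess->sess_conn_list, conn_list) {
		if (l_conn->cid == cid) {
			conn_found = true;
			break;
		}
	}

	if (!conn_found)	/* correct */
		return;
	/* 'if (!l_conn)' would never trigger: on a full traversal l_conn
	 * ends up as container_of(&sess->sess_conn_list, ...), not NULL. */
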
diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c
index 02f9de2..18c2926 100644
--- a/drivers/target/iscsi/iscsi_target_parameters.c
+++ b/drivers/target/iscsi/iscsi_target_parameters.c
@@ -601,7 +601,7 @@
param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL);
if (!param_list) {
pr_err("Unable to allocate memory for struct iscsi_param_list.\n");
- goto err_out;
+ return -1;
}
INIT_LIST_HEAD(¶m_list->param_list);
INIT_LIST_HEAD(¶m_list->extra_response_list);
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index fd90b28..73355f4 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -400,6 +400,8 @@
spin_lock_bh(&conn->cmd_lock);
list_for_each_entry(cmd, &conn->conn_cmd_list, i_conn_node) {
+ if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT)
+ continue;
if (cmd->init_task_tag == init_task_tag) {
spin_unlock_bh(&conn->cmd_lock);
return cmd;
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index bf55c5a..756def3 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -2363,7 +2363,7 @@
pr_err("Invalid value '%ld', must be '0' or '1'\n", tmp); \
return -EINVAL; \
} \
- if (!tmp) \
+ if (tmp) \
t->_var |= _bit; \
else \
t->_var &= ~_bit; \
diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index 6cd7222..bc286a6 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -664,7 +664,7 @@
buf[0] = dev->transport->get_device_type(dev);
buf[3] = 0x0c;
put_unaligned_be32(dev->t10_alua.lba_map_segment_size, &buf[8]);
- put_unaligned_be32(dev->t10_alua.lba_map_segment_size, &buf[12]);
+ put_unaligned_be32(dev->t10_alua.lba_map_segment_multiplier, &buf[12]);
return 0;
}
diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index 4db7987..57d9df8 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -540,6 +540,7 @@
{ "INT3434", 0 },
{ "INT3435", 0 },
{ "80860F0A", 0 },
+ { "8086228A", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
index 7b63677..d7d4584 100644
--- a/drivers/tty/serial/atmel_serial.c
+++ b/drivers/tty/serial/atmel_serial.c
@@ -527,6 +527,45 @@
}
/*
+ * Disable modem status interrupts
+ */
+static void atmel_disable_ms(struct uart_port *port)
+{
+ struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+ uint32_t idr = 0;
+
+	 * Modem-status interrupts must not be disabled twice
+ * Interrupt should not be disabled twice
+ */
+ if (!atmel_port->ms_irq_enabled)
+ return;
+
+ atmel_port->ms_irq_enabled = false;
+
+ if (atmel_port->gpio_irq[UART_GPIO_CTS] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_CTS]);
+ else
+ idr |= ATMEL_US_CTSIC;
+
+ if (atmel_port->gpio_irq[UART_GPIO_DSR] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_DSR]);
+ else
+ idr |= ATMEL_US_DSRIC;
+
+ if (atmel_port->gpio_irq[UART_GPIO_RI] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_RI]);
+ else
+ idr |= ATMEL_US_RIIC;
+
+ if (atmel_port->gpio_irq[UART_GPIO_DCD] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_DCD]);
+ else
+ idr |= ATMEL_US_DCDIC;
+
+ UART_PUT_IDR(port, idr);
+}
+
+/*
* Control the transmission of a break signal
*/
static void atmel_break_ctl(struct uart_port *port, int break_state)
@@ -1993,7 +2032,9 @@
/* CTS flow-control and modem-status interrupts */
if (UART_ENABLE_MS(port, termios->c_cflag))
- port->ops->enable_ms(port);
+ atmel_enable_ms(port);
+ else
+ atmel_disable_ms(port);
spin_unlock_irqrestore(&port->lock, flags);
}
diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
index 01951d2..806e4bc 100644
--- a/drivers/tty/serial/xilinx_uartps.c
+++ b/drivers/tty/serial/xilinx_uartps.c
@@ -581,7 +581,7 @@
{
unsigned int status;
- status = cdns_uart_readl(CDNS_UART_ISR_OFFSET) & CDNS_UART_IXR_TXEMPTY;
+ status = cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY;
return status ? TIOCSER_TEMT : 0;
}
diff --git a/drivers/usb/chipidea/ci_hdrc_msm.c b/drivers/usb/chipidea/ci_hdrc_msm.c
index d72b9d2..4935ac3 100644
--- a/drivers/usb/chipidea/ci_hdrc_msm.c
+++ b/drivers/usb/chipidea/ci_hdrc_msm.c
@@ -20,13 +20,13 @@
static void ci_hdrc_msm_notify_event(struct ci_hdrc *ci, unsigned event)
{
struct device *dev = ci->gadget.dev.parent;
- int val;
switch (event) {
case CI_HDRC_CONTROLLER_RESET_EVENT:
dev_dbg(dev, "CI_HDRC_CONTROLLER_RESET_EVENT received\n");
writel(0, USB_AHBBURST);
writel(0, USB_AHBMODE);
+ usb_phy_init(ci->transceiver);
break;
case CI_HDRC_CONTROLLER_STOPPED_EVENT:
dev_dbg(dev, "CI_HDRC_CONTROLLER_STOPPED_EVENT received\n");
@@ -34,10 +34,7 @@
* Put the transceiver in non-driving mode. Otherwise host
* may not detect soft-disconnection.
*/
- val = usb_phy_io_read(ci->transceiver, ULPI_FUNC_CTRL);
- val &= ~ULPI_FUNC_CTRL_OPMODE_MASK;
- val |= ULPI_FUNC_CTRL_OPMODE_NONDRIVING;
- usb_phy_io_write(ci->transceiver, val, ULPI_FUNC_CTRL);
+ usb_phy_notify_disconnect(ci->transceiver, USB_SPEED_UNKNOWN);
break;
default:
dev_dbg(dev, "unknown ci_hdrc event\n");
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 46f5161..d481c99 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -5024,9 +5024,10 @@
hub = list_entry(tmp, struct usb_hub, event_list);
kref_get(&hub->kref);
+ hdev = hub->hdev;
+ usb_get_dev(hdev);
spin_unlock_irq(&hub_event_lock);
- hdev = hub->hdev;
hub_dev = hub->intfdev;
intf = to_usb_interface(hub_dev);
dev_dbg(hub_dev, "state %d ports %d chg %04x evt %04x\n",
@@ -5139,6 +5140,7 @@
usb_autopm_put_interface(intf);
loop_disconnected:
usb_unlock_device(hdev);
+ usb_put_dev(hdev);
kref_put(&hub->kref, hub_release);
} /* end while (1) */
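The hub.c change applies the usual take-a-reference-before-dropping-the-lock rule: a pointer fetched from a locked list must be pinned while the lock is still held, because the object can be freed the instant the lock is released. A generic sketch of the idiom (names hypothetical):

	spin_lock_irq(&list_lock);
	obj = list_first_entry(&pending, struct my_obj, node);
	kref_get(&obj->kref);	/* pin the object while the lock is held */
	dev = obj->dev;
	get_device(dev);	/* pin the backing device as well */
	spin_unlock_irq(&list_lock);

	process(obj);		/* safe: both references are held */

	put_device(dev);
	kref_put(&obj->kref, my_obj_release);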
diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
index 7c9618e..ce6071d 100644
--- a/drivers/usb/dwc2/gadget.c
+++ b/drivers/usb/dwc2/gadget.c
@@ -1649,6 +1649,7 @@
dev_err(hsotg->dev,
"%s: timeout flushing fifo (GRSTCTL=%08x)\n",
__func__, val);
+ break;
}
udelay(1);
@@ -2747,13 +2748,14 @@
dev_dbg(hsotg->dev, "pdev 0x%p\n", pdev);
- if (hsotg->phy) {
+ if (hsotg->uphy)
+ usb_phy_init(hsotg->uphy);
+ else if (hsotg->plat && hsotg->plat->phy_init)
+ hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
+ else {
phy_init(hsotg->phy);
phy_power_on(hsotg->phy);
- } else if (hsotg->uphy)
- usb_phy_init(hsotg->uphy);
- else if (hsotg->plat->phy_init)
- hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
+ }
}
/**
@@ -2767,13 +2769,14 @@
{
struct platform_device *pdev = to_platform_device(hsotg->dev);
- if (hsotg->phy) {
+ if (hsotg->uphy)
+ usb_phy_shutdown(hsotg->uphy);
+ else if (hsotg->plat && hsotg->plat->phy_exit)
+ hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
+ else {
phy_power_off(hsotg->phy);
phy_exit(hsotg->phy);
- } else if (hsotg->uphy)
- usb_phy_shutdown(hsotg->uphy);
- else if (hsotg->plat->phy_exit)
- hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
+ }
}
/**
@@ -2892,13 +2895,11 @@
return -ENODEV;
/* all endpoints should be shutdown */
- for (ep = 0; ep < hsotg->num_of_eps; ep++)
+ for (ep = 1; ep < hsotg->num_of_eps; ep++)
s3c_hsotg_ep_disable(&hsotg->eps[ep].ep);
spin_lock_irqsave(&hsotg->lock, flags);
- s3c_hsotg_phy_disable(hsotg);
-
if (!driver)
hsotg->driver = NULL;
@@ -2941,7 +2942,6 @@
s3c_hsotg_phy_enable(hsotg);
s3c_hsotg_core_init(hsotg);
} else {
- s3c_hsotg_disconnect(hsotg);
s3c_hsotg_phy_disable(hsotg);
}
@@ -3441,13 +3441,6 @@
hsotg->irq = ret;
- ret = devm_request_irq(&pdev->dev, hsotg->irq, s3c_hsotg_irq, 0,
- dev_name(dev), hsotg);
- if (ret < 0) {
- dev_err(dev, "cannot claim IRQ\n");
- goto err_clk;
- }
-
dev_info(dev, "regs %p, irq %d\n", hsotg->regs, hsotg->irq);
hsotg->gadget.max_speed = USB_SPEED_HIGH;
@@ -3488,9 +3481,6 @@
if (hsotg->phy && (phy_get_bus_width(phy) == 8))
hsotg->phyif = GUSBCFG_PHYIF8;
- if (hsotg->phy)
- phy_init(hsotg->phy);
-
/* usb phy enable */
s3c_hsotg_phy_enable(hsotg);
@@ -3498,6 +3488,17 @@
s3c_hsotg_init(hsotg);
s3c_hsotg_hw_cfg(hsotg);
+ ret = devm_request_irq(&pdev->dev, hsotg->irq, s3c_hsotg_irq, 0,
+ dev_name(dev), hsotg);
+ if (ret < 0) {
+ s3c_hsotg_phy_disable(hsotg);
+ clk_disable_unprepare(hsotg->clk);
+ regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies),
+ hsotg->supplies);
+ dev_err(dev, "cannot claim IRQ\n");
+ goto err_clk;
+ }
+
/* hsotg->num_of_eps holds number of EPs other than ep0 */
if (hsotg->num_of_eps == 0) {
@@ -3582,9 +3583,6 @@
usb_gadget_unregister_driver(hsotg->driver);
}
- s3c_hsotg_phy_disable(hsotg);
- if (hsotg->phy)
- phy_exit(hsotg->phy);
clk_disable_unprepare(hsotg->clk);
return 0;
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index b769c1f..9069984 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -799,20 +799,21 @@
{
struct dwc3 *dwc = platform_get_drvdata(pdev);
+ dwc3_debugfs_exit(dwc);
+ dwc3_core_exit_mode(dwc);
+ dwc3_event_buffers_cleanup(dwc);
+ dwc3_free_event_buffers(dwc);
+
usb_phy_set_suspend(dwc->usb2_phy, 1);
usb_phy_set_suspend(dwc->usb3_phy, 1);
phy_power_off(dwc->usb2_generic_phy);
phy_power_off(dwc->usb3_generic_phy);
+ dwc3_core_exit(dwc);
+
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- dwc3_debugfs_exit(dwc);
- dwc3_core_exit_mode(dwc);
- dwc3_event_buffers_cleanup(dwc);
- dwc3_free_event_buffers(dwc);
- dwc3_core_exit(dwc);
-
return 0;
}
diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
index 9dcfbe7..fc0de375 100644
--- a/drivers/usb/dwc3/dwc3-omap.c
+++ b/drivers/usb/dwc3/dwc3-omap.c
@@ -576,9 +576,9 @@
if (omap->extcon_id_dev.edev)
extcon_unregister_interest(&omap->extcon_id_dev);
dwc3_omap_disable_irqs(omap);
+ device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core);
return 0;
}
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 349cacc..490a6ca 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -527,7 +527,7 @@
dep->stream_capable = true;
}
- if (usb_endpoint_xfer_isoc(desc))
+ if (!usb_endpoint_xfer_control(desc))
params.param1 |= DWC3_DEPCFG_XFER_IN_PROGRESS_EN;
/*
@@ -1225,16 +1225,17 @@
int ret;
+ spin_lock_irqsave(&dwc->lock, flags);
if (!dep->endpoint.desc) {
dev_dbg(dwc->dev, "trying to queue request %p to disabled %s\n",
request, ep->name);
+ spin_unlock_irqrestore(&dwc->lock, flags);
return -ESHUTDOWN;
}
dev_vdbg(dwc->dev, "queuing request %p to %s length %d\n",
request, ep->name, request->length);
- spin_lock_irqsave(&dwc->lock, flags);
ret = __dwc3_gadget_ep_queue(dep, req);
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -2041,12 +2042,6 @@
dwc3_endpoint_transfer_complete(dwc, dep, event);
break;
case DWC3_DEPEVT_XFERINPROGRESS:
- if (!usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
- dev_dbg(dwc->dev, "%s is not an Isochronous endpoint\n",
- dep->name);
- return;
- }
-
dwc3_endpoint_transfer_complete(dwc, dep, event);
break;
case DWC3_DEPEVT_XFERNOTREADY:
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index dc30adf..0dc3552 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -155,6 +155,12 @@
struct usb_request *req;
};
+struct ffs_desc_helper {
+ struct ffs_data *ffs;
+ unsigned interfaces_count;
+ unsigned eps_count;
+};
+
static int __must_check ffs_epfiles_create(struct ffs_data *ffs);
static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count);
@@ -1830,7 +1836,8 @@
u8 *valuep, struct usb_descriptor_header *desc,
void *priv)
{
- struct ffs_data *ffs = priv;
+ struct ffs_desc_helper *helper = priv;
+ struct usb_endpoint_descriptor *d;
ENTER();
@@ -1844,8 +1851,8 @@
* encountered interface "n" then there are at least
* "n+1" interfaces.
*/
- if (*valuep >= ffs->interfaces_count)
- ffs->interfaces_count = *valuep + 1;
+ if (*valuep >= helper->interfaces_count)
+ helper->interfaces_count = *valuep + 1;
break;
case FFS_STRING:
@@ -1853,14 +1860,22 @@
* Strings are indexed from 1 (0 is magic ;) reserved
* for languages list or some such)
*/
- if (*valuep > ffs->strings_count)
- ffs->strings_count = *valuep;
+ if (*valuep > helper->ffs->strings_count)
+ helper->ffs->strings_count = *valuep;
break;
case FFS_ENDPOINT:
- /* Endpoints are indexed from 1 as well. */
- if ((*valuep & USB_ENDPOINT_NUMBER_MASK) > ffs->eps_count)
- ffs->eps_count = (*valuep & USB_ENDPOINT_NUMBER_MASK);
+ d = (void *)desc;
+ helper->eps_count++;
+ if (helper->eps_count >= 15)
+ return -EINVAL;
+ /* Check if descriptors for any speed were already parsed */
+ if (!helper->ffs->eps_count && !helper->ffs->interfaces_count)
+ helper->ffs->eps_addrmap[helper->eps_count] =
+ d->bEndpointAddress;
+ else if (helper->ffs->eps_addrmap[helper->eps_count] !=
+ d->bEndpointAddress)
+ return -EINVAL;
break;
}
@@ -2053,6 +2068,7 @@
char *data = _data, *raw_descs;
unsigned os_descs_count = 0, counts[3], flags;
int ret = -EINVAL, i;
+ struct ffs_desc_helper helper;
ENTER();
@@ -2101,13 +2117,29 @@
/* Read descriptors */
raw_descs = data;
+ helper.ffs = ffs;
for (i = 0; i < 3; ++i) {
if (!counts[i])
continue;
+ helper.interfaces_count = 0;
+ helper.eps_count = 0;
ret = ffs_do_descs(counts[i], data, len,
- __ffs_data_do_entity, ffs);
+ __ffs_data_do_entity, &helper);
if (ret < 0)
goto error;
+ if (!ffs->eps_count && !ffs->interfaces_count) {
+ ffs->eps_count = helper.eps_count;
+ ffs->interfaces_count = helper.interfaces_count;
+ } else {
+ if (ffs->eps_count != helper.eps_count) {
+ ret = -EINVAL;
+ goto error;
+ }
+ if (ffs->interfaces_count != helper.interfaces_count) {
+ ret = -EINVAL;
+ goto error;
+ }
+ }
data += ret;
len -= ret;
}
@@ -2342,9 +2374,18 @@
spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags);
}
-
/* Bind/unbind USB function hooks *******************************************/
+static int ffs_ep_addr2idx(struct ffs_data *ffs, u8 endpoint_address)
+{
+ int i;
+
+ for (i = 1; i < ARRAY_SIZE(ffs->eps_addrmap); ++i)
+ if (ffs->eps_addrmap[i] == endpoint_address)
+ return i;
+ return -ENOENT;
+}
+
static int __ffs_func_bind_do_descs(enum ffs_entity_type type, u8 *valuep,
struct usb_descriptor_header *desc,
void *priv)
@@ -2378,7 +2419,10 @@
if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT)
return 0;
- idx = (ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK) - 1;
+ idx = ffs_ep_addr2idx(func->ffs, ds->bEndpointAddress) - 1;
+ if (idx < 0)
+ return idx;
+
ffs_ep = func->eps + idx;
if (unlikely(ffs_ep->descs[ep_desc_id])) {
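The address map exists because endpoint numbers alone are ambiguous: an IN endpoint 0x81 and an OUT endpoint 0x01 share endpoint number 1, so the old (bEndpointAddress & USB_ENDPOINT_NUMBER_MASK) - 1 indexing could collide, and it also assumed a dense 1..N numbering that descriptors are not required to use. An illustrative sketch of the difference (values hypothetical):

	/* old scheme: index derived from the endpoint number only */
	/* (0x81 & 0x0f) - 1 == 0 and (0x01 & 0x0f) - 1 == 0: collision */

	/* new scheme: record full addresses in parse order ... */
	u8 eps_addrmap[15] = { 0, 0x81, 0x01 };	/* slot 0 unused */

	/* ... and resolve by exact address at bind time */
	for (i = 1; i < ARRAY_SIZE(eps_addrmap); ++i)
		if (eps_addrmap[i] == 0x81)
			return i;	/* 1: unambiguous */
	return -ENOENT;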
diff --git a/drivers/usb/gadget/function/u_fs.h b/drivers/usb/gadget/function/u_fs.h
index 63d6e71..d48897e 100644
--- a/drivers/usb/gadget/function/u_fs.h
+++ b/drivers/usb/gadget/function/u_fs.h
@@ -224,6 +224,8 @@
void *ms_os_descs_ext_prop_name_avail;
void *ms_os_descs_ext_prop_data_avail;
+ u8 eps_addrmap[15];
+
unsigned short strings_count;
unsigned short interfaces_count;
unsigned short eps_count;
diff --git a/drivers/usb/gadget/udc/fusb300_udc.h b/drivers/usb/gadget/udc/fusb300_udc.h
index ae811d8..ad39f89 100644
--- a/drivers/usb/gadget/udc/fusb300_udc.h
+++ b/drivers/usb/gadget/udc/fusb300_udc.h
@@ -12,7 +12,7 @@
#ifndef __FUSB300_UDC_H__
-#define __FUSB300_UDC_H_
+#define __FUSB300_UDC_H__
#include <linux/kernel.h>
diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
index f4eac11..2e95715 100644
--- a/drivers/usb/gadget/udc/net2280.c
+++ b/drivers/usb/gadget/udc/net2280.c
@@ -3320,7 +3320,7 @@
if (stat & tmp) {
writel(tmp, &dev->regs->irqstat1);
if ((((stat & BIT(ROOT_PORT_RESET_INTERRUPT)) &&
- (readl(&dev->usb->usbstat) & mask)) ||
+ ((readl(&dev->usb->usbstat) & mask) == 0)) ||
((readl(&dev->usb->usbctl) &
BIT(VBUS_PIN)) == 0)) &&
(dev->gadget.speed != USB_SPEED_UNKNOWN)) {
diff --git a/drivers/usb/host/bcma-hcd.c b/drivers/usb/host/bcma-hcd.c
index 205f4a3..cd6d0af 100644
--- a/drivers/usb/host/bcma-hcd.c
+++ b/drivers/usb/host/bcma-hcd.c
@@ -237,7 +237,7 @@
bcma_hcd_init_chip(dev);
/* In AI chips EHCI is addrspace 0, OHCI is 1 */
- ohci_addr = dev->addr1;
+ ohci_addr = dev->addr_s[0];
if ((chipinfo->id == 0x5357 || chipinfo->id == 0x4749)
&& chipinfo->rev == 0)
ohci_addr = 0x18009000;
diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
index 81cda09..488a308 100644
--- a/drivers/usb/host/ehci-hcd.c
+++ b/drivers/usb/host/ehci-hcd.c
@@ -965,8 +965,6 @@
}
qh->exception = 1;
- if (ehci->rh_state < EHCI_RH_RUNNING)
- qh->qh_state = QH_STATE_IDLE;
switch (qh->qh_state) {
case QH_STATE_LINKED:
WARN_ON(!list_empty(&qh->qtd_list));
diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
index aa79e87..69aece3 100644
--- a/drivers/usb/host/xhci-hub.c
+++ b/drivers/usb/host/xhci-hub.c
@@ -468,7 +468,8 @@
}
/* Updates Link Status for super Speed port */
-static void xhci_hub_report_usb3_link_state(u32 *status, u32 status_reg)
+static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci,
+ u32 *status, u32 status_reg)
{
u32 pls = status_reg & PORT_PLS_MASK;
@@ -507,7 +508,8 @@
* in which sometimes the port enters compliance mode
* caused by a delay on the host-device negotiation.
*/
- if (pls == USB_SS_PORT_LS_COMP_MOD)
+ if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) &&
+ (pls == USB_SS_PORT_LS_COMP_MOD))
pls |= USB_PORT_STAT_CONNECTION;
}
@@ -666,7 +668,7 @@
}
/* Update Port Link State */
if (hcd->speed == HCD_USB3) {
- xhci_hub_report_usb3_link_state(&status, raw_port_status);
+ xhci_hub_report_usb3_link_state(xhci, &status, raw_port_status);
/*
* Verify if all USB3 Ports Have entered U0 already.
* Delete Compliance Mode Timer if so.
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 8056d90..8936211 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1812,6 +1812,7 @@
if (xhci->lpm_command)
xhci_free_command(xhci, xhci->lpm_command);
+ xhci->lpm_command = NULL;
if (xhci->cmd_ring)
xhci_ring_free(xhci, xhci->cmd_ring);
xhci->cmd_ring = NULL;
@@ -1819,7 +1820,7 @@
xhci_cleanup_command_queue(xhci);
num_ports = HCS_MAX_PORTS(xhci->hcs_params1);
- for (i = 0; i < num_ports; i++) {
+ for (i = 0; i < num_ports && xhci->rh_bw; i++) {
struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table;
for (j = 0; j < XHCI_MAX_INTERVAL; j++) {
struct list_head *ep = &bwt->interval_bw[j].endpoints;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index c020b09..c4a8fca 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -3971,13 +3971,21 @@
int ret;
spin_lock_irqsave(&xhci->lock, flags);
- if (max_exit_latency == xhci->devs[udev->slot_id]->current_mel) {
+
+ virt_dev = xhci->devs[udev->slot_id];
+
+ /*
+ * virt_dev might not exist yet if xHC resumed from hibernate (S4) and
+ * xHC was re-initialized. Exit latency will be set later after
+ * hub_port_finish_reset() is done and xhci->devs[] are re-allocated
+ */
+
+ if (!virt_dev || max_exit_latency == virt_dev->current_mel) {
spin_unlock_irqrestore(&xhci->lock, flags);
return 0;
}
/* Attempt to issue an Evaluate Context command to change the MEL. */
- virt_dev = xhci->devs[udev->slot_id];
command = xhci->lpm_command;
ctrl_ctx = xhci_get_input_control_ctx(xhci, command->in_ctx);
if (!ctrl_ctx) {
diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c
index 47ae645..3ee133f 100644
--- a/drivers/usb/musb/musb_cppi41.c
+++ b/drivers/usb/musb/musb_cppi41.c
@@ -39,6 +39,7 @@
u32 transferred;
u32 packet_sz;
struct list_head tx_check;
+ int tx_zlp;
};
#define MUSB_DMA_NUM_CHANNELS 15
@@ -122,6 +123,8 @@
{
struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
struct musb *musb = hw_ep->musb;
+ void __iomem *epio = hw_ep->regs;
+ u16 csr;
if (!cppi41_channel->prog_len ||
(cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)) {
@@ -131,15 +134,24 @@
cppi41_channel->transferred;
cppi41_channel->channel.status = MUSB_DMA_STATUS_FREE;
cppi41_channel->channel.rx_packet_done = true;
+
+ /*
+ * Transmit a ZLP in PIO mode for transfers whose size is a
+ * multiple of the EP packet size.
+ */
+ if (cppi41_channel->tx_zlp && (cppi41_channel->transferred %
+ cppi41_channel->packet_sz) == 0) {
+ musb_ep_select(musb->mregs, hw_ep->epnum);
+ csr = MUSB_TXCSR_MODE | MUSB_TXCSR_TXPKTRDY;
+ musb_writew(epio, MUSB_TXCSR, csr);
+ }
musb_dma_completion(musb, hw_ep->epnum, cppi41_channel->is_tx);
} else {
/* next iteration, reload */
struct dma_chan *dc = cppi41_channel->dc;
struct dma_async_tx_descriptor *dma_desc;
enum dma_transfer_direction direction;
- u16 csr;
u32 remain_bytes;
- void __iomem *epio = cppi41_channel->hw_ep->regs;
cppi41_channel->buf_addr += cppi41_channel->packet_sz;
@@ -363,6 +375,7 @@
cppi41_channel->total_len = len;
cppi41_channel->transferred = 0;
cppi41_channel->packet_sz = packet_sz;
+ cppi41_channel->tx_zlp = (cppi41_channel->is_tx && mode) ? 1 : 0;
/*
* Due to AM335x' Advisory 1.0.13 we are not allowed to transfer more
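The ZLP rule itself is simple: a trailing zero-length packet is needed only when the transfer ends exactly on a packet boundary; otherwise the short final packet already terminates it. A minimal sketch (hypothetical helper, not part of the driver):

	/* 1024 bytes on a 512-byte bulk EP -> ZLP needed;
	 * 1000 bytes on the same EP -> final packet is short, no ZLP */
	static bool tx_needs_zlp(u32 transferred, u32 packet_sz, bool zlp_requested)
	{
		return zlp_requested && packet_sz && (transferred % packet_sz) == 0;
	}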
diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
index c42bdf0..00972ec 100644
--- a/drivers/usb/phy/phy-mxs-usb.c
+++ b/drivers/usb/phy/phy-mxs-usb.c
@@ -1,5 +1,5 @@
/*
- * Copyright 2012-2013 Freescale Semiconductor, Inc.
+ * Copyright 2012-2014 Freescale Semiconductor, Inc.
* Copyright (C) 2012 Marek Vasut <marex@denx.de>
* on behalf of DENX Software Engineering GmbH
*
@@ -125,7 +125,13 @@
MXS_PHY_NEED_IP_FIX,
};
+static const struct mxs_phy_data imx6sx_phy_data = {
+ .flags = MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS |
+ MXS_PHY_NEED_IP_FIX,
+};
+
static const struct of_device_id mxs_phy_dt_ids[] = {
+ { .compatible = "fsl,imx6sx-usbphy", .data = &imx6sx_phy_data, },
{ .compatible = "fsl,imx6sl-usbphy", .data = &imx6sl_phy_data, },
{ .compatible = "fsl,imx6q-usbphy", .data = &imx6q_phy_data, },
{ .compatible = "fsl,imx23-usbphy", .data = &imx23_phy_data, },
diff --git a/drivers/usb/phy/phy-tegra-usb.c b/drivers/usb/phy/phy-tegra-usb.c
index 13b4fa2..886f180 100644
--- a/drivers/usb/phy/phy-tegra-usb.c
+++ b/drivers/usb/phy/phy-tegra-usb.c
@@ -878,8 +878,8 @@
return -ENOMEM;
}
- tegra_phy->config = devm_kzalloc(&pdev->dev,
- sizeof(*tegra_phy->config), GFP_KERNEL);
+ tegra_phy->config = devm_kzalloc(&pdev->dev, sizeof(*config),
+ GFP_KERNEL);
if (!tegra_phy->config) {
dev_err(&pdev->dev,
"unable to allocate memory for USB UTMIP config\n");
diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
index 4fd3653..b0c97a3 100644
--- a/drivers/usb/renesas_usbhs/fifo.c
+++ b/drivers/usb/renesas_usbhs/fifo.c
@@ -108,19 +108,45 @@
return list_first_entry(&pipe->list, struct usbhs_pkt, node);
}
+static void usbhsf_fifo_clear(struct usbhs_pipe *pipe,
+ struct usbhs_fifo *fifo);
+static void usbhsf_fifo_unselect(struct usbhs_pipe *pipe,
+ struct usbhs_fifo *fifo);
+static struct dma_chan *usbhsf_dma_chan_get(struct usbhs_fifo *fifo,
+ struct usbhs_pkt *pkt);
+#define usbhsf_dma_map(p) __usbhsf_dma_map_ctrl(p, 1)
+#define usbhsf_dma_unmap(p) __usbhsf_dma_map_ctrl(p, 0)
+static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map);
struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
{
struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+ struct usbhs_fifo *fifo = usbhs_pipe_to_fifo(pipe);
unsigned long flags;
/******************** spin lock ********************/
usbhs_lock(priv, flags);
+ usbhs_pipe_disable(pipe);
+
if (!pkt)
pkt = __usbhsf_pkt_get(pipe);
- if (pkt)
+ if (pkt) {
+ struct dma_chan *chan = NULL;
+
+ if (fifo)
+ chan = usbhsf_dma_chan_get(fifo, pkt);
+ if (chan) {
+ dmaengine_terminate_all(chan);
+ usbhsf_fifo_clear(pipe, fifo);
+ usbhsf_dma_unmap(pkt);
+ }
+
__usbhsf_pkt_del(pkt);
+ }
+
+ if (fifo)
+ usbhsf_fifo_unselect(pipe, fifo);
usbhs_unlock(priv, flags);
/******************** spin unlock ******************/
@@ -544,6 +570,7 @@
usbhsf_send_terminator(pipe, fifo);
usbhsf_tx_irq_ctrl(pipe, !*is_done);
+ usbhs_pipe_running(pipe, !*is_done);
usbhs_pipe_enable(pipe);
dev_dbg(dev, " send %d (%d/ %d/ %d/ %d)\n",
@@ -570,12 +597,21 @@
* retry in interrupt
*/
usbhsf_tx_irq_ctrl(pipe, 1);
+ usbhs_pipe_running(pipe, 1);
return ret;
}
+static int usbhsf_pio_prepare_push(struct usbhs_pkt *pkt, int *is_done)
+{
+ if (usbhs_pipe_is_running(pkt->pipe))
+ return 0;
+
+ return usbhsf_pio_try_push(pkt, is_done);
+}
+
struct usbhs_pkt_handle usbhs_fifo_pio_push_handler = {
- .prepare = usbhsf_pio_try_push,
+ .prepare = usbhsf_pio_prepare_push,
.try_run = usbhsf_pio_try_push,
};
@@ -589,6 +625,9 @@
if (usbhs_pipe_is_busy(pipe))
return 0;
+ if (usbhs_pipe_is_running(pipe))
+ return 0;
+
/*
* pipe enable to prepare packet receive
*/
@@ -597,6 +636,7 @@
usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->length);
usbhs_pipe_enable(pipe);
+ usbhs_pipe_running(pipe, 1);
usbhsf_rx_irq_ctrl(pipe, 1);
return 0;
@@ -642,6 +682,7 @@
(total_len < maxp)) { /* short packet */
*is_done = 1;
usbhsf_rx_irq_ctrl(pipe, 0);
+ usbhs_pipe_running(pipe, 0);
usbhs_pipe_disable(pipe); /* disable pipe first */
}
@@ -763,8 +804,6 @@
usbhs_bset(priv, fifo->sel, DREQE, dreqe);
}
-#define usbhsf_dma_map(p) __usbhsf_dma_map_ctrl(p, 1)
-#define usbhsf_dma_unmap(p) __usbhsf_dma_map_ctrl(p, 0)
static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
{
struct usbhs_pipe *pipe = pkt->pipe;
@@ -805,6 +844,7 @@
dev_dbg(dev, " %s %d (%d/ %d)\n",
fifo->name, usbhs_pipe_number(pipe), pkt->length, pkt->zero);
+ usbhs_pipe_running(pipe, 1);
usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->trans);
usbhs_pipe_enable(pipe);
usbhsf_dma_start(pipe, fifo);
@@ -836,6 +876,10 @@
if ((uintptr_t)(pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */
goto usbhsf_pio_prepare_push;
+ /* return now if the pipe is already running */
+ if (usbhs_pipe_is_running(pipe))
+ return 0;
+
/* get enable DMA fifo */
fifo = usbhsf_get_dma_fifo(priv, pkt);
if (!fifo)
@@ -869,15 +913,29 @@
static int usbhsf_dma_push_done(struct usbhs_pkt *pkt, int *is_done)
{
struct usbhs_pipe *pipe = pkt->pipe;
+ int is_short = pkt->trans % usbhs_pipe_get_maxpacket(pipe);
- pkt->actual = pkt->trans;
+ pkt->actual += pkt->trans;
- *is_done = !pkt->zero; /* send zero packet ? */
+ if (pkt->actual < pkt->length)
+ *is_done = 0; /* there are remainder data */
+ else if (is_short)
+ *is_done = 1; /* short packet */
+ else
+ *is_done = !pkt->zero; /* send zero packet? */
+
+ usbhs_pipe_running(pipe, !*is_done);
usbhsf_dma_stop(pipe, pipe->fifo);
usbhsf_dma_unmap(pkt);
usbhsf_fifo_unselect(pipe, pipe->fifo);
+ if (!*is_done) {
+ /* change handler to PIO */
+ pkt->handler = &usbhs_fifo_pio_push_handler;
+ return pkt->handler->try_run(pkt, is_done);
+ }
+
return 0;
}
@@ -972,8 +1030,10 @@
if ((pkt->actual == pkt->length) || /* receive all data */
(pkt->trans < maxp)) { /* short packet */
*is_done = 1;
+ usbhs_pipe_running(pipe, 0);
} else {
/* re-enable */
+ usbhs_pipe_running(pipe, 0);
usbhsf_prepare_pop(pkt, is_done);
}
diff --git a/drivers/usb/renesas_usbhs/mod.c b/drivers/usb/renesas_usbhs/mod.c
index 6a030b9..9a705b1 100644
--- a/drivers/usb/renesas_usbhs/mod.c
+++ b/drivers/usb/renesas_usbhs/mod.c
@@ -213,7 +213,10 @@
{
struct usbhs_mod *mod = usbhs_mod_get_current(priv);
u16 intenb0, intenb1;
+ unsigned long flags;
+ /******************** spin lock ********************/
+ usbhs_lock(priv, flags);
state->intsts0 = usbhs_read(priv, INTSTS0);
state->intsts1 = usbhs_read(priv, INTSTS1);
@@ -229,6 +232,8 @@
state->bempsts &= mod->irq_bempsts;
state->brdysts &= mod->irq_brdysts;
}
+ usbhs_unlock(priv, flags);
+ /******************** spin unlock ******************/
/*
* Check whether the irq enable registers and the irq status are set
diff --git a/drivers/usb/renesas_usbhs/pipe.c b/drivers/usb/renesas_usbhs/pipe.c
index 75fbcf6..040bcef 100644
--- a/drivers/usb/renesas_usbhs/pipe.c
+++ b/drivers/usb/renesas_usbhs/pipe.c
@@ -578,6 +578,19 @@
return usbhsp_flags_has(pipe, IS_DIR_HOST);
}
+int usbhs_pipe_is_running(struct usbhs_pipe *pipe)
+{
+ return usbhsp_flags_has(pipe, IS_RUNNING);
+}
+
+void usbhs_pipe_running(struct usbhs_pipe *pipe, int running)
+{
+ if (running)
+ usbhsp_flags_set(pipe, IS_RUNNING);
+ else
+ usbhsp_flags_clr(pipe, IS_RUNNING);
+}
+
void usbhs_pipe_data_sequence(struct usbhs_pipe *pipe, int sequence)
{
u16 mask = (SQCLR | SQSET);
diff --git a/drivers/usb/renesas_usbhs/pipe.h b/drivers/usb/renesas_usbhs/pipe.h
index 406f36d..d24a059 100644
--- a/drivers/usb/renesas_usbhs/pipe.h
+++ b/drivers/usb/renesas_usbhs/pipe.h
@@ -36,6 +36,7 @@
#define USBHS_PIPE_FLAGS_IS_USED (1 << 0)
#define USBHS_PIPE_FLAGS_IS_DIR_IN (1 << 1)
#define USBHS_PIPE_FLAGS_IS_DIR_HOST (1 << 2)
+#define USBHS_PIPE_FLAGS_IS_RUNNING (1 << 3)
struct usbhs_pkt_handle *handler;
@@ -80,6 +81,9 @@
void usbhs_pipe_remove(struct usbhs_priv *priv);
int usbhs_pipe_is_dir_in(struct usbhs_pipe *pipe);
int usbhs_pipe_is_dir_host(struct usbhs_pipe *pipe);
+int usbhs_pipe_is_running(struct usbhs_pipe *pipe);
+void usbhs_pipe_running(struct usbhs_pipe *pipe, int running);
+
void usbhs_pipe_init(struct usbhs_priv *priv,
int (*dma_map_ctrl)(struct usbhs_pkt *pkt, int map));
int usbhs_pipe_get_maxpacket(struct usbhs_pipe *pipe);
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index 824ea5e..dc72b92 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -728,6 +728,7 @@
{ USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
{ USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
+ { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) },
{ USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S03_PID) },
{ USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_59_PID) },
{ USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57A_PID) },
@@ -939,6 +940,8 @@
{ USB_DEVICE(FTDI_VID, FTDI_EKEY_CONV_USB_PID) },
/* Infineon Devices */
{ USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) },
+ /* GE Healthcare devices */
+ { USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) },
{ } /* Terminating entry */
};
diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
index 70b0b1d..5937b2d 100644
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -837,6 +837,12 @@
#define TELLDUS_TELLSTICK_PID 0x0C30 /* RF control dongle 433 MHz using FT232RL */
/*
+ * NOVITUS printers
+ */
+#define NOVITUS_VID 0x1a28
+#define NOVITUS_BONO_E_PID 0x6010
+
+/*
* RT Systems programming cables for various ham radios
*/
#define RTSYSTEMS_VID 0x2100 /* Vendor ID */
@@ -1385,3 +1391,9 @@
* ekey biometric systems GmbH (http://ekey.net/)
*/
#define FTDI_EKEY_CONV_USB_PID 0xCB08 /* Converter USB */
+
+/*
+ * GE Healthcare devices
+ */
+#define GE_HEALTHCARE_VID 0x1901
+#define GE_HEALTHCARE_NEMO_TRACKER_PID 0x0015
diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
index 6f7f01e..46179a0 100644
--- a/drivers/usb/serial/sierra.c
+++ b/drivers/usb/serial/sierra.c
@@ -282,14 +282,19 @@
/* Sierra Wireless HSPA Non-Composite Device */
{ USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x6892, 0xFF, 0xFF, 0xFF)},
{ USB_DEVICE(0x1199, 0x6893) }, /* Sierra Wireless Device */
- { USB_DEVICE(0x1199, 0x68A3), /* Sierra Wireless Direct IP modems */
+ /* Sierra Wireless Direct IP modems */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68A3, 0xFF, 0xFF, 0xFF),
+ .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
+ },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68AA, 0xFF, 0xFF, 0xFF),
.driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
},
/* AT&T Direct IP LTE modems */
{ USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68AA, 0xFF, 0xFF, 0xFF),
.driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
},
- { USB_DEVICE(0x0f3d, 0x68A3), /* Airprime/Sierra Wireless Direct IP modems */
+ /* Airprime/Sierra Wireless Direct IP modems */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68A3, 0xFF, 0xFF, 0xFF),
.driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
},
diff --git a/drivers/usb/serial/zte_ev.c b/drivers/usb/serial/zte_ev.c
index 1a132e9..c9bb107 100644
--- a/drivers/usb/serial/zte_ev.c
+++ b/drivers/usb/serial/zte_ev.c
@@ -272,6 +272,14 @@
}
static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x19d2, 0xffec) },
+ { USB_DEVICE(0x19d2, 0xffee) },
+ { USB_DEVICE(0x19d2, 0xfff6) },
+ { USB_DEVICE(0x19d2, 0xfff7) },
+ { USB_DEVICE(0x19d2, 0xfff8) },
+ { USB_DEVICE(0x19d2, 0xfff9) },
+ { USB_DEVICE(0x19d2, 0xfffb) },
+ { USB_DEVICE(0x19d2, 0xfffc) },
/* MG880 */
{ USB_DEVICE(0x19d2, 0xfffd) },
{ },
diff --git a/drivers/usb/storage/uas-detect.h b/drivers/usb/storage/uas-detect.h
index 503ac5c..8a6f371 100644
--- a/drivers/usb/storage/uas-detect.h
+++ b/drivers/usb/storage/uas-detect.h
@@ -59,10 +59,6 @@
unsigned long flags = id->driver_info;
int r, alt;
- usb_stor_adjust_quirks(udev, &flags);
-
- if (flags & US_FL_IGNORE_UAS)
- return 0;
alt = uas_find_uas_alt_setting(intf);
if (alt < 0)
@@ -72,6 +68,29 @@
if (r < 0)
return 0;
+ /*
+ * ASM1051 and older ASM1053 devices share the same usb-id, and UAS is
+ * broken on the ASM1051, so use the number of streams to differentiate.
+ * Newer ASM1053s also support 32 streams, but have a different prod-id.
+ */
+ if (le16_to_cpu(udev->descriptor.idVendor) == 0x174c &&
+ le16_to_cpu(udev->descriptor.idProduct) == 0x55aa) {
+ if (udev->speed < USB_SPEED_SUPER) {
+ /* No streams info, assume ASM1051 */
+ flags |= US_FL_IGNORE_UAS;
+ } else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) {
+ flags |= US_FL_IGNORE_UAS;
+ }
+ }
+
+ usb_stor_adjust_quirks(udev, &flags);
+
+ if (flags & US_FL_IGNORE_UAS) {
+ dev_warn(&udev->dev,
+ "UAS is blacklisted for this device, using usb-storage instead\n");
+ return 0;
+ }
+
if (udev->bus->sg_tablesize == 0) {
dev_warn(&udev->dev,
"The driver for the USB controller %s does not support scatter-gather which is\n",
diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
index 7ef99b2f..4a5c68a 100644
--- a/drivers/usb/storage/unusual_devs.h
+++ b/drivers/usb/storage/unusual_devs.h
@@ -101,6 +101,12 @@
"PhotoSmart R707",
USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_FIX_CAPACITY),
+UNUSUAL_DEV( 0x03f3, 0x0001, 0x0000, 0x9999,
+ "Adaptec",
+ "USBConnect 2000",
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ US_FL_SCM_MULT_TARG ),
+
/* Reported by Sebastian Kapfer <sebastian_kapfer@gmx.net>
* and Olaf Hering <olh@suse.de> (different bcd's, same vendor/product)
* for USB floppies that need the SINGLE_LUN enforcement.
@@ -741,6 +747,12 @@
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_SINGLE_LUN ),
+UNUSUAL_DEV( 0x059b, 0x0040, 0x0100, 0x0100,
+ "Iomega",
+ "Jaz USB Adapter",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_SINGLE_LUN ),
+
/* Reported by <Hendryk.Pfeiffer@gmx.de> */
UNUSUAL_DEV( 0x059f, 0x0643, 0x0000, 0x0000,
"LaCie",
@@ -1119,6 +1131,18 @@
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_NOT_LOCKABLE),
+UNUSUAL_DEV( 0x085a, 0x0026, 0x0100, 0x0133,
+ "Xircom",
+ "PortGear USB-SCSI (Mac USB Dock)",
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ US_FL_SCM_MULT_TARG ),
+
+UNUSUAL_DEV( 0x085a, 0x0028, 0x0100, 0x0133,
+ "Xircom",
+ "PortGear USB to SCSI Converter",
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ US_FL_SCM_MULT_TARG ),
+
/* Submitted by Jan De Luyck <lkml@kcore.org> */
UNUSUAL_DEV( 0x08bd, 0x1100, 0x0000, 0x0000,
"CITIZEN",
@@ -1958,6 +1982,14 @@
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_IGNORE_RESIDUE | US_FL_SANE_SENSE ),
+/* Entrega Technologies U1-SC25 (later Xircom PortGear PGSCSI)
+ * and Mac USB Dock USB-SCSI */
+UNUSUAL_DEV( 0x1645, 0x0007, 0x0100, 0x0133,
+ "Entrega Technologies",
+ "USB to SCSI Converter",
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ US_FL_SCM_MULT_TARG ),
+
/* Reported by Robert Schedel <r.schedel@yahoo.de>
* Note: this is a 'super top' device like the above 14cd/6600 device */
UNUSUAL_DEV( 0x1652, 0x6600, 0x0201, 0x0201,
@@ -1980,6 +2012,12 @@
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_BULK_IGNORE_TAG | US_FL_MAX_SECTORS_64 ),
+UNUSUAL_DEV( 0x1822, 0x0001, 0x0000, 0x9999,
+ "Ariston Technologies",
+ "iConnect USB to SCSI adapter",
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ US_FL_SCM_MULT_TARG ),
+
/* Reported by Hans de Goede <hdegoede@redhat.com>
* These Appotech controllers are found in Picture Frames, they provide a
* (buggy) emulation of a cdrom drive which contains the windows software
diff --git a/drivers/uwb/lc-dev.c b/drivers/uwb/lc-dev.c
index 80079b8..d0303f0 100644
--- a/drivers/uwb/lc-dev.c
+++ b/drivers/uwb/lc-dev.c
@@ -431,16 +431,19 @@
uwb_dev->mac_addr = *bce->mac_addr;
uwb_dev->dev_addr = bce->dev_addr;
dev_set_name(&uwb_dev->dev, "%s", macbuf);
+
+ /* plug the beacon cache */
+ bce->uwb_dev = uwb_dev;
+ uwb_dev->bce = bce;
+ uwb_bce_get(bce); /* released in uwb_dev_sys_release() */
+
result = uwb_dev_add(uwb_dev, &rc->uwb_dev.dev, rc);
if (result < 0) {
dev_err(dev, "new device %s: cannot instantiate device\n",
macbuf);
goto error_dev_add;
}
- /* plug the beacon cache */
- bce->uwb_dev = uwb_dev;
- uwb_dev->bce = bce;
- uwb_bce_get(bce); /* released in uwb_dev_sys_release() */
+
dev_info(dev, "uwb device (mac %s dev %s) connected to %s %s\n",
macbuf, devbuf, rc->uwb_dev.dev.parent->bus->name,
dev_name(rc->uwb_dev.dev.parent));
@@ -448,6 +451,8 @@
return;
error_dev_add:
+ bce->uwb_dev = NULL;
+ uwb_bce_put(bce);
kfree(uwb_dev);
return;
}
diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
index a7b6217..6ad23bd 100644
--- a/drivers/video/fbdev/amba-clcd.c
+++ b/drivers/video/fbdev/amba-clcd.c
@@ -639,9 +639,7 @@
if (g0 != panels[i].g0)
continue;
if (r0 == panels[i].r0 && b0 == panels[i].b0)
- fb->panel->caps = panels[i].caps & CLCD_CAP_RGB;
- if (r0 == panels[i].b0 && b0 == panels[i].r0)
- fb->panel->caps = panels[i].caps & CLCD_CAP_BGR;
+ fb->panel->caps = panels[i].caps;
}
return fb->panel->caps ? 0 : -EINVAL;
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 4d08f45a..3b1f89b 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -99,36 +99,10 @@
#define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
-static inline struct scatterlist *sg_next_chained(struct scatterlist *sg,
- unsigned int *count)
-{
- return sg_next(sg);
-}
-
-static inline struct scatterlist *sg_next_arr(struct scatterlist *sg,
- unsigned int *count)
-{
- if (--(*count) == 0)
- return NULL;
- return sg + 1;
-}
-
-/* Set up an indirect table of descriptors and add it to the queue. */
-static inline int vring_add_indirect(struct vring_virtqueue *vq,
- struct scatterlist *sgs[],
- struct scatterlist *(*next)
- (struct scatterlist *, unsigned int *),
- unsigned int total_sg,
- unsigned int total_out,
- unsigned int total_in,
- unsigned int out_sgs,
- unsigned int in_sgs,
- gfp_t gfp)
+static struct vring_desc *alloc_indirect(unsigned int total_sg, gfp_t gfp)
{
struct vring_desc *desc;
- unsigned head;
- struct scatterlist *sg;
- int i, n;
+ unsigned int i;
/*
* We require lowmem mappings for the descriptors because
@@ -139,57 +113,16 @@
desc = kmalloc(total_sg * sizeof(struct vring_desc), gfp);
if (!desc)
- return -ENOMEM;
+ return NULL;
- /* Transfer entries from the sg lists into the indirect page */
- i = 0;
- for (n = 0; n < out_sgs; n++) {
- for (sg = sgs[n]; sg; sg = next(sg, &total_out)) {
- desc[i].flags = VRING_DESC_F_NEXT;
- desc[i].addr = sg_phys(sg);
- desc[i].len = sg->length;
- desc[i].next = i+1;
- i++;
- }
- }
- for (; n < (out_sgs + in_sgs); n++) {
- for (sg = sgs[n]; sg; sg = next(sg, &total_in)) {
- desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE;
- desc[i].addr = sg_phys(sg);
- desc[i].len = sg->length;
- desc[i].next = i+1;
- i++;
- }
- }
- BUG_ON(i != total_sg);
-
- /* Last one doesn't continue. */
- desc[i-1].flags &= ~VRING_DESC_F_NEXT;
- desc[i-1].next = 0;
-
- /* We're about to use a buffer */
- vq->vq.num_free--;
-
- /* Use a single buffer which doesn't continue */
- head = vq->free_head;
- vq->vring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vring.desc[head].addr = virt_to_phys(desc);
- /* kmemleak gives a false positive, as it's hidden by virt_to_phys */
- kmemleak_ignore(desc);
- vq->vring.desc[head].len = i * sizeof(struct vring_desc);
-
- /* Update free pointer */
- vq->free_head = vq->vring.desc[head].next;
-
- return head;
+ for (i = 0; i < total_sg; i++)
+ desc[i].next = i+1;
+ return desc;
}
static inline int virtqueue_add(struct virtqueue *_vq,
struct scatterlist *sgs[],
- struct scatterlist *(*next)
- (struct scatterlist *, unsigned int *),
- unsigned int total_out,
- unsigned int total_in,
+ unsigned int total_sg,
unsigned int out_sgs,
unsigned int in_sgs,
void *data,
@@ -197,8 +130,10 @@
{
struct vring_virtqueue *vq = to_vvq(_vq);
struct scatterlist *sg;
- unsigned int i, n, avail, uninitialized_var(prev), total_sg;
+ struct vring_desc *desc;
+ unsigned int i, n, avail, descs_used, uninitialized_var(prev);
int head;
+ bool indirect;
START_USE(vq);
@@ -222,24 +157,40 @@
}
#endif
- total_sg = total_in + total_out;
-
- /* If the host supports indirect descriptor tables, and we have multiple
- * buffers, then go indirect. FIXME: tune this threshold */
- if (vq->indirect && total_sg > 1 && vq->vq.num_free) {
- head = vring_add_indirect(vq, sgs, next, total_sg, total_out,
- total_in,
- out_sgs, in_sgs, gfp);
- if (likely(head >= 0))
- goto add_head;
- }
-
BUG_ON(total_sg > vq->vring.num);
BUG_ON(total_sg == 0);
- if (vq->vq.num_free < total_sg) {
+ head = vq->free_head;
+
+ /* If the host supports indirect descriptor tables, and we have multiple
+ * buffers, then go indirect. FIXME: tune this threshold */
+ if (vq->indirect && total_sg > 1 && vq->vq.num_free)
+ desc = alloc_indirect(total_sg, gfp);
+ else
+ desc = NULL;
+
+ if (desc) {
+ /* Use a single buffer which doesn't continue */
+ vq->vring.desc[head].flags = VRING_DESC_F_INDIRECT;
+ vq->vring.desc[head].addr = virt_to_phys(desc);
+ /* avoid kmemleak false positive (hidden by virt_to_phys) */
+ kmemleak_ignore(desc);
+ vq->vring.desc[head].len = total_sg * sizeof(struct vring_desc);
+
+ /* Set up rest to use this indirect table. */
+ i = 0;
+ descs_used = 1;
+ indirect = true;
+ } else {
+ desc = vq->vring.desc;
+ i = head;
+ descs_used = total_sg;
+ indirect = false;
+ }
+
+ if (vq->vq.num_free < descs_used) {
pr_debug("Can't add buf len %i - avail = %i\n",
- total_sg, vq->vq.num_free);
+ descs_used, vq->vq.num_free);
/* FIXME: for historical reasons, we force a notify here if
* there are outgoing parts to the buffer. Presumably the
* host should service the ring ASAP. */
@@ -250,34 +201,35 @@
}
/* We're about to use some buffers from the free list. */
- vq->vq.num_free -= total_sg;
+ vq->vq.num_free -= descs_used;
- head = i = vq->free_head;
for (n = 0; n < out_sgs; n++) {
- for (sg = sgs[n]; sg; sg = next(sg, &total_out)) {
- vq->vring.desc[i].flags = VRING_DESC_F_NEXT;
- vq->vring.desc[i].addr = sg_phys(sg);
- vq->vring.desc[i].len = sg->length;
+ for (sg = sgs[n]; sg; sg = sg_next(sg)) {
+ desc[i].flags = VRING_DESC_F_NEXT;
+ desc[i].addr = sg_phys(sg);
+ desc[i].len = sg->length;
prev = i;
- i = vq->vring.desc[i].next;
+ i = desc[i].next;
}
}
for (; n < (out_sgs + in_sgs); n++) {
- for (sg = sgs[n]; sg; sg = next(sg, &total_in)) {
- vq->vring.desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE;
- vq->vring.desc[i].addr = sg_phys(sg);
- vq->vring.desc[i].len = sg->length;
+ for (sg = sgs[n]; sg; sg = sg_next(sg)) {
+ desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE;
+ desc[i].addr = sg_phys(sg);
+ desc[i].len = sg->length;
prev = i;
- i = vq->vring.desc[i].next;
+ i = desc[i].next;
}
}
/* Last one doesn't continue. */
- vq->vring.desc[prev].flags &= ~VRING_DESC_F_NEXT;
+ desc[prev].flags &= ~VRING_DESC_F_NEXT;
/* Update free pointer */
- vq->free_head = i;
+ if (indirect)
+ vq->free_head = vq->vring.desc[head].next;
+ else
+ vq->free_head = i;
-add_head:
/* Set token. */
vq->data[head] = data;
@@ -324,29 +276,23 @@
void *data,
gfp_t gfp)
{
- unsigned int i, total_out, total_in;
+ unsigned int i, total_sg = 0;
/* Count them first. */
- for (i = total_out = total_in = 0; i < out_sgs; i++) {
+ for (i = 0; i < out_sgs + in_sgs; i++) {
struct scatterlist *sg;
for (sg = sgs[i]; sg; sg = sg_next(sg))
- total_out++;
+ total_sg++;
}
- for (; i < out_sgs + in_sgs; i++) {
- struct scatterlist *sg;
- for (sg = sgs[i]; sg; sg = sg_next(sg))
- total_in++;
- }
- return virtqueue_add(_vq, sgs, sg_next_chained,
- total_out, total_in, out_sgs, in_sgs, data, gfp);
+ return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs, data, gfp);
}
EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
/**
* virtqueue_add_outbuf - expose output buffers to other end
* @vq: the struct virtqueue we're talking about.
- * @sgs: array of scatterlists (need not be terminated!)
- * @num: the number of scatterlists readable by other side
+ * @sg: scatterlist (must be well-formed and terminated!)
+ * @num: the number of entries in @sg readable by other side
* @data: the token identifying the buffer.
* @gfp: how to do memory allocations (if necessary).
*
@@ -356,19 +302,19 @@
* Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
*/
int virtqueue_add_outbuf(struct virtqueue *vq,
- struct scatterlist sg[], unsigned int num,
+ struct scatterlist *sg, unsigned int num,
void *data,
gfp_t gfp)
{
- return virtqueue_add(vq, &sg, sg_next_arr, num, 0, 1, 0, data, gfp);
+ return virtqueue_add(vq, &sg, num, 1, 0, data, gfp);
}
EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
/**
* virtqueue_add_inbuf - expose input buffers to other end
* @vq: the struct virtqueue we're talking about.
- * @sgs: array of scatterlists (need not be terminated!)
- * @num: the number of scatterlists writable by other side
+ * @sg: scatterlist (must be well-formed and terminated!)
+ * @num: the number of entries in @sg writable by other side
* @data: the token identifying the buffer.
* @gfp: how to do memory allocations (if necessary).
*
@@ -378,11 +324,11 @@
* Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
*/
int virtqueue_add_inbuf(struct virtqueue *vq,
- struct scatterlist sg[], unsigned int num,
+ struct scatterlist *sg, unsigned int num,
void *data,
gfp_t gfp)
{
- return virtqueue_add(vq, &sg, sg_next_arr, 0, num, 0, 1, data, gfp);
+ return virtqueue_add(vq, &sg, num, 0, 1, data, gfp);
}
EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
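With the rework, the exported entry points take one well-formed, terminated scatterlist plus an explicit entry count, so a typical producer now reduces to this minimal sketch (error handling elided; assumes the driver already owns a virtqueue):

	struct scatterlist sg;

	sg_init_one(&sg, buf, len);	/* one terminated sg entry */
	if (virtqueue_add_outbuf(vq, &sg, 1, buf, GFP_ATOMIC) == 0)
		virtqueue_kick(vq);	/* notify the other side */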
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 5c660c7..1e0a317 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -230,8 +230,8 @@
rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT);
if (rc) {
- pr_info("%s: add_memory() failed: %i\n", __func__, rc);
- return BP_EAGAIN;
+ pr_warn("Cannot add additional memory (%i)\n", rc);
+ return BP_ECANCELED;
}
balloon_hotplug -= credit;
diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
index 787d179..e53fe19 100644
--- a/drivers/xen/gntalloc.c
+++ b/drivers/xen/gntalloc.c
@@ -124,7 +124,7 @@
int i, rc, readonly;
LIST_HEAD(queue_gref);
LIST_HEAD(queue_file);
- struct gntalloc_gref *gref;
+ struct gntalloc_gref *gref, *next;
readonly = !(op->flags & GNTALLOC_FLAG_WRITABLE);
rc = -ENOMEM;
@@ -141,13 +141,11 @@
goto undo;
/* Grant foreign access to the page. */
- gref->gref_id = gnttab_grant_foreign_access(op->domid,
+ rc = gnttab_grant_foreign_access(op->domid,
pfn_to_mfn(page_to_pfn(gref->page)), readonly);
- if ((int)gref->gref_id < 0) {
- rc = gref->gref_id;
+ if (rc < 0)
goto undo;
- }
- gref_ids[i] = gref->gref_id;
+ gref_ids[i] = gref->gref_id = rc;
}
/* Add to gref lists. */
@@ -162,8 +160,8 @@
mutex_lock(&gref_mutex);
gref_size -= (op->count - i);
- list_for_each_entry(gref, &queue_file, next_file) {
- /* __del_gref does not remove from queue_file */
+ list_for_each_entry_safe(gref, next, &queue_file, next_file) {
+ list_del(&gref->next_file);
__del_gref(gref);
}
@@ -193,7 +191,7 @@
gref->notify.flags = 0;
- if (gref->gref_id > 0) {
+ if (gref->gref_id) {
if (gnttab_query_foreign_access(gref->gref_id))
return;
diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index 5f1e1f3..f8bb36f 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -103,16 +103,11 @@
shutting_down = SHUTDOWN_SUSPEND;
-#ifdef CONFIG_PREEMPT
- /* If the kernel is preemptible, we need to freeze all the processes
- to prevent them from being in the middle of a pagetable update
- during suspend. */
err = freeze_processes();
if (err) {
pr_err("%s: freeze failed %d\n", __func__, err);
goto out;
}
-#endif
err = dpm_suspend_start(PMSG_FREEZE);
if (err) {
@@ -157,10 +152,8 @@
dpm_resume_end(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
out_thaw:
-#ifdef CONFIG_PREEMPT
thaw_processes();
out:
-#endif
shutting_down = SHUTDOWN_INVALID;
}
#endif /* CONFIG_HIBERNATE_CALLBACKS */
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 43527fd..56b8522 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -234,8 +234,17 @@
BTRFS_I(inode)->last_sub_trans <=
BTRFS_I(inode)->last_log_commit &&
BTRFS_I(inode)->last_sub_trans <=
- BTRFS_I(inode)->root->last_log_commit)
- return 1;
+ BTRFS_I(inode)->root->last_log_commit) {
+ /*
+ * After a ranged fsync we might have left some extent maps
+ * (that fall outside the fsync's range). So return 0 here
+ * if the list isn't empty, to make sure btrfs_log_inode()
+ * is called and processes those extent maps.
+ */
+ smp_mb();
+ if (list_empty(&BTRFS_I(inode)->extent_tree.modified_extents))
+ return 1;
+ }
return 0;
}
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 36861b7..ff1cc03 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1966,7 +1966,7 @@
btrfs_init_log_ctx(&ctx);
- ret = btrfs_log_dentry_safe(trans, root, dentry, &ctx);
+ ret = btrfs_log_dentry_safe(trans, root, dentry, start, end, &ctx);
if (ret < 0) {
/* Fallthrough and commit/free transaction. */
ret = 1;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9c194bd..016c403 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -778,8 +778,12 @@
ins.offset,
BTRFS_ORDERED_COMPRESSED,
async_extent->compress_type);
- if (ret)
+ if (ret) {
+ btrfs_drop_extent_cache(inode, async_extent->start,
+ async_extent->start +
+ async_extent->ram_size - 1, 0);
goto out_free_reserve;
+ }
/*
* clear dirty, set writeback and unlock the pages.
@@ -971,14 +975,14 @@
ret = btrfs_add_ordered_extent(inode, start, ins.objectid,
ram_size, cur_alloc_size, 0);
if (ret)
- goto out_reserve;
+ goto out_drop_extent_cache;
if (root->root_key.objectid ==
BTRFS_DATA_RELOC_TREE_OBJECTID) {
ret = btrfs_reloc_clone_csums(inode, start,
cur_alloc_size);
if (ret)
- goto out_reserve;
+ goto out_drop_extent_cache;
}
if (disk_num_bytes < cur_alloc_size)
@@ -1006,6 +1010,8 @@
out:
return ret;
+out_drop_extent_cache:
+ btrfs_drop_extent_cache(inode, start, start + ram_size - 1, 0);
out_reserve:
btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
out_unlock:
@@ -4242,7 +4248,8 @@
btrfs_abort_transaction(trans, root, ret);
}
error:
- if (last_size != (u64)-1)
+ if (last_size != (u64)-1 &&
+ root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
btrfs_ordered_update_i_size(inode, last_size, NULL);
btrfs_free_path(path);
return err;
@@ -5627,6 +5634,17 @@
return ret;
}
+static int btrfs_insert_inode_locked(struct inode *inode)
+{
+ struct btrfs_iget_args args;
+ args.location = &BTRFS_I(inode)->location;
+ args.root = BTRFS_I(inode)->root;
+
+ return insert_inode_locked4(inode,
+ btrfs_inode_hash(inode->i_ino, BTRFS_I(inode)->root),
+ btrfs_find_actor, &args);
+}
+
static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct inode *dir,
@@ -5719,10 +5737,19 @@
sizes[1] = name_len + sizeof(*ref);
}
+ location = &BTRFS_I(inode)->location;
+ location->objectid = objectid;
+ location->offset = 0;
+ btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
+
+ ret = btrfs_insert_inode_locked(inode);
+ if (ret < 0)
+ goto fail;
+
path->leave_spinning = 1;
ret = btrfs_insert_empty_items(trans, root, path, key, sizes, nitems);
if (ret != 0)
- goto fail;
+ goto fail_unlock;
inode_init_owner(inode, dir, mode);
inode_set_bytes(inode, 0);
@@ -5745,11 +5772,6 @@
btrfs_mark_buffer_dirty(path->nodes[0]);
btrfs_free_path(path);
- location = &BTRFS_I(inode)->location;
- location->objectid = objectid;
- location->offset = 0;
- btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
-
btrfs_inherit_iflags(inode, dir);
if (S_ISREG(mode)) {
@@ -5760,7 +5782,6 @@
BTRFS_INODE_NODATASUM;
}
- btrfs_insert_inode_hash(inode);
inode_tree_add(inode);
trace_btrfs_inode_new(inode);
@@ -5775,6 +5796,9 @@
btrfs_ino(inode), root->root_key.objectid, ret);
return inode;
+
+fail_unlock:
+ unlock_new_inode(inode);
fail:
if (dir && name)
BTRFS_I(dir)->index_cnt--;
@@ -5909,28 +5933,28 @@
goto out_unlock;
}
- err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
- if (err) {
- drop_inode = 1;
- goto out_unlock;
- }
-
/*
* If the active LSM wants to access the inode during
* d_instantiate it needs these. Smack checks to see
* if the filesystem supports xattrs by looking at the
* ops vector.
*/
-
inode->i_op = &btrfs_special_inode_operations;
- err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
+ init_special_inode(inode, inode->i_mode, rdev);
+
+ err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
if (err)
- drop_inode = 1;
- else {
- init_special_inode(inode, inode->i_mode, rdev);
+ goto out_unlock_inode;
+
+ err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
+ if (err) {
+ goto out_unlock_inode;
+ } else {
btrfs_update_inode(trans, root, inode);
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
}
+
out_unlock:
btrfs_end_transaction(trans, root);
btrfs_balance_delayed_items(root);
@@ -5940,6 +5964,12 @@
iput(inode);
}
return err;
+
+out_unlock_inode:
+ drop_inode = 1;
+ unlock_new_inode(inode);
+ goto out_unlock;
+
}
static int btrfs_create(struct inode *dir, struct dentry *dentry,
@@ -5974,15 +6004,6 @@
goto out_unlock;
}
drop_inode_on_err = 1;
-
- err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
- if (err)
- goto out_unlock;
-
- err = btrfs_update_inode(trans, root, inode);
- if (err)
- goto out_unlock;
-
/*
* If the active LSM wants to access the inode during
* d_instantiate it needs these. Smack checks to see
@@ -5991,14 +6012,23 @@
*/
inode->i_fop = &btrfs_file_operations;
inode->i_op = &btrfs_file_inode_operations;
+ inode->i_mapping->a_ops = &btrfs_aops;
+ inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
+
+ err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
+ if (err)
+ goto out_unlock_inode;
+
+ err = btrfs_update_inode(trans, root, inode);
+ if (err)
+ goto out_unlock_inode;
err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
if (err)
- goto out_unlock;
+ goto out_unlock_inode;
- inode->i_mapping->a_ops = &btrfs_aops;
- inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
out_unlock:
@@ -6010,6 +6040,11 @@
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
return err;
+
+out_unlock_inode:
+ unlock_new_inode(inode);
+ goto out_unlock;
+
}
static int btrfs_link(struct dentry *old_dentry, struct inode *dir,
@@ -6117,25 +6152,30 @@
}
drop_on_err = 1;
+ /* these must be set before we unlock the inode */
+ inode->i_op = &btrfs_dir_inode_operations;
+ inode->i_fop = &btrfs_dir_file_operations;
err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
if (err)
- goto out_fail;
-
- inode->i_op = &btrfs_dir_inode_operations;
- inode->i_fop = &btrfs_dir_file_operations;
+ goto out_fail_inode;
btrfs_i_size_write(inode, 0);
err = btrfs_update_inode(trans, root, inode);
if (err)
- goto out_fail;
+ goto out_fail_inode;
err = btrfs_add_link(trans, dir, inode, dentry->d_name.name,
dentry->d_name.len, 0, index);
if (err)
- goto out_fail;
+ goto out_fail_inode;
d_instantiate(dentry, inode);
+ /*
+ * mkdir is special. We're unlocking after we call d_instantiate
+ * to avoid a race with nfsd calling d_instantiate.
+ */
+ unlock_new_inode(inode);
drop_on_err = 0;
out_fail:
@@ -6145,6 +6185,10 @@
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
return err;
+
+out_fail_inode:
+ unlock_new_inode(inode);
+ goto out_fail;
}
/* helper for btfs_get_extent. Given an existing extent in the tree,
@@ -8100,6 +8144,7 @@
set_nlink(inode, 1);
btrfs_i_size_write(inode, 0);
+ unlock_new_inode(inode);
err = btrfs_subvol_inherit_props(trans, new_root, parent_root);
if (err)
@@ -8760,12 +8805,6 @@
goto out_unlock;
}
- err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
- if (err) {
- drop_inode = 1;
- goto out_unlock;
- }
-
/*
* If the active LSM wants to access the inode during
* d_instantiate it needs these. Smack checks to see
@@ -8774,23 +8813,22 @@
*/
inode->i_fop = &btrfs_file_operations;
inode->i_op = &btrfs_file_inode_operations;
+ inode->i_mapping->a_ops = &btrfs_aops;
+ inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
+ BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+
+ err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
+ if (err)
+ goto out_unlock_inode;
err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
if (err)
- drop_inode = 1;
- else {
- inode->i_mapping->a_ops = &btrfs_aops;
- inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
- BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
- }
- if (drop_inode)
- goto out_unlock;
+ goto out_unlock_inode;
path = btrfs_alloc_path();
if (!path) {
err = -ENOMEM;
- drop_inode = 1;
- goto out_unlock;
+ goto out_unlock_inode;
}
key.objectid = btrfs_ino(inode);
key.offset = 0;
@@ -8799,9 +8837,8 @@
err = btrfs_insert_empty_item(trans, root, path, &key,
datasize);
if (err) {
- drop_inode = 1;
btrfs_free_path(path);
- goto out_unlock;
+ goto out_unlock_inode;
}
leaf = path->nodes[0];
ei = btrfs_item_ptr(leaf, path->slots[0],
@@ -8825,12 +8862,15 @@
inode_set_bytes(inode, name_len);
btrfs_i_size_write(inode, name_len);
err = btrfs_update_inode(trans, root, inode);
- if (err)
+ if (err) {
drop_inode = 1;
+ goto out_unlock_inode;
+ }
+
+ unlock_new_inode(inode);
+ d_instantiate(dentry, inode);
out_unlock:
- if (!err)
- d_instantiate(dentry, inode);
btrfs_end_transaction(trans, root);
if (drop_inode) {
inode_dec_link_count(inode);
@@ -8838,6 +8878,11 @@
}
btrfs_btree_balance_dirty(root);
return err;
+
+out_unlock_inode:
+ drop_inode = 1;
+ unlock_new_inode(inode);
+ goto out_unlock;
}
static int __btrfs_prealloc_file_range(struct inode *inode, int mode,
@@ -9021,14 +9066,6 @@
goto out;
}
- ret = btrfs_init_inode_security(trans, inode, dir, NULL);
- if (ret)
- goto out;
-
- ret = btrfs_update_inode(trans, root, inode);
- if (ret)
- goto out;
-
inode->i_fop = &btrfs_file_operations;
inode->i_op = &btrfs_file_inode_operations;
@@ -9036,9 +9073,16 @@
inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+ ret = btrfs_init_inode_security(trans, inode, dir, NULL);
+ if (ret)
+ goto out_inode;
+
+ ret = btrfs_update_inode(trans, root, inode);
+ if (ret)
+ goto out_inode;
ret = btrfs_orphan_add(trans, inode);
if (ret)
- goto out;
+ goto out_inode;
/*
* We set number of links to 0 in btrfs_new_inode(), and here we set
@@ -9048,6 +9092,7 @@
* d_tmpfile() -> inode_dec_link_count() -> drop_nlink()
*/
set_nlink(inode, 1);
+ unlock_new_inode(inode);
d_tmpfile(dentry, inode);
mark_inode_dirty(inode);
@@ -9057,8 +9102,12 @@
iput(inode);
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
-
return ret;
+
+out_inode:
+ unlock_new_inode(inode);
+ goto out;
+
}
static const struct inode_operations btrfs_dir_inode_operations = {
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index fce6fd0e..8a8e298 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1019,8 +1019,10 @@
return false;
next = defrag_lookup_extent(inode, em->start + em->len);
- if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE ||
- (em->block_start + em->block_len == next->block_start))
+ if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+ ret = false;
+ else if ((em->block_start + em->block_len == next->block_start) &&
+ (em->block_len > 128 * 1024 && next->block_len > 128 * 1024))
ret = false;
free_extent_map(next);
@@ -1055,7 +1057,6 @@
}
next_mergeable = defrag_check_next_extent(inode, em);
-
/*
* we hit a real extent, if it is big or the next extent is not a
* real extent, don't bother defragging it
@@ -1702,7 +1703,7 @@
~(BTRFS_SUBVOL_CREATE_ASYNC | BTRFS_SUBVOL_RDONLY |
BTRFS_SUBVOL_QGROUP_INHERIT)) {
ret = -EOPNOTSUPP;
- goto out;
+ goto free_args;
}
if (vol_args->flags & BTRFS_SUBVOL_CREATE_ASYNC)
@@ -1712,27 +1713,31 @@
if (vol_args->flags & BTRFS_SUBVOL_QGROUP_INHERIT) {
if (vol_args->size > PAGE_CACHE_SIZE) {
ret = -EINVAL;
- goto out;
+ goto free_args;
}
inherit = memdup_user(vol_args->qgroup_inherit, vol_args->size);
if (IS_ERR(inherit)) {
ret = PTR_ERR(inherit);
- goto out;
+ goto free_args;
}
}
ret = btrfs_ioctl_snap_create_transid(file, vol_args->name,
vol_args->fd, subvol, ptr,
readonly, inherit);
+ if (ret)
+ goto free_inherit;
- if (ret == 0 && ptr &&
- copy_to_user(arg +
- offsetof(struct btrfs_ioctl_vol_args_v2,
- transid), ptr, sizeof(*ptr)))
+ if (ptr && copy_to_user(arg +
+ offsetof(struct btrfs_ioctl_vol_args_v2,
+ transid),
+ ptr, sizeof(*ptr)))
ret = -EFAULT;
-out:
- kfree(vol_args);
+
+free_inherit:
kfree(inherit);
+free_args:
+ kfree(vol_args);
return ret;
}
@@ -2652,7 +2657,7 @@
vol_args = memdup_user(arg, sizeof(*vol_args));
if (IS_ERR(vol_args)) {
ret = PTR_ERR(vol_args);
- goto out;
+ goto err_drop;
}
vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
@@ -2670,6 +2675,7 @@
out:
kfree(vol_args);
+err_drop:
mnt_drop_write_file(file);
return ret;
}
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index 7e0e6e3..1d1ba08 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -94,8 +94,10 @@
#define LOG_WALK_REPLAY_ALL 3
static int btrfs_log_inode(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct inode *inode,
- int inode_only);
+ struct btrfs_root *root, struct inode *inode,
+ int inode_only,
+ const loff_t start,
+ const loff_t end);
static int link_to_fixup_dir(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct btrfs_path *path, u64 objectid);
@@ -3858,8 +3860,10 @@
* This handles both files and directories.
*/
static int btrfs_log_inode(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct inode *inode,
- int inode_only)
+ struct btrfs_root *root, struct inode *inode,
+ int inode_only,
+ const loff_t start,
+ const loff_t end)
{
struct btrfs_path *path;
struct btrfs_path *dst_path;
@@ -3876,6 +3880,7 @@
int ins_nr;
bool fast_search = false;
u64 ino = btrfs_ino(inode);
+ struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
path = btrfs_alloc_path();
if (!path)
@@ -4049,13 +4054,35 @@
goto out_unlock;
}
} else if (inode_only == LOG_INODE_ALL) {
- struct extent_map_tree *tree = &BTRFS_I(inode)->extent_tree;
struct extent_map *em, *n;
- write_lock(&tree->lock);
- list_for_each_entry_safe(em, n, &tree->modified_extents, list)
- list_del_init(&em->list);
- write_unlock(&tree->lock);
+ write_lock(&em_tree->lock);
+ /*
+ * We can't just remove every em if we're called for a ranged
+ * fsync - that is, one that doesn't cover the whole possible
+ * file range (0 to LLONG_MAX). This is because we can have
+ * em's that fall outside the range we're logging and therefore
+ * their ordered operations haven't completed yet
+ * (btrfs_finish_ordered_io() not invoked yet). This means we
+ * didn't get their respective file extent item in the fs/subvol
+ * tree yet, and need to let the next fast fsync (one which
+ * consults the list of modified extent maps) find the em so
+ * that it logs a matching file extent item and waits for the
+ * respective ordered operation to complete (if it's still
+ * running).
+ *
+ * Removing every em outside the range we're logging would make
+ * the next fast fsync not log their matching file extent items,
+ * therefore making us lose data after a log replay.
+ */
+ list_for_each_entry_safe(em, n, &em_tree->modified_extents,
+ list) {
+ const u64 mod_end = em->mod_start + em->mod_len - 1;
+
+ if (em->mod_start >= start && mod_end <= end)
+ list_del_init(&em->list);
+ }
+ write_unlock(&em_tree->lock);
}
if (inode_only == LOG_INODE_ALL && S_ISDIR(inode->i_mode)) {
@@ -4065,6 +4092,7 @@
goto out_unlock;
}
}
+
BTRFS_I(inode)->logged_trans = trans->transid;
BTRFS_I(inode)->last_log_commit = BTRFS_I(inode)->last_sub_trans;
out_unlock:
@@ -4161,7 +4189,10 @@
*/
static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct inode *inode,
- struct dentry *parent, int exists_only,
+ struct dentry *parent,
+ const loff_t start,
+ const loff_t end,
+ int exists_only,
struct btrfs_log_ctx *ctx)
{
int inode_only = exists_only ? LOG_INODE_EXISTS : LOG_INODE_ALL;
@@ -4207,7 +4238,7 @@
if (ret)
goto end_no_trans;
- ret = btrfs_log_inode(trans, root, inode, inode_only);
+ ret = btrfs_log_inode(trans, root, inode, inode_only, start, end);
if (ret)
goto end_trans;
@@ -4235,7 +4266,8 @@
if (BTRFS_I(inode)->generation >
root->fs_info->last_trans_committed) {
- ret = btrfs_log_inode(trans, root, inode, inode_only);
+ ret = btrfs_log_inode(trans, root, inode, inode_only,
+ 0, LLONG_MAX);
if (ret)
goto end_trans;
}
@@ -4269,13 +4301,15 @@
*/
int btrfs_log_dentry_safe(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct dentry *dentry,
+ const loff_t start,
+ const loff_t end,
struct btrfs_log_ctx *ctx)
{
struct dentry *parent = dget_parent(dentry);
int ret;
ret = btrfs_log_inode_parent(trans, root, dentry->d_inode, parent,
- 0, ctx);
+ start, end, 0, ctx);
dput(parent);
return ret;
@@ -4512,6 +4546,7 @@
root->fs_info->last_trans_committed))
return 0;
- return btrfs_log_inode_parent(trans, root, inode, parent, 1, NULL);
+ return btrfs_log_inode_parent(trans, root, inode, parent, 0,
+ LLONG_MAX, 1, NULL);
}
diff --git a/fs/btrfs/tree-log.h b/fs/btrfs/tree-log.h
index 7f5b41b..e2e798a 100644
--- a/fs/btrfs/tree-log.h
+++ b/fs/btrfs/tree-log.h
@@ -59,6 +59,8 @@
int btrfs_recover_log_trees(struct btrfs_root *tree_root);
int btrfs_log_dentry_safe(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct dentry *dentry,
+ const loff_t start,
+ const loff_t end,
struct btrfs_log_ctx *ctx);
int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 340a92d..2c2d6d1 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -529,12 +529,12 @@
*/
/*
- * As of now don't allow update to btrfs_fs_device through
- * the btrfs dev scan cli, after FS has been mounted.
+ * For now, we do allow update to btrfs_fs_device through the
+ * btrfs dev scan cli after FS has been mounted. We're still
+ * tracking a problem where systems fail mount by subvolume id
+ * when we reject replacement on a mounted FS.
*/
- if (fs_devices->opened) {
- return -EBUSY;
- } else {
+ if (!fs_devices->opened && found_transid < device->generation) {
/*
* That is if the FS is _not_ mounted and if you
* are here, that means there is more than one
@@ -542,8 +542,7 @@
* with larger generation number or the last-in if
* generation are equal.
*/
- if (found_transid < device->generation)
- return -EEXIST;
+ return -EEXIST;
}
name = rcu_string_strdup(path, GFP_NOFS);
diff --git a/fs/buffer.c b/fs/buffer.c
index 8f05111..3588a80 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1022,7 +1022,8 @@
bh = page_buffers(page);
if (bh->b_size == size) {
end_block = init_page_buffers(page, bdev,
- index << sizebits, size);
+ (sector_t)index << sizebits,
+ size);
goto done;
}
if (!try_to_free_buffers(page))
@@ -1043,7 +1044,8 @@
*/
spin_lock(&inode->i_mapping->private_lock);
link_dev_buffers(page, bh);
- end_block = init_page_buffers(page, bdev, index << sizebits, size);
+ end_block = init_page_buffers(page, bdev, (sector_t)index << sizebits,
+ size);
spin_unlock(&inode->i_mapping->private_lock);
done:
ret = (block < end_block) ? 1 : -ENXIO;
diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index 5bf2b41..83e9c94 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -779,7 +779,8 @@
!subdir->d_inode->i_op->lookup ||
!subdir->d_inode->i_op->mkdir ||
!subdir->d_inode->i_op->create ||
- !subdir->d_inode->i_op->rename ||
+ (!subdir->d_inode->i_op->rename &&
+ !subdir->d_inode->i_op->rename2) ||
!subdir->d_inode->i_op->rmdir ||
!subdir->d_inode->i_op->unlink)
goto check_error;
diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index 4b1fb5c..25e745b 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -151,7 +151,6 @@
struct cachefiles_one_read *monitor;
struct cachefiles_object *object;
struct fscache_retrieval *op;
- struct pagevec pagevec;
int error, max;
op = container_of(_op, struct fscache_retrieval, op);
@@ -160,8 +159,6 @@
_enter("{ino=%lu}", object->backer->d_inode->i_ino);
- pagevec_init(&pagevec, 0);
-
max = 8;
spin_lock_irq(&object->work_lock);
@@ -396,7 +393,6 @@
{
struct cachefiles_object *object;
struct cachefiles_cache *cache;
- struct pagevec pagevec;
struct inode *inode;
sector_t block0, block;
unsigned shift;
@@ -427,8 +423,6 @@
op->op.flags |= FSCACHE_OP_ASYNC;
op->op.processor = cachefiles_read_copier;
- pagevec_init(&pagevec, 0);
-
/* we assume the absence or presence of the first block is a good
* enough indication for the page as a whole
* - TODO: don't use bmap() for this as it is _not_ actually good
diff --git a/fs/cifs/Kconfig b/fs/cifs/Kconfig
index 603f18a..a2172f3 100644
--- a/fs/cifs/Kconfig
+++ b/fs/cifs/Kconfig
@@ -22,6 +22,11 @@
support for OS/2 and Windows ME and similar servers is provided as
well.
+ The module also provides optional support for the follow-on
+ protocols to CIFS, including SMB3, which enables
+ useful performance and security features (see the description
+ of CONFIG_CIFS_SMB2).
+
The cifs module provides an advanced network file system
client for mounting to CIFS compliant servers. It includes
support for DFS (hierarchical name space), secure per-user
@@ -121,7 +126,8 @@
depends on CIFS_XATTR && KEYS
help
Allows fetching CIFS/NTFS ACL from the server. The DACL blob
- is handed over to the application/caller.
+ is handed over to the application/caller. See the man
+ page for getcifsacl for more information.
config CIFS_DEBUG
bool "Enable CIFS debugging routines"
@@ -162,7 +168,7 @@
Allows NFS server to export a CIFS mounted share (nfsd over cifs)
config CIFS_SMB2
- bool "SMB2 network file system support"
+ bool "SMB2 and SMB3 network file system support"
depends on CIFS && INET
select NLS
select KEYS
@@ -170,16 +176,21 @@
select DNS_RESOLVER
help
- This enables experimental support for the SMB2 (Server Message Block
- version 2) protocol. The SMB2 protocol is the successor to the
- popular CIFS and SMB network file sharing protocols. SMB2 is the
- native file sharing mechanism for recent versions of Windows
- operating systems (since Vista). SMB2 enablement will eventually
- allow users better performance, security and features, than would be
- possible with cifs. Note that smb2 mount options also are simpler
- (compared to cifs) due to protocol improvements.
-
- Unless you are a developer or tester, say N.
+ This enables support for the Server Message Block version 2
+ family of protocols, including SMB3. SMB3 support is
+ enabled on mount by specifying "vers=3.0" in the mount
+ options. These protocols are the successors to the popular
+ CIFS and SMB network file sharing protocols. SMB3 is the
+ native file sharing mechanism for recent versions of
+ Windows (Windows 8, Windows Server 2012 and later), and
+ Samba and many other servers support SMB3 well.
+ In general SMB3 enables better performance, security
+ and features than would be possible with CIFS (although
+ when mounting to Samba, due to the CIFS POSIX extensions,
+ CIFS mounts can provide slightly better POSIX compatibility
+ than SMB3 mounts). SMB2/SMB3 mount options are also
+ slightly simpler (compared to CIFS) due to protocol
+ improvements.
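+
+ For example, SMB3 can be requested explicitly at mount
+ time (server, share and mount point here are placeholders):
+
+ mount -t cifs //server/share /mnt -o vers=3.0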
config CIFS_FSCACHE
bool "Provide CIFS client caching support"
diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
index b0fafa4..002e0c1 100644
--- a/fs/cifs/cifsfs.h
+++ b/fs/cifs/cifsfs.h
@@ -136,5 +136,5 @@
extern const struct export_operations cifs_export_ops;
#endif /* CONFIG_CIFS_NFSD_EXPORT */
-#define CIFS_VERSION "2.04"
+#define CIFS_VERSION "2.05"
#endif /* _CIFSFS_H */
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index dfc731b..25b8392 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -70,11 +70,6 @@
#define SERVER_NAME_LENGTH 40
#define SERVER_NAME_LEN_WITH_NULL (SERVER_NAME_LENGTH + 1)
-/* used to define string lengths for reversing unicode strings */
-/* (256+1)*2 = 514 */
-/* (max path length + 1 for null) * 2 for unicode */
-#define MAX_NAME 514
-
/* SMB echo "timeout" -- FIXME: tunable? */
#define SMB_ECHO_INTERVAL (60 * HZ)
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 03ed8a0..36ca204 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -1600,6 +1600,7 @@
tmp_end++;
if (!(tmp_end < end && tmp_end[1] == delim)) {
/* No it is not. Set the password to NULL */
+ kfree(vol->password);
vol->password = NULL;
break;
}
@@ -1637,6 +1638,7 @@
options = end;
}
+ kfree(vol->password);
/* Now build new password string */
temp_len = strlen(value);
vol->password = kzalloc(temp_len+1, GFP_KERNEL);
diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
index 3db0c5f..6cbd9c6 100644
--- a/fs/cifs/dir.c
+++ b/fs/cifs/dir.c
@@ -497,6 +497,14 @@
goto out;
}
+ if (file->f_flags & O_DIRECT &&
+ CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {
+ if (CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
+ file->f_op = &cifs_file_direct_nobrl_ops;
+ else
+ file->f_op = &cifs_file_direct_ops;
+ }
+
file_info = cifs_new_fileinfo(&fid, file, tlink, oplock);
if (file_info == NULL) {
if (server->ops->close)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index d5fec92..7c018a1 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -467,6 +467,14 @@
cifs_dbg(FYI, "inode = 0x%p file flags are 0x%x for %s\n",
inode, file->f_flags, full_path);
+ if (file->f_flags & O_DIRECT &&
+ cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
+ file->f_op = &cifs_file_direct_nobrl_ops;
+ else
+ file->f_op = &cifs_file_direct_ops;
+ }
+
if (server->oplocks)
oplock = REQ_OPLOCK;
else
diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
index 949ec90..7899a40 100644
--- a/fs/cifs/inode.c
+++ b/fs/cifs/inode.c
@@ -1720,7 +1720,10 @@
unlink_target:
/* Try unlinking the target dentry if it's not negative */
if (target_dentry->d_inode && (rc == -EACCES || rc == -EEXIST)) {
- tmprc = cifs_unlink(target_dir, target_dentry);
+ if (d_is_dir(target_dentry))
+ tmprc = cifs_rmdir(target_dir, target_dentry);
+ else
+ tmprc = cifs_unlink(target_dir, target_dentry);
if (tmprc)
goto cifs_rename_exit;
rc = cifs_do_rename(xid, source_dentry, from_name,
diff --git a/fs/cifs/link.c b/fs/cifs/link.c
index 68559fd..5657416 100644
--- a/fs/cifs/link.c
+++ b/fs/cifs/link.c
@@ -213,8 +213,12 @@
if (rc)
goto out;
- rc = tcon->ses->server->ops->create_mf_symlink(xid, tcon, cifs_sb,
- fromName, buf, &bytes_written);
+ if (tcon->ses->server->ops->create_mf_symlink)
+ rc = tcon->ses->server->ops->create_mf_symlink(xid, tcon,
+ cifs_sb, fromName, buf, &bytes_written);
+ else
+ rc = -EOPNOTSUPP;
+
if (rc)
goto out;
@@ -339,9 +343,11 @@
if (rc)
return rc;
- if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE))
+ if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) {
+ rc = -ENOENT;
/* it's not a symlink */
goto out;
+ }
io_parms.netfid = fid.netfid;
io_parms.pid = current->tgid;
diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c
index 6834b9c..b333ff6 100644
--- a/fs/cifs/netmisc.c
+++ b/fs/cifs/netmisc.c
@@ -925,11 +925,23 @@
/* BB what about the timezone? BB */
/* Subtract the NTFS time offset, then convert to 1s intervals. */
- u64 t;
+ s64 t = le64_to_cpu(ntutc) - NTFS_TIME_OFFSET;
- t = le64_to_cpu(ntutc) - NTFS_TIME_OFFSET;
- ts.tv_nsec = do_div(t, 10000000) * 100;
- ts.tv_sec = t;
+ /*
+ * Unfortunately we cannot use normal 64-bit division on a 32-bit arch;
+ * the alternative, do_div(), does not work with negative numbers, so we
+ * have to special-case them.
+ */
+ if (t < 0) {
+ t = -t;
+ ts.tv_nsec = (long)(do_div(t, 10000000) * 100);
+ ts.tv_nsec = -ts.tv_nsec;
+ ts.tv_sec = -t;
+ } else {
+ ts.tv_nsec = (long)do_div(t, 10000000) * 100;
+ ts.tv_sec = t;
+ }
+
return ts;
}
diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
index 798c80a..b334a89 100644
--- a/fs/cifs/readdir.c
+++ b/fs/cifs/readdir.c
@@ -596,8 +596,8 @@
if (server->ops->dir_needs_close(cfile)) {
cfile->invalidHandle = true;
spin_unlock(&cifs_file_list_lock);
- if (server->ops->close)
- server->ops->close(xid, tcon, &cfile->fid);
+ if (server->ops->close_dir)
+ server->ops->close_dir(xid, tcon, &cfile->fid);
} else
spin_unlock(&cifs_file_list_lock);
if (cfile->srch_inf.ntwrk_buf_start) {
diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
index 39ee326..57db63f 100644
--- a/fs/cifs/sess.c
+++ b/fs/cifs/sess.c
@@ -243,10 +243,11 @@
kfree(ses->serverOS);
ses->serverOS = kzalloc(len + 1, GFP_KERNEL);
- if (ses->serverOS)
+ if (ses->serverOS) {
strncpy(ses->serverOS, bcc_ptr, len);
- if (strncmp(ses->serverOS, "OS/2", 4) == 0)
- cifs_dbg(FYI, "OS/2 server\n");
+ if (strncmp(ses->serverOS, "OS/2", 4) == 0)
+ cifs_dbg(FYI, "OS/2 server\n");
+ }
bcc_ptr += len + 1;
bleft -= len + 1;
@@ -744,14 +745,6 @@
sess_free_buffer(sess_data);
}
-#else
-
-static void
-sess_auth_lanman(struct sess_data *sess_data)
-{
- sess_data->result = -EOPNOTSUPP;
- sess_data->func = NULL;
-}
#endif
static void
@@ -1102,15 +1095,6 @@
ses->auth_key.response = NULL;
}
-#else
-
-static void
-sess_auth_kerberos(struct sess_data *sess_data)
-{
- cifs_dbg(VFS, "Kerberos negotiated but upcall support disabled!\n");
- sess_data->result = -ENOSYS;
- sess_data->func = NULL;
-}
#endif /* ! CONFIG_CIFS_UPCALL */
/*
diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
index 3f17b45..4599294 100644
--- a/fs/cifs/smb2file.c
+++ b/fs/cifs/smb2file.c
@@ -50,7 +50,7 @@
goto out;
}
- smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
GFP_KERNEL);
if (smb2_data == NULL) {
rc = -ENOMEM;
diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
index 0150182..899bbc8 100644
--- a/fs/cifs/smb2inode.c
+++ b/fs/cifs/smb2inode.c
@@ -131,7 +131,7 @@
*adjust_tz = false;
*symlink = false;
- smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
GFP_KERNEL);
if (smb2_data == NULL)
return -ENOMEM;
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 5a48aa2..f522193 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -389,7 +389,7 @@
int rc;
struct smb2_file_all_info *smb2_data;
- smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
GFP_KERNEL);
if (smb2_data == NULL)
return -ENOMEM;
@@ -1035,7 +1035,7 @@
if (keep_size == false)
return -EOPNOTSUPP;
- /*
+ /*
* Must check if file sparse since fallocate -z (zero range) assumes
* non-sparse allocation
*/
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index fa0dd04..74b3a66 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -530,7 +530,7 @@
struct smb2_sess_setup_rsp *rsp = NULL;
struct kvec iov[2];
int rc = 0;
- int resp_buftype;
+ int resp_buftype = CIFS_NO_BUFFER;
__le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */
struct TCP_Server_Info *server = ses->server;
u16 blob_length = 0;
@@ -1403,8 +1403,7 @@
rsp = (struct smb2_close_rsp *)iov[0].iov_base;
if (rc != 0) {
- if (tcon)
- cifs_stats_fail_inc(tcon, SMB2_CLOSE_HE);
+ cifs_stats_fail_inc(tcon, SMB2_CLOSE_HE);
goto close_exit;
}
@@ -1533,7 +1532,7 @@
{
return query_info(xid, tcon, persistent_fid, volatile_fid,
FILE_ALL_INFORMATION,
- sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
sizeof(struct smb2_file_all_info), data);
}
diff --git a/fs/dcache.c b/fs/dcache.c
index d30ce69..7a5b514 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -106,8 +106,7 @@
unsigned int hash)
{
hash += (unsigned long) parent / L1_CACHE_BYTES;
- hash = hash + (hash >> d_hash_shift);
- return dentry_hashtable + (hash & d_hash_mask);
+ return dentry_hashtable + hash_32(hash, d_hash_shift);
}
/* Statistics gathering. */
@@ -2656,6 +2655,12 @@
dentry->d_parent = dentry;
list_del_init(&dentry->d_u.d_child);
anon->d_parent = dparent;
+ if (likely(!d_unhashed(anon))) {
+ hlist_bl_lock(&anon->d_sb->s_anon);
+ __hlist_bl_del(&anon->d_hash);
+ anon->d_hash.pprev = NULL;
+ hlist_bl_unlock(&anon->d_sb->s_anon);
+ }
list_move(&anon->d_u.d_child, &dparent->d_subdirs);
write_seqcount_end(&dentry->d_seq);
@@ -2714,7 +2719,6 @@
write_seqlock(&rename_lock);
__d_materialise_dentry(dentry, new);
write_sequnlock(&rename_lock);
- __d_drop(new);
_d_rehash(new);
spin_unlock(&new->d_lock);
spin_unlock(&inode->i_lock);
@@ -2778,7 +2782,6 @@
* could splice into our tree? */
__d_materialise_dentry(dentry, alias);
write_sequnlock(&rename_lock);
- __d_drop(alias);
goto found;
} else {
/* Nope, but we must(!) avoid directory
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index b10b48c..7bcfff9 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1852,7 +1852,8 @@
goto error_tgt_fput;
/* Check if EPOLLWAKEUP is allowed */
- ep_take_care_of_epollwakeup(&epds);
+ if (ep_op_has_event(op))
+ ep_take_care_of_epollwakeup(&epds);
/*
* We have to check that the file structure underneath the file descriptor
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 90a3cdc..603e4eb 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -3240,6 +3240,7 @@
&new.de, &new.inlined);
if (IS_ERR(new.bh)) {
retval = PTR_ERR(new.bh);
+ new.bh = NULL;
goto end_rename;
}
if (new.bh) {
@@ -3386,6 +3387,7 @@
&new.de, &new.inlined);
if (IS_ERR(new.bh)) {
retval = PTR_ERR(new.bh);
+ new.bh = NULL;
goto end_rename;
}
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index bb0e80f..1e43b90 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -575,6 +575,7 @@
bh = bclean(handle, sb, block);
if (IS_ERR(bh)) {
err = PTR_ERR(bh);
+ bh = NULL;
goto out;
}
overhead = ext4_group_overhead_blocks(sb, group);
@@ -603,6 +604,7 @@
bh = bclean(handle, sb, block);
if (IS_ERR(bh)) {
err = PTR_ERR(bh);
+ bh = NULL;
goto out;
}
diff --git a/fs/fscache/object.c b/fs/fscache/object.c
index d3b4539..da032da 100644
--- a/fs/fscache/object.c
+++ b/fs/fscache/object.c
@@ -982,6 +982,7 @@
submit_op_failed:
clear_bit(FSCACHE_OBJECT_IS_LIVE, &object->flags);
spin_unlock(&cookie->lock);
+ fscache_unuse_cookie(object);
kfree(op);
_leave(" [EIO]");
return transit_to(KILL_OBJECT);
diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index 85332b9..de33b3f 100644
--- a/fs/fscache/page.c
+++ b/fs/fscache/page.c
@@ -44,6 +44,19 @@
EXPORT_SYMBOL(__fscache_wait_on_page_write);
/*
+ * wait for a page to finish being written to the cache. Put a timeout here
+ * since we might be called recursively via parent fs.
+ */
+static
+bool release_page_wait_timeout(struct fscache_cookie *cookie, struct page *page)
+{
+ wait_queue_head_t *wq = bit_waitqueue(&cookie->flags, 0);
+
+ return wait_event_timeout(*wq, !__fscache_check_page_write(cookie, page),
+ HZ);
+}
+
+/*
* decide whether a page can be released, possibly by cancelling a store to it
* - we're allowed to sleep if __GFP_WAIT is flagged
*/
@@ -115,7 +128,10 @@
}
fscache_stat(&fscache_n_store_vmscan_wait);
- __fscache_wait_on_page_write(cookie, page);
+ if (!release_page_wait_timeout(cookie, page))
+ _debug("fscache writeout timeout page: %p{%lx}",
+ page, page->index);
+
gfp &= ~__GFP_WAIT;
goto try_again;
}
@@ -182,7 +198,7 @@
{
struct fscache_operation *op;
struct fscache_object *object;
- bool wake_cookie;
+ bool wake_cookie = false;
_enter("%p", cookie);
@@ -212,15 +228,16 @@
__fscache_use_cookie(cookie);
if (fscache_submit_exclusive_op(object, op) < 0)
- goto nobufs;
+ goto nobufs_dec;
spin_unlock(&cookie->lock);
fscache_stat(&fscache_n_attr_changed_ok);
fscache_put_operation(op);
_leave(" = 0");
return 0;
-nobufs:
+nobufs_dec:
wake_cookie = __fscache_unuse_cookie(cookie);
+nobufs:
spin_unlock(&cookie->lock);
kfree(op);
if (wake_cookie)
diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index e6ee5b6..f0b945a 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -359,7 +359,7 @@
* Returns: The length of the extent (minimum of one block)
*/
-static inline unsigned int gfs2_extent_length(void *start, unsigned int len, __be64 *ptr, unsigned limit, int *eob)
+static inline unsigned int gfs2_extent_length(void *start, unsigned int len, __be64 *ptr, size_t limit, int *eob)
{
const __be64 *end = (start + len);
const __be64 *first = ptr;
@@ -449,7 +449,7 @@
struct buffer_head *bh_map, struct metapath *mp,
const unsigned int sheight,
const unsigned int height,
- const unsigned int maxlen)
+ const size_t maxlen)
{
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_sbd *sdp = GFS2_SB(inode);
@@ -483,7 +483,8 @@
} else {
/* Need to allocate indirect blocks */
ptrs_per_blk = height > 1 ? sdp->sd_inptrs : sdp->sd_diptrs;
- dblks = min(maxlen, ptrs_per_blk - mp->mp_list[end_of_metadata]);
+ dblks = min(maxlen, (size_t)(ptrs_per_blk -
+ mp->mp_list[end_of_metadata]));
if (height == ip->i_height) {
/* Writing into existing tree, extend tree down */
iblks = height - sheight;
@@ -605,7 +606,7 @@
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_sbd *sdp = GFS2_SB(inode);
unsigned int bsize = sdp->sd_sb.sb_bsize;
- const unsigned int maxlen = bh_map->b_size >> inode->i_blkbits;
+ const size_t maxlen = bh_map->b_size >> inode->i_blkbits;
const u64 *arr = sdp->sd_heightsize;
__be64 *ptr;
u64 size;
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 26b3f95..7f4ed3d 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -26,6 +26,7 @@
#include <linux/dlm.h>
#include <linux/dlm_plock.h>
#include <linux/aio.h>
+#include <linux/delay.h>
#include "gfs2.h"
#include "incore.h"
@@ -979,9 +980,10 @@
unsigned int state;
int flags;
int error = 0;
+ int sleeptime;
state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
- flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT;
+ flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY_1CB) | GL_EXACT;
mutex_lock(&fp->f_fl_mutex);
@@ -1001,7 +1003,14 @@
gfs2_holder_init(gl, state, flags, fl_gh);
gfs2_glock_put(gl);
}
- error = gfs2_glock_nq(fl_gh);
+ for (sleeptime = 1; sleeptime <= 4; sleeptime <<= 1) {
+ error = gfs2_glock_nq(fl_gh);
+ if (error != GLR_TRYFAILED)
+ break;
+ fl_gh->gh_flags = LM_FLAG_TRY | GL_EXACT;
+ fl_gh->gh_error = 0;
+ msleep(sleeptime);
+ }
if (error) {
gfs2_holder_uninit(fl_gh);
if (error == GLR_TRYFAILED)
@@ -1024,7 +1033,7 @@
mutex_lock(&fp->f_fl_mutex);
flock_lock_file_wait(file, fl);
if (fl_gh->gh_gl) {
- gfs2_glock_dq_wait(fl_gh);
+ gfs2_glock_dq(fl_gh);
gfs2_holder_uninit(fl_gh);
}
mutex_unlock(&fp->f_fl_mutex);
diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
index 67d310c..39e7e99 100644
--- a/fs/gfs2/incore.h
+++ b/fs/gfs2/incore.h
@@ -262,6 +262,9 @@
unsigned long gh_ip;
};
+/* Number of quota types we support */
+#define GFS2_MAXQUOTAS 2
+
/* Resource group multi-block reservation, in order of appearance:
Step 1. Function prepares to write, allocates a mb, sets the size hint.
@@ -282,8 +285,8 @@
u64 rs_inum; /* Inode number for reservation */
/* ancillary quota stuff */
- struct gfs2_quota_data *rs_qa_qd[2 * MAXQUOTAS];
- struct gfs2_holder rs_qa_qd_ghs[2 * MAXQUOTAS];
+ struct gfs2_quota_data *rs_qa_qd[2 * GFS2_MAXQUOTAS];
+ struct gfs2_holder rs_qa_qd_ghs[2 * GFS2_MAXQUOTAS];
unsigned int rs_qa_qd_num;
};
diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
index e62e594..fc8ac2e 100644
--- a/fs/gfs2/inode.c
+++ b/fs/gfs2/inode.c
@@ -626,8 +626,10 @@
if (!IS_ERR(inode)) {
d = d_splice_alias(inode, dentry);
error = PTR_ERR(d);
- if (IS_ERR(d))
+ if (IS_ERR(d)) {
+ inode = ERR_CAST(d);
goto fail_gunlock;
+ }
error = 0;
if (file) {
if (S_ISREG(inode->i_mode)) {
@@ -840,8 +842,10 @@
int error;
inode = gfs2_lookupi(dir, &dentry->d_name, 0);
- if (!inode)
+ if (inode == NULL) {
+ d_add(dentry, NULL);
return NULL;
+ }
if (IS_ERR(inode))
return ERR_CAST(inode);
@@ -854,7 +858,6 @@
d = d_splice_alias(inode, dentry);
if (IS_ERR(d)) {
- iput(inode);
gfs2_glock_dq_uninit(&gh);
return d;
}
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index 2607ff1..a346f56 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -1294,7 +1294,7 @@
int val;
if (is_ancestor(root, sdp->sd_master_dir))
- seq_printf(s, ",meta");
+ seq_puts(s, ",meta");
if (args->ar_lockproto[0])
seq_printf(s, ",lockproto=%s", args->ar_lockproto);
if (args->ar_locktable[0])
@@ -1302,13 +1302,13 @@
if (args->ar_hostdata[0])
seq_printf(s, ",hostdata=%s", args->ar_hostdata);
if (args->ar_spectator)
- seq_printf(s, ",spectator");
+ seq_puts(s, ",spectator");
if (args->ar_localflocks)
- seq_printf(s, ",localflocks");
+ seq_puts(s, ",localflocks");
if (args->ar_debug)
- seq_printf(s, ",debug");
+ seq_puts(s, ",debug");
if (args->ar_posix_acl)
- seq_printf(s, ",acl");
+ seq_puts(s, ",acl");
if (args->ar_quota != GFS2_QUOTA_DEFAULT) {
char *state;
switch (args->ar_quota) {
@@ -1328,7 +1328,7 @@
seq_printf(s, ",quota=%s", state);
}
if (args->ar_suiddir)
- seq_printf(s, ",suiddir");
+ seq_puts(s, ",suiddir");
if (args->ar_data != GFS2_DATA_DEFAULT) {
char *state;
switch (args->ar_data) {
@@ -1345,7 +1345,7 @@
seq_printf(s, ",data=%s", state);
}
if (args->ar_discard)
- seq_printf(s, ",discard");
+ seq_puts(s, ",discard");
val = sdp->sd_tune.gt_logd_secs;
if (val != 30)
seq_printf(s, ",commit=%d", val);
@@ -1376,11 +1376,11 @@
seq_printf(s, ",errors=%s", state);
}
if (test_bit(SDF_NOBARRIERS, &sdp->sd_flags))
- seq_printf(s, ",nobarrier");
+ seq_puts(s, ",nobarrier");
if (test_bit(SDF_DEMOTE, &sdp->sd_flags))
- seq_printf(s, ",demote_interface_used");
+ seq_puts(s, ",demote_interface_used");
if (args->ar_rgrplvb)
- seq_printf(s, ",rgrplvb");
+ seq_puts(s, ",rgrplvb");
return 0;
}
diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 8f27c93..ec9e082f 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -253,13 +253,11 @@
error = make_socks(serv, net);
if (error < 0)
- goto err_socks;
+ goto err_bind;
set_grace_period(net);
dprintk("lockd_up_net: per-net data created; net=%p\n", net);
return 0;
-err_socks:
- svc_rpcb_cleanup(serv, net);
err_bind:
ln->nlmsvc_users--;
return error;
diff --git a/fs/namei.c b/fs/namei.c
index a996bb4..a7b05bf 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -34,6 +34,7 @@
#include <linux/device_cgroup.h>
#include <linux/fs_struct.h>
#include <linux/posix_acl.h>
+#include <linux/hash.h>
#include <asm/uaccess.h>
#include "internal.h"
@@ -643,24 +644,22 @@
static __always_inline void set_root(struct nameidata *nd)
{
- if (!nd->root.mnt)
- get_fs_root(current->fs, &nd->root);
+ get_fs_root(current->fs, &nd->root);
}
static int link_path_walk(const char *, struct nameidata *);
-static __always_inline void set_root_rcu(struct nameidata *nd)
+static __always_inline unsigned set_root_rcu(struct nameidata *nd)
{
- if (!nd->root.mnt) {
- struct fs_struct *fs = current->fs;
- unsigned seq;
+ struct fs_struct *fs = current->fs;
+ unsigned seq, res;
- do {
- seq = read_seqcount_begin(&fs->seq);
- nd->root = fs->root;
- nd->seq = __read_seqcount_begin(&nd->root.dentry->d_seq);
- } while (read_seqcount_retry(&fs->seq, seq));
- }
+ do {
+ seq = read_seqcount_begin(&fs->seq);
+ nd->root = fs->root;
+ res = __read_seqcount_begin(&nd->root.dentry->d_seq);
+ } while (read_seqcount_retry(&fs->seq, seq));
+ return res;
}
static void path_put_conditional(struct path *path, struct nameidata *nd)
@@ -860,7 +859,8 @@
return PTR_ERR(s);
}
if (*s == '/') {
- set_root(nd);
+ if (!nd->root.mnt)
+ set_root(nd);
path_put(&nd->path);
nd->path = nd->root;
path_get(&nd->root);
@@ -1137,13 +1137,15 @@
*/
*inode = path->dentry->d_inode;
}
- return read_seqretry(&mount_lock, nd->m_seq) &&
+ return !read_seqretry(&mount_lock, nd->m_seq) &&
!(path->dentry->d_flags & DCACHE_NEED_AUTOMOUNT);
}
static int follow_dotdot_rcu(struct nameidata *nd)
{
- set_root_rcu(nd);
+ struct inode *inode = nd->inode;
+ if (!nd->root.mnt)
+ set_root_rcu(nd);
while (1) {
if (nd->path.dentry == nd->root.dentry &&
@@ -1155,6 +1157,7 @@
struct dentry *parent = old->d_parent;
unsigned seq;
+ inode = parent->d_inode;
seq = read_seqcount_begin(&parent->d_seq);
if (read_seqcount_retry(&old->d_seq, nd->seq))
goto failed;
@@ -1164,6 +1167,7 @@
}
if (!follow_up_rcu(&nd->path))
break;
+ inode = nd->path.dentry->d_inode;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
}
while (d_mountpoint(nd->path.dentry)) {
@@ -1173,11 +1177,12 @@
break;
nd->path.mnt = &mounted->mnt;
nd->path.dentry = mounted->mnt.mnt_root;
+ inode = nd->path.dentry->d_inode;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
- if (!read_seqretry(&mount_lock, nd->m_seq))
+ if (read_seqretry(&mount_lock, nd->m_seq))
goto failed;
}
- nd->inode = nd->path.dentry->d_inode;
+ nd->inode = inode;
return 0;
failed:
@@ -1256,7 +1261,8 @@
static void follow_dotdot(struct nameidata *nd)
{
- set_root(nd);
+ if (!nd->root.mnt)
+ set_root(nd);
while(1) {
struct dentry *old = nd->path.dentry;
@@ -1634,8 +1640,7 @@
static inline unsigned int fold_hash(unsigned long hash)
{
- hash += hash >> (8*sizeof(int));
- return hash;
+ return hash_64(hash, 32);
}
#else /* 32-bit case */
@@ -1669,9 +1674,9 @@
/*
* Calculate the length and hash of the path component, and
- * return the length of the component;
+ * return the "hash_len" as the result.
*/
-static inline unsigned long hash_name(const char *name, unsigned int *hashp)
+static inline u64 hash_name(const char *name)
{
unsigned long a, b, adata, bdata, mask, hash, len;
const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
@@ -1691,9 +1696,8 @@
mask = create_zero_mask(adata | bdata);
hash += a & zero_bytemask(mask);
- *hashp = fold_hash(hash);
-
- return len + find_zero(mask);
+ len += find_zero(mask);
+ return hashlen_create(fold_hash(hash), len);
}
#else
@@ -1711,7 +1715,7 @@
* We know there's a real path component here of at least
* one character.
*/
-static inline unsigned long hash_name(const char *name, unsigned int *hashp)
+static inline u64 hash_name(const char *name)
{
unsigned long hash = init_name_hash();
unsigned long len = 0, c;
@@ -1722,8 +1726,7 @@
hash = partial_name_hash(c, hash);
c = (unsigned char)name[len];
} while (c && c != '/');
- *hashp = end_name_hash(hash);
- return len;
+ return hashlen_create(end_name_hash(hash), len);
}
#endif
@@ -1748,20 +1751,17 @@
/* At this point we know we have a real path component. */
for(;;) {
- struct qstr this;
- long len;
+ u64 hash_len;
int type;
err = may_lookup(nd);
if (err)
break;
- len = hash_name(name, &this.hash);
- this.name = name;
- this.len = len;
+ hash_len = hash_name(name);
type = LAST_NORM;
- if (name[0] == '.') switch (len) {
+ if (name[0] == '.') switch (hashlen_len(hash_len)) {
case 2:
if (name[1] == '.') {
type = LAST_DOTDOT;
@@ -1775,29 +1775,32 @@
struct dentry *parent = nd->path.dentry;
nd->flags &= ~LOOKUP_JUMPED;
if (unlikely(parent->d_flags & DCACHE_OP_HASH)) {
+ struct qstr this = { { .hash_len = hash_len }, .name = name };
err = parent->d_op->d_hash(parent, &this);
if (err < 0)
break;
+ hash_len = this.hash_len;
+ name = this.name;
}
}
- nd->last = this;
+ nd->last.hash_len = hash_len;
+ nd->last.name = name;
nd->last_type = type;
- if (!name[len])
+ name += hashlen_len(hash_len);
+ if (!*name)
return 0;
/*
* If it wasn't NUL, we know it was '/'. Skip that
* slash, and continue until no more slashes.
*/
do {
- len++;
- } while (unlikely(name[len] == '/'));
- if (!name[len])
+ name++;
+ } while (unlikely(*name == '/'));
+ if (!*name)
return 0;
- name += len;
-
err = walk_component(nd, &next, LOOKUP_FOLLOW);
if (err < 0)
return err;
@@ -1852,7 +1855,7 @@
if (*name=='/') {
if (flags & LOOKUP_RCU) {
rcu_read_lock();
- set_root_rcu(nd);
+ nd->seq = set_root_rcu(nd);
} else {
set_root(nd);
path_get(&nd->root);
@@ -1903,7 +1906,14 @@
}
nd->inode = nd->path.dentry->d_inode;
- return 0;
+ if (!(flags & LOOKUP_RCU))
+ return 0;
+ if (likely(!read_seqcount_retry(&nd->path.dentry->d_seq, nd->seq)))
+ return 0;
+ if (!(nd->flags & LOOKUP_ROOT))
+ nd->root.mnt = NULL;
+ rcu_read_unlock();
+ return -ECHILD;
}
static inline int lookup_last(struct nameidata *nd, struct path *path)
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 1c5ff6d..6a4f366 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -1412,24 +1412,18 @@
p = proc_create("volumes", S_IFREG|S_IRUGO,
nn->proc_nfsfs, &nfs_volume_list_fops);
if (!p)
- goto error_2;
+ goto error_1;
return 0;
-error_2:
- remove_proc_entry("servers", nn->proc_nfsfs);
error_1:
- remove_proc_entry("fs/nfsfs", NULL);
+ remove_proc_subtree("nfsfs", net->proc_net);
error_0:
return -ENOMEM;
}
void nfs_fs_proc_net_exit(struct net *net)
{
- struct nfs_net *nn = net_generic(net, nfs_net_id);
-
- remove_proc_entry("volumes", nn->proc_nfsfs);
- remove_proc_entry("servers", nn->proc_nfsfs);
- remove_proc_entry("fs/nfsfs", NULL);
+ remove_proc_subtree("nfsfs", net->proc_net);
}
/*
diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
index 1359c4a..9097807 100644
--- a/fs/nfs/filelayout/filelayout.c
+++ b/fs/nfs/filelayout/filelayout.c
@@ -1269,11 +1269,12 @@
static void filelayout_retry_commit(struct nfs_commit_info *cinfo, int idx)
{
struct pnfs_ds_commit_info *fl_cinfo = cinfo->ds;
- struct pnfs_commit_bucket *bucket = fl_cinfo->buckets;
+ struct pnfs_commit_bucket *bucket;
struct pnfs_layout_segment *freeme;
int i;
- for (i = idx; i < fl_cinfo->nbuckets; i++, bucket++) {
+ for (i = idx; i < fl_cinfo->nbuckets; i++) {
+ bucket = &fl_cinfo->buckets[i];
if (list_empty(&bucket->committing))
continue;
nfs_retry_commit(&bucket->committing, bucket->clseg, cinfo);
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 92193ed..a8b855a 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -130,16 +130,15 @@
*/
struct nfs4_lock_state {
- struct list_head ls_locks; /* Other lock stateids */
- struct nfs4_state * ls_state; /* Pointer to open state */
+ struct list_head ls_locks; /* Other lock stateids */
+ struct nfs4_state * ls_state; /* Pointer to open state */
#define NFS_LOCK_INITIALIZED 0
#define NFS_LOCK_LOST 1
- unsigned long ls_flags;
+ unsigned long ls_flags;
struct nfs_seqid_counter ls_seqid;
- nfs4_stateid ls_stateid;
- atomic_t ls_count;
- fl_owner_t ls_owner;
- struct work_struct ls_release;
+ nfs4_stateid ls_stateid;
+ atomic_t ls_count;
+ fl_owner_t ls_owner;
};
/* bits for nfs4_state->flags */
diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
index 53e435a..ffdb28d 100644
--- a/fs/nfs/nfs4client.c
+++ b/fs/nfs/nfs4client.c
@@ -482,6 +482,16 @@
spin_lock(&nn->nfs_client_lock);
list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {
+
+ if (pos->rpc_ops != new->rpc_ops)
+ continue;
+
+ if (pos->cl_proto != new->cl_proto)
+ continue;
+
+ if (pos->cl_minorversion != new->cl_minorversion)
+ continue;
+
/* If "pos" isn't marked ready, we can't trust the
* remaining fields in "pos" */
if (pos->cl_cons_state > NFS_CS_READY) {
@@ -501,15 +511,6 @@
if (pos->cl_cons_state != NFS_CS_READY)
continue;
- if (pos->rpc_ops != new->rpc_ops)
- continue;
-
- if (pos->cl_proto != new->cl_proto)
- continue;
-
- if (pos->cl_minorversion != new->cl_minorversion)
- continue;
-
if (pos->cl_clientid != new->cl_clientid)
continue;
@@ -622,6 +623,16 @@
spin_lock(&nn->nfs_client_lock);
list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {
+
+ if (pos->rpc_ops != new->rpc_ops)
+ continue;
+
+ if (pos->cl_proto != new->cl_proto)
+ continue;
+
+ if (pos->cl_minorversion != new->cl_minorversion)
+ continue;
+
/* If "pos" isn't marked ready, we can't trust the
* remaining fields in "pos", especially the client
* ID and serverowner fields. Wait for CREATE_SESSION
@@ -647,15 +658,6 @@
if (pos->cl_cons_state != NFS_CS_READY)
continue;
- if (pos->rpc_ops != new->rpc_ops)
- continue;
-
- if (pos->cl_proto != new->cl_proto)
- continue;
-
- if (pos->cl_minorversion != new->cl_minorversion)
- continue;
-
if (!nfs4_match_clientids(pos, new))
continue;
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 7dd8aca..6ca0c8e 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2226,9 +2226,13 @@
ret = _nfs4_proc_open(opendata);
if (ret != 0) {
if (ret == -ENOENT) {
- d_drop(opendata->dentry);
- d_add(opendata->dentry, NULL);
- nfs_set_verifier(opendata->dentry,
+ dentry = opendata->dentry;
+ if (dentry->d_inode)
+ d_delete(dentry);
+ else if (d_unhashed(dentry))
+ d_add(dentry, NULL);
+
+ nfs_set_verifier(dentry,
nfs_save_change_attribute(opendata->dir->d_inode));
}
goto out;
@@ -2614,23 +2618,23 @@
is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags);
is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags);
is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags);
- /* Calculate the current open share mode */
- calldata->arg.fmode = 0;
- if (is_rdonly || is_rdwr)
- calldata->arg.fmode |= FMODE_READ;
- if (is_wronly || is_rdwr)
- calldata->arg.fmode |= FMODE_WRITE;
/* Calculate the change in open mode */
+ calldata->arg.fmode = 0;
if (state->n_rdwr == 0) {
- if (state->n_rdonly == 0) {
- call_close |= is_rdonly || is_rdwr;
- calldata->arg.fmode &= ~FMODE_READ;
- }
- if (state->n_wronly == 0) {
- call_close |= is_wronly || is_rdwr;
- calldata->arg.fmode &= ~FMODE_WRITE;
- }
- }
+ if (state->n_rdonly == 0)
+ call_close |= is_rdonly;
+ else if (is_rdonly)
+ calldata->arg.fmode |= FMODE_READ;
+ if (state->n_wronly == 0)
+ call_close |= is_wronly;
+ else if (is_wronly)
+ calldata->arg.fmode |= FMODE_WRITE;
+ } else if (is_rdwr)
+ calldata->arg.fmode |= FMODE_READ|FMODE_WRITE;
+
+ if (calldata->arg.fmode == 0)
+ call_close |= is_rdwr;
+
if (!nfs4_valid_open_stateid(state))
call_close = 0;
spin_unlock(&state->owner->so_lock);
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index a043f61..22fe351 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -799,18 +799,6 @@
return NULL;
}
-static void
-free_lock_state_work(struct work_struct *work)
-{
- struct nfs4_lock_state *lsp = container_of(work,
- struct nfs4_lock_state, ls_release);
- struct nfs4_state *state = lsp->ls_state;
- struct nfs_server *server = state->owner->so_server;
- struct nfs_client *clp = server->nfs_client;
-
- clp->cl_mvops->free_lock_state(server, lsp);
-}
-
/*
* Return a compatible lock_state. If no initialized lock_state structure
* exists, return an uninitialized one.
@@ -832,7 +820,6 @@
if (lsp->ls_seqid.owner_id < 0)
goto out_free;
INIT_LIST_HEAD(&lsp->ls_locks);
- INIT_WORK(&lsp->ls_release, free_lock_state_work);
return lsp;
out_free:
kfree(lsp);
@@ -896,12 +883,13 @@
if (list_empty(&state->lock_states))
clear_bit(LK_STATE_IN_USE, &state->flags);
spin_unlock(&state->state_lock);
- if (test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags))
- queue_work(nfsiod_workqueue, &lsp->ls_release);
- else {
- server = state->owner->so_server;
+ server = state->owner->so_server;
+ if (test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags)) {
+ struct nfs_client *clp = server->nfs_client;
+
+ clp->cl_mvops->free_lock_state(server, lsp);
+ } else
nfs4_free_lock_state(server, lsp);
- }
}
static void nfs4_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index f9821ce..e94457c 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -2657,6 +2657,7 @@
struct xdr_stream *xdr = cd->xdr;
int start_offset = xdr->buf->len;
int cookie_offset;
+ u32 name_and_cookie;
int entry_bytes;
__be32 nfserr = nfserr_toosmall;
__be64 wire_offset;
@@ -2718,7 +2719,14 @@
cd->rd_maxcount -= entry_bytes;
if (!cd->rd_dircount)
goto fail;
- cd->rd_dircount--;
+ /*
+ * RFC 3530 14.2.24 describes rd_dircount as only a "hint", so
+ * let's always let through the first entry, at least:
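+ * (e.g. a 10-byte name rounds up to 3 XDR words = 12 bytes; with
+ * the 8-byte cookie that gives name_and_cookie = 20)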
+ */
+ name_and_cookie = 4 * XDR_QUADLEN(namlen) + 8;
+ if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
+ goto fail;
+ cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
cd->cookie_offset = cookie_offset;
skip_entry:
cd->common.err = nfs_ok;
@@ -3321,6 +3329,10 @@
}
maxcount = min_t(int, maxcount-16, bytes_left);
+ /* RFC 3530 14.2.24 allows us to ignore dircount when it's 0: */
+ if (!readdir->rd_dircount)
+ readdir->rd_dircount = INT_MAX;
+
readdir->xdr = xdr;
readdir->rd_maxcount = maxcount;
readdir->common.err = 0;
diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
index 238a593..9d7e2b9 100644
--- a/fs/notify/fdinfo.c
+++ b/fs/notify/fdinfo.c
@@ -42,7 +42,7 @@
{
struct {
struct file_handle handle;
- u8 pad[64];
+ u8 pad[MAX_HANDLE_SZ];
} f;
int size, ret, i;
@@ -50,7 +50,7 @@
size = f.handle.handle_bytes >> 2;
ret = exportfs_encode_inode_fh(inode, (struct fid *)f.handle.f_handle, &size, 0);
- if ((ret == 255) || (ret == -ENOSPC)) {
+ if ((ret == FILEID_INVALID) || (ret < 0)) {
WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret);
return 0;
}
diff --git a/fs/udf/ialloc.c b/fs/udf/ialloc.c
index 6eaf5ed..e77db62 100644
--- a/fs/udf/ialloc.c
+++ b/fs/udf/ialloc.c
@@ -45,7 +45,7 @@
udf_free_blocks(sb, NULL, &UDF_I(inode)->i_location, 0, 1);
}
-struct inode *udf_new_inode(struct inode *dir, umode_t mode, int *err)
+struct inode *udf_new_inode(struct inode *dir, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct udf_sb_info *sbi = UDF_SB(sb);
@@ -55,14 +55,12 @@
struct udf_inode_info *iinfo;
struct udf_inode_info *dinfo = UDF_I(dir);
struct logicalVolIntegrityDescImpUse *lvidiu;
+ int err;
inode = new_inode(sb);
- if (!inode) {
- *err = -ENOMEM;
- return NULL;
- }
- *err = -ENOSPC;
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
iinfo = UDF_I(inode);
if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_EXTENDED_FE)) {
@@ -80,21 +78,22 @@
}
if (!iinfo->i_ext.i_data) {
iput(inode);
- *err = -ENOMEM;
- return NULL;
+ return ERR_PTR(-ENOMEM);
}
+ err = -ENOSPC;
block = udf_new_block(dir->i_sb, NULL,
dinfo->i_location.partitionReferenceNum,
- start, err);
- if (*err) {
+ start, &err);
+ if (err) {
iput(inode);
- return NULL;
+ return ERR_PTR(err);
}
lvidiu = udf_sb_lvidiu(sb);
if (lvidiu) {
iinfo->i_unique = lvid_get_unique_id(sb);
+ inode->i_generation = iinfo->i_unique;
mutex_lock(&sbi->s_alloc_mutex);
if (S_ISDIR(mode))
le32_add_cpu(&lvidiu->numDirs, 1);
@@ -123,9 +122,12 @@
iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG;
inode->i_mtime = inode->i_atime = inode->i_ctime =
iinfo->i_crtime = current_fs_time(inode->i_sb);
- insert_inode_hash(inode);
+ if (unlikely(insert_inode_locked(inode) < 0)) {
+ make_bad_inode(inode);
+ iput(inode);
+ return ERR_PTR(-EIO);
+ }
mark_inode_dirty(inode);
- *err = 0;
return inode;
}
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index 236cd48..0859884 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -51,7 +51,6 @@
static umode_t udf_convert_permissions(struct fileEntry *);
static int udf_update_inode(struct inode *, int);
-static void udf_fill_inode(struct inode *, struct buffer_head *);
static int udf_sync_inode(struct inode *inode);
static int udf_alloc_i_data(struct inode *inode, size_t size);
static sector_t inode_getblk(struct inode *, sector_t, int *, int *);
@@ -1271,12 +1270,33 @@
return 0;
}
-static void __udf_read_inode(struct inode *inode)
+/*
+ * Maximum length of the linked list formed by an ICB hierarchy. The chosen
+ * number is arbitrary - just high enough that we hopefully don't limit any
+ * real use of rewritten inodes on write-once media, while avoiding looping
+ * for too long on corrupted media.
+ */
+#define UDF_MAX_ICB_NESTING 1024
+
+static int udf_read_inode(struct inode *inode)
{
struct buffer_head *bh = NULL;
struct fileEntry *fe;
+ struct extendedFileEntry *efe;
uint16_t ident;
struct udf_inode_info *iinfo = UDF_I(inode);
+ struct udf_sb_info *sbi = UDF_SB(inode->i_sb);
+ struct kernel_lb_addr *iloc = &iinfo->i_location;
+ unsigned int link_count;
+ unsigned int indirections = 0;
+ int ret = -EIO;
+
+reread:
+ if (iloc->logicalBlockNum >=
+ sbi->s_partmaps[iloc->partitionReferenceNum].s_partition_len) {
+ udf_debug("block=%d, partition=%d out of range\n",
+ iloc->logicalBlockNum, iloc->partitionReferenceNum);
+ return -EIO;
+ }
/*
* Set defaults, but the inode is still incomplete!
@@ -1290,78 +1310,54 @@
* i_nlink = 1
* i_op = NULL;
*/
- bh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 0, &ident);
+ bh = udf_read_ptagged(inode->i_sb, iloc, 0, &ident);
if (!bh) {
udf_err(inode->i_sb, "(ino %ld) failed !bh\n", inode->i_ino);
- make_bad_inode(inode);
- return;
+ return -EIO;
}
if (ident != TAG_IDENT_FE && ident != TAG_IDENT_EFE &&
ident != TAG_IDENT_USE) {
udf_err(inode->i_sb, "(ino %ld) failed ident=%d\n",
inode->i_ino, ident);
- brelse(bh);
- make_bad_inode(inode);
- return;
+ goto out;
}
fe = (struct fileEntry *)bh->b_data;
+ efe = (struct extendedFileEntry *)bh->b_data;
if (fe->icbTag.strategyType == cpu_to_le16(4096)) {
struct buffer_head *ibh;
- ibh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 1,
- &ident);
+ ibh = udf_read_ptagged(inode->i_sb, iloc, 1, &ident);
if (ident == TAG_IDENT_IE && ibh) {
- struct buffer_head *nbh = NULL;
struct kernel_lb_addr loc;
struct indirectEntry *ie;
ie = (struct indirectEntry *)ibh->b_data;
loc = lelb_to_cpu(ie->indirectICB.extLocation);
- if (ie->indirectICB.extLength &&
- (nbh = udf_read_ptagged(inode->i_sb, &loc, 0,
- &ident))) {
- if (ident == TAG_IDENT_FE ||
- ident == TAG_IDENT_EFE) {
- memcpy(&iinfo->i_location,
- &loc,
- sizeof(struct kernel_lb_addr));
- brelse(bh);
- brelse(ibh);
- brelse(nbh);
- __udf_read_inode(inode);
- return;
+ if (ie->indirectICB.extLength) {
+ brelse(ibh);
+ memcpy(&iinfo->i_location, &loc,
+ sizeof(struct kernel_lb_addr));
+ if (++indirections > UDF_MAX_ICB_NESTING) {
+ udf_err(inode->i_sb,
+ "too many ICBs in ICB hierarchy"
+ " (max %d supported)\n",
+ UDF_MAX_ICB_NESTING);
+ goto out;
}
- brelse(nbh);
+ brelse(bh);
+ goto reread;
}
}
brelse(ibh);
} else if (fe->icbTag.strategyType != cpu_to_le16(4)) {
udf_err(inode->i_sb, "unsupported strategy type: %d\n",
le16_to_cpu(fe->icbTag.strategyType));
- brelse(bh);
- make_bad_inode(inode);
- return;
+ goto out;
}
- udf_fill_inode(inode, bh);
-
- brelse(bh);
-}
-
-static void udf_fill_inode(struct inode *inode, struct buffer_head *bh)
-{
- struct fileEntry *fe;
- struct extendedFileEntry *efe;
- struct udf_sb_info *sbi = UDF_SB(inode->i_sb);
- struct udf_inode_info *iinfo = UDF_I(inode);
- unsigned int link_count;
-
- fe = (struct fileEntry *)bh->b_data;
- efe = (struct extendedFileEntry *)bh->b_data;
-
if (fe->icbTag.strategyType == cpu_to_le16(4))
iinfo->i_strat4096 = 0;
else /* if (fe->icbTag.strategyType == cpu_to_le16(4096)) */
@@ -1378,11 +1374,10 @@
if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_EFE)) {
iinfo->i_efe = 1;
iinfo->i_use = 0;
- if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
- sizeof(struct extendedFileEntry))) {
- make_bad_inode(inode);
- return;
- }
+ ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
+ sizeof(struct extendedFileEntry));
+ if (ret)
+ goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct extendedFileEntry),
inode->i_sb->s_blocksize -
@@ -1390,11 +1385,10 @@
} else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_FE)) {
iinfo->i_efe = 0;
iinfo->i_use = 0;
- if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
- sizeof(struct fileEntry))) {
- make_bad_inode(inode);
- return;
- }
+ ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
+ sizeof(struct fileEntry));
+ if (ret)
+ goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct fileEntry),
inode->i_sb->s_blocksize - sizeof(struct fileEntry));
@@ -1404,18 +1398,18 @@
iinfo->i_lenAlloc = le32_to_cpu(
((struct unallocSpaceEntry *)bh->b_data)->
lengthAllocDescs);
- if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
- sizeof(struct unallocSpaceEntry))) {
- make_bad_inode(inode);
- return;
- }
+ ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
+ sizeof(struct unallocSpaceEntry));
+ if (ret)
+ goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct unallocSpaceEntry),
inode->i_sb->s_blocksize -
sizeof(struct unallocSpaceEntry));
- return;
+ return 0;
}
+ ret = -EIO;
read_lock(&sbi->s_cred_lock);
i_uid_write(inode, le32_to_cpu(fe->uid));
if (!uid_valid(inode->i_uid) ||
@@ -1441,8 +1435,10 @@
read_unlock(&sbi->s_cred_lock);
link_count = le16_to_cpu(fe->fileLinkCount);
- if (!link_count)
- link_count = 1;
+ if (!link_count) {
+ ret = -ESTALE;
+ goto out;
+ }
set_nlink(inode, link_count);
inode->i_size = le64_to_cpu(fe->informationLength);
@@ -1488,6 +1484,7 @@
iinfo->i_lenAlloc = le32_to_cpu(efe->lengthAllocDescs);
iinfo->i_checkpoint = le32_to_cpu(efe->checkpoint);
}
+ inode->i_generation = iinfo->i_unique;
switch (fe->icbTag.fileType) {
case ICBTAG_FILE_TYPE_DIRECTORY:
@@ -1537,8 +1534,7 @@
default:
udf_err(inode->i_sb, "(ino %ld) failed unknown file type=%d\n",
inode->i_ino, fe->icbTag.fileType);
- make_bad_inode(inode);
- return;
+ goto out;
}
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
struct deviceSpec *dsea =
@@ -1549,8 +1545,12 @@
le32_to_cpu(dsea->minorDeviceIdent)));
/* Developer ID ??? */
} else
- make_bad_inode(inode);
+ goto out;
}
+ ret = 0;
+out:
+ brelse(bh);
+ return ret;
}
static int udf_alloc_i_data(struct inode *inode, size_t size)
@@ -1664,7 +1664,7 @@
FE_PERM_U_DELETE | FE_PERM_U_CHATTR));
fe->permissions = cpu_to_le32(udfperms);
- if (S_ISDIR(inode->i_mode))
+ if (S_ISDIR(inode->i_mode) && inode->i_nlink > 0)
fe->fileLinkCount = cpu_to_le16(inode->i_nlink - 1);
else
fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
@@ -1830,32 +1830,23 @@
{
unsigned long block = udf_get_lb_pblock(sb, ino, 0);
struct inode *inode = iget_locked(sb, block);
+ int err;
if (!inode)
- return NULL;
+ return ERR_PTR(-ENOMEM);
- if (inode->i_state & I_NEW) {
- memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
- __udf_read_inode(inode);
- unlock_new_inode(inode);
+ if (!(inode->i_state & I_NEW))
+ return inode;
+
+ memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
+ err = udf_read_inode(inode);
+ if (err < 0) {
+ iget_failed(inode);
+ return ERR_PTR(err);
}
-
- if (is_bad_inode(inode))
- goto out_iput;
-
- if (ino->logicalBlockNum >= UDF_SB(sb)->
- s_partmaps[ino->partitionReferenceNum].s_partition_len) {
- udf_debug("block=%d, partition=%d out of range\n",
- ino->logicalBlockNum, ino->partitionReferenceNum);
- make_bad_inode(inode);
- goto out_iput;
- }
+ unlock_new_inode(inode);
return inode;
-
- out_iput:
- iput(inode);
- return NULL;
}
int udf_add_aext(struct inode *inode, struct extent_position *epos,
diff --git a/fs/udf/namei.c b/fs/udf/namei.c
index 83a0600..c12e260 100644
--- a/fs/udf/namei.c
+++ b/fs/udf/namei.c
@@ -270,9 +270,8 @@
NULL, 0),
};
inode = udf_iget(dir->i_sb, lb);
- if (!inode) {
- return ERR_PTR(-EACCES);
- }
+ if (IS_ERR(inode))
+ return inode;
} else
#endif /* UDF_RECOVERY */
@@ -285,9 +284,8 @@
loc = lelb_to_cpu(cfi.icb.extLocation);
inode = udf_iget(dir->i_sb, &loc);
- if (!inode) {
- return ERR_PTR(-EACCES);
- }
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
}
return d_splice_alias(inode, dentry);
@@ -550,32 +548,18 @@
return udf_write_fi(inode, cfi, fi, fibh, NULL, NULL);
}
-static int udf_create(struct inode *dir, struct dentry *dentry, umode_t mode,
- bool excl)
+static int udf_add_nondir(struct dentry *dentry, struct inode *inode)
{
+ struct udf_inode_info *iinfo = UDF_I(inode);
+ struct inode *dir = dentry->d_parent->d_inode;
struct udf_fileident_bh fibh;
- struct inode *inode;
struct fileIdentDesc cfi, *fi;
int err;
- struct udf_inode_info *iinfo;
-
- inode = udf_new_inode(dir, mode, &err);
- if (!inode) {
- return err;
- }
-
- iinfo = UDF_I(inode);
- if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
- inode->i_data.a_ops = &udf_adinicb_aops;
- else
- inode->i_data.a_ops = &udf_aops;
- inode->i_op = &udf_file_inode_operations;
- inode->i_fop = &udf_file_operations;
- mark_inode_dirty(inode);
fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err);
- if (!fi) {
+ if (unlikely(!fi)) {
inode_dec_link_count(inode);
+ unlock_new_inode(inode);
iput(inode);
return err;
}
@@ -589,23 +573,21 @@
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
brelse(fibh.sbh);
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
return 0;
}
-static int udf_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+static int udf_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ bool excl)
{
- struct inode *inode;
- struct udf_inode_info *iinfo;
- int err;
+ struct inode *inode = udf_new_inode(dir, mode);
- inode = udf_new_inode(dir, mode, &err);
- if (!inode)
- return err;
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
- iinfo = UDF_I(inode);
- if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
+ if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
inode->i_data.a_ops = &udf_adinicb_aops;
else
inode->i_data.a_ops = &udf_aops;
@@ -613,7 +595,25 @@
inode->i_fop = &udf_file_operations;
mark_inode_dirty(inode);
+ return udf_add_nondir(dentry, inode);
+}
+
+static int udf_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+{
+ struct inode *inode = udf_new_inode(dir, mode);
+
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+
+ if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
+ inode->i_data.a_ops = &udf_adinicb_aops;
+ else
+ inode->i_data.a_ops = &udf_aops;
+ inode->i_op = &udf_file_inode_operations;
+ inode->i_fop = &udf_file_operations;
+ mark_inode_dirty(inode);
d_tmpfile(dentry, inode);
+ unlock_new_inode(inode);
return 0;
}
@@ -621,44 +621,16 @@
dev_t rdev)
{
struct inode *inode;
- struct udf_fileident_bh fibh;
- struct fileIdentDesc cfi, *fi;
- int err;
- struct udf_inode_info *iinfo;
if (!old_valid_dev(rdev))
return -EINVAL;
- err = -EIO;
- inode = udf_new_inode(dir, mode, &err);
- if (!inode)
- goto out;
+ inode = udf_new_inode(dir, mode);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
- iinfo = UDF_I(inode);
init_special_inode(inode, mode, rdev);
- fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err);
- if (!fi) {
- inode_dec_link_count(inode);
- iput(inode);
- return err;
- }
- cfi.icb.extLength = cpu_to_le32(inode->i_sb->s_blocksize);
- cfi.icb.extLocation = cpu_to_lelb(iinfo->i_location);
- *(__le32 *)((struct allocDescImpUse *)cfi.icb.impUse)->impUse =
- cpu_to_le32(iinfo->i_unique & 0x00000000FFFFFFFFUL);
- udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL);
- if (UDF_I(dir)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
- mark_inode_dirty(dir);
- mark_inode_dirty(inode);
-
- if (fibh.sbh != fibh.ebh)
- brelse(fibh.ebh);
- brelse(fibh.sbh);
- d_instantiate(dentry, inode);
- err = 0;
-
-out:
- return err;
+ return udf_add_nondir(dentry, inode);
}
static int udf_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
@@ -670,10 +642,9 @@
struct udf_inode_info *dinfo = UDF_I(dir);
struct udf_inode_info *iinfo;
- err = -EIO;
- inode = udf_new_inode(dir, S_IFDIR | mode, &err);
- if (!inode)
- goto out;
+ inode = udf_new_inode(dir, S_IFDIR | mode);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
iinfo = UDF_I(inode);
inode->i_op = &udf_dir_inode_operations;
@@ -681,6 +652,7 @@
fi = udf_add_entry(inode, NULL, &fibh, &cfi, &err);
if (!fi) {
inode_dec_link_count(inode);
+ unlock_new_inode(inode);
iput(inode);
goto out;
}
@@ -699,6 +671,7 @@
if (!fi) {
clear_nlink(inode);
mark_inode_dirty(inode);
+ unlock_new_inode(inode);
iput(inode);
goto out;
}
@@ -710,6 +683,7 @@
udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL);
inc_nlink(dir);
mark_inode_dirty(dir);
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
@@ -876,14 +850,11 @@
static int udf_symlink(struct inode *dir, struct dentry *dentry,
const char *symname)
{
- struct inode *inode;
+ struct inode *inode = udf_new_inode(dir, S_IFLNK | S_IRWXUGO);
struct pathComponent *pc;
const char *compstart;
- struct udf_fileident_bh fibh;
struct extent_position epos = {};
int eoffset, elen = 0;
- struct fileIdentDesc *fi;
- struct fileIdentDesc cfi;
uint8_t *ea;
int err;
int block;
@@ -892,9 +863,8 @@
struct udf_inode_info *iinfo;
struct super_block *sb = dir->i_sb;
- inode = udf_new_inode(dir, S_IFLNK | S_IRWXUGO, &err);
- if (!inode)
- goto out;
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
iinfo = UDF_I(inode);
down_write(&iinfo->i_data_sem);
@@ -1012,32 +982,15 @@
mark_inode_dirty(inode);
up_write(&iinfo->i_data_sem);
- fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err);
- if (!fi)
- goto out_fail;
- cfi.icb.extLength = cpu_to_le32(sb->s_blocksize);
- cfi.icb.extLocation = cpu_to_lelb(iinfo->i_location);
- if (UDF_SB(inode->i_sb)->s_lvid_bh) {
- *(__le32 *)((struct allocDescImpUse *)cfi.icb.impUse)->impUse =
- cpu_to_le32(lvid_get_unique_id(sb));
- }
- udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL);
- if (UDF_I(dir)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
- mark_inode_dirty(dir);
- if (fibh.sbh != fibh.ebh)
- brelse(fibh.ebh);
- brelse(fibh.sbh);
- d_instantiate(dentry, inode);
- err = 0;
-
+ err = udf_add_nondir(dentry, inode);
out:
kfree(name);
return err;
out_no_entry:
up_write(&iinfo->i_data_sem);
-out_fail:
inode_dec_link_count(inode);
+ unlock_new_inode(inode);
iput(inode);
goto out;
}
@@ -1222,7 +1175,7 @@
struct udf_fileident_bh fibh;
if (!udf_find_entry(child->d_inode, &dotdot, &fibh, &cfi))
- goto out_unlock;
+ return ERR_PTR(-EACCES);
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
@@ -1230,12 +1183,10 @@
tloc = lelb_to_cpu(cfi.icb.extLocation);
inode = udf_iget(child->d_inode->i_sb, &tloc);
- if (!inode)
- goto out_unlock;
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
return d_obtain_alias(inode);
-out_unlock:
- return ERR_PTR(-EACCES);
}
@@ -1252,8 +1203,8 @@
loc.partitionReferenceNum = partref;
inode = udf_iget(sb, &loc);
- if (inode == NULL)
- return ERR_PTR(-ENOMEM);
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
if (generation && inode->i_generation != generation) {
iput(inode);
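
The pattern running through the namei.c conversion above (and the super.c one below) is the same everywhere: udf_iget() and udf_new_inode() now return ERR_PTR() codes instead of NULL, so callers propagate the real error instead of inventing -EACCES or -ENOMEM. A minimal sketch of the two caller shapes, using the <linux/err.h> helpers (the udf_foo_* names are hypothetical, not functions from this series):

    /* pointer-returning path: pass the error through with ERR_CAST() */
    static struct dentry *udf_foo_dentry(struct super_block *sb,
                                         struct kernel_lb_addr *loc)
    {
            struct inode *inode = udf_iget(sb, loc);

            if (IS_ERR(inode))
                    return ERR_CAST(inode);
            return d_obtain_alias(inode);
    }

    /* int-returning path: unwrap with PTR_ERR() */
    static int udf_foo_int(struct super_block *sb, struct kernel_lb_addr *loc)
    {
            struct inode *inode = udf_iget(sb, loc);

            if (IS_ERR(inode))
                    return PTR_ERR(inode);
            iput(inode);
            return 0;
    }
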
diff --git a/fs/udf/super.c b/fs/udf/super.c
index 813da94..5401fc3 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -961,12 +961,14 @@
metadata_fe = udf_iget(sb, &addr);
- if (metadata_fe == NULL)
+ if (IS_ERR(metadata_fe)) {
udf_warn(sb, "metadata inode efe not found\n");
- else if (UDF_I(metadata_fe)->i_alloc_type != ICBTAG_FLAG_AD_SHORT) {
+ return metadata_fe;
+ }
+ if (UDF_I(metadata_fe)->i_alloc_type != ICBTAG_FLAG_AD_SHORT) {
udf_warn(sb, "metadata inode efe does not have short allocation descriptors!\n");
iput(metadata_fe);
- metadata_fe = NULL;
+ return ERR_PTR(-EIO);
}
return metadata_fe;
@@ -978,6 +980,7 @@
struct udf_part_map *map;
struct udf_meta_data *mdata;
struct kernel_lb_addr addr;
+ struct inode *fe;
map = &sbi->s_partmaps[partition];
mdata = &map->s_type_specific.s_metadata;
@@ -986,22 +989,24 @@
udf_debug("Metadata file location: block = %d part = %d\n",
mdata->s_meta_file_loc, map->s_partition_num);
- mdata->s_metadata_fe = udf_find_metadata_inode_efe(sb,
- mdata->s_meta_file_loc, map->s_partition_num);
-
- if (mdata->s_metadata_fe == NULL) {
+ fe = udf_find_metadata_inode_efe(sb, mdata->s_meta_file_loc,
+ map->s_partition_num);
+ if (IS_ERR(fe)) {
/* mirror file entry */
udf_debug("Mirror metadata file location: block = %d part = %d\n",
mdata->s_mirror_file_loc, map->s_partition_num);
- mdata->s_mirror_fe = udf_find_metadata_inode_efe(sb,
- mdata->s_mirror_file_loc, map->s_partition_num);
+ fe = udf_find_metadata_inode_efe(sb, mdata->s_mirror_file_loc,
+ map->s_partition_num);
- if (mdata->s_mirror_fe == NULL) {
+ if (IS_ERR(fe)) {
udf_err(sb, "Both metadata and mirror metadata inode efe can not found\n");
- return -EIO;
+ return PTR_ERR(fe);
}
- }
+ mdata->s_mirror_fe = fe;
+ } else
+ mdata->s_metadata_fe = fe;
+
/*
* bitmap file entry
@@ -1015,15 +1020,16 @@
udf_debug("Bitmap file location: block = %d part = %d\n",
addr.logicalBlockNum, addr.partitionReferenceNum);
- mdata->s_bitmap_fe = udf_iget(sb, &addr);
- if (mdata->s_bitmap_fe == NULL) {
+ fe = udf_iget(sb, &addr);
+ if (IS_ERR(fe)) {
if (sb->s_flags & MS_RDONLY)
udf_warn(sb, "bitmap inode efe not found but it's ok since the disc is mounted read-only\n");
else {
udf_err(sb, "bitmap inode efe not found and attempted read-write mount\n");
- return -EIO;
+ return PTR_ERR(fe);
}
- }
+ } else
+ mdata->s_bitmap_fe = fe;
}
udf_debug("udf_load_metadata_files Ok\n");
@@ -1111,13 +1117,15 @@
phd->unallocSpaceTable.extPosition),
.partitionReferenceNum = p_index,
};
+ struct inode *inode;
- map->s_uspace.s_table = udf_iget(sb, &loc);
- if (!map->s_uspace.s_table) {
+ inode = udf_iget(sb, &loc);
+ if (IS_ERR(inode)) {
udf_debug("cannot load unallocSpaceTable (part %d)\n",
p_index);
- return -EIO;
+ return PTR_ERR(inode);
}
+ map->s_uspace.s_table = inode;
map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_TABLE;
udf_debug("unallocSpaceTable (part %d) @ %ld\n",
p_index, map->s_uspace.s_table->i_ino);
@@ -1144,14 +1152,15 @@
phd->freedSpaceTable.extPosition),
.partitionReferenceNum = p_index,
};
+ struct inode *inode;
- map->s_fspace.s_table = udf_iget(sb, &loc);
- if (!map->s_fspace.s_table) {
+ inode = udf_iget(sb, &loc);
+ if (IS_ERR(inode)) {
udf_debug("cannot load freedSpaceTable (part %d)\n",
p_index);
- return -EIO;
+ return PTR_ERR(inode);
}
-
+ map->s_fspace.s_table = inode;
map->s_partition_flags |= UDF_PART_FLAG_FREED_TABLE;
udf_debug("freedSpaceTable (part %d) @ %ld\n",
p_index, map->s_fspace.s_table->i_ino);
@@ -1178,6 +1187,7 @@
struct udf_part_map *map = &sbi->s_partmaps[p_index];
sector_t vat_block;
struct kernel_lb_addr ino;
+ struct inode *inode;
/*
* VAT file entry is in the last recorded block. Some broken disks have
@@ -1186,10 +1196,13 @@
ino.partitionReferenceNum = type1_index;
for (vat_block = start_block;
vat_block >= map->s_partition_root &&
- vat_block >= start_block - 3 &&
- !sbi->s_vat_inode; vat_block--) {
+ vat_block >= start_block - 3; vat_block--) {
ino.logicalBlockNum = vat_block - map->s_partition_root;
- sbi->s_vat_inode = udf_iget(sb, &ino);
+ inode = udf_iget(sb, &ino);
+ if (!IS_ERR(inode)) {
+ sbi->s_vat_inode = inode;
+ break;
+ }
}
}
@@ -2205,10 +2218,10 @@
/* assign inodes by physical block number */
/* perhaps it's not extensible enough, but for now ... */
inode = udf_iget(sb, &rootdir);
- if (!inode) {
+ if (IS_ERR(inode)) {
udf_err(sb, "Error in udf_iget, block=%d, partition=%d\n",
rootdir.logicalBlockNum, rootdir.partitionReferenceNum);
- ret = -EIO;
+ ret = PTR_ERR(inode);
goto error_out;
}
diff --git a/fs/udf/udfdecl.h b/fs/udf/udfdecl.h
index be7dabb..742557b 100644
--- a/fs/udf/udfdecl.h
+++ b/fs/udf/udfdecl.h
@@ -143,7 +143,6 @@
extern struct buffer_head *udf_expand_dir_adinicb(struct inode *, int *, int *);
extern struct buffer_head *udf_bread(struct inode *, int, int, int *);
extern int udf_setsize(struct inode *, loff_t);
-extern void udf_read_inode(struct inode *);
extern void udf_evict_inode(struct inode *);
extern int udf_write_inode(struct inode *, struct writeback_control *wbc);
extern long udf_block_map(struct inode *, sector_t);
@@ -209,7 +208,7 @@
/* ialloc.c */
extern void udf_free_inode(struct inode *);
-extern struct inode *udf_new_inode(struct inode *, umode_t, int *);
+extern struct inode *udf_new_inode(struct inode *, umode_t);
/* truncate.c */
extern void udf_truncate_tail_extent(struct inode *);
diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
index c1c9de1..d91e59b 100644
--- a/include/acpi/acpi_bus.h
+++ b/include/acpi/acpi_bus.h
@@ -204,10 +204,9 @@
u32 match_driver:1;
u32 initialized:1;
u32 visited:1;
- u32 no_hotplug:1;
u32 hotplug_notify:1;
u32 is_dock_station:1;
- u32 reserved:22;
+ u32 reserved:23;
};
/* File System */
@@ -411,7 +410,6 @@
int acpi_bus_get_private_data(acpi_handle, void **);
int acpi_bus_attach_private_data(acpi_handle, void *);
void acpi_bus_detach_private_data(acpi_handle);
-void acpi_bus_no_hotplug(acpi_handle handle);
extern int acpi_notifier_call_chain(struct acpi_device *, u32, u32);
extern int register_acpi_notifier(struct notifier_block *);
extern int unregister_acpi_notifier(struct notifier_block *);
diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
index 831d786..882675e 100644
--- a/include/crypto/drbg.h
+++ b/include/crypto/drbg.h
@@ -162,12 +162,25 @@
static inline size_t drbg_max_addtl(struct drbg_state *drbg)
{
+#if (__BITS_PER_LONG == 32)
+ /*
+ * SP800-90A allows smaller maximum numbers to be returned -- we
+ * return SIZE_MAX - 1 to allow the verification of the enforcement
+ * of this value in drbg_healthcheck_sanity.
+ */
+ return (SIZE_MAX - 1);
+#else
return (1UL<<(drbg->core->max_addtllen));
+#endif
}
static inline size_t drbg_max_requests(struct drbg_state *drbg)
{
+#if (__BITS_PER_LONG == 32)
+ return SIZE_MAX;
+#else
return (1UL<<(drbg->core->max_req));
+#endif
}
/*
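
The 32-bit special case exists because the DRBG core tables allow limits up to 2^35 (additional input) and 2^48 (requests); shifting a 32-bit unsigned long that far is undefined, so the header pins the limits to SIZE_MAX instead. A small userspace illustration of the overflow being avoided (the 35/48 values are the SP800-90A maxima assumed here):

    #include <stdio.h>

    int main(void)
    {
            unsigned int max_addtllen = 35, max_req = 48;
            unsigned int long_bits = 8 * sizeof(unsigned long);

            /* 1UL << n is undefined once n >= bits-in-long, hence the
             * SIZE_MAX caps in the 32-bit branch above */
            if (max_addtllen >= long_bits)
                    printf("2^%u not representable: cap addtl at SIZE_MAX - 1\n",
                           max_addtllen);
            if (max_req >= long_bits)
                    printf("2^%u not representable: cap requests at SIZE_MAX\n",
                           max_req);
            return 0;
    }
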
diff --git a/include/linux/bcma/bcma.h b/include/linux/bcma/bcma.h
index 0272e49..6345979 100644
--- a/include/linux/bcma/bcma.h
+++ b/include/linux/bcma/bcma.h
@@ -267,7 +267,7 @@
u8 core_unit;
u32 addr;
- u32 addr1;
+ u32 addr_s[8];
u32 wrap;
void __iomem *io_addr;
@@ -332,10 +332,10 @@
struct bcma_device *mapped_core;
struct list_head cores;
u8 nr_cores;
- u8 init_done:1;
u8 num;
struct bcma_drv_cc drv_cc;
+ struct bcma_drv_cc_b drv_cc_b;
struct bcma_drv_pci drv_pci[2];
struct bcma_drv_pcie2 drv_pcie2;
struct bcma_drv_mips drv_mips;
diff --git a/include/linux/bcma/bcma_driver_chipcommon.h b/include/linux/bcma/bcma_driver_chipcommon.h
index 63d105c..db6fa21 100644
--- a/include/linux/bcma/bcma_driver_chipcommon.h
+++ b/include/linux/bcma/bcma_driver_chipcommon.h
@@ -644,6 +644,12 @@
#endif
};
+struct bcma_drv_cc_b {
+ struct bcma_device *core;
+ u8 setup_done:1;
+ void __iomem *mii;
+};
+
/* Register access */
#define bcma_cc_read32(cc, offset) \
bcma_read32((cc)->core, offset)
@@ -699,4 +705,6 @@
extern u32 bcma_pmu_get_bus_clock(struct bcma_drv_cc *cc);
+void bcma_chipco_b_mii_write(struct bcma_drv_cc_b *ccb, u32 offset, u32 value);
+
#endif /* LINUX_BCMA_DRIVER_CC_H_ */
diff --git a/include/linux/bcma/bcma_soc.h b/include/linux/bcma/bcma_soc.h
index 4203c55..f24d245 100644
--- a/include/linux/bcma/bcma_soc.h
+++ b/include/linux/bcma/bcma_soc.h
@@ -10,6 +10,7 @@
};
int __init bcma_host_soc_register(struct bcma_soc *soc);
+int __init bcma_host_soc_init(struct bcma_soc *soc);
int bcma_bus_register(struct bcma_bus *bus);
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
new file mode 100644
index 0000000..3cf9175
--- /dev/null
+++ b/include/linux/bpf.h
@@ -0,0 +1,136 @@
+/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#ifndef _LINUX_BPF_H
+#define _LINUX_BPF_H 1
+
+#include <uapi/linux/bpf.h>
+#include <linux/workqueue.h>
+#include <linux/file.h>
+
+struct bpf_map;
+
+/* map is generic key/value storage optionally accessible by eBPF programs */
+struct bpf_map_ops {
+ /* funcs callable from userspace (via syscall) */
+ struct bpf_map *(*map_alloc)(union bpf_attr *attr);
+ void (*map_free)(struct bpf_map *);
+ int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key);
+
+ /* funcs callable from userspace and from eBPF programs */
+ void *(*map_lookup_elem)(struct bpf_map *map, void *key);
+ int (*map_update_elem)(struct bpf_map *map, void *key, void *value);
+ int (*map_delete_elem)(struct bpf_map *map, void *key);
+};
+
+struct bpf_map {
+ atomic_t refcnt;
+ enum bpf_map_type map_type;
+ u32 key_size;
+ u32 value_size;
+ u32 max_entries;
+ struct bpf_map_ops *ops;
+ struct work_struct work;
+};
+
+struct bpf_map_type_list {
+ struct list_head list_node;
+ struct bpf_map_ops *ops;
+ enum bpf_map_type type;
+};
+
+void bpf_register_map_type(struct bpf_map_type_list *tl);
+void bpf_map_put(struct bpf_map *map);
+struct bpf_map *bpf_map_get(struct fd f);
+
+/* function argument constraints */
+enum bpf_arg_type {
+ ARG_ANYTHING = 0, /* any argument is ok */
+
+ /* the following constraints used to prototype
+ * bpf_map_lookup/update/delete_elem() functions
+ */
+ ARG_CONST_MAP_PTR, /* const argument used as pointer to bpf_map */
+ ARG_PTR_TO_MAP_KEY, /* pointer to stack used as map key */
+ ARG_PTR_TO_MAP_VALUE, /* pointer to stack used as map value */
+
+ /* the following constraints used to prototype bpf_memcmp() and other
+ * functions that access data on eBPF program stack
+ */
+ ARG_PTR_TO_STACK, /* any pointer to eBPF program stack */
+ ARG_CONST_STACK_SIZE, /* number of bytes accessed from stack */
+};
+
+/* type of values returned from helper functions */
+enum bpf_return_type {
+ RET_INTEGER, /* function returns integer */
+ RET_VOID, /* function doesn't return anything */
+ RET_PTR_TO_MAP_VALUE_OR_NULL, /* returns a pointer to map elem value or NULL */
+};
+
+/* eBPF function prototype used by verifier to allow BPF_CALLs from eBPF programs
+ * to in-kernel helper functions and for adjusting imm32 field in BPF_CALL
+ * instructions after verifying
+ */
+struct bpf_func_proto {
+ u64 (*func)(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
+ bool gpl_only;
+ enum bpf_return_type ret_type;
+ enum bpf_arg_type arg1_type;
+ enum bpf_arg_type arg2_type;
+ enum bpf_arg_type arg3_type;
+ enum bpf_arg_type arg4_type;
+ enum bpf_arg_type arg5_type;
+};
+
+/* bpf_context is intentionally undefined structure. Pointer to bpf_context is
+ * the first argument to eBPF programs.
+ * For socket filters: 'struct bpf_context *' == 'struct sk_buff *'
+ */
+struct bpf_context;
+
+enum bpf_access_type {
+ BPF_READ = 1,
+ BPF_WRITE = 2
+};
+
+struct bpf_verifier_ops {
+ /* return eBPF function prototype for verification */
+ const struct bpf_func_proto *(*get_func_proto)(enum bpf_func_id func_id);
+
+ /* return true if 'size' wide access at offset 'off' within bpf_context
+ * with 'type' (read or write) is allowed
+ */
+ bool (*is_valid_access)(int off, int size, enum bpf_access_type type);
+};
+
+struct bpf_prog_type_list {
+ struct list_head list_node;
+ struct bpf_verifier_ops *ops;
+ enum bpf_prog_type type;
+};
+
+void bpf_register_prog_type(struct bpf_prog_type_list *tl);
+
+struct bpf_prog;
+
+struct bpf_prog_aux {
+ atomic_t refcnt;
+ bool is_gpl_compatible;
+ enum bpf_prog_type prog_type;
+ struct bpf_verifier_ops *ops;
+ struct bpf_map **used_maps;
+ u32 used_map_cnt;
+ struct bpf_prog *prog;
+ struct work_struct work;
+};
+
+void bpf_prog_put(struct bpf_prog *prog);
+struct bpf_prog *bpf_prog_get(u32 ufd);
+/* verify correctness of eBPF program */
+int bpf_check(struct bpf_prog *fp, union bpf_attr *attr);
+
+#endif /* _LINUX_BPF_H */
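
The two registries above (bpf_register_map_type()/bpf_register_prog_type()) are how concrete map and program types plug into the new syscall. A sketch of the map side, with hypothetical mymap_* callbacks matching the bpf_map_ops signatures (BPF_MAP_TYPE_UNSPEC stands in for a real type id):

    #include <linux/bpf.h>
    #include <linux/init.h>

    /* hypothetical callbacks, declared to match bpf_map_ops above */
    static struct bpf_map *mymap_alloc(union bpf_attr *attr);
    static void mymap_free(struct bpf_map *map);
    static int mymap_get_next_key(struct bpf_map *map, void *key, void *next_key);
    static void *mymap_lookup_elem(struct bpf_map *map, void *key);
    static int mymap_update_elem(struct bpf_map *map, void *key, void *value);
    static int mymap_delete_elem(struct bpf_map *map, void *key);

    static struct bpf_map_ops mymap_ops = {
            .map_alloc        = mymap_alloc,
            .map_free         = mymap_free,
            .map_get_next_key = mymap_get_next_key,
            .map_lookup_elem  = mymap_lookup_elem,
            .map_update_elem  = mymap_update_elem,
            .map_delete_elem  = mymap_delete_elem,
    };

    static struct bpf_map_type_list mymap_type __read_mostly = {
            .ops  = &mymap_ops,
            .type = BPF_MAP_TYPE_UNSPEC,    /* placeholder type id */
    };

    static int __init mymap_register(void)
    {
            bpf_register_map_type(&mymap_type);
            return 0;
    }
    late_initcall(mymap_register);
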
diff --git a/include/linux/brcmphy.h b/include/linux/brcmphy.h
index 5bd35cc..3f626fe4 100644
--- a/include/linux/brcmphy.h
+++ b/include/linux/brcmphy.h
@@ -40,7 +40,8 @@
#define PHY_BRCM_CLEAR_RGMII_MODE 0x00004000
#define PHY_BRCM_DIS_TXCRXC_NOENRGY 0x00008000
/* Broadcom BCM7xxx specific workarounds */
-#define PHY_BRCM_100MBPS_WAR 0x00010000
+#define PHY_BRCM_7XXX_REV(x) (((x) >> 8) & 0xff)
+#define PHY_BRCM_7XXX_PATCH(x) ((x) & 0xff)
#define PHY_BCM_FLAGS_VALID 0x80000000
/* Broadcom BCM54XX register definitions, common to most Broadcom PHYs */
diff --git a/include/linux/ccp.h b/include/linux/ccp.h
index ebcc9d1..7f43703 100644
--- a/include/linux/ccp.h
+++ b/include/linux/ccp.h
@@ -27,6 +27,13 @@
defined(CONFIG_CRYPTO_DEV_CCP_DD_MODULE)
/**
+ * ccp_present - check if a CCP device is present
+ *
+ * Returns zero if a CCP device is present, -ENODEV otherwise.
+ */
+int ccp_present(void);
+
+/**
* ccp_enqueue_cmd - queue an operation for processing by the CCP
*
* @cmd: ccp_cmd struct to be processed
@@ -53,6 +60,11 @@
#else /* CONFIG_CRYPTO_DEV_CCP_DD is not enabled */
+static inline int ccp_present(void)
+{
+ return -ENODEV;
+}
+
static inline int ccp_enqueue_cmd(struct ccp_cmd *cmd)
{
return -ENODEV;
diff --git a/include/linux/com20020.h b/include/linux/com20020.h
index 5dcfb94..8589899 100644
--- a/include/linux/com20020.h
+++ b/include/linux/com20020.h
@@ -41,6 +41,35 @@
#define BUS_ALIGN 1
#endif
+#define PLX_PCI_MAX_CARDS 2
+
+struct com20020_pci_channel_map {
+ u32 bar;
+ u32 offset;
+ u32 size; /* 0x00 - auto, e.g. length of entire bar */
+};
+
+struct com20020_pci_card_info {
+ const char *name;
+ int devcount;
+
+ struct com20020_pci_channel_map chan_map_tbl[PLX_PCI_MAX_CARDS];
+
+ unsigned int flags;
+};
+
+struct com20020_priv {
+ struct com20020_pci_card_info *ci;
+ struct list_head list_dev;
+};
+
+struct com20020_dev {
+ struct list_head list;
+ struct net_device *dev;
+
+ struct com20020_priv *pci_priv;
+ int index;
+};
#define _INTMASK (ioaddr+BUS_ALIGN*0) /* writable */
#define _STATUS (ioaddr+BUS_ALIGN*0) /* readable */
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index e4ae2ad..75a227c 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -55,6 +55,7 @@
#define QSTR_INIT(n,l) { { { .len = l } }, .name = n }
#define hashlen_hash(hashlen) ((u32) (hashlen))
#define hashlen_len(hashlen) ((u32)((hashlen) >> 32))
+#define hashlen_create(hash,len) (((u64)(len)<<32)|(u32)(hash))
struct dentry_stat_t {
long nr_dentry;
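
hashlen_create() is the missing inverse of the two accessors above it: the length goes in the high 32 bits, the hash in the low 32. A tiny round-trip check (sketch, BUG_ON() from <linux/bug.h>):

    u64 hashlen = hashlen_create(0xdeadbeef, 5);

    BUG_ON(hashlen_hash(hashlen) != 0xdeadbeef);
    BUG_ON(hashlen_len(hashlen) != 5);
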
diff --git a/include/linux/dynamic_queue_limits.h b/include/linux/dynamic_queue_limits.h
index 5621547..a4be703 100644
--- a/include/linux/dynamic_queue_limits.h
+++ b/include/linux/dynamic_queue_limits.h
@@ -73,14 +73,22 @@
{
BUG_ON(count > DQL_MAX_OBJECT);
- dql->num_queued += count;
dql->last_obj_cnt = count;
+
+	/* We want to force a write first, so that the cpu does not attempt
+	 * to get the cache line containing last_obj_cnt, num_queued and
+	 * adj_limit in Shared state, but instead directly does a Request
+	 * For Ownership. It is only a hint; we use barrier() only.
+	 */
+ barrier();
+
+ dql->num_queued += count;
}
/* Returns how many objects can be queued, < 0 indicates over limit. */
static inline int dql_avail(const struct dql *dql)
{
- return dql->adj_limit - dql->num_queued;
+ return ACCESS_ONCE(dql->adj_limit) - ACCESS_ONCE(dql->num_queued);
}
/* Record number of completed objects and recalculate the limit. */
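
The barrier() plus the ACCESS_ONCE() pair split dql into a locked producer (dql_queued() under the xmit lock) and a lockless reader (dql_avail()). A sketch of the two halves as a BQL-enabled driver sees them; in practice drivers go through the netdev_tx_sent_queue()/netdev_tx_completed_queue() wrappers, which boil down to exactly these calls:

    /* xmit path, txq lock held */
    dql_queued(&txq->dql, skb->len);
    if (dql_avail(&txq->dql) < 0)
            netif_tx_stop_queue(txq);

    /* TX completion path */
    dql_completed(&txq->dql, bytes_completed);
    if (dql_avail(&txq->dql) >= 0)
            netif_tx_wake_queue(txq);
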
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1a0bc6d..ca95abd 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -21,6 +21,7 @@
struct sk_buff;
struct sock;
struct seccomp_data;
+struct bpf_prog_aux;
/* ArgX, context and stack frame pointer register positions. Note,
* Arg1, Arg2, Arg3, etc are used as argument mappings of function
@@ -144,6 +145,12 @@
.off = 0, \
.imm = ((__u64) (IMM)) >> 32 })
+#define BPF_PSEUDO_MAP_FD 1
+
+/* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */
+#define BPF_LD_MAP_FD(DST, MAP_FD) \
+ BPF_LD_IMM64_RAW(DST, BPF_PSEUDO_MAP_FD, MAP_FD)
+
/* Short form of mov based on type, BPF_X: dst_reg = src_reg, BPF_K: dst_reg = imm32 */
#define BPF_MOV64_RAW(TYPE, DST, SRC, IMM) \
@@ -300,17 +307,12 @@
u8 image[];
};
-struct bpf_work_struct {
- struct bpf_prog *prog;
- struct work_struct work;
-};
-
struct bpf_prog {
u16 pages; /* Number of allocated pages */
bool jited; /* Is our filter JIT'ed? */
u32 len; /* Number of filter blocks */
struct sock_fprog_kern *orig_prog; /* Original BPF program */
- struct bpf_work_struct *work; /* Deferred free work struct */
+ struct bpf_prog_aux *aux; /* Auxiliary fields */
unsigned int (*bpf_func)(const struct sk_buff *skb,
const struct bpf_insn *filter);
/* Instructions for interpreter */
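
BPF_LD_MAP_FD() emits a BPF_LD_IMM64 whose src_reg is BPF_PSEUDO_MAP_FD; at load time the kernel resolves the process-local fd to a map pointer and the verifier rewrites the immediate. A sketch of an instruction sequence using it, modeled on the sample programs (map_fd is assumed to come from a prior BPF_MAP_CREATE via the new bpf(2) syscall):

    struct bpf_insn prog[] = {
            BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),   /* key = 0 on the stack */
            BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),   /* r2 = frame pointer */
            BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),  /* r2 = fp - 4 (key ptr) */
            BPF_LD_MAP_FD(BPF_REG_1, map_fd),       /* r1 = map (pseudo fd) */
            BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
                         BPF_FUNC_map_lookup_elem),
            BPF_MOV64_IMM(BPF_REG_0, 0),
            BPF_EXIT_INSN(),
    };
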
diff --git a/include/linux/hash.h b/include/linux/hash.h
index bd1754c..d0494c3 100644
--- a/include/linux/hash.h
+++ b/include/linux/hash.h
@@ -37,6 +37,9 @@
{
u64 hash = val;
+#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
+ hash = hash * GOLDEN_RATIO_PRIME_64;
+#else
/* Sigh, gcc can't optimise this alone like it does for 32 bits. */
u64 n = hash;
n <<= 18;
@@ -51,6 +54,7 @@
hash += n;
n <<= 2;
hash += n;
+#endif
/* High bits are more random, so use them. */
return hash >> (64 - bits);
diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
index 8018c91..b1be39c 100644
--- a/include/linux/ieee80211.h
+++ b/include/linux/ieee80211.h
@@ -6,6 +6,7 @@
* Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
* Copyright (c) 2005, Devicescape Software, Inc.
* Copyright (c) 2006, Michael Wu <flamingice@sourmilk.net>
+ * Copyright (c) 2013 - 2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -165,8 +166,12 @@
#define IEEE80211_MAX_MESH_ID_LEN 32
+#define IEEE80211_FIRST_TSPEC_TSID 8
#define IEEE80211_NUM_TIDS 16
+/* number of user priorities 802.11 uses */
+#define IEEE80211_NUM_UPS 8
+
#define IEEE80211_QOS_CTL_LEN 2
/* 1d tag mask */
#define IEEE80211_QOS_CTL_TAG1D_MASK 0x0007
@@ -1823,7 +1828,8 @@
WLAN_EID_DMG_TSPEC = 146,
WLAN_EID_DMG_AT = 147,
WLAN_EID_DMG_CAP = 148,
- /* 149-150 reserved for Cisco */
+ /* 149 reserved for Cisco */
+ WLAN_EID_CISCO_VENDOR_SPECIFIC = 150,
WLAN_EID_DMG_OPERATION = 151,
WLAN_EID_DMG_BSS_PARAM_CHANGE = 152,
WLAN_EID_DMG_BEAM_REFINEMENT = 153,
diff --git a/include/linux/iio/trigger.h b/include/linux/iio/trigger.h
index 4b79ffe..fa76c79 100644
--- a/include/linux/iio/trigger.h
+++ b/include/linux/iio/trigger.h
@@ -84,10 +84,12 @@
put_device(&trig->dev);
}
-static inline void iio_trigger_get(struct iio_trigger *trig)
+static inline struct iio_trigger *iio_trigger_get(struct iio_trigger *trig)
{
get_device(&trig->dev);
__module_get(trig->ops->owner);
+
+ return trig;
}
/**
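
Returning the trigger makes get-and-assign a one-liner in driver probe paths, e.g. (sketch):

    indio_dev->trig = iio_trigger_get(trig);
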
diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h
index 1f44466..c367cbd 100644
--- a/include/linux/jiffies.h
+++ b/include/linux/jiffies.h
@@ -258,23 +258,11 @@
#define SEC_JIFFIE_SC (32 - SHIFT_HZ)
#endif
#define NSEC_JIFFIE_SC (SEC_JIFFIE_SC + 29)
-#define USEC_JIFFIE_SC (SEC_JIFFIE_SC + 19)
#define SEC_CONVERSION ((unsigned long)((((u64)NSEC_PER_SEC << SEC_JIFFIE_SC) +\
TICK_NSEC -1) / (u64)TICK_NSEC))
#define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\
TICK_NSEC -1) / (u64)TICK_NSEC))
-#define USEC_CONVERSION \
- ((unsigned long)((((u64)NSEC_PER_USEC << USEC_JIFFIE_SC) +\
- TICK_NSEC -1) / (u64)TICK_NSEC))
-/*
- * USEC_ROUND is used in the timeval to jiffie conversion. See there
- * for more details. It is the scaled resolution rounding value. Note
- * that it is a 64-bit value. Since, when it is applied, we are already
- * in jiffies (albit scaled), it is nothing but the bits we will shift
- * off.
- */
-#define USEC_ROUND (u64)(((u64)1 << USEC_JIFFIE_SC) - 1)
/*
* The maximum jiffie value is (MAX_INT >> 1). Here we translate that
* into seconds. The 64-bit case will overflow if we are not careful,
diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
index 1befd8d..03b5608 100644
--- a/include/linux/mlx4/device.h
+++ b/include/linux/mlx4/device.h
@@ -185,19 +185,24 @@
MLX4_DEV_CAP_FLAG2_DMFS_IPOIB = 1LL << 9,
MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS = 1LL << 10,
MLX4_DEV_CAP_FLAG2_MAD_DEMUX = 1LL << 11,
+ MLX4_DEV_CAP_FLAG2_CQE_STRIDE = 1LL << 12,
+ MLX4_DEV_CAP_FLAG2_EQE_STRIDE = 1LL << 13
};
enum {
MLX4_DEV_CAP_64B_EQE_ENABLED = 1LL << 0,
- MLX4_DEV_CAP_64B_CQE_ENABLED = 1LL << 1
+ MLX4_DEV_CAP_64B_CQE_ENABLED = 1LL << 1,
+ MLX4_DEV_CAP_CQE_STRIDE_ENABLED = 1LL << 2,
+ MLX4_DEV_CAP_EQE_STRIDE_ENABLED = 1LL << 3
};
enum {
- MLX4_USER_DEV_CAP_64B_CQE = 1L << 0
+ MLX4_USER_DEV_CAP_LARGE_CQE = 1L << 0
};
enum {
- MLX4_FUNC_CAP_64B_EQE_CQE = 1L << 0
+ MLX4_FUNC_CAP_64B_EQE_CQE = 1L << 0,
+ MLX4_FUNC_CAP_EQE_CQE_STRIDE = 1L << 1
};
@@ -210,6 +215,7 @@
MLX4_BMME_FLAG_TYPE_2_WIN = 1 << 9,
MLX4_BMME_FLAG_RESERVED_LKEY = 1 << 10,
MLX4_BMME_FLAG_FAST_REG_WR = 1 << 11,
+ MLX4_BMME_FLAG_VSD_INIT2RTR = 1 << 28,
};
enum mlx4_event {
diff --git a/include/linux/mlx4/qp.h b/include/linux/mlx4/qp.h
index 7040dc9..5f4e36c 100644
--- a/include/linux/mlx4/qp.h
+++ b/include/linux/mlx4/qp.h
@@ -56,7 +56,8 @@
MLX4_QP_OPTPAR_RNR_RETRY = 1 << 13,
MLX4_QP_OPTPAR_ACK_TIMEOUT = 1 << 14,
MLX4_QP_OPTPAR_SCHED_QUEUE = 1 << 16,
- MLX4_QP_OPTPAR_COUNTER_INDEX = 1 << 20
+ MLX4_QP_OPTPAR_COUNTER_INDEX = 1 << 20,
+ MLX4_QP_OPTPAR_VLAN_STRIPPING = 1 << 21,
};
enum mlx4_qp_state {
@@ -423,13 +424,20 @@
enum mlx4_update_qp_attr {
MLX4_UPDATE_QP_SMAC = 1 << 0,
+ MLX4_UPDATE_QP_VSD = 1 << 2,
+ MLX4_UPDATE_QP_SUPPORTED_ATTRS = (1 << 2) - 1
+};
+
+enum mlx4_update_qp_params_flags {
+ MLX4_UPDATE_QP_PARAMS_FLAGS_VSD_ENABLE = 1 << 0,
};
struct mlx4_update_qp_params {
u8 smac_index;
+ u32 flags;
};
-int mlx4_update_qp(struct mlx4_dev *dev, struct mlx4_qp *qp,
+int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn,
enum mlx4_update_qp_attr attr,
struct mlx4_update_qp_params *params);
int mlx4_qp_modify(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index ba72f6b..9b7fbac 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -543,7 +543,7 @@
* read mostly part
*/
struct net_device *dev;
- struct Qdisc *qdisc;
+ struct Qdisc __rcu *qdisc;
struct Qdisc *qdisc_sleeping;
#ifdef CONFIG_SYSFS
struct kobject kobj;
@@ -1789,7 +1789,7 @@
static inline bool netdev_uses_dsa(struct net_device *dev)
{
-#ifdef CONFIG_NET_DSA
+#if IS_ENABLED(CONFIG_NET_DSA)
if (dev->dsa_ptr != NULL)
return dsa_uses_tagged_protocol(dev->dsa_ptr);
#endif
@@ -1874,7 +1874,7 @@
/* jiffies when first packet was created/queued */
unsigned long age;
- /* Used in ipv6_gro_receive() */
+ /* Used in ipv6_gro_receive() and foo-over-udp */
u16 proto;
/* Used in udp_gro_receive */
@@ -1911,7 +1911,6 @@
struct offload_callbacks {
struct sk_buff *(*gso_segment)(struct sk_buff *skb,
netdev_features_t features);
- int (*gso_send_check)(struct sk_buff *skb);
struct sk_buff **(*gro_receive)(struct sk_buff **head,
struct sk_buff *skb);
int (*gro_complete)(struct sk_buff *skb, int nhoff);
@@ -1925,16 +1924,10 @@
struct udp_offload {
__be16 port;
+ u8 ipproto;
struct offload_callbacks callbacks;
};
-struct dsa_device_ops {
- netdev_tx_t (*xmit)(struct sk_buff *skb, struct net_device *dev);
- int (*rcv)(struct sk_buff *skb, struct net_device *dev,
- struct packet_type *pt, struct net_device *orig_dev);
-};
-
-
/* often modified stats are per cpu, other are shared (netdev->stats) */
struct pcpu_sw_netstats {
u64 rx_packets;
@@ -2083,8 +2076,8 @@
void dev_add_offload(struct packet_offload *po);
void dev_remove_offload(struct packet_offload *po);
-struct net_device *dev_get_by_flags_rcu(struct net *net, unsigned short flags,
- unsigned short mask);
+struct net_device *__dev_get_by_flags(struct net *net, unsigned short flags,
+ unsigned short mask);
struct net_device *dev_get_by_name(struct net *net, const char *name);
struct net_device *dev_get_by_name_rcu(struct net *net, const char *name);
struct net_device *__dev_get_by_name(struct net *net, const char *name);
@@ -2356,12 +2349,7 @@
DECLARE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);
void __netif_schedule(struct Qdisc *q);
-
-static inline void netif_schedule_queue(struct netdev_queue *txq)
-{
- if (!(txq->state & QUEUE_STATE_ANY_XOFF))
- __netif_schedule(txq->qdisc);
-}
+void netif_schedule_queue(struct netdev_queue *txq);
static inline void netif_tx_schedule_all(struct net_device *dev)
{
@@ -2397,11 +2385,7 @@
}
}
-static inline void netif_tx_wake_queue(struct netdev_queue *dev_queue)
-{
- if (test_and_clear_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state))
- __netif_schedule(dev_queue->qdisc);
-}
+void netif_tx_wake_queue(struct netdev_queue *dev_queue);
/**
* netif_wake_queue - restart transmit
@@ -2673,19 +2657,7 @@
return __netif_subqueue_stopped(dev, skb_get_queue_mapping(skb));
}
-/**
- * netif_wake_subqueue - allow sending packets on subqueue
- * @dev: network device
- * @queue_index: sub queue index
- *
- * Resume individual transmit queue of a device with multiple transmit queues.
- */
-static inline void netif_wake_subqueue(struct net_device *dev, u16 queue_index)
-{
- struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);
- if (test_and_clear_bit(__QUEUE_STATE_DRV_XOFF, &txq->state))
- __netif_schedule(txq->qdisc);
-}
+void netif_wake_subqueue(struct net_device *dev, u16 queue_index);
#ifdef CONFIG_XPS
int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask,
@@ -3640,22 +3612,22 @@
}
__printf(3, 4)
-int netdev_printk(const char *level, const struct net_device *dev,
- const char *format, ...);
+void netdev_printk(const char *level, const struct net_device *dev,
+ const char *format, ...);
__printf(2, 3)
-int netdev_emerg(const struct net_device *dev, const char *format, ...);
+void netdev_emerg(const struct net_device *dev, const char *format, ...);
__printf(2, 3)
-int netdev_alert(const struct net_device *dev, const char *format, ...);
+void netdev_alert(const struct net_device *dev, const char *format, ...);
__printf(2, 3)
-int netdev_crit(const struct net_device *dev, const char *format, ...);
+void netdev_crit(const struct net_device *dev, const char *format, ...);
__printf(2, 3)
-int netdev_err(const struct net_device *dev, const char *format, ...);
+void netdev_err(const struct net_device *dev, const char *format, ...);
__printf(2, 3)
-int netdev_warn(const struct net_device *dev, const char *format, ...);
+void netdev_warn(const struct net_device *dev, const char *format, ...);
__printf(2, 3)
-int netdev_notice(const struct net_device *dev, const char *format, ...);
+void netdev_notice(const struct net_device *dev, const char *format, ...);
__printf(2, 3)
-int netdev_info(const struct net_device *dev, const char *format, ...);
+void netdev_info(const struct net_device *dev, const char *format, ...);
#define MODULE_ALIAS_NETDEV(device) \
MODULE_ALIAS("netdev-" device)
@@ -3673,7 +3645,6 @@
({ \
if (0) \
netdev_printk(KERN_DEBUG, __dev, format, ##args); \
- 0; \
})
#endif
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 61978a4..96453f9 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -303,6 +303,7 @@
D3cold, not set for devices
powered on/off by the
corresponding bridge */
+ unsigned int ignore_hotplug:1; /* Ignore hotplug events */
unsigned int d3_delay; /* D3->D0 transition time in ms */
unsigned int d3cold_delay; /* D3cold->D0 transition time in ms */
@@ -1021,6 +1022,11 @@
bool pci_check_pme_status(struct pci_dev *dev);
void pci_pme_wakeup_bus(struct pci_bus *bus);
+static inline void pci_ignore_hotplug(struct pci_dev *dev)
+{
+ dev->ignore_hotplug = 1;
+}
+
static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
bool enable)
{
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 3dfbf23..ef5894c 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -71,6 +71,7 @@
void percpu_ref_exit(struct percpu_ref *ref);
void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
percpu_ref_func_t *confirm_kill);
+void __percpu_ref_kill_expedited(struct percpu_ref *ref);
/**
* percpu_ref_kill - drop the initial ref
diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index 167bae7..6cacbce 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -47,6 +47,16 @@
rcu_dereference_check(p, lockdep_rtnl_is_held())
/**
+ * rcu_dereference_bh_rtnl - rcu_dereference_bh with debug checking
+ * @p: The pointer to read, prior to dereference
+ *
+ * Do an rcu_dereference_bh(p), but check that the caller either holds
+ * rcu_read_lock_bh() or RTNL. Note: please prefer rtnl_dereference() or
+ * rcu_dereference_bh().
+ */
+#define rcu_dereference_bh_rtnl(p) \
+ rcu_dereference_bh_check(p, lockdep_rtnl_is_held())
+
+/**
* rtnl_dereference - fetch RCU pointer when updates are prevented by RTNL
* @p: The pointer to read, prior to dereferencing
*
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index c4ff43f..262efdb 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -527,49 +527,76 @@
char cb[48] __aligned(8);
unsigned long _skb_refdst;
+ void (*destructor)(struct sk_buff *skb);
#ifdef CONFIG_XFRM
struct sec_path *sp;
#endif
- unsigned int len,
- data_len;
- __u16 mac_len,
- hdr_len;
- union {
- __wsum csum;
- struct {
- __u16 csum_start;
- __u16 csum_offset;
- };
- };
- __u32 priority;
- kmemcheck_bitfield_begin(flags1);
- __u8 ignore_df:1,
- cloned:1,
- ip_summed:2,
- nohdr:1,
- nfctinfo:3;
- __u8 pkt_type:3,
- fclone:2,
- ipvs_property:1,
- peeked:1,
- nf_trace:1;
- kmemcheck_bitfield_end(flags1);
- __be16 protocol;
-
- void (*destructor)(struct sk_buff *skb);
#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE)
struct nf_conntrack *nfct;
#endif
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
struct nf_bridge_info *nf_bridge;
#endif
+ unsigned int len,
+ data_len;
+ __u16 mac_len,
+ hdr_len;
- int skb_iif;
+ /* Following fields are _not_ copied in __copy_skb_header()
+ * Note that queue_mapping is here mostly to fill a hole.
+ */
+ kmemcheck_bitfield_begin(flags1);
+ __u16 queue_mapping;
+ __u8 cloned:1,
+ nohdr:1,
+ fclone:2,
+ peeked:1,
+ head_frag:1,
+ xmit_more:1;
+ /* one bit hole */
+ kmemcheck_bitfield_end(flags1);
- __u32 hash;
+ /* fields enclosed in headers_start/headers_end are copied
+ * using a single memcpy() in __copy_skb_header()
+ */
+ __u32 headers_start[0];
- __be16 vlan_proto;
- __u16 vlan_tci;
+/* if you move pkt_type around you also must adapt those constants */
+#ifdef __BIG_ENDIAN_BITFIELD
+#define PKT_TYPE_MAX (7 << 5)
+#else
+#define PKT_TYPE_MAX 7
+#endif
+#define PKT_TYPE_OFFSET() offsetof(struct sk_buff, __pkt_type_offset)
+
+ __u8 __pkt_type_offset[0];
+ __u8 pkt_type:3;
+ __u8 pfmemalloc:1;
+ __u8 ignore_df:1;
+ __u8 nfctinfo:3;
+
+ __u8 nf_trace:1;
+ __u8 ip_summed:2;
+ __u8 ooo_okay:1;
+ __u8 l4_hash:1;
+ __u8 sw_hash:1;
+ __u8 wifi_acked_valid:1;
+ __u8 wifi_acked:1;
+
+ __u8 no_fcs:1;
+ /* Indicates the inner headers are valid in the skbuff. */
+ __u8 encapsulation:1;
+ __u8 encap_hdr_csum:1;
+ __u8 csum_valid:1;
+ __u8 csum_complete_sw:1;
+ __u8 csum_level:2;
+ __u8 csum_bad:1;
+
+#ifdef CONFIG_IPV6_NDISC_NODETYPE
+ __u8 ndisc_nodetype:2;
+#endif
+ __u8 ipvs_property:1;
+ /* 5 or 7 bit hole */
#ifdef CONFIG_NET_SCHED
__u16 tc_index; /* traffic control index */
@@ -578,28 +605,18 @@
#endif
#endif
- __u16 queue_mapping;
- kmemcheck_bitfield_begin(flags2);
- __u8 xmit_more:1;
-#ifdef CONFIG_IPV6_NDISC_NODETYPE
- __u8 ndisc_nodetype:2;
-#endif
- __u8 pfmemalloc:1;
- __u8 ooo_okay:1;
- __u8 l4_hash:1;
- __u8 sw_hash:1;
- __u8 wifi_acked_valid:1;
- __u8 wifi_acked:1;
- __u8 no_fcs:1;
- __u8 head_frag:1;
- /* Indicates the inner headers are valid in the skbuff. */
- __u8 encapsulation:1;
- __u8 encap_hdr_csum:1;
- __u8 csum_valid:1;
- __u8 csum_complete_sw:1;
- /* 1/3 bit hole (depending on ndisc_nodetype presence) */
- kmemcheck_bitfield_end(flags2);
-
+ union {
+ __wsum csum;
+ struct {
+ __u16 csum_start;
+ __u16 csum_offset;
+ };
+ };
+ __u32 priority;
+ int skb_iif;
+ __u32 hash;
+ __be16 vlan_proto;
+ __u16 vlan_tci;
#if defined CONFIG_NET_DMA || defined CONFIG_NET_RX_BUSY_POLL
union {
unsigned int napi_id;
@@ -615,19 +632,18 @@
__u32 reserved_tailroom;
};
- kmemcheck_bitfield_begin(flags3);
- __u8 csum_level:2;
- __u8 csum_bad:1;
- /* 13 bit hole */
- kmemcheck_bitfield_end(flags3);
-
__be16 inner_protocol;
__u16 inner_transport_header;
__u16 inner_network_header;
__u16 inner_mac_header;
+
+ __be16 protocol;
__u16 transport_header;
__u16 network_header;
__u16 mac_header;
+
+ __u32 headers_end[0];
+
/* These elements must be at the end, see alloc_skb() for details. */
sk_buff_data_t tail;
sk_buff_data_t end;
@@ -759,6 +775,12 @@
return __alloc_skb(size, priority, 0, NUMA_NO_NODE);
}
+struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
+ unsigned long data_len,
+ int max_page_order,
+ int *errcode,
+ gfp_t gfp_mask);
+
static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
gfp_t priority)
{
@@ -1067,6 +1089,7 @@
* Drop a reference to the header part of the buffer. This is done
* by acquiring a payload reference. You must not read from the header
* part of skb->data after this.
+ * Note : Check if you can use __skb_header_release() instead.
*/
static inline void skb_header_release(struct sk_buff *skb)
{
@@ -1076,6 +1099,20 @@
}
/**
+ * __skb_header_release - release reference to header
+ * @skb: buffer to operate on
+ *
+ * Variant of skb_header_release() assuming skb is private to caller.
+ * We can avoid one atomic operation.
+ */
+static inline void __skb_header_release(struct sk_buff *skb)
+{
+ skb->nohdr = 1;
+ atomic_set(&skb_shinfo(skb)->dataref, 1 + (1 << SKB_DATAREF_SHIFT));
+}
+
+
+/**
* skb_shared - is the buffer shared
* @skb: buffer to check
*
@@ -3009,19 +3046,22 @@
}
/* Note: This doesn't put any conntrack and bridge info in dst. */
-static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src)
+static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src,
+ bool copy)
{
#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE)
dst->nfct = src->nfct;
nf_conntrack_get(src->nfct);
- dst->nfctinfo = src->nfctinfo;
+ if (copy)
+ dst->nfctinfo = src->nfctinfo;
#endif
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
dst->nf_bridge = src->nf_bridge;
nf_bridge_get(src->nf_bridge);
#endif
#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
- dst->nf_trace = src->nf_trace;
+ if (copy)
+ dst->nf_trace = src->nf_trace;
#endif
}
@@ -3033,7 +3073,7 @@
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
nf_bridge_put(dst->nf_bridge);
#endif
- __nf_copy(dst, src);
+ __nf_copy(dst, src, true);
}
#ifdef CONFIG_NETWORK_SECMARK
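
The zero-length headers_start[]/headers_end[] markers exist so __copy_skb_header() in net/core/skbuff.c can replace its long field-by-field copy with a single memcpy() over the enclosed region, roughly:

    memcpy(&new->headers_start, &old->headers_start,
           offsetof(struct sk_buff, headers_end) -
           offsetof(struct sk_buff, headers_start));
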
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 0f86d85..bda9b81 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -65,6 +65,7 @@
struct perf_event_attr;
struct file_handle;
struct sigaltstack;
+union bpf_attr;
#include <linux/types.h>
#include <linux/aio_abi.h>
@@ -875,5 +876,5 @@
const char __user *uargs);
asmlinkage long sys_getrandom(char __user *buf, size_t count,
unsigned int flags);
-
+asmlinkage long sys_bpf(int cmd, union bpf_attr *attr, unsigned int size);
#endif
diff --git a/include/linux/vga_switcheroo.h b/include/linux/vga_switcheroo.h
index 502073a..b483abd 100644
--- a/include/linux/vga_switcheroo.h
+++ b/include/linux/vga_switcheroo.h
@@ -64,6 +64,7 @@
void vga_switcheroo_set_dynamic_switch(struct pci_dev *pdev, enum vga_switcheroo_state dynamic);
int vga_switcheroo_init_domain_pm_ops(struct device *dev, struct dev_pm_domain *domain);
+void vga_switcheroo_fini_domain_pm_ops(struct device *dev);
int vga_switcheroo_init_domain_pm_optimus_hdmi_audio(struct device *dev, struct dev_pm_domain *domain);
#else
@@ -82,6 +83,7 @@
static inline void vga_switcheroo_set_dynamic_switch(struct pci_dev *pdev, enum vga_switcheroo_state dynamic) {}
static inline int vga_switcheroo_init_domain_pm_ops(struct device *dev, struct dev_pm_domain *domain) { return -EINVAL; }
+static inline void vga_switcheroo_fini_domain_pm_ops(struct device *dev) {}
static inline int vga_switcheroo_init_domain_pm_optimus_hdmi_audio(struct device *dev, struct dev_pm_domain *domain) { return -EINVAL; }
#endif
diff --git a/include/linux/vgaarb.h b/include/linux/vgaarb.h
index 2c02f3a..c37bd4d 100644
--- a/include/linux/vgaarb.h
+++ b/include/linux/vgaarb.h
@@ -182,7 +182,6 @@
* vga_get()...
*/
-#ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE
#ifdef CONFIG_VGA_ARB
extern struct pci_dev *vga_default_device(void);
extern void vga_set_default_device(struct pci_dev *pdev);
@@ -190,7 +189,6 @@
static inline struct pci_dev *vga_default_device(void) { return NULL; };
static inline void vga_set_default_device(struct pci_dev *pdev) { };
#endif
-#endif
/**
* vga_conflicts
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a0cc2e9..b996e6cd 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -419,7 +419,7 @@
alloc_workqueue("%s", WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, \
1, (name))
#define create_singlethread_workqueue(name) \
- alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM, 1, (name))
+ alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name)
extern void destroy_workqueue(struct workqueue_struct *wq);
diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
index fc910a6..2fefcf4 100644
--- a/include/media/videobuf2-core.h
+++ b/include/media/videobuf2-core.h
@@ -295,7 +295,7 @@
* can return an error if hardware fails, in that case all
* buffers that have been already given by the @buf_queue
* callback are to be returned by the driver by calling
- * @vb2_buffer_done(VB2_BUF_STATE_DEQUEUED).
+ * @vb2_buffer_done(VB2_BUF_STATE_QUEUED).
* If you need a minimum number of buffers before you can
* start streaming, then set @min_buffers_needed in the
* vb2_queue structure. If that is non-zero then
@@ -380,6 +380,9 @@
* @start_streaming_called: start_streaming() was called successfully and we
* started streaming.
* @error: a fatal error occurred on the queue
+ * @waiting_for_buffers: used in poll() to check if vb2 is still waiting for
+ * buffers. Only set for capture queues if qbuf has not yet been
+ * called since poll() needs to return POLLERR in that situation.
* @fileio: file io emulator internal data, used only if emulator is active
* @threadio: thread io internal data, used only if thread is active
*/
@@ -417,6 +420,7 @@
unsigned int streaming:1;
unsigned int start_streaming_called:1;
unsigned int error:1;
+ unsigned int waiting_for_buffers:1;
struct vb2_fileio_data *fileio;
struct vb2_threadio_data *threadio;
diff --git a/include/net/addrconf.h b/include/net/addrconf.h
index f679877..d13573b 100644
--- a/include/net/addrconf.h
+++ b/include/net/addrconf.h
@@ -202,8 +202,9 @@
const struct in6_addr *addr);
void ipv6_sock_ac_close(struct sock *sk);
-int ipv6_dev_ac_inc(struct net_device *dev, const struct in6_addr *addr);
+int __ipv6_dev_ac_inc(struct inet6_dev *idev, const struct in6_addr *addr);
int __ipv6_dev_ac_dec(struct inet6_dev *idev, const struct in6_addr *addr);
+void ipv6_ac_destroy_dev(struct inet6_dev *idev);
bool ipv6_chk_acast_addr(struct net *net, struct net_device *dev,
const struct in6_addr *addr);
bool ipv6_chk_acast_addr_src(struct net *net, struct net_device *dev,
diff --git a/include/net/ah.h b/include/net/ah.h
index ca95b98..4e2dfa4 100644
--- a/include/net/ah.h
+++ b/include/net/ah.h
@@ -3,9 +3,6 @@
#include <linux/skbuff.h>
-/* This is the maximum truncated ICV length that we know of. */
-#define MAX_AH_AUTH_LEN 64
-
struct crypto_ahash;
struct ah_data {
diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
index b0ded13..206b92b 100644
--- a/include/net/bluetooth/hci_core.h
+++ b/include/net/bluetooth/hci_core.h
@@ -539,7 +539,6 @@
HCI_CONN_RSWITCH_PEND,
HCI_CONN_MODE_CHANGE_PEND,
HCI_CONN_SCO_SETUP_PEND,
- HCI_CONN_LE_SMP_PEND,
HCI_CONN_MGMT_CONNECTED,
HCI_CONN_SSP_ENABLED,
HCI_CONN_SC_ENABLED,
@@ -553,6 +552,7 @@
HCI_CONN_FIPS,
HCI_CONN_STK_ENCRYPT,
HCI_CONN_AUTH_INITIATOR,
+ HCI_CONN_DROP,
};
static inline bool hci_conn_ssp_enabled(struct hci_conn *conn)
@@ -702,7 +702,7 @@
return NULL;
}
-void hci_disconnect(struct hci_conn *conn, __u8 reason);
+int hci_disconnect(struct hci_conn *conn, __u8 reason);
bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
void hci_sco_setup(struct hci_conn *conn, __u8 status);
@@ -756,9 +756,10 @@
* _get()/_drop() in it, but require the caller to have a valid ref (FIXME).
*/
-static inline void hci_conn_get(struct hci_conn *conn)
+static inline struct hci_conn *hci_conn_get(struct hci_conn *conn)
{
get_device(&conn->dev);
+ return conn;
}
static inline void hci_conn_put(struct hci_conn *conn)
@@ -790,7 +791,7 @@
if (!conn->out)
timeo *= 2;
} else {
- timeo = msecs_to_jiffies(10);
+ timeo = 0;
}
break;
@@ -799,7 +800,7 @@
break;
default:
- timeo = msecs_to_jiffies(10);
+ timeo = 0;
break;
}
@@ -1341,8 +1342,7 @@
int mgmt_user_passkey_notify(struct hci_dev *hdev, bdaddr_t *bdaddr,
u8 link_type, u8 addr_type, u32 passkey,
u8 entered);
-void mgmt_auth_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
- u8 addr_type, u8 status);
+void mgmt_auth_failed(struct hci_conn *conn, u8 status);
void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status);
void mgmt_ssp_enable_complete(struct hci_dev *hdev, u8 enable, u8 status);
void mgmt_sc_enable_complete(struct hci_dev *hdev, u8 enable, u8 status);
diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
index cedda39..ead99f0 100644
--- a/include/net/bluetooth/l2cap.h
+++ b/include/net/bluetooth/l2cap.h
@@ -625,9 +625,6 @@
struct delayed_work info_timer;
- int disconn_err;
- struct work_struct disconn_work;
-
struct sk_buff *rx_skb;
__u32 rx_len;
__u8 tx_ident;
@@ -636,6 +633,8 @@
struct sk_buff_head pending_rx;
struct work_struct pending_rx_work;
+ struct work_struct id_addr_update_work;
+
__u8 disc_reason;
struct l2cap_chan *smp;
@@ -711,6 +710,7 @@
FLAG_DEFER_SETUP,
FLAG_LE_CONN_REQ_SENT,
FLAG_PENDING_SECURITY,
+ FLAG_HOLD_HCI_CONN,
};
enum {
@@ -939,15 +939,13 @@
void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan);
void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan);
void l2cap_chan_del(struct l2cap_chan *chan, int err);
-void l2cap_conn_update_id_addr(struct hci_conn *hcon);
void l2cap_send_conn_req(struct l2cap_chan *chan);
void l2cap_move_start(struct l2cap_chan *chan);
void l2cap_logical_cfm(struct l2cap_chan *chan, struct hci_chan *hchan,
u8 status);
void __l2cap_physical_cfm(struct l2cap_chan *chan, int result);
-void l2cap_conn_shutdown(struct l2cap_conn *conn, int err);
-void l2cap_conn_get(struct l2cap_conn *conn);
+struct l2cap_conn *l2cap_conn_get(struct l2cap_conn *conn);
void l2cap_conn_put(struct l2cap_conn *conn);
int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user);
diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index ab21299..a2ddcf2 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -4,6 +4,7 @@
* 802.11 device and configuration interface
*
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -663,6 +664,7 @@
* @crypto: crypto settings
* @privacy: the BSS uses privacy
* @auth_type: Authentication type (algorithm)
+ * @smps_mode: SMPS mode
* @inactivity_timeout: time in seconds to determine station's inactivity.
* @p2p_ctwindow: P2P CT Window
* @p2p_opp_ps: P2P opportunistic PS
@@ -681,6 +683,7 @@
struct cfg80211_crypto_settings crypto;
bool privacy;
enum nl80211_auth_type auth_type;
+ enum nl80211_smps_mode smps_mode;
int inactivity_timeout;
u8 p2p_ctwindow;
bool p2p_opp_ps;
@@ -1607,10 +1610,12 @@
*
* @ASSOC_REQ_DISABLE_HT: Disable HT (802.11n)
* @ASSOC_REQ_DISABLE_VHT: Disable VHT
+ * @ASSOC_REQ_USE_RRM: Declare RRM capability in this association
*/
enum cfg80211_assoc_req_flags {
ASSOC_REQ_DISABLE_HT = BIT(0),
ASSOC_REQ_DISABLE_VHT = BIT(1),
+ ASSOC_REQ_USE_RRM = BIT(2),
};
/**
@@ -1802,6 +1807,7 @@
* @WIPHY_PARAM_FRAG_THRESHOLD: wiphy->frag_threshold has changed
* @WIPHY_PARAM_RTS_THRESHOLD: wiphy->rts_threshold has changed
* @WIPHY_PARAM_COVERAGE_CLASS: coverage class changed
+ * @WIPHY_PARAM_DYN_ACK: dynack has been enabled
*/
enum wiphy_params_flags {
WIPHY_PARAM_RETRY_SHORT = 1 << 0,
@@ -1809,6 +1815,7 @@
WIPHY_PARAM_FRAG_THRESHOLD = 1 << 2,
WIPHY_PARAM_RTS_THRESHOLD = 1 << 3,
WIPHY_PARAM_COVERAGE_CLASS = 1 << 4,
+ WIPHY_PARAM_DYN_ACK = 1 << 5,
};
/*
@@ -1975,14 +1982,12 @@
/**
* struct cfg80211_gtk_rekey_data - rekey data
- * @kek: key encryption key
- * @kck: key confirmation key
- * @replay_ctr: replay counter
+ * @kek: key encryption key (NL80211_KEK_LEN bytes)
+ * @kck: key confirmation key (NL80211_KCK_LEN bytes)
+ * @replay_ctr: replay counter (NL80211_REPLAY_CTR_LEN bytes)
*/
struct cfg80211_gtk_rekey_data {
- u8 kek[NL80211_KEK_LEN];
- u8 kck[NL80211_KCK_LEN];
- u8 replay_ctr[NL80211_REPLAY_CTR_LEN];
+ const u8 *kek, *kck, *replay_ctr;
};
/**
@@ -2315,6 +2320,17 @@
* @set_ap_chanwidth: Set the AP (including P2P GO) mode channel width for the
* given interface This is used e.g. for dynamic HT 20/40 MHz channel width
* changes during the lifetime of the BSS.
+ *
+ * @add_tx_ts: validate (if admitted_time is 0) or add a TX TS to the device
+ * with the given parameters; action frame exchange has been handled by
+ * userspace so this just has to modify the TX path to take the TS into
+ * account.
+ * If the admitted time is 0, just validate the parameters to make sure
+ * the session can be created at all; it is valid to always return
+ * success for that but that may result in inefficient behaviour (handshake
+ * with the peer followed by immediate teardown when the addition is later
+ * rejected)
+ * @del_tx_ts: remove an existing TX TS
*/
struct cfg80211_ops {
int (*suspend)(struct wiphy *wiphy, struct cfg80211_wowlan *wow);
@@ -2555,6 +2571,12 @@
int (*set_ap_chanwidth)(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_chan_def *chandef);
+
+ int (*add_tx_ts)(struct wiphy *wiphy, struct net_device *dev,
+ u8 tsid, const u8 *peer, u8 user_prio,
+ u16 admitted_time);
+ int (*del_tx_ts)(struct wiphy *wiphy, struct net_device *dev,
+ u8 tsid, const u8 *peer);
};
/*
@@ -2601,9 +2623,13 @@
* @WIPHY_FLAG_SUPPORTS_5_10_MHZ: Device supports 5 MHz and 10 MHz channels.
* @WIPHY_FLAG_HAS_CHANNEL_SWITCH: Device supports channel switch in
* beaconing mode (AP, IBSS, Mesh, ...).
+ * @WIPHY_FLAG_SUPPORTS_WMM_ADMISSION: the device supports setting up WMM
+ * TSPEC sessions (TID aka TSID 0-7) with the NL80211_CMD_ADD_TX_TS
+ * command. Standard IEEE 802.11 TSPEC setup is not yet supported, it
+ * needs to be able to handle Block-Ack agreements and other things.
*/
enum wiphy_flags {
- /* use hole at 0 */
+ WIPHY_FLAG_SUPPORTS_WMM_ADMISSION = BIT(0),
/* use hole at 1 */
/* use hole at 2 */
WIPHY_FLAG_NETNS_OK = BIT(3),
@@ -3920,6 +3946,7 @@
* moves to cfg80211 in this call
* @buf: authentication frame (header + body)
* @len: length of the frame data
+ * @uapsd_queues: bitmap of ACs configured to uapsd. -1 if n/a.
*
* After being asked to associate via cfg80211_ops::assoc() the driver must
* call either this function or cfg80211_auth_timeout().
@@ -3928,7 +3955,8 @@
*/
void cfg80211_rx_assoc_resp(struct net_device *dev,
struct cfg80211_bss *bss,
- const u8 *buf, size_t len);
+ const u8 *buf, size_t len,
+ int uapsd_queues);
/**
* cfg80211_assoc_timeout - notification of timed out association
diff --git a/include/net/checksum.h b/include/net/checksum.h
index 87cb190..6465bae 100644
--- a/include/net/checksum.h
+++ b/include/net/checksum.h
@@ -122,9 +122,7 @@
static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
{
- __be32 diff[] = { ~from, to };
-
- *sum = csum_fold(csum_partial(diff, sizeof(diff), ~csum_unfold(*sum)));
+ *sum = csum_fold(csum_add(csum_sub(~csum_unfold(*sum), from), to));
}
/* Implements RFC 1624 (Incremental Internet Checksum)
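
The rewritten csum_replace4() folds the delta with csum_sub()/csum_add() instead of materializing a two-word buffer for csum_partial(); callers are unchanged. The classic RFC 1624 use, rewriting an address while keeping the IP header checksum valid:

    csum_replace4(&iph->check, iph->daddr, new_daddr);
    iph->daddr = new_daddr;
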
diff --git a/include/net/dsa.h b/include/net/dsa.h
index 9771292..58ad8c6 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -19,10 +19,13 @@
#include <linux/phy.h>
#include <linux/phy_fixed.h>
-/* Not an official ethertype value, used only internally for DSA
- * demultiplexing
- */
-#define ETH_P_BRCMTAG (ETH_P_XDSA + 1)
+enum dsa_tag_protocol {
+ DSA_TAG_PROTO_NONE = 0,
+ DSA_TAG_PROTO_DSA,
+ DSA_TAG_PROTO_TRAILER,
+ DSA_TAG_PROTO_EDSA,
+ DSA_TAG_PROTO_BRCM,
+};
#define DSA_MAX_SWITCHES 4
#define DSA_MAX_PORTS 12
@@ -31,7 +34,7 @@
/*
* How to access the switch configuration registers.
*/
- struct device *mii_bus;
+ struct device *host_dev;
int sw_addr;
/* Device tree node pointer for this specific switch chip
@@ -74,7 +77,7 @@
struct dsa_chip_data *chip;
};
-struct dsa_device_ops;
+struct packet_type;
struct dsa_switch_tree {
/*
@@ -88,8 +91,11 @@
* protocol to use.
*/
struct net_device *master_netdev;
- const struct dsa_device_ops *ops;
- __be16 tag_protocol;
+ int (*rcv)(struct sk_buff *skb,
+ struct net_device *dev,
+ struct packet_type *pt,
+ struct net_device *orig_dev);
+ enum dsa_tag_protocol tag_protocol;
/*
* The switch and port to which the CPU is attached.
@@ -128,9 +134,9 @@
struct dsa_switch_driver *drv;
/*
- * Reference to mii bus to use.
+ * Reference to host device to use.
*/
- struct mii_bus *master_mii_bus;
+ struct device *master_dev;
/*
* Slave mii_bus and devices for the individual ports.
@@ -166,15 +172,16 @@
struct dsa_switch_driver {
struct list_head list;
- __be16 tag_protocol;
+ enum dsa_tag_protocol tag_protocol;
int priv_size;
/*
* Probing and setup.
*/
- char *(*probe)(struct mii_bus *bus, int sw_addr);
+ char *(*probe)(struct device *host_dev, int sw_addr);
int (*setup)(struct dsa_switch *ds);
int (*set_addr)(struct dsa_switch *ds, u8 *addr);
+ u32 (*get_phy_flags)(struct dsa_switch *ds, int port);
/*
* Access to the switch's PHY registers.
@@ -203,10 +210,42 @@
void (*get_ethtool_stats)(struct dsa_switch *ds,
int port, uint64_t *data);
int (*get_sset_count)(struct dsa_switch *ds);
+
+ /*
+ * ethtool Wake-on-LAN
+ */
+ void (*get_wol)(struct dsa_switch *ds, int port,
+ struct ethtool_wolinfo *w);
+ int (*set_wol)(struct dsa_switch *ds, int port,
+ struct ethtool_wolinfo *w);
+
+ /*
+ * Suspend and resume
+ */
+ int (*suspend)(struct dsa_switch *ds);
+ int (*resume)(struct dsa_switch *ds);
+
+ /*
+ * Port enable/disable
+ */
+ int (*port_enable)(struct dsa_switch *ds, int port,
+ struct phy_device *phy);
+ void (*port_disable)(struct dsa_switch *ds, int port,
+ struct phy_device *phy);
+
+ /*
+	 * EEE settings
+ */
+ int (*set_eee)(struct dsa_switch *ds, int port,
+ struct phy_device *phydev,
+ struct ethtool_eee *e);
+ int (*get_eee)(struct dsa_switch *ds, int port,
+ struct ethtool_eee *e);
};
void register_switch_driver(struct dsa_switch_driver *type);
void unregister_switch_driver(struct dsa_switch_driver *type);
+struct mii_bus *dsa_host_dev_to_mii_bus(struct device *dev);
static inline void *ds_to_priv(struct dsa_switch *ds)
{
@@ -215,7 +254,6 @@
static inline bool dsa_uses_tagged_protocol(struct dsa_switch_tree *dst)
{
- return dst->tag_protocol != 0;
+ return dst->rcv != NULL;
}
-
#endif
diff --git a/include/net/dst.h b/include/net/dst.h
index 71c60f4..a8ae4e7 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -480,6 +480,7 @@
/* Flags for xfrm_lookup flags argument. */
enum {
XFRM_LOOKUP_ICMP = 1 << 0,
+ XFRM_LOOKUP_QUEUE = 1 << 1,
};
struct flowi;
@@ -490,7 +491,16 @@
int flags)
{
return dst_orig;
-}
+}
+
+static inline struct dst_entry *xfrm_lookup_route(struct net *net,
+ struct dst_entry *dst_orig,
+ const struct flowi *fl,
+ struct sock *sk,
+ int flags)
+{
+ return dst_orig;
+}
static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
{
@@ -502,6 +512,10 @@
const struct flowi *fl, struct sock *sk,
int flags);
+struct dst_entry *xfrm_lookup_route(struct net *net, struct dst_entry *dst_orig,
+ const struct flowi *fl, struct sock *sk,
+ int flags);
+
/* skb attached with this dst needs transformation if dst->xfrm is valid */
static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
{
diff --git a/include/net/genetlink.h b/include/net/genetlink.h
index 93695f0..af10c2c 100644
--- a/include/net/genetlink.h
+++ b/include/net/genetlink.h
@@ -394,4 +394,12 @@
return netlink_set_err(net->genl_sock, portid, group, code);
}
+static inline int genl_has_listeners(struct genl_family *family,
+ struct sock *sk, unsigned int group)
+{
+ if (WARN_ON_ONCE(group >= family->n_mcgrps))
+ return -EINVAL;
+ group = family->mcgrp_offset + group;
+ return netlink_has_listeners(sk, group);
+}
#endif /* __NET_GENERIC_NETLINK_H */
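
genl_has_listeners() lets a subsystem skip building a notification altogether when nobody has joined the multicast group, e.g. (sketch; my_family and MY_MCGRP are placeholders):

    if (!genl_has_listeners(&my_family, genl_info_net(info)->genl_sock,
                            MY_MCGRP))
            return 0;       /* no subscribers, skip the skb allocation */
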
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 5fbe656..848e85c 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -242,6 +242,15 @@
#endif
}
+static inline unsigned long
+inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
+ unsigned long max_when)
+{
+ u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
+
+ return (unsigned long)min_t(u64, when, max_when);
+}
+
struct sock *inet_csk_accept(struct sock *sk, int flags, int *err);
struct request_sock *inet_csk_search_req(const struct sock *sk,
diff --git a/include/net/ip.h b/include/net/ip.h
index 14bfc8e..0bb6207 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -180,8 +180,10 @@
return (arg->flags & IP_REPLY_ARG_NOSRCCHECK) ? FLOWI_FLAG_ANYSRC : 0;
}
-void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr,
- __be32 saddr, const struct ip_reply_arg *arg,
+void ip_send_unicast_reply(struct net *net, struct sk_buff *skb,
+ const struct ip_options *sopt,
+ __be32 daddr, __be32 saddr,
+ const struct ip_reply_arg *arg,
unsigned int len);
#define IP_INC_STATS(net, field) SNMP_INC_STATS64((net)->mib.ip_statistics, field)
@@ -511,7 +513,14 @@
void ip_options_build(struct sk_buff *skb, struct ip_options *opt,
__be32 daddr, struct rtable *rt, int is_frag);
-int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb);
+
+int __ip_options_echo(struct ip_options *dopt, struct sk_buff *skb,
+ const struct ip_options *sopt);
+static inline int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb)
+{
+ return __ip_options_echo(dopt, skb, &IPCB(skb)->opt);
+}
+
void ip_options_fragment(struct sk_buff *skb);
int ip_options_compile(struct net *net, struct ip_options *opt,
struct sk_buff *skb);
@@ -548,6 +557,10 @@
void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
u32 info);
+bool icmp_global_allow(void);
+extern int sysctl_icmp_msgs_per_sec;
+extern int sysctl_icmp_msgs_burst;
+
#ifdef CONFIG_PROC_FS
int ip_misc_proc_init(void);
#endif
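The new global limiter is meant to sit in front of the existing
per-destination ICMP rate limiting; an illustrative (not from this patch)
call site in a sender path:

	if (!icmp_global_allow())
		goto out;	/* over the global ICMP budget, don't send */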
diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
index 8dd8cab..7f538ba 100644
--- a/include/net/ip_tunnels.h
+++ b/include/net/ip_tunnels.h
@@ -10,6 +10,7 @@
#include <net/gro_cells.h>
#include <net/inet_ecn.h>
#include <net/ip.h>
+#include <net/netns/generic.h>
#include <net/rtnetlink.h>
#if IS_ENABLED(CONFIG_IPV6)
@@ -31,6 +32,13 @@
};
#endif
+struct ip_tunnel_encap {
+ __u16 type;
+ __u16 flags;
+ __be16 sport;
+ __be16 dport;
+};
+
struct ip_tunnel_prl_entry {
struct ip_tunnel_prl_entry __rcu *next;
__be32 addr;
@@ -56,13 +64,18 @@
/* These four fields used only by GRE */
__u32 i_seqno; /* The last seen seqno */
__u32 o_seqno; /* The last output seqno */
- int hlen; /* Precalculated header length */
+ int tun_hlen; /* Precalculated header length */
int mlink;
struct ip_tunnel_dst __percpu *dst_cache;
struct ip_tunnel_parm parms;
+ int encap_hlen; /* Encap header length (FOU,GUE) */
+ struct ip_tunnel_encap encap;
+
+ int hlen; /* tun_hlen + encap_hlen */
+
/* for SIT */
#ifdef CONFIG_IPV6_SIT_6RD
struct ip_tunnel_6rd_parm ip6rd;
@@ -114,6 +127,8 @@
void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
const struct iphdr *tnl_params, const u8 protocol);
int ip_tunnel_ioctl(struct net_device *dev, struct ip_tunnel_parm *p, int cmd);
+int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t,
+ u8 *protocol, struct flowi4 *fl4);
int ip_tunnel_change_mtu(struct net_device *dev, int new_mtu);
struct rtnl_link_stats64 *ip_tunnel_get_stats64(struct net_device *dev,
@@ -131,6 +146,8 @@
struct ip_tunnel_parm *p);
void ip_tunnel_setup(struct net_device *dev, int net_id);
void ip_tunnel_dst_reset_all(struct ip_tunnel *t);
+int ip_tunnel_encap_setup(struct ip_tunnel *t,
+ struct ip_tunnel_encap *ipencap);
/* Extract dsfield from inner protocol */
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
diff --git a/include/net/ipv6.h b/include/net/ipv6.h
index 7e247e9..97f4720 100644
--- a/include/net/ipv6.h
+++ b/include/net/ipv6.h
@@ -288,7 +288,8 @@
struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
struct ipv6_txoptions *opt);
-bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb);
+bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb,
+ const struct inet6_skb_parm *opt);
static inline bool ipv6_accept_ra(struct inet6_dev *idev)
{
diff --git a/include/net/mac80211.h b/include/net/mac80211.h
index c9b2bec..0ad1f47 100644
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -4,6 +4,7 @@
* Copyright 2002-2005, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -1536,16 +1537,6 @@
* @IEEE80211_HW_MFP_CAPABLE:
* Hardware supports management frame protection (MFP, IEEE 802.11w).
*
- * @IEEE80211_HW_SUPPORTS_STATIC_SMPS:
- * Hardware supports static spatial multiplexing powersave,
- * ie. can turn off all but one chain even on HT connections
- * that should be using more chains.
- *
- * @IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS:
- * Hardware supports dynamic spatial multiplexing powersave,
- * ie. can turn off all but one chain and then wake the rest
- * up as required after, for example, rts/cts handshake.
- *
* @IEEE80211_HW_SUPPORTS_UAPSD:
* Hardware supports Unscheduled Automatic Power Save Delivery
* (U-APSD) in managed mode. The mode is configured with
@@ -1631,8 +1622,7 @@
IEEE80211_HW_SUPPORTS_DYNAMIC_PS = 1<<12,
IEEE80211_HW_MFP_CAPABLE = 1<<13,
IEEE80211_HW_WANT_MONITOR_VIF = 1<<14,
- IEEE80211_HW_SUPPORTS_STATIC_SMPS = 1<<15,
- IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS = 1<<16,
+ /* free slots */
IEEE80211_HW_SUPPORTS_UAPSD = 1<<17,
IEEE80211_HW_REPORTS_TX_ACK_STATUS = 1<<18,
IEEE80211_HW_CONNECTION_MONITOR = 1<<19,
@@ -2672,7 +2662,9 @@
*
* @set_coverage_class: Set slot time for given coverage class as specified
* in IEEE 802.11-2007 section 17.3.8.6 and modify ACK timeout
- * accordingly. This callback is not required and may sleep.
+ * accordingly; a coverage class of -1 enables the ACK timeout
+ * estimation algorithm (dynack). To disable dynack, set a valid
+ * coverage class value. This callback is not required and may sleep.
*
* @testmode_cmd: Implement a cfg80211 test mode command. The passed @vif may
* be %NULL. The callback can sleep.
@@ -2956,7 +2948,7 @@
int (*get_survey)(struct ieee80211_hw *hw, int idx,
struct survey_info *survey);
void (*rfkill_poll)(struct ieee80211_hw *hw);
- void (*set_coverage_class)(struct ieee80211_hw *hw, u8 coverage_class);
+ void (*set_coverage_class)(struct ieee80211_hw *hw, s16 coverage_class);
#ifdef CONFIG_NL80211_TESTMODE
int (*testmode_cmd)(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
void *data, int len);
diff --git a/include/net/mld.h b/include/net/mld.h
index faa1d16..01d7513 100644
--- a/include/net/mld.h
+++ b/include/net/mld.h
@@ -88,12 +88,15 @@
#define MLDV2_QQIC_EXP(value) (((value) >> 4) & 0x07)
#define MLDV2_QQIC_MAN(value) ((value) & 0x0f)
+#define MLD_EXP_MIN_LIMIT 32768UL
+#define MLDV1_MRD_MAX_COMPAT (MLD_EXP_MIN_LIMIT - 1)
+
static inline unsigned long mldv2_mrc(const struct mld2_query *mlh2)
{
/* RFC3810, 5.1.3. Maximum Response Code */
unsigned long ret, mc_mrc = ntohs(mlh2->mld2q_mrc);
- if (mc_mrc < 32768) {
+ if (mc_mrc < MLD_EXP_MIN_LIMIT) {
ret = mc_mrc;
} else {
unsigned long mc_man, mc_exp;
diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
index 3492434..9da7982 100644
--- a/include/net/netns/xfrm.h
+++ b/include/net/netns/xfrm.h
@@ -13,6 +13,19 @@
struct xfrm_policy_hash {
struct hlist_head *table;
unsigned int hmask;
+ u8 dbits4;
+ u8 sbits4;
+ u8 dbits6;
+ u8 sbits6;
+};
+
+struct xfrm_policy_hthresh {
+ struct work_struct work;
+ seqlock_t lock;
+ u8 lbits4;
+ u8 rbits4;
+ u8 lbits6;
+ u8 rbits6;
};
struct netns_xfrm {
@@ -41,6 +54,7 @@
struct xfrm_policy_hash policy_bydst[XFRM_POLICY_MAX * 2];
unsigned int policy_count[XFRM_POLICY_MAX * 2];
struct work_struct policy_hash_work;
+ struct xfrm_policy_hthresh policy_hthresh;
struct sock *nlsk;
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 6da46dc..73f9532 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -137,7 +137,7 @@
int tcf_exts_validate(struct net *net, struct tcf_proto *tp,
struct nlattr **tb, struct nlattr *rate_tlv,
struct tcf_exts *exts, bool ovr);
-void tcf_exts_destroy(struct tcf_proto *tp, struct tcf_exts *exts);
+void tcf_exts_destroy(struct tcf_exts *exts);
void tcf_exts_change(struct tcf_proto *tp, struct tcf_exts *dst,
struct tcf_exts *src);
int tcf_exts_dump(struct sk_buff *skb, struct tcf_exts *exts);
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index a3cfb8e..e65b8e0 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -143,7 +143,7 @@
void (*walk)(struct Qdisc *, struct qdisc_walker * arg);
/* Filter manipulation */
- struct tcf_proto ** (*tcf_chain)(struct Qdisc *, unsigned long);
+ struct tcf_proto __rcu ** (*tcf_chain)(struct Qdisc *, unsigned long);
unsigned long (*bind_tcf)(struct Qdisc *, unsigned long,
u32 classid);
void (*unbind_tcf)(struct Qdisc *, unsigned long);
@@ -212,8 +212,8 @@
struct tcf_proto {
/* Fast access part */
- struct tcf_proto *next;
- void *root;
+ struct tcf_proto __rcu *next;
+ void __rcu *root;
int (*classify)(struct sk_buff *,
const struct tcf_proto *,
struct tcf_result *);
@@ -225,13 +225,15 @@
struct Qdisc *q;
void *data;
const struct tcf_proto_ops *ops;
+ struct rcu_head rcu;
};
struct qdisc_skb_cb {
unsigned int pkt_len;
u16 slave_dev_queue_mapping;
u16 _pad;
- unsigned char data[24];
+#define QDISC_CB_PRIV_LEN 20
+ unsigned char data[QDISC_CB_PRIV_LEN];
};
static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
@@ -259,7 +261,9 @@
static inline struct Qdisc *qdisc_root(const struct Qdisc *qdisc)
{
- return qdisc->dev_queue->qdisc;
+ struct Qdisc *q = rcu_dereference_rtnl(qdisc->dev_queue->qdisc);
+
+ return q;
}
static inline struct Qdisc *qdisc_root_sleeping(const struct Qdisc *qdisc)
@@ -376,7 +380,7 @@
void __qdisc_calculate_pkt_len(struct sk_buff *skb,
const struct qdisc_size_table *stab);
void tcf_destroy(struct tcf_proto *tp);
-void tcf_destroy_chain(struct tcf_proto **fl);
+void tcf_destroy_chain(struct tcf_proto __rcu **fl);
/* Reset all TX qdiscs greater then index of a device. */
static inline void qdisc_reset_all_tx_gt(struct net_device *dev, unsigned int i)
@@ -384,7 +388,7 @@
struct Qdisc *qdisc;
for (; i < dev->num_tx_queues; i++) {
- qdisc = netdev_get_tx_queue(dev, i)->qdisc;
+ qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc);
if (qdisc) {
spin_lock_bh(qdisc_lock(qdisc));
qdisc_reset(qdisc);
@@ -402,13 +406,18 @@
static inline bool qdisc_all_tx_empty(const struct net_device *dev)
{
unsigned int i;
+
+ rcu_read_lock();
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
- const struct Qdisc *q = txq->qdisc;
+ const struct Qdisc *q = rcu_dereference(txq->qdisc);
- if (q->q.qlen)
+ if (q->q.qlen) {
+ rcu_read_unlock();
return false;
+ }
}
+ rcu_read_unlock();
return true;
}
@@ -416,9 +425,10 @@
static inline bool qdisc_tx_changing(const struct net_device *dev)
{
unsigned int i;
+
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
- if (txq->qdisc != txq->qdisc_sleeping)
+ if (rcu_access_pointer(txq->qdisc) != txq->qdisc_sleeping)
return true;
}
return false;
@@ -428,9 +438,10 @@
static inline bool qdisc_tx_is_noop(const struct net_device *dev)
{
unsigned int i;
+
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
- if (txq->qdisc != &noop_qdisc)
+ if (rcu_access_pointer(txq->qdisc) != &noop_qdisc)
return false;
}
return true;
diff --git a/include/net/snmp.h b/include/net/snmp.h
index f1f27fd..8fd2f49 100644
--- a/include/net/snmp.h
+++ b/include/net/snmp.h
@@ -146,19 +146,15 @@
#define SNMP_ADD_STATS(mib, field, addend) \
this_cpu_add(mib->mibs[field], addend)
-/*
- * Use "__typeof__(*mib) *ptr" instead of "__typeof__(mib) ptr"
- * to make @ptr a non-percpu pointer.
- */
#define SNMP_UPD_PO_STATS(mib, basefield, addend) \
do { \
- __typeof__(*mib->mibs) *ptr = mib->mibs; \
+ __typeof__((mib->mibs) + 0) ptr = mib->mibs; \
this_cpu_inc(ptr[basefield##PKTS]); \
this_cpu_add(ptr[basefield##OCTETS], addend); \
} while (0)
#define SNMP_UPD_PO_STATS_BH(mib, basefield, addend) \
do { \
- __typeof__(*mib->mibs) *ptr = mib->mibs; \
+ __typeof__((mib->mibs) + 0) ptr = mib->mibs; \
__this_cpu_inc(ptr[basefield##PKTS]); \
__this_cpu_add(ptr[basefield##OCTETS], addend); \
} while (0)
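An aside on the new spelling: adding 0 to the mibs array inside
__typeof__ forces array-to-pointer decay, so ptr directly gets the
element-pointer type, apparently keeping the address-space annotation
that the old "__typeof__(*mib->mibs) *ptr" form dropped (hence the
removed comment). The decay itself, in isolation:

	unsigned long mibs[4];
	__typeof__(mibs + 0) p = mibs;	/* p has type unsigned long * */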
diff --git a/include/net/tcp.h b/include/net/tcp.h
index a4201ef..545a79a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -696,15 +696,18 @@
* If this grows please adjust skbuff.h:skbuff->cb[xxx] size appropriately.
*/
struct tcp_skb_cb {
- union {
- struct inet_skb_parm h4;
-#if IS_ENABLED(CONFIG_IPV6)
- struct inet6_skb_parm h6;
-#endif
- } header; /* For incoming frames */
__u32 seq; /* Starting sequence number */
__u32 end_seq; /* SEQ + FIN + SYN + datalen */
- __u32 tcp_tw_isn; /* isn chosen by tcp_timewait_state_process() */
+ union {
+ /* Note : tcp_tw_isn is used in input path only
+ * (isn chosen by tcp_timewait_state_process())
+ *
+ * tcp_gso_segs is used in write queue only,
+ * cf tcp_skb_pcount()
+ */
+ __u32 tcp_tw_isn;
+ __u32 tcp_gso_segs;
+ };
__u8 tcp_flags; /* TCP header flags. (tcp[13]) */
__u8 sacked; /* State flags for SACK/FACK. */
@@ -720,33 +723,32 @@
__u8 ip_dsfield; /* IPv4 tos or IPv6 dsfield */
/* 1 byte hole */
__u32 ack_seq; /* Sequence number ACK'd */
+ union {
+ struct inet_skb_parm h4;
+#if IS_ENABLED(CONFIG_IPV6)
+ struct inet6_skb_parm h6;
+#endif
+ } header; /* For incoming frames */
};
#define TCP_SKB_CB(__skb) ((struct tcp_skb_cb *)&((__skb)->cb[0]))
-/* RFC3168 : 6.1.1 SYN packets must not have ECT/ECN bits set
- *
- * If we receive a SYN packet with these bits set, it means a network is
- * playing bad games with TOS bits. In order to avoid possible false congestion
- * notifications, we disable TCP ECN negociation.
- */
-static inline void
-TCP_ECN_create_request(struct request_sock *req, const struct sk_buff *skb,
- struct net *net)
-{
- const struct tcphdr *th = tcp_hdr(skb);
-
- if (net->ipv4.sysctl_tcp_ecn && th->ece && th->cwr &&
- INET_ECN_is_not_ect(TCP_SKB_CB(skb)->ip_dsfield))
- inet_rsk(req)->ecn_ok = 1;
-}
-
/* Due to TSO, an SKB can be composed of multiple actual
* packets. To keep these tracked properly, we use this.
*/
static inline int tcp_skb_pcount(const struct sk_buff *skb)
{
- return skb_shinfo(skb)->gso_segs;
+ return TCP_SKB_CB(skb)->tcp_gso_segs;
+}
+
+static inline void tcp_skb_pcount_set(struct sk_buff *skb, int segs)
+{
+ TCP_SKB_CB(skb)->tcp_gso_segs = segs;
+}
+
+static inline void tcp_skb_pcount_add(struct sk_buff *skb, int segs)
+{
+ TCP_SKB_CB(skb)->tcp_gso_segs += segs;
}
/* This is valid iff tcp_skb_pcount() > 1. */
@@ -761,8 +763,17 @@
CA_EVENT_CWND_RESTART, /* congestion window restart */
CA_EVENT_COMPLETE_CWR, /* end of congestion recovery */
CA_EVENT_LOSS, /* loss timeout */
- CA_EVENT_FAST_ACK, /* in sequence ack */
- CA_EVENT_SLOW_ACK, /* other ack */
+ CA_EVENT_ECN_NO_CE, /* ECT set, but not CE marked */
+ CA_EVENT_ECN_IS_CE, /* received CE marked IP packet */
+ CA_EVENT_DELAYED_ACK, /* Delayed ack is sent */
+ CA_EVENT_NON_DELAYED_ACK,
+};
+
+/* Information about inbound ACK, passed to cong_ops->in_ack_event() */
+enum tcp_ca_ack_event_flags {
+ CA_ACK_SLOWPATH = (1 << 0), /* In slow path processing */
+ CA_ACK_WIN_UPDATE = (1 << 1), /* ACK updated window */
+ CA_ACK_ECE = (1 << 2), /* ECE bit is set on ack */
};
/*
@@ -772,7 +783,10 @@
#define TCP_CA_MAX 128
#define TCP_CA_BUF_MAX (TCP_CA_NAME_MAX*TCP_CA_MAX)
+/* Algorithm can be set on socket without CAP_NET_ADMIN privileges */
#define TCP_CONG_NON_RESTRICTED 0x1
+/* Requires ECN/ECT set on all packets */
+#define TCP_CONG_NEEDS_ECN 0x2
struct tcp_congestion_ops {
struct list_head list;
@@ -791,6 +805,8 @@
void (*set_state)(struct sock *sk, u8 new_state);
/* call when cwnd event occurs (optional) */
void (*cwnd_event)(struct sock *sk, enum tcp_ca_event ev);
+ /* call when ack arrives (optional) */
+ void (*in_ack_event)(struct sock *sk, u32 flags);
/* new value of cwnd after loss (optional) */
u32 (*undo_cwnd)(struct sock *sk);
/* hook for packet ack accounting (optional) */
@@ -805,6 +821,7 @@
int tcp_register_congestion_control(struct tcp_congestion_ops *type);
void tcp_unregister_congestion_control(struct tcp_congestion_ops *type);
+void tcp_assign_congestion_control(struct sock *sk);
void tcp_init_congestion_control(struct sock *sk);
void tcp_cleanup_congestion_control(struct sock *sk);
int tcp_set_default_congestion_control(const char *name);
@@ -816,11 +833,17 @@
int tcp_slow_start(struct tcp_sock *tp, u32 acked);
void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w);
-extern struct tcp_congestion_ops tcp_init_congestion_ops;
u32 tcp_reno_ssthresh(struct sock *sk);
void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked);
extern struct tcp_congestion_ops tcp_reno;
+static inline bool tcp_ca_needs_ecn(const struct sock *sk)
+{
+ const struct inet_connection_sock *icsk = inet_csk(sk);
+
+ return icsk->icsk_ca_ops->flags & TCP_CONG_NEEDS_ECN;
+}
+
static inline void tcp_set_ca_state(struct sock *sk, const u8 ca_state)
{
struct inet_connection_sock *icsk = inet_csk(sk);
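To make the new hooks concrete, a hedged sketch of a congestion control
module that requests ECN and watches ACK events (DCTCP, whose
INET_DIAG_DCTCPINFO plumbing appears further down, is the intended user;
"my_ca" and my_in_ack_event are illustrative):

	static void my_in_ack_event(struct sock *sk, u32 flags)
	{
		if (flags & CA_ACK_ECE)
			;	/* the peer echoed a CE mark */
	}

	static struct tcp_congestion_ops my_ca __read_mostly = {
		.flags		= TCP_CONG_NEEDS_ECN,
		.ssthresh	= tcp_reno_ssthresh,
		.cong_avoid	= tcp_reno_cong_avoid,
		.in_ack_event	= my_in_ack_event,
		.owner		= THIS_MODULE,
		.name		= "my_ca",
	};

With the flag set, tcp_ca_needs_ecn() lets the stack key ECN negotiation
off the chosen algorithm rather than the sysctl default alone.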
diff --git a/include/net/udp_tunnel.h b/include/net/udp_tunnel.h
index ffd69cb..a47790b 100644
--- a/include/net/udp_tunnel.h
+++ b/include/net/udp_tunnel.h
@@ -1,6 +1,14 @@
#ifndef __NET_UDP_TUNNEL_H
#define __NET_UDP_TUNNEL_H
+#include <net/ip_tunnels.h>
+#include <net/udp.h>
+
+#if IS_ENABLED(CONFIG_IPV6)
+#include <net/ipv6.h>
+#include <net/addrconf.h>
+#endif
+
struct udp_port_cfg {
u8 family;
@@ -26,7 +34,80 @@
use_udp6_rx_checksums:1;
};
-int udp_sock_create(struct net *net, struct udp_port_cfg *cfg,
- struct socket **sockp);
+int udp_sock_create4(struct net *net, struct udp_port_cfg *cfg,
+ struct socket **sockp);
+
+#if IS_ENABLED(CONFIG_IPV6)
+int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
+ struct socket **sockp);
+#else
+static inline int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
+ struct socket **sockp)
+{
+ return 0;
+}
+#endif
+
+static inline int udp_sock_create(struct net *net,
+ struct udp_port_cfg *cfg,
+ struct socket **sockp)
+{
+ if (cfg->family == AF_INET)
+ return udp_sock_create4(net, cfg, sockp);
+
+ if (cfg->family == AF_INET6)
+ return udp_sock_create6(net, cfg, sockp);
+
+ return -EPFNOSUPPORT;
+}
+
+typedef int (*udp_tunnel_encap_rcv_t)(struct sock *sk, struct sk_buff *skb);
+typedef void (*udp_tunnel_encap_destroy_t)(struct sock *sk);
+
+struct udp_tunnel_sock_cfg {
+ void *sk_user_data; /* user data used by encap_rcv callback */
+ /* Used for setting up udp_sock fields, see udp.h for details */
+ __u8 encap_type;
+ udp_tunnel_encap_rcv_t encap_rcv;
+ udp_tunnel_encap_destroy_t encap_destroy;
+};
+
+/* Setup the given (UDP) sock to receive UDP encapsulated packets */
+void setup_udp_tunnel_sock(struct net *net, struct socket *sock,
+ struct udp_tunnel_sock_cfg *sock_cfg);
+
+/* Transmit the skb using UDP encapsulation. */
+int udp_tunnel_xmit_skb(struct socket *sock, struct rtable *rt,
+ struct sk_buff *skb, __be32 src, __be32 dst,
+ __u8 tos, __u8 ttl, __be16 df, __be16 src_port,
+ __be16 dst_port, bool xnet);
+
+#if IS_ENABLED(CONFIG_IPV6)
+int udp_tunnel6_xmit_skb(struct socket *sock, struct dst_entry *dst,
+ struct sk_buff *skb, struct net_device *dev,
+ struct in6_addr *saddr, struct in6_addr *daddr,
+ __u8 prio, __u8 ttl, __be16 src_port,
+ __be16 dst_port);
+#endif
+
+void udp_tunnel_sock_release(struct socket *sock);
+
+static inline struct sk_buff *udp_tunnel_handle_offloads(struct sk_buff *skb,
+ bool udp_csum)
+{
+ int type = udp_csum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+
+ return iptunnel_handle_offloads(skb, udp_csum, type);
+}
+
+static inline void udp_tunnel_encap_enable(struct socket *sock)
+{
+#if IS_ENABLED(CONFIG_IPV6)
+ if (sock->sk->sk_family == PF_INET6)
+ ipv6_stub->udpv6_encap_enable();
+ else
+#endif
+ udp_encap_enable();
+}
#endif
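A hedged sketch of the call sequence a tunnel driver is expected to use;
my_encap_rcv, the port number and the encap_type value are illustrative,
and udp_port_cfg's local_udp_port field comes from the existing part of
this header:

	static int my_encap_rcv(struct sock *sk, struct sk_buff *skb);

	static struct socket *my_tunnel_sock_create(struct net *net)
	{
		struct udp_port_cfg port_cfg = {
			.family		= AF_INET,
			.local_udp_port	= htons(4789),
		};
		struct udp_tunnel_sock_cfg tunnel_cfg = {
			.encap_type	= 1,	/* a UDP_ENCAP_* value, see udp.h */
			.encap_rcv	= my_encap_rcv,
		};
		struct socket *sock;

		if (udp_sock_create(net, &port_cfg, &sock) < 0)
			return NULL;

		setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
		return sock;
	}

Transmit then goes through udp_tunnel_xmit_skb() (or the IPv6 variant),
and teardown through udp_tunnel_sock_release().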
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 721e9c3..dc4865e 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1591,6 +1591,7 @@
struct xfrm_policy *xfrm_policy_byid(struct net *net, u32 mark, u8, int dir,
u32 id, int delete, int *err);
int xfrm_policy_flush(struct net *net, u8 type, bool task_valid);
+void xfrm_policy_hash_rebuild(struct net *net);
u32 xfrm_get_acqseq(void);
int verify_spi_info(u8 proto, u32 min, u32 max);
int xfrm_alloc_spi(struct xfrm_state *x, u32 minspi, u32 maxspi);
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 1ea0b65..a2bf41e 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -47,6 +47,7 @@
int writable;
int hugetlb;
struct work_struct work;
+ struct pid *pid;
struct mm_struct *mm;
unsigned long diff;
struct sg_table sg_head;
diff --git a/include/scsi/scsi_tcq.h b/include/scsi/scsi_tcq.h
index cdcc90b..e645835 100644
--- a/include/scsi/scsi_tcq.h
+++ b/include/scsi/scsi_tcq.h
@@ -68,7 +68,7 @@
return;
if (!shost_use_blk_mq(sdev->host) &&
- blk_queue_tagged(sdev->request_queue))
+ !blk_queue_tagged(sdev->request_queue))
blk_queue_init_tags(sdev->request_queue, depth,
sdev->host->bqt);
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 11d11bc..22749c1 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -705,9 +705,11 @@
__SYSCALL(__NR_getrandom, sys_getrandom)
#define __NR_memfd_create 279
__SYSCALL(__NR_memfd_create, sys_memfd_create)
+#define __NR_bpf 280
+__SYSCALL(__NR_bpf, sys_bpf)
#undef __NR_syscalls
-#define __NR_syscalls 280
+#define __NR_syscalls 281
/*
* All syscalls below here should go away really,
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index fb3f7b6..70e150e 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -241,6 +241,7 @@
header-y += mdio.h
header-y += media.h
header-y += mei.h
+header-y += memfd.h
header-y += mempolicy.h
header-y += meye.h
header-y += mic_common.h
@@ -396,6 +397,7 @@
header-y += unistd.h
header-y += unix_diag.h
header-y += usbdevice_fs.h
+header-y += usbip.h
header-y += utime.h
header-y += utsname.h
header-y += uuid.h
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 479ed0b..31b0ac2 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -62,4 +62,94 @@
__s32 imm; /* signed immediate constant */
};
+/* BPF syscall commands */
+enum bpf_cmd {
+ /* create a map with given type and attributes
+ * fd = bpf(BPF_MAP_CREATE, union bpf_attr *, u32 size)
+ * returns fd or negative error
+ * map is deleted when fd is closed
+ */
+ BPF_MAP_CREATE,
+
+ /* lookup key in a given map
+ * err = bpf(BPF_MAP_LOOKUP_ELEM, union bpf_attr *attr, u32 size)
+ * Using attr->map_fd, attr->key, attr->value
+ * returns zero and stores found elem into value
+ * or negative error
+ */
+ BPF_MAP_LOOKUP_ELEM,
+
+ /* create or update key/value pair in a given map
+ * err = bpf(BPF_MAP_UPDATE_ELEM, union bpf_attr *attr, u32 size)
+ * Using attr->map_fd, attr->key, attr->value
+ * returns zero or negative error
+ */
+ BPF_MAP_UPDATE_ELEM,
+
+ /* find and delete elem by key in a given map
+ * err = bpf(BPF_MAP_DELETE_ELEM, union bpf_attr *attr, u32 size)
+ * Using attr->map_fd, attr->key
+ * returns zero or negative error
+ */
+ BPF_MAP_DELETE_ELEM,
+
+ /* lookup key in a given map and return next key
+ * err = bpf(BPF_MAP_GET_NEXT_KEY, union bpf_attr *attr, u32 size)
+ * Using attr->map_fd, attr->key, attr->next_key
+ * returns zero and stores next key or negative error
+ */
+ BPF_MAP_GET_NEXT_KEY,
+
+ /* verify and load eBPF program
+ * prog_fd = bpf(BPF_PROG_LOAD, union bpf_attr *attr, u32 size)
+ * Using attr->prog_type, attr->insns, attr->license
+ * returns fd or negative error
+ */
+ BPF_PROG_LOAD,
+};
+
+enum bpf_map_type {
+ BPF_MAP_TYPE_UNSPEC,
+};
+
+enum bpf_prog_type {
+ BPF_PROG_TYPE_UNSPEC,
+};
+
+union bpf_attr {
+ struct { /* anonymous struct used by BPF_MAP_CREATE command */
+ __u32 map_type; /* one of enum bpf_map_type */
+ __u32 key_size; /* size of key in bytes */
+ __u32 value_size; /* size of value in bytes */
+ __u32 max_entries; /* max number of entries in a map */
+ };
+
+ struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
+ __u32 map_fd;
+ __aligned_u64 key;
+ union {
+ __aligned_u64 value;
+ __aligned_u64 next_key;
+ };
+ };
+
+ struct { /* anonymous struct used by BPF_PROG_LOAD command */
+ __u32 prog_type; /* one of enum bpf_prog_type */
+ __u32 insn_cnt;
+ __aligned_u64 insns;
+ __aligned_u64 license;
+ __u32 log_level; /* verbosity level of verifier */
+ __u32 log_size; /* size of user buffer */
+ __aligned_u64 log_buf; /* user supplied buffer */
+ };
+} __attribute__((aligned(8)));
+
+/* integer value in 'imm' field of BPF_CALL instruction selects which helper
+ * function eBPF program intends to call
+ */
+enum bpf_func_id {
+ BPF_FUNC_unspec,
+ __BPF_FUNC_MAX_ID,
+};
+
#endif /* _UAPI__LINUX_BPF_H__ */
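Since the enum above documents the calling convention, a minimal
user-space sketch of BPF_MAP_CREATE (hedged: with only BPF_MAP_TYPE_UNSPEC
defined so far, this can only succeed against the CONFIG_TEST_BPF stub
added later in this patch; the wrapper name is ours):

	#include <linux/bpf.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	static int bpf_create_map(__u32 key_size, __u32 value_size,
				  __u32 max_entries)
	{
		union bpf_attr attr;

		memset(&attr, 0, sizeof(attr));	/* unused fields must be zero */
		attr.map_type	 = BPF_MAP_TYPE_UNSPEC;
		attr.key_size	 = key_size;
		attr.value_size	 = value_size;
		attr.max_entries = max_entries;

		/* returns a map fd, or a negative error */
		return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	}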
diff --git a/include/uapi/linux/fou.h b/include/uapi/linux/fou.h
new file mode 100644
index 0000000..e03376d
--- /dev/null
+++ b/include/uapi/linux/fou.h
@@ -0,0 +1,32 @@
+/* fou.h - FOU Interface */
+
+#ifndef _UAPI_LINUX_FOU_H
+#define _UAPI_LINUX_FOU_H
+
+/* NETLINK_GENERIC related info
+ */
+#define FOU_GENL_NAME "fou"
+#define FOU_GENL_VERSION 0x1
+
+enum {
+ FOU_ATTR_UNSPEC,
+ FOU_ATTR_PORT, /* u16 */
+ FOU_ATTR_AF, /* u8 */
+ FOU_ATTR_IPPROTO, /* u8 */
+
+ __FOU_ATTR_MAX,
+};
+
+#define FOU_ATTR_MAX (__FOU_ATTR_MAX - 1)
+
+enum {
+ FOU_CMD_UNSPEC,
+ FOU_CMD_ADD,
+ FOU_CMD_DEL,
+
+ __FOU_CMD_MAX,
+};
+
+#define FOU_CMD_MAX (__FOU_CMD_MAX - 1)
+
+#endif /* _UAPI_LINUX_FOU_H */
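For completeness, a hedged libnl-3 sketch of what a FOU_CMD_ADD request
could look like from user space; the resolved family id, the socket and
the chosen port/protocol are all illustrative, and error handling is
omitted:

	struct nl_msg *msg = nlmsg_alloc();

	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, fou_family_id, 0, 0,
		    FOU_CMD_ADD, FOU_GENL_VERSION);
	nla_put_u16(msg, FOU_ATTR_PORT, htons(5555));
	nla_put_u8(msg, FOU_ATTR_AF, AF_INET);
	nla_put_u8(msg, FOU_ATTR_IPPROTO, IPPROTO_GRE);
	nl_send_auto(sk, msg);	/* sk: a genl socket connected earlier */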
diff --git a/include/uapi/linux/if_tunnel.h b/include/uapi/linux/if_tunnel.h
index 3bce9e9..7c832af 100644
--- a/include/uapi/linux/if_tunnel.h
+++ b/include/uapi/linux/if_tunnel.h
@@ -53,10 +53,22 @@
IFLA_IPTUN_6RD_RELAY_PREFIX,
IFLA_IPTUN_6RD_PREFIXLEN,
IFLA_IPTUN_6RD_RELAY_PREFIXLEN,
+ IFLA_IPTUN_ENCAP_TYPE,
+ IFLA_IPTUN_ENCAP_FLAGS,
+ IFLA_IPTUN_ENCAP_SPORT,
+ IFLA_IPTUN_ENCAP_DPORT,
__IFLA_IPTUN_MAX,
};
#define IFLA_IPTUN_MAX (__IFLA_IPTUN_MAX - 1)
+enum tunnel_encap_types {
+ TUNNEL_ENCAP_NONE,
+ TUNNEL_ENCAP_FOU,
+};
+
+#define TUNNEL_ENCAP_FLAG_CSUM (1<<0)
+#define TUNNEL_ENCAP_FLAG_CSUM6 (1<<1)
+
/* SIT-mode i_flags */
#define SIT_ISATAP 0x0001
@@ -94,6 +106,10 @@
IFLA_GRE_ENCAP_LIMIT,
IFLA_GRE_FLOWINFO,
IFLA_GRE_FLAGS,
+ IFLA_GRE_ENCAP_TYPE,
+ IFLA_GRE_ENCAP_FLAGS,
+ IFLA_GRE_ENCAP_SPORT,
+ IFLA_GRE_ENCAP_DPORT,
__IFLA_GRE_MAX,
};
diff --git a/include/uapi/linux/inet_diag.h b/include/uapi/linux/inet_diag.h
index bbde90f..d65c0a0 100644
--- a/include/uapi/linux/inet_diag.h
+++ b/include/uapi/linux/inet_diag.h
@@ -110,10 +110,10 @@
INET_DIAG_TCLASS,
INET_DIAG_SKMEMINFO,
INET_DIAG_SHUTDOWN,
+ INET_DIAG_DCTCPINFO,
};
-#define INET_DIAG_MAX INET_DIAG_SHUTDOWN
-
+#define INET_DIAG_MAX INET_DIAG_DCTCPINFO
/* INET_DIAG_MEM */
@@ -133,5 +133,14 @@
__u32 tcpv_minrtt;
};
+/* INET_DIAG_DCTCPINFO */
+
+struct tcp_dctcp_info {
+ __u16 dctcp_enabled;
+ __u16 dctcp_ce_state;
+ __u32 dctcp_alpha;
+ __u32 dctcp_ab_ecn;
+ __u32 dctcp_ab_tot;
+};
#endif /* _UAPI_INET_DIAG_H_ */
diff --git a/include/uapi/linux/input.h b/include/uapi/linux/input.h
index 19df18c..1874ebe 100644
--- a/include/uapi/linux/input.h
+++ b/include/uapi/linux/input.h
@@ -165,6 +165,7 @@
#define INPUT_PROP_BUTTONPAD 0x02 /* has button(s) under pad */
#define INPUT_PROP_SEMI_MT 0x03 /* touch rectangle only */
#define INPUT_PROP_TOPBUTTONPAD 0x04 /* softbuttons at top of pad */
+#define INPUT_PROP_POINTING_STICK 0x05 /* is a pointing stick */
#define INPUT_PROP_MAX 0x1f
#define INPUT_PROP_CNT (INPUT_PROP_MAX + 1)
diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h
index d097568..4b28dc0 100644
--- a/include/uapi/linux/nl80211.h
+++ b/include/uapi/linux/nl80211.h
@@ -722,6 +722,22 @@
* QoS mapping is relevant for IP packets, it is only valid during an
* association. This is cleared on disassociation and AP restart.
*
+ * @NL80211_CMD_ADD_TX_TS: Ask the kernel to add a traffic stream for the given
+ * %NL80211_ATTR_TSID and %NL80211_ATTR_MAC with %NL80211_ATTR_USER_PRIO
+ * and %NL80211_ATTR_ADMITTED_TIME parameters.
+ * Note that the action frame handshake with the AP shall be handled by
+ * userspace via the normal management RX/TX framework, this only sets
+ * up the TX TS in the driver/device.
+ * If the admitted time attribute is not added then the request just checks
+ * if a subsequent setup could be successful; the intent is to use this to
+ * avoid setting up a session with the AP when local restrictions would
+ * make that impossible. However, the subsequent "real" setup may still
+ * fail even if the check was successful.
+ * @NL80211_CMD_DEL_TX_TS: Remove an existing TS with the %NL80211_ATTR_TSID
+ * and %NL80211_ATTR_MAC parameters. It isn't necessary to call this
+ * before removing a station entry entirely, or before disassociating
+ * or similar; cleanup will happen in the driver/device in that case.
+ *
* @NL80211_CMD_MAX: highest used command number
* @__NL80211_CMD_AFTER_LAST: internal use
*/
@@ -893,6 +909,9 @@
NL80211_CMD_SET_QOS_MAP,
+ NL80211_CMD_ADD_TX_TS,
+ NL80211_CMD_DEL_TX_TS,
+
/* add new commands above here */
/* used to define NL80211_CMD_MAX below */
@@ -1594,6 +1613,31 @@
* @NL80211_ATTR_TDLS_INITIATOR: flag attribute indicating the current end is
* the TDLS link initiator.
*
+ * @NL80211_ATTR_USE_RRM: flag for indicating whether the current connection
+ * shall support Radio Resource Measurements (11k). This attribute can be
+ * used with %NL80211_CMD_ASSOCIATE and %NL80211_CMD_CONNECT requests.
+ * User space applications are expected to use this flag only if the
+ * underlying device supports these minimal RRM features:
+ * %NL80211_FEATURE_DS_PARAM_SET_IE_IN_PROBES,
+ * %NL80211_FEATURE_QUIET.
+ * If this flag is used, the driver must add the Power Capabilities IE to the
+ * association request. In addition, it must also set the RRM capability
+ * flag in the association request's Capability Info field.
+ *
+ * @NL80211_ATTR_WIPHY_DYN_ACK: flag attribute used to enable ACK timeout
+ * estimation algorithm (dynack). In order to activate dynack
+ * %NL80211_FEATURE_ACKTO_ESTIMATION feature flag must be set by lower
+ * drivers to indicate dynack capability. Dynack is automatically disabled
+ * when a valid coverage class value is set.
+ *
+ * @NL80211_ATTR_TSID: a TSID value (u8 attribute)
+ * @NL80211_ATTR_USER_PRIO: user priority value (u8 attribute)
+ * @NL80211_ATTR_ADMITTED_TIME: admitted time in units of 32 microseconds
+ * (per second) (u16 attribute)
+ *
+ * @NL80211_ATTR_SMPS_MODE: SMPS mode to use (ap mode). see
+ * &enum nl80211_smps_mode.
+ *
* @NL80211_ATTR_MAX: highest attribute number currently defined
* @__NL80211_ATTR_AFTER_LAST: internal use
*/
@@ -1936,6 +1980,16 @@
NL80211_ATTR_TDLS_INITIATOR,
+ NL80211_ATTR_USE_RRM,
+
+ NL80211_ATTR_WIPHY_DYN_ACK,
+
+ NL80211_ATTR_TSID,
+ NL80211_ATTR_USER_PRIO,
+ NL80211_ATTR_ADMITTED_TIME,
+
+ NL80211_ATTR_SMPS_MODE,
+
/* add attributes here, update the policy in nl80211.c */
__NL80211_ATTR_AFTER_LAST,
@@ -3968,6 +4022,26 @@
* @NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE: This driver supports dynamic
* channel bandwidth change (e.g., HT 20 <-> 40 MHz channel) during the
* lifetime of a BSS.
+ * @NL80211_FEATURE_DS_PARAM_SET_IE_IN_PROBES: This device adds a DS Parameter
+ * Set IE to probe requests.
+ * @NL80211_FEATURE_WFA_TPC_IE_IN_PROBES: This device adds a WFA TPC Report IE
+ * to probe requests.
+ * @NL80211_FEATURE_QUIET: This device, in client mode, supports Quiet Period
+ * requests sent to it by an AP.
+ * @NL80211_FEATURE_TX_POWER_INSERTION: This device is capable of inserting the
+ * current tx power value into the TPC Report IE in the spectrum
+ * management TPC Report action frame, and in the Radio Measurement Link
+ * Measurement Report action frame.
+ * @NL80211_FEATURE_ACKTO_ESTIMATION: This driver supports dynamic ACK timeout
+ * estimation (dynack). %NL80211_ATTR_WIPHY_DYN_ACK flag attribute is used
+ * to enable dynack.
+ * @NL80211_FEATURE_STATIC_SMPS: Device supports static spatial
+ * multiplexing powersave, i.e. can turn off all but one chain
+ * even on HT connections that should be using more chains.
+ * @NL80211_FEATURE_DYNAMIC_SMPS: Device supports dynamic spatial
+ * multiplexing powersave, i.e. can turn off all but one chain
+ * and then wake the rest up as required after, for example,
+ * rts/cts handshake.
*/
enum nl80211_feature_flags {
NL80211_FEATURE_SK_TX_STATUS = 1 << 0,
@@ -3989,6 +4063,13 @@
NL80211_FEATURE_USERSPACE_MPM = 1 << 16,
NL80211_FEATURE_ACTIVE_MONITOR = 1 << 17,
NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE = 1 << 18,
+ NL80211_FEATURE_DS_PARAM_SET_IE_IN_PROBES = 1 << 19,
+ NL80211_FEATURE_WFA_TPC_IE_IN_PROBES = 1 << 20,
+ NL80211_FEATURE_QUIET = 1 << 21,
+ NL80211_FEATURE_TX_POWER_INSERTION = 1 << 22,
+ NL80211_FEATURE_ACKTO_ESTIMATION = 1 << 23,
+ NL80211_FEATURE_STATIC_SMPS = 1 << 24,
+ NL80211_FEATURE_DYNAMIC_SMPS = 1 << 25,
};
/**
@@ -4063,6 +4144,25 @@
};
/**
+ * enum nl80211_smps_mode - SMPS mode
+ *
+ * Requested SMPS mode (for AP mode)
+ *
+ * @NL80211_SMPS_OFF: SMPS off (use all antennas).
+ * @NL80211_SMPS_STATIC: static SMPS (use a single antenna)
+ * @NL80211_SMPS_DYNAMIC: dynamic SMPS (start with a single antenna and
+ * turn on other antennas after CTS/RTS).
+ */
+enum nl80211_smps_mode {
+ NL80211_SMPS_OFF,
+ NL80211_SMPS_STATIC,
+ NL80211_SMPS_DYNAMIC,
+
+ __NL80211_SMPS_AFTER_LAST,
+ NL80211_SMPS_MAX = __NL80211_SMPS_AFTER_LAST - 1
+};
+
+/**
* enum nl80211_radar_event - type of radar event for DFS operation
*
* Type of event to be used with NL80211_ATTR_RADAR_EVENT to inform userspace
diff --git a/include/uapi/linux/openvswitch.h b/include/uapi/linux/openvswitch.h
index a794d1d..f7fc507 100644
--- a/include/uapi/linux/openvswitch.h
+++ b/include/uapi/linux/openvswitch.h
@@ -289,6 +289,9 @@
OVS_KEY_ATTR_TUNNEL, /* Nested set of ovs_tunnel attributes */
OVS_KEY_ATTR_SCTP, /* struct ovs_key_sctp */
OVS_KEY_ATTR_TCP_FLAGS, /* be16 TCP flags. */
+ OVS_KEY_ATTR_DP_HASH, /* u32 hash value. Value 0 indicates the hash
+ is not computed by the datapath. */
+ OVS_KEY_ATTR_RECIRC_ID, /* u32 recirc id */
#ifdef __KERNEL__
OVS_KEY_ATTR_IPV4_TUNNEL, /* struct ovs_key_ipv4_tunnel */
@@ -493,6 +496,27 @@
__be16 vlan_tci; /* 802.1Q TCI (VLAN ID and priority). */
};
+/* Data path hash algorithm for computing Datapath hash.
+ *
+ * The algorithm type only specifies which fields in a flow
+ * will be used as part of the hash. Each datapath is free
+ * to use its own hash algorithm. The hash value will be
+ * opaque to the user space daemon.
+ */
+enum ovs_hash_alg {
+ OVS_HASH_ALG_L4,
+};
+
+/*
+ * struct ovs_action_hash - %OVS_ACTION_ATTR_HASH action argument.
+ * @hash_alg: Algorithm used to compute hash prior to recirculation.
+ * @hash_basis: basis used for computing hash.
+ */
+struct ovs_action_hash {
+ uint32_t hash_alg; /* One of ovs_hash_alg. */
+ uint32_t hash_basis;
+};
+
/**
* enum ovs_action_attr - Action types.
*
@@ -521,6 +545,8 @@
OVS_ACTION_ATTR_PUSH_VLAN, /* struct ovs_action_push_vlan. */
OVS_ACTION_ATTR_POP_VLAN, /* No argument. */
OVS_ACTION_ATTR_SAMPLE, /* Nested OVS_SAMPLE_ATTR_*. */
+ OVS_ACTION_ATTR_RECIRC, /* u32 recirc_id. */
+ OVS_ACTION_ATTR_HASH, /* struct ovs_action_hash. */
__OVS_ACTION_ATTR_MAX
};
diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
index 25e5dd9..02d5125 100644
--- a/include/uapi/linux/xfrm.h
+++ b/include/uapi/linux/xfrm.h
@@ -328,6 +328,8 @@
XFRMA_SPD_UNSPEC,
XFRMA_SPD_INFO,
XFRMA_SPD_HINFO,
+ XFRMA_SPD_IPV4_HTHRESH,
+ XFRMA_SPD_IPV6_HTHRESH,
__XFRMA_SPD_MAX
#define XFRMA_SPD_MAX (__XFRMA_SPD_MAX - 1)
@@ -347,6 +349,11 @@
__u32 spdhmcnt;
};
+struct xfrmu_spdhthresh {
+ __u8 lbits;
+ __u8 rbits;
+};
+
struct xfrm_usersa_info {
struct xfrm_selector sel;
struct xfrm_id id;
diff --git a/include/xen/interface/features.h b/include/xen/interface/features.h
index 131a6cc..14334d0 100644
--- a/include/xen/interface/features.h
+++ b/include/xen/interface/features.h
@@ -53,6 +53,9 @@
/* operation as Dom0 is supported */
#define XENFEAT_dom0 11
+/* Xen also maps grant references at pfn = mfn */
+#define XENFEAT_grant_map_identity 12
+
#define XENFEAT_NR_SUBMAPS 1
#endif /* __XEN_PUBLIC_FEATURES_H__ */
diff --git a/init/do_mounts.c b/init/do_mounts.c
index b6237c3..82f2288 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -539,6 +539,12 @@
{
int is_floppy;
+ if (root_delay) {
+ printk(KERN_INFO "Waiting %d sec before mounting root device...\n",
+ root_delay);
+ ssleep(root_delay);
+ }
+
/*
* wait for the known devices to complete their probing
*
@@ -565,12 +571,6 @@
if (initrd_load())
goto out;
- if (root_delay) {
- pr_info("Waiting %d sec before mounting root device...\n",
- root_delay);
- ssleep(root_delay);
- }
-
/* wait for any asynchronous scanning to complete */
if ((ROOT_DEV == 0) && root_wait) {
printk(KERN_INFO "Waiting for root device %s...\n",
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 6a71145..4542723 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -1 +1,5 @@
-obj-y := core.o
+obj-y := core.o syscall.o verifier.o
+
+ifdef CONFIG_TEST_BPF
+obj-y += test_stub.o
+endif
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8b70024..f0c30c5 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -27,6 +27,7 @@
#include <linux/random.h>
#include <linux/moduleloader.h>
#include <asm/unaligned.h>
+#include <linux/bpf.h>
/* Registers */
#define BPF_R0 regs[BPF_REG_0]
@@ -71,7 +72,7 @@
{
gfp_t gfp_flags = GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO |
gfp_extra_flags;
- struct bpf_work_struct *ws;
+ struct bpf_prog_aux *aux;
struct bpf_prog *fp;
size = round_up(size, PAGE_SIZE);
@@ -79,14 +80,14 @@
if (fp == NULL)
return NULL;
- ws = kmalloc(sizeof(*ws), GFP_KERNEL | gfp_extra_flags);
- if (ws == NULL) {
+ aux = kzalloc(sizeof(*aux), GFP_KERNEL | gfp_extra_flags);
+ if (aux == NULL) {
vfree(fp);
return NULL;
}
fp->pages = size / PAGE_SIZE;
- fp->work = ws;
+ fp->aux = aux;
return fp;
}
@@ -110,10 +111,10 @@
memcpy(fp, fp_old, fp_old->pages * PAGE_SIZE);
fp->pages = size / PAGE_SIZE;
- /* We keep fp->work from fp_old around in the new
+ /* We keep fp->aux from fp_old around in the new
* reallocated structure.
*/
- fp_old->work = NULL;
+ fp_old->aux = NULL;
__bpf_prog_free(fp_old);
}
@@ -123,7 +124,7 @@
void __bpf_prog_free(struct bpf_prog *fp)
{
- kfree(fp->work);
+ kfree(fp->aux);
vfree(fp);
}
EXPORT_SYMBOL_GPL(__bpf_prog_free);
@@ -638,19 +639,19 @@
static void bpf_prog_free_deferred(struct work_struct *work)
{
- struct bpf_work_struct *ws;
+ struct bpf_prog_aux *aux;
- ws = container_of(work, struct bpf_work_struct, work);
- bpf_jit_free(ws->prog);
+ aux = container_of(work, struct bpf_prog_aux, work);
+ bpf_jit_free(aux->prog);
}
/* Free internal BPF program */
void bpf_prog_free(struct bpf_prog *fp)
{
- struct bpf_work_struct *ws = fp->work;
+ struct bpf_prog_aux *aux = fp->aux;
- INIT_WORK(&ws->work, bpf_prog_free_deferred);
- ws->prog = fp;
- schedule_work(&ws->work);
+ INIT_WORK(&aux->work, bpf_prog_free_deferred);
+ aux->prog = fp;
+ schedule_work(&aux->work);
}
EXPORT_SYMBOL_GPL(bpf_prog_free);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
new file mode 100644
index 0000000..ba61c8c
--- /dev/null
+++ b/kernel/bpf/syscall.c
@@ -0,0 +1,606 @@
+/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/bpf.h>
+#include <linux/syscalls.h>
+#include <linux/slab.h>
+#include <linux/anon_inodes.h>
+#include <linux/file.h>
+#include <linux/license.h>
+#include <linux/filter.h>
+
+static LIST_HEAD(bpf_map_types);
+
+static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
+{
+ struct bpf_map_type_list *tl;
+ struct bpf_map *map;
+
+ list_for_each_entry(tl, &bpf_map_types, list_node) {
+ if (tl->type == attr->map_type) {
+ map = tl->ops->map_alloc(attr);
+ if (IS_ERR(map))
+ return map;
+ map->ops = tl->ops;
+ map->map_type = attr->map_type;
+ return map;
+ }
+ }
+ return ERR_PTR(-EINVAL);
+}
+
+/* boot time registration of different map implementations */
+void bpf_register_map_type(struct bpf_map_type_list *tl)
+{
+ list_add(&tl->list_node, &bpf_map_types);
+}
+
+/* called from workqueue */
+static void bpf_map_free_deferred(struct work_struct *work)
+{
+ struct bpf_map *map = container_of(work, struct bpf_map, work);
+
+ /* implementation dependent freeing */
+ map->ops->map_free(map);
+}
+
+/* decrement map refcnt and schedule it for freeing via workqueue
+ * (the underlying map implementation's ops->map_free() might sleep)
+ */
+void bpf_map_put(struct bpf_map *map)
+{
+ if (atomic_dec_and_test(&map->refcnt)) {
+ INIT_WORK(&map->work, bpf_map_free_deferred);
+ schedule_work(&map->work);
+ }
+}
+
+static int bpf_map_release(struct inode *inode, struct file *filp)
+{
+ struct bpf_map *map = filp->private_data;
+
+ bpf_map_put(map);
+ return 0;
+}
+
+static const struct file_operations bpf_map_fops = {
+ .release = bpf_map_release,
+};
+
+/* helper macro to check that unused fields of 'union bpf_attr' are zero */
+#define CHECK_ATTR(CMD) \
+ memchr_inv((void *) &attr->CMD##_LAST_FIELD + \
+ sizeof(attr->CMD##_LAST_FIELD), 0, \
+ sizeof(*attr) - \
+ offsetof(union bpf_attr, CMD##_LAST_FIELD) - \
+ sizeof(attr->CMD##_LAST_FIELD)) != NULL
+
+#define BPF_MAP_CREATE_LAST_FIELD max_entries
+/* called via syscall */
+static int map_create(union bpf_attr *attr)
+{
+ struct bpf_map *map;
+ int err;
+
+ err = CHECK_ATTR(BPF_MAP_CREATE);
+ if (err)
+ return -EINVAL;
+
+ /* find map type and init map: hashtable vs rbtree vs bloom vs ... */
+ map = find_and_alloc_map(attr);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ atomic_set(&map->refcnt, 1);
+
+ err = anon_inode_getfd("bpf-map", &bpf_map_fops, map, O_RDWR | O_CLOEXEC);
+
+ if (err < 0)
+ /* failed to allocate fd */
+ goto free_map;
+
+ return err;
+
+free_map:
+ map->ops->map_free(map);
+ return err;
+}
+
+/* if error is returned, fd is released.
+ * On success caller should complete fd access with matching fdput()
+ */
+struct bpf_map *bpf_map_get(struct fd f)
+{
+ struct bpf_map *map;
+
+ if (!f.file)
+ return ERR_PTR(-EBADF);
+
+ if (f.file->f_op != &bpf_map_fops) {
+ fdput(f);
+ return ERR_PTR(-EINVAL);
+ }
+
+ map = f.file->private_data;
+
+ return map;
+}
+
+/* helper to convert user pointers passed inside __aligned_u64 fields */
+static void __user *u64_to_ptr(__u64 val)
+{
+ return (void __user *) (unsigned long) val;
+}
+
+/* last field in 'union bpf_attr' used by this command */
+#define BPF_MAP_LOOKUP_ELEM_LAST_FIELD value
+
+static int map_lookup_elem(union bpf_attr *attr)
+{
+ void __user *ukey = u64_to_ptr(attr->key);
+ void __user *uvalue = u64_to_ptr(attr->value);
+ int ufd = attr->map_fd;
+ struct fd f = fdget(ufd);
+ struct bpf_map *map;
+ void *key, *value;
+ int err;
+
+ if (CHECK_ATTR(BPF_MAP_LOOKUP_ELEM))
+ return -EINVAL;
+
+ map = bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ err = -ENOMEM;
+ key = kmalloc(map->key_size, GFP_USER);
+ if (!key)
+ goto err_put;
+
+ err = -EFAULT;
+ if (copy_from_user(key, ukey, map->key_size) != 0)
+ goto free_key;
+
+ err = -ESRCH;
+ rcu_read_lock();
+ value = map->ops->map_lookup_elem(map, key);
+ if (!value)
+ goto err_unlock;
+
+ err = -EFAULT;
+ if (copy_to_user(uvalue, value, map->value_size) != 0)
+ goto err_unlock;
+
+ err = 0;
+
+err_unlock:
+ rcu_read_unlock();
+free_key:
+ kfree(key);
+err_put:
+ fdput(f);
+ return err;
+}
+
+#define BPF_MAP_UPDATE_ELEM_LAST_FIELD value
+
+static int map_update_elem(union bpf_attr *attr)
+{
+ void __user *ukey = u64_to_ptr(attr->key);
+ void __user *uvalue = u64_to_ptr(attr->value);
+ int ufd = attr->map_fd;
+ struct fd f = fdget(ufd);
+ struct bpf_map *map;
+ void *key, *value;
+ int err;
+
+ if (CHECK_ATTR(BPF_MAP_UPDATE_ELEM))
+ return -EINVAL;
+
+ map = bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ err = -ENOMEM;
+ key = kmalloc(map->key_size, GFP_USER);
+ if (!key)
+ goto err_put;
+
+ err = -EFAULT;
+ if (copy_from_user(key, ukey, map->key_size) != 0)
+ goto free_key;
+
+ err = -ENOMEM;
+ value = kmalloc(map->value_size, GFP_USER);
+ if (!value)
+ goto free_key;
+
+ err = -EFAULT;
+ if (copy_from_user(value, uvalue, map->value_size) != 0)
+ goto free_value;
+
+ /* eBPF programs that use maps run under rcu_read_lock(),
+ * therefore all map accessors rely on this fact, so do the same here
+ */
+ rcu_read_lock();
+ err = map->ops->map_update_elem(map, key, value);
+ rcu_read_unlock();
+
+free_value:
+ kfree(value);
+free_key:
+ kfree(key);
+err_put:
+ fdput(f);
+ return err;
+}
+
+#define BPF_MAP_DELETE_ELEM_LAST_FIELD key
+
+static int map_delete_elem(union bpf_attr *attr)
+{
+ void __user *ukey = u64_to_ptr(attr->key);
+ int ufd = attr->map_fd;
+ struct fd f = fdget(ufd);
+ struct bpf_map *map;
+ void *key;
+ int err;
+
+ if (CHECK_ATTR(BPF_MAP_DELETE_ELEM))
+ return -EINVAL;
+
+ map = bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ err = -ENOMEM;
+ key = kmalloc(map->key_size, GFP_USER);
+ if (!key)
+ goto err_put;
+
+ err = -EFAULT;
+ if (copy_from_user(key, ukey, map->key_size) != 0)
+ goto free_key;
+
+ rcu_read_lock();
+ err = map->ops->map_delete_elem(map, key);
+ rcu_read_unlock();
+
+free_key:
+ kfree(key);
+err_put:
+ fdput(f);
+ return err;
+}
+
+/* last field in 'union bpf_attr' used by this command */
+#define BPF_MAP_GET_NEXT_KEY_LAST_FIELD next_key
+
+static int map_get_next_key(union bpf_attr *attr)
+{
+ void __user *ukey = u64_to_ptr(attr->key);
+ void __user *unext_key = u64_to_ptr(attr->next_key);
+ int ufd = attr->map_fd;
+ struct fd f = fdget(ufd);
+ struct bpf_map *map;
+ void *key, *next_key;
+ int err;
+
+ if (CHECK_ATTR(BPF_MAP_GET_NEXT_KEY))
+ return -EINVAL;
+
+ map = bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ err = -ENOMEM;
+ key = kmalloc(map->key_size, GFP_USER);
+ if (!key)
+ goto err_put;
+
+ err = -EFAULT;
+ if (copy_from_user(key, ukey, map->key_size) != 0)
+ goto free_key;
+
+ err = -ENOMEM;
+ next_key = kmalloc(map->key_size, GFP_USER);
+ if (!next_key)
+ goto free_key;
+
+ rcu_read_lock();
+ err = map->ops->map_get_next_key(map, key, next_key);
+ rcu_read_unlock();
+ if (err)
+ goto free_next_key;
+
+ err = -EFAULT;
+ if (copy_to_user(unext_key, next_key, map->key_size) != 0)
+ goto free_next_key;
+
+ err = 0;
+
+free_next_key:
+ kfree(next_key);
+free_key:
+ kfree(key);
+err_put:
+ fdput(f);
+ return err;
+}
+
+static LIST_HEAD(bpf_prog_types);
+
+static int find_prog_type(enum bpf_prog_type type, struct bpf_prog *prog)
+{
+ struct bpf_prog_type_list *tl;
+
+ list_for_each_entry(tl, &bpf_prog_types, list_node) {
+ if (tl->type == type) {
+ prog->aux->ops = tl->ops;
+ prog->aux->prog_type = type;
+ return 0;
+ }
+ }
+ return -EINVAL;
+}
+
+void bpf_register_prog_type(struct bpf_prog_type_list *tl)
+{
+ list_add(&tl->list_node, &bpf_prog_types);
+}
+
+/* fixup insn->imm field of bpf_call instructions:
+ * if (insn->imm == BPF_FUNC_map_lookup_elem)
+ * insn->imm = bpf_map_lookup_elem - __bpf_call_base;
+ * else if (insn->imm == BPF_FUNC_map_update_elem)
+ * insn->imm = bpf_map_update_elem - __bpf_call_base;
+ * else ...
+ *
+ * this function is called after eBPF program passed verification
+ */
+static void fixup_bpf_calls(struct bpf_prog *prog)
+{
+ const struct bpf_func_proto *fn;
+ int i;
+
+ for (i = 0; i < prog->len; i++) {
+ struct bpf_insn *insn = &prog->insnsi[i];
+
+ if (insn->code == (BPF_JMP | BPF_CALL)) {
+ /* we reach here when the program has bpf_call instructions
+ * and it passed bpf_check(), which means that
+ * ops->get_func_proto must have been supplied; check it
+ */
+ BUG_ON(!prog->aux->ops->get_func_proto);
+
+ fn = prog->aux->ops->get_func_proto(insn->imm);
+ /* all functions that have a prototype and that the verifier
+ * allowed programs to call must be real in-kernel functions
+ */
+ BUG_ON(!fn->func);
+ insn->imm = fn->func - __bpf_call_base;
+ }
+ }
+}
+
+/* drop refcnt on maps used by the eBPF program and free auxiliary data */
+static void free_used_maps(struct bpf_prog_aux *aux)
+{
+ int i;
+
+ for (i = 0; i < aux->used_map_cnt; i++)
+ bpf_map_put(aux->used_maps[i]);
+
+ kfree(aux->used_maps);
+}
+
+void bpf_prog_put(struct bpf_prog *prog)
+{
+ if (atomic_dec_and_test(&prog->aux->refcnt)) {
+ free_used_maps(prog->aux);
+ bpf_prog_free(prog);
+ }
+}
+
+static int bpf_prog_release(struct inode *inode, struct file *filp)
+{
+ struct bpf_prog *prog = filp->private_data;
+
+ bpf_prog_put(prog);
+ return 0;
+}
+
+static const struct file_operations bpf_prog_fops = {
+ .release = bpf_prog_release,
+};
+
+static struct bpf_prog *get_prog(struct fd f)
+{
+ struct bpf_prog *prog;
+
+ if (!f.file)
+ return ERR_PTR(-EBADF);
+
+ if (f.file->f_op != &bpf_prog_fops) {
+ fdput(f);
+ return ERR_PTR(-EINVAL);
+ }
+
+ prog = f.file->private_data;
+
+ return prog;
+}
+
+/* called by sockets/tracing/seccomp before attaching a program to an event
+ * pairs with bpf_prog_put()
+ */
+struct bpf_prog *bpf_prog_get(u32 ufd)
+{
+ struct fd f = fdget(ufd);
+ struct bpf_prog *prog;
+
+ prog = get_prog(f);
+
+ if (IS_ERR(prog))
+ return prog;
+
+ atomic_inc(&prog->aux->refcnt);
+ fdput(f);
+ return prog;
+}
+
+/* last field in 'union bpf_attr' used by this command */
+#define BPF_PROG_LOAD_LAST_FIELD log_buf
+
+static int bpf_prog_load(union bpf_attr *attr)
+{
+ enum bpf_prog_type type = attr->prog_type;
+ struct bpf_prog *prog;
+ int err;
+ char license[128];
+ bool is_gpl;
+
+ if (CHECK_ATTR(BPF_PROG_LOAD))
+ return -EINVAL;
+
+ /* copy eBPF program license from user space */
+ if (strncpy_from_user(license, u64_to_ptr(attr->license),
+ sizeof(license) - 1) < 0)
+ return -EFAULT;
+ license[sizeof(license) - 1] = 0;
+
+ /* eBPF programs must be GPL compatible to use GPL-ed functions */
+ is_gpl = license_is_gpl_compatible(license);
+
+ if (attr->insn_cnt >= BPF_MAXINSNS)
+ return -EINVAL;
+
+ /* plain bpf_prog allocation */
+ prog = bpf_prog_alloc(bpf_prog_size(attr->insn_cnt), GFP_USER);
+ if (!prog)
+ return -ENOMEM;
+
+ prog->len = attr->insn_cnt;
+
+ err = -EFAULT;
+ if (copy_from_user(prog->insns, u64_to_ptr(attr->insns),
+ prog->len * sizeof(struct bpf_insn)) != 0)
+ goto free_prog;
+
+ prog->orig_prog = NULL;
+ prog->jited = false;
+
+ atomic_set(&prog->aux->refcnt, 1);
+ prog->aux->is_gpl_compatible = is_gpl;
+
+ /* find program type: socket_filter vs tracing_filter */
+ err = find_prog_type(type, prog);
+ if (err < 0)
+ goto free_prog;
+
+ /* run eBPF verifier */
+ err = bpf_check(prog, attr);
+
+ if (err < 0)
+ goto free_used_maps;
+
+ /* fixup BPF_CALL->imm field */
+ fixup_bpf_calls(prog);
+
+ /* eBPF program is ready to be JITed */
+ bpf_prog_select_runtime(prog);
+
+ err = anon_inode_getfd("bpf-prog", &bpf_prog_fops, prog, O_RDWR | O_CLOEXEC);
+
+ if (err < 0)
+ /* failed to allocate fd */
+ goto free_used_maps;
+
+ return err;
+
+free_used_maps:
+ free_used_maps(prog->aux);
+free_prog:
+ bpf_prog_free(prog);
+ return err;
+}
+
+SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
+{
+ union bpf_attr attr = {};
+ int err;
+
+ /* the syscall is limited to root temporarily. This restriction will be
+ * lifted when the security audit is clean. Note that eBPF+tracing must have
+ * this restriction, since it may pass kernel data to user space
+ */
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (!access_ok(VERIFY_READ, uattr, 1))
+ return -EFAULT;
+
+ if (size > PAGE_SIZE) /* silly large */
+ return -E2BIG;
+
+ /* If we're handed a bigger struct than we know of,
+ * ensure all the unknown bits are 0 - i.e. new
+ * user-space does not rely on any kernel feature
+ * extensions we don't know about yet.
+ */
+ if (size > sizeof(attr)) {
+ unsigned char __user *addr;
+ unsigned char __user *end;
+ unsigned char val;
+
+ addr = (void __user *)uattr + sizeof(attr);
+ end = (void __user *)uattr + size;
+
+ for (; addr < end; addr++) {
+ err = get_user(val, addr);
+ if (err)
+ return err;
+ if (val)
+ return -E2BIG;
+ }
+ size = sizeof(attr);
+ }
+
+ /* copy attributes from user space, may be less than sizeof(bpf_attr) */
+ if (copy_from_user(&attr, uattr, size) != 0)
+ return -EFAULT;
+
+ switch (cmd) {
+ case BPF_MAP_CREATE:
+ err = map_create(&attr);
+ break;
+ case BPF_MAP_LOOKUP_ELEM:
+ err = map_lookup_elem(&attr);
+ break;
+ case BPF_MAP_UPDATE_ELEM:
+ err = map_update_elem(&attr);
+ break;
+ case BPF_MAP_DELETE_ELEM:
+ err = map_delete_elem(&attr);
+ break;
+ case BPF_MAP_GET_NEXT_KEY:
+ err = map_get_next_key(&attr);
+ break;
+ case BPF_PROG_LOAD:
+ err = bpf_prog_load(&attr);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
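Tying the commands together, a hedged user-space sketch of walking a map
with BPF_MAP_GET_NEXT_KEY, matching map_get_next_key() above (it reuses
the headers from the BPF_MAP_CREATE sketch earlier; the 4-byte key size
and the start key are assumptions):

	static void dump_map_keys(int map_fd)
	{
		__u32 key = 0, next_key;
		union bpf_attr attr;

		for (;;) {
			memset(&attr, 0, sizeof(attr));
			attr.map_fd   = map_fd;
			attr.key      = (__u64)(unsigned long)&key;
			attr.next_key = (__u64)(unsigned long)&next_key;

			/* non-zero means no more keys (or an error) */
			if (syscall(__NR_bpf, BPF_MAP_GET_NEXT_KEY, &attr,
				    sizeof(attr)))
				break;

			key = next_key;	/* next_key now holds a live key */
		}
	}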
diff --git a/kernel/bpf/test_stub.c b/kernel/bpf/test_stub.c
new file mode 100644
index 0000000..fcaddff
--- /dev/null
+++ b/kernel/bpf/test_stub.c
@@ -0,0 +1,116 @@
+/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/err.h>
+#include <linux/bpf.h>
+
+/* test stubs for BPF_MAP_TYPE_UNSPEC and for BPF_PROG_TYPE_UNSPEC
+ * to be used by the user space verifier testsuite
+ */
+struct bpf_context {
+ u64 arg1;
+ u64 arg2;
+};
+
+static u64 test_func(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+ return 0;
+}
+
+static struct bpf_func_proto test_funcs[] = {
+ [BPF_FUNC_unspec] = {
+ .func = test_func,
+ .gpl_only = true,
+ .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL,
+ .arg1_type = ARG_CONST_MAP_PTR,
+ .arg2_type = ARG_PTR_TO_MAP_KEY,
+ },
+};
+
+static const struct bpf_func_proto *test_func_proto(enum bpf_func_id func_id)
+{
+ if (func_id < 0 || func_id >= ARRAY_SIZE(test_funcs))
+ return NULL;
+ return &test_funcs[func_id];
+}
+
+static const struct bpf_context_access {
+ int size;
+ enum bpf_access_type type;
+} test_ctx_access[] = {
+ [offsetof(struct bpf_context, arg1)] = {
+ FIELD_SIZEOF(struct bpf_context, arg1),
+ BPF_READ
+ },
+ [offsetof(struct bpf_context, arg2)] = {
+ FIELD_SIZEOF(struct bpf_context, arg2),
+ BPF_READ
+ },
+};
+
+static bool test_is_valid_access(int off, int size, enum bpf_access_type type)
+{
+ const struct bpf_context_access *access;
+
+ if (off < 0 || off >= ARRAY_SIZE(test_ctx_access))
+ return false;
+
+ access = &test_ctx_access[off];
+ if (access->size == size && (access->type & type))
+ return true;
+
+ return false;
+}
+
+static struct bpf_verifier_ops test_ops = {
+ .get_func_proto = test_func_proto,
+ .is_valid_access = test_is_valid_access,
+};
+
+static struct bpf_prog_type_list tl_prog = {
+ .ops = &test_ops,
+ .type = BPF_PROG_TYPE_UNSPEC,
+};
+
+static struct bpf_map *test_map_alloc(union bpf_attr *attr)
+{
+ struct bpf_map *map;
+
+ map = kzalloc(sizeof(*map), GFP_USER);
+ if (!map)
+ return ERR_PTR(-ENOMEM);
+
+ map->key_size = attr->key_size;
+ map->value_size = attr->value_size;
+ map->max_entries = attr->max_entries;
+ return map;
+}
+
+static void test_map_free(struct bpf_map *map)
+{
+ kfree(map);
+}
+
+static struct bpf_map_ops test_map_ops = {
+ .map_alloc = test_map_alloc,
+ .map_free = test_map_free,
+};
+
+static struct bpf_map_type_list tl_map = {
+ .ops = &test_map_ops,
+ .type = BPF_MAP_TYPE_UNSPEC,
+};
+
+static int __init register_test_ops(void)
+{
+ bpf_register_map_type(&tl_map);
+ bpf_register_prog_type(&tl_prog);
+ return 0;
+}
+late_initcall(register_test_ops);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
new file mode 100644
index 0000000..a086dd3
--- /dev/null
+++ b/kernel/bpf/verifier.c
@@ -0,0 +1,1777 @@
+/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/bpf.h>
+#include <linux/filter.h>
+#include <net/netlink.h>
+#include <linux/file.h>
+#include <linux/vmalloc.h>
+
+/* bpf_check() is a static code analyzer that walks eBPF program
+ * instruction by instruction and updates register/stack state.
+ * All paths of conditional branches are analyzed until 'bpf_exit' insn.
+ *
+ * The first pass is depth-first-search to check that the program is a DAG.
+ * It rejects the following programs:
+ * - larger than BPF_MAXINSNS insns
+ * - if loop is present (detected via back-edge)
+ * - unreachable insns exist (shouldn't be a forest. program = one function)
+ * - out of bounds or malformed jumps
+ * The second pass is all possible path descent from the 1st insn.
+ * Since it's analyzing all paths through the program, the length of the
+ * analysis is limited to 32k insn, which may be hit even if the total number
+ * of insns is less than 4K, but there are too many branches that change stack/regs.
+ * Number of 'branches to be analyzed' is limited to 1k
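+ * For example (an illustrative sketch, not code from this patch), the
+ * single-instruction program
+ *    BPF_RAW_INSN(BPF_JMP | BPF_JA, 0, 0, -1, 0), // jump to itself
+ * produces a back-edge in the first pass and is rejected as a loop.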
+ *
+ * On entry to each instruction, each register has a type, and the instruction
+ * changes the types of the registers depending on instruction semantics.
+ * If instruction is BPF_MOV64_REG(BPF_REG_1, BPF_REG_5), then type of R5 is
+ * copied to R1.
+ *
+ * All registers are 64-bit.
+ * R0 - return register
+ * R1-R5 argument passing registers
+ * R6-R9 callee saved registers
+ * R10 - frame pointer read-only
+ *
+ * At the start of BPF program the register R1 contains a pointer to bpf_context
+ * and has type PTR_TO_CTX.
+ *
+ * Verifier tracks arithmetic operations on pointers in case:
+ * BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ * BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -20),
+ * 1st insn copies R10 (which has FRAME_PTR) type into R1
+ * and 2nd arithmetic instruction is pattern matched to recognize
+ * that it wants to construct a pointer to some element within stack.
+ * So after 2nd insn, the register R1 has type PTR_TO_STACK
+ * (and -20 constant is saved for further stack bounds checking).
+ * Meaning that this reg is a pointer to stack plus known immediate constant.
+ *
+ * Most of the time the registers have UNKNOWN_VALUE type, which
+ * means the register has some value, but it's not a valid pointer.
+ * (like pointer plus pointer becomes UNKNOWN_VALUE type)
+ *
+ * When verifier sees load or store instructions the type of base register
+ * can be: PTR_TO_MAP_VALUE, PTR_TO_CTX, FRAME_PTR. These are three pointer
+ * types recognized by check_mem_access() function.
+ *
+ * PTR_TO_MAP_VALUE means that this register is pointing to 'map element value'
+ * and the range of [ptr, ptr + map's value_size) is accessible.
+ *
+ * registers used to pass values to function calls are checked against
+ * function argument constraints.
+ *
+ * ARG_PTR_TO_MAP_KEY is one of such argument constraints.
+ * It means that the register type passed to this function must be
+ * PTR_TO_STACK and it will be used inside the function as
+ * 'pointer to map element key'
+ *
+ * For example the argument constraints for bpf_map_lookup_elem():
+ * .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL,
+ * .arg1_type = ARG_CONST_MAP_PTR,
+ * .arg2_type = ARG_PTR_TO_MAP_KEY,
+ *
+ * ret_type says that this function returns 'pointer to map elem value or null',
+ * the function expects the 1st argument to be a const pointer to 'struct bpf_map'
+ * and the 2nd argument to be a pointer to stack, which will be used inside
+ * the helper function as a pointer to the map element key.
+ *
+ * On the kernel side the helper function looks like:
+ * u64 bpf_map_lookup_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+ * {
+ * struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
+ * void *key = (void *) (unsigned long) r2;
+ * void *value;
+ *
+ * here kernel can access 'key' and 'map' pointers safely, knowing that
+ * [key, key + map->key_size) bytes are valid and were initialized on
+ * the stack of eBPF program.
+ * }
+ *
+ * Corresponding eBPF program may look like:
+ * BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), // after this insn R2 type is FRAME_PTR
+ * BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), // after this insn R2 type is PTR_TO_STACK
+ * BPF_LD_MAP_FD(BPF_REG_1, map_fd), // after this insn R1 type is CONST_PTR_TO_MAP
+ * BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ * here verifier looks at prototype of map_lookup_elem() and sees:
+ * .arg1_type == ARG_CONST_MAP_PTR and R1->type == CONST_PTR_TO_MAP, which is ok,
+ * Now verifier knows that this map has key of R1->map_ptr->key_size bytes
+ *
+ * Then .arg2_type == ARG_PTR_TO_MAP_KEY and R2->type == PTR_TO_STACK, ok so far,
+ * Now verifier checks that [R2, R2 + map's key_size) are within stack limits
+ * and were initialized prior to this call.
+ * If it's ok, then verifier allows this BPF_CALL insn and looks at
+ * .ret_type which is RET_PTR_TO_MAP_VALUE_OR_NULL, so it sets
+ * R0->type = PTR_TO_MAP_VALUE_OR_NULL, which means the bpf_map_lookup_elem()
+ * function returns either a pointer to map value or NULL.
+ *
+ * When type PTR_TO_MAP_VALUE_OR_NULL passes through 'if (reg != 0) goto +off'
+ * insn, the register holding that pointer in the true branch changes state to
+ * PTR_TO_MAP_VALUE and the same register changes state to CONST_IMM in the false
+ * branch. See check_cond_jmp_op().
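+ *
+ * An illustrative null-check sequence (a sketch, not code from this patch):
+ *    BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), // if R0 == NULL skip the store
+ *    BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),   // fall-through: R0 is PTR_TO_MAP_VALUE
+ *    BPF_EXIT_INSN(),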
+ *
+ * After the call R0 is set to return type of the function and registers R1-R5
+ * are set to NOT_INIT to indicate that they are no longer readable.
+ */
+
+/* types of values stored in eBPF registers */
+enum bpf_reg_type {
+ NOT_INIT = 0, /* nothing was written into register */
+ UNKNOWN_VALUE, /* reg doesn't contain a valid pointer */
+ PTR_TO_CTX, /* reg points to bpf_context */
+ CONST_PTR_TO_MAP, /* reg points to struct bpf_map */
+ PTR_TO_MAP_VALUE, /* reg points to map element value */
+ PTR_TO_MAP_VALUE_OR_NULL,/* points to map elem value or NULL */
+ FRAME_PTR, /* reg == frame_pointer */
+ PTR_TO_STACK, /* reg == frame_pointer + imm */
+ CONST_IMM, /* constant integer value */
+};
+
+struct reg_state {
+ enum bpf_reg_type type;
+ union {
+ /* valid when type == CONST_IMM | PTR_TO_STACK */
+ int imm;
+
+ /* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
+ * PTR_TO_MAP_VALUE_OR_NULL
+ */
+ struct bpf_map *map_ptr;
+ };
+};
+
+enum bpf_stack_slot_type {
+ STACK_INVALID, /* nothing was stored in this stack slot */
+ STACK_SPILL, /* 1st byte of register spilled into stack */
+ STACK_SPILL_PART, /* other 7 bytes of register spill */
+ STACK_MISC /* BPF program wrote some data into this slot */
+};
+
+struct bpf_stack_slot {
+ enum bpf_stack_slot_type stype;
+ struct reg_state reg_st;
+};
+
+/* state of the program:
+ * type of all registers and stack info
+ */
+struct verifier_state {
+ struct reg_state regs[MAX_BPF_REG];
+ struct bpf_stack_slot stack[MAX_BPF_STACK];
+};
+
+/* linked list of verifier states used to prune search */
+struct verifier_state_list {
+ struct verifier_state state;
+ struct verifier_state_list *next;
+};
+
+/* verifier_state + insn_idx are pushed to stack when branch is encountered */
+struct verifier_stack_elem {
+ /* verifier state is 'st'
+ * before processing instruction 'insn_idx'
+ * and after processing instruction 'prev_insn_idx'
+ */
+ struct verifier_state st;
+ int insn_idx;
+ int prev_insn_idx;
+ struct verifier_stack_elem *next;
+};
+
+#define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
+
+/* single container for all structs
+ * one verifier_env per bpf_check() call
+ */
+struct verifier_env {
+ struct bpf_prog *prog; /* eBPF program being verified */
+ struct verifier_stack_elem *head; /* stack of verifier states to be processed */
+ int stack_size; /* number of states to be processed */
+ struct verifier_state cur_state; /* current verifier state */
+ struct bpf_map *used_maps[MAX_USED_MAPS]; /* array of maps used by eBPF program */
+ u32 used_map_cnt; /* number of used maps */
+};
+
+/* verbose verifier prints what it's seeing
+ * bpf_check() is called under lock, so no race to access these global vars
+ */
+static u32 log_level, log_size, log_len;
+static char *log_buf;
+
+static DEFINE_MUTEX(bpf_verifier_lock);
+
+/* log_level controls verbosity level of eBPF verifier.
+ * verbose() is used to dump the verification trace to the log, so the user
+ * can figure out what's wrong with the program
+ */
+static void verbose(const char *fmt, ...)
+{
+ va_list args;
+
+ if (log_level == 0 || log_len >= log_size - 1)
+ return;
+
+ va_start(args, fmt);
+ log_len += vscnprintf(log_buf + log_len, log_size - log_len, fmt, args);
+ va_end(args);
+}
+
+/* string representation of 'enum bpf_reg_type' */
+static const char * const reg_type_str[] = {
+ [NOT_INIT] = "?",
+ [UNKNOWN_VALUE] = "inv",
+ [PTR_TO_CTX] = "ctx",
+ [CONST_PTR_TO_MAP] = "map_ptr",
+ [PTR_TO_MAP_VALUE] = "map_value",
+ [PTR_TO_MAP_VALUE_OR_NULL] = "map_value_or_null",
+ [FRAME_PTR] = "fp",
+ [PTR_TO_STACK] = "fp",
+ [CONST_IMM] = "imm",
+};
+
+static void print_verifier_state(struct verifier_env *env)
+{
+ enum bpf_reg_type t;
+ int i;
+
+ for (i = 0; i < MAX_BPF_REG; i++) {
+ t = env->cur_state.regs[i].type;
+ if (t == NOT_INIT)
+ continue;
+ verbose(" R%d=%s", i, reg_type_str[t]);
+ if (t == CONST_IMM || t == PTR_TO_STACK)
+ verbose("%d", env->cur_state.regs[i].imm);
+ else if (t == CONST_PTR_TO_MAP || t == PTR_TO_MAP_VALUE ||
+ t == PTR_TO_MAP_VALUE_OR_NULL)
+ verbose("(ks=%d,vs=%d)",
+ env->cur_state.regs[i].map_ptr->key_size,
+ env->cur_state.regs[i].map_ptr->value_size);
+ }
+ for (i = 0; i < MAX_BPF_STACK; i++) {
+ if (env->cur_state.stack[i].stype == STACK_SPILL)
+ verbose(" fp%d=%s", -MAX_BPF_STACK + i,
+ reg_type_str[env->cur_state.stack[i].reg_st.type]);
+ }
+ verbose("\n");
+}
+
+static const char *const bpf_class_string[] = {
+ [BPF_LD] = "ld",
+ [BPF_LDX] = "ldx",
+ [BPF_ST] = "st",
+ [BPF_STX] = "stx",
+ [BPF_ALU] = "alu",
+ [BPF_JMP] = "jmp",
+ [BPF_RET] = "BUG",
+ [BPF_ALU64] = "alu64",
+};
+
+static const char *const bpf_alu_string[] = {
+ [BPF_ADD >> 4] = "+=",
+ [BPF_SUB >> 4] = "-=",
+ [BPF_MUL >> 4] = "*=",
+ [BPF_DIV >> 4] = "/=",
+ [BPF_OR >> 4] = "|=",
+ [BPF_AND >> 4] = "&=",
+ [BPF_LSH >> 4] = "<<=",
+ [BPF_RSH >> 4] = ">>=",
+ [BPF_NEG >> 4] = "neg",
+ [BPF_MOD >> 4] = "%=",
+ [BPF_XOR >> 4] = "^=",
+ [BPF_MOV >> 4] = "=",
+ [BPF_ARSH >> 4] = "s>>=",
+ [BPF_END >> 4] = "endian",
+};
+
+static const char *const bpf_ldst_string[] = {
+ [BPF_W >> 3] = "u32",
+ [BPF_H >> 3] = "u16",
+ [BPF_B >> 3] = "u8",
+ [BPF_DW >> 3] = "u64",
+};
+
+static const char *const bpf_jmp_string[] = {
+ [BPF_JA >> 4] = "jmp",
+ [BPF_JEQ >> 4] = "==",
+ [BPF_JGT >> 4] = ">",
+ [BPF_JGE >> 4] = ">=",
+ [BPF_JSET >> 4] = "&",
+ [BPF_JNE >> 4] = "!=",
+ [BPF_JSGT >> 4] = "s>",
+ [BPF_JSGE >> 4] = "s>=",
+ [BPF_CALL >> 4] = "call",
+ [BPF_EXIT >> 4] = "exit",
+};
+
+static void print_bpf_insn(struct bpf_insn *insn)
+{
+ u8 class = BPF_CLASS(insn->code);
+
+ if (class == BPF_ALU || class == BPF_ALU64) {
+ if (BPF_SRC(insn->code) == BPF_X)
+ verbose("(%02x) %sr%d %s %sr%d\n",
+ insn->code, class == BPF_ALU ? "(u32) " : "",
+ insn->dst_reg,
+ bpf_alu_string[BPF_OP(insn->code) >> 4],
+ class == BPF_ALU ? "(u32) " : "",
+ insn->src_reg);
+ else
+ verbose("(%02x) %sr%d %s %s%d\n",
+ insn->code, class == BPF_ALU ? "(u32) " : "",
+ insn->dst_reg,
+ bpf_alu_string[BPF_OP(insn->code) >> 4],
+ class == BPF_ALU ? "(u32) " : "",
+ insn->imm);
+ } else if (class == BPF_STX) {
+ if (BPF_MODE(insn->code) == BPF_MEM)
+ verbose("(%02x) *(%s *)(r%d %+d) = r%d\n",
+ insn->code,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->dst_reg,
+ insn->off, insn->src_reg);
+ else if (BPF_MODE(insn->code) == BPF_XADD)
+ verbose("(%02x) lock *(%s *)(r%d %+d) += r%d\n",
+ insn->code,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->dst_reg, insn->off,
+ insn->src_reg);
+ else
+ verbose("BUG_%02x\n", insn->code);
+ } else if (class == BPF_ST) {
+ if (BPF_MODE(insn->code) != BPF_MEM) {
+ verbose("BUG_st_%02x\n", insn->code);
+ return;
+ }
+ verbose("(%02x) *(%s *)(r%d %+d) = %d\n",
+ insn->code,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->dst_reg,
+ insn->off, insn->imm);
+ } else if (class == BPF_LDX) {
+ if (BPF_MODE(insn->code) != BPF_MEM) {
+ verbose("BUG_ldx_%02x\n", insn->code);
+ return;
+ }
+ verbose("(%02x) r%d = *(%s *)(r%d %+d)\n",
+ insn->code, insn->dst_reg,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->src_reg, insn->off);
+ } else if (class == BPF_LD) {
+ if (BPF_MODE(insn->code) == BPF_ABS) {
+ verbose("(%02x) r0 = *(%s *)skb[%d]\n",
+ insn->code,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->imm);
+ } else if (BPF_MODE(insn->code) == BPF_IND) {
+ verbose("(%02x) r0 = *(%s *)skb[r%d + %d]\n",
+ insn->code,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->src_reg, insn->imm);
+ } else if (BPF_MODE(insn->code) == BPF_IMM) {
+ verbose("(%02x) r%d = 0x%x\n",
+ insn->code, insn->dst_reg, insn->imm);
+ } else {
+ verbose("BUG_ld_%02x\n", insn->code);
+ return;
+ }
+ } else if (class == BPF_JMP) {
+ u8 opcode = BPF_OP(insn->code);
+
+ if (opcode == BPF_CALL) {
+ verbose("(%02x) call %d\n", insn->code, insn->imm);
+ } else if (insn->code == (BPF_JMP | BPF_JA)) {
+ verbose("(%02x) goto pc%+d\n",
+ insn->code, insn->off);
+ } else if (insn->code == (BPF_JMP | BPF_EXIT)) {
+ verbose("(%02x) exit\n", insn->code);
+ } else if (BPF_SRC(insn->code) == BPF_X) {
+ verbose("(%02x) if r%d %s r%d goto pc%+d\n",
+ insn->code, insn->dst_reg,
+ bpf_jmp_string[BPF_OP(insn->code) >> 4],
+ insn->src_reg, insn->off);
+ } else {
+ verbose("(%02x) if r%d %s 0x%x goto pc%+d\n",
+ insn->code, insn->dst_reg,
+ bpf_jmp_string[BPF_OP(insn->code) >> 4],
+ insn->imm, insn->off);
+ }
+ } else {
+ verbose("(%02x) %s\n", insn->code, bpf_class_string[class]);
+ }
+}
+
+static int pop_stack(struct verifier_env *env, int *prev_insn_idx)
+{
+ struct verifier_stack_elem *elem;
+ int insn_idx;
+
+ if (env->head == NULL)
+ return -1;
+
+ memcpy(&env->cur_state, &env->head->st, sizeof(env->cur_state));
+ insn_idx = env->head->insn_idx;
+ if (prev_insn_idx)
+ *prev_insn_idx = env->head->prev_insn_idx;
+ elem = env->head->next;
+ kfree(env->head);
+ env->head = elem;
+ env->stack_size--;
+ return insn_idx;
+}
+
+static struct verifier_state *push_stack(struct verifier_env *env, int insn_idx,
+ int prev_insn_idx)
+{
+ struct verifier_stack_elem *elem;
+
+ elem = kmalloc(sizeof(struct verifier_stack_elem), GFP_KERNEL);
+ if (!elem)
+ goto err;
+
+ memcpy(&elem->st, &env->cur_state, sizeof(env->cur_state));
+ elem->insn_idx = insn_idx;
+ elem->prev_insn_idx = prev_insn_idx;
+ elem->next = env->head;
+ env->head = elem;
+ env->stack_size++;
+ if (env->stack_size > 1024) {
+ verbose("BPF program is too complex\n");
+ goto err;
+ }
+ return &elem->st;
+err:
+ /* pop all elements and return */
+ while (pop_stack(env, NULL) >= 0);
+ return NULL;
+}
+
+#define CALLER_SAVED_REGS 6
+static const int caller_saved[CALLER_SAVED_REGS] = {
+ BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5
+};
+
+static void init_reg_state(struct reg_state *regs)
+{
+ int i;
+
+ for (i = 0; i < MAX_BPF_REG; i++) {
+ regs[i].type = NOT_INIT;
+ regs[i].imm = 0;
+ regs[i].map_ptr = NULL;
+ }
+
+ /* frame pointer */
+ regs[BPF_REG_FP].type = FRAME_PTR;
+
+ /* 1st arg to a function */
+ regs[BPF_REG_1].type = PTR_TO_CTX;
+}
+
+static void mark_reg_unknown_value(struct reg_state *regs, u32 regno)
+{
+ BUG_ON(regno >= MAX_BPF_REG);
+ regs[regno].type = UNKNOWN_VALUE;
+ regs[regno].imm = 0;
+ regs[regno].map_ptr = NULL;
+}
+
+enum reg_arg_type {
+ SRC_OP, /* register is used as source operand */
+ DST_OP, /* register is used as destination operand */
+ DST_OP_NO_MARK /* same as above, check only, don't mark */
+};
+
+static int check_reg_arg(struct reg_state *regs, u32 regno,
+ enum reg_arg_type t)
+{
+ if (regno >= MAX_BPF_REG) {
+ verbose("R%d is invalid\n", regno);
+ return -EINVAL;
+ }
+
+ if (t == SRC_OP) {
+ /* check whether register used as source operand can be read */
+ if (regs[regno].type == NOT_INIT) {
+ verbose("R%d !read_ok\n", regno);
+ return -EACCES;
+ }
+ } else {
+ /* check whether register used as dest operand can be written to */
+ if (regno == BPF_REG_FP) {
+ verbose("frame pointer is read only\n");
+ return -EACCES;
+ }
+ if (t == DST_OP)
+ mark_reg_unknown_value(regs, regno);
+ }
+ return 0;
+}
+
+static int bpf_size_to_bytes(int bpf_size)
+{
+ if (bpf_size == BPF_W)
+ return 4;
+ else if (bpf_size == BPF_H)
+ return 2;
+ else if (bpf_size == BPF_B)
+ return 1;
+ else if (bpf_size == BPF_DW)
+ return 8;
+ else
+ return -EINVAL;
+}
+
+/* check_stack_read/write functions track spill/fill of registers,
+ * stack boundary and alignment are checked in check_mem_access()
+ */
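+/* An illustrative spill/fill pair (a sketch, not code from this patch),
+ * assuming R1 holds a pointer type such as PTR_TO_CTX:
+ *    BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), // spill R1 to fp-8
+ *    BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8), // fill R1 back from fp-8
+ * the spill marks fp-8 as STACK_SPILL (the other 7 bytes as
+ * STACK_SPILL_PART) and saves R1's reg_state; the fill restores it.
+ */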
+static int check_stack_write(struct verifier_state *state, int off, int size,
+ int value_regno)
+{
+ struct bpf_stack_slot *slot;
+ int i;
+
+ if (value_regno >= 0 &&
+ (state->regs[value_regno].type == PTR_TO_MAP_VALUE ||
+ state->regs[value_regno].type == PTR_TO_STACK ||
+ state->regs[value_regno].type == PTR_TO_CTX)) {
+
+ /* register containing pointer is being spilled into stack */
+ if (size != 8) {
+ verbose("invalid size of register spill\n");
+ return -EACCES;
+ }
+
+ slot = &state->stack[MAX_BPF_STACK + off];
+ slot->stype = STACK_SPILL;
+ /* save register state */
+ slot->reg_st = state->regs[value_regno];
+ for (i = 1; i < 8; i++) {
+ slot = &state->stack[MAX_BPF_STACK + off + i];
+ slot->stype = STACK_SPILL_PART;
+ slot->reg_st.type = UNKNOWN_VALUE;
+ slot->reg_st.map_ptr = NULL;
+ }
+ } else {
+
+ /* regular write of data into stack */
+ for (i = 0; i < size; i++) {
+ slot = &state->stack[MAX_BPF_STACK + off + i];
+ slot->stype = STACK_MISC;
+ slot->reg_st.type = UNKNOWN_VALUE;
+ slot->reg_st.map_ptr = NULL;
+ }
+ }
+ return 0;
+}
+
+static int check_stack_read(struct verifier_state *state, int off, int size,
+ int value_regno)
+{
+ int i;
+ struct bpf_stack_slot *slot;
+
+ slot = &state->stack[MAX_BPF_STACK + off];
+
+ if (slot->stype == STACK_SPILL) {
+ if (size != 8) {
+ verbose("invalid size of register spill\n");
+ return -EACCES;
+ }
+ for (i = 1; i < 8; i++) {
+ if (state->stack[MAX_BPF_STACK + off + i].stype !=
+ STACK_SPILL_PART) {
+ verbose("corrupted spill memory\n");
+ return -EACCES;
+ }
+ }
+
+ if (value_regno >= 0)
+ /* restore register state from stack */
+ state->regs[value_regno] = slot->reg_st;
+ return 0;
+ } else {
+ for (i = 0; i < size; i++) {
+ if (state->stack[MAX_BPF_STACK + off + i].stype !=
+ STACK_MISC) {
+ verbose("invalid read from stack off %d+%d size %d\n",
+ off, i, size);
+ return -EACCES;
+ }
+ }
+ if (value_regno >= 0)
+ /* have read misc data from the stack */
+ mark_reg_unknown_value(state->regs, value_regno);
+ return 0;
+ }
+}
+
+/* check read/write into map element returned by bpf_map_lookup_elem() */
+static int check_map_access(struct verifier_env *env, u32 regno, int off,
+ int size)
+{
+ struct bpf_map *map = env->cur_state.regs[regno].map_ptr;
+
+ if (off < 0 || off + size > map->value_size) {
+ verbose("invalid access to map value, value_size=%d off=%d size=%d\n",
+ map->value_size, off, size);
+ return -EACCES;
+ }
+ return 0;
+}
+
+/* check access to 'struct bpf_context' fields */
+static int check_ctx_access(struct verifier_env *env, int off, int size,
+ enum bpf_access_type t)
+{
+ if (env->prog->aux->ops->is_valid_access &&
+ env->prog->aux->ops->is_valid_access(off, size, t))
+ return 0;
+
+ verbose("invalid bpf_context access off=%d size=%d\n", off, size);
+ return -EACCES;
+}
+
+/* check whether memory at (regno + off) is accessible for t = (read | write)
+ * if t==write, value_regno is a register whose value is stored into memory
+ * if t==read, value_regno is a register which will receive the value from memory
+ * if t==write && value_regno==-1, some unknown value is stored into memory
+ * if t==read && value_regno==-1, don't care what we read from memory
+ */
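+/* e.g. (sketch) the insn BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_2, 8) is
+ * validated by do_check() as check_mem_access(env, 1, 8, BPF_W, BPF_WRITE, 2)
+ */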
+static int check_mem_access(struct verifier_env *env, u32 regno, int off,
+ int bpf_size, enum bpf_access_type t,
+ int value_regno)
+{
+ struct verifier_state *state = &env->cur_state;
+ int size, err = 0;
+
+ size = bpf_size_to_bytes(bpf_size);
+ if (size < 0)
+ return size;
+
+ if (off % size != 0) {
+ verbose("misaligned access off %d size %d\n", off, size);
+ return -EACCES;
+ }
+
+ if (state->regs[regno].type == PTR_TO_MAP_VALUE) {
+ err = check_map_access(env, regno, off, size);
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown_value(state->regs, value_regno);
+
+ } else if (state->regs[regno].type == PTR_TO_CTX) {
+ err = check_ctx_access(env, off, size, t);
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown_value(state->regs, value_regno);
+
+ } else if (state->regs[regno].type == FRAME_PTR) {
+ if (off >= 0 || off < -MAX_BPF_STACK) {
+ verbose("invalid stack off=%d size=%d\n", off, size);
+ return -EACCES;
+ }
+ if (t == BPF_WRITE)
+ err = check_stack_write(state, off, size, value_regno);
+ else
+ err = check_stack_read(state, off, size, value_regno);
+ } else {
+ verbose("R%d invalid mem access '%s'\n",
+ regno, reg_type_str[state->regs[regno].type]);
+ return -EACCES;
+ }
+ return err;
+}
+
+static int check_xadd(struct verifier_env *env, struct bpf_insn *insn)
+{
+ struct reg_state *regs = env->cur_state.regs;
+ int err;
+
+ if ((BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) ||
+ insn->imm != 0) {
+ verbose("BPF_XADD uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ /* check src1 operand */
+ err = check_reg_arg(regs, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+
+ /* check src2 operand */
+ err = check_reg_arg(regs, insn->dst_reg, SRC_OP);
+ if (err)
+ return err;
+
+ /* check whether atomic_add can read the memory */
+ err = check_mem_access(env, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_READ, -1);
+ if (err)
+ return err;
+
+ /* check whether atomic_add can write into the same memory */
+ return check_mem_access(env, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_WRITE, -1);
+}
+
+/* when register 'regno' is passed into function that will read 'access_size'
+ * bytes from that pointer, make sure that it's within stack boundary
+ * and all elements of stack are initialized
+ */
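+/* For instance (illustrative): R2 = fp-4 with access_size = 8 fails, since
+ * off + access_size = 4 > 0 would reach past the frame pointer, while
+ * R2 = fp-8 with access_size = 8 passes, provided all eight bytes were
+ * previously written (STACK_MISC).
+ */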
+static int check_stack_boundary(struct verifier_env *env,
+ int regno, int access_size)
+{
+ struct verifier_state *state = &env->cur_state;
+ struct reg_state *regs = state->regs;
+ int off, i;
+
+ if (regs[regno].type != PTR_TO_STACK)
+ return -EACCES;
+
+ off = regs[regno].imm;
+ if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 ||
+ access_size <= 0) {
+ verbose("invalid stack type R%d off=%d access_size=%d\n",
+ regno, off, access_size);
+ return -EACCES;
+ }
+
+ for (i = 0; i < access_size; i++) {
+ if (state->stack[MAX_BPF_STACK + off + i].stype != STACK_MISC) {
+ verbose("invalid indirect read from stack off %d+%d size %d\n",
+ off, i, access_size);
+ return -EACCES;
+ }
+ }
+ return 0;
+}
+
+static int check_func_arg(struct verifier_env *env, u32 regno,
+ enum bpf_arg_type arg_type, struct bpf_map **mapp)
+{
+ struct reg_state *reg = env->cur_state.regs + regno;
+ enum bpf_reg_type expected_type;
+ int err = 0;
+
+ if (arg_type == ARG_ANYTHING)
+ return 0;
+
+ if (reg->type == NOT_INIT) {
+ verbose("R%d !read_ok\n", regno);
+ return -EACCES;
+ }
+
+ if (arg_type == ARG_PTR_TO_STACK || arg_type == ARG_PTR_TO_MAP_KEY ||
+ arg_type == ARG_PTR_TO_MAP_VALUE) {
+ expected_type = PTR_TO_STACK;
+ } else if (arg_type == ARG_CONST_STACK_SIZE) {
+ expected_type = CONST_IMM;
+ } else if (arg_type == ARG_CONST_MAP_PTR) {
+ expected_type = CONST_PTR_TO_MAP;
+ } else {
+ verbose("unsupported arg_type %d\n", arg_type);
+ return -EFAULT;
+ }
+
+ if (reg->type != expected_type) {
+ verbose("R%d type=%s expected=%s\n", regno,
+ reg_type_str[reg->type], reg_type_str[expected_type]);
+ return -EACCES;
+ }
+
+ if (arg_type == ARG_CONST_MAP_PTR) {
+ /* bpf_map_xxx(map_ptr) call: remember that map_ptr */
+ *mapp = reg->map_ptr;
+
+ } else if (arg_type == ARG_PTR_TO_MAP_KEY) {
+ /* bpf_map_xxx(..., map_ptr, ..., key) call:
+ * check that [key, key + map->key_size) are within
+ * stack limits and initialized
+ */
+ if (!*mapp) {
+ /* in function declaration map_ptr must come before
+ * map_key, so that it's verified and known before
+ * we have to check map_key here. Otherwise it means
+ * that the kernel subsystem misconfigured the verifier
+ */
+ verbose("invalid map_ptr to access map->key\n");
+ return -EACCES;
+ }
+ err = check_stack_boundary(env, regno, (*mapp)->key_size);
+
+ } else if (arg_type == ARG_PTR_TO_MAP_VALUE) {
+ /* bpf_map_xxx(..., map_ptr, ..., value) call:
+ * check [value, value + map->value_size) validity
+ */
+ if (!*mapp) {
+ /* kernel subsystem misconfigured verifier */
+ verbose("invalid map_ptr to access map->value\n");
+ return -EACCES;
+ }
+ err = check_stack_boundary(env, regno, (*mapp)->value_size);
+
+ } else if (arg_type == ARG_CONST_STACK_SIZE) {
+ /* bpf_xxx(..., buf, len) call will access 'len' bytes
+ * from stack pointer 'buf'. Check it
+ * note: regno == len, regno - 1 == buf
+ */
+ if (regno == 0) {
+ /* kernel subsystem misconfigured verifier */
+ verbose("ARG_CONST_STACK_SIZE cannot be first argument\n");
+ return -EACCES;
+ }
+ err = check_stack_boundary(env, regno - 1, reg->imm);
+ }
+
+ return err;
+}
+
+static int check_call(struct verifier_env *env, int func_id)
+{
+ struct verifier_state *state = &env->cur_state;
+ const struct bpf_func_proto *fn = NULL;
+ struct reg_state *regs = state->regs;
+ struct bpf_map *map = NULL;
+ struct reg_state *reg;
+ int i, err;
+
+ /* find function prototype */
+ if (func_id < 0 || func_id >= __BPF_FUNC_MAX_ID) {
+ verbose("invalid func %d\n", func_id);
+ return -EINVAL;
+ }
+
+ if (env->prog->aux->ops->get_func_proto)
+ fn = env->prog->aux->ops->get_func_proto(func_id);
+
+ if (!fn) {
+ verbose("unknown func %d\n", func_id);
+ return -EINVAL;
+ }
+
+ /* eBPF programs must be GPL compatible to use GPL-ed functions */
+ if (!env->prog->aux->is_gpl_compatible && fn->gpl_only) {
+ verbose("cannot call GPL only function from proprietary program\n");
+ return -EINVAL;
+ }
+
+ /* check args */
+ err = check_func_arg(env, BPF_REG_1, fn->arg1_type, &map);
+ if (err)
+ return err;
+ err = check_func_arg(env, BPF_REG_2, fn->arg2_type, &map);
+ if (err)
+ return err;
+ err = check_func_arg(env, BPF_REG_3, fn->arg3_type, &map);
+ if (err)
+ return err;
+ err = check_func_arg(env, BPF_REG_4, fn->arg4_type, &map);
+ if (err)
+ return err;
+ err = check_func_arg(env, BPF_REG_5, fn->arg5_type, &map);
+ if (err)
+ return err;
+
+ /* reset caller saved regs */
+ for (i = 0; i < CALLER_SAVED_REGS; i++) {
+ reg = regs + caller_saved[i];
+ reg->type = NOT_INIT;
+ reg->imm = 0;
+ }
+
+ /* update return register */
+ if (fn->ret_type == RET_INTEGER) {
+ regs[BPF_REG_0].type = UNKNOWN_VALUE;
+ } else if (fn->ret_type == RET_VOID) {
+ regs[BPF_REG_0].type = NOT_INIT;
+ } else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL) {
+ regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
+ /* remember map_ptr, so that check_map_access()
+ * can check 'value_size' boundary of memory access
+ * to map element returned from bpf_map_lookup_elem()
+ */
+ if (map == NULL) {
+ verbose("kernel subsystem misconfigured verifier\n");
+ return -EINVAL;
+ }
+ regs[BPF_REG_0].map_ptr = map;
+ } else {
+ verbose("unknown return type %d of func %d\n",
+ fn->ret_type, func_id);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/* check validity of 32-bit and 64-bit arithmetic operations */
+static int check_alu_op(struct reg_state *regs, struct bpf_insn *insn)
+{
+ u8 opcode = BPF_OP(insn->code);
+ int err;
+
+ if (opcode == BPF_END || opcode == BPF_NEG) {
+ if (opcode == BPF_NEG) {
+ if (BPF_SRC(insn->code) != 0 ||
+ insn->src_reg != BPF_REG_0 ||
+ insn->off != 0 || insn->imm != 0) {
+ verbose("BPF_NEG uses reserved fields\n");
+ return -EINVAL;
+ }
+ } else {
+ if (insn->src_reg != BPF_REG_0 || insn->off != 0 ||
+ (insn->imm != 16 && insn->imm != 32 && insn->imm != 64)) {
+ verbose("BPF_END uses reserved fields\n");
+ return -EINVAL;
+ }
+ }
+
+ /* check src operand */
+ err = check_reg_arg(regs, insn->dst_reg, SRC_OP);
+ if (err)
+ return err;
+
+ /* check dest operand */
+ err = check_reg_arg(regs, insn->dst_reg, DST_OP);
+ if (err)
+ return err;
+
+ } else if (opcode == BPF_MOV) {
+
+ if (BPF_SRC(insn->code) == BPF_X) {
+ if (insn->imm != 0 || insn->off != 0) {
+ verbose("BPF_MOV uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ /* check src operand */
+ err = check_reg_arg(regs, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+ } else {
+ if (insn->src_reg != BPF_REG_0 || insn->off != 0) {
+ verbose("BPF_MOV uses reserved fields\n");
+ return -EINVAL;
+ }
+ }
+
+ /* check dest operand */
+ err = check_reg_arg(regs, insn->dst_reg, DST_OP);
+ if (err)
+ return err;
+
+ if (BPF_SRC(insn->code) == BPF_X) {
+ if (BPF_CLASS(insn->code) == BPF_ALU64) {
+ /* case: R1 = R2
+ * copy register state to dest reg
+ */
+ regs[insn->dst_reg] = regs[insn->src_reg];
+ } else {
+ regs[insn->dst_reg].type = UNKNOWN_VALUE;
+ regs[insn->dst_reg].map_ptr = NULL;
+ }
+ } else {
+ /* case: R = imm
+ * remember the value we stored into this reg
+ */
+ regs[insn->dst_reg].type = CONST_IMM;
+ regs[insn->dst_reg].imm = insn->imm;
+ }
+
+ } else if (opcode > BPF_END) {
+ verbose("invalid BPF_ALU opcode %x\n", opcode);
+ return -EINVAL;
+
+ } else { /* all other ALU ops: and, sub, xor, add, ... */
+
+ bool stack_relative = false;
+
+ if (BPF_SRC(insn->code) == BPF_X) {
+ if (insn->imm != 0 || insn->off != 0) {
+ verbose("BPF_ALU uses reserved fields\n");
+ return -EINVAL;
+ }
+ /* check src1 operand */
+ err = check_reg_arg(regs, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+ } else {
+ if (insn->src_reg != BPF_REG_0 || insn->off != 0) {
+ verbose("BPF_ALU uses reserved fields\n");
+ return -EINVAL;
+ }
+ }
+
+ /* check src2 operand */
+ err = check_reg_arg(regs, insn->dst_reg, SRC_OP);
+ if (err)
+ return err;
+
+ if ((opcode == BPF_MOD || opcode == BPF_DIV) &&
+ BPF_SRC(insn->code) == BPF_K && insn->imm == 0) {
+ verbose("div by zero\n");
+ return -EINVAL;
+ }
+
+ /* pattern match 'bpf_add Rx, imm' instruction */
+ if (opcode == BPF_ADD && BPF_CLASS(insn->code) == BPF_ALU64 &&
+ regs[insn->dst_reg].type == FRAME_PTR &&
+ BPF_SRC(insn->code) == BPF_K)
+ stack_relative = true;
+
+ /* check dest operand */
+ err = check_reg_arg(regs, insn->dst_reg, DST_OP);
+ if (err)
+ return err;
+
+ if (stack_relative) {
+ regs[insn->dst_reg].type = PTR_TO_STACK;
+ regs[insn->dst_reg].imm = insn->imm;
+ }
+ }
+
+ return 0;
+}
+
+static int check_cond_jmp_op(struct verifier_env *env,
+ struct bpf_insn *insn, int *insn_idx)
+{
+ struct reg_state *regs = env->cur_state.regs;
+ struct verifier_state *other_branch;
+ u8 opcode = BPF_OP(insn->code);
+ int err;
+
+ if (opcode > BPF_EXIT) {
+ verbose("invalid BPF_JMP opcode %x\n", opcode);
+ return -EINVAL;
+ }
+
+ if (BPF_SRC(insn->code) == BPF_X) {
+ if (insn->imm != 0) {
+ verbose("BPF_JMP uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ /* check src1 operand */
+ err = check_reg_arg(regs, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+ } else {
+ if (insn->src_reg != BPF_REG_0) {
+ verbose("BPF_JMP uses reserved fields\n");
+ return -EINVAL;
+ }
+ }
+
+ /* check src2 operand */
+ err = check_reg_arg(regs, insn->dst_reg, SRC_OP);
+ if (err)
+ return err;
+
+ /* detect if R == 0 where R was initialized to zero earlier */
+ if (BPF_SRC(insn->code) == BPF_K &&
+ (opcode == BPF_JEQ || opcode == BPF_JNE) &&
+ regs[insn->dst_reg].type == CONST_IMM &&
+ regs[insn->dst_reg].imm == insn->imm) {
+ if (opcode == BPF_JEQ) {
+ /* if (imm == imm) goto pc+off;
+ * only follow the goto, ignore fall-through
+ */
+ *insn_idx += insn->off;
+ return 0;
+ } else {
+ /* if (imm != imm) goto pc+off;
+ * only follow fall-through branch, since
+ * that's where the program will go
+ */
+ return 0;
+ }
+ }
+
+ other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx);
+ if (!other_branch)
+ return -EFAULT;
+
+ /* detect if R == 0 where R is returned value from bpf_map_lookup_elem() */
+ if (BPF_SRC(insn->code) == BPF_K &&
+ insn->imm == 0 && (opcode == BPF_JEQ ||
+ opcode == BPF_JNE) &&
+ regs[insn->dst_reg].type == PTR_TO_MAP_VALUE_OR_NULL) {
+ if (opcode == BPF_JEQ) {
+ /* next fallthrough insn can access memory via
+ * this register
+ */
+ regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
+ /* branch target cannot access it, since reg == 0 */
+ other_branch->regs[insn->dst_reg].type = CONST_IMM;
+ other_branch->regs[insn->dst_reg].imm = 0;
+ } else {
+ other_branch->regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
+ regs[insn->dst_reg].type = CONST_IMM;
+ regs[insn->dst_reg].imm = 0;
+ }
+ } else if (BPF_SRC(insn->code) == BPF_K &&
+ (opcode == BPF_JEQ || opcode == BPF_JNE)) {
+
+ if (opcode == BPF_JEQ) {
+ /* detect if (R == imm) goto
+ * and in the target state recognize that R = imm
+ */
+ other_branch->regs[insn->dst_reg].type = CONST_IMM;
+ other_branch->regs[insn->dst_reg].imm = insn->imm;
+ } else {
+ /* detect if (R != imm) goto
+ * and in the fall-through state recognize that R = imm
+ */
+ regs[insn->dst_reg].type = CONST_IMM;
+ regs[insn->dst_reg].imm = insn->imm;
+ }
+ }
+ if (log_level)
+ print_verifier_state(env);
+ return 0;
+}
+
+/* return the map pointer stored inside BPF_LD_IMM64 instruction */
+static struct bpf_map *ld_imm64_to_map_ptr(struct bpf_insn *insn)
+{
+ u64 imm64 = ((u64) (u32) insn[0].imm) | ((u64) (u32) insn[1].imm) << 32;
+
+ return (struct bpf_map *) (unsigned long) imm64;
+}
+
+/* verify BPF_LD_IMM64 instruction */
+static int check_ld_imm(struct verifier_env *env, struct bpf_insn *insn)
+{
+ struct reg_state *regs = env->cur_state.regs;
+ int err;
+
+ if (BPF_SIZE(insn->code) != BPF_DW) {
+ verbose("invalid BPF_LD_IMM insn\n");
+ return -EINVAL;
+ }
+ if (insn->off != 0) {
+ verbose("BPF_LD_IMM64 uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ err = check_reg_arg(regs, insn->dst_reg, DST_OP);
+ if (err)
+ return err;
+
+ if (insn->src_reg == 0)
+ /* generic move 64-bit immediate into a register */
+ return 0;
+
+ /* replace_map_fd_with_map_ptr() should have caught bad ld_imm64 */
+ BUG_ON(insn->src_reg != BPF_PSEUDO_MAP_FD);
+
+ regs[insn->dst_reg].type = CONST_PTR_TO_MAP;
+ regs[insn->dst_reg].map_ptr = ld_imm64_to_map_ptr(insn);
+ return 0;
+}
+
+/* non-recursive DFS pseudo code
+ * 1 procedure DFS-iterative(G,v):
+ * 2 label v as discovered
+ * 3 let S be a stack
+ * 4 S.push(v)
+ * 5 while S is not empty
+ * 6 t <- S.pop()
+ * 7 if t is what we're looking for:
+ * 8 return t
+ * 9 for all edges e in G.adjacentEdges(t) do
+ * 10 if edge e is already labelled
+ * 11 continue with the next edge
+ * 12 w <- G.adjacentVertex(t,e)
+ * 13 if vertex w is not discovered and not explored
+ * 14 label e as tree-edge
+ * 15 label w as discovered
+ * 16 S.push(w)
+ * 17 continue at 5
+ * 18 else if vertex w is discovered
+ * 19 label e as back-edge
+ * 20 else
+ * 21 // vertex w is explored
+ * 22 label e as forward- or cross-edge
+ * 23 label t as explored
+ * 24 S.pop()
+ *
+ * convention:
+ * 0x10 - discovered
+ * 0x11 - discovered and fall-through edge labelled
+ * 0x12 - discovered and fall-through and branch edges labelled
+ * 0x20 - explored
+ */
+
+enum {
+ DISCOVERED = 0x10,
+ EXPLORED = 0x20,
+ FALLTHROUGH = 1,
+ BRANCH = 2,
+};
+
+static int *insn_stack; /* stack of insns to process */
+static int cur_stack; /* current stack index */
+static int *insn_state;
+
+/* t, w, e - match pseudo-code above:
+ * t - index of current instruction
+ * w - next instruction
+ * e - edge
+ */
+static int push_insn(int t, int w, int e, struct verifier_env *env)
+{
+ if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
+ return 0;
+
+ if (e == BRANCH && insn_state[t] >= (DISCOVERED | BRANCH))
+ return 0;
+
+ if (w < 0 || w >= env->prog->len) {
+ verbose("jump out of range from insn %d to %d\n", t, w);
+ return -EINVAL;
+ }
+
+ if (insn_state[w] == 0) {
+ /* tree-edge */
+ insn_state[t] = DISCOVERED | e;
+ insn_state[w] = DISCOVERED;
+ if (cur_stack >= env->prog->len)
+ return -E2BIG;
+ insn_stack[cur_stack++] = w;
+ return 1;
+ } else if ((insn_state[w] & 0xF0) == DISCOVERED) {
+ verbose("back-edge from insn %d to %d\n", t, w);
+ return -EINVAL;
+ } else if (insn_state[w] == EXPLORED) {
+ /* forward- or cross-edge */
+ insn_state[t] = DISCOVERED | e;
+ } else {
+ verbose("insn state internal bug\n");
+ return -EFAULT;
+ }
+ return 0;
+}
+
+/* non-recursive depth-first-search to detect loops in BPF program
+ * loop == back-edge in directed graph
+ */
+static int check_cfg(struct verifier_env *env)
+{
+ struct bpf_insn *insns = env->prog->insnsi;
+ int insn_cnt = env->prog->len;
+ int ret = 0;
+ int i, t;
+
+ insn_state = kcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
+ if (!insn_state)
+ return -ENOMEM;
+
+ insn_stack = kcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
+ if (!insn_stack) {
+ kfree(insn_state);
+ return -ENOMEM;
+ }
+
+ insn_state[0] = DISCOVERED; /* mark 1st insn as discovered */
+ insn_stack[0] = 0; /* 0 is the first instruction */
+ cur_stack = 1;
+
+peek_stack:
+ if (cur_stack == 0)
+ goto check_state;
+ t = insn_stack[cur_stack - 1];
+
+ if (BPF_CLASS(insns[t].code) == BPF_JMP) {
+ u8 opcode = BPF_OP(insns[t].code);
+
+ if (opcode == BPF_EXIT) {
+ goto mark_explored;
+ } else if (opcode == BPF_CALL) {
+ ret = push_insn(t, t + 1, FALLTHROUGH, env);
+ if (ret == 1)
+ goto peek_stack;
+ else if (ret < 0)
+ goto err_free;
+ } else if (opcode == BPF_JA) {
+ if (BPF_SRC(insns[t].code) != BPF_K) {
+ ret = -EINVAL;
+ goto err_free;
+ }
+ /* unconditional jump with single edge */
+ ret = push_insn(t, t + insns[t].off + 1,
+ FALLTHROUGH, env);
+ if (ret == 1)
+ goto peek_stack;
+ else if (ret < 0)
+ goto err_free;
+ } else {
+ /* conditional jump with two edges */
+ ret = push_insn(t, t + 1, FALLTHROUGH, env);
+ if (ret == 1)
+ goto peek_stack;
+ else if (ret < 0)
+ goto err_free;
+
+ ret = push_insn(t, t + insns[t].off + 1, BRANCH, env);
+ if (ret == 1)
+ goto peek_stack;
+ else if (ret < 0)
+ goto err_free;
+ }
+ } else {
+ /* all other non-branch instructions with single
+ * fall-through edge
+ */
+ ret = push_insn(t, t + 1, FALLTHROUGH, env);
+ if (ret == 1)
+ goto peek_stack;
+ else if (ret < 0)
+ goto err_free;
+ }
+
+mark_explored:
+ insn_state[t] = EXPLORED;
+ if (cur_stack-- <= 0) {
+ verbose("pop stack internal bug\n");
+ ret = -EFAULT;
+ goto err_free;
+ }
+ goto peek_stack;
+
+check_state:
+ for (i = 0; i < insn_cnt; i++) {
+ if (insn_state[i] != EXPLORED) {
+ verbose("unreachable insn %d\n", i);
+ ret = -EINVAL;
+ goto err_free;
+ }
+ }
+ ret = 0; /* cfg looks good */
+
+err_free:
+ kfree(insn_state);
+ kfree(insn_stack);
+ return ret;
+}
+
+static int do_check(struct verifier_env *env)
+{
+ struct verifier_state *state = &env->cur_state;
+ struct bpf_insn *insns = env->prog->insnsi;
+ struct reg_state *regs = state->regs;
+ int insn_cnt = env->prog->len;
+ int insn_idx, prev_insn_idx = 0;
+ int insn_processed = 0;
+ bool do_print_state = false;
+
+ init_reg_state(regs);
+ insn_idx = 0;
+ for (;;) {
+ struct bpf_insn *insn;
+ u8 class;
+ int err;
+
+ if (insn_idx >= insn_cnt) {
+ verbose("invalid insn idx %d insn_cnt %d\n",
+ insn_idx, insn_cnt);
+ return -EFAULT;
+ }
+
+ insn = &insns[insn_idx];
+ class = BPF_CLASS(insn->code);
+
+ if (++insn_processed > 32768) {
+ verbose("BPF program is too large. Proccessed %d insn\n",
+ insn_processed);
+ return -E2BIG;
+ }
+
+ if (log_level && do_print_state) {
+ verbose("\nfrom %d to %d:", prev_insn_idx, insn_idx);
+ print_verifier_state(env);
+ do_print_state = false;
+ }
+
+ if (log_level) {
+ verbose("%d: ", insn_idx);
+ print_bpf_insn(insn);
+ }
+
+ if (class == BPF_ALU || class == BPF_ALU64) {
+ err = check_alu_op(regs, insn);
+ if (err)
+ return err;
+
+ } else if (class == BPF_LDX) {
+ if (BPF_MODE(insn->code) != BPF_MEM ||
+ insn->imm != 0) {
+ verbose("BPF_LDX uses reserved fields\n");
+ return -EINVAL;
+ }
+ /* check src operand */
+ err = check_reg_arg(regs, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+
+ err = check_reg_arg(regs, insn->dst_reg, DST_OP_NO_MARK);
+ if (err)
+ return err;
+
+ /* check that memory (src_reg + off) is readable,
+ * the state of dst_reg will be updated by this func
+ */
+ err = check_mem_access(env, insn->src_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_READ,
+ insn->dst_reg);
+ if (err)
+ return err;
+
+ } else if (class == BPF_STX) {
+ if (BPF_MODE(insn->code) == BPF_XADD) {
+ err = check_xadd(env, insn);
+ if (err)
+ return err;
+ insn_idx++;
+ continue;
+ }
+
+ if (BPF_MODE(insn->code) != BPF_MEM ||
+ insn->imm != 0) {
+ verbose("BPF_STX uses reserved fields\n");
+ return -EINVAL;
+ }
+ /* check src1 operand */
+ err = check_reg_arg(regs, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+ /* check src2 operand */
+ err = check_reg_arg(regs, insn->dst_reg, SRC_OP);
+ if (err)
+ return err;
+
+ /* check that memory (dst_reg + off) is writeable */
+ err = check_mem_access(env, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_WRITE,
+ insn->src_reg);
+ if (err)
+ return err;
+
+ } else if (class == BPF_ST) {
+ if (BPF_MODE(insn->code) != BPF_MEM ||
+ insn->src_reg != BPF_REG_0) {
+ verbose("BPF_ST uses reserved fields\n");
+ return -EINVAL;
+ }
+ /* check src operand */
+ err = check_reg_arg(regs, insn->dst_reg, SRC_OP);
+ if (err)
+ return err;
+
+ /* check that memory (dst_reg + off) is writeable */
+ err = check_mem_access(env, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_WRITE,
+ -1);
+ if (err)
+ return err;
+
+ } else if (class == BPF_JMP) {
+ u8 opcode = BPF_OP(insn->code);
+
+ if (opcode == BPF_CALL) {
+ if (BPF_SRC(insn->code) != BPF_K ||
+ insn->off != 0 ||
+ insn->src_reg != BPF_REG_0 ||
+ insn->dst_reg != BPF_REG_0) {
+ verbose("BPF_CALL uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ err = check_call(env, insn->imm);
+ if (err)
+ return err;
+
+ } else if (opcode == BPF_JA) {
+ if (BPF_SRC(insn->code) != BPF_K ||
+ insn->imm != 0 ||
+ insn->src_reg != BPF_REG_0 ||
+ insn->dst_reg != BPF_REG_0) {
+ verbose("BPF_JA uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ insn_idx += insn->off + 1;
+ continue;
+
+ } else if (opcode == BPF_EXIT) {
+ if (BPF_SRC(insn->code) != BPF_K ||
+ insn->imm != 0 ||
+ insn->src_reg != BPF_REG_0 ||
+ insn->dst_reg != BPF_REG_0) {
+ verbose("BPF_EXIT uses reserved fields\n");
+ return -EINVAL;
+ }
+
+ /* eBPF calling convention is such that R0 is used
+ * to return the value from eBPF program.
+ * Make sure that it's readable at this time
+ * of bpf_exit, which means that program wrote
+ * something into it earlier
+ */
+ err = check_reg_arg(regs, BPF_REG_0, SRC_OP);
+ if (err)
+ return err;
+
+ insn_idx = pop_stack(env, &prev_insn_idx);
+ if (insn_idx < 0) {
+ break;
+ } else {
+ do_print_state = true;
+ continue;
+ }
+ } else {
+ err = check_cond_jmp_op(env, insn, &insn_idx);
+ if (err)
+ return err;
+ }
+ } else if (class == BPF_LD) {
+ u8 mode = BPF_MODE(insn->code);
+
+ if (mode == BPF_ABS || mode == BPF_IND) {
+ verbose("LD_ABS is not supported yet\n");
+ return -EINVAL;
+ } else if (mode == BPF_IMM) {
+ err = check_ld_imm(env, insn);
+ if (err)
+ return err;
+
+ insn_idx++;
+ } else {
+ verbose("invalid BPF_LD mode\n");
+ return -EINVAL;
+ }
+ } else {
+ verbose("unknown insn class %d\n", class);
+ return -EINVAL;
+ }
+
+ insn_idx++;
+ }
+
+ return 0;
+}
+
+/* look for pseudo eBPF instructions that access map FDs and
+ * replace them with actual map pointers
+ */
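+/* e.g. (sketch) BPF_LD_MAP_FD(BPF_REG_1, map_fd) emits a two-insn
+ * BPF_LD_IMM64 with src_reg == BPF_PSEUDO_MAP_FD and the user fd in
+ * insn[0].imm; this pass rewrites insn[0].imm/insn[1].imm to hold the
+ * low/high 32 bits of the looked-up struct bpf_map pointer.
+ */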
+static int replace_map_fd_with_map_ptr(struct verifier_env *env)
+{
+ struct bpf_insn *insn = env->prog->insnsi;
+ int insn_cnt = env->prog->len;
+ int i, j;
+
+ for (i = 0; i < insn_cnt; i++, insn++) {
+ if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
+ struct bpf_map *map;
+ struct fd f;
+
+ if (i == insn_cnt - 1 || insn[1].code != 0 ||
+ insn[1].dst_reg != 0 || insn[1].src_reg != 0 ||
+ insn[1].off != 0) {
+ verbose("invalid bpf_ld_imm64 insn\n");
+ return -EINVAL;
+ }
+
+ if (insn->src_reg == 0)
+ /* valid generic load 64-bit imm */
+ goto next_insn;
+
+ if (insn->src_reg != BPF_PSEUDO_MAP_FD) {
+ verbose("unrecognized bpf_ld_imm64 insn\n");
+ return -EINVAL;
+ }
+
+ f = fdget(insn->imm);
+
+ map = bpf_map_get(f);
+ if (IS_ERR(map)) {
+ verbose("fd %d is not pointing to valid bpf_map\n",
+ insn->imm);
+ fdput(f);
+ return PTR_ERR(map);
+ }
+
+ /* store map pointer inside BPF_LD_IMM64 instruction */
+ insn[0].imm = (u32) (unsigned long) map;
+ insn[1].imm = ((u64) (unsigned long) map) >> 32;
+
+ /* check whether we recorded this map already */
+ for (j = 0; j < env->used_map_cnt; j++)
+ if (env->used_maps[j] == map) {
+ fdput(f);
+ goto next_insn;
+ }
+
+ if (env->used_map_cnt >= MAX_USED_MAPS) {
+ fdput(f);
+ return -E2BIG;
+ }
+
+ /* remember this map */
+ env->used_maps[env->used_map_cnt++] = map;
+
+ /* hold the map. If the program is rejected by verifier,
+ * the map will be released by release_maps() or it
+ * will be used by the valid program until it's unloaded
+ * and all maps are released in free_bpf_prog_info()
+ */
+ atomic_inc(&map->refcnt);
+
+ fdput(f);
+next_insn:
+ insn++;
+ i++;
+ }
+ }
+
+ /* now all pseudo BPF_LD_IMM64 instructions load valid
+ * 'struct bpf_map *' into a register instead of user map_fd.
+ * These pointers will be used later by verifier to validate map access.
+ */
+ return 0;
+}
+
+/* drop refcnt of maps used by the rejected program */
+static void release_maps(struct verifier_env *env)
+{
+ int i;
+
+ for (i = 0; i < env->used_map_cnt; i++)
+ bpf_map_put(env->used_maps[i]);
+}
+
+/* convert pseudo BPF_LD_IMM64 into generic BPF_LD_IMM64 */
+static void convert_pseudo_ld_imm64(struct verifier_env *env)
+{
+ struct bpf_insn *insn = env->prog->insnsi;
+ int insn_cnt = env->prog->len;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++, insn++)
+ if (insn->code == (BPF_LD | BPF_IMM | BPF_DW))
+ insn->src_reg = 0;
+}
+
+int bpf_check(struct bpf_prog *prog, union bpf_attr *attr)
+{
+ char __user *log_ubuf = NULL;
+ struct verifier_env *env;
+ int ret = -EINVAL;
+
+ if (prog->len <= 0 || prog->len > BPF_MAXINSNS)
+ return -E2BIG;
+
+ /* 'struct verifier_env' can be global, but since it's not small,
+ * allocate/free it every time bpf_check() is called
+ */
+ env = kzalloc(sizeof(struct verifier_env), GFP_KERNEL);
+ if (!env)
+ return -ENOMEM;
+
+ env->prog = prog;
+
+ /* grab the mutex to protect few globals used by verifier */
+ mutex_lock(&bpf_verifier_lock);
+
+ if (attr->log_level || attr->log_buf || attr->log_size) {
+ /* user requested verbose verifier output
+ * and supplied buffer to store the verification trace
+ */
+ log_level = attr->log_level;
+ log_ubuf = (char __user *) (unsigned long) attr->log_buf;
+ log_size = attr->log_size;
+ log_len = 0;
+
+ ret = -EINVAL;
+ /* log_* values have to be sane */
+ if (log_size < 128 || log_size > UINT_MAX >> 8 ||
+ log_level == 0 || log_ubuf == NULL)
+ goto free_env;
+
+ ret = -ENOMEM;
+ log_buf = vmalloc(log_size);
+ if (!log_buf)
+ goto free_env;
+ } else {
+ log_level = 0;
+ }
+
+ ret = replace_map_fd_with_map_ptr(env);
+ if (ret < 0)
+ goto skip_full_check;
+
+ ret = check_cfg(env);
+ if (ret < 0)
+ goto skip_full_check;
+
+ ret = do_check(env);
+
+skip_full_check:
+ while (pop_stack(env, NULL) >= 0);
+
+ if (log_level && log_len >= log_size - 1) {
+ BUG_ON(log_len >= log_size);
+ /* verifier log exceeded user supplied buffer */
+ ret = -ENOSPC;
+ /* fall through to return what was recorded */
+ }
+
+ /* copy verifier log back to user space including trailing zero */
+ if (log_level && copy_to_user(log_ubuf, log_buf, log_len + 1) != 0) {
+ ret = -EFAULT;
+ goto free_log_buf;
+ }
+
+ if (ret == 0 && env->used_map_cnt) {
+ /* if program passed verifier, update used_maps in bpf_prog_info */
+ prog->aux->used_maps = kmalloc_array(env->used_map_cnt,
+ sizeof(env->used_maps[0]),
+ GFP_KERNEL);
+
+ if (!prog->aux->used_maps) {
+ ret = -ENOMEM;
+ goto free_log_buf;
+ }
+
+ memcpy(prog->aux->used_maps, env->used_maps,
+ sizeof(env->used_maps[0]) * env->used_map_cnt);
+ prog->aux->used_map_cnt = env->used_map_cnt;
+
+ /* program is valid. Convert pseudo bpf_ld_imm64 into generic
+ * bpf_ld_imm64 instructions
+ */
+ convert_pseudo_ld_imm64(env);
+ }
+
+free_log_buf:
+ if (log_level)
+ vfree(log_buf);
+free_env:
+ if (!prog->aux->used_maps)
+ /* if we didn't copy map pointers into bpf_prog_info, release
+ * them now. Otherwise free_bpf_prog_info() will release them.
+ */
+ release_maps(env);
+ kfree(env);
+ mutex_unlock(&bpf_verifier_lock);
+ return ret;
+}
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 940aced..3a73f99 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -3985,7 +3985,6 @@
l = cgroup_pidlist_find_create(cgrp, type);
if (!l) {
- mutex_unlock(&cgrp->pidlist_mutex);
pidlist_free(array);
return -ENOMEM;
}
diff --git a/kernel/events/core.c b/kernel/events/core.c
index f9c1ed0..d640a8b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1524,6 +1524,11 @@
*/
if (ctx->is_active) {
raw_spin_unlock_irq(&ctx->lock);
+ /*
+ * Reload the task pointer, it might have been changed by
+ * a concurrent perf_event_context_sched_out().
+ */
+ task = ctx->task;
goto retry;
}
@@ -1967,6 +1972,11 @@
*/
if (ctx->is_active) {
raw_spin_unlock_irq(&ctx->lock);
+ /*
+ * Reload the task pointer, it might have been changed by
+ * a concurrent perf_event_context_sched_out().
+ */
+ task = ctx->task;
goto retry;
}
diff --git a/kernel/futex.c b/kernel/futex.c
index d3a9d94..815d7af 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2592,6 +2592,7 @@
* shared futexes. We need to compare the keys:
*/
if (match_futex(&q.key, &key2)) {
+ queue_unlock(hb);
ret = -EINVAL;
goto out_put_keys;
}
diff --git a/kernel/kcmp.c b/kernel/kcmp.c
index e30ac0f..0aa69ea 100644
--- a/kernel/kcmp.c
+++ b/kernel/kcmp.c
@@ -44,11 +44,12 @@
*/
static int kcmp_ptr(void *v1, void *v2, enum kcmp_type type)
{
- long ret;
+ long t1, t2;
- ret = kptr_obfuscate((long)v1, type) - kptr_obfuscate((long)v2, type);
+ t1 = kptr_obfuscate((long)v1, type);
+ t2 = kptr_obfuscate((long)v2, type);
- return (ret < 0) | ((ret > 0) << 1);
+ return (t1 < t2) | ((t1 > t2) << 1);
}
/* The caller must have pinned the task */
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index e04c455..1ce7706 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1665,15 +1665,15 @@
raw_spin_lock(&logbuf_lock);
logbuf_cpu = this_cpu;
- if (recursion_bug) {
+ if (unlikely(recursion_bug)) {
static const char recursion_msg[] =
"BUG: recent printk recursion!";
recursion_bug = 0;
- text_len = strlen(recursion_msg);
/* emit KERN_CRIT message */
printed_len += log_store(0, 2, LOG_PREFIX|LOG_NEWLINE, 0,
- NULL, 0, recursion_msg, text_len);
+ NULL, 0, recursion_msg,
+ strlen(recursion_msg));
}
/*
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 391d4dd..b4b5083 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -218,3 +218,6 @@
/* operate on Secure Computing state */
cond_syscall(sys_seccomp);
+
+/* access BPF programs and maps */
+cond_syscall(sys_bpf);
diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
index 4aec4a4..a7077d3 100644
--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -464,18 +464,26 @@
static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
ktime_t now)
{
+ unsigned long flags;
struct k_itimer *ptr = container_of(alarm, struct k_itimer,
it.alarm.alarmtimer);
- if (posix_timer_event(ptr, 0) != 0)
- ptr->it_overrun++;
+ enum alarmtimer_restart result = ALARMTIMER_NORESTART;
+
+ spin_lock_irqsave(&ptr->it_lock, flags);
+ if ((ptr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) {
+ if (posix_timer_event(ptr, 0) != 0)
+ ptr->it_overrun++;
+ }
/* Re-add periodic timers */
if (ptr->it.alarm.interval.tv64) {
ptr->it_overrun += alarm_forward(alarm, now,
ptr->it.alarm.interval);
- return ALARMTIMER_RESTART;
+ result = ALARMTIMER_RESTART;
}
- return ALARMTIMER_NORESTART;
+ spin_unlock_irqrestore(&ptr->it_lock, flags);
+
+ return result;
}
/**
@@ -541,18 +549,22 @@
* @new_timer: k_itimer pointer
* @cur_setting: itimerspec data to fill
*
- * Copies the itimerspec data out from the k_itimer
+ * Copies out the current itimerspec data
*/
static void alarm_timer_get(struct k_itimer *timr,
struct itimerspec *cur_setting)
{
- memset(cur_setting, 0, sizeof(struct itimerspec));
+ ktime_t relative_expiry_time =
+ alarm_expires_remaining(&(timr->it.alarm.alarmtimer));
- cur_setting->it_interval =
- ktime_to_timespec(timr->it.alarm.interval);
- cur_setting->it_value =
- ktime_to_timespec(timr->it.alarm.alarmtimer.node.expires);
- return;
+ if (ktime_to_ns(relative_expiry_time) > 0) {
+ cur_setting->it_value = ktime_to_timespec(relative_expiry_time);
+ } else {
+ cur_setting->it_value.tv_sec = 0;
+ cur_setting->it_value.tv_nsec = 0;
+ }
+
+ cur_setting->it_interval = ktime_to_timespec(timr->it.alarm.interval);
}
/**
diff --git a/kernel/time/time.c b/kernel/time/time.c
index f0294ba..a9ae20f 100644
--- a/kernel/time/time.c
+++ b/kernel/time/time.c
@@ -559,17 +559,20 @@
* that a remainder subtract here would not do the right thing as the
* resolution values don't fall on second boundaries. I.e. the line:
* nsec -= nsec % TICK_NSEC; is NOT a correct resolution rounding.
+ * Note that due to the small error in the multiplier here, this
+ * rounding is incorrect for sufficiently large values of tv_nsec, but
+ * well formed timespecs should have tv_nsec < NSEC_PER_SEC, so we're
+ * OK.
*
* Rather, we just shift the bits off the right.
*
* The >> (NSEC_JIFFIE_SC - SEC_JIFFIE_SC) converts the scaled nsec
* value to a scaled second value.
*/
-unsigned long
-timespec_to_jiffies(const struct timespec *value)
+static unsigned long
+__timespec_to_jiffies(unsigned long sec, long nsec)
{
- unsigned long sec = value->tv_sec;
- long nsec = value->tv_nsec + TICK_NSEC - 1;
+ nsec = nsec + TICK_NSEC - 1;
if (sec >= MAX_SEC_IN_JIFFIES){
sec = MAX_SEC_IN_JIFFIES;
@@ -580,6 +583,13 @@
(NSEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
}
+
+unsigned long
+timespec_to_jiffies(const struct timespec *value)
+{
+ return __timespec_to_jiffies(value->tv_sec, value->tv_nsec);
+}
+
EXPORT_SYMBOL(timespec_to_jiffies);
void
@@ -596,31 +606,27 @@
}
EXPORT_SYMBOL(jiffies_to_timespec);
-/* Same for "timeval"
+/*
+ * We could use a similar algorithm to timespec_to_jiffies (with a
+ * different multiplier for usec instead of nsec). But this has a
+ * problem with rounding: we can't exactly add TICK_NSEC - 1 to the
+ * usec value, since it's not necessarily integral.
*
- * Well, almost. The problem here is that the real system resolution is
- * in nanoseconds and the value being converted is in micro seconds.
- * Also for some machines (those that use HZ = 1024, in-particular),
- * there is a LARGE error in the tick size in microseconds.
-
- * The solution we use is to do the rounding AFTER we convert the
- * microsecond part. Thus the USEC_ROUND, the bits to be shifted off.
- * Instruction wise, this should cost only an additional add with carry
- * instruction above the way it was done above.
+ * We could instead round in the intermediate scaled representation
+ * (i.e. in units of 1/2^(large scale) jiffies) but that's also
+ * perilous: the scaling introduces a small positive error, which
+ * combined with a division-rounding-upward (i.e. adding 2^(scale) - 1
+ * units to the intermediate before shifting) leads to accidental
+ * overflow and overestimates.
+ *
+ * At the cost of one additional multiplication by a constant, just
+ * use the timespec implementation.
*/
unsigned long
timeval_to_jiffies(const struct timeval *value)
{
- unsigned long sec = value->tv_sec;
- long usec = value->tv_usec;
-
- if (sec >= MAX_SEC_IN_JIFFIES){
- sec = MAX_SEC_IN_JIFFIES;
- usec = 0;
- }
- return (((u64)sec * SEC_CONVERSION) +
- (((u64)usec * USEC_CONVERSION + USEC_ROUND) >>
- (USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
+ return __timespec_to_jiffies(value->tv_sec,
+ value->tv_usec * NSEC_PER_USEC);
}
EXPORT_SYMBOL(timeval_to_jiffies);
diff --git a/lib/Kconfig b/lib/Kconfig
index a5ce0c7..54cf309 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -51,6 +51,9 @@
config ARCH_USE_CMPXCHG_LOCKREF
bool
+config ARCH_HAS_FAST_MULTIPLIER
+ bool
+
config CRC_CCITT
tristate "CRC-CCITT functions"
help
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a285900..3ac43f3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1672,7 +1672,8 @@
against the BPF interpreter or BPF JIT compiler depending on the
current setting. This is in particular useful for BPF JIT compiler
development, but also to run regression tests against changes in
- the interpreter code.
+ the interpreter code. It also enables test stubs for the eBPF maps and
+ verifier used by the user space verifier testsuite.
If unsure, say N.
diff --git a/lib/assoc_array.c b/lib/assoc_array.c
index ae146f0..2404d03 100644
--- a/lib/assoc_array.c
+++ b/lib/assoc_array.c
@@ -1723,11 +1723,13 @@
shortcut = assoc_array_ptr_to_shortcut(ptr);
slot = shortcut->parent_slot;
cursor = shortcut->back_pointer;
+ if (!cursor)
+ goto gc_complete;
} else {
slot = node->parent_slot;
cursor = ptr;
}
- BUG_ON(!ptr);
+ BUG_ON(!cursor);
node = assoc_array_ptr_to_node(cursor);
slot++;
goto continue_node;
diff --git a/lib/hweight.c b/lib/hweight.c
index b7d81ba..9a5c1f2 100644
--- a/lib/hweight.c
+++ b/lib/hweight.c
@@ -11,7 +11,7 @@
unsigned int __sw_hweight32(unsigned int w)
{
-#ifdef ARCH_HAS_FAST_MULTIPLIER
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
w -= (w >> 1) & 0x55555555;
w = (w & 0x33333333) + ((w >> 2) & 0x33333333);
w = (w + (w >> 4)) & 0x0f0f0f0f;
@@ -49,7 +49,7 @@
return __sw_hweight32((unsigned int)(w >> 32)) +
__sw_hweight32((unsigned int)w);
#elif BITS_PER_LONG == 64
-#ifdef ARCH_HAS_FAST_MULTIPLIER
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
w -= (w >> 1) & 0x5555555555555555ul;
w = (w & 0x3333333333333333ul) + ((w >> 2) & 0x3333333333333333ul);
w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0ful;
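For reference, the fast-multiplier path that these CONFIG_ guards now actually enable sums the per-byte bit counts with a single multiply. A self-contained sketch of the 32-bit variant (portable C; the kernel only takes this branch on architectures that opt in via the new Kconfig symbol):

    #include <stdio.h>

    static unsigned int sw_hweight32(unsigned int w)
    {
        w -= (w >> 1) & 0x55555555;                      /* pairs   */
        w  = (w & 0x33333333) + ((w >> 2) & 0x33333333); /* nibbles */
        w  = (w + (w >> 4)) & 0x0f0f0f0f;                /* bytes   */
        /* One multiply accumulates all byte counts into the top
         * byte; the shift extracts the total population count. */
        return (w * 0x01010101) >> 24;
    }

    int main(void)
    {
        printf("%u\n", sw_hweight32(0xf0f0f0f0)); /* -> 16 */
        return 0;
    }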
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index fe5a334..a89cf09 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -184,3 +184,19 @@
call_rcu_sched(&ref->rcu, percpu_ref_kill_rcu);
}
EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
+
+/*
+ * XXX: Temporary kludge to work around SCSI blk-mq stall. Used only by
+ * block/blk-mq.c::blk_mq_freeze_queue(). Will be removed during the
+ * v3.18 devel cycle. Do not use anywhere else.
+ */
+void __percpu_ref_kill_expedited(struct percpu_ref *ref)
+{
+ WARN_ONCE(ref->pcpu_count_ptr & PCPU_REF_DEAD,
+ "percpu_ref_kill() called more than once on %pf!",
+ ref->release);
+
+ ref->pcpu_count_ptr |= PCPU_REF_DEAD;
+ synchronize_sched_expedited();
+ percpu_ref_kill_rcu(&ref->rcu);
+}
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 8dfec3f..25f14c5 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -23,7 +23,6 @@
#include <linux/hash.h>
#include <linux/random.h>
#include <linux/rhashtable.h>
-#include <linux/log2.h>
#define HASH_DEFAULT_SIZE 64UL
#define HASH_MIN_SIZE 4UL
diff --git a/lib/string.c b/lib/string.c
index 992bf30..f3c6ff5 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -807,9 +807,9 @@
return check_bytes8(start, value, bytes);
value64 = value;
-#if defined(ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
+#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
value64 *= 0x0101010101010101;
-#elif defined(ARCH_HAS_FAST_MULTIPLIER)
+#elif defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER)
value64 *= 0x01010101;
value64 |= value64 << 32;
#else
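The same Kconfig symbol gates the byte-replication trick in string.c: multiplying an 8-bit value by 0x0101010101010101 copies it into every byte lane of the word. A tiny demonstration, assuming nothing beyond standard C:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t value = 0xab;
        /* Each byte of the multiplier contributes one shifted copy
         * of value, so the product is the byte repeated 8 times. */
        uint64_t value64 = (uint64_t)value * 0x0101010101010101ULL;
        printf("%016llx\n", (unsigned long long)value64); /* abab... */
        return 0;
    }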
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 4138908..23e070b 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -1738,7 +1738,7 @@
{
"load 64-bit immediate",
.u.insns_int = {
- BPF_LD_IMM64(R1, 0x567800001234L),
+ BPF_LD_IMM64(R1, 0x567800001234LL),
BPF_MOV64_REG(R2, R1),
BPF_MOV64_REG(R3, R2),
BPF_ALU64_IMM(BPF_RSH, R2, 32),
@@ -1894,7 +1894,7 @@
int runs, u64 *duration)
{
u64 start, finish;
- int ret, i;
+ int ret = 0, i;
start = ktime_to_us(ktime_get());
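The LL suffix matters for 32-bit builds, where an L-suffixed constant wider than 32 bits draws a compiler warning about the constant being too large for long. A minimal illustration:

    #include <stdio.h>

    int main(void)
    {
        /* On an ILP32 target the L-suffixed form warns; LL states
         * the 64-bit intent explicitly and is portable. */
        long long v = 0x567800001234LL;
        printf("%llx\n", v);  /* -> 567800001234 */
        return 0;
    }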
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 306baa5..ba8019b 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -176,7 +176,7 @@
if (list_empty(&dev->dma_pools) &&
device_create_file(dev, &dev_attr_pools)) {
kfree(retval);
- return NULL;
+ retval = NULL;
} else
list_add(&retval->pools, &dev->dma_pools);
mutex_unlock(&pools_lock);
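The dmapool change above is a locking fix: returning from inside the critical section skipped the mutex_unlock() that follows, so the error path leaked pools_lock. A condensed userspace analogue of the corrected pattern (pthreads standing in for the kernel mutex):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t pools_lock = PTHREAD_MUTEX_INITIALIZER;

    void *create_pool(int fail)
    {
        void *retval = malloc(16);

        pthread_mutex_lock(&pools_lock);
        if (fail) {
            free(retval);
            retval = NULL;   /* fall through: do NOT return here,
                              * or the mutex stays locked */
        }
        pthread_mutex_unlock(&pools_lock);
        return retval;
    }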
diff --git a/mm/memblock.c b/mm/memblock.c
index 70fad0c..6ecb0d9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -816,6 +816,10 @@
if (nid != NUMA_NO_NODE && nid != m_nid)
continue;
+ /* skip hotpluggable memory regions if needed */
+ if (movable_node_is_enabled() && memblock_is_hotpluggable(m))
+ continue;
+
if (!type_b) {
if (out_start)
*out_start = m_start;
diff --git a/mm/memory.c b/mm/memory.c
index adeac30..d17f1bc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,8 @@
unsigned long zero_pfn __read_mostly;
unsigned long highest_memmap_pfn __read_mostly;
+EXPORT_SYMBOL(zero_pfn);
+
/*
* CONFIG_MMU architectures set up ZERO_PAGE in their paging_init()
*/
diff --git a/mm/mmap.c b/mm/mmap.c
index c1f2ea4..c0a3637 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -369,20 +369,20 @@
struct vm_area_struct *vma;
vma = rb_entry(nd, struct vm_area_struct, vm_rb);
if (vma->vm_start < prev) {
- pr_info("vm_start %lx prev %lx\n", vma->vm_start, prev);
+ pr_emerg("vm_start %lx prev %lx\n", vma->vm_start, prev);
bug = 1;
}
if (vma->vm_start < pend) {
- pr_info("vm_start %lx pend %lx\n", vma->vm_start, pend);
+ pr_emerg("vm_start %lx pend %lx\n", vma->vm_start, pend);
bug = 1;
}
if (vma->vm_start > vma->vm_end) {
- pr_info("vm_end %lx < vm_start %lx\n",
+ pr_emerg("vm_end %lx < vm_start %lx\n",
vma->vm_end, vma->vm_start);
bug = 1;
}
if (vma->rb_subtree_gap != vma_compute_subtree_gap(vma)) {
- pr_info("free gap %lx, correct %lx\n",
+ pr_emerg("free gap %lx, correct %lx\n",
vma->rb_subtree_gap,
vma_compute_subtree_gap(vma));
bug = 1;
@@ -396,7 +396,7 @@
for (nd = pn; nd; nd = rb_prev(nd))
j++;
if (i != j) {
- pr_info("backwards %d, forwards %d\n", j, i);
+ pr_emerg("backwards %d, forwards %d\n", j, i);
bug = 1;
}
return bug ? -1 : i;
@@ -431,17 +431,17 @@
i++;
}
if (i != mm->map_count) {
- pr_info("map_count %d vm_next %d\n", mm->map_count, i);
+ pr_emerg("map_count %d vm_next %d\n", mm->map_count, i);
bug = 1;
}
if (highest_address != mm->highest_vm_end) {
- pr_info("mm->highest_vm_end %lx, found %lx\n",
+ pr_emerg("mm->highest_vm_end %lx, found %lx\n",
mm->highest_vm_end, highest_address);
bug = 1;
}
i = browse_rb(&mm->mm_rb);
if (i != mm->map_count) {
- pr_info("map_count %d rb %d\n", mm->map_count, i);
+ pr_emerg("map_count %d rb %d\n", mm->map_count, i);
bug = 1;
}
BUG_ON(bug);
diff --git a/mm/nobootmem.c b/mm/nobootmem.c
index 7ed5860..7c7ab32 100644
--- a/mm/nobootmem.c
+++ b/mm/nobootmem.c
@@ -119,6 +119,8 @@
phys_addr_t start, end;
u64 i;
+ memblock_clear_hotplug(0, -1);
+
for_each_free_mem_range(i, NUMA_NO_NODE, &start, &end, NULL)
count += __free_memory_core(start, end);
diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
index 35ebe79..0920cb6 100644
--- a/net/bluetooth/6lowpan.c
+++ b/net/bluetooth/6lowpan.c
@@ -39,6 +39,7 @@
struct skb_cb {
struct in6_addr addr;
+ struct in6_addr gw;
struct l2cap_chan *chan;
int status;
};
@@ -158,6 +159,54 @@
return NULL;
}
+static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_dev *dev,
+ struct in6_addr *daddr,
+ struct sk_buff *skb)
+{
+ struct lowpan_peer *peer, *tmp;
+ struct in6_addr *nexthop;
+ struct rt6_info *rt = (struct rt6_info *)skb_dst(skb);
+ int count = atomic_read(&dev->peer_count);
+
+ BT_DBG("peers %d addr %pI6c rt %p", count, daddr, rt);
+
+ /* If we have multiple 6lowpan peers, then check where we should
+ * send the packet. If only one peer exists, then we can send the
+ * packet right away.
+ */
+ if (count == 1)
+ return list_first_entry(&dev->peers, struct lowpan_peer,
+ list);
+
+ if (!rt) {
+ nexthop = &lowpan_cb(skb)->gw;
+
+ if (ipv6_addr_any(nexthop))
+ return NULL;
+ } else {
+ nexthop = rt6_nexthop(rt);
+
+ /* We need to remember the address because it is needed
+ * by bt_xmit() when sending the packet. In bt_xmit(), the
+ * destination routing info is not set.
+ */
+ memcpy(&lowpan_cb(skb)->gw, nexthop, sizeof(struct in6_addr));
+ }
+
+ BT_DBG("gw %pI6c", nexthop);
+
+ list_for_each_entry_safe(peer, tmp, &dev->peers, list) {
+ BT_DBG("dst addr %pMR dst type %d ip %pI6c",
+ &peer->chan->dst, peer->chan->dst_type,
+ &peer->peer_addr);
+
+ if (!ipv6_addr_cmp(&peer->peer_addr, nexthop))
+ return peer;
+ }
+
+ return NULL;
+}
+
static struct lowpan_peer *lookup_peer(struct l2cap_conn *conn)
{
struct lowpan_dev *entry, *tmp;
@@ -415,8 +464,18 @@
read_unlock_irqrestore(&devices_lock, flags);
if (!peer) {
- BT_DBG("no such peer %pMR found", &addr);
- return -ENOENT;
+ /* The packet might be sent to the 6lowpan interface
+ * because of routing (either via the default route
+ * or a user-set route), so look up the peer by
+ * the destination address.
+ */
+ read_lock_irqsave(&devices_lock, flags);
+ peer = peer_lookup_dst(dev, &hdr->daddr, skb);
+ read_unlock_irqrestore(&devices_lock, flags);
+ if (!peer) {
+ BT_DBG("no such peer %pMR found", &addr);
+ return -ENOENT;
+ }
}
daddr = peer->eui64_addr;
@@ -520,6 +579,8 @@
read_lock_irqsave(&devices_lock, flags);
peer = peer_lookup_ba(dev, &addr, addr_type);
+ if (!peer)
+ peer = peer_lookup_dst(dev, &lowpan_cb(skb)->addr, skb);
read_unlock_irqrestore(&devices_lock, flags);
BT_DBG("xmit %s to %pMR type %d IP %pI6c peer %p",
@@ -671,6 +732,14 @@
return chan;
}
+static void set_ip_addr_bits(u8 addr_type, u8 *addr)
+{
+ if (addr_type == BDADDR_LE_PUBLIC)
+ *addr |= 0x02;
+ else
+ *addr &= ~0x02;
+}
+
static struct l2cap_chan *add_peer_chan(struct l2cap_chan *chan,
struct lowpan_dev *dev)
{
@@ -693,6 +762,11 @@
memcpy(&peer->eui64_addr, (u8 *)&peer->peer_addr.s6_addr + 8,
EUI64_ADDR_LEN);
+ /* The IPv6 address needs to have the U/L bit set properly, so toggle
+ * it back here.
+ */
+ set_ip_addr_bits(chan->dst_type, (u8 *)&peer->peer_addr.s6_addr + 8);
+
write_lock_irqsave(&devices_lock, flags);
INIT_LIST_HEAD(&peer->list);
peer_add(dev, peer);
@@ -890,7 +964,7 @@
static long chan_get_sndtimeo_cb(struct l2cap_chan *chan)
{
- return msecs_to_jiffies(1000);
+ return L2CAP_CONN_TIMEOUT;
}
static const struct l2cap_ops bt_6lowpan_chan_ops = {
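To summarize the new 6lowpan send path: when routing info is attached we take the route's next hop and cache it in the skb control block; when it is absent (as in bt_xmit()) we fall back to that cached gateway. A simplified userspace sketch of the selection logic, with hypothetical types standing in for rt6_info and the skb cb:

    #include <stdio.h>
    #include <string.h>

    struct in6 { unsigned char s6[16]; };
    /* Per-packet control block remembers the gateway so a later
     * transmit path without routing info can still pick a peer. */
    struct cb { struct in6 gw; };

    static const struct in6 *next_hop(const struct in6 *rt_nexthop,
                                      struct cb *cb)
    {
        static const struct in6 any; /* all-zero == unspecified */

        if (!rt_nexthop) {
            /* No route attached: fall back to the stored gateway. */
            if (!memcmp(&cb->gw, &any, sizeof(any)))
                return NULL;
            return &cb->gw;
        }
        /* Route present: remember its next hop for later xmit. */
        memcpy(&cb->gw, rt_nexthop, sizeof(*rt_nexthop));
        return rt_nexthop;
    }

    int main(void)
    {
        struct cb cb = { { { 0 } } };
        struct in6 gw = { { 0xfe, 0x80 } };

        next_hop(&gw, &cb);                     /* route known */
        printf("%02x%02x\n", next_hop(NULL, &cb)->s6[0],
               next_hop(NULL, &cb)->s6[1]);     /* -> fe80 */
        return 0;
    }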
diff --git a/net/bluetooth/amp.c b/net/bluetooth/amp.c
index 016cdb6..2640d78 100644
--- a/net/bluetooth/amp.c
+++ b/net/bluetooth/amp.c
@@ -149,15 +149,14 @@
if (ret) {
BT_DBG("crypto_ahash_setkey failed: err %d", ret);
} else {
- struct {
- struct shash_desc shash;
- char ctx[crypto_shash_descsize(tfm)];
- } desc;
+ char desc[sizeof(struct shash_desc) +
+ crypto_shash_descsize(tfm)] CRYPTO_MINALIGN_ATTR;
+ struct shash_desc *shash = (struct shash_desc *)desc;
- desc.shash.tfm = tfm;
- desc.shash.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+ shash->tfm = tfm;
+ shash->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
- ret = crypto_shash_digest(&desc.shash, plaintext, psize,
+ ret = crypto_shash_digest(shash, plaintext, psize,
output);
}
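The amp.c change swaps a struct containing a variable-length array member (a GCC extension) for a plain char VLA plus a cast; CRYPTO_MINALIGN_ATTR keeps the cast safe. A userspace analogue of the pattern, using a hypothetical header type and a plain aligned attribute:

    #include <stdio.h>
    #include <stddef.h>

    struct desc_hdr { void *tfm; unsigned int flags; };

    static void use_desc(size_t ctx_size)
    {
        /* char VLA sized for header plus context, then cast; the
         * kernel uses CRYPTO_MINALIGN_ATTR where this sketch uses
         * a fixed 8-byte alignment. */
        char buf[sizeof(struct desc_hdr) + ctx_size]
                __attribute__((aligned(8)));
        struct desc_hdr *hdr = (struct desc_hdr *)buf;

        hdr->tfm = NULL;
        hdr->flags = 0;
        printf("desc of %zu bytes ready\n", sizeof(buf));
    }

    int main(void) { use_desc(32); return 0; }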
diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
index faff624..e3d7ae9 100644
--- a/net/bluetooth/hci_conn.c
+++ b/net/bluetooth/hci_conn.c
@@ -122,17 +122,30 @@
hci_send_cmd(conn->hdev, HCI_OP_REJECT_SYNC_CONN_REQ, sizeof(cp), &cp);
}
-void hci_disconnect(struct hci_conn *conn, __u8 reason)
+int hci_disconnect(struct hci_conn *conn, __u8 reason)
{
struct hci_cp_disconnect cp;
BT_DBG("hcon %p", conn);
+ /* When we are master of an established connection and it enters
+ * the disconnect timeout, then go ahead and try to read the
+ * current clock offset. Processing of the result is done
+ * within the event handling and hci_clock_offset_evt function.
+ */
+ if (conn->type == ACL_LINK && conn->role == HCI_ROLE_MASTER) {
+ struct hci_dev *hdev = conn->hdev;
+ struct hci_cp_read_clock_offset cp;
+
+ cp.handle = cpu_to_le16(conn->handle);
+ hci_send_cmd(hdev, HCI_OP_READ_CLOCK_OFFSET, sizeof(cp), &cp);
+ }
+
conn->state = BT_DISCONN;
cp.handle = cpu_to_le16(conn->handle);
cp.reason = reason;
- hci_send_cmd(conn->hdev, HCI_OP_DISCONNECT, sizeof(cp), &cp);
+ return hci_send_cmd(conn->hdev, HCI_OP_DISCONNECT, sizeof(cp), &cp);
}
static void hci_amp_disconn(struct hci_conn *conn)
@@ -325,25 +338,6 @@
hci_amp_disconn(conn);
} else {
__u8 reason = hci_proto_disconn_ind(conn);
-
- /* When we are master of an established connection
- * and it enters the disconnect timeout, then go
- * ahead and try to read the current clock offset.
- *
- * Processing of the result is done within the
- * event handling and hci_clock_offset_evt function.
- */
- if (conn->type == ACL_LINK &&
- conn->role == HCI_ROLE_MASTER) {
- struct hci_dev *hdev = conn->hdev;
- struct hci_cp_read_clock_offset cp;
-
- cp.handle = cpu_to_le16(conn->handle);
-
- hci_send_cmd(hdev, HCI_OP_READ_CLOCK_OFFSET,
- sizeof(cp), &cp);
- }
-
hci_disconnect(conn, reason);
}
break;
@@ -595,6 +589,7 @@
conn->dst_type);
if (params && params->conn) {
hci_conn_drop(params->conn);
+ hci_conn_put(params->conn);
params->conn = NULL;
}
@@ -1290,11 +1285,16 @@
BT_DBG("%s hcon %p", hdev->name, conn);
+ if (test_bit(HCI_CONN_DROP, &conn->flags)) {
+ BT_DBG("Refusing to create new hci_chan");
+ return NULL;
+ }
+
chan = kzalloc(sizeof(*chan), GFP_KERNEL);
if (!chan)
return NULL;
- chan->conn = conn;
+ chan->conn = hci_conn_get(conn);
skb_queue_head_init(&chan->data_q);
chan->state = BT_CONNECTED;
@@ -1314,7 +1314,10 @@
synchronize_rcu();
- hci_conn_drop(conn);
+ /* Prevent new hci_chans from being created for this hci_conn */
+ set_bit(HCI_CONN_DROP, &conn->flags);
+
+ hci_conn_put(conn);
skb_queue_purge(&chan->data_q);
kfree(chan);
diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
index 9b71459..067526d 100644
--- a/net/bluetooth/hci_core.c
+++ b/net/bluetooth/hci_core.c
@@ -2541,6 +2541,7 @@
list_for_each_entry(p, &hdev->le_conn_params, list) {
if (p->conn) {
hci_conn_drop(p->conn);
+ hci_conn_put(p->conn);
p->conn = NULL;
}
list_del_init(&p->action);
@@ -3725,6 +3726,18 @@
return 0;
}
+static void hci_conn_params_free(struct hci_conn_params *params)
+{
+ if (params->conn) {
+ hci_conn_drop(params->conn);
+ hci_conn_put(params->conn);
+ }
+
+ list_del(¶ms->action);
+ list_del(¶ms->list);
+ kfree(params);
+}
+
/* This function requires the caller holds hdev->lock */
void hci_conn_params_del(struct hci_dev *hdev, bdaddr_t *addr, u8 addr_type)
{
@@ -3734,12 +3747,7 @@
if (!params)
return;
- if (params->conn)
- hci_conn_drop(params->conn);
-
- list_del(¶ms->action);
- list_del(¶ms->list);
- kfree(params);
+ hci_conn_params_free(params);
hci_update_background_scan(hdev);
@@ -3766,13 +3774,8 @@
{
struct hci_conn_params *params, *tmp;
- list_for_each_entry_safe(params, tmp, &hdev->le_conn_params, list) {
- if (params->conn)
- hci_conn_drop(params->conn);
- list_del(¶ms->action);
- list_del(¶ms->list);
- kfree(params);
- }
+ list_for_each_entry_safe(params, tmp, &hdev->le_conn_params, list)
+ hci_conn_params_free(params);
hci_update_background_scan(hdev);
@@ -3869,6 +3872,7 @@
if (test_bit(HCI_LE_ADV, &hdev->dev_flags) ||
hci_conn_hash_lookup_state(hdev, LE_LINK, BT_CONNECT)) {
BT_DBG("Deferring random address update");
+ set_bit(HCI_RPA_EXPIRED, &hdev->dev_flags);
return;
}
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
index 3a99f30..8b0a2a6 100644
--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -2320,8 +2320,7 @@
conn->sec_level = conn->pending_sec_level;
}
} else {
- mgmt_auth_failed(hdev, &conn->dst, conn->type, conn->dst_type,
- ev->status);
+ mgmt_auth_failed(conn, ev->status);
}
clear_bit(HCI_CONN_AUTH_PEND, &conn->flags);
@@ -2439,6 +2438,12 @@
}
}
+ /* We should disregard the current RPA and generate a new one
+ * whenever the encryption procedure fails.
+ */
+ if (ev->status && conn->type == LE_LINK)
+ set_bit(HCI_RPA_EXPIRED, &hdev->dev_flags);
+
clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);
if (ev->status && conn->state == BT_CONNECTED) {
@@ -3900,8 +3905,7 @@
* event gets always produced as initiator and is also mapped to
* the mgmt_auth_failed event */
if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) && ev->status)
- mgmt_auth_failed(hdev, &conn->dst, conn->type, conn->dst_type,
- ev->status);
+ mgmt_auth_failed(conn, ev->status);
hci_conn_drop(conn);
@@ -4193,16 +4197,16 @@
conn->dst_type = irk->addr_type;
}
- if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
- addr_type = BDADDR_LE_PUBLIC;
- else
- addr_type = BDADDR_LE_RANDOM;
-
if (ev->status) {
hci_le_conn_failed(conn, ev->status);
goto unlock;
}
+ if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
+ addr_type = BDADDR_LE_PUBLIC;
+ else
+ addr_type = BDADDR_LE_RANDOM;
+
/* Drop the connection if the device is blocked */
if (hci_bdaddr_list_lookup(&hdev->blacklist, &conn->dst, addr_type)) {
hci_conn_drop(conn);
@@ -4225,11 +4229,13 @@
hci_proto_connect_cfm(conn, ev->status);
- params = hci_conn_params_lookup(hdev, &conn->dst, conn->dst_type);
+ params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
+ conn->dst_type);
if (params) {
list_del_init(¶ms->action);
if (params->conn) {
hci_conn_drop(params->conn);
+ hci_conn_put(params->conn);
params->conn = NULL;
}
}
@@ -4321,7 +4327,7 @@
* the parameters get removed and keep the reference
* count consistent once the connection is established.
*/
- params->conn = conn;
+ params->conn = hci_conn_get(conn);
return;
}
@@ -4506,10 +4512,7 @@
memcpy(cp.ltk, ltk->val, sizeof(ltk->val));
cp.handle = cpu_to_le16(conn->handle);
- if (ltk->authenticated)
- conn->pending_sec_level = BT_SECURITY_HIGH;
- else
- conn->pending_sec_level = BT_SECURITY_MEDIUM;
+ conn->pending_sec_level = smp_ltk_sec_level(ltk);
conn->enc_key_size = ltk->enc_size;
diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
index 6c7ecf1..1b7d605 100644
--- a/net/bluetooth/hidp/core.c
+++ b/net/bluetooth/hidp/core.c
@@ -915,7 +915,7 @@
/* connection management */
bacpy(&session->bdaddr, bdaddr);
- session->conn = conn;
+ session->conn = l2cap_conn_get(conn);
session->user.probe = hidp_session_probe;
session->user.remove = hidp_session_remove;
session->ctrl_sock = ctrl_sock;
@@ -941,13 +941,13 @@
if (ret)
goto err_free;
- l2cap_conn_get(session->conn);
get_file(session->intr_sock->file);
get_file(session->ctrl_sock->file);
*out = session;
return 0;
err_free:
+ l2cap_conn_put(session->conn);
kfree(session);
return ret;
}
@@ -1327,10 +1327,8 @@
conn = NULL;
l2cap_chan_lock(chan);
- if (chan->conn) {
- l2cap_conn_get(chan->conn);
- conn = chan->conn;
- }
+ if (chan->conn)
+ conn = l2cap_conn_get(chan->conn);
l2cap_chan_unlock(chan);
if (!conn)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 4a90438..8d53fc5 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -546,7 +546,10 @@
l2cap_chan_hold(chan);
- hci_conn_hold(conn->hcon);
+ /* Only keep a reference for fixed channels if they requested it */
+ if (chan->chan_type != L2CAP_CHAN_FIXED ||
+ test_bit(FLAG_HOLD_HCI_CONN, &chan->flags))
+ hci_conn_hold(conn->hcon);
list_add(&chan->list, &conn->chan_l);
}
@@ -577,7 +580,12 @@
chan->conn = NULL;
- if (chan->scid != L2CAP_CID_A2MP)
+ /* Reference was only held for non-fixed channels or
+ * fixed channels that explicitly requested it using the
+ * FLAG_HOLD_HCI_CONN flag.
+ */
+ if (chan->chan_type != L2CAP_CHAN_FIXED ||
+ test_bit(FLAG_HOLD_HCI_CONN, &chan->flags))
hci_conn_drop(conn->hcon);
if (mgr && mgr->bredr_chan == chan)
@@ -623,9 +631,11 @@
}
EXPORT_SYMBOL_GPL(l2cap_chan_del);
-void l2cap_conn_update_id_addr(struct hci_conn *hcon)
+static void l2cap_conn_update_id_addr(struct work_struct *work)
{
- struct l2cap_conn *conn = hcon->l2cap_data;
+ struct l2cap_conn *conn = container_of(work, struct l2cap_conn,
+ id_addr_update_work);
+ struct hci_conn *hcon = conn->hcon;
struct l2cap_chan *chan;
mutex_lock(&conn->chan_lock);
@@ -1273,6 +1283,24 @@
}
}
+static void l2cap_request_info(struct l2cap_conn *conn)
+{
+ struct l2cap_info_req req;
+
+ if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT)
+ return;
+
+ req.type = cpu_to_le16(L2CAP_IT_FEAT_MASK);
+
+ conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_SENT;
+ conn->info_ident = l2cap_get_ident(conn);
+
+ schedule_delayed_work(&conn->info_timer, L2CAP_INFO_TIMEOUT);
+
+ l2cap_send_cmd(conn, conn->info_ident, L2CAP_INFO_REQ,
+ sizeof(req), &req);
+}
+
static void l2cap_do_start(struct l2cap_chan *chan)
{
struct l2cap_conn *conn = chan->conn;
@@ -1282,26 +1310,17 @@
return;
}
- if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT) {
- if (!(conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_DONE))
- return;
-
- if (l2cap_chan_check_security(chan, true) &&
- __l2cap_no_conn_pending(chan)) {
- l2cap_start_connection(chan);
- }
- } else {
- struct l2cap_info_req req;
- req.type = cpu_to_le16(L2CAP_IT_FEAT_MASK);
-
- conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_SENT;
- conn->info_ident = l2cap_get_ident(conn);
-
- schedule_delayed_work(&conn->info_timer, L2CAP_INFO_TIMEOUT);
-
- l2cap_send_cmd(conn, conn->info_ident, L2CAP_INFO_REQ,
- sizeof(req), &req);
+ if (!(conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT)) {
+ l2cap_request_info(conn);
+ return;
}
+
+ if (!(conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_DONE))
+ return;
+
+ if (l2cap_chan_check_security(chan, true) &&
+ __l2cap_no_conn_pending(chan))
+ l2cap_start_connection(chan);
}
static inline int l2cap_mode_supported(__u8 mode, __u32 feat_mask)
@@ -1360,6 +1379,7 @@
l2cap_chan_lock(chan);
if (chan->chan_type != L2CAP_CHAN_CONN_ORIENTED) {
+ l2cap_chan_ready(chan);
l2cap_chan_unlock(chan);
continue;
}
@@ -1464,6 +1484,9 @@
BT_DBG("conn %p", conn);
+ if (hcon->type == ACL_LINK)
+ l2cap_request_info(conn);
+
mutex_lock(&conn->chan_lock);
list_for_each_entry(chan, &conn->chan_l, list) {
@@ -1478,8 +1501,8 @@
if (hcon->type == LE_LINK) {
l2cap_le_start(chan);
} else if (chan->chan_type != L2CAP_CHAN_CONN_ORIENTED) {
- l2cap_chan_ready(chan);
-
+ if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_DONE)
+ l2cap_chan_ready(chan);
} else if (chan->state == BT_CONNECT) {
l2cap_do_start(chan);
}
@@ -1627,11 +1650,14 @@
if (work_pending(&conn->pending_rx_work))
cancel_work_sync(&conn->pending_rx_work);
- if (work_pending(&conn->disconn_work))
- cancel_work_sync(&conn->disconn_work);
+ if (work_pending(&conn->id_addr_update_work))
+ cancel_work_sync(&conn->id_addr_update_work);
l2cap_unregister_all_users(conn);
+ /* Force the connection to be immediately dropped */
+ hcon->disc_timeout = 0;
+
mutex_lock(&conn->chan_lock);
/* Kill channels */
@@ -1659,26 +1685,6 @@
l2cap_conn_put(conn);
}
-static void disconn_work(struct work_struct *work)
-{
- struct l2cap_conn *conn = container_of(work, struct l2cap_conn,
- disconn_work);
-
- BT_DBG("conn %p", conn);
-
- l2cap_conn_del(conn->hcon, conn->disconn_err);
-}
-
-void l2cap_conn_shutdown(struct l2cap_conn *conn, int err)
-{
- struct hci_dev *hdev = conn->hcon->hdev;
-
- BT_DBG("conn %p err %d", conn, err);
-
- conn->disconn_err = err;
- queue_work(hdev->workqueue, &conn->disconn_work);
-}
-
static void l2cap_conn_free(struct kref *ref)
{
struct l2cap_conn *conn = container_of(ref, struct l2cap_conn, ref);
@@ -1687,9 +1693,10 @@
kfree(conn);
}
-void l2cap_conn_get(struct l2cap_conn *conn)
+struct l2cap_conn *l2cap_conn_get(struct l2cap_conn *conn)
{
kref_get(&conn->ref);
+ return conn;
}
EXPORT_SYMBOL(l2cap_conn_get);
@@ -2358,12 +2365,8 @@
BT_DBG("chan %p, msg %p, len %zu", chan, msg, len);
- pdu_len = chan->conn->mtu - L2CAP_HDR_SIZE;
-
- pdu_len = min_t(size_t, pdu_len, chan->remote_mps);
-
sdu_len = len;
- pdu_len -= L2CAP_SDULEN_SIZE;
+ pdu_len = chan->remote_mps - L2CAP_SDULEN_SIZE;
while (len > 0) {
if (len <= pdu_len)
@@ -5428,6 +5431,11 @@
if (test_bit(FLAG_DEFER_SETUP, &chan->flags)) {
l2cap_state_change(chan, BT_CONNECT2);
+ /* The following result value is actually not defined
+ * for LE CoC but we use it to let the function know
+ * that it should bail out after doing its cleanup
+ * instead of sending a response.
+ */
result = L2CAP_CR_PEND;
chan->ops->defer(chan);
} else {
@@ -6904,8 +6912,7 @@
kref_init(&conn->ref);
hcon->l2cap_data = conn;
- conn->hcon = hcon;
- hci_conn_get(conn->hcon);
+ conn->hcon = hci_conn_get(hcon);
conn->hchan = hchan;
BT_DBG("hcon %p conn %p hchan %p", hcon, conn, hchan);
@@ -6936,10 +6943,9 @@
INIT_DELAYED_WORK(&conn->info_timer, l2cap_info_timeout);
- INIT_WORK(&conn->disconn_work, disconn_work);
-
skb_queue_head_init(&conn->pending_rx);
INIT_WORK(&conn->pending_rx_work, process_pending_rx);
+ INIT_WORK(&conn->id_addr_update_work, l2cap_conn_update_id_addr);
conn->disc_reason = HCI_ERROR_REMOTE_USER_TERM;
@@ -7082,9 +7088,7 @@
bacpy(&chan->src, &hcon->src);
chan->src_type = bdaddr_type(hcon, hcon->src_type);
- l2cap_chan_unlock(chan);
l2cap_chan_add(conn, chan);
- l2cap_chan_lock(chan);
/* l2cap_chan_add takes its own ref so we can drop this one */
hci_conn_drop(hcon);
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index ed06f88..31f106e 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -146,6 +146,14 @@
case L2CAP_CHAN_RAW:
chan->sec_level = BT_SECURITY_SDP;
break;
+ case L2CAP_CHAN_FIXED:
+ /* Fixed channels default to the L2CAP core not holding a
+ * hci_conn reference for them. For fixed channels mapping to
+ * L2CAP sockets we do want to hold a reference, so set the
+ * appropriate flag to request it.
+ */
+ set_bit(FLAG_HOLD_HCI_CONN, &chan->flags);
+ break;
}
bacpy(&chan->src, &la.l2_bdaddr);
diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
index c245743..efb71b0 100644
--- a/net/bluetooth/mgmt.c
+++ b/net/bluetooth/mgmt.c
@@ -2788,7 +2788,6 @@
{
struct mgmt_cp_disconnect *cp = data;
struct mgmt_rp_disconnect rp;
- struct hci_cp_disconnect dc;
struct pending_cmd *cmd;
struct hci_conn *conn;
int err;
@@ -2836,10 +2835,7 @@
goto failed;
}
- dc.handle = cpu_to_le16(conn->handle);
- dc.reason = HCI_ERROR_REMOTE_USER_TERM;
-
- err = hci_send_cmd(hdev, HCI_OP_DISCONNECT, sizeof(dc), &dc);
+ err = hci_disconnect(conn, HCI_ERROR_REMOTE_USER_TERM);
if (err < 0)
mgmt_pending_remove(cmd);
@@ -3063,6 +3059,7 @@
conn->disconn_cfm_cb = NULL;
hci_conn_drop(conn);
+ hci_conn_put(conn);
mgmt_pending_remove(cmd);
}
@@ -3212,7 +3209,7 @@
}
conn->io_capability = cp->io_cap;
- cmd->user_data = conn;
+ cmd->user_data = hci_conn_get(conn);
if ((conn->state == BT_CONNECTED || conn->state == BT_CONFIG) &&
hci_conn_security(conn, sec_level, auth_type, true))
@@ -4914,6 +4911,7 @@
match->mgmt_status, &rp, sizeof(rp));
hci_conn_drop(conn);
+ hci_conn_put(conn);
mgmt_pending_remove(cmd);
}
@@ -5070,7 +5068,7 @@
}
hci_conn_hold(conn);
- cmd->user_data = conn;
+ cmd->user_data = hci_conn_get(conn);
conn->conn_info_timestamp = jiffies;
} else {
@@ -5134,8 +5132,10 @@
cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(status),
&rp, sizeof(rp));
mgmt_pending_remove(cmd);
- if (conn)
+ if (conn) {
hci_conn_drop(conn);
+ hci_conn_put(conn);
+ }
unlock:
hci_dev_unlock(hdev);
@@ -5198,7 +5198,7 @@
if (conn) {
hci_conn_hold(conn);
- cmd->user_data = conn;
+ cmd->user_data = hci_conn_get(conn);
hci_cp.handle = cpu_to_le16(conn->handle);
hci_cp.which = 0x01; /* Piconet clock */
@@ -6485,16 +6485,23 @@
return mgmt_event(MGMT_EV_PASSKEY_NOTIFY, hdev, &ev, sizeof(ev), NULL);
}
-void mgmt_auth_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
- u8 addr_type, u8 status)
+void mgmt_auth_failed(struct hci_conn *conn, u8 hci_status)
{
struct mgmt_ev_auth_failed ev;
+ struct pending_cmd *cmd;
+ u8 status = mgmt_status(hci_status);
- bacpy(&ev.addr.bdaddr, bdaddr);
- ev.addr.type = link_to_bdaddr(link_type, addr_type);
- ev.status = mgmt_status(status);
+ bacpy(&ev.addr.bdaddr, &conn->dst);
+ ev.addr.type = link_to_bdaddr(conn->type, conn->dst_type);
+ ev.status = status;
- mgmt_event(MGMT_EV_AUTH_FAILED, hdev, &ev, sizeof(ev), NULL);
+ cmd = find_pairing(conn);
+
+ mgmt_event(MGMT_EV_AUTH_FAILED, conn->hdev, &ev, sizeof(ev),
+ cmd ? cmd->sk : NULL);
+
+ if (cmd)
+ pairing_complete(cmd, status);
}
void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index 07ca4ce..51fc7db 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -31,9 +31,12 @@
#include "smp.h"
+#define SMP_ALLOW_CMD(smp, code) set_bit(code, &smp->allow_cmd)
+
#define SMP_TIMEOUT msecs_to_jiffies(30000)
#define AUTH_REQ_MASK 0x07
+#define KEY_DIST_MASK 0x07
enum {
SMP_FLAG_TK_VALID,
@@ -46,7 +49,7 @@
struct smp_chan {
struct l2cap_conn *conn;
struct delayed_work security_timer;
- struct work_struct distribute_work;
+ unsigned long allow_cmd; /* Bitmask of allowed commands */
u8 preq[7]; /* SMP Pairing Request */
u8 prsp[7]; /* SMP Pairing Response */
@@ -282,8 +285,7 @@
smp = chan->data;
cancel_delayed_work_sync(&smp->security_timer);
- if (test_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags))
- schedule_delayed_work(&smp->security_timer, SMP_TIMEOUT);
+ schedule_delayed_work(&smp->security_timer, SMP_TIMEOUT);
}
static __u8 authreq_to_seclevel(__u8 authreq)
@@ -375,15 +377,6 @@
BUG_ON(!smp);
cancel_delayed_work_sync(&smp->security_timer);
- /* In case the timeout freed the SMP context */
- if (!chan->data)
- return;
-
- if (work_pending(&smp->distribute_work)) {
- cancel_work_sync(&smp->distribute_work);
- if (!chan->data)
- return;
- }
complete = test_bit(SMP_FLAG_COMPLETE, &smp->flags);
mgmt_smp_complete(conn->hcon, complete);
@@ -420,22 +413,15 @@
{
struct hci_conn *hcon = conn->hcon;
struct l2cap_chan *chan = conn->smp;
- struct smp_chan *smp;
if (reason)
smp_send_cmd(conn, SMP_CMD_PAIRING_FAIL, sizeof(reason),
&reason);
clear_bit(HCI_CONN_ENCRYPT_PEND, &hcon->flags);
- mgmt_auth_failed(hcon->hdev, &hcon->dst, hcon->type, hcon->dst_type,
- HCI_ERROR_AUTH_FAILURE);
+ mgmt_auth_failed(hcon, HCI_ERROR_AUTH_FAILURE);
- if (!chan->data)
- return;
-
- smp = chan->data;
-
- if (test_and_clear_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))
+ if (chan->data)
smp_chan_destroy(conn);
}
@@ -569,6 +555,11 @@
smp_send_cmd(smp->conn, SMP_CMD_PAIRING_CONFIRM, sizeof(cp), &cp);
+ if (conn->hcon->out)
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
+ else
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+
return 0;
}
@@ -658,7 +649,7 @@
*/
bacpy(&hcon->dst, &smp->remote_irk->bdaddr);
hcon->dst_type = smp->remote_irk->addr_type;
- l2cap_conn_update_id_addr(hcon);
+ queue_work(hdev->workqueue, &conn->id_addr_update_work);
/* When receiving an identity resolving key for
* a remote device that does not use a resolvable
@@ -707,10 +698,22 @@
}
}
-static void smp_distribute_keys(struct work_struct *work)
+static void smp_allow_key_dist(struct smp_chan *smp)
{
- struct smp_chan *smp = container_of(work, struct smp_chan,
- distribute_work);
+ /* Allow the first expected phase 3 PDU. The rest of the PDUs
+ * will be allowed in each PDU handler to ensure we receive
+ * them in the correct order.
+ */
+ if (smp->remote_key_dist & SMP_DIST_ENC_KEY)
+ SMP_ALLOW_CMD(smp, SMP_CMD_ENCRYPT_INFO);
+ else if (smp->remote_key_dist & SMP_DIST_ID_KEY)
+ SMP_ALLOW_CMD(smp, SMP_CMD_IDENT_INFO);
+ else if (smp->remote_key_dist & SMP_DIST_SIGN)
+ SMP_ALLOW_CMD(smp, SMP_CMD_SIGN_INFO);
+}
+
+static void smp_distribute_keys(struct smp_chan *smp)
+{
struct smp_cmd_pairing *req, *rsp;
struct l2cap_conn *conn = smp->conn;
struct hci_conn *hcon = conn->hcon;
@@ -719,14 +722,13 @@
BT_DBG("conn %p", conn);
- if (!test_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))
- return;
-
rsp = (void *) &smp->prsp[1];
/* The responder sends its keys first */
- if (hcon->out && (smp->remote_key_dist & 0x07))
+ if (hcon->out && (smp->remote_key_dist & KEY_DIST_MASK)) {
+ smp_allow_key_dist(smp);
return;
+ }
req = (void *) &smp->preq[1];
@@ -811,10 +813,11 @@
}
/* If there are still keys to be received wait for them */
- if ((smp->remote_key_dist & 0x07))
+ if (smp->remote_key_dist & KEY_DIST_MASK) {
+ smp_allow_key_dist(smp);
return;
+ }
- clear_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags);
set_bit(SMP_FLAG_COMPLETE, &smp->flags);
smp_notify_keys(conn);
@@ -829,7 +832,7 @@
BT_DBG("conn %p", conn);
- l2cap_conn_shutdown(conn, ETIMEDOUT);
+ hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM);
}
static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
@@ -838,23 +841,21 @@
struct smp_chan *smp;
smp = kzalloc(sizeof(*smp), GFP_ATOMIC);
- if (!smp) {
- clear_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags);
+ if (!smp)
return NULL;
- }
smp->tfm_aes = crypto_alloc_blkcipher("ecb(aes)", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(smp->tfm_aes)) {
BT_ERR("Unable to create ECB crypto context");
kfree(smp);
- clear_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags);
return NULL;
}
smp->conn = conn;
chan->data = smp;
- INIT_WORK(&smp->distribute_work, smp_distribute_keys);
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_FAIL);
+
INIT_DELAYED_WORK(&smp->security_timer, smp_timeout);
hci_conn_hold(conn->hcon);
@@ -868,16 +869,23 @@
struct l2cap_chan *chan;
struct smp_chan *smp;
u32 value;
+ int err;
BT_DBG("");
- if (!conn || !test_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))
+ if (!conn)
return -ENOTCONN;
chan = conn->smp;
if (!chan)
return -ENOTCONN;
+ l2cap_chan_lock(chan);
+ if (!chan->data) {
+ err = -ENOTCONN;
+ goto unlock;
+ }
+
smp = chan->data;
switch (mgmt_op) {
@@ -893,12 +901,16 @@
case MGMT_OP_USER_PASSKEY_NEG_REPLY:
case MGMT_OP_USER_CONFIRM_NEG_REPLY:
smp_failure(conn, SMP_PASSKEY_ENTRY_FAILED);
- return 0;
+ err = 0;
+ goto unlock;
default:
smp_failure(conn, SMP_PASSKEY_ENTRY_FAILED);
- return -EOPNOTSUPP;
+ err = -EOPNOTSUPP;
+ goto unlock;
}
+ err = 0;
+
/* If it is our turn to send Pairing Confirm, do so now */
if (test_bit(SMP_FLAG_CFM_PENDING, &smp->flags)) {
u8 rsp = smp_confirm(smp);
@@ -906,12 +918,15 @@
smp_failure(conn, rsp);
}
- return 0;
+unlock:
+ l2cap_chan_unlock(chan);
+ return err;
}
static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
{
struct smp_cmd_pairing rsp, *req = (void *) skb->data;
+ struct l2cap_chan *chan = conn->smp;
struct hci_dev *hdev = conn->hcon->hdev;
struct smp_chan *smp;
u8 key_size, auth, sec_level;
@@ -925,28 +940,30 @@
if (conn->hcon->role != HCI_ROLE_SLAVE)
return SMP_CMD_NOTSUPP;
- if (!test_and_set_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags)) {
+ if (!chan->data)
smp = smp_chan_create(conn);
- } else {
- struct l2cap_chan *chan = conn->smp;
+ else
smp = chan->data;
- }
if (!smp)
return SMP_UNSPECIFIED;
+ /* We didn't start the pairing, so match remote */
+ auth = req->auth_req & AUTH_REQ_MASK;
+
if (!test_bit(HCI_BONDABLE, &hdev->dev_flags) &&
- (req->auth_req & SMP_AUTH_BONDING))
+ (auth & SMP_AUTH_BONDING))
return SMP_PAIRING_NOTSUPP;
smp->preq[0] = SMP_CMD_PAIRING_REQ;
memcpy(&smp->preq[1], req, sizeof(*req));
skb_pull(skb, sizeof(*req));
- /* We didn't start the pairing, so match remote */
- auth = req->auth_req;
+ if (conn->hcon->io_capability == HCI_IO_NO_INPUT_OUTPUT)
+ sec_level = BT_SECURITY_MEDIUM;
+ else
+ sec_level = authreq_to_seclevel(auth);
- sec_level = authreq_to_seclevel(auth);
if (sec_level > conn->hcon->pending_sec_level)
conn->hcon->pending_sec_level = sec_level;
@@ -972,6 +989,7 @@
memcpy(&smp->prsp[1], &rsp, sizeof(rsp));
smp_send_cmd(conn, SMP_CMD_PAIRING_RSP, sizeof(rsp), &rsp);
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
/* Request setup of TK */
ret = tk_request(conn, 0, auth, rsp.io_capability, req->io_capability);
@@ -986,7 +1004,7 @@
struct smp_cmd_pairing *req, *rsp = (void *) skb->data;
struct l2cap_chan *chan = conn->smp;
struct smp_chan *smp = chan->data;
- u8 key_size, auth = SMP_AUTH_NONE;
+ u8 key_size, auth;
int ret;
BT_DBG("conn %p", conn);
@@ -1005,6 +1023,8 @@
if (check_enc_key_size(conn, key_size))
return SMP_ENC_KEY_SIZE;
+ auth = rsp->auth_req & AUTH_REQ_MASK;
+
+ /* If we need MITM, check that it can be achieved */
if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) {
u8 method;
@@ -1025,11 +1045,7 @@
*/
smp->remote_key_dist &= rsp->resp_key_dist;
- if ((req->auth_req & SMP_AUTH_BONDING) &&
- (rsp->auth_req & SMP_AUTH_BONDING))
- auth = SMP_AUTH_BONDING;
-
- auth |= (req->auth_req | rsp->auth_req) & SMP_AUTH_MITM;
+ auth |= req->auth_req;
ret = tk_request(conn, 0, auth, req->io_capability, rsp->io_capability);
if (ret)
@@ -1057,10 +1073,14 @@
memcpy(smp->pcnf, skb->data, sizeof(smp->pcnf));
skb_pull(skb, sizeof(smp->pcnf));
- if (conn->hcon->out)
+ if (conn->hcon->out) {
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
smp->prnd);
- else if (test_bit(SMP_FLAG_TK_VALID, &smp->flags))
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+ return 0;
+ }
+
+ if (test_bit(SMP_FLAG_TK_VALID, &smp->flags))
return smp_confirm(smp);
else
set_bit(SMP_FLAG_CFM_PENDING, &smp->flags);
@@ -1094,7 +1114,7 @@
if (!key)
return false;
- if (sec_level > BT_SECURITY_MEDIUM && !key->authenticated)
+ if (smp_ltk_sec_level(key) < sec_level)
return false;
if (test_and_set_bit(HCI_CONN_ENCRYPT_PEND, &hcon->flags))
@@ -1137,7 +1157,7 @@
struct smp_cmd_pairing cp;
struct hci_conn *hcon = conn->hcon;
struct smp_chan *smp;
- u8 sec_level;
+ u8 sec_level, auth;
BT_DBG("conn %p", conn);
@@ -1147,7 +1167,13 @@
if (hcon->role != HCI_ROLE_MASTER)
return SMP_CMD_NOTSUPP;
- sec_level = authreq_to_seclevel(rp->auth_req);
+ auth = rp->auth_req & AUTH_REQ_MASK;
+
+ if (hcon->io_capability == HCI_IO_NO_INPUT_OUTPUT)
+ sec_level = BT_SECURITY_MEDIUM;
+ else
+ sec_level = authreq_to_seclevel(auth);
+
if (smp_sufficient_security(hcon, sec_level))
return 0;
@@ -1157,26 +1183,24 @@
if (smp_ltk_encrypt(conn, hcon->pending_sec_level))
return 0;
- if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))
- return 0;
-
smp = smp_chan_create(conn);
if (!smp)
return SMP_UNSPECIFIED;
if (!test_bit(HCI_BONDABLE, &hcon->hdev->dev_flags) &&
- (rp->auth_req & SMP_AUTH_BONDING))
+ (auth & SMP_AUTH_BONDING))
return SMP_PAIRING_NOTSUPP;
skb_pull(skb, sizeof(*rp));
memset(&cp, 0, sizeof(cp));
- build_pairing_cmd(conn, &cp, NULL, rp->auth_req);
+ build_pairing_cmd(conn, &cp, NULL, auth);
smp->preq[0] = SMP_CMD_PAIRING_REQ;
memcpy(&smp->preq[1], &cp, sizeof(cp));
smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
return 0;
}
@@ -1184,8 +1208,10 @@
int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
{
struct l2cap_conn *conn = hcon->l2cap_data;
+ struct l2cap_chan *chan;
struct smp_chan *smp;
__u8 authreq;
+ int ret;
BT_DBG("conn %p hcon %p level 0x%2.2x", conn, hcon, sec_level);
@@ -1193,6 +1219,8 @@
if (!conn)
return 1;
+ chan = conn->smp;
+
if (!test_bit(HCI_LE_ENABLED, &hcon->hdev->dev_flags))
return 1;
@@ -1206,12 +1234,19 @@
if (smp_ltk_encrypt(conn, hcon->pending_sec_level))
return 0;
- if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))
- return 0;
+ l2cap_chan_lock(chan);
+
+ /* If SMP is already in progress ignore this request */
+ if (chan->data) {
+ ret = 0;
+ goto unlock;
+ }
smp = smp_chan_create(conn);
- if (!smp)
- return 1;
+ if (!smp) {
+ ret = 1;
+ goto unlock;
+ }
authreq = seclevel_to_authreq(sec_level);
@@ -1230,15 +1265,20 @@
memcpy(&smp->preq[1], &cp, sizeof(cp));
smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
} else {
struct smp_cmd_security_req cp;
cp.auth_req = authreq;
smp_send_cmd(conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
+ SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
}
set_bit(SMP_FLAG_INITIATOR, &smp->flags);
+ ret = 0;
- return 0;
+unlock:
+ l2cap_chan_unlock(chan);
+ return ret;
}
static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
@@ -1252,9 +1292,7 @@
if (skb->len < sizeof(*rp))
return SMP_INVALID_PARAMS;
- /* Ignore this PDU if it wasn't requested */
- if (!(smp->remote_key_dist & SMP_DIST_ENC_KEY))
- return 0;
+ SMP_ALLOW_CMD(smp, SMP_CMD_MASTER_IDENT);
skb_pull(skb, sizeof(*rp));
@@ -1278,13 +1316,14 @@
if (skb->len < sizeof(*rp))
return SMP_INVALID_PARAMS;
- /* Ignore this PDU if it wasn't requested */
- if (!(smp->remote_key_dist & SMP_DIST_ENC_KEY))
- return 0;
-
/* Mark the information as received */
smp->remote_key_dist &= ~SMP_DIST_ENC_KEY;
+ if (smp->remote_key_dist & SMP_DIST_ID_KEY)
+ SMP_ALLOW_CMD(smp, SMP_CMD_IDENT_INFO);
+ else if (smp->remote_key_dist & SMP_DIST_SIGN)
+ SMP_ALLOW_CMD(smp, SMP_CMD_SIGN_INFO);
+
skb_pull(skb, sizeof(*rp));
hci_dev_lock(hdev);
@@ -1293,8 +1332,8 @@
authenticated, smp->tk, smp->enc_key_size,
rp->ediv, rp->rand);
smp->ltk = ltk;
- if (!(smp->remote_key_dist & SMP_DIST_ID_KEY))
- queue_work(hdev->workqueue, &smp->distribute_work);
+ if (!(smp->remote_key_dist & KEY_DIST_MASK))
+ smp_distribute_keys(smp);
hci_dev_unlock(hdev);
return 0;
@@ -1311,9 +1350,7 @@
if (skb->len < sizeof(*info))
return SMP_INVALID_PARAMS;
- /* Ignore this PDU if it wasn't requested */
- if (!(smp->remote_key_dist & SMP_DIST_ID_KEY))
- return 0;
+ SMP_ALLOW_CMD(smp, SMP_CMD_IDENT_ADDR_INFO);
skb_pull(skb, sizeof(*info));
@@ -1329,7 +1366,6 @@
struct l2cap_chan *chan = conn->smp;
struct smp_chan *smp = chan->data;
struct hci_conn *hcon = conn->hcon;
- struct hci_dev *hdev = hcon->hdev;
bdaddr_t rpa;
BT_DBG("");
@@ -1337,13 +1373,12 @@
if (skb->len < sizeof(*info))
return SMP_INVALID_PARAMS;
- /* Ignore this PDU if it wasn't requested */
- if (!(smp->remote_key_dist & SMP_DIST_ID_KEY))
- return 0;
-
/* Mark the information as received */
smp->remote_key_dist &= ~SMP_DIST_ID_KEY;
+ if (smp->remote_key_dist & SMP_DIST_SIGN)
+ SMP_ALLOW_CMD(smp, SMP_CMD_SIGN_INFO);
+
skb_pull(skb, sizeof(*info));
hci_dev_lock(hcon->hdev);
@@ -1372,7 +1407,8 @@
smp->id_addr_type, smp->irk, &rpa);
distribute:
- queue_work(hdev->workqueue, &smp->distribute_work);
+ if (!(smp->remote_key_dist & KEY_DIST_MASK))
+ smp_distribute_keys(smp);
hci_dev_unlock(hcon->hdev);
@@ -1392,10 +1428,6 @@
if (skb->len < sizeof(*rp))
return SMP_INVALID_PARAMS;
- /* Ignore this PDU if it wasn't requested */
- if (!(smp->remote_key_dist & SMP_DIST_SIGN))
- return 0;
-
/* Mark the information as received */
smp->remote_key_dist &= ~SMP_DIST_SIGN;
@@ -1408,7 +1440,7 @@
memcpy(csrk->val, rp->csrk, sizeof(csrk->val));
}
smp->csrk = csrk;
- queue_work(hdev->workqueue, &smp->distribute_work);
+ smp_distribute_keys(smp);
hci_dev_unlock(hdev);
return 0;
@@ -1418,6 +1450,7 @@
{
struct l2cap_conn *conn = chan->conn;
struct hci_conn *hcon = conn->hcon;
+ struct smp_chan *smp;
__u8 code, reason;
int err = 0;
@@ -1430,7 +1463,6 @@
return -EILSEQ;
if (!test_bit(HCI_LE_ENABLED, &hcon->hdev->dev_flags)) {
- err = -EOPNOTSUPP;
reason = SMP_PAIRING_NOTSUPP;
goto done;
}
@@ -1438,19 +1470,19 @@
code = skb->data[0];
skb_pull(skb, sizeof(code));
- /*
- * The SMP context must be initialized for all other PDUs except
- * pairing and security requests. If we get any other PDU when
- * not initialized simply disconnect (done if this function
- * returns an error).
+ smp = chan->data;
+
+ if (code > SMP_CMD_MAX)
+ goto drop;
+
+ if (smp && !test_and_clear_bit(code, &smp->allow_cmd))
+ goto drop;
+
+ /* If we don't have a context, the only allowed commands are
+ * pairing request and security request.
*/
- if (code != SMP_CMD_PAIRING_REQ && code != SMP_CMD_SECURITY_REQ &&
- !test_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags)) {
- BT_ERR("Unexpected SMP command 0x%02x. Disconnecting.", code);
- reason = SMP_CMD_NOTSUPP;
- err = -EOPNOTSUPP;
- goto done;
- }
+ if (!smp && code != SMP_CMD_PAIRING_REQ && code != SMP_CMD_SECURITY_REQ)
+ goto drop;
switch (code) {
case SMP_CMD_PAIRING_REQ:
@@ -1459,7 +1491,6 @@
case SMP_CMD_PAIRING_FAIL:
smp_failure(conn, 0);
- reason = 0;
err = -EPERM;
break;
@@ -1501,18 +1532,24 @@
default:
BT_DBG("Unknown command code 0x%2.2x", code);
-
reason = SMP_CMD_NOTSUPP;
- err = -EOPNOTSUPP;
goto done;
}
done:
- if (reason)
- smp_failure(conn, reason);
- if (!err)
+ if (!err) {
+ if (reason)
+ smp_failure(conn, reason);
kfree_skb(skb);
+ }
+
return err;
+
+drop:
+ BT_ERR("%s unexpected SMP command 0x%02x from %pMR", hcon->hdev->name,
+ code, &hcon->dst);
+ kfree_skb(skb);
+ return 0;
}
static void smp_teardown_cb(struct l2cap_chan *chan, int err)
@@ -1521,7 +1558,7 @@
BT_DBG("chan %p", chan);
- if (test_and_clear_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags))
+ if (chan->data)
smp_chan_destroy(conn);
conn->smp = NULL;
@@ -1533,17 +1570,18 @@
struct smp_chan *smp = chan->data;
struct l2cap_conn *conn = chan->conn;
struct hci_conn *hcon = conn->hcon;
- struct hci_dev *hdev = hcon->hdev;
BT_DBG("chan %p", chan);
if (!smp)
return;
+ if (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags))
+ return;
+
cancel_delayed_work(&smp->security_timer);
- if (test_bit(HCI_CONN_ENCRYPT, &hcon->flags))
- queue_work(hdev->workqueue, &smp->distribute_work);
+ smp_distribute_keys(smp);
}
static void smp_ready_cb(struct l2cap_chan *chan)
@@ -1569,7 +1607,7 @@
if (smp)
cancel_delayed_work_sync(&smp->security_timer);
- l2cap_conn_shutdown(chan->conn, -err);
+ hci_disconnect(chan->conn->hcon, HCI_ERROR_AUTH_FAILURE);
}
return err;
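The allow_cmd mechanism above replaces the per-handler "was this requested?" checks: each handler whitelists only the next legal opcode, and smp_sig_channel() consumes the bit before dispatching, dropping anything out of order. A condensed userspace sketch of the gating (non-atomic stand-ins for set_bit/test_and_clear_bit, illustrative opcode values):

    #include <stdio.h>

    #define CMD_PAIRING_CONFIRM 3
    #define CMD_PAIRING_RANDOM  4

    static unsigned long allow_cmd;

    #define ALLOW(code)   (allow_cmd |= 1UL << (code))
    /* Test-and-clear: a command is accepted at most once. */
    #define CONSUME(code) \
        ((allow_cmd & (1UL << (code))) ? \
         (allow_cmd &= ~(1UL << (code)), 1) : 0)

    int main(void)
    {
        ALLOW(CMD_PAIRING_CONFIRM);
        /* An out-of-order random PDU is dropped... */
        printf("random:  %s\n", CONSUME(CMD_PAIRING_RANDOM) ? "ok" : "drop");
        /* ...the expected confirm is accepted exactly once. */
        printf("confirm: %s\n", CONSUME(CMD_PAIRING_CONFIRM) ? "ok" : "drop");
        printf("again:   %s\n", CONSUME(CMD_PAIRING_CONFIRM) ? "ok" : "drop");
        return 0;
    }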
diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
index cf10946..86a683a 100644
--- a/net/bluetooth/smp.h
+++ b/net/bluetooth/smp.h
@@ -102,6 +102,8 @@
__u8 auth_req;
} __packed;
+#define SMP_CMD_MAX 0x0b
+
#define SMP_PASSKEY_ENTRY_FAILED 0x01
#define SMP_OOB_NOT_AVAIL 0x02
#define SMP_AUTH_REQUIREMENTS 0x03
@@ -123,6 +125,14 @@
SMP_LTK_SLAVE,
};
+static inline u8 smp_ltk_sec_level(struct smp_ltk *key)
+{
+ if (key->authenticated)
+ return BT_SECURITY_HIGH;
+
+ return BT_SECURITY_MEDIUM;
+}
+
/* SMP Commands */
bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level);
int smp_conn_security(struct hci_conn *hcon, __u8 sec_level);
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index d304d75..f53592f 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -309,6 +309,9 @@
int igmp;
int mrouters_only;
#endif
+#ifdef CONFIG_BRIDGE_VLAN_FILTERING
+ bool vlan_filtered;
+#endif
};
#define BR_INPUT_SKB_CB(__skb) ((struct br_input_skb_cb *)(__skb)->cb)
diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
index e1bcd65..3ba57fc 100644
--- a/net/bridge/br_vlan.c
+++ b/net/bridge/br_vlan.c
@@ -27,9 +27,13 @@
{
if (flags & BRIDGE_VLAN_INFO_PVID)
__vlan_add_pvid(v, vid);
+ else
+ __vlan_delete_pvid(v, vid);
if (flags & BRIDGE_VLAN_INFO_UNTAGGED)
set_bit(vid, v->untagged_bitmap);
+ else
+ clear_bit(vid, v->untagged_bitmap);
}
static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags)
@@ -125,7 +129,8 @@
{
u16 vid;
- if (!br->vlan_enabled)
+ /* If this packet was not filtered at input, let it pass */
+ if (!BR_INPUT_SKB_CB(skb)->vlan_filtered)
goto out;
/* Vlan filter table must be configured at this point. The
@@ -164,8 +169,10 @@
/* If VLAN filtering is disabled on the bridge, all packets are
* permitted.
*/
- if (!br->vlan_enabled)
+ if (!br->vlan_enabled) {
+ BR_INPUT_SKB_CB(skb)->vlan_filtered = false;
return true;
+ }
/* If there are no vlans in the permitted list, all packets are
* rejected.
@@ -173,6 +180,7 @@
if (!v)
goto drop;
+ BR_INPUT_SKB_CB(skb)->vlan_filtered = true;
proto = br->vlan_proto;
/* If vlan tx offload is disabled on bridge device and frame was
@@ -251,7 +259,8 @@
{
u16 vid;
- if (!br->vlan_enabled)
+ /* If this packet was not filtered at input, let it pass */
+ if (!BR_INPUT_SKB_CB(skb)->vlan_filtered)
return true;
if (!v)
@@ -270,6 +279,7 @@
struct net_bridge *br = p->br;
struct net_port_vlans *v;
+ /* If filtering was disabled at input, let it pass. */
if (!br->vlan_enabled)
return true;
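The bridge change records the ingress filtering decision in the per-skb control block, so egress keys off what actually happened to this packet rather than re-reading br->vlan_enabled, which may have been toggled in between. A minimal sketch of the per-packet-flag pattern (simplified types, no real vlan table):

    #include <stdbool.h>
    #include <stdio.h>

    struct skb { char cb[48]; };
    struct input_cb { bool vlan_filtered; };
    #define INPUT_CB(skb) ((struct input_cb *)(skb)->cb)

    static bool vlan_enabled = true;

    static void ingress(struct skb *skb)
    {
        /* Record the decision with the packet... */
        INPUT_CB(skb)->vlan_filtered = vlan_enabled;
    }

    static bool egress_allowed(struct skb *skb)
    {
        /* ...so a racing toggle of vlan_enabled between ingress
         * and egress cannot produce an inconsistent verdict. */
        if (!INPUT_CB(skb)->vlan_filtered)
            return true;
        return true; /* a real bridge consults the vlan table here */
    }

    int main(void)
    {
        struct skb skb = { { 0 } };
        ingress(&skb);
        vlan_enabled = false;   /* toggled mid-flight */
        printf("%d\n", egress_allowed(&skb));
        return 0;
    }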
diff --git a/net/ceph/auth_x.c b/net/ceph/auth_x.c
index 96238ba..de6662b 100644
--- a/net/ceph/auth_x.c
+++ b/net/ceph/auth_x.c
@@ -13,8 +13,6 @@
#include "auth_x.h"
#include "auth_x_protocol.h"
-#define TEMP_TICKET_BUF_LEN 256
-
static void ceph_x_validate_tickets(struct ceph_auth_client *ac, int *pneed);
static int ceph_x_is_authenticated(struct ceph_auth_client *ac)
@@ -64,7 +62,7 @@
}
static int ceph_x_decrypt(struct ceph_crypto_key *secret,
- void **p, void *end, void *obuf, size_t olen)
+ void **p, void *end, void **obuf, size_t olen)
{
struct ceph_x_encrypt_header head;
size_t head_len = sizeof(head);
@@ -75,8 +73,14 @@
return -EINVAL;
dout("ceph_x_decrypt len %d\n", len);
- ret = ceph_decrypt2(secret, &head, &head_len, obuf, &olen,
- *p, len);
+ if (*obuf == NULL) {
+ *obuf = kmalloc(len, GFP_NOFS);
+ if (!*obuf)
+ return -ENOMEM;
+ olen = len;
+ }
+
+ ret = ceph_decrypt2(secret, &head, &head_len, *obuf, &olen, *p, len);
if (ret)
return ret;
if (head.struct_v != 1 || le64_to_cpu(head.magic) != CEPHX_ENC_MAGIC)
@@ -129,139 +133,120 @@
kfree(th);
}
-static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac,
- struct ceph_crypto_key *secret,
- void *buf, void *end)
+static int process_one_ticket(struct ceph_auth_client *ac,
+ struct ceph_crypto_key *secret,
+ void **p, void *end)
{
struct ceph_x_info *xi = ac->private;
- int num;
- void *p = buf;
+ int type;
+ u8 tkt_struct_v, blob_struct_v;
+ struct ceph_x_ticket_handler *th;
+ void *dbuf = NULL;
+ void *dp, *dend;
+ int dlen;
+ char is_enc;
+ struct timespec validity;
+ struct ceph_crypto_key old_key;
+ void *ticket_buf = NULL;
+ void *tp, *tpend;
+ struct ceph_timespec new_validity;
+ struct ceph_crypto_key new_session_key;
+ struct ceph_buffer *new_ticket_blob;
+ unsigned long new_expires, new_renew_after;
+ u64 new_secret_id;
int ret;
- char *dbuf;
- char *ticket_buf;
- u8 reply_struct_v;
- dbuf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS);
- if (!dbuf)
- return -ENOMEM;
+ ceph_decode_need(p, end, sizeof(u32) + 1, bad);
- ret = -ENOMEM;
- ticket_buf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS);
- if (!ticket_buf)
- goto out_dbuf;
+ type = ceph_decode_32(p);
+ dout(" ticket type %d %s\n", type, ceph_entity_type_name(type));
- ceph_decode_need(&p, end, 1 + sizeof(u32), bad);
- reply_struct_v = ceph_decode_8(&p);
- if (reply_struct_v != 1)
+ tkt_struct_v = ceph_decode_8(p);
+ if (tkt_struct_v != 1)
goto bad;
- num = ceph_decode_32(&p);
- dout("%d tickets\n", num);
- while (num--) {
- int type;
- u8 tkt_struct_v, blob_struct_v;
- struct ceph_x_ticket_handler *th;
- void *dp, *dend;
- int dlen;
- char is_enc;
- struct timespec validity;
- struct ceph_crypto_key old_key;
- void *tp, *tpend;
- struct ceph_timespec new_validity;
- struct ceph_crypto_key new_session_key;
- struct ceph_buffer *new_ticket_blob;
- unsigned long new_expires, new_renew_after;
- u64 new_secret_id;
- ceph_decode_need(&p, end, sizeof(u32) + 1, bad);
+ th = get_ticket_handler(ac, type);
+ if (IS_ERR(th)) {
+ ret = PTR_ERR(th);
+ goto out;
+ }
- type = ceph_decode_32(&p);
- dout(" ticket type %d %s\n", type, ceph_entity_type_name(type));
+ /* blob for me */
+ dlen = ceph_x_decrypt(secret, p, end, &dbuf, 0);
+ if (dlen <= 0) {
+ ret = dlen;
+ goto out;
+ }
+ dout(" decrypted %d bytes\n", dlen);
+ dp = dbuf;
+ dend = dp + dlen;
- tkt_struct_v = ceph_decode_8(&p);
- if (tkt_struct_v != 1)
- goto bad;
+ tkt_struct_v = ceph_decode_8(&dp);
+ if (tkt_struct_v != 1)
+ goto bad;
- th = get_ticket_handler(ac, type);
- if (IS_ERR(th)) {
- ret = PTR_ERR(th);
- goto out;
- }
+ memcpy(&old_key, &th->session_key, sizeof(old_key));
+ ret = ceph_crypto_key_decode(&new_session_key, &dp, dend);
+ if (ret)
+ goto out;
- /* blob for me */
- dlen = ceph_x_decrypt(secret, &p, end, dbuf,
- TEMP_TICKET_BUF_LEN);
- if (dlen <= 0) {
+ ceph_decode_copy(&dp, &new_validity, sizeof(new_validity));
+ ceph_decode_timespec(&validity, &new_validity);
+ new_expires = get_seconds() + validity.tv_sec;
+ new_renew_after = new_expires - (validity.tv_sec / 4);
+ dout(" expires=%lu renew_after=%lu\n", new_expires,
+ new_renew_after);
+
+ /* ticket blob for service */
+ ceph_decode_8_safe(p, end, is_enc, bad);
+ if (is_enc) {
+ /* encrypted */
+ dout(" encrypted ticket\n");
+ dlen = ceph_x_decrypt(&old_key, p, end, &ticket_buf, 0);
+ if (dlen < 0) {
ret = dlen;
goto out;
}
- dout(" decrypted %d bytes\n", dlen);
- dend = dbuf + dlen;
- dp = dbuf;
-
- tkt_struct_v = ceph_decode_8(&dp);
- if (tkt_struct_v != 1)
- goto bad;
-
- memcpy(&old_key, &th->session_key, sizeof(old_key));
- ret = ceph_crypto_key_decode(&new_session_key, &dp, dend);
- if (ret)
- goto out;
-
- ceph_decode_copy(&dp, &new_validity, sizeof(new_validity));
- ceph_decode_timespec(&validity, &new_validity);
- new_expires = get_seconds() + validity.tv_sec;
- new_renew_after = new_expires - (validity.tv_sec / 4);
- dout(" expires=%lu renew_after=%lu\n", new_expires,
- new_renew_after);
-
- /* ticket blob for service */
- ceph_decode_8_safe(&p, end, is_enc, bad);
tp = ticket_buf;
- if (is_enc) {
- /* encrypted */
- dout(" encrypted ticket\n");
- dlen = ceph_x_decrypt(&old_key, &p, end, ticket_buf,
- TEMP_TICKET_BUF_LEN);
- if (dlen < 0) {
- ret = dlen;
- goto out;
- }
- dlen = ceph_decode_32(&tp);
- } else {
- /* unencrypted */
- ceph_decode_32_safe(&p, end, dlen, bad);
- ceph_decode_need(&p, end, dlen, bad);
- ceph_decode_copy(&p, ticket_buf, dlen);
- }
- tpend = tp + dlen;
- dout(" ticket blob is %d bytes\n", dlen);
- ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad);
- blob_struct_v = ceph_decode_8(&tp);
- new_secret_id = ceph_decode_64(&tp);
- ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend);
- if (ret)
+ dlen = ceph_decode_32(&tp);
+ } else {
+ /* unencrypted */
+ ceph_decode_32_safe(p, end, dlen, bad);
+ ticket_buf = kmalloc(dlen, GFP_NOFS);
+ if (!ticket_buf) {
+ ret = -ENOMEM;
goto out;
-
- /* all is well, update our ticket */
- ceph_crypto_key_destroy(&th->session_key);
- if (th->ticket_blob)
- ceph_buffer_put(th->ticket_blob);
- th->session_key = new_session_key;
- th->ticket_blob = new_ticket_blob;
- th->validity = new_validity;
- th->secret_id = new_secret_id;
- th->expires = new_expires;
- th->renew_after = new_renew_after;
- dout(" got ticket service %d (%s) secret_id %lld len %d\n",
- type, ceph_entity_type_name(type), th->secret_id,
- (int)th->ticket_blob->vec.iov_len);
- xi->have_keys |= th->service;
+ }
+ tp = ticket_buf;
+ ceph_decode_need(p, end, dlen, bad);
+ ceph_decode_copy(p, ticket_buf, dlen);
}
+ tpend = tp + dlen;
+ dout(" ticket blob is %d bytes\n", dlen);
+ ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad);
+ blob_struct_v = ceph_decode_8(&tp);
+ new_secret_id = ceph_decode_64(&tp);
+ ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend);
+ if (ret)
+ goto out;
- ret = 0;
+ /* all is well, update our ticket */
+ ceph_crypto_key_destroy(&th->session_key);
+ if (th->ticket_blob)
+ ceph_buffer_put(th->ticket_blob);
+ th->session_key = new_session_key;
+ th->ticket_blob = new_ticket_blob;
+ th->validity = new_validity;
+ th->secret_id = new_secret_id;
+ th->expires = new_expires;
+ th->renew_after = new_renew_after;
+ dout(" got ticket service %d (%s) secret_id %lld len %d\n",
+ type, ceph_entity_type_name(type), th->secret_id,
+ (int)th->ticket_blob->vec.iov_len);
+ xi->have_keys |= th->service;
+
out:
kfree(ticket_buf);
-out_dbuf:
kfree(dbuf);
return ret;
@@ -270,6 +255,34 @@
goto out;
}
+static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac,
+ struct ceph_crypto_key *secret,
+ void *buf, void *end)
+{
+ void *p = buf;
+ u8 reply_struct_v;
+ u32 num;
+ int ret;
+
+ ceph_decode_8_safe(&p, end, reply_struct_v, bad);
+ if (reply_struct_v != 1)
+ return -EINVAL;
+
+ ceph_decode_32_safe(&p, end, num, bad);
+ dout("%d tickets\n", num);
+
+ while (num--) {
+ ret = process_one_ticket(ac, secret, &p, end);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+
+bad:
+ return -EINVAL;
+}
+
static int ceph_x_build_authorizer(struct ceph_auth_client *ac,
struct ceph_x_ticket_handler *th,
struct ceph_x_authorizer *au)
@@ -583,13 +596,14 @@
struct ceph_x_ticket_handler *th;
int ret = 0;
struct ceph_x_authorize_reply reply;
+ void *preply = &reply;
void *p = au->reply_buf;
void *end = p + sizeof(au->reply_buf);
th = get_ticket_handler(ac, au->service);
if (IS_ERR(th))
return PTR_ERR(th);
- ret = ceph_x_decrypt(&th->session_key, &p, end, &reply, sizeof(reply));
+ ret = ceph_x_decrypt(&th->session_key, &p, end, &preply, sizeof(reply));
if (ret < 0)
return ret;
if (ret != sizeof(reply))
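ceph_x_decrypt() now takes void **obuf: passing a pointer to NULL makes it allocate a buffer of exactly the ciphertext length, replacing the fixed 256-byte TEMP_TICKET_BUF_LEN guess. A sketch of the calling convention (no real crypto; memcpy stands in for the cipher):

    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    /* Decrypt-like helper: allocate *obuf when the caller passed NULL. */
    static int decrypt(const char *src, size_t len, void **obuf, size_t olen)
    {
        if (*obuf == NULL) {
            *obuf = malloc(len);   /* sized for this payload */
            if (!*obuf)
                return -1;
            olen = len;
        }
        memcpy(*obuf, src, len < olen ? len : olen);
        return (int)len;
    }

    int main(void)
    {
        void *buf = NULL;                 /* callee-sized */
        int n = decrypt("ticket", 6, &buf, 0);
        printf("%d %.6s\n", n, (char *)buf);
        free(buf);

        char fixed[8];                    /* caller-sized, as before */
        void *p = fixed;
        decrypt("reply", 5, &p, sizeof(fixed));
        return 0;
    }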
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index 067d3af..61fcfc3 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -1181,7 +1181,15 @@
if (!m) {
pr_info("alloc_msg unknown type %d\n", type);
*skip = 1;
+ } else if (front_len > m->front_alloc_len) {
+ pr_warning("mon_alloc_msg front %d > prealloc %d (%u#%llu)\n",
+ front_len, m->front_alloc_len,
+ (unsigned int)con->peer_name.type,
+ le64_to_cpu(con->peer_name.num));
+ ceph_msg_put(m);
+ m = ceph_msg_new(type, front_len, GFP_NOFS, false);
}
+
return m;
}
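The mon_client fix handles replies whose front section exceeds the preallocated message: rather than handing back a too-small buffer, it drops the preallocated msg and allocates one sized for front_len. The shape of the fix, reduced to plain C (hypothetical msg type):

    #include <stdlib.h>

    struct msg { size_t front_alloc_len; char *front; };

    static struct msg *get_reply(struct msg *prealloc, size_t front_len)
    {
        if (front_len > prealloc->front_alloc_len) {
            /* Preallocated reply is too small: replace it with a
             * message sized for this reply instead of overflowing. */
            struct msg *m = malloc(sizeof(*m));
            if (!m)
                return NULL;
            m->front = malloc(front_len);
            if (!m->front) {
                free(m);
                return NULL;
            }
            m->front_alloc_len = front_len;
            return m;
        }
        return prealloc;
    }

    int main(void)
    {
        static char buf[16];
        struct msg pre = { sizeof(buf), buf };
        struct msg *m = get_reply(&pre, 64);  /* too big: new msg */
        return m == &pre;  /* 0: a fresh message was allocated */
    }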
diff --git a/net/core/dev.c b/net/core/dev.c
index 3c6a967..e55c546 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -897,23 +897,25 @@
EXPORT_SYMBOL(dev_getfirstbyhwtype);
/**
- * dev_get_by_flags_rcu - find any device with given flags
+ * __dev_get_by_flags - find any device with given flags
* @net: the applicable net namespace
* @if_flags: IFF_* values
* @mask: bitmask of bits in if_flags to check
*
* Search for any interface with the given flags. Returns a pointer to the
* device, or NULL if none is found. Must be called inside
- * rcu_read_lock(), and result refcount is unchanged.
+ * rtnl_lock(), and result refcount is unchanged.
*/
-struct net_device *dev_get_by_flags_rcu(struct net *net, unsigned short if_flags,
- unsigned short mask)
+struct net_device *__dev_get_by_flags(struct net *net, unsigned short if_flags,
+ unsigned short mask)
{
struct net_device *dev, *ret;
+ ASSERT_RTNL();
+
ret = NULL;
- for_each_netdev_rcu(net, dev) {
+ for_each_netdev(net, dev) {
if (((dev->flags ^ if_flags) & mask) == 0) {
ret = dev;
break;
@@ -921,7 +923,7 @@
}
return ret;
}
-EXPORT_SYMBOL(dev_get_by_flags_rcu);
+EXPORT_SYMBOL(__dev_get_by_flags);
/**
* dev_valid_name - check if name is okay for network device
@@ -2177,6 +2179,53 @@
return (struct dev_kfree_skb_cb *)skb->cb;
}
+void netif_schedule_queue(struct netdev_queue *txq)
+{
+ rcu_read_lock();
+ if (!(txq->state & QUEUE_STATE_ANY_XOFF)) {
+ struct Qdisc *q = rcu_dereference(txq->qdisc);
+
+ __netif_schedule(q);
+ }
+ rcu_read_unlock();
+}
+EXPORT_SYMBOL(netif_schedule_queue);
+
+/**
+ * netif_wake_subqueue - allow sending packets on subqueue
+ * @dev: network device
+ * @queue_index: sub queue index
+ *
+ * Resume individual transmit queue of a device with multiple transmit queues.
+ */
+void netif_wake_subqueue(struct net_device *dev, u16 queue_index)
+{
+ struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);
+
+ if (test_and_clear_bit(__QUEUE_STATE_DRV_XOFF, &txq->state)) {
+ struct Qdisc *q;
+
+ rcu_read_lock();
+ q = rcu_dereference(txq->qdisc);
+ __netif_schedule(q);
+ rcu_read_unlock();
+ }
+}
+EXPORT_SYMBOL(netif_wake_subqueue);
+
+void netif_tx_wake_queue(struct netdev_queue *dev_queue)
+{
+ if (test_and_clear_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state)) {
+ struct Qdisc *q;
+
+ rcu_read_lock();
+ q = rcu_dereference(dev_queue->qdisc);
+ __netif_schedule(q);
+ rcu_read_unlock();
+ }
+}
+EXPORT_SYMBOL(netif_tx_wake_queue);
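
These helpers are what a multiqueue driver's TX-completion path ends up
calling; a sketch under assumed names (everything foo_* is hypothetical):

	static void foo_tx_complete(struct foo_priv *priv, u16 qid)
	{
		/* ... reclaim completed descriptors ... */
		if (foo_tx_avail(priv, qid) > FOO_TX_WAKE_THRESH)
			netif_wake_subqueue(priv->netdev, qid);
	}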
+
void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason)
{
unsigned long flags;
@@ -2373,16 +2422,6 @@
rcu_read_lock();
list_for_each_entry_rcu(ptype, &offload_base, list) {
if (ptype->type == type && ptype->callbacks.gso_segment) {
- if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
- int err;
-
- err = ptype->callbacks.gso_send_check(skb);
- segs = ERR_PTR(err);
- if (err || skb_gso_ok(skb, features))
- break;
- __skb_push(skb, (skb->data -
- skb_network_header(skb)));
- }
segs = ptype->callbacks.gso_segment(skb, features);
break;
}
@@ -2645,10 +2684,12 @@
struct sk_buff *segs;
segs = skb_gso_segment(skb, features);
- kfree_skb(skb);
- if (IS_ERR(segs))
+ if (IS_ERR(segs)) {
segs = NULL;
- skb = segs;
+ } else if (segs) {
+ consume_skb(skb);
+ skb = segs;
+ }
} else {
if (skb_needs_linearize(skb, features) &&
__skb_linearize(skb))
@@ -3432,7 +3473,7 @@
skb->tc_verd = SET_TC_RTTL(skb->tc_verd, ttl);
skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);
- q = rxq->qdisc;
+ q = rcu_dereference(rxq->qdisc);
if (q != &noop_qdisc) {
spin_lock(qdisc_lock(q));
if (likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
@@ -3449,7 +3490,7 @@
{
struct netdev_queue *rxq = rcu_dereference(skb->dev->ingress_queue);
- if (!rxq || rxq->qdisc == &noop_qdisc)
+ if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
goto out;
if (*pt_prev) {
@@ -4814,9 +4855,14 @@
sysfs_remove_link(&(dev->dev.kobj), linkname);
}
-#define netdev_adjacent_is_neigh_list(dev, dev_list) \
- (dev_list == &dev->adj_list.upper || \
- dev_list == &dev->adj_list.lower)
+static inline bool netdev_adjacent_is_neigh_list(struct net_device *dev,
+ struct net_device *adj_dev,
+ struct list_head *dev_list)
+{
+ return (dev_list == &dev->adj_list.upper ||
+ dev_list == &dev->adj_list.lower) &&
+ net_eq(dev_net(dev), dev_net(adj_dev));
+}
static int __netdev_adjacent_dev_insert(struct net_device *dev,
struct net_device *adj_dev,
@@ -4846,7 +4892,7 @@
pr_debug("dev_hold for %s, because of link added from %s to %s\n",
adj_dev->name, dev->name, adj_dev->name);
- if (netdev_adjacent_is_neigh_list(dev, dev_list)) {
+ if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list)) {
ret = netdev_adjacent_sysfs_add(dev, adj_dev, dev_list);
if (ret)
goto free_adj;
@@ -4867,7 +4913,7 @@
return 0;
remove_symlinks:
- if (netdev_adjacent_is_neigh_list(dev, dev_list))
+ if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list))
netdev_adjacent_sysfs_del(dev, adj_dev->name, dev_list);
free_adj:
kfree(adj);
@@ -4900,8 +4946,7 @@
if (adj->master)
sysfs_remove_link(&(dev->dev.kobj), "master");
- if (netdev_adjacent_is_neigh_list(dev, dev_list) &&
- net_eq(dev_net(dev),dev_net(adj_dev)))
+ if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list))
netdev_adjacent_sysfs_del(dev, adj_dev->name, dev_list);
list_del_rcu(&adj->list);
@@ -7021,53 +7066,45 @@
return empty;
}
-static int __netdev_printk(const char *level, const struct net_device *dev,
- struct va_format *vaf)
+static void __netdev_printk(const char *level, const struct net_device *dev,
+ struct va_format *vaf)
{
- int r;
-
if (dev && dev->dev.parent) {
- r = dev_printk_emit(level[1] - '0',
- dev->dev.parent,
- "%s %s %s%s: %pV",
- dev_driver_string(dev->dev.parent),
- dev_name(dev->dev.parent),
- netdev_name(dev), netdev_reg_state(dev),
- vaf);
+ dev_printk_emit(level[1] - '0',
+ dev->dev.parent,
+ "%s %s %s%s: %pV",
+ dev_driver_string(dev->dev.parent),
+ dev_name(dev->dev.parent),
+ netdev_name(dev), netdev_reg_state(dev),
+ vaf);
} else if (dev) {
- r = printk("%s%s%s: %pV", level, netdev_name(dev),
- netdev_reg_state(dev), vaf);
+ printk("%s%s%s: %pV",
+ level, netdev_name(dev), netdev_reg_state(dev), vaf);
} else {
- r = printk("%s(NULL net_device): %pV", level, vaf);
+ printk("%s(NULL net_device): %pV", level, vaf);
}
-
- return r;
}
-int netdev_printk(const char *level, const struct net_device *dev,
- const char *format, ...)
+void netdev_printk(const char *level, const struct net_device *dev,
+ const char *format, ...)
{
struct va_format vaf;
va_list args;
- int r;
va_start(args, format);
vaf.fmt = format;
vaf.va = &args;
- r = __netdev_printk(level, dev, &vaf);
+ __netdev_printk(level, dev, &vaf);
va_end(args);
-
- return r;
}
EXPORT_SYMBOL(netdev_printk);
#define define_netdev_printk_level(func, level) \
-int func(const struct net_device *dev, const char *fmt, ...) \
+void func(const struct net_device *dev, const char *fmt, ...) \
{ \
- int r; \
struct va_format vaf; \
va_list args; \
\
@@ -7076,11 +7113,9 @@
vaf.fmt = fmt; \
vaf.va = &args; \
\
- r = __netdev_printk(level, dev, &vaf); \
+ __netdev_printk(level, dev, &vaf); \
\
va_end(args); \
- \
- return r; \
} \
EXPORT_SYMBOL(func);
diff --git a/net/core/filter.c b/net/core/filter.c
index dfc716f..fcd3f67 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -87,30 +87,6 @@
}
EXPORT_SYMBOL(sk_filter);
-/* Helper to find the offset of pkt_type in sk_buff structure. We want
- * to make sure its still a 3bit field starting at a byte boundary;
- * taken from arch/x86/net/bpf_jit_comp.c.
- */
-#ifdef __BIG_ENDIAN_BITFIELD
-#define PKT_TYPE_MAX (7 << 5)
-#else
-#define PKT_TYPE_MAX 7
-#endif
-static unsigned int pkt_type_offset(void)
-{
- struct sk_buff skb_probe = { .pkt_type = ~0, };
- u8 *ct = (u8 *) &skb_probe;
- unsigned int off;
-
- for (off = 0; off < sizeof(struct sk_buff); off++) {
- if (ct[off] == PKT_TYPE_MAX)
- return off;
- }
-
- pr_err_once("Please fix %s, as pkt_type couldn't be found!\n", __func__);
- return -1;
-}
-
static u64 __skb_get_pay_offset(u64 ctx, u64 a, u64 x, u64 r4, u64 r5)
{
return skb_get_poff((struct sk_buff *)(unsigned long) ctx);
@@ -190,11 +166,8 @@
break;
case SKF_AD_OFF + SKF_AD_PKTTYPE:
- *insn = BPF_LDX_MEM(BPF_B, BPF_REG_A, BPF_REG_CTX,
- pkt_type_offset());
- if (insn->off < 0)
- return false;
- insn++;
+ *insn++ = BPF_LDX_MEM(BPF_B, BPF_REG_A, BPF_REG_CTX,
+ PKT_TYPE_OFFSET());
*insn = BPF_ALU32_IMM(BPF_AND, BPF_REG_A, PKT_TYPE_MAX);
#ifdef __BIG_ENDIAN_BITFIELD
insn++;
@@ -1074,7 +1047,7 @@
return -ENOMEM;
if (copy_from_user(prog->insns, fprog->filter, fsize)) {
- kfree(prog);
+ __bpf_prog_free(prog);
return -EFAULT;
}
@@ -1082,7 +1055,7 @@
err = bpf_prog_store_orig_filter(prog, fprog);
if (err) {
- kfree(prog);
+ __bpf_prog_free(prog);
return -ENOMEM;
}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a18dfb0..4be570a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -261,7 +261,6 @@
atomic_t *fclone_ref = (atomic_t *) (child + 1);
kmemcheck_annotate_bitfield(child, flags1);
- kmemcheck_annotate_bitfield(child, flags2);
skb->fclone = SKB_FCLONE_ORIG;
atomic_set(fclone_ref, 1);
@@ -491,32 +490,33 @@
static void skb_release_data(struct sk_buff *skb)
{
- if (!skb->cloned ||
- !atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
- &skb_shinfo(skb)->dataref)) {
- if (skb_shinfo(skb)->nr_frags) {
- int i;
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
- skb_frag_unref(skb, i);
- }
+ struct skb_shared_info *shinfo = skb_shinfo(skb);
+ int i;
- /*
- * If skb buf is from userspace, we need to notify the caller
- * the lower device DMA has done;
- */
- if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
- struct ubuf_info *uarg;
+ if (skb->cloned &&
+ atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
+ &shinfo->dataref))
+ return;
- uarg = skb_shinfo(skb)->destructor_arg;
- if (uarg->callback)
- uarg->callback(uarg, true);
- }
+ for (i = 0; i < shinfo->nr_frags; i++)
+ __skb_frag_unref(&shinfo->frags[i]);
- if (skb_has_frag_list(skb))
- skb_drop_fraglist(skb);
+ /*
+ * If skb buf is from userspace, we need to notify the caller
+ * that the lower device DMA has finished.
+ */
+ if (shinfo->tx_flags & SKBTX_DEV_ZEROCOPY) {
+ struct ubuf_info *uarg;
- skb_free_head(skb);
+ uarg = shinfo->destructor_arg;
+ if (uarg->callback)
+ uarg->callback(uarg, true);
}
+
+ if (shinfo->frag_list)
+ kfree_skb_list(shinfo->frag_list);
+
+ skb_free_head(skb);
}
/*
@@ -674,57 +674,61 @@
}
EXPORT_SYMBOL(consume_skb);
+/* Make sure a field is enclosed inside headers_start/headers_end section */
+#define CHECK_SKB_FIELD(field) \
+ BUILD_BUG_ON(offsetof(struct sk_buff, field) < \
+ offsetof(struct sk_buff, headers_start)); \
+ BUILD_BUG_ON(offsetof(struct sk_buff, field) > \
+ offsetof(struct sk_buff, headers_end)); \
+
static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
{
new->tstamp = old->tstamp;
+ /* We do not copy old->sk */
new->dev = old->dev;
- new->transport_header = old->transport_header;
- new->network_header = old->network_header;
- new->mac_header = old->mac_header;
- new->inner_protocol = old->inner_protocol;
- new->inner_transport_header = old->inner_transport_header;
- new->inner_network_header = old->inner_network_header;
- new->inner_mac_header = old->inner_mac_header;
+ memcpy(new->cb, old->cb, sizeof(old->cb));
skb_dst_copy(new, old);
- skb_copy_hash(new, old);
- new->ooo_okay = old->ooo_okay;
- new->no_fcs = old->no_fcs;
- new->encapsulation = old->encapsulation;
- new->encap_hdr_csum = old->encap_hdr_csum;
- new->csum_valid = old->csum_valid;
- new->csum_complete_sw = old->csum_complete_sw;
#ifdef CONFIG_XFRM
new->sp = secpath_get(old->sp);
#endif
- memcpy(new->cb, old->cb, sizeof(old->cb));
- new->csum = old->csum;
- new->ignore_df = old->ignore_df;
- new->pkt_type = old->pkt_type;
- new->ip_summed = old->ip_summed;
- skb_copy_queue_mapping(new, old);
- new->priority = old->priority;
-#if IS_ENABLED(CONFIG_IP_VS)
- new->ipvs_property = old->ipvs_property;
-#endif
- new->pfmemalloc = old->pfmemalloc;
- new->protocol = old->protocol;
- new->mark = old->mark;
- new->skb_iif = old->skb_iif;
- __nf_copy(new, old);
-#ifdef CONFIG_NET_SCHED
- new->tc_index = old->tc_index;
-#ifdef CONFIG_NET_CLS_ACT
- new->tc_verd = old->tc_verd;
-#endif
-#endif
- new->vlan_proto = old->vlan_proto;
- new->vlan_tci = old->vlan_tci;
+ __nf_copy(new, old, false);
- skb_copy_secmark(new, old);
+ /* Note: this field could be in the headers_start/headers_end section.
+ * It is not there yet because we do not want to create a 16-bit hole.
+ */
+ new->queue_mapping = old->queue_mapping;
+ memcpy(&new->headers_start, &old->headers_start,
+ offsetof(struct sk_buff, headers_end) -
+ offsetof(struct sk_buff, headers_start));
+ CHECK_SKB_FIELD(protocol);
+ CHECK_SKB_FIELD(csum);
+ CHECK_SKB_FIELD(hash);
+ CHECK_SKB_FIELD(priority);
+ CHECK_SKB_FIELD(skb_iif);
+ CHECK_SKB_FIELD(vlan_proto);
+ CHECK_SKB_FIELD(vlan_tci);
+ CHECK_SKB_FIELD(transport_header);
+ CHECK_SKB_FIELD(network_header);
+ CHECK_SKB_FIELD(mac_header);
+ CHECK_SKB_FIELD(inner_protocol);
+ CHECK_SKB_FIELD(inner_transport_header);
+ CHECK_SKB_FIELD(inner_network_header);
+ CHECK_SKB_FIELD(inner_mac_header);
+ CHECK_SKB_FIELD(mark);
+#ifdef CONFIG_NETWORK_SECMARK
+ CHECK_SKB_FIELD(secmark);
+#endif
#ifdef CONFIG_NET_RX_BUSY_POLL
- new->napi_id = old->napi_id;
+ CHECK_SKB_FIELD(napi_id);
#endif
+#ifdef CONFIG_NET_SCHED
+ CHECK_SKB_FIELD(tc_index);
+#ifdef CONFIG_NET_CLS_ACT
+ CHECK_SKB_FIELD(tc_verd);
+#endif
+#endif
+
}
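
The bulk memcpy() is only safe because every copied field lies between
headers_start and headers_end; CHECK_SKB_FIELD turns a misplaced field
into a build failure. The same idiom in isolation (hypothetical struct,
for illustration only):

	struct example {
		int not_copied;
		__u32 start[0];
		int copied_a;		/* inside the window: OK */
		int copied_b;
		__u32 end[0];
		int also_not_copied;
	};

	#define CHECK_EXAMPLE_FIELD(f)					\
		BUILD_BUG_ON(offsetof(struct example, f) <		\
			     offsetof(struct example, start));		\
		BUILD_BUG_ON(offsetof(struct example, f) >		\
			     offsetof(struct example, end))

	/* memcpy(&new->start, &old->start,
	 *        offsetof(struct example, end) -
	 *        offsetof(struct example, start));
	 */
	CHECK_EXAMPLE_FIELD(copied_a);	/* compiles */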
/*
@@ -875,7 +879,6 @@
return NULL;
kmemcheck_annotate_bitfield(n, flags1);
- kmemcheck_annotate_bitfield(n, flags2);
n->fclone = SKB_FCLONE_UNAVAILABLE;
}
@@ -3179,7 +3182,7 @@
skb_shinfo(nskb)->frag_list = p;
skb_shinfo(nskb)->gso_size = pinfo->gso_size;
pinfo->gso_size = 0;
- skb_header_release(p);
+ __skb_header_release(p);
NAPI_GRO_CB(nskb)->last = p;
nskb->data_len += p->len;
@@ -3211,7 +3214,7 @@
else
NAPI_GRO_CB(p)->last->next = skb;
NAPI_GRO_CB(p)->last = skb;
- skb_header_release(skb);
+ __skb_header_release(skb);
lp = p;
done:
@@ -3511,6 +3514,19 @@
}
EXPORT_SYMBOL(sock_dequeue_err_skb);
+/**
+ * skb_clone_sk - create clone of skb, and take reference to socket
+ * @skb: the skb to clone
+ *
+ * This function creates a clone of a buffer that holds a reference on
+ * sk_refcnt. Buffers created via this function are meant to be
+ * returned using sock_queue_err_skb, or freed via kfree_skb.
+ *
+ * When passing buffers allocated with this function to sock_queue_err_skb
+ * it is necessary to wrap the call with sock_hold/sock_put in order to
+ * prevent the socket from being released prior to being enqueued on
+ * the sk_error_queue.
+ */
struct sk_buff *skb_clone_sk(struct sk_buff *skb)
{
struct sock *sk = skb->sk;
@@ -3615,9 +3631,14 @@
serr->ee.ee_errno = ENOMSG;
serr->ee.ee_origin = SO_EE_ORIGIN_TXSTATUS;
+ /* take a reference to prevent skb_orphan() from freeing the socket */
+ sock_hold(sk);
+
err = sock_queue_err_skb(sk, skb);
if (err)
kfree_skb(skb);
+
+ sock_put(sk);
}
EXPORT_SYMBOL_GPL(skb_complete_wifi_ack);
@@ -3918,7 +3939,8 @@
return false;
if (len <= skb_tailroom(to)) {
- BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
+ if (len)
+ BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
*delta_truesize = 0;
return true;
}
@@ -4083,3 +4105,81 @@
return NULL;
}
EXPORT_SYMBOL(skb_vlan_untag);
+
+/**
+ * alloc_skb_with_frags - allocate skb with page frags
+ *
+ * @header_len: size of linear part
+ * @data_len: needed length in frags
+ * @max_page_order: max page order desired
+ * @errcode: pointer to error code if any
+ * @gfp_mask: allocation mask
+ *
+ * This can be used to allocate a paged skb, given a maximal order for frags.
+ */
+struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
+ unsigned long data_len,
+ int max_page_order,
+ int *errcode,
+ gfp_t gfp_mask)
+{
+ int npages = (data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+ unsigned long chunk;
+ struct sk_buff *skb;
+ struct page *page;
+ gfp_t gfp_head;
+ int i;
+
+ *errcode = -EMSGSIZE;
+ /* Note: this test could be relaxed if we succeed in allocating
+ * high order pages...
+ */
+ if (npages > MAX_SKB_FRAGS)
+ return NULL;
+
+ gfp_head = gfp_mask;
+ if (gfp_head & __GFP_WAIT)
+ gfp_head |= __GFP_REPEAT;
+
+ *errcode = -ENOBUFS;
+ skb = alloc_skb(header_len, gfp_head);
+ if (!skb)
+ return NULL;
+
+ skb->truesize += npages << PAGE_SHIFT;
+
+ for (i = 0; npages > 0; i++) {
+ int order = max_page_order;
+
+ while (order) {
+ if (npages >= 1 << order) {
+ page = alloc_pages(gfp_mask |
+ __GFP_COMP |
+ __GFP_NOWARN |
+ __GFP_NORETRY,
+ order);
+ if (page)
+ goto fill_page;
+ /* Do not retry other high order allocations */
+ order = 1;
+ max_page_order = 0;
+ }
+ order--;
+ }
+ page = alloc_page(gfp_mask);
+ if (!page)
+ goto failure;
+fill_page:
+ chunk = min_t(unsigned long, data_len,
+ PAGE_SIZE << order);
+ skb_fill_page_desc(skb, i, page, 0, chunk);
+ data_len -= chunk;
+ npages -= 1 << order;
+ }
+ return skb;
+
+failure:
+ kfree_skb(skb);
+ return NULL;
+}
+EXPORT_SYMBOL(alloc_skb_with_frags);
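
A sketch of how a caller might use the new helper (sizes and order are
hypothetical; see the sock_alloc_send_pskb() conversion below for the
in-tree user):

	int err;
	struct sk_buff *skb;

	/* 128 bytes of linear headroom, 32 KB spread over page frags */
	skb = alloc_skb_with_frags(128, 32 * 1024, 3, &err, GFP_KERNEL);
	if (!skb)
		return err;
	/* ... fill the frags, hand off the skb ... */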
diff --git a/net/core/sock.c b/net/core/sock.c
index 6f436b5..e5ad7d3 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1762,21 +1762,12 @@
unsigned long data_len, int noblock,
int *errcode, int max_page_order)
{
- struct sk_buff *skb = NULL;
- unsigned long chunk;
- gfp_t gfp_mask;
+ struct sk_buff *skb;
long timeo;
int err;
- int npages = (data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
- struct page *page;
- int i;
-
- err = -EMSGSIZE;
- if (npages > MAX_SKB_FRAGS)
- goto failure;
timeo = sock_sndtimeo(sk, noblock);
- while (!skb) {
+ for (;;) {
err = sock_error(sk);
if (err != 0)
goto failure;
@@ -1785,66 +1776,27 @@
if (sk->sk_shutdown & SEND_SHUTDOWN)
goto failure;
- if (atomic_read(&sk->sk_wmem_alloc) >= sk->sk_sndbuf) {
- set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);
- set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
- err = -EAGAIN;
- if (!timeo)
- goto failure;
- if (signal_pending(current))
- goto interrupted;
- timeo = sock_wait_for_wmem(sk, timeo);
- continue;
- }
+ if (sk_wmem_alloc_get(sk) < sk->sk_sndbuf)
+ break;
- err = -ENOBUFS;
- gfp_mask = sk->sk_allocation;
- if (gfp_mask & __GFP_WAIT)
- gfp_mask |= __GFP_REPEAT;
-
- skb = alloc_skb(header_len, gfp_mask);
- if (!skb)
+ set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);
+ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ err = -EAGAIN;
+ if (!timeo)
goto failure;
-
- skb->truesize += data_len;
-
- for (i = 0; npages > 0; i++) {
- int order = max_page_order;
-
- while (order) {
- if (npages >= 1 << order) {
- page = alloc_pages(sk->sk_allocation |
- __GFP_COMP |
- __GFP_NOWARN |
- __GFP_NORETRY,
- order);
- if (page)
- goto fill_page;
- /* Do not retry other high order allocations */
- order = 1;
- max_page_order = 0;
- }
- order--;
- }
- page = alloc_page(sk->sk_allocation);
- if (!page)
- goto failure;
-fill_page:
- chunk = min_t(unsigned long, data_len,
- PAGE_SIZE << order);
- skb_fill_page_desc(skb, i, page, 0, chunk);
- data_len -= chunk;
- npages -= 1 << order;
- }
+ if (signal_pending(current))
+ goto interrupted;
+ timeo = sock_wait_for_wmem(sk, timeo);
}
-
- skb_set_owner_w(skb, sk);
+ skb = alloc_skb_with_frags(header_len, data_len, max_page_order,
+ errcode, sk->sk_allocation);
+ if (skb)
+ skb_set_owner_w(skb, sk);
return skb;
interrupted:
err = sock_intr_errno(timeo);
failure:
- kfree_skb(skb);
*errcode = err;
return NULL;
}
@@ -1864,7 +1816,7 @@
* skb_page_frag_refill - check that a page_frag contains enough room
* @sz: minimum size of the fragment we want to get
* @pfrag: pointer to page_frag
- * @prio: priority for memory allocation
+ * @gfp: priority for memory allocation
*
* Note: While this allocator tries to use high order pages, there is
* no guarantee that allocations succeed. Therefore, @sz MUST be
diff --git a/net/core/utils.c b/net/core/utils.c
index eed3433..efc76dd 100644
--- a/net/core/utils.c
+++ b/net/core/utils.c
@@ -306,16 +306,14 @@
void inet_proto_csum_replace4(__sum16 *sum, struct sk_buff *skb,
__be32 from, __be32 to, int pseudohdr)
{
- __be32 diff[] = { ~from, to };
if (skb->ip_summed != CHECKSUM_PARTIAL) {
- *sum = csum_fold(csum_partial(diff, sizeof(diff),
- ~csum_unfold(*sum)));
+ *sum = csum_fold(csum_add(csum_sub(~csum_unfold(*sum), from),
+ to));
if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr)
- skb->csum = ~csum_partial(diff, sizeof(diff),
- ~skb->csum);
+ skb->csum = ~csum_add(csum_sub(~(skb->csum), from), to);
} else if (pseudohdr)
- *sum = ~csum_fold(csum_partial(diff, sizeof(diff),
- csum_unfold(*sum)));
+ *sum = ~csum_fold(csum_add(csum_sub(csum_unfold(*sum), from),
+ to));
}
EXPORT_SYMBOL(inet_proto_csum_replace4);
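
The rewrite is the classic RFC 1624 incremental update: for a checksum
HC over data containing word m, replacing m with m' gives
HC' = ~(~HC + ~m + m'), which is exactly what
csum_fold(csum_add(csum_sub(~csum_unfold(*sum), from), to)) computes,
without building the temporary two-word diff[] array on the stack.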
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index 04cb17d..ad2acfe 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -404,7 +404,7 @@
ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
- if (ipv6_opt_accepted(sk, skb) ||
+ if (ipv6_opt_accepted(sk, skb, IP6CB(skb)) ||
np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {
atomic_inc(&skb->users);
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 61f145c..6905f2d 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -10,7 +10,6 @@
*/
#include <linux/list.h>
-#include <linux/netdevice.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/module.h>
@@ -44,7 +43,7 @@
EXPORT_SYMBOL_GPL(unregister_switch_driver);
static struct dsa_switch_driver *
-dsa_switch_probe(struct mii_bus *bus, int sw_addr, char **_name)
+dsa_switch_probe(struct device *host_dev, int sw_addr, char **_name)
{
struct dsa_switch_driver *ret;
struct list_head *list;
@@ -59,7 +58,7 @@
drv = list_entry(list, struct dsa_switch_driver, list);
- name = drv->probe(bus, sw_addr);
+ name = drv->probe(host_dev, sw_addr);
if (name != NULL) {
ret = drv;
break;
@@ -76,7 +75,7 @@
/* basic switch operations **************************************************/
static struct dsa_switch *
dsa_switch_setup(struct dsa_switch_tree *dst, int index,
- struct device *parent, struct mii_bus *bus)
+ struct device *parent, struct device *host_dev)
{
struct dsa_chip_data *pd = dst->pd->chip + index;
struct dsa_switch_driver *drv;
@@ -89,7 +88,7 @@
/*
* Probe for switch model.
*/
- drv = dsa_switch_probe(bus, pd->sw_addr, &name);
+ drv = dsa_switch_probe(host_dev, pd->sw_addr, &name);
if (drv == NULL) {
printk(KERN_ERR "%s[%d]: could not detect attached switch\n",
dst->master_netdev->name, index);
@@ -110,8 +109,7 @@
ds->index = index;
ds->pd = dst->pd->chip + index;
ds->drv = drv;
- ds->master_mii_bus = bus;
-
+ ds->master_dev = host_dev;
/*
* Validate supplied switch configuration.
@@ -154,9 +152,34 @@
* tagging protocol to the preferred tagging format of this
* switch.
*/
- if (ds->dst->cpu_switch == index)
- ds->dst->tag_protocol = drv->tag_protocol;
+ if (dst->cpu_switch == index) {
+ switch (drv->tag_protocol) {
+#ifdef CONFIG_NET_DSA_TAG_DSA
+ case DSA_TAG_PROTO_DSA:
+ dst->rcv = dsa_netdev_ops.rcv;
+ break;
+#endif
+#ifdef CONFIG_NET_DSA_TAG_EDSA
+ case DSA_TAG_PROTO_EDSA:
+ dst->rcv = edsa_netdev_ops.rcv;
+ break;
+#endif
+#ifdef CONFIG_NET_DSA_TAG_TRAILER
+ case DSA_TAG_PROTO_TRAILER:
+ dst->rcv = trailer_netdev_ops.rcv;
+ break;
+#endif
+#ifdef CONFIG_NET_DSA_TAG_BRCM
+ case DSA_TAG_PROTO_BRCM:
+ dst->rcv = brcm_netdev_ops.rcv;
+ break;
+#endif
+ default:
+ break;
+ }
+ dst->tag_protocol = drv->tag_protocol;
+ }
/*
* Do basic register setup.
@@ -215,6 +238,49 @@
{
}
+static int dsa_switch_suspend(struct dsa_switch *ds)
+{
+ int i, ret = 0;
+
+ /* Suspend slave network devices */
+ for (i = 0; i < DSA_MAX_PORTS; i++) {
+ if (!(ds->phys_port_mask & (1 << i)))
+ continue;
+
+ ret = dsa_slave_suspend(ds->ports[i]);
+ if (ret)
+ return ret;
+ }
+
+ if (ds->drv->suspend)
+ ret = ds->drv->suspend(ds);
+
+ return ret;
+}
+
+static int dsa_switch_resume(struct dsa_switch *ds)
+{
+ int i, ret = 0;
+
+ if (ds->drv->resume)
+ ret = ds->drv->resume(ds);
+
+ if (ret)
+ return ret;
+
+ /* Resume slave network devices */
+ for (i = 0; i < DSA_MAX_PORTS; i++) {
+ if (!(ds->phys_port_mask & (1 << i)))
+ continue;
+
+ ret = dsa_slave_resume(ds->ports[i]);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
/* link polling *************************************************************/
static void dsa_link_poll_work(struct work_struct *ugly)
@@ -261,7 +327,7 @@
return device_find_child(parent, class, dev_is_class);
}
-static struct mii_bus *dev_to_mii_bus(struct device *dev)
+struct mii_bus *dsa_host_dev_to_mii_bus(struct device *dev)
{
struct device *d;
@@ -277,6 +343,7 @@
return NULL;
}
+EXPORT_SYMBOL_GPL(dsa_host_dev_to_mii_bus);
static struct net_device *dev_to_net_device(struct device *dev)
{
@@ -416,7 +483,7 @@
cd = &pd->chip[chip_index];
cd->of_node = child;
- cd->mii_bus = &mdio_bus->dev;
+ cd->host_dev = &mdio_bus->dev;
sw_addr = of_get_property(child, "reg", NULL);
if (!sw_addr)
@@ -542,17 +609,9 @@
dst->cpu_port = -1;
for (i = 0; i < pd->nr_chips; i++) {
- struct mii_bus *bus;
struct dsa_switch *ds;
- bus = dev_to_mii_bus(pd->chip[i].mii_bus);
- if (bus == NULL) {
- printk(KERN_ERR "%s[%d]: no mii bus found for "
- "dsa switch\n", dev->name, i);
- continue;
- }
-
- ds = dsa_switch_setup(dst, i, &pdev->dev, bus);
+ ds = dsa_switch_setup(dst, i, &pdev->dev, pd->chip[i].host_dev);
if (IS_ERR(ds)) {
printk(KERN_ERR "%s[%d]: couldn't create dsa switch "
"instance (error %ld)\n", dev->name, i,
@@ -626,7 +685,7 @@
return 0;
}
- return dst->ops->rcv(skb, dev, pt, orig_dev);
+ return dst->rcv(skb, dev, pt, orig_dev);
}
static struct packet_type dsa_pack_type __read_mostly = {
@@ -634,6 +693,42 @@
.func = dsa_switch_rcv,
};
+#ifdef CONFIG_PM_SLEEP
+static int dsa_suspend(struct device *d)
+{
+ struct platform_device *pdev = to_platform_device(d);
+ struct dsa_switch_tree *dst = platform_get_drvdata(pdev);
+ int i, ret = 0;
+
+ for (i = 0; i < dst->pd->nr_chips; i++) {
+ struct dsa_switch *ds = dst->ds[i];
+
+ if (ds != NULL)
+ ret = dsa_switch_suspend(ds);
+ }
+
+ return ret;
+}
+
+static int dsa_resume(struct device *d)
+{
+ struct platform_device *pdev = to_platform_device(d);
+ struct dsa_switch_tree *dst = platform_get_drvdata(pdev);
+ int i, ret = 0;
+
+ for (i = 0; i < dst->pd->nr_chips; i++) {
+ struct dsa_switch *ds = dst->ds[i];
+
+ if (ds != NULL)
+ ret = dsa_switch_resume(ds);
+ }
+
+ return ret;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(dsa_pm_ops, dsa_suspend, dsa_resume);
+
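Note that SIMPLE_DEV_PM_OPS only wires up dsa_suspend/dsa_resume when
CONFIG_PM_SLEEP is enabled, which is why their definitions can sit under
the #ifdef CONFIG_PM_SLEEP guard above without breaking the build.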
static const struct of_device_id dsa_of_match_table[] = {
{ .compatible = "brcm,bcm7445-switch-v4.0" },
{ .compatible = "marvell,dsa", },
@@ -649,6 +744,7 @@
.name = "dsa",
.owner = THIS_MODULE,
.of_match_table = dsa_of_match_table,
+ .pm = &dsa_pm_ops,
},
};
diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 98afed4..dc9756d 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -12,7 +12,13 @@
#define __DSA_PRIV_H
#include <linux/phy.h>
-#include <net/dsa.h>
+#include <linux/netdevice.h>
+
+struct dsa_device_ops {
+ netdev_tx_t (*xmit)(struct sk_buff *skb, struct net_device *dev);
+ int (*rcv)(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type *pt, struct net_device *orig_dev);
+};
struct dsa_slave_priv {
/*
@@ -20,6 +26,8 @@
* switch port.
*/
struct net_device *dev;
+ netdev_tx_t (*xmit)(struct sk_buff *skb,
+ struct net_device *dev);
/*
* Which switch this port is a part of, and the port index
@@ -43,10 +51,13 @@
extern char dsa_driver_version[];
/* slave.c */
+extern const struct dsa_device_ops notag_netdev_ops;
void dsa_slave_mii_bus_init(struct dsa_switch *ds);
struct net_device *dsa_slave_create(struct dsa_switch *ds,
struct device *parent,
int port, char *name);
+int dsa_slave_suspend(struct net_device *slave_dev);
+int dsa_slave_resume(struct net_device *slave_dev);
/* tag_dsa.c */
extern const struct dsa_device_ops dsa_netdev_ops;
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 7333a4a..36953c8 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -9,7 +9,6 @@
*/
#include <linux/list.h>
-#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/phy.h>
#include <linux/of_net.h>
@@ -45,7 +44,7 @@
ds->slave_mii_bus->write = dsa_slave_phy_write;
snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d:%.2x",
ds->index, ds->pd->sw_addr);
- ds->slave_mii_bus->parent = &ds->master_mii_bus->dev;
+ ds->slave_mii_bus->parent = ds->master_dev;
}
@@ -63,6 +62,7 @@
{
struct dsa_slave_priv *p = netdev_priv(dev);
struct net_device *master = p->parent->dst->master_netdev;
+ struct dsa_switch *ds = p->parent;
int err;
if (!(master->flags & IFF_UP))
@@ -85,8 +85,20 @@
goto clear_allmulti;
}
+ if (ds->drv->port_enable) {
+ err = ds->drv->port_enable(ds, p->port, p->phy);
+ if (err)
+ goto clear_promisc;
+ }
+
+ if (p->phy)
+ phy_start(p->phy);
+
return 0;
+clear_promisc:
+ if (dev->flags & IFF_PROMISC)
+ dev_set_promiscuity(master, 0);
clear_allmulti:
if (dev->flags & IFF_ALLMULTI)
dev_set_allmulti(master, -1);
@@ -101,6 +113,10 @@
{
struct dsa_slave_priv *p = netdev_priv(dev);
struct net_device *master = p->parent->dst->master_netdev;
+ struct dsa_switch *ds = p->parent;
+
+ if (p->phy)
+ phy_stop(p->phy);
dev_mc_unsync(master, dev);
dev_uc_unsync(master, dev);
@@ -112,6 +128,9 @@
if (!ether_addr_equal(dev->dev_addr, master->dev_addr))
dev_uc_del(master, dev->dev_addr);
+ if (ds->drv->port_disable)
+ ds->drv->port_disable(ds, p->port, p->phy);
+
return 0;
}
@@ -176,9 +195,8 @@
static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct dsa_slave_priv *p = netdev_priv(dev);
- struct dsa_switch_tree *dst = p->parent->dst;
- return dst->ops->xmit(skb, dev);
+ return p->xmit(skb, dev);
}
static netdev_tx_t dsa_slave_notag_xmit(struct sk_buff *skb,
@@ -303,6 +321,65 @@
return -EOPNOTSUPP;
}
+static void dsa_slave_get_wol(struct net_device *dev, struct ethtool_wolinfo *w)
+{
+ struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_switch *ds = p->parent;
+
+ if (ds->drv->get_wol)
+ ds->drv->get_wol(ds, p->port, w);
+}
+
+static int dsa_slave_set_wol(struct net_device *dev, struct ethtool_wolinfo *w)
+{
+ struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_switch *ds = p->parent;
+ int ret = -EOPNOTSUPP;
+
+ if (ds->drv->set_wol)
+ ret = ds->drv->set_wol(ds, p->port, w);
+
+ return ret;
+}
+
+static int dsa_slave_set_eee(struct net_device *dev, struct ethtool_eee *e)
+{
+ struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_switch *ds = p->parent;
+ int ret;
+
+ if (!ds->drv->set_eee)
+ return -EOPNOTSUPP;
+
+ ret = ds->drv->set_eee(ds, p->port, p->phy, e);
+ if (ret)
+ return ret;
+
+ if (p->phy)
+ ret = phy_ethtool_set_eee(p->phy, e);
+
+ return ret;
+}
+
+static int dsa_slave_get_eee(struct net_device *dev, struct ethtool_eee *e)
+{
+ struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_switch *ds = p->parent;
+ int ret;
+
+ if (!ds->drv->get_eee)
+ return -EOPNOTSUPP;
+
+ ret = ds->drv->get_eee(ds, p->port, e);
+ if (ret)
+ return ret;
+
+ if (p->phy)
+ ret = phy_ethtool_get_eee(p->phy, e);
+
+ return ret;
+}
+
static const struct ethtool_ops dsa_slave_ethtool_ops = {
.get_settings = dsa_slave_get_settings,
.set_settings = dsa_slave_set_settings,
@@ -312,6 +389,10 @@
.get_strings = dsa_slave_get_strings,
.get_ethtool_stats = dsa_slave_get_ethtool_stats,
.get_sset_count = dsa_slave_get_sset_count,
+ .set_wol = dsa_slave_set_wol,
+ .get_wol = dsa_slave_get_wol,
+ .set_eee = dsa_slave_set_eee,
+ .get_eee = dsa_slave_get_eee,
};
static const struct net_device_ops dsa_slave_netdev_ops = {
@@ -325,11 +406,6 @@
.ndo_do_ioctl = dsa_slave_ioctl,
};
-static const struct dsa_device_ops notag_netdev_ops = {
- .xmit = dsa_slave_notag_xmit,
- .rcv = NULL,
-};
-
static void dsa_slave_adjust_link(struct net_device *dev)
{
struct dsa_slave_priv *p = netdev_priv(dev);
@@ -378,6 +454,7 @@
struct dsa_chip_data *cd = ds->pd;
struct device_node *phy_dn, *port_dn;
bool phy_is_fixed = false;
+ u32 phy_flags = 0;
int ret;
port_dn = cd->port_dn[p->port];
@@ -397,9 +474,12 @@
phy_dn = port_dn;
}
+ if (ds->drv->get_phy_flags)
+ phy_flags = ds->drv->get_phy_flags(ds, p->port);
+
if (phy_dn)
p->phy = of_phy_connect(slave_dev, phy_dn,
- dsa_slave_adjust_link, 0,
+ dsa_slave_adjust_link, phy_flags,
p->phy_interface);
if (p->phy && phy_is_fixed)
@@ -415,6 +495,37 @@
p->phy->addr, p->phy->drv->name);
}
+int dsa_slave_suspend(struct net_device *slave_dev)
+{
+ struct dsa_slave_priv *p = netdev_priv(slave_dev);
+
+ netif_device_detach(slave_dev);
+
+ if (p->phy) {
+ phy_stop(p->phy);
+ p->old_pause = -1;
+ p->old_link = -1;
+ p->old_duplex = -1;
+ phy_suspend(p->phy);
+ }
+
+ return 0;
+}
+
+int dsa_slave_resume(struct net_device *slave_dev)
+{
+ struct dsa_slave_priv *p = netdev_priv(slave_dev);
+
+ netif_device_attach(slave_dev);
+
+ if (p->phy) {
+ phy_resume(p->phy);
+ phy_start(p->phy);
+ }
+
+ return 0;
+}
+
struct net_device *
dsa_slave_create(struct dsa_switch *ds, struct device *parent,
int port, char *name)
@@ -435,32 +546,6 @@
slave_dev->tx_queue_len = 0;
slave_dev->netdev_ops = &dsa_slave_netdev_ops;
- switch (ds->dst->tag_protocol) {
-#ifdef CONFIG_NET_DSA_TAG_DSA
- case htons(ETH_P_DSA):
- ds->dst->ops = &dsa_netdev_ops;
- break;
-#endif
-#ifdef CONFIG_NET_DSA_TAG_EDSA
- case htons(ETH_P_EDSA):
- ds->dst->ops = &edsa_netdev_ops;
- break;
-#endif
-#ifdef CONFIG_NET_DSA_TAG_TRAILER
- case htons(ETH_P_TRAILER):
- ds->dst->ops = &trailer_netdev_ops;
- break;
-#endif
-#ifdef CONFIG_NET_DSA_TAG_BRCM
- case htons(ETH_P_BRCMTAG):
- ds->dst->ops = &brcm_netdev_ops;
- break;
-#endif
- default:
- ds->dst->ops = ¬ag_netdev_ops;
- break;
- }
-
SET_NETDEV_DEV(slave_dev, parent);
slave_dev->dev.of_node = ds->pd->port_dn[port];
slave_dev->vlan_features = master->vlan_features;
@@ -470,6 +555,32 @@
p->parent = ds;
p->port = port;
+ switch (ds->dst->tag_protocol) {
+#ifdef CONFIG_NET_DSA_TAG_DSA
+ case DSA_TAG_PROTO_DSA:
+ p->xmit = dsa_netdev_ops.xmit;
+ break;
+#endif
+#ifdef CONFIG_NET_DSA_TAG_EDSA
+ case DSA_TAG_PROTO_EDSA:
+ p->xmit = edsa_netdev_ops.xmit;
+ break;
+#endif
+#ifdef CONFIG_NET_DSA_TAG_TRAILER
+ case DSA_TAG_PROTO_TRAILER:
+ p->xmit = trailer_netdev_ops.xmit;
+ break;
+#endif
+#ifdef CONFIG_NET_DSA_TAG_BRCM
+ case DSA_TAG_PROTO_BRCM:
+ p->xmit = brcm_netdev_ops.xmit;
+ break;
+#endif
+ default:
+ p->xmit = dsa_slave_notag_xmit;
+ break;
+ }
+
p->old_pause = -1;
p->old_link = -1;
p->old_duplex = -1;
@@ -487,6 +598,9 @@
netif_carrier_off(slave_dev);
if (p->phy != NULL) {
+ if (ds->drv->get_phy_flags)
+ p->phy->dev_flags |= ds->drv->get_phy_flags(ds, port);
+
phy_attach(slave_dev, dev_name(&p->phy->dev),
PHY_INTERFACE_MODE_GMII);
diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
index e0b759e..83d3572 100644
--- a/net/dsa/tag_brcm.c
+++ b/net/dsa/tag_brcm.c
@@ -11,7 +11,6 @@
#include <linux/etherdevice.h>
#include <linux/list.h>
-#include <linux/netdevice.h>
#include <linux/slab.h>
#include "dsa_priv.h"
@@ -91,7 +90,6 @@
/* Queue the SKB for transmission on the parent interface, but
* do not modify its EtherType
*/
- skb->protocol = htons(ETH_P_BRCMTAG);
skb->dev = p->parent->dst->master_netdev;
dev_queue_xmit(skb);
diff --git a/net/dsa/tag_dsa.c b/net/dsa/tag_dsa.c
index d7dbc5b..ce90c8b 100644
--- a/net/dsa/tag_dsa.c
+++ b/net/dsa/tag_dsa.c
@@ -10,7 +10,6 @@
#include <linux/etherdevice.h>
#include <linux/list.h>
-#include <linux/netdevice.h>
#include <linux/slab.h>
#include "dsa_priv.h"
diff --git a/net/dsa/tag_edsa.c b/net/dsa/tag_edsa.c
index 6b30abe..94fcce7 100644
--- a/net/dsa/tag_edsa.c
+++ b/net/dsa/tag_edsa.c
@@ -10,7 +10,6 @@
#include <linux/etherdevice.h>
#include <linux/list.h>
-#include <linux/netdevice.h>
#include <linux/slab.h>
#include "dsa_priv.h"
diff --git a/net/dsa/tag_trailer.c b/net/dsa/tag_trailer.c
index 5fe9444..115fdca 100644
--- a/net/dsa/tag_trailer.c
+++ b/net/dsa/tag_trailer.c
@@ -10,7 +10,6 @@
#include <linux/etherdevice.h>
#include <linux/list.h>
-#include <linux/netdevice.h>
#include <linux/slab.h>
#include "dsa_priv.h"
diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig
index dbc10d8..69fb378 100644
--- a/net/ipv4/Kconfig
+++ b/net/ipv4/Kconfig
@@ -311,6 +311,16 @@
tristate
default n
+config NET_FOU
+ tristate "IP: Foo (IP protocols) over UDP"
+ select XFRM
+ select NET_UDP_TUNNEL
+ ---help---
+ Foo over UDP allows any IP protocol to be directly encapsulated
+ in UDP, including tunnels (IPIP, GRE, SIT). By encapsulating in
+ UDP, network mechanisms and optimizations for UDP (such as ECMP
+ and RSS) can be leveraged to provide better service.
+
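Once the module is loaded, a receive port is configured through the FOU
genetlink interface added in net/ipv4/fou.c below; with a FOU-aware
iproute2 this is roughly "ip fou add port 5555 ipproto 4" (IPIP over UDP
port 5555 -- command shown only for illustration).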
config INET_AH
tristate "IP: AH transformation"
select XFRM_ALGO
@@ -560,6 +570,27 @@
For further details see:
http://www.ews.uiuc.edu/~shaoliu/tcpillinois/index.html
+config TCP_CONG_DCTCP
+ tristate "DataCenter TCP (DCTCP)"
+ default n
+ ---help---
+ DCTCP leverages Explicit Congestion Notification (ECN) in the network to
+ provide multi-bit feedback to the end hosts. It is designed to provide:
+
+ - High burst tolerance (incast due to partition/aggregate),
+ - Low latency (short flows, queries),
+ - High throughput (continuous data updates, large file transfers) with
+ commodity, shallow-buffered switches.
+
+ All switches in the data center network running DCTCP must support
+ ECN marking and be configured for marking when reaching defined switch
+ buffer thresholds. The default ECN marking threshold heuristic for
+ DCTCP on switches is 20 packets (30KB) at 1Gbps, and 65 packets
+ (~100KB) at 10Gbps, but might need further careful tweaking.
+
+ For further details see:
+ http://simula.stanford.edu/~alizade/Site/DCTCP_files/dctcp-final.pdf
+
choice
prompt "Default TCP congestion control"
default DEFAULT_CUBIC
@@ -588,9 +619,11 @@
config DEFAULT_WESTWOOD
bool "Westwood" if TCP_CONG_WESTWOOD=y
+ config DEFAULT_DCTCP
+ bool "DCTCP" if TCP_CONG_DCTCP=y
+
config DEFAULT_RENO
bool "Reno"
-
endchoice
endif
@@ -610,6 +643,7 @@
default "westwood" if DEFAULT_WESTWOOD
default "veno" if DEFAULT_VENO
default "reno" if DEFAULT_RENO
+ default "dctcp" if DEFAULT_DCTCP
default "cubic"
config TCP_MD5SIG
diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile
index 8ee1cd4..d810578 100644
--- a/net/ipv4/Makefile
+++ b/net/ipv4/Makefile
@@ -20,6 +20,7 @@
obj-$(CONFIG_IP_MROUTE) += ipmr.o
obj-$(CONFIG_NET_IPIP) += ipip.o
gre-y := gre_demux.o
+obj-$(CONFIG_NET_FOU) += fou.o
obj-$(CONFIG_NET_IPGRE_DEMUX) += gre.o
obj-$(CONFIG_NET_IPGRE) += ip_gre.o
obj-$(CONFIG_NET_UDP_TUNNEL) += udp_tunnel.o
@@ -42,6 +43,7 @@
obj-$(CONFIG_NET_TCPPROBE) += tcp_probe.o
obj-$(CONFIG_TCP_CONG_BIC) += tcp_bic.o
obj-$(CONFIG_TCP_CONG_CUBIC) += tcp_cubic.o
+obj-$(CONFIG_TCP_CONG_DCTCP) += tcp_dctcp.o
obj-$(CONFIG_TCP_CONG_WESTWOOD) += tcp_westwood.o
obj-$(CONFIG_TCP_CONG_HSTCP) += tcp_highspeed.o
obj-$(CONFIG_TCP_CONG_HYBLA) += tcp_hybla.o
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 72011cc..28e589c 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1197,40 +1197,6 @@
}
EXPORT_SYMBOL(inet_sk_rebuild_header);
-static int inet_gso_send_check(struct sk_buff *skb)
-{
- const struct net_offload *ops;
- const struct iphdr *iph;
- int proto;
- int ihl;
- int err = -EINVAL;
-
- if (unlikely(!pskb_may_pull(skb, sizeof(*iph))))
- goto out;
-
- iph = ip_hdr(skb);
- ihl = iph->ihl * 4;
- if (ihl < sizeof(*iph))
- goto out;
-
- proto = iph->protocol;
-
- /* Warning: after this point, iph might be no longer valid */
- if (unlikely(!pskb_may_pull(skb, ihl)))
- goto out;
- __skb_pull(skb, ihl);
-
- skb_reset_transport_header(skb);
- err = -EPROTONOSUPPORT;
-
- ops = rcu_dereference(inet_offloads[proto]);
- if (likely(ops && ops->callbacks.gso_send_check))
- err = ops->callbacks.gso_send_check(skb);
-
-out:
- return err;
-}
-
static struct sk_buff *inet_gso_segment(struct sk_buff *skb,
netdev_features_t features)
{
@@ -1655,7 +1621,6 @@
static struct packet_offload ip_packet_offload __read_mostly = {
.type = cpu_to_be16(ETH_P_IP),
.callbacks = {
- .gso_send_check = inet_gso_send_check,
.gso_segment = inet_gso_segment,
.gro_receive = inet_gro_receive,
.gro_complete = inet_gro_complete,
@@ -1664,7 +1629,6 @@
static const struct net_offload ipip_offload = {
.callbacks = {
- .gso_send_check = inet_gso_send_check,
.gso_segment = inet_gso_segment,
.gro_receive = inet_gro_receive,
.gro_complete = inet_gro_complete,
diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index a2afa89..ac9a32e 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -505,8 +505,6 @@
ahp->icv_full_len = aalg_desc->uinfo.auth.icv_fullbits/8;
ahp->icv_trunc_len = x->aalg->alg_trunc_len/8;
- BUG_ON(ahp->icv_trunc_len > MAX_AH_AUTH_LEN);
-
if (x->props.flags & XFRM_STATE_ALIGN4)
x->props.header_len = XFRM_ALIGN4(sizeof(struct ip_auth_hdr) +
ahp->icv_trunc_len);
diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 1a9b99e..16acb59 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -953,10 +953,11 @@
{
const struct arphdr *arp;
+ /* do not tweak dropwatch on an ARP we will ignore */
if (dev->flags & IFF_NOARP ||
skb->pkt_type == PACKET_OTHERHOST ||
skb->pkt_type == PACKET_LOOPBACK)
- goto freeskb;
+ goto consumeskb;
skb = skb_share_check(skb, GFP_ATOMIC);
if (!skb)
@@ -974,6 +975,9 @@
return NF_HOOK(NFPROTO_ARP, NF_ARP_IN, skb, dev, NULL, arp_process);
+consumeskb:
+ consume_skb(skb);
+ return 0;
freeskb:
kfree_skb(skb);
out_of_mem:
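
The consume_skb()/kfree_skb() distinction matters to drop monitoring:
consume_skb() signals an intentional, non-error free, while kfree_skb()
is what dropwatch counts as a real drop. The pattern in general
(uninteresting() is a hypothetical predicate):

	if (uninteresting(skb)) {
		consume_skb(skb);	/* no drop_monitor event */
		return NET_RX_SUCCESS;
	}
	/* ... */
	kfree_skb(skb);			/* genuine drop, counted */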
diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
new file mode 100644
index 0000000..dced89f
--- /dev/null
+++ b/net/ipv4/fou.c
@@ -0,0 +1,368 @@
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/socket.h>
+#include <linux/skbuff.h>
+#include <linux/ip.h>
+#include <linux/udp.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <net/genetlink.h>
+#include <net/ip.h>
+#include <net/protocol.h>
+#include <net/udp.h>
+#include <net/udp_tunnel.h>
+#include <net/xfrm.h>
+#include <uapi/linux/fou.h>
+#include <uapi/linux/genetlink.h>
+
+static DEFINE_SPINLOCK(fou_lock);
+static LIST_HEAD(fou_list);
+
+struct fou {
+ struct socket *sock;
+ u8 protocol;
+ u16 port;
+ struct udp_offload udp_offloads;
+ struct list_head list;
+};
+
+struct fou_cfg {
+ u8 protocol;
+ struct udp_port_cfg udp_config;
+};
+
+static inline struct fou *fou_from_sock(struct sock *sk)
+{
+ return sk->sk_user_data;
+}
+
+static int fou_udp_encap_recv_deliver(struct sk_buff *skb,
+ u8 protocol, size_t len)
+{
+ struct iphdr *iph = ip_hdr(skb);
+
+ /* Remove 'len' bytes from the packet (UDP header and
+ * FOU header if present), change the protocol to the one
+ * we found, and return -protocol so that the UDP/IP receive
+ * path resubmits the packet to that protocol's handler.
+ */
+ iph->tot_len = htons(ntohs(iph->tot_len) - len);
+ __skb_pull(skb, len);
+ skb_postpull_rcsum(skb, udp_hdr(skb), len);
+ skb_reset_transport_header(skb);
+
+ return -protocol;
+}
+
+static int fou_udp_recv(struct sock *sk, struct sk_buff *skb)
+{
+ struct fou *fou = fou_from_sock(sk);
+
+ if (!fou)
+ return 1; /* not a FOU socket: fall through to normal UDP */
+
+ return fou_udp_encap_recv_deliver(skb, fou->protocol,
+ sizeof(struct udphdr));
+}
+
+static struct sk_buff **fou_gro_receive(struct sk_buff **head,
+ struct sk_buff *skb,
+ const struct net_offload **offloads)
+{
+ const struct net_offload *ops;
+ struct sk_buff **pp = NULL;
+ u8 proto = NAPI_GRO_CB(skb)->proto;
+
+ rcu_read_lock();
+ ops = rcu_dereference(offloads[proto]);
+ if (!ops || !ops->callbacks.gro_receive)
+ goto out_unlock;
+
+ pp = ops->callbacks.gro_receive(head, skb);
+
+out_unlock:
+ rcu_read_unlock();
+
+ return pp;
+}
+
+static int fou_gro_complete(struct sk_buff *skb, int nhoff,
+ const struct net_offload **offloads)
+{
+ const struct net_offload *ops;
+ u8 proto = NAPI_GRO_CB(skb)->proto;
+ int err = -ENOSYS;
+
+ rcu_read_lock();
+ ops = rcu_dereference(offloads[proto]);
+ if (WARN_ON(!ops || !ops->callbacks.gro_complete))
+ goto out_unlock;
+
+ err = ops->callbacks.gro_complete(skb, nhoff);
+
+out_unlock:
+ rcu_read_unlock();
+
+ return err;
+}
+
+static struct sk_buff **fou4_gro_receive(struct sk_buff **head,
+ struct sk_buff *skb)
+{
+ return fou_gro_receive(head, skb, inet_offloads);
+}
+
+static int fou4_gro_complete(struct sk_buff *skb, int nhoff)
+{
+ return fou_gro_complete(skb, nhoff, inet_offloads);
+}
+
+static struct sk_buff **fou6_gro_receive(struct sk_buff **head,
+ struct sk_buff *skb)
+{
+ return fou_gro_receive(head, skb, inet6_offloads);
+}
+
+static int fou6_gro_complete(struct sk_buff *skb, int nhoff)
+{
+ return fou_gro_complete(skb, nhoff, inet6_offloads);
+}
+
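The GRO callbacks above do no FOU-specific work of their own: they key
off NAPI_GRO_CB(skb)->proto and re-dispatch into the inner protocol's
net_offload, so GRO for the encapsulated protocol keeps working inside
the UDP tunnel.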
+static int fou_add_to_port_list(struct fou *fou)
+{
+ struct fou *fout;
+
+ spin_lock(&fou_lock);
+ list_for_each_entry(fout, &fou_list, list) {
+ if (fou->port == fout->port) {
+ spin_unlock(&fou_lock);
+ return -EALREADY;
+ }
+ }
+
+ list_add(&fou->list, &fou_list);
+ spin_unlock(&fou_lock);
+
+ return 0;
+}
+
+static void fou_release(struct fou *fou)
+{
+ struct socket *sock = fou->sock;
+ struct sock *sk = sock->sk;
+
+ udp_del_offload(&fou->udp_offloads);
+
+ list_del(&fou->list);
+
+ /* Remove hooks into tunnel socket */
+ sk->sk_user_data = NULL;
+
+ sock_release(sock);
+
+ kfree(fou);
+}
+
+static int fou_create(struct net *net, struct fou_cfg *cfg,
+ struct socket **sockp)
+{
+ struct fou *fou = NULL;
+ int err;
+ struct socket *sock = NULL;
+ struct sock *sk;
+
+ /* Open UDP socket */
+ err = udp_sock_create(net, &cfg->udp_config, &sock);
+ if (err < 0)
+ goto error;
+
+ /* Allocate FOU port structure */
+ fou = kzalloc(sizeof(*fou), GFP_KERNEL);
+ if (!fou) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ sk = sock->sk;
+
+ /* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
+ fou->protocol = cfg->protocol;
+ fou->port = cfg->udp_config.local_udp_port;
+ udp_sk(sk)->encap_rcv = fou_udp_recv;
+
+ udp_sk(sk)->encap_type = 1;
+ udp_encap_enable();
+
+ sk->sk_user_data = fou;
+ fou->sock = sock;
+
+ udp_set_convert_csum(sk, true);
+
+ sk->sk_allocation = GFP_ATOMIC;
+
+ switch (cfg->udp_config.family) {
+ case AF_INET:
+ fou->udp_offloads.callbacks.gro_receive = fou4_gro_receive;
+ fou->udp_offloads.callbacks.gro_complete = fou4_gro_complete;
+ break;
+ case AF_INET6:
+ fou->udp_offloads.callbacks.gro_receive = fou6_gro_receive;
+ fou->udp_offloads.callbacks.gro_complete = fou6_gro_complete;
+ break;
+ default:
+ err = -EPFNOSUPPORT;
+ goto error;
+ }
+
+ fou->udp_offloads.port = cfg->udp_config.local_udp_port;
+ fou->udp_offloads.ipproto = cfg->protocol;
+
+ if (cfg->udp_config.family == AF_INET) {
+ err = udp_add_offload(&fou->udp_offloads);
+ if (err)
+ goto error;
+ }
+
+ err = fou_add_to_port_list(fou);
+ if (err)
+ goto error;
+
+ if (sockp)
+ *sockp = sock;
+
+ return 0;
+
+error:
+ kfree(fou);
+ if (sock)
+ sock_release(sock);
+
+ return err;
+}
+
+static int fou_destroy(struct net *net, struct fou_cfg *cfg)
+{
+ struct fou *fou;
+ u16 port = cfg->udp_config.local_udp_port;
+ int err = -EINVAL;
+
+ spin_lock(&fou_lock);
+ list_for_each_entry(fou, &fou_list, list) {
+ if (fou->port == port) {
+ udp_del_offload(&fou->udp_offloads);
+ fou_release(fou);
+ err = 0;
+ break;
+ }
+ }
+ spin_unlock(&fou_lock);
+
+ return err;
+}
+
+static struct genl_family fou_nl_family = {
+ .id = GENL_ID_GENERATE,
+ .hdrsize = 0,
+ .name = FOU_GENL_NAME,
+ .version = FOU_GENL_VERSION,
+ .maxattr = FOU_ATTR_MAX,
+ .netnsok = true,
+};
+
+static struct nla_policy fou_nl_policy[FOU_ATTR_MAX + 1] = {
+ [FOU_ATTR_PORT] = { .type = NLA_U16, },
+ [FOU_ATTR_AF] = { .type = NLA_U8, },
+ [FOU_ATTR_IPPROTO] = { .type = NLA_U8, },
+};
+
+static int parse_nl_config(struct genl_info *info,
+ struct fou_cfg *cfg)
+{
+ memset(cfg, 0, sizeof(*cfg));
+
+ cfg->udp_config.family = AF_INET;
+
+ if (info->attrs[FOU_ATTR_AF]) {
+ u8 family = nla_get_u8(info->attrs[FOU_ATTR_AF]);
+
+ if (family != AF_INET && family != AF_INET6)
+ return -EINVAL;
+
+ cfg->udp_config.family = family;
+ }
+
+ if (info->attrs[FOU_ATTR_PORT]) {
+ u16 port = nla_get_u16(info->attrs[FOU_ATTR_PORT]);
+
+ cfg->udp_config.local_udp_port = port;
+ }
+
+ if (info->attrs[FOU_ATTR_IPPROTO])
+ cfg->protocol = nla_get_u8(info->attrs[FOU_ATTR_IPPROTO]);
+
+ return 0;
+}
+
+static int fou_nl_cmd_add_port(struct sk_buff *skb, struct genl_info *info)
+{
+ struct fou_cfg cfg;
+ int err;
+
+ err = parse_nl_config(info, &cfg);
+ if (err)
+ return err;
+
+ return fou_create(&init_net, &cfg, NULL);
+}
+
+static int fou_nl_cmd_rm_port(struct sk_buff *skb, struct genl_info *info)
+{
+ struct fou_cfg cfg;
+ int err;
+
+ err = parse_nl_config(info, &cfg);
+ if (err)
+ return err;
+
+ return fou_destroy(&init_net, &cfg);
+}
+
+static const struct genl_ops fou_nl_ops[] = {
+ {
+ .cmd = FOU_CMD_ADD,
+ .doit = fou_nl_cmd_add_port,
+ .policy = fou_nl_policy,
+ .flags = GENL_ADMIN_PERM,
+ },
+ {
+ .cmd = FOU_CMD_DEL,
+ .doit = fou_nl_cmd_rm_port,
+ .policy = fou_nl_policy,
+ .flags = GENL_ADMIN_PERM,
+ },
+};
+
+static int __init fou_init(void)
+{
+ int ret;
+
+ ret = genl_register_family_with_ops(&fou_nl_family,
+ fou_nl_ops);
+
+ return ret;
+}
+
+static void __exit fou_fini(void)
+{
+ struct fou *fou, *next;
+
+ genl_unregister_family(&fou_nl_family);
+
+ /* Close all the FOU sockets */
+
+ spin_lock(&fou_lock);
+ list_for_each_entry_safe(fou, next, &fou_list, list)
+ fou_release(fou);
+ spin_unlock(&fou_lock);
+}
+
+module_init(fou_init);
+module_exit(fou_fini);
+MODULE_AUTHOR("Tom Herbert <therbert@google.com>");
+MODULE_LICENSE("GPL");
diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
index d3fe2ac..a777295 100644
--- a/net/ipv4/gre_offload.c
+++ b/net/ipv4/gre_offload.c
@@ -15,13 +15,6 @@
#include <net/protocol.h>
#include <net/gre.h>
-static int gre_gso_send_check(struct sk_buff *skb)
-{
- if (!skb->encapsulation)
- return -EINVAL;
- return 0;
-}
-
static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
netdev_features_t features)
{
@@ -46,6 +39,9 @@
SKB_GSO_IPIP)))
goto out;
+ if (!skb->encapsulation)
+ goto out;
+
if (unlikely(!pskb_may_pull(skb, sizeof(*greh))))
goto out;
@@ -256,7 +252,6 @@
static const struct net_offload gre_offload = {
.callbacks = {
- .gso_send_check = gre_gso_send_check,
.gso_segment = gre_gso_segment,
.gro_receive = gre_gro_receive,
.gro_complete = gre_gro_complete,
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index ea7d4af..5882f58 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -231,12 +231,62 @@
spin_unlock_bh(&sk->sk_lock.slock);
}
+int sysctl_icmp_msgs_per_sec __read_mostly = 1000;
+int sysctl_icmp_msgs_burst __read_mostly = 50;
+
+static struct {
+ spinlock_t lock;
+ u32 credit;
+ u32 stamp;
+} icmp_global = {
+ .lock = __SPIN_LOCK_UNLOCKED(icmp_global.lock),
+};
+
+/**
+ * icmp_global_allow - Are we allowed to send one more ICMP message?
+ *
+ * Uses a token bucket to limit our ICMP messages to sysctl_icmp_msgs_per_sec.
+ * Returns false if we reached the limit and cannot send another packet.
+ * Note: called with BH disabled
+ */
+bool icmp_global_allow(void)
+{
+ u32 credit, delta, incr = 0, now = (u32)jiffies;
+ bool rc = false;
+
+ /* Check if token bucket is empty and cannot be refilled
+ * without taking the spinlock.
+ */
+ if (!icmp_global.credit) {
+ delta = min_t(u32, now - icmp_global.stamp, HZ);
+ if (delta < HZ / 50)
+ return false;
+ }
+
+ spin_lock(&icmp_global.lock);
+ delta = min_t(u32, now - icmp_global.stamp, HZ);
+ if (delta >= HZ / 50) {
+ incr = sysctl_icmp_msgs_per_sec * delta / HZ;
+ if (incr)
+ icmp_global.stamp = now;
+ }
+ credit = min_t(u32, icmp_global.credit + incr, sysctl_icmp_msgs_burst);
+ if (credit) {
+ credit--;
+ rc = true;
+ }
+ icmp_global.credit = credit;
+ spin_unlock(&icmp_global.lock);
+ return rc;
+}
+EXPORT_SYMBOL(icmp_global_allow);
+
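Worked example with the defaults (1000 msgs/sec, burst 50): once the
bucket runs dry, refills shorter than HZ/50 (20 ms) are refused outright.
At exactly 20 ms of quiet, incr = 1000 * (HZ/50) / HZ = 20 credits; after
50 ms or more, the min_t() against sysctl_icmp_msgs_burst caps the bucket
at 50 credits no matter how long the idle period was.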
/*
* Send an ICMP frame.
*/
-static inline bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
- struct flowi4 *fl4, int type, int code)
+static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ struct flowi4 *fl4, int type, int code)
{
struct dst_entry *dst = &rt->dst;
bool rc = true;
@@ -253,8 +303,14 @@
goto out;
/* Limit if icmp type is enabled in ratemask. */
- if ((1 << type) & net->ipv4.sysctl_icmp_ratemask) {
- struct inet_peer *peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, 1);
+ if (!((1 << type) & net->ipv4.sysctl_icmp_ratemask))
+ goto out;
+
+ rc = false;
+ if (icmp_global_allow()) {
+ struct inet_peer *peer;
+
+ peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, 1);
rc = inet_peer_xrlim_allow(peer,
net->ipv4.sysctl_icmp_ratelimit);
if (peer)
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
index 9b84254..829aff8b 100644
--- a/net/ipv4/ip_gre.c
+++ b/net/ipv4/ip_gre.c
@@ -239,7 +239,7 @@
tpi.seq = htonl(tunnel->o_seqno);
/* Push GRE header. */
- gre_build_header(skb, &tpi, tunnel->hlen);
+ gre_build_header(skb, &tpi, tunnel->tun_hlen);
ip_tunnel_xmit(skb, dev, tnl_params, tnl_params->protocol);
}
@@ -310,7 +310,7 @@
static int ipgre_tunnel_ioctl(struct net_device *dev,
struct ifreq *ifr, int cmd)
{
- int err = 0;
+ int err;
struct ip_tunnel_parm p;
if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p)))
@@ -470,13 +470,18 @@
static void __gre_tunnel_init(struct net_device *dev)
{
struct ip_tunnel *tunnel;
+ int t_hlen;
tunnel = netdev_priv(dev);
- tunnel->hlen = ip_gre_calc_hlen(tunnel->parms.o_flags);
+ tunnel->tun_hlen = ip_gre_calc_hlen(tunnel->parms.o_flags);
tunnel->parms.iph.protocol = IPPROTO_GRE;
- dev->needed_headroom = LL_MAX_HEADER + sizeof(struct iphdr) + 4;
- dev->mtu = ETH_DATA_LEN - sizeof(struct iphdr) - 4;
+ tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
+
+ t_hlen = tunnel->hlen + sizeof(struct iphdr);
+
+ dev->needed_headroom = LL_MAX_HEADER + t_hlen + 4;
+ dev->mtu = ETH_DATA_LEN - t_hlen - 4;
dev->features |= GRE_FEATURES;
dev->hw_features |= GRE_FEATURES;
@@ -628,6 +633,40 @@
parms->iph.frag_off = htons(IP_DF);
}
+/* This function returns true when ENCAP attributes are present in the nl msg */
+static bool ipgre_netlink_encap_parms(struct nlattr *data[],
+ struct ip_tunnel_encap *ipencap)
+{
+ bool ret = false;
+
+ memset(ipencap, 0, sizeof(*ipencap));
+
+ if (!data)
+ return ret;
+
+ if (data[IFLA_GRE_ENCAP_TYPE]) {
+ ret = true;
+ ipencap->type = nla_get_u16(data[IFLA_GRE_ENCAP_TYPE]);
+ }
+
+ if (data[IFLA_GRE_ENCAP_FLAGS]) {
+ ret = true;
+ ipencap->flags = nla_get_u16(data[IFLA_GRE_ENCAP_FLAGS]);
+ }
+
+ if (data[IFLA_GRE_ENCAP_SPORT]) {
+ ret = true;
+ ipencap->sport = nla_get_u16(data[IFLA_GRE_ENCAP_SPORT]);
+ }
+
+ if (data[IFLA_GRE_ENCAP_DPORT]) {
+ ret = true;
+ ipencap->dport = nla_get_u16(data[IFLA_GRE_ENCAP_DPORT]);
+ }
+
+ return ret;
+}
+
static int gre_tap_init(struct net_device *dev)
{
__gre_tunnel_init(dev);
@@ -657,6 +696,15 @@
struct nlattr *tb[], struct nlattr *data[])
{
struct ip_tunnel_parm p;
+ struct ip_tunnel_encap ipencap;
+
+ if (ipgre_netlink_encap_parms(data, &ipencap)) {
+ struct ip_tunnel *t = netdev_priv(dev);
+ int err = ip_tunnel_encap_setup(t, &ipencap);
+
+ if (err < 0)
+ return err;
+ }
ipgre_netlink_parms(data, tb, &p);
return ip_tunnel_newlink(dev, tb, &p);
@@ -666,6 +714,15 @@
struct nlattr *data[])
{
struct ip_tunnel_parm p;
+ struct ip_tunnel_encap ipencap;
+
+ if (ipgre_netlink_encap_parms(data, &ipencap)) {
+ struct ip_tunnel *t = netdev_priv(dev);
+ int err = ip_tunnel_encap_setup(t, &ipencap);
+
+ if (err < 0)
+ return err;
+ }
ipgre_netlink_parms(data, tb, &p);
return ip_tunnel_changelink(dev, tb, &p);
@@ -694,6 +751,14 @@
nla_total_size(1) +
/* IFLA_GRE_PMTUDISC */
nla_total_size(1) +
+ /* IFLA_GRE_ENCAP_TYPE */
+ nla_total_size(2) +
+ /* IFLA_GRE_ENCAP_FLAGS */
+ nla_total_size(2) +
+ /* IFLA_GRE_ENCAP_SPORT */
+ nla_total_size(2) +
+ /* IFLA_GRE_ENCAP_DPORT */
+ nla_total_size(2) +
0;
}
@@ -714,6 +779,17 @@
nla_put_u8(skb, IFLA_GRE_PMTUDISC,
!!(p->iph.frag_off & htons(IP_DF))))
goto nla_put_failure;
+
+ if (nla_put_u16(skb, IFLA_GRE_ENCAP_TYPE,
+ t->encap.type) ||
+ nla_put_u16(skb, IFLA_GRE_ENCAP_SPORT,
+ t->encap.sport) ||
+ nla_put_u16(skb, IFLA_GRE_ENCAP_DPORT,
+ t->encap.dport) ||
+ nla_put_u16(skb, IFLA_GRE_ENCAP_FLAGS,
+ t->encap.flags))
+ goto nla_put_failure;
+
return 0;
nla_put_failure:
@@ -731,6 +807,10 @@
[IFLA_GRE_TTL] = { .type = NLA_U8 },
[IFLA_GRE_TOS] = { .type = NLA_U8 },
[IFLA_GRE_PMTUDISC] = { .type = NLA_U8 },
+ [IFLA_GRE_ENCAP_TYPE] = { .type = NLA_U16 },
+ [IFLA_GRE_ENCAP_FLAGS] = { .type = NLA_U16 },
+ [IFLA_GRE_ENCAP_SPORT] = { .type = NLA_U16 },
+ [IFLA_GRE_ENCAP_DPORT] = { .type = NLA_U16 },
};
static struct rtnl_link_ops ipgre_link_ops __read_mostly = {
diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
index ad38249..5b3d91b 100644
--- a/net/ipv4/ip_options.c
+++ b/net/ipv4/ip_options.c
@@ -87,17 +87,15 @@
* NOTE: dopt cannot point to skb.
*/
-int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb)
+int __ip_options_echo(struct ip_options *dopt, struct sk_buff *skb,
+ const struct ip_options *sopt)
{
- const struct ip_options *sopt;
unsigned char *sptr, *dptr;
int soffset, doffset;
int optlen;
memset(dopt, 0, sizeof(struct ip_options));
- sopt = &(IPCB(skb)->opt);
-
if (sopt->optlen == 0)
return 0;
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 215af2b..c8fa624 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -1522,8 +1522,10 @@
.uc_ttl = -1,
};
-void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr,
- __be32 saddr, const struct ip_reply_arg *arg,
+void ip_send_unicast_reply(struct net *net, struct sk_buff *skb,
+ const struct ip_options *sopt,
+ __be32 daddr, __be32 saddr,
+ const struct ip_reply_arg *arg,
unsigned int len)
{
struct ip_options_data replyopts;
@@ -1534,7 +1536,7 @@
struct sock *sk;
struct inet_sock *inet;
- if (ip_options_echo(&replyopts.opt.opt, skb))
+ if (__ip_options_echo(&replyopts.opt.opt, skb, sopt))
return;
ipc.addr = daddr;
diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
index afed1aa..b75b47b 100644
--- a/net/ipv4/ip_tunnel.c
+++ b/net/ipv4/ip_tunnel.c
@@ -55,6 +55,7 @@
#include <net/net_namespace.h>
#include <net/netns/generic.h>
#include <net/rtnetlink.h>
+#include <net/udp.h>
#if IS_ENABLED(CONFIG_IPV6)
#include <net/ipv6.h>
@@ -79,10 +80,10 @@
idst->saddr = saddr;
}
-static void tunnel_dst_set(struct ip_tunnel *t,
+static noinline void tunnel_dst_set(struct ip_tunnel *t,
struct dst_entry *dst, __be32 saddr)
{
- __tunnel_dst_set(this_cpu_ptr(t->dst_cache), dst, saddr);
+ __tunnel_dst_set(raw_cpu_ptr(t->dst_cache), dst, saddr);
}
static void tunnel_dst_reset(struct ip_tunnel *t)
@@ -106,7 +107,7 @@
struct dst_entry *dst;
rcu_read_lock();
- idst = this_cpu_ptr(t->dst_cache);
+ idst = raw_cpu_ptr(t->dst_cache);
dst = rcu_dereference(idst->dst);
if (dst && !atomic_inc_not_zero(&dst->__refcnt))
dst = NULL;
@@ -487,6 +488,91 @@
}
EXPORT_SYMBOL_GPL(ip_tunnel_rcv);
+static int ip_encap_hlen(struct ip_tunnel_encap *e)
+{
+ switch (e->type) {
+ case TUNNEL_ENCAP_NONE:
+ return 0;
+ case TUNNEL_ENCAP_FOU:
+ return sizeof(struct udphdr);
+ default:
+ return -EINVAL;
+ }
+}
+
+int ip_tunnel_encap_setup(struct ip_tunnel *t,
+ struct ip_tunnel_encap *ipencap)
+{
+ int hlen;
+
+ memset(&t->encap, 0, sizeof(t->encap));
+
+ hlen = ip_encap_hlen(ipencap);
+ if (hlen < 0)
+ return hlen;
+
+ t->encap.type = ipencap->type;
+ t->encap.sport = ipencap->sport;
+ t->encap.dport = ipencap->dport;
+ t->encap.flags = ipencap->flags;
+
+ t->encap_hlen = hlen;
+ t->hlen = t->encap_hlen + t->tun_hlen;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ip_tunnel_encap_setup);
+
+static int fou_build_header(struct sk_buff *skb, struct ip_tunnel_encap *e,
+ size_t hdr_len, u8 *protocol, struct flowi4 *fl4)
+{
+ struct udphdr *uh;
+ __be16 sport;
+ bool csum = !!(e->flags & TUNNEL_ENCAP_FLAG_CSUM);
+ int type = csum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+
+ skb = iptunnel_handle_offloads(skb, csum, type);
+
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ /* Compute the flow-hash source port before headers are pushed onto the skb */
+
+ sport = e->sport ? : udp_flow_src_port(dev_net(skb->dev),
+ skb, 0, 0, false);
+
+ skb_push(skb, hdr_len);
+
+ skb_reset_transport_header(skb);
+ uh = udp_hdr(skb);
+
+ uh->dest = e->dport;
+ uh->source = sport;
+ uh->len = htons(skb->len);
+ uh->check = 0;
+ udp_set_csum(!(e->flags & TUNNEL_ENCAP_FLAG_CSUM), skb,
+ fl4->saddr, fl4->daddr, skb->len);
+
+ *protocol = IPPROTO_UDP;
+
+ return 0;
+}
+
+int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t,
+ u8 *protocol, struct flowi4 *fl4)
+{
+ switch (t->encap.type) {
+ case TUNNEL_ENCAP_NONE:
+ return 0;
+ case TUNNEL_ENCAP_FOU:
+ return fou_build_header(skb, &t->encap, t->encap_hlen,
+ protocol, fl4);
+ default:
+ return -EINVAL;
+ }
+}
+EXPORT_SYMBOL(ip_tunnel_encap);
+
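
A driver consumes the pair of helpers above in two steps: fill a struct ip_tunnel_encap and call ip_tunnel_encap_setup() at configuration time, then let ip_tunnel_xmit() call ip_tunnel_encap() on every packet (as the hunk below adds). A minimal configuration sketch, with an illustrative, non-reserved port number:

        struct ip_tunnel_encap ipencap = {
                .type  = TUNNEL_ENCAP_FOU,
                .sport = 0,                     /* 0: derive per-flow source port */
                .dport = htons(5555),           /* example port, pick your own */
                .flags = TUNNEL_ENCAP_FLAG_CSUM,
        };
        int err = ip_tunnel_encap_setup(tunnel, &ipencap);

Leaving .sport at zero makes fou_build_header() fall back to udp_flow_src_port(), so each inner flow gets a stable, hash-derived outer source port that keeps ECMP and RSS effective.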
static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
struct rtable *rt, __be16 df)
{
@@ -536,7 +622,7 @@
}
void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
- const struct iphdr *tnl_params, const u8 protocol)
+ const struct iphdr *tnl_params, u8 protocol)
{
struct ip_tunnel *tunnel = netdev_priv(dev);
const struct iphdr *inner_iph;
@@ -617,6 +703,9 @@
init_tunnel_flow(&fl4, protocol, dst, tnl_params->saddr,
tunnel->parms.o_key, RT_TOS(tos), tunnel->parms.link);
+ if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0)
+ goto tx_error;
+
rt = connected ? tunnel_rtable_get(tunnel, 0, &fl4.saddr) : NULL;
if (!rt) {
diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
index 62eaa00..bfec31d 100644
--- a/net/ipv4/ipip.c
+++ b/net/ipv4/ipip.c
@@ -301,7 +301,8 @@
memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4);
memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4);
- tunnel->hlen = 0;
+ tunnel->tun_hlen = 0;
+ tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
tunnel->parms.iph.protocol = IPPROTO_IPIP;
return ip_tunnel_init(dev);
}
@@ -340,10 +341,53 @@
parms->iph.frag_off = htons(IP_DF);
}
+/* This function returns true when ENCAP attributes are present in the nl msg */
+static bool ipip_netlink_encap_parms(struct nlattr *data[],
+ struct ip_tunnel_encap *ipencap)
+{
+ bool ret = false;
+
+ memset(ipencap, 0, sizeof(*ipencap));
+
+ if (!data)
+ return ret;
+
+ if (data[IFLA_IPTUN_ENCAP_TYPE]) {
+ ret = true;
+ ipencap->type = nla_get_u16(data[IFLA_IPTUN_ENCAP_TYPE]);
+ }
+
+ if (data[IFLA_IPTUN_ENCAP_FLAGS]) {
+ ret = true;
+ ipencap->flags = nla_get_u16(data[IFLA_IPTUN_ENCAP_FLAGS]);
+ }
+
+ if (data[IFLA_IPTUN_ENCAP_SPORT]) {
+ ret = true;
+ ipencap->sport = nla_get_u16(data[IFLA_IPTUN_ENCAP_SPORT]);
+ }
+
+ if (data[IFLA_IPTUN_ENCAP_DPORT]) {
+ ret = true;
+ ipencap->dport = nla_get_u16(data[IFLA_IPTUN_ENCAP_DPORT]);
+ }
+
+ return ret;
+}
+
static int ipip_newlink(struct net *src_net, struct net_device *dev,
struct nlattr *tb[], struct nlattr *data[])
{
struct ip_tunnel_parm p;
+ struct ip_tunnel_encap ipencap;
+
+ if (ipip_netlink_encap_parms(data, &ipencap)) {
+ struct ip_tunnel *t = netdev_priv(dev);
+ int err = ip_tunnel_encap_setup(t, &ipencap);
+
+ if (err < 0)
+ return err;
+ }
ipip_netlink_parms(data, &p);
return ip_tunnel_newlink(dev, tb, &p);
@@ -353,6 +397,15 @@
struct nlattr *data[])
{
struct ip_tunnel_parm p;
+ struct ip_tunnel_encap ipencap;
+
+ if (ipip_netlink_encap_parms(data, &ipencap)) {
+ struct ip_tunnel *t = netdev_priv(dev);
+ int err = ip_tunnel_encap_setup(t, &ipencap);
+
+ if (err < 0)
+ return err;
+ }
ipip_netlink_parms(data, &p);
@@ -378,6 +431,14 @@
nla_total_size(1) +
/* IFLA_IPTUN_PMTUDISC */
nla_total_size(1) +
+ /* IFLA_IPTUN_ENCAP_TYPE */
+ nla_total_size(2) +
+ /* IFLA_IPTUN_ENCAP_FLAGS */
+ nla_total_size(2) +
+ /* IFLA_IPTUN_ENCAP_SPORT */
+ nla_total_size(2) +
+ /* IFLA_IPTUN_ENCAP_DPORT */
+ nla_total_size(2) +
0;
}
@@ -394,6 +455,17 @@
nla_put_u8(skb, IFLA_IPTUN_PMTUDISC,
!!(parm->iph.frag_off & htons(IP_DF))))
goto nla_put_failure;
+
+ if (nla_put_u16(skb, IFLA_IPTUN_ENCAP_TYPE,
+ tunnel->encap.type) ||
+ nla_put_u16(skb, IFLA_IPTUN_ENCAP_SPORT,
+ tunnel->encap.sport) ||
+ nla_put_u16(skb, IFLA_IPTUN_ENCAP_DPORT,
+ tunnel->encap.dport) ||
+ nla_put_u16(skb, IFLA_IPTUN_ENCAP_FLAGS,
+ tunnel->encap.flags))
+ goto nla_put_failure;
+
return 0;
nla_put_failure:
@@ -407,6 +479,10 @@
[IFLA_IPTUN_TTL] = { .type = NLA_U8 },
[IFLA_IPTUN_TOS] = { .type = NLA_U8 },
[IFLA_IPTUN_PMTUDISC] = { .type = NLA_U8 },
+ [IFLA_IPTUN_ENCAP_TYPE] = { .type = NLA_U16 },
+ [IFLA_IPTUN_ENCAP_FLAGS] = { .type = NLA_U16 },
+ [IFLA_IPTUN_ENCAP_SPORT] = { .type = NLA_U16 },
+ [IFLA_IPTUN_ENCAP_DPORT] = { .type = NLA_U16 },
};
static struct rtnl_link_ops ipip_link_ops __read_mostly = {
diff --git a/net/ipv4/protocol.c b/net/ipv4/protocol.c
index 46d6a1c..4b7c0ec 100644
--- a/net/ipv4/protocol.c
+++ b/net/ipv4/protocol.c
@@ -30,6 +30,7 @@
const struct net_protocol __rcu *inet_protos[MAX_INET_PROTOS] __read_mostly;
const struct net_offload __rcu *inet_offloads[MAX_INET_PROTOS] __read_mostly;
+EXPORT_SYMBOL(inet_offloads);
int inet_add_protocol(const struct net_protocol *prot, unsigned char protocol)
{
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 234a43e..d4bd68d 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2265,9 +2265,9 @@
return rt;
if (flp4->flowi4_proto)
- rt = (struct rtable *) xfrm_lookup(net, &rt->dst,
- flowi4_to_flowi(flp4),
- sk, 0);
+ rt = (struct rtable *)xfrm_lookup_route(net, &rt->dst,
+ flowi4_to_flowi(flp4),
+ sk, 0);
return rt;
}
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index 1599966..8a25509 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -731,6 +731,22 @@
.extra2 = &one,
},
{
+ .procname = "icmp_msgs_per_sec",
+ .data = &sysctl_icmp_msgs_per_sec,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ },
+ {
+ .procname = "icmp_msgs_burst",
+ .data = &sysctl_icmp_msgs_burst,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ },
+ {
.procname = "udp_mem",
.data = &sysctl_udp_mem,
.maxlen = sizeof(sysctl_udp_mem),
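
These two knobs parameterize a global token bucket for ICMP output: a steady rate in messages per second plus a burst allowance. As an illustration of the mechanism being tuned (a sketch, not the kernel's actual icmp.c code):

        static bool icmp_rate_allow(unsigned long *stamp, u64 *credit,
                                    u32 msgs_per_sec, u32 msgs_burst)
        {
                unsigned long now = jiffies;
                u64 delta = min_t(u64, now - *stamp, HZ); /* cap refill window */

                /* Refill proportionally to elapsed time, never above the burst. */
                *credit = min_t(u64, *credit + delta * msgs_per_sec / HZ,
                                msgs_burst);
                *stamp = now;
                if (!*credit)
                        return false;   /* bucket empty: suppress this ICMP */
                --*credit;
                return true;
        }

Because the limit is global rather than per destination, the real implementation has to update this state atomically or under a lock.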
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 541f26a..cf5e508 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -405,7 +405,7 @@
tp->reordering = sysctl_tcp_reordering;
tcp_enable_early_retrans(tp);
- icsk->icsk_ca_ops = &tcp_init_congestion_ops;
+ tcp_assign_congestion_control(sk);
tp->tsoffset = 0;
@@ -609,7 +609,7 @@
return after(tp->write_seq, tp->pushed_seq + (tp->max_window >> 1));
}
-static inline void skb_entail(struct sock *sk, struct sk_buff *skb)
+static void skb_entail(struct sock *sk, struct sk_buff *skb)
{
struct tcp_sock *tp = tcp_sk(sk);
struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
@@ -618,7 +618,7 @@
tcb->seq = tcb->end_seq = tp->write_seq;
tcb->tcp_flags = TCPHDR_ACK;
tcb->sacked = 0;
- skb_header_release(skb);
+ __skb_header_release(skb);
tcp_add_write_queue_tail(sk, skb);
sk->sk_wmem_queued += skb->truesize;
sk_mem_charge(sk, skb->truesize);
@@ -963,7 +963,7 @@
skb->ip_summed = CHECKSUM_PARTIAL;
tp->write_seq += copy;
TCP_SKB_CB(skb)->end_seq += copy;
- skb_shinfo(skb)->gso_segs = 0;
+ tcp_skb_pcount_set(skb, 0);
if (!copied)
TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH;
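
tcp_skb_pcount_set() and the tcp_skb_pcount()/tcp_skb_pcount_add() calls in later hunks indicate the GSO segment count now lives in TCP_SKB_CB() while an skb sits on TCP's queues. Accessors of roughly this shape would match every call site (a sketch assuming a tcp_gso_segs field in struct tcp_skb_cb):

        static inline int tcp_skb_pcount(const struct sk_buff *skb)
        {
                return TCP_SKB_CB(skb)->tcp_gso_segs;
        }

        static inline void tcp_skb_pcount_set(struct sk_buff *skb, int segs)
        {
                TCP_SKB_CB(skb)->tcp_gso_segs = segs;
        }

        static inline void tcp_skb_pcount_add(struct sk_buff *skb, int segs)
        {
                TCP_SKB_CB(skb)->tcp_gso_segs += segs;
        }

Consistent with that, the tcp_output.c hunk below copies tcp_skb_pcount(skb) back into skb_shinfo(skb)->gso_segs right before the skb is handed to the IP layer.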
@@ -1261,7 +1261,7 @@
tp->write_seq += copy;
TCP_SKB_CB(skb)->end_seq += copy;
- skb_shinfo(skb)->gso_segs = 0;
+ tcp_skb_pcount_set(skb, 0);
from += copy;
copied += copy;
@@ -1510,9 +1510,9 @@
while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
offset = seq - TCP_SKB_CB(skb)->seq;
- if (tcp_hdr(skb)->syn)
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
offset--;
- if (offset < skb->len || tcp_hdr(skb)->fin) {
+ if (offset < skb->len || (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)) {
*off = offset;
return skb;
}
@@ -1585,7 +1585,7 @@
if (offset + 1 != skb->len)
continue;
}
- if (tcp_hdr(skb)->fin) {
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
sk_eat_skb(sk, skb, false);
++seq;
break;
@@ -1722,11 +1722,11 @@
break;
offset = *seq - TCP_SKB_CB(skb)->seq;
- if (tcp_hdr(skb)->syn)
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
offset--;
if (offset < skb->len)
goto found_ok_skb;
- if (tcp_hdr(skb)->fin)
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
goto found_fin_ok;
WARN(!(flags & MSG_PEEK),
"recvmsg bug 2: copied %X seq %X rcvnxt %X fl %X\n",
@@ -1959,7 +1959,7 @@
if (used + offset < skb->len)
continue;
- if (tcp_hdr(skb)->fin)
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
goto found_fin_ok;
if (!(flags & MSG_PEEK)) {
sk_eat_skb(sk, skb, copied_early);
@@ -2160,8 +2160,10 @@
* reader process may not have drained the data yet!
*/
while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
- u32 len = TCP_SKB_CB(skb)->end_seq - TCP_SKB_CB(skb)->seq -
- tcp_hdr(skb)->fin;
+ u32 len = TCP_SKB_CB(skb)->end_seq - TCP_SKB_CB(skb)->seq;
+
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+ len--;
data_was_unread += len;
__kfree_skb(skb);
}
@@ -3256,8 +3258,6 @@
tcp_hashinfo.ehash_mask + 1, tcp_hashinfo.bhash_size);
tcp_metrics_init();
-
- tcp_register_congestion_control(&tcp_reno);
-
+ BUG_ON(tcp_register_congestion_control(&tcp_reno) != 0);
tcp_tasklet_init();
}
diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
index 80248f5..a6c8a57 100644
--- a/net/ipv4/tcp_cong.c
+++ b/net/ipv4/tcp_cong.c
@@ -74,24 +74,34 @@
EXPORT_SYMBOL_GPL(tcp_unregister_congestion_control);
/* Assign choice of congestion control. */
-void tcp_init_congestion_control(struct sock *sk)
+void tcp_assign_congestion_control(struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
struct tcp_congestion_ops *ca;
- /* if no choice made yet assign the current value set as default */
- if (icsk->icsk_ca_ops == &tcp_init_congestion_ops) {
- rcu_read_lock();
- list_for_each_entry_rcu(ca, &tcp_cong_list, list) {
- if (try_module_get(ca->owner)) {
- icsk->icsk_ca_ops = ca;
- break;
- }
-
- /* fallback to next available */
+ rcu_read_lock();
+ list_for_each_entry_rcu(ca, &tcp_cong_list, list) {
+ if (likely(try_module_get(ca->owner))) {
+ icsk->icsk_ca_ops = ca;
+ goto out;
}
- rcu_read_unlock();
+ /* Fall back to the next available one. The last entry,
+ * Reno, is the guaranteed fallback.
+ */
}
+out:
+ rcu_read_unlock();
+
+ /* Clear out private data so that diag does not see stale
+ * state from before the ca has been initialized.
+ */
+ if (ca->get_info)
+ memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
+}
+
+void tcp_init_congestion_control(struct sock *sk)
+{
+ const struct inet_connection_sock *icsk = inet_csk(sk);
if (icsk->icsk_ca_ops->init)
icsk->icsk_ca_ops->init(sk);
@@ -345,15 +355,3 @@
.ssthresh = tcp_reno_ssthresh,
.cong_avoid = tcp_reno_cong_avoid,
};
-
-/* Initial congestion control used (until SYN)
- * really reno under another name so we can tell difference
- * during tcp_set_default_congestion_control
- */
-struct tcp_congestion_ops tcp_init_congestion_ops = {
- .name = "",
- .owner = THIS_MODULE,
- .ssthresh = tcp_reno_ssthresh,
- .cong_avoid = tcp_reno_cong_avoid,
-};
-EXPORT_SYMBOL_GPL(tcp_init_congestion_ops);
diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
new file mode 100644
index 0000000..b504371
--- /dev/null
+++ b/net/ipv4/tcp_dctcp.c
@@ -0,0 +1,344 @@
+/* DataCenter TCP (DCTCP) congestion control.
+ *
+ * http://simula.stanford.edu/~alizade/Site/DCTCP.html
+ *
+ * This is an implementation of DCTCP over Reno, an enhancement to the
+ * TCP congestion control algorithm designed for data centers. DCTCP
+ * leverages Explicit Congestion Notification (ECN) in the network to
+ * provide multi-bit feedback to the end hosts. DCTCP's goal is to meet
+ * the following three data center transport requirements:
+ *
+ * - High burst tolerance (incast due to partition/aggregate)
+ * - Low latency (short flows, queries)
+ * - High throughput (continuous data updates, large file transfers)
+ * with commodity shallow buffered switches
+ *
+ * The algorithm is described in detail in the following two papers:
+ *
+ * 1) Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye,
+ * Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari Sridharan:
+ * "Data Center TCP (DCTCP)", Data Center Networks session
+ * Proc. ACM SIGCOMM, New Delhi, 2010.
+ * http://simula.stanford.edu/~alizade/Site/DCTCP_files/dctcp-final.pdf
+ *
+ * 2) Mohammad Alizadeh, Adel Javanmard, and Balaji Prabhakar:
+ * "Analysis of DCTCP: Stability, Convergence, and Fairness"
+ * Proc. ACM SIGMETRICS, San Jose, 2011.
+ * http://simula.stanford.edu/~alizade/Site/DCTCP_files/dctcp_analysis-full.pdf
+ *
+ * Initial prototype from Abdul Kabbani, Masato Yasuda and Mohammad Alizadeh.
+ *
+ * Authors:
+ *
+ * Daniel Borkmann <dborkman@redhat.com>
+ * Florian Westphal <fw@strlen.de>
+ * Glenn Judd <glenn.judd@morganstanley.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <net/tcp.h>
+#include <linux/inet_diag.h>
+
+#define DCTCP_MAX_ALPHA 1024U
+
+struct dctcp {
+ u32 acked_bytes_ecn;
+ u32 acked_bytes_total;
+ u32 prior_snd_una;
+ u32 prior_rcv_nxt;
+ u32 dctcp_alpha;
+ u32 next_seq;
+ u32 ce_state;
+ u32 delayed_ack_reserved;
+};
+
+static unsigned int dctcp_shift_g __read_mostly = 4; /* g = 1/2^4 */
+module_param(dctcp_shift_g, uint, 0644);
+MODULE_PARM_DESC(dctcp_shift_g, "parameter g for updating dctcp_alpha");
+
+static unsigned int dctcp_alpha_on_init __read_mostly = DCTCP_MAX_ALPHA;
+module_param(dctcp_alpha_on_init, uint, 0644);
+MODULE_PARM_DESC(dctcp_alpha_on_init, "parameter for initial alpha value");
+
+static unsigned int dctcp_clamp_alpha_on_loss __read_mostly;
+module_param(dctcp_clamp_alpha_on_loss, uint, 0644);
+MODULE_PARM_DESC(dctcp_clamp_alpha_on_loss,
+ "parameter for clamping alpha on loss");
+
+static struct tcp_congestion_ops dctcp_reno;
+
+static void dctcp_reset(const struct tcp_sock *tp, struct dctcp *ca)
+{
+ ca->next_seq = tp->snd_nxt;
+
+ ca->acked_bytes_ecn = 0;
+ ca->acked_bytes_total = 0;
+}
+
+static void dctcp_init(struct sock *sk)
+{
+ const struct tcp_sock *tp = tcp_sk(sk);
+
+ if ((tp->ecn_flags & TCP_ECN_OK) ||
+ (sk->sk_state == TCP_LISTEN ||
+ sk->sk_state == TCP_CLOSE)) {
+ struct dctcp *ca = inet_csk_ca(sk);
+
+ ca->prior_snd_una = tp->snd_una;
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+
+ ca->dctcp_alpha = min(dctcp_alpha_on_init, DCTCP_MAX_ALPHA);
+
+ ca->delayed_ack_reserved = 0;
+ ca->ce_state = 0;
+
+ dctcp_reset(tp, ca);
+ return;
+ }
+
+ /* No ECN support? Fall back to Reno. Also need to clear
+ * ECT from sk since it is set during 3WHS for DCTCP.
+ */
+ inet_csk(sk)->icsk_ca_ops = &dctcp_reno;
+ INET_ECN_dontxmit(sk);
+}
+
+static u32 dctcp_ssthresh(struct sock *sk)
+{
+ const struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ return max(tp->snd_cwnd - ((tp->snd_cwnd * ca->dctcp_alpha) >> 11U), 2U);
+}
+
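
dctcp_ssthresh() above is the paper's window reduction in 10-bit fixed point. With dctcp_alpha = 1024 * alpha for alpha in [0, 1], the C expression is exactly

\[ \mathrm{ssthresh} \;=\; \max\!\Bigl(\mathrm{cwnd}\,\bigl(1-\tfrac{\alpha}{2}\bigr),\; 2\Bigr) \;=\; \max\!\Bigl(\mathrm{cwnd} - \frac{\mathrm{cwnd}\cdot\mathtt{dctcp\_alpha}}{2048},\; 2\Bigr) \]

so the >> 11 both undoes the 1024 scaling and supplies the division by two.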
+/* Minimal DCTCP CE state machine:
+ *
+ * S: 0 <- last pkt was non-CE
+ * 1 <- last pkt was CE
+ */
+
+static void dctcp_ce_state_0_to_1(struct sock *sk)
+{
+ struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ /* State has changed from CE=0 to CE=1 and delayed
+ * ACK has not been sent yet.
+ */
+ if (!ca->ce_state && ca->delayed_ack_reserved) {
+ u32 tmp_rcv_nxt;
+
+ /* Save current rcv_nxt. */
+ tmp_rcv_nxt = tp->rcv_nxt;
+
+ /* Generate previous ack with CE=0. */
+ tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+ tp->rcv_nxt = ca->prior_rcv_nxt;
+
+ tcp_send_ack(sk);
+
+ /* Recover current rcv_nxt. */
+ tp->rcv_nxt = tmp_rcv_nxt;
+ }
+
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+ ca->ce_state = 1;
+
+ tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
+}
+
+static void dctcp_ce_state_1_to_0(struct sock *sk)
+{
+ struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ /* State has changed from CE=1 to CE=0 and delayed
+ * ACK has not been sent yet.
+ */
+ if (ca->ce_state && ca->delayed_ack_reserved) {
+ u32 tmp_rcv_nxt;
+
+ /* Save current rcv_nxt. */
+ tmp_rcv_nxt = tp->rcv_nxt;
+
+ /* Generate previous ack with CE=1. */
+ tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
+ tp->rcv_nxt = ca->prior_rcv_nxt;
+
+ tcp_send_ack(sk);
+
+ /* Recover current rcv_nxt. */
+ tp->rcv_nxt = tmp_rcv_nxt;
+ }
+
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+ ca->ce_state = 0;
+
+ tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+}
+
+static void dctcp_update_alpha(struct sock *sk, u32 flags)
+{
+ const struct tcp_sock *tp = tcp_sk(sk);
+ struct dctcp *ca = inet_csk_ca(sk);
+ u32 acked_bytes = tp->snd_una - ca->prior_snd_una;
+
+ /* If ack did not advance snd_una, count dupack as MSS size.
+ * If ack did update window, do not count it at all.
+ */
+ if (acked_bytes == 0 && !(flags & CA_ACK_WIN_UPDATE))
+ acked_bytes = inet_csk(sk)->icsk_ack.rcv_mss;
+ if (acked_bytes) {
+ ca->acked_bytes_total += acked_bytes;
+ ca->prior_snd_una = tp->snd_una;
+
+ if (flags & CA_ACK_ECE)
+ ca->acked_bytes_ecn += acked_bytes;
+ }
+
+ /* Expired RTT */
+ if (!before(tp->snd_una, ca->next_seq)) {
+ /* Avoid a zero denominator below. */
+ if (ca->acked_bytes_total == 0)
+ ca->acked_bytes_total = 1;
+
+ /* alpha = (1 - g) * alpha + g * F */
+ ca->dctcp_alpha = ca->dctcp_alpha -
+ (ca->dctcp_alpha >> dctcp_shift_g) +
+ (ca->acked_bytes_ecn << (10U - dctcp_shift_g)) /
+ ca->acked_bytes_total;
+
+ if (ca->dctcp_alpha > DCTCP_MAX_ALPHA)
+ /* Clamp dctcp_alpha to max. */
+ ca->dctcp_alpha = DCTCP_MAX_ALPHA;
+
+ dctcp_reset(tp, ca);
+ }
+}
+
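
dctcp_update_alpha() implements the paper's moving average of the ECN-marked byte fraction, evaluated once per window, i.e. whenever snd_una passes the recorded next_seq:

\[ \alpha \;\leftarrow\; (1-g)\,\alpha + g\,F, \qquad g = 2^{-\mathtt{dctcp\_shift\_g}}, \qquad F = \frac{\mathtt{acked\_bytes\_ecn}}{\mathtt{acked\_bytes\_total}} \]

In the 1024-scaled fixed point used here, the g*F term becomes (acked_bytes_ecn << (10 - dctcp_shift_g)) / acked_bytes_total, which also means the module parameter only makes sense for shifts of at most 10.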
+static void dctcp_state(struct sock *sk, u8 new_state)
+{
+ if (dctcp_clamp_alpha_on_loss && new_state == TCP_CA_Loss) {
+ struct dctcp *ca = inet_csk_ca(sk);
+
+ /* If this extension is enabled, we clamp dctcp_alpha to
+ * max on packet loss; the motivation is that dctcp_alpha
+ * is an indicator of the extent of congestion and packet
+ * loss is an indicator of extreme congestion; setting
+ * this in practice turned out to be beneficial, and
+ * effectively assumes total congestion which reduces the
+ * window by half.
+ */
+ ca->dctcp_alpha = DCTCP_MAX_ALPHA;
+ }
+}
+
+static void dctcp_update_ack_reserved(struct sock *sk, enum tcp_ca_event ev)
+{
+ struct dctcp *ca = inet_csk_ca(sk);
+
+ switch (ev) {
+ case CA_EVENT_DELAYED_ACK:
+ if (!ca->delayed_ack_reserved)
+ ca->delayed_ack_reserved = 1;
+ break;
+ case CA_EVENT_NON_DELAYED_ACK:
+ if (ca->delayed_ack_reserved)
+ ca->delayed_ack_reserved = 0;
+ break;
+ default:
+ /* Don't care for the rest. */
+ break;
+ }
+}
+
+static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+{
+ switch (ev) {
+ case CA_EVENT_ECN_IS_CE:
+ dctcp_ce_state_0_to_1(sk);
+ break;
+ case CA_EVENT_ECN_NO_CE:
+ dctcp_ce_state_1_to_0(sk);
+ break;
+ case CA_EVENT_DELAYED_ACK:
+ case CA_EVENT_NON_DELAYED_ACK:
+ dctcp_update_ack_reserved(sk, ev);
+ break;
+ default:
+ /* Don't care for the rest. */
+ break;
+ }
+}
+
+static void dctcp_get_info(struct sock *sk, u32 ext, struct sk_buff *skb)
+{
+ const struct dctcp *ca = inet_csk_ca(sk);
+
+ /* Fill it also in case of VEGASINFO due to req struct limits.
+ * We can still correctly retrieve it later.
+ */
+ if (ext & (1 << (INET_DIAG_DCTCPINFO - 1)) ||
+ ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
+ struct tcp_dctcp_info info;
+
+ memset(&info, 0, sizeof(info));
+ if (inet_csk(sk)->icsk_ca_ops != &dctcp_reno) {
+ info.dctcp_enabled = 1;
+ info.dctcp_ce_state = (u16) ca->ce_state;
+ info.dctcp_alpha = ca->dctcp_alpha;
+ info.dctcp_ab_ecn = ca->acked_bytes_ecn;
+ info.dctcp_ab_tot = ca->acked_bytes_total;
+ }
+
+ nla_put(skb, INET_DIAG_DCTCPINFO, sizeof(info), &info);
+ }
+}
+
+static struct tcp_congestion_ops dctcp __read_mostly = {
+ .init = dctcp_init,
+ .in_ack_event = dctcp_update_alpha,
+ .cwnd_event = dctcp_cwnd_event,
+ .ssthresh = dctcp_ssthresh,
+ .cong_avoid = tcp_reno_cong_avoid,
+ .set_state = dctcp_state,
+ .get_info = dctcp_get_info,
+ .flags = TCP_CONG_NEEDS_ECN,
+ .owner = THIS_MODULE,
+ .name = "dctcp",
+};
+
+static struct tcp_congestion_ops dctcp_reno __read_mostly = {
+ .ssthresh = tcp_reno_ssthresh,
+ .cong_avoid = tcp_reno_cong_avoid,
+ .get_info = dctcp_get_info,
+ .owner = THIS_MODULE,
+ .name = "dctcp-reno",
+};
+
+static int __init dctcp_register(void)
+{
+ BUILD_BUG_ON(sizeof(struct dctcp) > ICSK_CA_PRIV_SIZE);
+ return tcp_register_congestion_control(&dctcp);
+}
+
+static void __exit dctcp_unregister(void)
+{
+ tcp_unregister_congestion_control(&dctcp);
+}
+
+module_init(dctcp_register);
+module_exit(dctcp_unregister);
+
+MODULE_AUTHOR("Daniel Borkmann <dborkman@redhat.com>");
+MODULE_AUTHOR("Florian Westphal <fw@strlen.de>");
+MODULE_AUTHOR("Glenn Judd <glenn.judd@morganstanley.com>");
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("DataCenter TCP (DCTCP)");
diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
index 9771563..815c85e 100644
--- a/net/ipv4/tcp_fastopen.c
+++ b/net/ipv4/tcp_fastopen.c
@@ -115,7 +115,7 @@
if (__tcp_fastopen_cookie_gen(&ip6h->saddr, &tmp)) {
struct in6_addr *buf = (struct in6_addr *) tmp.val;
- int i = 4;
+ int i;
for (i = 0; i < 4; i++)
buf->s6_addr32[i] ^= ip6h->daddr.s6_addr32[i];
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index f97003a..aa38f98 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -201,28 +201,25 @@
return icsk->icsk_ack.quick && !icsk->icsk_ack.pingpong;
}
-static inline void TCP_ECN_queue_cwr(struct tcp_sock *tp)
+static void tcp_ecn_queue_cwr(struct tcp_sock *tp)
{
if (tp->ecn_flags & TCP_ECN_OK)
tp->ecn_flags |= TCP_ECN_QUEUE_CWR;
}
-static inline void TCP_ECN_accept_cwr(struct tcp_sock *tp, const struct sk_buff *skb)
+static void tcp_ecn_accept_cwr(struct tcp_sock *tp, const struct sk_buff *skb)
{
if (tcp_hdr(skb)->cwr)
tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
}
-static inline void TCP_ECN_withdraw_cwr(struct tcp_sock *tp)
+static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp)
{
tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
}
-static inline void TCP_ECN_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
+static void __tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
{
- if (!(tp->ecn_flags & TCP_ECN_OK))
- return;
-
switch (TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK) {
case INET_ECN_NOT_ECT:
/* Funny extension: if ECT is not set on a segment,
@@ -233,30 +230,43 @@
tcp_enter_quickack_mode((struct sock *)tp);
break;
case INET_ECN_CE:
+ if (tcp_ca_needs_ecn((struct sock *)tp))
+ tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_IS_CE);
+
if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
/* Better not delay acks, sender can have a very low cwnd */
tcp_enter_quickack_mode((struct sock *)tp);
tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
}
- /* fallinto */
- default:
tp->ecn_flags |= TCP_ECN_SEEN;
+ break;
+ default:
+ if (tcp_ca_needs_ecn((struct sock *)tp))
+ tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_NO_CE);
+ tp->ecn_flags |= TCP_ECN_SEEN;
+ break;
}
}
-static inline void TCP_ECN_rcv_synack(struct tcp_sock *tp, const struct tcphdr *th)
+static void tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
+{
+ if (tp->ecn_flags & TCP_ECN_OK)
+ __tcp_ecn_check_ce(tp, skb);
+}
+
+static void tcp_ecn_rcv_synack(struct tcp_sock *tp, const struct tcphdr *th)
{
if ((tp->ecn_flags & TCP_ECN_OK) && (!th->ece || th->cwr))
tp->ecn_flags &= ~TCP_ECN_OK;
}
-static inline void TCP_ECN_rcv_syn(struct tcp_sock *tp, const struct tcphdr *th)
+static void tcp_ecn_rcv_syn(struct tcp_sock *tp, const struct tcphdr *th)
{
if ((tp->ecn_flags & TCP_ECN_OK) && (!th->ece || !th->cwr))
tp->ecn_flags &= ~TCP_ECN_OK;
}
-static bool TCP_ECN_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr *th)
+static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr *th)
{
if (th->ece && !th->syn && (tp->ecn_flags & TCP_ECN_OK))
return true;
@@ -653,7 +663,7 @@
}
icsk->icsk_ack.lrcvtime = now;
- TCP_ECN_check_ce(tp, skb);
+ tcp_ecn_check_ce(tp, skb);
if (skb->len >= 128)
tcp_grow_window(sk, skb);
@@ -1295,9 +1305,9 @@
TCP_SKB_CB(prev)->end_seq += shifted;
TCP_SKB_CB(skb)->seq += shifted;
- skb_shinfo(prev)->gso_segs += pcount;
- BUG_ON(skb_shinfo(skb)->gso_segs < pcount);
- skb_shinfo(skb)->gso_segs -= pcount;
+ tcp_skb_pcount_add(prev, pcount);
+ BUG_ON(tcp_skb_pcount(skb) < pcount);
+ tcp_skb_pcount_add(skb, -pcount);
/* When we're adding to gso_segs == 1, gso_size will be zero,
* in theory this shouldn't be necessary but as long as DSACK
@@ -1310,7 +1320,7 @@
}
/* CHECKME: To clear or not to clear? Mimics normal skb currently */
- if (skb_shinfo(skb)->gso_segs <= 1) {
+ if (tcp_skb_pcount(skb) <= 1) {
skb_shinfo(skb)->gso_size = 0;
skb_shinfo(skb)->gso_type = 0;
}
@@ -1969,7 +1979,7 @@
sysctl_tcp_reordering);
tcp_set_ca_state(sk, TCP_CA_Loss);
tp->high_seq = tp->snd_nxt;
- TCP_ECN_queue_cwr(tp);
+ tcp_ecn_queue_cwr(tp);
/* F-RTO RFC5682 sec 3.1 step 1: retransmit SND.UNA if no previous
* loss recovery is underway except recurring timeout(s) on
@@ -2361,7 +2371,7 @@
if (tp->prior_ssthresh > tp->snd_ssthresh) {
tp->snd_ssthresh = tp->prior_ssthresh;
- TCP_ECN_withdraw_cwr(tp);
+ tcp_ecn_withdraw_cwr(tp);
}
} else {
tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh);
@@ -2491,7 +2501,7 @@
tp->prr_delivered = 0;
tp->prr_out = 0;
tp->snd_ssthresh = inet_csk(sk)->icsk_ca_ops->ssthresh(sk);
- TCP_ECN_queue_cwr(tp);
+ tcp_ecn_queue_cwr(tp);
}
static void tcp_cwnd_reduction(struct sock *sk, const int prior_unsacked,
@@ -3208,9 +3218,10 @@
* This function is not for random using!
*/
} else {
+ unsigned long when = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
+
inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
- min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
- TCP_RTO_MAX);
+ when, TCP_RTO_MAX);
}
}
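
This hunk and the tcp_ipv4.c, tcp_output.c and tcp_timer.c hunks below all route the icsk_rto << icsk_backoff computation through inet_csk_rto_backoff(). The helper itself is outside this excerpt; judging from the call sites it must behave roughly like the following, with the u64 widening needed so a large backoff cannot overflow the shift (a sketch inferred from usage, not a verbatim quote of the header):

        static inline unsigned long
        inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
                             unsigned long max_when)
        {
                u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;

                return (unsigned long)min_t(u64, when, max_when);
        }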
@@ -3361,6 +3372,14 @@
}
}
+static inline void tcp_in_ack_event(struct sock *sk, u32 flags)
+{
+ const struct inet_connection_sock *icsk = inet_csk(sk);
+
+ if (icsk->icsk_ca_ops->in_ack_event)
+ icsk->icsk_ca_ops->in_ack_event(sk, flags);
+}
+
/* This routine deals with incoming acks, but not outgoing ones. */
static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
{
@@ -3420,10 +3439,12 @@
tp->snd_una = ack;
flag |= FLAG_WIN_UPDATE;
- tcp_ca_event(sk, CA_EVENT_FAST_ACK);
+ tcp_in_ack_event(sk, CA_ACK_WIN_UPDATE);
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPACKS);
} else {
+ u32 ack_ev_flags = CA_ACK_SLOWPATH;
+
if (ack_seq != TCP_SKB_CB(skb)->end_seq)
flag |= FLAG_DATA;
else
@@ -3435,10 +3456,15 @@
flag |= tcp_sacktag_write_queue(sk, skb, prior_snd_una,
&sack_rtt_us);
- if (TCP_ECN_rcv_ecn_echo(tp, tcp_hdr(skb)))
+ if (tcp_ecn_rcv_ecn_echo(tp, tcp_hdr(skb))) {
flag |= FLAG_ECE;
+ ack_ev_flags |= CA_ACK_ECE;
+ }
- tcp_ca_event(sk, CA_EVENT_SLOW_ACK);
+ if (flag & FLAG_WIN_UPDATE)
+ ack_ev_flags |= CA_ACK_WIN_UPDATE;
+
+ tcp_in_ack_event(sk, ack_ev_flags);
}
/* We passed data and got it acked, remove any soft error
@@ -4060,6 +4086,44 @@
tp->rx_opt.num_sacks = num_sacks;
}
+/**
+ * tcp_try_coalesce - try to merge skb to prior one
+ * @sk: socket
+ * @to: prior buffer
+ * @from: buffer to add in queue
+ * @fragstolen: pointer to boolean
+ *
+ * Before queueing skb @from after @to, try to merge them
+ * to reduce overall memory use and queue lengths, if cost is small.
+ * Packets in ofo or receive queues can stay a long time.
+ * Better try to coalesce them right now to avoid future collapses.
+ * Returns true if caller should free @from instead of queueing it
+ */
+static bool tcp_try_coalesce(struct sock *sk,
+ struct sk_buff *to,
+ struct sk_buff *from,
+ bool *fragstolen)
+{
+ int delta;
+
+ *fragstolen = false;
+
+ /* It's possible this segment overlaps with a prior segment in the queue */
+ if (TCP_SKB_CB(from)->seq != TCP_SKB_CB(to)->end_seq)
+ return false;
+
+ if (!skb_try_coalesce(to, from, fragstolen, &delta))
+ return false;
+
+ atomic_add(delta, &sk->sk_rmem_alloc);
+ sk_mem_charge(sk, delta);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRCVCOALESCE);
+ TCP_SKB_CB(to)->end_seq = TCP_SKB_CB(from)->end_seq;
+ TCP_SKB_CB(to)->ack_seq = TCP_SKB_CB(from)->ack_seq;
+ TCP_SKB_CB(to)->tcp_flags |= TCP_SKB_CB(from)->tcp_flags;
+ return true;
+}
+
/* This one checks to see if we can put data from the
* out_of_order queue into the receive_queue.
*/
@@ -4067,7 +4131,8 @@
{
struct tcp_sock *tp = tcp_sk(sk);
__u32 dsack_high = tp->rcv_nxt;
- struct sk_buff *skb;
+ struct sk_buff *skb, *tail;
+ bool fragstolen, eaten;
while ((skb = skb_peek(&tp->out_of_order_queue)) != NULL) {
if (after(TCP_SKB_CB(skb)->seq, tp->rcv_nxt))
@@ -4080,9 +4145,9 @@
tcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq, dsack);
}
+ __skb_unlink(skb, &tp->out_of_order_queue);
if (!after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt)) {
SOCK_DEBUG(sk, "ofo packet was already received\n");
- __skb_unlink(skb, &tp->out_of_order_queue);
__kfree_skb(skb);
continue;
}
@@ -4090,11 +4155,15 @@
tp->rcv_nxt, TCP_SKB_CB(skb)->seq,
TCP_SKB_CB(skb)->end_seq);
- __skb_unlink(skb, &tp->out_of_order_queue);
- __skb_queue_tail(&sk->sk_receive_queue, skb);
+ tail = skb_peek_tail(&sk->sk_receive_queue);
+ eaten = tail && tcp_try_coalesce(sk, tail, skb, &fragstolen);
tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
- if (tcp_hdr(skb)->fin)
+ if (!eaten)
+ __skb_queue_tail(&sk->sk_receive_queue, skb);
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
tcp_fin(sk);
+ if (eaten)
+ kfree_skb_partial(skb, fragstolen);
}
}
@@ -4121,53 +4190,13 @@
return 0;
}
-/**
- * tcp_try_coalesce - try to merge skb to prior one
- * @sk: socket
- * @to: prior buffer
- * @from: buffer to add in queue
- * @fragstolen: pointer to boolean
- *
- * Before queueing skb @from after @to, try to merge them
- * to reduce overall memory use and queue lengths, if cost is small.
- * Packets in ofo or receive queues can stay a long time.
- * Better try to coalesce them right now to avoid future collapses.
- * Returns true if caller should free @from instead of queueing it
- */
-static bool tcp_try_coalesce(struct sock *sk,
- struct sk_buff *to,
- struct sk_buff *from,
- bool *fragstolen)
-{
- int delta;
-
- *fragstolen = false;
-
- if (tcp_hdr(from)->fin)
- return false;
-
- /* Its possible this segment overlaps with prior segment in queue */
- if (TCP_SKB_CB(from)->seq != TCP_SKB_CB(to)->end_seq)
- return false;
-
- if (!skb_try_coalesce(to, from, fragstolen, &delta))
- return false;
-
- atomic_add(delta, &sk->sk_rmem_alloc);
- sk_mem_charge(sk, delta);
- NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRCVCOALESCE);
- TCP_SKB_CB(to)->end_seq = TCP_SKB_CB(from)->end_seq;
- TCP_SKB_CB(to)->ack_seq = TCP_SKB_CB(from)->ack_seq;
- return true;
-}
-
static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
{
struct tcp_sock *tp = tcp_sk(sk);
struct sk_buff *skb1;
u32 seq, end_seq;
- TCP_ECN_check_ce(tp, skb);
+ tcp_ecn_check_ce(tp, skb);
if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFODROP);
@@ -4306,24 +4335,19 @@
int tcp_send_rcvq(struct sock *sk, struct msghdr *msg, size_t size)
{
- struct sk_buff *skb = NULL;
- struct tcphdr *th;
+ struct sk_buff *skb;
bool fragstolen;
if (size == 0)
return 0;
- skb = alloc_skb(size + sizeof(*th), sk->sk_allocation);
+ skb = alloc_skb(size, sk->sk_allocation);
if (!skb)
goto err;
- if (tcp_try_rmem_schedule(sk, skb, size + sizeof(*th)))
+ if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
goto err_free;
- th = (struct tcphdr *)skb_put(skb, sizeof(*th));
- skb_reset_transport_header(skb);
- memset(th, 0, sizeof(*th));
-
if (memcpy_fromiovec(skb_put(skb, size), msg->msg_iov, size))
goto err_free;
@@ -4331,7 +4355,7 @@
TCP_SKB_CB(skb)->end_seq = TCP_SKB_CB(skb)->seq + size;
TCP_SKB_CB(skb)->ack_seq = tcp_sk(sk)->snd_una - 1;
- if (tcp_queue_rcv(sk, skb, sizeof(*th), &fragstolen)) {
+ if (tcp_queue_rcv(sk, skb, 0, &fragstolen)) {
WARN_ON_ONCE(fragstolen); /* should not happen */
__kfree_skb(skb);
}
@@ -4345,7 +4369,6 @@
static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
{
- const struct tcphdr *th = tcp_hdr(skb);
struct tcp_sock *tp = tcp_sk(sk);
int eaten = -1;
bool fragstolen = false;
@@ -4354,9 +4377,9 @@
goto drop;
skb_dst_drop(skb);
- __skb_pull(skb, th->doff * 4);
+ __skb_pull(skb, tcp_hdr(skb)->doff * 4);
- TCP_ECN_accept_cwr(tp, skb);
+ tcp_ecn_accept_cwr(tp, skb);
tp->rx_opt.dsack = 0;
@@ -4398,7 +4421,7 @@
tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
if (skb->len)
tcp_event_data_recv(sk, skb);
- if (th->fin)
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
tcp_fin(sk);
if (!skb_queue_empty(&tp->out_of_order_queue)) {
@@ -4513,7 +4536,7 @@
* - bloated or contains data before "start" or
* overlaps to the next one.
*/
- if (!tcp_hdr(skb)->syn && !tcp_hdr(skb)->fin &&
+ if (!(TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) &&
(tcp_win_from_space(skb->truesize) > skb->len ||
before(TCP_SKB_CB(skb)->seq, start))) {
end_of_skbs = false;
@@ -4532,30 +4555,18 @@
/* Decided to skip this, advance start seq. */
start = TCP_SKB_CB(skb)->end_seq;
}
- if (end_of_skbs || tcp_hdr(skb)->syn || tcp_hdr(skb)->fin)
+ if (end_of_skbs ||
+ (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
return;
while (before(start, end)) {
+ int copy = min_t(int, SKB_MAX_ORDER(0, 0), end - start);
struct sk_buff *nskb;
- unsigned int header = skb_headroom(skb);
- int copy = SKB_MAX_ORDER(header, 0);
- /* Too big header? This can happen with IPv6. */
- if (copy < 0)
- return;
- if (end - start < copy)
- copy = end - start;
- nskb = alloc_skb(copy + header, GFP_ATOMIC);
+ nskb = alloc_skb(copy, GFP_ATOMIC);
if (!nskb)
return;
- skb_set_mac_header(nskb, skb_mac_header(skb) - skb->head);
- skb_set_network_header(nskb, (skb_network_header(skb) -
- skb->head));
- skb_set_transport_header(nskb, (skb_transport_header(skb) -
- skb->head));
- skb_reserve(nskb, header);
- memcpy(nskb->head, skb->head, header);
memcpy(nskb->cb, skb->cb, sizeof(skb->cb));
TCP_SKB_CB(nskb)->seq = TCP_SKB_CB(nskb)->end_seq = start;
__skb_queue_before(list, skb, nskb);
@@ -4579,8 +4590,7 @@
skb = tcp_collapse_one(sk, skb, list);
if (!skb ||
skb == tail ||
- tcp_hdr(skb)->syn ||
- tcp_hdr(skb)->fin)
+ (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
return;
}
}
@@ -5450,7 +5460,7 @@
* state to ESTABLISHED..."
*/
- TCP_ECN_rcv_synack(tp, th);
+ tcp_ecn_rcv_synack(tp, th);
tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
tcp_ack(sk, skb, FLAG_SLOWPATH);
@@ -5569,7 +5579,7 @@
tp->snd_wl1 = TCP_SKB_CB(skb)->seq;
tp->max_window = tp->snd_wnd;
- TCP_ECN_rcv_syn(tp, th);
+ tcp_ecn_rcv_syn(tp, th);
tcp_mtup_init(sk);
tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
@@ -5899,6 +5909,40 @@
#endif
}
+/* RFC 3168, section 6.1.1: SYN packets must not have ECT/ECN bits set
+ *
+ * If we receive a SYN packet with these bits set, it means a
+ * network is playing bad games with TOS bits. In order to
+ * avoid possible false congestion notifications, we disable
+ * TCP ECN negotiation.
+ *
+ * Exception: tcp_ca wants ECN. This is required for DCTCP
+ * congestion control; it requires setting ECT on all packets,
+ * including SYN. We invert the test in this case: if our
+ * local socket wants ECN but the peer only set ECE/CWR (and
+ * not ECT in the IP header), it is probably a non-DCTCP-aware
+ * sender.
+ */
+static void tcp_ecn_create_request(struct request_sock *req,
+ const struct sk_buff *skb,
+ const struct sock *listen_sk)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+ const struct net *net = sock_net(listen_sk);
+ bool th_ecn = th->ece && th->cwr;
+ bool ect, need_ecn;
+
+ if (!th_ecn)
+ return;
+
+ ect = !INET_ECN_is_not_ect(TCP_SKB_CB(skb)->ip_dsfield);
+ need_ecn = tcp_ca_needs_ecn(listen_sk);
+
+ if (!ect && !need_ecn && net->ipv4.sysctl_tcp_ecn)
+ inet_rsk(req)->ecn_ok = 1;
+ else if (ect && need_ecn)
+ inet_rsk(req)->ecn_ok = 1;
+}
+
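
For a freshly zeroed request, the if/else chain above condenses to a single predicate over the same locals:

        inet_rsk(req)->ecn_ok =
                th_ecn && ((ect && need_ecn) ||
                           (!ect && !need_ecn && net->ipv4.sysctl_tcp_ecn));

That is: a classic RFC 3168 peer (ECE+CWR set but no ECT on the SYN) is accepted only when the sysctl enables ECN, while an ECT-marked SYN is accepted exactly when the listener's congestion control declared that it needs ECN, as DCTCP does.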
int tcp_conn_request(struct request_sock_ops *rsk_ops,
const struct tcp_request_sock_ops *af_ops,
struct sock *sk, struct sk_buff *skb)
@@ -5959,7 +6003,7 @@
goto drop_and_free;
if (!want_cookie || tmp_opt.tstamp_ok)
- TCP_ECN_create_request(req, skb, sock_net(sk));
+ tcp_ecn_create_request(req, skb, sk);
if (want_cookie) {
isn = cookie_init_sequence(af_ops, sk, skb, &req->mss);
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 7881b96..9ce3eac 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -430,9 +430,9 @@
break;
icsk->icsk_backoff--;
- inet_csk(sk)->icsk_rto = (tp->srtt_us ? __tcp_set_rto(tp) :
- TCP_TIMEOUT_INIT) << icsk->icsk_backoff;
- tcp_bound_rto(sk);
+ icsk->icsk_rto = tp->srtt_us ? __tcp_set_rto(tp) :
+ TCP_TIMEOUT_INIT;
+ icsk->icsk_rto = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
skb = tcp_write_queue_head(sk);
BUG_ON(!skb);
@@ -681,8 +681,9 @@
net = dev_net(skb_dst(skb)->dev);
arg.tos = ip_hdr(skb)->tos;
- ip_send_unicast_reply(net, skb, ip_hdr(skb)->saddr,
- ip_hdr(skb)->daddr, &arg, arg.iov[0].iov_len);
+ ip_send_unicast_reply(net, skb, &TCP_SKB_CB(skb)->header.h4.opt,
+ ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
+ &arg, arg.iov[0].iov_len);
TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
TCP_INC_STATS_BH(net, TCP_MIB_OUTRSTS);
@@ -764,8 +765,9 @@
if (oif)
arg.bound_dev_if = oif;
arg.tos = tos;
- ip_send_unicast_reply(net, skb, ip_hdr(skb)->saddr,
- ip_hdr(skb)->daddr, &arg, arg.iov[0].iov_len);
+ ip_send_unicast_reply(net, skb, &TCP_SKB_CB(skb)->header.h4.opt,
+ ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
+ &arg, arg.iov[0].iov_len);
TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
}
@@ -884,18 +886,16 @@
*/
static struct ip_options_rcu *tcp_v4_save_options(struct sk_buff *skb)
{
- const struct ip_options *opt = &(IPCB(skb)->opt);
+ const struct ip_options *opt = &TCP_SKB_CB(skb)->header.h4.opt;
struct ip_options_rcu *dopt = NULL;
if (opt && opt->optlen) {
int opt_size = sizeof(*dopt) + opt->optlen;
dopt = kmalloc(opt_size, GFP_ATOMIC);
- if (dopt) {
- if (ip_options_echo(&dopt->opt, skb)) {
- kfree(dopt);
- dopt = NULL;
- }
+ if (dopt && __ip_options_echo(&dopt->opt, skb, opt)) {
+ kfree(dopt);
+ dopt = NULL;
}
}
return dopt;
@@ -1429,7 +1429,7 @@
#ifdef CONFIG_SYN_COOKIES
if (!th->syn)
- sk = cookie_v4_check(sk, skb, &(IPCB(skb)->opt));
+ sk = cookie_v4_check(sk, skb, &TCP_SKB_CB(skb)->header.h4.opt);
#endif
return sk;
}
@@ -1634,10 +1634,18 @@
th = tcp_hdr(skb);
iph = ip_hdr(skb);
+ /* This is tricky: we move the IPCB to its correct location inside
+ * TCP_SKB_CB(); barrier() makes sure the compiler won't play
+ * fool^Waliasing games.
+ */
+ memmove(&TCP_SKB_CB(skb)->header.h4, IPCB(skb),
+ sizeof(struct inet_skb_parm));
+ barrier();
+
TCP_SKB_CB(skb)->seq = ntohl(th->seq);
TCP_SKB_CB(skb)->end_seq = (TCP_SKB_CB(skb)->seq + th->syn + th->fin +
skb->len - th->doff * 4);
TCP_SKB_CB(skb)->ack_seq = ntohl(th->ack_seq);
+ TCP_SKB_CB(skb)->tcp_flags = tcp_flag_byte(th);
TCP_SKB_CB(skb)->tcp_tw_isn = 0;
TCP_SKB_CB(skb)->ip_dsfield = ipv4_get_dsfield(iph);
TCP_SKB_CB(skb)->sacked = 0;
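
tcp_flag_byte() is not shown in this excerpt; it presumably extracts byte 13 of the TCP header, the one carrying the FIN/SYN/RST/PSH/ACK/URG/ECE/CWR bits, along the lines of

        #define tcp_flag_byte(th) (((u8 *)(th))[13])

Caching that byte here is what lets the many tcp_hdr(skb)->fin and ->syn tests earlier in this patch become TCP_SKB_CB(skb)->tcp_flags & TCPHDR_* checks that never touch the header again.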
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index a058f41..63d2680 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -393,8 +393,8 @@
}
EXPORT_SYMBOL(tcp_openreq_init_rwin);
-static inline void TCP_ECN_openreq_child(struct tcp_sock *tp,
- struct request_sock *req)
+static void tcp_ecn_openreq_child(struct tcp_sock *tp,
+ const struct request_sock *req)
{
tp->ecn_flags = inet_rsk(req)->ecn_ok ? TCP_ECN_OK : 0;
}
@@ -451,9 +451,8 @@
newtp->snd_cwnd = TCP_INIT_CWND;
newtp->snd_cwnd_cnt = 0;
- if (newicsk->icsk_ca_ops != &tcp_init_congestion_ops &&
- !try_module_get(newicsk->icsk_ca_ops->owner))
- newicsk->icsk_ca_ops = &tcp_init_congestion_ops;
+ if (!try_module_get(newicsk->icsk_ca_ops->owner))
+ tcp_assign_congestion_control(newsk);
tcp_set_ca_state(newsk, TCP_CA_Open);
tcp_init_xmit_timers(newsk);
@@ -508,7 +507,7 @@
if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len)
newicsk->icsk_ack.last_seg_size = skb->len - newtp->tcp_header_len;
newtp->rx_opt.mss_clamp = req->mss;
- TCP_ECN_openreq_child(newtp, req);
+ tcp_ecn_openreq_child(newtp, req);
newtp->fastopen_rsk = NULL;
newtp->syn_data_acked = 0;
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
index 7291253..5b90f2f 100644
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -29,6 +29,28 @@
}
}
+struct sk_buff *tcp4_gso_segment(struct sk_buff *skb,
+ netdev_features_t features)
+{
+ if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
+ return ERR_PTR(-EINVAL);
+
+ if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
+ const struct iphdr *iph = ip_hdr(skb);
+ struct tcphdr *th = tcp_hdr(skb);
+
+ /* Set up the checksum pseudo header; we usually expect the
+ * stack to have done this already.
+ */
+
+ th->check = 0;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ __tcp_v4_send_check(skb, iph->saddr, iph->daddr);
+ }
+
+ return tcp_gso_segment(skb, features);
+}
+
struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
netdev_features_t features)
{
@@ -44,9 +66,6 @@
__sum16 newcheck;
bool ooo_okay, copy_destructor;
- if (!pskb_may_pull(skb, sizeof(*th)))
- goto out;
-
th = tcp_hdr(skb);
thlen = th->doff * 4;
if (thlen < sizeof(*th))
@@ -269,23 +288,6 @@
}
EXPORT_SYMBOL(tcp_gro_complete);
-static int tcp_v4_gso_send_check(struct sk_buff *skb)
-{
- const struct iphdr *iph;
- struct tcphdr *th;
-
- if (!pskb_may_pull(skb, sizeof(*th)))
- return -EINVAL;
-
- iph = ip_hdr(skb);
- th = tcp_hdr(skb);
-
- th->check = 0;
- skb->ip_summed = CHECKSUM_PARTIAL;
- __tcp_v4_send_check(skb, iph->saddr, iph->daddr);
- return 0;
-}
-
static struct sk_buff **tcp4_gro_receive(struct sk_buff **head, struct sk_buff *skb)
{
/* Don't bother verifying checksum if we're going to flush anyway. */
@@ -313,8 +315,7 @@
static const struct net_offload tcpv4_offload = {
.callbacks = {
- .gso_send_check = tcp_v4_gso_send_check,
- .gso_segment = tcp_gso_segment,
+ .gso_segment = tcp4_gso_segment,
.gro_receive = tcp4_gro_receive,
.gro_complete = tcp4_gro_complete,
},
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 7f1280d..ee567e9 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -318,36 +318,47 @@
}
/* Packet ECN state for a SYN-ACK */
-static inline void TCP_ECN_send_synack(const struct tcp_sock *tp, struct sk_buff *skb)
+static void tcp_ecn_send_synack(struct sock *sk, struct sk_buff *skb)
{
+ const struct tcp_sock *tp = tcp_sk(sk);
+
TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_CWR;
if (!(tp->ecn_flags & TCP_ECN_OK))
TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_ECE;
+ else if (tcp_ca_needs_ecn(sk))
+ INET_ECN_xmit(sk);
}
/* Packet ECN state for a SYN. */
-static inline void TCP_ECN_send_syn(struct sock *sk, struct sk_buff *skb)
+static void tcp_ecn_send_syn(struct sock *sk, struct sk_buff *skb)
{
struct tcp_sock *tp = tcp_sk(sk);
tp->ecn_flags = 0;
- if (sock_net(sk)->ipv4.sysctl_tcp_ecn == 1) {
+ if (sock_net(sk)->ipv4.sysctl_tcp_ecn == 1 ||
+ tcp_ca_needs_ecn(sk)) {
TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_ECE | TCPHDR_CWR;
tp->ecn_flags = TCP_ECN_OK;
+ if (tcp_ca_needs_ecn(sk))
+ INET_ECN_xmit(sk);
}
}
-static __inline__ void
-TCP_ECN_make_synack(const struct request_sock *req, struct tcphdr *th)
+static void
+tcp_ecn_make_synack(const struct request_sock *req, struct tcphdr *th,
+ struct sock *sk)
{
- if (inet_rsk(req)->ecn_ok)
+ if (inet_rsk(req)->ecn_ok) {
th->ece = 1;
+ if (tcp_ca_needs_ecn(sk))
+ INET_ECN_xmit(sk);
+ }
}
/* Set up ECN state for a packet on a ESTABLISHED socket that is about to
* be sent.
*/
-static inline void TCP_ECN_send(struct sock *sk, struct sk_buff *skb,
+static void tcp_ecn_send(struct sock *sk, struct sk_buff *skb,
int tcp_header_len)
{
struct tcp_sock *tp = tcp_sk(sk);
@@ -362,7 +373,7 @@
tcp_hdr(skb)->cwr = 1;
skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
}
- } else {
+ } else if (!tcp_ca_needs_ecn(sk)) {
/* ACK or retransmitted segment: clear ECT|CE */
INET_ECN_dontxmit(sk);
}
@@ -384,7 +395,7 @@
TCP_SKB_CB(skb)->tcp_flags = flags;
TCP_SKB_CB(skb)->sacked = 0;
- shinfo->gso_segs = 1;
+ tcp_skb_pcount_set(skb, 1);
shinfo->gso_size = 0;
shinfo->gso_type = 0;
@@ -949,7 +960,7 @@
tcp_options_write((__be32 *)(th + 1), tp, &opts);
if (likely((tcb->tcp_flags & TCPHDR_SYN) == 0))
- TCP_ECN_send(sk, skb, tcp_header_size);
+ tcp_ecn_send(sk, skb, tcp_header_size);
#ifdef CONFIG_TCP_MD5SIG
/* Calculate the MD5 hash, as we have all we need now */
@@ -972,8 +983,16 @@
TCP_ADD_STATS(sock_net(sk), TCP_MIB_OUTSEGS,
tcp_skb_pcount(skb));
+ /* OK, it's time to fill skb_shinfo(skb)->gso_segs */
+ skb_shinfo(skb)->gso_segs = tcp_skb_pcount(skb);
+
/* Our usage of tstamp should remain private */
skb->tstamp.tv64 = 0;
+
+ /* Cleanup our debris for IP stacks */
+ memset(skb->cb, 0, max(sizeof(struct inet_skb_parm),
+ sizeof(struct inet6_skb_parm)));
+
err = icsk->icsk_af_ops->queue_xmit(sk, skb, &inet->cork.fl);
if (likely(err <= 0))
@@ -995,7 +1014,7 @@
/* Advance write_seq and place onto the write_queue. */
tp->write_seq = TCP_SKB_CB(skb)->end_seq;
- skb_header_release(skb);
+ __skb_header_release(skb);
tcp_add_write_queue_tail(sk, skb);
sk->sk_wmem_queued += skb->truesize;
sk_mem_charge(sk, skb->truesize);
@@ -1014,11 +1033,11 @@
/* Avoid the costly divide in the normal
* non-TSO case.
*/
- shinfo->gso_segs = 1;
+ tcp_skb_pcount_set(skb, 1);
shinfo->gso_size = 0;
shinfo->gso_type = 0;
} else {
- shinfo->gso_segs = DIV_ROUND_UP(skb->len, mss_now);
+ tcp_skb_pcount_set(skb, DIV_ROUND_UP(skb->len, mss_now));
shinfo->gso_size = mss_now;
shinfo->gso_type = sk->sk_gso_type;
}
@@ -1167,7 +1186,7 @@
}
/* Link BUFF into the send queue. */
- skb_header_release(buff);
+ __skb_header_release(buff);
tcp_insert_write_queue_after(skb, buff, sk);
return 0;
@@ -1671,7 +1690,7 @@
tcp_set_skb_tso_segs(sk, buff, mss_now);
/* Link BUFF into the send queue. */
- skb_header_release(buff);
+ __skb_header_release(buff);
tcp_insert_write_queue_after(skb, buff, sk);
return 0;
@@ -2772,7 +2791,7 @@
if (nskb == NULL)
return -ENOMEM;
tcp_unlink_write_queue(skb, sk);
- skb_header_release(nskb);
+ __skb_header_release(nskb);
__tcp_add_write_queue_head(sk, nskb);
sk_wmem_free_skb(sk, skb);
sk->sk_wmem_queued += nskb->truesize;
@@ -2781,7 +2800,7 @@
}
TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_ACK;
- TCP_ECN_send_synack(tcp_sk(sk), skb);
+ tcp_ecn_send_synack(sk, skb);
}
return tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC);
}
@@ -2840,7 +2859,7 @@
memset(th, 0, sizeof(struct tcphdr));
th->syn = 1;
th->ack = 1;
- TCP_ECN_make_synack(req, th);
+ tcp_ecn_make_synack(req, th, sk);
th->source = htons(ireq->ir_num);
th->dest = ireq->ir_rmt_port;
/* Setting of flags are superfluous here for callers (and ECE is
@@ -2947,7 +2966,7 @@
struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
tcb->end_seq += skb->len;
- skb_header_release(skb);
+ __skb_header_release(skb);
__tcp_add_write_queue_tail(sk, skb);
sk->sk_wmem_queued += skb->truesize;
sk_mem_charge(sk, skb->truesize);
@@ -3079,7 +3098,7 @@
tcp_init_nondata_skb(buff, tp->write_seq++, TCPHDR_SYN);
tp->retrans_stamp = tcp_time_stamp;
tcp_connect_queue_skb(sk, buff);
- TCP_ECN_send_syn(sk, buff);
+ tcp_ecn_send_syn(sk, buff);
/* Send off SYN; include data in Fast Open. */
err = tp->fastopen_req ? tcp_send_syn_data(sk, buff) :
@@ -3111,6 +3130,8 @@
int ato = icsk->icsk_ack.ato;
unsigned long timeout;
+ tcp_ca_event(sk, CA_EVENT_DELAYED_ACK);
+
if (ato > TCP_DELACK_MIN) {
const struct tcp_sock *tp = tcp_sk(sk);
int max_ato = HZ / 2;
@@ -3167,6 +3188,8 @@
if (sk->sk_state == TCP_CLOSE)
return;
+ tcp_ca_event(sk, CA_EVENT_NON_DELAYED_ACK);
+
/* We are not putting this on the write queue, so
* tcp_transmit_skb() will set the ownership to this
* sock.
@@ -3188,6 +3211,7 @@
skb_mstamp_get(&buff->skb_mstamp);
tcp_transmit_skb(sk, buff, 0, sk_gfp_atomic(sk, GFP_ATOMIC));
}
+EXPORT_SYMBOL_GPL(tcp_send_ack);
/* This routine sends a packet with an out of date sequence
* number. It assumes the other end will try to ack it.
@@ -3279,6 +3303,7 @@
{
struct inet_connection_sock *icsk = inet_csk(sk);
struct tcp_sock *tp = tcp_sk(sk);
+ unsigned long probe_max;
int err;
err = tcp_write_wakeup(sk);
@@ -3294,9 +3319,7 @@
if (icsk->icsk_backoff < sysctl_tcp_retries2)
icsk->icsk_backoff++;
icsk->icsk_probes_out++;
- inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
- min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
- TCP_RTO_MAX);
+ probe_max = TCP_RTO_MAX;
} else {
/* If packet was not sent due to local congestion,
* do not backoff and do not remember icsk_probes_out.
@@ -3306,11 +3329,11 @@
*/
if (!icsk->icsk_probes_out)
icsk->icsk_probes_out = 1;
- inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
- min(icsk->icsk_rto << icsk->icsk_backoff,
- TCP_RESOURCE_PROBE_INTERVAL),
- TCP_RTO_MAX);
+ probe_max = TCP_RESOURCE_PROBE_INTERVAL;
}
+ inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
+ inet_csk_rto_backoff(icsk, probe_max),
+ TCP_RTO_MAX);
}
int tcp_rtx_synack(struct sock *sk, struct request_sock *req)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index a339e7b..b24360f 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -180,7 +180,7 @@
retry_until = sysctl_tcp_retries2;
if (sock_flag(sk, SOCK_DEAD)) {
- const int alive = (icsk->icsk_rto < TCP_RTO_MAX);
+ const int alive = icsk->icsk_rto < TCP_RTO_MAX;
retry_until = tcp_orphan_retries(sk, alive);
do_reset = alive ||
@@ -294,7 +294,7 @@
max_probes = sysctl_tcp_retries2;
if (sock_flag(sk, SOCK_DEAD)) {
- const int alive = ((icsk->icsk_rto << icsk->icsk_backoff) < TCP_RTO_MAX);
+ const int alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX;
max_probes = tcp_orphan_retries(sk, alive);
diff --git a/net/ipv4/tcp_westwood.c b/net/ipv4/tcp_westwood.c
index 81911a9..bb63fba 100644
--- a/net/ipv4/tcp_westwood.c
+++ b/net/ipv4/tcp_westwood.c
@@ -220,32 +220,35 @@
return max_t(u32, (w->bw_est * w->rtt_min) / tp->mss_cache, 2);
}
+static void tcp_westwood_ack(struct sock *sk, u32 ack_flags)
+{
+ if (ack_flags & CA_ACK_SLOWPATH) {
+ struct westwood *w = inet_csk_ca(sk);
+
+ westwood_update_window(sk);
+ w->bk += westwood_acked_count(sk);
+
+ update_rtt_min(w);
+ return;
+ }
+
+ westwood_fast_bw(sk);
+}
+
static void tcp_westwood_event(struct sock *sk, enum tcp_ca_event event)
{
struct tcp_sock *tp = tcp_sk(sk);
struct westwood *w = inet_csk_ca(sk);
switch (event) {
- case CA_EVENT_FAST_ACK:
- westwood_fast_bw(sk);
- break;
-
case CA_EVENT_COMPLETE_CWR:
tp->snd_cwnd = tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk);
break;
-
case CA_EVENT_LOSS:
tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk);
/* Update RTT_min when next ack arrives */
w->reset_rtt_min = 1;
break;
-
- case CA_EVENT_SLOW_ACK:
- westwood_update_window(sk);
- w->bk += westwood_acked_count(sk);
- update_rtt_min(w);
- break;
-
default:
/* don't care */
break;
@@ -274,6 +277,7 @@
.ssthresh = tcp_reno_ssthresh,
.cong_avoid = tcp_reno_cong_avoid,
.cwnd_event = tcp_westwood_event,
+ .in_ack_event = tcp_westwood_ack,
.get_info = tcp_westwood_info,
.pkts_acked = tcp_westwood_pkts_acked,
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 52d5f46..19ebe6a 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -25,28 +25,6 @@
struct udp_offload_priv __rcu *next;
};
-static int udp4_ufo_send_check(struct sk_buff *skb)
-{
- if (!pskb_may_pull(skb, sizeof(struct udphdr)))
- return -EINVAL;
-
- if (likely(!skb->encapsulation)) {
- const struct iphdr *iph;
- struct udphdr *uh;
-
- iph = ip_hdr(skb);
- uh = udp_hdr(skb);
-
- uh->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, skb->len,
- IPPROTO_UDP, 0);
- skb->csum_start = skb_transport_header(skb) - skb->head;
- skb->csum_offset = offsetof(struct udphdr, check);
- skb->ip_summed = CHECKSUM_PARTIAL;
- }
-
- return 0;
-}
-
struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb,
netdev_features_t features)
{
@@ -128,8 +106,9 @@
{
struct sk_buff *segs = ERR_PTR(-EINVAL);
unsigned int mss;
- int offset;
__wsum csum;
+ struct udphdr *uh;
+ struct iphdr *iph;
if (skb->encapsulation &&
(skb_shinfo(skb)->gso_type &
@@ -138,6 +117,9 @@
goto out;
}
+ if (!pskb_may_pull(skb, sizeof(struct udphdr)))
+ goto out;
+
mss = skb_shinfo(skb)->gso_size;
if (unlikely(skb->len <= mss))
goto out;
@@ -165,10 +147,16 @@
* HW cannot do checksum of UDP packets sent as multiple
* IP fragments.
*/
- offset = skb_checksum_start_offset(skb);
- csum = skb_checksum(skb, offset, skb->len - offset, 0);
- offset += skb->csum_offset;
- *(__sum16 *)(skb->data + offset) = csum_fold(csum);
+
+ uh = udp_hdr(skb);
+ iph = ip_hdr(skb);
+
+ uh->check = 0;
+ csum = skb_checksum(skb, 0, skb->len, 0);
+ uh->check = udp_v4_check(skb->len, iph->saddr, iph->daddr, csum);
+ if (uh->check == 0)
+ uh->check = CSUM_MANGLED_0;
+
skb->ip_summed = CHECKSUM_NONE;
/* Fragment the skb. IP headers of the fragments are updated in
@@ -276,6 +264,7 @@
skb_gro_pull(skb, sizeof(struct udphdr)); /* pull encapsulating udp header */
skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr));
+ NAPI_GRO_CB(skb)->proto = uo_priv->offload->ipproto;
pp = uo_priv->offload->callbacks.gro_receive(head, skb);
out_unlock:
@@ -294,7 +283,7 @@
goto flush;
/* Don't bother verifying checksum if we're going to flush anyway. */
- if (!NAPI_GRO_CB(skb)->flush)
+ if (NAPI_GRO_CB(skb)->flush)
goto skip;
if (skb_gro_checksum_validate_zero_check(skb, IPPROTO_UDP, uh->check,
@@ -329,8 +318,10 @@
break;
}
- if (uo_priv != NULL)
+ if (uo_priv != NULL) {
+ NAPI_GRO_CB(skb)->proto = uo_priv->offload->ipproto;
err = uo_priv->offload->callbacks.gro_complete(skb, nhoff + sizeof(struct udphdr));
+ }
rcu_read_unlock();
return err;
@@ -350,7 +341,6 @@
static const struct net_offload udpv4_offload = {
.callbacks = {
- .gso_send_check = udp4_ufo_send_check,
.gso_segment = udp4_ufo_fragment,
.gro_receive = udp4_gro_receive,
.gro_complete = udp4_gro_complete,
diff --git a/net/ipv4/udp_tunnel.c b/net/ipv4/udp_tunnel.c
index 61ec1a6..1671263 100644
--- a/net/ipv4/udp_tunnel.c
+++ b/net/ipv4/udp_tunnel.c
@@ -8,83 +8,40 @@
#include <net/udp_tunnel.h>
#include <net/net_namespace.h>
-int udp_sock_create(struct net *net, struct udp_port_cfg *cfg,
- struct socket **sockp)
+int udp_sock_create4(struct net *net, struct udp_port_cfg *cfg,
+ struct socket **sockp)
{
- int err = -EINVAL;
+ int err;
struct socket *sock = NULL;
+ struct sockaddr_in udp_addr;
-#if IS_ENABLED(CONFIG_IPV6)
- if (cfg->family == AF_INET6) {
- struct sockaddr_in6 udp6_addr;
+ err = sock_create_kern(AF_INET, SOCK_DGRAM, 0, &sock);
+ if (err < 0)
+ goto error;
- err = sock_create_kern(AF_INET6, SOCK_DGRAM, 0, &sock);
- if (err < 0)
- goto error;
+ sk_change_net(sock->sk, net);
- sk_change_net(sock->sk, net);
+ udp_addr.sin_family = AF_INET;
+ udp_addr.sin_addr = cfg->local_ip;
+ udp_addr.sin_port = cfg->local_udp_port;
+ err = kernel_bind(sock, (struct sockaddr *)&udp_addr,
+ sizeof(udp_addr));
+ if (err < 0)
+ goto error;
- udp6_addr.sin6_family = AF_INET6;
- memcpy(&udp6_addr.sin6_addr, &cfg->local_ip6,
- sizeof(udp6_addr.sin6_addr));
- udp6_addr.sin6_port = cfg->local_udp_port;
- err = kernel_bind(sock, (struct sockaddr *)&udp6_addr,
- sizeof(udp6_addr));
- if (err < 0)
- goto error;
-
- if (cfg->peer_udp_port) {
- udp6_addr.sin6_family = AF_INET6;
- memcpy(&udp6_addr.sin6_addr, &cfg->peer_ip6,
- sizeof(udp6_addr.sin6_addr));
- udp6_addr.sin6_port = cfg->peer_udp_port;
- err = kernel_connect(sock,
- (struct sockaddr *)&udp6_addr,
- sizeof(udp6_addr), 0);
- }
- if (err < 0)
- goto error;
-
- udp_set_no_check6_tx(sock->sk, !cfg->use_udp6_tx_checksums);
- udp_set_no_check6_rx(sock->sk, !cfg->use_udp6_rx_checksums);
- } else
-#endif
- if (cfg->family == AF_INET) {
- struct sockaddr_in udp_addr;
-
- err = sock_create_kern(AF_INET, SOCK_DGRAM, 0, &sock);
- if (err < 0)
- goto error;
-
- sk_change_net(sock->sk, net);
-
+ if (cfg->peer_udp_port) {
udp_addr.sin_family = AF_INET;
- udp_addr.sin_addr = cfg->local_ip;
- udp_addr.sin_port = cfg->local_udp_port;
- err = kernel_bind(sock, (struct sockaddr *)&udp_addr,
- sizeof(udp_addr));
+ udp_addr.sin_addr = cfg->peer_ip;
+ udp_addr.sin_port = cfg->peer_udp_port;
+ err = kernel_connect(sock, (struct sockaddr *)&udp_addr,
+ sizeof(udp_addr), 0);
if (err < 0)
goto error;
-
- if (cfg->peer_udp_port) {
- udp_addr.sin_family = AF_INET;
- udp_addr.sin_addr = cfg->peer_ip;
- udp_addr.sin_port = cfg->peer_udp_port;
- err = kernel_connect(sock,
- (struct sockaddr *)&udp_addr,
- sizeof(udp_addr), 0);
- if (err < 0)
- goto error;
- }
-
- sock->sk->sk_no_check_tx = !cfg->use_udp_checksums;
- } else {
- return -EPFNOSUPPORT;
}
+ sock->sk->sk_no_check_tx = !cfg->use_udp_checksums;
*sockp = sock;
-
return 0;
error:
@@ -95,6 +52,57 @@
*sockp = NULL;
return err;
}
-EXPORT_SYMBOL(udp_sock_create);
+EXPORT_SYMBOL(udp_sock_create4);
+
+void setup_udp_tunnel_sock(struct net *net, struct socket *sock,
+ struct udp_tunnel_sock_cfg *cfg)
+{
+ struct sock *sk = sock->sk;
+
+ /* Disable multicast loopback */
+ inet_sk(sk)->mc_loop = 0;
+
+ /* Enable CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE conversion */
+ udp_set_convert_csum(sk, true);
+
+ rcu_assign_sk_user_data(sk, cfg->sk_user_data);
+
+ udp_sk(sk)->encap_type = cfg->encap_type;
+ udp_sk(sk)->encap_rcv = cfg->encap_rcv;
+ udp_sk(sk)->encap_destroy = cfg->encap_destroy;
+
+ udp_tunnel_encap_enable(sock);
+}
+EXPORT_SYMBOL_GPL(setup_udp_tunnel_sock);
+
+int udp_tunnel_xmit_skb(struct socket *sock, struct rtable *rt,
+ struct sk_buff *skb, __be32 src, __be32 dst,
+ __u8 tos, __u8 ttl, __be16 df, __be16 src_port,
+ __be16 dst_port, bool xnet)
+{
+ struct udphdr *uh;
+
+ __skb_push(skb, sizeof(*uh));
+ skb_reset_transport_header(skb);
+ uh = udp_hdr(skb);
+
+ uh->dest = dst_port;
+ uh->source = src_port;
+ uh->len = htons(skb->len);
+
+ udp_set_csum(sock->sk->sk_no_check_tx, skb, src, dst, skb->len);
+
+ return iptunnel_xmit(sock->sk, rt, skb, src, dst, IPPROTO_UDP,
+ tos, ttl, df, xnet);
+}
+EXPORT_SYMBOL_GPL(udp_tunnel_xmit_skb);
+
+void udp_tunnel_sock_release(struct socket *sock)
+{
+ rcu_assign_sk_user_data(sock->sk, NULL);
+ kernel_sock_shutdown(sock, SHUT_RDWR);
+ sk_release_kernel(sock->sk);
+}
+EXPORT_SYMBOL_GPL(udp_tunnel_sock_release);
MODULE_LICENSE("GPL");
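
The new IPv4 helpers above replace the old monolithic udp_sock_create(): a driver creates a kernel socket with udp_sock_create4(), then attaches its encapsulation callbacks with setup_udp_tunnel_sock(). A minimal sketch of a consumer follows; my_priv and my_encap_rcv() are hypothetical driver pieces, not part of this patch:

    /* Hedged sketch of a tunnel driver using the new API above. */
    static int my_tunnel_open(struct net *net, struct my_priv *priv)
    {
            struct udp_port_cfg port_cfg = {
                    .family         = AF_INET,
                    .local_udp_port = htons(4789),   /* example port */
            };
            struct udp_tunnel_sock_cfg tunnel_cfg = {
                    .sk_user_data = priv,
                    .encap_type   = 1,               /* driver-specific id */
                    .encap_rcv    = my_encap_rcv,
            };
            struct socket *sock;
            int err;

            err = udp_sock_create4(net, &port_cfg, &sock);
            if (err < 0)
                    return err;

            setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
            priv->sock = sock;
            return 0;
    }

Teardown is symmetric: udp_tunnel_sock_release() clears sk_user_data, shuts the socket down and releases it.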
diff --git a/net/ipv6/Makefile b/net/ipv6/Makefile
index 2fe6836..2e8c061 100644
--- a/net/ipv6/Makefile
+++ b/net/ipv6/Makefile
@@ -45,3 +45,7 @@
obj-$(CONFIG_INET) += output_core.o protocol.o $(ipv6-offload)
obj-$(subst m,y,$(CONFIG_IPV6)) += inet6_hashtables.o
+
+ifneq ($(CONFIG_IPV6),)
+obj-$(CONFIG_NET_UDP_TUNNEL) += ip6_udp_tunnel.o
+endif
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index ad4598f..e189480 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -1725,7 +1725,7 @@
ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len);
if (ipv6_addr_any(&addr))
return;
- ipv6_dev_ac_inc(ifp->idev->dev, &addr);
+ __ipv6_dev_ac_inc(ifp->idev, &addr);
}
/* caller must hold RTNL */
@@ -2844,6 +2844,9 @@
if (dev->flags & IFF_SLAVE)
break;
+ if (idev && idev->cnf.disable_ipv6)
+ break;
+
if (event == NETDEV_UP) {
if (!addrconf_qdisc_ok(dev)) {
/* device is not ready yet. */
@@ -3094,11 +3097,13 @@
write_unlock_bh(&idev->lock);
- /* Step 5: Discard multicast list */
- if (how)
+ /* Step 5: Discard anycast and multicast list */
+ if (how) {
+ ipv6_ac_destroy_dev(idev);
ipv6_mc_destroy_dev(idev);
- else
+ } else {
ipv6_mc_down(idev);
+ }
idev->tstamp = jiffies;
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index e4865a3..34f726f 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -672,10 +672,10 @@
}
EXPORT_SYMBOL_GPL(inet6_sk_rebuild_header);
-bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb)
+bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb,
+ const struct inet6_skb_parm *opt)
{
const struct ipv6_pinfo *np = inet6_sk(sk);
- const struct inet6_skb_parm *opt = IP6CB(skb);
if (np->rxopt.all) {
if ((opt->hop && (np->rxopt.bits.hopopts ||
diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index fcffd4e..6d16eb0 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -713,8 +713,6 @@
ahp->icv_full_len = aalg_desc->uinfo.auth.icv_fullbits/8;
ahp->icv_trunc_len = x->aalg->alg_trunc_len/8;
- BUG_ON(ahp->icv_trunc_len > MAX_AH_AUTH_LEN);
-
x->props.header_len = XFRM_ALIGN8(sizeof(struct ip_auth_hdr) +
ahp->icv_trunc_len);
switch (x->props.mode) {
diff --git a/net/ipv6/anycast.c b/net/ipv6/anycast.c
index ff2de7d..f5e319a 100644
--- a/net/ipv6/anycast.c
+++ b/net/ipv6/anycast.c
@@ -46,10 +46,6 @@
static int ipv6_dev_ac_dec(struct net_device *dev, const struct in6_addr *addr);
-/* Big ac list lock for all the sockets */
-static DEFINE_SPINLOCK(ipv6_sk_ac_lock);
-
-
/*
* socket join an anycast group
*/
@@ -78,7 +74,6 @@
pac->acl_addr = *addr;
rtnl_lock();
- rcu_read_lock();
if (ifindex == 0) {
struct rt6_info *rt;
@@ -91,11 +86,11 @@
goto error;
} else {
/* router, no matching interface: just pick one */
- dev = dev_get_by_flags_rcu(net, IFF_UP,
- IFF_UP | IFF_LOOPBACK);
+ dev = __dev_get_by_flags(net, IFF_UP,
+ IFF_UP | IFF_LOOPBACK);
}
} else
- dev = dev_get_by_index_rcu(net, ifindex);
+ dev = __dev_get_by_index(net, ifindex);
if (dev == NULL) {
err = -ENODEV;
@@ -127,17 +122,14 @@
goto error;
}
- err = ipv6_dev_ac_inc(dev, addr);
+ err = __ipv6_dev_ac_inc(idev, addr);
if (!err) {
- spin_lock_bh(&ipv6_sk_ac_lock);
pac->acl_next = np->ipv6_ac_list;
np->ipv6_ac_list = pac;
- spin_unlock_bh(&ipv6_sk_ac_lock);
pac = NULL;
}
error:
- rcu_read_unlock();
rtnl_unlock();
if (pac)
sock_kfree_s(sk, pac, sizeof(*pac));
@@ -154,7 +146,7 @@
struct ipv6_ac_socklist *pac, *prev_pac;
struct net *net = sock_net(sk);
- spin_lock_bh(&ipv6_sk_ac_lock);
+ rtnl_lock();
prev_pac = NULL;
for (pac = np->ipv6_ac_list; pac; pac = pac->acl_next) {
if ((ifindex == 0 || pac->acl_ifindex == ifindex) &&
@@ -163,7 +155,7 @@
prev_pac = pac;
}
if (!pac) {
- spin_unlock_bh(&ipv6_sk_ac_lock);
+ rtnl_unlock();
return -ENOENT;
}
if (prev_pac)
@@ -171,14 +163,9 @@
else
np->ipv6_ac_list = pac->acl_next;
- spin_unlock_bh(&ipv6_sk_ac_lock);
-
- rtnl_lock();
- rcu_read_lock();
- dev = dev_get_by_index_rcu(net, pac->acl_ifindex);
+ dev = __dev_get_by_index(net, pac->acl_ifindex);
if (dev)
ipv6_dev_ac_dec(dev, &pac->acl_addr);
- rcu_read_unlock();
rtnl_unlock();
sock_kfree_s(sk, pac, sizeof(*pac));
@@ -196,19 +183,16 @@
if (!np->ipv6_ac_list)
return;
- spin_lock_bh(&ipv6_sk_ac_lock);
+ rtnl_lock();
pac = np->ipv6_ac_list;
np->ipv6_ac_list = NULL;
- spin_unlock_bh(&ipv6_sk_ac_lock);
prev_index = 0;
- rtnl_lock();
- rcu_read_lock();
while (pac) {
struct ipv6_ac_socklist *next = pac->acl_next;
if (pac->acl_ifindex != prev_index) {
- dev = dev_get_by_index_rcu(net, pac->acl_ifindex);
+ dev = __dev_get_by_index(net, pac->acl_ifindex);
prev_index = pac->acl_ifindex;
}
if (dev)
@@ -216,10 +200,14 @@
sock_kfree_s(sk, pac, sizeof(*pac));
pac = next;
}
- rcu_read_unlock();
rtnl_unlock();
}
+static void aca_get(struct ifacaddr6 *aca)
+{
+ atomic_inc(&aca->aca_refcnt);
+}
+
static void aca_put(struct ifacaddr6 *ac)
{
if (atomic_dec_and_test(&ac->aca_refcnt)) {
@@ -229,23 +217,40 @@
}
}
+static struct ifacaddr6 *aca_alloc(struct rt6_info *rt,
+ const struct in6_addr *addr)
+{
+ struct inet6_dev *idev = rt->rt6i_idev;
+ struct ifacaddr6 *aca;
+
+ aca = kzalloc(sizeof(*aca), GFP_ATOMIC);
+ if (aca == NULL)
+ return NULL;
+
+ aca->aca_addr = *addr;
+ in6_dev_hold(idev);
+ aca->aca_idev = idev;
+ aca->aca_rt = rt;
+ aca->aca_users = 1;
+ /* aca_tstamp should be updated upon changes */
+ aca->aca_cstamp = aca->aca_tstamp = jiffies;
+ atomic_set(&aca->aca_refcnt, 1);
+ spin_lock_init(&aca->aca_lock);
+
+ return aca;
+}
+
/*
* device anycast group inc (add if not found)
*/
-int ipv6_dev_ac_inc(struct net_device *dev, const struct in6_addr *addr)
+int __ipv6_dev_ac_inc(struct inet6_dev *idev, const struct in6_addr *addr)
{
struct ifacaddr6 *aca;
- struct inet6_dev *idev;
struct rt6_info *rt;
int err;
ASSERT_RTNL();
- idev = in6_dev_get(dev);
-
- if (idev == NULL)
- return -EINVAL;
-
write_lock_bh(&idev->lock);
if (idev->dead) {
err = -ENODEV;
@@ -260,46 +265,35 @@
}
}
- /*
- * not found: create a new one.
- */
-
- aca = kzalloc(sizeof(struct ifacaddr6), GFP_ATOMIC);
-
+ rt = addrconf_dst_alloc(idev, addr, true);
+ if (IS_ERR(rt)) {
+ err = PTR_ERR(rt);
+ goto out;
+ }
+ aca = aca_alloc(rt, addr);
if (aca == NULL) {
+ ip6_rt_put(rt);
err = -ENOMEM;
goto out;
}
- rt = addrconf_dst_alloc(idev, addr, true);
- if (IS_ERR(rt)) {
- kfree(aca);
- err = PTR_ERR(rt);
- goto out;
- }
-
- aca->aca_addr = *addr;
- aca->aca_idev = idev;
- aca->aca_rt = rt;
- aca->aca_users = 1;
- /* aca_tstamp should be updated upon changes */
- aca->aca_cstamp = aca->aca_tstamp = jiffies;
- atomic_set(&aca->aca_refcnt, 2);
- spin_lock_init(&aca->aca_lock);
-
aca->aca_next = idev->ac_list;
idev->ac_list = aca;
+
+	/* Hold a reference for addrconf_join_solict() below before we
+	 * unlock; the entry is already exposed via idev->ac_list.
+	 */
+ aca_get(aca);
write_unlock_bh(&idev->lock);
ip6_ins_rt(rt);
- addrconf_join_solict(dev, &aca->aca_addr);
+ addrconf_join_solict(idev->dev, &aca->aca_addr);
aca_put(aca);
return 0;
out:
write_unlock_bh(&idev->lock);
- in6_dev_put(idev);
return err;
}
@@ -341,7 +335,7 @@
return 0;
}
-/* called with rcu_read_lock() */
+/* called with rtnl_lock() */
static int ipv6_dev_ac_dec(struct net_device *dev, const struct in6_addr *addr)
{
struct inet6_dev *idev = __in6_dev_get(dev);
@@ -351,6 +345,27 @@
return __ipv6_dev_ac_dec(idev, addr);
}
+void ipv6_ac_destroy_dev(struct inet6_dev *idev)
+{
+ struct ifacaddr6 *aca;
+
+ write_lock_bh(&idev->lock);
+ while ((aca = idev->ac_list) != NULL) {
+ idev->ac_list = aca->aca_next;
+ write_unlock_bh(&idev->lock);
+
+ addrconf_leave_solict(idev, &aca->aca_addr);
+
+ dst_hold(&aca->aca_rt->dst);
+ ip6_del_rt(aca->aca_rt);
+
+ aca_put(aca);
+
+ write_lock_bh(&idev->lock);
+ }
+ write_unlock_bh(&idev->lock);
+}
+
/*
* check if the interface has this anycast address
* called with rcu_read_lock()
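
One detail worth calling out in the anycast rework: aca_alloc() now starts the entry at refcount 1 (the reference owned by idev->ac_list) and __ipv6_dev_ac_inc() takes an explicit aca_get() before dropping idev->lock, instead of the old opaque initial count of 2. The generic shape of that pattern, as a sketch with hypothetical names standing in for aca_get/aca_put:

    /* Publish-then-pin refcount pattern (sketch; list_publish, obj_get,
     * obj_put and do_slow_setup are hypothetical stand-ins). */
    atomic_set(&obj->refcnt, 1);    /* reference owned by the list */
    list_publish(obj);              /* entry now visible to others */
    obj_get(obj);                   /* pin across the unlock */
    unlock();
    do_slow_setup(obj);             /* here: ip6_ins_rt() + join solicit */
    obj_put(obj);                   /* drop the pin; list ref remains */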
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index 394bb82..141e1f3 100644
--- a/net/ipv6/icmp.c
+++ b/net/ipv6/icmp.c
@@ -170,11 +170,11 @@
/*
* Check the ICMP output rate limit
*/
-static inline bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
- struct flowi6 *fl6)
+static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+ struct flowi6 *fl6)
{
- struct dst_entry *dst;
struct net *net = sock_net(sk);
+ struct dst_entry *dst;
bool res = false;
/* Informational messages are not limited. */
@@ -199,16 +199,20 @@
} else {
struct rt6_info *rt = (struct rt6_info *)dst;
int tmo = net->ipv6.sysctl.icmpv6_time;
- struct inet_peer *peer;
/* Give more bandwidth to wider prefixes. */
if (rt->rt6i_dst.plen < 128)
tmo >>= ((128 - rt->rt6i_dst.plen)>>5);
- peer = inet_getpeer_v6(net->ipv6.peers, &rt->rt6i_dst.addr, 1);
- res = inet_peer_xrlim_allow(peer, tmo);
- if (peer)
- inet_putpeer(peer);
+ if (icmp_global_allow()) {
+ struct inet_peer *peer;
+
+ peer = inet_getpeer_v6(net->ipv6.peers,
+ &rt->rt6i_dst.addr, 1);
+ res = inet_peer_xrlim_allow(peer, tmo);
+ if (peer)
+ inet_putpeer(peer);
+ }
}
dst_release(dst);
return res;
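
The ICMPv6 limiter now checks a global budget before touching per-destination state, so the relatively expensive inet_getpeer_v6() lookup is skipped entirely when the host-wide rate is exceeded. icmp_global_allow() behaves roughly like a token bucket; a minimal userspace model of that behaviour (an assumption about its semantics, not the kernel code):

    #include <stdbool.h>
    #include <time.h>

    static double credit;           /* tokens currently available */
    static struct timespec last;    /* last refill */

    static bool global_allow(double rate, double burst)
    {
            struct timespec now;

            clock_gettime(CLOCK_MONOTONIC, &now);
            credit += rate * ((now.tv_sec - last.tv_sec) +
                              (now.tv_nsec - last.tv_nsec) / 1e9);
            if (credit > burst)
                    credit = burst; /* cap the burst size */
            last = now;

            if (credit >= 1.0) {
                    credit -= 1.0;  /* spend one token on this ICMP */
                    return true;
            }
            return false;
    }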
diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
index 9952f3f..9034f76 100644
--- a/net/ipv6/ip6_offload.c
+++ b/net/ipv6/ip6_offload.c
@@ -53,31 +53,6 @@
return proto;
}
-static int ipv6_gso_send_check(struct sk_buff *skb)
-{
- const struct ipv6hdr *ipv6h;
- const struct net_offload *ops;
- int err = -EINVAL;
-
- if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
- goto out;
-
- ipv6h = ipv6_hdr(skb);
- __skb_pull(skb, sizeof(*ipv6h));
- err = -EPROTONOSUPPORT;
-
- ops = rcu_dereference(inet6_offloads[
- ipv6_gso_pull_exthdrs(skb, ipv6h->nexthdr)]);
-
- if (likely(ops && ops->callbacks.gso_send_check)) {
- skb_reset_transport_header(skb);
- err = ops->callbacks.gso_send_check(skb);
- }
-
-out:
- return err;
-}
-
static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
netdev_features_t features)
{
@@ -306,7 +281,6 @@
static struct packet_offload ipv6_packet_offload __read_mostly = {
.type = cpu_to_be16(ETH_P_IPV6),
.callbacks = {
- .gso_send_check = ipv6_gso_send_check,
.gso_segment = ipv6_gso_segment,
.gro_receive = ipv6_gro_receive,
.gro_complete = ipv6_gro_complete,
@@ -315,7 +289,6 @@
static const struct net_offload sit_offload = {
.callbacks = {
- .gso_send_check = ipv6_gso_send_check,
.gso_segment = ipv6_gso_segment,
.gro_receive = ipv6_gro_receive,
.gro_complete = ipv6_gro_complete,
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index 2e6a0db..8e950c2 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1004,7 +1004,7 @@
if (final_dst)
fl6->daddr = *final_dst;
- return xfrm_lookup(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0);
+ return xfrm_lookup_route(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0);
}
EXPORT_SYMBOL_GPL(ip6_dst_lookup_flow);
@@ -1036,7 +1036,7 @@
if (final_dst)
fl6->daddr = *final_dst;
- return xfrm_lookup(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0);
+ return xfrm_lookup_route(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0);
}
EXPORT_SYMBOL_GPL(ip6_sk_dst_lookup_flow);
diff --git a/net/ipv6/ip6_udp_tunnel.c b/net/ipv6/ip6_udp_tunnel.c
new file mode 100644
index 0000000..b04ed72
--- /dev/null
+++ b/net/ipv6/ip6_udp_tunnel.c
@@ -0,0 +1,107 @@
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/socket.h>
+#include <linux/udp.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/in6.h>
+#include <net/udp.h>
+#include <net/udp_tunnel.h>
+#include <net/net_namespace.h>
+#include <net/netns/generic.h>
+#include <net/ip6_tunnel.h>
+#include <net/ip6_checksum.h>
+
+int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
+ struct socket **sockp)
+{
+ struct sockaddr_in6 udp6_addr;
+ int err;
+ struct socket *sock = NULL;
+
+ err = sock_create_kern(AF_INET6, SOCK_DGRAM, 0, &sock);
+ if (err < 0)
+ goto error;
+
+ sk_change_net(sock->sk, net);
+
+ udp6_addr.sin6_family = AF_INET6;
+ memcpy(&udp6_addr.sin6_addr, &cfg->local_ip6,
+ sizeof(udp6_addr.sin6_addr));
+ udp6_addr.sin6_port = cfg->local_udp_port;
+ err = kernel_bind(sock, (struct sockaddr *)&udp6_addr,
+ sizeof(udp6_addr));
+ if (err < 0)
+ goto error;
+
+ if (cfg->peer_udp_port) {
+ udp6_addr.sin6_family = AF_INET6;
+ memcpy(&udp6_addr.sin6_addr, &cfg->peer_ip6,
+ sizeof(udp6_addr.sin6_addr));
+ udp6_addr.sin6_port = cfg->peer_udp_port;
+ err = kernel_connect(sock,
+ (struct sockaddr *)&udp6_addr,
+ sizeof(udp6_addr), 0);
+ }
+ if (err < 0)
+ goto error;
+
+ udp_set_no_check6_tx(sock->sk, !cfg->use_udp6_tx_checksums);
+ udp_set_no_check6_rx(sock->sk, !cfg->use_udp6_rx_checksums);
+
+ *sockp = sock;
+ return 0;
+
+error:
+ if (sock) {
+ kernel_sock_shutdown(sock, SHUT_RDWR);
+ sk_release_kernel(sock->sk);
+ }
+ *sockp = NULL;
+ return err;
+}
+EXPORT_SYMBOL_GPL(udp_sock_create6);
+
+int udp_tunnel6_xmit_skb(struct socket *sock, struct dst_entry *dst,
+ struct sk_buff *skb, struct net_device *dev,
+ struct in6_addr *saddr, struct in6_addr *daddr,
+ __u8 prio, __u8 ttl, __be16 src_port, __be16 dst_port)
+{
+ struct udphdr *uh;
+ struct ipv6hdr *ip6h;
+ struct sock *sk = sock->sk;
+
+ __skb_push(skb, sizeof(*uh));
+ skb_reset_transport_header(skb);
+ uh = udp_hdr(skb);
+
+ uh->dest = dst_port;
+ uh->source = src_port;
+
+ uh->len = htons(skb->len);
+ uh->check = 0;
+
+ memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+ IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED
+ | IPSKB_REROUTED);
+ skb_dst_set(skb, dst);
+
+ udp6_set_csum(udp_get_no_check6_tx(sk), skb, &inet6_sk(sk)->saddr,
+ &sk->sk_v6_daddr, skb->len);
+
+ __skb_push(skb, sizeof(*ip6h));
+ skb_reset_network_header(skb);
+ ip6h = ipv6_hdr(skb);
+ ip6_flow_hdr(ip6h, prio, htonl(0));
+ ip6h->payload_len = htons(skb->len);
+ ip6h->nexthdr = IPPROTO_UDP;
+ ip6h->hop_limit = ttl;
+ ip6h->daddr = *daddr;
+ ip6h->saddr = *saddr;
+
+ ip6tunnel_xmit(skb, dev);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(udp_tunnel6_xmit_skb);
+
+MODULE_LICENSE("GPL");
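
With the IPv4 and IPv6 constructors now living in separate objects (see the ipv6 Makefile hunk above), the old family dispatch presumably moves into a header wrapper; include/net/udp_tunnel.h is not part of this excerpt, but a plausible shape is:

    /* Assumed header-side wrapper; the hunk adding it is not shown. */
    static inline int udp_sock_create(struct net *net,
                                      struct udp_port_cfg *cfg,
                                      struct socket **sockp)
    {
            if (cfg->family == AF_INET)
                    return udp_sock_create4(net, cfg, sockp);

    #if IS_ENABLED(CONFIG_IPV6)
            if (cfg->family == AF_INET6)
                    return udp_sock_create6(net, cfg, sockp);
    #endif

            return -EPFNOSUPPORT;
    }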
diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
index 484a942..9648de2 100644
--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -73,9 +73,6 @@
static struct in6_addr mld2_all_mcr = MLD2_ALL_MCR_INIT;
-/* Big mc list lock for all the sockets */
-static DEFINE_SPINLOCK(ipv6_sk_mc_lock);
-
static void igmp6_join_group(struct ifmcaddr6 *ma);
static void igmp6_leave_group(struct ifmcaddr6 *ma);
static void igmp6_timer_handler(unsigned long data);
@@ -165,7 +162,6 @@
mc_lst->addr = *addr;
rtnl_lock();
- rcu_read_lock();
if (ifindex == 0) {
struct rt6_info *rt;
rt = rt6_lookup(net, addr, NULL, 0, 0);
@@ -174,10 +170,9 @@
ip6_rt_put(rt);
}
} else
- dev = dev_get_by_index_rcu(net, ifindex);
+ dev = __dev_get_by_index(net, ifindex);
if (dev == NULL) {
- rcu_read_unlock();
rtnl_unlock();
sock_kfree_s(sk, mc_lst, sizeof(*mc_lst));
return -ENODEV;
@@ -195,18 +190,14 @@
err = ipv6_dev_mc_inc(dev, addr);
if (err) {
- rcu_read_unlock();
rtnl_unlock();
sock_kfree_s(sk, mc_lst, sizeof(*mc_lst));
return err;
}
- spin_lock(&ipv6_sk_mc_lock);
mc_lst->next = np->ipv6_mc_list;
rcu_assign_pointer(np->ipv6_mc_list, mc_lst);
- spin_unlock(&ipv6_sk_mc_lock);
- rcu_read_unlock();
rtnl_unlock();
return 0;
@@ -226,20 +217,16 @@
return -EINVAL;
rtnl_lock();
- spin_lock(&ipv6_sk_mc_lock);
for (lnk = &np->ipv6_mc_list;
- (mc_lst = rcu_dereference_protected(*lnk,
- lockdep_is_held(&ipv6_sk_mc_lock))) != NULL;
+ (mc_lst = rtnl_dereference(*lnk)) != NULL;
lnk = &mc_lst->next) {
if ((ifindex == 0 || mc_lst->ifindex == ifindex) &&
ipv6_addr_equal(&mc_lst->addr, addr)) {
struct net_device *dev;
*lnk = mc_lst->next;
- spin_unlock(&ipv6_sk_mc_lock);
- rcu_read_lock();
- dev = dev_get_by_index_rcu(net, mc_lst->ifindex);
+ dev = __dev_get_by_index(net, mc_lst->ifindex);
if (dev != NULL) {
struct inet6_dev *idev = __in6_dev_get(dev);
@@ -248,7 +235,6 @@
__ipv6_dev_mc_dec(idev, &mc_lst->addr);
} else
(void) ip6_mc_leave_src(sk, mc_lst, NULL);
- rcu_read_unlock();
rtnl_unlock();
atomic_sub(sizeof(*mc_lst), &sk->sk_omem_alloc);
@@ -256,7 +242,6 @@
return 0;
}
}
- spin_unlock(&ipv6_sk_mc_lock);
rtnl_unlock();
return -EADDRNOTAVAIL;
@@ -303,16 +288,12 @@
return;
rtnl_lock();
- spin_lock(&ipv6_sk_mc_lock);
- while ((mc_lst = rcu_dereference_protected(np->ipv6_mc_list,
- lockdep_is_held(&ipv6_sk_mc_lock))) != NULL) {
+ while ((mc_lst = rtnl_dereference(np->ipv6_mc_list)) != NULL) {
struct net_device *dev;
np->ipv6_mc_list = mc_lst->next;
- spin_unlock(&ipv6_sk_mc_lock);
- rcu_read_lock();
- dev = dev_get_by_index_rcu(net, mc_lst->ifindex);
+ dev = __dev_get_by_index(net, mc_lst->ifindex);
if (dev) {
struct inet6_dev *idev = __in6_dev_get(dev);
@@ -321,14 +302,11 @@
__ipv6_dev_mc_dec(idev, &mc_lst->addr);
} else
(void) ip6_mc_leave_src(sk, mc_lst, NULL);
- rcu_read_unlock();
atomic_sub(sizeof(*mc_lst), &sk->sk_omem_alloc);
kfree_rcu(mc_lst, rcu);
- spin_lock(&ipv6_sk_mc_lock);
}
- spin_unlock(&ipv6_sk_mc_lock);
rtnl_unlock();
}
@@ -578,9 +556,8 @@
}
err = -EADDRNOTAVAIL;
- /*
- * changes to the ipv6_mc_list require the socket lock and
- * a read lock on ip6_sk_mc_lock. We have the socket lock,
+ /* changes to the ipv6_mc_list require the socket lock and
+ * rtnl lock. We have the socket lock and rcu read lock,
* so reading the list is safe.
*/
@@ -604,9 +581,8 @@
copy_to_user(optval, gsf, GROUP_FILTER_SIZE(0))) {
return -EFAULT;
}
- /* changes to psl require the socket lock, a read lock on
- * on ipv6_sk_mc_lock and a write lock on pmc->sflock. We
- * have the socket lock, so reading here is safe.
+ /* changes to psl require the socket lock, and a write lock
+ * on pmc->sflock. We have the socket lock so reading here is safe.
*/
for (i = 0; i < copycount; i++) {
struct sockaddr_in6 *psin6;
@@ -665,14 +641,6 @@
return rv;
}
-static void ma_put(struct ifmcaddr6 *mc)
-{
- if (atomic_dec_and_test(&mc->mca_refcnt)) {
- in6_dev_put(mc->idev);
- kfree(mc);
- }
-}
-
static void igmp6_group_added(struct ifmcaddr6 *mc)
{
struct net_device *dev = mc->idev->dev;
@@ -838,6 +806,48 @@
read_unlock_bh(&idev->lock);
}
+static void mca_get(struct ifmcaddr6 *mc)
+{
+ atomic_inc(&mc->mca_refcnt);
+}
+
+static void ma_put(struct ifmcaddr6 *mc)
+{
+ if (atomic_dec_and_test(&mc->mca_refcnt)) {
+ in6_dev_put(mc->idev);
+ kfree(mc);
+ }
+}
+
+static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev,
+ const struct in6_addr *addr)
+{
+ struct ifmcaddr6 *mc;
+
+ mc = kzalloc(sizeof(*mc), GFP_ATOMIC);
+ if (mc == NULL)
+ return NULL;
+
+ setup_timer(&mc->mca_timer, igmp6_timer_handler, (unsigned long)mc);
+
+ mc->mca_addr = *addr;
+ mc->idev = idev; /* reference taken by caller */
+ mc->mca_users = 1;
+ /* mca_stamp should be updated upon changes */
+ mc->mca_cstamp = mc->mca_tstamp = jiffies;
+ atomic_set(&mc->mca_refcnt, 1);
+ spin_lock_init(&mc->mca_lock);
+
+ /* initial mode is (EX, empty) */
+ mc->mca_sfmode = MCAST_EXCLUDE;
+ mc->mca_sfcount[MCAST_EXCLUDE] = 1;
+
+ if (ipv6_addr_is_ll_all_nodes(&mc->mca_addr) ||
+ IPV6_ADDR_MC_SCOPE(&mc->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL)
+ mc->mca_flags |= MAF_NOREPORT;
+
+ return mc;
+}
/*
* device multicast group inc (add if not found)
@@ -873,38 +883,20 @@
}
}
- /*
- * not found: create a new one.
- */
-
- mc = kzalloc(sizeof(struct ifmcaddr6), GFP_ATOMIC);
-
- if (mc == NULL) {
+ mc = mca_alloc(idev, addr);
+ if (!mc) {
write_unlock_bh(&idev->lock);
in6_dev_put(idev);
return -ENOMEM;
}
- setup_timer(&mc->mca_timer, igmp6_timer_handler, (unsigned long)mc);
-
- mc->mca_addr = *addr;
- mc->idev = idev; /* (reference taken) */
- mc->mca_users = 1;
- /* mca_stamp should be updated upon changes */
- mc->mca_cstamp = mc->mca_tstamp = jiffies;
- atomic_set(&mc->mca_refcnt, 2);
- spin_lock_init(&mc->mca_lock);
-
- /* initial mode is (EX, empty) */
- mc->mca_sfmode = MCAST_EXCLUDE;
- mc->mca_sfcount[MCAST_EXCLUDE] = 1;
-
- if (ipv6_addr_is_ll_all_nodes(&mc->mca_addr) ||
- IPV6_ADDR_MC_SCOPE(&mc->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL)
- mc->mca_flags |= MAF_NOREPORT;
-
mc->next = idev->mc_list;
idev->mc_list = mc;
+
+	/* Hold a reference for the code below before we unlock;
+	 * the entry is already exposed via idev->mc_list.
+	 */
+ mca_get(mc);
write_unlock_bh(&idev->lock);
mld_del_delrec(idev, &mc->mca_addr);
@@ -948,7 +940,7 @@
struct inet6_dev *idev;
int err;
- rcu_read_lock();
+ ASSERT_RTNL();
idev = __in6_dev_get(dev);
if (!idev)
@@ -956,7 +948,6 @@
else
err = __ipv6_dev_mc_dec(idev, addr);
- rcu_read_unlock();
return err;
}
@@ -1246,7 +1237,7 @@
}
static int mld_process_v1(struct inet6_dev *idev, struct mld_msg *mld,
- unsigned long *max_delay)
+ unsigned long *max_delay, bool v1_query)
{
unsigned long mldv1_md;
@@ -1254,11 +1245,32 @@
if (mld_in_v2_mode_only(idev))
return -EINVAL;
- /* MLDv1 router present */
mldv1_md = ntohs(mld->mld_maxdelay);
+
+	/* When we are in MLDv1 fallback and an MLDv2 router starts up
+	 * unaware of the current MLDv1 operation, the MRC == MRD mapping
+	 * only works when the exponential algorithm is not being
+	 * used (as MLDv1 is unaware of such things).
+	 *
+	 * According to the RFC author, the MLDv2 implementations
+	 * he is aware of all use an MRC < 32768 on start-up queries.
+	 *
+	 * Thus, should we *ever* encounter something larger than
+	 * that, just assume the maximum possible within our reach.
+	 */
+ if (!v1_query)
+ mldv1_md = min(mldv1_md, MLDV1_MRD_MAX_COMPAT);
+
*max_delay = max(msecs_to_jiffies(mldv1_md), 1UL);
- mld_set_v1_mode(idev);
+	/* MLDv1 router present: we need to go into v1 mode *only*
+	 * when an MLDv1 query is received, as per section 9.12 of
+	 * RFC 3810. And we know from section 3.7 of RFC 2710 that
+	 * MLDv1 queries MUST be exactly 24 octets.
+	 */
+ if (v1_query)
+ mld_set_v1_mode(idev);
/* cancel MLDv2 report timer */
mld_gq_stop_timer(idev);
@@ -1273,10 +1285,6 @@
static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
unsigned long *max_delay)
{
- /* hosts need to stay in MLDv1 mode, discard MLDv2 queries */
- if (mld_in_v1_mode(idev))
- return -EINVAL;
-
*max_delay = max(msecs_to_jiffies(mldv2_mrc(mld)), 1UL);
mld_update_qrv(idev, mld);
@@ -1333,8 +1341,11 @@
!(group_type&IPV6_ADDR_MULTICAST))
return -EINVAL;
- if (len == MLD_V1_QUERY_LEN) {
- err = mld_process_v1(idev, mld, &max_delay);
+ if (len < MLD_V1_QUERY_LEN) {
+ return -EINVAL;
+ } else if (len == MLD_V1_QUERY_LEN || mld_in_v1_mode(idev)) {
+ err = mld_process_v1(idev, mld, &max_delay,
+ len == MLD_V1_QUERY_LEN);
if (err < 0)
return err;
} else if (len >= MLD_V2_QUERY_LEN_MIN) {
@@ -1366,8 +1377,9 @@
mlh2 = (struct mld2_query *)skb_transport_header(skb);
mark = 1;
}
- } else
+ } else {
return -EINVAL;
+ }
read_lock_bh(&idev->lock);
if (group_type == IPV6_ADDR_ANY) {
@@ -2373,7 +2385,7 @@
{
int err;
- /* callers have the socket lock and a write lock on ipv6_sk_mc_lock,
+ /* callers have the socket lock and rtnl lock
* so no other readers or writers of iml or its sflist
*/
if (!iml->sflist) {
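
The v1/v2 query handling above leans on how MLDv2 encodes its Maximum Response Code: values below 32768 are plain milliseconds, while larger values switch to a mantissa/exponent form (RFC 3810, section 5.1.3) that MLDv1 cannot represent, hence the clamp to MLDV1_MRD_MAX_COMPAT when falling back on a query that was not genuinely MLDv1. A sketch of the decoding done by mldv2_mrc() (the helper itself is outside this excerpt):

    #include <stdint.h>

    /* RFC 3810, section 5.1.3: decode the MLDv2 Maximum Response Code
     * into milliseconds. */
    static uint32_t mldv2_mrc_decode(uint16_t mrc)
    {
            if (mrc < 32768)
                    return mrc;     /* linear range */

            /* exponential range: | 1 | exp (3 bits) | mant (12 bits) | */
            return ((mrc & 0x0fff) | 0x1000) << (((mrc >> 12) & 0x7) + 3);
    }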
diff --git a/net/ipv6/protocol.c b/net/ipv6/protocol.c
index e048cf1..e3770ab 100644
--- a/net/ipv6/protocol.c
+++ b/net/ipv6/protocol.c
@@ -51,6 +51,7 @@
#endif
const struct net_offload __rcu *inet6_offloads[MAX_INET_PROTOS] __read_mostly;
+EXPORT_SYMBOL(inet6_offloads);
int inet6_add_offload(const struct net_offload *prot, unsigned char protocol)
{
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
index 86e3fa8..db75809 100644
--- a/net/ipv6/sit.c
+++ b/net/ipv6/sit.c
@@ -822,6 +822,8 @@
int addr_type;
u8 ttl;
int err;
+ u8 protocol = IPPROTO_IPV6;
+ int t_hlen = tunnel->hlen + sizeof(struct iphdr);
if (skb->protocol != htons(ETH_P_IPV6))
goto tx_error;
@@ -911,8 +913,14 @@
goto tx_error;
}
+ skb = iptunnel_handle_offloads(skb, false, SKB_GSO_SIT);
+ if (IS_ERR(skb)) {
+ ip_rt_put(rt);
+ goto out;
+ }
+
if (df) {
- mtu = dst_mtu(&rt->dst) - sizeof(struct iphdr);
+ mtu = dst_mtu(&rt->dst) - t_hlen;
if (mtu < 68) {
dev->stats.collisions++;
@@ -947,7 +955,7 @@
/*
* Okay, now see if we can stuff it in the buffer as-is.
*/
- max_headroom = LL_RESERVED_SPACE(tdev)+sizeof(struct iphdr);
+ max_headroom = LL_RESERVED_SPACE(tdev) + t_hlen;
if (skb_headroom(skb) < max_headroom || skb_shared(skb) ||
(skb_cloned(skb) && !skb_clone_writable(skb, 0))) {
@@ -969,14 +977,13 @@
ttl = iph6->hop_limit;
tos = INET_ECN_encapsulate(tos, ipv6_get_dsfield(iph6));
- skb = iptunnel_handle_offloads(skb, false, SKB_GSO_SIT);
- if (IS_ERR(skb)) {
+ if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0) {
ip_rt_put(rt);
- goto out;
+ goto tx_error;
}
err = iptunnel_xmit(skb->sk, rt, skb, fl4.saddr, fl4.daddr,
- IPPROTO_IPV6, tos, ttl, df,
+ protocol, tos, ttl, df,
!net_eq(tunnel->net, dev_net(dev)));
iptunnel_xmit_stats(err, &dev->stats, dev->tstats);
return NETDEV_TX_OK;
@@ -1059,8 +1066,10 @@
tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
if (tdev) {
+ int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+
dev->hard_header_len = tdev->hard_header_len + sizeof(struct iphdr);
- dev->mtu = tdev->mtu - sizeof(struct iphdr);
+ dev->mtu = tdev->mtu - t_hlen;
if (dev->mtu < IPV6_MIN_MTU)
dev->mtu = IPV6_MIN_MTU;
}
@@ -1307,7 +1316,10 @@
static int ipip6_tunnel_change_mtu(struct net_device *dev, int new_mtu)
{
- if (new_mtu < IPV6_MIN_MTU || new_mtu > 0xFFF8 - sizeof(struct iphdr))
+ struct ip_tunnel *tunnel = netdev_priv(dev);
+ int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+
+ if (new_mtu < IPV6_MIN_MTU || new_mtu > 0xFFF8 - t_hlen)
return -EINVAL;
dev->mtu = new_mtu;
return 0;
@@ -1338,12 +1350,15 @@
static void ipip6_tunnel_setup(struct net_device *dev)
{
+ struct ip_tunnel *tunnel = netdev_priv(dev);
+ int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+
dev->netdev_ops = &ipip6_netdev_ops;
dev->destructor = ipip6_dev_free;
dev->type = ARPHRD_SIT;
- dev->hard_header_len = LL_MAX_HEADER + sizeof(struct iphdr);
- dev->mtu = ETH_DATA_LEN - sizeof(struct iphdr);
+ dev->hard_header_len = LL_MAX_HEADER + t_hlen;
+ dev->mtu = ETH_DATA_LEN - t_hlen;
dev->flags = IFF_NOARP;
dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
dev->iflink = 0;
@@ -1466,6 +1481,40 @@
}
+/* This function returns true when ENCAP attributes are present in the nl msg */
+static bool ipip6_netlink_encap_parms(struct nlattr *data[],
+ struct ip_tunnel_encap *ipencap)
+{
+ bool ret = false;
+
+ memset(ipencap, 0, sizeof(*ipencap));
+
+ if (!data)
+ return ret;
+
+ if (data[IFLA_IPTUN_ENCAP_TYPE]) {
+ ret = true;
+ ipencap->type = nla_get_u16(data[IFLA_IPTUN_ENCAP_TYPE]);
+ }
+
+ if (data[IFLA_IPTUN_ENCAP_FLAGS]) {
+ ret = true;
+ ipencap->flags = nla_get_u16(data[IFLA_IPTUN_ENCAP_FLAGS]);
+ }
+
+ if (data[IFLA_IPTUN_ENCAP_SPORT]) {
+ ret = true;
+ ipencap->sport = nla_get_u16(data[IFLA_IPTUN_ENCAP_SPORT]);
+ }
+
+ if (data[IFLA_IPTUN_ENCAP_DPORT]) {
+ ret = true;
+ ipencap->dport = nla_get_u16(data[IFLA_IPTUN_ENCAP_DPORT]);
+ }
+
+ return ret;
+}
+
#ifdef CONFIG_IPV6_SIT_6RD
/* This function returns true when 6RD attributes are present in the nl msg */
static bool ipip6_netlink_6rd_parms(struct nlattr *data[],
@@ -1509,12 +1558,20 @@
{
struct net *net = dev_net(dev);
struct ip_tunnel *nt;
+ struct ip_tunnel_encap ipencap;
#ifdef CONFIG_IPV6_SIT_6RD
struct ip_tunnel_6rd ip6rd;
#endif
int err;
nt = netdev_priv(dev);
+
+ if (ipip6_netlink_encap_parms(data, &ipencap)) {
+ err = ip_tunnel_encap_setup(nt, &ipencap);
+ if (err < 0)
+ return err;
+ }
+
ipip6_netlink_parms(data, &nt->parms);
if (ipip6_tunnel_locate(net, &nt->parms, 0))
@@ -1537,15 +1594,23 @@
{
struct ip_tunnel *t = netdev_priv(dev);
struct ip_tunnel_parm p;
+ struct ip_tunnel_encap ipencap;
struct net *net = t->net;
struct sit_net *sitn = net_generic(net, sit_net_id);
#ifdef CONFIG_IPV6_SIT_6RD
struct ip_tunnel_6rd ip6rd;
#endif
+ int err;
if (dev == sitn->fb_tunnel_dev)
return -EINVAL;
+ if (ipip6_netlink_encap_parms(data, &ipencap)) {
+ err = ip_tunnel_encap_setup(t, &ipencap);
+ if (err < 0)
+ return err;
+ }
+
ipip6_netlink_parms(data, &p);
if (((dev->flags & IFF_POINTOPOINT) && !p.iph.daddr) ||
@@ -1599,6 +1664,14 @@
/* IFLA_IPTUN_6RD_RELAY_PREFIXLEN */
nla_total_size(2) +
#endif
+ /* IFLA_IPTUN_ENCAP_TYPE */
+ nla_total_size(2) +
+ /* IFLA_IPTUN_ENCAP_FLAGS */
+ nla_total_size(2) +
+ /* IFLA_IPTUN_ENCAP_SPORT */
+ nla_total_size(2) +
+ /* IFLA_IPTUN_ENCAP_DPORT */
+ nla_total_size(2) +
0;
}
@@ -1630,6 +1703,16 @@
goto nla_put_failure;
#endif
+ if (nla_put_u16(skb, IFLA_IPTUN_ENCAP_TYPE,
+ tunnel->encap.type) ||
+ nla_put_u16(skb, IFLA_IPTUN_ENCAP_SPORT,
+ tunnel->encap.sport) ||
+ nla_put_u16(skb, IFLA_IPTUN_ENCAP_DPORT,
+ tunnel->encap.dport) ||
+ nla_put_u16(skb, IFLA_IPTUN_ENCAP_FLAGS,
+			tunnel->encap.flags))
+ goto nla_put_failure;
+
return 0;
nla_put_failure:
@@ -1651,6 +1734,10 @@
[IFLA_IPTUN_6RD_PREFIXLEN] = { .type = NLA_U16 },
[IFLA_IPTUN_6RD_RELAY_PREFIXLEN] = { .type = NLA_U16 },
#endif
+ [IFLA_IPTUN_ENCAP_TYPE] = { .type = NLA_U16 },
+ [IFLA_IPTUN_ENCAP_FLAGS] = { .type = NLA_U16 },
+ [IFLA_IPTUN_ENCAP_SPORT] = { .type = NLA_U16 },
+ [IFLA_IPTUN_ENCAP_DPORT] = { .type = NLA_U16 },
};
static void ipip6_dellink(struct net_device *dev, struct list_head *head)
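
The recurring t_hlen = tunnel->hlen + sizeof(struct iphdr) above is what keeps the MTU and headroom math honest once an outer UDP encapsulation (e.g. FOU) is configured on a sit device: tunnel->hlen is 0 for a plain tunnel and grows by the encap header size otherwise. Illustrative arithmetic (values invented, not from the patch):

    int eth_mtu   = 1500;
    int encap_len = 8;                                  /* UDP header, FOU-style */
    int t_hlen    = encap_len + sizeof(struct iphdr);   /* 8 + 20 = 28 */
    int dev_mtu   = eth_mtu - t_hlen;                   /* 1472; 1480 when plain */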
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index c643dc9..9a2838e 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -203,7 +203,7 @@
ireq->ir_num = ntohs(th->dest);
ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
- if (ipv6_opt_accepted(sk, skb) ||
+ if (ipv6_opt_accepted(sk, skb, &TCP_SKB_CB(skb)->header.h6) ||
np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {
atomic_inc(&skb->users);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 1835480..132bac1 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -742,7 +742,8 @@
ireq->ir_iif = inet6_iif(skb);
if (!TCP_SKB_CB(skb)->tcp_tw_isn &&
- (ipv6_opt_accepted(sk, skb) || np->rxopt.bits.rxinfo ||
+ (ipv6_opt_accepted(sk, skb, &TCP_SKB_CB(skb)->header.h6) ||
+ np->rxopt.bits.rxinfo ||
np->rxopt.bits.rxoinfo || np->rxopt.bits.rxhlim ||
np->rxopt.bits.rxohlim || np->repflow)) {
atomic_inc(&skb->users);
@@ -1367,7 +1368,7 @@
np->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(opt_skb));
if (np->repflow)
np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
- if (ipv6_opt_accepted(sk, opt_skb)) {
+ if (ipv6_opt_accepted(sk, opt_skb, &TCP_SKB_CB(opt_skb)->header.h6)) {
skb_set_owner_r(opt_skb, sk);
opt_skb = xchg(&np->pktoptions, opt_skb);
} else {
@@ -1411,10 +1412,18 @@
th = tcp_hdr(skb);
hdr = ipv6_hdr(skb);
+	/* This is tricky: we move IPCB to its correct location inside
+	 * TCP_SKB_CB(); barrier() makes sure the compiler won't play
+	 * aliasing games.
+	 */
+ memmove(&TCP_SKB_CB(skb)->header.h6, IP6CB(skb),
+ sizeof(struct inet6_skb_parm));
+ barrier();
+
TCP_SKB_CB(skb)->seq = ntohl(th->seq);
TCP_SKB_CB(skb)->end_seq = (TCP_SKB_CB(skb)->seq + th->syn + th->fin +
skb->len - th->doff*4);
TCP_SKB_CB(skb)->ack_seq = ntohl(th->ack_seq);
+ TCP_SKB_CB(skb)->tcp_flags = tcp_flag_byte(th);
TCP_SKB_CB(skb)->tcp_tw_isn = 0;
TCP_SKB_CB(skb)->ip_dsfield = ipv6_get_dsfield(hdr);
TCP_SKB_CB(skb)->sacked = 0;
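
The memmove()/barrier() pair in tcp_v6_rcv() only makes sense if TCP's control block now reserves room for the IP control block at the front of skb->cb. The corresponding include/net/tcp.h change is outside this excerpt; its assumed shape:

    /* Assumed layout change (the tcp.h hunk is not shown here): the
     * IPv4/IPv6 control block survives into TCP code via header.h4/h6. */
    struct tcp_skb_cb {
            union {
                    struct inet_skb_parm  h4;
    #if IS_ENABLED(CONFIG_IPV6)
                    struct inet6_skb_parm h6;
    #endif
            } header;
            __u32 seq;
            __u32 end_seq;
            /* ... remaining fields unchanged ... */
    };

This is also what lets ipv6_opt_accepted() take the inet6_skb_parm explicitly instead of re-reading IP6CB(skb) after TCP has started scribbling on skb->cb.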
diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
index dbb3d92..c1ab771 100644
--- a/net/ipv6/tcpv6_offload.c
+++ b/net/ipv6/tcpv6_offload.c
@@ -15,23 +15,6 @@
#include <net/ip6_checksum.h>
#include "ip6_offload.h"
-static int tcp_v6_gso_send_check(struct sk_buff *skb)
-{
- const struct ipv6hdr *ipv6h;
- struct tcphdr *th;
-
- if (!pskb_may_pull(skb, sizeof(*th)))
- return -EINVAL;
-
- ipv6h = ipv6_hdr(skb);
- th = tcp_hdr(skb);
-
- th->check = 0;
- skb->ip_summed = CHECKSUM_PARTIAL;
- __tcp_v6_send_check(skb, &ipv6h->saddr, &ipv6h->daddr);
- return 0;
-}
-
static struct sk_buff **tcp6_gro_receive(struct sk_buff **head,
struct sk_buff *skb)
{
@@ -58,10 +41,32 @@
return tcp_gro_complete(skb);
}
+struct sk_buff *tcp6_gso_segment(struct sk_buff *skb,
+ netdev_features_t features)
+{
+ struct tcphdr *th;
+
+ if (!pskb_may_pull(skb, sizeof(*th)))
+ return ERR_PTR(-EINVAL);
+
+ if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
+ const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+ struct tcphdr *th = tcp_hdr(skb);
+
+		/* Set up the pseudo-header; we usually expect the stack
+		 * to have done this already.
+		 */
+
+ th->check = 0;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ __tcp_v6_send_check(skb, &ipv6h->saddr, &ipv6h->daddr);
+ }
+
+ return tcp_gso_segment(skb, features);
+}
static const struct net_offload tcpv6_offload = {
.callbacks = {
- .gso_send_check = tcp_v6_gso_send_check,
- .gso_segment = tcp_gso_segment,
+ .gso_segment = tcp6_gso_segment,
.gro_receive = tcp6_gro_receive,
.gro_complete = tcp6_gro_complete,
},
diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
index a1ad34b..212ebfc 100644
--- a/net/ipv6/udp_offload.c
+++ b/net/ipv6/udp_offload.c
@@ -17,28 +17,6 @@
#include <net/ip6_checksum.h>
#include "ip6_offload.h"
-static int udp6_ufo_send_check(struct sk_buff *skb)
-{
- const struct ipv6hdr *ipv6h;
- struct udphdr *uh;
-
- if (!pskb_may_pull(skb, sizeof(*uh)))
- return -EINVAL;
-
- if (likely(!skb->encapsulation)) {
- ipv6h = ipv6_hdr(skb);
- uh = udp_hdr(skb);
-
- uh->check = ~csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr, skb->len,
- IPPROTO_UDP, 0);
- skb->csum_start = skb_transport_header(skb) - skb->head;
- skb->csum_offset = offsetof(struct udphdr, check);
- skb->ip_summed = CHECKSUM_PARTIAL;
- }
-
- return 0;
-}
-
static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
netdev_features_t features)
{
@@ -49,7 +27,6 @@
u8 *packet_start, *prevhdr;
u8 nexthdr;
u8 frag_hdr_sz = sizeof(struct frag_hdr);
- int offset;
__wsum csum;
int tnl_hlen;
@@ -83,13 +60,27 @@
(SKB_GSO_UDP_TUNNEL|SKB_GSO_UDP_TUNNEL_CSUM))
segs = skb_udp_tunnel_segment(skb, features);
else {
+ const struct ipv6hdr *ipv6h;
+ struct udphdr *uh;
+
+ if (!pskb_may_pull(skb, sizeof(struct udphdr)))
+ goto out;
+
/* Do software UFO. Complete and fill in the UDP checksum as HW cannot
* do checksum of UDP packets sent as multiple IP fragments.
*/
- offset = skb_checksum_start_offset(skb);
- csum = skb_checksum(skb, offset, skb->len - offset, 0);
- offset += skb->csum_offset;
- *(__sum16 *)(skb->data + offset) = csum_fold(csum);
+
+ uh = udp_hdr(skb);
+ ipv6h = ipv6_hdr(skb);
+
+ uh->check = 0;
+ csum = skb_checksum(skb, 0, skb->len, 0);
+ uh->check = udp_v6_check(skb->len, &ipv6h->saddr,
+ &ipv6h->daddr, csum);
+
+ if (uh->check == 0)
+ uh->check = CSUM_MANGLED_0;
+
skb->ip_summed = CHECKSUM_NONE;
/* Check if there is enough headroom to insert fragment header. */
@@ -138,7 +129,7 @@
goto flush;
/* Don't bother verifying checksum if we're going to flush anyway. */
- if (!NAPI_GRO_CB(skb)->flush)
+ if (NAPI_GRO_CB(skb)->flush)
goto skip;
if (skb_gro_checksum_validate_zero_check(skb, IPPROTO_UDP, uh->check,
@@ -170,7 +161,6 @@
static const struct net_offload udpv6_offload = {
.callbacks = {
- .gso_send_check = udp6_ufo_send_check,
.gso_segment = udp6_ufo_fragment,
.gro_receive = udp6_gro_receive,
.gro_complete = udp6_gro_complete,
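
The CSUM_MANGLED_0 substitution in the software UFO path exists because IPv6 forbids a zero UDP checksum (RFC 2460, section 8.1): a computed sum of zero must be transmitted as 0xffff, its ones'-complement equivalent. A minimal userspace model of the finishing step:

    #include <stdint.h>

    /* Fold a 32-bit partial sum and apply the IPv6 no-zero rule,
     * mirroring the uh->check handling above (sketch). */
    static uint16_t udp6_finish_csum(uint32_t sum)
    {
            while (sum >> 16)                       /* fold carries */
                    sum = (sum & 0xffff) + (sum >> 16);
            sum = ~sum & 0xffff;
            return sum ? (uint16_t)sum : 0xffff;    /* CSUM_MANGLED_0 */
    }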
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 2aa2b6c..895348e 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -1392,8 +1392,6 @@
if (err < 0)
goto out;
- udp_set_convert_csum(sock->sk, true);
-
break;
case L2TP_ENCAPTYPE_IP:
@@ -1584,19 +1582,17 @@
/* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
tunnel->encap = encap;
if (encap == L2TP_ENCAPTYPE_UDP) {
- /* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
- udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP;
- udp_sk(sk)->encap_rcv = l2tp_udp_encap_recv;
- udp_sk(sk)->encap_destroy = l2tp_udp_encap_destroy;
-#if IS_ENABLED(CONFIG_IPV6)
- if (sk->sk_family == PF_INET6 && !tunnel->v4mapped)
- udpv6_encap_enable();
- else
-#endif
- udp_encap_enable();
- }
+ struct udp_tunnel_sock_cfg udp_cfg;
- sk->sk_user_data = tunnel;
+ udp_cfg.sk_user_data = tunnel;
+ udp_cfg.encap_type = UDP_ENCAP_L2TPINUDP;
+ udp_cfg.encap_rcv = l2tp_udp_encap_recv;
+ udp_cfg.encap_destroy = l2tp_udp_encap_destroy;
+
+ setup_udp_tunnel_sock(net, sock, &udp_cfg);
+ } else {
+ sk->sk_user_data = tunnel;
+ }
/* Hook on the tunnel socket destructor so that we can cleanup
* if the tunnel socket goes away.
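
One hedged observation on the l2tp conversion: udp_cfg is a stack variable filled in field by field, so any udp_tunnel_sock_cfg member the caller does not set would carry stack garbage into setup_udp_tunnel_sock() if the struct ever grows. Zero-initializing is the more defensive idiom (a suggestion, not part of this patch):

    struct udp_tunnel_sock_cfg udp_cfg = { };       /* zero unset members */

    udp_cfg.sk_user_data  = tunnel;
    udp_cfg.encap_type    = UDP_ENCAP_L2TPINUDP;
    udp_cfg.encap_rcv     = l2tp_udp_encap_recv;
    udp_cfg.encap_destroy = l2tp_udp_encap_destroy;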
diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
index f0e84bc..a48bad4 100644
--- a/net/mac80211/agg-rx.c
+++ b/net/mac80211/agg-rx.c
@@ -227,7 +227,7 @@
void __ieee80211_start_rx_ba_session(struct sta_info *sta,
u8 dialog_token, u16 timeout,
u16 start_seq_num, u16 ba_policy, u16 tid,
- u16 buf_size, bool tx)
+ u16 buf_size, bool tx, bool auto_seq)
{
struct ieee80211_local *local = sta->sdata->local;
struct tid_ampdu_rx *tid_agg_rx;
@@ -326,6 +326,7 @@
tid_agg_rx->buf_size = buf_size;
tid_agg_rx->timeout = timeout;
tid_agg_rx->stored_mpdu_num = 0;
+ tid_agg_rx->auto_seq = auto_seq;
status = WLAN_STATUS_SUCCESS;
/* activate it for RX */
@@ -367,7 +368,7 @@
__ieee80211_start_rx_ba_session(sta, dialog_token, timeout,
start_seq_num, ba_policy, tid,
- buf_size, true);
+ buf_size, true, false);
}
void ieee80211_start_rx_ba_session_offl(struct ieee80211_vif *vif,
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index 4d8989b..fb6a150 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -2,6 +2,7 @@
* mac80211 configuration hooks for cfg80211
*
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This file is GPLv2 as found in COPYING.
*/
@@ -682,8 +683,19 @@
if (old)
return -EALREADY;
- /* TODO: make hostapd tell us what it wants */
- sdata->smps_mode = IEEE80211_SMPS_OFF;
+ switch (params->smps_mode) {
+ case NL80211_SMPS_OFF:
+ sdata->smps_mode = IEEE80211_SMPS_OFF;
+ break;
+ case NL80211_SMPS_STATIC:
+ sdata->smps_mode = IEEE80211_SMPS_STATIC;
+ break;
+ case NL80211_SMPS_DYNAMIC:
+ sdata->smps_mode = IEEE80211_SMPS_DYNAMIC;
+ break;
+ default:
+ return -EINVAL;
+ }
sdata->needed_rx_chains = sdata->local->rx_chains;
mutex_lock(&local->mtx);
@@ -1977,8 +1989,13 @@
return err;
}
- if (changed & WIPHY_PARAM_COVERAGE_CLASS) {
- err = drv_set_coverage_class(local, wiphy->coverage_class);
+ if ((changed & WIPHY_PARAM_COVERAGE_CLASS) ||
+ (changed & WIPHY_PARAM_DYN_ACK)) {
+ s16 coverage_class;
+
+ coverage_class = changed & WIPHY_PARAM_COVERAGE_CLASS ?
+ wiphy->coverage_class : -1;
+ err = drv_set_coverage_class(local, coverage_class);
if (err)
return err;
@@ -2351,6 +2368,58 @@
return 0;
}
+static bool ieee80211_coalesce_started_roc(struct ieee80211_local *local,
+ struct ieee80211_roc_work *new_roc,
+ struct ieee80211_roc_work *cur_roc)
+{
+ unsigned long j = jiffies;
+ unsigned long cur_roc_end = cur_roc->hw_start_time +
+ msecs_to_jiffies(cur_roc->duration);
+ struct ieee80211_roc_work *next_roc;
+ int new_dur;
+
+ if (WARN_ON(!cur_roc->started || !cur_roc->hw_begun))
+ return false;
+
+ if (time_after(j + IEEE80211_ROC_MIN_LEFT, cur_roc_end))
+ return false;
+
+ ieee80211_handle_roc_started(new_roc);
+
+ new_dur = new_roc->duration - jiffies_to_msecs(cur_roc_end - j);
+
+ /* cur_roc is long enough - add new_roc to the dependents list. */
+ if (new_dur <= 0) {
+ list_add_tail(&new_roc->list, &cur_roc->dependents);
+ return true;
+ }
+
+ new_roc->duration = new_dur;
+
+ /*
+ * if cur_roc was already coalesced before, we might
+ * want to extend the next roc instead of adding
+ * a new one.
+ */
+ next_roc = list_entry(cur_roc->list.next,
+ struct ieee80211_roc_work, list);
+ if (&next_roc->list != &local->roc_list &&
+ next_roc->chan == new_roc->chan &&
+ next_roc->sdata == new_roc->sdata &&
+ !WARN_ON(next_roc->started)) {
+ list_add_tail(&new_roc->list, &next_roc->dependents);
+ next_roc->duration = max(next_roc->duration,
+ new_roc->duration);
+ next_roc->type = max(next_roc->type, new_roc->type);
+ return true;
+ }
+
+ /* add right after cur_roc */
+ list_add(&new_roc->list, &cur_roc->list);
+
+ return true;
+}
+
static int ieee80211_start_roc_work(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
struct ieee80211_channel *channel,
@@ -2456,8 +2525,6 @@
/* If it has already started, it's more difficult ... */
if (local->ops->remain_on_channel) {
- unsigned long j = jiffies;
-
/*
* In the offloaded ROC case, if it hasn't begun, add
* this new one to the dependent list to be handled
@@ -2480,28 +2547,8 @@
break;
}
- if (time_before(j + IEEE80211_ROC_MIN_LEFT,
- tmp->hw_start_time +
- msecs_to_jiffies(tmp->duration))) {
- int new_dur;
-
- ieee80211_handle_roc_started(roc);
-
- new_dur = roc->duration -
- jiffies_to_msecs(tmp->hw_start_time +
- msecs_to_jiffies(
- tmp->duration) -
- j);
-
- if (new_dur > 0) {
- /* add right after tmp */
- list_add(&roc->list, &tmp->list);
- } else {
- list_add_tail(&roc->list,
- &tmp->dependents);
- }
+ if (ieee80211_coalesce_started_roc(local, roc, tmp))
queued = true;
- }
} else if (del_timer_sync(&tmp->work.timer)) {
unsigned long new_end;
diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c
index 0e963bc..54a189f 100644
--- a/net/mac80211/debugfs.c
+++ b/net/mac80211/debugfs.c
@@ -3,6 +3,7 @@
* mac80211 debugfs for wireless PHYs
*
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* GPLv2
*
@@ -302,11 +303,6 @@
sf += scnprintf(buf + sf, mxln - sf, "SUPPORTS_DYNAMIC_PS\n");
if (local->hw.flags & IEEE80211_HW_MFP_CAPABLE)
sf += scnprintf(buf + sf, mxln - sf, "MFP_CAPABLE\n");
- if (local->hw.flags & IEEE80211_HW_SUPPORTS_STATIC_SMPS)
- sf += scnprintf(buf + sf, mxln - sf, "SUPPORTS_STATIC_SMPS\n");
- if (local->hw.flags & IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS)
- sf += scnprintf(buf + sf, mxln - sf,
- "SUPPORTS_DYNAMIC_SMPS\n");
if (local->hw.flags & IEEE80211_HW_SUPPORTS_UAPSD)
sf += scnprintf(buf + sf, mxln - sf, "SUPPORTS_UAPSD\n");
if (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS)
diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
index e205eba..c68896a 100644
--- a/net/mac80211/debugfs_netdev.c
+++ b/net/mac80211/debugfs_netdev.c
@@ -226,12 +226,12 @@
struct ieee80211_local *local = sdata->local;
int err;
- if (!(local->hw.flags & IEEE80211_HW_SUPPORTS_STATIC_SMPS) &&
+ if (!(local->hw.wiphy->features & NL80211_FEATURE_STATIC_SMPS) &&
smps_mode == IEEE80211_SMPS_STATIC)
return -EINVAL;
/* auto should be dynamic if in PS mode */
- if (!(local->hw.flags & IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS) &&
+ if (!(local->hw.wiphy->features & NL80211_FEATURE_DYNAMIC_SMPS) &&
(smps_mode == IEEE80211_SMPS_DYNAMIC ||
smps_mode == IEEE80211_SMPS_AUTOMATIC))
return -EINVAL;
diff --git a/net/mac80211/debugfs_sta.c b/net/mac80211/debugfs_sta.c
index 33eb4a4..bafe489 100644
--- a/net/mac80211/debugfs_sta.c
+++ b/net/mac80211/debugfs_sta.c
@@ -2,6 +2,7 @@
* Copyright 2003-2005 Devicescape Software, Inc.
* Copyright (c) 2006 Jiri Benc <jbenc@suse.cz>
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
index 1142395..196d48c 100644
--- a/net/mac80211/driver-ops.h
+++ b/net/mac80211/driver-ops.h
@@ -450,7 +450,7 @@
}
static inline int drv_set_coverage_class(struct ieee80211_local *local,
- u8 value)
+ s16 value)
{
int ret = 0;
might_sleep();
diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
index 5f9654d..56b5357 100644
--- a/net/mac80211/ibss.c
+++ b/net/mac80211/ibss.c
@@ -6,6 +6,7 @@
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2009, Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index ffb20e5..c2aaec4 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -3,6 +3,7 @@
* Copyright 2005, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -354,6 +355,7 @@
IEEE80211_STA_DISABLE_80P80MHZ = BIT(12),
IEEE80211_STA_DISABLE_160MHZ = BIT(13),
IEEE80211_STA_DISABLE_WMM = BIT(14),
+ IEEE80211_STA_ENABLE_RRM = BIT(15),
};
struct ieee80211_mgd_auth_data {
@@ -1367,6 +1369,7 @@
const struct ieee80211_wide_bw_chansw_ie *wide_bw_chansw_ie;
const u8 *country_elem;
const u8 *pwr_constr_elem;
+ const u8 *cisco_dtpc_elem;
const struct ieee80211_timeout_interval_ie *timeout_int;
const u8 *opmode_notif;
const struct ieee80211_sec_chan_offs_ie *sec_chan_offs;
@@ -1587,7 +1590,7 @@
void __ieee80211_start_rx_ba_session(struct sta_info *sta,
u8 dialog_token, u16 timeout,
u16 start_seq_num, u16 ba_policy, u16 tid,
- u16 buf_size, bool tx);
+ u16 buf_size, bool tx, bool auto_seq);
void ieee80211_sta_tear_down_BA_sessions(struct sta_info *sta,
enum ieee80211_agg_stop_reason reason);
void ieee80211_process_delba(struct ieee80211_sub_if_data *sdata,
@@ -1917,7 +1920,7 @@
size_t extra_ies_len);
int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
const u8 *peer, enum nl80211_tdls_operation oper);
-
+void ieee80211_tdls_peer_del_work(struct work_struct *wk);
extern const struct ethtool_ops ieee80211_ethtool_ops;
@@ -1928,4 +1931,3 @@
#endif
#endif /* IEEE80211_I_H */
-void ieee80211_tdls_peer_del_work(struct work_struct *wk);
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index f75e5f1..af23722 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -5,6 +5,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright (c) 2006 Jiri Benc <jbenc@suse.cz>
* Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -1172,19 +1173,11 @@
rx_agg = (void *)&skb->cb;
mutex_lock(&local->sta_mtx);
sta = sta_info_get_bss(sdata, rx_agg->addr);
- if (sta) {
- u16 last_seq;
-
- last_seq = IEEE80211_SEQ_TO_SN(le16_to_cpu(
- sta->last_seq_ctrl[rx_agg->tid]));
-
+ if (sta)
__ieee80211_start_rx_ba_session(sta,
- 0, 0,
- ieee80211_sn_inc(last_seq),
- 1, rx_agg->tid,
+ 0, 0, 0, 1, rx_agg->tid,
IEEE80211_MAX_AMPDU_BUF,
- false);
- }
+ false, true);
mutex_unlock(&local->sta_mtx);
} else if (skb->pkt_type == IEEE80211_SDATA_QUEUE_RX_AGG_STOP) {
rx_agg = (void *)&skb->cb;
diff --git a/net/mac80211/key.c b/net/mac80211/key.c
index 6429d0e..4712150 100644
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -3,6 +3,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007-2008 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -421,7 +422,7 @@
ieee80211_aes_key_free(key->u.ccmp.tfm);
if (key->conf.cipher == WLAN_CIPHER_SUITE_AES_CMAC)
ieee80211_aes_cmac_key_free(key->u.aes_cmac.tfm);
- kfree(key);
+ kzfree(key);
}
static void __ieee80211_key_destroy(struct ieee80211_key *key,
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index e0ab432..0de7c93 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -2,6 +2,7 @@
* Copyright 2002-2005, Instant802 Networks, Inc.
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 8a73de6..2de8870 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -5,6 +5,7 @@
* Copyright 2005, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -172,7 +173,7 @@
if (!(ht_cap->cap_info &
cpu_to_le16(IEEE80211_HT_CAP_SUP_WIDTH_20_40))) {
- ret = IEEE80211_STA_DISABLE_40MHZ | IEEE80211_STA_DISABLE_VHT;
+ ret = IEEE80211_STA_DISABLE_40MHZ;
goto out;
}
@@ -672,6 +673,9 @@
(local->hw.flags & IEEE80211_HW_SPECTRUM_MGMT))
capab |= WLAN_CAPABILITY_SPECTRUM_MGMT;
+ if (ifmgd->flags & IEEE80211_STA_ENABLE_RRM)
+ capab |= WLAN_CAPABILITY_RADIO_MEASURE;
+
mgmt = (struct ieee80211_mgmt *) skb_put(skb, 24);
memset(mgmt, 0, 24);
memcpy(mgmt->da, assoc_data->bss->bssid, ETH_ALEN);
@@ -737,16 +741,17 @@
}
}
- if (capab & WLAN_CAPABILITY_SPECTRUM_MGMT) {
- /* 1. power capabilities */
+ if (capab & WLAN_CAPABILITY_SPECTRUM_MGMT ||
+ capab & WLAN_CAPABILITY_RADIO_MEASURE) {
pos = skb_put(skb, 4);
*pos++ = WLAN_EID_PWR_CAPABILITY;
*pos++ = 2;
*pos++ = 0; /* min tx power */
/* max tx power */
*pos++ = ieee80211_chandef_max_power(&chanctx_conf->def);
+ }
- /* 2. supported channels */
+ if (capab & WLAN_CAPABILITY_SPECTRUM_MGMT) {
/* TODO: get this in reg domain format */
pos = skb_put(skb, 2 * sband->n_channels + 2);
*pos++ = WLAN_EID_SUPPORTED_CHANNELS;
@@ -1166,19 +1171,21 @@
TU_TO_EXP_TIME(csa_ie.count * cbss->beacon_interval));
}
-static u32 ieee80211_handle_pwr_constr(struct ieee80211_sub_if_data *sdata,
- struct ieee80211_channel *channel,
- const u8 *country_ie, u8 country_ie_len,
- const u8 *pwr_constr_elem)
+static bool
+ieee80211_find_80211h_pwr_constr(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_channel *channel,
+ const u8 *country_ie, u8 country_ie_len,
+ const u8 *pwr_constr_elem,
+ int *chan_pwr, int *pwr_reduction)
{
struct ieee80211_country_ie_triplet *triplet;
int chan = ieee80211_frequency_to_channel(channel->center_freq);
- int i, chan_pwr, chan_increment, new_ap_level;
+ int i, chan_increment;
bool have_chan_pwr = false;
/* Invalid IE */
if (country_ie_len % 2 || country_ie_len < IEEE80211_COUNTRY_IE_MIN_LEN)
- return 0;
+ return false;
triplet = (void *)(country_ie + 3);
country_ie_len -= 3;
@@ -1206,7 +1213,7 @@
for (i = 0; i < triplet->chans.num_channels; i++) {
if (first_channel + i * chan_increment == chan) {
have_chan_pwr = true;
- chan_pwr = triplet->chans.max_power;
+ *chan_pwr = triplet->chans.max_power;
break;
}
}
@@ -1218,18 +1225,76 @@
country_ie_len -= 3;
}
- if (!have_chan_pwr)
+ if (have_chan_pwr)
+ *pwr_reduction = *pwr_constr_elem;
+ return have_chan_pwr;
+}
+
+static void ieee80211_find_cisco_dtpc(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_channel *channel,
+ const u8 *cisco_dtpc_ie,
+ int *pwr_level)
+{
+	/* From practical testing, the first data byte of the DTPC element
+	 * seems to contain the requested dBm level, and the CLI on Cisco
+	 * APs clearly states the range is -127 to 127 dBm, which indicates
+	 * a signed byte, although it seemingly never actually goes negative.
+	 * The other byte seems to always be zero.
+	 */
+ *pwr_level = (__s8)cisco_dtpc_ie[4];
+}
+
+static u32 ieee80211_handle_pwr_constr(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_channel *channel,
+ struct ieee80211_mgmt *mgmt,
+ const u8 *country_ie, u8 country_ie_len,
+ const u8 *pwr_constr_ie,
+ const u8 *cisco_dtpc_ie)
+{
+ bool has_80211h_pwr = false, has_cisco_pwr = false;
+ int chan_pwr = 0, pwr_reduction_80211h = 0;
+ int pwr_level_cisco, pwr_level_80211h;
+ int new_ap_level;
+
+ if (country_ie && pwr_constr_ie &&
+ mgmt->u.probe_resp.capab_info &
+ cpu_to_le16(WLAN_CAPABILITY_SPECTRUM_MGMT)) {
+ has_80211h_pwr = ieee80211_find_80211h_pwr_constr(
+ sdata, channel, country_ie, country_ie_len,
+ pwr_constr_ie, &chan_pwr, &pwr_reduction_80211h);
+ pwr_level_80211h =
+ max_t(int, 0, chan_pwr - pwr_reduction_80211h);
+ }
+
+ if (cisco_dtpc_ie) {
+ ieee80211_find_cisco_dtpc(
+ sdata, channel, cisco_dtpc_ie, &pwr_level_cisco);
+ has_cisco_pwr = true;
+ }
+
+ if (!has_80211h_pwr && !has_cisco_pwr)
return 0;
- new_ap_level = max_t(int, 0, chan_pwr - *pwr_constr_elem);
+ /* If we have both 802.11h and Cisco DTPC, apply both limits
+ * by picking the smallest of the two power levels advertised.
+ */
+ if (has_80211h_pwr &&
+ (!has_cisco_pwr || pwr_level_80211h <= pwr_level_cisco)) {
+ sdata_info(sdata,
+ "Limiting TX power to %d (%d - %d) dBm as advertised by %pM\n",
+ pwr_level_80211h, chan_pwr, pwr_reduction_80211h,
+ sdata->u.mgd.bssid);
+ new_ap_level = pwr_level_80211h;
+ } else { /* has_cisco_pwr is always true here. */
+ sdata_info(sdata,
+ "Limiting TX power to %d dBm as advertised by %pM\n",
+ pwr_level_cisco, sdata->u.mgd.bssid);
+ new_ap_level = pwr_level_cisco;
+ }
if (sdata->ap_power_level == new_ap_level)
return 0;
- sdata_info(sdata,
- "Limiting TX power to %d (%d - %d) dBm as advertised by %pM\n",
- new_ap_level, chan_pwr, *pwr_constr_elem,
- sdata->u.mgd.bssid);
sdata->ap_power_level = new_ap_level;
if (__ieee80211_recalc_txpower(sdata))
return BSS_CHANGED_TXPOWER;
@@ -2752,6 +2817,7 @@
struct ieee80211_mgd_assoc_data *assoc_data = ifmgd->assoc_data;
u16 capab_info, status_code, aid;
struct ieee802_11_elems elems;
+ int ac, uapsd_queues = -1;
u8 *pos;
bool reassoc;
struct cfg80211_bss *bss;
@@ -2821,9 +2887,15 @@
* is set can cause the interface to go idle
*/
ieee80211_destroy_assoc_data(sdata, true);
+
+ /* get uapsd queues configuration */
+ uapsd_queues = 0;
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++)
+ if (sdata->tx_conf[ac].uapsd)
+ uapsd_queues |= BIT(ac);
}
- cfg80211_rx_assoc_resp(sdata->dev, bss, (u8 *)mgmt, len);
+ cfg80211_rx_assoc_resp(sdata->dev, bss, (u8 *)mgmt, len, uapsd_queues);
}
static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
@@ -2893,7 +2965,9 @@
/*
* This is the canonical list of information elements we care about,
* the filter code also gives us all changes to the Microsoft OUI
- * (00:50:F2) vendor IE which is used for WMM which we need to track.
+ * (00:50:F2) vendor IE, which is used for WMM and which we need to
+ * track, as well as the DTPC IE (part of the Cisco OUI) used for
+ * signaling changes to the requested client power.
*
* We implement beacon filtering in software since that means we can
* avoid processing the frame here and in cfg80211, and userspace
@@ -3199,13 +3273,11 @@
rx_status->band, true);
mutex_unlock(&local->sta_mtx);
- if (elems.country_elem && elems.pwr_constr_elem &&
- mgmt->u.probe_resp.capab_info &
- cpu_to_le16(WLAN_CAPABILITY_SPECTRUM_MGMT))
- changed |= ieee80211_handle_pwr_constr(sdata, chan,
- elems.country_elem,
- elems.country_elem_len,
- elems.pwr_constr_elem);
+ changed |= ieee80211_handle_pwr_constr(sdata, chan, mgmt,
+ elems.country_elem,
+ elems.country_elem_len,
+ elems.pwr_constr_elem,
+ elems.cisco_dtpc_elem);
ieee80211_bss_info_change_notify(sdata, changed);
}
@@ -3733,7 +3805,7 @@
ifmgd->uapsd_max_sp_len = sdata->local->hw.uapsd_max_sp_len;
ifmgd->p2p_noa_index = -1;
- if (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS)
+ if (sdata->local->hw.wiphy->features & NL80211_FEATURE_DYNAMIC_SMPS)
ifmgd->req_smps = IEEE80211_SMPS_AUTOMATIC;
else
ifmgd->req_smps = IEEE80211_SMPS_OFF;
@@ -4408,6 +4480,11 @@
ifmgd->flags &= ~IEEE80211_STA_MFP_ENABLED;
}
+ if (req->flags & ASSOC_REQ_USE_RRM)
+ ifmgd->flags |= IEEE80211_STA_ENABLE_RRM;
+ else
+ ifmgd->flags &= ~IEEE80211_STA_ENABLE_RRM;
+
if (req->crypto.control_port)
ifmgd->flags |= IEEE80211_STA_CONTROL_PORT;
else
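
For the combined power-constraint handling added above, a quick worked example (numbers invented): a country IE yielding chan_pwr = 20 dBm with an 802.11h constraint of 3 gives pwr_level_80211h = max(0, 20 - 3) = 17 dBm; a Cisco DTPC byte of 15 gives pwr_level_cisco = 15 dBm, and since both are present the smaller level wins:

    /* Worked example of the selection logic (invented numbers). */
    int pwr_80211h = 20 - 3;        /* 17 dBm, clamped at 0 from below */
    int pwr_cisco  = 15;            /* signed DTPC byte */
    int new_ap_level = pwr_80211h < pwr_cisco ? pwr_80211h : pwr_cisco; /* 15 */

BSS_CHANGED_TXPOWER is then reported only if new_ap_level differs from the current sdata->ap_power_level.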
diff --git a/net/mac80211/rc80211_minstrel.c b/net/mac80211/rc80211_minstrel.c
index 1c1469c..2baa7ed 100644
--- a/net/mac80211/rc80211_minstrel.c
+++ b/net/mac80211/rc80211_minstrel.c
@@ -75,7 +75,7 @@
{
int j = MAX_THR_RATES;
- while (j > 0 && mi->r[i].cur_tp > mi->r[tp_list[j - 1]].cur_tp)
+ while (j > 0 && mi->r[i].stats.cur_tp > mi->r[tp_list[j - 1]].stats.cur_tp)
j--;
if (j < MAX_THR_RATES - 1)
memmove(&tp_list[j + 1], &tp_list[j], MAX_THR_RATES - (j + 1));
@@ -92,7 +92,7 @@
ratetbl->rate[offset].idx = r->rix;
ratetbl->rate[offset].count = r->adjusted_retry_count;
ratetbl->rate[offset].count_cts = r->retry_count_cts;
- ratetbl->rate[offset].count_rts = r->retry_count_rtscts;
+ ratetbl->rate[offset].count_rts = r->stats.retry_count_rtscts;
}
static void
@@ -140,44 +140,46 @@
for (i = 0; i < mi->n_rates; i++) {
struct minstrel_rate *mr = &mi->r[i];
+ struct minstrel_rate_stats *mrs = &mi->r[i].stats;
usecs = mr->perfect_tx_time;
if (!usecs)
usecs = 1000000;
- if (unlikely(mr->attempts > 0)) {
- mr->sample_skipped = 0;
- mr->cur_prob = MINSTREL_FRAC(mr->success, mr->attempts);
- mr->succ_hist += mr->success;
- mr->att_hist += mr->attempts;
- mr->probability = minstrel_ewma(mr->probability,
- mr->cur_prob,
- EWMA_LEVEL);
+ if (unlikely(mrs->attempts > 0)) {
+ mrs->sample_skipped = 0;
+ mrs->cur_prob = MINSTREL_FRAC(mrs->success,
+ mrs->attempts);
+ mrs->succ_hist += mrs->success;
+ mrs->att_hist += mrs->attempts;
+ mrs->probability = minstrel_ewma(mrs->probability,
+ mrs->cur_prob,
+ EWMA_LEVEL);
} else
- mr->sample_skipped++;
+ mrs->sample_skipped++;
- mr->last_success = mr->success;
- mr->last_attempts = mr->attempts;
- mr->success = 0;
- mr->attempts = 0;
+ mrs->last_success = mrs->success;
+ mrs->last_attempts = mrs->attempts;
+ mrs->success = 0;
+ mrs->attempts = 0;
/* Update throughput per rate, reset thr. below 10% success */
- if (mr->probability < MINSTREL_FRAC(10, 100))
- mr->cur_tp = 0;
+ if (mrs->probability < MINSTREL_FRAC(10, 100))
+ mrs->cur_tp = 0;
else
- mr->cur_tp = mr->probability * (1000000 / usecs);
+ mrs->cur_tp = mrs->probability * (1000000 / usecs);
/* Sample less often below the 10% chance of success.
* Sample less often above the 95% chance of success. */
- if (mr->probability > MINSTREL_FRAC(95, 100) ||
- mr->probability < MINSTREL_FRAC(10, 100)) {
- mr->adjusted_retry_count = mr->retry_count >> 1;
+ if (mrs->probability > MINSTREL_FRAC(95, 100) ||
+ mrs->probability < MINSTREL_FRAC(10, 100)) {
+ mr->adjusted_retry_count = mrs->retry_count >> 1;
if (mr->adjusted_retry_count > 2)
mr->adjusted_retry_count = 2;
mr->sample_limit = 4;
} else {
mr->sample_limit = -1;
- mr->adjusted_retry_count = mr->retry_count;
+ mr->adjusted_retry_count = mrs->retry_count;
}
if (!mr->adjusted_retry_count)
mr->adjusted_retry_count = 2;
@@ -190,11 +192,11 @@
* choose the maximum throughput rate as max_prob_rate
* (2) if all success probabilities < 95%, the rate with
* highest success probability is chosen as max_prob_rate */
- if (mr->probability >= MINSTREL_FRAC(95, 100)) {
- if (mr->cur_tp >= mi->r[tmp_prob_rate].cur_tp)
+ if (mrs->probability >= MINSTREL_FRAC(95, 100)) {
+ if (mrs->cur_tp >= mi->r[tmp_prob_rate].stats.cur_tp)
tmp_prob_rate = i;
} else {
- if (mr->probability >= mi->r[tmp_prob_rate].probability)
+ if (mrs->probability >= mi->r[tmp_prob_rate].stats.probability)
tmp_prob_rate = i;
}
}
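The statistics update above is a fixed-point exponentially weighted moving average. The standalone sketch below (userspace C) mirrors the minstrel_ewma() helper as it appears in rc80211_minstrel.h; the 16-bit scale constant is an assumption for illustration:

    #include <stdio.h>

    /* fixed-point helpers mirroring rc80211_minstrel.h; the 16-bit
     * scale is an assumption for illustration */
    #define MINSTREL_SCALE 16
    #define MINSTREL_FRAC(val, div) (((val) << MINSTREL_SCALE) / (div))
    #define MINSTREL_TRUNC(val) ((val) >> MINSTREL_SCALE)
    #define EWMA_LEVEL 96   /* ewma weighting factor [%] */
    #define EWMA_DIV 128

    static int minstrel_ewma(int old, int new, int weight)
    {
            return (new * (EWMA_DIV - weight) + old * weight) / EWMA_DIV;
    }

    int main(void)
    {
            int probability = 0;
            /* three sampling periods: 8/10, 5/10, 10/10 frames acked */
            int success[] = { 8, 5, 10 }, attempts[] = { 10, 10, 10 };
            int i;

            for (i = 0; i < 3; i++) {
                    int cur_prob = MINSTREL_FRAC(success[i], attempts[i]);

                    probability = minstrel_ewma(probability, cur_prob,
                                                EWMA_LEVEL);
                    printf("period %d: cur %3d%% ewma %3d%%\n", i,
                           MINSTREL_TRUNC(cur_prob * 100),
                           MINSTREL_TRUNC(probability * 100));
            }
            return 0;
    }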
@@ -240,14 +242,14 @@
if (ndx < 0)
continue;
- mi->r[ndx].attempts += ar[i].count;
+ mi->r[ndx].stats.attempts += ar[i].count;
if ((i != IEEE80211_TX_MAX_RATES - 1) && (ar[i + 1].idx < 0))
- mi->r[ndx].success += success;
+ mi->r[ndx].stats.success += success;
}
if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && (i >= 0))
- mi->sample_count++;
+ mi->sample_packets++;
if (mi->sample_deferred > 0)
mi->sample_deferred--;
@@ -265,7 +267,7 @@
unsigned int retry = mr->adjusted_retry_count;
if (info->control.use_rts)
- retry = max(2U, min(mr->retry_count_rtscts, retry));
+ retry = max(2U, min(mr->stats.retry_count_rtscts, retry));
else if (info->control.use_cts_prot)
retry = max(2U, min(mr->retry_count_cts, retry));
return retry;
@@ -317,15 +319,15 @@
sampling_ratio = mp->lookaround_rate;
/* increase sum packet counter */
- mi->packet_count++;
+ mi->total_packets++;
#ifdef CONFIG_MAC80211_DEBUGFS
if (mp->fixed_rate_idx != -1)
return;
#endif
- delta = (mi->packet_count * sampling_ratio / 100) -
- (mi->sample_count + mi->sample_deferred / 2);
+ delta = (mi->total_packets * sampling_ratio / 100) -
+ (mi->sample_packets + mi->sample_deferred / 2);
/* delta < 0: no sampling required */
prev_sample = mi->prev_sample;
@@ -333,10 +335,10 @@
if (delta < 0 || (!mrr_capable && prev_sample))
return;
- if (mi->packet_count >= 10000) {
+ if (mi->total_packets >= 10000) {
mi->sample_deferred = 0;
- mi->sample_count = 0;
- mi->packet_count = 0;
+ mi->sample_packets = 0;
+ mi->total_packets = 0;
} else if (delta > mi->n_rates * 2) {
/* With multi-rate retry, not every planned sample
* attempt actually gets used, due to the way the retry
@@ -347,7 +349,7 @@
* starts getting worse, minstrel would start bursting
* out lots of sampling frames, which would result
* in a large throughput loss. */
- mi->sample_count += (delta - mi->n_rates * 2);
+ mi->sample_packets += (delta - mi->n_rates * 2);
}
/* get next random rate sample */
@@ -361,7 +363,7 @@
*/
if (mrr_capable &&
msr->perfect_tx_time > mr->perfect_tx_time &&
- msr->sample_skipped < 20) {
+ msr->stats.sample_skipped < 20) {
/* Only use IEEE80211_TX_CTL_RATE_CTRL_PROBE to mark
* packets that have the sampling rate deferred to the
* second MRR stage. Increase the sample counter only
@@ -375,7 +377,7 @@
if (!msr->sample_limit != 0)
return;
- mi->sample_count++;
+ mi->sample_packets++;
if (msr->sample_limit > 0)
msr->sample_limit--;
}
@@ -384,7 +386,7 @@
* has a probability of >95%, we shouldn't be attempting
* to use it, as this only wastes precious airtime */
if (!mrr_capable &&
- (mi->r[ndx].probability > MINSTREL_FRAC(95, 100)))
+ (mi->r[ndx].stats.probability > MINSTREL_FRAC(95, 100)))
return;
mi->prev_sample = true;
@@ -459,6 +461,7 @@
for (i = 0; i < sband->n_bitrates; i++) {
struct minstrel_rate *mr = &mi->r[n];
+ struct minstrel_rate_stats *mrs = &mi->r[n].stats;
unsigned int tx_time = 0, tx_time_cts = 0, tx_time_rtscts = 0;
unsigned int tx_time_single;
unsigned int cw = mp->cw_min;
@@ -471,6 +474,7 @@
n++;
memset(mr, 0, sizeof(*mr));
+ memset(mrs, 0, sizeof(*mrs));
mr->rix = i;
shift = ieee80211_chandef_get_shift(chandef);
@@ -482,9 +486,9 @@
/* calculate maximum number of retransmissions before
* fallback (based on maximum segment size) */
mr->sample_limit = -1;
- mr->retry_count = 1;
+ mrs->retry_count = 1;
mr->retry_count_cts = 1;
- mr->retry_count_rtscts = 1;
+ mrs->retry_count_rtscts = 1;
tx_time = mr->perfect_tx_time + mi->sp_ack_dur;
do {
/* add one retransmission */
@@ -501,13 +505,13 @@
(mr->retry_count_cts < mp->max_retry))
mr->retry_count_cts++;
if ((tx_time_rtscts < mp->segment_size) &&
- (mr->retry_count_rtscts < mp->max_retry))
- mr->retry_count_rtscts++;
+ (mrs->retry_count_rtscts < mp->max_retry))
+ mrs->retry_count_rtscts++;
} while ((tx_time < mp->segment_size) &&
- (++mr->retry_count < mp->max_retry));
- mr->adjusted_retry_count = mr->retry_count;
+ (++mr->stats.retry_count < mp->max_retry));
+ mr->adjusted_retry_count = mrs->retry_count;
if (!(sband->bitrates[i].flags & IEEE80211_RATE_ERP_G))
- mr->retry_count_cts = mr->retry_count;
+ mr->retry_count_cts = mrs->retry_count;
}
for (i = n; i < sband->n_bitrates; i++) {
@@ -665,7 +669,7 @@
/* convert packets per second to kbps (1200 is the average packet size
 * used for computing cur_tp)
 */
- return MINSTREL_TRUNC(mi->r[idx].cur_tp) * 1200 * 8 / 1024;
+ return MINSTREL_TRUNC(mi->r[idx].stats.cur_tp) * 1200 * 8 / 1024;
}
const struct rate_control_ops mac80211_minstrel = {
diff --git a/net/mac80211/rc80211_minstrel.h b/net/mac80211/rc80211_minstrel.h
index 046d1bd..97eca86 100644
--- a/net/mac80211/rc80211_minstrel.h
+++ b/net/mac80211/rc80211_minstrel.h
@@ -31,6 +31,27 @@
return (new * (EWMA_DIV - weight) + old * weight) / EWMA_DIV;
}
+struct minstrel_rate_stats {
+ /* current / last sampling period attempts/success counters */
+ unsigned int attempts, last_attempts;
+ unsigned int success, last_success;
+
+ /* total attempts/success counters */
+ u64 att_hist, succ_hist;
+
+ /* current throughput */
+ unsigned int cur_tp;
+
+ /* packet delivery probabilities */
+ unsigned int cur_prob, probability;
+
+ /* maximum retry counts */
+ unsigned int retry_count;
+ unsigned int retry_count_rtscts;
+
+ u8 sample_skipped;
+ bool retry_updated;
+};
struct minstrel_rate {
int bitrate;
@@ -40,26 +61,10 @@
unsigned int ack_time;
int sample_limit;
- unsigned int retry_count;
unsigned int retry_count_cts;
- unsigned int retry_count_rtscts;
unsigned int adjusted_retry_count;
- u32 success;
- u32 attempts;
- u32 last_attempts;
- u32 last_success;
- u8 sample_skipped;
-
- /* parts per thousand */
- u32 cur_prob;
- u32 probability;
-
- /* per-rate throughput */
- u32 cur_tp;
-
- u64 succ_hist;
- u64 att_hist;
+ struct minstrel_rate_stats stats;
};
struct minstrel_sta_info {
@@ -73,8 +78,8 @@
u8 max_tp_rate[MAX_THR_RATES];
u8 max_prob_rate;
- unsigned int packet_count;
- unsigned int sample_count;
+ unsigned int total_packets;
+ unsigned int sample_packets;
int sample_deferred;
unsigned int sample_row;
diff --git a/net/mac80211/rc80211_minstrel_debugfs.c b/net/mac80211/rc80211_minstrel_debugfs.c
index fd0b9ca..edde723 100644
--- a/net/mac80211/rc80211_minstrel_debugfs.c
+++ b/net/mac80211/rc80211_minstrel_debugfs.c
@@ -72,6 +72,7 @@
"this succ/attempt success attempts\n");
for (i = 0; i < mi->n_rates; i++) {
struct minstrel_rate *mr = &mi->r[i];
+ struct minstrel_rate_stats *mrs = &mi->r[i].stats;
*(p++) = (i == mi->max_tp_rate[0]) ? 'A' : ' ';
*(p++) = (i == mi->max_tp_rate[1]) ? 'B' : ' ';
@@ -81,24 +82,24 @@
p += sprintf(p, "%3u%s", mr->bitrate / 2,
(mr->bitrate & 1 ? ".5" : " "));
- tp = MINSTREL_TRUNC(mr->cur_tp / 10);
- prob = MINSTREL_TRUNC(mr->cur_prob * 1000);
- eprob = MINSTREL_TRUNC(mr->probability * 1000);
+ tp = MINSTREL_TRUNC(mrs->cur_tp / 10);
+ prob = MINSTREL_TRUNC(mrs->cur_prob * 1000);
+ eprob = MINSTREL_TRUNC(mrs->probability * 1000);
p += sprintf(p, " %6u.%1u %6u.%1u %6u.%1u "
" %3u(%3u) %8llu %8llu\n",
tp / 10, tp % 10,
eprob / 10, eprob % 10,
prob / 10, prob % 10,
- mr->last_success,
- mr->last_attempts,
- (unsigned long long)mr->succ_hist,
- (unsigned long long)mr->att_hist);
+ mrs->last_success,
+ mrs->last_attempts,
+ (unsigned long long)mrs->succ_hist,
+ (unsigned long long)mrs->att_hist);
}
p += sprintf(p, "\nTotal packet count:: ideal %d "
"lookaround %d\n\n",
- mi->packet_count - mi->sample_count,
- mi->sample_count);
+ mi->total_packets - mi->sample_packets,
+ mi->sample_packets);
ms->len = p - ms->buf;
return 0;
diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
index 85c1e74..df90ce2 100644
--- a/net/mac80211/rc80211_minstrel_ht.c
+++ b/net/mac80211/rc80211_minstrel_ht.c
@@ -135,7 +135,7 @@
static int
minstrel_ht_get_group_idx(struct ieee80211_tx_rate *rate)
{
- return GROUP_IDX((rate->idx / 8) + 1,
+ return GROUP_IDX((rate->idx / MCS_GROUP_RATES) + 1,
!!(rate->flags & IEEE80211_TX_RC_SHORT_GI),
!!(rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH));
}
@@ -233,12 +233,151 @@
}
/*
+ * Find & sort topmost throughput rates
+ *
+ * If multiple rates provide equal throughput, the sorting is based on their
+ * current success probability. Higher success probability is preferred among
+ * MCS groups; CCK rates do not support aggregation and are therefore sorted last.
+ */
+static void
+minstrel_ht_sort_best_tp_rates(struct minstrel_ht_sta *mi, u8 index,
+ u8 *tp_list)
+{
+ int cur_group, cur_idx, cur_thr, cur_prob;
+ int tmp_group, tmp_idx, tmp_thr, tmp_prob;
+ int j = MAX_THR_RATES;
+
+ cur_group = index / MCS_GROUP_RATES;
+ cur_idx = index % MCS_GROUP_RATES;
+ cur_thr = mi->groups[cur_group].rates[cur_idx].cur_tp;
+ cur_prob = mi->groups[cur_group].rates[cur_idx].probability;
+
+ tmp_group = tp_list[j - 1] / MCS_GROUP_RATES;
+ tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES;
+ tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp;
+ tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability;
+
+ while (j > 0 && (cur_thr > tmp_thr ||
+ (cur_thr == tmp_thr && cur_prob > tmp_prob))) {
+ j--;
+ tmp_group = tp_list[j - 1] / MCS_GROUP_RATES;
+ tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES;
+ tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp;
+ tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability;
+ }
+
+ if (j < MAX_THR_RATES - 1) {
+ memmove(&tp_list[j + 1], &tp_list[j], (sizeof(*tp_list) *
+ (MAX_THR_RATES - (j + 1))));
+ }
+ if (j < MAX_THR_RATES)
+ tp_list[j] = index;
+}
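The function above is an insertion sort over at most MAX_THR_RATES entries, keyed on throughput with success probability as tie-breaker. A standalone sketch with the MCS-group indirection flattened into plain arrays (userspace C; the data values are invented):

    #include <stdio.h>
    #include <string.h>

    #define MAX_THR_RATES 4

    /* invented per-rate throughput / success-probability tables */
    static const unsigned int thr[8]  = { 10, 40, 40, 90, 5, 70, 0, 20 };
    static const unsigned int prob[8] = { 90, 60, 80, 50, 99, 75, 0, 95 };

    static void sort_best_tp_rates(unsigned char index,
                                   unsigned char *tp_list)
    {
            int j = MAX_THR_RATES;

            /* walk upwards while the new rate beats the entry above */
            while (j > 0 && (thr[index] > thr[tp_list[j - 1]] ||
                             (thr[index] == thr[tp_list[j - 1]] &&
                              prob[index] > prob[tp_list[j - 1]])))
                    j--;

            if (j < MAX_THR_RATES - 1)
                    memmove(&tp_list[j + 1], &tp_list[j],
                            sizeof(*tp_list) * (MAX_THR_RATES - (j + 1)));
            if (j < MAX_THR_RATES)
                    tp_list[j] = index;
    }

    int main(void)
    {
            unsigned char tp_list[MAX_THR_RATES] = { 6, 6, 6, 6 };
            unsigned char i;
            int j;

            for (i = 0; i < 8; i++)
                    sort_best_tp_rates(i, tp_list);
            for (j = 0; j < MAX_THR_RATES; j++)
                    printf("#%d: rate %u (thr %u, prob %u)\n", j + 1,
                           tp_list[j], thr[tp_list[j]], prob[tp_list[j]]);
            return 0;
    }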
+
+/*
+ * Find and set the topmost probability rate per sta and per group
+ */
+static void
+minstrel_ht_set_best_prob_rate(struct minstrel_ht_sta *mi, u8 index)
+{
+ struct minstrel_mcs_group_data *mg;
+ struct minstrel_rate_stats *mr;
+ int tmp_group, tmp_idx, tmp_tp, tmp_prob, max_tp_group;
+
+ mg = &mi->groups[index / MCS_GROUP_RATES];
+ mr = &mg->rates[index % MCS_GROUP_RATES];
+
+ tmp_group = mi->max_prob_rate / MCS_GROUP_RATES;
+ tmp_idx = mi->max_prob_rate % MCS_GROUP_RATES;
+ tmp_tp = mi->groups[tmp_group].rates[tmp_idx].cur_tp;
+ tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability;
+
+ /* If max_tp_rate[0] is from an MCS group, max_prob_rate must be
+ * selected from an MCS group as well, since CCK rates do not
+ * allow aggregation. */
+ max_tp_group = mi->max_tp_rate[0] / MCS_GROUP_RATES;
+ if ((index / MCS_GROUP_RATES == MINSTREL_CCK_GROUP) &&
+ (max_tp_group != MINSTREL_CCK_GROUP))
+ return;
+
+ if (mr->probability > MINSTREL_FRAC(75, 100)) {
+ if (mr->cur_tp > tmp_tp)
+ mi->max_prob_rate = index;
+ if (mr->cur_tp > mg->rates[mg->max_group_prob_rate].cur_tp)
+ mg->max_group_prob_rate = index;
+ } else {
+ if (mr->probability > tmp_prob)
+ mi->max_prob_rate = index;
+ if (mr->probability > mg->rates[mg->max_group_prob_rate].probability)
+ mg->max_group_prob_rate = index;
+ }
+}
+
+
+/*
+ * Assign the new rate set per sta, using CCK rates only if the fastest
+ * rate (max_tp_rate[0]) is from the CCK group. This prevents sorted
+ * rate sets in which MCS and CCK rates are mixed, because CCK rates
+ * cannot use aggregation.
+ */
+static void
+minstrel_ht_assign_best_tp_rates(struct minstrel_ht_sta *mi,
+ u8 tmp_mcs_tp_rate[MAX_THR_RATES],
+ u8 tmp_cck_tp_rate[MAX_THR_RATES])
+{
+ unsigned int tmp_group, tmp_idx, tmp_cck_tp, tmp_mcs_tp;
+ int i;
+
+ tmp_group = tmp_cck_tp_rate[0] / MCS_GROUP_RATES;
+ tmp_idx = tmp_cck_tp_rate[0] % MCS_GROUP_RATES;
+ tmp_cck_tp = mi->groups[tmp_group].rates[tmp_idx].cur_tp;
+
+ tmp_group = tmp_mcs_tp_rate[0] / MCS_GROUP_RATES;
+ tmp_idx = tmp_mcs_tp_rate[0] % MCS_GROUP_RATES;
+ tmp_mcs_tp = mi->groups[tmp_group].rates[tmp_idx].cur_tp;
+
+ if (tmp_cck_tp > tmp_mcs_tp) {
+ for (i = 0; i < MAX_THR_RATES; i++) {
+ minstrel_ht_sort_best_tp_rates(mi, tmp_cck_tp_rate[i],
+ tmp_mcs_tp_rate);
+ }
+ }
+}
+
+/*
+ * Try to increase the robustness of the max_prob rate by decreasing
+ * the number of streams if possible.
+ */
+static inline void
+minstrel_ht_prob_rate_reduce_streams(struct minstrel_ht_sta *mi)
+{
+ struct minstrel_mcs_group_data *mg;
+ struct minstrel_rate_stats *mr;
+ int tmp_max_streams, group;
+ int tmp_tp = 0;
+
+ tmp_max_streams = minstrel_mcs_groups[mi->max_tp_rate[0] /
+ MCS_GROUP_RATES].streams;
+ for (group = 0; group < ARRAY_SIZE(minstrel_mcs_groups); group++) {
+ mg = &mi->groups[group];
+ if (!mg->supported || group == MINSTREL_CCK_GROUP)
+ continue;
+ mr = minstrel_get_ratestats(mi, mg->max_group_prob_rate);
+ if (tmp_tp < mr->cur_tp &&
+ (minstrel_mcs_groups[group].streams < tmp_max_streams)) {
+ mi->max_prob_rate = mg->max_group_prob_rate;
+ tmp_tp = mr->cur_tp;
+ }
+ }
+}
+
+/*
* Update rate statistics and select new primary rates
*
* Rules for rate selection:
* - max_prob_rate must use only one stream, as a tradeoff between delivery
* probability and throughput during strong fluctuations
- * - as long as the max prob rate has a probability of more than 3/4, pick
+ * - as long as the max prob rate has a probability of more than 75%, pick
* higher throughput rates, even if the probability is a bit lower
*/
static void
@@ -246,9 +385,9 @@
{
struct minstrel_mcs_group_data *mg;
struct minstrel_rate_stats *mr;
- int cur_prob, cur_prob_tp, cur_tp, cur_tp2;
- int group, i, index;
- bool mi_rates_valid = false;
+ int group, i, j;
+ u8 tmp_mcs_tp_rate[MAX_THR_RATES], tmp_group_tp_rate[MAX_THR_RATES];
+ u8 tmp_cck_tp_rate[MAX_THR_RATES], index;
if (mi->ampdu_packets > 0) {
mi->avg_ampdu_len = minstrel_ewma(mi->avg_ampdu_len,
@@ -260,13 +399,14 @@
mi->sample_slow = 0;
mi->sample_count = 0;
- for (group = 0; group < ARRAY_SIZE(minstrel_mcs_groups); group++) {
- bool mg_rates_valid = false;
+ /* Initialize global rate indexes */
+ for (j = 0; j < MAX_THR_RATES; j++) {
+ tmp_mcs_tp_rate[j] = 0;
+ tmp_cck_tp_rate[j] = 0;
+ }
- cur_prob = 0;
- cur_prob_tp = 0;
- cur_tp = 0;
- cur_tp2 = 0;
+ /* Find best rate sets within all MCS groups */
+ for (group = 0; group < ARRAY_SIZE(minstrel_mcs_groups); group++) {
mg = &mi->groups[group];
if (!mg->supported)
@@ -274,24 +414,16 @@
mi->sample_count++;
+ /* (Re)initialize group rate indexes */
+ for (j = 0; j < MAX_THR_RATES; j++)
+ tmp_group_tp_rate[j] = group;
+
for (i = 0; i < MCS_GROUP_RATES; i++) {
if (!(mg->supported & BIT(i)))
continue;
index = MCS_GROUP_RATES * group + i;
- /* initialize rates selections starting indexes */
- if (!mg_rates_valid) {
- mg->max_tp_rate = mg->max_tp_rate2 =
- mg->max_prob_rate = i;
- if (!mi_rates_valid) {
- mi->max_tp_rate = mi->max_tp_rate2 =
- mi->max_prob_rate = index;
- mi_rates_valid = true;
- }
- mg_rates_valid = true;
- }
-
mr = &mg->rates[i];
mr->retry_updated = false;
minstrel_calc_rate_ewma(mr);
@@ -300,82 +432,47 @@
if (!mr->cur_tp)
continue;
- if ((mr->cur_tp > cur_prob_tp && mr->probability >
- MINSTREL_FRAC(3, 4)) || mr->probability > cur_prob) {
- mg->max_prob_rate = index;
- cur_prob = mr->probability;
- cur_prob_tp = mr->cur_tp;
+ /* Find max throughput rate set */
+ if (group != MINSTREL_CCK_GROUP) {
+ minstrel_ht_sort_best_tp_rates(mi, index,
+ tmp_mcs_tp_rate);
+ } else if (group == MINSTREL_CCK_GROUP) {
+ minstrel_ht_sort_best_tp_rates(mi, index,
+ tmp_cck_tp_rate);
}
- if (mr->cur_tp > cur_tp) {
- swap(index, mg->max_tp_rate);
- cur_tp = mr->cur_tp;
- mr = minstrel_get_ratestats(mi, index);
- }
+ /* Find max throughput rate set within a group */
+ minstrel_ht_sort_best_tp_rates(mi, index,
+ tmp_group_tp_rate);
- if (index >= mg->max_tp_rate)
- continue;
-
- if (mr->cur_tp > cur_tp2) {
- mg->max_tp_rate2 = index;
- cur_tp2 = mr->cur_tp;
- }
+ /* Find max probability rate per group and global */
+ minstrel_ht_set_best_prob_rate(mi, index);
}
+
+ memcpy(mg->max_group_tp_rate, tmp_group_tp_rate,
+ sizeof(mg->max_group_tp_rate));
}
+ /* Assign new rate set per sta */
+ minstrel_ht_assign_best_tp_rates(mi, tmp_mcs_tp_rate, tmp_cck_tp_rate);
+ memcpy(mi->max_tp_rate, tmp_mcs_tp_rate, sizeof(mi->max_tp_rate));
+
+ /* Try to increase robustness of max_prob_rate */
+ minstrel_ht_prob_rate_reduce_streams(mi);
+
/* try to sample all available rates during each interval */
mi->sample_count *= 8;
- cur_prob = 0;
- cur_prob_tp = 0;
- cur_tp = 0;
- cur_tp2 = 0;
- for (group = 0; group < ARRAY_SIZE(minstrel_mcs_groups); group++) {
- mg = &mi->groups[group];
- if (!mg->supported)
- continue;
-
- mr = minstrel_get_ratestats(mi, mg->max_tp_rate);
- if (cur_tp < mr->cur_tp) {
- mi->max_tp_rate2 = mi->max_tp_rate;
- cur_tp2 = cur_tp;
- mi->max_tp_rate = mg->max_tp_rate;
- cur_tp = mr->cur_tp;
- mi->max_prob_streams = minstrel_mcs_groups[group].streams - 1;
- }
-
- mr = minstrel_get_ratestats(mi, mg->max_tp_rate2);
- if (cur_tp2 < mr->cur_tp) {
- mi->max_tp_rate2 = mg->max_tp_rate2;
- cur_tp2 = mr->cur_tp;
- }
- }
-
- if (mi->max_prob_streams < 1)
- mi->max_prob_streams = 1;
-
- for (group = 0; group < ARRAY_SIZE(minstrel_mcs_groups); group++) {
- mg = &mi->groups[group];
- if (!mg->supported)
- continue;
- mr = minstrel_get_ratestats(mi, mg->max_prob_rate);
- if (cur_prob_tp < mr->cur_tp &&
- minstrel_mcs_groups[group].streams <= mi->max_prob_streams) {
- mi->max_prob_rate = mg->max_prob_rate;
- cur_prob = mr->cur_prob;
- cur_prob_tp = mr->cur_tp;
- }
- }
-
#ifdef CONFIG_MAC80211_DEBUGFS
/* use fixed index if set */
if (mp->fixed_rate_idx != -1) {
- mi->max_tp_rate = mp->fixed_rate_idx;
- mi->max_tp_rate2 = mp->fixed_rate_idx;
+ for (i = 0; i < MAX_THR_RATES; i++)
+ mi->max_tp_rate[i] = mp->fixed_rate_idx;
mi->max_prob_rate = mp->fixed_rate_idx;
}
#endif
+ /* Reset update timer */
mi->stats_update = jiffies;
}
@@ -420,8 +517,7 @@
}
static void
-minstrel_downgrade_rate(struct minstrel_ht_sta *mi, unsigned int *idx,
- bool primary)
+minstrel_downgrade_rate(struct minstrel_ht_sta *mi, u8 *idx, bool primary)
{
int group, orig_group;
@@ -437,9 +533,9 @@
continue;
if (primary)
- *idx = mi->groups[group].max_tp_rate;
+ *idx = mi->groups[group].max_group_tp_rate[0];
else
- *idx = mi->groups[group].max_tp_rate2;
+ *idx = mi->groups[group].max_group_tp_rate[1];
break;
}
}
@@ -524,19 +620,19 @@
* check for sudden death of spatial multiplexing,
* downgrade to a lower number of streams if necessary.
*/
- rate = minstrel_get_ratestats(mi, mi->max_tp_rate);
+ rate = minstrel_get_ratestats(mi, mi->max_tp_rate[0]);
if (rate->attempts > 30 &&
MINSTREL_FRAC(rate->success, rate->attempts) <
MINSTREL_FRAC(20, 100)) {
- minstrel_downgrade_rate(mi, &mi->max_tp_rate, true);
+ minstrel_downgrade_rate(mi, &mi->max_tp_rate[0], true);
update = true;
}
- rate2 = minstrel_get_ratestats(mi, mi->max_tp_rate2);
+ rate2 = minstrel_get_ratestats(mi, mi->max_tp_rate[1]);
if (rate2->attempts > 30 &&
MINSTREL_FRAC(rate2->success, rate2->attempts) <
MINSTREL_FRAC(20, 100)) {
- minstrel_downgrade_rate(mi, &mi->max_tp_rate2, false);
+ minstrel_downgrade_rate(mi, &mi->max_tp_rate[1], false);
update = true;
}
@@ -661,12 +757,12 @@
if (!rates)
return;
- /* Start with max_tp_rate */
- minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_tp_rate);
+ /* Start with max_tp_rate[0] */
+ minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_tp_rate[0]);
if (mp->hw->max_rates >= 3) {
- /* At least 3 tx rates supported, use max_tp_rate2 next */
- minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_tp_rate2);
+ /* At least 3 tx rates supported, use max_tp_rate[1] next */
+ minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_tp_rate[1]);
}
if (mp->hw->max_rates >= 2) {
@@ -691,7 +787,7 @@
{
struct minstrel_rate_stats *mr;
struct minstrel_mcs_group_data *mg;
- unsigned int sample_dur, sample_group;
+ unsigned int sample_dur, sample_group, cur_max_tp_streams;
int sample_idx = 0;
if (mi->sample_wait > 0) {
@@ -718,8 +814,8 @@
* to the frame. Hence, don't use sampling for the currently
* used rates.
*/
- if (sample_idx == mi->max_tp_rate ||
- sample_idx == mi->max_tp_rate2 ||
+ if (sample_idx == mi->max_tp_rate[0] ||
+ sample_idx == mi->max_tp_rate[1] ||
sample_idx == mi->max_prob_rate)
return -1;
@@ -734,9 +830,12 @@
* Make sure that lower rates get sampled only occasionally,
* if the link is working perfectly.
*/
+
+ cur_max_tp_streams = minstrel_mcs_groups[mi->max_tp_rate[0] /
+ MCS_GROUP_RATES].streams;
sample_dur = minstrel_get_duration(sample_idx);
- if (sample_dur >= minstrel_get_duration(mi->max_tp_rate2) &&
- (mi->max_prob_streams <
+ if (sample_dur >= minstrel_get_duration(mi->max_tp_rate[1]) &&
+ (cur_max_tp_streams - 1 <
minstrel_mcs_groups[sample_group].streams ||
sample_dur >= minstrel_get_duration(mi->max_prob_rate))) {
if (mr->sample_skipped < 20)
@@ -1041,8 +1140,8 @@
if (!msp->is_ht)
return mac80211_minstrel.get_expected_throughput(priv_sta);
- i = mi->max_tp_rate / MCS_GROUP_RATES;
- j = mi->max_tp_rate % MCS_GROUP_RATES;
+ i = mi->max_tp_rate[0] / MCS_GROUP_RATES;
+ j = mi->max_tp_rate[0] % MCS_GROUP_RATES;
/* convert cur_tp from pkt per second in kbps */
return mi->groups[i].rates[j].cur_tp * AVG_PKT_SIZE * 8 / 1024;
diff --git a/net/mac80211/rc80211_minstrel_ht.h b/net/mac80211/rc80211_minstrel_ht.h
index d655586..01570e0 100644
--- a/net/mac80211/rc80211_minstrel_ht.h
+++ b/net/mac80211/rc80211_minstrel_ht.h
@@ -26,28 +26,6 @@
extern const struct mcs_group minstrel_mcs_groups[];
-struct minstrel_rate_stats {
- /* current / last sampling period attempts/success counters */
- unsigned int attempts, last_attempts;
- unsigned int success, last_success;
-
- /* total attempts/success counters */
- u64 att_hist, succ_hist;
-
- /* current throughput */
- unsigned int cur_tp;
-
- /* packet delivery probabilities */
- unsigned int cur_prob, probability;
-
- /* maximum retry counts */
- unsigned int retry_count;
- unsigned int retry_count_rtscts;
-
- bool retry_updated;
- u8 sample_skipped;
-};
-
struct minstrel_mcs_group_data {
u8 index;
u8 column;
@@ -55,10 +33,9 @@
/* bitfield of supported MCS rates of this group */
u8 supported;
- /* selected primary rates */
- unsigned int max_tp_rate;
- unsigned int max_tp_rate2;
- unsigned int max_prob_rate;
+ /* sorted rate set within an MCS group */
+ u8 max_group_tp_rate[MAX_THR_RATES];
+ u8 max_group_prob_rate;
/* MCS rate statistics */
struct minstrel_rate_stats rates[MCS_GROUP_RATES];
@@ -74,15 +51,9 @@
/* ampdu length (EWMA) */
unsigned int avg_ampdu_len;
- /* best throughput rate */
- unsigned int max_tp_rate;
-
- /* second best throughput rate */
- unsigned int max_tp_rate2;
-
- /* best probability rate */
- unsigned int max_prob_rate;
- unsigned int max_prob_streams;
+ /* overall sorted rate set */
+ u8 max_tp_rate[MAX_THR_RATES];
+ u8 max_prob_rate;
/* time of last status update */
unsigned long stats_update;
diff --git a/net/mac80211/rc80211_minstrel_ht_debugfs.c b/net/mac80211/rc80211_minstrel_ht_debugfs.c
index 3e7d793..a72ad46 100644
--- a/net/mac80211/rc80211_minstrel_ht_debugfs.c
+++ b/net/mac80211/rc80211_minstrel_ht_debugfs.c
@@ -46,8 +46,10 @@
else
p += sprintf(p, "HT%c0/%cGI ", htmode, gimode);
- *(p++) = (idx == mi->max_tp_rate) ? 'T' : ' ';
- *(p++) = (idx == mi->max_tp_rate2) ? 't' : ' ';
+ *(p++) = (idx == mi->max_tp_rate[0]) ? 'A' : ' ';
+ *(p++) = (idx == mi->max_tp_rate[1]) ? 'B' : ' ';
+ *(p++) = (idx == mi->max_tp_rate[2]) ? 'C' : ' ';
+ *(p++) = (idx == mi->max_tp_rate[3]) ? 'D' : ' ';
*(p++) = (idx == mi->max_prob_rate) ? 'P' : ' ';
if (i == max_mcs) {
@@ -100,8 +102,8 @@
file->private_data = ms;
p = ms->buf;
- p += sprintf(p, "type rate throughput ewma prob this prob "
- "retry this succ/attempt success attempts\n");
+ p += sprintf(p, "type rate throughput ewma prob "
+ "this prob retry this succ/attempt success attempts\n");
p = minstrel_ht_stats_dump(mi, max_mcs, p);
for (i = 0; i < max_mcs; i++)
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index a8d862f..b04ca40 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -3,6 +3,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -835,6 +836,16 @@
spin_lock(&tid_agg_rx->reorder_lock);
+ /*
+ * Offloaded BA sessions have no known starting sequence number, so pick
+ * one from the first frame received for this TID after the BA session
+ * was started.
+ */
+ if (unlikely(tid_agg_rx->auto_seq)) {
+ tid_agg_rx->auto_seq = false;
+ tid_agg_rx->ssn = mpdu_seq_num;
+ tid_agg_rx->head_seq_num = mpdu_seq_num;
+ }
+
buf_size = tid_agg_rx->buf_size;
head_seq_num = tid_agg_rx->head_seq_num;
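In other words, the first MPDU received on the TID seeds the reorder state. A minimal standalone model of that handoff (userspace C; the struct and names are simplified from the patch):

    #include <stdbool.h>
    #include <stdio.h>

    struct tid_rx {
            bool auto_seq;
            unsigned int ssn;
            unsigned int head_seq_num;
    };

    static void rx_mpdu(struct tid_rx *tid, unsigned int mpdu_seq_num)
    {
            /* first frame after an offloaded BA start seeds the state */
            if (tid->auto_seq) {
                    tid->auto_seq = false;
                    tid->ssn = mpdu_seq_num;
                    tid->head_seq_num = mpdu_seq_num;
            }
            printf("mpdu %u, head_seq_num %u\n",
                   mpdu_seq_num, tid->head_seq_num);
    }

    int main(void)
    {
            struct tid_rx tid = { .auto_seq = true };

            rx_mpdu(&tid, 117);     /* adopts SSN 117 */
            rx_mpdu(&tid, 118);
            return 0;
    }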
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
index a9bb6eb..af0d094 100644
--- a/net/mac80211/scan.c
+++ b/net/mac80211/scan.c
@@ -6,6 +6,7 @@
* Copyright 2005, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index 7300305..de494df 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -1,6 +1,7 @@
/*
* Copyright 2002-2005, Instant802 Networks, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -1822,7 +1823,7 @@
sinfo->bss_param.flags |= BSS_PARAM_FLAGS_SHORT_PREAMBLE;
if (sdata->vif.bss_conf.use_short_slot)
sinfo->bss_param.flags |= BSS_PARAM_FLAGS_SHORT_SLOT_TIME;
- sinfo->bss_param.dtim_period = sdata->local->hw.conf.ps_dtim_period;
+ sinfo->bss_param.dtim_period = sdata->vif.bss_conf.dtim_period;
sinfo->bss_param.beacon_interval = sdata->vif.bss_conf.beacon_int;
sinfo->sta_flags.set = 0;
diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
index 89c40d5..42f68cb 100644
--- a/net/mac80211/sta_info.h
+++ b/net/mac80211/sta_info.h
@@ -1,5 +1,6 @@
/*
* Copyright 2002-2005, Devicescape Software, Inc.
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -167,6 +168,8 @@
* @dialog_token: dialog token for aggregation session
* @rcu_head: RCU head used for freeing this struct
* @reorder_lock: serializes access to reorder buffer, see below.
+ * @auto_seq: used for offloaded BA sessions to automatically pick
+ * head_seq_num and ssn.
*
* This structure's lifetime is managed by RCU, assignments to
* the array holding it must hold the aggregation mutex.
@@ -190,6 +193,7 @@
u16 buf_size;
u16 timeout;
u8 dialog_token;
+ bool auto_seq;
};
/**
@@ -446,6 +450,9 @@
enum ieee80211_smps_mode known_smps_mode;
const struct ieee80211_cipher_scheme *cipher_scheme;
+ /* TDLS timeout data */
+ unsigned long last_tdls_pkt_time;
+
/* keep last! */
struct ieee80211_sta sta;
};
diff --git a/net/mac80211/status.c b/net/mac80211/status.c
index aa06dca..89290e3 100644
--- a/net/mac80211/status.c
+++ b/net/mac80211/status.c
@@ -3,6 +3,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2008-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -537,6 +538,8 @@
* - current throughput (higher value for higher tpt)?
*/
#define STA_LOST_PKT_THRESHOLD 50
+#define STA_LOST_TDLS_PKT_THRESHOLD 10
+#define STA_LOST_TDLS_PKT_TIME (10*HZ) /* 10secs since last ACK */
static void ieee80211_lost_packet(struct sta_info *sta, struct sk_buff *skb)
{
@@ -547,7 +550,20 @@
!(info->flags & IEEE80211_TX_STAT_AMPDU))
return;
- if (++sta->lost_packets < STA_LOST_PKT_THRESHOLD)
+ sta->lost_packets++;
+ if (!sta->sta.tdls && sta->lost_packets < STA_LOST_PKT_THRESHOLD)
+ return;
+
+ /*
+ * If we're in TDLS mode, make sure that all STA_LOST_TDLS_PKT_THRESHOLD
+ * of the last packets were lost, and that no ACK was received within
+ * the last STA_LOST_TDLS_PKT_TIME jiffies, before triggering the CQM
+ * packet-loss mechanism.
+ */
+ if (sta->sta.tdls &&
+ (sta->lost_packets < STA_LOST_TDLS_PKT_THRESHOLD ||
+ time_before(jiffies,
+ sta->last_tdls_pkt_time + STA_LOST_TDLS_PKT_TIME)))
return;
cfg80211_cqm_pktloss_notify(sta->sdata->dev, sta->sta.addr,
@@ -694,6 +710,10 @@
if (info->flags & IEEE80211_TX_STAT_ACK) {
if (sta->lost_packets)
sta->lost_packets = 0;
+
+ /* Track when last TDLS packet was ACKed */
+ if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
+ sta->last_tdls_pkt_time = jiffies;
} else {
ieee80211_lost_packet(sta, skb);
}
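The TDLS path thus requires both a higher loss count and a quiet period since the last ACK before raising the CQM event. A standalone sketch of that double condition (userspace C on a millisecond clock standing in for jiffies; constants mirror the patch, the rest is invented):

    #include <stdbool.h>
    #include <stdio.h>

    #define LOST_TDLS_PKT_THRESHOLD 10
    #define LOST_TDLS_PKT_TIME_MS 10000     /* 10 s since last ACK */

    struct peer {
            unsigned int lost_packets;
            unsigned long last_ack_ms;
    };

    static bool tdls_lost_packet(struct peer *p, unsigned long now_ms)
    {
            p->lost_packets++;
            if (p->lost_packets < LOST_TDLS_PKT_THRESHOLD)
                    return false;
            /* still inside the quiet window since the last ACK? */
            if (now_ms < p->last_ack_ms + LOST_TDLS_PKT_TIME_MS)
                    return false;
            p->lost_packets = 0;    /* reported; start counting again */
            return true;
    }

    int main(void)
    {
            struct peer p = { .lost_packets = 0, .last_ack_ms = 1000 };
            unsigned long now;

            for (now = 2000; now <= 16000; now += 1000)
                    if (tdls_lost_packet(&p, now))
                            printf("CQM loss event at t=%lu ms\n", now);
            return 0;
    }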
diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
index f2cb3b6..4ea25de 100644
--- a/net/mac80211/tdls.c
+++ b/net/mac80211/tdls.c
@@ -3,6 +3,7 @@
*
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2014, Intel Corporation
+ * Copyright 2014 Intel Mobile Communications GmbH
*
* This file is GPLv2 as found in COPYING.
*/
@@ -411,6 +412,9 @@
tf->ether_type = cpu_to_be16(ETH_P_TDLS);
tf->payload_type = WLAN_TDLS_SNAP_RFTYPE;
+ /* network header is after the ethernet header */
+ skb_set_network_header(skb, ETH_HLEN);
+
switch (action_code) {
case WLAN_TDLS_SETUP_REQUEST:
tf->category = WLAN_CATEGORY_TDLS;
diff --git a/net/mac80211/trace.h b/net/mac80211/trace.h
index 02ac535..38fae7e 100644
--- a/net/mac80211/trace.h
+++ b/net/mac80211/trace.h
@@ -672,13 +672,13 @@
);
TRACE_EVENT(drv_set_coverage_class,
- TP_PROTO(struct ieee80211_local *local, u8 value),
+ TP_PROTO(struct ieee80211_local *local, s16 value),
TP_ARGS(local, value),
TP_STRUCT__entry(
LOCAL_ENTRY
- __field(u8, value)
+ __field(s16, value)
),
TP_fast_assign(
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 925c39f..900632a2 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -3,6 +3,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -1788,9 +1789,8 @@
* @skb: packet to be sent
* @dev: incoming interface
*
- * Returns: 0 on success (and frees skb in this case) or 1 on failure (skb will
- * not be freed, and caller is responsible for either retrying later or freeing
- * skb).
+ * Returns: NETDEV_TX_OK both on success and on failure. On failure skb will
+ * be freed.
*
* This function takes in an Ethernet header and encapsulates it with suitable
* IEEE 802.11 header based on which interface the packet is coming in. The
@@ -2072,30 +2072,23 @@
if (unlikely(!multicast && skb->sk &&
skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS)) {
- struct sk_buff *orig_skb = skb;
+ struct sk_buff *ack_skb = skb_clone_sk(skb);
- skb = skb_clone(skb, GFP_ATOMIC);
- if (skb) {
+ if (ack_skb) {
unsigned long flags;
int id;
spin_lock_irqsave(&local->ack_status_lock, flags);
- id = idr_alloc(&local->ack_status_frames, orig_skb,
+ id = idr_alloc(&local->ack_status_frames, ack_skb,
1, 0x10000, GFP_ATOMIC);
spin_unlock_irqrestore(&local->ack_status_lock, flags);
if (id >= 0) {
info_id = id;
info_flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
- } else if (skb_shared(skb)) {
- kfree_skb(orig_skb);
} else {
- kfree_skb(skb);
- skb = orig_skb;
+ kfree_skb(ack_skb);
}
- } else {
- /* couldn't clone -- lose tx status ... */
- skb = orig_skb;
}
}
diff --git a/net/mac80211/util.c b/net/mac80211/util.c
index 725af7a..3c61060 100644
--- a/net/mac80211/util.c
+++ b/net/mac80211/util.c
@@ -3,6 +3,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -1014,6 +1015,31 @@
}
elems->pwr_constr_elem = pos;
break;
+ case WLAN_EID_CISCO_VENDOR_SPECIFIC:
+ /* Lots of different options exist, but we only care
+ * about the Dynamic Transmit Power Control element.
+ * First check for the Cisco OUI, then for the DTPC
+ * tag (0x00).
+ */
+ if (elen < 4) {
+ elem_parse_failed = true;
+ break;
+ }
+
+ if (pos[0] != 0x00 || pos[1] != 0x40 ||
+ pos[2] != 0x96 || pos[3] != 0x00)
+ break;
+
+ if (elen != 6) {
+ elem_parse_failed = true;
+ break;
+ }
+
+ if (calc_crc)
+ crc = crc32_be(crc, pos - 2, elen + 2);
+
+ elems->cisco_dtpc_elem = pos;
+ break;
case WLAN_EID_TIMEOUT_INTERVAL:
if (elen >= sizeof(struct ieee80211_timeout_interval_ie))
elems->timeout_int = (void *)pos;
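The DTPC branch above validates length and OUI before recording the element. A standalone parser sketch (userspace C; the assumption that byte 4 carries the power level in dBm comes from the matching mlme.c change, and the sample buffer is invented):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Cisco OUI 00:40:96, DTPC type 0x00, fixed 6-byte payload */
    static bool parse_cisco_dtpc(const uint8_t *pos, size_t elen,
                                 int8_t *pwr_level)
    {
            if (elen != 6)
                    return false;
            if (pos[0] != 0x00 || pos[1] != 0x40 ||
                pos[2] != 0x96 || pos[3] != 0x00)
                    return false;
            *pwr_level = (int8_t)pos[4];
            return true;
    }

    int main(void)
    {
            /* OUI, type, power 17 dBm, one reserved byte (invented) */
            const uint8_t ie[] = { 0x00, 0x40, 0x96, 0x00, 17, 0x00 };
            int8_t pwr;

            if (parse_cisco_dtpc(ie, sizeof(ie), &pwr))
                    printf("DTPC power level: %d dBm\n", pwr);
            return 0;
    }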
diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c
index 6459946..3b87398 100644
--- a/net/mac80211/wme.c
+++ b/net/mac80211/wme.c
@@ -1,5 +1,6 @@
/*
* Copyright 2004, Instant802 Networks, Inc.
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
index f7d4ca4..983527a 100644
--- a/net/mac80211/wpa.c
+++ b/net/mac80211/wpa.c
@@ -64,8 +64,11 @@
if (!info->control.hw_key)
tail += IEEE80211_TKIP_ICV_LEN;
- if (WARN_ON(skb_tailroom(skb) < tail ||
- skb_headroom(skb) < IEEE80211_TKIP_IV_LEN))
+ if (WARN(skb_tailroom(skb) < tail ||
+ skb_headroom(skb) < IEEE80211_TKIP_IV_LEN,
+ "mmic: not enough head/tail (%d/%d,%d/%d)\n",
+ skb_headroom(skb), IEEE80211_TKIP_IV_LEN,
+ skb_tailroom(skb), tail))
return TX_DROP;
key = &tx->key->conf.key[NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY];
diff --git a/net/mpls/mpls_gso.c b/net/mpls/mpls_gso.c
index 6b38d08..e28ed2e 100644
--- a/net/mpls/mpls_gso.c
+++ b/net/mpls/mpls_gso.c
@@ -65,15 +65,9 @@
return segs;
}
-static int mpls_gso_send_check(struct sk_buff *skb)
-{
- return 0;
-}
-
static struct packet_offload mpls_mc_offload = {
.type = cpu_to_be16(ETH_P_MPLS_MC),
.callbacks = {
- .gso_send_check = mpls_gso_send_check,
.gso_segment = mpls_gso_segment,
},
};
@@ -81,7 +75,6 @@
static struct packet_offload mpls_uc_offload = {
.type = cpu_to_be16(ETH_P_MPLS_UC),
.callbacks = {
- .gso_send_check = mpls_gso_send_check,
.gso_segment = mpls_gso_segment,
},
};
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
index 5231652..6932a42 100644
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2007-2013 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public
@@ -35,11 +35,78 @@
#include <net/sctp/checksum.h>
#include "datapath.h"
+#include "flow.h"
#include "vport.h"
static int do_execute_actions(struct datapath *dp, struct sk_buff *skb,
+ struct sw_flow_key *key,
const struct nlattr *attr, int len);
+struct deferred_action {
+ struct sk_buff *skb;
+ const struct nlattr *actions;
+
+ /* Store pkt_key clone when creating deferred action. */
+ struct sw_flow_key pkt_key;
+};
+
+#define DEFERRED_ACTION_FIFO_SIZE 10
+struct action_fifo {
+ int head;
+ int tail;
+ /* Deferred action fifo queue storage. */
+ struct deferred_action fifo[DEFERRED_ACTION_FIFO_SIZE];
+};
+
+static struct action_fifo __percpu *action_fifos;
+static DEFINE_PER_CPU(int, exec_actions_level);
+
+static void action_fifo_init(struct action_fifo *fifo)
+{
+ fifo->head = 0;
+ fifo->tail = 0;
+}
+
+static bool action_fifo_is_empty(struct action_fifo *fifo)
+{
+ return (fifo->head == fifo->tail);
+}
+
+static struct deferred_action *action_fifo_get(struct action_fifo *fifo)
+{
+ if (action_fifo_is_empty(fifo))
+ return NULL;
+
+ return &fifo->fifo[fifo->tail++];
+}
+
+static struct deferred_action *action_fifo_put(struct action_fifo *fifo)
+{
+ if (fifo->head >= DEFERRED_ACTION_FIFO_SIZE - 1)
+ return NULL;
+
+ return &fifo->fifo[fifo->head++];
+}
+
+/* Return the deferred action entry if the fifo is not full, else NULL */
+static struct deferred_action *add_deferred_actions(struct sk_buff *skb,
+ struct sw_flow_key *key,
+ const struct nlattr *attr)
+{
+ struct action_fifo *fifo;
+ struct deferred_action *da;
+
+ fifo = this_cpu_ptr(action_fifos);
+ da = action_fifo_put(fifo);
+ if (da) {
+ da->skb = skb;
+ da->actions = attr;
+ da->pkt_key = *key;
+ }
+
+ return da;
+}
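add_deferred_actions() and the helpers above implement a small per-CPU ring that is filled while actions execute and drained afterwards. A standalone model of the same head/tail discipline (userspace C; int payloads stand in for the skb/key pairs):

    #include <stdio.h>

    #define FIFO_SIZE 10

    struct fifo {
            int head;
            int tail;
            int slot[FIFO_SIZE];
    };

    static void fifo_init(struct fifo *f)
    {
            f->head = f->tail = 0;
    }

    static int fifo_is_empty(const struct fifo *f)
    {
            return f->head == f->tail;
    }

    /* mirrors action_fifo_put(): NULL when the ring is full */
    static int *fifo_put(struct fifo *f)
    {
            if (f->head >= FIFO_SIZE - 1)
                    return NULL;
            return &f->slot[f->head++];
    }

    static int *fifo_get(struct fifo *f)
    {
            if (fifo_is_empty(f))
                    return NULL;
            return &f->slot[f->tail++];
    }

    int main(void)
    {
            struct fifo f;
            int *p, i;

            fifo_init(&f);
            for (i = 0; i < 12; i++) {      /* the last three are refused */
                    p = fifo_put(&f);
                    if (!p) {
                            printf("put %d dropped, fifo full\n", i);
                            continue;
                    }
                    *p = i;
            }
            while ((p = fifo_get(&f)))
                    printf("deferred item %d\n", *p);
            fifo_init(&f);                  /* reset for the next packet */
            return 0;
    }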
+
static int make_writable(struct sk_buff *skb, int write_len)
{
if (!pskb_may_pull(skb, write_len))
@@ -410,16 +477,14 @@
}
static int output_userspace(struct datapath *dp, struct sk_buff *skb,
- const struct nlattr *attr)
+ struct sw_flow_key *key, const struct nlattr *attr)
{
struct dp_upcall_info upcall;
const struct nlattr *a;
int rem;
- BUG_ON(!OVS_CB(skb)->pkt_key);
-
upcall.cmd = OVS_PACKET_CMD_ACTION;
- upcall.key = OVS_CB(skb)->pkt_key;
+ upcall.key = key;
upcall.userdata = NULL;
upcall.portid = 0;
@@ -445,11 +510,10 @@
}
static int sample(struct datapath *dp, struct sk_buff *skb,
- const struct nlattr *attr)
+ struct sw_flow_key *key, const struct nlattr *attr)
{
const struct nlattr *acts_list = NULL;
const struct nlattr *a;
- struct sk_buff *sample_skb;
int rem;
for (a = nla_data(attr), rem = nla_len(attr); rem > 0;
@@ -469,31 +533,47 @@
rem = nla_len(acts_list);
a = nla_data(acts_list);
- /* Actions list is either empty or only contains a single user-space
- * action, the latter being a special case as it is the only known
- * usage of the sample action.
- * In these special cases don't clone the skb as there are no
- * side-effects in the nested actions.
- * Otherwise, clone in case the nested actions have side effects.
- */
- if (likely(rem == 0 || (nla_type(a) == OVS_ACTION_ATTR_USERSPACE &&
- last_action(a, rem)))) {
- sample_skb = skb;
- skb_get(skb);
- } else {
- sample_skb = skb_clone(skb, GFP_ATOMIC);
- if (!sample_skb) /* Skip sample action when out of memory. */
- return 0;
- }
+ /* Actions list is empty, do nothing */
+ if (unlikely(!rem))
+ return 0;
- /* Note that do_execute_actions() never consumes skb.
- * In the case where skb has been cloned above it is the clone that
- * is consumed. Otherwise the skb_get(skb) call prevents
- * consumption by do_execute_actions(). Thus, it is safe to simply
- * return the error code and let the caller (also
- * do_execute_actions()) free skb on error.
+ /* The only known usage of the sample action is having a single
+ * user-space action; treat this usage as a special case.
+ * output_userspace() should clone the skb to be sent to user
+ * space; this skb will be consumed by its caller.
*/
- return do_execute_actions(dp, sample_skb, a, rem);
+ if (likely(nla_type(a) == OVS_ACTION_ATTR_USERSPACE &&
+ last_action(a, rem)))
+ return output_userspace(dp, skb, key, a);
+
+ skb = skb_clone(skb, GFP_ATOMIC);
+ if (!skb)
+ /* Skip the sample action when out of memory. */
+ return 0;
+
+ if (!add_deferred_actions(skb, key, a)) {
+ if (net_ratelimit())
+ pr_warn("%s: deferred actions limit reached, dropping sample action\n",
+ ovs_dp_name(dp));
+
+ kfree_skb(skb);
+ }
+ return 0;
+}
+
+static void execute_hash(struct sk_buff *skb, struct sw_flow_key *key,
+ const struct nlattr *attr)
+{
+ struct ovs_action_hash *hash_act = nla_data(attr);
+ u32 hash = 0;
+
+ /* OVS_HASH_ALG_L4 is the only possible hash algorithm. */
+ hash = skb_get_hash(skb);
+ hash = jhash_1word(hash, hash_act->hash_basis);
+ if (!hash)
+ hash = 0x1;
+
+ key->ovs_flow_hash = hash;
}
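execute_hash() mixes the skb flow hash with the action's basis and reserves zero to mean "no hash computed". A standalone sketch of the same semantics (userspace C; mix32() is a stand-in mixer, not the kernel's jhash_1word()):

    #include <stdint.h>
    #include <stdio.h>

    /* stand-in 32-bit mixer; the kernel uses jhash_1word() here */
    static uint32_t mix32(uint32_t h, uint32_t basis)
    {
            h ^= basis;
            h *= 0x9e3779b1u;
            h ^= h >> 16;
            return h;
    }

    static uint32_t execute_hash(uint32_t skb_hash, uint32_t hash_basis)
    {
            uint32_t hash = mix32(skb_hash, hash_basis);

            if (!hash)
                    hash = 0x1;     /* zero is reserved for "no hash" */
            return hash;
    }

    int main(void)
    {
            printf("0x%08x\n", (unsigned)execute_hash(0xdeadbeef, 42));
            printf("0x%08x\n", (unsigned)execute_hash(0, 0)); /* never 0 */
            return 0;
    }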
static int execute_set_action(struct sk_buff *skb,
@@ -511,7 +591,7 @@
break;
case OVS_KEY_ATTR_IPV4_TUNNEL:
- OVS_CB(skb)->tun_key = nla_data(nested_attr);
+ OVS_CB(skb)->egress_tun_key = nla_data(nested_attr);
break;
case OVS_KEY_ATTR_ETHERNET:
@@ -542,8 +622,47 @@
return err;
}
+static int execute_recirc(struct datapath *dp, struct sk_buff *skb,
+ struct sw_flow_key *key,
+ const struct nlattr *a, int rem)
+{
+ struct deferred_action *da;
+ int err;
+
+ err = ovs_flow_key_update(skb, key);
+ if (err)
+ return err;
+
+ if (!last_action(a, rem)) {
+ /* The recirc action is not the last action
+ * of the action list, so clone the skb.
+ */
+ skb = skb_clone(skb, GFP_ATOMIC);
+
+ /* Skip the recirc action when out of memory, but
+ * continue on with the rest of the action list.
+ */
+ if (!skb)
+ return 0;
+ }
+
+ da = add_deferred_actions(skb, key, NULL);
+ if (da) {
+ da->pkt_key.recirc_id = nla_get_u32(a);
+ } else {
+ kfree_skb(skb);
+
+ if (net_ratelimit())
+ pr_warn("%s: deferred action limit reached, drop recirc action\n",
+ ovs_dp_name(dp));
+ }
+
+ return 0;
+}
+
/* Execute a list of actions against 'skb'. */
static int do_execute_actions(struct datapath *dp, struct sk_buff *skb,
+ struct sw_flow_key *key,
const struct nlattr *attr, int len)
{
/* Every output action needs a separate clone of 'skb', but the common
@@ -569,7 +688,11 @@
break;
case OVS_ACTION_ATTR_USERSPACE:
- output_userspace(dp, skb, a);
+ output_userspace(dp, skb, key, a);
+ break;
+
+ case OVS_ACTION_ATTR_HASH:
+ execute_hash(skb, key, a);
break;
case OVS_ACTION_ATTR_PUSH_VLAN:
@@ -582,12 +705,23 @@
err = pop_vlan(skb);
break;
+ case OVS_ACTION_ATTR_RECIRC:
+ err = execute_recirc(dp, skb, key, a, rem);
+ if (last_action(a, rem)) {
+ /* If this is the last action, the skb has
+ * been consumed or freed.
+ * Return immediately.
+ */
+ return err;
+ }
+ break;
+
case OVS_ACTION_ATTR_SET:
err = execute_set_action(skb, nla_data(a));
break;
case OVS_ACTION_ATTR_SAMPLE:
- err = sample(dp, skb, a);
+ err = sample(dp, skb, key, a);
if (unlikely(err)) /* skb already freed. */
return err;
break;
@@ -607,11 +741,63 @@
return 0;
}
-/* Execute a list of actions against 'skb'. */
-int ovs_execute_actions(struct datapath *dp, struct sk_buff *skb)
+static void process_deferred_actions(struct datapath *dp)
{
- struct sw_flow_actions *acts = rcu_dereference(OVS_CB(skb)->flow->sf_acts);
+ struct action_fifo *fifo = this_cpu_ptr(action_fifos);
- OVS_CB(skb)->tun_key = NULL;
- return do_execute_actions(dp, skb, acts->actions, acts->actions_len);
+ /* Do not touch the FIFO if there are no deferred actions. */
+ if (action_fifo_is_empty(fifo))
+ return;
+
+ /* Finish executing all deferred actions. */
+ do {
+ struct deferred_action *da = action_fifo_get(fifo);
+ struct sk_buff *skb = da->skb;
+ struct sw_flow_key *key = &da->pkt_key;
+ const struct nlattr *actions = da->actions;
+
+ if (actions)
+ do_execute_actions(dp, skb, key, actions,
+ nla_len(actions));
+ else
+ ovs_dp_process_packet(skb, key);
+ } while (!action_fifo_is_empty(fifo));
+
+ /* Reset FIFO for the next packet. */
+ action_fifo_init(fifo);
+}
+
+/* Execute a list of actions against 'skb'. */
+int ovs_execute_actions(struct datapath *dp, struct sk_buff *skb,
+ struct sw_flow_key *key)
+{
+ int level = this_cpu_read(exec_actions_level);
+ struct sw_flow_actions *acts;
+ int err;
+
+ acts = rcu_dereference(OVS_CB(skb)->flow->sf_acts);
+
+ this_cpu_inc(exec_actions_level);
+ err = do_execute_actions(dp, skb, key,
+ acts->actions, acts->actions_len);
+
+ if (!level)
+ process_deferred_actions(dp);
+
+ this_cpu_dec(exec_actions_level);
+ return err;
+}
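The exec_actions_level counter ensures that only the outermost invocation drains the deferred FIFO, bounding recursion. A standalone model of that guard (userspace C with thread-local storage standing in for per-CPU data; the recursion here is only an analogy for re-entry through the deferred path):

    #include <stdio.h>

    static _Thread_local int exec_actions_level;

    static void process_deferred_actions(void)
    {
            printf("draining deferred-action fifo\n");
    }

    static void execute_actions(int depth)
    {
            int level = exec_actions_level;

            exec_actions_level++;
            printf("executing actions at level %d\n", level);
            if (depth > 0)
                    execute_actions(depth - 1);     /* nested entry */

            if (!level)     /* only the outermost invocation drains */
                    process_deferred_actions();
            exec_actions_level--;
    }

    int main(void)
    {
            execute_actions(2);
            return 0;
    }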
+
+int action_fifos_init(void)
+{
+ action_fifos = alloc_percpu(struct action_fifo);
+ if (!action_fifos)
+ return -ENOMEM;
+
+ return 0;
+}
+
+void action_fifos_exit(void)
+{
+ free_percpu(action_fifos);
}
diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 91d66b7..9e3a2fa 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -78,11 +78,12 @@
/* Check if need to build a reply message.
* OVS userspace sets the NLM_F_ECHO flag if it needs the reply. */
-static bool ovs_must_notify(struct genl_info *info,
- const struct genl_multicast_group *grp)
+static bool ovs_must_notify(struct genl_family *family, struct genl_info *info,
+ unsigned int group)
{
return info->nlhdr->nlmsg_flags & NLM_F_ECHO ||
- netlink_has_listeners(genl_info_net(info)->genl_sock, 0);
+ genl_has_listeners(family, genl_info_net(info)->genl_sock,
+ group);
}
static void ovs_notify(struct genl_family *family,
@@ -156,7 +157,7 @@
}
/* Must be called with rcu_read_lock or ovs_mutex. */
-static const char *ovs_dp_name(const struct datapath *dp)
+const char *ovs_dp_name(const struct datapath *dp)
{
struct vport *vport = ovs_vport_ovsl_rcu(dp, OVSP_LOCAL);
return vport->ops->get_name(vport);
@@ -237,32 +238,25 @@
}
/* Must be called with rcu_read_lock. */
-void ovs_dp_process_received_packet(struct vport *p, struct sk_buff *skb)
+void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key)
{
+ const struct vport *p = OVS_CB(skb)->input_vport;
struct datapath *dp = p->dp;
struct sw_flow *flow;
struct dp_stats_percpu *stats;
- struct sw_flow_key key;
u64 *stats_counter;
u32 n_mask_hit;
- int error;
stats = this_cpu_ptr(dp->stats_percpu);
- /* Extract flow from 'skb' into 'key'. */
- error = ovs_flow_extract(skb, p->port_no, &key);
- if (unlikely(error)) {
- kfree_skb(skb);
- return;
- }
-
/* Look up flow. */
- flow = ovs_flow_tbl_lookup_stats(&dp->table, &key, &n_mask_hit);
+ flow = ovs_flow_tbl_lookup_stats(&dp->table, key, &n_mask_hit);
if (unlikely(!flow)) {
struct dp_upcall_info upcall;
+ int error;
upcall.cmd = OVS_PACKET_CMD_MISS;
- upcall.key = &key;
+ upcall.key = key;
upcall.userdata = NULL;
upcall.portid = ovs_vport_find_upcall_portid(p, skb);
error = ovs_dp_upcall(dp, skb, &upcall);
@@ -275,10 +269,9 @@
}
OVS_CB(skb)->flow = flow;
- OVS_CB(skb)->pkt_key = &key;
- ovs_flow_stats_update(OVS_CB(skb)->flow, key.tp.flags, skb);
- ovs_execute_actions(dp, skb);
+ ovs_flow_stats_update(OVS_CB(skb)->flow, key->tp.flags, skb);
+ ovs_execute_actions(dp, skb, key);
stats_counter = &stats->n_hit;
out:
@@ -515,6 +508,7 @@
struct sw_flow *flow;
struct datapath *dp;
struct ethhdr *eth;
+ struct vport *input_vport;
int len;
int err;
@@ -549,13 +543,11 @@
if (IS_ERR(flow))
goto err_kfree_skb;
- err = ovs_flow_extract(packet, -1, &flow->key);
+ err = ovs_flow_key_extract_userspace(a[OVS_PACKET_ATTR_KEY], packet,
+ &flow->key);
if (err)
goto err_flow_free;
- err = ovs_nla_get_flow_metadata(flow, a[OVS_PACKET_ATTR_KEY]);
- if (err)
- goto err_flow_free;
acts = ovs_nla_alloc_flow_actions(nla_len(a[OVS_PACKET_ATTR_ACTIONS]));
err = PTR_ERR(acts);
if (IS_ERR(acts))
@@ -568,7 +560,6 @@
goto err_flow_free;
OVS_CB(packet)->flow = flow;
- OVS_CB(packet)->pkt_key = &flow->key;
packet->priority = flow->key.phy.priority;
packet->mark = flow->key.phy.skb_mark;
@@ -578,8 +569,17 @@
if (!dp)
goto err_unlock;
+ input_vport = ovs_vport_rcu(dp, flow->key.phy.in_port);
+ if (!input_vport)
+ input_vport = ovs_vport_rcu(dp, OVSP_LOCAL);
+
+ if (!input_vport)
+ goto err_unlock;
+
+ OVS_CB(packet)->input_vport = input_vport;
+
local_bh_disable();
- err = ovs_execute_actions(dp, packet);
+ err = ovs_execute_actions(dp, packet, &flow->key);
local_bh_enable();
rcu_read_unlock();
@@ -763,7 +763,7 @@
{
struct sk_buff *skb;
- if (!always && !ovs_must_notify(info, &ovs_dp_flow_multicast_group))
+ if (!always && !ovs_must_notify(&dp_flow_genl_family, info, 0))
return NULL;
skb = genlmsg_new_unicast(ovs_flow_cmd_msg_size(acts), info, GFP_KERNEL);
@@ -2066,10 +2066,14 @@
pr_info("Open vSwitch switching datapath\n");
- err = ovs_internal_dev_rtnl_link_register();
+ err = action_fifos_init();
if (err)
goto error;
+ err = ovs_internal_dev_rtnl_link_register();
+ if (err)
+ goto error_action_fifos_exit;
+
err = ovs_flow_init();
if (err)
goto error_unreg_rtnl_link;
@@ -2102,6 +2106,8 @@
ovs_flow_exit();
error_unreg_rtnl_link:
ovs_internal_dev_rtnl_link_unregister();
+error_action_fifos_exit:
+ action_fifos_exit();
error:
return err;
}
@@ -2115,6 +2121,7 @@
ovs_vport_exit();
ovs_flow_exit();
ovs_internal_dev_rtnl_link_unregister();
+ action_fifos_exit();
}
module_init(dp_init);
diff --git a/net/openvswitch/datapath.h b/net/openvswitch/datapath.h
index 701b573..ac3f3df 100644
--- a/net/openvswitch/datapath.h
+++ b/net/openvswitch/datapath.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2007-2012 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public
@@ -95,14 +95,15 @@
/**
* struct ovs_skb_cb - OVS data in skb CB
* @flow: The flow associated with this packet. May be %NULL if no flow.
- * @pkt_key: The flow information extracted from the packet. Must be nonnull.
- * @tun_key: Key for the tunnel that encapsulated this packet. NULL if the
- * packet is not being tunneled.
+ * @egress_tun_key: Tunnel information about this packet on the egress path.
+ * NULL if the packet is not being tunneled.
+ * @input_vport: The original vport the packet came in on. This value is
+ * cached when a packet is received by OVS.
*/
struct ovs_skb_cb {
struct sw_flow *flow;
- struct sw_flow_key *pkt_key;
- struct ovs_key_ipv4_tunnel *tun_key;
+ struct vport *input_vport;
+ struct ovs_key_ipv4_tunnel *egress_tun_key;
};
#define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb)
@@ -183,17 +184,23 @@
extern struct notifier_block ovs_dp_device_notifier;
extern struct genl_family dp_vport_genl_family;
-void ovs_dp_process_received_packet(struct vport *, struct sk_buff *);
+void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key);
void ovs_dp_detach_port(struct vport *);
int ovs_dp_upcall(struct datapath *, struct sk_buff *,
const struct dp_upcall_info *);
+const char *ovs_dp_name(const struct datapath *dp);
struct sk_buff *ovs_vport_cmd_build_info(struct vport *, u32 pid, u32 seq,
u8 cmd);
-int ovs_execute_actions(struct datapath *dp, struct sk_buff *skb);
+int ovs_execute_actions(struct datapath *dp, struct sk_buff *skb,
+ struct sw_flow_key *);
+
void ovs_dp_notify_wq(struct work_struct *work);
+int action_fifos_init(void);
+void action_fifos_exit(void);
+
#define OVS_NLERR(fmt, ...) \
do { \
if (net_ratelimit()) \
diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
index 7064da9..4010423 100644
--- a/net/openvswitch/flow.c
+++ b/net/openvswitch/flow.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2007-2013 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public
@@ -16,8 +16,6 @@
* 02110-1301, USA
*/
-#include "flow.h"
-#include "datapath.h"
#include <linux/uaccess.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
@@ -46,6 +44,10 @@
#include <net/ipv6.h>
#include <net/ndisc.h>
+#include "datapath.h"
+#include "flow.h"
+#include "flow_netlink.h"
+
u64 ovs_flow_used_time(unsigned long flow_jiffies)
{
struct timespec cur_ts;
@@ -420,10 +422,9 @@
}
/**
- * ovs_flow_extract - extracts a flow key from an Ethernet frame.
+ * key_extract - extracts a flow key from an Ethernet frame.
* @skb: sk_buff that contains the frame, with skb->data pointing to the
* Ethernet header
- * @in_port: port number on which @skb was received.
* @key: output flow key
*
* The caller must ensure that skb->len >= ETH_HLEN.
@@ -442,19 +443,11 @@
* of a correct length, otherwise the same as skb->network_header.
* For other key->eth.type values it is left untouched.
*/
-int ovs_flow_extract(struct sk_buff *skb, u16 in_port, struct sw_flow_key *key)
+static int key_extract(struct sk_buff *skb, struct sw_flow_key *key)
{
int error;
struct ethhdr *eth;
- memset(key, 0, sizeof(*key));
-
- key->phy.priority = skb->priority;
- if (OVS_CB(skb)->tun_key)
- memcpy(&key->tun_key, OVS_CB(skb)->tun_key, sizeof(key->tun_key));
- key->phy.in_port = in_port;
- key->phy.skb_mark = skb->mark;
-
skb_reset_mac_header(skb);
/* Link layer. We are guaranteed to have at least the 14 byte Ethernet
@@ -610,6 +603,40 @@
}
}
}
-
return 0;
}
+
+int ovs_flow_key_update(struct sk_buff *skb, struct sw_flow_key *key)
+{
+ return key_extract(skb, key);
+}
+
+int ovs_flow_key_extract(struct ovs_key_ipv4_tunnel *tun_key,
+ struct sk_buff *skb, struct sw_flow_key *key)
+{
+ /* Extract metadata from packet. */
+ memset(key, 0, sizeof(*key));
+ if (tun_key)
+ memcpy(&key->tun_key, tun_key, sizeof(key->tun_key));
+
+ key->phy.priority = skb->priority;
+ key->phy.in_port = OVS_CB(skb)->input_vport->port_no;
+ key->phy.skb_mark = skb->mark;
+
+ return key_extract(skb, key);
+}
+
+int ovs_flow_key_extract_userspace(const struct nlattr *attr,
+ struct sk_buff *skb,
+ struct sw_flow_key *key)
+{
+ int err;
+
+ memset(key, 0, sizeof(*key));
+ /* Extract metadata from netlink attributes. */
+ err = ovs_nla_get_flow_metadata(attr, key);
+ if (err)
+ return err;
+
+ return key_extract(skb, key);
+}
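The refactor above leaves one packet parser, key_extract(), behind two wrappers that differ only in where the metadata comes from. A standalone outline of that split (userspace C; all types and parsing reduced to stubs):

    #include <stdio.h>
    #include <string.h>

    struct flow_key {
            unsigned int priority, in_port, skb_mark;
            unsigned int eth_type;  /* stands in for all parsed fields */
    };

    struct packet {
            unsigned int priority, mark, in_port, eth_type;
    };

    /* shared parser; the real key_extract() walks the headers */
    static int key_extract(const struct packet *pkt, struct flow_key *key)
    {
            key->eth_type = pkt->eth_type;
            return 0;
    }

    /* receive path: metadata comes from the packet itself */
    static int flow_key_extract(const struct packet *pkt,
                                struct flow_key *key)
    {
            memset(key, 0, sizeof(*key));
            key->priority = pkt->priority;
            key->in_port = pkt->in_port;
            key->skb_mark = pkt->mark;
            return key_extract(pkt, key);
    }

    /* userspace path: metadata comes from netlink attributes */
    static int flow_key_extract_userspace(const struct packet *pkt,
                                          unsigned int nl_in_port,
                                          struct flow_key *key)
    {
            memset(key, 0, sizeof(*key));
            key->in_port = nl_in_port;
            return key_extract(pkt, key);
    }

    int main(void)
    {
            struct packet pkt = { .priority = 1, .mark = 2, .in_port = 3,
                                  .eth_type = 0x0800 };
            struct flow_key k;

            flow_key_extract(&pkt, &k);
            printf("kernel path: port %u, type 0x%x\n", k.in_port,
                   k.eth_type);
            flow_key_extract_userspace(&pkt, 7, &k);
            printf("userspace path: port %u, type 0x%x\n", k.in_port,
                   k.eth_type);
            return 0;
    }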
diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h
index 5e5aaed..0f5db4e 100644
--- a/net/openvswitch/flow.h
+++ b/net/openvswitch/flow.h
@@ -72,6 +72,8 @@
u32 skb_mark; /* SKB mark. */
u16 in_port; /* Input switch port (or DP_MAX_PORTS). */
} __packed phy; /* Safe when right after 'tun_key'. */
+ u32 ovs_flow_hash; /* Datapath computed hash value. */
+ u32 recirc_id; /* Recirculation ID. */
struct {
u8 src[ETH_ALEN]; /* Ethernet source address. */
u8 dst[ETH_ALEN]; /* Ethernet destination address. */
@@ -187,6 +189,12 @@
void ovs_flow_stats_clear(struct sw_flow *);
u64 ovs_flow_used_time(unsigned long flow_jiffies);
-int ovs_flow_extract(struct sk_buff *, u16 in_port, struct sw_flow_key *);
+int ovs_flow_key_update(struct sk_buff *skb, struct sw_flow_key *key);
+int ovs_flow_key_extract(struct ovs_key_ipv4_tunnel *tun_key,
+ struct sk_buff *skb, struct sw_flow_key *key);
+/* Extract key from packet coming from userspace. */
+int ovs_flow_key_extract_userspace(const struct nlattr *attr,
+ struct sk_buff *skb,
+ struct sw_flow_key *key);
#endif /* flow.h */
diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
index d757848..f4c8daa 100644
--- a/net/openvswitch/flow_netlink.c
+++ b/net/openvswitch/flow_netlink.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2007-2013 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public
@@ -251,6 +251,8 @@
[OVS_KEY_ATTR_ICMPV6] = sizeof(struct ovs_key_icmpv6),
[OVS_KEY_ATTR_ARP] = sizeof(struct ovs_key_arp),
[OVS_KEY_ATTR_ND] = sizeof(struct ovs_key_nd),
+ [OVS_KEY_ATTR_RECIRC_ID] = sizeof(u32),
+ [OVS_KEY_ATTR_DP_HASH] = sizeof(u32),
[OVS_KEY_ATTR_TUNNEL] = -1,
};
@@ -454,6 +456,20 @@
static int metadata_from_nlattrs(struct sw_flow_match *match, u64 *attrs,
const struct nlattr **a, bool is_mask)
{
+ if (*attrs & (1 << OVS_KEY_ATTR_DP_HASH)) {
+ u32 hash_val = nla_get_u32(a[OVS_KEY_ATTR_DP_HASH]);
+
+ SW_FLOW_KEY_PUT(match, ovs_flow_hash, hash_val, is_mask);
+ *attrs &= ~(1 << OVS_KEY_ATTR_DP_HASH);
+ }
+
+ if (*attrs & (1 << OVS_KEY_ATTR_RECIRC_ID)) {
+ u32 recirc_id = nla_get_u32(a[OVS_KEY_ATTR_RECIRC_ID]);
+
+ SW_FLOW_KEY_PUT(match, recirc_id, recirc_id, is_mask);
+ *attrs &= ~(1 << OVS_KEY_ATTR_RECIRC_ID);
+ }
+
if (*attrs & (1 << OVS_KEY_ATTR_PRIORITY)) {
SW_FLOW_KEY_PUT(match, phy.priority,
nla_get_u32(a[OVS_KEY_ATTR_PRIORITY]), is_mask);
@@ -836,7 +852,7 @@
/**
* ovs_nla_get_flow_metadata - parses Netlink attributes into a flow key.
- * @flow: Receives extracted in_port, priority, tun_key and skb_mark.
+ * @key: Receives extracted in_port, priority, tun_key and skb_mark.
* @attr: Netlink attribute holding nested %OVS_KEY_ATTR_* Netlink attribute
* sequence.
*
@@ -846,32 +862,24 @@
* extracted from the packet itself.
*/
-int ovs_nla_get_flow_metadata(struct sw_flow *flow,
- const struct nlattr *attr)
+int ovs_nla_get_flow_metadata(const struct nlattr *attr,
+ struct sw_flow_key *key)
{
- struct ovs_key_ipv4_tunnel *tun_key = &flow->key.tun_key;
const struct nlattr *a[OVS_KEY_ATTR_MAX + 1];
+ struct sw_flow_match match;
u64 attrs = 0;
int err;
- struct sw_flow_match match;
-
- flow->key.phy.in_port = DP_MAX_PORTS;
- flow->key.phy.priority = 0;
- flow->key.phy.skb_mark = 0;
- memset(tun_key, 0, sizeof(flow->key.tun_key));
err = parse_flow_nlattrs(attr, a, &attrs);
if (err)
return -EINVAL;
memset(&match, 0, sizeof(match));
- match.key = &flow->key;
+ match.key = key;
- err = metadata_from_nlattrs(&match, &attrs, a, false);
- if (err)
- return err;
+ key->phy.in_port = DP_MAX_PORTS;
- return 0;
+ return metadata_from_nlattrs(&match, &attrs, a, false);
}
int ovs_nla_put_flow(const struct sw_flow_key *swkey,
@@ -881,6 +889,12 @@
struct nlattr *nla, *encap;
bool is_mask = (swkey != output);
+ if (nla_put_u32(skb, OVS_KEY_ATTR_RECIRC_ID, output->recirc_id))
+ goto nla_put_failure;
+
+ if (nla_put_u32(skb, OVS_KEY_ATTR_DP_HASH, output->ovs_flow_hash))
+ goto nla_put_failure;
+
if (nla_put_u32(skb, OVS_KEY_ATTR_PRIORITY, output->phy.priority))
goto nla_put_failure;
@@ -1409,11 +1423,13 @@
/* Expected argument lengths, (u32)-1 for variable length. */
static const u32 action_lens[OVS_ACTION_ATTR_MAX + 1] = {
[OVS_ACTION_ATTR_OUTPUT] = sizeof(u32),
+ [OVS_ACTION_ATTR_RECIRC] = sizeof(u32),
[OVS_ACTION_ATTR_USERSPACE] = (u32)-1,
[OVS_ACTION_ATTR_PUSH_VLAN] = sizeof(struct ovs_action_push_vlan),
[OVS_ACTION_ATTR_POP_VLAN] = 0,
[OVS_ACTION_ATTR_SET] = (u32)-1,
- [OVS_ACTION_ATTR_SAMPLE] = (u32)-1
+ [OVS_ACTION_ATTR_SAMPLE] = (u32)-1,
+ [OVS_ACTION_ATTR_HASH] = sizeof(struct ovs_action_hash)
};
const struct ovs_action_push_vlan *vlan;
int type = nla_type(a);
@@ -1440,6 +1456,18 @@
return -EINVAL;
break;
+ case OVS_ACTION_ATTR_HASH: {
+ const struct ovs_action_hash *act_hash = nla_data(a);
+
+ switch (act_hash->hash_alg) {
+ case OVS_HASH_ALG_L4:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ break;
+ }
case OVS_ACTION_ATTR_POP_VLAN:
break;
@@ -1452,6 +1480,9 @@
return -EINVAL;
break;
+ case OVS_ACTION_ATTR_RECIRC:
+ break;
+
case OVS_ACTION_ATTR_SET:
err = validate_set(a, key, sfa, &skip_copy);
if (err)
diff --git a/net/openvswitch/flow_netlink.h b/net/openvswitch/flow_netlink.h
index 4401510..206e45a 100644
--- a/net/openvswitch/flow_netlink.h
+++ b/net/openvswitch/flow_netlink.h
@@ -42,8 +42,8 @@
int ovs_nla_put_flow(const struct sw_flow_key *,
const struct sw_flow_key *, struct sk_buff *);
-int ovs_nla_get_flow_metadata(struct sw_flow *flow,
- const struct nlattr *attr);
+int ovs_nla_get_flow_metadata(const struct nlattr *, struct sw_flow_key *);
+
int ovs_nla_get_match(struct sw_flow_match *match,
const struct nlattr *,
const struct nlattr *);
diff --git a/net/openvswitch/vport-gre.c b/net/openvswitch/vport-gre.c
index f49148a..309cca6 100644
--- a/net/openvswitch/vport-gre.c
+++ b/net/openvswitch/vport-gre.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2007-2013 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public
@@ -63,7 +63,7 @@
static struct sk_buff *__build_header(struct sk_buff *skb,
int tunnel_hlen)
{
- const struct ovs_key_ipv4_tunnel *tun_key = OVS_CB(skb)->tun_key;
+ const struct ovs_key_ipv4_tunnel *tun_key = OVS_CB(skb)->egress_tun_key;
struct tnl_ptk_info tpi;
skb = gre_handle_offloads(skb, !!(tun_key->tun_flags & TUNNEL_CSUM));
@@ -129,6 +129,7 @@
static int gre_tnl_send(struct vport *vport, struct sk_buff *skb)
{
struct net *net = ovs_dp_get_net(vport->dp);
+ struct ovs_key_ipv4_tunnel *tun_key;
struct flowi4 fl;
struct rtable *rt;
int min_headroom;
@@ -136,16 +137,17 @@
__be16 df;
int err;
- if (unlikely(!OVS_CB(skb)->tun_key)) {
+ if (unlikely(!OVS_CB(skb)->egress_tun_key)) {
err = -EINVAL;
goto error;
}
+ tun_key = OVS_CB(skb)->egress_tun_key;
/* Route lookup */
memset(&fl, 0, sizeof(fl));
- fl.daddr = OVS_CB(skb)->tun_key->ipv4_dst;
- fl.saddr = OVS_CB(skb)->tun_key->ipv4_src;
- fl.flowi4_tos = RT_TOS(OVS_CB(skb)->tun_key->ipv4_tos);
+ fl.daddr = tun_key->ipv4_dst;
+ fl.saddr = tun_key->ipv4_src;
+ fl.flowi4_tos = RT_TOS(tun_key->ipv4_tos);
fl.flowi4_mark = skb->mark;
fl.flowi4_proto = IPPROTO_GRE;
@@ -153,7 +155,7 @@
if (IS_ERR(rt))
return PTR_ERR(rt);
- tunnel_hlen = ip_gre_calc_hlen(OVS_CB(skb)->tun_key->tun_flags);
+ tunnel_hlen = ip_gre_calc_hlen(tun_key->tun_flags);
min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ tunnel_hlen + sizeof(struct iphdr)
@@ -185,15 +187,14 @@
goto err_free_rt;
}
- df = OVS_CB(skb)->tun_key->tun_flags & TUNNEL_DONT_FRAGMENT ?
+ df = tun_key->tun_flags & TUNNEL_DONT_FRAGMENT ?
htons(IP_DF) : 0;
skb->ignore_df = 1;
return iptunnel_xmit(skb->sk, rt, skb, fl.saddr,
- OVS_CB(skb)->tun_key->ipv4_dst, IPPROTO_GRE,
- OVS_CB(skb)->tun_key->ipv4_tos,
- OVS_CB(skb)->tun_key->ipv4_ttl, df, false);
+ tun_key->ipv4_dst, IPPROTO_GRE,
+ tun_key->ipv4_tos, tun_key->ipv4_ttl, df, false);
err_free_rt:
ip_rt_put(rt);
error:
diff --git a/net/openvswitch/vport-vxlan.c b/net/openvswitch/vport-vxlan.c
index d8b7e24..f19539b 100644
--- a/net/openvswitch/vport-vxlan.c
+++ b/net/openvswitch/vport-vxlan.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013 Nicira, Inc.
+ * Copyright (c) 2014 Nicira, Inc.
* Copyright (c) 2013 Cisco Systems, Inc.
*
* This program is free software; you can redistribute it and/or
@@ -140,22 +140,24 @@
struct net *net = ovs_dp_get_net(vport->dp);
struct vxlan_port *vxlan_port = vxlan_vport(vport);
__be16 dst_port = inet_sk(vxlan_port->vs->sock->sk)->inet_sport;
+ struct ovs_key_ipv4_tunnel *tun_key;
struct rtable *rt;
struct flowi4 fl;
__be16 src_port;
__be16 df;
int err;
- if (unlikely(!OVS_CB(skb)->tun_key)) {
+ if (unlikely(!OVS_CB(skb)->egress_tun_key)) {
err = -EINVAL;
goto error;
}
+ tun_key = OVS_CB(skb)->egress_tun_key;
/* Route lookup */
memset(&fl, 0, sizeof(fl));
- fl.daddr = OVS_CB(skb)->tun_key->ipv4_dst;
- fl.saddr = OVS_CB(skb)->tun_key->ipv4_src;
- fl.flowi4_tos = RT_TOS(OVS_CB(skb)->tun_key->ipv4_tos);
+ fl.daddr = tun_key->ipv4_dst;
+ fl.saddr = tun_key->ipv4_src;
+ fl.flowi4_tos = RT_TOS(tun_key->ipv4_tos);
fl.flowi4_mark = skb->mark;
fl.flowi4_proto = IPPROTO_UDP;
@@ -165,7 +167,7 @@
goto error;
}
- df = OVS_CB(skb)->tun_key->tun_flags & TUNNEL_DONT_FRAGMENT ?
+ df = tun_key->tun_flags & TUNNEL_DONT_FRAGMENT ?
htons(IP_DF) : 0;
skb->ignore_df = 1;
@@ -173,11 +175,10 @@
src_port = udp_flow_src_port(net, skb, 0, 0, true);
err = vxlan_xmit_skb(vxlan_port->vs, rt, skb,
- fl.saddr, OVS_CB(skb)->tun_key->ipv4_dst,
- OVS_CB(skb)->tun_key->ipv4_tos,
- OVS_CB(skb)->tun_key->ipv4_ttl, df,
+ fl.saddr, tun_key->ipv4_dst,
+ tun_key->ipv4_tos, tun_key->ipv4_ttl, df,
src_port, dst_port,
- htonl(be64_to_cpu(OVS_CB(skb)->tun_key->tun_id) << 8),
+ htonl(be64_to_cpu(tun_key->tun_id) << 8),
false);
if (err < 0)
ip_rt_put(rt);
diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
index f7e63f9..5df8377 100644
--- a/net/openvswitch/vport.c
+++ b/net/openvswitch/vport.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2007-2012 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public
@@ -435,6 +435,8 @@
struct ovs_key_ipv4_tunnel *tun_key)
{
struct pcpu_sw_netstats *stats;
+ struct sw_flow_key key;
+ int error;
stats = this_cpu_ptr(vport->percpu_stats);
u64_stats_update_begin(&stats->syncp);
@@ -442,8 +444,15 @@
stats->rx_bytes += skb->len;
u64_stats_update_end(&stats->syncp);
- OVS_CB(skb)->tun_key = tun_key;
- ovs_dp_process_received_packet(vport, skb);
+ OVS_CB(skb)->input_vport = vport;
+ OVS_CB(skb)->egress_tun_key = NULL;
+ /* Extract flow from 'skb' into 'key'. */
+ error = ovs_flow_key_extract(tun_key, skb, &key);
+ if (unlikely(error)) {
+ kfree_skb(skb);
+ return;
+ }
+ ovs_dp_process_packet(skb, &key);
}
/**
diff --git a/net/openvswitch/vport.h b/net/openvswitch/vport.h
index 0d95b9f..0efd62f 100644
--- a/net/openvswitch/vport.h
+++ b/net/openvswitch/vport.h
@@ -35,7 +35,6 @@
/* The following definitions are for users of the vport subsytem: */
-/* The following definitions are for users of the vport subsytem: */
struct vport_net {
struct vport __rcu *gre_vport;
};
diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
index 02a86a2..0f62326 100644
--- a/net/rfkill/rfkill-gpio.c
+++ b/net/rfkill/rfkill-gpio.c
@@ -54,7 +54,7 @@
if (blocked && !IS_ERR(rfkill->clk) && rfkill->clk_enabled)
clk_disable(rfkill->clk);
- rfkill->clk_enabled = blocked;
+ rfkill->clk_enabled = !blocked;
return 0;
}
@@ -163,6 +163,7 @@
{ "LNV4752", RFKILL_TYPE_GPS },
{ },
};
+MODULE_DEVICE_TABLE(acpi, rfkill_acpi_match);
#endif
static struct platform_driver rfkill_gpio_driver = {
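The one-line rfkill-gpio change fixes inverted bookkeeping: clk_enabled is meant to record whether the clock is currently running, which is the opposite of 'blocked'. The old assignment stored the blocked state itself, so the guards above it tested stale state on the next call and the clock was never toggled correctly. In sketch form, with the clk framework calls elided:

    static bool clk_enabled;        /* tracks the clock, not the radio */

    static void set_blocked_sketch(bool blocked)
    {
            if (!blocked && !clk_enabled)
                    ;       /* clk_enable() would run here */
            if (blocked && clk_enabled)
                    ;       /* clk_disable() would run here */

            clk_enabled = !blocked; /* was '= blocked' before the fix */
    }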
diff --git a/net/rxrpc/ar-key.c b/net/rxrpc/ar-key.c
index b45d080..1b24191 100644
--- a/net/rxrpc/ar-key.c
+++ b/net/rxrpc/ar-key.c
@@ -1143,7 +1143,7 @@
if (copy_to_user(xdr, (s), _l) != 0) \
goto fault; \
if (_l & 3 && \
- copy_to_user((u8 *)xdr + _l, &zero, 4 - (_l & 3)) != 0) \
+ copy_to_user((u8 __user *)xdr + _l, &zero, 4 - (_l & 3)) != 0) \
goto fault; \
xdr += (_l + 3) >> 2; \
} while(0)
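The rxrpc hunk is a sparse fix rather than a behavioural one: 'xdr' points into a user buffer, so pointer arithmetic on it must keep the __user address-space tag for copy_to_user() to type-check. A standalone sketch of the padding logic, with a hypothetical helper name:

    #include <linux/uaccess.h>

    /* Zero-pad a user-space XDR buffer to a 32-bit boundary after
     * 'len' bytes have been written. Hypothetical helper.
     */
    static int xdr_pad_user(__be32 __user *xdr, size_t len)
    {
            static const u8 zero[4];

            if ((len & 3) &&
                copy_to_user((u8 __user *)xdr + len, zero, 4 - (len & 3)))
                    return -EFAULT;
            return 0;
    }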
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index c28b0d3..77147c8 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -117,7 +117,6 @@
{
struct net *net = sock_net(skb->sk);
struct nlattr *tca[TCA_MAX + 1];
- spinlock_t *root_lock;
struct tcmsg *t;
u32 protocol;
u32 prio;
@@ -125,7 +124,8 @@
u32 parent;
struct net_device *dev;
struct Qdisc *q;
- struct tcf_proto **back, **chain;
+ struct tcf_proto __rcu **back;
+ struct tcf_proto __rcu **chain;
struct tcf_proto *tp;
const struct tcf_proto_ops *tp_ops;
const struct Qdisc_class_ops *cops;
@@ -197,7 +197,9 @@
goto errout;
/* Check the chain for existence of proto-tcf with this priority */
- for (back = chain; (tp = *back) != NULL; back = &tp->next) {
+ for (back = chain;
+ (tp = rtnl_dereference(*back)) != NULL;
+ back = &tp->next) {
if (tp->prio >= prio) {
if (tp->prio == prio) {
if (!nprio ||
@@ -209,8 +211,6 @@
}
}
- root_lock = qdisc_root_sleeping_lock(q);
-
if (tp == NULL) {
/* Proto-tcf does not exist, create new one */
@@ -259,7 +259,8 @@
}
tp->ops = tp_ops;
tp->protocol = protocol;
- tp->prio = nprio ? : TC_H_MAJ(tcf_auto_prio(*back));
+ tp->prio = nprio ? :
+ TC_H_MAJ(tcf_auto_prio(rtnl_dereference(*back)));
tp->q = q;
tp->classify = tp_ops->classify;
tp->classid = parent;
@@ -280,9 +281,9 @@
if (fh == 0) {
if (n->nlmsg_type == RTM_DELTFILTER && t->tcm_handle == 0) {
- spin_lock_bh(root_lock);
- *back = tp->next;
- spin_unlock_bh(root_lock);
+ struct tcf_proto *next = rtnl_dereference(tp->next);
+
+ RCU_INIT_POINTER(*back, next);
tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER);
tcf_destroy(tp);
@@ -322,10 +323,8 @@
n->nlmsg_flags & NLM_F_CREATE ? TCA_ACT_NOREPLACE : TCA_ACT_REPLACE);
if (err == 0) {
if (tp_created) {
- spin_lock_bh(root_lock);
- tp->next = *back;
- *back = tp;
- spin_unlock_bh(root_lock);
+ RCU_INIT_POINTER(tp->next, rtnl_dereference(*back));
+ rcu_assign_pointer(*back, tp);
}
tfilter_notify(net, skb, n, tp, fh, RTM_NEWTFILTER);
} else {
@@ -420,7 +419,7 @@
int s_t;
struct net_device *dev;
struct Qdisc *q;
- struct tcf_proto *tp, **chain;
+ struct tcf_proto *tp, __rcu **chain;
struct tcmsg *tcm = nlmsg_data(cb->nlh);
unsigned long cl = 0;
const struct Qdisc_class_ops *cops;
@@ -454,7 +453,8 @@
s_t = cb->args[0];
- for (tp = *chain, t = 0; tp; tp = tp->next, t++) {
+ for (tp = rtnl_dereference(*chain), t = 0;
+ tp; tp = rtnl_dereference(tp->next), t++) {
if (t < s_t)
continue;
if (TC_H_MAJ(tcm->tcm_info) &&
@@ -496,7 +496,7 @@
return skb->len;
}
-void tcf_exts_destroy(struct tcf_proto *tp, struct tcf_exts *exts)
+void tcf_exts_destroy(struct tcf_exts *exts)
{
#ifdef CONFIG_NET_CLS_ACT
tcf_action_destroy(&exts->actions, TCA_ACT_UNBIND);
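The cls_api conversion sets the pattern every classifier below follows: filter chains become __rcu pointers, writers run under RTNL and use rtnl_dereference() together with rcu_assign_pointer()/RCU_INIT_POINTER(), and the qdisc root lock drops out of the control path entirely. A condensed sketch of the writer-side unlink, with simplified types:

    #include <linux/rtnetlink.h>
    #include <linux/rcupdate.h>

    struct item {
            struct item __rcu *next;
    };

    /* Walk the chain via the link slots so the predecessor's pointer
     * can be redirected in a single store visible to RCU readers.
     */
    static void unlink_item(struct item __rcu **chain, struct item *victim)
    {
            struct item __rcu **back = chain;
            struct item *it;

            ASSERT_RTNL();
            for (it = rtnl_dereference(*back); it;
                 back = &it->next, it = rtnl_dereference(*back)) {
                    if (it == victim) {
                            RCU_INIT_POINTER(*back,
                                             rtnl_dereference(victim->next));
                            break;
                    }
            }
            /* 'victim' is reclaimed after a grace period, e.g. call_rcu() */
    }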
diff --git a/net/sched/cls_basic.c b/net/sched/cls_basic.c
index 0ae1813..fe20826 100644
--- a/net/sched/cls_basic.c
+++ b/net/sched/cls_basic.c
@@ -24,6 +24,7 @@
struct basic_head {
u32 hgenerator;
struct list_head flist;
+ struct rcu_head rcu;
};
struct basic_filter {
@@ -31,17 +32,19 @@
struct tcf_exts exts;
struct tcf_ematch_tree ematches;
struct tcf_result res;
+ struct tcf_proto *tp;
struct list_head link;
+ struct rcu_head rcu;
};
static int basic_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
int r;
- struct basic_head *head = tp->root;
+ struct basic_head *head = rcu_dereference_bh(tp->root);
struct basic_filter *f;
- list_for_each_entry(f, &head->flist, link) {
+ list_for_each_entry_rcu(f, &head->flist, link) {
if (!tcf_em_tree_match(skb, &f->ematches, NULL))
continue;
*res = f->res;
@@ -56,7 +59,7 @@
static unsigned long basic_get(struct tcf_proto *tp, u32 handle)
{
unsigned long l = 0UL;
- struct basic_head *head = tp->root;
+ struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *f;
if (head == NULL)
@@ -81,41 +84,43 @@
if (head == NULL)
return -ENOBUFS;
INIT_LIST_HEAD(&head->flist);
- tp->root = head;
+ rcu_assign_pointer(tp->root, head);
return 0;
}
-static void basic_delete_filter(struct tcf_proto *tp, struct basic_filter *f)
+static void basic_delete_filter(struct rcu_head *head)
{
+ struct basic_filter *f = container_of(head, struct basic_filter, rcu);
+ struct tcf_proto *tp = f->tp;
+
tcf_unbind_filter(tp, &f->res);
- tcf_exts_destroy(tp, &f->exts);
+ tcf_exts_destroy(&f->exts);
tcf_em_tree_destroy(tp, &f->ematches);
kfree(f);
}
static void basic_destroy(struct tcf_proto *tp)
{
- struct basic_head *head = tp->root;
+ struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *f, *n;
list_for_each_entry_safe(f, n, &head->flist, link) {
- list_del(&f->link);
- basic_delete_filter(tp, f);
+ list_del_rcu(&f->link);
+ call_rcu(&f->rcu, basic_delete_filter);
}
- kfree(head);
+ RCU_INIT_POINTER(tp->root, NULL);
+ kfree_rcu(head, rcu);
}
static int basic_delete(struct tcf_proto *tp, unsigned long arg)
{
- struct basic_head *head = tp->root;
+ struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *t, *f = (struct basic_filter *) arg;
list_for_each_entry(t, &head->flist, link)
if (t == f) {
- tcf_tree_lock(tp);
- list_del(&t->link);
- tcf_tree_unlock(tp);
- basic_delete_filter(tp, t);
+ list_del_rcu(&t->link);
+ call_rcu(&t->rcu, basic_delete_filter);
return 0;
}
@@ -152,10 +157,11 @@
tcf_exts_change(tp, &f->exts, &e);
tcf_em_tree_change(tp, &f->ematches, &t);
+ f->tp = tp;
return 0;
errout:
- tcf_exts_destroy(tp, &e);
+ tcf_exts_destroy(&e);
return err;
}
@@ -164,9 +170,10 @@
struct nlattr **tca, unsigned long *arg, bool ovr)
{
int err;
- struct basic_head *head = tp->root;
+ struct basic_head *head = rtnl_dereference(tp->root);
struct nlattr *tb[TCA_BASIC_MAX + 1];
- struct basic_filter *f = (struct basic_filter *) *arg;
+ struct basic_filter *fold = (struct basic_filter *) *arg;
+ struct basic_filter *fnew;
if (tca[TCA_OPTIONS] == NULL)
return -EINVAL;
@@ -176,22 +183,23 @@
if (err < 0)
return err;
- if (f != NULL) {
- if (handle && f->handle != handle)
+ if (fold != NULL) {
+ if (handle && fold->handle != handle)
return -EINVAL;
- return basic_set_parms(net, tp, f, base, tb, tca[TCA_RATE], ovr);
}
err = -ENOBUFS;
- f = kzalloc(sizeof(*f), GFP_KERNEL);
- if (f == NULL)
+ fnew = kzalloc(sizeof(*fnew), GFP_KERNEL);
+ if (fnew == NULL)
goto errout;
- tcf_exts_init(&f->exts, TCA_BASIC_ACT, TCA_BASIC_POLICE);
+ tcf_exts_init(&fnew->exts, TCA_BASIC_ACT, TCA_BASIC_POLICE);
err = -EINVAL;
- if (handle)
- f->handle = handle;
- else {
+ if (handle) {
+ fnew->handle = handle;
+ } else if (fold) {
+ fnew->handle = fold->handle;
+ } else {
unsigned int i = 0x80000000;
do {
if (++head->hgenerator == 0x7FFFFFFF)
@@ -203,29 +211,31 @@
goto errout;
}
- f->handle = head->hgenerator;
+ fnew->handle = head->hgenerator;
}
- err = basic_set_parms(net, tp, f, base, tb, tca[TCA_RATE], ovr);
+ err = basic_set_parms(net, tp, fnew, base, tb, tca[TCA_RATE], ovr);
if (err < 0)
goto errout;
- tcf_tree_lock(tp);
- list_add(&f->link, &head->flist);
- tcf_tree_unlock(tp);
- *arg = (unsigned long) f;
+ *arg = (unsigned long)fnew;
+
+ if (fold) {
+ list_replace_rcu(&fold->link, &fnew->link);
+ call_rcu(&fold->rcu, basic_delete_filter);
+ } else {
+ list_add_rcu(&fnew->link, &head->flist);
+ }
return 0;
errout:
- if (*arg == 0UL && f)
- kfree(f);
-
+ kfree(fnew);
return err;
}
static void basic_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct basic_head *head = tp->root;
+ struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *f;
list_for_each_entry(f, &head->flist, link) {
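cls_basic shows the freeing half of the conversion: each filter grows an rcu_head plus a back-pointer to its tcf_proto so that unbinding and destruction can run from RCU callback context once readers have drained. The idiom, reduced to generic names:

    #include <linux/list.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct filt {
            struct list_head link;
            struct rcu_head rcu;
    };

    static void filt_free_rcu(struct rcu_head *head)
    {
            struct filt *f = container_of(head, struct filt, rcu);

            /* per-filter teardown (unbind, exts destroy) goes here */
            kfree(f);
    }

    static void filt_delete(struct filt *f)
    {
            list_del_rcu(&f->link); /* readers may still traverse 'f' */
            call_rcu(&f->rcu, filt_free_rcu);
    }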
diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 0e30d58..4318d06 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -27,6 +27,7 @@
struct cls_bpf_head {
struct list_head plist;
u32 hgen;
+ struct rcu_head rcu;
};
struct cls_bpf_prog {
@@ -37,6 +38,8 @@
struct list_head link;
u32 handle;
u16 bpf_len;
+ struct tcf_proto *tp;
+ struct rcu_head rcu;
};
static const struct nla_policy bpf_policy[TCA_BPF_MAX + 1] = {
@@ -49,11 +52,11 @@
static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct cls_bpf_head *head = tp->root;
+ struct cls_bpf_head *head = rcu_dereference_bh(tp->root);
struct cls_bpf_prog *prog;
int ret;
- list_for_each_entry(prog, &head->plist, link) {
+ list_for_each_entry_rcu(prog, &head->plist, link) {
int filter_res = BPF_PROG_RUN(prog->filter, skb);
if (filter_res == 0)
@@ -81,8 +84,8 @@
if (head == NULL)
return -ENOBUFS;
- INIT_LIST_HEAD(&head->plist);
- tp->root = head;
+ INIT_LIST_HEAD_RCU(&head->plist);
+ rcu_assign_pointer(tp->root, head);
return 0;
}
@@ -90,7 +93,7 @@
static void cls_bpf_delete_prog(struct tcf_proto *tp, struct cls_bpf_prog *prog)
{
tcf_unbind_filter(tp, &prog->res);
- tcf_exts_destroy(tp, &prog->exts);
+ tcf_exts_destroy(&prog->exts);
bpf_prog_destroy(prog->filter);
@@ -98,18 +101,22 @@
kfree(prog);
}
+static void __cls_bpf_delete_prog(struct rcu_head *rcu)
+{
+ struct cls_bpf_prog *prog = container_of(rcu, struct cls_bpf_prog, rcu);
+
+ cls_bpf_delete_prog(prog->tp, prog);
+}
+
static int cls_bpf_delete(struct tcf_proto *tp, unsigned long arg)
{
- struct cls_bpf_head *head = tp->root;
+ struct cls_bpf_head *head = rtnl_dereference(tp->root);
struct cls_bpf_prog *prog, *todel = (struct cls_bpf_prog *) arg;
list_for_each_entry(prog, &head->plist, link) {
if (prog == todel) {
- tcf_tree_lock(tp);
- list_del(&prog->link);
- tcf_tree_unlock(tp);
-
- cls_bpf_delete_prog(tp, prog);
+ list_del_rcu(&prog->link);
+ call_rcu(&prog->rcu, __cls_bpf_delete_prog);
return 0;
}
}
@@ -119,27 +126,28 @@
static void cls_bpf_destroy(struct tcf_proto *tp)
{
- struct cls_bpf_head *head = tp->root;
+ struct cls_bpf_head *head = rtnl_dereference(tp->root);
struct cls_bpf_prog *prog, *tmp;
list_for_each_entry_safe(prog, tmp, &head->plist, link) {
- list_del(&prog->link);
- cls_bpf_delete_prog(tp, prog);
+ list_del_rcu(&prog->link);
+ call_rcu(&prog->rcu, __cls_bpf_delete_prog);
}
- kfree(head);
+ RCU_INIT_POINTER(tp->root, NULL);
+ kfree_rcu(head, rcu);
}
static unsigned long cls_bpf_get(struct tcf_proto *tp, u32 handle)
{
- struct cls_bpf_head *head = tp->root;
+ struct cls_bpf_head *head = rtnl_dereference(tp->root);
struct cls_bpf_prog *prog;
unsigned long ret = 0UL;
if (head == NULL)
return 0UL;
- list_for_each_entry(prog, &head->plist, link) {
+ list_for_each_entry_rcu(prog, &head->plist, link) {
if (prog->handle == handle) {
ret = (unsigned long) prog;
break;
@@ -158,10 +166,10 @@
unsigned long base, struct nlattr **tb,
struct nlattr *est, bool ovr)
{
- struct sock_filter *bpf_ops, *bpf_old;
+ struct sock_filter *bpf_ops;
struct tcf_exts exts;
struct sock_fprog_kern tmp;
- struct bpf_prog *fp, *fp_old;
+ struct bpf_prog *fp;
u16 bpf_size, bpf_len;
u32 classid;
int ret;
@@ -197,30 +205,19 @@
if (ret)
goto errout_free;
- tcf_tree_lock(tp);
- fp_old = prog->filter;
- bpf_old = prog->bpf_ops;
-
prog->bpf_len = bpf_len;
prog->bpf_ops = bpf_ops;
prog->filter = fp;
prog->res.classid = classid;
- tcf_tree_unlock(tp);
tcf_bind_filter(tp, &prog->res, base);
tcf_exts_change(tp, &prog->exts, &exts);
- if (fp_old)
- bpf_prog_destroy(fp_old);
- if (bpf_old)
- kfree(bpf_old);
-
return 0;
-
errout_free:
kfree(bpf_ops);
errout:
- tcf_exts_destroy(tp, &exts);
+ tcf_exts_destroy(&exts);
return ret;
}
@@ -244,9 +241,10 @@
u32 handle, struct nlattr **tca,
unsigned long *arg, bool ovr)
{
- struct cls_bpf_head *head = tp->root;
- struct cls_bpf_prog *prog = (struct cls_bpf_prog *) *arg;
+ struct cls_bpf_head *head = rtnl_dereference(tp->root);
+ struct cls_bpf_prog *oldprog = (struct cls_bpf_prog *) *arg;
struct nlattr *tb[TCA_BPF_MAX + 1];
+ struct cls_bpf_prog *prog;
int ret;
if (tca[TCA_OPTIONS] == NULL)
@@ -256,18 +254,19 @@
if (ret < 0)
return ret;
- if (prog != NULL) {
- if (handle && prog->handle != handle)
- return -EINVAL;
- return cls_bpf_modify_existing(net, tp, prog, base, tb,
- tca[TCA_RATE], ovr);
- }
-
prog = kzalloc(sizeof(*prog), GFP_KERNEL);
- if (prog == NULL)
+ if (!prog)
return -ENOBUFS;
tcf_exts_init(&prog->exts, TCA_BPF_ACT, TCA_BPF_POLICE);
+
+ if (oldprog) {
+ if (handle && oldprog->handle != handle) {
+ ret = -EINVAL;
+ goto errout;
+ }
+ }
+
if (handle == 0)
prog->handle = cls_bpf_grab_new_handle(tp, head);
else
@@ -281,16 +280,17 @@
if (ret < 0)
goto errout;
- tcf_tree_lock(tp);
- list_add(&prog->link, &head->plist);
- tcf_tree_unlock(tp);
+ if (oldprog) {
+ list_replace_rcu(&oldprog->link, &prog->link);
+ call_rcu(&oldprog->rcu, __cls_bpf_delete_prog);
+ } else {
+ list_add_rcu(&prog->link, &head->plist);
+ }
*arg = (unsigned long) prog;
-
return 0;
errout:
- if (*arg == 0UL && prog)
- kfree(prog);
+ kfree(prog);
return ret;
}
@@ -339,10 +339,10 @@
static void cls_bpf_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct cls_bpf_head *head = tp->root;
+ struct cls_bpf_head *head = rtnl_dereference(tp->root);
struct cls_bpf_prog *prog;
- list_for_each_entry(prog, &head->plist, link) {
+ list_for_each_entry_rcu(prog, &head->plist, link) {
if (arg->count < arg->skip)
goto skip;
if (arg->fn(tp, (unsigned long) prog, arg) < 0) {
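On the read side, classify runs in softirq context under rcu_read_lock_bh() taken by the caller, hence the rcu_dereference_bh() flavour and the RCU list iterator in cls_bpf_classify(). Sketched with placeholder types:

    #include <linux/rcupdate.h>
    #include <linux/list.h>

    struct head_sk  { struct list_head plist; };
    struct prog_sk  { struct list_head link; u32 handle; };
    struct proto_sk { struct head_sk __rcu *root; };

    /* Lock-free lookup; safe against concurrent writers as long as
     * the caller holds rcu_read_lock_bh().
     */
    static struct prog_sk *classify_sketch(struct proto_sk *tp, u32 want)
    {
            struct head_sk *head = rcu_dereference_bh(tp->root);
            struct prog_sk *prog;

            list_for_each_entry_rcu(prog, &head->plist, link)
                    if (prog->handle == want)
                            return prog;
            return NULL;
    }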
diff --git a/net/sched/cls_cgroup.c b/net/sched/cls_cgroup.c
index cacf01b..3409f16 100644
--- a/net/sched/cls_cgroup.c
+++ b/net/sched/cls_cgroup.c
@@ -22,17 +22,17 @@
u32 handle;
struct tcf_exts exts;
struct tcf_ematch_tree ematches;
+ struct tcf_proto *tp;
+ struct rcu_head rcu;
};
static int cls_cgroup_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct cls_cgroup_head *head = tp->root;
+ struct cls_cgroup_head *head = rcu_dereference_bh(tp->root);
u32 classid;
- rcu_read_lock();
classid = task_cls_state(current)->classid;
- rcu_read_unlock();
/*
* Due to the nature of the classifier it is required to ignore all
@@ -80,13 +80,25 @@
[TCA_CGROUP_EMATCHES] = { .type = NLA_NESTED },
};
+static void cls_cgroup_destroy_rcu(struct rcu_head *root)
+{
+ struct cls_cgroup_head *head = container_of(root,
+ struct cls_cgroup_head,
+ rcu);
+
+ tcf_exts_destroy(&head->exts);
+ tcf_em_tree_destroy(head->tp, &head->ematches);
+ kfree(head);
+}
+
static int cls_cgroup_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
unsigned long *arg, bool ovr)
{
struct nlattr *tb[TCA_CGROUP_MAX + 1];
- struct cls_cgroup_head *head = tp->root;
+ struct cls_cgroup_head *head = rtnl_dereference(tp->root);
+ struct cls_cgroup_head *new;
struct tcf_ematch_tree t;
struct tcf_exts e;
int err;
@@ -94,53 +106,60 @@
if (!tca[TCA_OPTIONS])
return -EINVAL;
- if (head == NULL) {
- if (!handle)
- return -EINVAL;
+ if (!head && !handle)
+ return -EINVAL;
- head = kzalloc(sizeof(*head), GFP_KERNEL);
- if (head == NULL)
- return -ENOBUFS;
-
- tcf_exts_init(&head->exts, TCA_CGROUP_ACT, TCA_CGROUP_POLICE);
- head->handle = handle;
-
- tcf_tree_lock(tp);
- tp->root = head;
- tcf_tree_unlock(tp);
- }
-
- if (handle != head->handle)
+ if (head && handle != head->handle)
return -ENOENT;
+ new = kzalloc(sizeof(*new), GFP_KERNEL);
+ if (!new)
+ return -ENOBUFS;
+
+ tcf_exts_init(&new->exts, TCA_CGROUP_ACT, TCA_CGROUP_POLICE);
+ if (head)
+ new->handle = head->handle;
+ else
+ new->handle = handle;
+
+ new->tp = tp;
err = nla_parse_nested(tb, TCA_CGROUP_MAX, tca[TCA_OPTIONS],
cgroup_policy);
if (err < 0)
- return err;
+ goto errout;
tcf_exts_init(&e, TCA_CGROUP_ACT, TCA_CGROUP_POLICE);
err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &e, ovr);
if (err < 0)
- return err;
+ goto errout;
err = tcf_em_tree_validate(tp, tb[TCA_CGROUP_EMATCHES], &t);
- if (err < 0)
- return err;
+ if (err < 0) {
+ tcf_exts_destroy(&e);
+ goto errout;
+ }
- tcf_exts_change(tp, &head->exts, &e);
- tcf_em_tree_change(tp, &head->ematches, &t);
+ tcf_exts_change(tp, &new->exts, &e);
+ tcf_em_tree_change(tp, &new->ematches, &t);
+ rcu_assign_pointer(tp->root, new);
+ if (head)
+ call_rcu(&head->rcu, cls_cgroup_destroy_rcu);
return 0;
+errout:
+ kfree(new);
+ return err;
}
static void cls_cgroup_destroy(struct tcf_proto *tp)
{
- struct cls_cgroup_head *head = tp->root;
+ struct cls_cgroup_head *head = rtnl_dereference(tp->root);
if (head) {
- tcf_exts_destroy(tp, &head->exts);
+ tcf_exts_destroy(&head->exts);
tcf_em_tree_destroy(tp, &head->ematches);
- kfree(head);
+ RCU_INIT_POINTER(tp->root, NULL);
+ kfree_rcu(head, rcu);
}
}
@@ -151,7 +170,7 @@
static void cls_cgroup_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct cls_cgroup_head *head = tp->root;
+ struct cls_cgroup_head *head = rtnl_dereference(tp->root);
if (arg->count < arg->skip)
goto skip;
@@ -167,7 +186,7 @@
static int cls_cgroup_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
struct sk_buff *skb, struct tcmsg *t)
{
- struct cls_cgroup_head *head = tp->root;
+ struct cls_cgroup_head *head = rtnl_dereference(tp->root);
unsigned char *b = skb_tail_pointer(skb);
struct nlattr *nest;
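cls_cgroup keeps a single head object instead of a list, so its update path is pure copy-update-replace: build a fresh head, publish it with rcu_assign_pointer(), and retire the previous one through call_rcu(). In sketch form, validation elided:

    #include <linux/rtnetlink.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct cg_head  { u32 handle; struct rcu_head rcu; };
    struct cg_proto { struct cg_head __rcu *root; };

    static void cg_head_free(struct rcu_head *rcu)
    {
            kfree(container_of(rcu, struct cg_head, rcu));
    }

    static int cg_change(struct cg_proto *tp, u32 handle)
    {
            struct cg_head *old = rtnl_dereference(tp->root);
            struct cg_head *new = kzalloc(sizeof(*new), GFP_KERNEL);

            if (!new)
                    return -ENOBUFS;
            new->handle = old ? old->handle : handle;

            rcu_assign_pointer(tp->root, new);      /* readers switch over */
            if (old)
                    call_rcu(&old->rcu, cg_head_free);
            return 0;
    }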
diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
index 35be16f..f18d27f7 100644
--- a/net/sched/cls_flow.c
+++ b/net/sched/cls_flow.c
@@ -34,12 +34,14 @@
struct flow_head {
struct list_head filters;
+ struct rcu_head rcu;
};
struct flow_filter {
struct list_head list;
struct tcf_exts exts;
struct tcf_ematch_tree ematches;
+ struct tcf_proto *tp;
struct timer_list perturb_timer;
u32 perturb_period;
u32 handle;
@@ -54,6 +56,7 @@
u32 divisor;
u32 baseclass;
u32 hashrnd;
+ struct rcu_head rcu;
};
static inline u32 addr_fold(void *addr)
@@ -276,14 +279,14 @@
static int flow_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct flow_head *head = tp->root;
+ struct flow_head *head = rcu_dereference_bh(tp->root);
struct flow_filter *f;
u32 keymask;
u32 classid;
unsigned int n, key;
int r;
- list_for_each_entry(f, &head->filters, list) {
+ list_for_each_entry_rcu(f, &head->filters, list) {
u32 keys[FLOW_KEY_MAX + 1];
struct flow_keys flow_keys;
@@ -346,13 +349,23 @@
[TCA_FLOW_PERTURB] = { .type = NLA_U32 },
};
+static void flow_destroy_filter(struct rcu_head *head)
+{
+ struct flow_filter *f = container_of(head, struct flow_filter, rcu);
+
+ del_timer_sync(&f->perturb_timer);
+ tcf_exts_destroy(&f->exts);
+ tcf_em_tree_destroy(f->tp, &f->ematches);
+ kfree(f);
+}
+
static int flow_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
unsigned long *arg, bool ovr)
{
- struct flow_head *head = tp->root;
- struct flow_filter *f;
+ struct flow_head *head = rtnl_dereference(tp->root);
+ struct flow_filter *fold, *fnew;
struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_FLOW_MAX + 1];
struct tcf_exts e;
@@ -401,20 +414,39 @@
if (err < 0)
goto err1;
- f = (struct flow_filter *)*arg;
- if (f != NULL) {
+ err = -ENOBUFS;
+ fnew = kzalloc(sizeof(*fnew), GFP_KERNEL);
+ if (!fnew)
+ goto err2;
+
+ fold = (struct flow_filter *)*arg;
+ if (fold) {
err = -EINVAL;
- if (f->handle != handle && handle)
+ if (fold->handle != handle && handle)
goto err2;
- mode = f->mode;
+ /* Copy fold into fnew */
+ fnew->handle = fold->handle;
+ fnew->nkeys = fold->nkeys;
+ fnew->keymask = fold->keymask;
+ fnew->tp = fold->tp;
+ fnew->mode = fold->mode;
+ fnew->mask = fold->mask;
+ fnew->xor = fold->xor;
+ fnew->rshift = fold->rshift;
+ fnew->addend = fold->addend;
+ fnew->divisor = fold->divisor;
+ fnew->baseclass = fold->baseclass;
+ fnew->hashrnd = fold->hashrnd;
+
+ mode = fold->mode;
if (tb[TCA_FLOW_MODE])
mode = nla_get_u32(tb[TCA_FLOW_MODE]);
if (mode != FLOW_MODE_HASH && nkeys > 1)
goto err2;
if (mode == FLOW_MODE_HASH)
- perturb_period = f->perturb_period;
+ perturb_period = fold->perturb_period;
if (tb[TCA_FLOW_PERTURB]) {
if (mode != FLOW_MODE_HASH)
goto err2;
@@ -444,83 +479,70 @@
if (TC_H_MIN(baseclass) == 0)
baseclass = TC_H_MAKE(baseclass, 1);
- err = -ENOBUFS;
- f = kzalloc(sizeof(*f), GFP_KERNEL);
- if (f == NULL)
- goto err2;
-
- f->handle = handle;
- f->mask = ~0U;
- tcf_exts_init(&f->exts, TCA_FLOW_ACT, TCA_FLOW_POLICE);
-
- get_random_bytes(&f->hashrnd, 4);
- f->perturb_timer.function = flow_perturbation;
- f->perturb_timer.data = (unsigned long)f;
- init_timer_deferrable(&f->perturb_timer);
+ fnew->handle = handle;
+ fnew->mask = ~0U;
+ fnew->tp = tp;
+ get_random_bytes(&fnew->hashrnd, 4);
+ tcf_exts_init(&fnew->exts, TCA_FLOW_ACT, TCA_FLOW_POLICE);
}
- tcf_exts_change(tp, &f->exts, &e);
- tcf_em_tree_change(tp, &f->ematches, &t);
+ fnew->perturb_timer.function = flow_perturbation;
+ fnew->perturb_timer.data = (unsigned long)fnew;
+ init_timer_deferrable(&fnew->perturb_timer);
- tcf_tree_lock(tp);
+ tcf_exts_change(tp, &fnew->exts, &e);
+ tcf_em_tree_change(tp, &fnew->ematches, &t);
if (tb[TCA_FLOW_KEYS]) {
- f->keymask = keymask;
- f->nkeys = nkeys;
+ fnew->keymask = keymask;
+ fnew->nkeys = nkeys;
}
- f->mode = mode;
+ fnew->mode = mode;
if (tb[TCA_FLOW_MASK])
- f->mask = nla_get_u32(tb[TCA_FLOW_MASK]);
+ fnew->mask = nla_get_u32(tb[TCA_FLOW_MASK]);
if (tb[TCA_FLOW_XOR])
- f->xor = nla_get_u32(tb[TCA_FLOW_XOR]);
+ fnew->xor = nla_get_u32(tb[TCA_FLOW_XOR]);
if (tb[TCA_FLOW_RSHIFT])
- f->rshift = nla_get_u32(tb[TCA_FLOW_RSHIFT]);
+ fnew->rshift = nla_get_u32(tb[TCA_FLOW_RSHIFT]);
if (tb[TCA_FLOW_ADDEND])
- f->addend = nla_get_u32(tb[TCA_FLOW_ADDEND]);
+ fnew->addend = nla_get_u32(tb[TCA_FLOW_ADDEND]);
if (tb[TCA_FLOW_DIVISOR])
- f->divisor = nla_get_u32(tb[TCA_FLOW_DIVISOR]);
+ fnew->divisor = nla_get_u32(tb[TCA_FLOW_DIVISOR]);
if (baseclass)
- f->baseclass = baseclass;
+ fnew->baseclass = baseclass;
- f->perturb_period = perturb_period;
- del_timer(&f->perturb_timer);
+ fnew->perturb_period = perturb_period;
if (perturb_period)
- mod_timer(&f->perturb_timer, jiffies + perturb_period);
+ mod_timer(&fnew->perturb_timer, jiffies + perturb_period);
if (*arg == 0)
- list_add_tail(&f->list, &head->filters);
+ list_add_tail_rcu(&fnew->list, &head->filters);
+ else
+ list_replace_rcu(&fold->list, &fnew->list);
- tcf_tree_unlock(tp);
+ *arg = (unsigned long)fnew;
- *arg = (unsigned long)f;
+ if (fold)
+ call_rcu(&fold->rcu, flow_destroy_filter);
return 0;
err2:
tcf_em_tree_destroy(tp, &t);
+ kfree(fnew);
err1:
- tcf_exts_destroy(tp, &e);
+ tcf_exts_destroy(&e);
return err;
}
-static void flow_destroy_filter(struct tcf_proto *tp, struct flow_filter *f)
-{
- del_timer_sync(&f->perturb_timer);
- tcf_exts_destroy(tp, &f->exts);
- tcf_em_tree_destroy(tp, &f->ematches);
- kfree(f);
-}
-
static int flow_delete(struct tcf_proto *tp, unsigned long arg)
{
struct flow_filter *f = (struct flow_filter *)arg;
- tcf_tree_lock(tp);
- list_del(&f->list);
- tcf_tree_unlock(tp);
- flow_destroy_filter(tp, f);
+ list_del_rcu(&f->list);
+ call_rcu(&f->rcu, flow_destroy_filter);
return 0;
}
@@ -532,28 +554,29 @@
if (head == NULL)
return -ENOBUFS;
INIT_LIST_HEAD(&head->filters);
- tp->root = head;
+ rcu_assign_pointer(tp->root, head);
return 0;
}
static void flow_destroy(struct tcf_proto *tp)
{
- struct flow_head *head = tp->root;
+ struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *f, *next;
list_for_each_entry_safe(f, next, &head->filters, list) {
- list_del(&f->list);
- flow_destroy_filter(tp, f);
+ list_del_rcu(&f->list);
+ call_rcu(&f->rcu, flow_destroy_filter);
}
- kfree(head);
+ RCU_INIT_POINTER(tp->root, NULL);
+ kfree_rcu(head, rcu);
}
static unsigned long flow_get(struct tcf_proto *tp, u32 handle)
{
- struct flow_head *head = tp->root;
+ struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *f;
- list_for_each_entry(f, &head->filters, list)
+ list_for_each_entry_rcu(f, &head->filters, list)
if (f->handle == handle)
return (unsigned long)f;
return 0;
@@ -626,10 +649,10 @@
static void flow_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct flow_head *head = tp->root;
+ struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *f;
- list_for_each_entry(f, &head->filters, list) {
+ list_for_each_entry_rcu(f, &head->filters, list) {
if (arg->count < arg->skip)
goto skip;
if (arg->fn(tp, (unsigned long)f, arg) < 0) {
diff --git a/net/sched/cls_fw.c b/net/sched/cls_fw.c
index 861b03c..da805ae 100644
--- a/net/sched/cls_fw.c
+++ b/net/sched/cls_fw.c
@@ -33,17 +33,20 @@
struct fw_head {
u32 mask;
- struct fw_filter *ht[HTSIZE];
+ struct fw_filter __rcu *ht[HTSIZE];
+ struct rcu_head rcu;
};
struct fw_filter {
- struct fw_filter *next;
+ struct fw_filter __rcu *next;
u32 id;
struct tcf_result res;
#ifdef CONFIG_NET_CLS_IND
int ifindex;
#endif /* CONFIG_NET_CLS_IND */
struct tcf_exts exts;
+ struct tcf_proto *tp;
+ struct rcu_head rcu;
};
static u32 fw_hash(u32 handle)
@@ -56,14 +59,16 @@
static int fw_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rcu_dereference_bh(tp->root);
struct fw_filter *f;
int r;
u32 id = skb->mark;
if (head != NULL) {
id &= head->mask;
- for (f = head->ht[fw_hash(id)]; f; f = f->next) {
+
+ for (f = rcu_dereference_bh(head->ht[fw_hash(id)]); f;
+ f = rcu_dereference_bh(f->next)) {
if (f->id == id) {
*res = f->res;
#ifdef CONFIG_NET_CLS_IND
@@ -92,13 +97,14 @@
static unsigned long fw_get(struct tcf_proto *tp, u32 handle)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f;
if (head == NULL)
return 0;
- for (f = head->ht[fw_hash(handle)]; f; f = f->next) {
+ f = rtnl_dereference(head->ht[fw_hash(handle)]);
+ for (; f; f = rtnl_dereference(f->next)) {
if (f->id == handle)
return (unsigned long)f;
}
@@ -114,16 +120,19 @@
return 0;
}
-static void fw_delete_filter(struct tcf_proto *tp, struct fw_filter *f)
+static void fw_delete_filter(struct rcu_head *head)
{
+ struct fw_filter *f = container_of(head, struct fw_filter, rcu);
+ struct tcf_proto *tp = f->tp;
+
tcf_unbind_filter(tp, &f->res);
- tcf_exts_destroy(tp, &f->exts);
+ tcf_exts_destroy(&f->exts);
kfree(f);
}
static void fw_destroy(struct tcf_proto *tp)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f;
int h;
@@ -131,29 +140,33 @@
return;
for (h = 0; h < HTSIZE; h++) {
- while ((f = head->ht[h]) != NULL) {
- head->ht[h] = f->next;
- fw_delete_filter(tp, f);
+ while ((f = rtnl_dereference(head->ht[h])) != NULL) {
+ RCU_INIT_POINTER(head->ht[h],
+ rtnl_dereference(f->next));
+ call_rcu(&f->rcu, fw_delete_filter);
}
}
- kfree(head);
+ RCU_INIT_POINTER(tp->root, NULL);
+ kfree_rcu(head, rcu);
}
static int fw_delete(struct tcf_proto *tp, unsigned long arg)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f = (struct fw_filter *)arg;
- struct fw_filter **fp;
+ struct fw_filter __rcu **fp;
+ struct fw_filter *pfp;
if (head == NULL || f == NULL)
goto out;
- for (fp = &head->ht[fw_hash(f->id)]; *fp; fp = &(*fp)->next) {
- if (*fp == f) {
- tcf_tree_lock(tp);
- *fp = f->next;
- tcf_tree_unlock(tp);
- fw_delete_filter(tp, f);
+ fp = &head->ht[fw_hash(f->id)];
+
+ for (pfp = rtnl_dereference(*fp); pfp;
+ fp = &pfp->next, pfp = rtnl_dereference(*fp)) {
+ if (pfp == f) {
+ RCU_INIT_POINTER(*fp, rtnl_dereference(f->next));
+ call_rcu(&f->rcu, fw_delete_filter);
return 0;
}
}
@@ -171,7 +184,7 @@
fw_change_attrs(struct net *net, struct tcf_proto *tp, struct fw_filter *f,
struct nlattr **tb, struct nlattr **tca, unsigned long base, bool ovr)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
struct tcf_exts e;
u32 mask;
int err;
@@ -210,7 +223,7 @@
return 0;
errout:
- tcf_exts_destroy(tp, &e);
+ tcf_exts_destroy(&e);
return err;
}
@@ -220,7 +233,7 @@
struct nlattr **tca,
unsigned long *arg, bool ovr)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f = (struct fw_filter *) *arg;
struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_FW_MAX + 1];
@@ -233,10 +246,44 @@
if (err < 0)
return err;
- if (f != NULL) {
+ if (f) {
+ struct fw_filter *pfp, *fnew;
+ struct fw_filter __rcu **fp;
+
if (f->id != handle && handle)
return -EINVAL;
- return fw_change_attrs(net, tp, f, tb, tca, base, ovr);
+
+ fnew = kzalloc(sizeof(struct fw_filter), GFP_KERNEL);
+ if (!fnew)
+ return -ENOBUFS;
+
+ fnew->id = f->id;
+ fnew->res = f->res;
+#ifdef CONFIG_NET_CLS_IND
+ fnew->ifindex = f->ifindex;
+#endif /* CONFIG_NET_CLS_IND */
+ fnew->tp = f->tp;
+
+ tcf_exts_init(&fnew->exts, TCA_FW_ACT, TCA_FW_POLICE);
+
+ err = fw_change_attrs(net, tp, fnew, tb, tca, base, ovr);
+ if (err < 0) {
+ kfree(fnew);
+ return err;
+ }
+
+ fp = &head->ht[fw_hash(fnew->id)];
+ for (pfp = rtnl_dereference(*fp); pfp;
+ fp = &pfp->next, pfp = rtnl_dereference(*fp))
+ if (pfp == f)
+ break;
+
+ RCU_INIT_POINTER(fnew->next, rtnl_dereference(pfp->next));
+ rcu_assign_pointer(*fp, fnew);
+ call_rcu(&f->rcu, fw_delete_filter);
+
+ *arg = (unsigned long)fnew;
+ return err;
}
if (!handle)
@@ -252,9 +299,7 @@
return -ENOBUFS;
head->mask = mask;
- tcf_tree_lock(tp);
- tp->root = head;
- tcf_tree_unlock(tp);
+ rcu_assign_pointer(tp->root, head);
}
f = kzalloc(sizeof(struct fw_filter), GFP_KERNEL);
@@ -263,15 +308,14 @@
tcf_exts_init(&f->exts, TCA_FW_ACT, TCA_FW_POLICE);
f->id = handle;
+ f->tp = tp;
err = fw_change_attrs(net, tp, f, tb, tca, base, ovr);
if (err < 0)
goto errout;
- f->next = head->ht[fw_hash(handle)];
- tcf_tree_lock(tp);
- head->ht[fw_hash(handle)] = f;
- tcf_tree_unlock(tp);
+ RCU_INIT_POINTER(f->next, head->ht[fw_hash(handle)]);
+ rcu_assign_pointer(head->ht[fw_hash(handle)], f);
*arg = (unsigned long)f;
return 0;
@@ -283,7 +327,7 @@
static void fw_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
int h;
if (head == NULL)
@@ -295,7 +339,8 @@
for (h = 0; h < HTSIZE; h++) {
struct fw_filter *f;
- for (f = head->ht[h]; f; f = f->next) {
+ for (f = rtnl_dereference(head->ht[h]); f;
+ f = rtnl_dereference(f->next)) {
if (arg->count < arg->skip) {
arg->count++;
continue;
@@ -312,7 +357,7 @@
static int fw_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
struct sk_buff *skb, struct tcmsg *t)
{
- struct fw_head *head = tp->root;
+ struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f = (struct fw_filter *)fh;
unsigned char *b = skb_tail_pointer(skb);
struct nlattr *nest;
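Since published filters are immutable under this scheme, cls_fw modifies an existing entry by cloning it, updating the clone, and splicing it into the hash chain in place of the original. A reduced sketch of the splice; it assumes 'old' is actually on the chain, as fw_change() does:

    #include <linux/rtnetlink.h>
    #include <linux/rcupdate.h>

    struct node {
            struct node __rcu *next;
            struct rcu_head rcu;
    };

    static void replace_node(struct node __rcu **fp, struct node *old,
                             struct node *new)
    {
            struct node *pfp;

            /* advance 'fp' to the link slot that points at 'old' */
            for (pfp = rtnl_dereference(*fp); pfp != old;
                 fp = &pfp->next, pfp = rtnl_dereference(*fp))
                    ;

            RCU_INIT_POINTER(new->next, rtnl_dereference(old->next));
            rcu_assign_pointer(*fp, new);
            /* 'old' is reclaimed via call_rcu() after readers drain */
    }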
diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
index dd9fc25..b665aee 100644
--- a/net/sched/cls_route.c
+++ b/net/sched/cls_route.c
@@ -29,25 +29,26 @@
* are mutually exclusive.
* 3. "to TAG from ANY" has higher priority, than "to ANY from XXX"
*/
-
struct route4_fastmap {
- struct route4_filter *filter;
- u32 id;
- int iif;
+ struct route4_filter *filter;
+ u32 id;
+ int iif;
};
struct route4_head {
- struct route4_fastmap fastmap[16];
- struct route4_bucket *table[256 + 1];
+ struct route4_fastmap fastmap[16];
+ struct route4_bucket __rcu *table[256 + 1];
+ struct rcu_head rcu;
};
struct route4_bucket {
/* 16 FROM buckets + 16 IIF buckets + 1 wildcard bucket */
- struct route4_filter *ht[16 + 16 + 1];
+ struct route4_filter __rcu *ht[16 + 16 + 1];
+ struct rcu_head rcu;
};
struct route4_filter {
- struct route4_filter *next;
+ struct route4_filter __rcu *next;
u32 id;
int iif;
@@ -55,6 +56,8 @@
struct tcf_exts exts;
u32 handle;
struct route4_bucket *bkt;
+ struct tcf_proto *tp;
+ struct rcu_head rcu;
};
#define ROUTE4_FAILURE ((struct route4_filter *)(-1L))
@@ -64,14 +67,13 @@
return id & 0xF;
}
+static DEFINE_SPINLOCK(fastmap_lock);
static void
-route4_reset_fastmap(struct Qdisc *q, struct route4_head *head, u32 id)
+route4_reset_fastmap(struct route4_head *head)
{
- spinlock_t *root_lock = qdisc_root_sleeping_lock(q);
-
- spin_lock_bh(root_lock);
+ spin_lock_bh(&fastmap_lock);
memset(head->fastmap, 0, sizeof(head->fastmap));
- spin_unlock_bh(root_lock);
+ spin_unlock_bh(&fastmap_lock);
}
static void
@@ -80,9 +82,12 @@
{
int h = route4_fastmap_hash(id, iif);
+ /* fastmap updates must look atomic to align id, iif, filter */
+ spin_lock_bh(&fastmap_lock);
head->fastmap[h].id = id;
head->fastmap[h].iif = iif;
head->fastmap[h].filter = f;
+ spin_unlock_bh(&fastmap_lock);
}
static inline int route4_hash_to(u32 id)
@@ -123,7 +128,7 @@
static int route4_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct route4_head *head = tp->root;
+ struct route4_head *head = rcu_dereference_bh(tp->root);
struct dst_entry *dst;
struct route4_bucket *b;
struct route4_filter *f;
@@ -141,32 +146,43 @@
iif = inet_iif(skb);
h = route4_fastmap_hash(id, iif);
+
+ spin_lock(&fastmap_lock);
if (id == head->fastmap[h].id &&
iif == head->fastmap[h].iif &&
(f = head->fastmap[h].filter) != NULL) {
- if (f == ROUTE4_FAILURE)
+ if (f == ROUTE4_FAILURE) {
+ spin_unlock(&fastmap_lock);
goto failure;
+ }
*res = f->res;
+ spin_unlock(&fastmap_lock);
return 0;
}
+ spin_unlock(&fastmap_lock);
h = route4_hash_to(id);
restart:
- b = head->table[h];
+ b = rcu_dereference_bh(head->table[h]);
if (b) {
- for (f = b->ht[route4_hash_from(id)]; f; f = f->next)
+ for (f = rcu_dereference_bh(b->ht[route4_hash_from(id)]);
+ f;
+ f = rcu_dereference_bh(f->next))
if (f->id == id)
ROUTE4_APPLY_RESULT();
- for (f = b->ht[route4_hash_iif(iif)]; f; f = f->next)
+ for (f = rcu_dereference_bh(b->ht[route4_hash_iif(iif)]);
+ f;
+ f = rcu_dereference_bh(f->next))
if (f->iif == iif)
ROUTE4_APPLY_RESULT();
- for (f = b->ht[route4_hash_wild()]; f; f = f->next)
+ for (f = rcu_dereference_bh(b->ht[route4_hash_wild()]);
+ f;
+ f = rcu_dereference_bh(f->next))
ROUTE4_APPLY_RESULT();
-
}
if (h < 256) {
h = 256;
@@ -213,7 +229,7 @@
static unsigned long route4_get(struct tcf_proto *tp, u32 handle)
{
- struct route4_head *head = tp->root;
+ struct route4_head *head = rtnl_dereference(tp->root);
struct route4_bucket *b;
struct route4_filter *f;
unsigned int h1, h2;
@@ -229,9 +245,11 @@
if (h2 > 32)
return 0;
- b = head->table[h1];
+ b = rtnl_dereference(head->table[h1]);
if (b) {
- for (f = b->ht[h2]; f; f = f->next)
+ for (f = rtnl_dereference(b->ht[h2]);
+ f;
+ f = rtnl_dereference(f->next))
if (f->handle == handle)
return (unsigned long)f;
}
@@ -248,16 +266,19 @@
}
static void
-route4_delete_filter(struct tcf_proto *tp, struct route4_filter *f)
+route4_delete_filter(struct rcu_head *head)
{
+ struct route4_filter *f = container_of(head, struct route4_filter, rcu);
+ struct tcf_proto *tp = f->tp;
+
tcf_unbind_filter(tp, &f->res);
- tcf_exts_destroy(tp, &f->exts);
+ tcf_exts_destroy(&f->exts);
kfree(f);
}
static void route4_destroy(struct tcf_proto *tp)
{
- struct route4_head *head = tp->root;
+ struct route4_head *head = rtnl_dereference(tp->root);
int h1, h2;
if (head == NULL)
@@ -266,28 +287,35 @@
for (h1 = 0; h1 <= 256; h1++) {
struct route4_bucket *b;
- b = head->table[h1];
+ b = rtnl_dereference(head->table[h1]);
if (b) {
for (h2 = 0; h2 <= 32; h2++) {
struct route4_filter *f;
- while ((f = b->ht[h2]) != NULL) {
- b->ht[h2] = f->next;
- route4_delete_filter(tp, f);
+ while ((f = rtnl_dereference(b->ht[h2])) != NULL) {
+ struct route4_filter *next;
+
+ next = rtnl_dereference(f->next);
+ RCU_INIT_POINTER(b->ht[h2], next);
+ call_rcu(&f->rcu, route4_delete_filter);
}
}
- kfree(b);
+ RCU_INIT_POINTER(head->table[h1], NULL);
+ kfree_rcu(b, rcu);
}
}
- kfree(head);
+ RCU_INIT_POINTER(tp->root, NULL);
+ kfree_rcu(head, rcu);
}
static int route4_delete(struct tcf_proto *tp, unsigned long arg)
{
- struct route4_head *head = tp->root;
- struct route4_filter **fp, *f = (struct route4_filter *)arg;
- unsigned int h = 0;
+ struct route4_head *head = rtnl_dereference(tp->root);
+ struct route4_filter *f = (struct route4_filter *)arg;
+ struct route4_filter __rcu **fp;
+ struct route4_filter *nf;
struct route4_bucket *b;
+ unsigned int h = 0;
int i;
if (!head || !f)
@@ -296,27 +324,35 @@
h = f->handle;
b = f->bkt;
- for (fp = &b->ht[from_hash(h >> 16)]; *fp; fp = &(*fp)->next) {
- if (*fp == f) {
- tcf_tree_lock(tp);
- *fp = f->next;
- tcf_tree_unlock(tp);
+ fp = &b->ht[from_hash(h >> 16)];
+ for (nf = rtnl_dereference(*fp); nf;
+ fp = &nf->next, nf = rtnl_dereference(*fp)) {
+ if (nf == f) {
+ /* unlink it */
+ RCU_INIT_POINTER(*fp, rtnl_dereference(f->next));
- route4_reset_fastmap(tp->q, head, f->id);
- route4_delete_filter(tp, f);
+ /* Remove any fastmap lookups that might reference the
+ * filter; we unlinked it above, so it cannot find its
+ * way back into the fastmap.
+ */
+ route4_reset_fastmap(head);
- /* Strip tree */
+ /* Delete it */
+ call_rcu(&f->rcu, route4_delete_filter);
- for (i = 0; i <= 32; i++)
- if (b->ht[i])
+ /* Strip RTNL protected tree */
+ for (i = 0; i <= 32; i++) {
+ struct route4_filter *rt;
+
+ rt = rtnl_dereference(b->ht[i]);
+ if (rt)
return 0;
+ }
/* OK, session has no flows */
- tcf_tree_lock(tp);
- head->table[to_hash(h)] = NULL;
- tcf_tree_unlock(tp);
+ RCU_INIT_POINTER(head->table[to_hash(h)], NULL);
+ kfree_rcu(b, rcu);
- kfree(b);
return 0;
}
}
@@ -380,26 +416,25 @@
}
h1 = to_hash(nhandle);
- b = head->table[h1];
+ b = rtnl_dereference(head->table[h1]);
if (!b) {
err = -ENOBUFS;
b = kzalloc(sizeof(struct route4_bucket), GFP_KERNEL);
if (b == NULL)
goto errout;
- tcf_tree_lock(tp);
- head->table[h1] = b;
- tcf_tree_unlock(tp);
+ rcu_assign_pointer(head->table[h1], b);
} else {
unsigned int h2 = from_hash(nhandle >> 16);
err = -EEXIST;
- for (fp = b->ht[h2]; fp; fp = fp->next)
+ for (fp = rtnl_dereference(b->ht[h2]);
+ fp;
+ fp = rtnl_dereference(fp->next))
if (fp->handle == f->handle)
goto errout;
}
- tcf_tree_lock(tp);
if (tb[TCA_ROUTE4_TO])
f->id = to;
@@ -410,7 +445,7 @@
f->handle = nhandle;
f->bkt = b;
- tcf_tree_unlock(tp);
+ f->tp = tp;
if (tb[TCA_ROUTE4_CLASSID]) {
f->res.classid = nla_get_u32(tb[TCA_ROUTE4_CLASSID]);
@@ -421,7 +456,7 @@
return 0;
errout:
- tcf_exts_destroy(tp, &e);
+ tcf_exts_destroy(&e);
return err;
}
@@ -431,14 +466,15 @@
struct nlattr **tca,
unsigned long *arg, bool ovr)
{
- struct route4_head *head = tp->root;
- struct route4_filter *f, *f1, **fp;
+ struct route4_head *head = rtnl_dereference(tp->root);
+ struct route4_filter __rcu **fp;
+ struct route4_filter *fold, *f1, *pfp, *f = NULL;
struct route4_bucket *b;
struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_ROUTE4_MAX + 1];
unsigned int h, th;
- u32 old_handle = 0;
int err;
+ bool new = true;
if (opt == NULL)
return handle ? -EINVAL : 0;
@@ -447,70 +483,70 @@
if (err < 0)
return err;
- f = (struct route4_filter *)*arg;
- if (f) {
- if (f->handle != handle && handle)
+ fold = (struct route4_filter *)*arg;
+ if (fold && handle && fold->handle != handle)
return -EINVAL;
- if (f->bkt)
- old_handle = f->handle;
-
- err = route4_set_parms(net, tp, base, f, handle, head, tb,
- tca[TCA_RATE], 0, ovr);
- if (err < 0)
- return err;
-
- goto reinsert;
- }
-
err = -ENOBUFS;
if (head == NULL) {
head = kzalloc(sizeof(struct route4_head), GFP_KERNEL);
if (head == NULL)
goto errout;
-
- tcf_tree_lock(tp);
- tp->root = head;
- tcf_tree_unlock(tp);
+ rcu_assign_pointer(tp->root, head);
}
f = kzalloc(sizeof(struct route4_filter), GFP_KERNEL);
- if (f == NULL)
+ if (!f)
goto errout;
tcf_exts_init(&f->exts, TCA_ROUTE4_ACT, TCA_ROUTE4_POLICE);
+ if (fold) {
+ f->id = fold->id;
+ f->iif = fold->iif;
+ f->res = fold->res;
+ f->handle = fold->handle;
+
+ f->tp = fold->tp;
+ f->bkt = fold->bkt;
+ new = false;
+ }
+
err = route4_set_parms(net, tp, base, f, handle, head, tb,
- tca[TCA_RATE], 1, ovr);
+ tca[TCA_RATE], new, ovr);
if (err < 0)
goto errout;
-reinsert:
h = from_hash(f->handle >> 16);
- for (fp = &f->bkt->ht[h]; (f1 = *fp) != NULL; fp = &f1->next)
+ fp = &f->bkt->ht[h];
+ for (pfp = rtnl_dereference(*fp);
+ (f1 = rtnl_dereference(*fp)) != NULL;
+ fp = &f1->next)
if (f->handle < f1->handle)
break;
- f->next = f1;
- tcf_tree_lock(tp);
- *fp = f;
+ rcu_assign_pointer(f->next, f1);
+ rcu_assign_pointer(*fp, f);
- if (old_handle && f->handle != old_handle) {
- th = to_hash(old_handle);
- h = from_hash(old_handle >> 16);
- b = head->table[th];
+ if (fold && fold->handle && f->handle != fold->handle) {
+ th = to_hash(fold->handle);
+ h = from_hash(fold->handle >> 16);
+ b = rtnl_dereference(head->table[th]);
if (b) {
- for (fp = &b->ht[h]; *fp; fp = &(*fp)->next) {
- if (*fp == f) {
+ fp = &b->ht[h];
+ for (pfp = rtnl_dereference(*fp); pfp;
+ fp = &pfp->next, pfp = rtnl_dereference(*fp)) {
+ if (pfp == f) {
*fp = f->next;
break;
}
}
}
}
- tcf_tree_unlock(tp);
- route4_reset_fastmap(tp->q, head, f->id);
+ route4_reset_fastmap(head);
*arg = (unsigned long)f;
+ if (fold)
+ call_rcu(&fold->rcu, route4_delete_filter);
return 0;
errout:
@@ -520,7 +556,7 @@
static void route4_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct route4_head *head = tp->root;
+ struct route4_head *head = rtnl_dereference(tp->root);
unsigned int h, h1;
if (head == NULL)
@@ -530,13 +566,15 @@
return;
for (h = 0; h <= 256; h++) {
- struct route4_bucket *b = head->table[h];
+ struct route4_bucket *b = rtnl_dereference(head->table[h]);
if (b) {
for (h1 = 0; h1 <= 32; h1++) {
struct route4_filter *f;
- for (f = b->ht[h1]; f; f = f->next) {
+ for (f = rtnl_dereference(b->ht[h1]);
+ f;
+ f = rtnl_dereference(f->next)) {
if (arg->count < arg->skip) {
arg->count++;
continue;
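cls_route's fastmap is the one structure that cannot ride on RCU alone: the cached id/iif/filter triple must be observed atomically as a unit, so the patch trades the qdisc root lock for a dedicated spinlock taken in both the lookup and update paths. The write side, sketched:

    #include <linux/spinlock.h>

    struct fmap_entry { u32 id; int iif; void *filter; };

    static DEFINE_SPINLOCK(fmap_lock);

    /* Update all three fields under one lock so a concurrent lookup
     * never sees a torn id/iif/filter combination.
     */
    static void fmap_update(struct fmap_entry *e, u32 id, int iif, void *f)
    {
            spin_lock_bh(&fmap_lock);
            e->id = id;
            e->iif = iif;
            e->filter = f;
            spin_unlock_bh(&fmap_lock);
    }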
diff --git a/net/sched/cls_rsvp.h b/net/sched/cls_rsvp.h
index 1020e23..6bb55f2 100644
--- a/net/sched/cls_rsvp.h
+++ b/net/sched/cls_rsvp.h
@@ -70,31 +70,34 @@
u32 tmap[256/32];
u32 hgenerator;
u8 tgenerator;
- struct rsvp_session *ht[256];
+ struct rsvp_session __rcu *ht[256];
+ struct rcu_head rcu;
};
struct rsvp_session {
- struct rsvp_session *next;
- __be32 dst[RSVP_DST_LEN];
- struct tc_rsvp_gpi dpi;
- u8 protocol;
- u8 tunnelid;
+ struct rsvp_session __rcu *next;
+ __be32 dst[RSVP_DST_LEN];
+ struct tc_rsvp_gpi dpi;
+ u8 protocol;
+ u8 tunnelid;
/* 16 (src,sport) hash slots, and one wildcard source slot */
- struct rsvp_filter *ht[16 + 1];
+ struct rsvp_filter __rcu *ht[16 + 1];
+ struct rcu_head rcu;
};
struct rsvp_filter {
- struct rsvp_filter *next;
- __be32 src[RSVP_DST_LEN];
- struct tc_rsvp_gpi spi;
- u8 tunnelhdr;
+ struct rsvp_filter __rcu *next;
+ __be32 src[RSVP_DST_LEN];
+ struct tc_rsvp_gpi spi;
+ u8 tunnelhdr;
- struct tcf_result res;
- struct tcf_exts exts;
+ struct tcf_result res;
+ struct tcf_exts exts;
- u32 handle;
- struct rsvp_session *sess;
+ u32 handle;
+ struct rsvp_session *sess;
+ struct rcu_head rcu;
};
static inline unsigned int hash_dst(__be32 *dst, u8 protocol, u8 tunnelid)
@@ -128,7 +131,7 @@
static int rsvp_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct rsvp_session **sht = ((struct rsvp_head *)tp->root)->ht;
+ struct rsvp_head *head = rcu_dereference_bh(tp->root);
struct rsvp_session *s;
struct rsvp_filter *f;
unsigned int h1, h2;
@@ -169,7 +172,8 @@
h1 = hash_dst(dst, protocol, tunnelid);
h2 = hash_src(src);
- for (s = sht[h1]; s; s = s->next) {
+ for (s = rcu_dereference_bh(head->ht[h1]); s;
+ s = rcu_dereference_bh(s->next)) {
if (dst[RSVP_DST_LEN-1] == s->dst[RSVP_DST_LEN - 1] &&
protocol == s->protocol &&
!(s->dpi.mask &
@@ -181,7 +185,8 @@
#endif
tunnelid == s->tunnelid) {
- for (f = s->ht[h2]; f; f = f->next) {
+ for (f = rcu_dereference_bh(s->ht[h2]); f;
+ f = rcu_dereference_bh(f->next)) {
if (src[RSVP_DST_LEN-1] == f->src[RSVP_DST_LEN - 1] &&
!(f->spi.mask & (*(u32 *)(xprt + f->spi.offset) ^ f->spi.key))
#if RSVP_DST_LEN == 4
@@ -205,7 +210,8 @@
}
/* And wildcard bucket... */
- for (f = s->ht[16]; f; f = f->next) {
+ for (f = rcu_dereference_bh(s->ht[16]); f;
+ f = rcu_dereference_bh(f->next)) {
*res = f->res;
RSVP_APPLY_RESULT();
goto matched;
@@ -216,9 +222,36 @@
return -1;
}
+static void rsvp_replace(struct tcf_proto *tp, struct rsvp_filter *n, u32 h)
+{
+ struct rsvp_head *head = rtnl_dereference(tp->root);
+ struct rsvp_session *s;
+ struct rsvp_filter __rcu **ins;
+ struct rsvp_filter *pins;
+ unsigned int h1 = h & 0xFF;
+ unsigned int h2 = (h >> 8) & 0xFF;
+
+ for (s = rtnl_dereference(head->ht[h1]); s;
+ s = rtnl_dereference(s->next)) {
+ for (ins = &s->ht[h2], pins = rtnl_dereference(*ins); ;
+ ins = &pins->next, pins = rtnl_dereference(*ins)) {
+ if (pins->handle == h) {
+ RCU_INIT_POINTER(n->next, pins->next);
+ rcu_assign_pointer(*ins, n);
+ return;
+ }
+ }
+ }
+
+ /* Something went wrong if we are trying to replace a non-existent
+ * node. Might as well halt instead of silently failing.
+ */
+ BUG_ON(1);
+}
+
static unsigned long rsvp_get(struct tcf_proto *tp, u32 handle)
{
- struct rsvp_session **sht = ((struct rsvp_head *)tp->root)->ht;
+ struct rsvp_head *head = rtnl_dereference(tp->root);
struct rsvp_session *s;
struct rsvp_filter *f;
unsigned int h1 = handle & 0xFF;
@@ -227,8 +260,10 @@
if (h2 > 16)
return 0;
- for (s = sht[h1]; s; s = s->next) {
- for (f = s->ht[h2]; f; f = f->next) {
+ for (s = rtnl_dereference(head->ht[h1]); s;
+ s = rtnl_dereference(s->next)) {
+ for (f = rtnl_dereference(s->ht[h2]); f;
+ f = rtnl_dereference(f->next)) {
if (f->handle == handle)
return (unsigned long)f;
}
@@ -246,7 +281,7 @@
data = kzalloc(sizeof(struct rsvp_head), GFP_KERNEL);
if (data) {
- tp->root = data;
+ rcu_assign_pointer(tp->root, data);
return 0;
}
return -ENOBUFS;
@@ -256,54 +291,55 @@
rsvp_delete_filter(struct tcf_proto *tp, struct rsvp_filter *f)
{
tcf_unbind_filter(tp, &f->res);
- tcf_exts_destroy(tp, &f->exts);
- kfree(f);
+ tcf_exts_destroy(&f->exts);
+ kfree_rcu(f, rcu);
}
static void rsvp_destroy(struct tcf_proto *tp)
{
- struct rsvp_head *data = xchg(&tp->root, NULL);
- struct rsvp_session **sht;
+ struct rsvp_head *data = rtnl_dereference(tp->root);
int h1, h2;
if (data == NULL)
return;
- sht = data->ht;
+ RCU_INIT_POINTER(tp->root, NULL);
for (h1 = 0; h1 < 256; h1++) {
struct rsvp_session *s;
- while ((s = sht[h1]) != NULL) {
- sht[h1] = s->next;
+ while ((s = rtnl_dereference(data->ht[h1])) != NULL) {
+ RCU_INIT_POINTER(data->ht[h1], s->next);
for (h2 = 0; h2 <= 16; h2++) {
struct rsvp_filter *f;
- while ((f = s->ht[h2]) != NULL) {
- s->ht[h2] = f->next;
+ while ((f = rtnl_dereference(s->ht[h2])) != NULL) {
+ rcu_assign_pointer(s->ht[h2], f->next);
rsvp_delete_filter(tp, f);
}
}
- kfree(s);
+ kfree_rcu(s, rcu);
}
}
- kfree(data);
+ kfree_rcu(data, rcu);
}
static int rsvp_delete(struct tcf_proto *tp, unsigned long arg)
{
- struct rsvp_filter **fp, *f = (struct rsvp_filter *)arg;
+ struct rsvp_head *head = rtnl_dereference(tp->root);
+ struct rsvp_filter *nfp, *f = (struct rsvp_filter *)arg;
+ struct rsvp_filter __rcu **fp;
unsigned int h = f->handle;
- struct rsvp_session **sp;
- struct rsvp_session *s = f->sess;
+ struct rsvp_session __rcu **sp;
+ struct rsvp_session *nsp, *s = f->sess;
int i;
- for (fp = &s->ht[(h >> 8) & 0xFF]; *fp; fp = &(*fp)->next) {
- if (*fp == f) {
- tcf_tree_lock(tp);
- *fp = f->next;
- tcf_tree_unlock(tp);
+ fp = &s->ht[(h >> 8) & 0xFF];
+ for (nfp = rtnl_dereference(*fp); nfp;
+ fp = &nfp->next, nfp = rtnl_dereference(*fp)) {
+ if (nfp == f) {
+ RCU_INIT_POINTER(*fp, f->next);
rsvp_delete_filter(tp, f);
/* Strip tree */
@@ -313,14 +349,12 @@
return 0;
/* OK, session has no flows */
- for (sp = &((struct rsvp_head *)tp->root)->ht[h & 0xFF];
- *sp; sp = &(*sp)->next) {
- if (*sp == s) {
- tcf_tree_lock(tp);
- *sp = s->next;
- tcf_tree_unlock(tp);
-
- kfree(s);
+ sp = &head->ht[h & 0xFF];
+ for (nsp = rtnl_dereference(*sp); nsp;
+ sp = &nsp->next, nsp = rtnl_dereference(*sp)) {
+ if (nsp == s) {
+ RCU_INIT_POINTER(*sp, s->next);
+ kfree_rcu(s, rcu);
return 0;
}
}
@@ -333,7 +367,7 @@
static unsigned int gen_handle(struct tcf_proto *tp, unsigned salt)
{
- struct rsvp_head *data = tp->root;
+ struct rsvp_head *data = rtnl_dereference(tp->root);
int i = 0xFFFF;
while (i-- > 0) {
@@ -361,7 +395,7 @@
static void tunnel_recycle(struct rsvp_head *data)
{
- struct rsvp_session **sht = data->ht;
+ struct rsvp_session __rcu **sht = data->ht;
u32 tmap[256/32];
int h1, h2;
@@ -369,11 +403,13 @@
for (h1 = 0; h1 < 256; h1++) {
struct rsvp_session *s;
- for (s = sht[h1]; s; s = s->next) {
+ for (s = rtnl_dereference(sht[h1]); s;
+ s = rtnl_dereference(s->next)) {
for (h2 = 0; h2 <= 16; h2++) {
struct rsvp_filter *f;
- for (f = s->ht[h2]; f; f = f->next) {
+ for (f = rtnl_dereference(s->ht[h2]); f;
+ f = rtnl_dereference(f->next)) {
if (f->tunnelhdr == 0)
continue;
data->tgenerator = f->res.classid;
@@ -417,9 +453,11 @@
struct nlattr **tca,
unsigned long *arg, bool ovr)
{
- struct rsvp_head *data = tp->root;
- struct rsvp_filter *f, **fp;
- struct rsvp_session *s, **sp;
+ struct rsvp_head *data = rtnl_dereference(tp->root);
+ struct rsvp_filter *f, *nfp;
+ struct rsvp_filter __rcu **fp;
+ struct rsvp_session *nsp, *s;
+ struct rsvp_session __rcu **sp;
struct tc_rsvp_pinfo *pinfo = NULL;
struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_RSVP_MAX + 1];
@@ -443,15 +481,26 @@
f = (struct rsvp_filter *)*arg;
if (f) {
/* Node exists: adjust only classid */
+ struct rsvp_filter *n;
if (f->handle != handle && handle)
goto errout2;
- if (tb[TCA_RSVP_CLASSID]) {
- f->res.classid = nla_get_u32(tb[TCA_RSVP_CLASSID]);
- tcf_bind_filter(tp, &f->res, base);
+
+ n = kmemdup(f, sizeof(*f), GFP_KERNEL);
+ if (!n) {
+ err = -ENOMEM;
+ goto errout2;
}
- tcf_exts_change(tp, &f->exts, &e);
+ tcf_exts_init(&n->exts, TCA_RSVP_ACT, TCA_RSVP_POLICE);
+
+ if (tb[TCA_RSVP_CLASSID]) {
+ n->res.classid = nla_get_u32(tb[TCA_RSVP_CLASSID]);
+ tcf_bind_filter(tp, &n->res, base);
+ }
+
+ tcf_exts_change(tp, &n->exts, &e);
+ rsvp_replace(tp, n, handle);
return 0;
}
@@ -499,7 +548,9 @@
goto errout;
}
- for (sp = &data->ht[h1]; (s = *sp) != NULL; sp = &s->next) {
+ for (sp = &data->ht[h1];
+ (s = rtnl_dereference(*sp)) != NULL;
+ sp = &s->next) {
if (dst[RSVP_DST_LEN-1] == s->dst[RSVP_DST_LEN-1] &&
pinfo && pinfo->protocol == s->protocol &&
memcmp(&pinfo->dpi, &s->dpi, sizeof(s->dpi)) == 0 &&
@@ -521,12 +572,16 @@
tcf_exts_change(tp, &f->exts, &e);
- for (fp = &s->ht[h2]; *fp; fp = &(*fp)->next)
- if (((*fp)->spi.mask & f->spi.mask) != f->spi.mask)
+ fp = &s->ht[h2];
+ for (nfp = rtnl_dereference(*fp); nfp;
+ fp = &nfp->next, nfp = rtnl_dereference(*fp)) {
+ __u32 mask = nfp->spi.mask & f->spi.mask;
+
+ if (mask != f->spi.mask)
break;
- f->next = *fp;
- wmb();
- *fp = f;
+ }
+ RCU_INIT_POINTER(f->next, nfp);
+ rcu_assign_pointer(*fp, f);
*arg = (unsigned long)f;
return 0;
@@ -546,26 +601,27 @@
s->protocol = pinfo->protocol;
s->tunnelid = pinfo->tunnelid;
}
- for (sp = &data->ht[h1]; *sp; sp = &(*sp)->next) {
- if (((*sp)->dpi.mask&s->dpi.mask) != s->dpi.mask)
+ sp = &data->ht[h1];
+ for (nsp = rtnl_dereference(*sp); nsp;
+ sp = &nsp->next, nsp = rtnl_dereference(*sp)) {
+ if ((nsp->dpi.mask & s->dpi.mask) != s->dpi.mask)
break;
}
- s->next = *sp;
- wmb();
- *sp = s;
+ RCU_INIT_POINTER(s->next, nsp);
+ rcu_assign_pointer(*sp, s);
goto insert;
errout:
kfree(f);
errout2:
- tcf_exts_destroy(tp, &e);
+ tcf_exts_destroy(&e);
return err;
}
static void rsvp_walk(struct tcf_proto *tp, struct tcf_walker *arg)
{
- struct rsvp_head *head = tp->root;
+ struct rsvp_head *head = rtnl_dereference(tp->root);
unsigned int h, h1;
if (arg->stop)
@@ -574,11 +630,13 @@
for (h = 0; h < 256; h++) {
struct rsvp_session *s;
- for (s = head->ht[h]; s; s = s->next) {
+ for (s = rtnl_dereference(head->ht[h]); s;
+ s = rtnl_dereference(s->next)) {
for (h1 = 0; h1 <= 16; h1++) {
struct rsvp_filter *f;
- for (f = s->ht[h1]; f; f = f->next) {
+ for (f = rtnl_dereference(s->ht[h1]); f;
+ f = rtnl_dereference(f->next)) {
if (arg->count < arg->skip) {
arg->count++;
continue;
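
The cls_rsvp changes above repeat one updater idiom: walk the chain through a __rcu double pointer under RTNL, then swing the slot with rcu_assign_pointer() so readers never observe a half-linked list. Below is a minimal sketch of that idiom, assuming kernel context with RTNL held; the item struct and replace_item() helper are illustrative names, not part of the patch.

#include <linux/rcupdate.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
#include <linux/types.h>

struct item {
        struct item __rcu *next;
        u32 handle;
        struct rcu_head rcu;
};

static void replace_item(struct item __rcu **head, struct item *n, u32 handle)
{
        struct item __rcu **ins;
        struct item *pins;

        for (ins = head, pins = rtnl_dereference(*ins); pins;
             ins = &pins->next, pins = rtnl_dereference(*ins)) {
                if (pins->handle == handle) {
                        /* Inherit the tail, then flip the slot atomically:
                         * concurrent readers see the old or the new node,
                         * never a half-linked chain.
                         */
                        RCU_INIT_POINTER(n->next, pins->next);
                        rcu_assign_pointer(*ins, n);
                        kfree_rcu(pins, rcu);
                        return;
                }
        }
}
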
diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
index 3e9f764..8d0e83d 100644
--- a/net/sched/cls_tcindex.c
+++ b/net/sched/cls_tcindex.c
@@ -32,19 +32,21 @@
struct tcindex_filter {
u16 key;
struct tcindex_filter_result result;
- struct tcindex_filter *next;
+ struct tcindex_filter __rcu *next;
+ struct rcu_head rcu;
};
struct tcindex_data {
struct tcindex_filter_result *perfect; /* perfect hash; NULL if none */
- struct tcindex_filter **h; /* imperfect hash; only used if !perfect;
- NULL if unused */
+ struct tcindex_filter __rcu **h; /* imperfect hash; */
+ struct tcf_proto *tp;
u16 mask; /* AND key with mask */
- int shift; /* shift ANDed key to the right */
- int hash; /* hash table size; 0 if undefined */
- int alloc_hash; /* allocated size */
- int fall_through; /* 0: only classify if explicit match */
+ u32 shift; /* shift ANDed key to the right */
+ u32 hash; /* hash table size; 0 if undefined */
+ u32 alloc_hash; /* allocated size */
+ u32 fall_through; /* 0: only classify if explicit match */
+ struct rcu_head rcu;
};
static inline int
@@ -56,13 +58,18 @@
static struct tcindex_filter_result *
tcindex_lookup(struct tcindex_data *p, u16 key)
{
- struct tcindex_filter *f;
+ if (p->perfect) {
+ struct tcindex_filter_result *f = p->perfect + key;
- if (p->perfect)
- return tcindex_filter_is_set(p->perfect + key) ?
- p->perfect + key : NULL;
- else if (p->h) {
- for (f = p->h[key % p->hash]; f; f = f->next)
+ return tcindex_filter_is_set(f) ? f : NULL;
+ } else if (p->h) {
+ struct tcindex_filter __rcu **fp;
+ struct tcindex_filter *f;
+
+ fp = &p->h[key % p->hash];
+ for (f = rcu_dereference_bh_rtnl(*fp);
+ f;
+ fp = &f->next, f = rcu_dereference_bh_rtnl(*fp))
if (f->key == key)
return &f->result;
}
@@ -74,7 +81,7 @@
static int tcindex_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rcu_dereference_bh(tp->root);
struct tcindex_filter_result *f;
int key = (skb->tc_index & p->mask) >> p->shift;
@@ -99,7 +106,7 @@
static unsigned long tcindex_get(struct tcf_proto *tp, u32 handle)
{
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter_result *r;
pr_debug("tcindex_get(tp %p,handle 0x%08x)\n", tp, handle);
@@ -129,49 +136,59 @@
p->hash = DEFAULT_HASH_SIZE;
p->fall_through = 1;
- tp->root = p;
+ rcu_assign_pointer(tp->root, p);
return 0;
}
-
static int
-__tcindex_delete(struct tcf_proto *tp, unsigned long arg, int lock)
+tcindex_delete(struct tcf_proto *tp, unsigned long arg)
{
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter_result *r = (struct tcindex_filter_result *) arg;
+ struct tcindex_filter __rcu **walk;
struct tcindex_filter *f = NULL;
- pr_debug("tcindex_delete(tp %p,arg 0x%lx),p %p,f %p\n", tp, arg, p, f);
+ pr_debug("tcindex_delete(tp %p,arg 0x%lx),p %p\n", tp, arg, p);
if (p->perfect) {
if (!r->res.class)
return -ENOENT;
} else {
int i;
- struct tcindex_filter **walk = NULL;
- for (i = 0; i < p->hash; i++)
- for (walk = p->h+i; *walk; walk = &(*walk)->next)
- if (&(*walk)->result == r)
+ for (i = 0; i < p->hash; i++) {
+ walk = p->h + i;
+ for (f = rtnl_dereference(*walk); f;
+ walk = &f->next, f = rtnl_dereference(*walk)) {
+ if (&f->result == r)
goto found;
+ }
+ }
return -ENOENT;
found:
- f = *walk;
- if (lock)
- tcf_tree_lock(tp);
- *walk = f->next;
- if (lock)
- tcf_tree_unlock(tp);
+ rcu_assign_pointer(*walk, rtnl_dereference(f->next));
}
tcf_unbind_filter(tp, &r->res);
- tcf_exts_destroy(tp, &r->exts);
- kfree(f);
+ tcf_exts_destroy(&r->exts);
+ if (f)
+ kfree_rcu(f, rcu);
return 0;
}
-static int tcindex_delete(struct tcf_proto *tp, unsigned long arg)
+static int tcindex_destroy_element(struct tcf_proto *tp,
+ unsigned long arg,
+ struct tcf_walker *walker)
{
- return __tcindex_delete(tp, arg, 1);
+ return tcindex_delete(tp, arg);
+}
+
+static void __tcindex_destroy(struct rcu_head *head)
+{
+ struct tcindex_data *p = container_of(head, struct tcindex_data, rcu);
+
+ kfree(p->perfect);
+ kfree(p->h);
+ kfree(p);
}
static inline int
@@ -194,6 +211,14 @@
tcf_exts_init(&r->exts, TCA_TCINDEX_ACT, TCA_TCINDEX_POLICE);
}
+static void __tcindex_partial_destroy(struct rcu_head *head)
+{
+ struct tcindex_data *p = container_of(head, struct tcindex_data, rcu);
+
+ kfree(p->perfect);
+ kfree(p);
+}
+
static int
tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
u32 handle, struct tcindex_data *p,
@@ -203,7 +228,7 @@
int err, balloc = 0;
struct tcindex_filter_result new_filter_result, *old_r = r;
struct tcindex_filter_result cr;
- struct tcindex_data cp;
+ struct tcindex_data *cp, *oldp;
struct tcindex_filter *f = NULL; /* make gcc behave */
struct tcf_exts e;
@@ -212,84 +237,117 @@
if (err < 0)
return err;
- memcpy(&cp, p, sizeof(cp));
- tcindex_filter_result_init(&new_filter_result);
+ err = -ENOMEM;
+ /* tcindex_data attributes must look atomic to classifier/lookup, so
+ * allocate a new tcindex_data and RCU-assign it onto the root, keeping
+ * the perfect hash and hash pointers from the old data.
+ */
+ cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+ if (!cp)
+ goto errout;
+ cp->mask = p->mask;
+ cp->shift = p->shift;
+ cp->hash = p->hash;
+ cp->alloc_hash = p->alloc_hash;
+ cp->fall_through = p->fall_through;
+ cp->tp = tp;
+
+ if (p->perfect) {
+ cp->perfect = kmemdup(p->perfect,
+ sizeof(*r) * cp->hash, GFP_KERNEL);
+ if (!cp->perfect)
+ goto errout;
+ balloc = 1;
+ }
+ cp->h = p->h;
+
+ tcindex_filter_result_init(&new_filter_result);
tcindex_filter_result_init(&cr);
if (old_r)
cr.res = r->res;
if (tb[TCA_TCINDEX_HASH])
- cp.hash = nla_get_u32(tb[TCA_TCINDEX_HASH]);
+ cp->hash = nla_get_u32(tb[TCA_TCINDEX_HASH]);
if (tb[TCA_TCINDEX_MASK])
- cp.mask = nla_get_u16(tb[TCA_TCINDEX_MASK]);
+ cp->mask = nla_get_u16(tb[TCA_TCINDEX_MASK]);
if (tb[TCA_TCINDEX_SHIFT])
- cp.shift = nla_get_u32(tb[TCA_TCINDEX_SHIFT]);
+ cp->shift = nla_get_u32(tb[TCA_TCINDEX_SHIFT]);
err = -EBUSY;
+
/* Hash already allocated, make sure that we still meet the
* requirements for the allocated hash.
*/
- if (cp.perfect) {
- if (!valid_perfect_hash(&cp) ||
- cp.hash > cp.alloc_hash)
- goto errout;
- } else if (cp.h && cp.hash != cp.alloc_hash)
- goto errout;
+ if (cp->perfect) {
+ if (!valid_perfect_hash(cp) ||
+ cp->hash > cp->alloc_hash)
+ goto errout_alloc;
+ } else if (cp->h && cp->hash != cp->alloc_hash) {
+ goto errout_alloc;
+ }
err = -EINVAL;
if (tb[TCA_TCINDEX_FALL_THROUGH])
- cp.fall_through = nla_get_u32(tb[TCA_TCINDEX_FALL_THROUGH]);
+ cp->fall_through = nla_get_u32(tb[TCA_TCINDEX_FALL_THROUGH]);
- if (!cp.hash) {
+ if (!cp->hash) {
/* Hash not specified, use perfect hash if the upper limit
* of the hashing index is below the threshold.
*/
- if ((cp.mask >> cp.shift) < PERFECT_HASH_THRESHOLD)
- cp.hash = (cp.mask >> cp.shift) + 1;
+ if ((cp->mask >> cp->shift) < PERFECT_HASH_THRESHOLD)
+ cp->hash = (cp->mask >> cp->shift) + 1;
else
- cp.hash = DEFAULT_HASH_SIZE;
+ cp->hash = DEFAULT_HASH_SIZE;
}
- if (!cp.perfect && !cp.h)
- cp.alloc_hash = cp.hash;
+ if (!cp->perfect && !cp->h)
+ cp->alloc_hash = cp->hash;
/* Note: this could be as restrictive as if (handle & ~(mask >> shift))
* but then, we'd fail handles that may become valid after some future
* mask change. While this is extremely unlikely to ever matter,
* the check below is safer (and also more backwards-compatible).
*/
- if (cp.perfect || valid_perfect_hash(&cp))
- if (handle >= cp.alloc_hash)
- goto errout;
+ if (cp->perfect || valid_perfect_hash(cp))
+ if (handle >= cp->alloc_hash)
+ goto errout_alloc;
err = -ENOMEM;
- if (!cp.perfect && !cp.h) {
- if (valid_perfect_hash(&cp)) {
+ if (!cp->perfect && !cp->h) {
+ if (valid_perfect_hash(cp)) {
int i;
- cp.perfect = kcalloc(cp.hash, sizeof(*r), GFP_KERNEL);
- if (!cp.perfect)
- goto errout;
- for (i = 0; i < cp.hash; i++)
- tcf_exts_init(&cp.perfect[i].exts, TCA_TCINDEX_ACT,
+ cp->perfect = kcalloc(cp->hash, sizeof(*r), GFP_KERNEL);
+ if (!cp->perfect)
+ goto errout_alloc;
+ for (i = 0; i < cp->hash; i++)
+ tcf_exts_init(&cp->perfect[i].exts,
+ TCA_TCINDEX_ACT,
TCA_TCINDEX_POLICE);
balloc = 1;
} else {
- cp.h = kcalloc(cp.hash, sizeof(f), GFP_KERNEL);
- if (!cp.h)
- goto errout;
+ struct tcindex_filter __rcu **hash;
+
+ hash = kcalloc(cp->hash,
+ sizeof(struct tcindex_filter *),
+ GFP_KERNEL);
+
+ if (!hash)
+ goto errout_alloc;
+
+ cp->h = hash;
balloc = 2;
}
}
- if (cp.perfect)
- r = cp.perfect + handle;
+ if (cp->perfect)
+ r = cp->perfect + handle;
else
- r = tcindex_lookup(&cp, handle) ? : &new_filter_result;
+ r = tcindex_lookup(cp, handle) ? : &new_filter_result;
if (r == &new_filter_result) {
f = kzalloc(sizeof(*f), GFP_KERNEL);
@@ -307,34 +365,42 @@
else
tcf_exts_change(tp, &cr.exts, &e);
- tcf_tree_lock(tp);
if (old_r && old_r != r)
tcindex_filter_result_init(old_r);
- memcpy(p, &cp, sizeof(cp));
+ oldp = p;
r->res = cr.res;
+ rcu_assign_pointer(tp->root, cp);
if (r == &new_filter_result) {
- struct tcindex_filter **fp;
+ struct tcindex_filter *nfp;
+ struct tcindex_filter __rcu **fp;
f->key = handle;
f->result = new_filter_result;
f->next = NULL;
- for (fp = p->h+(handle % p->hash); *fp; fp = &(*fp)->next)
- /* nothing */;
- *fp = f;
- }
- tcf_tree_unlock(tp);
+ fp = cp->h + (handle % cp->hash);
+ for (nfp = rtnl_dereference(*fp);
+ nfp;
+ fp = &nfp->next, nfp = rtnl_dereference(*fp))
+ ; /* nothing */
+
+ rcu_assign_pointer(*fp, f);
+ }
+
+ if (oldp)
+ call_rcu(&oldp->rcu, __tcindex_partial_destroy);
return 0;
errout_alloc:
if (balloc == 1)
- kfree(cp.perfect);
+ kfree(cp->perfect);
else if (balloc == 2)
- kfree(cp.h);
+ kfree(cp->h);
errout:
- tcf_exts_destroy(tp, &e);
+ kfree(cp);
+ tcf_exts_destroy(&e);
return err;
}
@@ -345,7 +411,7 @@
{
struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_TCINDEX_MAX + 1];
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter_result *r = (struct tcindex_filter_result *) *arg;
int err;
@@ -364,10 +430,9 @@
tca[TCA_RATE], ovr);
}
-
static void tcindex_walk(struct tcf_proto *tp, struct tcf_walker *walker)
{
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter *f, *next;
int i;
@@ -390,8 +455,8 @@
if (!p->h)
return;
for (i = 0; i < p->hash; i++) {
- for (f = p->h[i]; f; f = next) {
- next = f->next;
+ for (f = rtnl_dereference(p->h[i]); f; f = next) {
+ next = rtnl_dereference(f->next);
if (walker->count >= walker->skip) {
if (walker->fn(tp, (unsigned long) &f->result,
walker) < 0) {
@@ -404,17 +469,9 @@
}
}
-
-static int tcindex_destroy_element(struct tcf_proto *tp,
- unsigned long arg, struct tcf_walker *walker)
-{
- return __tcindex_delete(tp, arg, 0);
-}
-
-
static void tcindex_destroy(struct tcf_proto *tp)
{
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcf_walker walker;
pr_debug("tcindex_destroy(tp %p),p %p\n", tp, p);
@@ -422,17 +479,16 @@
walker.skip = 0;
walker.fn = tcindex_destroy_element;
tcindex_walk(tp, &walker);
- kfree(p->perfect);
- kfree(p->h);
- kfree(p);
- tp->root = NULL;
+
+ RCU_INIT_POINTER(tp->root, NULL);
+ call_rcu(&p->rcu, __tcindex_destroy);
}
static int tcindex_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
struct sk_buff *skb, struct tcmsg *t)
{
- struct tcindex_data *p = tp->root;
+ struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter_result *r = (struct tcindex_filter_result *) fh;
unsigned char *b = skb_tail_pointer(skb);
struct nlattr *nest;
@@ -455,15 +511,18 @@
nla_nest_end(skb, nest);
} else {
if (p->perfect) {
- t->tcm_handle = r-p->perfect;
+ t->tcm_handle = r - p->perfect;
} else {
struct tcindex_filter *f;
+ struct tcindex_filter __rcu **fp;
int i;
t->tcm_handle = 0;
for (i = 0; !t->tcm_handle && i < p->hash; i++) {
- for (f = p->h[i]; !t->tcm_handle && f;
- f = f->next) {
+ fp = &p->h[i];
+ for (f = rtnl_dereference(*fp);
+ !t->tcm_handle && f;
+ fp = &f->next, f = rtnl_dereference(*fp)) {
if (&f->result == r)
t->tcm_handle = f->key;
}
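
The tcindex rewrite follows the classic copy-update-publish shape: build a complete replacement tcindex_data, publish it with rcu_assign_pointer(), and reclaim the old copy with call_rcu() once readers have drained. A compressed sketch of that lifecycle, assuming kernel context with RTNL held (the cfg struct and its fields are illustrative, not the classifier's real layout):

#include <linux/rcupdate.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
#include <linux/types.h>

struct cfg {
        u32 mask;
        u32 shift;
        struct rcu_head rcu;
};

static void cfg_free_rcu(struct rcu_head *head)
{
        kfree(container_of(head, struct cfg, rcu));
}

/* Updater, RTNL held; root points at a struct cfg __rcu * slot. */
static int cfg_update(struct cfg __rcu **root, u32 mask, u32 shift)
{
        struct cfg *old = rtnl_dereference(*root);
        struct cfg *new = kzalloc(sizeof(*new), GFP_KERNEL);

        if (!new)
                return -ENOMEM;
        /* Fill every field before publication so readers never see a
         * half-updated configuration.
         */
        new->mask = mask;
        new->shift = shift;
        rcu_assign_pointer(*root, new);
        if (old)
                call_rcu(&old->rcu, cfg_free_rcu);
        return 0;
}
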
diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index 70c0be8..4be3ebf 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -36,6 +36,7 @@
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
+#include <linux/percpu.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>
#include <linux/bitmap.h>
@@ -44,40 +45,49 @@
#include <net/pkt_cls.h>
struct tc_u_knode {
- struct tc_u_knode *next;
+ struct tc_u_knode __rcu *next;
u32 handle;
- struct tc_u_hnode *ht_up;
+ struct tc_u_hnode __rcu *ht_up;
struct tcf_exts exts;
#ifdef CONFIG_NET_CLS_IND
int ifindex;
#endif
u8 fshift;
struct tcf_result res;
- struct tc_u_hnode *ht_down;
+ struct tc_u_hnode __rcu *ht_down;
#ifdef CONFIG_CLS_U32_PERF
- struct tc_u32_pcnt *pf;
+ struct tc_u32_pcnt __percpu *pf;
#endif
#ifdef CONFIG_CLS_U32_MARK
- struct tc_u32_mark mark;
+ u32 val;
+ u32 mask;
+ u32 __percpu *pcpu_success;
#endif
+ struct tcf_proto *tp;
+ struct rcu_head rcu;
+ /* The 'sel' field MUST be the last field in the structure to allow
+ * for the tc_u32_keys allocated at the end of the structure.
+ */
struct tc_u32_sel sel;
};
struct tc_u_hnode {
- struct tc_u_hnode *next;
+ struct tc_u_hnode __rcu *next;
u32 handle;
u32 prio;
struct tc_u_common *tp_c;
int refcnt;
unsigned int divisor;
- struct tc_u_knode *ht[1];
+ struct tc_u_knode __rcu *ht[1];
+ struct rcu_head rcu;
};
struct tc_u_common {
- struct tc_u_hnode *hlist;
+ struct tc_u_hnode __rcu *hlist;
struct Qdisc *q;
int refcnt;
u32 hgenerator;
+ struct rcu_head rcu;
};
static inline unsigned int u32_hash_fold(__be32 key,
@@ -96,7 +106,7 @@
unsigned int off;
} stack[TC_U32_MAXDEPTH];
- struct tc_u_hnode *ht = tp->root;
+ struct tc_u_hnode *ht = rcu_dereference_bh(tp->root);
unsigned int off = skb_network_offset(skb);
struct tc_u_knode *n;
int sdepth = 0;
@@ -108,23 +118,23 @@
int i, r;
next_ht:
- n = ht->ht[sel];
+ n = rcu_dereference_bh(ht->ht[sel]);
next_knode:
if (n) {
struct tc_u32_key *key = n->sel.keys;
#ifdef CONFIG_CLS_U32_PERF
- n->pf->rcnt += 1;
+ __this_cpu_inc(n->pf->rcnt);
j = 0;
#endif
#ifdef CONFIG_CLS_U32_MARK
- if ((skb->mark & n->mark.mask) != n->mark.val) {
- n = n->next;
+ if ((skb->mark & n->mask) != n->val) {
+ n = rcu_dereference_bh(n->next);
goto next_knode;
} else {
- n->mark.success++;
+ __this_cpu_inc(*n->pcpu_success);
}
#endif
@@ -139,37 +149,39 @@
if (!data)
goto out;
if ((*data ^ key->val) & key->mask) {
- n = n->next;
+ n = rcu_dereference_bh(n->next);
goto next_knode;
}
#ifdef CONFIG_CLS_U32_PERF
- n->pf->kcnts[j] += 1;
+ __this_cpu_inc(n->pf->kcnts[j]);
j++;
#endif
}
- if (n->ht_down == NULL) {
+
+ ht = rcu_dereference_bh(n->ht_down);
+ if (!ht) {
check_terminal:
if (n->sel.flags & TC_U32_TERMINAL) {
*res = n->res;
#ifdef CONFIG_NET_CLS_IND
if (!tcf_match_indev(skb, n->ifindex)) {
- n = n->next;
+ n = rcu_dereference_bh(n->next);
goto next_knode;
}
#endif
#ifdef CONFIG_CLS_U32_PERF
- n->pf->rhit += 1;
+ __this_cpu_inc(n->pf->rhit);
#endif
r = tcf_exts_exec(skb, &n->exts, res);
if (r < 0) {
- n = n->next;
+ n = rcu_dereference_bh(n->next);
goto next_knode;
}
return r;
}
- n = n->next;
+ n = rcu_dereference_bh(n->next);
goto next_knode;
}
@@ -180,7 +192,7 @@
stack[sdepth].off = off;
sdepth++;
- ht = n->ht_down;
+ ht = rcu_dereference_bh(n->ht_down);
sel = 0;
if (ht->divisor) {
__be32 *data, hdata;
@@ -222,7 +234,7 @@
/* POP */
if (sdepth--) {
n = stack[sdepth].knode;
- ht = n->ht_up;
+ ht = rcu_dereference_bh(n->ht_up);
off = stack[sdepth].off;
goto check_terminal;
}
@@ -239,7 +251,9 @@
{
struct tc_u_hnode *ht;
- for (ht = tp_c->hlist; ht; ht = ht->next)
+ for (ht = rtnl_dereference(tp_c->hlist);
+ ht;
+ ht = rtnl_dereference(ht->next))
if (ht->handle == handle)
break;
@@ -256,7 +270,9 @@
if (sel > ht->divisor)
goto out;
- for (n = ht->ht[sel]; n; n = n->next)
+ for (n = rtnl_dereference(ht->ht[sel]);
+ n;
+ n = rtnl_dereference(n->next))
if (n->handle == handle)
break;
out:
@@ -270,7 +286,7 @@
struct tc_u_common *tp_c = tp->data;
if (TC_U32_HTID(handle) == TC_U32_ROOT)
- ht = tp->root;
+ ht = rtnl_dereference(tp->root);
else
ht = u32_lookup_ht(tp_c, TC_U32_HTID(handle));
@@ -291,6 +307,9 @@
{
int i = 0x800;
+ /* hgenerator is only used under the rtnl lock, so it is safe to
+ * increment it without read-copy-update semantics.
+ */
do {
if (++tp_c->hgenerator == 0x7FF)
tp_c->hgenerator = 1;
@@ -326,41 +345,78 @@
}
tp_c->refcnt++;
- root_ht->next = tp_c->hlist;
- tp_c->hlist = root_ht;
+ RCU_INIT_POINTER(root_ht->next, tp_c->hlist);
+ rcu_assign_pointer(tp_c->hlist, root_ht);
root_ht->tp_c = tp_c;
- tp->root = root_ht;
+ rcu_assign_pointer(tp->root, root_ht);
tp->data = tp_c;
return 0;
}
-static int u32_destroy_key(struct tcf_proto *tp, struct tc_u_knode *n)
+static int u32_destroy_key(struct tcf_proto *tp,
+ struct tc_u_knode *n,
+ bool free_pf)
{
tcf_unbind_filter(tp, &n->res);
- tcf_exts_destroy(tp, &n->exts);
+ tcf_exts_destroy(&n->exts);
if (n->ht_down)
n->ht_down->refcnt--;
#ifdef CONFIG_CLS_U32_PERF
- kfree(n->pf);
+ if (free_pf)
+ free_percpu(n->pf);
+#endif
+#ifdef CONFIG_CLS_U32_MARK
+ if (free_pf)
+ free_percpu(n->pcpu_success);
#endif
kfree(n);
return 0;
}
+/* u32_delete_key_rcu should be called when freeing a copied
+ * version of a tc_u_knode obtained from u32_init_knode(). When
+ * copies are obtained from u32_init_knode(), the statistics are
+ * shared between the old and new copies to allow readers to
+ * continue to update the statistics during the copy. To support
+ * this, the u32_delete_key_rcu variant does not free the percpu
+ * statistics.
+ */
+static void u32_delete_key_rcu(struct rcu_head *rcu)
+{
+ struct tc_u_knode *key = container_of(rcu, struct tc_u_knode, rcu);
+
+ u32_destroy_key(key->tp, key, false);
+}
+
+/* u32_delete_key_freepf_rcu is the rcu callback variant
+ * that frees the entire structure including the statistics
+ * percpu variables. Only use this if the key is not a copy
+ * returned by u32_init_knode(). See u32_delete_key_rcu()
+ * for the variant that should be used with keys returned from
+ * u32_init_knode().
+ */
+static void u32_delete_key_freepf_rcu(struct rcu_head *rcu)
+{
+ struct tc_u_knode *key = container_of(rcu, struct tc_u_knode, rcu);
+
+ u32_destroy_key(key->tp, key, true);
+}
+
static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
{
- struct tc_u_knode **kp;
- struct tc_u_hnode *ht = key->ht_up;
+ struct tc_u_knode __rcu **kp;
+ struct tc_u_knode *pkp;
+ struct tc_u_hnode *ht = rtnl_dereference(key->ht_up);
if (ht) {
- for (kp = &ht->ht[TC_U32_HASH(key->handle)]; *kp; kp = &(*kp)->next) {
- if (*kp == key) {
- tcf_tree_lock(tp);
- *kp = key->next;
- tcf_tree_unlock(tp);
+ kp = &ht->ht[TC_U32_HASH(key->handle)];
+ for (pkp = rtnl_dereference(*kp); pkp;
+ kp = &pkp->next, pkp = rtnl_dereference(*kp)) {
+ if (pkp == key) {
+ RCU_INIT_POINTER(*kp, key->next);
- u32_destroy_key(tp, key);
+ call_rcu(&key->rcu, u32_delete_key_freepf_rcu);
return 0;
}
}
@@ -369,16 +425,16 @@
return 0;
}
-static void u32_clear_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht)
+static void u32_clear_hnode(struct tc_u_hnode *ht)
{
struct tc_u_knode *n;
unsigned int h;
for (h = 0; h <= ht->divisor; h++) {
- while ((n = ht->ht[h]) != NULL) {
- ht->ht[h] = n->next;
-
- u32_destroy_key(tp, n);
+ while ((n = rtnl_dereference(ht->ht[h])) != NULL) {
+ RCU_INIT_POINTER(ht->ht[h],
+ rtnl_dereference(n->next));
+ call_rcu(&n->rcu, u32_delete_key_freepf_rcu);
}
}
}
@@ -386,28 +442,31 @@
static int u32_destroy_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht)
{
struct tc_u_common *tp_c = tp->data;
- struct tc_u_hnode **hn;
+ struct tc_u_hnode __rcu **hn;
+ struct tc_u_hnode *phn;
WARN_ON(ht->refcnt);
- u32_clear_hnode(tp, ht);
+ u32_clear_hnode(ht);
- for (hn = &tp_c->hlist; *hn; hn = &(*hn)->next) {
- if (*hn == ht) {
- *hn = ht->next;
- kfree(ht);
+ hn = &tp_c->hlist;
+ for (phn = rtnl_dereference(*hn);
+ phn;
+ hn = &phn->next, phn = rtnl_dereference(*hn)) {
+ if (phn == ht) {
+ RCU_INIT_POINTER(*hn, ht->next);
+ kfree_rcu(ht, rcu);
return 0;
}
}
- WARN_ON(1);
return -ENOENT;
}
static void u32_destroy(struct tcf_proto *tp)
{
struct tc_u_common *tp_c = tp->data;
- struct tc_u_hnode *root_ht = tp->root;
+ struct tc_u_hnode *root_ht = rtnl_dereference(tp->root);
WARN_ON(root_ht == NULL);
@@ -419,17 +478,16 @@
tp->q->u32_node = NULL;
- for (ht = tp_c->hlist; ht; ht = ht->next) {
+ for (ht = rtnl_dereference(tp_c->hlist);
+ ht;
+ ht = rtnl_dereference(ht->next)) {
ht->refcnt--;
- u32_clear_hnode(tp, ht);
+ u32_clear_hnode(ht);
}
- while ((ht = tp_c->hlist) != NULL) {
- tp_c->hlist = ht->next;
-
- WARN_ON(ht->refcnt != 0);
-
- kfree(ht);
+ while ((ht = rtnl_dereference(tp_c->hlist)) != NULL) {
+ RCU_INIT_POINTER(tp_c->hlist, ht->next);
+ kfree_rcu(ht, rcu);
}
kfree(tp_c);
@@ -441,6 +499,7 @@
static int u32_delete(struct tcf_proto *tp, unsigned long arg)
{
struct tc_u_hnode *ht = (struct tc_u_hnode *)arg;
+ struct tc_u_hnode *root_ht = rtnl_dereference(tp->root);
if (ht == NULL)
return 0;
@@ -448,7 +507,7 @@
if (TC_U32_KEY(ht->handle))
return u32_delete_key(tp, (struct tc_u_knode *)ht);
- if (tp->root == ht)
+ if (root_ht == ht)
return -EINVAL;
if (ht->refcnt == 1) {
@@ -471,7 +530,9 @@
if (!bitmap)
return handle | 0xFFF;
- for (n = ht->ht[TC_U32_HASH(handle)]; n; n = n->next)
+ for (n = rtnl_dereference(ht->ht[TC_U32_HASH(handle)]);
+ n;
+ n = rtnl_dereference(n->next))
set_bit(TC_U32_NODE(n->handle), bitmap);
i = find_next_zero_bit(bitmap, NR_U32_NODE, 0x800);
@@ -521,10 +582,8 @@
ht_down->refcnt++;
}
- tcf_tree_lock(tp);
- ht_old = n->ht_down;
- n->ht_down = ht_down;
- tcf_tree_unlock(tp);
+ ht_old = rtnl_dereference(n->ht_down);
+ rcu_assign_pointer(n->ht_down, ht_down);
if (ht_old)
ht_old->refcnt--;
@@ -547,10 +606,86 @@
return 0;
errout:
- tcf_exts_destroy(tp, &e);
+ tcf_exts_destroy(&e);
return err;
}
+static void u32_replace_knode(struct tcf_proto *tp,
+ struct tc_u_common *tp_c,
+ struct tc_u_knode *n)
+{
+ struct tc_u_knode __rcu **ins;
+ struct tc_u_knode *pins;
+ struct tc_u_hnode *ht;
+
+ if (TC_U32_HTID(n->handle) == TC_U32_ROOT)
+ ht = rtnl_dereference(tp->root);
+ else
+ ht = u32_lookup_ht(tp_c, TC_U32_HTID(n->handle));
+
+ ins = &ht->ht[TC_U32_HASH(n->handle)];
+
+ /* The node must always exist for it to be replaced; if this is not
+ * the case then something went very wrong elsewhere.
+ */
+ for (pins = rtnl_dereference(*ins); ;
+ ins = &pins->next, pins = rtnl_dereference(*ins))
+ if (pins->handle == n->handle)
+ break;
+
+ RCU_INIT_POINTER(n->next, pins->next);
+ rcu_assign_pointer(*ins, n);
+}
+
+static struct tc_u_knode *u32_init_knode(struct tcf_proto *tp,
+ struct tc_u_knode *n)
+{
+ struct tc_u_knode *new;
+ struct tc_u32_sel *s = &n->sel;
+
+ new = kzalloc(sizeof(*n) + s->nkeys*sizeof(struct tc_u32_key),
+ GFP_KERNEL);
+
+ if (!new)
+ return NULL;
+
+ RCU_INIT_POINTER(new->next, n->next);
+ new->handle = n->handle;
+ RCU_INIT_POINTER(new->ht_up, n->ht_up);
+
+#ifdef CONFIG_NET_CLS_IND
+ new->ifindex = n->ifindex;
+#endif
+ new->fshift = n->fshift;
+ new->res = n->res;
+ RCU_INIT_POINTER(new->ht_down, n->ht_down);
+
+ /* bump reference count as long as we hold pointer to structure */
+ if (new->ht_down)
+ new->ht_down->refcnt++;
+
+#ifdef CONFIG_CLS_U32_PERF
+ /* Statistics may be incremented by readers during the update,
+ * so we must keep them intact. When the node is later destroyed,
+ * a special destroy call must be made so as not to free the pf memory.
+ */
+ new->pf = n->pf;
+#endif
+
+#ifdef CONFIG_CLS_U32_MARK
+ new->val = n->val;
+ new->mask = n->mask;
+ /* Similarly, the success statistics must be moved as pointers */
+ new->pcpu_success = n->pcpu_success;
+#endif
+ new->tp = tp;
+ memcpy(&new->sel, s, sizeof(*s) + s->nkeys*sizeof(struct tc_u32_key));
+
+ tcf_exts_init(&new->exts, TCA_U32_ACT, TCA_U32_POLICE);
+
+ return new;
+}
+
static int u32_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base, u32 handle,
struct nlattr **tca,
@@ -564,6 +699,9 @@
struct nlattr *tb[TCA_U32_MAX + 1];
u32 htid;
int err;
+#ifdef CONFIG_CLS_U32_PERF
+ size_t size;
+#endif
if (opt == NULL)
return handle ? -EINVAL : 0;
@@ -574,11 +712,27 @@
n = (struct tc_u_knode *)*arg;
if (n) {
+ struct tc_u_knode *new;
+
if (TC_U32_KEY(n->handle) == 0)
return -EINVAL;
- return u32_set_parms(net, tp, base, n->ht_up, n, tb,
- tca[TCA_RATE], ovr);
+ new = u32_init_knode(tp, n);
+ if (!new)
+ return -ENOMEM;
+
+ err = u32_set_parms(net, tp, base,
+ rtnl_dereference(n->ht_up), new, tb,
+ tca[TCA_RATE], ovr);
+
+ if (err) {
+ u32_destroy_key(tp, new, false);
+ return err;
+ }
+
+ u32_replace_knode(tp, tp_c, new);
+ call_rcu(&n->rcu, u32_delete_key_rcu);
+ return 0;
}
if (tb[TCA_U32_DIVISOR]) {
@@ -601,8 +755,8 @@
ht->divisor = divisor;
ht->handle = handle;
ht->prio = tp->prio;
- ht->next = tp_c->hlist;
- tp_c->hlist = ht;
+ RCU_INIT_POINTER(ht->next, tp_c->hlist);
+ rcu_assign_pointer(tp_c->hlist, ht);
*arg = (unsigned long)ht;
return 0;
}
@@ -610,7 +764,7 @@
if (tb[TCA_U32_HASH]) {
htid = nla_get_u32(tb[TCA_U32_HASH]);
if (TC_U32_HTID(htid) == TC_U32_ROOT) {
- ht = tp->root;
+ ht = rtnl_dereference(tp->root);
htid = ht->handle;
} else {
ht = u32_lookup_ht(tp->data, TC_U32_HTID(htid));
@@ -618,7 +772,7 @@
return -EINVAL;
}
} else {
- ht = tp->root;
+ ht = rtnl_dereference(tp->root);
htid = ht->handle;
}
@@ -642,46 +796,62 @@
return -ENOBUFS;
#ifdef CONFIG_CLS_U32_PERF
- n->pf = kzalloc(sizeof(struct tc_u32_pcnt) + s->nkeys*sizeof(u64), GFP_KERNEL);
- if (n->pf == NULL) {
+ size = sizeof(struct tc_u32_pcnt) + s->nkeys * sizeof(u64);
+ n->pf = __alloc_percpu(size, __alignof__(struct tc_u32_pcnt));
+ if (!n->pf) {
kfree(n);
return -ENOBUFS;
}
#endif
memcpy(&n->sel, s, sizeof(*s) + s->nkeys*sizeof(struct tc_u32_key));
- n->ht_up = ht;
+ RCU_INIT_POINTER(n->ht_up, ht);
n->handle = handle;
n->fshift = s->hmask ? ffs(ntohl(s->hmask)) - 1 : 0;
tcf_exts_init(&n->exts, TCA_U32_ACT, TCA_U32_POLICE);
+ n->tp = tp;
#ifdef CONFIG_CLS_U32_MARK
+ n->pcpu_success = alloc_percpu(u32);
+ if (!n->pcpu_success) {
+ err = -ENOMEM;
+ goto errout;
+ }
+
if (tb[TCA_U32_MARK]) {
struct tc_u32_mark *mark;
mark = nla_data(tb[TCA_U32_MARK]);
- memcpy(&n->mark, mark, sizeof(struct tc_u32_mark));
- n->mark.success = 0;
+ n->val = mark->val;
+ n->mask = mark->mask;
}
#endif
err = u32_set_parms(net, tp, base, ht, n, tb, tca[TCA_RATE], ovr);
if (err == 0) {
- struct tc_u_knode **ins;
- for (ins = &ht->ht[TC_U32_HASH(handle)]; *ins; ins = &(*ins)->next)
- if (TC_U32_NODE(handle) < TC_U32_NODE((*ins)->handle))
+ struct tc_u_knode __rcu **ins;
+ struct tc_u_knode *pins;
+
+ ins = &ht->ht[TC_U32_HASH(handle)];
+ for (pins = rtnl_dereference(*ins); pins;
+ ins = &pins->next, pins = rtnl_dereference(*ins))
+ if (TC_U32_NODE(handle) < TC_U32_NODE(pins->handle))
break;
- n->next = *ins;
- tcf_tree_lock(tp);
- *ins = n;
- tcf_tree_unlock(tp);
+ RCU_INIT_POINTER(n->next, pins);
+ rcu_assign_pointer(*ins, n);
*arg = (unsigned long)n;
return 0;
}
+
+#ifdef CONFIG_CLS_U32_MARK
+ free_percpu(n->pcpu_success);
+errout:
+#endif
+
#ifdef CONFIG_CLS_U32_PERF
- kfree(n->pf);
+ free_percpu(n->pf);
#endif
kfree(n);
return err;
@@ -697,7 +867,9 @@
if (arg->stop)
return;
- for (ht = tp_c->hlist; ht; ht = ht->next) {
+ for (ht = rtnl_dereference(tp_c->hlist);
+ ht;
+ ht = rtnl_dereference(ht->next)) {
if (ht->prio != tp->prio)
continue;
if (arg->count >= arg->skip) {
@@ -708,7 +880,9 @@
}
arg->count++;
for (h = 0; h <= ht->divisor; h++) {
- for (n = ht->ht[h]; n; n = n->next) {
+ for (n = rtnl_dereference(ht->ht[h]);
+ n;
+ n = rtnl_dereference(n->next)) {
if (arg->count < arg->skip) {
arg->count++;
continue;
@@ -727,6 +901,7 @@
struct sk_buff *skb, struct tcmsg *t)
{
struct tc_u_knode *n = (struct tc_u_knode *)fh;
+ struct tc_u_hnode *ht_up, *ht_down;
struct nlattr *nest;
if (n == NULL)
@@ -745,11 +920,18 @@
if (nla_put_u32(skb, TCA_U32_DIVISOR, divisor))
goto nla_put_failure;
} else {
+#ifdef CONFIG_CLS_U32_PERF
+ struct tc_u32_pcnt *gpf;
+ int cpu;
+#endif
+
if (nla_put(skb, TCA_U32_SEL,
sizeof(n->sel) + n->sel.nkeys*sizeof(struct tc_u32_key),
&n->sel))
goto nla_put_failure;
- if (n->ht_up) {
+
+ ht_up = rtnl_dereference(n->ht_up);
+ if (ht_up) {
u32 htid = n->handle & 0xFFFFF000;
if (nla_put_u32(skb, TCA_U32_HASH, htid))
goto nla_put_failure;
@@ -757,14 +939,28 @@
if (n->res.classid &&
nla_put_u32(skb, TCA_U32_CLASSID, n->res.classid))
goto nla_put_failure;
- if (n->ht_down &&
- nla_put_u32(skb, TCA_U32_LINK, n->ht_down->handle))
+
+ ht_down = rtnl_dereference(n->ht_down);
+ if (ht_down &&
+ nla_put_u32(skb, TCA_U32_LINK, ht_down->handle))
goto nla_put_failure;
#ifdef CONFIG_CLS_U32_MARK
- if ((n->mark.val || n->mark.mask) &&
- nla_put(skb, TCA_U32_MARK, sizeof(n->mark), &n->mark))
- goto nla_put_failure;
+ if ((n->val || n->mask)) {
+ struct tc_u32_mark mark = {.val = n->val,
+ .mask = n->mask,
+ .success = 0};
+ int cpum;
+
+ for_each_possible_cpu(cpum) {
+ __u32 cnt = *per_cpu_ptr(n->pcpu_success, cpum);
+
+ mark.success += cnt;
+ }
+
+ if (nla_put(skb, TCA_U32_MARK, sizeof(mark), &mark))
+ goto nla_put_failure;
+ }
#endif
if (tcf_exts_dump(skb, &n->exts) < 0)
@@ -779,10 +975,29 @@
}
#endif
#ifdef CONFIG_CLS_U32_PERF
+ gpf = kzalloc(sizeof(struct tc_u32_pcnt) +
+ n->sel.nkeys * sizeof(u64),
+ GFP_KERNEL);
+ if (!gpf)
+ goto nla_put_failure;
+
+ for_each_possible_cpu(cpu) {
+ int i;
+ struct tc_u32_pcnt *pf = per_cpu_ptr(n->pf, cpu);
+
+ gpf->rcnt += pf->rcnt;
+ gpf->rhit += pf->rhit;
+ for (i = 0; i < n->sel.nkeys; i++)
+ gpf->kcnts[i] += pf->kcnts[i];
+ }
+
if (nla_put(skb, TCA_U32_PCNT,
sizeof(struct tc_u32_pcnt) + n->sel.nkeys*sizeof(u64),
- n->pf))
+ gpf)) {
+ kfree(gpf);
goto nla_put_failure;
+ }
+ kfree(gpf);
#endif
}
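
cls_u32 now keeps its counters in percpu storage so the classify fast path can use __this_cpu_inc() without atomics or shared cachelines; the dump path then folds the per-CPU values into one total, as the gpf loop above does. A minimal sketch of that split, assuming kernel context (the hits counter and helper names are illustrative):

#include <linux/percpu.h>
#include <linux/types.h>

static u64 __percpu *hits;      /* from alloc_percpu(u64) at init time */

/* Fast path: lock-free increment of this CPU's slot. */
static void hit_record(void)
{
        __this_cpu_inc(*hits);
}

/* Dump path: fold all per-CPU slots into a single total. */
static u64 hit_total(void)
{
        u64 sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                sum += *per_cpu_ptr(hits, cpu);
        return sum;
}
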
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index 58bed75..15e7bee 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -586,7 +586,7 @@
void qdisc_watchdog_init(struct qdisc_watchdog *wd, struct Qdisc *qdisc)
{
- hrtimer_init(&wd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+ hrtimer_init(&wd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
wd->timer.function = qdisc_watchdog;
wd->qdisc = qdisc;
}
@@ -602,7 +602,7 @@
hrtimer_start(&wd->timer,
ns_to_ktime(expires),
- HRTIMER_MODE_ABS);
+ HRTIMER_MODE_ABS_PINNED);
}
EXPORT_SYMBOL(qdisc_watchdog_schedule_ns);
@@ -1781,7 +1781,7 @@
__be16 protocol = skb->protocol;
int err;
- for (; tp; tp = tp->next) {
+ for (; tp; tp = rcu_dereference_bh(tp->next)) {
if (tp->protocol != protocol &&
tp->protocol != htons(ETH_P_ALL))
continue;
@@ -1833,15 +1833,15 @@
{
tp->ops->destroy(tp);
module_put(tp->ops->owner);
- kfree(tp);
+ kfree_rcu(tp, rcu);
}
-void tcf_destroy_chain(struct tcf_proto **fl)
+void tcf_destroy_chain(struct tcf_proto __rcu **fl)
{
struct tcf_proto *tp;
- while ((tp = *fl) != NULL) {
- *fl = tp->next;
+ while ((tp = rtnl_dereference(*fl)) != NULL) {
+ RCU_INIT_POINTER(*fl, tp->next);
tcf_destroy(tp);
}
}
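
On the read side, tc_classify() now walks the classifier chain through rcu_dereference_bh(); softirq context already provides the RCU-bh read-side critical section, so no extra locking is needed. A toy lookup under those assumptions (struct node and lookup() are illustrative names, not from the patch):

#include <linux/rcupdate.h>

struct node {
        struct node __rcu *next;
        int key;
};

/* Reader running in softirq context, which already implies the
 * RCU-bh read-side critical section that _bh dereferences require.
 */
static struct node *lookup(struct node __rcu **head, int key)
{
        struct node *n;

        for (n = rcu_dereference_bh(*head); n;
             n = rcu_dereference_bh(n->next))
                if (n->key == key)
                        return n;
        return NULL;
}
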
diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
index 8449b33..c398f9c 100644
--- a/net/sched/sch_atm.c
+++ b/net/sched/sch_atm.c
@@ -41,7 +41,7 @@
struct atm_flow_data {
struct Qdisc *q; /* FIFO, TBF, etc. */
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
struct atm_vcc *vcc; /* VCC; NULL if VCC is closed */
void (*old_pop)(struct atm_vcc *vcc,
struct sk_buff *skb); /* chaining */
@@ -273,7 +273,7 @@
error = -ENOBUFS;
goto err_out;
}
- flow->filter_list = NULL;
+ RCU_INIT_POINTER(flow->filter_list, NULL);
flow->q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, classid);
if (!flow->q)
flow->q = &noop_qdisc;
@@ -311,7 +311,7 @@
pr_debug("atm_tc_delete(sch %p,[qdisc %p],flow %p)\n", sch, p, flow);
if (list_empty(&flow->list))
return -EINVAL;
- if (flow->filter_list || flow == &p->link)
+ if (rcu_access_pointer(flow->filter_list) || flow == &p->link)
return -EBUSY;
/*
* Reference count must be 2: one for "keepalive" (set at class
@@ -345,7 +345,8 @@
}
}
-static struct tcf_proto **atm_tc_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **atm_tc_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct atm_qdisc_data *p = qdisc_priv(sch);
struct atm_flow_data *flow = (struct atm_flow_data *)cl;
@@ -369,11 +370,12 @@
flow = NULL;
if (TC_H_MAJ(skb->priority) != sch->handle ||
!(flow = (struct atm_flow_data *)atm_tc_get(sch, skb->priority))) {
+ struct tcf_proto *fl;
+
list_for_each_entry(flow, &p->flows, list) {
- if (flow->filter_list) {
- result = tc_classify_compat(skb,
- flow->filter_list,
- &res);
+ fl = rcu_dereference_bh(flow->filter_list);
+ if (fl) {
+ result = tc_classify_compat(skb, fl, &res);
if (result < 0)
continue;
flow = (struct atm_flow_data *)res.class;
@@ -544,7 +546,7 @@
if (!p->link.q)
p->link.q = &noop_qdisc;
pr_debug("atm_tc_init: link (%p) qdisc %p\n", &p->link, p->link.q);
- p->link.filter_list = NULL;
+ RCU_INIT_POINTER(p->link.filter_list, NULL);
p->link.vcc = NULL;
p->link.sock = NULL;
p->link.classid = sch->handle;
diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 762a04b..d2cd981 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -133,7 +133,7 @@
struct gnet_stats_rate_est64 rate_est;
struct tc_cbq_xstats xstats;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
int refcnt;
int filters;
@@ -221,6 +221,7 @@
struct cbq_class **defmap;
struct cbq_class *cl = NULL;
u32 prio = skb->priority;
+ struct tcf_proto *fl;
struct tcf_result res;
/*
@@ -235,11 +236,12 @@
int result = 0;
defmap = head->defaults;
+ fl = rcu_dereference_bh(head->filter_list);
/*
* Step 2+n. Apply classifier.
*/
- if (!head->filter_list ||
- (result = tc_classify_compat(skb, head->filter_list, &res)) < 0)
+ result = tc_classify_compat(skb, fl, &res);
+ if (!fl || result < 0)
goto fallback;
cl = (void *)res.class;
@@ -615,7 +617,7 @@
time = ktime_set(0, 0);
time = ktime_add_ns(time, PSCHED_TICKS2NS(now + delay));
- hrtimer_start(&q->delay_timer, time, HRTIMER_MODE_ABS);
+ hrtimer_start(&q->delay_timer, time, HRTIMER_MODE_ABS_PINNED);
}
qdisc_unthrottled(sch);
@@ -1384,7 +1386,7 @@
q->link.minidle = -0x7FFFFFFF;
qdisc_watchdog_init(&q->watchdog, sch);
- hrtimer_init(&q->delay_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+ hrtimer_init(&q->delay_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
q->delay_timer.function = cbq_undelay;
q->toplevel = TC_CBQ_MAXLEVEL;
q->now = psched_get_time();
@@ -1954,7 +1956,8 @@
return 0;
}
-static struct tcf_proto **cbq_find_tcf(struct Qdisc *sch, unsigned long arg)
+static struct tcf_proto __rcu **cbq_find_tcf(struct Qdisc *sch,
+ unsigned long arg)
{
struct cbq_sched_data *q = qdisc_priv(sch);
struct cbq_class *cl = (struct cbq_class *)arg;
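
sch_api, sch_cbq and sch_htb also switch their watchdog/delay timers to HRTIMER_MODE_ABS_PINNED, keeping expiry on the CPU that armed the timer instead of letting it migrate. A minimal sketch of that arming pattern, assuming kernel context (the watchdog names are illustrative):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer watchdog;

static enum hrtimer_restart watchdog_fn(struct hrtimer *t)
{
        /* kick the queue here, then don't rearm */
        return HRTIMER_NORESTART;
}

static void watchdog_setup(void)
{
        /* PINNED keeps expiry on the arming CPU, avoiding cross-CPU
         * wakeups on the transmit path.
         */
        hrtimer_init(&watchdog, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
        watchdog.function = watchdog_fn;
}

static void watchdog_schedule(u64 expires_ns)
{
        hrtimer_start(&watchdog, ns_to_ktime(expires_ns),
                      HRTIMER_MODE_ABS_PINNED);
}
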
diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
index ed30e43..8abc262 100644
--- a/net/sched/sch_choke.c
+++ b/net/sched/sch_choke.c
@@ -57,7 +57,7 @@
/* Variables */
struct red_vars vars;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
struct {
u32 prob_drop; /* Early probability drops */
u32 prob_mark; /* Early probability marks */
@@ -133,10 +133,16 @@
--sch->q.qlen;
}
+/* The private part of skb->cb[] that a qdisc is allowed to use
+ * is limited to QDISC_CB_PRIV_LEN bytes.
+ * As a flow key might be too large, we store a part of it only.
+ */
+#define CHOKE_K_LEN min_t(u32, sizeof(struct flow_keys), QDISC_CB_PRIV_LEN - 3)
+
struct choke_skb_cb {
u16 classid;
u8 keys_valid;
- struct flow_keys keys;
+ u8 keys[QDISC_CB_PRIV_LEN - 3];
};
static inline struct choke_skb_cb *choke_skb_cb(const struct sk_buff *skb)
@@ -163,22 +169,26 @@
static bool choke_match_flow(struct sk_buff *skb1,
struct sk_buff *skb2)
{
+ struct flow_keys temp;
+
if (skb1->protocol != skb2->protocol)
return false;
if (!choke_skb_cb(skb1)->keys_valid) {
choke_skb_cb(skb1)->keys_valid = 1;
- skb_flow_dissect(skb1, &choke_skb_cb(skb1)->keys);
+ skb_flow_dissect(skb1, &temp);
+ memcpy(&choke_skb_cb(skb1)->keys, &temp, CHOKE_K_LEN);
}
if (!choke_skb_cb(skb2)->keys_valid) {
choke_skb_cb(skb2)->keys_valid = 1;
- skb_flow_dissect(skb2, &choke_skb_cb(skb2)->keys);
+ skb_flow_dissect(skb2, &temp);
+ memcpy(&choke_skb_cb(skb2)->keys, &temp, CHOKE_K_LEN);
}
return !memcmp(&choke_skb_cb(skb1)->keys,
&choke_skb_cb(skb2)->keys,
- sizeof(struct flow_keys));
+ CHOKE_K_LEN);
}
/*
@@ -193,9 +203,11 @@
{
struct choke_sched_data *q = qdisc_priv(sch);
struct tcf_result res;
+ struct tcf_proto *fl;
int result;
- result = tc_classify(skb, q->filter_list, &res);
+ fl = rcu_dereference_bh(q->filter_list);
+ result = tc_classify(skb, fl, &res);
if (result >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
@@ -249,7 +261,7 @@
return false;
oskb = choke_peek_random(q, pidx);
- if (q->filter_list)
+ if (rcu_access_pointer(q->filter_list))
return choke_get_classid(nskb) == choke_get_classid(oskb);
return choke_match_flow(oskb, nskb);
@@ -257,11 +269,11 @@
static int choke_enqueue(struct sk_buff *skb, struct Qdisc *sch)
{
+ int ret = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
struct choke_sched_data *q = qdisc_priv(sch);
const struct red_parms *p = &q->parms;
- int ret = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- if (q->filter_list) {
+ if (rcu_access_pointer(q->filter_list)) {
/* If using external classifiers, get result and record it. */
if (!choke_classify(skb, sch, &ret))
goto other_drop; /* Packet was eaten by filter */
@@ -554,7 +566,8 @@
return 0;
}
-static struct tcf_proto **choke_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **choke_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct choke_sched_data *q = qdisc_priv(sch);
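
sch_choke can no longer fit a whole struct flow_keys in skb->cb[] alongside the classid and flags, so it stores only the first CHOKE_K_LEN bytes of the key and compares that prefix. A sketch of the truncation trick, with illustrative sizes and names rather than the qdisc's real layout:

#include <linux/string.h>
#include <linux/types.h>

#define KEY_ROOM 17     /* illustrative: cb[] bytes left after classid/flags */

struct big_key { u32 src, dst, ports, proto, vlan, extra; };

struct cb_state {
        u8 keys_valid;
        u8 key[KEY_ROOM];       /* truncated copy of the flow key */
};

static void cb_store(struct cb_state *cb, const struct big_key *k)
{
        memcpy(cb->key, k, KEY_ROOM);   /* keep only the leading bytes */
        cb->keys_valid = 1;
}

static bool cb_same_flow(const struct cb_state *a, const struct cb_state *b)
{
        /* Truncation can yield rare false matches between flows that
         * share a key prefix, but equal keys always compare equal.
         */
        return !memcmp(a->key, b->key, KEY_ROOM);
}
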
diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
index 7bbbfe1..d8b5ccf 100644
--- a/net/sched/sch_drr.c
+++ b/net/sched/sch_drr.c
@@ -35,7 +35,7 @@
struct drr_sched {
struct list_head active;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
struct Qdisc_class_hash clhash;
};
@@ -184,7 +184,8 @@
drr_destroy_class(sch, cl);
}
-static struct tcf_proto **drr_tcf_chain(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **drr_tcf_chain(struct Qdisc *sch,
+ unsigned long cl)
{
struct drr_sched *q = qdisc_priv(sch);
@@ -319,6 +320,7 @@
struct drr_sched *q = qdisc_priv(sch);
struct drr_class *cl;
struct tcf_result res;
+ struct tcf_proto *fl;
int result;
if (TC_H_MAJ(skb->priority ^ sch->handle) == 0) {
@@ -328,7 +330,8 @@
}
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- result = tc_classify(skb, q->filter_list, &res);
+ fl = rcu_dereference_bh(q->filter_list);
+ result = tc_classify(skb, fl, &res);
if (result >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
index 49d6ef3..485e456 100644
--- a/net/sched/sch_dsmark.c
+++ b/net/sched/sch_dsmark.c
@@ -37,7 +37,7 @@
struct dsmark_qdisc_data {
struct Qdisc *q;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
u8 *mask; /* "owns" the array */
u8 *value;
u16 indices;
@@ -186,8 +186,8 @@
}
}
-static inline struct tcf_proto **dsmark_find_tcf(struct Qdisc *sch,
- unsigned long cl)
+static inline struct tcf_proto __rcu **dsmark_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct dsmark_qdisc_data *p = qdisc_priv(sch);
return &p->filter_list;
@@ -229,7 +229,8 @@
skb->tc_index = TC_H_MIN(skb->priority);
else {
struct tcf_result res;
- int result = tc_classify(skb, p->filter_list, &res);
+ struct tcf_proto *fl = rcu_dereference_bh(p->filter_list);
+ int result = tc_classify(skb, fl, &res);
pr_debug("result %d class 0x%04x\n", result, res.classid);
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index cc56c8b..105cf55 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -52,7 +52,7 @@
}; /* please try to keep this structure <= 64 bytes */
struct fq_codel_sched_data {
- struct tcf_proto *filter_list; /* optional external classifier */
+ struct tcf_proto __rcu *filter_list; /* optional external classifier */
struct fq_codel_flow *flows; /* Flows table [flows_cnt] */
u32 *backlogs; /* backlog table [flows_cnt] */
u32 flows_cnt; /* number of flows */
@@ -85,6 +85,7 @@
int *qerr)
{
struct fq_codel_sched_data *q = qdisc_priv(sch);
+ struct tcf_proto *filter;
struct tcf_result res;
int result;
@@ -93,11 +94,12 @@
TC_H_MIN(skb->priority) <= q->flows_cnt)
return TC_H_MIN(skb->priority);
- if (!q->filter_list)
+ filter = rcu_dereference(q->filter_list);
+ if (!filter)
return fq_codel_hash(q, skb) + 1;
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- result = tc_classify(skb, q->filter_list, &res);
+ result = tc_classify(skb, filter, &res);
if (result >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
@@ -496,7 +498,8 @@
{
}
-static struct tcf_proto **fq_codel_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **fq_codel_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct fq_codel_sched_data *q = qdisc_priv(sch);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 19696eb..11b28f6 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -523,7 +523,7 @@
struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
for (prio = 0; prio < PFIFO_FAST_BANDS; prio++)
- skb_queue_head_init(band2list(priv, prio));
+ __skb_queue_head_init(band2list(priv, prio));
/* Can by-pass the queue discipline */
qdisc->flags |= TCQ_F_CAN_BYPASS;
@@ -783,7 +783,7 @@
struct Qdisc *qdisc_default = _qdisc_default;
struct Qdisc *qdisc;
- qdisc = dev_queue->qdisc;
+ qdisc = rtnl_dereference(dev_queue->qdisc);
if (qdisc) {
spin_lock_bh(qdisc_lock(qdisc));
@@ -876,7 +876,7 @@
{
struct Qdisc *qdisc = _qdisc;
- dev_queue->qdisc = qdisc;
+ rcu_assign_pointer(dev_queue->qdisc, qdisc);
dev_queue->qdisc_sleeping = qdisc;
}
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index ec8aeaa..04b0de4 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -116,7 +116,7 @@
struct gnet_stats_queue qstats;
struct gnet_stats_rate_est64 rate_est;
unsigned int level; /* class level in hierarchy */
- struct tcf_proto *filter_list; /* filter list */
+ struct tcf_proto __rcu *filter_list; /* filter list */
unsigned int filter_cnt; /* filter count */
struct hfsc_sched *sched; /* scheduler data */
@@ -1161,7 +1161,7 @@
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
head = &q->root;
- tcf = q->root.filter_list;
+ tcf = rcu_dereference_bh(q->root.filter_list);
while (tcf && (result = tc_classify(skb, tcf, &res)) >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
@@ -1185,7 +1185,7 @@
return cl; /* hit leaf class */
/* apply inner filter chain */
- tcf = cl->filter_list;
+ tcf = rcu_dereference_bh(cl->filter_list);
head = cl;
}
@@ -1285,7 +1285,7 @@
cl->filter_cnt--;
}
-static struct tcf_proto **
+static struct tcf_proto __rcu **
hfsc_tcf_chain(struct Qdisc *sch, unsigned long arg)
{
struct hfsc_sched *q = qdisc_priv(sch);
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index aea942c..063e953 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -103,7 +103,7 @@
u32 prio; /* these two are used only by leaves... */
int quantum; /* but stored for parent-to-leaf return */
- struct tcf_proto *filter_list; /* class attached filters */
+ struct tcf_proto __rcu *filter_list; /* class attached filters */
int filter_cnt;
int refcnt; /* usage count of this class */
@@ -153,7 +153,7 @@
int rate2quantum; /* quant = rate / rate2quantum */
/* filters for qdisc itself */
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
#define HTB_WARN_TOOMANYEVENTS 0x1
unsigned int warned; /* only one warning */
@@ -223,9 +223,9 @@
if (cl->level == 0)
return cl;
/* Start with inner filter chain if a non-leaf class is selected */
- tcf = cl->filter_list;
+ tcf = rcu_dereference_bh(cl->filter_list);
} else {
- tcf = q->filter_list;
+ tcf = rcu_dereference_bh(q->filter_list);
}
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
@@ -251,7 +251,7 @@
return cl; /* we hit leaf; return it */
/* we have got inner class; apply inner filter chain */
- tcf = cl->filter_list;
+ tcf = rcu_dereference_bh(cl->filter_list);
}
/* classification failed; try to use default class */
cl = htb_find(TC_H_MAKE(TC_H_MAJ(sch->handle), q->defcls), sch);
@@ -932,7 +932,7 @@
ktime_t time = ns_to_ktime(next_event);
qdisc_throttled(q->watchdog.qdisc);
hrtimer_start(&q->watchdog.timer, time,
- HRTIMER_MODE_ABS);
+ HRTIMER_MODE_ABS_PINNED);
}
} else {
schedule_work(&q->work);
@@ -1044,7 +1044,7 @@
qdisc_watchdog_init(&q->watchdog, sch);
INIT_WORK(&q->work, htb_work_func);
- skb_queue_head_init(&q->direct_queue);
+ __skb_queue_head_init(&q->direct_queue);
if (tb[TCA_HTB_DIRECT_QLEN])
q->direct_qlen = nla_get_u32(tb[TCA_HTB_DIRECT_QLEN]);
@@ -1519,11 +1519,12 @@
return err;
}
-static struct tcf_proto **htb_find_tcf(struct Qdisc *sch, unsigned long arg)
+static struct tcf_proto __rcu **htb_find_tcf(struct Qdisc *sch,
+ unsigned long arg)
{
struct htb_sched *q = qdisc_priv(sch);
struct htb_class *cl = (struct htb_class *)arg;
- struct tcf_proto **fl = cl ? &cl->filter_list : &q->filter_list;
+ struct tcf_proto __rcu **fl = cl ? &cl->filter_list : &q->filter_list;
return fl;
}
diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c
index 62871c1..b351125 100644
--- a/net/sched/sch_ingress.c
+++ b/net/sched/sch_ingress.c
@@ -17,7 +17,7 @@
struct ingress_qdisc_data {
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
};
/* ------------------------- Class/flow operations ------------------------- */
@@ -46,7 +46,8 @@
{
}
-static struct tcf_proto **ingress_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **ingress_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct ingress_qdisc_data *p = qdisc_priv(sch);
@@ -59,9 +60,10 @@
{
struct ingress_qdisc_data *p = qdisc_priv(sch);
struct tcf_result res;
+ struct tcf_proto *fl = rcu_dereference_bh(p->filter_list);
int result;
- result = tc_classify(skb, p->filter_list, &res);
+ result = tc_classify(skb, fl, &res);
qdisc_bstats_update(sch, skb);
switch (result) {
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 6749e2f..37e7d25 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -231,7 +231,7 @@
memset(&sch->qstats, 0, sizeof(sch->qstats));
for (i = 0; i < dev->num_tx_queues; i++) {
- qdisc = netdev_get_tx_queue(dev, i)->qdisc;
+ qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc);
spin_lock_bh(qdisc_lock(qdisc));
sch->q.qlen += qdisc->q.qlen;
sch->bstats.bytes += qdisc->bstats.bytes;
@@ -340,7 +340,9 @@
spin_unlock_bh(d->lock);
for (i = tc.offset; i < tc.offset + tc.count; i++) {
- qdisc = netdev_get_tx_queue(dev, i)->qdisc;
+ struct netdev_queue *q = netdev_get_tx_queue(dev, i);
+
+ qdisc = rtnl_dereference(q->qdisc);
spin_lock_bh(qdisc_lock(qdisc));
bstats.bytes += qdisc->bstats.bytes;
bstats.packets += qdisc->bstats.packets;
diff --git a/net/sched/sch_multiq.c b/net/sched/sch_multiq.c
index afb050a..c0466c1 100644
--- a/net/sched/sch_multiq.c
+++ b/net/sched/sch_multiq.c
@@ -31,7 +31,7 @@
u16 bands;
u16 max_bands;
u16 curband;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
struct Qdisc **queues;
};
@@ -42,10 +42,11 @@
struct multiq_sched_data *q = qdisc_priv(sch);
u32 band;
struct tcf_result res;
+ struct tcf_proto *fl = rcu_dereference_bh(q->filter_list);
int err;
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- err = tc_classify(skb, q->filter_list, &res);
+ err = tc_classify(skb, fl, &res);
#ifdef CONFIG_NET_CLS_ACT
switch (err) {
case TC_ACT_STOLEN:
@@ -388,7 +389,8 @@
}
}
-static struct tcf_proto **multiq_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **multiq_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct multiq_sched_data *q = qdisc_priv(sch);
diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
index 79359b6..03ef99e 100644
--- a/net/sched/sch_prio.c
+++ b/net/sched/sch_prio.c
@@ -24,7 +24,7 @@
struct prio_sched_data {
int bands;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
u8 prio2band[TC_PRIO_MAX+1];
struct Qdisc *queues[TCQ_PRIO_BANDS];
};
@@ -36,11 +36,13 @@
struct prio_sched_data *q = qdisc_priv(sch);
u32 band = skb->priority;
struct tcf_result res;
+ struct tcf_proto *fl;
int err;
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
if (TC_H_MAJ(skb->priority) != sch->handle) {
- err = tc_classify(skb, q->filter_list, &res);
+ fl = rcu_dereference_bh(q->filter_list);
+ err = tc_classify(skb, fl, &res);
#ifdef CONFIG_NET_CLS_ACT
switch (err) {
case TC_ACT_STOLEN:
@@ -50,7 +52,7 @@
return NULL;
}
#endif
- if (!q->filter_list || err < 0) {
+ if (!fl || err < 0) {
if (TC_H_MAJ(band))
band = 0;
return q->queues[q->prio2band[band & TC_PRIO_MAX]];
@@ -351,7 +353,8 @@
}
}
-static struct tcf_proto **prio_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **prio_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct prio_sched_data *q = qdisc_priv(sch);
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 8056fb4..602ea01 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -181,7 +181,7 @@
};
struct qfq_sched {
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
struct Qdisc_class_hash clhash;
u64 oldV, V; /* Precise virtual times. */
@@ -576,7 +576,8 @@
qfq_destroy_class(sch, cl);
}
-static struct tcf_proto **qfq_tcf_chain(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **qfq_tcf_chain(struct Qdisc *sch,
+ unsigned long cl)
{
struct qfq_sched *q = qdisc_priv(sch);
@@ -704,6 +705,7 @@
struct qfq_sched *q = qdisc_priv(sch);
struct qfq_class *cl;
struct tcf_result res;
+ struct tcf_proto *fl;
int result;
if (TC_H_MAJ(skb->priority ^ sch->handle) == 0) {
@@ -714,7 +716,8 @@
}
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- result = tc_classify(skb, q->filter_list, &res);
+ fl = rcu_dereference_bh(q->filter_list);
+ result = tc_classify(skb, fl, &res);
if (result >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index 9b0f709..1562fb2 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -55,7 +55,7 @@
struct sfb_sched_data {
struct Qdisc *qdisc;
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
unsigned long rehash_interval;
unsigned long warmup_time; /* double buffering warmup time in jiffies */
u32 max;
@@ -253,13 +253,13 @@
return false;
}
-static bool sfb_classify(struct sk_buff *skb, struct sfb_sched_data *q,
+static bool sfb_classify(struct sk_buff *skb, struct tcf_proto *fl,
int *qerr, u32 *salt)
{
struct tcf_result res;
int result;
- result = tc_classify(skb, q->filter_list, &res);
+ result = tc_classify(skb, fl, &res);
if (result >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
@@ -281,6 +281,7 @@
struct sfb_sched_data *q = qdisc_priv(sch);
struct Qdisc *child = q->qdisc;
+ struct tcf_proto *fl;
int i;
u32 p_min = ~0;
u32 minqlen = ~0;
@@ -306,9 +307,10 @@
}
}
- if (q->filter_list) {
+ fl = rcu_dereference_bh(q->filter_list);
+ if (fl) {
/* If using external classifiers, get result and record it. */
- if (!sfb_classify(skb, q, &ret, &salt))
+ if (!sfb_classify(skb, fl, &ret, &salt))
goto other_drop;
keys.src = salt;
keys.dst = 0;
@@ -660,7 +662,8 @@
}
}
-static struct tcf_proto **sfb_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **sfb_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct sfb_sched_data *q = qdisc_priv(sch);
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index 211db90..80c36bd 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -125,7 +125,7 @@
u8 cur_depth; /* depth of longest slot */
u8 flags;
unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */
- struct tcf_proto *filter_list;
+ struct tcf_proto __rcu *filter_list;
sfq_index *ht; /* Hash table ('divisor' slots) */
struct sfq_slot *slots; /* Flows table ('maxflows' entries) */
@@ -187,6 +187,7 @@
{
struct sfq_sched_data *q = qdisc_priv(sch);
struct tcf_result res;
+ struct tcf_proto *fl;
int result;
if (TC_H_MAJ(skb->priority) == sch->handle &&
@@ -194,13 +195,14 @@
TC_H_MIN(skb->priority) <= q->divisor)
return TC_H_MIN(skb->priority);
- if (!q->filter_list) {
+ fl = rcu_dereference_bh(q->filter_list);
+ if (!fl) {
skb_flow_dissect(skb, &sfq_skb_cb(skb)->keys);
return sfq_hash(q, skb) + 1;
}
*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- result = tc_classify(skb, q->filter_list, &res);
+ result = tc_classify(skb, fl, &res);
if (result >= 0) {
#ifdef CONFIG_NET_CLS_ACT
switch (result) {
@@ -836,7 +838,8 @@
{
}
-static struct tcf_proto **sfq_find_tcf(struct Qdisc *sch, unsigned long cl)
+static struct tcf_proto __rcu **sfq_find_tcf(struct Qdisc *sch,
+ unsigned long cl)
{
struct sfq_sched_data *q = qdisc_priv(sch);
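All four qdisc conversions above (prio, qfq, sfb, sfq) follow the same shape: the filter_list pointer gains an __rcu annotation, the tcf_chain accessor returns struct tcf_proto __rcu **, and the classify path takes a single rcu_dereference_bh() snapshot (enqueue already runs with BH disabled) that is reused for both the NULL test and the tc_classify() call. A minimal sketch of the pattern, with a hypothetical qdisc private struct:

struct example_sched_data {
	struct tcf_proto __rcu *filter_list;
};

static unsigned int example_classify(struct sk_buff *skb, struct Qdisc *sch,
				     int *qerr)
{
	struct example_sched_data *q = qdisc_priv(sch);
	struct tcf_result res;
	struct tcf_proto *fl;

	*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
	/* one snapshot: the chain cannot change between the NULL
	 * check and the classification within this BH section
	 */
	fl = rcu_dereference_bh(q->filter_list);
	if (!fl || tc_classify(skb, fl, &res) < 0)
		return 0;	/* fall back to the default class */
	return TC_H_MIN(res.classid);
}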
diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
index aaa8d03..5cd291b 100644
--- a/net/sched/sch_teql.c
+++ b/net/sched/sch_teql.c
@@ -96,11 +96,14 @@
struct teql_sched_data *dat = qdisc_priv(sch);
struct netdev_queue *dat_queue;
struct sk_buff *skb;
+ struct Qdisc *q;
skb = __skb_dequeue(&dat->q);
dat_queue = netdev_get_tx_queue(dat->m->dev, 0);
+ q = rcu_dereference_bh(dat_queue->qdisc);
+
if (skb == NULL) {
- struct net_device *m = qdisc_dev(dat_queue->qdisc);
+ struct net_device *m = qdisc_dev(q);
if (m) {
dat->m->slaves = sch;
netif_wake_queue(m);
@@ -108,7 +111,7 @@
} else {
qdisc_bstats_update(sch, skb);
}
- sch->q.qlen = dat->q.qlen + dat_queue->qdisc->q.qlen;
+ sch->q.qlen = dat->q.qlen + q->q.qlen;
return skb;
}
@@ -157,9 +160,9 @@
txq = netdev_get_tx_queue(master->dev, 0);
master->slaves = NULL;
- root_lock = qdisc_root_sleeping_lock(txq->qdisc);
+ root_lock = qdisc_root_sleeping_lock(rtnl_dereference(txq->qdisc));
spin_lock_bh(root_lock);
- qdisc_reset(txq->qdisc);
+ qdisc_reset(rtnl_dereference(txq->qdisc));
spin_unlock_bh(root_lock);
}
}
@@ -266,7 +269,7 @@
struct dst_entry *dst = skb_dst(skb);
int res;
- if (txq->qdisc == &noop_qdisc)
+ if (rcu_access_pointer(txq->qdisc) == &noop_qdisc)
return -ENODEV;
if (!dev->header_ops || !dst)
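The teql changes show the three accessors that become necessary once txq->qdisc is __rcu-annotated: rtnl_dereference() on the reset path where RTNL excludes writers, rcu_dereference_bh() in the BH-protected dequeue path, and rcu_access_pointer() where only the pointer value is compared and the qdisc is never dereferenced. An illustration of the accessor choice, mirroring the teql call sites:

static bool txq_is_noop(struct netdev_queue *txq)
{
	/* pure pointer comparison, no dereference needed */
	return rcu_access_pointer(txq->qdisc) == &noop_qdisc;
}

static void txq_reset(struct netdev_queue *txq)
{
	ASSERT_RTNL();	/* writers are excluded by RTNL */
	qdisc_reset(rtnl_dereference(txq->qdisc));
}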
diff --git a/net/socket.c b/net/socket.c
index d40f522..ffd9cb4 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -1993,6 +1993,9 @@
if (copy_from_user(kmsg, umsg, sizeof(struct msghdr)))
return -EFAULT;
+ if (kmsg->msg_name == NULL)
+ kmsg->msg_namelen = 0;
+
if (kmsg->msg_namelen < 0)
return -EINVAL;
diff --git a/net/wireless/chan.c b/net/wireless/chan.c
index 992b340..72d81e2 100644
--- a/net/wireless/chan.c
+++ b/net/wireless/chan.c
@@ -4,6 +4,7 @@
* any point in time.
*
* Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*/
#include <linux/export.h>
diff --git a/net/wireless/core.c b/net/wireless/core.c
index c6620aa..f52a4cd 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -2,6 +2,7 @@
* This is the linux wireless configuration interface.
*
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -1005,7 +1006,7 @@
rdev->devlist_generation++;
cfg80211_mlme_purge_registrations(wdev);
#ifdef CONFIG_CFG80211_WEXT
- kfree(wdev->wext.keys);
+ kzfree(wdev->wext.keys);
#endif
}
/*
diff --git a/net/wireless/ibss.c b/net/wireless/ibss.c
index 8f345da..e24fc58 100644
--- a/net/wireless/ibss.c
+++ b/net/wireless/ibss.c
@@ -115,7 +115,7 @@
}
if (WARN_ON(wdev->connect_keys))
- kfree(wdev->connect_keys);
+ kzfree(wdev->connect_keys);
wdev->connect_keys = connkeys;
wdev->ibss_fixed = params->channel_fixed;
@@ -161,7 +161,7 @@
ASSERT_WDEV_LOCK(wdev);
- kfree(wdev->connect_keys);
+ kzfree(wdev->connect_keys);
wdev->connect_keys = NULL;
rdev_set_qos_map(rdev, dev, NULL);
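The kfree() to kzfree() conversions running through core.c, ibss.c, sme.c, util.c and wext-sme.c all protect the same thing: wdev->connect_keys and the WEXT key cache hold cipher key material, and kzfree() zeroes the allocation before returning it to the slab so keys cannot be recovered from freed heap memory. The idiom in isolation, as a hypothetical helper:

#include <linux/slab.h>

/* Hypothetical helper: dispose of cached keys without leaving
 * key bytes behind in freed memory.
 */
static void example_free_connect_keys(struct wireless_dev *wdev)
{
	kzfree(wdev->connect_keys);	/* memset(..., 0, ...) + kfree() */
	wdev->connect_keys = NULL;	/* drop the stale pointer */
}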
diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
index 369fc33..2c52b59 100644
--- a/net/wireless/mlme.c
+++ b/net/wireless/mlme.c
@@ -19,7 +19,7 @@
void cfg80211_rx_assoc_resp(struct net_device *dev, struct cfg80211_bss *bss,
- const u8 *buf, size_t len)
+ const u8 *buf, size_t len, int uapsd_queues)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct wiphy *wiphy = wdev->wiphy;
@@ -43,7 +43,7 @@
return;
}
- nl80211_send_rx_assoc(rdev, dev, buf, len, GFP_KERNEL);
+ nl80211_send_rx_assoc(rdev, dev, buf, len, GFP_KERNEL, uapsd_queues);
/* update current_bss etc., consumes the bss reference */
__cfg80211_connect_result(dev, mgmt->bssid, NULL, 0, ie, len - ieoffs,
status_code,
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index 3011401..cb9f5a4 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -2,6 +2,7 @@
* This is the new netlink-based wireless configuration interface.
*
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*/
#include <linux/if.h>
@@ -225,6 +226,7 @@
[NL80211_ATTR_WIPHY_FRAG_THRESHOLD] = { .type = NLA_U32 },
[NL80211_ATTR_WIPHY_RTS_THRESHOLD] = { .type = NLA_U32 },
[NL80211_ATTR_WIPHY_COVERAGE_CLASS] = { .type = NLA_U8 },
+ [NL80211_ATTR_WIPHY_DYN_ACK] = { .type = NLA_FLAG },
[NL80211_ATTR_IFTYPE] = { .type = NLA_U32 },
[NL80211_ATTR_IFINDEX] = { .type = NLA_U32 },
@@ -388,6 +390,11 @@
[NL80211_ATTR_TDLS_PEER_CAPABILITY] = { .type = NLA_U32 },
[NL80211_ATTR_IFACE_SOCKET_OWNER] = { .type = NLA_FLAG },
[NL80211_ATTR_CSA_C_OFFSETS_TX] = { .type = NLA_BINARY },
+ [NL80211_ATTR_USE_RRM] = { .type = NLA_FLAG },
+ [NL80211_ATTR_TSID] = { .type = NLA_U8 },
+ [NL80211_ATTR_USER_PRIO] = { .type = NLA_U8 },
+ [NL80211_ATTR_ADMITTED_TIME] = { .type = NLA_U16 },
+ [NL80211_ATTR_SMPS_MODE] = { .type = NLA_U8 },
};
/* policy for the key attributes */
@@ -1507,6 +1514,9 @@
if (rdev->wiphy.flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH)
CMD(channel_switch, CHANNEL_SWITCH);
CMD(set_qos_map, SET_QOS_MAP);
+ if (rdev->wiphy.flags &
+ WIPHY_FLAG_SUPPORTS_WMM_ADMISSION)
+ CMD(add_tx_ts, ADD_TX_TS);
}
/* add into the if now */
#undef CMD
@@ -2237,11 +2247,21 @@
}
if (info->attrs[NL80211_ATTR_WIPHY_COVERAGE_CLASS]) {
+ if (info->attrs[NL80211_ATTR_WIPHY_DYN_ACK])
+ return -EINVAL;
+
coverage_class = nla_get_u8(
info->attrs[NL80211_ATTR_WIPHY_COVERAGE_CLASS]);
changed |= WIPHY_PARAM_COVERAGE_CLASS;
}
+ if (info->attrs[NL80211_ATTR_WIPHY_DYN_ACK]) {
+ if (!(rdev->wiphy.features & NL80211_FEATURE_ACKTO_ESTIMATION))
+ return -EOPNOTSUPP;
+
+ changed |= WIPHY_PARAM_DYN_ACK;
+ }
+
if (changed) {
u8 old_retry_short, old_retry_long;
u32 old_frag_threshold, old_rts_threshold;
@@ -3326,6 +3346,29 @@
return PTR_ERR(params.acl);
}
+ if (info->attrs[NL80211_ATTR_SMPS_MODE]) {
+ params.smps_mode =
+ nla_get_u8(info->attrs[NL80211_ATTR_SMPS_MODE]);
+ switch (params.smps_mode) {
+ case NL80211_SMPS_OFF:
+ break;
+ case NL80211_SMPS_STATIC:
+ if (!(rdev->wiphy.features &
+ NL80211_FEATURE_STATIC_SMPS))
+ return -EINVAL;
+ break;
+ case NL80211_SMPS_DYNAMIC:
+ if (!(rdev->wiphy.features &
+ NL80211_FEATURE_DYNAMIC_SMPS))
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+ } else {
+ params.smps_mode = NL80211_SMPS_OFF;
+ }
+
wdev_lock(wdev);
err = rdev_start_ap(rdev, dev, &params);
if (!err) {
@@ -6583,6 +6626,14 @@
sizeof(req.vht_capa));
}
+ if (nla_get_flag(info->attrs[NL80211_ATTR_USE_RRM])) {
+ if (!(rdev->wiphy.features &
+ NL80211_FEATURE_DS_PARAM_SET_IE_IN_PROBES) ||
+ !(rdev->wiphy.features & NL80211_FEATURE_QUIET))
+ return -EINVAL;
+ req.flags |= ASSOC_REQ_USE_RRM;
+ }
+
err = nl80211_crypto_settings(rdev, info, &req.crypto, 1);
if (!err) {
wdev_lock(dev->ieee80211_ptr);
@@ -6845,7 +6896,7 @@
err = cfg80211_join_ibss(rdev, dev, &ibss, connkeys);
if (err)
- kfree(connkeys);
+ kzfree(connkeys);
return err;
}
@@ -6977,6 +7028,9 @@
struct nlattr *data = ((void **)skb->cb)[2];
enum nl80211_multicast_groups mcgrp = NL80211_MCGRP_TESTMODE;
+ /* clear CB data for netlink core to own from now on */
+ memset(skb->cb, 0, sizeof(skb->cb));
+
nla_nest_end(skb, data);
genlmsg_end(skb, hdr);
@@ -7214,7 +7268,7 @@
if (info->attrs[NL80211_ATTR_HT_CAPABILITY]) {
if (!info->attrs[NL80211_ATTR_HT_CAPABILITY_MASK]) {
- kfree(connkeys);
+ kzfree(connkeys);
return -EINVAL;
}
memcpy(&connect.ht_capa,
@@ -7232,7 +7286,7 @@
if (info->attrs[NL80211_ATTR_VHT_CAPABILITY]) {
if (!info->attrs[NL80211_ATTR_VHT_CAPABILITY_MASK]) {
- kfree(connkeys);
+ kzfree(connkeys);
return -EINVAL;
}
memcpy(&connect.vht_capa,
@@ -7240,11 +7294,19 @@
sizeof(connect.vht_capa));
}
+ if (nla_get_flag(info->attrs[NL80211_ATTR_USE_RRM])) {
+ if (!(rdev->wiphy.features &
+ NL80211_FEATURE_DS_PARAM_SET_IE_IN_PROBES) ||
+ !(rdev->wiphy.features & NL80211_FEATURE_QUIET))
+ return -EINVAL;
+ connect.flags |= ASSOC_REQ_USE_RRM;
+ }
+
wdev_lock(dev->ieee80211_ptr);
err = cfg80211_connect(rdev, dev, &connect, connkeys, NULL);
wdev_unlock(dev->ieee80211_ptr);
if (err)
- kfree(connkeys);
+ kzfree(connkeys);
return err;
}
@@ -8930,13 +8992,9 @@
if (nla_len(tb[NL80211_REKEY_DATA_KCK]) != NL80211_KCK_LEN)
return -ERANGE;
- memcpy(rekey_data.kek, nla_data(tb[NL80211_REKEY_DATA_KEK]),
- NL80211_KEK_LEN);
- memcpy(rekey_data.kck, nla_data(tb[NL80211_REKEY_DATA_KCK]),
- NL80211_KCK_LEN);
- memcpy(rekey_data.replay_ctr,
- nla_data(tb[NL80211_REKEY_DATA_REPLAY_CTR]),
- NL80211_REPLAY_CTR_LEN);
+ rekey_data.kek = nla_data(tb[NL80211_REKEY_DATA_KEK]);
+ rekey_data.kck = nla_data(tb[NL80211_REKEY_DATA_KCK]);
+ rekey_data.replay_ctr = nla_data(tb[NL80211_REKEY_DATA_REPLAY_CTR]);
wdev_lock(wdev);
if (!wdev->current_bss) {
@@ -9302,6 +9360,9 @@
void *hdr = ((void **)skb->cb)[1];
struct nlattr *data = ((void **)skb->cb)[2];
+ /* clear CB data for netlink core to own from now on */
+ memset(skb->cb, 0, sizeof(skb->cb));
+
if (WARN_ON(!rdev->cur_cmd_info)) {
kfree_skb(skb);
return -EINVAL;
@@ -9365,6 +9426,93 @@
return ret;
}
+static int nl80211_add_tx_ts(struct sk_buff *skb, struct genl_info *info)
+{
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ const u8 *peer;
+ u8 tsid, up;
+ u16 admitted_time = 0;
+ int err;
+
+ if (!(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_WMM_ADMISSION))
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL80211_ATTR_TSID] || !info->attrs[NL80211_ATTR_MAC] ||
+ !info->attrs[NL80211_ATTR_USER_PRIO])
+ return -EINVAL;
+
+ tsid = nla_get_u8(info->attrs[NL80211_ATTR_TSID]);
+ if (tsid >= IEEE80211_NUM_TIDS)
+ return -EINVAL;
+
+ up = nla_get_u8(info->attrs[NL80211_ATTR_USER_PRIO]);
+ if (up >= IEEE80211_NUM_UPS)
+ return -EINVAL;
+
+ /* WMM uses TIDs 0-7 even for TSPEC */
+ if (tsid < IEEE80211_FIRST_TSPEC_TSID) {
+ if (!(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_WMM_ADMISSION))
+ return -EINVAL;
+ } else {
+ /* TODO: handle 802.11 TSPEC/admission control
+ * need more attributes for that (e.g. BA session requirement)
+ */
+ return -EINVAL;
+ }
+
+ peer = nla_data(info->attrs[NL80211_ATTR_MAC]);
+
+ if (info->attrs[NL80211_ATTR_ADMITTED_TIME]) {
+ admitted_time =
+ nla_get_u16(info->attrs[NL80211_ATTR_ADMITTED_TIME]);
+ if (!admitted_time)
+ return -EINVAL;
+ }
+
+ wdev_lock(wdev);
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_P2P_CLIENT:
+ if (wdev->current_bss)
+ break;
+ err = -ENOTCONN;
+ goto out;
+ default:
+ err = -EOPNOTSUPP;
+ goto out;
+ }
+
+ err = rdev_add_tx_ts(rdev, dev, tsid, peer, up, admitted_time);
+
+ out:
+ wdev_unlock(wdev);
+ return err;
+}
+
+static int nl80211_del_tx_ts(struct sk_buff *skb, struct genl_info *info)
+{
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ const u8 *peer;
+ u8 tsid;
+ int err;
+
+ if (!info->attrs[NL80211_ATTR_TSID] || !info->attrs[NL80211_ATTR_MAC])
+ return -EINVAL;
+
+ tsid = nla_get_u8(info->attrs[NL80211_ATTR_TSID]);
+ peer = nla_data(info->attrs[NL80211_ATTR_MAC]);
+
+ wdev_lock(wdev);
+ err = rdev_del_tx_ts(rdev, dev, tsid, peer);
+ wdev_unlock(wdev);
+
+ return err;
+}
+
#define NL80211_FLAG_NEED_WIPHY 0x01
#define NL80211_FLAG_NEED_NETDEV 0x02
#define NL80211_FLAG_NEED_RTNL 0x04
@@ -9375,6 +9523,7 @@
/* If a netdev is associated, it must be UP, P2P must be started */
#define NL80211_FLAG_NEED_WDEV_UP (NL80211_FLAG_NEED_WDEV |\
NL80211_FLAG_CHECK_NETDEV_UP)
+#define NL80211_FLAG_CLEAR_SKB 0x20
static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,
struct genl_info *info)
@@ -9458,8 +9607,20 @@
dev_put(info->user_ptr[1]);
}
}
+
if (ops->internal_flags & NL80211_FLAG_NEED_RTNL)
rtnl_unlock();
+
+ /* If needed, clear the netlink message payload from the SKB
+ * as it might contain key data that shouldn't stick around on
+ * the heap after the SKB is freed. The netlink message header
+ * is still needed for further processing, so leave it intact.
+ */
+ if (ops->internal_flags & NL80211_FLAG_CLEAR_SKB) {
+ struct nlmsghdr *nlh = nlmsg_hdr(skb);
+
+ memset(nlmsg_data(nlh), 0, nlmsg_len(nlh));
+ }
}
static const struct genl_ops nl80211_ops[] = {
@@ -9527,7 +9688,8 @@
.policy = nl80211_policy,
.flags = GENL_ADMIN_PERM,
.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
- NL80211_FLAG_NEED_RTNL,
+ NL80211_FLAG_NEED_RTNL |
+ NL80211_FLAG_CLEAR_SKB,
},
{
.cmd = NL80211_CMD_NEW_KEY,
@@ -9535,7 +9697,8 @@
.policy = nl80211_policy,
.flags = GENL_ADMIN_PERM,
.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
- NL80211_FLAG_NEED_RTNL,
+ NL80211_FLAG_NEED_RTNL |
+ NL80211_FLAG_CLEAR_SKB,
},
{
.cmd = NL80211_CMD_DEL_KEY,
@@ -9713,7 +9876,8 @@
.policy = nl80211_policy,
.flags = GENL_ADMIN_PERM,
.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
- NL80211_FLAG_NEED_RTNL,
+ NL80211_FLAG_NEED_RTNL |
+ NL80211_FLAG_CLEAR_SKB,
},
{
.cmd = NL80211_CMD_ASSOCIATE,
@@ -9947,7 +10111,8 @@
.policy = nl80211_policy,
.flags = GENL_ADMIN_PERM,
.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
- NL80211_FLAG_NEED_RTNL,
+ NL80211_FLAG_NEED_RTNL |
+ NL80211_FLAG_CLEAR_SKB,
},
{
.cmd = NL80211_CMD_TDLS_MGMT,
@@ -10105,6 +10270,22 @@
.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
NL80211_FLAG_NEED_RTNL,
},
+ {
+ .cmd = NL80211_CMD_ADD_TX_TS,
+ .doit = nl80211_add_tx_ts,
+ .policy = nl80211_policy,
+ .flags = GENL_ADMIN_PERM,
+ .internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+ NL80211_FLAG_NEED_RTNL,
+ },
+ {
+ .cmd = NL80211_CMD_DEL_TX_TS,
+ .doit = nl80211_del_tx_ts,
+ .policy = nl80211_policy,
+ .flags = GENL_ADMIN_PERM,
+ .internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+ NL80211_FLAG_NEED_RTNL,
+ },
};
/* notification functions */
@@ -10373,7 +10554,8 @@
static void nl80211_send_mlme_event(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
const u8 *buf, size_t len,
- enum nl80211_commands cmd, gfp_t gfp)
+ enum nl80211_commands cmd, gfp_t gfp,
+ int uapsd_queues)
{
struct sk_buff *msg;
void *hdr;
@@ -10393,6 +10575,19 @@
nla_put(msg, NL80211_ATTR_FRAME, len, buf))
goto nla_put_failure;
+ if (uapsd_queues >= 0) {
+ struct nlattr *nla_wmm =
+ nla_nest_start(msg, NL80211_ATTR_STA_WME);
+ if (!nla_wmm)
+ goto nla_put_failure;
+
+ if (nla_put_u8(msg, NL80211_STA_WME_UAPSD_QUEUES,
+ uapsd_queues))
+ goto nla_put_failure;
+
+ nla_nest_end(msg, nla_wmm);
+ }
+
genlmsg_end(msg, hdr);
genlmsg_multicast_netns(&nl80211_fam, wiphy_net(&rdev->wiphy), msg, 0,
@@ -10409,15 +10604,15 @@
size_t len, gfp_t gfp)
{
nl80211_send_mlme_event(rdev, netdev, buf, len,
- NL80211_CMD_AUTHENTICATE, gfp);
+ NL80211_CMD_AUTHENTICATE, gfp, -1);
}
void nl80211_send_rx_assoc(struct cfg80211_registered_device *rdev,
struct net_device *netdev, const u8 *buf,
- size_t len, gfp_t gfp)
+ size_t len, gfp_t gfp, int uapsd_queues)
{
nl80211_send_mlme_event(rdev, netdev, buf, len,
- NL80211_CMD_ASSOCIATE, gfp);
+ NL80211_CMD_ASSOCIATE, gfp, uapsd_queues);
}
void nl80211_send_deauth(struct cfg80211_registered_device *rdev,
@@ -10425,7 +10620,7 @@
size_t len, gfp_t gfp)
{
nl80211_send_mlme_event(rdev, netdev, buf, len,
- NL80211_CMD_DEAUTHENTICATE, gfp);
+ NL80211_CMD_DEAUTHENTICATE, gfp, -1);
}
void nl80211_send_disassoc(struct cfg80211_registered_device *rdev,
@@ -10433,7 +10628,7 @@
size_t len, gfp_t gfp)
{
nl80211_send_mlme_event(rdev, netdev, buf, len,
- NL80211_CMD_DISASSOCIATE, gfp);
+ NL80211_CMD_DISASSOCIATE, gfp, -1);
}
void cfg80211_rx_unprot_mlme_mgmt(struct net_device *dev, const u8 *buf,
@@ -10454,7 +10649,7 @@
cmd = NL80211_CMD_UNPROT_DISASSOCIATE;
trace_cfg80211_rx_unprot_mlme_mgmt(dev, buf, len);
- nl80211_send_mlme_event(rdev, dev, buf, len, cmd, GFP_ATOMIC);
+ nl80211_send_mlme_event(rdev, dev, buf, len, cmd, GFP_ATOMIC, -1);
}
EXPORT_SYMBOL(cfg80211_rx_unprot_mlme_mgmt);
diff --git a/net/wireless/nl80211.h b/net/wireless/nl80211.h
index 49c9a48..7ad70d6 100644
--- a/net/wireless/nl80211.h
+++ b/net/wireless/nl80211.h
@@ -23,7 +23,8 @@
const u8 *buf, size_t len, gfp_t gfp);
void nl80211_send_rx_assoc(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
- const u8 *buf, size_t len, gfp_t gfp);
+ const u8 *buf, size_t len, gfp_t gfp,
+ int uapsd_queues);
void nl80211_send_deauth(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
const u8 *buf, size_t len, gfp_t gfp);
diff --git a/net/wireless/rdev-ops.h b/net/wireless/rdev-ops.h
index 56c2240..f6d457d 100644
--- a/net/wireless/rdev-ops.h
+++ b/net/wireless/rdev-ops.h
@@ -915,4 +915,35 @@
return ret;
}
+static inline int
+rdev_add_tx_ts(struct cfg80211_registered_device *rdev,
+ struct net_device *dev, u8 tsid, const u8 *peer,
+ u8 user_prio, u16 admitted_time)
+{
+ int ret = -EOPNOTSUPP;
+
+ trace_rdev_add_tx_ts(&rdev->wiphy, dev, tsid, peer,
+ user_prio, admitted_time);
+ if (rdev->ops->add_tx_ts)
+ ret = rdev->ops->add_tx_ts(&rdev->wiphy, dev, tsid, peer,
+ user_prio, admitted_time);
+ trace_rdev_return_int(&rdev->wiphy, ret);
+
+ return ret;
+}
+
+static inline int
+rdev_del_tx_ts(struct cfg80211_registered_device *rdev,
+ struct net_device *dev, u8 tsid, const u8 *peer)
+{
+ int ret = -EOPNOTSUPP;
+
+ trace_rdev_del_tx_ts(&rdev->wiphy, dev, tsid, peer);
+ if (rdev->ops->del_tx_ts)
+ ret = rdev->ops->del_tx_ts(&rdev->wiphy, dev, tsid, peer);
+ trace_rdev_return_int(&rdev->wiphy, ret);
+
+ return ret;
+}
+
#endif /* __CFG80211_RDEV_OPS */
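For drivers, wiring up the WMM admission-control path means setting WIPHY_FLAG_SUPPORTS_WMM_ADMISSION at wiphy registration and providing the two cfg80211_ops callbacks the wrappers above dispatch to. A hedged stub of what a driver-side implementation could look like (names hypothetical; a real driver would program the TSPEC into firmware here):

static int drv_add_tx_ts(struct wiphy *wiphy, struct net_device *dev,
			 u8 tsid, const u8 *peer, u8 user_prio,
			 u16 admitted_time)
{
	/* accept the traffic stream; admitted_time is in units
	 * the driver agreed on during association
	 */
	return 0;
}

static int drv_del_tx_ts(struct wiphy *wiphy, struct net_device *dev,
			 u8 tsid, const u8 *peer)
{
	return 0;
}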
diff --git a/net/wireless/reg.c b/net/wireless/reg.c
index 1afdf45..b725a31 100644
--- a/net/wireless/reg.c
+++ b/net/wireless/reg.c
@@ -3,6 +3,7 @@
* Copyright 2005-2006, Devicescape Software, Inc.
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2008-2011 Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
@@ -798,6 +799,57 @@
return 0;
}
+/* check whether old rule contains new rule */
+static bool rule_contains(struct ieee80211_reg_rule *r1,
+ struct ieee80211_reg_rule *r2)
+{
+ /* for simplicity, currently consider only same flags */
+ if (r1->flags != r2->flags)
+ return false;
+
+ /* verify r1 is more restrictive */
+ if ((r1->power_rule.max_antenna_gain >
+ r2->power_rule.max_antenna_gain) ||
+ r1->power_rule.max_eirp > r2->power_rule.max_eirp)
+ return false;
+
+ /* make sure r2's range is contained within r1 */
+ if (r1->freq_range.start_freq_khz > r2->freq_range.start_freq_khz ||
+ r1->freq_range.end_freq_khz < r2->freq_range.end_freq_khz)
+ return false;
+
+ /* and finally verify that r1.max_bw >= r2.max_bw */
+ if (r1->freq_range.max_bandwidth_khz <
+ r2->freq_range.max_bandwidth_khz)
+ return false;
+
+ return true;
+}
+
+/* add or extend current rules. do nothing if rule is already contained */
+static void add_rule(struct ieee80211_reg_rule *rule,
+ struct ieee80211_reg_rule *reg_rules, u32 *n_rules)
+{
+ struct ieee80211_reg_rule *tmp_rule;
+ int i;
+
+ for (i = 0; i < *n_rules; i++) {
+ tmp_rule = &reg_rules[i];
+ /* rule is already contained - do nothing */
+ if (rule_contains(tmp_rule, rule))
+ return;
+
+ /* extend rule if possible */
+ if (rule_contains(rule, tmp_rule)) {
+ memcpy(tmp_rule, rule, sizeof(*rule));
+ return;
+ }
+ }
+
+ memcpy(&reg_rules[*n_rules], rule, sizeof(*rule));
+ (*n_rules)++;
+}
+
/**
* regdom_intersect - do the intersection between two regulatory domains
* @rd1: first regulatory domain
@@ -817,12 +869,10 @@
{
int r, size_of_regd;
unsigned int x, y;
- unsigned int num_rules = 0, rule_idx = 0;
+ unsigned int num_rules = 0;
const struct ieee80211_reg_rule *rule1, *rule2;
- struct ieee80211_reg_rule *intersected_rule;
+ struct ieee80211_reg_rule intersected_rule;
struct ieee80211_regdomain *rd;
- /* This is just a dummy holder to help us count */
- struct ieee80211_reg_rule dummy_rule;
if (!rd1 || !rd2)
return NULL;
@@ -840,7 +890,7 @@
for (y = 0; y < rd2->n_reg_rules; y++) {
rule2 = &rd2->reg_rules[y];
if (!reg_rules_intersect(rd1, rd2, rule1, rule2,
- &dummy_rule))
+ &intersected_rule))
num_rules++;
}
}
@@ -855,34 +905,24 @@
if (!rd)
return NULL;
- for (x = 0; x < rd1->n_reg_rules && rule_idx < num_rules; x++) {
+ for (x = 0; x < rd1->n_reg_rules; x++) {
rule1 = &rd1->reg_rules[x];
- for (y = 0; y < rd2->n_reg_rules && rule_idx < num_rules; y++) {
+ for (y = 0; y < rd2->n_reg_rules; y++) {
rule2 = &rd2->reg_rules[y];
- /*
- * This time around instead of using the stack lets
- * write to the target rule directly saving ourselves
- * a memcpy()
- */
- intersected_rule = &rd->reg_rules[rule_idx];
r = reg_rules_intersect(rd1, rd2, rule1, rule2,
- intersected_rule);
+ &intersected_rule);
/*
* No need to memset the intersected rule here as
* we're not using the stack anymore
*/
if (r)
continue;
- rule_idx++;
+
+ add_rule(&intersected_rule, rd->reg_rules,
+ &rd->n_reg_rules);
}
}
- if (rule_idx != num_rules) {
- kfree(rd);
- return NULL;
- }
-
- rd->n_reg_rules = num_rules;
rd->alpha2[0] = '9';
rd->alpha2[1] = '8';
rd->dfs_region = reg_intersect_dfs_region(rd1->dfs_region,
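The rework above replaces the old exact rule counting in regdom_intersect() with a stack-allocated scratch rule plus add_rule(), which keeps the result minimal: a freshly intersected rule is dropped when an existing rule already contains it, overwrites an existing rule that it contains, and is appended otherwise. The containment predicate, restated as a standalone sketch (field names mirror struct ieee80211_reg_rule):

struct example_rule {
	u32 start_khz, end_khz, max_bw_khz;	/* freq_range */
	u32 max_antenna_gain, max_eirp;		/* power_rule */
	u32 flags;
};

/* r1 contains r2: same flags, r1 spans r2's frequency range with
 * at least its bandwidth, and r1's power limits do not exceed r2's.
 */
static bool example_contains(const struct example_rule *r1,
			     const struct example_rule *r2)
{
	return r1->flags == r2->flags &&
	       r1->max_antenna_gain <= r2->max_antenna_gain &&
	       r1->max_eirp <= r2->max_eirp &&
	       r1->start_khz <= r2->start_khz &&
	       r1->end_khz >= r2->end_khz &&
	       r1->max_bw_khz >= r2->max_bw_khz;
}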
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
index 620a4b4..bda39f1 100644
--- a/net/wireless/scan.c
+++ b/net/wireless/scan.c
@@ -2,6 +2,7 @@
* cfg80211 scan result handling
*
* Copyright 2008 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*/
#include <linux/kernel.h>
#include <linux/slab.h>
diff --git a/net/wireless/sme.c b/net/wireless/sme.c
index 8bbeeb3..dc1668f 100644
--- a/net/wireless/sme.c
+++ b/net/wireless/sme.c
@@ -641,7 +641,7 @@
}
if (status != WLAN_STATUS_SUCCESS) {
- kfree(wdev->connect_keys);
+ kzfree(wdev->connect_keys);
wdev->connect_keys = NULL;
wdev->ssid_len = 0;
if (bss) {
@@ -918,7 +918,7 @@
ASSERT_WDEV_LOCK(wdev);
if (WARN_ON(wdev->connect_keys)) {
- kfree(wdev->connect_keys);
+ kzfree(wdev->connect_keys);
wdev->connect_keys = NULL;
}
@@ -978,7 +978,7 @@
ASSERT_WDEV_LOCK(wdev);
- kfree(wdev->connect_keys);
+ kzfree(wdev->connect_keys);
wdev->connect_keys = NULL;
if (wdev->conn)
diff --git a/net/wireless/trace.h b/net/wireless/trace.h
index 0c524cd..625a6e6 100644
--- a/net/wireless/trace.h
+++ b/net/wireless/trace.h
@@ -1896,6 +1896,51 @@
WIPHY_PR_ARG, NETDEV_PR_ARG, CHAN_DEF_PR_ARG)
);
+TRACE_EVENT(rdev_add_tx_ts,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
+ u8 tsid, const u8 *peer, u8 user_prio, u16 admitted_time),
+ TP_ARGS(wiphy, netdev, tsid, peer, user_prio, admitted_time),
+ TP_STRUCT__entry(
+ WIPHY_ENTRY
+ NETDEV_ENTRY
+ MAC_ENTRY(peer)
+ __field(u8, tsid)
+ __field(u8, user_prio)
+ __field(u16, admitted_time)
+ ),
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
+ MAC_ASSIGN(peer, peer);
+ __entry->tsid = tsid;
+ __entry->user_prio = user_prio;
+ __entry->admitted_time = admitted_time;
+ ),
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", " MAC_PR_FMT ", TSID %d, UP %d, time %d",
+ WIPHY_PR_ARG, NETDEV_PR_ARG, MAC_PR_ARG(peer),
+ __entry->tsid, __entry->user_prio, __entry->admitted_time)
+);
+
+TRACE_EVENT(rdev_del_tx_ts,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
+ u8 tsid, const u8 *peer),
+ TP_ARGS(wiphy, netdev, tsid, peer),
+ TP_STRUCT__entry(
+ WIPHY_ENTRY
+ NETDEV_ENTRY
+ MAC_ENTRY(peer)
+ __field(u8, tsid)
+ ),
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
+ MAC_ASSIGN(peer, peer);
+ __entry->tsid = tsid;
+ ),
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", " MAC_PR_FMT ", TSID %d",
+ WIPHY_PR_ARG, NETDEV_PR_ARG, MAC_PR_ARG(peer), __entry->tsid)
+);
+
/*************************************************************
* cfg80211 exported functions traces *
*************************************************************/
diff --git a/net/wireless/util.c b/net/wireless/util.c
index 728f1c0..5e233a5 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -2,6 +2,7 @@
* Wireless utility functions
*
* Copyright 2007-2009 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
*/
#include <linux/export.h>
#include <linux/bitops.h>
@@ -796,7 +797,7 @@
netdev_err(dev, "failed to set mgtdef %d\n", i);
}
- kfree(wdev->connect_keys);
+ kzfree(wdev->connect_keys);
wdev->connect_keys = NULL;
}
diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
index 11120bb..0f47948 100644
--- a/net/wireless/wext-compat.c
+++ b/net/wireless/wext-compat.c
@@ -496,6 +496,8 @@
err = 0;
if (!err) {
if (!addr) {
+ memset(wdev->wext.keys->data[idx], 0,
+ sizeof(wdev->wext.keys->data[idx]));
wdev->wext.keys->params[idx].key_len = 0;
wdev->wext.keys->params[idx].cipher = 0;
}
diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c
index c7e5c8e..368611c 100644
--- a/net/wireless/wext-sme.c
+++ b/net/wireless/wext-sme.c
@@ -57,7 +57,7 @@
err = cfg80211_connect(rdev, wdev->netdev,
&wdev->wext.connect, ck, prev_bssid);
if (err)
- kfree(ck);
+ kzfree(ck);
return err;
}
diff --git a/net/xfrm/xfrm_hash.h b/net/xfrm/xfrm_hash.h
index 0622d31..666c5ff 100644
--- a/net/xfrm/xfrm_hash.h
+++ b/net/xfrm/xfrm_hash.h
@@ -3,6 +3,7 @@
#include <linux/xfrm.h>
#include <linux/socket.h>
+#include <linux/jhash.h>
static inline unsigned int __xfrm4_addr_hash(const xfrm_address_t *addr)
{
@@ -28,6 +29,58 @@
saddr->a6[2] ^ saddr->a6[3]);
}
+static inline u32 __bits2mask32(__u8 bits)
+{
+ u32 mask32 = 0xffffffff;
+
+ if (bits == 0)
+ mask32 = 0;
+ else if (bits < 32)
+ mask32 <<= (32 - bits);
+
+ return mask32;
+}
+
+static inline unsigned int __xfrm4_dpref_spref_hash(const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+ __u8 dbits,
+ __u8 sbits)
+{
+ return jhash_2words(ntohl(daddr->a4) & __bits2mask32(dbits),
+ ntohl(saddr->a4) & __bits2mask32(sbits),
+ 0);
+}
+
+static inline unsigned int __xfrm6_pref_hash(const xfrm_address_t *addr,
+ __u8 prefixlen)
+{
+ int pdw;
+ int pbi;
+ u32 initval = 0;
+
+ pdw = prefixlen >> 5; /* num of whole u32 in prefix */
+ pbi = prefixlen & 0x1f; /* num of bits in incomplete u32 in prefix */
+
+ if (pbi) {
+ __be32 mask;
+
+ mask = htonl((0xffffffff) << (32 - pbi));
+
+ initval = (__force u32)(addr->a6[pdw] & mask);
+ }
+
+ return jhash2((__force u32 *)addr->a6, pdw, initval);
+}
+
+static inline unsigned int __xfrm6_dpref_spref_hash(const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+ __u8 dbits,
+ __u8 sbits)
+{
+ return __xfrm6_pref_hash(daddr, dbits) ^
+ __xfrm6_pref_hash(saddr, sbits);
+}
+
static inline unsigned int __xfrm_dst_hash(const xfrm_address_t *daddr,
const xfrm_address_t *saddr,
u32 reqid, unsigned short family,
@@ -84,7 +137,8 @@
}
static inline unsigned int __sel_hash(const struct xfrm_selector *sel,
- unsigned short family, unsigned int hmask)
+ unsigned short family, unsigned int hmask,
+ u8 dbits, u8 sbits)
{
const xfrm_address_t *daddr = &sel->daddr;
const xfrm_address_t *saddr = &sel->saddr;
@@ -92,19 +146,19 @@
switch (family) {
case AF_INET:
- if (sel->prefixlen_d != 32 ||
- sel->prefixlen_s != 32)
+ if (sel->prefixlen_d < dbits ||
+ sel->prefixlen_s < sbits)
return hmask + 1;
- h = __xfrm4_daddr_saddr_hash(daddr, saddr);
+ h = __xfrm4_dpref_spref_hash(daddr, saddr, dbits, sbits);
break;
case AF_INET6:
- if (sel->prefixlen_d != 128 ||
- sel->prefixlen_s != 128)
+ if (sel->prefixlen_d < dbits ||
+ sel->prefixlen_s < sbits)
return hmask + 1;
- h = __xfrm6_daddr_saddr_hash(daddr, saddr);
+ h = __xfrm6_dpref_spref_hash(daddr, saddr, dbits, sbits);
break;
}
h ^= (h >> 16);
@@ -113,17 +167,19 @@
static inline unsigned int __addr_hash(const xfrm_address_t *daddr,
const xfrm_address_t *saddr,
- unsigned short family, unsigned int hmask)
+ unsigned short family,
+ unsigned int hmask,
+ u8 dbits, u8 sbits)
{
unsigned int h = 0;
switch (family) {
case AF_INET:
- h = __xfrm4_daddr_saddr_hash(daddr, saddr);
+ h = __xfrm4_dpref_spref_hash(daddr, saddr, dbits, sbits);
break;
case AF_INET6:
- h = __xfrm6_daddr_saddr_hash(daddr, saddr);
+ h = __xfrm6_dpref_spref_hash(daddr, saddr, dbits, sbits);
break;
}
h ^= (h >> 16);
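These helpers are what make the configurable policy hashing below possible: instead of hashing full addresses, which only works for /32 and /128 selectors, __xfrm4_dpref_spref_hash() masks each address down to a threshold prefix before hashing, so any policy whose selector is at least that specific lands in a predictable bucket. For example, with dbits = 24 and sbits = 16, a bucket computation would look like this sketch, reusing the helper above:

/* Sketch: all IPv4 selectors sharing the same /24 destination and
 * /16 source prefixes hash to the same bydst bucket.
 */
static unsigned int example_bucket(const xfrm_address_t *daddr,
				   const xfrm_address_t *saddr,
				   unsigned int hmask)
{
	unsigned int h = __xfrm4_dpref_spref_hash(daddr, saddr, 24, 16);

	h ^= (h >> 16);
	return h & hmask;
}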
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index beeed60..f623dca 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -39,6 +39,11 @@
#define XFRM_QUEUE_TMO_MAX ((unsigned)(60*HZ))
#define XFRM_MAX_QUEUE_LEN 100
+struct xfrm_flo {
+ struct dst_entry *dst_orig;
+ u8 flags;
+};
+
static DEFINE_SPINLOCK(xfrm_policy_afinfo_lock);
static struct xfrm_policy_afinfo __rcu *xfrm_policy_afinfo[NPROTO]
__read_mostly;
@@ -344,12 +349,39 @@
return __idx_hash(index, net->xfrm.policy_idx_hmask);
}
+/* calculate policy hash thresholds */
+static void __get_hash_thresh(struct net *net,
+ unsigned short family, int dir,
+ u8 *dbits, u8 *sbits)
+{
+ switch (family) {
+ case AF_INET:
+ *dbits = net->xfrm.policy_bydst[dir].dbits4;
+ *sbits = net->xfrm.policy_bydst[dir].sbits4;
+ break;
+
+ case AF_INET6:
+ *dbits = net->xfrm.policy_bydst[dir].dbits6;
+ *sbits = net->xfrm.policy_bydst[dir].sbits6;
+ break;
+
+ default:
+ *dbits = 0;
+ *sbits = 0;
+ }
+}
+
static struct hlist_head *policy_hash_bysel(struct net *net,
const struct xfrm_selector *sel,
unsigned short family, int dir)
{
unsigned int hmask = net->xfrm.policy_bydst[dir].hmask;
- unsigned int hash = __sel_hash(sel, family, hmask);
+ unsigned int hash;
+ u8 dbits;
+ u8 sbits;
+
+ __get_hash_thresh(net, family, dir, &dbits, &sbits);
+ hash = __sel_hash(sel, family, hmask, dbits, sbits);
return (hash == hmask + 1 ?
&net->xfrm.policy_inexact[dir] :
@@ -362,25 +394,35 @@
unsigned short family, int dir)
{
unsigned int hmask = net->xfrm.policy_bydst[dir].hmask;
- unsigned int hash = __addr_hash(daddr, saddr, family, hmask);
+ unsigned int hash;
+ u8 dbits;
+ u8 sbits;
+
+ __get_hash_thresh(net, family, dir, &dbits, &sbits);
+ hash = __addr_hash(daddr, saddr, family, hmask, dbits, sbits);
return net->xfrm.policy_bydst[dir].table + hash;
}
-static void xfrm_dst_hash_transfer(struct hlist_head *list,
+static void xfrm_dst_hash_transfer(struct net *net,
+ struct hlist_head *list,
struct hlist_head *ndsttable,
- unsigned int nhashmask)
+ unsigned int nhashmask,
+ int dir)
{
struct hlist_node *tmp, *entry0 = NULL;
struct xfrm_policy *pol;
unsigned int h0 = 0;
+ u8 dbits;
+ u8 sbits;
redo:
hlist_for_each_entry_safe(pol, tmp, list, bydst) {
unsigned int h;
+ __get_hash_thresh(net, pol->family, dir, &dbits, &sbits);
h = __addr_hash(&pol->selector.daddr, &pol->selector.saddr,
- pol->family, nhashmask);
+ pol->family, nhashmask, dbits, sbits);
if (!entry0) {
hlist_del(&pol->bydst);
hlist_add_head(&pol->bydst, ndsttable+h);
@@ -434,7 +476,7 @@
write_lock_bh(&net->xfrm.xfrm_policy_lock);
for (i = hmask; i >= 0; i--)
- xfrm_dst_hash_transfer(odst + i, ndst, nhashmask);
+ xfrm_dst_hash_transfer(net, odst + i, ndst, nhashmask, dir);
net->xfrm.policy_bydst[dir].table = ndst;
net->xfrm.policy_bydst[dir].hmask = nhashmask;
@@ -529,6 +571,86 @@
mutex_unlock(&hash_resize_mutex);
}
+static void xfrm_hash_rebuild(struct work_struct *work)
+{
+ struct net *net = container_of(work, struct net,
+ xfrm.policy_hthresh.work);
+ unsigned int hmask;
+ struct xfrm_policy *pol;
+ struct xfrm_policy *policy;
+ struct hlist_head *chain;
+ struct hlist_head *odst;
+ struct hlist_node *newpos;
+ int i;
+ int dir;
+ unsigned seq;
+ u8 lbits4, rbits4, lbits6, rbits6;
+
+ mutex_lock(&hash_resize_mutex);
+
+ /* read selector prefixlen thresholds */
+ do {
+ seq = read_seqbegin(&net->xfrm.policy_hthresh.lock);
+
+ lbits4 = net->xfrm.policy_hthresh.lbits4;
+ rbits4 = net->xfrm.policy_hthresh.rbits4;
+ lbits6 = net->xfrm.policy_hthresh.lbits6;
+ rbits6 = net->xfrm.policy_hthresh.rbits6;
+ } while (read_seqretry(&net->xfrm.policy_hthresh.lock, seq));
+
+ write_lock_bh(&net->xfrm.xfrm_policy_lock);
+
+ /* reset the bydst and inexact table in all directions */
+ for (dir = 0; dir < XFRM_POLICY_MAX * 2; dir++) {
+ INIT_HLIST_HEAD(&net->xfrm.policy_inexact[dir]);
+ hmask = net->xfrm.policy_bydst[dir].hmask;
+ odst = net->xfrm.policy_bydst[dir].table;
+ for (i = hmask; i >= 0; i--)
+ INIT_HLIST_HEAD(odst + i);
+ if ((dir & XFRM_POLICY_MASK) == XFRM_POLICY_OUT) {
+ /* dir out => dst = remote, src = local */
+ net->xfrm.policy_bydst[dir].dbits4 = rbits4;
+ net->xfrm.policy_bydst[dir].sbits4 = lbits4;
+ net->xfrm.policy_bydst[dir].dbits6 = rbits6;
+ net->xfrm.policy_bydst[dir].sbits6 = lbits6;
+ } else {
+ /* dir in/fwd => dst = local, src = remote */
+ net->xfrm.policy_bydst[dir].dbits4 = lbits4;
+ net->xfrm.policy_bydst[dir].sbits4 = rbits4;
+ net->xfrm.policy_bydst[dir].dbits6 = lbits6;
+ net->xfrm.policy_bydst[dir].sbits6 = rbits6;
+ }
+ }
+
+ /* re-insert all policies by order of creation */
+ list_for_each_entry_reverse(policy, &net->xfrm.policy_all, walk.all) {
+ newpos = NULL;
+ chain = policy_hash_bysel(net, &policy->selector,
+ policy->family,
+ xfrm_policy_id2dir(policy->index));
+ hlist_for_each_entry(pol, chain, bydst) {
+ if (policy->priority >= pol->priority)
+ newpos = &pol->bydst;
+ else
+ break;
+ }
+ if (newpos)
+ hlist_add_behind(&policy->bydst, newpos);
+ else
+ hlist_add_head(&policy->bydst, chain);
+ }
+
+ write_unlock_bh(&net->xfrm.xfrm_policy_lock);
+
+ mutex_unlock(&hash_resize_mutex);
+}
+
+void xfrm_policy_hash_rebuild(struct net *net)
+{
+ schedule_work(&net->xfrm.policy_hthresh.work);
+}
+EXPORT_SYMBOL(xfrm_policy_hash_rebuild);
+
/* Generate new index... KAME seems to generate them ordered by cost
* of an absolute unpredictability of ordering of rules. This will not pass. */
static u32 xfrm_gen_index(struct net *net, int dir, u32 index)
@@ -1877,13 +1999,14 @@
}
static struct xfrm_dst *xfrm_create_dummy_bundle(struct net *net,
- struct dst_entry *dst,
+ struct xfrm_flo *xflo,
const struct flowi *fl,
int num_xfrms,
u16 family)
{
int err;
struct net_device *dev;
+ struct dst_entry *dst;
struct dst_entry *dst1;
struct xfrm_dst *xdst;
@@ -1891,9 +2014,12 @@
if (IS_ERR(xdst))
return xdst;
- if (net->xfrm.sysctl_larval_drop || num_xfrms <= 0)
+ if (!(xflo->flags & XFRM_LOOKUP_QUEUE) ||
+ net->xfrm.sysctl_larval_drop ||
+ num_xfrms <= 0)
return xdst;
+ dst = xflo->dst_orig;
dst1 = &xdst->u.dst;
dst_hold(dst);
xdst->route = dst;
@@ -1935,7 +2061,7 @@
xfrm_bundle_lookup(struct net *net, const struct flowi *fl, u16 family, u8 dir,
struct flow_cache_object *oldflo, void *ctx)
{
- struct dst_entry *dst_orig = (struct dst_entry *)ctx;
+ struct xfrm_flo *xflo = (struct xfrm_flo *)ctx;
struct xfrm_policy *pols[XFRM_POLICY_TYPE_MAX];
struct xfrm_dst *xdst, *new_xdst;
int num_pols = 0, num_xfrms = 0, i, err, pol_dead;
@@ -1976,7 +2102,8 @@
goto make_dummy_bundle;
}
- new_xdst = xfrm_resolve_and_create_bundle(pols, num_pols, fl, family, dst_orig);
+ new_xdst = xfrm_resolve_and_create_bundle(pols, num_pols, fl, family,
+ xflo->dst_orig);
if (IS_ERR(new_xdst)) {
err = PTR_ERR(new_xdst);
if (err != -EAGAIN)
@@ -2010,7 +2137,7 @@
/* We found policies, but there's no bundles to instantiate:
* either because the policy blocks, has no transformations or
* we could not build template (no xfrm_states).*/
- xdst = xfrm_create_dummy_bundle(net, dst_orig, fl, num_xfrms, family);
+ xdst = xfrm_create_dummy_bundle(net, xflo, fl, num_xfrms, family);
if (IS_ERR(xdst)) {
xfrm_pols_put(pols, num_pols);
return ERR_CAST(xdst);
@@ -2104,13 +2231,18 @@
}
if (xdst == NULL) {
+ struct xfrm_flo xflo;
+
+ xflo.dst_orig = dst_orig;
+ xflo.flags = flags;
+
/* To accelerate a bit... */
if ((dst_orig->flags & DST_NOXFRM) ||
!net->xfrm.policy_count[XFRM_POLICY_OUT])
goto nopol;
flo = flow_cache_lookup(net, fl, family, dir,
- xfrm_bundle_lookup, dst_orig);
+ xfrm_bundle_lookup, &xflo);
if (flo == NULL)
goto nopol;
if (IS_ERR(flo)) {
@@ -2138,7 +2270,7 @@
xfrm_pols_put(pols, drop_pols);
XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTNOSTATES);
- return make_blackhole(net, family, dst_orig);
+ return ERR_PTR(-EREMOTE);
}
err = -EAGAIN;
@@ -2195,6 +2327,23 @@
}
EXPORT_SYMBOL(xfrm_lookup);
+/* Callers of xfrm_lookup_route() must ensure a call to dst_output().
+ * Otherwise we may send out blackholed packets.
+ */
+struct dst_entry *xfrm_lookup_route(struct net *net, struct dst_entry *dst_orig,
+ const struct flowi *fl,
+ struct sock *sk, int flags)
+{
+ struct dst_entry *dst = xfrm_lookup(net, dst_orig, fl, sk,
+ flags | XFRM_LOOKUP_QUEUE);
+
+ if (IS_ERR(dst) && PTR_ERR(dst) == -EREMOTE)
+ return make_blackhole(net, dst_orig->ops->family, dst_orig);
+
+ return dst;
+}
+EXPORT_SYMBOL(xfrm_lookup_route);
+
static inline int
xfrm_secpath_reject(int idx, struct sk_buff *skb, const struct flowi *fl)
{
@@ -2460,7 +2609,7 @@
skb_dst_force(skb);
- dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, 0);
+ dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, XFRM_LOOKUP_QUEUE);
if (IS_ERR(dst)) {
res = 0;
dst = NULL;
@@ -2830,10 +2979,21 @@
if (!htab->table)
goto out_bydst;
htab->hmask = hmask;
+ htab->dbits4 = 32;
+ htab->sbits4 = 32;
+ htab->dbits6 = 128;
+ htab->sbits6 = 128;
}
+ net->xfrm.policy_hthresh.lbits4 = 32;
+ net->xfrm.policy_hthresh.rbits4 = 32;
+ net->xfrm.policy_hthresh.lbits6 = 128;
+ net->xfrm.policy_hthresh.rbits6 = 128;
+
+ seqlock_init(&net->xfrm.policy_hthresh.lock);
INIT_LIST_HEAD(&net->xfrm.policy_all);
INIT_WORK(&net->xfrm.policy_hash_work, xfrm_hash_resize);
+ INIT_WORK(&net->xfrm.policy_hthresh.work, xfrm_hash_rebuild);
if (net_eq(net, &init_net))
register_netdevice_notifier(&xfrm_dev_notifier);
return 0;
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 0ab5413..de971b6 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -97,8 +97,6 @@
return ((state_hmask + 1) << 1) * sizeof(struct hlist_head);
}
-static DEFINE_MUTEX(hash_resize_mutex);
-
static void xfrm_hash_resize(struct work_struct *work)
{
struct net *net = container_of(work, struct net, xfrm.state_hash_work);
@@ -107,22 +105,20 @@
unsigned int nhashmask, ohashmask;
int i;
- mutex_lock(&hash_resize_mutex);
-
nsize = xfrm_hash_new_size(net->xfrm.state_hmask);
ndst = xfrm_hash_alloc(nsize);
if (!ndst)
- goto out_unlock;
+ return;
nsrc = xfrm_hash_alloc(nsize);
if (!nsrc) {
xfrm_hash_free(ndst, nsize);
- goto out_unlock;
+ return;
}
nspi = xfrm_hash_alloc(nsize);
if (!nspi) {
xfrm_hash_free(ndst, nsize);
xfrm_hash_free(nsrc, nsize);
- goto out_unlock;
+ return;
}
spin_lock_bh(&net->xfrm.xfrm_state_lock);
@@ -148,9 +144,6 @@
xfrm_hash_free(odst, osize);
xfrm_hash_free(osrc, osize);
xfrm_hash_free(ospi, osize);
-
-out_unlock:
- mutex_unlock(&hash_resize_mutex);
}
static DEFINE_SPINLOCK(xfrm_state_afinfo_lock);
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index d4db6eb..e812e98 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -333,8 +333,7 @@
algo = xfrm_aalg_get_byname(ualg->alg_name, 1);
if (!algo)
return -ENOSYS;
- if ((ualg->alg_trunc_len / 8) > MAX_AH_AUTH_LEN ||
- ualg->alg_trunc_len > algo->uinfo.auth.icv_fullbits)
+ if (ualg->alg_trunc_len > algo->uinfo.auth.icv_fullbits)
return -EINVAL;
*props = algo->desc.sadb_alg_id;
@@ -964,7 +963,9 @@
{
return NLMSG_ALIGN(4)
+ nla_total_size(sizeof(struct xfrmu_spdinfo))
- + nla_total_size(sizeof(struct xfrmu_spdhinfo));
+ + nla_total_size(sizeof(struct xfrmu_spdhinfo))
+ + nla_total_size(sizeof(struct xfrmu_spdhthresh))
+ + nla_total_size(sizeof(struct xfrmu_spdhthresh));
}
static int build_spdinfo(struct sk_buff *skb, struct net *net,
@@ -973,9 +974,11 @@
struct xfrmk_spdinfo si;
struct xfrmu_spdinfo spc;
struct xfrmu_spdhinfo sph;
+ struct xfrmu_spdhthresh spt4, spt6;
struct nlmsghdr *nlh;
int err;
u32 *f;
+ unsigned lseq;
nlh = nlmsg_put(skb, portid, seq, XFRM_MSG_NEWSPDINFO, sizeof(u32), 0);
if (nlh == NULL) /* shouldn't really happen ... */
@@ -993,9 +996,22 @@
sph.spdhcnt = si.spdhcnt;
sph.spdhmcnt = si.spdhmcnt;
+ do {
+ lseq = read_seqbegin(&net->xfrm.policy_hthresh.lock);
+
+ spt4.lbits = net->xfrm.policy_hthresh.lbits4;
+ spt4.rbits = net->xfrm.policy_hthresh.rbits4;
+ spt6.lbits = net->xfrm.policy_hthresh.lbits6;
+ spt6.rbits = net->xfrm.policy_hthresh.rbits6;
+ } while (read_seqretry(&net->xfrm.policy_hthresh.lock, lseq));
+
err = nla_put(skb, XFRMA_SPD_INFO, sizeof(spc), &spc);
if (!err)
err = nla_put(skb, XFRMA_SPD_HINFO, sizeof(sph), &sph);
+ if (!err)
+ err = nla_put(skb, XFRMA_SPD_IPV4_HTHRESH, sizeof(spt4), &spt4);
+ if (!err)
+ err = nla_put(skb, XFRMA_SPD_IPV6_HTHRESH, sizeof(spt6), &spt6);
if (err) {
nlmsg_cancel(skb, nlh);
return err;
@@ -1004,6 +1020,51 @@
return nlmsg_end(skb, nlh);
}
+static int xfrm_set_spdinfo(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct nlattr **attrs)
+{
+ struct net *net = sock_net(skb->sk);
+ struct xfrmu_spdhthresh *thresh4 = NULL;
+ struct xfrmu_spdhthresh *thresh6 = NULL;
+
+ /* selector prefixlen thresholds to hash policies */
+ if (attrs[XFRMA_SPD_IPV4_HTHRESH]) {
+ struct nlattr *rta = attrs[XFRMA_SPD_IPV4_HTHRESH];
+
+ if (nla_len(rta) < sizeof(*thresh4))
+ return -EINVAL;
+ thresh4 = nla_data(rta);
+ if (thresh4->lbits > 32 || thresh4->rbits > 32)
+ return -EINVAL;
+ }
+ if (attrs[XFRMA_SPD_IPV6_HTHRESH]) {
+ struct nlattr *rta = attrs[XFRMA_SPD_IPV6_HTHRESH];
+
+ if (nla_len(rta) < sizeof(*thresh6))
+ return -EINVAL;
+ thresh6 = nla_data(rta);
+ if (thresh6->lbits > 128 || thresh6->rbits > 128)
+ return -EINVAL;
+ }
+
+ if (thresh4 || thresh6) {
+ write_seqlock(&net->xfrm.policy_hthresh.lock);
+ if (thresh4) {
+ net->xfrm.policy_hthresh.lbits4 = thresh4->lbits;
+ net->xfrm.policy_hthresh.rbits4 = thresh4->rbits;
+ }
+ if (thresh6) {
+ net->xfrm.policy_hthresh.lbits6 = thresh6->lbits;
+ net->xfrm.policy_hthresh.rbits6 = thresh6->rbits;
+ }
+ write_sequnlock(&net->xfrm.policy_hthresh.lock);
+
+ xfrm_policy_hash_rebuild(net);
+ }
+
+ return 0;
+}
+
static int xfrm_get_spdinfo(struct sk_buff *skb, struct nlmsghdr *nlh,
struct nlattr **attrs)
{
@@ -2274,6 +2335,7 @@
[XFRM_MSG_REPORT - XFRM_MSG_BASE] = XMSGSIZE(xfrm_user_report),
[XFRM_MSG_MIGRATE - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_id),
[XFRM_MSG_GETSADINFO - XFRM_MSG_BASE] = sizeof(u32),
+ [XFRM_MSG_NEWSPDINFO - XFRM_MSG_BASE] = sizeof(u32),
[XFRM_MSG_GETSPDINFO - XFRM_MSG_BASE] = sizeof(u32),
};
@@ -2308,10 +2370,17 @@
[XFRMA_ADDRESS_FILTER] = { .len = sizeof(struct xfrm_address_filter) },
};
+static const struct nla_policy xfrma_spd_policy[XFRMA_SPD_MAX+1] = {
+ [XFRMA_SPD_IPV4_HTHRESH] = { .len = sizeof(struct xfrmu_spdhthresh) },
+ [XFRMA_SPD_IPV6_HTHRESH] = { .len = sizeof(struct xfrmu_spdhthresh) },
+};
+
static const struct xfrm_link {
int (*doit)(struct sk_buff *, struct nlmsghdr *, struct nlattr **);
int (*dump)(struct sk_buff *, struct netlink_callback *);
int (*done)(struct netlink_callback *);
+ const struct nla_policy *nla_pol;
+ int nla_max;
} xfrm_dispatch[XFRM_NR_MSGTYPES] = {
[XFRM_MSG_NEWSA - XFRM_MSG_BASE] = { .doit = xfrm_add_sa },
[XFRM_MSG_DELSA - XFRM_MSG_BASE] = { .doit = xfrm_del_sa },
@@ -2335,6 +2404,9 @@
[XFRM_MSG_GETAE - XFRM_MSG_BASE] = { .doit = xfrm_get_ae },
[XFRM_MSG_MIGRATE - XFRM_MSG_BASE] = { .doit = xfrm_do_migrate },
[XFRM_MSG_GETSADINFO - XFRM_MSG_BASE] = { .doit = xfrm_get_sadinfo },
+ [XFRM_MSG_NEWSPDINFO - XFRM_MSG_BASE] = { .doit = xfrm_set_spdinfo,
+ .nla_pol = xfrma_spd_policy,
+ .nla_max = XFRMA_SPD_MAX },
[XFRM_MSG_GETSPDINFO - XFRM_MSG_BASE] = { .doit = xfrm_get_spdinfo },
};
@@ -2371,8 +2443,9 @@
}
}
- err = nlmsg_parse(nlh, xfrm_msg_min[type], attrs, XFRMA_MAX,
- xfrma_policy);
+ err = nlmsg_parse(nlh, xfrm_msg_min[type], attrs,
+ link->nla_max ? : XFRMA_MAX,
+ link->nla_pol ? : xfrma_policy);
if (err < 0)
return err;
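Userspace drives this through the new XFRM_MSG_NEWSPDINFO attributes validated in xfrm_set_spdinfo() above. A hedged raw-netlink sketch (error handling trimmed; attribute layout follows struct xfrmu_spdhthresh with the lbits/rbits fields checked above) that lowers the IPv4 thresholds so /24-or-longer selectors become hashable:

#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/xfrm.h>

/* Sketch only: request hashing of IPv4 selectors down to /24 on
 * both sides. The kernel rechecks lbits/rbits <= 32 and schedules
 * xfrm_policy_hash_rebuild().
 */
static int example_set_hthresh4(void)
{
	struct sockaddr_nl dst = { .nl_family = AF_NETLINK };
	struct {
		struct nlmsghdr nlh;
		__u32 flags;		/* XFRM_MSG_NEWSPDINFO carries a u32 */
		struct nlattr attr;
		struct xfrmu_spdhthresh thresh;
	} req;
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_XFRM);

	if (fd < 0)
		return -1;
	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = sizeof(req);
	req.nlh.nlmsg_type = XFRM_MSG_NEWSPDINFO;
	req.nlh.nlmsg_flags = NLM_F_REQUEST;
	req.attr.nla_type = XFRMA_SPD_IPV4_HTHRESH;
	req.attr.nla_len = NLA_HDRLEN + sizeof(req.thresh);
	req.thresh.lbits = 24;
	req.thresh.rbits = 24;

	return sendto(fd, &req, sizeof(req), 0,
		      (struct sockaddr *)&dst, sizeof(dst)) < 0 ? -1 : 0;
}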
diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
new file mode 100644
index 0000000..6343917
--- /dev/null
+++ b/samples/bpf/Makefile
@@ -0,0 +1,12 @@
+# kbuild trick to avoid linker error. Can be omitted if a module is built.
+obj- := dummy.o
+
+# List of programs to build
+hostprogs-y := test_verifier
+
+test_verifier-objs := test_verifier.o libbpf.o
+
+# Tell kbuild to always build the programs
+always := $(hostprogs-y)
+
+HOSTCFLAGS += -I$(objtree)/usr/include
diff --git a/samples/bpf/libbpf.c b/samples/bpf/libbpf.c
new file mode 100644
index 0000000..ff65044
--- /dev/null
+++ b/samples/bpf/libbpf.c
@@ -0,0 +1,94 @@
+/* eBPF mini library */
+#include <stdlib.h>
+#include <stdio.h>
+#include <linux/unistd.h>
+#include <unistd.h>
+#include <string.h>
+#include <linux/netlink.h>
+#include <linux/bpf.h>
+#include <errno.h>
+#include "libbpf.h"
+
+static __u64 ptr_to_u64(void *ptr)
+{
+ return (__u64) (unsigned long) ptr;
+}
+
+int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size,
+ int max_entries)
+{
+ union bpf_attr attr = {
+ .map_type = map_type,
+ .key_size = key_size,
+ .value_size = value_size,
+ .max_entries = max_entries
+ };
+
+ return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
+}
+
+int bpf_update_elem(int fd, void *key, void *value)
+{
+ union bpf_attr attr = {
+ .map_fd = fd,
+ .key = ptr_to_u64(key),
+ .value = ptr_to_u64(value),
+ };
+
+ return syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr));
+}
+
+int bpf_lookup_elem(int fd, void *key, void *value)
+{
+ union bpf_attr attr = {
+ .map_fd = fd,
+ .key = ptr_to_u64(key),
+ .value = ptr_to_u64(value),
+ };
+
+ return syscall(__NR_bpf, BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
+}
+
+int bpf_delete_elem(int fd, void *key)
+{
+ union bpf_attr attr = {
+ .map_fd = fd,
+ .key = ptr_to_u64(key),
+ };
+
+ return syscall(__NR_bpf, BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
+}
+
+int bpf_get_next_key(int fd, void *key, void *next_key)
+{
+ union bpf_attr attr = {
+ .map_fd = fd,
+ .key = ptr_to_u64(key),
+ .next_key = ptr_to_u64(next_key),
+ };
+
+ return syscall(__NR_bpf, BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr));
+}
+
+#define ROUND_UP(x, n) (((x) + (n) - 1u) & ~((n) - 1u))
+
+char bpf_log_buf[LOG_BUF_SIZE];
+
+int bpf_prog_load(enum bpf_prog_type prog_type,
+ const struct bpf_insn *insns, int prog_len,
+ const char *license)
+{
+ union bpf_attr attr = {
+ .prog_type = prog_type,
+ .insns = ptr_to_u64((void *) insns),
+ .insn_cnt = prog_len / sizeof(struct bpf_insn),
+ .license = ptr_to_u64((void *) license),
+ .log_buf = ptr_to_u64(bpf_log_buf),
+ .log_size = LOG_BUF_SIZE,
+ .log_level = 1,
+ };
+
+ bpf_log_buf[0] = 0;
+
+ return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
+}
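With these wrappers in place, exercising the map side of the syscall takes only a few lines. A sketch of a create/update/lookup round trip; note that this patchset itself only defines BPF_MAP_TYPE_UNSPEC, so a hash map type implemented by the running kernel is assumed here:

#include <linux/bpf.h>
#include "libbpf.h"

/* Sketch: create a map, insert one element, read it back.
 * BPF_MAP_TYPE_HASH is assumed to exist on the running kernel;
 * it is not part of this sample's patchset.
 */
static int example_map_roundtrip(void)
{
	long long key = 1, value = 42, out = 0;
	int fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key),
				sizeof(value), 16);

	if (fd < 0)
		return fd;
	if (bpf_update_elem(fd, &key, &value) ||
	    bpf_lookup_elem(fd, &key, &out))
		return -1;
	return out == 42 ? 0 : -1;
}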
diff --git a/samples/bpf/libbpf.h b/samples/bpf/libbpf.h
new file mode 100644
index 0000000..8a31bab
--- /dev/null
+++ b/samples/bpf/libbpf.h
@@ -0,0 +1,172 @@
+/* eBPF mini library */
+#ifndef __LIBBPF_H
+#define __LIBBPF_H
+
+struct bpf_insn;
+
+int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size,
+ int max_entries);
+int bpf_update_elem(int fd, void *key, void *value);
+int bpf_lookup_elem(int fd, void *key, void *value);
+int bpf_delete_elem(int fd, void *key);
+int bpf_get_next_key(int fd, void *key, void *next_key);
+
+int bpf_prog_load(enum bpf_prog_type prog_type,
+ const struct bpf_insn *insns, int insn_len,
+ const char *license);
+
+#define LOG_BUF_SIZE 8192
+extern char bpf_log_buf[LOG_BUF_SIZE];
+
+/* ALU ops on registers, bpf_add|sub|...: dst_reg += src_reg */
+
+#define BPF_ALU64_REG(OP, DST, SRC) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_OP(OP) | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = 0, \
+ .imm = 0 })
+
+#define BPF_ALU32_REG(OP, DST, SRC) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU | BPF_OP(OP) | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = 0, \
+ .imm = 0 })
+
+/* ALU ops on immediates, bpf_add|sub|...: dst_reg += imm32 */
+
+#define BPF_ALU64_IMM(OP, DST, IMM) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_OP(OP) | BPF_K, \
+ .dst_reg = DST, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = IMM })
+
+#define BPF_ALU32_IMM(OP, DST, IMM) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU | BPF_OP(OP) | BPF_K, \
+ .dst_reg = DST, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = IMM })
+
+/* Short form of mov, dst_reg = src_reg */
+
+#define BPF_MOV64_REG(DST, SRC) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_MOV | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = 0, \
+ .imm = 0 })
+
+/* Short form of mov, dst_reg = imm32 */
+
+#define BPF_MOV64_IMM(DST, IMM) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_MOV | BPF_K, \
+ .dst_reg = DST, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = IMM })
+
+/* BPF_LD_IMM64 macro encodes single 'load 64-bit immediate' insn */
+#define BPF_LD_IMM64(DST, IMM) \
+ BPF_LD_IMM64_RAW(DST, 0, IMM)
+
+#define BPF_LD_IMM64_RAW(DST, SRC, IMM) \
+ ((struct bpf_insn) { \
+ .code = BPF_LD | BPF_DW | BPF_IMM, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = 0, \
+ .imm = (__u32) (IMM) }), \
+ ((struct bpf_insn) { \
+ .code = 0, /* zero is reserved opcode */ \
+ .dst_reg = 0, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = ((__u64) (IMM)) >> 32 })
+
+#define BPF_PSEUDO_MAP_FD 1
+
+/* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */
+#define BPF_LD_MAP_FD(DST, MAP_FD) \
+ BPF_LD_IMM64_RAW(DST, BPF_PSEUDO_MAP_FD, MAP_FD)
+
+
+/* Memory load, dst_reg = *(uint *) (src_reg + off16) */
+
+#define BPF_LDX_MEM(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_LDX | BPF_SIZE(SIZE) | BPF_MEM, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
+/* Memory store, *(uint *) (dst_reg + off16) = src_reg */
+
+#define BPF_STX_MEM(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_MEM, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
+/* Memory store, *(uint *) (dst_reg + off16) = imm32 */
+
+#define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
+ ((struct bpf_insn) { \
+ .code = BPF_ST | BPF_SIZE(SIZE) | BPF_MEM, \
+ .dst_reg = DST, \
+ .src_reg = 0, \
+ .off = OFF, \
+ .imm = IMM })
+
+/* Conditional jumps against registers, if (dst_reg 'op' src_reg) goto pc + off16 */
+
+#define BPF_JMP_REG(OP, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_JMP | BPF_OP(OP) | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
+/* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */
+
+#define BPF_JMP_IMM(OP, DST, IMM, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_JMP | BPF_OP(OP) | BPF_K, \
+ .dst_reg = DST, \
+ .src_reg = 0, \
+ .off = OFF, \
+ .imm = IMM })
+
+/* Raw code statement block */
+
+#define BPF_RAW_INSN(CODE, DST, SRC, OFF, IMM) \
+ ((struct bpf_insn) { \
+ .code = CODE, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = IMM })
+
+/* Program exit */
+
+#define BPF_EXIT_INSN() \
+ ((struct bpf_insn) { \
+ .code = BPF_JMP | BPF_EXIT, \
+ .dst_reg = 0, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = 0 })
+
+#endif
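The macros assemble struct bpf_insn values directly, so a complete program is just an array. A minimal sketch of the path the testsuite below exercises: build a two-instruction "return 0" program and hand it to the verifier via bpf_prog_load(); on rejection, the verifier log is available in bpf_log_buf:

#include <stdio.h>
#include <linux/bpf.h>
#include "libbpf.h"

static int example_load(void)
{
	struct bpf_insn prog[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),	/* r0 = 0 */
		BPF_EXIT_INSN(),		/* return r0 */
	};
	int fd = bpf_prog_load(BPF_PROG_TYPE_UNSPEC, prog,
			       sizeof(prog), "GPL");

	if (fd < 0)
		printf("verifier said: %s\n", bpf_log_buf);
	return fd;
}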
diff --git a/samples/bpf/test_verifier.c b/samples/bpf/test_verifier.c
new file mode 100644
index 0000000..d10992e
--- /dev/null
+++ b/samples/bpf/test_verifier.c
@@ -0,0 +1,548 @@
+/*
+ * Testsuite for eBPF verifier
+ *
+ * Copyright (c) 2014 PLUMgrid, http://plumgrid.com
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <stdio.h>
+#include <unistd.h>
+#include <linux/bpf.h>
+#include <errno.h>
+#include <linux/unistd.h>
+#include <string.h>
+#include <linux/filter.h>
+#include "libbpf.h"
+
+#define MAX_INSNS 512
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
+
+struct bpf_test {
+ const char *descr;
+ struct bpf_insn insns[MAX_INSNS];
+ int fixup[32];
+ const char *errstr;
+ enum {
+ ACCEPT,
+ REJECT
+ } result;
+};
+
+static struct bpf_test tests[] = {
+ {
+ "add+sub+mul",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_1, 1),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 2),
+ BPF_MOV64_IMM(BPF_REG_2, 3),
+ BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_2),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -1),
+ BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 3),
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+ },
+ {
+ "unreachable",
+ .insns = {
+ BPF_EXIT_INSN(),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "unreachable",
+ .result = REJECT,
+ },
+ {
+ "unreachable2",
+ .insns = {
+ BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+ BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "unreachable",
+ .result = REJECT,
+ },
+ {
+ "out of range jump",
+ .insns = {
+ BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "jump out of range",
+ .result = REJECT,
+ },
+ {
+ "out of range jump2",
+ .insns = {
+ BPF_JMP_IMM(BPF_JA, 0, 0, -2),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "jump out of range",
+ .result = REJECT,
+ },
+ {
+ "test1 ld_imm64",
+ .insns = {
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+ BPF_LD_IMM64(BPF_REG_0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 1),
+ BPF_LD_IMM64(BPF_REG_0, 1),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid BPF_LD_IMM insn",
+ .result = REJECT,
+ },
+ {
+ "test2 ld_imm64",
+ .insns = {
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+ BPF_LD_IMM64(BPF_REG_0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 1),
+ BPF_LD_IMM64(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid BPF_LD_IMM insn",
+ .result = REJECT,
+ },
+ {
+ "test3 ld_imm64",
+ .insns = {
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+ BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 0),
+ BPF_LD_IMM64(BPF_REG_0, 1),
+ BPF_LD_IMM64(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid bpf_ld_imm64 insn",
+ .result = REJECT,
+ },
+ {
+ "test4 ld_imm64",
+ .insns = {
+ BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid bpf_ld_imm64 insn",
+ .result = REJECT,
+ },
+ {
+ "test5 ld_imm64",
+ .insns = {
+ BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0),
+ },
+ .errstr = "invalid bpf_ld_imm64 insn",
+ .result = REJECT,
+ },
+ {
+ "no bpf_exit",
+ .insns = {
+ BPF_ALU64_REG(BPF_MOV, BPF_REG_0, BPF_REG_2),
+ },
+ .errstr = "jump out of range",
+ .result = REJECT,
+ },
+ {
+ "loop (back-edge)",
+ .insns = {
+ BPF_JMP_IMM(BPF_JA, 0, 0, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "back-edge",
+ .result = REJECT,
+ },
+ {
+ "loop2 (back-edge)",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
+ BPF_JMP_IMM(BPF_JA, 0, 0, -4),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "back-edge",
+ .result = REJECT,
+ },
+ {
+ "conditional loop",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "back-edge",
+ .result = REJECT,
+ },
+ {
+ "read uninitialized register",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R2 !read_ok",
+ .result = REJECT,
+ },
+ {
+ "read invalid register",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_0, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R15 is invalid",
+ .result = REJECT,
+ },
+ {
+ "program doesn't init R0 before exit",
+ .insns = {
+ BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R0 !read_ok",
+ .result = REJECT,
+ },
+ {
+ "stack out of bounds",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid stack",
+ .result = REJECT,
+ },
+ {
+ "invalid call insn1",
+ .insns = {
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL | BPF_X, 0, 0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "BPF_CALL uses reserved",
+ .result = REJECT,
+ },
+ {
+ "invalid call insn2",
+ .insns = {
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 1, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "BPF_CALL uses reserved",
+ .result = REJECT,
+ },
+ {
+ "invalid function call",
+ .insns = {
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 1234567),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid func 1234567",
+ .result = REJECT,
+ },
+ {
+ "uninitialized stack1",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_unspec),
+ BPF_EXIT_INSN(),
+ },
+ .fixup = {2},
+ .errstr = "invalid indirect read from stack",
+ .result = REJECT,
+ },
+ {
+ "uninitialized stack2",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid read from stack",
+ .result = REJECT,
+ },
+ {
+ "check valid spill/fill",
+ .insns = {
+ /* spill R1(ctx) into stack */
+ BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+
+ /* fill it back into R2 */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
+
+ /* should be able to access R0 = *(R2 + 8) */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+ },
+ {
+ "check corrupted spill/fill",
+ .insns = {
+ /* spill R1(ctx) into stack */
+ BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+
+		/* corrupt the spilled R1 pointer on the stack */
+ BPF_ST_MEM(BPF_B, BPF_REG_10, -7, 0x23),
+
+ /* fill back into R0 should fail */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "corrupted spill",
+ .result = REJECT,
+ },
+ {
+ "invalid src register in STX",
+ .insns = {
+ BPF_STX_MEM(BPF_B, BPF_REG_10, -1, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R15 is invalid",
+ .result = REJECT,
+ },
+ {
+ "invalid dst register in STX",
+ .insns = {
+ BPF_STX_MEM(BPF_B, 14, BPF_REG_10, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R14 is invalid",
+ .result = REJECT,
+ },
+ {
+ "invalid dst register in ST",
+ .insns = {
+ BPF_ST_MEM(BPF_B, 14, -1, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R14 is invalid",
+ .result = REJECT,
+ },
+ {
+ "invalid src register in LDX",
+ .insns = {
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, 12, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R12 is invalid",
+ .result = REJECT,
+ },
+ {
+ "invalid dst register in LDX",
+ .insns = {
+ BPF_LDX_MEM(BPF_B, 11, BPF_REG_1, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "R11 is invalid",
+ .result = REJECT,
+ },
+ {
+ "junk insn",
+ .insns = {
+ BPF_RAW_INSN(0, 0, 0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid BPF_LD_IMM",
+ .result = REJECT,
+ },
+ {
+ "junk insn2",
+ .insns = {
+ BPF_RAW_INSN(1, 0, 0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "BPF_LDX uses reserved fields",
+ .result = REJECT,
+ },
+ {
+ "junk insn3",
+ .insns = {
+ BPF_RAW_INSN(-1, 0, 0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid BPF_ALU opcode f0",
+ .result = REJECT,
+ },
+ {
+ "junk insn4",
+ .insns = {
+ BPF_RAW_INSN(-1, -1, -1, -1, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "invalid BPF_ALU opcode f0",
+ .result = REJECT,
+ },
+ {
+ "junk insn5",
+ .insns = {
+ BPF_RAW_INSN(0x7f, -1, -1, -1, -1),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "BPF_ALU uses reserved fields",
+ .result = REJECT,
+ },
+ {
+ "misaligned read from stack",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -4),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "misaligned access",
+ .result = REJECT,
+ },
+ {
+ "invalid map_fd for function call",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_unspec),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "fd 0 is not pointing to valid bpf_map",
+ .result = REJECT,
+ },
+ {
+ "don't check return value before access",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_unspec),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup = {3},
+ .errstr = "R0 invalid mem access 'map_value_or_null'",
+ .result = REJECT,
+ },
+ {
+ "access memory with incorrect alignment",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_unspec),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup = {3},
+ .errstr = "misaligned access",
+ .result = REJECT,
+ },
+ {
+ "sometimes access memory with incorrect alignment",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_unspec),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
+ BPF_EXIT_INSN(),
+ BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .fixup = {3},
+ .errstr = "R0 invalid mem access",
+ .result = REJECT,
+ },
+};
+
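+/*
+ * The test programs live in fixed-size, zero-padded insn arrays; scan
+ * backwards for the last non-zero instruction to recover the actual
+ * program length.
+ */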
+static int probe_filter_length(struct bpf_insn *fp)
+{
+	int len;
+
+ for (len = MAX_INSNS - 1; len > 0; --len)
+ if (fp[len].code != 0 || fp[len].imm != 0)
+ break;
+
+ return len + 1;
+}
+
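+/*
+ * Create a dummy map; its fd is patched into each test program at the
+ * instruction offsets listed in the test's .fixup array.
+ */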
+static int create_map(void)
+{
+ long long key, value = 0;
+ int map_fd;
+
+ map_fd = bpf_create_map(BPF_MAP_TYPE_UNSPEC, sizeof(key), sizeof(value), 1024);
+ if (map_fd < 0) {
+ printf("failed to create map '%s'\n", strerror(errno));
+ }
+
+ return map_fd;
+}
+
+static int test(void)
+{
+ int prog_fd, i;
+
+ for (i = 0; i < ARRAY_SIZE(tests); i++) {
+ struct bpf_insn *prog = tests[i].insns;
+ int prog_len = probe_filter_length(prog);
+ int *fixup = tests[i].fixup;
+ int map_fd = -1;
+
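+		/* .fixup holds 0-terminated insn offsets needing the map fd */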
+ if (*fixup) {
+ map_fd = create_map();
+
+ do {
+ prog[*fixup].imm = map_fd;
+ fixup++;
+ } while (*fixup);
+ }
+ printf("#%d %s ", i, tests[i].descr);
+
+ prog_fd = bpf_prog_load(BPF_PROG_TYPE_UNSPEC, prog,
+ prog_len * sizeof(struct bpf_insn),
+ "GPL");
+
+ if (tests[i].result == ACCEPT) {
+ if (prog_fd < 0) {
+ printf("FAIL\nfailed to load prog '%s'\n",
+ strerror(errno));
+ printf("%s", bpf_log_buf);
+ goto fail;
+ }
+ } else {
+ if (prog_fd >= 0) {
+				printf("FAIL\nprogram loaded unexpectedly\n");
+ printf("%s", bpf_log_buf);
+ goto fail;
+ }
+			if (strstr(bpf_log_buf, tests[i].errstr) == NULL) {
+ printf("FAIL\nunexpected error message: %s",
+ bpf_log_buf);
+ goto fail;
+ }
+ }
+
+ printf("OK\n");
+fail:
+ if (map_fd >= 0)
+ close(map_fd);
+		if (prog_fd >= 0)
+			close(prog_fd);
+
+ }
+
+ return 0;
+}
+
+int main(void)
+{
+ return test();
+}
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index b385bcb..4d08b39 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -2133,7 +2133,10 @@
# Check for improperly formed commit descriptions
if ($in_commit_log &&
$line =~ /\bcommit\s+[0-9a-f]{5,}/i &&
- $line !~ /\b[Cc]ommit [0-9a-f]{12,40} \("/) {
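+	    # also accept a bare "commit <sha>" when the ("subject") part
+	    # wraps onto the next line of the raw commit message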
+ !($line =~ /\b[Cc]ommit [0-9a-f]{12,40} \("/ ||
+ ($line =~ /\b[Cc]ommit [0-9a-f]{12,40}\s*$/ &&
+ defined $rawlines[$linenr] &&
+ $rawlines[$linenr] =~ /^\s*\("/))) {
$line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i;
my $init_char = $1;
my $orig_commit = lc($2);
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index 9acc77e..0032278 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -1782,14 +1782,16 @@
{
struct snd_pcm_hw_params *params = arg;
snd_pcm_format_t format;
- int channels, width;
+ int channels;
+ ssize_t frame_size;
params->fifo_size = substream->runtime->hw.fifo_size;
if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_FIFO_IN_FRAMES)) {
format = params_format(params);
channels = params_channels(params);
- width = snd_pcm_format_physical_width(format);
- params->fifo_size /= width * channels;
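+		/* convert the FIFO size from bytes to frames */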
+ frame_size = snd_pcm_format_size(format, channels);
+ if (frame_size > 0)
+ params->fifo_size /= (unsigned)frame_size;
}
return 0;
}
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 6e5d0cb..47ccb8f 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -777,6 +777,7 @@
{ .id = CXT_PINCFG_LENOVO_TP410, .name = "tp410" },
{ .id = CXT_FIXUP_THINKPAD_ACPI, .name = "thinkpad" },
{ .id = CXT_PINCFG_LEMOTE_A1004, .name = "lemote-a1004" },
+ { .id = CXT_PINCFG_LEMOTE_A1205, .name = "lemote-a1205" },
{ .id = CXT_FIXUP_OLPC_XO, .name = "olpc-xo" },
{}
};
diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
index ea823e1..98cd190 100644
--- a/sound/pci/hda/patch_sigmatel.c
+++ b/sound/pci/hda/patch_sigmatel.c
@@ -566,8 +566,8 @@
if (snd_hda_jack_tbl_get(codec, nid))
continue;
if (def_conf == AC_JACK_PORT_COMPLEX &&
- !(spec->vref_mute_led_nid == nid ||
- is_jack_detectable(codec, nid))) {
+ spec->vref_mute_led_nid != nid &&
+ is_jack_detectable(codec, nid)) {
snd_hda_jack_detect_enable_callback(codec, nid,
STAC_PWR_EVENT,
jack_update_power);
@@ -4276,11 +4276,18 @@
return err;
}
- stac_init_power_map(codec);
-
return 0;
}
+static int stac_build_controls(struct hda_codec *codec)
+{
+ int err = snd_hda_gen_build_controls(codec);
+
+ if (err < 0)
+ return err;
+ stac_init_power_map(codec);
+ return 0;
+}
static int stac_init(struct hda_codec *codec)
{
@@ -4392,7 +4399,7 @@
#endif /* CONFIG_PM */
static const struct hda_codec_ops stac_patch_ops = {
- .build_controls = snd_hda_gen_build_controls,
+ .build_controls = stac_build_controls,
.build_pcms = snd_hda_gen_build_pcms,
.init = stac_init,
.free = stac_free,
diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
index 9852320..69a8516 100644
--- a/sound/soc/codecs/cs4265.c
+++ b/sound/soc/codecs/cs4265.c
@@ -458,12 +458,12 @@
if (params_width(params) == 16) {
snd_soc_update_bits(codec, CS4265_DAC_CTL,
CS4265_DAC_CTL_DIF, (1 << 5));
- snd_soc_update_bits(codec, CS4265_ADC_CTL,
+ snd_soc_update_bits(codec, CS4265_SPDIF_CTL2,
CS4265_SPDIF_CTL2_DIF, (1 << 7));
} else {
snd_soc_update_bits(codec, CS4265_DAC_CTL,
CS4265_DAC_CTL_DIF, (3 << 5));
- snd_soc_update_bits(codec, CS4265_ADC_CTL,
+ snd_soc_update_bits(codec, CS4265_SPDIF_CTL2,
CS4265_SPDIF_CTL2_DIF, (1 << 7));
}
break;
@@ -472,7 +472,7 @@
CS4265_DAC_CTL_DIF, 0);
snd_soc_update_bits(codec, CS4265_ADC_CTL,
CS4265_ADC_DIF, 0);
- snd_soc_update_bits(codec, CS4265_ADC_CTL,
+ snd_soc_update_bits(codec, CS4265_SPDIF_CTL2,
CS4265_SPDIF_CTL2_DIF, (1 << 6));
break;
diff --git a/sound/soc/codecs/sta529.c b/sound/soc/codecs/sta529.c
index 9aa1323..89c748d 100644
--- a/sound/soc/codecs/sta529.c
+++ b/sound/soc/codecs/sta529.c
@@ -4,7 +4,7 @@
* sound/soc/codecs/sta529.c -- spear ALSA Soc codec driver
*
* Copyright (C) 2012 ST Microelectronics
- * Rajeev Kumar <rajeev-dlh.kumar@st.com>
+ * Rajeev Kumar <rajeevkumar.linux@gmail.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
@@ -426,5 +426,5 @@
module_i2c_driver(sta529_i2c_driver);
MODULE_DESCRIPTION("ASoC STA529 codec driver");
-MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
+MODULE_AUTHOR("Rajeev Kumar <rajeevkumar.linux@gmail.com>");
MODULE_LICENSE("GPL");
diff --git a/sound/soc/codecs/tlv320aic31xx.c b/sound/soc/codecs/tlv320aic31xx.c
index 0f64c78..aea9e1f 100644
--- a/sound/soc/codecs/tlv320aic31xx.c
+++ b/sound/soc/codecs/tlv320aic31xx.c
@@ -189,46 +189,57 @@
/* mclk rate pll: p j d dosr ndac mdac aors nadc madc */
/* 8k rate */
{12000000, 8000, 1, 8, 1920, 128, 48, 2, 128, 48, 2},
+ {12000000, 8000, 1, 8, 1920, 128, 32, 3, 128, 32, 3},
{24000000, 8000, 2, 8, 1920, 128, 48, 2, 128, 48, 2},
{25000000, 8000, 2, 7, 8643, 128, 48, 2, 128, 48, 2},
/* 11.025k rate */
{12000000, 11025, 1, 7, 5264, 128, 32, 2, 128, 32, 2},
+ {12000000, 11025, 1, 8, 4672, 128, 24, 3, 128, 24, 3},
{24000000, 11025, 2, 7, 5264, 128, 32, 2, 128, 32, 2},
{25000000, 11025, 2, 7, 2253, 128, 32, 2, 128, 32, 2},
/* 16k rate */
{12000000, 16000, 1, 8, 1920, 128, 24, 2, 128, 24, 2},
+ {12000000, 16000, 1, 8, 1920, 128, 16, 3, 128, 16, 3},
{24000000, 16000, 2, 8, 1920, 128, 24, 2, 128, 24, 2},
{25000000, 16000, 2, 7, 8643, 128, 24, 2, 128, 24, 2},
/* 22.05k rate */
{12000000, 22050, 1, 7, 5264, 128, 16, 2, 128, 16, 2},
+ {12000000, 22050, 1, 8, 4672, 128, 12, 3, 128, 12, 3},
{24000000, 22050, 2, 7, 5264, 128, 16, 2, 128, 16, 2},
{25000000, 22050, 2, 7, 2253, 128, 16, 2, 128, 16, 2},
/* 32k rate */
{12000000, 32000, 1, 8, 1920, 128, 12, 2, 128, 12, 2},
+ {12000000, 32000, 1, 8, 1920, 128, 8, 3, 128, 8, 3},
{24000000, 32000, 2, 8, 1920, 128, 12, 2, 128, 12, 2},
{25000000, 32000, 2, 7, 8643, 128, 12, 2, 128, 12, 2},
/* 44.1k rate */
{12000000, 44100, 1, 7, 5264, 128, 8, 2, 128, 8, 2},
+ {12000000, 44100, 1, 8, 4672, 128, 6, 3, 128, 6, 3},
{24000000, 44100, 2, 7, 5264, 128, 8, 2, 128, 8, 2},
{25000000, 44100, 2, 7, 2253, 128, 8, 2, 128, 8, 2},
/* 48k rate */
{12000000, 48000, 1, 8, 1920, 128, 8, 2, 128, 8, 2},
+ {12000000, 48000, 1, 7, 6800, 96, 5, 4, 96, 5, 4},
{24000000, 48000, 2, 8, 1920, 128, 8, 2, 128, 8, 2},
{25000000, 48000, 2, 7, 8643, 128, 8, 2, 128, 8, 2},
/* 88.2k rate */
{12000000, 88200, 1, 7, 5264, 64, 8, 2, 64, 8, 2},
+ {12000000, 88200, 1, 8, 4672, 64, 6, 3, 64, 6, 3},
{24000000, 88200, 2, 7, 5264, 64, 8, 2, 64, 8, 2},
{25000000, 88200, 2, 7, 2253, 64, 8, 2, 64, 8, 2},
/* 96k rate */
{12000000, 96000, 1, 8, 1920, 64, 8, 2, 64, 8, 2},
+ {12000000, 96000, 1, 7, 6800, 48, 5, 4, 48, 5, 4},
{24000000, 96000, 2, 8, 1920, 64, 8, 2, 64, 8, 2},
{25000000, 96000, 2, 7, 8643, 64, 8, 2, 64, 8, 2},
/* 176.4k rate */
{12000000, 176400, 1, 7, 5264, 32, 8, 2, 32, 8, 2},
+ {12000000, 176400, 1, 8, 4672, 32, 6, 3, 32, 6, 3},
{24000000, 176400, 2, 7, 5264, 32, 8, 2, 32, 8, 2},
{25000000, 176400, 2, 7, 2253, 32, 8, 2, 32, 8, 2},
/* 192k rate */
{12000000, 192000, 1, 8, 1920, 32, 8, 2, 32, 8, 2},
+ {12000000, 192000, 1, 7, 6800, 24, 5, 4, 24, 5, 4},
{24000000, 192000, 2, 8, 1920, 32, 8, 2, 32, 8, 2},
{25000000, 192000, 2, 7, 8643, 32, 8, 2, 32, 8, 2},
};
@@ -680,7 +691,9 @@
struct snd_pcm_hw_params *params)
{
struct aic31xx_priv *aic31xx = snd_soc_codec_get_drvdata(codec);
+ int bclk_score = snd_soc_params_to_frame_size(params);
int bclk_n = 0;
+ int match = -1;
int i;
/* Use PLL as CODEC_CLKIN and DAC_CLK as BDIV_CLKIN */
@@ -691,15 +704,37 @@
for (i = 0; i < ARRAY_SIZE(aic31xx_divs); i++) {
if (aic31xx_divs[i].rate == params_rate(params) &&
- aic31xx_divs[i].mclk == aic31xx->sysclk)
- break;
+ aic31xx_divs[i].mclk == aic31xx->sysclk) {
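+			/*
+			 * Score each matching entry by how far DOSR * MDAC
+			 * is from a multiple of the frame size; keep the
+			 * entry with the smallest remainder that still
+			 * yields a non-zero bit-clock divider.
+			 */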
+ int s = (aic31xx_divs[i].dosr * aic31xx_divs[i].mdac) %
+ snd_soc_params_to_frame_size(params);
+ int bn = (aic31xx_divs[i].dosr * aic31xx_divs[i].mdac) /
+ snd_soc_params_to_frame_size(params);
+ if (s < bclk_score && bn > 0) {
+ match = i;
+ bclk_n = bn;
+ bclk_score = s;
+ }
+ }
}
- if (i == ARRAY_SIZE(aic31xx_divs)) {
- dev_err(codec->dev, "%s: Sampling rate %u not supported\n",
+ if (match == -1) {
+ dev_err(codec->dev,
+ "%s: Sample rate (%u) and format not supported\n",
__func__, params_rate(params));
+		/* See below for details on how to fix this. */
return -EINVAL;
}
+ if (bclk_score != 0) {
+		dev_warn(codec->dev, "Cannot produce an exact bit clock\n");
+		/* This is fine when using the DSP format, but with I2S
+		   there may be trouble. To fix the issue, edit the
+		   aic31xx_divs table for your mclk and sample
+		   rate. Details can be found in:
+		   http://www.ti.com/lit/ds/symlink/tlv320aic3100.pdf
+		   Section: 5.6 CLOCK Generation and PLL
+		*/
+ }
+ i = match;
/* PLL configuration */
snd_soc_update_bits(codec, AIC31XX_PLLPR, AIC31XX_PLL_MASK,
@@ -729,14 +764,6 @@
snd_soc_write(codec, AIC31XX_AOSR, aic31xx_divs[i].aosr);
/* Bit clock divider configuration. */
- bclk_n = (aic31xx_divs[i].dosr * aic31xx_divs[i].mdac)
- / snd_soc_params_to_frame_size(params);
- if (bclk_n == 0) {
- dev_err(codec->dev, "%s: Not enough BLCK bandwidth\n",
- __func__);
- return -EINVAL;
- }
-
snd_soc_update_bits(codec, AIC31XX_BCLKN,
AIC31XX_PLL_MASK, bclk_n);
diff --git a/sound/soc/davinci/davinci-mcasp.c b/sound/soc/davinci/davinci-mcasp.c
index 6a6b2ff..68347b5 100644
--- a/sound/soc/davinci/davinci-mcasp.c
+++ b/sound/soc/davinci/davinci-mcasp.c
@@ -467,8 +467,17 @@
{
u32 fmt;
u32 tx_rotate = (word_length / 4) & 0x7;
- u32 rx_rotate = (32 - word_length) / 4;
u32 mask = (1ULL << word_length) - 1;
+ /*
+	 * For captured data we should not rotate; inversion and masking are
+	 * enough to get the data into the right position:
+ * Format data from bus after reverse (XRBUF)
+ * S16_LE: |LSB|MSB|xxx|xxx| |xxx|xxx|MSB|LSB|
+ * S24_3LE: |LSB|DAT|MSB|xxx| |xxx|MSB|DAT|LSB|
+ * S24_LE: |LSB|DAT|MSB|xxx| |xxx|MSB|DAT|LSB|
+ * S32_LE: |LSB|DAT|DAT|MSB| |MSB|DAT|DAT|LSB|
+ */
+ u32 rx_rotate = 0;
/*
	 * if a BCLK-to-LRCLK ratio has been configured via the set_clkdiv()
diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c
index 25c31f1..e961388 100644
--- a/sound/soc/dwc/designware_i2s.c
+++ b/sound/soc/dwc/designware_i2s.c
@@ -4,7 +4,7 @@
* sound/soc/dwc/designware_i2s.c
*
* Copyright (C) 2010 ST Microelectronics
- * Rajeev Kumar <rajeev-dlh.kumar@st.com>
+ * Rajeev Kumar <rajeevkumar.linux@gmail.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
@@ -455,7 +455,7 @@
module_platform_driver(dw_i2s_driver);
-MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
+MODULE_AUTHOR("Rajeev Kumar <rajeevkumar.linux@gmail.com>");
MODULE_DESCRIPTION("DESIGNWARE I2S SoC Interface");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:designware_i2s");
diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
index 8d8e4b5..fb9e05c 100644
--- a/sound/soc/rockchip/rockchip_i2s.c
+++ b/sound/soc/rockchip/rockchip_i2s.c
@@ -165,13 +165,14 @@
struct rk_i2s_dev *i2s = to_info(cpu_dai);
unsigned int mask = 0, val = 0;
- mask = I2S_CKR_MSS_SLAVE;
+ mask = I2S_CKR_MSS_MASK;
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBS_CFS:
- val = I2S_CKR_MSS_SLAVE;
+ /* Set source clock in Master mode */
+ val = I2S_CKR_MSS_MASTER;
break;
case SND_SOC_DAIFMT_CBM_CFM:
- val = I2S_CKR_MSS_MASTER;
+ val = I2S_CKR_MSS_SLAVE;
break;
default:
return -EINVAL;
@@ -361,6 +362,8 @@
case I2S_XFER:
case I2S_CLR:
case I2S_RXDR:
+ case I2S_FIFOLR:
+ case I2S_INTSR:
return true;
default:
return false;
@@ -370,8 +373,8 @@
static bool rockchip_i2s_volatile_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
- case I2S_FIFOLR:
case I2S_INTSR:
+ case I2S_CLR:
return true;
default:
return false;
@@ -381,8 +384,6 @@
static bool rockchip_i2s_precious_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
- case I2S_FIFOLR:
- return true;
default:
return false;
}
diff --git a/sound/soc/samsung/i2s.c b/sound/soc/samsung/i2s.c
index 03eec22..9d51347 100644
--- a/sound/soc/samsung/i2s.c
+++ b/sound/soc/samsung/i2s.c
@@ -462,7 +462,7 @@
if (dir == SND_SOC_CLOCK_IN)
rfs = 0;
- if ((rfs && other->rfs && (other->rfs != rfs)) ||
+ if ((rfs && other && other->rfs && (other->rfs != rfs)) ||
(any_active(i2s) &&
(((dir == SND_SOC_CLOCK_IN)
&& !(mod & MOD_CDCLKCON)) ||
@@ -762,7 +762,8 @@
} else {
u32 mod = readl(i2s->addr + I2SMOD);
i2s->cdclk_out = !(mod & MOD_CDCLKCON);
- other->cdclk_out = i2s->cdclk_out;
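+		/* the secondary DAI may be absent, leaving other NULL */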
+ if (other)
+ other->cdclk_out = i2s->cdclk_out;
}
/* Reset any constraint on RFS and BFS */
i2s->rfs = 0;
diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
index 27c06ac..3092b58 100644
--- a/sound/soc/soc-compress.c
+++ b/sound/soc/soc-compress.c
@@ -101,7 +101,11 @@
fe->dpcm[stream].runtime = fe_substream->runtime;
- if (dpcm_path_get(fe, stream, &list) <= 0) {
+ ret = dpcm_path_get(fe, stream, &list);
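+	/* a negative return is a real error; zero only means no route */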
+ if (ret < 0) {
+ mutex_unlock(&fe->card->mutex);
+ goto fe_err;
+ } else if (ret == 0) {
dev_dbg(fe->dev, "ASoC: %s no valid %s route\n",
fe->dai_link->name, stream ? "capture" : "playback");
}
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index 731fdb5..642c862 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -2352,7 +2352,11 @@
mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
fe->dpcm[stream].runtime = fe_substream->runtime;
- if (dpcm_path_get(fe, stream, &list) <= 0) {
+ ret = dpcm_path_get(fe, stream, &list);
+ if (ret < 0) {
+ mutex_unlock(&fe->card->mutex);
+ return ret;
+ } else if (ret == 0) {
dev_dbg(fe->dev, "ASoC: %s no valid %s route\n",
fe->dai_link->name, stream ? "capture" : "playback");
}
diff --git a/sound/soc/spear/spear_pcm.c b/sound/soc/spear/spear_pcm.c
index 0e5a8f3..a7dc3c5 100644
--- a/sound/soc/spear/spear_pcm.c
+++ b/sound/soc/spear/spear_pcm.c
@@ -4,7 +4,7 @@
* sound/soc/spear/spear_pcm.c
*
* Copyright (C) 2012 ST Microelectronics
- * Rajeev Kumar<rajeev-dlh.kumar@st.com>
+ * Rajeev Kumar<rajeevkumar.linux@gmail.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
@@ -50,6 +50,6 @@
}
EXPORT_SYMBOL_GPL(devm_spear_pcm_platform_register);
-MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
+MODULE_AUTHOR("Rajeev Kumar <rajeevkumar.linux@gmail.com>");
MODULE_DESCRIPTION("SPEAr PCM DMA module");
MODULE_LICENSE("GPL");
diff --git a/sound/usb/caiaq/control.c b/sound/usb/caiaq/control.c
index f65fc09..b7a7c80 100644
--- a/sound/usb/caiaq/control.c
+++ b/sound/usb/caiaq/control.c
@@ -100,15 +100,19 @@
struct snd_usb_caiaqdev *cdev = caiaqdev(chip->card);
int pos = kcontrol->private_value;
int v = ucontrol->value.integer.value[0];
- unsigned char cmd = EP1_CMD_WRITE_IO;
+ unsigned char cmd;
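+	/* only devices with dimmable LEDs take EP1_CMD_DIMM_LEDS;
+	 * everything else uses a plain I/O write */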
- if (cdev->chip.usb_id ==
- USB_ID(USB_VID_NATIVEINSTRUMENTS, USB_PID_TRAKTORKONTROLX1))
+ switch (cdev->chip.usb_id) {
+ case USB_ID(USB_VID_NATIVEINSTRUMENTS, USB_PID_MASCHINECONTROLLER):
+ case USB_ID(USB_VID_NATIVEINSTRUMENTS, USB_PID_TRAKTORKONTROLX1):
+ case USB_ID(USB_VID_NATIVEINSTRUMENTS, USB_PID_KORECONTROLLER2):
+ case USB_ID(USB_VID_NATIVEINSTRUMENTS, USB_PID_KORECONTROLLER):
cmd = EP1_CMD_DIMM_LEDS;
-
- if (cdev->chip.usb_id ==
- USB_ID(USB_VID_NATIVEINSTRUMENTS, USB_PID_MASCHINECONTROLLER))
- cmd = EP1_CMD_DIMM_LEDS;
+ break;
+ default:
+ cmd = EP1_CMD_WRITE_IO;
+ break;
+ }
if (pos & CNT_INTVAL) {
int i = pos & ~CNT_INTVAL;
diff --git a/tools/usb/usbip/libsrc/usbip_common.h b/tools/usb/usbip/libsrc/usbip_common.h
index 5a0e95e..15fe792 100644
--- a/tools/usb/usbip/libsrc/usbip_common.h
+++ b/tools/usb/usbip/libsrc/usbip_common.h
@@ -15,7 +15,7 @@
#include <syslog.h>
#include <unistd.h>
#include <linux/usb/ch9.h>
-#include "../../uapi/usbip.h"
+#include <linux/usbip.h>
#ifndef USBIDS_FILE
#define USBIDS_FILE "/usr/share/hwdata/usb.ids"
diff --git a/virt/kvm/arm/vgic-v2.c b/virt/kvm/arm/vgic-v2.c
index 01124ef..416baed 100644
--- a/virt/kvm/arm/vgic-v2.c
+++ b/virt/kvm/arm/vgic-v2.c
@@ -71,7 +71,7 @@
struct vgic_lr lr_desc)
{
if (!(lr_desc.state & LR_STATE_MASK))
- set_bit(lr, (unsigned long *)vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr);
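+		/* runs on the owning VCPU only, so the non-atomic __set_bit suffices */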
+ __set_bit(lr, (unsigned long *)vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr);
}
static u64 vgic_v2_get_elrsr(const struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 33712fb..95519bc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -110,7 +110,7 @@
bool kvm_is_mmio_pfn(pfn_t pfn)
{
if (pfn_valid(pfn))
- return PageReserved(pfn_to_page(pfn));
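+		/* the zero page is marked reserved but is normal RAM, not MMIO */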
+ return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
return true;
}
@@ -1725,7 +1725,7 @@
rcu_read_lock();
pid = rcu_dereference(target->pid);
if (pid)
- task = get_pid_task(target->pid, PIDTYPE_PID);
+ task = get_pid_task(pid, PIDTYPE_PID);
rcu_read_unlock();
if (!task)
return ret;