Merge tag 'net-6.2-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf, wifi, and netfilter.

  Current release - regressions:

   - bpf: fix nullness propagation for reg to reg comparisons, avoid
     null-deref

   - inet: control sockets should not use current thread task_frag

   - bpf: always use maximal size for copy_array()

   - eth: bnxt_en: don't link netdev to a devlink port for VFs

  Current release - new code bugs:

   - rxrpc: fix a couple of potential use-after-frees

   - netfilter: conntrack: fix IPv6 exthdr error check

   - wifi: iwlwifi: fw: skip PPAG for JF, avoid FW crashes

   - eth: dsa: qca8k: various fixes for the in-band register access

   - eth: nfp: fix schedule in atomic context when sync mc address

   - eth: renesas: rswitch: fix getting mac address from device tree

   - mobile: ipa: use proper endpoint mask for suspend

  Previous releases - regressions:

   - tcp: add TIME_WAIT sockets in bhash2, fix regression caught by
     Jiri / python tests

   - net: tc: don't interpret cls results when asked to drop, fix
     oob-access

   - vrf: determine the dst using the original ifindex for multicast

   - eth: bnxt_en:
      - fix XDP RX path if BPF adjusted packet length
      - fix HDS (header placement) and jumbo thresholds for RX packets

   - eth: ice: xsk: do not use xdp_return_frame() on tx_buf->raw_buf,
     avoid memory corruptions

  Previous releases - always broken:

   - ulp: prevent ULP without clone op from entering the LISTEN status

   - veth: fix race with AF_XDP exposing old or uninitialized
     descriptors

   - bpf:
      - pull before calling skb_postpull_rcsum() (fix checksum support
        and avoid a WARN())
      - fix panic due to wrong pageattr of im->image (when livepatch and
        kretfunc coexist)
      - keep a reference to the mm, in case the task is dead

   - mptcp: fix deadlock in fastopen error path

   - netfilter:
      - nf_tables: perform type checking for existing sets
      - nf_tables: honor set timeout and garbage collection updates
      - ipset: fix hash:net,port,net hang with /0 subnet
      - ipset: avoid hung task warning when adding/deleting entries

   - selftests: net:
      - fix cmsg_so_mark.sh test hang on non-x86 systems
      - fix the arp_ndisc_evict_nocarrier test for IPv6

   - usb: rndis_host: secure rndis_query check against int overflow

   - eth: r8169: fix dmar pte write access during suspend/resume with
     WOL

   - eth: lan966x: fix configuration of the PCS

   - eth: sparx5: fix reading of the MAC address

   - eth: qed: allow sleep in qed_mcp_trace_dump()

   - eth: hns3:
      - fix interrupts re-initialization after VF FLR
      - fix handling of promisc when MAC addr table gets full
      - refine the handling for VF heartbeat

   - eth: mlx5:
      - properly handle ingress QinQ-tagged packets on VST
      - fix io_eq_size and event_eq_size params validation on big endian
      - fix RoCE setting at HCA level if not supported at all
      - don't turn CQE compression on by default for IPoIB

   - eth: ena:
      - fix toeplitz initial hash key value
      - account for the number of XDP-processed bytes in interface stats
      - fix rx_copybreak value update

  Misc:

   - ethtool: harden phy stat handling against buggy drivers

   - docs: netdev: convert maintainer's doc from FAQ to a normal
     document"

* tag 'net-6.2-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (112 commits)
  caif: fix memory leak in cfctrl_linkup_request()
  inet: control sockets should not use current thread task_frag
  net/ulp: prevent ULP without clone op from entering the LISTEN status
  qed: allow sleep in qed_mcp_trace_dump()
  MAINTAINERS: Update maintainers for ptp_vmw driver
  usb: rndis_host: Secure rndis_query check against int overflow
  net: dpaa: Fix dtsec check for PCS availability
  octeontx2-pf: Fix lmtst ID used in aura free
  drivers/net/bonding/bond_3ad: return when there's no aggregator
  netfilter: ipset: Rework long task execution when adding/deleting entries
  netfilter: ipset: fix hash:net,port,net hang with /0 subnet
  net: sparx5: Fix reading of the MAC address
  vxlan: Fix memory leaks in error path
  net: sched: htb: fix htb_classify() kernel-doc
  net: sched: cbq: dont intepret cls results when asked to drop
  net: sched: atm: dont intepret cls results when asked to drop
  dt-bindings: net: marvell,orion-mdio: Fix examples
  dt-bindings: net: sun8i-emac: Add phy-supply property
  net: ipa: use proper endpoint mask for suspend
  selftests: net: return non-zero for failures reported in arp_ndisc_evict_nocarrier
  ...
Linus Torvalds 2023-01-05 12:40:50 -08:00
commit 50011c32f4
128 changed files with 1998 additions and 854 deletions

View File

@@ -40,6 +40,9 @@ properties:
   clock-names:
     const: stmmaceth
 
+  phy-supply:
+    description: PHY regulator
+
   syscon:
     $ref: /schemas/types.yaml#/definitions/phandle
     description:

View File

@@ -16,9 +16,6 @@ description: |
   8k has a second unit which provides an interface with the xMDIO bus. This
   driver handles these interfaces.
 
-allOf:
-  - $ref: "mdio.yaml#"
-
 properties:
   compatible:
     enum:
@@ -39,13 +36,38 @@ required:
   - compatible
   - reg
 
+allOf:
+  - $ref: mdio.yaml#
+
+  - if:
+      required:
+        - interrupts
+
+    then:
+      properties:
+        reg:
+          items:
+            - items:
+                - $ref: /schemas/types.yaml#/definitions/cell
+                - const: 0x84
+
+    else:
+      properties:
+        reg:
+          items:
+            - items:
+                - $ref: /schemas/types.yaml#/definitions/cell
+                - enum:
+                    - 0x4
+                    - 0x10
+
 unevaluatedProperties: false
 
 examples:
   - |
     mdio@d0072004 {
       compatible = "marvell,orion-mdio";
-      reg = <0xd0072004 0x4>;
+      reg = <0xd0072004 0x84>;
       #address-cells = <1>;
       #size-cells = <0>;
       interrupts = <30>;

View File

@ -2,9 +2,9 @@
.. _netdev-FAQ: .. _netdev-FAQ:
========== =============================
netdev FAQ Networking subsystem (netdev)
========== =============================
tl;dr tl;dr
----- -----
@ -15,14 +15,15 @@ tl;dr
- don't repost your patches within one 24h period - don't repost your patches within one 24h period
- reverse xmas tree - reverse xmas tree
What is netdev? netdev
--------------- ------
It is a mailing list for all network-related Linux stuff. This
netdev is a mailing list for all network-related Linux stuff. This
includes anything found under net/ (i.e. core code like IPv6) and includes anything found under net/ (i.e. core code like IPv6) and
drivers/net (i.e. hardware specific drivers) in the Linux source tree. drivers/net (i.e. hardware specific drivers) in the Linux source tree.
Note that some subsystems (e.g. wireless drivers) which have a high Note that some subsystems (e.g. wireless drivers) which have a high
volume of traffic have their own specific mailing lists. volume of traffic have their own specific mailing lists and trees.
The netdev list is managed (like many other Linux mailing lists) through The netdev list is managed (like many other Linux mailing lists) through
VGER (http://vger.kernel.org/) with archives available at VGER (http://vger.kernel.org/) with archives available at
@ -32,32 +33,10 @@ Aside from subsystems like those mentioned above, all network-related
Linux development (i.e. RFC, review, comments, etc.) takes place on Linux development (i.e. RFC, review, comments, etc.) takes place on
netdev. netdev.
How do the changes posted to netdev make their way into Linux? Development cycle
-------------------------------------------------------------- -----------------
There are always two trees (git repositories) in play. Both are
driven by David Miller, the main network maintainer. There is the
``net`` tree, and the ``net-next`` tree. As you can probably guess from
the names, the ``net`` tree is for fixes to existing code already in the
mainline tree from Linus, and ``net-next`` is where the new code goes
for the future release. You can find the trees here:
- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git Here is a bit of background information on
- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
How do I indicate which tree (net vs. net-next) my patch should be in?
----------------------------------------------------------------------
To help maintainers and CI bots you should explicitly mark which tree
your patch is targeting. Assuming that you use git, use the prefix
flag::
git format-patch --subject-prefix='PATCH net-next' start..finish
Use ``net`` instead of ``net-next`` (always lower case) in the above for
bug-fix ``net`` content.
How often do changes from these trees make it to the mainline Linus tree?
-------------------------------------------------------------------------
To understand this, you need to know a bit of background information on
the cadence of Linux development. Each new release starts off with a the cadence of Linux development. Each new release starts off with a
two week "merge window" where the main maintainers feed their new stuff two week "merge window" where the main maintainers feed their new stuff
to Linus for merging into the mainline tree. After the two weeks, the to Linus for merging into the mainline tree. After the two weeks, the
@ -69,9 +48,33 @@ rc2 is released. This repeats on a roughly weekly basis until rc7
state of churn), and a week after the last vX.Y-rcN was done, the state of churn), and a week after the last vX.Y-rcN was done, the
official vX.Y is released. official vX.Y is released.
Relating that to netdev: At the beginning of the 2-week merge window, To find out where we are now in the cycle - load the mainline (Linus)
the ``net-next`` tree will be closed - no new changes/features. The page here:
accumulated new content of the past ~10 weeks will be passed onto
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
and note the top of the "tags" section. If it is rc1, it is early in
the dev cycle. If it was tagged rc7 a week ago, then a release is
probably imminent. If the most recent tag is a final release tag
(without an ``-rcN`` suffix) - we are most likely in a merge window
and ``net-next`` is closed.
git trees and patch flow
------------------------
There are two networking trees (git repositories) in play. Both are
driven by David Miller, the main network maintainer. There is the
``net`` tree, and the ``net-next`` tree. As you can probably guess from
the names, the ``net`` tree is for fixes to existing code already in the
mainline tree from Linus, and ``net-next`` is where the new code goes
for the future release. You can find the trees here:
- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
Relating that to kernel development: At the beginning of the 2-week
merge window, the ``net-next`` tree will be closed - no new changes/features.
The accumulated new content of the past ~10 weeks will be passed onto
mainline/Linus via a pull request for vX.Y -- at the same time, the mainline/Linus via a pull request for vX.Y -- at the same time, the
``net`` tree will start accumulating fixes for this pulled content ``net`` tree will start accumulating fixes for this pulled content
relating to vX.Y relating to vX.Y
@ -103,22 +106,14 @@ focus for ``net`` is on stabilization and bug fixes.
Finally, the vX.Y gets released, and the whole cycle starts over. Finally, the vX.Y gets released, and the whole cycle starts over.
So where are we now in this cycle? netdev patch review
---------------------------------- -------------------
Load the mainline (Linus) page here: Patch status
~~~~~~~~~~~~
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git Status of a patch can be checked by looking at the main patchwork
queue for netdev:
and note the top of the "tags" section. If it is rc1, it is early in
the dev cycle. If it was tagged rc7 a week ago, then a release is
probably imminent. If the most recent tag is a final release tag
(without an ``-rcN`` suffix) - we are most likely in a merge window
and ``net-next`` is closed.
How can I tell the status of a patch I've sent?
-----------------------------------------------
Start by looking at the main patchworks queue for netdev:
https://patchwork.kernel.org/project/netdevbpf/list/ https://patchwork.kernel.org/project/netdevbpf/list/
@ -127,8 +122,20 @@ patch. Patches are indexed by the ``Message-ID`` header of the emails
which carried them so if you have trouble finding your patch append which carried them so if you have trouble finding your patch append
the value of ``Message-ID`` to the URL above. the value of ``Message-ID`` to the URL above.
How long before my patch is accepted? Updating patch status
------------------------------------- ~~~~~~~~~~~~~~~~~~~~~
It may be tempting to help the maintainers and update the state of your
own patches when you post a new version or spot a bug. Please **do not**
do that.
Interfering with the patch status on patchwork will only cause confusion. Leave
it to the maintainer to figure out what is the most recent and current
version that should be applied. If there is any doubt, the maintainer
will reply and ask what should be done.
Review timelines
~~~~~~~~~~~~~~~~
Generally speaking, the patches get triaged quickly (in less than Generally speaking, the patches get triaged quickly (in less than
48h). But be patient, if your patch is active in patchwork (i.e. it's 48h). But be patient, if your patch is active in patchwork (i.e. it's
listed on the project's patch list) the chances it was missed are close to zero. listed on the project's patch list) the chances it was missed are close to zero.
@ -136,116 +143,47 @@ Asking the maintainer for status updates on your
patch is a good way to ensure your patch is ignored or pushed to the patch is a good way to ensure your patch is ignored or pushed to the
bottom of the priority list. bottom of the priority list.
Should I directly update patchwork state of my own patches? Partial resends
----------------------------------------------------------- ~~~~~~~~~~~~~~~
It may be tempting to help the maintainers and update the state of your
own patches when you post a new version or spot a bug. Please do not do that.
Interfering with the patch status on patchwork will only cause confusion. Leave
it to the maintainer to figure out what is the most recent and current
version that should be applied. If there is any doubt, the maintainer
will reply and ask what should be done.
How do I divide my work into patches? Please always resend the entire patch series and make sure you do number your
-------------------------------------
Put yourself in the shoes of the reviewer. Each patch is read separately
and therefore should constitute a comprehensible step towards your stated
goal.
Avoid sending series longer than 15 patches. Larger series takes longer
to review as reviewers will defer looking at it until they find a large
chunk of time. A small series can be reviewed in a short time, so Maintainers
just do it. As a result, a sequence of smaller series gets merged quicker and
with better review coverage. Re-posting large series also increases the mailing
list traffic.
I made changes to only a few patches in a patch series should I resend only those changed?
------------------------------------------------------------------------------------------
No, please resend the entire patch series and make sure you do number your
patches such that it is clear this is the latest and greatest set of patches patches such that it is clear this is the latest and greatest set of patches
that can be applied. that can be applied. Do not try to resend just the patches which changed.
I have received review feedback, when should I post a revised version of the patches? Handling misapplied patches
------------------------------------------------------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
Allow at least 24 hours to pass between postings. This will ensure reviewers
from all geographical locations have a chance to chime in. Do not wait
too long (weeks) between postings either as it will make it harder for reviewers
to recall all the context.
Make sure you address all the feedback in your new posting. Do not post a new Occasionally a patch series gets applied before receiving critical feedback,
version of the code if the discussion about the previous version is still or the wrong version of a series gets applied.
ongoing, unless directly instructed by a reviewer.
I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do?
----------------------------------------------------------------------------------------------------------------------------------------
There is no revert possible, once it is pushed out, it stays like that. There is no revert possible, once it is pushed out, it stays like that.
Please send incremental versions on top of what has been merged in order to fix Please send incremental versions on top of what has been merged in order to fix
the patches the way they would look like if your latest patch series was to be the patches the way they would look like if your latest patch series was to be
merged. merged.
Are there special rules regarding stable submissions on netdev? Stable tree
--------------------------------------------------------------- ~~~~~~~~~~~
While it used to be the case that netdev submissions were not supposed While it used to be the case that netdev submissions were not supposed
to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
the case today. Please follow the standard stable rules in the case today. Please follow the standard stable rules in
:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`, :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
and make sure you include appropriate Fixes tags! and make sure you include appropriate Fixes tags!
Is the comment style convention different for the networking content? Security fixes
--------------------------------------------------------------------- ~~~~~~~~~~~~~~
Yes, in a largely trivial way. Instead of this::
/* Do not email netdev maintainers directly if you think you discovered
* foobar blah blah blah a bug that might have possible security implications.
* another line of text The current netdev maintainer has consistently requested that
*/
it is requested that you make it look like this::
/* foobar blah blah blah
* another line of text
*/
What is "reverse xmas tree"?
----------------------------
Netdev has a convention for ordering local variables in functions.
Order the variable declaration lines longest to shortest, e.g.::
struct scatterlist *sg;
struct sk_buff *skb;
int err, i;
If there are dependencies between the variables preventing the ordering
move the initialization out of line.
I am working in existing code which uses non-standard formatting. Which formatting should I use?
------------------------------------------------------------------------------------------------
Make your code follow the most recent guidelines, so that eventually all code
in the domain of netdev is in the preferred format.
I found a bug that might have possible security implications or similar. Should I mail the main netdev maintainer off-list?
---------------------------------------------------------------------------------------------------------------------------
No. The current netdev maintainer has consistently requested that
people use the mailing lists and not reach out directly. If you aren't people use the mailing lists and not reach out directly. If you aren't
OK with that, then perhaps consider mailing security@kernel.org or OK with that, then perhaps consider mailing security@kernel.org or
reading about http://oss-security.openwall.org/wiki/mailing-lists/distros reading about http://oss-security.openwall.org/wiki/mailing-lists/distros
as possible alternative mechanisms. as possible alternative mechanisms.
What level of testing is expected before I submit my change?
------------------------------------------------------------
At the very minimum your changes must survive an ``allyesconfig`` and an
``allmodconfig`` build with ``W=1`` set without new warnings or failures.
Ideally you will have done run-time testing specific to your change, Co-posting changes to user space components
and the patch series contains a set of kernel selftest for ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``tools/testing/selftests/net`` or using the KUnit framework.
You are expected to test your changes on top of the relevant networking
tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.
How do I post corresponding changes to user space components?
-------------------------------------------------------------
User space code exercising kernel features should be posted User space code exercising kernel features should be posted
alongside kernel patches. This gives reviewers a chance to see alongside kernel patches. This gives reviewers a chance to see
how any new interface is used and how well it works. how any new interface is used and how well it works.
@ -270,42 +208,10 @@ to the mailing list, e.g.::
Posting as one thread is discouraged because it confuses patchwork Posting as one thread is discouraged because it confuses patchwork
(as of patchwork 2.2.2). (as of patchwork 2.2.2).
Can I reproduce the checks from patchwork on my local machine? Preparing changes
-------------------------------------------------------------- -----------------
Checks in patchwork are mostly simple wrappers around existing kernel Attention to detail is important. Re-read your own work as if you were the
scripts, the sources are available at:
https://github.com/kuba-moo/nipa/tree/master/tests
Running all the builds and checks locally is a pain, can I post my patches and have the patchwork bot validate them?
--------------------------------------------------------------------------------------------------------------------
No, you must ensure that your patches are ready by testing them locally
before posting to the mailing list. The patchwork build bot instance
gets overloaded very easily and netdev@vger really doesn't need more
traffic if we can help it.
netdevsim is great, can I extend it for my out-of-tree tests?
-------------------------------------------------------------
No, ``netdevsim`` is a test vehicle solely for upstream tests.
(Please add your tests under ``tools/testing/selftests/``.)
We also give no guarantees that ``netdevsim`` won't change in the future
in a way which would break what would normally be considered uAPI.
Is netdevsim considered a "user" of an API?
-------------------------------------------
Linux kernel has a long standing rule that no API should be added unless
it has a real, in-tree user. Mock-ups and tests based on ``netdevsim`` are
strongly encouraged when adding new APIs, but ``netdevsim`` in itself
is **not** considered a use case/user.
Any other tips to help ensure my net/net-next patch gets OK'd?
--------------------------------------------------------------
Attention to detail. Re-read your own work as if you were the
reviewer. You can start with using ``checkpatch.pl``, perhaps even with reviewer. You can start with using ``checkpatch.pl``, perhaps even with
the ``--strict`` flag. But do not be mindlessly robotic in doing so. the ``--strict`` flag. But do not be mindlessly robotic in doing so.
If your change is a bug fix, make sure your commit log indicates the If your change is a bug fix, make sure your commit log indicates the
@ -320,10 +226,133 @@ Finally, go back and read
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>` :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
to be sure you are not repeating some common mistake documented there. to be sure you are not repeating some common mistake documented there.
My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback? Indicating target tree
--------------------------------------------------------------------------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~
Yes, especially if you spend significant amount of time reviewing code To help maintainers and CI bots you should explicitly mark which tree
your patch is targeting. Assuming that you use git, use the prefix
flag::
git format-patch --subject-prefix='PATCH net-next' start..finish
Use ``net`` instead of ``net-next`` (always lower case) in the above for
bug-fix ``net`` content.
Dividing work into patches
~~~~~~~~~~~~~~~~~~~~~~~~~~
Put yourself in the shoes of the reviewer. Each patch is read separately
and therefore should constitute a comprehensible step towards your stated
goal.
Avoid sending series longer than 15 patches. Larger series takes longer
to review as reviewers will defer looking at it until they find a large
chunk of time. A small series can be reviewed in a short time, so Maintainers
just do it. As a result, a sequence of smaller series gets merged quicker and
with better review coverage. Re-posting large series also increases the mailing
list traffic.
Multi-line comments
~~~~~~~~~~~~~~~~~~~
Comment style convention is slightly different for networking and most of
the tree. Instead of this::
/*
* foobar blah blah blah
* another line of text
*/
it is requested that you make it look like this::
/* foobar blah blah blah
* another line of text
*/
Local variable ordering ("reverse xmas tree", "RCS")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Netdev has a convention for ordering local variables in functions.
Order the variable declaration lines longest to shortest, e.g.::
struct scatterlist *sg;
struct sk_buff *skb;
int err, i;
If there are dependencies between the variables preventing the ordering
move the initialization out of line.
Format precedence
~~~~~~~~~~~~~~~~~
When working in existing code which uses nonstandard formatting make
your code follow the most recent guidelines, so that eventually all code
in the domain of netdev is in the preferred format.
Resending after review
~~~~~~~~~~~~~~~~~~~~~~
Allow at least 24 hours to pass between postings. This will ensure reviewers
from all geographical locations have a chance to chime in. Do not wait
too long (weeks) between postings either as it will make it harder for reviewers
to recall all the context.
Make sure you address all the feedback in your new posting. Do not post a new
version of the code if the discussion about the previous version is still
ongoing, unless directly instructed by a reviewer.
Testing
-------
Expected level of testing
~~~~~~~~~~~~~~~~~~~~~~~~~
At the very minimum your changes must survive an ``allyesconfig`` and an
``allmodconfig`` build with ``W=1`` set without new warnings or failures.
Ideally you will have done run-time testing specific to your change,
and the patch series contains a set of kernel selftest for
``tools/testing/selftests/net`` or using the KUnit framework.
You are expected to test your changes on top of the relevant networking
tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.
patchwork checks
~~~~~~~~~~~~~~~~
Checks in patchwork are mostly simple wrappers around existing kernel
scripts, the sources are available at:
https://github.com/kuba-moo/nipa/tree/master/tests
**Do not** post your patches just to run them through the checks.
You must ensure that your patches are ready by testing them locally
before posting to the mailing list. The patchwork build bot instance
gets overloaded very easily and netdev@vger really doesn't need more
traffic if we can help it.
netdevsim
~~~~~~~~~
``netdevsim`` is a test driver which can be used to exercise driver
configuration APIs without requiring capable hardware.
Mock-ups and tests based on ``netdevsim`` are strongly encouraged when
adding new APIs, but ``netdevsim`` in itself is **not** considered
a use case/user. You must also implement the new APIs in a real driver.
We give no guarantees that ``netdevsim`` won't change in the future
in a way which would break what would normally be considered uAPI.
``netdevsim`` is reserved for use by upstream tests only, so any
new ``netdevsim`` features must be accompanied by selftests under
``tools/testing/selftests/``.
Testimonials / feedback
-----------------------
Some companies use peer feedback in employee performance reviews.
Please feel free to request feedback from netdev maintainers,
especially if you spend significant amount of time reviewing code
and go out of your way to improve shared infrastructure. and go out of your way to improve shared infrastructure.
The feedback must be requested by you, the contributor, and will always The feedback must be requested by you, the contributor, and will always

View File

@@ -22246,7 +22246,9 @@ F: drivers/scsi/vmw_pvscsi.c
 F: drivers/scsi/vmw_pvscsi.h
 
 VMWARE VIRTUAL PTP CLOCK DRIVER
-M: Vivek Thampi <vithampi@vmware.com>
+M: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
+M: Deep Shah <sdeep@vmware.com>
+R: Alexey Makhalov <amakhalov@vmware.com>
 R: VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 L: netdev@vger.kernel.org
 S: Supported

View File

@@ -659,3 +659,19 @@
             interrupts = <16 2 1 9>;
         };
     };
+
+&fman0_rx_0x08 {
+    /delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_tx_0x28 {
+    /delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_rx_0x09 {
+    /delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_tx_0x29 {
+    /delete-property/ fsl,fman-10g-port;
+};

View File

@@ -1549,6 +1549,7 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
             slave_err(bond->dev, port->slave->dev,
                       "Port %d did not find a suitable aggregator\n",
                       port->actor_port_number);
+            return;
         }
     }
     /* if all aggregator's ports are READY_N == TRUE, set ready=TRUE

View File

@@ -2654,10 +2654,12 @@ static void bond_miimon_link_change(struct bonding *bond,
 
 static void bond_miimon_commit(struct bonding *bond)
 {
-    struct slave *slave, *primary;
+    struct slave *slave, *primary, *active;
     bool do_failover = false;
     struct list_head *iter;
 
+    ASSERT_RTNL();
+
     bond_for_each_slave(bond, slave, iter) {
         switch (slave->link_new_state) {
         case BOND_LINK_NOCHANGE:
@@ -2700,8 +2702,8 @@ static void bond_miimon_commit(struct bonding *bond)
 
             bond_miimon_link_change(bond, slave, BOND_LINK_UP);
 
-            if (!rcu_access_pointer(bond->curr_active_slave) || slave == primary ||
-                slave->prio > rcu_dereference(bond->curr_active_slave)->prio)
+            active = rtnl_dereference(bond->curr_active_slave);
+            if (!active || slave == primary || slave->prio > active->prio)
                 do_failover = true;
 
             continue;
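
The notable detail in the hunk above is that bond->curr_active_slave is now read exactly once, via rtnl_dereference(), into a local `active` that is used for both the NULL test and the priority comparison (the function runs under RTNL, hence the added ASSERT_RTNL()). Below is a hedged, standalone C sketch of that snapshot-then-use pattern; the ex_* names are hypothetical and this is not the bonding code itself:

#include <stdio.h>

/* Illustrative only: read a shared pointer into a local once, then perform
 * every test and dereference on that local. The earlier shape tested the
 * shared pointer and then looked it up a second time for the dereference,
 * so the test and the use could disagree.
 */
struct ex_slave {
        int prio;
};

static struct ex_slave *ex_curr_active;         /* shared; may be NULL */

static int ex_should_failover(const struct ex_slave *candidate)
{
        const struct ex_slave *active = ex_curr_active;  /* snapshot once */

        return !active || candidate->prio > active->prio;
}

int main(void)
{
        struct ex_slave low = { .prio = 10 }, high = { .prio = 20 };

        ex_curr_active = NULL;
        printf("no active slave  -> failover: %d\n", ex_should_failover(&low));

        ex_curr_active = &low;
        printf("higher-prio link -> failover: %d\n", ex_should_failover(&high));
        return 0;
}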

View File

@@ -2,7 +2,6 @@
 config NET_DSA_MV88E6XXX
     tristate "Marvell 88E6xxx Ethernet switch fabric support"
     depends on NET_DSA
-    depends on PTP_1588_CLOCK_OPTIONAL
     select IRQ_DOMAIN
     select NET_DSA_TAG_EDSA
     select NET_DSA_TAG_DSA
@@ -13,7 +12,8 @@ config NET_DSA_MV88E6XXX
 config NET_DSA_MV88E6XXX_PTP
     bool "PTP support for Marvell 88E6xxx"
     default n
-    depends on NET_DSA_MV88E6XXX && PTP_1588_CLOCK
+    depends on (NET_DSA_MV88E6XXX = y && PTP_1588_CLOCK = y) || \
+               (NET_DSA_MV88E6XXX = m && PTP_1588_CLOCK)
     help
       Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch
       chips that support it.

View File

@ -37,77 +37,104 @@ qca8k_split_addr(u32 regaddr, u16 *r1, u16 *r2, u16 *page)
} }
static int static int
qca8k_set_lo(struct qca8k_priv *priv, int phy_id, u32 regnum, u16 lo) qca8k_mii_write_lo(struct mii_bus *bus, int phy_id, u32 regnum, u32 val)
{ {
u16 *cached_lo = &priv->mdio_cache.lo;
struct mii_bus *bus = priv->bus;
int ret; int ret;
u16 lo;
if (lo == *cached_lo) lo = val & 0xffff;
return 0;
ret = bus->write(bus, phy_id, regnum, lo); ret = bus->write(bus, phy_id, regnum, lo);
if (ret < 0) if (ret < 0)
dev_err_ratelimited(&bus->dev, dev_err_ratelimited(&bus->dev,
"failed to write qca8k 32bit lo register\n"); "failed to write qca8k 32bit lo register\n");
*cached_lo = lo; return ret;
return 0;
} }
static int static int
qca8k_set_hi(struct qca8k_priv *priv, int phy_id, u32 regnum, u16 hi) qca8k_mii_write_hi(struct mii_bus *bus, int phy_id, u32 regnum, u32 val)
{ {
u16 *cached_hi = &priv->mdio_cache.hi;
struct mii_bus *bus = priv->bus;
int ret; int ret;
u16 hi;
if (hi == *cached_hi) hi = (u16)(val >> 16);
return 0;
ret = bus->write(bus, phy_id, regnum, hi); ret = bus->write(bus, phy_id, regnum, hi);
if (ret < 0) if (ret < 0)
dev_err_ratelimited(&bus->dev, dev_err_ratelimited(&bus->dev,
"failed to write qca8k 32bit hi register\n"); "failed to write qca8k 32bit hi register\n");
*cached_hi = hi; return ret;
}
static int
qca8k_mii_read_lo(struct mii_bus *bus, int phy_id, u32 regnum, u32 *val)
{
int ret;
ret = bus->read(bus, phy_id, regnum);
if (ret < 0)
goto err;
*val = ret & 0xffff;
return 0; return 0;
err:
dev_err_ratelimited(&bus->dev,
"failed to read qca8k 32bit lo register\n");
*val = 0;
return ret;
}
static int
qca8k_mii_read_hi(struct mii_bus *bus, int phy_id, u32 regnum, u32 *val)
{
int ret;
ret = bus->read(bus, phy_id, regnum);
if (ret < 0)
goto err;
*val = ret << 16;
return 0;
err:
dev_err_ratelimited(&bus->dev,
"failed to read qca8k 32bit hi register\n");
*val = 0;
return ret;
} }
static int static int
qca8k_mii_read32(struct mii_bus *bus, int phy_id, u32 regnum, u32 *val) qca8k_mii_read32(struct mii_bus *bus, int phy_id, u32 regnum, u32 *val)
{ {
u32 hi, lo;
int ret; int ret;
ret = bus->read(bus, phy_id, regnum); *val = 0;
if (ret >= 0) {
*val = ret;
ret = bus->read(bus, phy_id, regnum + 1);
*val |= ret << 16;
}
if (ret < 0) { ret = qca8k_mii_read_lo(bus, phy_id, regnum, &lo);
dev_err_ratelimited(&bus->dev, if (ret < 0)
"failed to read qca8k 32bit register\n"); goto err;
*val = 0;
return ret;
}
return 0; ret = qca8k_mii_read_hi(bus, phy_id, regnum + 1, &hi);
if (ret < 0)
goto err;
*val = lo | hi;
err:
return ret;
} }
static void static void
qca8k_mii_write32(struct qca8k_priv *priv, int phy_id, u32 regnum, u32 val) qca8k_mii_write32(struct mii_bus *bus, int phy_id, u32 regnum, u32 val)
{ {
u16 lo, hi; if (qca8k_mii_write_lo(bus, phy_id, regnum, val) < 0)
int ret; return;
lo = val & 0xffff; qca8k_mii_write_hi(bus, phy_id, regnum + 1, val);
hi = (u16)(val >> 16);
ret = qca8k_set_lo(priv, phy_id, regnum, lo);
if (ret >= 0)
ret = qca8k_set_hi(priv, phy_id, regnum + 1, hi);
} }
static int static int
@ -146,7 +173,16 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
command = get_unaligned_le32(&mgmt_ethhdr->command); command = get_unaligned_le32(&mgmt_ethhdr->command);
cmd = FIELD_GET(QCA_HDR_MGMT_CMD, command); cmd = FIELD_GET(QCA_HDR_MGMT_CMD, command);
len = FIELD_GET(QCA_HDR_MGMT_LENGTH, command); len = FIELD_GET(QCA_HDR_MGMT_LENGTH, command);
/* Special case for len of 15 as this is the max value for len and needs to
* be increased before converting it from word to dword.
*/
if (len == 15)
len++;
/* We can ignore odd value, we always round up them in the alloc function. */
len *= sizeof(u16);
/* Make sure the seq match the requested packet */ /* Make sure the seq match the requested packet */
if (get_unaligned_le32(&mgmt_ethhdr->seq) == mgmt_eth_data->seq) if (get_unaligned_le32(&mgmt_ethhdr->seq) == mgmt_eth_data->seq)
@ -193,17 +229,33 @@ static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *
if (!skb) if (!skb)
return NULL; return NULL;
/* Max value for len reg is 15 (0xf) but the switch actually return 16 byte /* Hdr mgmt length value is in step of word size.
* Actually for some reason the steps are: * As an example to process 4 byte of data the correct length to set is 2.
* 0: nothing * To process 8 byte 4, 12 byte 6, 16 byte 8...
* 1-4: first 4 byte *
* 5-6: first 12 byte * Odd values will always return the next size on the ack packet.
* 7-15: all 16 byte * (length of 3 (6 byte) will always return 8 bytes of data)
*
* This means that a value of 15 (0xf) actually means reading/writing 32 bytes
* of data.
*
* To correctly calculate the length we devide the requested len by word and
* round up.
* On the ack function we can skip the odd check as we already handle the
* case here.
*/ */
if (len == 16) real_len = DIV_ROUND_UP(len, sizeof(u16));
real_len = 15;
else /* We check if the result len is odd and we round up another time to
real_len = len; * the next size. (length of 3 will be increased to 4 as switch will always
* return 8 bytes)
*/
if (real_len % sizeof(u16) != 0)
real_len++;
/* Max reg value is 0xf(15) but switch will always return the next size (32 byte) */
if (real_len == 16)
real_len--;
skb_reset_mac_header(skb); skb_reset_mac_header(skb);
skb_set_network_header(skb, skb->len); skb_set_network_header(skb, skb->len);
@ -417,7 +469,7 @@ qca8k_regmap_write(void *ctx, uint32_t reg, uint32_t val)
if (ret < 0) if (ret < 0)
goto exit; goto exit;
qca8k_mii_write32(priv, 0x10 | r2, r1, val); qca8k_mii_write32(bus, 0x10 | r2, r1, val);
exit: exit:
mutex_unlock(&bus->mdio_lock); mutex_unlock(&bus->mdio_lock);
@ -450,7 +502,7 @@ qca8k_regmap_update_bits(void *ctx, uint32_t reg, uint32_t mask, uint32_t write_
val &= ~mask; val &= ~mask;
val |= write_val; val |= write_val;
qca8k_mii_write32(priv, 0x10 | r2, r1, val); qca8k_mii_write32(bus, 0x10 | r2, r1, val);
exit: exit:
mutex_unlock(&bus->mdio_lock); mutex_unlock(&bus->mdio_lock);
@ -688,9 +740,9 @@ qca8k_mdio_busy_wait(struct mii_bus *bus, u32 reg, u32 mask)
qca8k_split_addr(reg, &r1, &r2, &page); qca8k_split_addr(reg, &r1, &r2, &page);
ret = read_poll_timeout(qca8k_mii_read32, ret1, !(val & mask), 0, ret = read_poll_timeout(qca8k_mii_read_hi, ret1, !(val & mask), 0,
QCA8K_BUSY_WAIT_TIMEOUT * USEC_PER_MSEC, false, QCA8K_BUSY_WAIT_TIMEOUT * USEC_PER_MSEC, false,
bus, 0x10 | r2, r1, &val); bus, 0x10 | r2, r1 + 1, &val);
/* Check if qca8k_read has failed for a different reason /* Check if qca8k_read has failed for a different reason
* before returnting -ETIMEDOUT * before returnting -ETIMEDOUT
@ -725,14 +777,14 @@ qca8k_mdio_write(struct qca8k_priv *priv, int phy, int regnum, u16 data)
if (ret) if (ret)
goto exit; goto exit;
qca8k_mii_write32(priv, 0x10 | r2, r1, val); qca8k_mii_write32(bus, 0x10 | r2, r1, val);
ret = qca8k_mdio_busy_wait(bus, QCA8K_MDIO_MASTER_CTRL, ret = qca8k_mdio_busy_wait(bus, QCA8K_MDIO_MASTER_CTRL,
QCA8K_MDIO_MASTER_BUSY); QCA8K_MDIO_MASTER_BUSY);
exit: exit:
/* even if the busy_wait timeouts try to clear the MASTER_EN */ /* even if the busy_wait timeouts try to clear the MASTER_EN */
qca8k_mii_write32(priv, 0x10 | r2, r1, 0); qca8k_mii_write_hi(bus, 0x10 | r2, r1 + 1, 0);
mutex_unlock(&bus->mdio_lock); mutex_unlock(&bus->mdio_lock);
@ -762,18 +814,18 @@ qca8k_mdio_read(struct qca8k_priv *priv, int phy, int regnum)
if (ret) if (ret)
goto exit; goto exit;
qca8k_mii_write32(priv, 0x10 | r2, r1, val); qca8k_mii_write_hi(bus, 0x10 | r2, r1 + 1, val);
ret = qca8k_mdio_busy_wait(bus, QCA8K_MDIO_MASTER_CTRL, ret = qca8k_mdio_busy_wait(bus, QCA8K_MDIO_MASTER_CTRL,
QCA8K_MDIO_MASTER_BUSY); QCA8K_MDIO_MASTER_BUSY);
if (ret) if (ret)
goto exit; goto exit;
ret = qca8k_mii_read32(bus, 0x10 | r2, r1, &val); ret = qca8k_mii_read_lo(bus, 0x10 | r2, r1, &val);
exit: exit:
/* even if the busy_wait timeouts try to clear the MASTER_EN */ /* even if the busy_wait timeouts try to clear the MASTER_EN */
qca8k_mii_write32(priv, 0x10 | r2, r1, 0); qca8k_mii_write_hi(bus, 0x10 | r2, r1 + 1, 0);
mutex_unlock(&bus->mdio_lock); mutex_unlock(&bus->mdio_lock);
@ -1943,8 +1995,6 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
} }
priv->mdio_cache.page = 0xffff; priv->mdio_cache.page = 0xffff;
priv->mdio_cache.lo = 0xffff;
priv->mdio_cache.hi = 0xffff;
/* Check the detected switch id */ /* Check the detected switch id */
ret = qca8k_read_switch_id(priv); ret = qca8k_read_switch_id(priv);
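
The comment blocks in the hunk above describe how the mgmt Ethernet header encodes transfer sizes: the length field counts 16-bit words, odd word counts are rounded up by the switch, and the maximum field value 15 (0xf) actually selects a 32-byte transfer. As a quick illustration, here is a hedged, standalone userspace sketch of that conversion in both directions; the qca8k_example_* helpers are hypothetical and not the driver's functions:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

/* Requested byte count -> value to program into the mgmt header. */
static unsigned int qca8k_example_bytes_to_len_field(unsigned int bytes)
{
        unsigned int words = DIV_ROUND_UP(bytes, 2);   /* bytes -> u16 words */

        /* Odd word counts are bumped to the next even size, since the
         * switch always returns data in 4-byte steps.
         */
        if (words % 2)
                words++;

        /* The field is 4 bits wide: 16 words (32 bytes) is encoded as 15. */
        if (words == 16)
                words--;

        return words;
}

/* Length field from an ack packet -> number of payload bytes it carries. */
static unsigned int qca8k_example_len_field_to_bytes(unsigned int field)
{
        if (field == 15)        /* max value really means 16 words */
                field++;

        return field * 2;
}

int main(void)
{
        unsigned int bytes;

        for (bytes = 2; bytes <= 32; bytes += 2) {
                unsigned int field = qca8k_example_bytes_to_len_field(bytes);

                printf("%2u bytes -> len field %2u -> ack carries %2u bytes\n",
                       bytes, field, qca8k_example_len_field_to_bytes(field));
        }
        return 0;
}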

View File

@@ -375,11 +375,6 @@ struct qca8k_mdio_cache {
      * mdio writes
      */
     u16 page;
-    /* lo and hi can also be cached and from Documentation we can skip one
-     * extra mdio write if lo or hi is didn't change.
-     */
-    u16 lo;
-    u16 hi;
 };
 
 struct qca8k_pcs {

View File

@@ -2400,29 +2400,18 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
         return -EOPNOTSUPP;
     }
 
-    switch (func) {
-    case ENA_ADMIN_TOEPLITZ:
-        if (key) {
-            if (key_len != sizeof(hash_key->key)) {
-                netdev_err(ena_dev->net_device,
-                           "key len (%u) doesn't equal the supported size (%zu)\n",
-                           key_len, sizeof(hash_key->key));
-                return -EINVAL;
-            }
-            memcpy(hash_key->key, key, key_len);
-            rss->hash_init_val = init_val;
-            hash_key->key_parts = key_len / sizeof(hash_key->key[0]);
-        }
-        break;
-    case ENA_ADMIN_CRC32:
-        rss->hash_init_val = init_val;
-        break;
-    default:
-        netdev_err(ena_dev->net_device, "Invalid hash function (%d)\n",
-                   func);
-        return -EINVAL;
+    if ((func == ENA_ADMIN_TOEPLITZ) && key) {
+        if (key_len != sizeof(hash_key->key)) {
+            netdev_err(ena_dev->net_device,
+                       "key len (%u) doesn't equal the supported size (%zu)\n",
+                       key_len, sizeof(hash_key->key));
+            return -EINVAL;
+        }
+        memcpy(hash_key->key, key, key_len);
+        hash_key->key_parts = key_len / sizeof(hash_key->key[0]);
     }
 
+    rss->hash_init_val = init_val;
     old_func = rss->hash_func;
     rss->hash_func = func;
     rc = ena_com_set_hash_function(ena_dev);

View File

@@ -887,11 +887,7 @@ static int ena_set_tunable(struct net_device *netdev,
     switch (tuna->id) {
     case ETHTOOL_RX_COPYBREAK:
         len = *(u32 *)data;
-        if (len > adapter->netdev->mtu) {
-            ret = -EINVAL;
-            break;
-        }
-        adapter->rx_copybreak = len;
+        ret = ena_set_rx_copybreak(adapter, len);
         break;
     default:
         ret = -EINVAL;

View File

@ -374,9 +374,9 @@ static int ena_xdp_xmit(struct net_device *dev, int n,
static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp) static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
{ {
u32 verdict = ENA_XDP_PASS;
struct bpf_prog *xdp_prog; struct bpf_prog *xdp_prog;
struct ena_ring *xdp_ring; struct ena_ring *xdp_ring;
u32 verdict = XDP_PASS;
struct xdp_frame *xdpf; struct xdp_frame *xdpf;
u64 *xdp_stat; u64 *xdp_stat;
@ -393,7 +393,7 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
if (unlikely(!xdpf)) { if (unlikely(!xdpf)) {
trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict); trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);
xdp_stat = &rx_ring->rx_stats.xdp_aborted; xdp_stat = &rx_ring->rx_stats.xdp_aborted;
verdict = XDP_ABORTED; verdict = ENA_XDP_DROP;
break; break;
} }
@ -409,29 +409,35 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
spin_unlock(&xdp_ring->xdp_tx_lock); spin_unlock(&xdp_ring->xdp_tx_lock);
xdp_stat = &rx_ring->rx_stats.xdp_tx; xdp_stat = &rx_ring->rx_stats.xdp_tx;
verdict = ENA_XDP_TX;
break; break;
case XDP_REDIRECT: case XDP_REDIRECT:
if (likely(!xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog))) { if (likely(!xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog))) {
xdp_stat = &rx_ring->rx_stats.xdp_redirect; xdp_stat = &rx_ring->rx_stats.xdp_redirect;
verdict = ENA_XDP_REDIRECT;
break; break;
} }
trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict); trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);
xdp_stat = &rx_ring->rx_stats.xdp_aborted; xdp_stat = &rx_ring->rx_stats.xdp_aborted;
verdict = XDP_ABORTED; verdict = ENA_XDP_DROP;
break; break;
case XDP_ABORTED: case XDP_ABORTED:
trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict); trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);
xdp_stat = &rx_ring->rx_stats.xdp_aborted; xdp_stat = &rx_ring->rx_stats.xdp_aborted;
verdict = ENA_XDP_DROP;
break; break;
case XDP_DROP: case XDP_DROP:
xdp_stat = &rx_ring->rx_stats.xdp_drop; xdp_stat = &rx_ring->rx_stats.xdp_drop;
verdict = ENA_XDP_DROP;
break; break;
case XDP_PASS: case XDP_PASS:
xdp_stat = &rx_ring->rx_stats.xdp_pass; xdp_stat = &rx_ring->rx_stats.xdp_pass;
verdict = ENA_XDP_PASS;
break; break;
default: default:
bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, verdict); bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, verdict);
xdp_stat = &rx_ring->rx_stats.xdp_invalid; xdp_stat = &rx_ring->rx_stats.xdp_invalid;
verdict = ENA_XDP_DROP;
} }
ena_increase_stat(xdp_stat, 1, &rx_ring->syncp); ena_increase_stat(xdp_stat, 1, &rx_ring->syncp);
@ -512,16 +518,18 @@ static void ena_xdp_exchange_program_rx_in_range(struct ena_adapter *adapter,
struct bpf_prog *prog, struct bpf_prog *prog,
int first, int count) int first, int count)
{ {
struct bpf_prog *old_bpf_prog;
struct ena_ring *rx_ring; struct ena_ring *rx_ring;
int i = 0; int i = 0;
for (i = first; i < count; i++) { for (i = first; i < count; i++) {
rx_ring = &adapter->rx_ring[i]; rx_ring = &adapter->rx_ring[i];
xchg(&rx_ring->xdp_bpf_prog, prog); old_bpf_prog = xchg(&rx_ring->xdp_bpf_prog, prog);
if (prog) {
if (!old_bpf_prog && prog) {
ena_xdp_register_rxq_info(rx_ring); ena_xdp_register_rxq_info(rx_ring);
rx_ring->rx_headroom = XDP_PACKET_HEADROOM; rx_ring->rx_headroom = XDP_PACKET_HEADROOM;
} else { } else if (old_bpf_prog && !prog) {
ena_xdp_unregister_rxq_info(rx_ring); ena_xdp_unregister_rxq_info(rx_ring);
rx_ring->rx_headroom = NET_SKB_PAD; rx_ring->rx_headroom = NET_SKB_PAD;
} }
@ -672,6 +680,7 @@ static void ena_init_io_rings_common(struct ena_adapter *adapter,
ring->ena_dev = adapter->ena_dev; ring->ena_dev = adapter->ena_dev;
ring->per_napi_packets = 0; ring->per_napi_packets = 0;
ring->cpu = 0; ring->cpu = 0;
ring->numa_node = 0;
ring->no_interrupt_event_cnt = 0; ring->no_interrupt_event_cnt = 0;
u64_stats_init(&ring->syncp); u64_stats_init(&ring->syncp);
} }
@ -775,6 +784,7 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
tx_ring->next_to_use = 0; tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0; tx_ring->next_to_clean = 0;
tx_ring->cpu = ena_irq->cpu; tx_ring->cpu = ena_irq->cpu;
tx_ring->numa_node = node;
return 0; return 0;
err_push_buf_intermediate_buf: err_push_buf_intermediate_buf:
@ -907,6 +917,7 @@ static int ena_setup_rx_resources(struct ena_adapter *adapter,
rx_ring->next_to_clean = 0; rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0; rx_ring->next_to_use = 0;
rx_ring->cpu = ena_irq->cpu; rx_ring->cpu = ena_irq->cpu;
rx_ring->numa_node = node;
return 0; return 0;
} }
@ -1619,12 +1630,12 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
* we expect, then we simply drop it * we expect, then we simply drop it
*/ */
if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU)) if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))
return XDP_DROP; return ENA_XDP_DROP;
ret = ena_xdp_execute(rx_ring, xdp); ret = ena_xdp_execute(rx_ring, xdp);
/* The xdp program might expand the headers */ /* The xdp program might expand the headers */
if (ret == XDP_PASS) { if (ret == ENA_XDP_PASS) {
rx_info->page_offset = xdp->data - xdp->data_hard_start; rx_info->page_offset = xdp->data - xdp->data_hard_start;
rx_ring->ena_bufs[0].len = xdp->data_end - xdp->data; rx_ring->ena_bufs[0].len = xdp->data_end - xdp->data;
} }
@ -1663,7 +1674,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
xdp_init_buff(&xdp, ENA_PAGE_SIZE, &rx_ring->xdp_rxq); xdp_init_buff(&xdp, ENA_PAGE_SIZE, &rx_ring->xdp_rxq);
do { do {
xdp_verdict = XDP_PASS; xdp_verdict = ENA_XDP_PASS;
skb = NULL; skb = NULL;
ena_rx_ctx.ena_bufs = rx_ring->ena_bufs; ena_rx_ctx.ena_bufs = rx_ring->ena_bufs;
ena_rx_ctx.max_bufs = rx_ring->sgl_size; ena_rx_ctx.max_bufs = rx_ring->sgl_size;
@ -1691,7 +1702,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp); xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp);
/* allocate skb and fill it */ /* allocate skb and fill it */
if (xdp_verdict == XDP_PASS) if (xdp_verdict == ENA_XDP_PASS)
skb = ena_rx_skb(rx_ring, skb = ena_rx_skb(rx_ring,
rx_ring->ena_bufs, rx_ring->ena_bufs,
ena_rx_ctx.descs, ena_rx_ctx.descs,
@ -1709,14 +1720,15 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
/* Packets was passed for transmission, unmap it /* Packets was passed for transmission, unmap it
* from RX side. * from RX side.
*/ */
if (xdp_verdict == XDP_TX || xdp_verdict == XDP_REDIRECT) { if (xdp_verdict & ENA_XDP_FORWARDED) {
ena_unmap_rx_buff(rx_ring, ena_unmap_rx_buff(rx_ring,
&rx_ring->rx_buffer_info[req_id]); &rx_ring->rx_buffer_info[req_id]);
rx_ring->rx_buffer_info[req_id].page = NULL; rx_ring->rx_buffer_info[req_id].page = NULL;
} }
} }
if (xdp_verdict != XDP_PASS) { if (xdp_verdict != ENA_XDP_PASS) {
xdp_flags |= xdp_verdict; xdp_flags |= xdp_verdict;
total_len += ena_rx_ctx.ena_bufs[0].len;
res_budget--; res_budget--;
continue; continue;
} }
@ -1760,7 +1772,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
ena_refill_rx_bufs(rx_ring, refill_required); ena_refill_rx_bufs(rx_ring, refill_required);
} }
if (xdp_flags & XDP_REDIRECT) if (xdp_flags & ENA_XDP_REDIRECT)
xdp_do_flush_map(); xdp_do_flush_map();
return work_done; return work_done;
@ -1814,8 +1826,9 @@ static void ena_adjust_adaptive_rx_intr_moderation(struct ena_napi *ena_napi)
static void ena_unmask_interrupt(struct ena_ring *tx_ring, static void ena_unmask_interrupt(struct ena_ring *tx_ring,
struct ena_ring *rx_ring) struct ena_ring *rx_ring)
{ {
u32 rx_interval = tx_ring->smoothed_interval;
struct ena_eth_io_intr_reg intr_reg; struct ena_eth_io_intr_reg intr_reg;
u32 rx_interval = 0;
/* Rx ring can be NULL when for XDP tx queues which don't have an /* Rx ring can be NULL when for XDP tx queues which don't have an
* accompanying rx_ring pair. * accompanying rx_ring pair.
*/ */
@ -1853,20 +1866,27 @@ static void ena_update_ring_numa_node(struct ena_ring *tx_ring,
if (likely(tx_ring->cpu == cpu)) if (likely(tx_ring->cpu == cpu))
goto out; goto out;
tx_ring->cpu = cpu;
if (rx_ring)
rx_ring->cpu = cpu;
numa_node = cpu_to_node(cpu); numa_node = cpu_to_node(cpu);
if (likely(tx_ring->numa_node == numa_node))
goto out;
put_cpu(); put_cpu();
if (numa_node != NUMA_NO_NODE) { if (numa_node != NUMA_NO_NODE) {
ena_com_update_numa_node(tx_ring->ena_com_io_cq, numa_node); ena_com_update_numa_node(tx_ring->ena_com_io_cq, numa_node);
if (rx_ring) tx_ring->numa_node = numa_node;
if (rx_ring) {
rx_ring->numa_node = numa_node;
ena_com_update_numa_node(rx_ring->ena_com_io_cq, ena_com_update_numa_node(rx_ring->ena_com_io_cq,
numa_node); numa_node);
}
} }
tx_ring->cpu = cpu;
if (rx_ring)
rx_ring->cpu = cpu;
return; return;
out: out:
put_cpu(); put_cpu();
@ -1987,11 +2007,10 @@ static int ena_io_poll(struct napi_struct *napi, int budget)
if (ena_com_get_adaptive_moderation_enabled(rx_ring->ena_dev)) if (ena_com_get_adaptive_moderation_enabled(rx_ring->ena_dev))
ena_adjust_adaptive_rx_intr_moderation(ena_napi); ena_adjust_adaptive_rx_intr_moderation(ena_napi);
ena_update_ring_numa_node(tx_ring, rx_ring);
ena_unmask_interrupt(tx_ring, rx_ring); ena_unmask_interrupt(tx_ring, rx_ring);
} }
ena_update_ring_numa_node(tx_ring, rx_ring);
ret = rx_work_done; ret = rx_work_done;
} else { } else {
ret = budget; ret = budget;
@ -2376,7 +2395,7 @@ static int ena_create_io_tx_queue(struct ena_adapter *adapter, int qid)
ctx.mem_queue_type = ena_dev->tx_mem_queue_type; ctx.mem_queue_type = ena_dev->tx_mem_queue_type;
ctx.msix_vector = msix_vector; ctx.msix_vector = msix_vector;
ctx.queue_size = tx_ring->ring_size; ctx.queue_size = tx_ring->ring_size;
ctx.numa_node = cpu_to_node(tx_ring->cpu); ctx.numa_node = tx_ring->numa_node;
rc = ena_com_create_io_queue(ena_dev, &ctx); rc = ena_com_create_io_queue(ena_dev, &ctx);
if (rc) { if (rc) {
@@ -2444,7 +2463,7 @@ static int ena_create_io_rx_queue(struct ena_adapter *adapter, int qid)
ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
ctx.msix_vector = msix_vector; ctx.msix_vector = msix_vector;
ctx.queue_size = rx_ring->ring_size; ctx.queue_size = rx_ring->ring_size;
ctx.numa_node = cpu_to_node(rx_ring->cpu); ctx.numa_node = rx_ring->numa_node;
rc = ena_com_create_io_queue(ena_dev, &ctx); rc = ena_com_create_io_queue(ena_dev, &ctx);
if (rc) { if (rc) {
@@ -2805,6 +2824,24 @@ int ena_update_queue_sizes(struct ena_adapter *adapter,
return dev_was_up ? ena_up(adapter) : 0; return dev_was_up ? ena_up(adapter) : 0;
} }
int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak)
{
struct ena_ring *rx_ring;
int i;
if (rx_copybreak > min_t(u16, adapter->netdev->mtu, ENA_PAGE_SIZE))
return -EINVAL;
adapter->rx_copybreak = rx_copybreak;
for (i = 0; i < adapter->num_io_queues; i++) {
rx_ring = &adapter->rx_ring[i];
rx_ring->rx_copybreak = rx_copybreak;
}
return 0;
}
int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count) int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count)
{ {
struct ena_com_dev *ena_dev = adapter->ena_dev; struct ena_com_dev *ena_dev = adapter->ena_dev;


@@ -262,9 +262,11 @@ struct ena_ring {
bool disable_meta_caching; bool disable_meta_caching;
u16 no_interrupt_event_cnt; u16 no_interrupt_event_cnt;
/* cpu for TPH */ /* cpu and NUMA for TPH */
int cpu; int cpu;
/* number of tx/rx_buffer_info's entries */ int numa_node;
/* number of tx/rx_buffer_info's entries */
int ring_size; int ring_size;
enum ena_admin_placement_policy_type tx_mem_queue_type; enum ena_admin_placement_policy_type tx_mem_queue_type;
@@ -392,6 +394,8 @@ int ena_update_queue_sizes(struct ena_adapter *adapter,
int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count); int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count);
int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak);
int ena_get_sset_count(struct net_device *netdev, int sset); int ena_get_sset_count(struct net_device *netdev, int sset);
static inline void ena_reset_device(struct ena_adapter *adapter, static inline void ena_reset_device(struct ena_adapter *adapter,
@@ -409,6 +413,15 @@ enum ena_xdp_errors_t {
ENA_XDP_NO_ENOUGH_QUEUES, ENA_XDP_NO_ENOUGH_QUEUES,
}; };
enum ENA_XDP_ACTIONS {
ENA_XDP_PASS = 0,
ENA_XDP_TX = BIT(0),
ENA_XDP_REDIRECT = BIT(1),
ENA_XDP_DROP = BIT(2)
};
#define ENA_XDP_FORWARDED (ENA_XDP_TX | ENA_XDP_REDIRECT)
static inline bool ena_xdp_present(struct ena_adapter *adapter) static inline bool ena_xdp_present(struct ena_adapter *adapter)
{ {
return !!adapter->xdp_bpf_prog; return !!adapter->xdp_bpf_prog;


@@ -1064,6 +1064,9 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
devm_free_irq(pdata->dev, pdata->dev_irq, pdata); devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
tasklet_kill(&pdata->tasklet_dev);
tasklet_kill(&pdata->tasklet_ecc);
if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq)) if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq))
devm_free_irq(pdata->dev, pdata->ecc_irq, pdata); devm_free_irq(pdata->dev, pdata->ecc_irq, pdata);


@@ -447,8 +447,10 @@ static void xgbe_i2c_stop(struct xgbe_prv_data *pdata)
xgbe_i2c_disable(pdata); xgbe_i2c_disable(pdata);
xgbe_i2c_clear_all_interrupts(pdata); xgbe_i2c_clear_all_interrupts(pdata);
if (pdata->dev_irq != pdata->i2c_irq) if (pdata->dev_irq != pdata->i2c_irq) {
devm_free_irq(pdata->dev, pdata->i2c_irq, pdata); devm_free_irq(pdata->dev, pdata->i2c_irq, pdata);
tasklet_kill(&pdata->tasklet_i2c);
}
} }
static int xgbe_i2c_start(struct xgbe_prv_data *pdata) static int xgbe_i2c_start(struct xgbe_prv_data *pdata)


@@ -1390,8 +1390,10 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
/* Disable auto-negotiation */ /* Disable auto-negotiation */
xgbe_an_disable_all(pdata); xgbe_an_disable_all(pdata);
if (pdata->dev_irq != pdata->an_irq) if (pdata->dev_irq != pdata->an_irq) {
devm_free_irq(pdata->dev, pdata->an_irq, pdata); devm_free_irq(pdata->dev, pdata->an_irq, pdata);
tasklet_kill(&pdata->tasklet_an);
}
pdata->phy_if.phy_impl.stop(pdata); pdata->phy_if.phy_impl.stop(pdata);


@@ -2784,17 +2784,11 @@ static int bcm_enet_shared_probe(struct platform_device *pdev)
return 0; return 0;
} }
static int bcm_enet_shared_remove(struct platform_device *pdev)
{
return 0;
}
/* this "shared" driver is needed because both macs share a single /* this "shared" driver is needed because both macs share a single
* address space * address space
*/ */
struct platform_driver bcm63xx_enet_shared_driver = { struct platform_driver bcm63xx_enet_shared_driver = {
.probe = bcm_enet_shared_probe, .probe = bcm_enet_shared_probe,
.remove = bcm_enet_shared_remove,
.driver = { .driver = {
.name = "bcm63xx_enet_shared", .name = "bcm63xx_enet_shared",
.owner = THIS_MODULE, .owner = THIS_MODULE,


@@ -991,8 +991,7 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
dma_addr -= bp->rx_dma_offset; dma_addr -= bp->rx_dma_offset;
dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir, dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir,
DMA_ATTR_WEAK_ORDERING); DMA_ATTR_WEAK_ORDERING);
skb = build_skb(page_address(page), BNXT_PAGE_MODE_BUF_SIZE + skb = build_skb(page_address(page), PAGE_SIZE);
bp->rx_dma_offset);
if (!skb) { if (!skb) {
__free_page(page); __free_page(page);
return NULL; return NULL;
@@ -1925,7 +1924,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
dma_addr = rx_buf->mapping; dma_addr = rx_buf->mapping;
if (bnxt_xdp_attached(bp, rxr)) { if (bnxt_xdp_attached(bp, rxr)) {
bnxt_xdp_buff_init(bp, rxr, cons, &data_ptr, &len, &xdp); bnxt_xdp_buff_init(bp, rxr, cons, data_ptr, len, &xdp);
if (agg_bufs) { if (agg_bufs) {
u32 frag_len = bnxt_rx_agg_pages_xdp(bp, cpr, &xdp, u32 frag_len = bnxt_rx_agg_pages_xdp(bp, cpr, &xdp,
cp_cons, agg_bufs, cp_cons, agg_bufs,
@@ -1940,7 +1939,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
} }
if (xdp_active) { if (xdp_active) {
if (bnxt_rx_xdp(bp, rxr, cons, xdp, data, &len, event)) { if (bnxt_rx_xdp(bp, rxr, cons, xdp, data, &data_ptr, &len, event)) {
rc = 1; rc = 1;
goto next_rx; goto next_rx;
} }
@@ -3969,8 +3968,10 @@ void bnxt_set_ring_params(struct bnxt *bp)
bp->rx_agg_ring_mask = (bp->rx_agg_nr_pages * RX_DESC_CNT) - 1; bp->rx_agg_ring_mask = (bp->rx_agg_nr_pages * RX_DESC_CNT) - 1;
if (BNXT_RX_PAGE_MODE(bp)) { if (BNXT_RX_PAGE_MODE(bp)) {
rx_space = BNXT_PAGE_MODE_BUF_SIZE; rx_space = PAGE_SIZE;
rx_size = BNXT_MAX_PAGE_MODE_MTU; rx_size = PAGE_SIZE -
ALIGN(max(NET_SKB_PAD, XDP_PACKET_HEADROOM), 8) -
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
} else { } else {
rx_size = SKB_DATA_ALIGN(BNXT_RX_COPY_THRESH + NET_IP_ALIGN); rx_size = SKB_DATA_ALIGN(BNXT_RX_COPY_THRESH + NET_IP_ALIGN);
rx_space = rx_size + NET_SKB_PAD + rx_space = rx_size + NET_SKB_PAD +
@@ -5398,15 +5399,16 @@ static int bnxt_hwrm_vnic_set_hds(struct bnxt *bp, u16 vnic_id)
req->flags = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_JUMBO_PLACEMENT); req->flags = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_JUMBO_PLACEMENT);
req->enables = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_JUMBO_THRESH_VALID); req->enables = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_JUMBO_THRESH_VALID);
if (BNXT_RX_PAGE_MODE(bp) && !BNXT_RX_JUMBO_MODE(bp)) { if (BNXT_RX_PAGE_MODE(bp)) {
req->jumbo_thresh = cpu_to_le16(bp->rx_buf_use_size);
} else {
req->flags |= cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV4 | req->flags |= cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV4 |
VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6); VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6);
req->enables |= req->enables |=
cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID); cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID);
req->jumbo_thresh = cpu_to_le16(bp->rx_copy_thresh);
req->hds_threshold = cpu_to_le16(bp->rx_copy_thresh);
} }
/* thresholds not implemented in firmware yet */
req->jumbo_thresh = cpu_to_le16(bp->rx_copy_thresh);
req->hds_threshold = cpu_to_le16(bp->rx_copy_thresh);
req->vnic_id = cpu_to_le32(vnic->fw_vnic_id); req->vnic_id = cpu_to_le32(vnic->fw_vnic_id);
return hwrm_req_send(bp, req); return hwrm_req_send(bp, req);
} }
@@ -13591,7 +13593,6 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
return -ENOMEM; return -ENOMEM;
bp = netdev_priv(dev); bp = netdev_priv(dev);
SET_NETDEV_DEVLINK_PORT(dev, &bp->dl_port);
bp->board_idx = ent->driver_data; bp->board_idx = ent->driver_data;
bp->msg_enable = BNXT_DEF_MSG_ENABLE; bp->msg_enable = BNXT_DEF_MSG_ENABLE;
bnxt_set_max_func_irqs(bp, max_irqs); bnxt_set_max_func_irqs(bp, max_irqs);
@@ -13599,6 +13600,10 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (bnxt_vf_pciid(bp->board_idx)) if (bnxt_vf_pciid(bp->board_idx))
bp->flags |= BNXT_FLAG_VF; bp->flags |= BNXT_FLAG_VF;
/* No devlink port registration in case of a VF */
if (BNXT_PF(bp))
SET_NETDEV_DEVLINK_PORT(dev, &bp->dl_port);
if (pdev->msix_cap) if (pdev->msix_cap)
bp->flags |= BNXT_FLAG_MSIX_CAP; bp->flags |= BNXT_FLAG_MSIX_CAP;


@@ -591,12 +591,20 @@ struct nqe_cn {
#define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT) #define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT)
#define BNXT_MAX_MTU 9500 #define BNXT_MAX_MTU 9500
#define BNXT_PAGE_MODE_BUF_SIZE \
/* First RX buffer page in XDP multi-buf mode
*
* +-------------------------------------------------------------------------+
* | XDP_PACKET_HEADROOM | bp->rx_buf_use_size | skb_shared_info|
* | (bp->rx_dma_offset) | | |
* +-------------------------------------------------------------------------+
*/
#define BNXT_MAX_PAGE_MODE_MTU_SBUF \
((unsigned int)PAGE_SIZE - VLAN_ETH_HLEN - NET_IP_ALIGN - \ ((unsigned int)PAGE_SIZE - VLAN_ETH_HLEN - NET_IP_ALIGN - \
XDP_PACKET_HEADROOM) XDP_PACKET_HEADROOM)
#define BNXT_MAX_PAGE_MODE_MTU \ #define BNXT_MAX_PAGE_MODE_MTU \
BNXT_PAGE_MODE_BUF_SIZE - \ (BNXT_MAX_PAGE_MODE_MTU_SBUF - \
SKB_DATA_ALIGN((unsigned int)sizeof(struct skb_shared_info)) SKB_DATA_ALIGN((unsigned int)sizeof(struct skb_shared_info)))
#define BNXT_MIN_PKT_SIZE 52 #define BNXT_MIN_PKT_SIZE 52
@@ -2134,7 +2142,6 @@ struct bnxt {
#define BNXT_DUMP_CRASH 1 #define BNXT_DUMP_CRASH 1
struct bpf_prog *xdp_prog; struct bpf_prog *xdp_prog;
u8 xdp_has_frags;
struct bnxt_ptp_cfg *ptp_cfg; struct bnxt_ptp_cfg *ptp_cfg;
u8 ptp_all_rx_tstamp; u8 ptp_all_rx_tstamp;


@@ -177,7 +177,7 @@ bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
} }
void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
u16 cons, u8 **data_ptr, unsigned int *len, u16 cons, u8 *data_ptr, unsigned int len,
struct xdp_buff *xdp) struct xdp_buff *xdp)
{ {
struct bnxt_sw_rx_bd *rx_buf; struct bnxt_sw_rx_bd *rx_buf;
@@ -191,13 +191,10 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
offset = bp->rx_offset; offset = bp->rx_offset;
mapping = rx_buf->mapping - bp->rx_dma_offset; mapping = rx_buf->mapping - bp->rx_dma_offset;
dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir); dma_sync_single_for_cpu(&pdev->dev, mapping + offset, len, bp->rx_dir);
if (bp->xdp_has_frags)
buflen = BNXT_PAGE_MODE_BUF_SIZE + offset;
xdp_init_buff(xdp, buflen, &rxr->xdp_rxq); xdp_init_buff(xdp, buflen, &rxr->xdp_rxq);
xdp_prepare_buff(xdp, *data_ptr - offset, offset, *len, false); xdp_prepare_buff(xdp, data_ptr - offset, offset, len, false);
} }
void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr, void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
@@ -222,7 +219,8 @@ void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
* false - packet should be passed to the stack. * false - packet should be passed to the stack.
*/ */
bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
struct xdp_buff xdp, struct page *page, unsigned int *len, u8 *event) struct xdp_buff xdp, struct page *page, u8 **data_ptr,
unsigned int *len, u8 *event)
{ {
struct bpf_prog *xdp_prog = READ_ONCE(rxr->xdp_prog); struct bpf_prog *xdp_prog = READ_ONCE(rxr->xdp_prog);
struct bnxt_tx_ring_info *txr; struct bnxt_tx_ring_info *txr;
@@ -255,8 +253,10 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
*event &= ~BNXT_RX_EVENT; *event &= ~BNXT_RX_EVENT;
*len = xdp.data_end - xdp.data; *len = xdp.data_end - xdp.data;
if (orig_data != xdp.data) if (orig_data != xdp.data) {
offset = xdp.data - xdp.data_hard_start; offset = xdp.data - xdp.data_hard_start;
*data_ptr = xdp.data_hard_start + offset;
}
switch (act) { switch (act) {
case XDP_PASS: case XDP_PASS:
@@ -401,10 +401,8 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog)
netdev_warn(dev, "ethtool rx/tx channels must be combined to support XDP.\n"); netdev_warn(dev, "ethtool rx/tx channels must be combined to support XDP.\n");
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
if (prog) { if (prog)
tx_xdp = bp->rx_nr_rings; tx_xdp = bp->rx_nr_rings;
bp->xdp_has_frags = prog->aux->xdp_has_frags;
}
tc = netdev_get_num_tc(dev); tc = netdev_get_num_tc(dev);
if (!tc) if (!tc)


@@ -18,8 +18,8 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
struct xdp_buff *xdp); struct xdp_buff *xdp);
void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts); void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts);
bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
struct xdp_buff xdp, struct page *page, unsigned int *len, struct xdp_buff xdp, struct page *page, u8 **data_ptr,
u8 *event); unsigned int *len, u8 *event);
int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp); int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp);
int bnxt_xdp_xmit(struct net_device *dev, int num_frames, int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
struct xdp_frame **frames, u32 flags); struct xdp_frame **frames, u32 flags);
@@ -27,7 +27,7 @@ int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr); bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr);
void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
u16 cons, u8 **data_ptr, unsigned int *len, u16 cons, u8 *data_ptr, unsigned int len,
struct xdp_buff *xdp); struct xdp_buff *xdp);
void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr, void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
struct xdp_buff *xdp); struct xdp_buff *xdp);


@@ -127,11 +127,6 @@ static int enetc_ierb_probe(struct platform_device *pdev)
return 0; return 0;
} }
static int enetc_ierb_remove(struct platform_device *pdev)
{
return 0;
}
static const struct of_device_id enetc_ierb_match[] = { static const struct of_device_id enetc_ierb_match[] = {
{ .compatible = "fsl,ls1028a-enetc-ierb", }, { .compatible = "fsl,ls1028a-enetc-ierb", },
{}, {},
@@ -144,7 +139,6 @@ static struct platform_driver enetc_ierb_driver = {
.of_match_table = enetc_ierb_match, .of_match_table = enetc_ierb_match,
}, },
.probe = enetc_ierb_probe, .probe = enetc_ierb_probe,
.remove = enetc_ierb_remove,
}; };
module_platform_driver(enetc_ierb_driver); module_platform_driver(enetc_ierb_driver);


@@ -1430,7 +1430,7 @@ int dtsec_initialization(struct mac_device *mac_dev,
dtsec->dtsec_drv_param->tx_pad_crc = true; dtsec->dtsec_drv_param->tx_pad_crc = true;
phy_node = of_parse_phandle(mac_node, "tbi-handle", 0); phy_node = of_parse_phandle(mac_node, "tbi-handle", 0);
if (!phy_node || of_device_is_available(phy_node)) { if (!phy_node || !of_device_is_available(phy_node)) {
of_node_put(phy_node); of_node_put(phy_node);
err = -EINVAL; err = -EINVAL;
dev_err_probe(mac_dev->dev, err, dev_err_probe(mac_dev->dev, err,


@@ -3855,18 +3855,16 @@ static int hns3_gro_complete(struct sk_buff *skb, u32 l234info)
return 0; return 0;
} }
static bool hns3_checksum_complete(struct hns3_enet_ring *ring, static void hns3_checksum_complete(struct hns3_enet_ring *ring,
struct sk_buff *skb, u32 ptype, u16 csum) struct sk_buff *skb, u32 ptype, u16 csum)
{ {
if (ptype == HNS3_INVALID_PTYPE || if (ptype == HNS3_INVALID_PTYPE ||
hns3_rx_ptype_tbl[ptype].ip_summed != CHECKSUM_COMPLETE) hns3_rx_ptype_tbl[ptype].ip_summed != CHECKSUM_COMPLETE)
return false; return;
hns3_ring_stats_update(ring, csum_complete); hns3_ring_stats_update(ring, csum_complete);
skb->ip_summed = CHECKSUM_COMPLETE; skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = csum_unfold((__force __sum16)csum); skb->csum = csum_unfold((__force __sum16)csum);
return true;
} }
static void hns3_rx_handle_csum(struct sk_buff *skb, u32 l234info, static void hns3_rx_handle_csum(struct sk_buff *skb, u32 l234info,
@@ -3926,8 +3924,7 @@ static void hns3_rx_checksum(struct hns3_enet_ring *ring, struct sk_buff *skb,
ptype = hnae3_get_field(ol_info, HNS3_RXD_PTYPE_M, ptype = hnae3_get_field(ol_info, HNS3_RXD_PTYPE_M,
HNS3_RXD_PTYPE_S); HNS3_RXD_PTYPE_S);
if (hns3_checksum_complete(ring, skb, ptype, csum)) hns3_checksum_complete(ring, skb, ptype, csum);
return;
/* check if hardware has done checksum */ /* check if hardware has done checksum */
if (!(bd_base_info & BIT(HNS3_RXD_L3L4P_B))) if (!(bd_base_info & BIT(HNS3_RXD_L3L4P_B)))
@@ -3936,6 +3933,7 @@ static void hns3_rx_checksum(struct hns3_enet_ring *ring, struct sk_buff *skb,
if (unlikely(l234info & (BIT(HNS3_RXD_L3E_B) | BIT(HNS3_RXD_L4E_B) | if (unlikely(l234info & (BIT(HNS3_RXD_L3E_B) | BIT(HNS3_RXD_L4E_B) |
BIT(HNS3_RXD_OL3E_B) | BIT(HNS3_RXD_OL3E_B) |
BIT(HNS3_RXD_OL4E_B)))) { BIT(HNS3_RXD_OL4E_B)))) {
skb->ip_summed = CHECKSUM_NONE;
hns3_ring_stats_update(ring, l3l4_csum_err); hns3_ring_stats_update(ring, l3l4_csum_err);
return; return;


@@ -3910,9 +3910,17 @@ static int hclge_set_all_vf_rst(struct hclge_dev *hdev, bool reset)
return ret; return ret;
} }
if (!reset || !test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) if (!reset ||
!test_bit(HCLGE_VPORT_STATE_INITED, &vport->state))
continue; continue;
if (!test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state) &&
hdev->reset_type == HNAE3_FUNC_RESET) {
set_bit(HCLGE_VPORT_NEED_NOTIFY_RESET,
&vport->need_notify);
continue;
}
/* Inform VF to process the reset. /* Inform VF to process the reset.
* hclge_inform_reset_assert_to_vf may fail if VF * hclge_inform_reset_assert_to_vf may fail if VF
* driver is not loaded. * driver is not loaded.
@@ -4609,18 +4617,25 @@ static void hclge_reset_service_task(struct hclge_dev *hdev)
static void hclge_update_vport_alive(struct hclge_dev *hdev) static void hclge_update_vport_alive(struct hclge_dev *hdev)
{ {
#define HCLGE_ALIVE_SECONDS_NORMAL 8
unsigned long alive_time = HCLGE_ALIVE_SECONDS_NORMAL * HZ;
int i; int i;
/* start from vport 1 for PF is always alive */ /* start from vport 1 for PF is always alive */
for (i = 1; i < hdev->num_alloc_vport; i++) { for (i = 1; i < hdev->num_alloc_vport; i++) {
struct hclge_vport *vport = &hdev->vport[i]; struct hclge_vport *vport = &hdev->vport[i];
if (time_after(jiffies, vport->last_active_jiffies + 8 * HZ)) if (!test_bit(HCLGE_VPORT_STATE_INITED, &vport->state) ||
!test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state))
continue;
if (time_after(jiffies, vport->last_active_jiffies +
alive_time)) {
clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state); clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state);
dev_warn(&hdev->pdev->dev,
/* If vf is not alive, set to default value */ "VF %u heartbeat timeout\n",
if (!test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) i - HCLGE_VF_VPORT_START_NUM);
vport->mps = HCLGE_MAC_DEFAULT_FRAME; }
} }
} }
@@ -8064,9 +8079,11 @@ int hclge_vport_start(struct hclge_vport *vport)
{ {
struct hclge_dev *hdev = vport->back; struct hclge_dev *hdev = vport->back;
set_bit(HCLGE_VPORT_STATE_INITED, &vport->state);
set_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state); set_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state);
set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, &vport->state); set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, &vport->state);
vport->last_active_jiffies = jiffies; vport->last_active_jiffies = jiffies;
vport->need_notify = 0;
if (test_bit(vport->vport_id, hdev->vport_config_block)) { if (test_bit(vport->vport_id, hdev->vport_config_block)) {
if (vport->vport_id) { if (vport->vport_id) {
@@ -8084,7 +8101,9 @@ int hclge_vport_start(struct hclge_vport *vport)
void hclge_vport_stop(struct hclge_vport *vport) void hclge_vport_stop(struct hclge_vport *vport)
{ {
clear_bit(HCLGE_VPORT_STATE_INITED, &vport->state);
clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state); clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state);
vport->need_notify = 0;
} }
static int hclge_client_start(struct hnae3_handle *handle) static int hclge_client_start(struct hnae3_handle *handle)
@@ -9208,7 +9227,8 @@ static int hclge_set_vf_mac(struct hnae3_handle *handle, int vf,
return 0; return 0;
} }
dev_info(&hdev->pdev->dev, "MAC of VF %d has been set to %s\n", dev_info(&hdev->pdev->dev,
"MAC of VF %d has been set to %s, will be active after VF reset\n",
vf, format_mac_addr); vf, format_mac_addr);
return 0; return 0;
} }
@@ -10465,12 +10485,16 @@ static int hclge_set_vf_vlan_filter(struct hnae3_handle *handle, int vfid,
* for DEVICE_VERSION_V3, vf doesn't need to know about the port based * for DEVICE_VERSION_V3, vf doesn't need to know about the port based
* VLAN state. * VLAN state.
*/ */
if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3 && if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3) {
test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) if (test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state))
(void)hclge_push_vf_port_base_vlan_info(&hdev->vport[0], (void)hclge_push_vf_port_base_vlan_info(&hdev->vport[0],
vport->vport_id, vport->vport_id,
state, &vlan_info); state,
&vlan_info);
else
set_bit(HCLGE_VPORT_NEED_NOTIFY_VF_VLAN,
&vport->need_notify);
}
return 0; return 0;
} }
@@ -11941,7 +11965,7 @@ static void hclge_reset_vport_state(struct hclge_dev *hdev)
int i; int i;
for (i = 0; i < hdev->num_alloc_vport; i++) { for (i = 0; i < hdev->num_alloc_vport; i++) {
hclge_vport_stop(vport); clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state);
vport++; vport++;
} }
} }
@@ -12754,60 +12778,71 @@ static int hclge_gro_en(struct hnae3_handle *handle, bool enable)
return ret; return ret;
} }
static void hclge_sync_promisc_mode(struct hclge_dev *hdev) static int hclge_sync_vport_promisc_mode(struct hclge_vport *vport)
{ {
struct hclge_vport *vport = &hdev->vport[0];
struct hnae3_handle *handle = &vport->nic; struct hnae3_handle *handle = &vport->nic;
struct hclge_dev *hdev = vport->back;
bool uc_en = false;
bool mc_en = false;
u8 tmp_flags; u8 tmp_flags;
bool bc_en;
int ret; int ret;
u16 i;
if (vport->last_promisc_flags != vport->overflow_promisc_flags) { if (vport->last_promisc_flags != vport->overflow_promisc_flags) {
set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, &vport->state); set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, &vport->state);
vport->last_promisc_flags = vport->overflow_promisc_flags; vport->last_promisc_flags = vport->overflow_promisc_flags;
} }
if (test_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, &vport->state)) { if (!test_and_clear_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE,
&vport->state))
return 0;
/* for PF */
if (!vport->vport_id) {
tmp_flags = handle->netdev_flags | vport->last_promisc_flags; tmp_flags = handle->netdev_flags | vport->last_promisc_flags;
ret = hclge_set_promisc_mode(handle, tmp_flags & HNAE3_UPE, ret = hclge_set_promisc_mode(handle, tmp_flags & HNAE3_UPE,
tmp_flags & HNAE3_MPE); tmp_flags & HNAE3_MPE);
if (!ret) { if (!ret)
clear_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE,
&vport->state);
set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE, set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE,
&vport->state); &vport->state);
} else
}
for (i = 1; i < hdev->num_alloc_vport; i++) {
bool uc_en = false;
bool mc_en = false;
bool bc_en;
vport = &hdev->vport[i];
if (!test_and_clear_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE,
&vport->state))
continue;
if (vport->vf_info.trusted) {
uc_en = vport->vf_info.request_uc_en > 0 ||
vport->overflow_promisc_flags &
HNAE3_OVERFLOW_UPE;
mc_en = vport->vf_info.request_mc_en > 0 ||
vport->overflow_promisc_flags &
HNAE3_OVERFLOW_MPE;
}
bc_en = vport->vf_info.request_bc_en > 0;
ret = hclge_cmd_set_promisc_mode(hdev, vport->vport_id, uc_en,
mc_en, bc_en);
if (ret) {
set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE,
&vport->state); &vport->state);
return ret;
}
/* for VF */
if (vport->vf_info.trusted) {
uc_en = vport->vf_info.request_uc_en > 0 ||
vport->overflow_promisc_flags & HNAE3_OVERFLOW_UPE;
mc_en = vport->vf_info.request_mc_en > 0 ||
vport->overflow_promisc_flags & HNAE3_OVERFLOW_MPE;
}
bc_en = vport->vf_info.request_bc_en > 0;
ret = hclge_cmd_set_promisc_mode(hdev, vport->vport_id, uc_en,
mc_en, bc_en);
if (ret) {
set_bit(HCLGE_VPORT_STATE_PROMISC_CHANGE, &vport->state);
return ret;
}
hclge_set_vport_vlan_fltr_change(vport);
return 0;
}
static void hclge_sync_promisc_mode(struct hclge_dev *hdev)
{
struct hclge_vport *vport;
int ret;
u16 i;
for (i = 0; i < hdev->num_alloc_vport; i++) {
vport = &hdev->vport[i];
ret = hclge_sync_vport_promisc_mode(vport);
if (ret)
return; return;
}
hclge_set_vport_vlan_fltr_change(vport);
} }
} }
@@ -12944,6 +12979,11 @@ static void hclge_clear_vport_vf_info(struct hclge_vport *vport, int vfid)
struct hclge_vlan_info vlan_info; struct hclge_vlan_info vlan_info;
int ret; int ret;
clear_bit(HCLGE_VPORT_STATE_INITED, &vport->state);
clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state);
vport->need_notify = 0;
vport->mps = 0;
/* after disable sriov, clean VF rate configured by PF */ /* after disable sriov, clean VF rate configured by PF */
ret = hclge_tm_qs_shaper_cfg(vport, 0); ret = hclge_tm_qs_shaper_cfg(vport, 0);
if (ret) if (ret)


@@ -995,9 +995,15 @@ enum HCLGE_VPORT_STATE {
HCLGE_VPORT_STATE_MAC_TBL_CHANGE, HCLGE_VPORT_STATE_MAC_TBL_CHANGE,
HCLGE_VPORT_STATE_PROMISC_CHANGE, HCLGE_VPORT_STATE_PROMISC_CHANGE,
HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE, HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE,
HCLGE_VPORT_STATE_INITED,
HCLGE_VPORT_STATE_MAX HCLGE_VPORT_STATE_MAX
}; };
enum HCLGE_VPORT_NEED_NOTIFY {
HCLGE_VPORT_NEED_NOTIFY_RESET,
HCLGE_VPORT_NEED_NOTIFY_VF_VLAN,
};
struct hclge_vlan_info { struct hclge_vlan_info {
u16 vlan_proto; /* so far support 802.1Q only */ u16 vlan_proto; /* so far support 802.1Q only */
u16 qos; u16 qos;
@@ -1044,6 +1050,7 @@ struct hclge_vport {
struct hnae3_handle roce; struct hnae3_handle roce;
unsigned long state; unsigned long state;
unsigned long need_notify;
unsigned long last_active_jiffies; unsigned long last_active_jiffies;
u32 mps; /* Max packet size */ u32 mps; /* Max packet size */
struct hclge_vf_info vf_info; struct hclge_vf_info vf_info;


@@ -124,17 +124,26 @@ static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len,
return status; return status;
} }
static int hclge_inform_vf_reset(struct hclge_vport *vport, u16 reset_type)
{
__le16 msg_data;
u8 dest_vfid;
dest_vfid = (u8)vport->vport_id;
msg_data = cpu_to_le16(reset_type);
/* send this requested info to VF */
return hclge_send_mbx_msg(vport, (u8 *)&msg_data, sizeof(msg_data),
HCLGE_MBX_ASSERTING_RESET, dest_vfid);
}
int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport) int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport)
{ {
struct hclge_dev *hdev = vport->back; struct hclge_dev *hdev = vport->back;
__le16 msg_data;
u16 reset_type; u16 reset_type;
u8 dest_vfid;
BUILD_BUG_ON(HNAE3_MAX_RESET > U16_MAX); BUILD_BUG_ON(HNAE3_MAX_RESET > U16_MAX);
dest_vfid = (u8)vport->vport_id;
if (hdev->reset_type == HNAE3_FUNC_RESET) if (hdev->reset_type == HNAE3_FUNC_RESET)
reset_type = HNAE3_VF_PF_FUNC_RESET; reset_type = HNAE3_VF_PF_FUNC_RESET;
else if (hdev->reset_type == HNAE3_FLR_RESET) else if (hdev->reset_type == HNAE3_FLR_RESET)
@@ -142,11 +151,7 @@ int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport)
else else
reset_type = HNAE3_VF_FUNC_RESET; reset_type = HNAE3_VF_FUNC_RESET;
msg_data = cpu_to_le16(reset_type); return hclge_inform_vf_reset(vport, reset_type);
/* send this requested info to VF */
return hclge_send_mbx_msg(vport, (u8 *)&msg_data, sizeof(msg_data),
HCLGE_MBX_ASSERTING_RESET, dest_vfid);
} }
static void hclge_free_vector_ring_chain(struct hnae3_ring_chain_node *head) static void hclge_free_vector_ring_chain(struct hnae3_ring_chain_node *head)
@@ -652,9 +657,56 @@ static int hclge_reset_vf(struct hclge_vport *vport)
return hclge_func_reset_cmd(hdev, vport->vport_id); return hclge_func_reset_cmd(hdev, vport->vport_id);
} }
static void hclge_notify_vf_config(struct hclge_vport *vport)
{
struct hclge_dev *hdev = vport->back;
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
struct hclge_port_base_vlan_config *vlan_cfg;
int ret;
hclge_push_vf_link_status(vport);
if (test_bit(HCLGE_VPORT_NEED_NOTIFY_RESET, &vport->need_notify)) {
ret = hclge_inform_vf_reset(vport, HNAE3_VF_PF_FUNC_RESET);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed to inform VF %u reset!",
vport->vport_id - HCLGE_VF_VPORT_START_NUM);
return;
}
vport->need_notify = 0;
return;
}
if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3 &&
test_bit(HCLGE_VPORT_NEED_NOTIFY_VF_VLAN, &vport->need_notify)) {
vlan_cfg = &vport->port_base_vlan_cfg;
ret = hclge_push_vf_port_base_vlan_info(&hdev->vport[0],
vport->vport_id,
vlan_cfg->state,
&vlan_cfg->vlan_info);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed to inform VF %u port base vlan!",
vport->vport_id - HCLGE_VF_VPORT_START_NUM);
return;
}
clear_bit(HCLGE_VPORT_NEED_NOTIFY_VF_VLAN, &vport->need_notify);
}
}
static void hclge_vf_keep_alive(struct hclge_vport *vport) static void hclge_vf_keep_alive(struct hclge_vport *vport)
{ {
struct hclge_dev *hdev = vport->back;
vport->last_active_jiffies = jiffies; vport->last_active_jiffies = jiffies;
if (test_bit(HCLGE_VPORT_STATE_INITED, &vport->state) &&
!test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) {
set_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state);
dev_info(&hdev->pdev->dev, "VF %u is alive!",
vport->vport_id - HCLGE_VF_VPORT_START_NUM);
hclge_notify_vf_config(vport);
}
} }
static int hclge_set_vf_mtu(struct hclge_vport *vport, static int hclge_set_vf_mtu(struct hclge_vport *vport,
@@ -954,6 +1006,7 @@ static int hclge_mbx_vf_uninit_handler(struct hclge_mbx_ops_param *param)
hclge_rm_vport_all_mac_table(param->vport, true, hclge_rm_vport_all_mac_table(param->vport, true,
HCLGE_MAC_ADDR_MC); HCLGE_MAC_ADDR_MC);
hclge_rm_vport_all_vlan_table(param->vport, true); hclge_rm_vport_all_vlan_table(param->vport, true);
param->vport->mps = 0;
return 0; return 0;
} }


@@ -2767,7 +2767,8 @@ static int hclgevf_pci_reset(struct hclgevf_dev *hdev)
struct pci_dev *pdev = hdev->pdev; struct pci_dev *pdev = hdev->pdev;
int ret = 0; int ret = 0;
if (hdev->reset_type == HNAE3_VF_FULL_RESET && if ((hdev->reset_type == HNAE3_VF_FULL_RESET ||
hdev->reset_type == HNAE3_FLR_RESET) &&
test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) { test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) {
hclgevf_misc_irq_uninit(hdev); hclgevf_misc_irq_uninit(hdev);
hclgevf_uninit_msi(hdev); hclgevf_uninit_msi(hdev);


@@ -783,7 +783,7 @@ construct_skb:
static void static void
ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf) ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf)
{ {
xdp_return_frame((struct xdp_frame *)tx_buf->raw_buf); page_frag_free(tx_buf->raw_buf);
xdp_ring->xdp_tx_active--; xdp_ring->xdp_tx_active--;
dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);


@@ -589,7 +589,7 @@ int rvu_mbox_handler_mcs_free_resources(struct rvu *rvu,
u16 pcifunc = req->hdr.pcifunc; u16 pcifunc = req->hdr.pcifunc;
struct mcs_rsrc_map *map; struct mcs_rsrc_map *map;
struct mcs *mcs; struct mcs *mcs;
int rc; int rc = 0;
if (req->mcs_id >= rvu->mcs_blk_cnt) if (req->mcs_id >= rvu->mcs_blk_cnt)
return MCS_AF_ERR_INVALID_MCSID; return MCS_AF_ERR_INVALID_MCSID;


@@ -1012,6 +1012,7 @@ static void otx2_pool_refill_task(struct work_struct *work)
rbpool = cq->rbpool; rbpool = cq->rbpool;
free_ptrs = cq->pool_ptrs; free_ptrs = cq->pool_ptrs;
get_cpu();
while (cq->pool_ptrs) { while (cq->pool_ptrs) {
if (otx2_alloc_rbuf(pfvf, rbpool, &bufptr)) { if (otx2_alloc_rbuf(pfvf, rbpool, &bufptr)) {
/* Schedule a WQ if we fails to free atleast half of the /* Schedule a WQ if we fails to free atleast half of the
@@ -1031,6 +1032,7 @@ static void otx2_pool_refill_task(struct work_struct *work)
pfvf->hw_ops->aura_freeptr(pfvf, qidx, bufptr + OTX2_HEAD_ROOM); pfvf->hw_ops->aura_freeptr(pfvf, qidx, bufptr + OTX2_HEAD_ROOM);
cq->pool_ptrs--; cq->pool_ptrs--;
} }
put_cpu();
cq->refill_task_sched = false; cq->refill_task_sched = false;
} }
@@ -1368,6 +1370,7 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
if (err) if (err)
goto fail; goto fail;
get_cpu();
/* Allocate pointers and free them to aura/pool */ /* Allocate pointers and free them to aura/pool */
for (qidx = 0; qidx < hw->tot_tx_queues; qidx++) { for (qidx = 0; qidx < hw->tot_tx_queues; qidx++) {
pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx); pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
@@ -1376,18 +1379,24 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
sq = &qset->sq[qidx]; sq = &qset->sq[qidx];
sq->sqb_count = 0; sq->sqb_count = 0;
sq->sqb_ptrs = kcalloc(num_sqbs, sizeof(*sq->sqb_ptrs), GFP_KERNEL); sq->sqb_ptrs = kcalloc(num_sqbs, sizeof(*sq->sqb_ptrs), GFP_KERNEL);
if (!sq->sqb_ptrs) if (!sq->sqb_ptrs) {
return -ENOMEM; err = -ENOMEM;
goto err_mem;
}
for (ptr = 0; ptr < num_sqbs; ptr++) { for (ptr = 0; ptr < num_sqbs; ptr++) {
if (otx2_alloc_rbuf(pfvf, pool, &bufptr)) err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
return -ENOMEM; if (err)
goto err_mem;
pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr); pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr);
sq->sqb_ptrs[sq->sqb_count++] = (u64)bufptr; sq->sqb_ptrs[sq->sqb_count++] = (u64)bufptr;
} }
} }
return 0; err_mem:
put_cpu();
return err ? -ENOMEM : 0;
fail: fail:
otx2_mbox_reset(&pfvf->mbox.mbox, 0); otx2_mbox_reset(&pfvf->mbox.mbox, 0);
otx2_aura_pool_free(pfvf); otx2_aura_pool_free(pfvf);
@@ -1426,18 +1435,21 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf)
if (err) if (err)
goto fail; goto fail;
get_cpu();
/* Allocate pointers and free them to aura/pool */ /* Allocate pointers and free them to aura/pool */
for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) { for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) {
pool = &pfvf->qset.pool[pool_id]; pool = &pfvf->qset.pool[pool_id];
for (ptr = 0; ptr < num_ptrs; ptr++) { for (ptr = 0; ptr < num_ptrs; ptr++) {
if (otx2_alloc_rbuf(pfvf, pool, &bufptr)) err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
return -ENOMEM; if (err)
goto err_mem;
pfvf->hw_ops->aura_freeptr(pfvf, pool_id, pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
bufptr + OTX2_HEAD_ROOM); bufptr + OTX2_HEAD_ROOM);
} }
} }
err_mem:
return 0; put_cpu();
return err ? -ENOMEM : 0;
fail: fail:
otx2_mbox_reset(&pfvf->mbox.mbox, 0); otx2_mbox_reset(&pfvf->mbox.mbox, 0);
otx2_aura_pool_free(pfvf); otx2_aura_pool_free(pfvf);


@@ -468,7 +468,7 @@ static int mlx5_devlink_enable_roce_validate(struct devlink *devlink, u32 id,
bool new_state = val.vbool; bool new_state = val.vbool;
if (new_state && !MLX5_CAP_GEN(dev, roce) && if (new_state && !MLX5_CAP_GEN(dev, roce) &&
!MLX5_CAP_GEN(dev, roce_rw_supported)) { !(MLX5_CAP_GEN(dev, roce_rw_supported) && MLX5_CAP_GEN_MAX(dev, roce))) {
NL_SET_ERR_MSG_MOD(extack, "Device doesn't support RoCE"); NL_SET_ERR_MSG_MOD(extack, "Device doesn't support RoCE");
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
@@ -563,7 +563,7 @@ static int mlx5_devlink_eq_depth_validate(struct devlink *devlink, u32 id,
union devlink_param_value val, union devlink_param_value val,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
return (val.vu16 >= 64 && val.vu16 <= 4096) ? 0 : -EINVAL; return (val.vu32 >= 64 && val.vu32 <= 4096) ? 0 : -EINVAL;
} }
static const struct devlink_param mlx5_devlink_params[] = { static const struct devlink_param mlx5_devlink_params[] = {


@@ -459,7 +459,11 @@ static int mlx5e_rx_reporter_diagnose(struct devlink_health_reporter *reporter,
goto unlock; goto unlock;
for (i = 0; i < priv->channels.num; i++) { for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_rq *rq = &priv->channels.c[i]->rq; struct mlx5e_channel *c = priv->channels.c[i];
struct mlx5e_rq *rq;
rq = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state) ?
&c->xskrq : &c->rq;
err = mlx5e_rx_reporter_build_diagnose_output(rq, fmsg); err = mlx5e_rx_reporter_build_diagnose_output(rq, fmsg);
if (err) if (err)


@@ -2103,14 +2103,9 @@ out_err:
static void static void
mlx5_ct_tc_create_dbgfs(struct mlx5_tc_ct_priv *ct_priv) mlx5_ct_tc_create_dbgfs(struct mlx5_tc_ct_priv *ct_priv)
{ {
bool is_fdb = ct_priv->ns_type == MLX5_FLOW_NAMESPACE_FDB;
struct mlx5_tc_ct_debugfs *ct_dbgfs = &ct_priv->debugfs; struct mlx5_tc_ct_debugfs *ct_dbgfs = &ct_priv->debugfs;
char dirname[16] = {};
if (sscanf(dirname, "ct_%s", is_fdb ? "fdb" : "nic") < 0) ct_dbgfs->root = debugfs_create_dir("ct", mlx5_debugfs_get_dev_root(ct_priv->dev));
return;
ct_dbgfs->root = debugfs_create_dir(dirname, mlx5_debugfs_get_dev_root(ct_priv->dev));
debugfs_create_atomic_t("offloaded", 0400, ct_dbgfs->root, debugfs_create_atomic_t("offloaded", 0400, ct_dbgfs->root,
&ct_dbgfs->stats.offloaded); &ct_dbgfs->stats.offloaded);
debugfs_create_atomic_t("rx_dropped", 0400, ct_dbgfs->root, debugfs_create_atomic_t("rx_dropped", 0400, ct_dbgfs->root,


@@ -222,7 +222,7 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv,
int err; int err;
list_for_each_entry(flow, flow_list, tmp_list) { list_for_each_entry(flow, flow_list, tmp_list) {
if (!mlx5e_is_offloaded_flow(flow) || flow_flag_test(flow, SLOW)) if (!mlx5e_is_offloaded_flow(flow))
continue; continue;
attr = mlx5e_tc_get_encap_attr(flow); attr = mlx5e_tc_get_encap_attr(flow);
@@ -231,6 +231,13 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv,
esw_attr->dests[flow->tmp_entry_index].flags &= ~MLX5_ESW_DEST_ENCAP_VALID; esw_attr->dests[flow->tmp_entry_index].flags &= ~MLX5_ESW_DEST_ENCAP_VALID;
esw_attr->dests[flow->tmp_entry_index].pkt_reformat = NULL; esw_attr->dests[flow->tmp_entry_index].pkt_reformat = NULL;
/* Clear pkt_reformat before checking slow path flag. Because
* in next iteration, the same flow is already set slow path
* flag, but still need to clear the pkt_reformat.
*/
if (flow_flag_test(flow, SLOW))
continue;
/* update from encap rule to slow path rule */ /* update from encap rule to slow path rule */
spec = &flow->attr->parse_attr->spec; spec = &flow->attr->parse_attr->spec;
rule = mlx5e_tc_offload_to_slow_path(esw, flow, spec); rule = mlx5e_tc_offload_to_slow_path(esw, flow, spec);


@@ -273,6 +273,11 @@ static int mlx5e_tc_tun_parse_geneve_options(struct mlx5e_priv *priv,
geneve_tlv_option_0_data, be32_to_cpu(opt_data_key)); geneve_tlv_option_0_data, be32_to_cpu(opt_data_key));
MLX5_SET(fte_match_set_misc3, misc_3_c, MLX5_SET(fte_match_set_misc3, misc_3_c,
geneve_tlv_option_0_data, be32_to_cpu(opt_data_mask)); geneve_tlv_option_0_data, be32_to_cpu(opt_data_mask));
if (MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
ft_field_support.geneve_tlv_option_0_exist)) {
MLX5_SET_TO_ONES(fte_match_set_misc, misc_c, geneve_tlv_option_0_exist);
MLX5_SET_TO_ONES(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist);
}
spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_3; spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_3;


@@ -1305,7 +1305,7 @@ static int mlx5e_alloc_xdpsq(struct mlx5e_channel *c,
sq->channel = c; sq->channel = c;
sq->uar_map = mdev->mlx5e_res.hw_objs.bfreg.map; sq->uar_map = mdev->mlx5e_res.hw_objs.bfreg.map;
sq->min_inline_mode = params->tx_min_inline_mode; sq->min_inline_mode = params->tx_min_inline_mode;
sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu) - ETH_FCS_LEN;
sq->xsk_pool = xsk_pool; sq->xsk_pool = xsk_pool;
sq->stats = sq->xsk_pool ? sq->stats = sq->xsk_pool ?


@@ -67,6 +67,7 @@ static void esw_acl_egress_lgcy_groups_destroy(struct mlx5_vport *vport)
int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw, int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
struct mlx5_vport *vport) struct mlx5_vport *vport)
{ {
bool vst_mode_steering = esw_vst_mode_is_steering(esw);
struct mlx5_flow_destination drop_ctr_dst = {}; struct mlx5_flow_destination drop_ctr_dst = {};
struct mlx5_flow_destination *dst = NULL; struct mlx5_flow_destination *dst = NULL;
struct mlx5_fc *drop_counter = NULL; struct mlx5_fc *drop_counter = NULL;
@@ -77,6 +78,7 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
*/ */
int table_size = 2; int table_size = 2;
int dest_num = 0; int dest_num = 0;
int actions_flag;
int err = 0; int err = 0;
if (vport->egress.legacy.drop_counter) { if (vport->egress.legacy.drop_counter) {
@@ -119,8 +121,11 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
vport->vport, vport->info.vlan, vport->info.qos); vport->vport, vport->info.vlan, vport->info.qos);
/* Allowed vlan rule */ /* Allowed vlan rule */
actions_flag = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
if (vst_mode_steering)
actions_flag |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP;
err = esw_egress_acl_vlan_create(esw, vport, NULL, vport->info.vlan, err = esw_egress_acl_vlan_create(esw, vport, NULL, vport->info.vlan,
MLX5_FLOW_CONTEXT_ACTION_ALLOW); actions_flag);
if (err) if (err)
goto out; goto out;


@@ -139,11 +139,14 @@ static void esw_acl_ingress_lgcy_groups_destroy(struct mlx5_vport *vport)
int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw, int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
struct mlx5_vport *vport) struct mlx5_vport *vport)
{ {
bool vst_mode_steering = esw_vst_mode_is_steering(esw);
struct mlx5_flow_destination drop_ctr_dst = {}; struct mlx5_flow_destination drop_ctr_dst = {};
struct mlx5_flow_destination *dst = NULL; struct mlx5_flow_destination *dst = NULL;
struct mlx5_flow_act flow_act = {}; struct mlx5_flow_act flow_act = {};
struct mlx5_flow_spec *spec = NULL; struct mlx5_flow_spec *spec = NULL;
struct mlx5_fc *counter = NULL; struct mlx5_fc *counter = NULL;
bool vst_check_cvlan = false;
bool vst_push_cvlan = false;
/* The ingress acl table contains 4 groups /* The ingress acl table contains 4 groups
* (2 active rules at the same time - * (2 active rules at the same time -
* 1 allow rule from one of the first 3 groups. * 1 allow rule from one of the first 3 groups.
@@ -203,7 +206,26 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
goto out; goto out;
} }
if (vport->info.vlan || vport->info.qos) if ((vport->info.vlan || vport->info.qos)) {
if (vst_mode_steering)
vst_push_cvlan = true;
else if (!MLX5_CAP_ESW(esw->dev, vport_cvlan_insert_always))
vst_check_cvlan = true;
}
if (vst_check_cvlan || vport->info.spoofchk)
spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
/* Create ingress allow rule */
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
if (vst_push_cvlan) {
flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
flow_act.vlan[0].prio = vport->info.qos;
flow_act.vlan[0].vid = vport->info.vlan;
flow_act.vlan[0].ethtype = ETH_P_8021Q;
}
if (vst_check_cvlan)
MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
outer_headers.cvlan_tag); outer_headers.cvlan_tag);
@@ -218,9 +240,6 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
ether_addr_copy(smac_v, vport->info.mac); ether_addr_copy(smac_v, vport->info.mac);
} }
/* Create ingress allow rule */
spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
vport->ingress.allow_rule = mlx5_add_flow_rules(vport->ingress.acl, spec, vport->ingress.allow_rule = mlx5_add_flow_rules(vport->ingress.acl, spec,
&flow_act, NULL, 0); &flow_act, NULL, 0);
if (IS_ERR(vport->ingress.allow_rule)) { if (IS_ERR(vport->ingress.allow_rule)) {
@@ -232,6 +251,9 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
goto out; goto out;
} }
if (!vst_check_cvlan && !vport->info.spoofchk)
goto out;
memset(&flow_act, 0, sizeof(flow_act)); memset(&flow_act, 0, sizeof(flow_act));
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP; flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
/* Attach drop flow counter */ /* Attach drop flow counter */
@@ -257,7 +279,8 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
return 0; return 0;
out: out:
esw_acl_ingress_lgcy_cleanup(esw, vport); if (err)
esw_acl_ingress_lgcy_cleanup(esw, vport);
kvfree(spec); kvfree(spec);
return err; return err;
} }


@@ -161,10 +161,17 @@ static int modify_esw_vport_cvlan(struct mlx5_core_dev *dev, u16 vport,
esw_vport_context.vport_cvlan_strip, 1); esw_vport_context.vport_cvlan_strip, 1);
if (set_flags & SET_VLAN_INSERT) { if (set_flags & SET_VLAN_INSERT) {
/* insert only if no vlan in packet */ if (MLX5_CAP_ESW(dev, vport_cvlan_insert_always)) {
MLX5_SET(modify_esw_vport_context_in, in, /* insert either if vlan exist in packet or not */
esw_vport_context.vport_cvlan_insert, 1); MLX5_SET(modify_esw_vport_context_in, in,
esw_vport_context.vport_cvlan_insert,
MLX5_VPORT_CVLAN_INSERT_ALWAYS);
} else {
/* insert only if no vlan in packet */
MLX5_SET(modify_esw_vport_context_in, in,
esw_vport_context.vport_cvlan_insert,
MLX5_VPORT_CVLAN_INSERT_WHEN_NO_CVLAN);
}
MLX5_SET(modify_esw_vport_context_in, in, MLX5_SET(modify_esw_vport_context_in, in,
esw_vport_context.cvlan_pcp, qos); esw_vport_context.cvlan_pcp, qos);
MLX5_SET(modify_esw_vport_context_in, in, MLX5_SET(modify_esw_vport_context_in, in,
@@ -809,6 +816,7 @@ out_free:
static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport) static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
{ {
bool vst_mode_steering = esw_vst_mode_is_steering(esw);
u16 vport_num = vport->vport; u16 vport_num = vport->vport;
int flags; int flags;
int err; int err;
@@ -839,8 +847,9 @@ static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
flags = (vport->info.vlan || vport->info.qos) ? flags = (vport->info.vlan || vport->info.qos) ?
SET_VLAN_STRIP | SET_VLAN_INSERT : 0; SET_VLAN_STRIP | SET_VLAN_INSERT : 0;
modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan, if (esw->mode == MLX5_ESWITCH_OFFLOADS || !vst_mode_steering)
vport->info.qos, flags); modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan,
vport->info.qos, flags);
return 0; return 0;
@@ -1848,6 +1857,7 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
u16 vport, u16 vlan, u8 qos, u8 set_flags) u16 vport, u16 vlan, u8 qos, u8 set_flags)
{ {
struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport); struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport);
bool vst_mode_steering = esw_vst_mode_is_steering(esw);
int err = 0; int err = 0;
if (IS_ERR(evport)) if (IS_ERR(evport))
@@ -1855,9 +1865,11 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
if (vlan > 4095 || qos > 7) if (vlan > 4095 || qos > 7)
return -EINVAL; return -EINVAL;
err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags); if (esw->mode == MLX5_ESWITCH_OFFLOADS || !vst_mode_steering) {
if (err) err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags);
return err; if (err)
return err;
}
evport->info.vlan = vlan; evport->info.vlan = vlan;
evport->info.qos = qos; evport->info.qos = qos;


@@ -527,6 +527,12 @@ int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
u16 vport, u16 vlan, u8 qos, u8 set_flags); u16 vport, u16 vlan, u8 qos, u8 set_flags);
static inline bool esw_vst_mode_is_steering(struct mlx5_eswitch *esw)
{
return (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, pop_vlan) &&
MLX5_CAP_ESW_INGRESS_ACL(esw->dev, push_vlan));
}
static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev, static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev,
u8 vlan_depth) u8 vlan_depth)
{ {


@@ -674,6 +674,12 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
dev = container_of(priv, struct mlx5_core_dev, priv); dev = container_of(priv, struct mlx5_core_dev, priv);
devlink = priv_to_devlink(dev); devlink = priv_to_devlink(dev);
mutex_lock(&dev->intf_state_mutex);
if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
mlx5_core_err(dev, "health works are not permitted at this stage\n");
return;
}
mutex_unlock(&dev->intf_state_mutex);
enter_error_state(dev, false); enter_error_state(dev, false);
if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) { if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) {
devl_lock(devlink); devl_lock(devlink);


@@ -71,6 +71,10 @@ static void mlx5i_build_nic_params(struct mlx5_core_dev *mdev,
params->packet_merge.type = MLX5E_PACKET_MERGE_NONE; params->packet_merge.type = MLX5E_PACKET_MERGE_NONE;
params->hard_mtu = MLX5_IB_GRH_BYTES + MLX5_IPOIB_HARD_LEN; params->hard_mtu = MLX5_IB_GRH_BYTES + MLX5_IPOIB_HARD_LEN;
params->tunneled_offload_en = false; params->tunneled_offload_en = false;
/* CQE compression is not supported for IPoIB */
params->rx_cqe_compress_def = false;
MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS, params->rx_cqe_compress_def);
} }
/* Called directly after IPoIB netdevice was created to initialize SW structs */ /* Called directly after IPoIB netdevice was created to initialize SW structs */


@@ -228,6 +228,7 @@ static void mlx5_ldev_free(struct kref *ref)
if (ldev->nb.notifier_call) if (ldev->nb.notifier_call)
unregister_netdevice_notifier_net(&init_net, &ldev->nb); unregister_netdevice_notifier_net(&init_net, &ldev->nb);
mlx5_lag_mp_cleanup(ldev); mlx5_lag_mp_cleanup(ldev);
cancel_delayed_work_sync(&ldev->bond_work);
destroy_workqueue(ldev->wq); destroy_workqueue(ldev->wq);
mlx5_lag_mpesw_cleanup(ldev); mlx5_lag_mpesw_cleanup(ldev);
mutex_destroy(&ldev->lock); mutex_destroy(&ldev->lock);


@@ -613,7 +613,7 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
MLX5_SET(cmd_hca_cap, set_hca_cap, num_total_dynamic_vf_msix, MLX5_SET(cmd_hca_cap, set_hca_cap, num_total_dynamic_vf_msix,
MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix)); MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix));
if (MLX5_CAP_GEN(dev, roce_rw_supported)) if (MLX5_CAP_GEN(dev, roce_rw_supported) && MLX5_CAP_GEN_MAX(dev, roce))
MLX5_SET(cmd_hca_cap, set_hca_cap, roce, MLX5_SET(cmd_hca_cap, set_hca_cap, roce,
mlx5_is_roce_on(dev)); mlx5_is_roce_on(dev));
@@ -1050,6 +1050,8 @@ err_rl_cleanup:
err_tables_cleanup: err_tables_cleanup:
mlx5_geneve_destroy(dev->geneve); mlx5_geneve_destroy(dev->geneve);
mlx5_vxlan_destroy(dev->vxlan); mlx5_vxlan_destroy(dev->vxlan);
mlx5_cleanup_clock(dev);
mlx5_cleanup_reserved_gids(dev);
mlx5_cq_debugfs_cleanup(dev); mlx5_cq_debugfs_cleanup(dev);
mlx5_fw_reset_cleanup(dev); mlx5_fw_reset_cleanup(dev);
err_events_cleanup: err_events_cleanup:


@@ -381,7 +381,7 @@ int lan966x_port_pcs_set(struct lan966x_port *port,
} }
/* Take PCS out of reset */ /* Take PCS out of reset */
lan_rmw(DEV_CLOCK_CFG_LINK_SPEED_SET(2) | lan_rmw(DEV_CLOCK_CFG_LINK_SPEED_SET(LAN966X_SPEED_1000) |
DEV_CLOCK_CFG_PCS_RX_RST_SET(0) | DEV_CLOCK_CFG_PCS_RX_RST_SET(0) |
DEV_CLOCK_CFG_PCS_TX_RST_SET(0), DEV_CLOCK_CFG_PCS_TX_RST_SET(0),
DEV_CLOCK_CFG_LINK_SPEED | DEV_CLOCK_CFG_LINK_SPEED |


@@ -834,7 +834,7 @@ static int mchp_sparx5_probe(struct platform_device *pdev)
if (err) if (err)
goto cleanup_config; goto cleanup_config;
if (!of_get_mac_address(np, sparx5->base_mac)) { if (of_get_mac_address(np, sparx5->base_mac)) {
dev_info(sparx5->dev, "MAC addr was not set, use random MAC\n"); dev_info(sparx5->dev, "MAC addr was not set, use random MAC\n");
eth_random_addr(sparx5->base_mac); eth_random_addr(sparx5->base_mac);
sparx5->base_mac[5] = 0; sparx5->base_mac[5] = 0;


@@ -617,6 +617,9 @@ struct nfp_net_dp {
* @vnic_no_name: For non-port PF vNIC make ndo_get_phys_port_name return * @vnic_no_name: For non-port PF vNIC make ndo_get_phys_port_name return
* -EOPNOTSUPP to keep backwards compatibility (set by app) * -EOPNOTSUPP to keep backwards compatibility (set by app)
* @port: Pointer to nfp_port structure if vNIC is a port * @port: Pointer to nfp_port structure if vNIC is a port
* @mc_lock: Protect mc_addrs list
* @mc_addrs: List of mc addrs to add/del to HW
* @mc_work: Work to update mc addrs
* @app_priv: APP private data for this vNIC * @app_priv: APP private data for this vNIC
*/ */
struct nfp_net { struct nfp_net {
@@ -718,6 +721,10 @@ struct nfp_net {
struct nfp_port *port; struct nfp_port *port;
spinlock_t mc_lock;
struct list_head mc_addrs;
struct work_struct mc_work;
void *app_priv; void *app_priv;
}; };


@@ -1334,9 +1334,14 @@ err_unlock:
return err; return err;
} }
static int nfp_net_mc_cfg(struct net_device *netdev, const unsigned char *addr, const u32 cmd) struct nfp_mc_addr_entry {
u8 addr[ETH_ALEN];
u32 cmd;
struct list_head list;
};
static int nfp_net_mc_cfg(struct nfp_net *nn, const unsigned char *addr, const u32 cmd)
{ {
struct nfp_net *nn = netdev_priv(netdev);
int ret; int ret;
ret = nfp_net_mbox_lock(nn, NFP_NET_CFG_MULTICAST_SZ); ret = nfp_net_mbox_lock(nn, NFP_NET_CFG_MULTICAST_SZ);
@@ -1351,6 +1356,25 @@ static int nfp_net_mc_cfg(struct net_device *netdev, const unsigned char *addr,
return nfp_net_mbox_reconfig_and_unlock(nn, cmd); return nfp_net_mbox_reconfig_and_unlock(nn, cmd);
} }
static int nfp_net_mc_prep(struct nfp_net *nn, const unsigned char *addr, const u32 cmd)
{
struct nfp_mc_addr_entry *entry;
entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
if (!entry)
return -ENOMEM;
ether_addr_copy(entry->addr, addr);
entry->cmd = cmd;
spin_lock_bh(&nn->mc_lock);
list_add_tail(&entry->list, &nn->mc_addrs);
spin_unlock_bh(&nn->mc_lock);
schedule_work(&nn->mc_work);
return 0;
}
static int nfp_net_mc_sync(struct net_device *netdev, const unsigned char *addr) static int nfp_net_mc_sync(struct net_device *netdev, const unsigned char *addr)
{ {
struct nfp_net *nn = netdev_priv(netdev); struct nfp_net *nn = netdev_priv(netdev);
@@ -1361,12 +1385,35 @@ static int nfp_net_mc_sync(struct net_device *netdev, const unsigned char *addr)
return -EINVAL; return -EINVAL;
} }
return nfp_net_mc_cfg(netdev, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD); return nfp_net_mc_prep(nn, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD);
} }
static int nfp_net_mc_unsync(struct net_device *netdev, const unsigned char *addr) static int nfp_net_mc_unsync(struct net_device *netdev, const unsigned char *addr)
{ {
return nfp_net_mc_cfg(netdev, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL); struct nfp_net *nn = netdev_priv(netdev);
return nfp_net_mc_prep(nn, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL);
}
static void nfp_net_mc_addr_config(struct work_struct *work)
{
struct nfp_net *nn = container_of(work, struct nfp_net, mc_work);
struct nfp_mc_addr_entry *entry, *tmp;
struct list_head tmp_list;
INIT_LIST_HEAD(&tmp_list);
spin_lock_bh(&nn->mc_lock);
list_splice_init(&nn->mc_addrs, &tmp_list);
spin_unlock_bh(&nn->mc_lock);
list_for_each_entry_safe(entry, tmp, &tmp_list, list) {
if (nfp_net_mc_cfg(nn, entry->addr, entry->cmd))
nn_err(nn, "Config mc address to HW failed.\n");
list_del(&entry->list);
kfree(entry);
}
} }
static void nfp_net_set_rx_mode(struct net_device *netdev) static void nfp_net_set_rx_mode(struct net_device *netdev)
@@ -2633,6 +2680,11 @@ int nfp_net_init(struct nfp_net *nn)
if (!nn->dp.netdev) if (!nn->dp.netdev)
return 0; return 0;
spin_lock_init(&nn->mc_lock);
INIT_LIST_HEAD(&nn->mc_addrs);
INIT_WORK(&nn->mc_work, nfp_net_mc_addr_config);
return register_netdev(nn->dp.netdev); return register_netdev(nn->dp.netdev);
err_clean_mbox: err_clean_mbox:
@@ -2652,5 +2704,6 @@ void nfp_net_clean(struct nfp_net *nn)
unregister_netdev(nn->dp.netdev); unregister_netdev(nn->dp.netdev);
nfp_net_ipsec_clean(nn); nfp_net_ipsec_clean(nn);
nfp_ccm_mbox_clean(nn); nfp_ccm_mbox_clean(nn);
flush_work(&nn->mc_work);
nfp_net_reconfig_wait_posted(nn); nfp_net_reconfig_wait_posted(nn);
} }
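The fix above follows a common pattern for hooks that run in atomic context: .ndo_set_rx_mode() is invoked under the netdev address-list lock with BH disabled, so anything that can sleep (here the mailbox reconfig) is deferred; the addresses are queued and a work item programs them later. A minimal sketch of that pattern, assuming a driver-private struct with the lock/list/work members and a hypothetical hw_program_addr() helper that may sleep (none of these names are nfp's):

#include <linux/etherdevice.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct deferred_addr {
	u8 addr[ETH_ALEN];
	u32 cmd;
	struct list_head list;
};

/* Atomic context: only queue the request and kick the worker. */
static int queue_addr_update(struct my_priv *priv, const u8 *addr, u32 cmd)
{
	struct deferred_addr *entry;

	entry = kmalloc(sizeof(*entry), GFP_ATOMIC);	/* no sleeping here */
	if (!entry)
		return -ENOMEM;

	ether_addr_copy(entry->addr, addr);
	entry->cmd = cmd;

	spin_lock_bh(&priv->addr_lock);
	list_add_tail(&entry->list, &priv->addr_list);
	spin_unlock_bh(&priv->addr_lock);

	schedule_work(&priv->addr_work);
	return 0;
}

/* Process context: drain the list and talk to the hardware, may sleep. */
static void addr_update_work(struct work_struct *work)
{
	struct my_priv *priv = container_of(work, struct my_priv, addr_work);
	struct deferred_addr *entry, *tmp;
	LIST_HEAD(tmp_list);

	spin_lock_bh(&priv->addr_lock);
	list_splice_init(&priv->addr_list, &tmp_list);
	spin_unlock_bh(&priv->addr_lock);

	list_for_each_entry_safe(entry, tmp, &tmp_list, list) {
		hw_program_addr(priv, entry->addr, entry->cmd);
		list_del(&entry->list);
		kfree(entry);
	}
}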

View File

@ -1832,7 +1832,8 @@ static enum dbg_status qed_find_nvram_image(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt, struct qed_ptt *p_ptt,
u32 image_type, u32 image_type,
u32 *nvram_offset_bytes, u32 *nvram_offset_bytes,
u32 *nvram_size_bytes) u32 *nvram_size_bytes,
bool b_can_sleep)
{ {
u32 ret_mcp_resp, ret_mcp_param, ret_txn_size; u32 ret_mcp_resp, ret_mcp_param, ret_txn_size;
struct mcp_file_att file_att; struct mcp_file_att file_att;
@ -1846,7 +1847,8 @@ static enum dbg_status qed_find_nvram_image(struct qed_hwfn *p_hwfn,
&ret_mcp_resp, &ret_mcp_resp,
&ret_mcp_param, &ret_mcp_param,
&ret_txn_size, &ret_txn_size,
(u32 *)&file_att, false); (u32 *)&file_att,
b_can_sleep);
/* Check response */ /* Check response */
if (nvm_result || (ret_mcp_resp & FW_MSG_CODE_MASK) != if (nvm_result || (ret_mcp_resp & FW_MSG_CODE_MASK) !=
@ -1873,7 +1875,9 @@ static enum dbg_status qed_find_nvram_image(struct qed_hwfn *p_hwfn,
static enum dbg_status qed_nvram_read(struct qed_hwfn *p_hwfn, static enum dbg_status qed_nvram_read(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt, struct qed_ptt *p_ptt,
u32 nvram_offset_bytes, u32 nvram_offset_bytes,
u32 nvram_size_bytes, u32 *ret_buf) u32 nvram_size_bytes,
u32 *ret_buf,
bool b_can_sleep)
{ {
u32 ret_mcp_resp, ret_mcp_param, ret_read_size, bytes_to_copy; u32 ret_mcp_resp, ret_mcp_param, ret_read_size, bytes_to_copy;
s32 bytes_left = nvram_size_bytes; s32 bytes_left = nvram_size_bytes;
@ -1899,7 +1903,7 @@ static enum dbg_status qed_nvram_read(struct qed_hwfn *p_hwfn,
&ret_mcp_resp, &ret_mcp_resp,
&ret_mcp_param, &ret_read_size, &ret_mcp_param, &ret_read_size,
(u32 *)((u8 *)ret_buf + read_offset), (u32 *)((u8 *)ret_buf + read_offset),
false)) b_can_sleep))
return DBG_STATUS_NVRAM_READ_FAILED; return DBG_STATUS_NVRAM_READ_FAILED;
/* Check response */ /* Check response */
@ -3380,7 +3384,8 @@ static u32 qed_grc_dump_mcp_hw_dump(struct qed_hwfn *p_hwfn,
p_ptt, p_ptt,
NVM_TYPE_HW_DUMP_OUT, NVM_TYPE_HW_DUMP_OUT,
&hw_dump_offset_bytes, &hw_dump_offset_bytes,
&hw_dump_size_bytes); &hw_dump_size_bytes,
false);
if (status != DBG_STATUS_OK) if (status != DBG_STATUS_OK)
return 0; return 0;
@ -3397,7 +3402,9 @@ static u32 qed_grc_dump_mcp_hw_dump(struct qed_hwfn *p_hwfn,
status = qed_nvram_read(p_hwfn, status = qed_nvram_read(p_hwfn,
p_ptt, p_ptt,
hw_dump_offset_bytes, hw_dump_offset_bytes,
hw_dump_size_bytes, dump_buf + offset); hw_dump_size_bytes,
dump_buf + offset,
false);
if (status != DBG_STATUS_OK) { if (status != DBG_STATUS_OK) {
DP_NOTICE(p_hwfn, DP_NOTICE(p_hwfn,
"Failed to read MCP HW Dump image from NVRAM\n"); "Failed to read MCP HW Dump image from NVRAM\n");
@ -4123,7 +4130,9 @@ static enum dbg_status qed_mcp_trace_get_meta_info(struct qed_hwfn *p_hwfn,
return qed_find_nvram_image(p_hwfn, return qed_find_nvram_image(p_hwfn,
p_ptt, p_ptt,
nvram_image_type, nvram_image_type,
trace_meta_offset, trace_meta_size); trace_meta_offset,
trace_meta_size,
true);
} }
/* Reads the MCP Trace meta data from NVRAM into the specified buffer */ /* Reads the MCP Trace meta data from NVRAM into the specified buffer */
@ -4139,7 +4148,10 @@ static enum dbg_status qed_mcp_trace_read_meta(struct qed_hwfn *p_hwfn,
/* Read meta data from NVRAM */ /* Read meta data from NVRAM */
status = qed_nvram_read(p_hwfn, status = qed_nvram_read(p_hwfn,
p_ptt, p_ptt,
nvram_offset_in_bytes, size_in_bytes, buf); nvram_offset_in_bytes,
size_in_bytes,
buf,
true);
if (status != DBG_STATUS_OK) if (status != DBG_STATUS_OK)
return status; return status;

View File

@@ -2505,7 +2505,13 @@ int qlcnic_83xx_init(struct qlcnic_adapter *adapter)
 		goto disable_mbx_intr;

 	qlcnic_83xx_clear_function_resources(adapter);
-	qlcnic_dcb_enable(adapter->dcb);
+
+	err = qlcnic_dcb_enable(adapter->dcb);
+	if (err) {
+		qlcnic_dcb_free(adapter->dcb);
+		goto disable_mbx_intr;
+	}
+
 	qlcnic_83xx_initialize_nic(adapter, 1);
 	qlcnic_dcb_get_info(adapter->dcb);

View File

@@ -41,11 +41,6 @@ struct qlcnic_dcb {
 	unsigned long state;
 };

-static inline void qlcnic_clear_dcb_ops(struct qlcnic_dcb *dcb)
-{
-	kfree(dcb);
-}
-
 static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)
 {
 	if (dcb && dcb->ops->get_hw_capability)
@@ -112,9 +107,8 @@ static inline void qlcnic_dcb_init_dcbnl_ops(struct qlcnic_dcb *dcb)
 		dcb->ops->init_dcbnl_ops(dcb);
 }

-static inline void qlcnic_dcb_enable(struct qlcnic_dcb *dcb)
+static inline int qlcnic_dcb_enable(struct qlcnic_dcb *dcb)
 {
-	if (dcb && qlcnic_dcb_attach(dcb))
-		qlcnic_clear_dcb_ops(dcb);
+	return dcb ? qlcnic_dcb_attach(dcb) : 0;
 }

 #endif

View File

@@ -2599,7 +2599,13 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 			 "Device does not support MSI interrupts\n");

 	if (qlcnic_82xx_check(adapter)) {
-		qlcnic_dcb_enable(adapter->dcb);
+		err = qlcnic_dcb_enable(adapter->dcb);
+		if (err) {
+			qlcnic_dcb_free(adapter->dcb);
+			dev_err(&pdev->dev, "Failed to enable DCB\n");
+			goto err_out_free_hw;
+		}
+
 		qlcnic_dcb_get_info(adapter->dcb);

 		err = qlcnic_setup_intr(adapter);

View File

@ -2210,28 +2210,6 @@ static int rtl_set_mac_address(struct net_device *dev, void *p)
return 0; return 0;
} }
static void rtl_wol_enable_rx(struct rtl8169_private *tp)
{
if (tp->mac_version >= RTL_GIGA_MAC_VER_25)
RTL_W32(tp, RxConfig, RTL_R32(tp, RxConfig) |
AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
}
static void rtl_prepare_power_down(struct rtl8169_private *tp)
{
if (tp->dash_type != RTL_DASH_NONE)
return;
if (tp->mac_version == RTL_GIGA_MAC_VER_32 ||
tp->mac_version == RTL_GIGA_MAC_VER_33)
rtl_ephy_write(tp, 0x19, 0xff64);
if (device_may_wakeup(tp_to_dev(tp))) {
phy_speed_down(tp->phydev, false);
rtl_wol_enable_rx(tp);
}
}
static void rtl_init_rxcfg(struct rtl8169_private *tp) static void rtl_init_rxcfg(struct rtl8169_private *tp)
{ {
switch (tp->mac_version) { switch (tp->mac_version) {
@ -2455,6 +2433,31 @@ static void rtl_enable_rxdvgate(struct rtl8169_private *tp)
rtl_wait_txrx_fifo_empty(tp); rtl_wait_txrx_fifo_empty(tp);
} }
static void rtl_wol_enable_rx(struct rtl8169_private *tp)
{
if (tp->mac_version >= RTL_GIGA_MAC_VER_25)
RTL_W32(tp, RxConfig, RTL_R32(tp, RxConfig) |
AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
if (tp->mac_version >= RTL_GIGA_MAC_VER_40)
rtl_disable_rxdvgate(tp);
}
static void rtl_prepare_power_down(struct rtl8169_private *tp)
{
if (tp->dash_type != RTL_DASH_NONE)
return;
if (tp->mac_version == RTL_GIGA_MAC_VER_32 ||
tp->mac_version == RTL_GIGA_MAC_VER_33)
rtl_ephy_write(tp, 0x19, 0xff64);
if (device_may_wakeup(tp_to_dev(tp))) {
phy_speed_down(tp->phydev, false);
rtl_wol_enable_rx(tp);
}
}
static void rtl_set_tx_config_registers(struct rtl8169_private *tp) static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
{ {
u32 val = TX_DMA_BURST << TxDMAShift | u32 val = TX_DMA_BURST << TxDMAShift |
@ -3872,7 +3875,7 @@ static void rtl8169_tx_clear(struct rtl8169_private *tp)
netdev_reset_queue(tp->dev); netdev_reset_queue(tp->dev);
} }
static void rtl8169_cleanup(struct rtl8169_private *tp, bool going_down) static void rtl8169_cleanup(struct rtl8169_private *tp)
{ {
napi_disable(&tp->napi); napi_disable(&tp->napi);
@ -3884,9 +3887,6 @@ static void rtl8169_cleanup(struct rtl8169_private *tp, bool going_down)
rtl_rx_close(tp); rtl_rx_close(tp);
if (going_down && tp->dev->wol_enabled)
goto no_reset;
switch (tp->mac_version) { switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_28: case RTL_GIGA_MAC_VER_28:
case RTL_GIGA_MAC_VER_31: case RTL_GIGA_MAC_VER_31:
@ -3907,7 +3907,7 @@ static void rtl8169_cleanup(struct rtl8169_private *tp, bool going_down)
} }
rtl_hw_reset(tp); rtl_hw_reset(tp);
no_reset:
rtl8169_tx_clear(tp); rtl8169_tx_clear(tp);
rtl8169_init_ring_indexes(tp); rtl8169_init_ring_indexes(tp);
} }
@ -3918,7 +3918,7 @@ static void rtl_reset_work(struct rtl8169_private *tp)
netif_stop_queue(tp->dev); netif_stop_queue(tp->dev);
rtl8169_cleanup(tp, false); rtl8169_cleanup(tp);
for (i = 0; i < NUM_RX_DESC; i++) for (i = 0; i < NUM_RX_DESC; i++)
rtl8169_mark_to_asic(tp->RxDescArray + i); rtl8169_mark_to_asic(tp->RxDescArray + i);
@ -4605,7 +4605,7 @@ static void rtl8169_down(struct rtl8169_private *tp)
pci_clear_master(tp->pci_dev); pci_clear_master(tp->pci_dev);
rtl_pci_commit(tp); rtl_pci_commit(tp);
rtl8169_cleanup(tp, true); rtl8169_cleanup(tp);
rtl_disable_exit_l1(tp); rtl_disable_exit_l1(tp);
rtl_prepare_power_down(tp); rtl_prepare_power_down(tp);
} }

View File

@@ -1578,6 +1578,7 @@ static int rswitch_device_alloc(struct rswitch_private *priv, int index)
 {
 	struct platform_device *pdev = priv->pdev;
 	struct rswitch_device *rdev;
+	struct device_node *port;
 	struct net_device *ndev;
 	int err;

@@ -1606,7 +1607,9 @@ static int rswitch_device_alloc(struct rswitch_private *priv, int index)
 	netif_napi_add(ndev, &rdev->napi, rswitch_poll);

-	err = of_get_ethdev_address(pdev->dev.of_node, ndev);
+	port = rswitch_get_port_node(rdev);
+	err = of_get_ethdev_address(port, ndev);
+	of_node_put(port);
 	if (err) {
 		if (is_valid_ether_addr(rdev->etha->mac_addr))
 			eth_hw_addr_set(ndev, rdev->etha->mac_addr);
@@ -1786,6 +1789,11 @@ static int renesas_eth_sw_probe(struct platform_device *pdev)
 	pm_runtime_get_sync(&pdev->dev);

 	ret = rswitch_init(priv);
+	if (ret < 0) {
+		pm_runtime_put(&pdev->dev);
+		pm_runtime_disable(&pdev->dev);
+		return ret;
+	}

 	device_set_wakeup_capable(&pdev->dev, 1);

View File

@@ -132,10 +132,10 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
 				       u32 endpoint_id, bool enable)
 {
 	struct ipa *ipa = interrupt->ipa;
+	u32 mask = BIT(endpoint_id % 32);
 	u32 unit = endpoint_id / 32;
 	const struct ipa_reg *reg;
 	u32 offset;
-	u32 mask;
 	u32 val;

 	WARN_ON(!test_bit(endpoint_id, ipa->available));
@@ -148,7 +148,6 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
 	offset = ipa_reg_n_offset(reg, unit);
 	val = ioread32(ipa->reg_virt + offset);

-	mask = BIT(endpoint_id);
 	if (enable)
 		val |= mask;
 	else
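Each suspend register only covers 32 endpoints, so the register index comes from endpoint_id / 32 and the bit within it from endpoint_id % 32; the removed BIT(endpoint_id) only worked for IDs below 32. A small stand-alone illustration of the index/mask arithmetic (plain C, not driver code):

#include <stdint.h>
#include <stdio.h>

static void endpoint_reg_bit(unsigned int endpoint_id,
			     unsigned int *unit, uint32_t *mask)
{
	*unit = endpoint_id / 32;			/* which 32-bit register */
	*mask = UINT32_C(1) << (endpoint_id % 32);	/* which bit inside it   */
}

int main(void)
{
	unsigned int unit;
	uint32_t mask;

	endpoint_reg_bit(37, &unit, &mask);
	/* endpoint 37 lives in register 1, bit 5; shifting by the raw ID (37)
	 * would run past the end of a 32-bit register value.
	 */
	printf("register %u, mask 0x%08x\n", unit, (unsigned int)mask);
	return 0;
}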

View File

@@ -105,6 +105,7 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)

 	if (!priv->phy_dev->drv) {
 		dev_info(dev, "Attached phy not ready\n");
+		put_device(&priv->phy_dev->mdio.dev);
 		return -EPROBE_DEFER;
 	}

View File

@@ -255,7 +255,8 @@ static int rndis_query(struct usbnet *dev, struct usb_interface *intf,
 	off = le32_to_cpu(u.get_c->offset);
 	len = le32_to_cpu(u.get_c->len);

-	if (unlikely((8 + off + len) > CONTROL_BUFFER_SIZE))
+	if (unlikely((off > CONTROL_BUFFER_SIZE - 8) ||
+		     (len > CONTROL_BUFFER_SIZE - 8 - off)))
 		goto response_error;

 	if (*reply_len != -1 && len != *reply_len)
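The reworked test avoids the integer overflow in the old form: with a 32-bit off close to UINT_MAX, 8 + off + len wraps around and a bogus response passes the bounds check. A stand-alone illustration (the buffer size below is only assumed for the example):

#include <stdint.h>
#include <stdio.h>

#define CONTROL_BUFFER_SIZE 1025u	/* size assumed here for illustration */

/* New-style check: every intermediate value stays in range, so a huge
 * "off" or "len" can no longer wrap the sum back into bounds.
 */
static int response_in_bounds(uint32_t off, uint32_t len)
{
	return off <= CONTROL_BUFFER_SIZE - 8 &&
	       len <= CONTROL_BUFFER_SIZE - 8 - off;
}

int main(void)
{
	uint32_t off = 0xFFFFFFF0u, len = 0x20u;

	/* Old check: 8 + off + len wraps to 24, which looks in-bounds. */
	printf("old check passes: %d\n",
	       (uint32_t)(8 + off + len) <= CONTROL_BUFFER_SIZE);
	printf("new check passes: %d\n", response_in_bounds(off, len));
	return 0;
}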

View File

@@ -974,6 +974,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	xdp_set_return_frame_no_direct();
 	done = veth_xdp_rcv(rq, budget, &bq, &stats);

+	if (stats.xdp_redirect > 0)
+		xdp_do_flush();
+
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
 		smp_store_mb(rq->rx_notify_masked, false);
@@ -987,8 +990,6 @@ static int veth_poll(struct napi_struct *napi, int budget)

 	if (stats.xdp_tx > 0)
 		veth_xdp_flush(rq, &bq);
-	if (stats.xdp_redirect > 0)
-		xdp_do_flush();
 	xdp_clear_return_frame_no_direct();

 	return done;
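Moving xdp_do_flush() ahead of napi_complete_done() means redirected frames are flushed while the poll is still inside its NAPI context, before the ring is reported as done. A generic sketch of that ordering in a poll handler; do_rx(), rearm_irq() and the redirect counter are placeholders, not veth code:

static int my_poll(struct napi_struct *napi, int budget)
{
	int redirected = 0;
	int work;

	work = do_rx(napi, budget, &redirected);	/* hypothetical RX loop */

	/* flush XDP_REDIRECT targets before completing the poll */
	if (redirected > 0)
		xdp_do_flush();

	if (work < budget && napi_complete_done(napi, work))
		rearm_irq(napi);			/* hypothetical re-arm  */

	return work;
}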

View File

@@ -1288,6 +1288,10 @@ vmxnet3_rx_csum(struct vmxnet3_adapter *adapter,
 		    (le32_to_cpu(gdesc->dword[3]) &
 		     VMXNET3_RCD_CSUM_OK) == VMXNET3_RCD_CSUM_OK) {
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
+			if ((le32_to_cpu(gdesc->dword[0]) &
+			     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT))) {
+				skb->csum_level = 1;
+			}
 			WARN_ON_ONCE(!(gdesc->rcd.tcp || gdesc->rcd.udp) &&
 				     !(le32_to_cpu(gdesc->dword[0]) &
 				       (1UL << VMXNET3_RCD_HDR_INNER_SHIFT)));
@@ -1297,6 +1301,10 @@ vmxnet3_rx_csum(struct vmxnet3_adapter *adapter,
 		} else if (gdesc->rcd.v6 && (le32_to_cpu(gdesc->dword[3]) &
 					     (1 << VMXNET3_RCD_TUC_SHIFT))) {
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
+			if ((le32_to_cpu(gdesc->dword[0]) &
+			     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT))) {
+				skb->csum_level = 1;
+			}
 			WARN_ON_ONCE(!(gdesc->rcd.tcp || gdesc->rcd.udp) &&
 				     !(le32_to_cpu(gdesc->dword[0]) &
 				       (1UL << VMXNET3_RCD_HDR_INNER_SHIFT)));

View File

@@ -1385,8 +1385,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
 	/* loopback, multicast & non-ND link-local traffic; do not push through
 	 * packet taps again. Reset pkt_type for upper layers to process skb.
-	 * For strict packets with a source LLA, determine the dst using the
-	 * original ifindex.
+	 * For non-loopback strict packets, determine the dst using the original
+	 * ifindex.
 	 */
 	if (skb->pkt_type == PACKET_LOOPBACK || (need_strict && !is_ndisc)) {
 		skb->dev = vrf_dev;
@@ -1395,7 +1395,7 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
 		if (skb->pkt_type == PACKET_LOOPBACK)
 			skb->pkt_type = PACKET_HOST;
-		else if (ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)
+		else
 			vrf_ip6_input_dst(skb, vrf_dev, orig_iif);

 		goto out;

View File

@@ -2917,16 +2917,23 @@ static int vxlan_init(struct net_device *dev)
 		vxlan_vnigroup_init(vxlan);

 	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
-	if (!dev->tstats)
-		return -ENOMEM;
-
-	err = gro_cells_init(&vxlan->gro_cells, dev);
-	if (err) {
-		free_percpu(dev->tstats);
-		return err;
+	if (!dev->tstats) {
+		err = -ENOMEM;
+		goto err_vnigroup_uninit;
 	}

+	err = gro_cells_init(&vxlan->gro_cells, dev);
+	if (err)
+		goto err_free_percpu;
+
 	return 0;
+
+err_free_percpu:
+	free_percpu(dev->tstats);
+err_vnigroup_uninit:
+	if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+		vxlan_vnigroup_uninit(vxlan);
+	return err;
 }

 static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni)
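The reworked vxlan_init() is the usual goto-unwind shape: each failure point jumps to a label that undoes only what was already set up, in reverse order, so the VNI group added earlier is no longer leaked. A generic sketch of the shape, with hypothetical init_*/undo_* helpers and struct thing standing in for the device state:

static int setup(struct thing *t)
{
	int err;

	err = init_a(t);
	if (err)
		return err;

	err = init_b(t);
	if (err)
		goto err_undo_a;

	err = init_c(t);
	if (err)
		goto err_undo_b;

	return 0;

err_undo_b:
	undo_b(t);
err_undo_a:
	undo_a(t);
	return err;
}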
static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni) static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni)

View File

@@ -327,9 +327,9 @@ static inline struct ath9k_htc_tx_ctl *HTC_SKB_CB(struct sk_buff *skb)
 }

 #ifdef CONFIG_ATH9K_HTC_DEBUGFS
-#define __STAT_SAFE(hif_dev, expr)	((hif_dev)->htc_handle->drv_priv ? (expr) : 0)
-#define CAB_STAT_INC(priv)		((priv)->debug.tx_stats.cab_queued++)
-#define TX_QSTAT_INC(priv, q)		((priv)->debug.tx_stats.queue_stats[q]++)
+#define __STAT_SAFE(hif_dev, expr)	do { ((hif_dev)->htc_handle->drv_priv ? (expr) : 0); } while (0)
+#define CAB_STAT_INC(priv)		do { ((priv)->debug.tx_stats.cab_queued++); } while (0)
+#define TX_QSTAT_INC(priv, q)		do { ((priv)->debug.tx_stats.queue_stats[q]++); } while (0)

 #define TX_STAT_INC(hif_dev, c) \
 	__STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.tx_stats.c++)
@@ -378,10 +378,10 @@ void ath9k_htc_get_et_stats(struct ieee80211_hw *hw,
 			    struct ethtool_stats *stats, u64 *data);
 #else
-#define TX_STAT_INC(hif_dev, c)
-#define TX_STAT_ADD(hif_dev, c, a)
-#define RX_STAT_INC(hif_dev, c)
-#define RX_STAT_ADD(hif_dev, c, a)
+#define TX_STAT_INC(hif_dev, c)		do { } while (0)
+#define TX_STAT_ADD(hif_dev, c, a)	do { } while (0)
+#define RX_STAT_INC(hif_dev, c)		do { } while (0)
+#define RX_STAT_ADD(hif_dev, c, a)	do { } while (0)

 #define CAB_STAT_INC(priv)
 #define TX_QSTAT_INC(priv, c)
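Wrapping statement-like macros (including the deliberately empty non-debugfs variants) in do { } while (0) keeps each one a single statement, so it composes safely with if/else and cannot leave stray statements behind. The classic illustration of why the wrapper matters, with made-up names in plain C, unrelated to ath9k:

#include <stdio.h>

struct stats { int events; int drops; };

/* Unsafe: expands to two statements, only the first is guarded by "if". */
#define COUNT_BOTH_BAD(s)	(s)->events++; (s)->drops++

/* Safe: always exactly one statement, works in any if/else position. */
#define COUNT_BOTH_GOOD(s)	do { (s)->events++; (s)->drops++; } while (0)

int main(void)
{
	struct stats a = { 0, 0 }, b = { 0, 0 };
	int enabled = 0;

	if (enabled)
		COUNT_BOTH_BAD(&a);	/* drops++ runs even though disabled */

	if (enabled)
		COUNT_BOTH_GOOD(&b);	/* nothing runs, as intended */

	printf("bad: %d/%d  good: %d/%d\n", a.events, a.drops, b.events, b.drops);
	return 0;
}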

View File

@@ -1106,6 +1106,11 @@ int iwl_read_ppag_table(struct iwl_fw_runtime *fwrt, union iwl_ppag_table_cmd *c
 	int i, j, num_sub_bands;
 	s8 *gain;

+	/* many firmware images for JF lie about this */
+	if (CSR_HW_RFID_TYPE(fwrt->trans->hw_rf_id) ==
+	    CSR_HW_RFID_TYPE(CSR_HW_RF_ID_TYPE_JF))
+		return -EOPNOTSUPP;
+
 	if (!fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SET_PPAG)) {
 		IWL_DEBUG_RADIO(fwrt,
 				"PPAG capability not supported by FW, command not sent.\n");

View File

@@ -2,6 +2,7 @@
 config MT7996E
 	tristate "MediaTek MT7996 (PCIe) support"
 	select MT76_CONNAC_LIB
+	select RELAY
 	depends on MAC80211
 	depends on PCI
 	help

View File

@@ -3,6 +3,3 @@ obj-$(CONFIG_WLCORE)			+= wlcore/
 obj-$(CONFIG_WL12XX)			+= wl12xx/
 obj-$(CONFIG_WL1251)			+= wl1251/
 obj-$(CONFIG_WL18XX)			+= wl18xx/
-
-# small builtin driver bit
-obj-$(CONFIG_WILINK_PLATFORM_DATA)	+= wilink_platform_data.o

View File

@@ -410,13 +410,13 @@ static ssize_t qeth_dev_isolation_show(struct device *dev,

 	switch (card->options.isolation) {
 	case ISOLATION_MODE_NONE:
-		return snprintf(buf, 6, "%s\n", ATTR_QETH_ISOLATION_NONE);
+		return sysfs_emit(buf, "%s\n", ATTR_QETH_ISOLATION_NONE);
 	case ISOLATION_MODE_FWD:
-		return snprintf(buf, 9, "%s\n", ATTR_QETH_ISOLATION_FWD);
+		return sysfs_emit(buf, "%s\n", ATTR_QETH_ISOLATION_FWD);
 	case ISOLATION_MODE_DROP:
-		return snprintf(buf, 6, "%s\n", ATTR_QETH_ISOLATION_DROP);
+		return sysfs_emit(buf, "%s\n", ATTR_QETH_ISOLATION_DROP);
 	default:
-		return snprintf(buf, 5, "%s\n", "N/A");
+		return sysfs_emit(buf, "%s\n", "N/A");
 	}
 }

@@ -500,9 +500,9 @@ static ssize_t qeth_hw_trap_show(struct device *dev,
 	struct qeth_card *card = dev_get_drvdata(dev);

 	if (card->info.hwtrap)
-		return snprintf(buf, 5, "arm\n");
+		return sysfs_emit(buf, "arm\n");
 	else
-		return snprintf(buf, 8, "disarm\n");
+		return sysfs_emit(buf, "disarm\n");
 }

 static ssize_t qeth_hw_trap_store(struct device *dev,

View File

@@ -45,8 +45,8 @@ struct sk_buff;
 					QCA_HDR_MGMT_COMMAND_LEN + \
 					QCA_HDR_MGMT_DATA1_LEN)

-#define QCA_HDR_MGMT_DATA2_LEN		12 /* Other 12 byte for the mdio data */
-#define QCA_HDR_MGMT_PADDING_LEN	34 /* Padding to reach the min Ethernet packet */
+#define QCA_HDR_MGMT_DATA2_LEN		28 /* Other 28 byte for the mdio data */
+#define QCA_HDR_MGMT_PADDING_LEN	18 /* Padding to reach the min Ethernet packet */

 #define QCA_HDR_MGMT_PKT_LEN		(QCA_HDR_MGMT_HEADER_LEN + \
 					QCA_HDR_LEN + \
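The two constants move in opposite directions by the same 16 bytes (12 + 34 = 28 + 18 = 46), so the management frame keeps its overall size — the padding exists only to reach the minimum Ethernet frame — and only the split between real mdio data and filler changes. A compile-time restatement of that arithmetic (values copied from the hunk, the assertion itself is just illustrative):

#include <assert.h>

#define OLD_DATA2_LEN		12
#define OLD_PADDING_LEN		34
#define NEW_DATA2_LEN		28
#define NEW_PADDING_LEN		18

static_assert(OLD_DATA2_LEN + OLD_PADDING_LEN ==
	      NEW_DATA2_LEN + NEW_PADDING_LEN,
	      "mdio data grows, padding shrinks, total stays the same");

int main(void) { return 0; }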

View File

@@ -1090,6 +1090,11 @@ enum {
 	MLX5_VPORT_ADMIN_STATE_AUTO  = 0x2,
 };

+enum {
+	MLX5_VPORT_CVLAN_INSERT_WHEN_NO_CVLAN  = 0x1,
+	MLX5_VPORT_CVLAN_INSERT_ALWAYS         = 0x3,
+};
+
 enum {
 	MLX5_L3_PROT_TYPE_IPV4 = 0,
 	MLX5_L3_PROT_TYPE_IPV6 = 1,

View File

@@ -913,7 +913,8 @@ struct mlx5_ifc_e_switch_cap_bits {
 	u8         vport_svlan_insert[0x1];
 	u8         vport_cvlan_insert_if_not_exist[0x1];
 	u8         vport_cvlan_insert_overwrite[0x1];
-	u8         reserved_at_5[0x2];
+	u8         reserved_at_5[0x1];
+	u8         vport_cvlan_insert_always[0x1];
 	u8         esw_shared_ingress_acl[0x1];
 	u8         esw_uplink_ingress_acl[0x1];
 	u8         root_ft_on_other_esw[0x1];

View File

@@ -197,7 +197,7 @@ struct ip_set_region {
 };

 /* Max range where every element is added/deleted in one step */
-#define IPSET_MAX_RANGE		(1<<20)
+#define IPSET_MAX_RANGE		(1<<14)

 /* The max revision number supported by any set type + 1 */
 #define IPSET_REVISION_MAX	9

View File

@@ -826,10 +826,7 @@ struct phy_driver {
 	 * whether to advertise lower-speed modes for that interface. It is
 	 * assumed that if a rate matching mode is supported on an interface,
 	 * then that interface's rate can be adapted to all slower link speeds
-	 * supported by the phy. If iface is %PHY_INTERFACE_MODE_NA, and the phy
-	 * supports any kind of rate matching for any interface, then it must
-	 * return that rate matching mode (preferring %RATE_MATCH_PAUSE to
-	 * %RATE_MATCH_CRS). If the interface is not supported, this should
+	 * supported by the phy. If the interface is not supported, this should
 	 * return %RATE_MATCH_NONE.
 	 */
 	int (*get_rate_matching)(struct phy_device *phydev,

View File

@@ -108,6 +108,10 @@ struct inet_bind2_bucket {
 	struct hlist_node	node;
 	/* List of sockets hashed to this bucket */
 	struct hlist_head	owners;
+	/* bhash has twsk in owners, but bhash2 has twsk in
+	 * deathrow not to add a member in struct sock_common.
+	 */
+	struct hlist_head	deathrow;
 };

 static inline struct net *ib_net(const struct inet_bind_bucket *ib)
static inline struct net *ib_net(const struct inet_bind_bucket *ib) static inline struct net *ib_net(const struct inet_bind_bucket *ib)

View File

@@ -73,9 +73,14 @@ struct inet_timewait_sock {
 	u32			tw_priority;
 	struct timer_list	tw_timer;
 	struct inet_bind_bucket	*tw_tb;
+	struct inet_bind2_bucket	*tw_tb2;
+	struct hlist_node		tw_bind2_node;
 };
 #define tw_tclass tw_tos

+#define twsk_for_each_bound_bhash2(__tw, list) \
+	hlist_for_each_entry(__tw, list, tw_bind2_node)
+
 static inline struct inet_timewait_sock *inet_twsk(const struct sock *sk)
 {
 	return (struct inet_timewait_sock *)sk;

View File

@ -312,17 +312,29 @@ struct nft_set_iter {
/** /**
* struct nft_set_desc - description of set elements * struct nft_set_desc - description of set elements
* *
* @ktype: key type
* @klen: key length * @klen: key length
* @dtype: data type
* @dlen: data length * @dlen: data length
* @objtype: object type
* @flags: flags
* @size: number of set elements * @size: number of set elements
* @policy: set policy
* @gc_int: garbage collector interval
* @field_len: length of each field in concatenation, bytes * @field_len: length of each field in concatenation, bytes
* @field_count: number of concatenated fields in element * @field_count: number of concatenated fields in element
* @expr: set must support for expressions * @expr: set must support for expressions
*/ */
struct nft_set_desc { struct nft_set_desc {
u32 ktype;
unsigned int klen; unsigned int klen;
u32 dtype;
unsigned int dlen; unsigned int dlen;
u32 objtype;
unsigned int size; unsigned int size;
u32 policy;
u32 gc_int;
u64 timeout;
u8 field_len[NFT_REG32_COUNT]; u8 field_len[NFT_REG32_COUNT];
u8 field_count; u8 field_count;
bool expr; bool expr;
@ -585,7 +597,9 @@ void *nft_set_catchall_gc(const struct nft_set *set);
static inline unsigned long nft_set_gc_interval(const struct nft_set *set) static inline unsigned long nft_set_gc_interval(const struct nft_set *set)
{ {
return set->gc_int ? msecs_to_jiffies(set->gc_int) : HZ; u32 gc_int = READ_ONCE(set->gc_int);
return gc_int ? msecs_to_jiffies(gc_int) : HZ;
} }
/** /**
@ -1558,6 +1572,9 @@ struct nft_trans_rule {
struct nft_trans_set { struct nft_trans_set {
struct nft_set *set; struct nft_set *set;
u32 set_id; u32 set_id;
u32 gc_int;
u64 timeout;
bool update;
bool bound; bool bound;
}; };
@ -1567,6 +1584,12 @@ struct nft_trans_set {
(((struct nft_trans_set *)trans->data)->set_id) (((struct nft_trans_set *)trans->data)->set_id)
#define nft_trans_set_bound(trans) \ #define nft_trans_set_bound(trans) \
(((struct nft_trans_set *)trans->data)->bound) (((struct nft_trans_set *)trans->data)->bound)
#define nft_trans_set_update(trans) \
(((struct nft_trans_set *)trans->data)->update)
#define nft_trans_set_timeout(trans) \
(((struct nft_trans_set *)trans->data)->timeout)
#define nft_trans_set_gc_int(trans) \
(((struct nft_trans_set *)trans->data)->gc_int)
struct nft_trans_chain { struct nft_trans_chain {
bool update; bool update;

View File

@@ -216,6 +216,8 @@ skip:
 	return tp->classify(skb, tp, res);
 }

+#endif /* CONFIG_NET_CLS */
+
 static inline void tc_wrapper_init(void)
 {
 #ifdef CONFIG_X86
@@ -224,8 +226,6 @@ static inline void tc_wrapper_init(void)
 #endif
 }

-#endif /* CONFIG_NET_CLS */
-
 #else

 #define TC_INDIRECT_SCOPE static

View File

@@ -1062,10 +1062,10 @@ TRACE_EVENT(rxrpc_receive,
 	    );

 TRACE_EVENT(rxrpc_recvmsg,
-	    TP_PROTO(struct rxrpc_call *call, enum rxrpc_recvmsg_trace why,
+	    TP_PROTO(unsigned int call_debug_id, enum rxrpc_recvmsg_trace why,
 		     int ret),

-	    TP_ARGS(call, why, ret),
+	    TP_ARGS(call_debug_id, why, ret),

 	    TP_STRUCT__entry(
 		    __field(unsigned int,	call		)
@@ -1074,7 +1074,7 @@ TRACE_EVENT(rxrpc_recvmsg,
 			     ),

 	    TP_fast_assign(
-		    __entry->call = call ? call->debug_id : 0;
+		    __entry->call = call_debug_id;
 		    __entry->why = why;
 		    __entry->ret = ret;
 			   ),

View File

@@ -38,7 +38,7 @@
  */
 #define BR2684_ENCAPS_VC	(0)	/* VC-mux */
 #define BR2684_ENCAPS_LLC	(1)
-#define BR2684_ENCAPS_AUTODETECT (2)	/* Unsuported */
+#define BR2684_ENCAPS_AUTODETECT (2)	/* Unsupported */

 /*
  * Is this VC bridged or routed?

View File

@@ -351,8 +351,10 @@ BTF_ID(func, bpf_lsm_bpf_prog_alloc_security)
 BTF_ID(func, bpf_lsm_bpf_prog_free_security)
 BTF_ID(func, bpf_lsm_file_alloc_security)
 BTF_ID(func, bpf_lsm_file_free_security)
+#ifdef CONFIG_SECURITY_NETWORK
 BTF_ID(func, bpf_lsm_sk_alloc_security)
 BTF_ID(func, bpf_lsm_sk_free_security)
+#endif /* CONFIG_SECURITY_NETWORK */
 BTF_ID(func, bpf_lsm_task_free)
 BTF_SET_END(untrusted_lsm_hooks)

View File

@ -438,6 +438,7 @@ struct bpf_iter_seq_task_vma_info {
*/ */
struct bpf_iter_seq_task_common common; struct bpf_iter_seq_task_common common;
struct task_struct *task; struct task_struct *task;
struct mm_struct *mm;
struct vm_area_struct *vma; struct vm_area_struct *vma;
u32 tid; u32 tid;
unsigned long prev_vm_start; unsigned long prev_vm_start;
@ -456,16 +457,19 @@ task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
enum bpf_task_vma_iter_find_op op; enum bpf_task_vma_iter_find_op op;
struct vm_area_struct *curr_vma; struct vm_area_struct *curr_vma;
struct task_struct *curr_task; struct task_struct *curr_task;
struct mm_struct *curr_mm;
u32 saved_tid = info->tid; u32 saved_tid = info->tid;
/* If this function returns a non-NULL vma, it holds a reference to /* If this function returns a non-NULL vma, it holds a reference to
* the task_struct, and holds read lock on vma->mm->mmap_lock. * the task_struct, holds a refcount on mm->mm_users, and holds
* read lock on vma->mm->mmap_lock.
* If this function returns NULL, it does not hold any reference or * If this function returns NULL, it does not hold any reference or
* lock. * lock.
*/ */
if (info->task) { if (info->task) {
curr_task = info->task; curr_task = info->task;
curr_vma = info->vma; curr_vma = info->vma;
curr_mm = info->mm;
/* In case of lock contention, drop mmap_lock to unblock /* In case of lock contention, drop mmap_lock to unblock
* the writer. * the writer.
* *
@ -504,13 +508,15 @@ task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
* 4.2) VMA2 and VMA2' covers different ranges, process * 4.2) VMA2 and VMA2' covers different ranges, process
* VMA2'. * VMA2'.
*/ */
if (mmap_lock_is_contended(curr_task->mm)) { if (mmap_lock_is_contended(curr_mm)) {
info->prev_vm_start = curr_vma->vm_start; info->prev_vm_start = curr_vma->vm_start;
info->prev_vm_end = curr_vma->vm_end; info->prev_vm_end = curr_vma->vm_end;
op = task_vma_iter_find_vma; op = task_vma_iter_find_vma;
mmap_read_unlock(curr_task->mm); mmap_read_unlock(curr_mm);
if (mmap_read_lock_killable(curr_task->mm)) if (mmap_read_lock_killable(curr_mm)) {
mmput(curr_mm);
goto finish; goto finish;
}
} else { } else {
op = task_vma_iter_next_vma; op = task_vma_iter_next_vma;
} }
@ -535,42 +541,47 @@ again:
op = task_vma_iter_find_vma; op = task_vma_iter_find_vma;
} }
if (!curr_task->mm) curr_mm = get_task_mm(curr_task);
if (!curr_mm)
goto next_task; goto next_task;
if (mmap_read_lock_killable(curr_task->mm)) if (mmap_read_lock_killable(curr_mm)) {
mmput(curr_mm);
goto finish; goto finish;
}
} }
switch (op) { switch (op) {
case task_vma_iter_first_vma: case task_vma_iter_first_vma:
curr_vma = find_vma(curr_task->mm, 0); curr_vma = find_vma(curr_mm, 0);
break; break;
case task_vma_iter_next_vma: case task_vma_iter_next_vma:
curr_vma = find_vma(curr_task->mm, curr_vma->vm_end); curr_vma = find_vma(curr_mm, curr_vma->vm_end);
break; break;
case task_vma_iter_find_vma: case task_vma_iter_find_vma:
/* We dropped mmap_lock so it is necessary to use find_vma /* We dropped mmap_lock so it is necessary to use find_vma
* to find the next vma. This is similar to the mechanism * to find the next vma. This is similar to the mechanism
* in show_smaps_rollup(). * in show_smaps_rollup().
*/ */
curr_vma = find_vma(curr_task->mm, info->prev_vm_end - 1); curr_vma = find_vma(curr_mm, info->prev_vm_end - 1);
/* case 1) and 4.2) above just use curr_vma */ /* case 1) and 4.2) above just use curr_vma */
/* check for case 2) or case 4.1) above */ /* check for case 2) or case 4.1) above */
if (curr_vma && if (curr_vma &&
curr_vma->vm_start == info->prev_vm_start && curr_vma->vm_start == info->prev_vm_start &&
curr_vma->vm_end == info->prev_vm_end) curr_vma->vm_end == info->prev_vm_end)
curr_vma = find_vma(curr_task->mm, curr_vma->vm_end); curr_vma = find_vma(curr_mm, curr_vma->vm_end);
break; break;
} }
if (!curr_vma) { if (!curr_vma) {
/* case 3) above, or case 2) 4.1) with vma->next == NULL */ /* case 3) above, or case 2) 4.1) with vma->next == NULL */
mmap_read_unlock(curr_task->mm); mmap_read_unlock(curr_mm);
mmput(curr_mm);
goto next_task; goto next_task;
} }
info->task = curr_task; info->task = curr_task;
info->vma = curr_vma; info->vma = curr_vma;
info->mm = curr_mm;
return curr_vma; return curr_vma;
next_task: next_task:
@ -579,6 +590,7 @@ next_task:
put_task_struct(curr_task); put_task_struct(curr_task);
info->task = NULL; info->task = NULL;
info->mm = NULL;
info->tid++; info->tid++;
goto again; goto again;
@ -587,6 +599,7 @@ finish:
put_task_struct(curr_task); put_task_struct(curr_task);
info->task = NULL; info->task = NULL;
info->vma = NULL; info->vma = NULL;
info->mm = NULL;
return NULL; return NULL;
} }
@ -658,7 +671,9 @@ static void task_vma_seq_stop(struct seq_file *seq, void *v)
*/ */
info->prev_vm_start = ~0UL; info->prev_vm_start = ~0UL;
info->prev_vm_end = info->vma->vm_end; info->prev_vm_end = info->vma->vm_end;
mmap_read_unlock(info->task->mm); mmap_read_unlock(info->mm);
mmput(info->mm);
info->mm = NULL;
put_task_struct(info->task); put_task_struct(info->task);
info->task = NULL; info->task = NULL;
} }

View File

@@ -488,6 +488,10 @@ again:
 		/* reset fops->func and fops->trampoline for re-register */
 		tr->fops->func = NULL;
 		tr->fops->trampoline = 0;
+
+		/* reset im->image memory attr for arch_prepare_bpf_trampoline */
+		set_memory_nx((long)im->image, 1);
+		set_memory_rw((long)im->image, 1);
 		goto again;
 	}
 #endif
#endif #endif

View File

@@ -1054,6 +1054,8 @@ static void print_insn_state(struct bpf_verifier_env *env,
  */
 static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t flags)
 {
+	size_t alloc_bytes;
+	void *orig = dst;
 	size_t bytes;

 	if (ZERO_OR_NULL_PTR(src))
@@ -1062,11 +1064,11 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;

-	if (ksize(dst) < ksize(src)) {
-		kfree(dst);
-		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
-		if (!dst)
-			return NULL;
+	alloc_bytes = max(ksize(orig), kmalloc_size_roundup(bytes));
+	dst = krealloc(orig, alloc_bytes, flags);
+	if (!dst) {
+		kfree(orig);
+		return NULL;
 	}

 	memcpy(dst, src, bytes);
@@ -11822,10 +11824,17 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	 *   register B - not null
 	 * for JNE A, B, ... - A is not null in the false branch;
 	 * for JEQ A, B, ... - A is not null in the true branch.
+	 *
+	 * Since PTR_TO_BTF_ID points to a kernel struct that does
+	 * not need to be null checked by the BPF program, i.e.,
+	 * could be null even without PTR_MAYBE_NULL marking, so
+	 * only propagate nullness when neither reg is that type.
 	 */
 	if (!is_jmp32 && BPF_SRC(insn->code) == BPF_X &&
 	    __is_pointer_value(false, src_reg) && __is_pointer_value(false, dst_reg) &&
-	    type_may_be_null(src_reg->type) != type_may_be_null(dst_reg->type)) {
+	    type_may_be_null(src_reg->type) != type_may_be_null(dst_reg->type) &&
+	    base_type(src_reg->type) != PTR_TO_BTF_ID &&
+	    base_type(dst_reg->type) != PTR_TO_BTF_ID) {
 		eq_branch_regs = NULL;
 		switch (opcode) {
 		case BPF_JEQ:
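The copy_array() change replaces the ksize() comparison dance with a single krealloc() sized to at least the rounded-up request, freeing the original buffer if reallocation fails. The same shape in plain userspace C, as a rough analogue (realloc-based, without the kernel's ksize()/kmalloc_size_roundup() refinements):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Grow-or-reuse copy helper: on failure the old buffer is freed so the
 * caller never keeps a dangling pointer.
 */
static void *copy_array(void *dst, const void *src, size_t n, size_t size)
{
	void *orig = dst;
	size_t bytes;

	if (!src || !n || !size)
		return dst;

	if (n > SIZE_MAX / size)	/* multiplication overflow check */
		return NULL;
	bytes = n * size;

	dst = realloc(orig, bytes);	/* reuses the old block when it fits */
	if (!dst) {
		free(orig);
		return NULL;
	}

	memcpy(dst, src, bytes);
	return dst;
}

int main(void)
{
	int src[4] = { 1, 2, 3, 4 };
	int *dst = copy_array(NULL, src, 4, sizeof(int));
	int ok = dst && dst[2] == 3;

	free(dst);
	return ok ? 0 : 1;
}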

View File

@@ -269,11 +269,15 @@ int cfctrl_linkup_request(struct cflayer *layer,
 	default:
 		pr_warn("Request setup of bad link type = %d\n",
 			param->linktype);
+		cfpkt_destroy(pkt);
 		return -EINVAL;
 	}
 	req = kzalloc(sizeof(*req), GFP_KERNEL);
-	if (!req)
+	if (!req) {
+		cfpkt_destroy(pkt);
 		return -ENOMEM;
+	}
 	req->client_layer = user_layer;
 	req->cmd = CFCTRL_CMD_LINK_SETUP;
 	req->param = *param;

View File

@@ -3180,15 +3180,18 @@ static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)

 static int bpf_skb_generic_pop(struct sk_buff *skb, u32 off, u32 len)
 {
+	void *old_data;
+
 	/* skb_ensure_writable() is not needed here, as we're
 	 * already working on an uncloned skb.
 	 */
 	if (unlikely(!pskb_may_pull(skb, off + len)))
 		return -ENOMEM;

-	skb_postpull_rcsum(skb, skb->data + off, len);
-	memmove(skb->data + len, skb->data, off);
+	old_data = skb->data;
 	__skb_pull(skb, len);
+	skb_postpull_rcsum(skb, old_data + off, len);
+	memmove(skb->data, old_data, off);

 	return 0;
 }
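The reorder keeps a pointer to the old start of the data, pulls the bytes off the front first, and only then folds them out of the receive checksum and slides the retained header down; previously the checksum adjustment and memmove ran before the pull. A condensed restatement of the new ordering with the steps spelled out (a sketch, not a drop-in replacement for the helper):

static int generic_pop(struct sk_buff *skb, u32 off, u32 len)
{
	void *old_data;

	/* make sure the bytes we are about to touch are linear */
	if (unlikely(!pskb_may_pull(skb, off + len)))
		return -ENOMEM;

	old_data = skb->data;				/* 1. remember old start       */
	__skb_pull(skb, len);				/* 2. drop len bytes up front  */
	skb_postpull_rcsum(skb, old_data + off, len);	/* 3. fix csum for those bytes */
	memmove(skb->data, old_data, off);		/* 4. slide kept header down   */

	return 0;
}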

View File

@ -2078,58 +2078,91 @@ static int ethtool_get_stats(struct net_device *dev, void __user *useraddr)
return ret; return ret;
} }
static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr) static int ethtool_vzalloc_stats_array(int n_stats, u64 **data)
{ {
const struct ethtool_phy_ops *phy_ops = ethtool_phy_ops;
const struct ethtool_ops *ops = dev->ethtool_ops;
struct phy_device *phydev = dev->phydev;
struct ethtool_stats stats;
u64 *data;
int ret, n_stats;
if (!phydev && (!ops->get_ethtool_phy_stats || !ops->get_sset_count))
return -EOPNOTSUPP;
if (phydev && !ops->get_ethtool_phy_stats &&
phy_ops && phy_ops->get_sset_count)
n_stats = phy_ops->get_sset_count(phydev);
else
n_stats = ops->get_sset_count(dev, ETH_SS_PHY_STATS);
if (n_stats < 0) if (n_stats < 0)
return n_stats; return n_stats;
if (n_stats > S32_MAX / sizeof(u64)) if (n_stats > S32_MAX / sizeof(u64))
return -ENOMEM; return -ENOMEM;
WARN_ON_ONCE(!n_stats); if (WARN_ON_ONCE(!n_stats))
return -EOPNOTSUPP;
*data = vzalloc(array_size(n_stats, sizeof(u64)));
if (!*data)
return -ENOMEM;
return 0;
}
static int ethtool_get_phy_stats_phydev(struct phy_device *phydev,
struct ethtool_stats *stats,
u64 **data)
{
const struct ethtool_phy_ops *phy_ops = ethtool_phy_ops;
int n_stats, ret;
if (!phy_ops || !phy_ops->get_sset_count || !phy_ops->get_stats)
return -EOPNOTSUPP;
n_stats = phy_ops->get_sset_count(phydev);
ret = ethtool_vzalloc_stats_array(n_stats, data);
if (ret)
return ret;
stats->n_stats = n_stats;
return phy_ops->get_stats(phydev, stats, *data);
}
static int ethtool_get_phy_stats_ethtool(struct net_device *dev,
struct ethtool_stats *stats,
u64 **data)
{
const struct ethtool_ops *ops = dev->ethtool_ops;
int n_stats, ret;
if (!ops || !ops->get_sset_count || ops->get_ethtool_phy_stats)
return -EOPNOTSUPP;
n_stats = ops->get_sset_count(dev, ETH_SS_PHY_STATS);
ret = ethtool_vzalloc_stats_array(n_stats, data);
if (ret)
return ret;
stats->n_stats = n_stats;
ops->get_ethtool_phy_stats(dev, stats, *data);
return 0;
}
static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
{
struct phy_device *phydev = dev->phydev;
struct ethtool_stats stats;
u64 *data = NULL;
int ret = -EOPNOTSUPP;
if (copy_from_user(&stats, useraddr, sizeof(stats))) if (copy_from_user(&stats, useraddr, sizeof(stats)))
return -EFAULT; return -EFAULT;
stats.n_stats = n_stats; if (phydev)
ret = ethtool_get_phy_stats_phydev(phydev, &stats, &data);
if (n_stats) { if (ret == -EOPNOTSUPP)
data = vzalloc(array_size(n_stats, sizeof(u64))); ret = ethtool_get_phy_stats_ethtool(dev, &stats, &data);
if (!data)
return -ENOMEM;
if (phydev && !ops->get_ethtool_phy_stats && if (ret)
phy_ops && phy_ops->get_stats) { goto out;
ret = phy_ops->get_stats(phydev, &stats, data);
if (ret < 0) if (copy_to_user(useraddr, &stats, sizeof(stats))) {
goto out; ret = -EFAULT;
} else { goto out;
ops->get_ethtool_phy_stats(dev, &stats, data);
}
} else {
data = NULL;
} }
ret = -EFAULT;
if (copy_to_user(useraddr, &stats, sizeof(stats)))
goto out;
useraddr += sizeof(stats); useraddr += sizeof(stats);
if (n_stats && copy_to_user(useraddr, data, array_size(n_stats, sizeof(u64)))) if (copy_to_user(useraddr, data, array_size(stats.n_stats, sizeof(u64))))
goto out; ret = -EFAULT;
ret = 0;
out: out:
vfree(data); vfree(data);

View File

@@ -1665,6 +1665,7 @@ int inet_ctl_sock_create(struct sock **sk, unsigned short family,
 	if (rc == 0) {
 		*sk = sock->sk;
 		(*sk)->sk_allocation = GFP_ATOMIC;
+		(*sk)->sk_use_task_frag = false;
 		/*
 		 * Unhash it so that IP input processing does not even see it,
 		 * we do not wish this socket to see incoming packets.
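The recently added sk_use_task_frag flag controls whether a socket may use current->task_frag for its page-fragment allocations; control sockets transmit from whatever task happens to be running, so they opt out and fall back to the per-socket fragment cache. Roughly, the same applies to any kernel-internal socket with that property (family/protocol below are only an example and `net` is assumed to be the target namespace):

struct socket *sock;
struct sock *sk;
int err;

err = sock_create_kern(net, PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
if (err)
	return err;

sk = sock->sk;
sk->sk_allocation = GFP_ATOMIC;		/* may transmit from BH context      */
sk->sk_use_task_frag = false;		/* never borrow current->task_frag   */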

View File

@ -173,22 +173,40 @@ static bool inet_bind_conflict(const struct sock *sk, struct sock *sk2,
return false; return false;
} }
static bool __inet_bhash2_conflict(const struct sock *sk, struct sock *sk2,
kuid_t sk_uid, bool relax,
bool reuseport_cb_ok, bool reuseport_ok)
{
if (sk->sk_family == AF_INET && ipv6_only_sock(sk2))
return false;
return inet_bind_conflict(sk, sk2, sk_uid, relax,
reuseport_cb_ok, reuseport_ok);
}
static bool inet_bhash2_conflict(const struct sock *sk, static bool inet_bhash2_conflict(const struct sock *sk,
const struct inet_bind2_bucket *tb2, const struct inet_bind2_bucket *tb2,
kuid_t sk_uid, kuid_t sk_uid,
bool relax, bool reuseport_cb_ok, bool relax, bool reuseport_cb_ok,
bool reuseport_ok) bool reuseport_ok)
{ {
struct inet_timewait_sock *tw2;
struct sock *sk2; struct sock *sk2;
sk_for_each_bound_bhash2(sk2, &tb2->owners) { sk_for_each_bound_bhash2(sk2, &tb2->owners) {
if (sk->sk_family == AF_INET && ipv6_only_sock(sk2)) if (__inet_bhash2_conflict(sk, sk2, sk_uid, relax,
continue; reuseport_cb_ok, reuseport_ok))
if (inet_bind_conflict(sk, sk2, sk_uid, relax,
reuseport_cb_ok, reuseport_ok))
return true; return true;
} }
twsk_for_each_bound_bhash2(tw2, &tb2->deathrow) {
sk2 = (struct sock *)tw2;
if (__inet_bhash2_conflict(sk, sk2, sk_uid, relax,
reuseport_cb_ok, reuseport_ok))
return true;
}
return false; return false;
} }
@ -1182,12 +1200,26 @@ void inet_csk_prepare_forced_close(struct sock *sk)
} }
EXPORT_SYMBOL(inet_csk_prepare_forced_close); EXPORT_SYMBOL(inet_csk_prepare_forced_close);
static int inet_ulp_can_listen(const struct sock *sk)
{
const struct inet_connection_sock *icsk = inet_csk(sk);
if (icsk->icsk_ulp_ops && !icsk->icsk_ulp_ops->clone)
return -EINVAL;
return 0;
}
int inet_csk_listen_start(struct sock *sk) int inet_csk_listen_start(struct sock *sk)
{ {
struct inet_connection_sock *icsk = inet_csk(sk); struct inet_connection_sock *icsk = inet_csk(sk);
struct inet_sock *inet = inet_sk(sk); struct inet_sock *inet = inet_sk(sk);
int err; int err;
err = inet_ulp_can_listen(sk);
if (unlikely(err))
return err;
reqsk_queue_alloc(&icsk->icsk_accept_queue); reqsk_queue_alloc(&icsk->icsk_accept_queue);
sk->sk_ack_backlog = 0; sk->sk_ack_backlog = 0;

View File

@ -116,6 +116,7 @@ static void inet_bind2_bucket_init(struct inet_bind2_bucket *tb,
#endif #endif
tb->rcv_saddr = sk->sk_rcv_saddr; tb->rcv_saddr = sk->sk_rcv_saddr;
INIT_HLIST_HEAD(&tb->owners); INIT_HLIST_HEAD(&tb->owners);
INIT_HLIST_HEAD(&tb->deathrow);
hlist_add_head(&tb->node, &head->chain); hlist_add_head(&tb->node, &head->chain);
} }
@ -137,7 +138,7 @@ struct inet_bind2_bucket *inet_bind2_bucket_create(struct kmem_cache *cachep,
/* Caller must hold hashbucket lock for this tb with local BH disabled */ /* Caller must hold hashbucket lock for this tb with local BH disabled */
void inet_bind2_bucket_destroy(struct kmem_cache *cachep, struct inet_bind2_bucket *tb) void inet_bind2_bucket_destroy(struct kmem_cache *cachep, struct inet_bind2_bucket *tb)
{ {
if (hlist_empty(&tb->owners)) { if (hlist_empty(&tb->owners) && hlist_empty(&tb->deathrow)) {
__hlist_del(&tb->node); __hlist_del(&tb->node);
kmem_cache_free(cachep, tb); kmem_cache_free(cachep, tb);
} }
@ -1103,15 +1104,16 @@ ok:
/* Head lock still held and bh's disabled */ /* Head lock still held and bh's disabled */
inet_bind_hash(sk, tb, tb2, port); inet_bind_hash(sk, tb, tb2, port);
spin_unlock(&head2->lock);
if (sk_unhashed(sk)) { if (sk_unhashed(sk)) {
inet_sk(sk)->inet_sport = htons(port); inet_sk(sk)->inet_sport = htons(port);
inet_ehash_nolisten(sk, (struct sock *)tw, NULL); inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
} }
if (tw) if (tw)
inet_twsk_bind_unhash(tw, hinfo); inet_twsk_bind_unhash(tw, hinfo);
spin_unlock(&head2->lock);
spin_unlock(&head->lock); spin_unlock(&head->lock);
if (tw) if (tw)
inet_twsk_deschedule_put(tw); inet_twsk_deschedule_put(tw);
local_bh_enable(); local_bh_enable();

View File

@ -29,6 +29,7 @@
void inet_twsk_bind_unhash(struct inet_timewait_sock *tw, void inet_twsk_bind_unhash(struct inet_timewait_sock *tw,
struct inet_hashinfo *hashinfo) struct inet_hashinfo *hashinfo)
{ {
struct inet_bind2_bucket *tb2 = tw->tw_tb2;
struct inet_bind_bucket *tb = tw->tw_tb; struct inet_bind_bucket *tb = tw->tw_tb;
if (!tb) if (!tb)
@ -37,6 +38,11 @@ void inet_twsk_bind_unhash(struct inet_timewait_sock *tw,
__hlist_del(&tw->tw_bind_node); __hlist_del(&tw->tw_bind_node);
tw->tw_tb = NULL; tw->tw_tb = NULL;
inet_bind_bucket_destroy(hashinfo->bind_bucket_cachep, tb); inet_bind_bucket_destroy(hashinfo->bind_bucket_cachep, tb);
__hlist_del(&tw->tw_bind2_node);
tw->tw_tb2 = NULL;
inet_bind2_bucket_destroy(hashinfo->bind2_bucket_cachep, tb2);
__sock_put((struct sock *)tw); __sock_put((struct sock *)tw);
} }
@ -45,7 +51,7 @@ static void inet_twsk_kill(struct inet_timewait_sock *tw)
{ {
struct inet_hashinfo *hashinfo = tw->tw_dr->hashinfo; struct inet_hashinfo *hashinfo = tw->tw_dr->hashinfo;
spinlock_t *lock = inet_ehash_lockp(hashinfo, tw->tw_hash); spinlock_t *lock = inet_ehash_lockp(hashinfo, tw->tw_hash);
struct inet_bind_hashbucket *bhead; struct inet_bind_hashbucket *bhead, *bhead2;
spin_lock(lock); spin_lock(lock);
sk_nulls_del_node_init_rcu((struct sock *)tw); sk_nulls_del_node_init_rcu((struct sock *)tw);
@ -54,9 +60,13 @@ static void inet_twsk_kill(struct inet_timewait_sock *tw)
/* Disassociate with bind bucket. */ /* Disassociate with bind bucket. */
bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), tw->tw_num, bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), tw->tw_num,
hashinfo->bhash_size)]; hashinfo->bhash_size)];
bhead2 = inet_bhashfn_portaddr(hashinfo, (struct sock *)tw,
twsk_net(tw), tw->tw_num);
spin_lock(&bhead->lock); spin_lock(&bhead->lock);
spin_lock(&bhead2->lock);
inet_twsk_bind_unhash(tw, hashinfo); inet_twsk_bind_unhash(tw, hashinfo);
spin_unlock(&bhead2->lock);
spin_unlock(&bhead->lock); spin_unlock(&bhead->lock);
refcount_dec(&tw->tw_dr->tw_refcount); refcount_dec(&tw->tw_dr->tw_refcount);
@ -93,6 +103,12 @@ static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
hlist_add_head(&tw->tw_bind_node, list); hlist_add_head(&tw->tw_bind_node, list);
} }
static void inet_twsk_add_bind2_node(struct inet_timewait_sock *tw,
struct hlist_head *list)
{
hlist_add_head(&tw->tw_bind2_node, list);
}
/* /*
* Enter the time wait state. This is called with locally disabled BH. * Enter the time wait state. This is called with locally disabled BH.
* Essentially we whip up a timewait bucket, copy the relevant info into it * Essentially we whip up a timewait bucket, copy the relevant info into it
@ -105,17 +121,28 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
const struct inet_connection_sock *icsk = inet_csk(sk); const struct inet_connection_sock *icsk = inet_csk(sk);
struct inet_ehash_bucket *ehead = inet_ehash_bucket(hashinfo, sk->sk_hash); struct inet_ehash_bucket *ehead = inet_ehash_bucket(hashinfo, sk->sk_hash);
spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
struct inet_bind_hashbucket *bhead; struct inet_bind_hashbucket *bhead, *bhead2;
/* Step 1: Put TW into bind hash. Original socket stays there too. /* Step 1: Put TW into bind hash. Original socket stays there too.
Note, that any socket with inet->num != 0 MUST be bound in Note, that any socket with inet->num != 0 MUST be bound in
binding cache, even if it is closed. binding cache, even if it is closed.
*/ */
bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), inet->inet_num, bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), inet->inet_num,
hashinfo->bhash_size)]; hashinfo->bhash_size)];
bhead2 = inet_bhashfn_portaddr(hashinfo, sk, twsk_net(tw), inet->inet_num);
spin_lock(&bhead->lock); spin_lock(&bhead->lock);
spin_lock(&bhead2->lock);
tw->tw_tb = icsk->icsk_bind_hash; tw->tw_tb = icsk->icsk_bind_hash;
WARN_ON(!icsk->icsk_bind_hash); WARN_ON(!icsk->icsk_bind_hash);
inet_twsk_add_bind_node(tw, &tw->tw_tb->owners); inet_twsk_add_bind_node(tw, &tw->tw_tb->owners);
tw->tw_tb2 = icsk->icsk_bind2_hash;
WARN_ON(!icsk->icsk_bind2_hash);
inet_twsk_add_bind2_node(tw, &tw->tw_tb2->deathrow);
spin_unlock(&bhead2->lock);
spin_unlock(&bhead->lock); spin_unlock(&bhead->lock);
spin_lock(lock); spin_lock(lock);

View File

@@ -139,6 +139,10 @@ static int __tcp_set_ulp(struct sock *sk, const struct tcp_ulp_ops *ulp_ops)
 	if (sk->sk_socket)
 		clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);

+	err = -EINVAL;
+	if (!ulp_ops->clone && sk->sk_state == TCP_LISTEN)
+		goto out_err;
+
 	err = ulp_ops->init(sk);
 	if (err)
 		goto out_err;
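This guard is one half of the ULP/listen fix: setsockopt(TCP_ULP) now refuses to attach a clone-less ULP to a socket that is already listening, while the matching check added to inet_csk_listen_start() elsewhere in this series refuses listen() when such a ULP is already attached. The listen-side helper, for reference:

static int inet_ulp_can_listen(const struct sock *sk)
{
	const struct inet_connection_sock *icsk = inet_csk(sk);

	if (icsk->icsk_ulp_ops && !icsk->icsk_ulp_ops->clone)
		return -EINVAL;

	return 0;
}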

View File

@ -1662,6 +1662,8 @@ static void mptcp_set_nospace(struct sock *sk)
set_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags); set_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags);
} }
static int mptcp_disconnect(struct sock *sk, int flags);
static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msghdr *msg, static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msghdr *msg,
size_t len, int *copied_syn) size_t len, int *copied_syn)
{ {
@ -1672,9 +1674,9 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msgh
lock_sock(ssk); lock_sock(ssk);
msg->msg_flags |= MSG_DONTWAIT; msg->msg_flags |= MSG_DONTWAIT;
msk->connect_flags = O_NONBLOCK; msk->connect_flags = O_NONBLOCK;
msk->is_sendmsg = 1; msk->fastopening = 1;
ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL); ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL);
msk->is_sendmsg = 0; msk->fastopening = 0;
msg->msg_flags = saved_flags; msg->msg_flags = saved_flags;
release_sock(ssk); release_sock(ssk);
@ -1688,6 +1690,8 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msgh
*/ */
if (ret && ret != -EINPROGRESS && ret != -ERESTARTSYS && ret != -EINTR) if (ret && ret != -EINPROGRESS && ret != -ERESTARTSYS && ret != -EINTR)
*copied_syn = 0; *copied_syn = 0;
} else if (ret && ret != -EINPROGRESS) {
mptcp_disconnect(sk, 0);
} }
return ret; return ret;
@ -2353,7 +2357,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
/* otherwise tcp will dispose of the ssk and subflow ctx */ /* otherwise tcp will dispose of the ssk and subflow ctx */
if (ssk->sk_state == TCP_LISTEN) { if (ssk->sk_state == TCP_LISTEN) {
tcp_set_state(ssk, TCP_CLOSE); tcp_set_state(ssk, TCP_CLOSE);
mptcp_subflow_queue_clean(ssk); mptcp_subflow_queue_clean(sk, ssk);
inet_csk_listen_stop(ssk); inet_csk_listen_stop(ssk);
mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED); mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED);
} }
@ -2989,6 +2993,14 @@ static int mptcp_disconnect(struct sock *sk, int flags)
{ {
struct mptcp_sock *msk = mptcp_sk(sk); struct mptcp_sock *msk = mptcp_sk(sk);
/* We are on the fastopen error path. We can't call straight into the
* subflows cleanup code due to lock nesting (we are already under
* msk->firstsocket lock). Do nothing and leave the cleanup to the
* caller.
*/
if (msk->fastopening)
return 0;
inet_sk_state_store(sk, TCP_CLOSE); inet_sk_state_store(sk, TCP_CLOSE);
mptcp_stop_timer(sk); mptcp_stop_timer(sk);
@ -3532,7 +3544,7 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
/* if reaching here via the fastopen/sendmsg path, the caller already /* if reaching here via the fastopen/sendmsg path, the caller already
* acquired the subflow socket lock, too. * acquired the subflow socket lock, too.
*/ */
if (msk->is_sendmsg) if (msk->fastopening)
err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1); err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1);
else else
err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags); err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags);

View File

@@ -295,7 +295,7 @@ struct mptcp_sock {
 	u8		recvmsg_inq:1,
 			cork:1,
 			nodelay:1,
-			is_sendmsg:1;
+			fastopening:1;
 	int		connect_flags;
 	struct work_struct work;
 	struct sk_buff  *ooo_last_skb;
@@ -628,7 +628,7 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 		     struct mptcp_subflow_context *subflow);
 void __mptcp_subflow_send_ack(struct sock *ssk);
 void mptcp_subflow_reset(struct sock *ssk);
-void mptcp_subflow_queue_clean(struct sock *ssk);
+void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk);
 void mptcp_sock_graft(struct sock *sk, struct socket *parent);
 struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk);
 bool __mptcp_close(struct sock *sk, long timeout);

View File

@ -1791,7 +1791,7 @@ static void subflow_state_change(struct sock *sk)
} }
} }
void mptcp_subflow_queue_clean(struct sock *listener_ssk) void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_ssk)
{ {
struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue; struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue;
struct mptcp_sock *msk, *next, *head = NULL; struct mptcp_sock *msk, *next, *head = NULL;
@ -1840,8 +1840,23 @@ void mptcp_subflow_queue_clean(struct sock *listener_ssk)
do_cancel_work = __mptcp_close(sk, 0); do_cancel_work = __mptcp_close(sk, 0);
release_sock(sk); release_sock(sk);
if (do_cancel_work) if (do_cancel_work) {
/* lockdep will report a false positive ABBA deadlock
* between cancel_work_sync and the listener socket.
* The involved locks belong to different sockets WRT
* the existing AB chain.
* Using a per socket key is problematic as key
* deregistration requires process context and must be
* performed at socket disposal time, in atomic
* context.
* Just tell lockdep to consider the listener socket
* released here.
*/
mutex_release(&listener_sk->sk_lock.dep_map, _RET_IP_);
mptcp_cancel_work(sk); mptcp_cancel_work(sk);
mutex_acquire(&listener_sk->sk_lock.dep_map,
SINGLE_DEPTH_NESTING, 0, _RET_IP_);
}
sock_put(sk); sock_put(sk);
} }

View File

@@ -1698,9 +1698,10 @@ call_ad(struct net *net, struct sock *ctnl, struct sk_buff *skb,
 		ret = set->variant->uadt(set, tb, adt, &lineno, flags, retried);
 		ip_set_unlock(set);
 		retried = true;
-	} while (ret == -EAGAIN &&
-		 set->variant->resize &&
-		 (ret = set->variant->resize(set, retried)) == 0);
+	} while (ret == -ERANGE ||
+		 (ret == -EAGAIN &&
+		  set->variant->resize &&
+		  (ret = set->variant->resize(set, retried)) == 0));

 	if (!ret || (ret == -IPSET_ERR_EXIST && eexist))
 		return 0;

View File

@@ -100,11 +100,11 @@ static int
 hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
 	      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ip4 *h = set->data;
+	struct hash_ip4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ip4_elem e = { 0 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
-	u32 ip = 0, ip_to = 0, hosts;
+	u32 ip = 0, ip_to = 0, hosts, i = 0;
 	int ret = 0;
 
 	if (tb[IPSET_ATTR_LINENO])
@@ -149,14 +149,14 @@ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
 	hosts = h->netmask == 32 ? 1 : 2 << (32 - h->netmask - 1);
 
-	/* 64bit division is not allowed on 32bit */
-	if (((u64)ip_to - ip + 1) >> (32 - h->netmask) > IPSET_MAX_RANGE)
-		return -ERANGE;
-
 	if (retried)
 		ip = ntohl(h->next.ip);
-	for (; ip <= ip_to;) {
+	for (; ip <= ip_to; i++) {
 		e.ip = htonl(ip);
+		if (i > IPSET_MAX_RANGE) {
+			hash_ip4_data_next(&h->next, &e);
+			return -ERANGE;
+		}
 		ret = adtfn(set, &e, &ext, &ext, flags);
 		if (ret && !ip_set_eexist(ret, flags))
 			return ret;
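
Every per-type hunk in this part of the series repeats the shape shown for hash:ip above: count loop iterations, and once IPSET_MAX_RANGE is exceeded record the element to resume from (hash_*_data_next() fills h->next) and return -ERANGE so that call_ad() restarts the operation from that point on the next pass. A standalone sketch of this "bounded batch with resume cursor" idea, pairing with the caller loop sketched after the call_ad hunk, is below; it is illustrative only, and MAX_BATCH, struct cursor and insert_one() are invented names.

	#include <errno.h>
	#include <stdbool.h>

	#define MAX_BATCH 65536

	struct cursor {
		unsigned int next;	/* element to resume from on retry */
	};

	static int insert_one(unsigned int v)
	{
		(void)v;		/* stand-in for the real set insertion */
		return 0;
	}

	static int add_range(struct cursor *c, unsigned int from, unsigned int to,
			     bool retried)
	{
		unsigned int v = retried ? c->next : from;
		unsigned int i = 0;
		int ret;

		for (; v <= to; v++, i++) {
			if (i > MAX_BATCH) {
				c->next = v;	/* like hash_*_data_next() */
				return -ERANGE;	/* caller will invoke us again */
			}
			ret = insert_one(v);
			if (ret)
				return ret;
		}
		return 0;
	}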


@@ -97,11 +97,11 @@ static int
 hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipmark4 *h = set->data;
+	struct hash_ipmark4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipmark4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
-	u32 ip, ip_to = 0;
+	u32 ip, ip_to = 0, i = 0;
 	int ret;
 
 	if (tb[IPSET_ATTR_LINENO])
@@ -148,13 +148,14 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
 		ip_set_mask_from_to(ip, ip_to, cidr);
 	}
 
-	if (((u64)ip_to - ip + 1) > IPSET_MAX_RANGE)
-		return -ERANGE;
-
 	if (retried)
 		ip = ntohl(h->next.ip);
-	for (; ip <= ip_to; ip++) {
+	for (; ip <= ip_to; ip++, i++) {
 		e.ip = htonl(ip);
+		if (i > IPSET_MAX_RANGE) {
+			hash_ipmark4_data_next(&h->next, &e);
+			return -ERANGE;
+		}
 		ret = adtfn(set, &e, &ext, &ext, flags);
 		if (ret && !ip_set_eexist(ret, flags))


@@ -112,11 +112,11 @@ static int
 hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipport4 *h = set->data;
+	struct hash_ipport4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipport4_elem e = { .ip = 0 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
-	u32 ip, ip_to = 0, p = 0, port, port_to;
+	u32 ip, ip_to = 0, p = 0, port, port_to, i = 0;
 	bool with_ports = false;
 	int ret;
@@ -184,17 +184,18 @@ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
 		swap(port, port_to);
 	}
 
-	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
-		return -ERANGE;
-
 	if (retried)
 		ip = ntohl(h->next.ip);
 	for (; ip <= ip_to; ip++) {
 		p = retried && ip == ntohl(h->next.ip) ? ntohs(h->next.port)
 						       : port;
-		for (; p <= port_to; p++) {
+		for (; p <= port_to; p++, i++) {
 			e.ip = htonl(ip);
 			e.port = htons(p);
+			if (i > IPSET_MAX_RANGE) {
+				hash_ipport4_data_next(&h->next, &e);
+				return -ERANGE;
+			}
 			ret = adtfn(set, &e, &ext, &ext, flags);
 			if (ret && !ip_set_eexist(ret, flags))


@@ -108,11 +108,11 @@ static int
 hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
 		    enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipportip4 *h = set->data;
+	struct hash_ipportip4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportip4_elem e = { .ip = 0 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
-	u32 ip, ip_to = 0, p = 0, port, port_to;
+	u32 ip, ip_to = 0, p = 0, port, port_to, i = 0;
 	bool with_ports = false;
 	int ret;
@@ -180,17 +180,18 @@ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
 		swap(port, port_to);
 	}
 
-	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
-		return -ERANGE;
-
 	if (retried)
 		ip = ntohl(h->next.ip);
 	for (; ip <= ip_to; ip++) {
 		p = retried && ip == ntohl(h->next.ip) ? ntohs(h->next.port)
 						       : port;
-		for (; p <= port_to; p++) {
+		for (; p <= port_to; p++, i++) {
 			e.ip = htonl(ip);
 			e.port = htons(p);
+			if (i > IPSET_MAX_RANGE) {
+				hash_ipportip4_data_next(&h->next, &e);
+				return -ERANGE;
+			}
 			ret = adtfn(set, &e, &ext, &ext, flags);
 			if (ret && !ip_set_eexist(ret, flags))


@@ -160,12 +160,12 @@ static int
 hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
 		     enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipportnet4 *h = set->data;
+	struct hash_ipportnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportnet4_elem e = { .cidr = HOST_MASK - 1 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
 	u32 ip = 0, ip_to = 0, p = 0, port, port_to;
-	u32 ip2_from = 0, ip2_to = 0, ip2;
+	u32 ip2_from = 0, ip2_to = 0, ip2, i = 0;
 	bool with_ports = false;
 	u8 cidr;
 	int ret;
@@ -253,9 +253,6 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
 		swap(port, port_to);
 	}
 
-	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
-		return -ERANGE;
-
 	ip2_to = ip2_from;
 	if (tb[IPSET_ATTR_IP2_TO]) {
 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP2_TO], &ip2_to);
@@ -282,9 +279,15 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
 		for (; p <= port_to; p++) {
 			e.port = htons(p);
 			do {
+				i++;
 				e.ip2 = htonl(ip2);
 				ip2 = ip_set_range_to_cidr(ip2, ip2_to, &cidr);
 				e.cidr = cidr - 1;
+				if (i > IPSET_MAX_RANGE) {
+					hash_ipportnet4_data_next(&h->next,
+								  &e);
+					return -ERANGE;
+				}
 				ret = adtfn(set, &e, &ext, &ext, flags);
 				if (ret && !ip_set_eexist(ret, flags))
