As we don't specify the MTU in the driver, the framework falls back
to 1500 bytes, which doesn't work well when we try to attach a DSA
switch:
eth1: mtu greater than device maximum
ixp4xx_eth c800a000.ethernet eth1: error -22 setting
MTU to 1504 to include DSA overhead
After locating an out-of-tree patch in OpenWrt, I found suitable code
to set the MTU on the interface, ported it, and updated it. Now the
MTU gets set properly.
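For context, the general pattern (a hedged sketch, not the exact ixp4xx
patch; MAX_MRU stands in for whatever receive limit the hardware really
has) is to advertise the MTU range on the net_device so DSA is allowed
to raise the MTU for its tag overhead:
  /* Sketch only: advertise the range the MAC can handle so the DSA
   * framework may grow the MTU beyond the default 1500 bytes.
   */
  ndev->min_mtu = ETH_MIN_MTU;
  ndev->max_mtu = MAX_MRU;        /* hypothetical hardware receive limit */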
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20231005-ixp4xx-eth-mtu-v4-1-08c66ed0bc69@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge tag 'linux-can-next-for-6.7-20231005' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2023-10-05
The first patch is by Miquel Raynal and fixes a comment in the sja1000
driver.
Vincent Mailhol contributes 2 patches that fix W=1 compiler warnings
in the etas_es58x driver.
Jiapeng Chong's patch removes an unneeded NULL pointer check before
dev_put() in the CAN raw protocol.
A patch by Justin Stitt replaces a strncpy() with strscpy() in the
peak_pci sja1000 driver.
The next 5 patches are by me and fix the can_restart() handler and
replace BUG_ON()s in the CAN dev helpers with proper error handling.
The last 27 patches are also by me and target the at91_can driver.
First a new helper function is introduced, then the at91_can driver is
cleaned up and updated to use the rx-offload helper.
* tag 'linux-can-next-for-6.7-20231005' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next: (37 commits)
can: at91_can: switch to rx-offload implementation
can: at91_can: at91_alloc_can_err_skb() introduce new function
can: at91_can: at91_irq_err_line(): send error counters with state change
can: at91_can: at91_irq_err_line(): make use of can_change_state() and can_bus_off()
can: at91_can: at91_irq_err_line(): take reg_sr into account for bus off
can: at91_can: at91_irq_err_line(): make use of can_state_get_by_berr_counter()
can: at91_can: at91_irq_err(): rename to at91_irq_err_line()
can: at91_can: at91_irq_err_frame(): move next to at91_irq_err()
can: at91_can: at91_irq_err_frame(): call directly from IRQ handler
can: at91_can: at91_poll_err(): increase stats even if no quota left or OOM
can: at91_can: at91_poll_err(): fold in at91_poll_err_frame()
can: at91_can: add CAN transceiver support
can: at91_can: at91_open(): forward request_irq()'s return value in case of an error
can: at91_can: at91_chip_start(): don't disable IRQs twice
can: at91_can: at91_set_bittiming(): demote register output to debug level
can: at91_can: rename struct at91_priv::{tx_next,tx_echo} to {tx_head,tx_tail}
can: at91_can: at91_setup_mailboxes(): update comments
can: at91_can: add more register definitions
can: at91_can: MCR Register: convert to FIELD_PREP()
can: at91_can: MSR Register: convert to FIELD_PREP()
...
====================
Link: https://lore.kernel.org/r/20231005195812.549776-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
This implements the led_hw_* hooks to support hardware blinking LEDs on
the DP83867 phy. The driver supports all LED modes that have a
corresponding TRIGGER_NETDEV_* define. Error and collision do not have
a TRIGGER_NETDEV_* define, so these modes are currently not supported.
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Alexander Stein <alexander.stein@ew.tq-group.com> #TQMa8MxML/MBa8Mx
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct flow_action_entry.
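For illustration, the annotation itself looks like this on a generic
flexible array member (a hedged sketch with an illustrative struct, not
the actual flow_action layout):
  struct example {                /* illustrative, not from the tree */
          int num_entries;        /* runtime element count */
          u32 entries[] __counted_by(num_entries);
  };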
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: netdev@vger.kernel.org
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct packet_fanout.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Anqi Shen <amy.saq@antgroup.com>
Cc: netdev@vger.kernel.org
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Chuck points out that we should use the uapi-header property
when generating the guard. Otherwise we may generate the same
guard as another file in the tree.
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata says:
====================
mlxsw: Control the order of blocks in ACL region
Amit Cohen writes:
For 12 key blocks in the A-TCAM, rules are split into two records, which
constitute two lookups. The two records are linked using a
"large entry key ID".
Due to a Spectrum-4 hardware issue, KVD entries that correspond to key
blocks 0 to 5 of 12 key blocks will be placed in the same KVD pipe if they
only differ in their "large entry key ID", as it is ignored. This results
in a reduced scale: we can insert fewer than 20k filters before getting an error:
$ tc -b flower.batch
RTNETLINK answers: Input/output error
We have an error talking to the kernel
To reduce the probability of this issue, we can place key blocks with
high entropy in blocks 0 to 5. The idea is to place blocks that are often
changed in blocks 0 to 5, for example, key blocks that match on IPv4
addresses or the LSBs of IPv6 addresses. Such placement will reduce the
probability of these blocks being the same.
Mark several blocks with 'high_entropy' flag and place them in blocks 0
to 5. Note that the list of blocks is just a suggestion; I will verify
it with the architects.
Currently, there is one loop that chooses which blocks should be used
for a given list of elements and fills the blocks - when a block is
chosen, it is filled in the region. To be able to control the order of
the blocks, separate the search for blocks from filling them. Several
pre-changes are required.
Patch set overview:
Patch #1 marks several blocks with 'high_entropy' flag.
Patches #2-#4 prepare the code for filling blocks at the end of the search.
Patch #5 changes the loop to just choose the blocks and fill the blocks at
the end.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The previous patches prepared the code to separate choosing blocks from
filling them.
Do not add blocks as part of the loop that chooses them. When all the
required blocks are set in the bitmap 'chosen_blocks_bm', start filling
blocks. Iterate over the bitmap twice - first add only blocks that are
marked with the 'high_entropy' flag. Then fill the rest of the blocks.
The idea is to place key blocks with high entropy in blocks 0 to 5. See
more details in previous patches.
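Conceptually, the fill stage becomes a two-pass walk over the
chosen-blocks bitmap; a hedged sketch with hypothetical helper and
variable names:
  /* Pass 1: high-entropy blocks first, so they land in positions 0..5. */
  for_each_set_bit(i, chosen_blocks_bm, nblocks)
          if (blocks[i].high_entropy)
                  fill_block(region, &blocks[i]);  /* hypothetical helper */
  /* Pass 2: the remaining chosen blocks. */
  for_each_set_bit(i, chosen_blocks_bm, nblocks)
          if (!blocks[i].high_entropy)
                  fill_block(region, &blocks[i]);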
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, mlxsw_afk_picker() chooses which blocks will be used for a
given list of elements, and fills the blocks during the search - when a
key block is found with the most hits, it is added and its elements are
removed from
the count of hits. This should be changed as we want to be able to choose
which blocks will be placed in blocks 0 to 5.
To separate choosing blocks from filling them, several pre-changes
are required. Currently, whether all the elements were found in the
chosen blocks is indicated by the structure 'key_info->elusage'. This
structure is updated when a block is filled as part of
mlxsw_afk_picker_key_info_add(). A following patch will call this
function only after choosing all the blocks. Add a bitmap called
'elusage_chosen' to store which elements were chosen in the chosen blocks.
Change the condition in the loop to check elements that were chosen, not
elements that were already filled in the blocks.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, mlxsw_afk_picker() chooses which blocks will be used for a
given list of elements, and fills the blocks during the search - when a
key block is found with the most hits, it is added and its elements are
removed from
the count of hits. This should be changed as we want to be able to choose
which blocks will be placed in blocks 0 to 5.
To separate choosing blocks from filling them, several pre-changes are
required. During the search, the structure 'mlxsw_afk_picker' is used
per block; it tracks how many elements from the required list appear
in the block. When a block is chosen and filled, this bitmap of elements
is cleared. To be able to fill the blocks at the end, add a bitmap called
'chosen_element' as part of the picker. When a block is chosen, copy the
'element' bitmap to it. Use the new bitmap as part of
mlxsw_afk_picker_key_info_add(). This way, when the blocks are filled at
the end of the search, we use the copied bitmap, which contains the
elements that should be used in the block.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, mlxsw_afk_picker() chooses which blocks will be used for a
given list of elements, and fills the blocks during the search - when a
key block is found with the most hits, it is added and its elements are
removed from
the count of hits. This should be changed as we want to be able to choose
which blocks will be placed in blocks 0 to 5.
To separate choosing blocks from filling them, several pre-changes are
required. The indexes of the chosen blocks should be saved, so that the
relevant blocks can be filled at the end of the search.
Allocate a bitmap for the chosen blocks; when a block is found with the
most hits, set the relevant bit in the bitmap. This bitmap will be used
in a following patch.
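A hedged sketch of that bookkeeping, with hypothetical variable names:
  chosen_blocks_bm = bitmap_zalloc(nblocks, GFP_KERNEL);
  if (!chosen_blocks_bm)
          return -ENOMEM;

  /* Block 'i' won this round of the search: remember it for filling later. */
  __set_bit(i, chosen_blocks_bm);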
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For 12 key blocks in the A-TCAM, rules are split into two records, which
constitute two lookups. The two records are linked using a
"large entry key ID".
Due to a Spectrum-4 hardware issue, KVD entries that correspond to key
blocks 0 to 5 of 12-key-block A-TCAM entries will be placed in the same
KVD pipe if they only differ in their "large entry key ID", as it is
ignored. This results in a reduced scale. To reduce the probability of
this issue, we can place key blocks with high entropy in blocks 0 to 5.
The idea is to place blocks that are changed often in blocks 0 to 5, for
example, key blocks that match on IPv4 addresses or the LSBs of IPv6
addresses. Such placement will reduce the probability of these blocks
being the same.
Mark several blocks with the 'high_entropy' flag, so that later we can
take this flag into account and place them in blocks 0 to 5.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree says:
====================
sfc: conntrack offload for tunnels
This series adds support for offloading TC flower rules which require
both connection tracking and tunnel decapsulation. Depending on the
match keys required, the left-hand-side rule may go in either the
Outer Rule table or the Action Rule table.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
When a foreign LHS rule (TC rule from a tunnel netdev which requests
conntrack lookup) matches on inner headers or enc_key_id, these matches
cannot be performed by the Outer Rule table, as the keys are only
available after the tunnel type has been identified (by the OR lookup)
and the rest of the headers parsed accordingly.
Offload such rules with an Action Rule, using the LOOKUP_CONTROL section
of the AR response to specify the conntrack and/or recirculation actions,
combined with an Outer Rule which performs only the usual Encap Match
duties.
This processing flow, as it requires two AR lookups per packet, is less
performant than OR-CT-AR, so only use it where necessary.
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com>
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There were a few places where no extack error message was set, or the
extack was not forwarded to callees, potentially resulting in a return
of -EOPNOTSUPP with no additional information.
Make sure to populate the error message in these cases. In practice
this does us no good as TC indirect block callbacks don't come with an
extack to fill in; but maybe they will someday, and when debugging it is
possible to provide a fake extack and emit its message to the console.
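The change amounts to emitting a message through the standard extack
helper at each early return; a hedged sketch (the condition and the
message text are illustrative):
  if (unsupported_match) {        /* illustrative condition */
          NL_SET_ERR_MSG_MOD(extack, "Unsupported match for LHS rule");
          return -EOPNOTSUPP;
  }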
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com>
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Normally, if a TC filter on a tunnel netdev does not match on any
encap fields, we decline to offload it, as it cannot meet our
requirement for a <sip,dip,dport> tuple for the encap match.
However, if the rule has a nonzero chain_index, then for a packet to
reach the rule, it must already have matched a LHS rule which will
have included an encap match and determined the tunnel type, so in
that case we can offload the right-hand-side rule.
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com>
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow a tunnel netdevice (such as a vxlan) to offload conntrack lookups,
in much the same way as efx netdevs.
To ensure this rule does not overlap with other tunnel rules on the same
sip,dip,dport tuple, register a pseudo encap match of a new type
(EFX_TC_EM_PSEUDO_OR), which unlike PSEUDO_MASK may only be referenced
once (because an actual Outer Rule in hardware exists, although its
fw_id is not recorded in the encap match entry).
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com>
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct nh_group.
Cc: David Ahern <dsahern@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: netdev@vger.kernel.org
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct nh_notifier_grp_info.
Cc: David Ahern <dsahern@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>
Cc: netdev@vger.kernel.org
Cc: llvm@lists.linux.dev
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct netlink_policy_dump_state.
Additionally, update the stored length of the usage array before accessing
it. This requires remembering the old size for the memset() and later
assignments.
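A hedged sketch of that resize pattern with hypothetical field names
(not the actual netlink_policy_dump_state layout):
  old_n = state->n_alloc;
  new_state = krealloc(state, struct_size(state, usage, new_n), GFP_KERNEL);
  if (!new_state)
          return -ENOMEM;                /* old allocation stays valid */
  state = new_state;
  state->n_alloc = new_n;                /* bump the __counted_by field first */
  memset(&state->usage[old_n], 0,
         flex_array_size(state, usage, new_n - old_n));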
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Johannes Berg <johannes.berg@intel.com>
Cc: netdev@vger.kernel.org
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct nfp_eth_table.
Cc: Simon Horman <simon.horman@corigine.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Yinjun Zhang <yinjun.zhang@corigine.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Yu Xiao <yu.xiao@corigine.com>
Cc: Sixiang Chen <sixiang.chen@corigine.com>
Cc: oss-drivers@corigine.com
Cc: netdev@vger.kernel.org
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Acked-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct nfp_reprs.
Cc: Simon Horman <simon.horman@corigine.com>
Cc: oss-drivers@corigine.com
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Acked-by: Louis Peens <louis.peens@corigine.com>
Link: https://lore.kernel.org/r/20231003231843.work.811-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct disttable.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Link: https://lore.kernel.org/r/20231003231823.work.684-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct
nh_notifier_res_table_info.
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>
Cc: llvm@lists.linux.dev
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20231003231818.work.883-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct nh_res_table.
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20231003231813.work.042-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Russell King says:
====================
Rework tx fault fixups
This series reworks the tx-fault fixup and then improves the Nokia GPON
workaround to also ignore the RX LOS signal as well. We do this by
introducing a mask of hardware pin states that should be ignored,
converting the tx-fault fixup to use that, and then augmenting it for
RX LOS.
====================
Link: https://lore.kernel.org/r/ZRwYJXRizvkhm83M@shell.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Improve the Nokia GPON fixup - we need to ignore not only the hardware
LOS signal but the software implementation as well. Do this by using
the new state_ignore_mask to indicate that we should ignore the
hardware RX_LOS signal, and also clear the LOS bits in the option
field.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Tested-by: Christian Marangi <ansuelsmth@gmail.com>
Link: https://lore.kernel.org/r/E1qnfXh-008UDe-F9@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Re-implement how we ignore the hardware TX_FAULT signal. Rather than
having a separate boolean for this, use a bitmask of the hardware
signals that we wish to ignore. This gives more flexibility in the
future to ignore other signals such as RX_LOS.
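In effect a fixup sets bits in the new mask and the state readout
clears them; a hedged sketch (the readout helper name is hypothetical,
and the SFP_F_* flag name is assumed to match the driver's signal bits):
  /* Quirk/fixup for the affected module: */
  sfp->state_ignore_mask |= SFP_F_TX_FAULT;

  /* Generic readout path: */
  state = sfp_read_hw_state(sfp) & ~sfp->state_ignore_mask;  /* hypothetical reader */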
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Tested-by: Christian Marangi <ansuelsmth@gmail.com>
Link: https://lore.kernel.org/r/E1qnfXc-008UDY-91@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
AR7 is going to be removed from the kernel, so remove its networking
support in the form of the cpmac driver. This allows us to remove the
platform, because this driver includes a platform-specific header.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/all/20230922061530.3121-6-wsa+renesas@sang-engineering.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Marc Kleine-Budde <mkl@pengutronix.de> says:
This series first introduces the can_state_get_by_berr_counter()
helper function. It returns the current TX and RX state depending on
the provided CAN bit error counters. It will be later used by the
at91_can driver.
The remaining patches of this series first clean up the at91_can
driver, clean up the bus- and line error (including bus-off) handling,
and then convert it to use the rx_offload helper. The driver works better
under high system load and the order of received CAN frames is better
maintained.
Due to a hardware limitation the converted driver could trigger a race
condition in the can_restart() CAN bus-off handler. The patch series
[1] fixes the issue.
[1] https://lore.kernel.org/all/20231005-can-dev-fix-can-restart-v2-0-91b5c1fd922c@pengutronix.de
Changes in v2:
- 1/27: can_state_err_to_state(): use symbolic error values instead of
plain numbers (Thanks Vincent)
- 27/27: fix patch description and typos (Thanks Vincent)
- Link to v1: https://lore.kernel.org/all/20231004-at91_can-rx_offload-v1-0-c32bf99097db@pengutronix.de
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-0-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The current at91_can driver uses NAPI to handle RX'ed CAN frames: the
RX IRQ is disabled and a NAPI poll is scheduled. Then, in
at91_poll_rx(), the driver tries to read the RX'ed CAN frames from the
device in order.
This approach has 2 drawbacks:
- Under high system load it might take too long from the initial RX
  IRQ until the NAPI poll function runs. This causes RX buffer
  overflows.
- The algorithm to read the CAN frames in order is not bulletproof
  and may fail under certain use cases/system loads.
The rx-offload helper fixes these problems by reading the RX'ed CAN
frames in the interrupt handler and adding them to a list sorted by RX
timestamp. This list of RX'ed SKBs is then passed to the networking
stack via NAPI.
Convert the RX path to rx-offload and pass all CAN error frames with
can_rx_offload_queue_timestamp().
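Error frames have no hardware timestamp of their own, so they are
queued with an explicit one; a hedged sketch of the call (the
surrounding names are illustrative):
  /* Queue an error frame into the timestamp-sorted rx-offload list. */
  err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
  if (err)
          stats->rx_fifo_errors++;       /* illustrative error accounting */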
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-27-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
This is a preparation patch to convert the driver to make use of the
rx-offload helper. With rx-offload the received CAN frames are sorted
by their timestamp. Regular RX'ed and TX'ed CAN frames are
timestamped by the hardware. Error events are not.
Introduce a new function, at91_alloc_can_err_skb(), that allocates an
error SKB and reads the current timestamp from the controller.
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-26-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Since 3e5c291c7942 ("can: add CAN_ERR_CNT flag to notify availability
of error counter") there is a dedicated flag to inform user space
that there are CAN error counters in the CAN error frame.
In case the device is not in bus-off mode, send the error counters to
user space and set CAN_ERR_CNT.
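A hedged sketch of what that looks like when the error frame is built
(variable names are illustrative):
  if (new_state != CAN_STATE_BUS_OFF) {
          cf->can_id |= CAN_ERR_CNT;
          cf->data[6] = bec.txerr;       /* TX error counter */
          cf->data[7] = bec.rxerr;       /* RX error counter */
  }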
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-25-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The driver implements hand-crafted CAN state handling. Update the
driver to make use of can_change_state(), introduced in ("can: dev:
Consolidate and unify state change handling").
Also switch from hand-crafted CAN bus-off handling to can_bus_off():
in case of a bus-off, abort all pending TX requests, switch off the
device and let can_bus_off() handle the device restart.
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-24-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The at91 CAN controller automatically recovers from bus-off after 128
occurrences of 11 consecutive recessive bits.
After an auto-recovered bus-off, the error counters no longer reflect
this fact. On the sam9263 the state bits in the SR register show the
current state (based on the current error counters), while on sam9x5
and newer SoCs these bits are latched.
Take any latched bus-off information from the SR register into account
when calculating the new CAN state, in order to start the standard CAN
bus-off handling.
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-23-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
On the sam9263 the SR bits for bus off, error passive, warning limit,
and error active are not latched and reflect the current status of the
controller. On the sam9x5 and newer SoCs these bits are latched.
To simplify the code, use can_state_get_by_berr_counter() to get the
state of the controller regardless of the SoC version.
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-22-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
This is a preparation patch to convert the driver to the rx-offload
helper. In rx-offload, the RX, TX-done and CAN error handling are done
in the IRQ handler, and the SKBs are pushed to the network stack in the
NAPI poll function.
Move the CAN frame error handling from the NAPI function at91_poll() to
the IRQ handler. To reflect this change, rename at91_poll_err() to
at91_irq_err_frame().
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-19-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
at91_poll_err() allocates a CAN error SKB to inform user space
about the CAN error. Then it fills the SKB with the error
information and increases the net device error stats.
In case no SKB can be allocated (e.g. due to OOM) or the NAPI quota
is 0, the function returns early and no stats are updated. This is
not helpful to the user, as there is no information about the faulty
CAN bus.
Increase the error stats even if no quota is left or no SKB can be
allocated.
While at it, treat No-Acknowledgment as a bus error, too.
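A hedged sketch of the resulting flow (names are illustrative):
  /* Account the error even when no skb or NAPI quota is available. */
  priv->can.can_stats.bus_error++;
  stats->rx_errors++;

  if (!skb)
          return;                        /* stats already updated above */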
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-18-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
In at91_chip_start(), all IRQs are disabled first; they do not have to
be disabled again at the end of the function before the requested IRQs
are enabled.
Remove the second disabling of all IRQs at the end of the function.
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-14-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Since 6388b3961420 ("can: at91_can: add support for the AT91SAM9X5
SOCs") the number of mailboxes used for RX and TX is no longer
constant, but depends on the IP core used.
Remove the fixed number of mailboxes from the comment.
Link: https://lore.kernel.org/all/20231005-at91_can-rx_offload-v2-11-9987d53600e0@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>