ipv6_add_addr_hash() can compute the hash value outside of the
locked section and pass it to ipv6_chk_same_addr().
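A minimal sketch of the resulting flow (helper and lock names follow
the existing addrconf code; treat the exact signatures as assumptions):

	/* compute the hash value before taking the lock */
	unsigned int hash = inet6_addr_hash(net, &ifa->addr);

	spin_lock(&addrconf_hash_lock);
	err = ipv6_chk_same_addr(net, &ifa->addr, dev, hash) ? -EEXIST : 0;
	spin_unlock(&addrconf_hash_lock);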
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6_chk_same_addr() is only used by ipv6_add_addr_hash(),
so moving it avoids a forward declaration.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski says:
====================
nfp: bpf: stack support in offload
This series brings stack support for offload.
We use the LMEM (Local memory) register file as memory to store
the stack. Since this is a register file, we need to do appropriate
shifts on unaligned accesses. The verifier's state tracking helps us
with that.
LMEM can't be accessed directly, so we add support for setting
pointer registers through which one can read/write LMEM.
This set does not support accessing the stack when the alignment
is not known. This can be added later (most likely using the byte_align
instructions). There are also a number of optimizations which have been
left out:
- in more complex non-aligned accesses, a double shift and rotation
  can save us a cycle. This, however, leads to code explosion,
  since all access sizes have to be coded separately;
- since setting LM pointers costs around 5 cycles, we should be
  tracking their values to make sure we don't move them when
  they're already set correctly for an earlier access;
- in the case of an 8 byte access aligned to 4 bytes and crossing
  a 32 byte boundary but not a 64 byte boundary, we don't
  have to increment the pointer, but this seems like too rare
  a case to justify the added complexity.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Loading 64bit constants requires up to 4 load immediates, since
we can only load 16 bits at a time. If the 32bit halves of
the 64bit constant are the same, however, we can save a cycle
by doing a register move instead of two loads of 16 bits.
Note that we don't optimize the normal ALU64 load because even
though it's a 64 bit load, the upper half of the register comes
from sign extension, so we can load it in one cycle anyway.
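A sketch of the idea (helper names illustrative, not the actual JIT
API):

	u32 lo = (u32)imm64;
	u32 hi = imm64 >> 32;

	load_imm32(dst_lo, lo);			/* up to 2 x 16 bit loads */
	if (hi == lo)
		reg_move(dst_hi, dst_lo);	/* 1 cycle */
	else
		load_imm32(dst_hi, hi);		/* up to 2 more loads */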
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the stack pointer has a different value on different paths
but the alignment to words (4B) remains the same, we can
set a new LMEM access pointer to the calculated value and
access whichever word it's pointing to.
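The compatibility check this relies on is simply (a sketch):

	/* two stack pointer values seen on different paths can share
	 * one LMEM access pointer iff their word alignment matches
	 */
	bool compatible = (off_path1 & 3) == (off_path2 & 3);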
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To access beyond the 64th byte of the stack we need to set a new
stack pointer register (LMEM is accessed indirectly through
those pointers). Add a function for encoding the local CSR access
instruction. Use stack pointer number 3.
Note that stack pointer registers allow us to index into 32
bytes of LMEM (with shift operations, i.e. when operands are
restricted). This means that if an access crosses a 32 byte boundary
we must not use offsetting; we have to set the pointer to the
exact address and move it with post-increments.
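A sketch of the resulting decision (helper names illustrative):

	/* offset addressing only reaches within one 32 byte window */
	bool crosses = (off & ~0x1fU) != ((off + size - 1) & ~0x1fU);

	if (crosses)
		set_lm_pointer(3, off);		/* exact address + post-increments */
	else
		set_lm_pointer(3, off & ~0x1fU);	/* window base + offsetting */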
We depend on the datapath placing the stack base address in
GPR A22 for our use.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As long as the verifier tells us the stack offset exactly we
can render the LMEM reads quite easily. Simply make sure that
the offset is constant for a given instruction and add it to
the instruction's offset.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When we are performing unaligned stack accesses in the 32-64B window
we have to do a read-modify-write cycle. E.g. for reading 8 bytes
from address 17:
0: tmp = stack[16]
1: gprLo = tmp >> 8
2: tmp = stack[20]
3: gprLo |= tmp << 24
4: tmp = stack[20]
5: gprHi = tmp >> 8
6: tmp = stack[24]
7: gprHi |= tmp << 24
The load on line 4 is unnecessary, because tmp already contains data
from stack[20].
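With that load eliminated, the same read becomes:
0: tmp = stack[16]
1: gprLo = tmp >> 8
2: tmp = stack[20]
3: gprLo |= tmp << 24
4: gprHi = tmp >> 8
5: tmp = stack[24]
6: gprHi |= tmp << 24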
For write we can optimize both loads and writebacks away.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add simple stack read support, similar to write in every aspect
but with data flowing the other way. Note that unlike writes, which
can be done in smaller-than-word quantities, if registers are loaded
with less than a word of stack contents, the values have to be
zero extended.
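For example, a 2 byte read at stack offset 6 would look roughly like
this (same notation as the read-modify-write example above):

0: tmp = stack[4]
1: gprLo = (tmp >> 16) & 0xffff
2: gprHi = 0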
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stack is implemented by the LMEM register file. Unaligned accesses
to LMEM are not allowed. Accesses also have to be 4B wide.
To support the stack we need to make sure offsets of pointers are
known at translation time (for now) and perform the correct
load/mask/shift operations.
Since we can access the first 64B of LMEM without much effort,
support only stacks no bigger than 64B. Following commits will
extend the possible sizes beyond that.
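A sketch of the translation-time check this implies (field and helper
names follow the kernel verifier; treat the exact checks as
assumptions):

	/* the pointer's stack offset must be a known constant */
	if (!tnum_is_const(reg->var_off))
		return -EOPNOTSUPP;
	/* only the first 64B of LMEM are reachable without extra setup */
	if (prog->aux->stack_depth > 64)
		return -EOPNOTSUPP;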
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nfp_bpf_check_ptr() mostly looks at the pointer register.
Add a temporary variable to shorten the code.
While at it, make sure we print error messages if translation
fails, to help users identify the problem (to be carried in
ext_ack in due course).
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The need to emit a few nops will become more common soon,
as we add stack and map support. Add a helper. This allows
the code to be shorter, but may also be handy for marking the
nops with a "reason" to ease applying optimizations.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski says:
====================
tools: bpftool: Add JSON output to bpftool
Quentin says:
This series introduces support for JSON output to all bpftool commands. It
adds option parsing, and several options are created:
* -j, --json Switch to JSON output.
* -p, --pretty Switch to JSON and print it in a human-friendly fashion.
* -h, --help Print generic help message.
* -V, --version Print version number.
This code uses a "json_writer", which is a copy of the one written by
Stephen Hemminger in iproute2.
---
I don't know if there is an easy way to share the code for json_writer
without copying the file, so I am very open to suggestions on this matter.
====================
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the documentation to provide help about JSON output generation,
and add an example in bpftool-prog manual page.
Also reintroduce an example that was left aside when the tool was moved
from GitHub to the kernel sources, in order to show how to mount the
bpffs file system (to pin programs) inside the bpftool-prog manual page.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make the look-and-feel of the manual pages somewhat closer to other
manual pages, such as the ones from the utilities from iproute2, by
highlighting more keywords.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
As all commands can now return JSON output (possibly just a "null"
value), output of `bpftool --json batch file FILE` should also be fully
JSON compliant.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Turn err() and info() macros into functions.
In order to avoid naming conflicts with variables in the code, rename
them as p_err() and p_info() respectively.
The behavior of these functions is similar to that of the macros for
plain output. However, when JSON output is requested, these functions
print a JSON-formatted "error" object instead of a message to stderr.
To handle error messages correctly with JSON, one modification was
nonetheless brought to their behavior: the functions now append an
end-of-line character at the end of the message. This way, we can
remove end-of-line characters from the end of the argument strings,
and not have them in the JSON output.
All error messages are formatted to hold in a single call to p_err(), in
order to produce a single JSON field.
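A sketch of what p_err() can look like after the change (simplified;
json_output and json_wtr are assumed to be the tool's globals, and the
message is formatted into a buffer to stay within the plain
json_writer API):

	#include <stdarg.h>
	#include <stdio.h>

	static void p_err(const char *fmt, ...)
	{
		char buf[256];
		va_list ap;

		va_start(ap, fmt);
		if (json_output) {
			vsnprintf(buf, sizeof(buf), fmt, ap);
			jsonw_start_object(json_wtr);
			jsonw_string_field(json_wtr, "error", buf);
			jsonw_end_object(json_wtr);
		} else {
			fprintf(stderr, "Error: ");
			vfprintf(stderr, fmt, ap);
			fprintf(stderr, "\n");	/* newline appended here */
		}
		va_end(ap);
	}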
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
`bpftool batch file FILE` takes FILE as an argument and executes all the
bpftool commands it finds inside (or stops if an error occurs).
To obtain a consistent JSON output, create a root JSON array, then for
each command create a new object containing two fields: one with the
command arguments, the other with the output (which is the JSON object
that the command would have produced, if called on its own).
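With --json --pretty, a batch file with two commands might then produce
output shaped like this (field names and values illustrative):

	[{
	    "command": "map show",
	    "output": [{
	        "id": 4,
	        "type": "hash"
	    }]
	},{
	    "command": "prog show",
	    "output": null
	}]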
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reuse the json_writer API introduced in an earlier commit to make
bpftool able to generate JSON output on
`bpftool map { show | dump | lookup | getnext }` commands. Remaining
commands produce no output.
Some functions have been split into plain-output and JSON versions
in order to remain readable.
Outputs for sample maps have been successfully tested against a JSON
validator.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reuse the json_writer API introduced in an earlier commit to make
bpftool able to generate JSON output on `bpftool prog dump xlated *`
commands.
A new printing function is created to be passed as an argument to the
disassembler.
Similarly to plain output, opcodes are printed on request.
Outputs from sample programs have been successfully tested against a
JSON validator.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reuse the json_writer API introduced in an earlier commit to make
bpftool able to generate JSON output on `bpftool prog show *` commands.
For readability, the code from show_prog() has been split into two
functions, one for plain output, one for JSON.
Outputs from sample programs have been successfully tested against a
JSON validator.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
These two options can be used to ask for JSON output (-j or --json),
and to make this JSON human-readable (-p or --pretty).
A json_writer object is created when JSON is required, and will be used
in follow-up commits to produce JSON output.
Note that --pretty implies --json.
Update for the manual pages and interactive help messages comes in a
later patch of the series.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an option parsing facility to bpftool, in anticipation of future
options for requesting JSON output. Currently, two options are added:
--help and --version, which act the same as the respective commands
`help` and `version`.
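A sketch of what such parsing can look like with getopt_long
(do_help() and do_version() are assumed to be the handlers already
used for the `help` and `version` commands, and usage() is assumed to
exit):

	#include <getopt.h>

	static const struct option options[] = {
		{ "help",	no_argument,	NULL,	'h' },
		{ "version",	no_argument,	NULL,	'V' },
		{ 0 }
	};
	int opt;

	while ((opt = getopt_long(argc, argv, "Vh", options, NULL)) >= 0) {
		switch (opt) {
		case 'V':
			return do_version(argc - optind, argv + optind);
		case 'h':
			return do_help(argc - optind, argv + optind);
		default:
			usage();
		}
	}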
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
In anticipation of the following commits, which add JSON output to the
tool, two files are copied from the iproute2 repository (taken at
commit 268a9eee985f): lib/json_writer.c and include/json_writer.h.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Song Liu says:
====================
net: add a set of tracepoints to tcp stack
Changes from v1:
Fix build error (with ipv6 as ko) by adding EXPORT_TRACEPOINT_SYMBOL_GPL
for trace_tcp_send_reset.
These patches add the following tracepoints to tcp stack.
tcp_send_reset
tcp_receive_reset
tcp_destroy_sock
tcp_set_state
These tracepoints can be used to track TCP state changes. Such state
changes include but are not limited to: connection establishment,
connection termination, tx and rx of RST, and various retransmits.
Currently, we use the following kprobes to trace these events:
int kprobe__tcp_validate_incoming
int kprobe__tcp_send_active_reset
int kprobe__tcp_v4_send_reset
int kprobe__tcp_v6_send_reset
int kprobe__tcp_v4_destroy_sock
int kprobe__tcp_set_state
int kprobe__tcp_retransmit_skb
These tracepoints will help us simplify this work.
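Once merged, the tracepoints can be consumed with standard tooling
instead of kprobes, e.g. (illustrative):

	# perf record -e tcp:tcp_set_state -e tcp:tcp_send_reset -a -- sleep 10
	# perf script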
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds tracepoint trace_tcp_set_state. Besides the usual
fields (s/d ports, IP addresses), the old and new states of the socket
are also printed with TP_printk, using __print_symbolic().
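A sketch of the state formatting (the symbol table name
tcp_state_names is illustrative):

	TP_printk("sport=%hu dport=%hu oldstate=%s newstate=%s",
		  __entry->sport, __entry->dport,
		  __print_symbolic(__entry->oldstate, tcp_state_names),
		  __print_symbolic(__entry->newstate, tcp_state_names))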
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds trace event trace_tcp_destroy_sock.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A new tracepoint, trace_tcp_receive_reset, is added and called from
tcp_reset(). This tracepoint is defined with a new class, tcp_event_sk.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A new tracepoint, trace_tcp_send_reset, is added and called from
tcp_v4_send_reset(), tcp_v6_send_reset() and tcp_send_active_reset().
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some functions that we plan to add tracepoints to require a const sk
and/or skb. So we mark these fields as const in the tracepoint.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce event class tcp_event_sk_skb for tcp tracepoints that
have arguments sk and skb.
Existing tracepoint trace_tcp_retransmit_skb() falls into this class.
This patch rewrites the definition of trace_tcp_retransmit_skb() with
tcp_event_sk_skb.
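With the event class in place, the rewritten definition reduces to a
DEFINE_EVENT (a sketch matching the description above):

	DEFINE_EVENT(tcp_event_sk_skb, tcp_retransmit_skb,

		TP_PROTO(const struct sock *sk, const struct sk_buff *skb),

		TP_ARGS(sk, skb)
	);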
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lipeng says:
====================
net: hns3: bug fixes & code improvements
This patchset introduces various HNS3 bug fixes, optimizations and code
improvements.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The return value of hns3_clean_tx_ring indicates the tx ring clean
result. Returning true means the clean is complete and no more packets
need cleaning. Returning false means packets still need cleaning and
napi needs to poll again. The last return of hns3_clean_tx_ring is
"return !!budget", as budget decreases whenever a buffer is cleaned.
If there is no valid BD in the TX ring, hns3_clean_tx_ring returns 0,
which causes napi to poll again and never complete the napi poll. This
patch fixes the bug.
Fixes: 76ad4f0 (net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC)
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
HW will use the packet length to write packets to the buffer or read
packets from the buffer. There is a redundant memset when allocating
a buffer; the memset serves no purpose and only adds processing time.
This patch removes it.
Fixes: 76ad4f0 (net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC)
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The interface hns3_ring_get_cfg only updates the TX ring queue_index,
but does not update the RX ring queue_index. This patch fixes it.
Fixes: 76ad4f0 (net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC)
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch gets the VF count with the standard function
pci_sriov_get_totalvfs, instead of from info in NIC HW.
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Patch 07d2995 ("net: hns3: add support for ETHTOOL_GRXFH") adds
ae_algo->ops->get_rss_tuple to hns3_get_rxnfc, and patch 5668abd
("net: hns3: add support for set_ringparam") deletes
ae_algo->ops->get_tc_size from hns3_get_rxnfc. This patch fixes the
ops check in hns3_get_rxnfc.
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If a buffer has been received to the stack, the driver will allocate a
new buffer, map it to the device and replace the old buffer. When the
map fails, only the newly allocated buffer should be freed, not all
the buffers in the ring.
Fixes: 76ad4f0 (net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC)
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When allocating a new buffer for HW, the old buffer should be unmapped
first. The old code mapped the old buffer but never unmapped it;
this patch fixes that.
Fixes: 76ad4f0 (net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC)
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'batadv-next-for-davem-20171023' of git://git.open-mesh.org/linux-merge
Simon Wunderlich says:
====================
This documentation/cleanup patchset includes the following patches:
- Fix parameter kerneldoc which caused kerneldoc warnings, by Sven Eckelmann
- Remove spurious warnings in B.A.T.M.A.N. V neighbor comparison,
by Sven Eckelmann
- Use inline kernel-doc style for UAPI constants, by Sven Eckelmann
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The enums of constants for netlink tend to become rather large over
time. Documenting them is easier when the kernel-doc is actually next
to the constant and not in a separate block above the enum.
Also, inline kernel-doc allows multi-paragraph descriptions. This could
be required to better document the netlink command types and the
expected return values.
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Use BUG_ON instead of an if condition followed by BUG in do_setlink.
This issue was detected with the help of Coccinelle.
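The transformation has this shape (condition hypothetical):

	if (cond)
		BUG();

becomes:

	BUG_ON(cond);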
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Because SYSTEMPORT is a (semi) normal network device, the stack may
attempt to queue packets on it outside of the DSA slave transmit path.
When that happens, the DSA layer has not had a chance to tag packets
with the appropriate per-port and per-queue information, and if we
don't have a port 0 queue 0 available (e.g. on boards where it does
not exist), we will hit a NULL pointer dereference in
bcm_sysport_select_queue().
Guard against such cases by testing for TX ring validity.
Fixes: 84ff33eeb23d ("net: systemport: Establish DSA network device queue mapping")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko says:
====================
mlxsw: Add support for non-equal-cost multi-path
Ido says:
In the device, nexthops are stored as adjacency entries in an array
called the KVD linear (KVDL). When a multi-path route is hit, the
packet's headers are hashed and then converted to an index into the
KVDL based on the adjacency group's size and base index.
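In other words, nexthop selection boils down to (a sketch, names
illustrative):

	/* pick an adjacency entry (nexthop) for a hashed packet */
	adj_index = adj_group_base + (packet_hash % adj_group_size);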
Up until now the driver ignored the `weight` parameter for multi-path
routes and allocated only one adjacency entry for each nexthop with a
limit of 32 nexthops in a group. This set makes the driver take the
`weight` parameter into account when allocating adjacency entries.
First patch teaches dpipe to show the size of the adjacency group, so
that users will be able to determine the actual weight of each nexthop.
The second patch refactors the KVDL allocator, making it more receptive
towards the addition of another partition later in the set.
Patches 3-5 introduce small changes towards the actual change in the
sixth patch that populates the adjacency entries according to their
relative weight.
Last two patches finally add another partition to the KVDL, which allows
us to allocate more than 32 entries per-group and thus support more
nexthops and also provide higher accuracy with regards to the requested
weights.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The KVD linear is currently partitioned into two partitions. One for
single entries and another for groups of 32 entries.
Add another partition consisting of groups of 512 entries which will
allow us to more accurately represent the nexthop weights in non-equal
cost multi-path routing.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The memory region where adjacency entries (nexthops) are stored is
called the KVD linear and is configured during initialization with a
size of 64K.
Extend this area with 32K more entries, which will be partitioned into
64 groups of 0.5K (512) entries, thereby allowing us to support
weighted nexthops with high accuracy.
Change the ratio between both types of hash entries, so as to prevent
reduction in the number of double hash entries, which are used for IPv6
neighbours and routes with a prefix length greater than 64.
Note that the user will be able to control all these sizes once the
devlink resource manager is introduced.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Up until now the driver assumed all the nexthops have an equal weight
and wrote each to a single adjacency entry.
This patch takes the `weight` parameter into account and populates the
adjacency group according to the relative weight of each nexthop.
Specifically, the weights of all the nexthops that should be offloaded
are first normalized and then used to calculate the upper adjacency
index of each nexthop. This is done according to the hash-threshold
algorithm used by the kernel for IPv4 multi-path routing.
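A simplified sketch of that population (names illustrative):

	u32 total = 0, lower = 0, upper, i, j;

	for (i = 0; i < nh_count; i++)
		total += nh[i].weight;

	for (i = 0; i < nh_count; i++) {
		upper = lower + nh[i].weight * group_size / total;
		/* adjacency entries [lower, upper) all map to nexthop i */
		for (j = lower; j < upper; j++)
			write_adj_entry(adj_group_base + j, &nh[i]);
		lower = upper;
	}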
Adjacency groups are currently limited to 32 entries which limits the
weights that can be used, but follow-up patches will introduce groups of
512 entries.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The device has certain restrictions regarding the size of an adjacency
group.
Have the router determine the size of the adjacency group according to
available KVDL allocation sizes and these restrictions.
This was not needed until now, since only allocations of up to 32
entries were supported, and these are all valid sizes for an adjacency
group.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As the first step towards non-equal-cost multi-path support, store each
nexthop's weight.
For IPv6 nexthops, always set the weight to 1, as IPv6 only supports
ECMP.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>