stmmac: align RX buffers
On RX, an SKB is allocated and the received buffer is copied into it. But on some architectures memcpy() needs the source and destination buffers to share the same alignment to be efficient, which is not the case here: the SKB data pointer is offset by two bytes (NET_IP_ALIGN) to compensate for the 14-byte Ethernet header, so that the IP header lands on a 4-byte boundary.

Align the RX buffer the same way as the SKB one, so the copy is faster.

An iperf3 RX test gives a decent improvement on a RISC-V machine:

before:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   733 MBytes   615 Mbits/sec   88    sender
[  5]   0.00-10.01  sec   730 MBytes   612 Mbits/sec         receiver

after:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec    0    sender
[  5]   0.00-10.00  sec  1.09 GBytes   940 Mbits/sec         receiver

And the memcpy() overhead during RX drops dramatically:

before:
  Overhead  Shared O  Symbol
    43.35%  [kernel]  [k] memcpy
    33.77%  [kernel]  [k] __asm_copy_to_user
     3.64%  [kernel]  [k] sifive_l2_flush64_range

after:
  Overhead  Shared O  Symbol
    45.40%  [kernel]  [k] __asm_copy_to_user
    28.09%  [kernel]  [k] memcpy
     4.27%  [kernel]  [k] sifive_l2_flush64_range

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
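To make the alignment argument concrete, here is a small self-contained C sketch (an illustration only, not driver code; the buffer names and the 2-byte NET_IP_ALIGN value are assumptions based on the usual default) showing that bumping the receive buffer by the same NET_IP_ALIGN offset as the SKB data pointer keeps both memcpy() pointers congruent modulo the word size:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NET_IP_ALIGN 2 /* usual default: puts the IP header on a 4-byte boundary */

int main(void)
{
	/* 64-byte aligned backing stores standing in for the DMA buffer and the SKB data area */
	static _Alignas(64) uint8_t rx_buf[2048];
	static _Alignas(64) uint8_t skb_data[2048];

	uint8_t *dst = skb_data + NET_IP_ALIGN;   /* SKB data is bumped by NET_IP_ALIGN */
	uint8_t *src_old = rx_buf;                /* before: receive buffer left at offset 0 */
	uint8_t *src_new = rx_buf + NET_IP_ALIGN; /* after: receive buffer bumped the same way */

	printf("dst%%8=%lu src_old%%8=%lu src_new%%8=%lu\n",
	       (unsigned long)((uintptr_t)dst % 8),
	       (unsigned long)((uintptr_t)src_old % 8),
	       (unsigned long)((uintptr_t)src_new % 8));

	/* With matching offsets, memcpy() can use aligned word accesses throughout. */
	memcpy(dst, src_new, 1500);
	return 0;
}

With offset 0 the source is word-aligned while the destination is two bytes off, so a simple copy loop has to fall back to byte accesses on architectures without fast unaligned loads; with matching offsets both sides can be copied word by word after a short prologue.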
commit a955318fe6
parent 89212e160b
@@ -339,9 +339,9 @@ static inline bool stmmac_xdp_is_enabled(struct stmmac_priv *priv)
 static inline unsigned int stmmac_rx_offset(struct stmmac_priv *priv)
 {
 	if (stmmac_xdp_is_enabled(priv))
-		return XDP_PACKET_HEADROOM;
+		return XDP_PACKET_HEADROOM + NET_IP_ALIGN;
 
-	return 0;
+	return NET_SKB_PAD + NET_IP_ALIGN;
 }
 
 void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue);
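For context, a simplified sketch of how such an offset is typically consumed on the copy-to-SKB RX path (an illustrative fragment, not the exact stmmac code; the ch, buf, priv and len identifiers are placeholders):

/* Illustrative fragment (not the exact stmmac RX path): napi_alloc_skb()
 * already reserves NET_SKB_PAD + NET_IP_ALIGN of headroom, so advancing
 * the source pointer into the DMA page by stmmac_rx_offset() gives
 * memcpy() a source and destination with matching alignment.
 */
skb = napi_alloc_skb(&ch->rx_napi, len);
if (unlikely(!skb))
	goto drop;	/* placeholder error path */

skb_copy_to_linear_data(skb, page_address(buf->page) + stmmac_rx_offset(priv), len);
skb_put(skb, len);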