License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX tag:value
files, created by Philippe Ombredanne. Philippe prepared the base worksheet
and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
----------------------------------------------------|-------
GPL-2.0                                                11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
----------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
-----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, Kate, Philippe and Thomas logged over 70 hours of manual review
on the spreadsheet to determine the SPDX license identifiers to apply to
the source files, with confirmation in some cases by lawyers working with
the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
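For reference only (a hedged illustration of the kernel's licensing-rules
convention, not text taken from this patch series), the identifier goes on
the first line of each file and the comment form depends on the file type:

// SPDX-License-Identifier: GPL-2.0                               (.c source file)
/* SPDX-License-Identifier: GPL-2.0 */                            (header file)
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */    (uapi header)
# SPDX-License-Identifier: GPL-2.0                                (Makefile/Kconfig)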
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 17:07:57 +03:00
// SPDX-License-Identifier: GPL-2.0
2008-07-08 14:23:36 +04:00
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_vlan.h>
2009-03-01 11:11:52 +03:00
#include <linux/netpoll.h>
2011-07-15 19:47:34 +04:00
#include <linux/export.h>
2021-03-18 21:42:34 +03:00
#include <net/gro.h>
2008-07-08 14:23:36 +04:00
#include "vlan.h"
vlan: don't deliver frames for unknown vlans to protocols
Commit 6a32e4f9dd9219261f8856f817e6655114cfec2f made the vlan code skip
marking vlan-tagged frames for vlans that are not locally configured as
PACKET_OTHERHOST if there was an rx_handler, as the rx_handler could cause
the frame to be received on a different (virtual) vlan-capable interface
where that vlan might be configured.
As rx_handlers do not necessarily return RX_HANDLER_ANOTHER, this could cause
frames for unknown vlans to be delivered to the protocol stack as if they had
been received untagged.
For example, if an ipv6 router advertisement that's tagged for a locally not
configured vlan is received on an interface with macvlan interfaces attached,
macvlan's rx_handler returns RX_HANDLER_PASS after delivering the frame to the
macvlan interfaces, which caused it to be passed to the protocol stack, leading
to ipv6 addresses for the announced prefix being configured even though those
are completely unusable on the underlying interface.
The fix moves marking as PACKET_OTHERHOST after the rx_handler so the
rx_handler, if there is one, sees the frame unchanged, but afterwards,
before the frame is delivered to the protocol stack, it gets marked whether
there is an rx_handler or not.
Signed-off-by: Florian Zumbiehl <florz@florz.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
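To make that ordering concrete, here is a minimal userspace model of the
receive decision described above. It is only an illustrative sketch under
the assumption that the path reduces to "vlan lookup, then rx_handler, then
otherhost marking"; every name in it is invented for the example and none
of it is the actual net/core/dev.c code.

/*
 * Sketch: an unknown-vlan frame is marked as not-for-us only after the
 * rx_handler (e.g. macvlan returning "pass") has seen it unchanged.
 */
#include <stdbool.h>
#include <stdio.h>

enum pkt_type { PKT_HOST, PKT_OTHERHOST };

struct frame {
        bool vlan_tagged;       /* carries an 802.1Q tag */
        bool vlan_known;        /* that vlan is configured locally */
        enum pkt_type pkt_type;
};

/* stand-in for an rx_handler that returns the equivalent of RX_HANDLER_PASS */
static bool rx_handler_pass(struct frame *f)
{
        (void)f;                /* the handler sees the frame unchanged */
        return false;           /* "pass": keep processing it locally */
}

static void receive(struct frame *f)
{
        if (f->vlan_tagged && f->vlan_known) {
                printf("delivered on the vlan device\n");
                return;
        }

        if (rx_handler_pass(f)) /* the rx_handler runs first ... */
                return;

        /* ... and only afterwards is an unknown vlan marked as not ours */
        if (f->vlan_tagged && !f->vlan_known)
                f->pkt_type = PKT_OTHERHOST;

        printf(f->pkt_type == PKT_OTHERHOST ?
               "not delivered to the protocol stack\n" :
               "delivered to the protocol stack\n");
}

int main(void)
{
        /* e.g. the tagged router advertisement from the commit message */
        struct frame ra = { .vlan_tagged = true, .vlan_known = false,
                            .pkt_type = PKT_HOST };

        receive(&ra);           /* prints "not delivered to the protocol stack" */
        return 0;
}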
2012-10-07 19:51:58 +04:00
bool vlan_do_receive(struct sk_buff **skbp)
2008-07-08 14:23:36 +04:00
{
2010-10-20 17:56:06 +04:00
        struct sk_buff *skb = *skbp;
2013-04-19 06:04:30 +04:00
        __be16 vlan_proto = skb->vlan_proto;
2015-01-13 19:13:44 +03:00
        u16 vlan_id = skb_vlan_tag_get_id(skb);
vlan_dev: VLAN 0 should be treated as "no vlan tag" (802.1p packet)
- Without the 8021q module loaded in the kernel, all 802.1p packets
(VLAN ID 0 but with QoS/priority tagging) are silently discarded (as
expected, as the protocol is not loaded).
- With the 8021q module loaded but without this patch, these packets are
forwarded to the module, but they are also discarded if VLAN 0 is not
configured, which should not be the default behaviour, as VLAN 0 is not
really a VLANed packet but an 802.1p packet. Defining VLAN 0 makes it
almost impossible to communicate with mixed 802.1p and non-802.1p devices
on the same network due to arp table issues.
- Changed the logic to skip vlan-specific code in vlan_skb_recv if the VLAN
ID is 0 and we have not defined a VLAN with ID 0, but we accept the
packet with the encapsulated proto and pass it later to netif_rx.
- In the vlan device event handler, added some logic to add VLAN 0
to the HW filter in devices that support it (this prevented any traffic
in VLAN 0 from reaching the stack in e1000e with HW filtering under 2.6.35,
and probably also with other HW-filtered cards, so we fix it here).
- In the vlan unregister logic, prevent the elimination of VLAN 0
in devices with HW filter.
- The default behaviour is to ignore the VLAN 0 tagging and accept
the packet as if it were not tagged, but we can still define a
VLAN 0 if desired (so it is backwards compatible).
Signed-off-by: Pedro Garcia <pedro.netdev@dondevamos.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
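As background for the VID-0 handling above, here is a hedged sketch of the
802.1Q tag-control-info layout it relies on (3 priority bits, 1 DEI bit and
a 12-bit VLAN ID, so VID 0 means "priority-tagged, no real VLAN"); the
helper and macro names are invented for this illustration:

/* Hypothetical helpers mirroring the 802.1Q TCI layout (PCP:3 DEI:1 VID:12). */
#define EXAMPLE_VLAN_PRIO_SHIFT 13
#define EXAMPLE_VLAN_VID_MASK   0x0fff

static inline unsigned int example_tci_to_vid(unsigned short tci)
{
        return tci & EXAMPLE_VLAN_VID_MASK;     /* 0 => priority-tagged only */
}

static inline unsigned int example_tci_to_prio(unsigned short tci)
{
        return tci >> EXAMPLE_VLAN_PRIO_SHIFT;
}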
2010-07-19 02:38:44 +04:00
        struct net_device *vlan_dev;
2010-11-11 02:42:00 +03:00
        struct vlan_pcpu_stats *rx_stats;
2008-07-08 14:23:36 +04:00
2013-04-19 06:04:30 +04:00
        vlan_dev = vlan_find_dev(skb->dev, vlan_proto, vlan_id);
2012-10-07 19:51:58 +04:00
        if (!vlan_dev)
2010-10-20 17:56:06 +04:00
                return false;
2008-11-05 01:49:57 +03:00
2010-10-20 17:56:06 +04:00
        skb = *skbp = skb_share_check(skb, GFP_ATOMIC);
        if (unlikely(!skb))
                return false;
2017-10-03 23:13:29 +03:00
        if (unlikely(!(vlan_dev->flags & IFF_UP))) {
                kfree_skb(skb);
                *skbp = NULL;
                return false;
        }
2009-01-06 21:50:09 +03:00
2010-10-20 17:56:06 +04:00
        skb->dev = vlan_dev;
2014-03-07 14:45:30 +04:00
        if (unlikely(skb->pkt_type == PACKET_OTHERHOST)) {
2011-06-10 10:56:58 +04:00
                /* Our lower layer thinks this is not local, let's make sure.
                 * This allows the VLAN to have a different MAC than the
                 * underlying device, and still route correctly. */
2014-03-07 14:45:31 +04:00
                if (ether_addr_equal_64bits(eth_hdr(skb)->h_dest, vlan_dev->dev_addr))
2011-06-10 10:56:58 +04:00
                        skb->pkt_type = PACKET_HOST;
        }
2015-11-16 23:43:45 +03:00
        if (!(vlan_dev_priv(vlan_dev)->flags & VLAN_FLAG_REORDER_HDR) &&
            !netif_is_macvlan_port(vlan_dev) &&
            !netif_is_bridge_port(vlan_dev)) {
2011-06-10 10:56:58 +04:00
                unsigned int offset = skb->data - skb_mac_header(skb);

                /*
                 * vlan_insert_tag expect skb->data pointing to mac header.
                 * So change skb->data before calling it and change back to
                 * original position later
                 */
                skb_push(skb, offset);
2018-03-13 08:51:28 +03:00
                skb = *skbp = vlan_insert_inner_tag(skb, skb->vlan_proto,
                                                    skb->vlan_tci, skb->mac_len);
2011-06-10 10:56:58 +04:00
                if (!skb)
                        return false;
                skb_pull(skb, offset + VLAN_HLEN);
                skb_reset_mac_len(skb);
        }
2010-10-20 17:56:06 +04:00
        skb->priority = vlan_get_ingress_priority(vlan_dev, skb->vlan_tci);
2018-11-09 02:18:03 +03:00
        __vlan_hwaccel_clear_tag(skb);
2008-07-08 14:23:36 +04:00
2011-12-08 08:11:15 +04:00
        rx_stats = this_cpu_ptr(vlan_dev_priv(vlan_dev)->vlan_pcpu_stats);
2009-11-17 07:53:09 +03:00
2010-06-24 04:55:06 +04:00
        u64_stats_update_begin(&rx_stats->syncp);
2009-11-17 07:53:09 +03:00
        rx_stats->rx_packets++;
        rx_stats->rx_bytes += skb->len;
2011-06-10 10:56:58 +04:00
        if (skb->pkt_type == PACKET_MULTICAST)
2010-06-24 04:55:06 +04:00
                rx_stats->rx_multicast++;
        u64_stats_update_end(&rx_stats->syncp);
2010-10-20 17:56:06 +04:00
        return true;
2008-07-08 14:23:36 +04:00
}
2008-07-08 14:23:57 +04:00
2013-01-04 02:48:59 +04:00
/* Must be invoked with rcu_read_lock. */
2014-05-09 10:58:05 +04:00
struct net_device *__vlan_find_dev_deep_rcu(struct net_device *dev,
2013-04-19 06:04:29 +04:00
                                            __be16 vlan_proto, u16 vlan_id)
2011-07-20 08:54:05 +04:00
{
2013-01-04 02:48:59 +04:00
        struct vlan_info *vlan_info = rcu_dereference(dev->vlan_info);
2011-07-20 08:54:05 +04:00
2011-12-08 08:11:18 +04:00
        if (vlan_info) {
2013-04-19 06:04:29 +04:00
                return vlan_group_get_device(&vlan_info->grp,
                                             vlan_proto, vlan_id);
2011-07-20 08:54:05 +04:00
        } else {
                /*
2013-01-04 02:48:59 +04:00
                 * Lower devices of master uppers (bonding, team) do not have
                 * grp assigned to themselves. Grp is assigned to upper device
                 * instead.
2011-07-20 08:54:05 +04:00
                 */
2013-01-04 02:48:59 +04:00
                struct net_device *upper_dev;

                upper_dev = netdev_master_upper_dev_get_rcu(dev);
                if (upper_dev)
2014-05-09 10:58:05 +04:00
                        return __vlan_find_dev_deep_rcu(upper_dev,
2013-04-19 06:04:29 +04:00
                                                        vlan_proto, vlan_id);
2011-07-20 08:54:05 +04:00
        }
        return NULL;
}
2014-05-09 10:58:05 +04:00
EXPORT_SYMBOL(__vlan_find_dev_deep_rcu);
2011-07-20 08:54:05 +04:00
2008-07-08 14:23:57 +04:00
struct net_device *vlan_dev_real_dev(const struct net_device *dev)
{
2013-08-04 00:07:46 +04:00
        struct net_device *ret = vlan_dev_priv(dev)->real_dev;

        while (is_vlan_dev(ret))
                ret = vlan_dev_priv(ret)->real_dev;

        return ret;
2008-07-08 14:23:57 +04:00
}
2009-01-26 23:37:53 +03:00
EXPORT_SYMBOL(vlan_dev_real_dev);
2008-07-08 14:23:57 +04:00

u16 vlan_dev_vlan_id(const struct net_device *dev)
{
2011-12-08 08:11:15 +04:00
        return vlan_dev_priv(dev)->vlan_id;
2008-07-08 14:23:57 +04:00
}
2009-01-26 23:37:53 +03:00
EXPORT_SYMBOL(vlan_dev_vlan_id);
2009-01-06 21:50:09 +03:00
2014-03-25 13:44:42 +04:00
__be16 vlan_dev_vlan_proto(const struct net_device *dev)
{
        return vlan_dev_priv(dev)->vlan_proto;
}
EXPORT_SYMBOL(vlan_dev_vlan_proto);
2011-12-08 08:11:18 +04:00
/*
 * vlan info and vid list
 */
static void vlan_group_free(struct vlan_group *grp)
{
2013-04-21 03:34:40 +04:00
        int i, j;
2011-12-08 08:11:18 +04:00
2013-04-21 03:34:40 +04:00
        for (i = 0; i < VLAN_PROTO_NUM; i++)
                for (j = 0; j < VLAN_GROUP_ARRAY_SPLIT_PARTS; j++)
                        kfree(grp->vlan_devices_arrays[i][j]);
2011-12-08 08:11:18 +04:00
}

static void vlan_info_free(struct vlan_info *vlan_info)
{
        vlan_group_free(&vlan_info->grp);
        kfree(vlan_info);
}

static void vlan_info_rcu_free(struct rcu_head *rcu)
{
        vlan_info_free(container_of(rcu, struct vlan_info, rcu));
}

static struct vlan_info *vlan_info_alloc(struct net_device *dev)
{
        struct vlan_info *vlan_info;

        vlan_info = kzalloc(sizeof(struct vlan_info), GFP_KERNEL);
        if (!vlan_info)
                return NULL;
        vlan_info->real_dev = dev;
        INIT_LIST_HEAD(&vlan_info->vid_list);
        return vlan_info;
}

struct vlan_vid_info {
        struct list_head list;
2013-04-19 06:04:28 +04:00
        __be16 proto;
        u16 vid;
2011-12-08 08:11:18 +04:00
        int refcount;
};
2018-03-31 09:11:41 +03:00
static bool vlan_hw_filter_capable(const struct net_device *dev, __be16 proto)
net: vlan: add 802.1ad support
Add support for 802.1ad VLAN devices. This mainly consists of checking for
ETH_P_8021AD in addition to ETH_P_8021Q in a couple of places and checking
offloading capabilities based on the protocol in use.
Configuration is done using "ip link":
# ip link add link eth0 eth0.1000 \
type vlan proto 802.1ad id 1000
# ip link add link eth0.1000 eth0.1000.1000 \
type vlan proto 802.1q id 1000
52:54:00:12:34:56 > 92:b1:54:28:e4:8c, ethertype 802.1Q (0x8100), length 106: vlan 1000, p 0, ethertype 802.1Q, vlan 1000, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
20.1.0.2 > 20.1.0.1: ICMP echo request, id 3003, seq 8, length 64
92:b1:54:28:e4:8c > 52:54:00:12:34:56, ethertype 802.1Q-QinQ (0x88a8), length 106: vlan 1000, p 0, ethertype 802.1Q, vlan 1000, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 47944, offset 0, flags [none], proto ICMP (1), length 84)
20.1.0.1 > 20.1.0.2: ICMP echo reply, id 3003, seq 8, length 64
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-19 06:04:31 +04:00
{
2018-03-28 17:46:54 +03:00
        if (proto == htons(ETH_P_8021Q) &&
            dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
                return true;
2018-03-28 17:46:54 +03:00
        if (proto == htons(ETH_P_8021AD) &&
            dev->features & NETIF_F_HW_VLAN_STAG_FILTER)
                return true;
        return false;
}
2011-12-08 08:11:18 +04:00
static struct vlan_vid_info *vlan_vid_info_get(struct vlan_info *vlan_info,
2013-04-19 06:04:28 +04:00
                                               __be16 proto, u16 vid)
2011-12-08 08:11:18 +04:00
{
        struct vlan_vid_info *vid_info;

        list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
2013-04-19 06:04:28 +04:00
                if (vid_info->proto == proto && vid_info->vid == vid)
2011-12-08 08:11:18 +04:00
                        return vid_info;
        }
        return NULL;
}

2013-04-19 06:04:28 +04:00
static struct vlan_vid_info *vlan_vid_info_alloc(__be16 proto, u16 vid)
2011-12-08 08:11:18 +04:00
{
        struct vlan_vid_info *vid_info;

        vid_info = kzalloc(sizeof(struct vlan_vid_info), GFP_KERNEL);
        if (!vid_info)
                return NULL;
2013-04-19 06:04:28 +04:00
        vid_info->proto = proto;
2011-12-08 08:11:18 +04:00
        vid_info->vid = vid;

        return vid_info;
}
2018-03-28 17:46:54 +03:00
static int vlan_add_rx_filter_info(struct net_device *dev, __be16 proto, u16 vid)
{
        if (!vlan_hw_filter_capable(dev, proto))
                return 0;

        if (netif_device_present(dev))
                return dev->netdev_ops->ndo_vlan_rx_add_vid(dev, proto, vid);
        else
                return -ENODEV;
}

static int vlan_kill_rx_filter_info(struct net_device *dev, __be16 proto, u16 vid)
{
        if (!vlan_hw_filter_capable(dev, proto))
                return 0;

        if (netif_device_present(dev))
                return dev->netdev_ops->ndo_vlan_rx_kill_vid(dev, proto, vid);
        else
                return -ENODEV;
}
2018-11-08 23:27:55 +03:00
int vlan_for_each(struct net_device *dev,
                  int (*action)(struct net_device *dev, int vid, void *arg),
                  void *arg)
{
        struct vlan_vid_info *vid_info;
        struct vlan_info *vlan_info;
        struct net_device *vdev;
        int ret;

        ASSERT_RTNL();

        vlan_info = rtnl_dereference(dev->vlan_info);
        if (!vlan_info)
                return 0;

        list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
                vdev = vlan_group_get_device(&vlan_info->grp, vid_info->proto,
                                             vid_info->vid);
                ret = action(vdev, vid_info->vid, arg);
                if (ret)
                        return ret;
        }

        return 0;
}
EXPORT_SYMBOL(vlan_for_each);
2018-03-28 17:46:54 +03:00
int vlan_filter_push_vids(struct vlan_info *vlan_info, __be16 proto)
{
        struct net_device *real_dev = vlan_info->real_dev;
        struct vlan_vid_info *vlan_vid_info;
        int err;

        list_for_each_entry(vlan_vid_info, &vlan_info->vid_list, list) {
                if (vlan_vid_info->proto == proto) {
                        err = vlan_add_rx_filter_info(real_dev, proto,
                                                      vlan_vid_info->vid);
                        if (err)
                                goto unwind;
                }
        }

        return 0;

unwind:
        list_for_each_entry_continue_reverse(vlan_vid_info,
                                             &vlan_info->vid_list, list) {
                if (vlan_vid_info->proto == proto)
                        vlan_kill_rx_filter_info(real_dev, proto,
                                                 vlan_vid_info->vid);
        }

        return err;
}
EXPORT_SYMBOL(vlan_filter_push_vids);

void vlan_filter_drop_vids(struct vlan_info *vlan_info, __be16 proto)
{
        struct vlan_vid_info *vlan_vid_info;

        list_for_each_entry(vlan_vid_info, &vlan_info->vid_list, list)
                if (vlan_vid_info->proto == proto)
                        vlan_kill_rx_filter_info(vlan_info->real_dev,
                                                 vlan_vid_info->proto,
                                                 vlan_vid_info->vid);
}
EXPORT_SYMBOL(vlan_filter_drop_vids);
2013-04-19 06:04:28 +04:00
static int __vlan_vid_add(struct vlan_info *vlan_info, __be16 proto, u16 vid,
2011-12-08 08:11:18 +04:00
                          struct vlan_vid_info **pvid_info)
2011-12-08 08:11:17 +04:00
{
2011-12-08 08:11:18 +04:00
        struct net_device *dev = vlan_info->real_dev;
        struct vlan_vid_info *vid_info;
        int err;
2013-04-19 06:04:28 +04:00
        vid_info = vlan_vid_info_alloc(proto, vid);
2011-12-08 08:11:18 +04:00
        if (!vid_info)
                return -ENOMEM;
2011-12-08 08:11:17 +04:00
2018-03-28 17:46:54 +03:00
        err = vlan_add_rx_filter_info(dev, proto, vid);
        if (err) {
                kfree(vid_info);
                return err;
2011-12-08 08:11:17 +04:00
        }
2018-03-28 17:46:54 +03:00
2011-12-08 08:11:18 +04:00
        list_add(&vid_info->list, &vlan_info->vid_list);
        vlan_info->nr_vids++;
        *pvid_info = vid_info;
2011-12-08 08:11:17 +04:00
        return 0;
}
2011-12-08 08:11:18 +04:00
2013-04-19 06:04:28 +04:00
int vlan_vid_add(struct net_device *dev, __be16 proto, u16 vid)
2011-12-08 08:11:18 +04:00
{
        struct vlan_info *vlan_info;
        struct vlan_vid_info *vid_info;
        bool vlan_info_created = false;
        int err;

        ASSERT_RTNL();

        vlan_info = rtnl_dereference(dev->vlan_info);
        if (!vlan_info) {
                vlan_info = vlan_info_alloc(dev);
                if (!vlan_info)
                        return -ENOMEM;
                vlan_info_created = true;
        }
2013-04-19 06:04:28 +04:00
        vid_info = vlan_vid_info_get(vlan_info, proto, vid);
2011-12-08 08:11:18 +04:00
        if (!vid_info) {
2013-04-19 06:04:28 +04:00
                err = __vlan_vid_add(vlan_info, proto, vid, &vid_info);
2011-12-08 08:11:18 +04:00
                if (err)
                        goto out_free_vlan_info;
        }
        vid_info->refcount++;

        if (vlan_info_created)
                rcu_assign_pointer(dev->vlan_info, vlan_info);

        return 0;

out_free_vlan_info:
        if (vlan_info_created)
                kfree(vlan_info);
        return err;
}
2011-12-08 08:11:17 +04:00
EXPORT_SYMBOL(vlan_vid_add);
2011-12-08 08:11:18 +04:00
static void __vlan_vid_del(struct vlan_info *vlan_info,
                           struct vlan_vid_info *vid_info)
2011-12-08 08:11:17 +04:00
{
2011-12-08 08:11:18 +04:00
        struct net_device *dev = vlan_info->real_dev;
2013-04-19 06:04:28 +04:00
        __be16 proto = vid_info->proto;
        u16 vid = vid_info->vid;
2011-12-08 08:11:18 +04:00
        int err;
2011-12-08 08:11:17 +04:00
2018-03-28 17:46:54 +03:00
        err = vlan_kill_rx_filter_info(dev, proto, vid);
2020-02-17 15:27:58 +03:00
        if (err && dev->reg_state != NETREG_UNREGISTERING)
                netdev_warn(dev, "failed to kill vid %04x/%d\n", proto, vid);
2018-03-28 17:46:54 +03:00
2011-12-08 08:11:18 +04:00
        list_del(&vid_info->list);
        kfree(vid_info);
        vlan_info->nr_vids--;
}
2013-04-19 06:04:28 +04:00
void vlan_vid_del(struct net_device *dev, __be16 proto, u16 vid)
2011-12-08 08:11:18 +04:00
{
        struct vlan_info *vlan_info;
        struct vlan_vid_info *vid_info;

        ASSERT_RTNL();

        vlan_info = rtnl_dereference(dev->vlan_info);
        if (!vlan_info)
                return;
2013-04-19 06:04:28 +04:00
        vid_info = vlan_vid_info_get(vlan_info, proto, vid);
2011-12-08 08:11:18 +04:00
        if (!vid_info)
                return;
        vid_info->refcount--;
        if (vid_info->refcount == 0) {
                __vlan_vid_del(vlan_info, vid_info);
                if (vlan_info->nr_vids == 0) {
                        RCU_INIT_POINTER(dev->vlan_info, NULL);
                        call_rcu(&vlan_info->rcu, vlan_info_rcu_free);
                }
2011-12-08 08:11:17 +04:00
        }
}
EXPORT_SYMBOL(vlan_vid_del);
2011-12-08 08:11:19 +04:00
int vlan_vids_add_by_dev(struct net_device *dev,
                         const struct net_device *by_dev)
{
        struct vlan_vid_info *vid_info;
2011-12-14 00:29:43 +04:00
        struct vlan_info *vlan_info;
2011-12-08 08:11:19 +04:00
        int err;

        ASSERT_RTNL();
2011-12-14 00:29:43 +04:00
        vlan_info = rtnl_dereference(by_dev->vlan_info);
        if (!vlan_info)
2011-12-08 08:11:19 +04:00
                return 0;
2011-12-14 00:29:43 +04:00
        list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
2013-04-19 06:04:28 +04:00
                err = vlan_vid_add(dev, vid_info->proto, vid_info->vid);
2011-12-08 08:11:19 +04:00
                if (err)
                        goto unwind;
        }
        return 0;

unwind:
        list_for_each_entry_continue_reverse(vid_info,
2011-12-14 00:29:43 +04:00
                                             &vlan_info->vid_list,
2011-12-08 08:11:19 +04:00
                                             list) {
2013-04-19 06:04:28 +04:00
                vlan_vid_del(dev, vid_info->proto, vid_info->vid);
2011-12-08 08:11:19 +04:00
        }
        return err;
}
EXPORT_SYMBOL(vlan_vids_add_by_dev);
void vlan_vids_del_by_dev(struct net_device *dev,
                          const struct net_device *by_dev)
{
        struct vlan_vid_info *vid_info;
2011-12-14 00:29:43 +04:00
        struct vlan_info *vlan_info;
2011-12-08 08:11:19 +04:00

        ASSERT_RTNL();
2011-12-14 00:29:43 +04:00
        vlan_info = rtnl_dereference(by_dev->vlan_info);
        if (!vlan_info)
2011-12-08 08:11:19 +04:00
                return;
2011-12-14 00:29:43 +04:00
        list_for_each_entry(vid_info, &vlan_info->vid_list, list)
2013-04-19 06:04:28 +04:00
                vlan_vid_del(dev, vid_info->proto, vid_info->vid);
2011-12-08 08:11:19 +04:00
}
EXPORT_SYMBOL(vlan_vids_del_by_dev);
2012-08-23 07:26:52 +04:00
bool vlan_uses_dev(const struct net_device *dev)
{
2012-10-14 08:30:56 +04:00
        struct vlan_info *vlan_info;

        ASSERT_RTNL();

        vlan_info = rtnl_dereference(dev->vlan_info);
        if (!vlan_info)
                return false;
        return vlan_info->grp.nr_vlan_devs ? true : false;
2012-08-23 07:26:52 +04:00
}
EXPORT_SYMBOL(vlan_uses_dev);
2018-11-14 01:22:48 +03:00
static struct sk_buff *vlan_gro_receive(struct list_head *head,
                                        struct sk_buff *skb)
{
        const struct packet_offload *ptype;
        unsigned int hlen, off_vlan;
        struct sk_buff *pp = NULL;
        struct vlan_hdr *vhdr;
        struct sk_buff *p;
        __be16 type;
        int flush = 1;

        off_vlan = skb_gro_offset(skb);
        hlen = off_vlan + sizeof(*vhdr);
        vhdr = skb_gro_header_fast(skb, off_vlan);
        if (skb_gro_header_hard(skb, hlen)) {
                vhdr = skb_gro_header_slow(skb, hlen, off_vlan);
                if (unlikely(!vhdr))
                        goto out;
        }

        type = vhdr->h_vlan_encapsulated_proto;

        rcu_read_lock();
        ptype = gro_find_receive_by_type(type);
        if (!ptype)
                goto out_unlock;

        flush = 0;

        list_for_each_entry(p, head, list) {
                struct vlan_hdr *vhdr2;

                if (!NAPI_GRO_CB(p)->same_flow)
                        continue;

                vhdr2 = (struct vlan_hdr *)(p->data + off_vlan);
                if (compare_vlan_header(vhdr, vhdr2))
                        NAPI_GRO_CB(p)->same_flow = 0;
        }

        skb_gro_pull(skb, sizeof(*vhdr));
        skb_gro_postpull_rcsum(skb, vhdr, sizeof(*vhdr));
2021-03-18 21:42:34 +03:00
        pp = indirect_call_gro_receive_inet(ptype->callbacks.gro_receive,
                                            ipv6_gro_receive, inet_gro_receive,
                                            head, skb);
2018-11-14 01:22:48 +03:00
out_unlock:
        rcu_read_unlock();
out:
        skb_gro_flush_final(skb, pp, flush);

        return pp;
}
static int vlan_gro_complete(struct sk_buff *skb, int nhoff)
{
        struct vlan_hdr *vhdr = (struct vlan_hdr *)(skb->data + nhoff);
        __be16 type = vhdr->h_vlan_encapsulated_proto;
        struct packet_offload *ptype;
        int err = -ENOENT;

        rcu_read_lock();
        ptype = gro_find_complete_by_type(type);
        if (ptype)
2021-03-18 21:42:34 +03:00
                err = INDIRECT_CALL_INET(ptype->callbacks.gro_complete,
                                         ipv6_gro_complete, inet_gro_complete,
                                         skb, nhoff + sizeof(*vhdr));
2018-11-14 01:22:48 +03:00
        rcu_read_unlock();

        return err;
}
static struct packet_offload vlan_packet_offloads[] __read_mostly = {
        {
                .type = cpu_to_be16(ETH_P_8021Q),
                .priority = 10,
                .callbacks = {
                        .gro_receive = vlan_gro_receive,
                        .gro_complete = vlan_gro_complete,
                },
        },
        {
                .type = cpu_to_be16(ETH_P_8021AD),
                .priority = 10,
                .callbacks = {
                        .gro_receive = vlan_gro_receive,
                        .gro_complete = vlan_gro_complete,
                },
        },
};

static int __init vlan_offload_init(void)
{
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(vlan_packet_offloads); i++)
                dev_add_offload(&vlan_packet_offloads[i]);

        return 0;
}
fs_initcall(vlan_offload_init);