On an imx6ul-pico board the following error is seen during system suspend:
dpm_run_callback(): platform_pm_resume+0x0/0x54 returns -110
PM: Device 2090000.flexcan failed to resume: error -110
The reason for this suspend error is that when the CAN interface is not
active, its clocks are disabled, so flexcan_chip_enable() will
always fail due to a timeout error.
In order to fix this issue, only call flexcan_chip_enable/disable()
when the CAN interface is active.
Based on a patch from Dong Aisheng in the NXP kernel.
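A minimal sketch of the approach (illustrative only; the driver's actual
suspend/resume callbacks carry more state handling):

  static int flexcan_suspend(struct device *device)
  {
          struct net_device *dev = dev_get_drvdata(device);
          struct flexcan_priv *priv = netdev_priv(dev);
          int err;

          /* Only touch the chip when the interface (and its clocks) is up;
           * otherwise the register access times out with -ETIMEDOUT (-110).
           */
          if (netif_running(dev)) {
                  err = flexcan_chip_disable(priv);
                  if (err)
                          return err;

                  netif_stop_queue(dev);
                  netif_device_detach(dev);
          }

          return 0;
  }

The resume path applies the same netif_running() guard before calling
flexcan_chip_enable().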
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Commit 1a6509d99122 ("[IPSEC]: Add support for combined mode algorithms")
introduced aead. The function attach_aead() kmemdup()s the algorithm
name during xfrm_state_construct().
However, this memory is never freed.
The implementation has since been slightly modified by
commit ee5c23176fcc ("xfrm: Clone states properly on migration")
without resolving this leak.
This patch adds a kfree() call for the aead algorithm name.
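The shape of the fix is roughly the following (sketch; the surrounding
teardown code is abbreviated):

  static void xfrm_state_gc_destroy(struct xfrm_state *x)
  {
          kfree(x->aalg);
          kfree(x->ealg);
          kfree(x->calg);
          kfree(x->aead);  /* copy made by attach_aead(); previously leaked */
          /* rest of the state teardown unchanged */
  }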
Fixes: 1a6509d99122 ("[IPSEC]: Add support for combined mode algorithms")
Signed-off-by: Ilan Tayari <ilant@mellanox.com>
Acked-by: Rami Rosen <roszenrami@gmail.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Similar to struct drm_update_draw, struct drm_mode_fb_cmd2 has an
unaligned 64 bit field (modifier). This gets packed differently between
32 bit and 64 bit modes on architectures that can handle unaligned 64
bit access (X86 and IA64). Other architectures pack the structs the
same and don't need the compat wrapper. Use the same condition for
drm_mode_fb_cmd2 as we use for drm_update_draw.
Note that only the modifier will be packed differently between compat
and non-compat versions.
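For reference, the uapi struct looks roughly like this; the 64 bit modifier
array starts at a 4-byte-aligned offset, which i386 keeps as-is while 64 bit
x86 and IA64 pad it to 8 bytes, hence the differing layouts:

  struct drm_mode_fb_cmd2 {
          __u32 fb_id;
          __u32 width;
          __u32 height;
          __u32 pixel_format;
          __u32 flags;
          __u32 handles[4];
          __u32 pitches[4];
          __u32 offsets[4];
          __u64 modifier[4];  /* 4-byte aligned on i386, 8-byte elsewhere */
  };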
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
[seanpaul added note at bottom of commit msg re: modifier]
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1473801645-116011-1-git-send-email-hoegsberg@chromium.org
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
Merge tag 'rxrpc-rewrite-20160917-2' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Tracepoint addition and improvement
Here is a set of patches that add some more tracepoints and improve a couple
of existing ones. New additions include:
(1) Connection refcount tracking.
(2) Client connection state machine tracking.
(3) Tx and Rx packet lifecycle.
(4) ACK reception and transmission.
(5) recvmsg processing.
Updates include:
(1) Print the symbolic packet name in the Rx packet tracepoint.
(2) Additional call refcount trace events.
(3) Improvements to sk_buff tracking with AF_RXRPC.
In addition:
(1) Config option to inject packet loss during both transmission and
reception.
(2) Removal of some printks.
This series needs to be applied on top of the previously posted fixes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'rxrpc-rewrite-20160917-1' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Fixes & miscellany
Here are some more AF_RXRPC fix patches with a couple of miscellaneous
changes also. Fixes include:
(1) Make RxRPC IPv6 support conditional on IPv6 being available.
(2) Move the condition check in rxrpc_locate_data() into the caller and
check the error return.
(3) Fix the detection of the last received packet in recvmsg.
(4) Account calls that need acceptance and clean up any unaccepted ones if
the socket gets closed.
(5) Fix the cleanup of client connections.
(6) Fix the soft-ACK parsing and the retransmission of packets based on
those ACKs.
(7) Suppress transmission of an ACK when there's no pending ACK to
transmit because another thread stole it.
And some miscellany:
(8) Whitespace removal.
(9) Switch-value consistency in rxrpc_send_call_packet().
(10) Fix the basic transmission packet size to allow for spur-of-the-moment
jumbo DATA packet production.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal says:
====================
sched: convert queues to single-linked list
During Netfilter Workshop 2016 Eric Dumazet pointed out that qdisc
schedulers use doubly-linked lists, even though a singly-linked list
would be enough.
The doubly-linked skb lists incur one extra write on enqueue/dequeue
operations (to change the ->prev pointer of the next list element).
This series converts qdiscs to a singly-linked version; the list head
maintains pointers to the first skb (for dequeue) and the last skb (for enqueue).
Most qdiscs don't queue at all and instead use a leaf qdisc (typically
pfifo_fast) so only a few schedulers needed changes.
I briefly tested netem and htb and they seemed fine.
UDP_STREAM netperf with 64 byte packets via veth+pfifo_fast shows
a small (~2%) improvement.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This change replaces sk_buff_head struct in Qdiscs with new qdisc_skb_head.
It is similar to the sk_buff_head API, but does not use skb->prev pointers.
Qdiscs will commonly enqueue at the tail of a list and dequeue at head.
While sk_buff_head works fine for this, enqueue/dequeue also needs to
adjust the prev pointer of the next element.
The ->prev pointer is not required for qdiscs so we can just leave
it undefined and avoid one cacheline write access for en/dequeue.
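The new list head and its enqueue path look roughly like this (sketch;
locking and byte accounting trimmed):

  struct qdisc_skb_head {
          struct sk_buff  *head;
          struct sk_buff  *tail;
          __u32           qlen;
          spinlock_t      lock;
  };

  static inline void __qdisc_enqueue_tail(struct sk_buff *skb,
                                          struct qdisc_skb_head *qh)
  {
          skb->next = NULL;       /* ->prev is never touched */
          if (qh->tail)
                  qh->tail->next = skb;
          else
                  qh->head = skb;
          qh->tail = skb;
          qh->qlen++;
  }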
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
After previous patch these functions are identical.
Replace __skb_dequeue in qdiscs with __qdisc_dequeue_head.
Next patch will then make __qdisc_dequeue_head handle
single-linked list instead of a struct sk_buff_head argument.
Doesn't change generated code.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Moves qdisc stats accounting to qdisc_dequeue_head.
The only direct caller of the __qdisc_dequeue_head version open-codes
this now.
This allows us to later use __qdisc_dequeue_head as a replacement
of __skb_dequeue() (which operates on sk_buff_head list).
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
A followup change will replace the sk_buff_head in the qdisc
struct with a slightly different list.
Use of the sk_buff_head helpers will thus cause compiler
warnings.
Open-code these accesses in an extra change to ease review.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix the return value check which was testing the wrong variable
in cfg_queues_uld().
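The defect class, for illustration (helper name made up, not the actual
cxgb4 code):

  /* before: the error lands in 'ret' but 'err' is tested,
   * so a failure goes unnoticed
   */
  ret = alloc_uld_queues(adap);
  if (err)
          goto free_queues;

  /* after */
  ret = alloc_uld_queues(adap);
  if (ret)
          goto free_queues;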
Fixes: 94cdb8bb993a ("cxgb4: Add support for dynamic allocation of resources for ULD")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nelson Chang says:
====================
net: ethernet: mediatek: add HW LRO functions
The series adds the hardware large receive offload (LRO) functions and
the ethtool functions to configure RX flows for HW LRO.
changes since v3:
- Respin the patch by the newer driver
- Move the dts description of hwlro to optional properties
changes since v2:
- Add ndo_fix_features to prevent NETIF_F_LRO from being turned off while an RX flow is programmed
- Rephrase the dts property as a capability indicating whether the hardware supports LRO
changes since v1:
- Add HW LRO support
- Add ethtool hooks to set LRO RX flows
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the dts property for the capability if the hardware supports LRO.
Signed-off-by: Nelson Chang <nelson.chang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The code adds ethtool functions to set RX flows for HW LRO. Because the
HW LRO hardware can only recognize the destination IP of TCP/IP RX flows,
the ethtool command to add a HW LRO flow is as below:
ethtool -N [devname] flow-type tcp4 dst-ip [ip_addr] loc [0~1]
Also, because the hardware can set four destination IPs in total, each
GMAC (GMAC1/GMAC2) can set at most two IPs separately.
Signed-off-by: Nelson Chang <nelson.chang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The code adds the hardware large receive offload (LRO) functions as below:
1) PDMA has four RX rings in total; one is the normal ring, and the others
can be configured as LRO rings.
2) Only TCP/IP RX flows can be offloaded. The hardware can set at most four
IP addresses; if the destination IP of an RX flow matches one of them,
the flow has a chance to be offloaded.
3) At most three RX flows can be offloaded, and each flow is mapped to
one RX ring.
4) If there are more than three candidate RX flows, the hardware chooses
three of them by throughput comparison results.
Signed-off-by: Nelson Chang <nelson.chang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allocate resources dynamically for cxgb4's Upper Layer Drivers (ULDs) like
cxgbit, iw_cxgb4 and cxgb4i. Allocate resources when they register with the
cxgb4 driver and free them while unregistering. All the queues and their
interrupts will be allocated during ULD probe only and freed during
remove.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In commit 311b21774f13 ("sctp: simplify sk_receive_queue locking"), a call
to 'skb_queue_splice_tail_init()' has been made explicit. Previously it was
hidden in 'sctp_skb_list_tail()'.
Now, the code around it looks redundant. The '_init()' part of
'skb_queue_splice_tail_init()' should already do the same.
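In other words (sketch): after the splice the source queue is already a
valid empty queue again, so a separate re-initialisation adds nothing.

  skb_queue_splice_tail_init(&oldsk->sk_receive_queue,
                             &newsk->sk_receive_queue);
  /* &oldsk->sk_receive_queue is re-initialised by the call itself */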
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The XDP_TX action can fail to transmit the frame in case the TX ring
is full or the port is down. In case of a TX failure it should drop the
frame, and not, as now, call 'break', which behaves the same as XDP_PASS.
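Roughly, the intended control flow in the RX loop (sketch; helper names
are illustrative, not the driver's exact code):

  switch (act) {
  case XDP_PASS:
          break;                          /* hand the frame to the stack */
  case XDP_TX:
          if (xdp_tx_frame(priv, frame))  /* TX ring full or port down */
                  goto xdp_drop;          /* was 'break', i.e. XDP_PASS */
          goto next_desc;
  case XDP_ABORTED:
  case XDP_DROP:
  xdp_drop:
          recycle_rx_buffer(ring, frame);
          goto next_desc;
  }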
Fixes: 9ecc2d86171a ("net/mlx4_en: add xdp forwarding and data write support")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar says:
====================
IPvlan introduce l3s mode
Same old problem, new approach, based especially on suggestions from the
earlier patch series.
First thing is that this is introduced as a new mode rather than
modifying the old (L3) mode. So the behavior of the existing modes is
preserved as it is and the new L3s mode obeys iptables so that intended
conn-tracking can work.
To do this, the code uses the newly added l3mdev_rcv() handler and an
iptables hook: l3mdev_rcv() performs an inbound route lookup with the
correct (IPvlan slave) interface, and then the iptables hook at LOCAL_INPUT
changes the input device from the master to the slave to complete the
formality.
The supporting stack changes are trivial: export a symbol so the IPv4
equivalent code is also available for IPv6, and allow the netfilter hook
registration code to be called with RTNL held by the caller. Please look
into the individual patches for details.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In a typical IPvlan L3 setup the master is in the default-ns and
each slave is in a different (slave) ns. In this setup, egress
packet processing for traffic originating from a slave-ns will
hit all NF_HOOKs in the slave-ns as well as the default-ns. However the same
is not true for ingress processing: there the NF_HOOKs are
hit only in the slave-ns, skipping them in the default-ns.
IPvlan in L3 mode is restrictive, and if admins want to deploy
iptables rules in the default-ns, this asymmetric data path makes it
impossible to do so.
This patch makes use of l3_rcv() (added as part of the l3mdev
enhancements) to perform an input route lookup on RX packets without
changing skb->dev, and then uses an nf_hook at NF_INET_LOCAL_IN
to change skb->dev just before handing the skb over to L4.
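A sketch of the LOCAL_IN hook (the lookup helper is hypothetical; the real
code also updates RX counters):

  static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
                                      const struct nf_hook_state *state)
  {
          /* Find the IPvlan slave that owns the destination address and
           * make it the input device before L4 sees the packet.
           */
          struct net_device *slave = ipvlan_addr_to_slave(skb);  /* hypothetical */

          if (slave)
                  skb->dev = slave;
          return NF_ACCEPT;
  }

  static const struct nf_hook_ops ipvl_nfops[] = {
          {
                  .hook     = ipvlan_nf_input,
                  .pf       = NFPROTO_IPV4,
                  .hooknum  = NF_INET_LOCAL_IN,
                  .priority = INT_MAX,
          },
  };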
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
CC: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add _nf_register_hooks() and _nf_unregister_hooks() calls which allow the
caller to hold the RTNL mutex.
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
CC: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make ip6_route_input_lookup available outside of the ipv6 module,
similar to ip_route_input_noref in the IPv4 world.
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Just fixups to runtime pm usage and some cleanups.
* 'exynos-drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos:
drm/exynos: avoid unused function warning
drm/exynos: g2d: fix system and runtime pm integration
drm/exynos: rotator: fix system and runtime pm integration
drm/exynos: gsc: fix system and runtime pm integration
drm/exynos: fimc: fix system and runtime pm integration
exynos-drm: Fix unsupported GEM memory type error message to be clear
Jiri Pirko says:
====================
net: return offloaded stats as default and expose original sw stats
The problem we try to handle is about offloaded forwarded packets
which are not seen by kernel. Let me try to draw it:
 port1                 port2 (HW stats are counted here)
    \                   /
     \                 /
      \               /
    --(A)---- ASIC --(B)--
                |
               (C)
                |
               CPU (SW stats are counted here)
Now we have a couple of flows for TX and RX (direction does not matter here):
1) port1->A->ASIC->C->CPU
For this flow, HW and SW stats are equal.
2) port1->A->ASIC->C->CPU->C->ASIC->B->port2
For this flow, HW and SW stats are equal.
3) port1->A->ASIC->B->port2
For this flow, SW stats are 0.
The purpose of this patchset is to provide facility for user to
find out the difference between flows 1+2 and 3. In other words, user
will be able to see the statistics for the slow-path (through kernel).
Also note that HW stats are what someone calls "accumulated" stats.
Every packet counted by SW is also counted by HW. Not the other way around.
By default the accumulated stats (HW) will be exposed to the user
so the userspace apps can react properly.
This patchset adds the SW stats (flows 1+2) under offload related stats, so
in the future we can expose other offload related stats in a similar way.
---
v9->v10:
- patch 2/3
- removed unnecessary ()s as pointed out by Nik
v8->v9:
- patch 2/3
- add using of idxattr and prividx
v7->v8:
- patch 2/3
- move helping const from uapi to rtnetlink
- cancel driver xstat nesting if it is empty
v6->v7:
- patch 1/3:
- ndo interface changed to get the wanted stats type as an input.
- change commit message.
- patch 2/3:
- create a nesting for offloaded stat and put SW stats under it.
- change the ndo call to indicate which offload stats we wants.
- change commit message.
- patch 3/3:
- change ndo implementation to match the changes in the previous patches.
- change commit message.
v5->v6:
- patch 2/4 was dropped as requested by Roopa
- patch 1/3:
- comment changed to indicate that default stats are combined stats
- commit message changed
- patch 2/3: (previously 3/4)
- SW stats return nothing if there is no SW stats ndo
v4->v5:
- updated cover letter
- patch3/4:
- using memcpy directly to copy stats as requested by DaveM
v3->v4:
- patch1/4:
- fixed "return ()" pointed out by EricD
- patch2/4:
- fixed if_nlmsg_size as pointed out by EricD
v2->v3:
- patch1/4:
- added dev_have_sw_stats helper
- patch2/4:
- avoided memcpy as requested by DaveM
- patch3/4:
- use new dev_have_sw_stats helper
v1->v2:
- patch3/4:
- fixed NULL initialization
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Change the default statistics ndo to return HW statistics
(like the ones returned by ethtool_ops).
The HW stats are collected into a cache by delayed work every 1 sec.
Implement the offload stat ndo.
Add a function to get SW statistics, to be called from this function.
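A sketch of the caching scheme (names are illustrative, not the driver's):

  static void port_update_stats_cache(struct work_struct *work)
  {
          struct port *port = container_of(work, struct port,
                                           stats_update_dw.work);

          port_read_hw_counters(port, &port->hw_stats_cache);
          schedule_delayed_work(&port->stats_update_dw, HZ);  /* 1 sec */
  }

The default stats ndo then just copies the cached HW counters, while the
new offload-stats ndo reads the SW (slow-path) counters.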
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a nested attribute of offload stats to if_stats_msg
named IFLA_STATS_LINK_OFFLOAD_XSTATS.
Under it, add SW stats, meaning stats only for packets that went via the
slowpath to the CPU, named IFLA_OFFLOAD_XSTATS_CPU_HIT.
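Schematically, the dump side nests the new attribute like this (error
handling trimmed; the fill helper name is illustrative):

  attr = nla_nest_start(skb, IFLA_STATS_LINK_OFFLOAD_XSTATS);
  if (!attr)
          return -EMSGSIZE;

  err = rtnl_fill_offload_stats(skb, dev, IFLA_OFFLOAD_XSTATS_CPU_HIT);
  if (err)
          nla_nest_cancel(skb, attr);  /* nothing to report for this dev */
  else
          nla_nest_end(skb, attr);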
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a new ndo to return statistics for offloaded operation.
Since there can be many different offloaded operations with many
stats types, the ndo gets an attribute id by which it knows which
stats are wanted. The ndo also gets a void pointer to be cast according
to the attribute id.
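One plausible prototype matching that description (sketch; the exact
signature lives in netdevice.h):

  int (*ndo_get_offload_stats)(int attr_id,
                               const struct net_device *dev,
                               void *attr_data);

The caller passes the IFLA_OFFLOAD_XSTATS_* attribute id and a buffer that
the driver casts according to that id.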
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mac80211-next-for-davem-2016-09-16' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next
Johannes Berg says:
====================
This time we have various things - all across the board:
* MU-MIMO sniffer support in mac80211
* a create_singlethread_workqueue() cleanup
* interface dump filtering that was documented but not implemented
* support for the new radiotap timestamp field
* send delBA in two unexpected conditions (as required by the spec)
* connect keys cleanups - allow only WEP with index 0-3
* per-station aggregation limit to work around broken APs
* debugfs improvement for the integrated codel algorithm
and various other small improvements and cleanups.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mac80211-for-davem-2016-09-16' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211
Johannes Berg says:
====================
Two more fixes:
* reject aggregation sessions for TSID/TID 8-16 that we
can never use anyway and which could confuse drivers
* check return value of skb_linearize()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
A couple of dev_err messages span two lines and the literal
string is missing a white space between words. Add the white
space and join the two lines into one.
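The defect class, for illustration (strings here are made up):

  /* before: adjacent literals concatenate without a space,
   * printing "unable toregister fixed PHY"
   */
  dev_err(dev, "unable to"
          "register fixed PHY\n");

  /* after */
  dev_err(dev, "unable to register fixed PHY\n");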
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
MAC devices use the RWKPKTEN and MGKPKTEN bits of the PMT Control/Status
register to generate power management events.
So this patch properly sets the RWKPKTEN [BIT(2)] inside the
PMT register (needed in the case of global unicast).
Reported-by: Aditi SHARMA <aditi-hed.sharma@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When fq is used on 32bit kernels, we need to lock the qdisc before
copying 64bit fields.
Otherwise "tc -s qdisc ..." might report bogus values.
Fixes: afe4fd062416 ("pkt_sched: fq: Fair Queue packet scheduler")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of using flow stats per NUMA node, use them per CPU. When using
megaflows, the stats lock can be a bottleneck in scalability.
On a E5-2690 12-core system, usual throughput went from ~4Mpps to
~15Mpps when forwarding between two 40GbE ports with a single flow
configured on the datapath.
This has been tested on a system with possible CPUs 0-7,16-23. After
module removal, there was no corruption of the slab cache.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@redhat.com>
Cc: pravin shelar <pshelar@ovn.org>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
On a system with only node 1 possible, all statistics are going to be
accounted on node 0, as it will have a single writer.
However, when getting and clearing the statistics, node 0 is not going
to be considered, as it is not a possible node.
Tested that statistics are not zero on a system with only node 1
possible. Also compile-tested with CONFIG_NUMA off.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@redhat.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long says:
====================
sctp: fix the transmit err process
This patchset is to improve the transmit err process and also fix some
issues.
After this patchset, once the chunks are enqueued successfully, even
if the chunks fail to be sent out, whether because of nodst or nomem,
no error is returned back to users any more. Instead, the chunks are
taken care of by retransmission.
v1->v2:
- add more details to the changelog in patch 1/6
- add Fixes: tag in patch 2/6, 3/6
- also revert 69b5777f2e57 in patch 3/6
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
As per David and Marcelo's suggestion, an ENOMEM error shouldn't be returned
back to the user in the transmit path. Instead, sctp's retransmission would
take care of the chunks that fail to send because of ENOMEM.
This patch only does some release work when alloc_skb fails, and does not
return ENOMEM back any more.
Besides, it also cleans up sctp_packet_transmit's err path, and fixes
some issues in the err path:
- It didn't free the head skb in nomem: path.
- No need to check nskb in no_route: path.
- It should goto err: path if alloc_skb fails for head.
- Not all the NOMEMs should free nskb.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sctp_outq_flush's return value is meaningless now; this patch makes
sctp_outq_flush return void, as well as sctp_outq_fail
and sctp_outq_uncork.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Every time sctp calls sctp_outq_flush, it sends out the chunks of the
control queue, retransmit queue and data queue. Even if some chunks fail
to transmit, it still has to flush all the transports, as it's
the only chance to clean that transmit_list.
So the latest transmit error here should be returned back. This transmit
error is an internal error of the sctp stack.
I checked all the places that use the transmit error (the return
value of sctp_outq_flush); most of them actually just save it to
sk_err.
The exception is sctp_assoc/endpoint_bh_rcv, which will drop the chunk if
it fails to send a REPLY, which is actually incorrect, as we can't
be sure that the error sctp_outq_flush returns is from sending that
REPLY.
So it's meaningless for sctp_outq_flush to return an error back.
This patch saves the transmit error to sk_err in sctp_outq_flush; the
new error can update the old value. Eventually, sctp_wait_for_* would
check for it.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Last patch "sctp: do not return the transmit err back to sctp_sendmsg"
made sctp_primitive_SEND return err only when asoc state is unavailable.
In this case, chunks are not enqueued, they have no chance to be freed if
we don't take care of them later.
This Patch is actually to revert commit 1cd4d5c4326a ("sctp: remove the
unused sctp_datamsg_free()"), commit 69b5777f2e57 ("sctp: hold the chunks
only after the chunk is enqueued in outq") and commit 8b570dc9f7b6 ("sctp:
only drop the reference on the datamsg after sending a msg"), to use
sctp_datamsg_free to free the chunks of current msg.
Fixes: 8b570dc9f7b6 ("sctp: only drop the reference on the datamsg after sending a msg")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Once a chunk is enqueued successfully, the sctp queues can take care of it.
Even if it fails to transmit (for example because of nomem), it should be
put into the retransmit queue.
If sctp reports this error to users, it confuses them; they may resend
the msg, but actually the kernel sctp stack is already in charge of
retransmitting it.
Besides, this error is probably not from the failure of transmitting the
current msg, but from transmitting or retransmitting another msg's chunks,
as sctp_outq_flush just tries to send out all transports' chunks.
This patch makes sctp_cmd_send_msg return void, and not return the
transmit err back to sctp_sendmsg.
Fixes: 8b570dc9f7b6 ("sctp: only drop the reference on the datamsg after sending a msg")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Data chunks are only sent by sctp_primitive_SEND, in which sctp checks
the asoc's state through the state table before calling sctp_outq_tail. So
there's no need to check the asoc's state again in sctp_outq_tail.
Besides, sctp_do_sm is protected by lock_sock; even if sending a msg is
interrupted by timer events, the event's processing still needs to acquire
lock_sock first. It means no other CMDs can be enqueued into the side effect
list before CMD_SEND_MSG to change asoc->state, so it's safe to remove the
check.
This patch removes the redundant asoc->state check from sctp_outq_tail.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When CONFIG_PM is not set, we get a warning about an unused function:
drivers/gpu/drm/exynos/exynos_drm_gsc.c:1219:12: error: 'gsc_clk_ctrl' defined but not used [-Werror=unused-function]
static int gsc_clk_ctrl(struct gsc_context *ctx, bool enable)
^~~~~~~~~~~~
This removes the two #ifdef checks in this file and instead marks the
functions as __maybe_unused, which is a more reliable way of doing the
same, allowing better build coverage and avoiding the warning above.
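The pattern, in sketch form:

  /* Before: wrapped in #ifdef CONFIG_PM, which breaks as soon as the
   * callers themselves become conditional.
   *
   * After: always compiled; __maybe_unused suppresses the
   * -Wunused-function warning and the compiler drops the unreferenced
   * body anyway.
   */
  static int __maybe_unused gsc_clk_ctrl(struct gsc_context *ctx, bool enable)
  {
          /* body unchanged in the driver; trivial here for the sketch */
          return 0;
  }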
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Here are 2 small fixes, and one new device id, for 4.8-rc7
The fixes solve a build error that was reported in your tree for the
blackfin arch, and resolve an issue with a number of broken USB devices
that reported the wrong interval rate. Included here is also a new
device id for the usb-serial driver.
All have been in linux-next with no reported issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'usb-4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Here are two small fixes, and one new device id, for 4.8-rc7
The fixes solve a build error that was reported in your tree for the
blackfin arch, and resolve an issue with a number of broken USB
devices that reported the wrong interval rate. Included here is also
a new device id for the usb-serial driver.
All have been in linux-next with no reported issues"
* tag 'usb-4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
USB: change bInterval default to 10 ms
usb: musb: Fix tusb6010 compile error on blackfin
USB: serial: simple: add support for another Infineon flashloader
Pull perf fixes from Thomas Gleixner:
"A couple of small fixes to x86 perf drivers:
- Measure L2 for HW_CACHE* events on AMD
- Fix the address filter handling in the intel/pt driver
- Handle the BTS disabling at the proper place"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/amd: Make HW_CACHE_REFERENCES and HW_CACHE_MISSES measure L2
perf/x86/intel/pt: Do validate the size of a kernel address filter
perf/x86/intel/pt: Fix kernel address filter's offset validation
perf/x86/intel/pt: Fix an off-by-one in address filter configuration
perf/x86/intel: Don't disable "intel_bts" around "intel" event batching
Pull SMP build fixlet from Thomas Gleixner:
"Add a missing include in cpuhotplug.h"
* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
cpu/hotplug: Include linux/types.h in linux/cpuhotplug.h
Pull irq fixes from Thomas Gleixner:
"Two patches from Boris which address a potential deadlock in the atmel
irq chip driver"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/atmel-aic: Fix potential deadlock in ->xlate()
genirq: Provide irq_gc_{lock_irqsave,unlock_irqrestore}() helpers
Since commit acb2505d0119 ("openrisc: fix copy_from_user()"),
copy_from_user() returns the number of bytes requested, not the
number of bytes not copied.
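For reference, the contract callers rely on:

  /* copy_from_user() returns the number of bytes that could NOT be
   * copied; 0 means everything was copied.
   */
  if (copy_from_user(kbuf, ubuf, len))
          return -EFAULT;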
Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: acb2505d0119 ("openrisc: fix copy_from_user()")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>