[ Upstream commit 351050ecd6523374b370341cc29fe61e2201556b ]
syzkaller had no problem triggering a deadlock by attaching a KCM socket
to another one (or to itself). (The original syzkaller report was a very
confusing lockdep splat during a sendmsg().)
KCM claims to only support TCP, but no enforcement is done, so we might
need to add additional checks.
Fixes: ab7ac4eb9832 ("kcm: Kernel Connection Multiplexor module")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Tom Herbert <tom@quantonium.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit edbd58be15a957f6a760c4a514cd475217eb97fd ]
... which may happen with certain values of tp_reserve and maclen.
Fixes: 58d19b19cd99 ("packet: vnet_hdr support for tpacket_rcv")
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0f3086868e8889a823a6e0f3d299102aa895d947 ]
Passing commands for logging to t4_record_mbox() with size
MBOX_LEN, when the actual command size is smaller,
causes out-of-bounds stack accesses in t4_record_mbox() while
copying command words here:
for (i = 0; i < size / 8; i++)
	entry->cmd[i] = be64_to_cpu(cmd[i]);
Up to 48 bytes from the stack are then leaked to debugfs.
This happens whenever we send (and log) commands described by
structs fw_sched_cmd (32 bytes leaked), fw_vi_rxmode_cmd (48),
fw_hello_cmd (48), fw_bye_cmd (48), fw_initialize_cmd (48),
fw_reset_cmd (48), fw_pfvf_cmd (32), fw_eq_eth_cmd (16),
fw_eq_ctrl_cmd (32), fw_eq_ofld_cmd (32), fw_acl_mac_cmd(16),
fw_rss_glb_config_cmd(32), fw_rss_vi_config_cmd(32),
fw_devlog_cmd(32), fw_vi_enable_cmd(48), fw_port_cmd(32),
fw_sched_cmd(32), fw_devlog_cmd(32).
The cxgb4vf driver got this right instead.
When we call t4_record_mbox() to log a command reply, however, a size
of MBOX_LEN can be used, as get_mbox_rpl() fills cmd_rpl up
completely.
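For illustration only, a minimal sketch of the principle (not the exact
driver diff; the log-entry layout is assumed):

	/* copy only 'size' bytes of the command into the log entry and
	 * clear the rest, so nothing beyond the caller's stack buffer is
	 * ever read or exposed through debugfs.
	 */
	memset(entry->cmd, 0, sizeof(entry->cmd));
	for (i = 0; i < size / 8; i++)
		entry->cmd[i] = be64_to_cpu(cmd[i]);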
Fixes: 7f080c3f2ff0 ("cxgb4: Add support to enable logging of firmware mailbox commands")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9b4e946ce14e20d7addbfb7d9139e604f9fda107 ]
There is a possible deadlock when canceling the link status
delayed work queue. The removal process runs with RTNL held,
and the link status callback acquires RTNL.
Resolve the issue by using trylock and rescheduling.
If a cancel is in progress, that blocks the reschedule from happening.
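A hedged sketch of the trylock-and-reschedule pattern described above
(function and field names are illustrative, not the driver's own):

static void link_status_work(struct work_struct *w)
{
	struct net_device_context *ndev_ctx =
		container_of(w, struct net_device_context, dwork.work);

	if (!rtnl_trylock()) {
		/* removal path holds RTNL and is canceling us: back off
		 * and retry instead of dead-locking on rtnl_lock()
		 */
		schedule_delayed_work(&ndev_ctx->dwork, LINKCHANGE_INT);
		return;
	}

	/* ... update link state under RTNL ... */

	rtnl_unlock();
}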
Fixes: 122a5f6410f4 ("staging: hv: use delayed_work for netvsc_send_garp()")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e58f95831e7468d25eb6e41f234842ecfe6f014f ]
gcc-8.0.0 (snapshot) points out that we copy a variable-length string
into a fixed length field using memcpy() with the destination length,
and that ends up copying whatever follows the string:
inlined from 'ql_core_dump' at drivers/net/ethernet/qlogic/qlge/qlge_dbg.c:1106:2:
drivers/net/ethernet/qlogic/qlge/qlge_dbg.c:708:2: error: 'memcpy' reading 15 bytes from a region of size 14 [-Werror=stringop-overflow=]
memcpy(seg_hdr->description, desc, (sizeof(seg_hdr->description)) - 1);
Changing it to use strncpy() will instead zero-pad the destination,
which seems to be the right thing to do here.
The bug is probably harmless, but it seems like a good idea to address
it in stable kernels as well, if only for the purpose of building with
gcc-8 without warnings.
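A hedged before/after sketch of the change described above:

	/* before: copies whatever follows the shorter string on the stack */
	memcpy(seg_hdr->description, desc, sizeof(seg_hdr->description) - 1);
	/* after: stops at the NUL and zero-pads the rest of the fixed field */
	strncpy(seg_hdr->description, desc, sizeof(seg_hdr->description) - 1);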
Fixes: a61f80261306 ("qlge: Add ethtool register dump function.")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a1a50c8e4c241a505b7270e1a3c6e50d94e794b1 ]
Junote Cai reported that he was not able to get a DSA setup involving the
Freescale DPAA/FMAN driver to work and narrowed it down to
of_find_net_device_by_node(). This function requires the network device's
device reference to be correctly set which is the case here, though we have
lost any device_node association there.
The problem is that dpaa_eth_add_device() allocates a "dpaa-ethernet" platform
device, and later on dpaa_eth_probe() is called, but SET_NETDEV_DEV() won't be
propagating &pdev->dev.of_node properly. Fix this by inheriting both the parent
device and the of_node when dpaa_eth_add_device() creates the platform device.
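A minimal sketch of the fix described above (field spelling assumed, not
the literal diff): make the new platform device inherit both the parent
device and its of_node so of_find_net_device_by_node() can match it.

	pdev->dev.parent = priv->dev;
	pdev->dev.of_node = priv->dev->of_node;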
Fixes: 3933961682a3 ("fsl/fman: Add FMan MAC driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit fd6055a806edc4019be1b9fb7d25262599bca5b1 ]
When peeking, if a bad csum is discovered, the skb is unlinked from
the queue with __sk_queue_drop_skb and the peek operation restarted.
__sk_queue_drop_skb only drops packets that match the queue head.
This fails if the skb was found after the head, using SO_PEEK_OFF
socket option. This causes an infinite loop.
We MUST drop this problematic skb, and we can simply check whether the
skb was already removed by another thread by looking at skb->next:
this pointer is set to NULL by the __skb_unlink() operation, which can
only have happened under the spinlock protection.
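A hedged sketch of the check described above (surrounding error handling
elided): only unlink the skb if no other thread has already done so.

	spin_lock_bh(&sk->sk_receive_queue.lock);
	if (skb->next) {
		/* still linked: __skb_unlink() would have NULLed skb->next */
		__skb_unlink(skb, &sk->sk_receive_queue);
		/* ... drop/free the bad-csum skb ... */
	}
	spin_unlock_bh(&sk->sk_receive_queue.lock);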
Many thanks to syzkaller team (and particularly Dmitry Vyukov who
provided us nice C reproducers exhibiting the lockup) and Willem de
Bruijn who provided first version for this patch and a test program.
Fixes: 627d2d6b5500 ("udp: enable MSG_PEEK at non-zero offset")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 78362998f58c7c271e2719dcd0aaced435c801f9 ]
This helps tools such as wpa_supplicant start even if the macsec
module isn't loaded yet.
Fixes: c09440f7dcb3 ("macsec: introduce IEEE 802.1AE driver")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 4e587ea71bf924f7dac621f1351653bd41e446cb ]
Commit c5cff8561d2d adds an rcu grace period before freeing fib6_node. This
generates a new sparse warning on rt->rt6i_node related code:
net/ipv6/route.c:1394:30: error: incompatible types in comparison
expression (different address spaces)
./include/net/ip6_fib.h:187:14: error: incompatible types in comparison
expression (different address spaces)
This commit adds "__rcu" tag for rt6i_node and makes sure corresponding
rcu API is used for it.
After this fix, sparse no longer generates the above warning.
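A hedged sketch of the annotation and accessor pattern described above
(surrounding members elided):

struct rt6_info {
	/* ... */
	struct fib6_node __rcu	*rt6i_node;
};

static struct fib6_node *rt6_get_node(struct rt6_info *rt)
{
	/* callers must hold rcu_read_lock() */
	return rcu_dereference(rt->rt6i_node);
}

/* writers publish or clear the pointer with rcu_assign_pointer() /
 * RCU_INIT_POINTER() under the fib6 table lock.
 */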
Fixes: c5cff8561d2d ("ipv6: add rcu grace period before freeing fib6_node")
Signed-off-by: Wei Wang <weiwan@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit c5cff8561d2d0006e972bd114afd51f082fee77c ]
We currently keep rt->rt6i_node pointing to the fib6_node for the route.
And some functions make use of this pointer to dereference the fib6_node
from rt structure, e.g. rt6_check(). However, as there is neither
refcount nor rcu taken when dereferencing rt->rt6i_node, it could
potentially cause crashes as rt->rt6i_node could be set to NULL by other
CPUs when doing a route deletion.
This patch introduces an rcu grace period before freeing fib6_node and
makes sure the functions that dereference it take rcu_read_lock().
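A hedged sketch of the grace-period change (the rcu_head member name and
the kmem cache name are assumptions): free the fib6_node via RCU so
readers holding rcu_read_lock() never see freed memory.

static void node_free_rcu(struct rcu_head *head)
{
	struct fib6_node *fn = container_of(head, struct fib6_node, rcu);

	kmem_cache_free(fib6_node_kmem, fn);
}

static void node_free(struct fib6_node *fn)
{
	call_rcu(&fn->rcu, node_free_rcu);
}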
Note: there is no "Fixes" tag because this bug was there in a very
early stage.
Signed-off-by: Wei Wang <weiwan@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 3de33e1ba0506723ab25734e098cf280ecc34756 ]
A packet length of exactly IPV6_MAXPLEN is allowed; we should
refuse to parse options only if the size is 64KiB or more.
While at it, remove one extra variable and one assignment which
were also introduced by the commit that introduced the size
check. Checking the sum 'offset + len' and only later adding
'len' to 'offset' doesn't provide any advantage over directly
summing to 'offset' and checking it.
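A hedged fragment of the corrected loop shape (not the exact kernel
diff): add the option length to 'offset' first, then reject only once
the running total exceeds IPV6_MAXPLEN, so a 64KiB-1 payload still passes.

	offset += optlen;
	if (offset > IPV6_MAXPLEN)
		return -EINVAL;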
Fixes: 6399f1fae4ec ("ipv6: avoid overflow of offset in ip6_find_1stfragopt")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b31ff3cdf540110da4572e3e29bd172087af65cc upstream.
If using a kernel with CONFIG_XFS_RT=y and we set the RTINHERIT flag on
a directory in a filesystem that does not have a realtime device and
create a new file in that directory, it gets marked as a real time file.
When data is written and a fsync is issued, the filesystem attempts to
flush a non-existent rt device during the fsync process.
This results in a crash dereferencing a null buftarg pointer in
xfs_blkdev_issue_flush():
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
IP: xfs_blkdev_issue_flush+0xd/0x20
.....
Call Trace:
xfs_file_fsync+0x188/0x1c0
vfs_fsync_range+0x3b/0xa0
do_fsync+0x3d/0x70
SyS_fsync+0x10/0x20
do_syscall_64+0x4d/0xb0
entry_SYSCALL64_slow_path+0x25/0x25
Setting RT inode flags does not require special privileges so any
unprivileged user can cause this oops to occur. To reproduce, confirm
kernel is compiled with CONFIG_XFS_RT=y and run:
# mkfs.xfs -f /dev/pmem0
# mount /dev/pmem0 /mnt/test
# mkdir /mnt/test/foo
# xfs_io -c 'chattr +t' /mnt/test/foo
# xfs_io -f -c 'pwrite 0 5m' -c fsync /mnt/test/foo/bar
Or just run xfstests with MKFS_OPTIONS="-d rtinherit=1" and wait.
Kernels built with CONFIG_XFS_RT=n are not exposed to this bug.
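A hedged sketch of the guard described above (macro body illustrative):
only treat an inode as realtime when the mount actually has an rt
device, so fsync never tries to flush a non-existent rt buftarg.

#define XFS_IS_REALTIME_INODE(ip) \
	(((ip)->i_d.di_flags & XFS_DIFLAG_REALTIME) && \
	 (ip)->i_mount->m_rtdev_targp)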
Fixes: f538d4da8d52 ("[XFS] write barrier support")
Signed-off-by: Richard Wareing <rwareing@fb.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e973b1a5999e57da677ab50da5f5479fdc0f0c31 upstream.
Since commit 18290650b1c8 ("NFS: Move buffered I/O locking into
nfs_file_write()") nfs_file_write() has not flushed the correct byte
range during synchronous writes. generic_write_sync() expects that
iocb->ki_pos points to the right edge of the range rather than the
left edge.
To replicate the problem, open a file with O_DSYNC, have the client
write at increasing offsets, and then print the successful offsets.
Block port 2049 partway through that sequence, and observe that the
client application indicates successful writes in advance of what the
server received.
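A hedged sketch of the ordering described above (locals assumed, not the
literal diff): make sure iocb->ki_pos already points at the right edge of
the written range before generic_write_sync() is asked to flush it.

	result = generic_perform_write(file, from, iocb->ki_pos);
	if (result > 0)
		iocb->ki_pos += result;		/* advance first ... */
	/* ... */
	if (result > 0)
		result = generic_write_sync(iocb, result);	/* ... then sync */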
Fixes: 18290650b1c8 ("NFS: Move buffered I/O locking into nfs_file_write()")
Signed-off-by: Jacob Strauss <jsstraus@amazon.com>
Signed-off-by: Tarang Gupta <tarangg@amazon.com>
Tested-by: Tarang Gupta <tarangg@amazon.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 196639ebbe63a037fe9a80669140bd292d8bcd80 upstream.
The writeback code wants to send a commit after processing the pages,
which is why we want to delay releasing the struct path until after
that's done.
Also, the layout code expects that we do not free the inode before
we've put the layout segments in pnfs_writehdr_free() and
pnfs_readhdr_free().
Fixes: 919e3bd9a875 ("NFS: Ensure we commit after writeback is complete")
Fixes: 4714fb51fd03 ("nfs: remove pgio_header refcount, related cleanup")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 746a272e44141af24a02f6c9b0f65f4c4598ed42 upstream.
When there's a fatal signal pending, arm's do_page_fault()
implementation returns 0. The intent is that we'll return to the
faulting userspace instruction, delivering the signal on the way.
However, if we take a fatal signal while fixing up a uaccess, this
results in a return to the faulting kernel instruction, which will be
instantly retried, resulting in the same fault being taken forever. As
the task never reaches userspace, the signal is not delivered, and the
task is left unkillable. While the task is stuck in this state, it can
inhibit the forward progress of the system.
To avoid this, we must ensure that when a fatal signal is pending, we
apply any necessary fixup for a faulting kernel instruction. Thus we
will return to an error path, and it is up to that code to make forward
progress towards delivering the fatal signal.
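A hedged sketch of the behaviour described above (based on the generic
fault-handling shape, not the literal arm diff): a user-mode fault may
return and deliver the signal, but a kernel-mode fault must still run the
exception fixup.

	if (fatal_signal_pending(current)) {
		if (!user_mode(regs))
			goto no_context;	/* apply the uaccess fixup */
		return 0;			/* deliver the signal on return */
	}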
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 95696d292e204073433ed2ef3ff4d3d8f42a8248 upstream.
The GIC-500 integrated in the Armada-37xx SoCs is compliant with
the GICv3 architecture, and thus provides a maintenance interrupt
that is required for hypervisors to function correctly.
With the interrupt provided in the DT, KVM now works as it should.
Tested on an Espressobin system.
Fixes: adbc3695d9e4 ("arm64: dts: add the Marvell Armada 3700 family and a development board")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e860d2c904d1a9f38a24eb44c9f34b8f915a6ea3 upstream.
Validate the output buffer length for L2CAP config requests and responses
to avoid overflowing the stack buffer used for building the option blocks.
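A hedged sketch of the kind of bound check described above (the helper
name and signature are illustrative, not the actual l2cap_core.c code):
refuse to append an option that would not fit in the output buffer.

static u8 *add_conf_opt(u8 *ptr, u8 *end, u8 type, u8 len, unsigned long val)
{
	if (ptr + sizeof(struct l2cap_conf_opt) + len > end)
		return ptr;	/* no room: drop the option instead of overflowing */

	/* ... write the option header and value ... */
	return ptr + sizeof(struct l2cap_conf_opt) + len;
}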
Signed-off-by: Ben Seri <ben@armis.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 20e2b791796bd68816fa115f12be5320de2b8021 upstream.
The ISA msnd drivers have loops fetching the ring-buffer head, tail
and size values inside the loops. Such code is inefficient and
fragile.
This patch optimizes it, and also adds the sanity check to avoid the
endless loops.
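A hedged sketch of the hardening described above (ring and field names
are illustrative): snapshot the ring indices once and sanity-check them
so corrupted head/tail values can no longer drive an endless loop.

	u16 head = readw(&ring->head);
	u16 tail = readw(&ring->tail);
	u16 size = readw(&ring->size);

	if (!size || head >= size || tail >= size)
		return;			/* inconsistent state: bail out */

	while (head != tail) {
		/* ... process one entry ... */
		head = (head + 1) % size;
	}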
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196131
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196133
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: grygorii tertychnyi <gtertych@cisco.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit de0c799bba2610a8e1e9a50d76a28614520a4cd4 upstream.
Seen while reading the code: in handle_mm_fault(), in the case where
arch_vma_access_permitted() fails, the call to
mem_cgroup_oom_disable() is not made.
To fix that, move the call to mem_cgroup_oom_enable() to after the call to
arch_vma_access_permitted(), as we should not have entered the memcg OOM handling.
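A hedged, simplified sketch of the resulting ordering in handle_mm_fault()
(locals assumed): only enter memcg OOM handling once the access check has
passed, so no exit path skips the matching disable.

	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
				       flags & FAULT_FLAG_INSTRUCTION,
				       flags & FAULT_FLAG_REMOTE))
		return VM_FAULT_SIGSEGV;

	if (flags & FAULT_FLAG_USER)
		mem_cgroup_oom_enable();

	ret = __handle_mm_fault(vma, address, flags);

	if (flags & FAULT_FLAG_USER)
		mem_cgroup_oom_disable();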
Link: http://lkml.kernel.org/r/1504625439-31313-1-git-send-email-ldufour@linux.vnet.ibm.com
Fixes: bae473a423f6 ("mm: introduce fault_env")
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 23d98c204386a98d9ef9f9e744f41443ece4929f upstream.
Those are funny cases. Make sure they work.
(Something is screwy with signal handling if a selector is 1, 2, or 3.
Anyone who wants to dive into that rabbit hole is welcome to do so.)
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Chang Seok <chang.seok.bae@intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6c6b5a39c4bf3dbd8cf629c9f5450e983c19dbb9 upstream.
Several distributions mount the "proper root" as ro during initrd and
then remount it as rw before pivot_root(2). Thus, if a rescan had been
aborted by a previous shutdown, the rescan would never be resumed.
This issue would manifest itself as several btrfs ioctl(2)s causing the
entire machine to hang when btrfs_qgroup_wait_for_completion was hit
(due to the fs_info->qgroup_rescan_running flag being set but the rescan
itself not being resumed). Notably, Docker's btrfs storage driver makes
regular use of BTRFS_QUOTA_CTL_DISABLE and BTRFS_IOC_QUOTA_RESCAN_WAIT
(causing this problem to be manifested on boot for some machines).
Cc: Jeff Mahoney <jeffm@suse.com>
Fixes: b382a324b60f ("Btrfs: fix qgroup rescan resume on mount")
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Tested-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 40a5fce495715c48c2e02668144e68a507ac5a30 upstream.
The default host NQN, which is generated based on the host's UUID,
does not follow the UUID-based NQN format laid out in the NVMe 1.3
specification. Remove the "NVMf:" portion of the NQN to match the spec.
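A hedged sketch of the resulting format (uuid helpers and buffer size are
era-appropriate assumptions): the generated host NQN now follows the
NVMe 1.3 UUID-based form, without the "NVMf:" portion.

	uuid_be id;
	char nqn[NVMF_NQN_SIZE];

	uuid_be_gen(&id);
	snprintf(nqn, NVMF_NQN_SIZE,
		 "nqn.2014-08.org.nvmexpress:uuid:%pUb", &id);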
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 10777de570016471fd929869c7830a7772893e39 upstream.
The configuration for BCH is not correct in the current driver.
The ECC_CFG_ECC_DISABLE bit defines whether the BCH ECC is enabled
or disabled, in which
0x1 : BCH_DISABLED
0x0 : BCH_ENABLED
But currently host->bch_enabled is being assigned BCH_DISABLED.
Fixes: c76b78d8ec05a ("mtd: nand: Qualcomm NAND controller driver")
Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Reviewed-by: Archit Taneja <architt@codeaurora.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d8a9b320a26c1ea28e51e4f3ecfb593d5aac2910 upstream.
The NAND page read fails without the complete boot chain since the
NAND_DEV_CMD_VLD value is not proper. The default power-on-reset
value for this register is
0xe - ERASE_START_VALID | WRITE_START_VALID | READ_STOP_VALID
READ_START_VALID should be enabled for sending the PAGE_READ
command. READ_STOP_VALID should be cleared since a normal NAND
page read does not require the READ_STOP command.
Fixes: c76b78d8ec05a ("mtd: nand: Qualcomm NAND controller driver")
Reviewed-by: Archit Taneja <architt@codeaurora.org>
Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3bff08dffe3115a25ce04b95ea75f6d868572c60 upstream.
Commit a894cf6c5a82 ("mtd: nand: mxc: switch to mtd_ooblayout_ops")
introduced a bug in the OOB layout description. Even if the driver claims
that 3 ECC bytes are reserved to protect 512 bytes of data, it's actually
5 ECC bytes to protect 512+6 bytes of data (some OOB bytes are also
protected using extra ECC bytes).
Fix the mxc_v1_ooblayout_{free,ecc}() functions to reflect this behavior.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: a894cf6c5a82 ("mtd: nand: mxc: switch to mtd_ooblayout_ops")
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d5104c5a6b56385426e15047050584794bb6254 upstream.
In chasing down a previous issue with EDID probing from calling
drm_helper_hpd_irq_event() from irq context, Laurent noticed
that the DRM documentation suggests that
drm_kms_helper_hotplug_event() should be used instead.
Thus this patch replaces drm_helper_hpd_irq_event() with
drm_kms_helper_hotplug_event(), which requires we update the
connector.status entry and only call _hotplug_event() when the
status changes.
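A hedged sketch of the change described above (local names assumed):

	enum drm_connector_status status =
		connector->funcs->detect(connector, false);

	if (connector->status != status) {
		connector->status = status;
		/* fires the hotplug uevent from sleepable context */
		drm_kms_helper_hotplug_event(connector->dev);
	}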
Cc: David Airlie <airlied@linux.ie>
Cc: Archit Taneja <architt@codeaurora.org>
Cc: Wolfram Sang <wsa+renesas@sang-engineering.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1484614372-15342-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Thong Ho <thong.ho.px@rvc.renesas.com>
Signed-off-by: Nhan Nguyen <nhan.nguyen.yb@renesas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 518cb7057a59b9441336d2e88a396d52b6ab0cce upstream.
I was recently seeing issues with EDID probing, where the wait for
the IRQ to set the EDID-read bit never completed and the code would
time out and fail.
Digging deeper, I found this was due to the fact that
IRQs were disabled as we were running in IRQ context from
the HPD signal.
Thus this patch changes the logic to handle the HPD signal
via a work_struct so we can be out of irq context.
With this patch, the EDID probing on hotplug does not time
out.
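A hedged sketch of moving HPD handling out of hard-irq context (struct
bridge_priv and its fields are hypothetical names for illustration):

static void hpd_work_func(struct work_struct *work)
{
	struct bridge_priv *priv =
		container_of(work, struct bridge_priv, hpd_work);

	/* sleepable context: EDID reads and DRM notification are safe here */
	if (priv->connector.dev)
		drm_helper_hpd_irq_event(priv->connector.dev);
}

static irqreturn_t bridge_irq_handler(int irq, void *dev_id)
{
	struct bridge_priv *priv = dev_id;

	/* ... ack the hardware interrupt ... */
	schedule_work(&priv->hpd_work);
	return IRQ_HANDLED;
}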
Cc: David Airlie <airlied@linux.ie>
Cc: Archit Taneja <architt@codeaurora.org>
Cc: Wolfram Sang <wsa+renesas@sang-engineering.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1484614372-15342-2-git-send-email-john.stultz@linaro.org
Signed-off-by: Thong Ho <thong.ho.px@rvc.renesas.com>
Signed-off-by: Nhan Nguyen <nhan.nguyen.yb@renesas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8d26f491116feaa0b16de370b6a7ba40a40fa0b4 upstream.
Commit 1bc0eb044615 ("scsi: sg: protect accesses to 'reserved' page
array") adds needed concurrency protection for the "reserve" buffer.
Some checks that are initially made outside the lock are replicated once
the lock is taken to ensure the checks and resulting decisions are made
using consistent state.
The check that a request with flag SG_FLAG_MMAP_IO set fits in the
reserve buffer also needs to be performed again under the lock to ensure
the reserve buffer length compared against matches the value in effect
when the request is linked to the reserve buffer. An -ENOMEM should be
returned in this case, instead of switching over to an indirect buffer
as for non-MMAP_IO requests.
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Acked-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6a8dadcca81fceff9976e8828cceb072873b7bd5 upstream.
Take f_mutex around mmap() processing to protect against races with the
SG_SET_RESERVED_SIZE ioctl. Ensure the reserve buffer length remains
consistent during the mapping operation, and set the "mmap called" flag
to prevent further changes to the reserved buffer size as an atomic
operation with the mapping.
[mkp: fixed whitespace]
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Acked-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 591b6bb605785c12a21e8b07a08a277065b655a5 upstream.
Several legacy devices, such as Geode-based Cisco ASA appliances
and the DB800 development board, have a CS5536 IDE controller
with a different PCI ID than the existing one. Using pata_generic is
not always feasible, as at least the DB800 requires the MSR quirk from
pata_cs5536 to be used with the vendor firmware.
Signed-off-by: Andrey Korolyov <andrey@xdel.ru>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fbf1c41fc0f4d3574ac2377245efd666c1fa3075 upstream.
Commit 0a94efb5acbb ("workqueue: implicit ordered attribute should be
overridable") introduced a __WQ_ORDERED_EXPLICIT flag but gave it the
same value as __WQ_LEGACY. I don't believe these were intended to
mean the same thing, so renumber __WQ_ORDERED_EXPLICIT.
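A hedged sketch of the renumbering (bit positions shown for illustration):

#define __WQ_LEGACY		(1 << 18)	/* internal: create*_workqueue() */
#define __WQ_ORDERED_EXPLICIT	(1 << 19)	/* was (1 << 18): no longer aliases __WQ_LEGACY */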
Fixes: 0a94efb5acbb ("workqueue: implicit ordered attribute should be ...")
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bc60c90f472b6e762ea96ef384072145adc8d4af upstream.
It appears that MSI does not work on either G5 PPC or an E5500-based
platform, where other hardware is reported to work fine with MSI.
Both tests were conducted with NV4x hardware, so perhaps other (or even
this) hardware can be made to work. It's still possible to force-enable
with config=NvMSI=1 on load.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fa41ba0d08de7c975c3e94d0067553f9b934221f upstream.
Right now there is a potential hang situation for postcopy migrations,
if the guest is enabling storage keys on the target system during the
postcopy process.
For storage key virtualization, we have to forbid the empty zero page as
the storage key is a property of the physical page frame. As we enable
storage key handling lazily we then drop all mappings for empty zero
pages for lazy refaulting later on.
This does not work with the postcopy migration, which relies on the
empty zero page never triggering a fault again in the future. The reason
is that postcopy migration will simply read a page on the target system,
if that page is a known zero page, in order to fault in an empty zero page.
At the same time postcopy remembers that this page was already transferred,
so any future userfault on that page will NOT be retransmitted again, in
order to avoid races.
If the guest now enters storage key mode while in postcopy, this
assumption of postcopy is broken.
The solution is to disable the empty zero page for KVM guests early on
and not during storage key enablement. With this change, the postcopy
migration process is guaranteed to start after no zero pages are left.
As guest pages are very likely not empty zero pages anyway the memory
overhead is also pretty small.
While at it this also adds proper page table locking to the zero page
removal.
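A hedged sketch of the policy change described above (the s390 override
of this mm hook; the exact predicate is an assumption): forbid the shared
zero page as soon as the mm can back a KVM guest, rather than only once
storage keys get enabled.

#define mm_forbids_zeropage(mm)	((mm)->context.has_pgste)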
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit acf5e051ac44d5dc60b21bc4734ef1b844d55551 upstream.
This patch adds the resources and DMI IDs for the MEN SC31,
which uses a different address region to map the LPC bus than
the one used for the existing SC24.
Signed-off-by: Michael Moese <michael.moese@men.de>
[jth add stable tag]
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4b5dde2d6234ff5bc68e97e6901d1f2a0a7f3749 upstream.
mwifiex records information about various channels as it receives scan
information. It does this by appending to a buffer that was sized
to the max number of supported channels on any band, but there are
numerous problems:
(a) scans can return info from more than one band (e.g., both 2.4 and 5
GHz), so the determined "max" is not large enough
(b) some firmware appears to return multiple results for a given
channel, so the max *really* isn't large enough
(c) there is no bounds checking when stashing these stats, so problems
(a) and (b) can easily lead to buffer overflows
Let's patch this by setting a slightly-more-correct max (that accounts
for a combination of both 2.4G and 5G bands) and adding a bounds check
when writing to our statistics buffer.
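A hedged sketch of the bounds check described above (field names are
illustrative, not the driver's exact identifiers): never write past the
statistics array, whatever the firmware reports.

	if (priv->survey_idx >= adapter->num_in_chan_stats) {
		mwifiex_dbg(adapter, WARN,
			    "FW reported too many channel results (max %u)\n",
			    adapter->num_in_chan_stats);
		return;
	}
	chan_stats[priv->survey_idx++] = *fw_stats;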
Due to problem (b), we still might not properly report all known survey
information (e.g., with "iw <dev> survey dump"), since duplicate results
(or otherwise "larger than expected" results) will cause some
truncation. But that's a problem for a future bugfix.
(And because of this known deficiency, only log the excess at the WARN
level, since that isn't visible by default in this driver and would
otherwise be a bit too noisy.)
Fixes: bf35443314ac ("mwifiex: channel statistics support for mwifiex")
Cc: Avinash Patil <patila@marvell.com>
Cc: Xinming Hu <huxm@marvell.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Reviewed-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Reviewed-by: Ganapathi Bhat <gbhat@marvell.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3f7a5e13e85026b6e460bbd6e87f87379421d272 upstream.
We have a new PCI subsystem ID for 7265D. Add it to the list.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fc81bab5eeb103711925d7510157cf5cd2b153f4 upstream.
The _rtl_pci_find_adapter() failure path jumps to label fail3 for
unsupported adapter types.
However, on the way to fail3, rtl_deinit_core() is called
before rtl_init_core() has run.
Since fail2 includes the pci_iounmap() check, this failure can be moved
to fail2.
This fixes:
[ 4.492963] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 4.493067] IP: rtl_deinit_core+0x31/0x90 [rtlwifi]
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 293b915fd9bebf33cdc906516fb28d54649a25ac upstream.
Trackpoint button detection fails on ThinkPad 570 and 470 series;
this causes the middle button of the trackpoint to not be recognized.
As I don't believe there is any trackpoint with fewer than 3 buttons, this
patch just assumes three buttons when the extended button information
read fails.
Signed-off-by: Oscar Campos <oscar.campos@member.fsf.org>
Acked-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Aaron Ma <aaron.ma@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f35a7f91f66af528b3ee1921de16bea31d347ab0 upstream.
The rx ring buffers are added to a hash table if
firmware support full rx reorder. If the full rx
reorder support flag is not set before allocating
the rx ring buffers, none of the buffers are added
to the hash table.
There is a race condition between rx ring refill and
rx buffer replenish from napi poll. The interrupts are
enabled in hif start, before the rx ring is refilled during init.
We replenish buffers from napi poll due to the interrupts which
get enabled after hif start. Hence before the entire rx ring is
refilled during the init, the napi poll replenishes a few buffers
in steps of 100 buffers per attempt. During this rx ring replenish
from napi poll, the rx reorder flag has not yet been set, so
the replenished buffers are not added to the hash table.
Set the rx full reorder support flag before we allocate
the rx ring buffer to avoid the memory leak.
Signed-off-by: Rakesh Pillai <pillair@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Cc: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit efb3669e14fe17d0ec4ecf57d0365039fe726f59 upstream.
This adds Intel(R) Trace Hub PCI ID for Cannon Lake PCH-LP.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 84331e1390b6378a5129a3678c87a42c6f697d29 upstream.
This adds Intel(R) Trace Hub PCI ID for Cannon Lake PCH-H.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0f9b011d3321ca1079c7a46c18cb1956fbdb7bcb upstream.
The .release function of driver_ktype is 'driver_release()'.
This function frees the container_of this kobject.
So, this memory must not be freed explicitly in the error handling path of
'bus_add_driver()'. Otherwise a double free will occur.
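A hedged fragment of the error path described above: once the kobject has
been initialized, dropping the reference is what frees 'priv' (via
driver_release()), so an explicit kfree() would be a double free.

out_unregister:
	kobject_put(&priv->kobj);	/* frees priv through driver_release() */
	/* kfree(priv); -- removed: double free */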
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4744d4e2afebf9644a439da9ca73d822fdd67bd9 upstream.
This driver assumes that the device is operating in the continuous
conversion mode which performs the conversion continuously. So this driver
inserts a wait time before reading the conversion register if the
configuration is changed from a previous request.
Currently, the wait time is only the period required for a single
conversion that is calculated as the reciprocal of the sampling frequency.
However, we also need to wait for the previous conversion to complete.
Otherwise we will probably get the conversion result for the previous
configuration when the sampling frequency is lower.
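A hedged sketch of the wait described above (field names assumed): the
delay must cover the tail of the previous conversion plus one full
conversion at the new data rate before the result register is read.

	conv_time = DIV_ROUND_UP(USEC_PER_SEC, data->data_rate[old_dr]);
	conv_time += DIV_ROUND_UP(USEC_PER_SEC, data->data_rate[dr]);
	usleep_range(conv_time, conv_time + 1);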
Cc: Daniel Baluta <daniel.baluta@gmail.com>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a6fe5e52d5ecfc98530034d6c9db73777cf41ede upstream.
pm_runtime_get_sync() and pm_runtime_put_autosuspend() return 0 on
success, 1 if the device's runtime PM status was already the requested
status, or an error code on failure. So a positive return value doesn't
indicate an error condition.
However, any non-zero return values from buffer preenable and postdisable
callbacks are recognized as an error and this driver reuses the return
value from pm_runtime_get_sync() and pm_runtime_put_autosuspend() in
these callbacks. This change fixes the false error detections.
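A hedged sketch of the normalization described above: treat the positive
"already in the requested state" return as success before handing the
value back through the buffer callbacks.

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return ret;
	}
	return 0;	/* 1 ("already resumed") is not an error */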
Cc: Daniel Baluta <daniel.baluta@gmail.com>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>