Even if we have no waiters on any of the sbitmap_queue wait states, we
still have to loop over every entry to check. We do this for every IO, so
the cost adds up.
Shift a bit of the cost to the slow path, when we actually have waiters.
Wrap prepare_to_wait_exclusive() and finish_wait(), so we can maintain
an internal count of how many are currently active. Then we can simply
check this count in sbq_wake_ptr() and not have to loop if we don't
have any sleepers.
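A sketch of how such wrappers might look (the structure and field names
here are illustrative, not necessarily the final API):

struct sbq_wait {
	struct sbitmap_queue *sbq;	/* non-NULL while we are accounted */
	struct wait_queue_entry wait;
};

static void sbq_prepare_to_wait(struct sbitmap_queue *sbq,
				struct sbq_wait_state *ws,
				struct sbq_wait *sbq_wait, int state)
{
	/* count each waiter once, even if prepare is called repeatedly */
	if (!sbq_wait->sbq) {
		sbq_wait->sbq = sbq;
		atomic_inc(&sbq->ws_active);
	}
	prepare_to_wait_exclusive(&ws->wait, &sbq_wait->wait, state);
}

static void sbq_finish_wait(struct sbitmap_queue *sbq,
			    struct sbq_wait_state *ws,
			    struct sbq_wait *sbq_wait)
{
	finish_wait(&ws->wait, &sbq_wait->wait);
	if (sbq_wait->sbq) {
		atomic_dec(&sbq->ws_active);
		sbq_wait->sbq = NULL;
	}
}

With that in place, sbq_wake_ptr() can bail out up front:

	if (!atomic_read(&sbq->ws_active))
		return NULL;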
Convert the two users of sbitmap with waiting, blk-mq-tag and iSCSI.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
sbitmap maintains a set of words that we use to set and clear bits, with
each bit representing a tag for blk-mq. Even though we spread the bits
out and maintain a hint cache, one particular bit allocated will end up
being cleared in the exact same spot.
This introduces batched clearing of bits. Instead of clearing a given
bit, the same bit is set in a cleared/free mask instead. If we fail
allocating a bit from a given word, then we check the free mask, and
batch move those cleared bits at that time. This trades 64 atomic bitops
for 2 cmpxchg().
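Sketched in kernel C (the ->cleared field name is illustrative), the
clear side sets a bit in a shadow mask and the allocation slow path
folds the mask back into the word:

static void sbitmap_deferred_clear_bit(struct sbitmap *sb, unsigned int bitnr)
{
	unsigned long *addr = &sb->map[SB_NR_TO_INDEX(sb, bitnr)].cleared;

	/* instead of clearing the bit in ->word, mark it in the mask */
	set_bit(SB_NR_TO_BIT(sb, bitnr), addr);
}

/* slow path: move all deferred clears into ->word in two atomics */
static bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
{
	unsigned long mask, val;

	mask = xchg(&sb->map[index].cleared, 0);
	if (!mask)
		return false;

	do {
		val = READ_ONCE(sb->map[index].word);
	} while (cmpxchg(&sb->map[index].word, val, val & ~mask) != val);

	return true;
}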
In a threaded poll test case, half the overhead of getting and clearing
tags is removed with this change. On another poll test case with a
single thread, performance is unchanged.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
New versions of gcc reasonably warn about the odd pattern of
strncpy(p, q, strlen(q));
which really doesn't make sense: the strncpy() ends up being just a slow
and odd way to write memcpy() in this case.
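For illustration, the pattern and its plain-spoken replacement:

	/* strlen(q) means no NUL is copied and nothing gets padded:
	 * this is just a slow, odd way of spelling memcpy() */
	strncpy(p, q, strlen(q));

	/* equivalent and clearer */
	memcpy(p, q, strlen(q));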
Apparently there was a patch for this floating around earlier, but it
got lost.
Acked-again-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull vfs fixes from Al Viro:
"Assorted fixes all over the place.
The iov_iter one is a regression from this cycle (splice from UDP
triggering a WARN_ON()); the rest is older"
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
afs: Use d_instantiate() rather than d_add() and don't d_drop()
afs: Fix missing net error handling
afs: Fix validation/callback interaction
iov_iter: teach csum_and_copy_to_iter() to handle pipe-backed ones
exportfs: do not read dentry after free
exportfs: fix 'passing zero to ERR_PTR()' warning
aio: fix failure to put the file pointer
sysv: return 'err' instead of 0 in __sysv_write_inode
s390 is the only architecture that is using its own bust_spinlocks()
variant, while the other architectures seem to be OK with the common
implementation.
Heiko Carstens [1] said he would prefer s390 to use the common
bust_spinlocks() as well:
I did some code archaeology and this function has been unchanged for ~17
years. When it was introduced it was close to identical to the x86
variant. All other architectures have switched to the common code variant
in the meantime. So if we change this I'd prefer that we switch s390 to
the common code variant as well. Right now I can't see a reason for not
doing that.
This patch removes s390 bust_spinlocks() and drops the weak attribute
from the common bust_spinlocks() version.
[1] lkml.kernel.org/r/20181025062800.GB4037@osiris
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If we aren't forced to do round robin tag allocation, just use the
allocation hint to find the index for the tag word, don't use it for the
offset inside the word. This avoids a potential extra round trip in the
bit looping, and since we're fetching this cacheline, we may as well
check the whole word from the start.
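Schematically (using the sbitmap helper macros; the internal helper's
signature is abridged here):

	index = SB_NR_TO_INDEX(sb, alloc_hint);

	if (round_robin)
		off = SB_NR_TO_BIT(sb, alloc_hint); /* must resume past the hint */
	else
		off = 0;	/* scan the whole fetched word from bit 0 */

	nr = __sbitmap_get_word(&sb->map[index].word, depth, off, !round_robin);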
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Now that these macros are in a header file, we can eventually
clean up the duplicate macros present in the drivers that
utilize the same cordic algorithm implementation.
Also add a CORDIC_ prefix to the non-prefixed macros.
Reviewed-by: Arend van Spriel <arend.vanspriel@broadcom.com>
Signed-off-by: Priit Laes <plaes@plaes.org>
Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Trivial conflict in net/core/filter.c, a locally computed
'sdif' is now an argument to the function.
Signed-off-by: David S. Miller <davem@davemloft.net>
The same combination of csum_partial_copy_nocheck() with csum_block_add()
is used in a bunch of places. Add a helper doing just that and use it.
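A sketch of such a helper (the name is illustrative):

static __wsum csum_and_memcpy(void *to, const void *from, size_t len,
			      __wsum sum, size_t off)
{
	/* copy and checksum in a single pass... */
	__wsum next = csum_partial_copy_nocheck(from, to, len, 0);

	/* ...then fold the piece into the running block sum at 'off' */
	return csum_block_add(sum, next, off);
}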
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Jens Axboe <axboe@kernel.dk>
Acked-by: Tejun Heo <tj@kernel.org>
Merge tag 'xarray-4.20-rc4' of git://git.infradead.org/users/willy/linux-dax
Pull XArray updates from Matthew Wilcox:
"We found some bugs in the DAX conversion to XArray (and one bug which
predated the XArray conversion). There were a couple of bugs in some
of the higher-level functions, which aren't actually being called in
today's kernel, but surfaced as a result of converting existing radix
tree & IDR users over to the XArray.
Some of the other changes to how the higher-level APIs work were also
motivated by converting various users; again, they're not in use in
today's kernel, so changing them has a low probability of introducing
a bug.
Dan can still trigger a bug in the DAX code with hot-offline/online,
and we're working on tracking that down"
* tag 'xarray-4.20-rc4' of git://git.infradead.org/users/willy/linux-dax:
XArray tests: Add missing locking
dax: Avoid losing wakeup in dax_lock_mapping_entry
dax: Fix huge page faults
dax: Fix dax_unlock_mapping_entry for PMD pages
dax: Reinstate RCU protection of inode
dax: Make sure the unlocking entry isn't locked
dax: Remove optimisation from dax_lock_mapping_entry
XArray tests: Correct some 64-bit assumptions
XArray: Correct xa_store_range
XArray: Fix Documentation
XArray: Handle NULL pointers differently for allocation
XArray: Unify xa_store and __xa_store
XArray: Add xa_store_bh() and xa_store_irq()
XArray: Turn xa_erase into an exported function
XArray: Unify xa_cmpxchg and __xa_cmpxchg
XArray: Regularise xa_reserve
nilfs2: Use xa_erase_irq
XArray: Export __xa_foo to non-GPL modules
XArray: Fix xa_for_each with a single element at 0
Merge tag 'char-misc-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc driver fixes from Greg KH:
"Here are some small char/misc driver fixes for issues that have been
reported.
Nothing major, highlights include:
- gnss sync write fixes
- uio oops fix
- nvmem fixes
- other minor fixes and some documentation/maintainers updates
Full details are in the shortlog.
All of these have been in linux-next for a while with no reported
issues"
* tag 'char-misc-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
Documentation/security-bugs: Postpone fix publication in exceptional cases
MAINTAINERS: Add Sasha as a stable branch maintainer
gnss: sirf: fix synchronous write timeout
gnss: serial: fix synchronous write timeout
uio: Fix an Oops on load
test_firmware: fix error return getting clobbered
nvmem: core: fix regression in of_nvmem_cell_get()
misc: atmel-ssc: Fix section annotation on atmel_ssc_get_driver_data
drivers/misc/sgi-gru: fix Spectre v1 vulnerability
Drivers: hv: kvp: Fix the recent regression caused by incorrect clean-up
slimbus: ngd: remove unnecessary check
Now that the generic implementation of ChaCha20 has been refactored to
allow varying the number of rounds, add support for XChaCha12, which is
the XSalsa construction applied to ChaCha12. ChaCha12 is one of the
three ciphers specified by the original ChaCha paper
(https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of
Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than
ChaCha20 but has a lower, but still large, security margin.
We need XChaCha12 support so that it can be used in the Adiantum
encryption mode, which enables disk/file encryption on low-end mobile
devices where AES-XTS is too slow as the CPUs lack AES instructions.
We'd prefer XChaCha20 (the more popular variant), but it's too slow on
some of our target devices, so at least in some cases we do need the
XChaCha12-based version. In more detail, the problem is that Adiantum
is still much slower than we're happy with, and encryption still has a
quite noticeable effect on the feel of low-end devices. Users and
vendors push back hard against encryption that degrades the user
experience, which always risks encryption being disabled entirely. So
we need to choose the fastest option that gives us a solid margin of
security, and here that's XChaCha12. The best known attack on ChaCha
breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's
security margin is still better than AES-256's. Much has been learned
about cryptanalysis of ARX ciphers since Salsa20 was originally designed
in 2005, and it now seems we can be comfortable with a smaller number of
rounds. The eSTREAM project also suggests the 12-round version of
Salsa20 as providing the best balance among the different variants:
combining very good performance with a "comfortable margin of security".
Note that it would be trivial to add vanilla ChaCha12 in addition to
XChaCha12. However, it's unneeded for now and therefore is omitted.
As discussed in the patch that introduced XChaCha20 support, I
considered splitting the code into separate chacha-common, chacha20,
xchacha20, and xchacha12 modules, so that these algorithms could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for adding XChaCha12 support, rename/refactor
chacha20-generic to support different numbers of rounds. The
justification for needing XChaCha12 support is explained in more detail
in the patch "crypto: chacha - add XChaCha12 support".
The only difference between ChaCha{8,12,20} is the number of rounds
itself; all other parts of the algorithm are the same. Therefore,
remove the "20" from all definitions, structures, functions, files, etc.
that will be shared by all ChaCha versions.
Also make ->setkey() store the round count in the chacha_ctx (previously
chacha20_ctx). The generic code then passes the round count through to
chacha_block(). There will be a ->setkey() function for each explicitly
allowed round count; the encrypt/decrypt functions will be the same. I
decided not to do it the opposite way (same ->setkey() function for all
round counts, with different encrypt/decrypt functions) because that
would have required more boilerplate code in architecture-specific
implementations of ChaCha and XChaCha.
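A sketch of the resulting shape (details abridged):

struct chacha_ctx {
	u32 key[8];
	int nrounds;	/* 12 or 20, chosen by the variant's ->setkey() */
};

static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
			 unsigned int keysize, int nrounds)
{
	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
	int i;

	if (keysize != CHACHA_KEY_SIZE)
		return -EINVAL;
	for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
		ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));
	ctx->nrounds = nrounds;
	return 0;
}

/* per-variant wrappers differ only in the round count they record */
static int chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
			   unsigned int keysize)
{
	return chacha_setkey(tfm, key, keysize, 20);
}

The encrypt/decrypt paths then hand ctx->nrounds down to chacha_block().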
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Refactor the unkeyed permutation part of chacha20_block() into its own
function, then add hchacha20_block() which is the ChaCha equivalent of
HSalsa20 and is an intermediate step towards XChaCha20 (see
https://cr.yp.to/snuffle/xsalsa-20081128.pdf). HChaCha20 skips the
final addition of the initial state, and outputs only certain words of
the state. It should not be used for streaming directly.
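Schematically, the split looks like this (the quarter-round is written
as a macro for brevity):

#define QR(a, b, c, d) (			\
	a += b, d = rol32(d ^ a, 16),		\
	c += d, b = rol32(b ^ c, 12),		\
	a += b, d = rol32(d ^ a, 8),		\
	c += d, b = rol32(b ^ c, 7))

static void chacha_permute(u32 *x, int nrounds)
{
	int i;

	for (i = 0; i < nrounds; i += 2) {
		/* column round */
		QR(x[0], x[4], x[8],  x[12]);
		QR(x[1], x[5], x[9],  x[13]);
		QR(x[2], x[6], x[10], x[14]);
		QR(x[3], x[7], x[11], x[15]);
		/* diagonal round */
		QR(x[0], x[5], x[10], x[15]);
		QR(x[1], x[6], x[11], x[12]);
		QR(x[2], x[7], x[8],  x[13]);
		QR(x[3], x[4], x[9],  x[14]);
	}
}

void hchacha20_block(const u32 *state, u32 *out)
{
	u32 x[16];

	memcpy(x, state, 64);
	chacha_permute(x, 20);

	/* no feed-forward addition; emit only words 0-3 and 12-15 */
	memcpy(&out[0], &x[0],  16);
	memcpy(&out[4], &x[12], 16);
}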
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lockdep caught me being sloppy in the test suite and failing to lock
the XArray appropriately.
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Matthew Wilcox <willy@infradead.org>
gcc-8 complains about the prototype for this function:
lib/ubsan.c:432:1: error: ignoring attribute 'noreturn' in declaration of a built-in function '__ubsan_handle_builtin_unreachable' because it conflicts with attribute 'const' [-Werror=attributes]
This is actually a GCC bug: in the GCC internals,
__ubsan_handle_builtin_unreachable() is declared with both 'noreturn' and
'const' attributes instead of only 'noreturn':
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84210
Work around this by removing the noreturn attribute.
[aryabinin: add information about GCC bug in changelog]
Link: http://lkml.kernel.org/r/20181107144516.4587-1-aryabinin@virtuozzo.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Olof Johansson <olof@lixom.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Replace VLAN_TAG_PRESENT with a single bit flag and free up the
VLAN.CFI overload. Now VLAN.CFI is visible in the networking stack
and can be passed around intact.
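For illustration (a sketch, not the exact final layout):

	/* before: presence was encoded in the CFI bit inside vlan_tci */
	present = skb->vlan_tci & VLAN_TAG_PRESENT;

	/* after: presence is its own sk_buff flag bit, so the full TCI,
	 * CFI/DEI included, travels unmodified */
	present = skb->vlan_present;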
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
The test-suite caught these two mistakes when compiled for 32-bit.
I had only been running the test-suite in 64-bit mode.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
The explicit '64' should have been BITS_PER_LONG, but while looking at
this code I realised I meant to use __ffs(), not ilog2().
Signed-off-by: Matthew Wilcox <willy@infradead.org>
lib/test_objagg.c: In function ‘test_delta_action_item’:
./include/linux/printk.h:308:2: warning: ‘errmsg’ may be used uninitialized in this function [-Wmaybe-uninitialized]
Signed-off-by: David S. Miller <davem@davemloft.net>
This lib tracks objects which could be of two types:
1) root object
2) nested object - with a "delta" which differentiates it from
the associated root object
The objects are tracked by a hashtable and are reference-counted. The user
is responsible for implementing callbacks to create/destroy the root entity
related to each root object, and a callback to create/destroy the nested
object delta.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'gnss-4.20-rc3' of https://git.kernel.org/pub/scm/linux/kernel/git/johan/gnss into char-misc-linus
Johan writes:
GNSS fixes for v4.20-rc3
The two serdev drivers were using the wrong timeout argument when
expecting the serdev_device_write() helper to wait indefinitely,
something which could result in incomplete writes when the controller
write buffer was getting full.
Signed-off-by: Johan Hovold <johan@kernel.org>
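The gist of the underlying fix, sketched against the serdev API:

	/* before: a zero timeout returns as soon as the controller
	 * write buffer fills, leaving the write incomplete */
	ret = serdev_device_write(serdev, buf, count, 0);

	/* after: wait indefinitely for buffer space */
	ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT);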
The CPU_NO_EFFICIENT_FFS pre-processor macro is no longer used, with all
architectures toggling the equivalent Kconfig symbol
CONFIG_CPU_NO_EFFICIENT_FFS instead. Remove our check for the unused
macro.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21046/
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
There is no reason to cast a u64 variable to unsigned long long.
Signed-off-by: Bo YU <tsu.yubo@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In the case where req->fw->size > PAGE_SIZE, the error return rc
is being set to EINVAL but is then overwritten with
rc = req->fw->size, because the error exit path via label 'out' is
not being taken. Fix this by adding the jump to the error exit
path 'out'.
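A structural sketch of the fix (the surrounding function is abridged
and the names around it are only for illustration):

	mutex_lock(&test_fw_mutex);
	if (req->fw->size > PAGE_SIZE) {
		rc = -EINVAL;
		goto out;	/* the missing jump: without it, execution
				 * fell through and rc was clobbered below */
	}
	rc = req->fw->size;
out:
	mutex_unlock(&test_fw_mutex);
	return rc;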
Detected by CoverityScan, CID#1453465 ("Unused value")
Fixes: c92316bf8e94 ("test_firmware: add batched firmware tests")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The lib/raid6/test fails to build the neon objects
on arm64 because the correct machine type is 'aarch64'.
Once this is correctly enabled, the neon recovery objects
need to be added to the build.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For allocating XArrays, it makes sense to distinguish between erasing an
entry and storing NULL. Storing NULL keeps the index allocated with a
NULL pointer associated with it while xa_erase() frees the index. Some
existing IDR users rely on this ability.
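For illustration, in an allocating XArray:

	/* keeps the index allocated, now with NULL associated */
	xa_store(&xa, index, NULL, GFP_KERNEL);

	/* frees the index; xa_alloc() may hand it out again */
	xa_erase(&xa, index);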
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Saves around 115 bytes on a tinyconfig build and reduces the amount
of code duplication in the XArray implementation.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Make xa_erase() take the spinlock and then call __xa_erase(), but make
it out of line since it's such a common function.
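The resulting shape, roughly:

void *xa_erase(struct xarray *xa, unsigned long index)
{
	void *entry;

	/* take the spinlock on behalf of the caller */
	xa_lock(xa);
	entry = __xa_erase(xa, index);
	xa_unlock(xa);

	return entry;
}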
Signed-off-by: Matthew Wilcox <willy@infradead.org>
xa_cmpxchg() was one of the largest functions in the xarray
implementation. By turning it into a wrapper and having the callers
take the lock (like several other functions), we save 160 bytes on a
tinyconfig build and reduce the duplication in xarray.c.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
The xa_reserve() function was a little unusual in that it attempted to
be callable for all kinds of locking scenarios. Make it look like the
other APIs with __xa_reserve, xa_reserve_bh and xa_reserve_irq variants.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
The following sequence of calls would result in an infinite loop in
xa_find_after():
xa_store(xa, 0, x, GFP_KERNEL);
index = 0;
xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { }
xa_find_after() was confusing the situation where we found no entry in
the tree with finding a multiorder entry, so it would look for the
successor entry forever. Just check for this case explicitly. Includes
a few new checks in the test suite to be sure this doesn't reappear.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Pull AFS updates from Al Viro:
"AFS series, with some iov_iter bits included"
* 'work.afs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (26 commits)
missing bits of "iov_iter: Separate type from direction and use accessor functions"
afs: Probe multiple fileservers simultaneously
afs: Fix callback handling
afs: Eliminate the address pointer from the address list cursor
afs: Allow dumping of server cursor on operation failure
afs: Implement YFS support in the fs client
afs: Expand data structure fields to support YFS
afs: Get the target vnode in afs_rmdir() and get a callback on it
afs: Calc callback expiry in op reply delivery
afs: Fix FS.FetchStatus delivery from updating wrong vnode
afs: Implement the YFS cache manager service
afs: Remove callback details from afs_callback_break struct
afs: Commit the status on a new file/dir/symlink
afs: Increase to 64-bit volume ID and 96-bit vnode ID for YFS
afs: Don't invoke the server to read data beyond EOF
afs: Add a couple of tracepoints to log I/O errors
afs: Handle EIO from delivery function
afs: Fix TTL on VL server and address lists
afs: Implement VL server rotation
afs: Improve FS server rotation error handling
...
This tag contains the follow-on patches I'd like to target for the 4.20
merge window. I'm being somewhat conservative here, as while there are
a few patches on the mailing list that were posted early in the merge
window I'd like to let those bake for another round -- this was a fairly
big release as far as RISC-V is concerned, and we need to walk before
we can run.
As far as the patches that made it go:
* A patch to ignore offline CPUs when calculating AT_HWCAP. This should
fix GDB on the HiFive unleashed, which has an embedded core for hart
0 which is exposed to Linux as an offline CPU.
* A move of EM_RISCV to elf-em.h, which is where it should have been to
begin with.
* I've also removed the 64-bit divide routines. I know I'm not really
playing by my own rules here because I posted the patches this
morning, but since they shouldn't be in the kernel I think it's better
to err on the side of going too fast here.
I don't anticipate any more patch sets for the merge window.
Changes since v1:
* Use a consistent base to merge from so the history isn't a mess.
Merge tag 'riscv-for-linus-4.20-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux
Pull more RISC-V updates from Palmer Dabbelt:
"This contains the follow-on patches I'd like to target for the 4.20
merge window. I'm being somewhat conservative here, as while there are
a few patches on the mailing list that were posted early in the merge
window I'd like to let those bake for another round -- this was a
fairly big release as far as RISC-V is concerned, and we need to walk
before we can run.
As far as the patches that made it go:
- A patch to ignore offline CPUs when calculating AT_HWCAP. This
should fix GDB on the HiFive unleashed, which has an embedded core
for hart 0 which is exposed to Linux as an offline CPU.
- A move of EM_RISCV to elf-em.h, which is where it should have been
to begin with.
- I've also removed the 64-bit divide routines. I know I'm not really
playing by my own rules here because I posted the patches this
morning, but since they shouldn't be in the kernel I think it's
better to err on the side of going too fast here.
I don't anticipate any more patch sets for the merge window"
* tag 'riscv-for-linus-4.20-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux:
Move EM_RISCV into elf-em.h
RISC-V: properly determine hardware caps
Revert "lib: Add umoddi3 and udivmoddi4 of GCC library routines"
Revert "RISC-V: Select GENERIC_LIB_UMODDI3 on RV32"
We don't want 64-bit divide in the kernel.
This reverts commit 6315730e9eab7de5fa9864bb13a352713f48aef1.
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
When memblock allocation APIs are called with align = 0, the alignment
is implicitly set to SMP_CACHE_BYTES.
Implicit alignment is done deep in the memblock allocator and it can
come as a surprise. Not that such an alignment would be wrong even
when used incorrectly, but it is better to be explicit for the sake of
clarity and the principle of least surprise.
Replace all such uses of memblock APIs with the 'align' parameter
explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment
in the memblock internal allocation functions.
For the case when memblock APIs are used via helper functions, e.g. like
iommu_arena_new_node() in Alpha, the helper functions were detected with
Coccinelle's help and then manually examined and updated where
appropriate.
The direct memblock APIs users were updated using the semantic patch below:
@@
expression size, min_addr, max_addr, nid;
@@
(
|
- memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr,
nid)
|
- memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr,
nid)
|
- memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc(size, 0)
+ memblock_alloc(size, SMP_CACHE_BYTES)
|
- memblock_alloc_raw(size, 0)
+ memblock_alloc_raw(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from(size, 0, min_addr)
+ memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_nopanic(size, 0)
+ memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low(size, 0)
+ memblock_alloc_low(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low_nopanic(size, 0)
+ memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from_nopanic(size, 0, min_addr)
+ memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_node(size, 0, nid)
+ memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
)
[mhocko@suse.com: changelog update]
[akpm@linux-foundation.org: coding-style fixes]
[rppt@linux.ibm.com: fix missed uses of implicit alignment]
Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx
Link: http://lkml.kernel.org/r/1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Paul Burton <paul.burton@mips.com> [MIPS]
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Move remaining definitions and declarations from include/linux/bootmem.h
into include/linux/memblock.h and remove the redundant header.
The includes were replaced with the semantic patch below and then
semi-automated removal of duplicated '#include <linux/memblock.h>'.
@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>
[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
All architectures use memblock for early memory management. There is no need
for the CONFIG_HAVE_MEMBLOCK configuration option.
[rppt@linux.vnet.ibm.com: of/fdt: fixup #ifdefs]
Link: http://lkml.kernel.org/r/20180919103457.GA20545@rapoport-lnx
[rppt@linux.vnet.ibm.com: csky: fixups after bootmem removal]
Link: http://lkml.kernel.org/r/20180926112744.GC4628@rapoport-lnx
[rppt@linux.vnet.ibm.com: remove stale #else and the code it protects]
Link: http://lkml.kernel.org/r/1538067825-24835-1-git-send-email-rppt@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1536927045-23536-4-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Update the LZ4 compression module based on LZ4 v1.8.3 so that the
erofs file system can use the newest LZ4_decompress_safe_partial(),
which can now decode exactly the number of bytes requested [1], in
place of the open-coded hack in the erofs file system itself.
Currently, apart from the erofs file system, no other users use
LZ4_decompress_safe_partial(), so there is no concern about the interface.
In addition, LZ4 v1.8.x boosts up decompression speed compared to the
current code which is based on LZ4 v1.7.3, mainly due to shortcut
optimization for the specific common LZ4-sequences [2].
lzbench test data (tested on a kirin710: 8 cores, 4 big cores
at 2189 MHz, 2GB DDR RAM at 1622 MHz, with the enwik8 test data [3]):
Compressor name Compress. Decompress. Compr. size Ratio Filename
memcpy 5004 MB/s 4924 MB/s 100000000 100.00 enwik8
lz4hc 1.7.3 -9 12 MB/s 653 MB/s 42203253 42.20 enwik8
lz4hc 1.8.0 -9 12 MB/s 908 MB/s 42203096 42.20 enwik8
lz4hc 1.8.3 -9 11 MB/s 965 MB/s 42203094 42.20 enwik8
[1] https://github.com/lz4/lz4/issues/566
    08d347b5b2
[2] v1.8.1 perf: slightly faster compression and decompression speed
    a31b7058cb
    v1.8.2 perf: slightly faster HC compression and decompression speed
    45f8603aae
    1a191b3f8d
[3] http://mattmahoney.net/dc/textdata.html
    http://mattmahoney.net/dc/enwik8.zip
Link: http://lkml.kernel.org/r/1537181207-21932-1-git-send-email-gaoxiang25@huawei.com
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Tested-by: Guo Xuenan <guoxuenan@huawei.com>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Fang Wei <fangwei1@huawei.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: <weidu.du@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Implicit casts to the same type are done by the language if necessary.
Link: http://lkml.kernel.org/r/20181014223934.GA18107@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch removes all of the following fall-through warnings by
adding /* fall through */ markers.
Note that we cannot add "__attribute__ ((fallthrough));" because it is GCC 7 only:
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:384:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:391:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:393:16: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:430:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:556:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:595:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:602:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:627:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:646:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:696:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
It is easy to see that those fall-throughs are intended, since in each case state->mode is set to the value of the case just below.
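The pattern in question, abridged from inflate()'s state machine:

	switch (state->mode) {
	case HEAD:
		/* ... header consumed ... */
		state->mode = FLAGS;
		/* fall through */
	case FLAGS:
		/* ... flags consumed ... */
		state->mode = TIME;
		/* fall through */
	case TIME:
		/* ... */
		break;
	}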
Link: http://lkml.kernel.org/r/1536215920-19955-1-git-send-email-clabbe@baylibre.com
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>