Refactor the encoder for FATTR4_MAXNAME into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
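For illustration, the refactored helper ends up roughly like the sketch
below. It assumes the nfsd4_encode_uint32_t() utility and the
struct nfsd4_fattr_args bundle introduced by other patches in this
series; the statfs member access is an assumption, not the exact code:

static __be32 nfsd4_encode_fattr4_maxname(struct xdr_stream *xdr,
                                          const struct nfsd4_fattr_args *args)
{
    /* MAXNAME is simply the filesystem's maximum filename length */
    return nfsd4_encode_uint32_t(xdr, args->statfs.f_namelen);
}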
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_MAXLINK into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_MAXFILESIZE into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FS_LOCATIONS into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FILES_TOTAL into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FILES_FREE into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FILES_AVAIL into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FILEID into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FILEHANDLE into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
We can de-duplicate the other filehandle encoder (in GETFH) using
our new helper.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_ACL into a helper. In a subsequent
patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the ACE encoding helper so that it can eventually be reused
for encoding OPEN results that contain delegation ACEs.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_ACLSUPPORT into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_RDATTR_ERROR into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_LEASE_TIME into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FSID into a helper. In a subsequent
patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_SIZE into a helper. In a subsequent
patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_CHANGE into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
The code is restructured a bit to use the modern xdr_stream flow,
and the encoded cinfo value is made const so that callers can pass a
const cinfo to the encoder.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_FH_EXPIRE_TYPE into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_TYPE into a helper. In a subsequent
patch, this helper will be called from a bitmask loop.
In addition, restructure the code so that byte-swapping is done on
constant values rather than at run time. Run-time swapping can be
costly on some platforms, and "type" is a frequently-requested
attribute.
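A minimal sketch of the pattern: cpu_to_be32() applied to a constant is
folded at compile time, so no swap happens at run time. The NF4* values
come from the NFSv4 protocol; the args layout is an assumption:

static __be32 nfsd4_encode_fattr4_type(struct xdr_stream *xdr,
                                       const struct nfsd4_fattr_args *args)
{
    __be32 *p = xdr_reserve_space(xdr, XDR_UNIT);

    if (!p)
        return nfserr_resource;
    switch (args->stat.mode & S_IFMT) {
    case S_IFDIR:
        *p = cpu_to_be32(NF4DIR);    /* swapped at compile time */
        break;
    case S_IFREG:
        *p = cpu_to_be32(NF4REG);
        break;
    case S_IFLNK:
        *p = cpu_to_be32(NF4LNK);
        break;
    default:
        return nfserr_serverfault;
    }
    return nfs_ok;
}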
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Refactor the encoder for FATTR4_SUPPORTED_ATTRS into a helper. In a
subsequent patch, this helper will be called from a bitmask loop.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Add an encoding helper that encodes a single boolean "false" value.
Attributes that always return "false" can use this helper.
In a subsequent patch, this helper will be called from a bitmask
loop, so it is given a standardized synopsis.
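A minimal sketch of what this looks like, assuming a small
nfsd4_encode_bool() utility of the kind this series adds to xdr4.h (the
"__true" variant in the neighbouring patch is analogous):

static __be32 nfsd4_encode_bool(struct xdr_stream *xdr, bool val)
{
    __be32 *p = xdr_reserve_space(xdr, XDR_UNIT);

    if (!p)
        return nfserr_resource;
    *p = val ? xdr_one : xdr_zero;
    return nfs_ok;
}

/* Standardized synopsis so a bitmask loop can call it like any
 * other attribute encoder. */
static __be32 nfsd4_encode_fattr4__false(struct xdr_stream *xdr,
                                         const struct nfsd4_fattr_args *args)
{
    return nfsd4_encode_bool(xdr, false);
}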
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Add an encoding helper that encodes a single boolean "true" value.
Attributes that always return "true" can use this helper.
In a subsequent patch, this helper will be called from a bitmask
loop, so it is given a standardized synopsis.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
I'm about to split nfsd4_encode_fattr() into a number of smaller
functions. Instead of passing a large number of arguments to each of
the smaller functions, create a struct that gathers the common
argument variables into a single object that is convenient to pass
around.
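The gathered-argument struct might look roughly like the following; the
exact member list here is an illustrative assumption:

struct nfsd4_fattr_args {
    struct svc_rqst     *rqstp;
    struct svc_fh       *fhp;
    struct svc_export   *exp;
    struct dentry       *dentry;
    struct kstat        stat;
    struct kstatfs      statfs;
    struct nfs4_acl     *acl;
    u32                 rdattr_err;
};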
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
De-duplicate the encoding of bitmap4 results in
nfsd4_encode_setattr().
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
For alignment with the specification, the name of NFSD's encoder
function should match the name of the XDR type.
I've also replaced a few "naked integers" with symbolic constants
that better reflect the usage of these values.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
The generic XDR encoders return a length or a negative errno. NFSv4
encoders want to know simply whether the encode ran out of stream
buffer space. The return values for server-side encoding are either
nfs_ok or nfserr_resource.
So far I've found that trying to use the generic XDR encoder utilities
to encode the simple data types in the NFSv4 operation encoders adds a
lot of duplicate code.
Add a set of NFSv4-specific utilities that handle the basic XDR data
types. These are added in xdr4.h so they might eventually be used by
the callback server and pNFS driver encoders too.
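For instance, a helper for a 32-bit unsigned integer might look like the
sketch below; the real set presumably also covers 64-bit integers,
booleans, and opaque data. Returning nfs_ok or nfserr_resource keeps the
operation encoders free of length bookkeeping:

static __be32 nfsd4_encode_uint32_t(struct xdr_stream *xdr, u32 val)
{
    __be32 *p = xdr_reserve_space(xdr, XDR_UNIT);

    if (!p)
        return nfserr_resource;
    *p = cpu_to_be32(val);
    return nfs_ok;
}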
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
There is no need to take down the whole system for these assertions.
I'd rather not attempt a heroic save here, as some bug has occurred
that has left the transport data structures in an unknown state.
Just warn and then leak the left-over resources.
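The pattern is roughly the following sketch; the particular lists
checked here are an assumption for illustration:

    if (WARN_ONCE(!list_empty(&serv->sv_permsocks) ||
                  !list_empty(&serv->sv_tempsocks),
                  "SVC: transports still queued at shutdown\n"))
        return;     /* warn and leak rather than BUG */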
Acked-by: Christian Brauner <brauner@kernel.org>
Reviewed-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Introduce rpc_status netlink support for NFSD in order to dump
debugging information about pending RPC requests from userspace.
Closes: https://bugzilla.linux-nfs.org/show_bug.cgi?id=366
Tested-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Generate stubs and uAPI for nfsd netlink protocol. For the moment,
the new protocol has one operation: rpc_status.
The generated header and source files are created by running:
tools/net/ynl/ynl-regen.sh
Tested-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Acked-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
When the server receives a GETATTR request for a file with a write
delegation in effect, and the requested attributes include the change
info and size attributes, the request is handled as follows:
The server sends CB_GETATTR to the client to obtain the latest change
info and file size. If these values match the server's cached values,
the GETATTR proceeds as normal.
If either the change info or the file size differs from the server's
cached values, or the file was already marked as modified, then:
. update the file's time_modify and time_metadata attributes with the
current time
. encode the GETATTR as normal, except that the file size is encoded
with the value returned from CB_GETATTR
. mark the file as modified
If the CB_GETATTR fails for any reason, the delegation is recalled
and NFS4ERR_DELAY is returned for the GETATTR.
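In rough pseudocode, the decision looks like the sketch below; the
helper names and the way cached state is passed around are illustrative
assumptions, not the actual patch:

int send_cb_getattr(u64 *change, u64 *size);    /* hypothetical */
void recall_delegation(void);                   /* hypothetical */

static __be32 handle_getattr_with_write_deleg(struct kstat *stat,
                                              u64 cached_change,
                                              bool *file_modified)
{
    u64 cb_change, cb_size;

    /* Ask the client holding the write delegation for its latest
     * change info and size. */
    if (send_cb_getattr(&cb_change, &cb_size)) {
        recall_delegation();
        return nfserr_jukebox;          /* NFS4ERR_DELAY */
    }

    if (cb_change == cached_change && cb_size == stat->size &&
        !*file_modified)
        return nfs_ok;                  /* proceed as normal */

    /* The client has modified the file: report current times, use
     * the client's size, and remember that the file is dirty. */
    ktime_get_real_ts64(&stat->mtime);  /* time_modify */
    stat->ctime = stat->mtime;          /* time_metadata */
    stat->size = cb_size;
    *file_modified = true;
    return nfs_ok;
}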
Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Includes:
. a CB_GETATTR procedure for nfs4_cb_procedures[]
. XDR encoding and decoding functions for the CB_GETATTR request/reply
. addition of nfs4_cb_fattr to nfs4_delegation, used when sending
CB_GETATTR and for storing the file attributes from the client's reply
Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
This removes the need to store and update back-links in the list.
It also removes the need for the _bh version of spin_lock().
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
sp_lock is now only used to protect sp_all_threads. This isn't needed
as sp_all_threads is only manipulated through svc_set_num_threads(),
which is already serialized. Read access only requires rcu_read_lock().
So no more locking is needed.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Using an atomic_t avoids the need to take a spinlock (which can soon be
removed).
Choosing a thread to kill needs to be done carefully, as we cannot set
the "die now" bit atomically with the test on the count. Instead we
temporarily
increase the count.
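Sketched roughly below; the flag and field names follow this series but
the logic is simplified for illustration:

static bool pool_mark_victim(struct svc_pool *pool)
{
    /* Temporarily bump the count so it cannot reach zero between
     * the test and setting the "die now" bit. */
    if (!atomic_inc_not_zero(&pool->sp_nrthreads))
        return false;                   /* no threads left to kill */

    set_bit(SP_NEED_VICTIM, &pool->sp_flags);
    if (atomic_dec_and_test(&pool->sp_nrthreads)) {
        /* Raced with the last thread exiting: undo. */
        clear_bit(SP_NEED_VICTIM, &pool->sp_flags);
        return false;
    }
    return true;
}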
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
lwq avoids using back pointers in lists, and uses less locking.
This introduces a new spinlock, but the other one will be removed in a
future patch.
For svc_clean_up_xprts(), we now dequeue the entire queue, walk it to
remove and process the xprts that need cleaning up, then re-enqueue the
remaining queue.
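The new svc_clean_up_xprts() flow is roughly the sketch below; the field
names (sp_xprts, xpt_ready) and the clean-up helpers are assumptions for
illustration:

    struct llist_node *q = lwq_dequeue_all(&pool->sp_xprts);

    while (q) {
        struct svc_xprt *xprt =
            container_of(q, struct svc_xprt, xpt_ready);

        q = q->next;
        if (xprt_needs_cleanup(xprt))           /* hypothetical */
            svc_xprt_close(xprt);               /* hypothetical */
        else
            lwq_enqueue(&xprt->xpt_ready, &pool->sp_xprts);
    }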
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Currently if several items of work become available in quick succession,
that number of threads (if available) will be woken. By the time some
of them wake up another thread that was already cache-warm might have
come along and completed the work. Anecdotal evidence suggests as many
as 15% of wakes find nothing to do once they get to the point of
looking.
This patch changes svc_pool_wake_idle_thread() to wake the first thread
on the queue but NOT remove it. Subsequent calls will wake the same
thread. Once that thread starts it will dequeue itself and after
dequeueing some work to do, it will wake the next thread if there is more
work ready. This results in a more orderly increase in the number of
busy threads.
As a bonus, this allows us to reduce locking around the idle queue.
svc_pool_wake_idle_thread() no longer needs to take a lock (beyond
rcu_read_lock()) as it doesn't manipulate the queue, it just looks at
the first item.
The thread itself can avoid locking by using the new
llist_del_first_this() interface. This will safely remove the thread
itself if it is the head. If it isn't the head, it will do nothing.
If multiple threads call this concurrently only one will succeed. The
others will do nothing, so no corruption can result.
If a thread wakes up and finds that it cannot dequeue itself that means
either
- that it wasn't woken because it was the head of the queue. Maybe the
freezer woke it. In that case it can go back to sleep (after trying
to freeze of course).
- some other thread found there was nothing to do very recently, and
placed itself on the head of the queue in front of this thread.
It must check again after placing itself there, so it can be deemed to
be responsible for any pending work, and this thread can go back to
sleep until woken.
No code ever tests for busy threads any more. Only each thread itself
cares if it is busy. So svc_thread_busy() is no longer needed.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Functions which directly manipulate a 'struct rqst', such as
svc_rqst_alloc() or svc_rqst_release_pages(), can reasonably
have "rqst" in there name.
However, functions that act on the running thread, such as
XX_should_sleep() or XX_wait_for_work(), read more naturally with a
"svc_thread_" prefix.
So make those changes.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
lwq is a FIFO singly-linked queue that only requires a spinlock
for dequeueing, which happens in process context. Enqueueing is atomic
with no spinlock and can happen in any context.
This is particularly useful when work items are queued from BH or IRQ
context, and when they are handled one at a time by dedicated threads.
Avoiding any locking when enqueueing means there is no need to disable
BH or interrupts, which is generally best avoided (particularly when
there are any RT tasks on the machine).
This solution is superior to using "list_head" links because we need
half as many pointers in the data structures, and because list_head
lists would need locking to add items to the queue.
This solution is superior to a bespoke solution as all locking and
container_of casting is integrated, so the interface is simple.
Despite the similar name, this solution meets a distinctly different
need to kfifo. kfifo provides a fixed sized circular buffer to which
data can be added at one end and removed at the other, and does not
provide any locking. lwq does not have any size limit and works with
data structures (objects?) rather than data (bytes).
A unit test for basic functionality, which runs at boot time, is included.
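A minimal usage sketch of the interface; the work-item type and handler
here are invented for illustration:

#include <linux/lwq.h>

struct work_item {
    struct llist_node   node;
    int                 payload;
};

void handle(struct work_item *item);    /* hypothetical consumer */

static struct lwq work_queue;

/* Producer: may run in BH or IRQ context, takes no lock. */
static void submit(struct work_item *item)
{
    lwq_enqueue(&item->node, &work_queue);
}

/* Consumer: runs in process context; dequeue takes a brief spinlock. */
static void drain(void)
{
    struct work_item *item;

    while ((item = lwq_dequeue(&work_queue, struct work_item, node)))
        handle(item);
}

static int __init demo_init(void)
{
    lwq_init(&work_queue);
    return 0;
}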
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: David Gow <davidgow@google.com>
Cc: linux-kernel@vger.kernel.org
Message-Id: <20230911111333.4d1a872330e924a00acb905b@linux-foundation.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
llist_del_first_this() deletes a specific entry from an llist, providing
it is at the head of the list. Multiple threads can call this
concurrently providing they each offer a different entry.
This can be used for a set of worker threads which are on the llist when
they are idle. The head can always be woken, and when it is woken it
can remove itself, and possibly wake the next if there is an excess of
work to do.
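A usage sketch for the idle-thread case described above; the
sp_idle_threads and rq_idle names follow this series, and the helper is
hypothetical:

    if (llist_del_first_this(&pool->sp_idle_threads, &rqstp->rq_idle)) {
        /* We were the head: we now own any pending work. */
        do_pending_work(rqstp);         /* hypothetical */
    } else {
        /* Not the head, or someone else already removed us:
         * safe to go back to sleep until woken. */
        schedule();
    }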
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
With an llist we don't need to take a lock to add a thread to the list,
though we still need a lock to remove it. That will go in the next
patch.
Unlike double-linked lists, a thread cannot reliably remove itself from
the list. Only the first thread can be removed, and that can change
asynchronously. So some care is needed.
We already check if there is pending work to do, so we are unlikely to
add ourselves to the idle list and then want to remove ourselves again.
If we DO find something needs to be done after adding ourselves to the
list, we simply wake up the first thread on the list. If that was us,
we successfully removed ourselves and can continue. If it was some
other thread, they will do the work that needs to be done. We can
safely sleep until woken.
We also remove the test on freezing() from rqst_should_sleep(). Instead
we set TASK_FREEZABLE before scheduling. This makes it safe to
schedule() when a freeze is pending. As we now loop waiting to be
removed from the idle queue, this is a cleaner way to handle freezing.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
With list.h lists, it is easy to test if a node is on a list, providing
it was initialised and that it is removed with list_del_init().
This patch provides similar functionality for llist.h lists.
init_llist_node()
marks a node as being not-on-any-list by setting the ->next pointer to
the node itself.
llist_on_list()
tests if the node is on any list.
llist_del_first_init()
removes the first element from an llist and marks it as being off-list.
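A small sketch of how the three helpers fit together; the idle-list use
is illustrative only:

static LLIST_HEAD(idle_list);

static void llist_helpers_demo(void)
{
    struct llist_node me;

    init_llist_node(&me);       /* ->next == &me: not on any list */
    llist_add(&me, &idle_list); /* llist_on_list(&me) is now true */

    if (llist_on_list(&me)) {
        /* Remove the head and mark it off-list again. */
        struct llist_node *first = llist_del_first_init(&idle_list);

        WARN_ON(first && llist_on_list(first));
    }
}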
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
We can tell if a pool is congested by checking if the idle list is
empty. We don't need a separate flag.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Rather than searching a list of threads to find an idle one, having a
list of idle threads allows an idle thread to be found immediately.
This adds some spin_lock calls, which is not ideal, but as the hold time
is tiny it is still faster than searching a list. A future patch will
remove them using llist.h. This involves some subtlety and so is left
to a separate patch.
This removes the need for the RQ_BUSY flag. The rqst is "busy"
precisely when it is not on the "idle" list.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
svc threads are currently stopped using kthread_stop(). This requires
identifying a specific thread. However we don't care which thread
stops, just as long as one does.
So instead, set a flag in the svc_pool to say that a thread needs to
die, and have each thread check this flag instead of calling
kthread_should_stop(). The first thread to find and clear this flag
then moves towards exiting.
This removes an explicit dependency on sp_all_threads which will make a
future patch simpler.
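The per-thread check ends up roughly like this sketch; the flag names
follow this series:

static bool svc_thread_should_stop(struct svc_rqst *rqstp)
{
    /* The first thread to see the flag clears it and becomes the
     * victim. */
    if (test_and_clear_bit(SP_NEED_VICTIM, &rqstp->rq_pool->sp_flags))
        set_bit(RQ_VICTIM, &rqstp->rq_flags);

    return test_bit(RQ_VICTIM, &rqstp->rq_flags);
}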
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Both nfsd and nfsv4-callback take a temporary reference to the svc_serv
while calling svc_set_num_threads() to stop the last thread. lockd does
not.
This extra reference prevents the svc_serv from being freed when the
last thread drops its reference count. This is not currently needed
for lockd as the svc_serv is not accessed after the last thread is told
to exit.
However a future patch will require svc_exit_thread() to access the
svc_serv after the svc_put() so it will need the code that calls
svc_set_num_threads() to keep a reference and keep the svc_serv active.
So copy the pattern from nfsd and nfsv4-cb to lockd, and take a
reference around svc_set_num_threads(.., 0)
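The pattern copied into lockd is roughly:

    svc_get(serv);              /* pin the svc_serv across thread exit */
    svc_set_num_threads(serv, NULL, 0);
    svc_put(serv);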
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Tested-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Using svc_recv() for (NFSv4.1) back-channel handling means we have just
one mechanism for waking threads.
Also change kthread_freezable_should_stop() in nfs4_callback_svc() to
kthread_should_stop() as used elsewhere.
kthread_freezable_should_stop() effectively adds a try_to_freeze() call,
and svc_recv() already contains that at an appropriate place.
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
The test robot complained that, in some build configurations, the
@error variable in bc_svc_process's only caller is set but never
used. This happens because dprintk() is the only consumer of that
value.
- Remove the dprintk() call sites in favor of the svc_process
tracepoint
- The @error variable and the return value of bc_svc_process() are
now unused, so get rid of them.
- The @serv parameter is set to rqstp->rq_server by the only caller,
and bc_svc_process() then uses it only to set rqstp->rq_server. It
can be removed.
- Rename bc_svc_process() according to the convention that
globally-visible RPC server functions have names that begin with
"svc_"; and because it is globally-visible, give it a proper
kdoc comment.
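The resulting function presumably ends up with a shape like the sketch
below; the "svc_process_bc" name and kdoc wording are an assumption
based on the convention described above:

/**
 * svc_process_bc - process a reply to a backchannel request
 * @req: RPC request to process
 * @rqstp: server thread in which to run the request
 */
void svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);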
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202308121314.HA8Rq2XG-lkp@intel.com/
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
svc_get_next_xprt() does a lot more than just get an xprt. It also
decides if it needs to sleep, depending not only on the availability of
xprts but also on the need to exit or handle external work.
So rename it to svc_rqst_wait_for_work() and only do the testing and
waiting. Move all the waiting-related code out of svc_recv() into the
new svc_rqst_wait_for_work().
Move the dequeueing code out of svc_get_next_xprt() into svc_recv().
Previously svc_xprt_dequeue() would be called twice, once before waiting
and possibly once after. Now instead rqst_should_sleep() is called
twice: once to decide if waiting is needed, and once again after
setting the task state, to see if we might have missed a wakeup.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>