Use sendmsg() with MSG_SPLICE_PAGES rather than sendpage. This allows
multiple pages and multipage folios to be passed through.
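For illustration, a minimal sketch of the sendmsg()/MSG_SPLICE_PAGES pattern (the function and variable names here are made up for the example, not the driver's exact code):

    #include <linux/bvec.h>
    #include <linux/net.h>
    #include <linux/uio.h>

    static int example_splice_page(struct socket *sock, struct page *page,
                                   unsigned int offset, unsigned int len)
    {
            struct bio_vec bvec;
            struct msghdr msg = {
                    .msg_flags = MSG_SPLICE_PAGES | MSG_DONTWAIT,
            };

            /* Hand the page to the socket by reference instead of sendpage(). */
            bvec_set_page(&bvec, page, len, offset);
            iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
            return sock_sendmsg(sock, &msg);
    }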
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Mike Christie <michael.christie@oracle.com>
cc: Lee Duncan <lduncan@suse.com>
cc: Chris Leech <cleech@redhat.com>
cc: "James E.J. Bottomley" <jejb@linux.ibm.com>
cc: "Martin K. Petersen" <martin.petersen@oracle.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: open-iscsi@googlegroups.com
Link: https://lore.kernel.org/r/20230623225513.2732256-12-dhowells@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Fix a NULL pointer crash that occurs when we are freeing the socket at the
same time we access it via sysfs.
The problem is that:
1. iscsi_sw_tcp_conn_get_param() and iscsi_sw_tcp_host_get_param() take
the frwd_lock and do sock_hold() then drop the frwd_lock. sock_hold()
does a get on the "struct sock".
2. iscsi_sw_tcp_release_conn() does sockfd_put() which does the last put
on the "struct socket" and that does __sock_release() which sets the
sock->ops to NULL.
3. iscsi_sw_tcp_conn_get_param() and iscsi_sw_tcp_host_get_param() then
call kernel_getpeername() which accesses the NULL sock->ops.
Above we do a get on the "struct sock", but we needed a get on the "struct
socket". Originally, we just held the frwd_lock the entire time but in
commit bcf3a2953d ("scsi: iscsi: iscsi_tcp: Avoid holding spinlock while
calling getpeername()") we switched to refcount based because the network
layer changed and started taking a mutex in that path, so we could no
longer hold the frwd_lock.
Instead of trying to maintain multiple refcounts, this just has us use a
mutex for accessing the socket in the interface code paths.
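A minimal sketch of the approach, with made-up field and function names (the exact naming in the driver may differ):

    struct example_tcp_sw_conn {
            struct mutex    sock_lock;      /* serializes sysfs paths vs. release */
            struct socket   *sock;
    };

    static int example_get_peername(struct example_tcp_sw_conn *tcp_sw_conn,
                                    struct sockaddr_storage *addr)
    {
            int rc = -ENOTCONN;

            mutex_lock(&tcp_sw_conn->sock_lock);
            if (tcp_sw_conn->sock)
                    rc = kernel_getpeername(tcp_sw_conn->sock,
                                            (struct sockaddr *)addr);
            mutex_unlock(&tcp_sw_conn->sock_lock);
            return rc;
    }

The release path clears ->sock under the same mutex before calling sockfd_put(), so the sysfs paths either see a valid socket or return an error.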
Link: https://lore.kernel.org/r/20220907221700.10302-1-michael.christie@oracle.com
Fixes: bcf3a2953d ("scsi: iscsi: iscsi_tcp: Avoid holding spinlock while calling getpeername()")
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
We don't always want to run the recv path from the network softirq, because
when multiple sessions share the same CPUs, some sessions can eat up the
NAPI softirq budget and affect other sessions or users.
Allow us to queue the recv handling to the iscsi workqueue so we can have
the scheduler/wq code try to balance the work and CPU use across all
sessions' worker threads.
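Roughly, the idea looks like this (names are made up for the sketch; the real driver makes the workqueue-based recv handling optional):

    struct example_tcp_conn {
            struct socket           *sock;
            struct work_struct      recvwork;
    };

    static struct workqueue_struct *example_iscsi_wq;   /* assumed per-host wq */

    static void example_data_ready(struct sock *sk)
    {
            struct example_tcp_conn *tcp_conn = sk->sk_user_data;

            /* Defer the tcp_read_sock() processing out of the NAPI softirq. */
            queue_work(example_iscsi_wq, &tcp_conn->recvwork);
    }

    static void example_recvwork_fn(struct work_struct *work)
    {
            struct example_tcp_conn *tcp_conn =
                    container_of(work, struct example_tcp_conn, recvwork);

            /* Read and process PDUs from tcp_conn->sock in process context. */
    }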
Note: It wasn't the original intent of the change but a nice side effect is
that for some workloads/configs we get a nice performance boost. For a
simple read heavy test:
fio --direct=1 --filename=/dev/dm-0 --rw=randread --bs=256K
--ioengine=libaio --iodepth=128 --numjobs=4
where the iscsi threads, fio jobs, and rps_cpus share CPUs, we see a 32%
throughput boost. We also see increases for small I/O IOPS tests, but they
are not as large.
Link: https://lore.kernel.org/r/20220616224557.115234-4-michael.christie@oracle.com
Reviewed-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Based on 3 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version [author] [kishon] [vijay] [abraham]
[i] [kishon]@[ti] [com] this program is distributed in the hope that
it will be useful but without any warranty without even the implied
warranty of merchantability or fitness for a particular purpose see
the gnu general public license for more details
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version [author] [graeme] [gregory]
[gg]@[slimlogic] [co] [uk] [author] [kishon] [vijay] [abraham] [i]
[kishon]@[ti] [com] [based] [on] [twl6030]_[usb] [c] [author] [hema]
[hk] [hemahk]@[ti] [com] this program is distributed in the hope
that it will be useful but without any warranty without even the
implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1105 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070033.202006027@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch replaces uses of the long obsolete hash interface with
ahash.
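For reference, the general shape of the ahash calls involved (a sketch only; the driver keeps its tfms allocated for the life of the connection rather than per digest):

    #include <crypto/hash.h>

    static int example_crc32c(struct scatterlist *sg, unsigned int len, u8 *digest)
    {
            struct crypto_ahash *tfm;
            struct ahash_request *req;
            int rc;

            /* Ask for a synchronous crc32c implementation. */
            tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            req = ahash_request_alloc(tfm, GFP_KERNEL);
            if (!req) {
                    crypto_free_ahash(tfm);
                    return -ENOMEM;
            }

            ahash_request_set_callback(req, 0, NULL, NULL);
            ahash_request_set_crypt(req, sg, digest, len);
            rc = crypto_ahash_digest(req);

            ahash_request_free(req);
            crypto_free_ahash(tfm);
            return rc;
    }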
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
Several spots in the kernel perform a sequence like:
	skb_queue_tail(&sk->sk_receive_queue, skb);
	sk->sk_data_ready(sk, skb->len);
But at the moment we place the SKB onto the socket receive queue it
can be consumed and freed up. So this skb->len access is potentially
to freed up memory.
Furthermore, the skb->len can be modified by the consumer so it is
possible that the value isn't accurate.
And finally, no actual implementation of this callback actually uses
the length argument. And since nobody actually cared about its
value, lots of call sites pass in arbitrary values such as '0' and
even '1'.
So just remove the length argument from the callback, that way there
is no confusion whatsoever and all of these use-after-free cases get
fixed as a side effect.
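The shape of the change, as a sketch:

    /*
     * Old signature: void (*sk_data_ready)(struct sock *sk, int bytes);
     * New call sites simply do:
     */
    static void example_queue_and_notify(struct sock *sk, struct sk_buff *skb)
    {
            skb_queue_tail(&sk->sk_receive_queue, skb);
            sk->sk_data_ready(sk);  /* no skb->len read after the skb is queued */
    }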
Based upon a patch by Eric Dumazet and his suggestion to audit this
issue tree-wide.
Signed-off-by: David S. Miller <davem@davemloft.net>
This has iscsi_tcp use the iscsi_conn_get_addr_param
libiscsi function. It also drops the use of the libiscsi
session portal buffers, so they can be removed in
the next patches. Instead of copying the values
at bind time, we get them during get() time. If we are
not connected, userspace will now get -ENOTCONN,
so it knows the connection is disconnected instead
of getting a possibly stale value.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
iscsi_conn is not used so remove it from iscsi_sw_tcp_conn.
Signed-off-by: Avi Kaplan <savik751@gmail.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This hooks iscsi_tcp into the libiscsi_tcp module and removes
code that is now in libiscsi_tcp.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This converts iscsi_tcp to the new api and modifies how
it handles r2ts.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This converts iscsi_tcp to use the iscsi_task name.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Convert iscsi_tcp to support merged tasks.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The previous patches converted iscsi_tcp to support sg chaining.
This patch sets the proper flags and sets sg_table size to
4096. This allows fs io to be capped at max_sectors, but passthrough
IO to be limited by some other part of the kernel.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Convert xmit to iscsi chunks.
from michaelc@cs.wisc.edu:
Bug fixes, more digest integration, sg chaining conversion and other
sg wrapper changes, coding style sync up, and removal of io fields,
like pdu_sent, that are not needed.
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
iscsi_pool_init simplified
iscsi_pool_init currently has a lot of duplicated kfree() calls that it makes
when an allocation fails. This patch simplifies the code a little by
using iscsi_pool_free to tear down the pool in case of an error.
iscsi_pool_init also returns a copy of the item array to the caller.
Not all callers use this array, so we make it optional.
Instead of allocating a second array and returning that, allocate just one
array of twice the size.
Update users of iscsi_pool_{init,free}
This patch drops the (now useless) second argument to
iscsi_pool_free, and updates all callers.
It also removes the ctask->r2ts array, which was never
used anyway. Since the items argument to iscsi_pool_init
is now optional, we can pass NULL instead.
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
In the libiscsi generic code:
- Currently the code assumes that storage space for the PDU header is
allocated in the LLD's ctask and pointed to by iscsi_cmd_task->hdr. Here I
add an hdr_max field describing that storage, and an hdr_len that tracks
the current use of the PDU header.
- Add an iscsi_next_hdr() inline which returns the next free space
to write a new header at. iscsi_next_hdr() is also used to retrieve
the address at which to write the header digest.
- Add iscsi_add_hdr(length). The user calls iscsi_next_hdr()
for the address of the new header, then calls iscsi_add_hdr(length) with
the size of the new header. iscsi_add_hdr() will check whether space is
available and update to the new size. length must be padded according
to the standard (a usage sketch follows below).
- Add two padding inline helpers, thanks to Olaf. The current patch does not
use them, but following patches will.
Also move the definition of ISCSI_PAD_LEN to iscsi_proto.h, which had a
PAD_WORD_LEN that was never used anywhere.
- Let iscsi_prep_scsi_cmd_pdu() signal an error return, since it is now
possible that it will fail.
- I was tired of yet again writing a "this is a digest" comment next to
sizeof(__u32), so I defined a new ISCSI_DIGEST_SIZE. Now I don't need
any comments. Changed all places that used sizeof(__u32) or "4" in
connection with a digest.
In the iscsi_tcp specific code:
- In struct iscsi_tcp_cmd_task, allocate the maximum space allowed by the
standard for all headers following the iscsi_cmd header, and mark
it so in iscsi_tcp_session_create().
- At iscsi_send_cmd_hdr(), retrieve the correct headers size and
write the header digest at iscsi_next_hdr().
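A usage sketch based on the description above (the prototypes and the error convention are assumed from this text, not copied from the code):

    static int example_append_ahs(struct iscsi_cmd_task *ctask,
                                  unsigned int ahs_len)
    {
            void *hdr;

            /* Next free byte in the preallocated PDU header storage. */
            hdr = iscsi_next_hdr(ctask);

            /* Reserve a padded slot; fails if hdr_max would be exceeded. */
            if (iscsi_add_hdr(ctask, ALIGN(ahs_len, ISCSI_PAD_LEN)))
                    return -EINVAL;

            memset(hdr, 0, ahs_len);        /* then fill in the new header */
            return 0;
    }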
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Rewrite recv path. Fixes:
- data digest processing and error handling.
- ahs support.
Some fixups by Mike Christie
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This patch adds logical unit reset support. This should work for ib_iser,
but I have not finished testing that driver so it is not hooked in yet.
This patch also temporarily reverts the iscsi_tcp r2t write out patch.
That code is completely rewritten in this patchset.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
There is a race condition in iscsi_tcp.c that may cause it to forget
that it received an R2T from the target. This race may cause a data-out
command (such as a write) to lock up. The race occurs here:
static int
iscsi_send_unsol_pdu(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
{
	struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
	int rc;

	if (tcp_ctask->xmstate & XMSTATE_UNS_HDR) {
		BUG_ON(!ctask->unsol_count);
		tcp_ctask->xmstate &= ~XMSTATE_UNS_HDR;    <---- RACE
...

static int
iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
{
...
	tcp_ctask->xmstate |= XMSTATE_SOL_HDR_INIT;        <---- RACE
...
While iscsi_xmitworker() (called from scsi_queue_work()) is preparing to
send unsolicited data, iscsi_tcp_data_recv() (called from
tcp_read_sock()) interrupts it upon receipt of an R2T from the target.
Both contexts do read-modify-write of tcp_ctask->xmstate. Usually, gcc
on x86 will make &= and |= atomic on UP (not guaranteed of course), but
in this case iscsi_send_unsol_pdu() reads the value of xmstate before
clearing the bit, which causes gcc to read xmstate into a CPU register,
test it, clear the bit, and then store it back to memory. If the recv
interrupt happens during this sequence, then the XMSTATE_SOL_HDR_INIT
bit set by the recv interrupt will be lost, and the R2T will be
forgotten.
The patch below (against 2.6.24-rc1) converts accesses of xmstate to use
set_bit, clear_bit, and test_bit instead of |= and &=. I have tested
this patch and verified that it fixes the problem. Another possible
approach would be to hold a lock during most of the rx/tx setup and
post-processing, and drop the lock only for the actual rx/tx.
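For illustration, the shape of the conversion (the bit names after the change are illustrative; xmstate becomes an unsigned long so the atomic bitops apply):

    /* Transmit path -- was: tcp_ctask->xmstate &= ~XMSTATE_UNS_HDR; */
    static void example_clear_uns_hdr(struct iscsi_tcp_cmd_task *tcp_ctask)
    {
            clear_bit(XMSTATE_UNS_HDR_BIT, &tcp_ctask->xmstate);
    }

    /* Recv path -- was: tcp_ctask->xmstate |= XMSTATE_SOL_HDR_INIT;
     * With set_bit() the update from the recv interrupt can no longer be
     * lost by the transmit path's read-modify-write sequence.
     */
    static void example_mark_sol_hdr_init(struct iscsi_tcp_cmd_task *tcp_ctask)
    {
            set_bit(XMSTATE_SOL_HDR_INIT_BIT, &tcp_ctask->xmstate);
    }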
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
If we got the padding, data and header in different skbs,
we were not handling the padding correctly because we attributed it
to the data's skb. This resulted in the initiator reading from
pad bytes + skb offset instead of the correct offset.
If you could not connect to the OpenSolaris target, this
will fix the lockup problem you were hitting.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
If iscsi_tcp partially sends a header, it would recalculate the
header size and re-add the size of the digest (if header digests
are used). This would cause us to send sizeof(digest) extra bytes
when we sent the rest of the header.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
This patch fixes the handling of the expected datasn/r2tsn as received from
the target. It is done according to RFC 3720, section 3.2.2.3, Data Sequencing.
. unify expected datasn/r2tsn into one counter
. calculate, then check, the expected datasn/r2tsn. On error print a message
and fail the request. (TODO: use iscsi retransmits)
. remove the FIXME ;)
. avoid zero length memset
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
This patch converts ISCSI to use the new crypto_hash interface instead
of crypto_digest. It's a fairly straightforward substitution.
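For reference, the general shape of the crypto_hash usage (a sketch of the API of that era, itself later replaced by ahash; not the driver's exact code):

    static int example_crc32c_hash(struct scatterlist *sg, unsigned int len, u8 *out)
    {
            struct hash_desc desc;
            int rc;

            desc.tfm = crypto_alloc_hash("crc32c", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(desc.tfm))
                    return PTR_ERR(desc.tfm);
            desc.flags = 0;

            rc = crypto_hash_init(&desc);
            if (!rc)
                    rc = crypto_hash_update(&desc, sg, len);
            if (!rc)
                    rc = crypto_hash_final(&desc, out);

            crypto_free_hash(desc.tfm);
            return rc;
    }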
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently allocate separate tfms for data and header digests. There
is no reason for this since we can never calculate an rx header and
digest at the same time. The same goes for sends. So this patch removes the
data tfms and has the send and recv sides use the rx_tfm or tx_tfm.
I also made the connection creation code preallocate the tfms because I
thought I hit a bug where I changed the digest settings during a
relogin but could not allocate the tfm, and then we just failed.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
iscsi_tcp calculates padding by using the expected transfer length. This
has the problem that if we have immediate data = no and initial R2T =
yes, and the transfer length ends up needing padding, then we send:
1. header
2. padding which should have gone after data
3. data
Besides this bug, we also assume the target will always ask for nice
transfer lengths and that the first burst length will always be a nice value.
As far as I can tell from the RFC this is not a requirement. It would be
silly to do this, but if someone did it we would end up doing bad things.
Finally, the last bug in that bit of code is in our handling of the
recalculation of data digests when we do not send a whole iscsi_buf in
one try. The bug here is that we call crypto_digest_final on an
iscsi_sendpage error, then when we send the rest of the iscsi_buf, we
do iscsi_data_digest_init and this causes the previous data digest to be
lost.
And to make matters worse, some of these bugs are replicated over and
over and over again for immediate data, solicited data and unsolicited
data. So the attached patch, made over the iscsi git tree (see
kernel.org/git for details) which I updated today to include the patches
I said I merged, consolidates the sending of data, padding and digests
and the calculation of data digests, and fixes the above bugs.
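For context, iSCSI segments are padded out to a 4-byte boundary, the pad bytes follow the data on the wire, and the data digest covers the data plus the padding. A minimal sketch of the pad calculation (not the driver's exact code):

    static inline unsigned int example_iscsi_padding(unsigned int datalen)
    {
            /* ISCSI_PAD_LEN is 4; pad up to the next 4-byte boundary. */
            return (ISCSI_PAD_LEN - (datalen & (ISCSI_PAD_LEN - 1))) &
                   (ISCSI_PAD_LEN - 1);
    }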
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
The iSCSI RFC states that the first burst length must not exceed the
max burst length. We currently assume targets will be good, but that may
not be the case, so this patch adds a check.
This patch also moves the unsol data out offset to the lib so the LLDs
do not have to track it.
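A sketch of such a check (simplified; where exactly it runs in the lib is not shown here):

    static int example_check_bursts(struct iscsi_session *session)
    {
            /* FirstBurstLength must not exceed MaxBurstLength (RFC 3720). */
            if (session->first_burst > session->max_burst) {
                    printk(KERN_ERR "iscsi: invalid burst lengths: "
                           "first_burst %u > max_burst %u\n",
                           session->first_burst, session->max_burst);
                    return -EINVAL;
            }
            return 0;
    }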
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
We currently try to allocate a max_recv_data_segment_length
which can be very large (the default is 64K), and common uses
are up to 1MB. It is very difficult to allocate this
much contiguous memory and it turns out we never even use it.
We really only need a couple of pages, so this patch has us
allocate just what we know we need today.
Later, if vendors start adding vendor specific data and
we need to handle large buffers, we can do this, but for
the last 4 years we have not seen anyone do this or request
it.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Discovered by steven@hayter.me.uk and patch by michaelc@cs.wisc.edu.
The dtask mempool is reserving 261120 items per session! Since we are now
sending headers with sendmsg there is no reason for the mempool, and that
was causing us to use crazy amounts of memory. We can preallocate a header in
the r2t and task struct and reuse them.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
This just converts iscsi_tcp to the lib
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
The current iscsi_tcp eh is not nicely set up for dm-multipath
and performs some extra task management functions when they
are not needed.
The attached patch:
- Fixes the TMF issues. If a session is rebuilt
then we do not send aborts.
- Fixes the problem where, if the host reset fired, we would
return SUCCESS even though we had not really done anything
yet. This ends up causing problems with scsi_error.c's TUR.
- If someone has turned on the userspace nop daemon code to try
and detect network problems before the scsi command timeout,
we can now drop and clean up the session before the scsi command
times out and fires the eh, speeding up the time it takes for a
command to go from one path to another. For network problems
we fail the command with DID_BUS_BUSY, so if failfast is set
scsi_decide_disposition fails the command up to dm for it to
try on another path.
- And we had to add some basic iscsi session block code. Previously,
if we were trying to repair a session we would return a MLQUEUE code
in the queuecommand. This worked, but it was not the most efficient
or pretty thing to do since it would take a while to relogin
to the target. For iscsi_tcp/open-iscsi a lot of the iscsi error handling
is in userspace, so the block code is pretty bare. We will be
adding to that for qla4xxx.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
From erezz@voltaire.com:
Remove conn->lock since it is not used anymore. The dataqueue is protected
by the session lock and the xmitmutex.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Alex Aizman <itn780@yahoo.com>
Signed-off-by: Dmitry Yusupov <dmitry_yus@yahoo.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
From: michaelc@cs.wisc.edu, fujita.tomonori@lab.ntt.co.jp, da-x@monatomic.org
and err path fixup from:
ogerlitz@voltaire.com
This patch cleans up that interface by having the lld and class
pass an iscsi_cls_session or iscsi_cls_conn between each other when
the function is used by HW and SW iscsi llds. This way the lld
does not have to remember whether it has to send a handle or a pointer
for the connection, session or host.
This also has the class verify the session handle that gets passed from
userspace instead of using the pointer passed into the kernel directly.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Alex Aizman <itn780@yahoo.com>
Signed-off-by: Dmitry Yusupov <dmitry_yus@yahoo.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
From Mike Christie <michaelc@cs.wisc.edu> and FUJITA Tomonori <tomof@acm.org>:
We cannot use page_address() because some pages could be highmem.
Instead, we can use sock_no_sendpage, which does the kmap for us.
Signed-off-by: Alex Aizman <itn780@yahoo.com>
Signed-off-by: Dmitry Yusupov <dmitry_yus@yahoo.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Users can write to a page while we are sending it and making
digest calculations. This ends up causing us to retry the command
when a digest error is later reported. By using sock_no_sendpage
when data digests are calculated, we can avoid a lot of (not all, but it
helps) the retries because sock_no_sendpage is not zero copy.
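A sketch of the idea (the sendpage hook and field names are illustrative, not the driver's exact code):

    struct example_tcp_conn {
            struct socket   *sock;
            ssize_t         (*sendpage)(struct socket *, struct page *,
                                        int, size_t, int);
    };

    static void example_select_sendpage(struct iscsi_conn *conn,
                                        struct example_tcp_conn *tcp_conn)
    {
            /*
             * sock_no_sendpage() kmaps and copies the page, so a later write
             * to the page cannot invalidate a digest already computed over it.
             */
            if (conn->datadgst_en)
                    tcp_conn->sendpage = sock_no_sendpage;
            else
                    tcp_conn->sendpage = tcp_conn->sock->ops->sendpage;
    }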
Signed-off-by: Alex Aizman <itn780@yahoo.com>
Signed-off-by: Dmitry Yusupov <dmitry_yus@yahoo.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
The scsi layer is using semaphores in a mutex way; this patch converts
these to mutexes instead.
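The typical shape of such a conversion (a sketch; conn->xmitmutex is the field referenced elsewhere in this log):

    static void example_xmit(struct iscsi_conn *conn)
    {
            /* Before: down(&conn->xmitsema); ... up(&conn->xmitsema); */
            mutex_lock(&conn->xmitmutex);
            /* ... critical section: send queued PDUs ... */
            mutex_unlock(&conn->xmitmutex);
    }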
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
From Wang Zhenyu:
A high queue depth was a problem for some targets, so make queue_depth adjustable.
From Mike Christie:
Make the default queue_depth a little lower.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Alex Aizman <itn780@yahoo.com>
Signed-off-by: Dmitry Yusupov <dmitry_yus@yahoo.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>