1105355 Commits

Logan Gunthorpe
91fb2b6052 nvme-pci: convert to using dma_map_sgtable()
The dma_map operations now support P2PDMA pages directly. So remove
the calls to pci_p2pdma_[un]map_sg_attrs() and replace them with calls
to dma_map_sgtable().

dma_map_sgtable() returns more complete error codes than dma_map_sg()
and allows differentiating EREMOTEIO errors in case an unsupported
P2PDMA transfer is requested. When this happens, return BLK_STS_TARGET
so the request isn't retried.
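
A rough sketch of the resulting mapping path (simplified; the iod naming
and error labels are illustrative, while dma_map_sgtable() and
BLK_STS_TARGET are real):

    rc = dma_map_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req),
                         DMA_ATTR_NO_WARN);
    if (rc) {
        /* -EREMOTEIO means an unsupported P2PDMA transfer: don't retry */
        ret = (rc == -EREMOTEIO) ? BLK_STS_TARGET : BLK_STS_RESOURCE;
        goto out_free_sg;
    }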

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:28:07 -04:00
Logan Gunthorpe
2f8594412b nvme-pci: check DMA ops when indicating support for PCI P2PDMA
Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to
replace the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops
flags can be checked for PCI P2PDMA support.
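
A minimal sketch of the new hook (assuming the dma_pci_p2pdma_supported()
helper introduced elsewhere in this series; the exact callback signature
may differ):

    static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl)
    {
        struct nvme_dev *dev = to_nvme_dev(ctrl);

        return dma_pci_p2pdma_supported(dev->dev);
    }

    static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
        /* ... */
        .supports_pci_p2pdma = nvme_pci_supports_pci_p2pdma,
    };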

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:28:07 -04:00
Logan Gunthorpe
30280eee2d iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
Call pci_p2pdma_map_segment() when a PCI P2PDMA page is seen so the bus
address is set in the dma address and the segment is marked with
sg_dma_mark_bus_address(). iommu_map_sg() will then skip these segments.
Then, in __finalise_sg(), copy the dma address from the input segment
to the output segment. __invalidate_sg() must also learn to skip these
segments.

A P2PDMA page may have three possible outcomes when being mapped:
  1) If the data path between the two devices doesn't go through
     the root port, then it should be mapped with a PCI bus address.
  2) If the data path goes through the host bridge, it should be mapped
     normally with an IOMMU IOVA.
  3) It is not possible for the two devices to communicate and thus
     the mapping operation should fail (and it will return -EREMOTEIO).

Similar to dma-direct, the sg_dma_mark_bus_address() flag is used to
indicate bus address segments. On unmap, P2PDMA segments are skipped
over when determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is set to
DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages.
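
A rough sketch of the unmap-side skipping (simplified; the loop
bookkeeping in the real iommu_dma_unmap_sg() differs, and the predicate
helper is named sg_is_dma_bus_address() in this series):

    dma_addr_t start = 0, end = 0;
    struct scatterlist *sg;
    int i;

    for_each_sg(sgl, sg, nents, i) {
        /* bus-address segments have no IOVA; skip them */
        if (sg_is_dma_bus_address(sg))
            continue;
        if (!start)
            start = sg_dma_address(sg);
        end = sg_dma_address(sg) + sg_dma_len(sg);
    }
    if (end)
        __iommu_dma_unmap(dev, start, end - start);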

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:48 -04:00
Logan Gunthorpe
c96321834b iommu: Explicitly skip bus address marked segments in __iommu_map_sg()
In order to support PCI P2PDMA mappings with dma-iommu, explicitly skip
any segments marked with sg_dma_mark_bus_address() in __iommu_map_sg().

These segments should not be mapped into the IOVA and will be handled
separately in a subsequent patch for dma-iommu.
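
A minimal sketch of the skip (the real __iommu_map_sg() iterates with
sg_next() rather than for_each_sg(); the predicate helper comes from
earlier in the series):

    for_each_sg(sg, s, nents, i) {
        /* Already holds a PCI bus address set by the P2PDMA code:
         * nothing to map into the IOVA here. */
        if (sg_is_dma_bus_address(s))
            continue;

        /* ... iommu_map() this segment as before ... */
    }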

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:48 -04:00
Logan Gunthorpe
159bf19270 dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
Add a flags member to the dma_map_ops structure with one flag to
indicate support for PCI P2PDMA.

Also, add a helper to check if a device supports PCI P2PDMA.
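
A sketch of the additions (close to the actual patch, though treat the
details as illustrative):

    struct dma_map_ops {
        unsigned int flags;     /* e.g. DMA_F_PCI_P2PDMA_SUPPORTED */
        /* ... existing callbacks ... */
    };

    bool dma_pci_p2pdma_supported(struct device *dev)
    {
        const struct dma_map_ops *ops = get_dma_ops(dev);

        /* no ops means dma-direct, which supports P2PDMA natively */
        return !ops || ops->flags & DMA_F_PCI_P2PDMA_SUPPORTED;
    }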

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:48 -04:00
Logan Gunthorpe
f02ad36d4f dma-direct: support PCI P2PDMA pages in dma-direct map_sg
Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
PCI P2PDMA pages directly without a hack in the callers. This allows
for heterogeneous SGLs that contain both P2PDMA and regular pages.

A P2PDMA page may have three possible outcomes when being mapped:
  1) If the data path between the two devices doesn't go through the
     root port, then it should be mapped with a PCI bus address.
  2) If the data path goes through the host bridge, it should be mapped
     normally, as though it were a CPU physical address
  3) It is not possible for the two devices to communicate and thus
     the mapping operation should fail (and it will return -EREMOTEIO).

SGL segments that contain PCI bus addresses are marked with
sg_dma_mark_bus_address() and are ignored when unmapped.

P2PDMA mappings also fail if swiotlb needs to be used on the
mapping.
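
A condensed sketch of the per-segment handling (simplified from
dma_direct_map_sg(); the p2pdma_state handling comes from the PCI/P2PDMA
helper patch in this series):

    for_each_sg(sgl, sg, nents, i) {
        if (is_pci_p2pdma_page(sg_page(sg))) {
            switch (pci_p2pdma_map_segment(&p2pdma_state, dev, sg)) {
            case PCI_P2PDMA_MAP_BUS_ADDR:
                continue;       /* bus address set, segment marked */
            case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
                break;          /* map like a CPU physical address */
            default:
                ret = -EREMOTEIO;
                goto out_unmap;
            }
        }

        sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
                                              sg->offset, sg->length,
                                              dir, attrs);
        /* ... error handling and sg_dma_len() setup ... */
    }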

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:47 -04:00
Logan Gunthorpe
7c2645a2a3 dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
Add EREMOTEIO error return to dma_map_sgtable() which will be used
by .map_sg() implementations that detect P2PDMA pages that the
underlying DMA device cannot access.
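
A sketch of how the error propagates (the __dma_map_sg_attrs() internal
is illustrative; dma_map_sgtable() itself is the real API):

    int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
                        enum dma_data_direction dir, unsigned long attrs)
    {
        int nents;

        nents = __dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
                                   dir, attrs);
        if (nents < 0)
            return nents;   /* may now include -EREMOTEIO */
        sgt->nents = nents;
        return 0;
    }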

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:47 -04:00
Logan Gunthorpe
5e180ff326 PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
Add pci_p2pdma_map_segment() as a helper for dma_map_sg()
implementations. It takes a scatterlist segment that must point to a
pci_p2pdma struct page and will map it if the mapping requires a bus
address.

The return value indicates whether the mapping required a bus address
or whether the caller still needs to map the segment normally. If the
segment should not be mapped, -EREMOTEIO is returned.

This helper uses a state structure to track the changes to the
pgmap across calls and avoid needing to look up the xarray for
every page.

The prototype for the helper is added to dma-map-ops.h as it is only
useful to dma map implementations and doesn't need to pollute the public
pci-p2pdma header.
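
A sketch of the state structure (field names believed accurate, but
treat them as illustrative):

    struct pci_p2pdma_map_state {
        struct dev_pagemap *pgmap;  /* pgmap of the last page seen */
        int map;                    /* cached pci_p2pdma_map_type */
        u64 bus_off;                /* bus offset for bus-address maps */
    };

Callers zero-initialize one of these on the stack and pass it to
pci_p2pdma_map_segment() for every P2PDMA segment in the SGL, so the
xarray lookup only happens when the pgmap changes.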

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:47 -04:00
Logan Gunthorpe
719c986580 PCI/P2PDMA: Attempt to set map_type if it has not been set
Attempt to find the mapping type for P2PDMA pages on the first
DMA map attempt if it has not been done ahead of time.

Previously, the mapping type was expected to be calculated ahead of
time, but if pages are to come from userspace then there's no
way to ensure the path was checked ahead of time.

This change calculates the mapping type if it hasn't been pre-calculated,
so it is no longer invalid to call pci_p2pdma_map_sg() before the mapping
type is known; drop the WARN_ON for that case.
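
Roughly, the lookup becomes (helper names are from the P2PDMA code, but
treat the exact calls as illustrative):

    type = xa_to_value(xa_load(&p2pdma->map_types,
                               map_types_idx(client)));
    if (type == PCI_P2PDMA_MAP_UNKNOWN)
        /* not pre-calculated: work out the mapping type now */
        return calc_map_type_and_dist(provider, client, &dist, true);

    return type;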

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:47 -04:00
Logan Gunthorpe
4239930120 lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
Introduce a dma_flags field in struct scatterlist. These flags will be
used by dma_[un]map_sg_p2pdma() to determine when a given SGL segment's
dma_address points to a PCI bus address. dma_unmap_sg_p2pdma() will need
to perform different cleanup when a segment is marked as a bus address.

The dma_flags field will fit in the existing padding on 64BIT systems
(assuming CONFIG_NEED_SG_DMA_LENGTH is also set).

The new bit will only be used when CONFIG_PCI_P2PDMA is set; this means
PCI P2PDMA will require CONFIG_64BIT. This should be acceptable as the
majority of P2PDMA use cases are restricted to newer root complexes and
roughly require the extra address space for memory BARs used in the
transactions.
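
A sketch of the layout and one of the new helpers (believed close to the
patch; treat as illustrative):

    struct scatterlist {
        unsigned long   page_link;
        unsigned int    offset;
        unsigned int    length;
        dma_addr_t      dma_address;
    #ifdef CONFIG_NEED_SG_DMA_LENGTH
        unsigned int    dma_length;
    #endif
    #ifdef CONFIG_PCI_P2PDMA
        unsigned int    dma_flags;  /* fits the existing 64BIT padding */
    #endif
    };

    #define SG_DMA_BUS_ADDRESS  (1 << 0)

    static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
    {
        sg->dma_flags |= SG_DMA_BUS_ADDRESS;
    }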

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26 07:27:47 -04:00
Tianyu Lan
7231180903 swiotlb: clean up some coding style and minor issues
- Fix the 'used' field of struct io_tlb_area not being initialized
- Set the area number to 0 if the input area number parameter is 0
- Use array_size() to calculate io_tlb_area array size
- Make parameters of swiotlb_do_find_slots() more reasonable

Fixes: 26ffb91fa5e0 ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Tianyu Lan <tiala@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-22 17:20:56 +02:00
Lukas Bulwahn
a45e52bf1e dma-mapping: update comment after dmabounce removal
Commit e3217540c271 ("ARM/dma-mapping: remove dmabounce") removes the
config DMABOUNCE. A comment on the function __dma_page_cpu_to_dev() still
refers to this removed config DMABOUNCE.

Remove the obsolete explanation, but keep the recommendation not to use
__dma_page_cpu_to_dev() and use dma_sync_* functions instead.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-22 17:14:59 +02:00
John Garry
c9337ad439 scsi: sd: Add a comment about limiting max_sectors to shost optimal limit
Add a comment about limiting the SCSI disk request_queue max_sectors
initial value to that of the SCSI host optimal sectors limit.

Suggested-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-22 17:14:59 +02:00
John Garry
0568e61225 ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors
ATA devices (struct ata_device) have a max_sectors field which is
configured internally in libata. This is then used to (re)configure the
associated sdev request queue max_sectors value, overriding what was earlier
set in __scsi_init_queue(). There, the max_sectors value is set according
to shost limits, which include host DMA mapping limits.

Cap the ata_device max_sectors according to shost->max_sectors to respect
this shost limit.

Signed-off-by: John Garry <john.garry@huawei.com>
Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19 11:11:49 +02:00
John Garry
4cbfca5f77 scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit
Streaming DMA mappings may be considerably slower when mappings go through
an IOMMU and the total mapping length is somewhat long. This is because the
IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
affect performance.

For performance reasons set the request queue max_sectors from
dma_opt_mapping_size(), which knows this mapping limit.
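
A sketch of the capping (assuming sas_host_setup() as the location, with
dma_dev the DMA device; names approximate):

    /* cap the optimal transfer size at the DMA optimal mapping size */
    shost->opt_sectors = min_t(unsigned int, shost->max_sectors,
                               dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);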

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19 11:11:49 +02:00
John Garry
608128d391 scsi: sd: allow max_sectors be capped at DMA optimal size limit
Streaming DMA mappings may be considerably slower when mappings go through
an IOMMU and the total mapping length is somewhat long. This is because the
IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
affect performance.

Add a new member, Scsi_Host.opt_sectors, which is the optimal host
max_sectors, and use this value to cap the request queue max_sectors
when it is set.

The request queue io_opt value could instead be initialized to
Scsi_Host.opt_sectors in __scsi_init_queue(), but that is not
really the purpose of io_opt.

Finally, even though the Scsi_Host.opt_sectors value should never be greater
than the request queue max_hw_sectors value, continue to limit to
max_hw_sectors for safety.
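
Illustratively, the cap in sd_revalidate_disk() looks something like
(names approximate, treat as a sketch):

    /* respect the host's optimal limit, but never exceed max_hw_sectors */
    rw_max = min(rw_max, (sector_t)sdkp->device->host->opt_sectors);
    q->limits.max_sectors = min(rw_max, queue_max_hw_sectors(q));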

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19 11:11:49 +02:00
John Garry
bb7d1283e6 scsi: core: cap shost max_sectors according to DMA limits only once
The shost->max_sectors is repeatedly capped according to the host DMA
mapping limit for each sdev in __scsi_init_queue(). This is unnecessary, so
set it only once when adding the host.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19 06:05:47 +02:00
John Garry
6d9870b7e5 dma-iommu: add iommu_dma_opt_mapping_size()
Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
allows the drivers to know the optimal mapping limit and thus limit the
requested IOVA lengths.

This value is based on the IOVA rcache range limit, as IOVAs allocated
above this limit must always be newly allocated, which may be quite slow.
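
The callback itself is tiny; roughly:

    static size_t iommu_dma_opt_mapping_size(void)
    {
        /* largest IOVA size the rcaches will serve */
        return iova_rcache_range();
    }

    static const struct dma_map_ops iommu_dma_ops = {
        /* ... */
        .opt_mapping_size = iommu_dma_opt_mapping_size,
    };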

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19 06:05:45 +02:00
John Garry
a229cc14f3 dma-mapping: add dma_opt_mapping_size()
Streaming DMA mapping involving an IOMMU may be much slower for larger
total mapping sizes. This is because every IOMMU DMA mapping requires an
IOVA to be allocated and freed. IOVA sizes above a certain limit are not
cached, which can have a big impact on DMA mapping performance.

Provide an API for device drivers to know this "optimal" limit, such that
they may try to produce mappings which don't exceed it.
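
A sketch of the new API (believed to match the patch closely; treat as
illustrative):

    size_t dma_opt_mapping_size(struct device *dev)
    {
        const struct dma_map_ops *ops = get_dma_ops(dev);
        size_t size = SIZE_MAX;

        if (ops && ops->opt_mapping_size)
            size = ops->opt_mapping_size();

        /* never report more than the hard mapping-size limit */
        return min(dma_max_mapping_size(dev), size);
    }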

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19 06:05:41 +02:00
Christoph Hellwig
942a8186eb swiotlb: move struct io_tlb_slot to swiotlb.c
No need to expose this structure definition in the header.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18 06:50:18 +02:00
Chao Gao
57e6840cf7 swiotlb: ensure a segment doesn't cross the area boundary
Free slots tracking assumes that slots in a segment can be allocated to
fulfill a request. This implies that slots in a segment should belong to
the same area. Although the possibility of a violation is low, it is better
to explicitly enforce that segments won't span multiple areas by adjusting the
number of slabs when configuring areas.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18 06:50:01 +02:00
Chao Gao
44335487ba swiotlb: consolidate rounding up default_nslabs
default_nslabs is rounded up in two places with exactly the same comments.
Add a simple wrapper to reduce the duplicated code/comments. This is
preparatory to adding more logic into the round-up.

No functional change intended.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18 06:49:58 +02:00
Chao Gao
91561d4ecb swiotlb: remove unused fields in io_tlb_mem
Commit 20347fca71a3 ("swiotlb: split up the global swiotlb lock") splits
io_tlb_mem into multiple areas. Each area has its own lock and index. The
global ones are not used so remove them.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18 06:48:54 +02:00
Dan Carpenter
4a97739474 swiotlb: fix use after free on error handling path
Don't dereference "mem" after it has been freed.  Flip the
two kfree()s around to address this bug.

Fixes: 26ffb91fa5e0 ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18 06:48:46 +02:00
Tianyu Lan
20347fca71 swiotlb: split up the global swiotlb lock
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
significant lock contention on the swiotlb lock.

This patch splits the swiotlb bounce buffer pool into individual areas
which have their own lock. Each CPU tries to allocate in its own area
first. Only if that fails does it search other areas. On freeing the
allocation is freed into the area where the memory was originally
allocated from.

The number of areas can be set via the swiotlb kernel parameter and
defaults to the number of possible CPUs. If the possible CPU count is not
a power of 2, the area number is rounded up to the next power of 2.

This idea comes from a patch by Andi Kleen
(https://github.com/intel/tdx/commit/4529b5784c141782c72ec9bd9a92df2b68cb7d45).
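
A sketch of the per-area state and the CPU-local search order
(simplified; field names believed accurate, function parameters
approximate):

    struct io_tlb_area {
        unsigned long used;    /* allocated slots in this area */
        unsigned int index;    /* next-fit position within the area */
        spinlock_t lock;       /* protects only this area */
    };

    /* try the CPU's own area first, then wrap around the others */
    start = raw_smp_processor_id() & (mem->nareas - 1);
    i = start;
    do {
        index = swiotlb_do_find_slots(mem, i, dev, orig_addr, alloc_size);
        if (index >= 0)
            return index;
        if (++i >= mem->nareas)
            i = 0;
    } while (i != start);
    return -1;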

Based-on-idea-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-13 13:23:10 +02:00
Robin Murphy
c51ba246cb swiotlb: fail map correctly with failed io_tlb_default_mem
In the failure case of trying to use a buffer which we'd previously
failed to allocate, the "!mem" condition is no longer sufficient since
io_tlb_default_mem became static and assigned by default. Update the
condition to work as intended per the rest of that conversion.

Fixes: 463e862ac63e ("swiotlb: Convert io_default_tlb_mem to static allocation")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-12 08:47:31 +02:00
Robin Murphy
4136ce90f0 ARM/dma-mapping: merge IOMMU ops
The dma_sync_* operations are now the only difference between the
coherent and non-coherent IOMMU ops. Some minor tweaks to make those
safe for coherent devices with minimal overhead, and we can condense
down to a single set of DMA ops.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:58 +02:00
Robin Murphy
d563bccfa3 ARM/dma-mapping: consolidate IOMMU ops callbacks
Merge the coherent and non-coherent callbacks down to a single
implementation each, relying on the generic dev->dma_coherent
flag at the points where the difference matters.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:58 +02:00
Robin Murphy
42998ef08a ARM/dma-mapping: drop .dma_supported for IOMMU ops
When an IOMMU is present, we trust that it should be capable
of remapping any physical memory, and since the device masks
represent the input (virtual) addresses to the IOMMU it makes
no sense to validate them against physical PFNs anyway.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:57 +02:00
Christoph Hellwig
ae626eb973 ARM/dma-mapping: use dma-direct unconditionally
Use dma-direct unconditionally on arm.  It has already been used for
some time for LPAE and nommu configurations.

This mostly changes the streaming mapping implementation and the (simple)
coherent allocator for devices that are DMA coherent.  The existing
complex allocator for uncached mappings for non-coherent devices is still
used, as is, via the arch_dma_alloc/arch_dma_free hooks.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Andre Przywara <andre.przywara@arm.com> [highbank]
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:57 +02:00
Christoph Hellwig
af6f23b88e ARM/dma-mapping: use the generic versions of dma_to_phys/phys_to_dma by default
Only the footbridge platforms provide their own DMA address translation
helpers, so switch to the generic version for all other platforms, and
consolidate the footbridge implementation to remove two levels of
indirection.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:57 +02:00
Christoph Hellwig
f9774cfd50 ARM/dma-mapping: use dma_to_phys/phys_to_dma in the dma-mapping code
Use the helpers as expected by the dma-direct code in the old arm
dma-mapping code to ease a gradual switch to the common DMA code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:57 +02:00
Christoph Hellwig
d6e2e92597 ARM/dma-mapping: remove the unused virt_to_dma helper
virt_to_dma was only used by the now removed dmabounce code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:56 +02:00
Christoph Hellwig
5ed390e5a9 ARM/dma-mapping: mark various dma-mapping routines static in dma-mapping.c
With the dmabounce removal these aren't used outside of dma-mapping.c,
so mark them static.  Move the dma_map_ops declarations down a bit
to avoid lots of forward declarations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>
2022-07-07 18:18:56 +02:00
Christoph Hellwig
e3217540c2 ARM/dma-mapping: remove dmabounce
Remove the now unused dmabounce code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
2022-07-07 18:18:56 +02:00
Arnd Bergmann
9ba26f5cec ARM: sa1100/assabet: move dmabounce hack to ohci driver
The sa1111 platform is one of the two remaining users of the old Arm
specific "dmabounce" code, which is an earlier implementation of the
generic swiotlb.

Linus Walleij submitted a patch that removes dmabounce support from
the ixp4xx, and I had a look at the other user, which is the sa1111
companion chip.

Looking at how dmabounce is used, I could narrow it down to one driver
on three machines:

 - dmabounce is only initialized on assabet/neponset, jornada720 and
   badge4, which are the platforms that have an sa1111 and support
   DMA on it.

 - All three of these suffer from "erratum #7" that requires only
   doing DMA to half the memory sections based on one of the address
   lines; in addition, the neponset also can't DMA to the RAM that
   is connected to sa1111 itself.

 - the pxa lubbock machine also has sa1111, but does not support DMA
   on it and does not set dmabounce.

 - only the OHCI and audio devices on sa1111 support DMA, but as
   there is no audio driver for this hardware, only OHCI remains.

In the OHCI code, I noticed that two other platforms already have
a local bounce buffer support in the form of the "local_mem"
allocator. Specifically, TMIO and SM501 use this on a few other ARM
boards with 16KB or 128KB of local SRAM that can be accessed from the
OHCI and from the CPU.

While this is not the same problem as on sa1111, I could not find a
reason why we can't re-use the existing implementation but replace the
physical SRAM address mapping with a locally allocated DMA buffer.

There are two main downsides:

 - rather than using a dynamically sized pool, this buffer needs
   to be allocated at probe time using a fixed size. Without
   having any idea of what it should be, I picked a size of
   64KB, which is between what the other two OHCI front-ends use
   in their SRAM. If anyone has a better idea of what size
   is reasonable, this can be trivially changed.

 - Previously, only USB transfers to unaddressable memory needed
   to go through the bounce buffer, now all of them do, which may
   impact runtime performance for USB endpoints that do a lot of
   transfers.

On the upside, the local_mem support uses write-combining buffers,
which should be a bit faster for transfers to the device compared to
normal uncached coherent memory as used in dmabounce.

Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Cc: linux-usb@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-07 18:18:47 +02:00
Dongli Zhang
0bf28fc40d swiotlb: panic if nslabs is too small
Panic on purpose if nslabs is too small, in order to sync with the remap
retry logic.

In addition, print the number of bytes on tlb allocation failure.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-22 12:42:09 +02:00
Dongli Zhang
04c14e8e38 x86/swiotlb: fix param usage in boot-options.rst
Fix the usage of swiotlb param in kernel doc.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-22 12:42:08 +02:00
Dongli Zhang
466298c656 swiotlb: remove a useless return in swiotlb_init
Both swiotlb_init_remap() and swiotlb_init() have return type void.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-22 12:42:02 +02:00
Dongli Zhang
1a3c7d0841 swiotlb: remove the unused swiotlb_force declaration
The 'swiotlb_force' is removed since commit c6af2aa9ffc9 ("swiotlb: make
the swiotlb_init interface more useful").

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-22 12:41:45 +02:00
Linus Torvalds
a111daf0c5 Linux 5.19-rc3 v5.19-rc3 2022-06-19 15:06:47 -05:00
Linus Torvalds
05c6ca8512 X86 updates:

Merge tag 'x86-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:

 - Make RESERVE_BRK() work again with older binutils. The recent
   'simplification' broke that.

 - Make early #VE handling increment RIP when successful.

 - Make the #VE code consistent vs. the RIP adjustments and add
   comments.

 - Handle load_unaligned_zeropad() across page boundaries correctly in
   #VE when the second page is shared.

* tag 'x86-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page
  x86/tdx: Clarify RIP adjustments in #VE handler
  x86/tdx: Fix early #VE handling
  x86/mm: Fix RESERVE_BRK() for older binutils
2022-06-19 09:58:28 -05:00
Linus Torvalds
5d770f11a1 Build tool updates:

Merge tag 'objtool-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull build tooling updates from Thomas Gleixner:

 - Remove obsolete CONFIG_X86_SMAP reference from objtool

 - Fix overlapping text section failures in faddr2line for real

 - Remove OBJECT_FILES_NON_STANDARD usage from x86 ftrace and replace it
   with finegrained annotations so objtool can validate that code
   correctly.

* tag 'objtool-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/ftrace: Remove OBJECT_FILES_NON_STANDARD usage
  faddr2line: Fix overlapping text section failures, the sequel
  objtool: Fix obsolete reference to CONFIG_X86_SMAP
2022-06-19 09:54:16 -05:00
Linus Torvalds
727c3991df A single scheduler fix plugging a race between sched_setscheduler() and balance_push()

Merge tag 'sched-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fix from Thomas Gleixner:
 "A single scheduler fix plugging a race between sched_setscheduler()
  and balance_push().

  sched_setscheduler() spliced the balance callbacks accross a lock
  break which makes it possible for an interleaving schedule() to
  observe an empty list"

* tag 'sched-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Fix balance_push() vs __sched_setscheduler()
2022-06-19 09:51:00 -05:00
Linus Torvalds
4afb65156a A RT fix for lockdep

Merge tag 'locking-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull lockdep fix from Thomas Gleixner:
 "A RT fix for lockdep.

  lockdep invokes prandom_u32() to create cookies. This worked until
  prandom_u32() was switched to the real random generator, which takes a
  spinlock for extraction, which does not work on RT when invoked from
  atomic contexts.

  lockdep has no requirement for real random numbers and it turns out
  sched_clock() is good enough to create the cookie. That works
  everywhere and is faster"

* tag 'locking-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/lockdep: Use sched_clock() for random numbers
2022-06-19 09:47:41 -05:00
Linus Torvalds
36da9f5fb6 A set of interrupt subsystem updates:

Merge tag 'irq-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A set of interrupt subsystem updates:

  Core:

   - Ensure runtime power management for chained interrupts

  Drivers:

   - A collection of OF node refcount fixes

   - Unbreak MIPS uniprocessor builds

   - Fix xilinx interrupt controller Kconfig dependencies

   - Add a missing compatible string to the Uniphier driver"

* tag 'irq-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/loongson-liointc: Use architecture register to get coreid
  irqchip/uniphier-aidet: Add compatible string for NX1 SoC
  dt-bindings: interrupt-controller/uniphier-aidet: Add bindings for NX1 SoC
  irqchip/realtek-rtl: Fix refcount leak in map_interrupts
  irqchip/gic-v3: Fix refcount leak in gic_populate_ppi_partitions
  irqchip/gic-v3: Fix error handling in gic_populate_ppi_partitions
  irqchip/apple-aic: Fix refcount leak in aic_of_ic_init
  irqchip/apple-aic: Fix refcount leak in build_fiq_affinity
  irqchip/gic/realview: Fix refcount leak in realview_gic_of_init
  irqchip/xilinx: Remove microblaze+zynq dependency
  genirq: PM: Use runtime PM for chained interrupts
2022-06-19 09:45:16 -05:00
Linus Torvalds
bc94632ceb Char/Misc driver fixes for 5.19-rc3 - take 2

Merge tag 'char-misc-5.19-rc3-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver fixes for real from Greg KH:
 "Let's tag the proper branch this time...

  Here are some small char/misc driver fixes for 5.19-rc3 that resolve
  some reported issues.

  They include:

   - mei driver fixes

   - comedi driver fix

   - rtsx build warning fix

   - fsl-mc-bus driver fix

  All of these have been in linux-next for a while with no reported
  issues"

This is what the merge in commit f0ec9c65a8d6 _should_ have merged, but
Greg fat-fingered the pull request and I got some small changes from
linux-next instead there. Credit to Nathan Chancellor for eagle-eyes.

Link: https://lore.kernel.org/all/Yqywy+Md2AfGDu8v@dev-arch.thelio-3990X/

* tag 'char-misc-5.19-rc3-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  bus: fsl-mc-bus: fix KASAN use-after-free in fsl_mc_bus_remove()
  mei: me: add raptor lake point S DID
  mei: hbm: drop capability response on early shutdown
  mei: me: set internal pg flag to off on hardware reset
  misc: rtsx: Fix clang -Wsometimes-uninitialized in rts5261_init_from_hw()
  comedi: vmk80xx: fix expression for tx buffer size
2022-06-19 09:37:29 -05:00
Linus Torvalds
ee4eb6eeaf MAINTAINERS rectifications and a few minor driver fixes

Merge tag 'i2c-for-5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux

Pull i2c fixes from Wolfram Sang:
 "MAINTAINERS rectifications and a few minor driver fixes"

* tag 'i2c-for-5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: mediatek: Fix an error handling path in mtk_i2c_probe()
  i2c: designware: Use standard optional ref clock implementation
  MAINTAINERS: core DT include belongs to core
  MAINTAINERS: add include/dt-bindings/i2c to I2C SUBSYSTEM HOST DRIVERS
  i2c: npcm7xx: Add check for platform_driver_register
  MAINTAINERS: Update Synopsys DesignWare I2C to Supported
2022-06-19 09:35:09 -05:00
Linus Torvalds
063232b6c4 Fixes for 5.19-rc3:

Merge tag 'xfs-5.19-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux

Pull xfs fixes from Darrick Wong:
 "There's not a whole lot this time around (I'm still on vacation) but
  here are some important fixes for new features merged in -rc1:

   - Fix a bug where inode flag changes would accidentally drop nrext64

   - Fix a race condition when toggling LARP mode"

* tag 'xfs-5.19-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
  xfs: preserve DIFLAG2_NREXT64 when setting other inode attributes
  xfs: fix variable state usage
  xfs: fix TOCTOU race involving the new logged xattrs control knob
2022-06-19 09:24:49 -05:00
Linus Torvalds
354c6e071b Fix a variety of bugs, many of which were found by folks using fuzzing or error injection

Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4

Pull ext4 fixes from Ted Ts'o:
 "Fix a variety of bugs, many of which were found by folks using fuzzing
  or error injection.

  Also fix up how test_dummy_encryption mount option is handled for the
  new mount API.

  Finally, fix/cleanup a number of comments and ext4 Documentation
  files"

* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: fix a doubled word "need" in a comment
  ext4: add reserved GDT blocks check
  ext4: make variable "count" signed
  ext4: correct the judgment of BUG in ext4_mb_normalize_request
  ext4: fix bug_on ext4_mb_use_inode_pa
  ext4: fix up test_dummy_encryption handling for new mount API
  ext4: use kmemdup() to replace kmalloc + memcpy
  ext4: fix super block checksum incorrect after mount
  ext4: improve write performance with disabled delalloc
  ext4: fix warning when submitting superblock in ext4_commit_super()
  ext4, doc: remove unnecessary escaping
  ext4: fix incorrect comment in ext4_bio_write_page()
  fs: fix jbd2_journal_try_to_free_buffers() kernel-doc comment
2022-06-18 21:51:12 -05:00