Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

drivers/net/ethernet/intel/ice/ice_main.c
  c9663f79cd82 ("ice: adjust switchdev rebuild path")
  7758017911a4 ("ice: restore timestamp configuration after device reset")
https://lore.kernel.org/all/20231121211259.3348630-1-anthony.l.nguyen@intel.com/

Adjacent changes:

kernel/bpf/verifier.c
  bb124da69c47 ("bpf: keep track of max number of bpf_loop callback iterations")
  5f99f312bd3b ("bpf: add register bounds sanity checks and sanitization")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 45c226dde7
Jakub Kicinski, 2023-11-23 12:19:49 -08:00
231 changed files with 4387 additions and 3370 deletions


@ -375,9 +375,9 @@ Developer web site of Loongson and LoongArch (Software and Documentation):
Documentation of LoongArch ISA:
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-EN.pdf (in English)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-EN.pdf (in English)
Documentation of LoongArch ELF psABI:


@ -77,7 +77,7 @@ Protocol 2.14 BURNT BY INCORRECT COMMIT
Protocol 2.15 (Kernel 5.5) Added the kernel_info and kernel_info.setup_type_max.
============= ============================================================
.. note::
.. note::
The protocol version number should be changed only if the setup header
is changed. There is no need to update the version number if boot_params
or kernel_info are changed. Additionally, it is recommended to use


@ -36,6 +36,7 @@ properties:
- qcom,sm8350-ufshc
- qcom,sm8450-ufshc
- qcom,sm8550-ufshc
- qcom,sm8650-ufshc
- const: qcom,ufshc
- const: jedec,ufs-2.0
@ -122,6 +123,7 @@ allOf:
- qcom,sm8350-ufshc
- qcom,sm8450-ufshc
- qcom,sm8550-ufshc
- qcom,sm8650-ufshc
then:
properties:
clocks:


@ -91,6 +91,10 @@ compatibility checking tool (fsck.erofs), and a debugging tool (dump.erofs):
- git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
For more information, please also refer to the documentation site:
- https://erofs.docs.kernel.org
Bugs and patches are welcome, please kindly help us and send to the following
linux-erofs mailing list:


@ -193,9 +193,23 @@ Review timelines
Generally speaking, the patches get triaged quickly (in less than
48h). But be patient, if your patch is active in patchwork (i.e. it's
listed on the project's patch list) the chances it was missed are close to zero.
Asking the maintainer for status updates on your
patch is a good way to ensure your patch is ignored or pushed to the
bottom of the priority list.
The high volume of development on netdev makes reviewers move on
from discussions relatively quickly. New comments and replies
are very unlikely to arrive after a week of silence. If a patch
is no longer active in patchwork and the thread went idle for more
than a week - clarify the next steps and/or post the next version.
For RFC postings specifically, if nobody responded in a week - reviewers
either missed the posting or have no strong opinions. If the code is ready,
repost as a PATCH.
Emails saying just "ping" or "bump" are considered rude. If you can't figure
out the status of the patch from patchwork or where the discussion has
landed - describe your best guess and ask if it's correct. For example::
I don't understand what the next steps are. Person X seems to be unhappy
with A, should I do B and repost the patches?
.. _Changes requested:


@ -338,9 +338,9 @@ Developer web site of Loongson and LoongArch (software and documentation resources):
Documentation of the LoongArch ISA:
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-EN.pdf (in English)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-EN.pdf (in English)
Documentation of the LoongArch ELF psABI:


@ -7855,6 +7855,7 @@ R: Yue Hu <huyue2@coolpad.com>
R: Jeffle Xu <jefflexu@linux.alibaba.com>
L: linux-erofs@lists.ozlabs.org
S: Maintained
W: https://erofs.docs.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git
F: Documentation/ABI/testing/sysfs-fs-erofs
F: Documentation/filesystems/erofs.rst
@ -11024,7 +11025,6 @@ F: drivers/net/wireless/intel/iwlwifi/
INTEL WMI SLIM BOOTLOADER (SBL) FIRMWARE UPDATE DRIVER
M: Jithu Joseph <jithu.joseph@intel.com>
R: Maurice Ma <maurice.ma@intel.com>
S: Maintained
W: https://slimbootloader.github.io/security/firmware-update.html
F: drivers/platform/x86/intel/wmi/sbl-fw-update.c
@ -13778,7 +13778,6 @@ F: drivers/net/ethernet/mellanox/mlxfw/
MELLANOX HARDWARE PLATFORM SUPPORT
M: Hans de Goede <hdegoede@redhat.com>
M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
M: Mark Gross <markgross@kernel.org>
M: Vadim Pasternak <vadimp@nvidia.com>
L: platform-driver-x86@vger.kernel.org
S: Supported
@ -14387,7 +14386,6 @@ F: drivers/platform/surface/surface_gpe.c
MICROSOFT SURFACE HARDWARE PLATFORM SUPPORT
M: Hans de Goede <hdegoede@redhat.com>
M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
M: Mark Gross <markgross@kernel.org>
M: Maximilian Luz <luzmaximilian@gmail.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
@ -14994,6 +14992,7 @@ M: Jakub Kicinski <kuba@kernel.org>
M: Paolo Abeni <pabeni@redhat.com>
L: netdev@vger.kernel.org
S: Maintained
P: Documentation/process/maintainer-netdev.rst
Q: https://patchwork.kernel.org/project/netdevbpf/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
@ -15045,6 +15044,7 @@ M: Jakub Kicinski <kuba@kernel.org>
M: Paolo Abeni <pabeni@redhat.com>
L: netdev@vger.kernel.org
S: Maintained
P: Documentation/process/maintainer-netdev.rst
Q: https://patchwork.kernel.org/project/netdevbpf/list/
B: mailto:netdev@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
@ -15055,6 +15055,7 @@ F: Documentation/networking/
F: Documentation/process/maintainer-netdev.rst
F: Documentation/userspace-api/netlink/
F: include/linux/in.h
F: include/linux/indirect_call_wrapper.h
F: include/linux/net.h
F: include/linux/netdevice.h
F: include/net/
@ -23664,7 +23665,6 @@ F: drivers/platform/x86/x86-android-tablets/
X86 PLATFORM DRIVERS
M: Hans de Goede <hdegoede@redhat.com>
M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
M: Mark Gross <markgross@kernel.org>
L: platform-driver-x86@vger.kernel.org
S: Maintained
Q: https://patchwork.kernel.org/project/platform-driver-x86/list/
@ -23702,6 +23702,20 @@ F: arch/x86/kernel/dumpstack.c
F: arch/x86/kernel/stacktrace.c
F: arch/x86/kernel/unwind_*.c
X86 TRUST DOMAIN EXTENSIONS (TDX)
M: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
R: Dave Hansen <dave.hansen@linux.intel.com>
L: x86@kernel.org
L: linux-coco@lists.linux.dev
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/tdx
F: arch/x86/boot/compressed/tdx*
F: arch/x86/coco/tdx/
F: arch/x86/include/asm/shared/tdx.h
F: arch/x86/include/asm/tdx.h
F: arch/x86/virt/vmx/tdx/
F: drivers/virt/coco/tdx-guest
X86 VDSO
M: Andy Lutomirski <luto@kernel.org>
L: linux-kernel@vger.kernel.org
@ -23882,8 +23896,7 @@ T: git git://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git
P: Documentation/filesystems/xfs-maintainer-entry-profile.rst
F: Documentation/ABI/testing/sysfs-fs-xfs
F: Documentation/admin-guide/xfs.rst
F: Documentation/filesystems/xfs-delayed-logging-design.rst
F: Documentation/filesystems/xfs-self-describing-metadata.rst
F: Documentation/filesystems/xfs-*
F: fs/xfs/
F: include/uapi/linux/dqblk_xfs.h
F: include/uapi/linux/fsmap.h


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 7
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -68,6 +68,7 @@ LDFLAGS_vmlinux += -static -n -nostdlib
ifdef CONFIG_AS_HAS_EXPLICIT_RELOCS
cflags-y += $(call cc-option,-mexplicit-relocs)
KBUILD_CFLAGS_KERNEL += $(call cc-option,-mdirect-extern-access)
KBUILD_CFLAGS_KERNEL += $(call cc-option,-fdirect-access-external-data)
KBUILD_AFLAGS_MODULE += $(call cc-option,-fno-direct-access-external-data)
KBUILD_CFLAGS_MODULE += $(call cc-option,-fno-direct-access-external-data)
KBUILD_AFLAGS_MODULE += $(call cc-option,-mno-relax) $(call cc-option,-Wa$(comma)-mno-relax)
@ -142,6 +143,8 @@ vdso-install-y += arch/loongarch/vdso/vdso.so.dbg
all: $(notdir $(KBUILD_IMAGE))
vmlinuz.efi: vmlinux.efi
vmlinux.elf vmlinux.efi vmlinuz.efi: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(bootvars-y) $(boot)/$@


@ -609,8 +609,7 @@
lu32i.d \reg, 0
lu52i.d \reg, \reg, 0
.pushsection ".la_abs", "aw", %progbits
768:
.dword 768b-766b
.dword 766b
.dword \sym
.popsection
#endif


@ -40,13 +40,13 @@ static __always_inline unsigned long __percpu_##op(void *ptr, \
switch (size) { \
case 4: \
__asm__ __volatile__( \
"am"#asm_op".w" " %[ret], %[val], %[ptr] \n" \
"am"#asm_op".w" " %[ret], %[val], %[ptr] \n" \
: [ret] "=&r" (ret), [ptr] "+ZB"(*(u32 *)ptr) \
: [val] "r" (val)); \
break; \
case 8: \
__asm__ __volatile__( \
"am"#asm_op".d" " %[ret], %[val], %[ptr] \n" \
"am"#asm_op".d" " %[ret], %[val], %[ptr] \n" \
: [ret] "=&r" (ret), [ptr] "+ZB"(*(u64 *)ptr) \
: [val] "r" (val)); \
break; \
@ -63,7 +63,7 @@ PERCPU_OP(and, and, &)
PERCPU_OP(or, or, |)
#undef PERCPU_OP
static __always_inline unsigned long __percpu_read(void *ptr, int size)
static __always_inline unsigned long __percpu_read(void __percpu *ptr, int size)
{
unsigned long ret;
@ -100,7 +100,7 @@ static __always_inline unsigned long __percpu_read(void *ptr, int size)
return ret;
}
static __always_inline void __percpu_write(void *ptr, unsigned long val, int size)
static __always_inline void __percpu_write(void __percpu *ptr, unsigned long val, int size)
{
switch (size) {
case 1:
@ -132,8 +132,7 @@ static __always_inline void __percpu_write(void *ptr, unsigned long val, int siz
}
}
static __always_inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
int size)
static __always_inline unsigned long __percpu_xchg(void *ptr, unsigned long val, int size)
{
switch (size) {
case 1:

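As an aside (not part of this change): the __percpu annotation added to these prototypes lets sparse verify that per-CPU pointers are only dereferenced through the per-CPU accessor macros. A minimal, hypothetical usage sketch with illustrative names:

static unsigned long __percpu *demo_hits;	/* hypothetical per-CPU counter */

static int __init demo_init(void)
{
	demo_hits = alloc_percpu(unsigned long);
	if (!demo_hits)
		return -ENOMEM;

	this_cpu_inc(*demo_hits);	/* resolved through the arch per-CPU helpers above */
	return 0;
}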

@ -25,7 +25,7 @@ extern void set_merr_handler(unsigned long offset, void *addr, unsigned long len
#ifdef CONFIG_RELOCATABLE
struct rela_la_abs {
long offset;
long pc;
long symvalue;
};


@ -52,7 +52,7 @@ static inline void __init relocate_absolute(long random_offset)
for (p = begin; (void *)p < end; p++) {
long v = p->symvalue;
uint32_t lu12iw, ori, lu32id, lu52id;
union loongarch_instruction *insn = (void *)p - p->offset;
union loongarch_instruction *insn = (void *)p->pc;
lu12iw = (v >> 12) & 0xfffff;
ori = v & 0xfff;
@ -102,6 +102,14 @@ static inline __init unsigned long get_random_boot(void)
return hash;
}
static int __init nokaslr(char *p)
{
pr_info("KASLR is disabled.\n");
return 0; /* Print a notice and silence the boot warning */
}
early_param("nokaslr", nokaslr);
static inline __init bool kaslr_disabled(void)
{
char *str;


@ -58,21 +58,6 @@ static int constant_set_state_oneshot(struct clock_event_device *evt)
return 0;
}
static int constant_set_state_oneshot_stopped(struct clock_event_device *evt)
{
unsigned long timer_config;
raw_spin_lock(&state_lock);
timer_config = csr_read64(LOONGARCH_CSR_TCFG);
timer_config &= ~CSR_TCFG_EN;
csr_write64(timer_config, LOONGARCH_CSR_TCFG);
raw_spin_unlock(&state_lock);
return 0;
}
static int constant_set_state_periodic(struct clock_event_device *evt)
{
unsigned long period;
@ -92,6 +77,16 @@ static int constant_set_state_periodic(struct clock_event_device *evt)
static int constant_set_state_shutdown(struct clock_event_device *evt)
{
unsigned long timer_config;
raw_spin_lock(&state_lock);
timer_config = csr_read64(LOONGARCH_CSR_TCFG);
timer_config &= ~CSR_TCFG_EN;
csr_write64(timer_config, LOONGARCH_CSR_TCFG);
raw_spin_unlock(&state_lock);
return 0;
}
@ -161,7 +156,7 @@ int constant_clockevent_init(void)
cd->rating = 320;
cd->cpumask = cpumask_of(cpu);
cd->set_state_oneshot = constant_set_state_oneshot;
cd->set_state_oneshot_stopped = constant_set_state_oneshot_stopped;
cd->set_state_oneshot_stopped = constant_set_state_shutdown;
cd->set_state_periodic = constant_set_state_periodic;
cd->set_state_shutdown = constant_set_state_shutdown;
cd->set_next_event = constant_timer_next_event;


@ -13,13 +13,13 @@ struct page *dmw_virt_to_page(unsigned long kaddr)
{
return pfn_to_page(virt_to_pfn(kaddr));
}
EXPORT_SYMBOL_GPL(dmw_virt_to_page);
EXPORT_SYMBOL(dmw_virt_to_page);
struct page *tlb_virt_to_page(unsigned long kaddr)
{
return pfn_to_page(pte_pfn(*virt_to_kpte(kaddr)));
}
EXPORT_SYMBOL_GPL(tlb_virt_to_page);
EXPORT_SYMBOL(tlb_virt_to_page);
pgd_t *pgd_alloc(struct mm_struct *mm)
{


@ -140,11 +140,11 @@ config ARCH_MMAP_RND_COMPAT_BITS_MIN
default 8
config ARCH_MMAP_RND_BITS_MAX
default 24 if 64BIT
default 17
default 18 if 64BIT
default 13
config ARCH_MMAP_RND_COMPAT_BITS_MAX
default 17
default 13
# unless you want to implement ACPI on PA-RISC ... ;-)
config PM

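For orientation only (illustrative arithmetic, not part of the patch, and assuming the usual 4 KiB pages): the number of randomization bits maps to a VA range of (1 << bits) pages, so the values above work out as follows.

/* Illustrative only; assumes PAGE_SHIFT == 12. */
#define DEMO_RND_RANGE(bits)	(1UL << ((bits) + 12))
/* DEMO_RND_RANGE(18) == 1 GiB, DEMO_RND_RANGE(13) == 32 MiB, DEMO_RND_RANGE(8) == 1 MiB */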

@ -349,15 +349,7 @@ struct pt_regs; /* forward declaration... */
#define ELF_HWCAP 0
/* Masks for stack and mmap randomization */
#define BRK_RND_MASK (is_32bit_task() ? 0x07ffUL : 0x3ffffUL)
#define MMAP_RND_MASK (is_32bit_task() ? 0x1fffUL : 0x3ffffUL)
#define STACK_RND_MASK MMAP_RND_MASK
struct mm_struct;
extern unsigned long arch_randomize_brk(struct mm_struct *);
#define arch_randomize_brk arch_randomize_brk
#define STACK_RND_MASK 0x7ff /* 8MB of VA */
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;


@ -47,6 +47,8 @@
#ifndef __ASSEMBLY__
struct rlimit;
unsigned long mmap_upper_limit(struct rlimit *rlim_stack);
unsigned long calc_max_stack_size(unsigned long stack_max);
/*


@ -383,7 +383,7 @@ show_cpuinfo (struct seq_file *m, void *v)
char cpu_name[60], *p;
/* strip PA path from CPU name to not confuse lscpu */
strlcpy(cpu_name, per_cpu(cpu_data, 0).dev->name, sizeof(cpu_name));
strscpy(cpu_name, per_cpu(cpu_data, 0).dev->name, sizeof(cpu_name));
p = strrchr(cpu_name, '[');
if (p)
*(--p) = 0;


@ -77,7 +77,7 @@ unsigned long calc_max_stack_size(unsigned long stack_max)
* indicating that "current" should be used instead of a passed-in
* value from the exec bprm as done with arch_pick_mmap_layout().
*/
static unsigned long mmap_upper_limit(struct rlimit *rlim_stack)
unsigned long mmap_upper_limit(struct rlimit *rlim_stack)
{
unsigned long stack_base;


@ -15,6 +15,7 @@
#include <linux/io.h>
#include <asm/apic.h>
#include <asm/desc.h>
#include <asm/e820/api.h>
#include <asm/sev.h>
#include <asm/ibt.h>
#include <asm/hypervisor.h>
@ -286,15 +287,31 @@ static int hv_cpu_die(unsigned int cpu)
static int __init hv_pci_init(void)
{
int gen2vm = efi_enabled(EFI_BOOT);
bool gen2vm = efi_enabled(EFI_BOOT);
/*
* For Generation-2 VM, we exit from pci_arch_init() by returning 0.
* The purpose is to suppress the harmless warning:
* A Generation-2 VM doesn't support legacy PCI/PCIe, so both
* raw_pci_ops and raw_pci_ext_ops are NULL, and pci_subsys_init() ->
* pcibios_init() doesn't call pcibios_resource_survey() ->
* e820__reserve_resources_late(); as a result, any emulated persistent
* memory of E820_TYPE_PRAM (12) via the kernel parameter
* memmap=nn[KMG]!ss is not added into iomem_resource and hence can't be
* detected by register_e820_pmem(). Fix this by directly calling
* e820__reserve_resources_late() here: e820__reserve_resources_late()
* depends on e820__reserve_resources(), which has been called earlier
* from setup_arch(). Note: e820__reserve_resources_late() also adds
* any memory of E820_TYPE_PMEM (7) into iomem_resource, and
* acpi_nfit_register_region() -> acpi_nfit_insert_resource() ->
* region_intersects() returns REGION_INTERSECTS, so the memory of
* E820_TYPE_PMEM won't get added twice.
*
* We return 0 here so that pci_arch_init() won't print the warning:
* "PCI: Fatal: No config space access function found"
*/
if (gen2vm)
if (gen2vm) {
e820__reserve_resources_late();
return 0;
}
/* For Generation-1 VM, we'll proceed in pci_arch_init(). */
return 1;


@ -63,6 +63,7 @@ int acpi_fix_pin2_polarity __initdata;
#ifdef CONFIG_X86_LOCAL_APIC
static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
static bool has_lapic_cpus __initdata;
static bool acpi_support_online_capable;
#endif
@ -232,6 +233,14 @@ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
if (!acpi_is_processor_usable(processor->lapic_flags))
return 0;
/*
* According to https://uefi.org/specs/ACPI/6.5/05_ACPI_Software_Programming_Model.html#processor-local-x2apic-structure
* when MADT provides both valid LAPIC and x2APIC entries, the APIC ID
* in x2APIC must be equal or greater than 0xff.
*/
if (has_lapic_cpus && apic_id < 0xff)
return 0;
/*
* We need to register disabled CPU as well to permit
* counting disabled CPUs. This allows us to size
@ -1114,10 +1123,7 @@ static int __init early_acpi_parse_madt_lapic_addr_ovr(void)
static int __init acpi_parse_madt_lapic_entries(void)
{
int count;
int x2count = 0;
int ret;
struct acpi_subtable_proc madt_proc[2];
int count, x2count = 0;
if (!boot_cpu_has(X86_FEATURE_APIC))
return -ENODEV;
@ -1126,21 +1132,11 @@ static int __init acpi_parse_madt_lapic_entries(void)
acpi_parse_sapic, MAX_LOCAL_APIC);
if (!count) {
memset(madt_proc, 0, sizeof(madt_proc));
madt_proc[0].id = ACPI_MADT_TYPE_LOCAL_APIC;
madt_proc[0].handler = acpi_parse_lapic;
madt_proc[1].id = ACPI_MADT_TYPE_LOCAL_X2APIC;
madt_proc[1].handler = acpi_parse_x2apic;
ret = acpi_table_parse_entries_array(ACPI_SIG_MADT,
sizeof(struct acpi_table_madt),
madt_proc, ARRAY_SIZE(madt_proc), MAX_LOCAL_APIC);
if (ret < 0) {
pr_err("Error parsing LAPIC/X2APIC entries\n");
return ret;
}
count = madt_proc[0].count;
x2count = madt_proc[1].count;
count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
acpi_parse_lapic, MAX_LOCAL_APIC);
has_lapic_cpus = count > 0;
x2count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_X2APIC,
acpi_parse_x2apic, MAX_LOCAL_APIC);
}
if (!count && !x2count) {
pr_err("No LAPIC entries present\n");


@ -262,11 +262,14 @@ static uint32_t __init ms_hyperv_platform(void)
static int hv_nmi_unknown(unsigned int val, struct pt_regs *regs)
{
static atomic_t nmi_cpu = ATOMIC_INIT(-1);
unsigned int old_cpu, this_cpu;
if (!unknown_nmi_panic)
return NMI_DONE;
if (atomic_cmpxchg(&nmi_cpu, -1, raw_smp_processor_id()) != -1)
old_cpu = -1;
this_cpu = raw_smp_processor_id();
if (!atomic_try_cmpxchg(&nmi_cpu, &old_cpu, this_cpu))
return NMI_HANDLED;
return NMI_DONE;

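Side note (not from the patch): atomic_try_cmpxchg() returns a boolean and, on failure, writes the value it actually observed back into *old, which is why the rewritten check above reads !atomic_try_cmpxchg(...). A rough equivalence sketch:

static inline bool demo_try_cmpxchg(atomic_t *v, int *old, int new)
{
	int cur = atomic_cmpxchg(v, *old, new);	/* returns the previous value */

	if (cur == *old)
		return true;	/* the exchange happened */
	*old = cur;		/* report the value we saw instead */
	return false;
}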

@ -175,9 +175,6 @@ int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
frame = get_sigframe(ksig, regs, sizeof(struct rt_sigframe), &fp);
uc_flags = frame_uc_flags(regs);
if (setup_signal_shadow_stack(ksig))
return -EFAULT;
if (!user_access_begin(frame, sizeof(*frame)))
return -EFAULT;
@ -198,6 +195,9 @@ int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
return -EFAULT;
}
if (setup_signal_shadow_stack(ksig))
return -EFAULT;
/* Set up registers for signal handler */
regs->di = ksig->sig;
/* In case the signal handler was declared without prototypes */

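For readers unfamiliar with the pattern used in this function, a minimal, hypothetical sketch of a user_access_begin()/unsafe_put_user() section (illustrative names, not the actual frame layout): the window between begin and end may only contain unsafe_*_user() accesses.

	if (!user_access_begin(uframe, sizeof(*uframe)))
		return -EFAULT;
	unsafe_put_user(val, &uframe->field, Efault);	/* no function calls inside the window */
	user_access_end();
	return 0;

Efault:
	user_access_end();
	return -EFAULT;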

@ -2858,11 +2858,8 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
};
struct request *rq;
if (unlikely(bio_queue_enter(bio)))
return NULL;
if (blk_mq_attempt_bio_merge(q, bio, nsegs))
goto queue_exit;
return NULL;
rq_qos_throttle(q, bio);
@ -2878,35 +2875,23 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
rq_qos_cleanup(q, bio);
if (bio->bi_opf & REQ_NOWAIT)
bio_wouldblock_error(bio);
queue_exit:
blk_queue_exit(q);
return NULL;
}
static inline struct request *blk_mq_get_cached_request(struct request_queue *q,
struct blk_plug *plug, struct bio **bio, unsigned int nsegs)
/* return true if this @rq can be used for @bio */
static bool blk_mq_can_use_cached_rq(struct request *rq, struct blk_plug *plug,
struct bio *bio)
{
struct request *rq;
enum hctx_type type, hctx_type;
enum hctx_type type = blk_mq_get_hctx_type(bio->bi_opf);
enum hctx_type hctx_type = rq->mq_hctx->type;
if (!plug)
return NULL;
rq = rq_list_peek(&plug->cached_rq);
if (!rq || rq->q != q)
return NULL;
WARN_ON_ONCE(rq_list_peek(&plug->cached_rq) != rq);
if (blk_mq_attempt_bio_merge(q, *bio, nsegs)) {
*bio = NULL;
return NULL;
}
type = blk_mq_get_hctx_type((*bio)->bi_opf);
hctx_type = rq->mq_hctx->type;
if (type != hctx_type &&
!(type == HCTX_TYPE_READ && hctx_type == HCTX_TYPE_DEFAULT))
return NULL;
if (op_is_flush(rq->cmd_flags) != op_is_flush((*bio)->bi_opf))
return NULL;
return false;
if (op_is_flush(rq->cmd_flags) != op_is_flush(bio->bi_opf))
return false;
/*
* If any qos ->throttle() end up blocking, we will have flushed the
@ -2914,12 +2899,12 @@ static inline struct request *blk_mq_get_cached_request(struct request_queue *q,
* before we throttle.
*/
plug->cached_rq = rq_list_next(rq);
rq_qos_throttle(q, *bio);
rq_qos_throttle(rq->q, bio);
blk_mq_rq_time_init(rq, 0);
rq->cmd_flags = (*bio)->bi_opf;
rq->cmd_flags = bio->bi_opf;
INIT_LIST_HEAD(&rq->queuelist);
return rq;
return true;
}
static void bio_set_ioprio(struct bio *bio)
@ -2949,7 +2934,7 @@ void blk_mq_submit_bio(struct bio *bio)
struct blk_plug *plug = blk_mq_plug(bio);
const int is_sync = op_is_sync(bio->bi_opf);
struct blk_mq_hw_ctx *hctx;
struct request *rq;
struct request *rq = NULL;
unsigned int nr_segs = 1;
blk_status_t ret;
@ -2960,20 +2945,36 @@ void blk_mq_submit_bio(struct bio *bio)
return;
}
if (!bio_integrity_prep(bio))
return;
bio_set_ioprio(bio);
rq = blk_mq_get_cached_request(q, plug, &bio, nr_segs);
if (!rq) {
if (!bio)
if (plug) {
rq = rq_list_peek(&plug->cached_rq);
if (rq && rq->q != q)
rq = NULL;
}
if (rq) {
if (!bio_integrity_prep(bio))
return;
rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
if (unlikely(!rq))
if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
return;
if (blk_mq_can_use_cached_rq(rq, plug, bio))
goto done;
percpu_ref_get(&q->q_usage_counter);
} else {
if (unlikely(bio_queue_enter(bio)))
return;
if (!bio_integrity_prep(bio))
goto fail;
}
rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
if (unlikely(!rq)) {
fail:
blk_queue_exit(q);
return;
}
done:
trace_block_getrq(bio);
rq_qos_track(q, rq, bio);


@ -250,9 +250,6 @@ int ivpu_rpm_get_if_active(struct ivpu_device *vdev)
{
int ret;
ivpu_dbg(vdev, RPM, "rpm_get_if_active count %d\n",
atomic_read(&vdev->drm.dev->power.usage_count));
ret = pm_runtime_get_if_active(vdev->drm.dev, false);
drm_WARN_ON(&vdev->drm, ret < 0);


@ -1093,9 +1093,10 @@ int dpll_nl_pin_id_get_doit(struct sk_buff *skb, struct genl_info *info)
return -ENOMEM;
hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
DPLL_CMD_PIN_ID_GET);
if (!hdr)
if (!hdr) {
nlmsg_free(msg);
return -EMSGSIZE;
}
pin = dpll_pin_find_from_nlattr(info);
if (!IS_ERR(pin)) {
ret = dpll_msg_add_pin_handle(msg, pin);
@ -1123,8 +1124,10 @@ int dpll_nl_pin_get_doit(struct sk_buff *skb, struct genl_info *info)
return -ENOMEM;
hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
DPLL_CMD_PIN_GET);
if (!hdr)
if (!hdr) {
nlmsg_free(msg);
return -EMSGSIZE;
}
ret = dpll_cmd_pin_get_one(msg, pin, info->extack);
if (ret) {
nlmsg_free(msg);
@ -1256,8 +1259,10 @@ int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info)
return -ENOMEM;
hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
DPLL_CMD_DEVICE_ID_GET);
if (!hdr)
if (!hdr) {
nlmsg_free(msg);
return -EMSGSIZE;
}
dpll = dpll_device_find_from_nlattr(info);
if (!IS_ERR(dpll)) {
@ -1284,8 +1289,10 @@ int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info)
return -ENOMEM;
hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
DPLL_CMD_DEVICE_GET);
if (!hdr)
if (!hdr) {
nlmsg_free(msg);
return -EMSGSIZE;
}
ret = dpll_device_get_one(dpll, msg, info->extack);
if (ret) {

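The same leak fix repeats across the four doit handlers touched above; a condensed, hypothetical sketch of the corrected shape (attribute filling elided):

static int demo_doit(struct sk_buff *skb, struct genl_info *info)
{
	struct sk_buff *msg = genlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
	void *hdr;

	if (!msg)
		return -ENOMEM;

	hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0, DPLL_CMD_DEVICE_GET);
	if (!hdr) {
		nlmsg_free(msg);	/* msg would otherwise leak on this path */
		return -EMSGSIZE;
	}
	/* ... fill attributes, bailing out with nlmsg_free() on error ... */
	genlmsg_end(msg, hdr);
	return genlmsg_reply(msg, info);
}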

@ -248,6 +248,7 @@ extern int amdgpu_umsch_mm;
extern int amdgpu_seamless;
extern int amdgpu_user_partt_mode;
extern int amdgpu_agp;
#define AMDGPU_VM_MAX_NUM_CTX 4096
#define AMDGPU_SG_THRESHOLD (256*1024*1024)


@ -207,7 +207,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
}
for (i = 0; i < p->nchunks; i++) {
struct drm_amdgpu_cs_chunk __user **chunk_ptr = NULL;
struct drm_amdgpu_cs_chunk __user *chunk_ptr = NULL;
struct drm_amdgpu_cs_chunk user_chunk;
uint32_t __user *cdata;


@ -207,6 +207,7 @@ int amdgpu_user_partt_mode = AMDGPU_AUTO_COMPUTE_PARTITION_MODE;
int amdgpu_umsch_mm;
int amdgpu_seamless = -1; /* auto */
uint amdgpu_debug_mask;
int amdgpu_agp = -1; /* auto */
static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
@ -961,6 +962,15 @@ module_param_named(seamless, amdgpu_seamless, int, 0444);
MODULE_PARM_DESC(debug_mask, "debug options for amdgpu, disabled by default");
module_param_named(debug_mask, amdgpu_debug_mask, uint, 0444);
/**
* DOC: agp (int)
* Enable the AGP aperture. This provides an aperture in the GPU's internal
* address space for direct access to system memory. Note that these accesses
* are non-snooped, so they are only used for access to uncached memory.
*/
MODULE_PARM_DESC(agp, "AGP (-1 = auto (default), 0 = disable, 1 = enable)");
module_param_named(agp, amdgpu_agp, int, 0444);
/* These devices are not supported by amdgpu.
* They are supported by the mach64, r128, radeon drivers
*/


@ -1473,6 +1473,11 @@ int psp_xgmi_get_topology_info(struct psp_context *psp,
topology->nodes[i].num_links = (requires_reflection && topology->nodes[i].num_links) ?
topology->nodes[i].num_links : node_num_links;
}
/* populate the connected port num info if supported and available */
if (ta_port_num_support && topology->nodes[i].num_links) {
memcpy(topology->nodes[i].port_num, link_extend_info_output->nodes[i].port_num,
sizeof(struct xgmi_connected_port_num) * TA_XGMI__MAX_PORT_NUM);
}
/* reflect the topology information for bi-directionality */
if (requires_reflection && topology->nodes[i].num_hops)


@ -150,6 +150,7 @@ struct psp_xgmi_node_info {
uint8_t is_sharing_enabled;
enum ta_xgmi_assigned_sdma_engine sdma_engine;
uint8_t num_links;
struct xgmi_connected_port_num port_num[TA_XGMI__MAX_PORT_NUM];
};
struct psp_xgmi_topology_info {


@ -1188,7 +1188,7 @@ static int amdgpu_ras_query_error_status_helper(struct amdgpu_device *adev,
}
if (block_obj->hw_ops->query_ras_error_count)
block_obj->hw_ops->query_ras_error_count(adev, &err_data);
block_obj->hw_ops->query_ras_error_count(adev, err_data);
if ((info->head.block == AMDGPU_RAS_BLOCK__SDMA) ||
(info->head.block == AMDGPU_RAS_BLOCK__GFX) ||


@ -398,6 +398,7 @@ int amdgpu_uvd_sw_fini(struct amdgpu_device *adev)
* amdgpu_uvd_entity_init - init entity
*
* @adev: amdgpu_device pointer
* @ring: amdgpu_ring pointer to check
*
* Initialize the entity used for handle management in the kernel driver.
*/


@ -230,6 +230,7 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
* amdgpu_vce_entity_init - init entity
*
* @adev: amdgpu_device pointer
* @ring: amdgpu_ring pointer to check
*
* Initialize the entity used for handle management in the kernel driver.
*/


@ -675,7 +675,7 @@ static void gmc_v10_0_vram_gtt_location(struct amdgpu_device *adev,
amdgpu_gmc_set_agp_default(adev, mc);
amdgpu_gmc_vram_location(adev, &adev->gmc, base);
amdgpu_gmc_gart_location(adev, mc, AMDGPU_GART_PLACEMENT_BEST_FIT);
if (!amdgpu_sriov_vf(adev))
if (!amdgpu_sriov_vf(adev) && (amdgpu_agp == 1))
amdgpu_gmc_agp_location(adev, mc);
/* base offset of vram pages */


@ -640,8 +640,9 @@ static void gmc_v11_0_vram_gtt_location(struct amdgpu_device *adev,
amdgpu_gmc_set_agp_default(adev, mc);
amdgpu_gmc_vram_location(adev, &adev->gmc, base);
amdgpu_gmc_gart_location(adev, mc, AMDGPU_GART_PLACEMENT_HIGH);
if (!amdgpu_sriov_vf(adev) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(11, 5, 0)))
if (!amdgpu_sriov_vf(adev) &&
(amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(11, 5, 0)) &&
(amdgpu_agp == 1))
amdgpu_gmc_agp_location(adev, mc);
/* base offset of vram pages */


@ -1630,7 +1630,7 @@ static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
} else {
amdgpu_gmc_vram_location(adev, mc, base);
amdgpu_gmc_gart_location(adev, mc, AMDGPU_GART_PLACEMENT_BEST_FIT);
if (!amdgpu_sriov_vf(adev))
if (!amdgpu_sriov_vf(adev) && (amdgpu_agp == 1))
amdgpu_gmc_agp_location(adev, mc);
}
/* base offset of vram pages */
@ -2170,8 +2170,6 @@ static int gmc_v9_0_sw_fini(void *handle)
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3))
amdgpu_gmc_sysfs_fini(adev);
adev->gmc.num_mem_partitions = 0;
kfree(adev->gmc.mem_partitions);
amdgpu_gmc_ras_fini(adev);
amdgpu_gem_force_release(adev);
@ -2185,6 +2183,9 @@ static int gmc_v9_0_sw_fini(void *handle)
amdgpu_bo_free_kernel(&adev->gmc.pdb0_bo, NULL, &adev->gmc.ptr_pdb0);
amdgpu_bo_fini(adev);
adev->gmc.num_mem_partitions = 0;
kfree(adev->gmc.mem_partitions);
return 0;
}


@ -130,6 +130,9 @@ static void mmhub_v1_8_init_system_aperture_regs(struct amdgpu_device *adev)
uint64_t value;
int i;
if (amdgpu_sriov_vf(adev))
return;
inst_mask = adev->aid_mask;
for_each_inst(i, inst_mask) {
/* Program the AGP BAR */
@ -139,9 +142,6 @@ static void mmhub_v1_8_init_system_aperture_regs(struct amdgpu_device *adev)
WREG32_SOC15(MMHUB, i, regMC_VM_AGP_TOP,
adev->gmc.agp_end >> 24);
if (amdgpu_sriov_vf(adev))
return;
/* Program the system aperture low logical page number. */
WREG32_SOC15(MMHUB, i, regMC_VM_SYSTEM_APERTURE_LOW_ADDR,
min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);


@ -2079,7 +2079,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
struct dmub_srv_create_params create_params;
struct dmub_srv_region_params region_params;
struct dmub_srv_region_info region_info;
struct dmub_srv_fb_params fb_params;
struct dmub_srv_memory_params memory_params;
struct dmub_srv_fb_info *fb_info;
struct dmub_srv *dmub_srv;
const struct dmcub_firmware_header_v1_0 *hdr;
@ -2182,6 +2182,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
adev->dm.dmub_fw->data +
le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
PSP_HEADER_BYTES;
region_params.is_mailbox_in_inbox = false;
status = dmub_srv_calc_region_info(dmub_srv, &region_params,
&region_info);
@ -2205,10 +2206,10 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
return r;
/* Rebase the regions on the framebuffer address. */
memset(&fb_params, 0, sizeof(fb_params));
fb_params.cpu_addr = adev->dm.dmub_bo_cpu_addr;
fb_params.gpu_addr = adev->dm.dmub_bo_gpu_addr;
fb_params.region_info = &region_info;
memset(&memory_params, 0, sizeof(memory_params));
memory_params.cpu_fb_addr = adev->dm.dmub_bo_cpu_addr;
memory_params.gpu_fb_addr = adev->dm.dmub_bo_gpu_addr;
memory_params.region_info = &region_info;
adev->dm.dmub_fb_info =
kzalloc(sizeof(*adev->dm.dmub_fb_info), GFP_KERNEL);
@ -2220,7 +2221,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
return -ENOMEM;
}
status = dmub_srv_calc_fb_info(dmub_srv, &fb_params, fb_info);
status = dmub_srv_calc_mem_info(dmub_srv, &memory_params, fb_info);
if (status != DMUB_STATUS_OK) {
DRM_ERROR("Error calculating DMUB FB info: %d\n", status);
return -EINVAL;
@ -7481,6 +7482,9 @@ static int amdgpu_dm_i2c_xfer(struct i2c_adapter *i2c_adap,
int i;
int result = -EIO;
if (!ddc_service->ddc_pin || !ddc_service->ddc_pin->hw_info.hw_supported)
return result;
cmd.payloads = kcalloc(num, sizeof(struct i2c_payload), GFP_KERNEL);
if (!cmd.payloads)
@ -9603,14 +9607,14 @@ static bool should_reset_plane(struct drm_atomic_state *state,
struct drm_plane *other;
struct drm_plane_state *old_other_state, *new_other_state;
struct drm_crtc_state *new_crtc_state;
struct amdgpu_device *adev = drm_to_adev(plane->dev);
int i;
/*
* TODO: Remove this hack once the checks below are sufficient
* enough to determine when we need to reset all the planes on
* the stream.
* TODO: Remove this hack for all asics once it proves that the
* fast updates works fine on DCN3.2+.
*/
if (state->allow_modeset)
if (adev->ip_versions[DCE_HWIP][0] < IP_VERSION(3, 2, 0) && state->allow_modeset)
return true;
/* Exit early if we know that we're adding or removing the plane. */


@ -536,11 +536,8 @@ bool dm_helpers_dp_read_dpcd(
struct amdgpu_dm_connector *aconnector = link->priv;
if (!aconnector) {
drm_dbg_dp(aconnector->base.dev,
"Failed to find connector for link!\n");
if (!aconnector)
return false;
}
return drm_dp_dpcd_read(&aconnector->dm_dp_aux.aux, address, data,
size) == size;


@ -1604,31 +1604,31 @@ enum dc_status dm_dp_mst_is_port_support_mode(
unsigned int upper_link_bw_in_kbps = 0, down_link_bw_in_kbps = 0;
unsigned int max_compressed_bw_in_kbps = 0;
struct dc_dsc_bw_range bw_range = {0};
struct drm_dp_mst_topology_mgr *mst_mgr;
uint16_t full_pbn = aconnector->mst_output_port->full_pbn;
/*
* check if the mode could be supported if DSC pass-through is supported
* AND check if there enough bandwidth available to support the mode
* with DSC enabled.
* Consider the case with the depth of the mst topology tree is equal or less than 2
* A. When dsc bitstream can be transmitted along the entire path
* 1. dsc is possible between source and branch/leaf device (common dsc params is possible), AND
* 2. dsc passthrough supported at MST branch, or
* 3. dsc decoding supported at leaf MST device
* Use maximum dsc compression as bw constraint
* B. When dsc bitstream cannot be transmitted along the entire path
* Use native bw as bw constraint
*/
if (is_dsc_common_config_possible(stream, &bw_range) &&
aconnector->mst_output_port->passthrough_aux) {
mst_mgr = aconnector->mst_output_port->mgr;
mutex_lock(&mst_mgr->lock);
(aconnector->mst_output_port->passthrough_aux ||
aconnector->dsc_aux == &aconnector->mst_output_port->aux)) {
cur_link_settings = stream->link->verified_link_cap;
upper_link_bw_in_kbps = dc_link_bandwidth_kbps(aconnector->dc_link,
&cur_link_settings
);
down_link_bw_in_kbps = kbps_from_pbn(aconnector->mst_output_port->full_pbn);
&cur_link_settings);
down_link_bw_in_kbps = kbps_from_pbn(full_pbn);
/* pick the bottleneck */
end_to_end_bw_in_kbps = min(upper_link_bw_in_kbps,
down_link_bw_in_kbps);
mutex_unlock(&mst_mgr->lock);
/*
* use the maximum dsc compression bandwidth as the required
* bandwidth for the mode
@ -1643,8 +1643,7 @@ enum dc_status dm_dp_mst_is_port_support_mode(
/* check if mode could be supported within full_pbn */
bpp = convert_dc_color_depth_into_bpc(stream->timing.display_color_depth) * 3;
pbn = drm_dp_calc_pbn_mode(stream->timing.pix_clk_100hz / 10, bpp, false);
if (pbn > aconnector->mst_output_port->full_pbn)
if (pbn > full_pbn)
return DC_FAIL_BANDWIDTH_VALIDATE;
}


@ -820,22 +820,22 @@ static void dcn35_set_idle_state(struct clk_mgr *clk_mgr_base, bool allow_idle)
if (dc->config.disable_ips == DMUB_IPS_ENABLE ||
dc->config.disable_ips == DMUB_IPS_DISABLE_DYNAMIC) {
val |= DMUB_IPS1_ALLOW_MASK;
val |= DMUB_IPS2_ALLOW_MASK;
} else if (dc->config.disable_ips == DMUB_IPS_DISABLE_IPS1) {
val = val & ~DMUB_IPS1_ALLOW_MASK;
val = val & ~DMUB_IPS2_ALLOW_MASK;
} else if (dc->config.disable_ips == DMUB_IPS_DISABLE_IPS2) {
val |= DMUB_IPS1_ALLOW_MASK;
val = val & ~DMUB_IPS2_ALLOW_MASK;
} else if (dc->config.disable_ips == DMUB_IPS_DISABLE_IPS2_Z10) {
} else if (dc->config.disable_ips == DMUB_IPS_DISABLE_IPS1) {
val |= DMUB_IPS1_ALLOW_MASK;
val |= DMUB_IPS2_ALLOW_MASK;
} else if (dc->config.disable_ips == DMUB_IPS_DISABLE_IPS2) {
val = val & ~DMUB_IPS1_ALLOW_MASK;
val |= DMUB_IPS2_ALLOW_MASK;
} else if (dc->config.disable_ips == DMUB_IPS_DISABLE_IPS2_Z10) {
val = val & ~DMUB_IPS1_ALLOW_MASK;
val = val & ~DMUB_IPS2_ALLOW_MASK;
}
if (!allow_idle) {
val = val & ~DMUB_IPS1_ALLOW_MASK;
val = val & ~DMUB_IPS2_ALLOW_MASK;
val |= DMUB_IPS1_ALLOW_MASK;
val |= DMUB_IPS2_ALLOW_MASK;
}
dcn35_smu_write_ips_scratch(clk_mgr, val);


@ -3178,7 +3178,7 @@ static bool update_planes_and_stream_state(struct dc *dc,
struct pipe_ctx *otg_master = resource_get_otg_master_for_stream(&context->res_ctx,
context->streams[i]);
if (otg_master->stream->test_pattern.type != DP_TEST_PATTERN_VIDEO_MODE)
if (otg_master && otg_master->stream->test_pattern.type != DP_TEST_PATTERN_VIDEO_MODE)
resource_build_test_pattern_params(&context->res_ctx, otg_master);
}
}
@ -4934,8 +4934,8 @@ bool dc_dmub_is_ips_idle_state(struct dc *dc)
if (dc->hwss.get_idle_state)
idle_state = dc->hwss.get_idle_state(dc);
if ((idle_state & DMUB_IPS1_ALLOW_MASK) ||
(idle_state & DMUB_IPS2_ALLOW_MASK))
if (!(idle_state & DMUB_IPS1_ALLOW_MASK) ||
!(idle_state & DMUB_IPS2_ALLOW_MASK))
return true;
return false;


@ -5190,6 +5190,9 @@ bool dc_resource_acquire_secondary_pipe_for_mpc_odm_legacy(
sec_next = sec_pipe->next_odm_pipe;
sec_prev = sec_pipe->prev_odm_pipe;
if (pri_pipe == NULL)
return false;
*sec_pipe = *pri_pipe;
sec_pipe->top_pipe = sec_top;


@ -1202,11 +1202,11 @@ void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
allow_state = dc->hwss.get_idle_state(dc);
dc->hwss.set_idle_state(dc, false);
if (allow_state & DMUB_IPS2_ALLOW_MASK) {
if (!(allow_state & DMUB_IPS2_ALLOW_MASK)) {
// Wait for evaluation time
udelay(dc->debug.ips2_eval_delay_us);
commit_state = dc->hwss.get_idle_state(dc);
if (commit_state & DMUB_IPS2_COMMIT_MASK) {
if (!(commit_state & DMUB_IPS2_COMMIT_MASK)) {
// Tell PMFW to exit low power state
dc->clk_mgr->funcs->exit_low_power_state(dc->clk_mgr);
@ -1216,7 +1216,7 @@ void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
for (i = 0; i < max_num_polls; ++i) {
commit_state = dc->hwss.get_idle_state(dc);
if (!(commit_state & DMUB_IPS2_COMMIT_MASK))
if (commit_state & DMUB_IPS2_COMMIT_MASK)
break;
udelay(1);
@ -1235,10 +1235,10 @@ void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
}
dc_dmub_srv_notify_idle(dc, false);
if (allow_state & DMUB_IPS1_ALLOW_MASK) {
if (!(allow_state & DMUB_IPS1_ALLOW_MASK)) {
for (i = 0; i < max_num_polls; ++i) {
commit_state = dc->hwss.get_idle_state(dc);
if (!(commit_state & DMUB_IPS1_COMMIT_MASK))
if (commit_state & DMUB_IPS1_COMMIT_MASK)
break;
udelay(1);


@ -177,6 +177,7 @@ struct dc_panel_patch {
unsigned int disable_fams;
unsigned int skip_avmute;
unsigned int mst_start_top_delay;
unsigned int remove_sink_ext_caps;
};
struct dc_edid_caps {


@ -261,12 +261,6 @@ static void enc35_stream_encoder_enable(
/* invalid mode ! */
ASSERT_CRITICAL(false);
}
REG_UPDATE(DIG_FE_CLK_CNTL, DIG_FE_CLK_EN, 1);
REG_UPDATE(DIG_FE_EN_CNTL, DIG_FE_ENABLE, 1);
} else {
REG_UPDATE(DIG_FE_EN_CNTL, DIG_FE_ENABLE, 0);
REG_UPDATE(DIG_FE_CLK_CNTL, DIG_FE_CLK_EN, 0);
}
}
@ -436,6 +430,8 @@ static void enc35_disable_fifo(struct stream_encoder *enc)
struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, 0);
REG_UPDATE(DIG_FE_EN_CNTL, DIG_FE_ENABLE, 0);
REG_UPDATE(DIG_FE_CLK_CNTL, DIG_FE_CLK_EN, 0);
}
static void enc35_enable_fifo(struct stream_encoder *enc)
@ -443,6 +439,8 @@ static void enc35_enable_fifo(struct stream_encoder *enc)
struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_READ_START_LEVEL, 0x7);
REG_UPDATE(DIG_FE_CLK_CNTL, DIG_FE_CLK_EN, 1);
REG_UPDATE(DIG_FE_EN_CNTL, DIG_FE_ENABLE, 1);
enc35_reset_fifo(enc, true);
enc35_reset_fifo(enc, false);


@ -1088,6 +1088,9 @@ static bool detect_link_and_local_sink(struct dc_link *link,
if (sink->edid_caps.panel_patch.skip_scdc_overwrite)
link->ctx->dc->debug.hdmi20_disable = true;
if (sink->edid_caps.panel_patch.remove_sink_ext_caps)
link->dpcd_sink_ext_caps.raw = 0;
if (dc_is_hdmi_signal(link->connector_signal))
read_scdc_caps(link->ddc, link->local_sink);


@ -195,6 +195,7 @@ struct dmub_srv_region_params {
uint32_t vbios_size;
const uint8_t *fw_inst_const;
const uint8_t *fw_bss_data;
bool is_mailbox_in_inbox;
};
/**
@ -214,20 +215,25 @@ struct dmub_srv_region_params {
*/
struct dmub_srv_region_info {
uint32_t fb_size;
uint32_t inbox_size;
uint8_t num_regions;
struct dmub_region regions[DMUB_WINDOW_TOTAL];
};
/**
* struct dmub_srv_fb_params - parameters used for driver fb setup
* struct dmub_srv_memory_params - parameters used for driver fb setup
* @region_info: region info calculated by dmub service
* @cpu_addr: base cpu address for the framebuffer
* @gpu_addr: base gpu virtual address for the framebuffer
* @cpu_fb_addr: base cpu address for the framebuffer
* @cpu_inbox_addr: base cpu address for the gart
* @gpu_fb_addr: base gpu virtual address for the framebuffer
* @gpu_inbox_addr: base gpu virtual address for the gart
*/
struct dmub_srv_fb_params {
struct dmub_srv_memory_params {
const struct dmub_srv_region_info *region_info;
void *cpu_addr;
uint64_t gpu_addr;
void *cpu_fb_addr;
void *cpu_inbox_addr;
uint64_t gpu_fb_addr;
uint64_t gpu_inbox_addr;
};
/**
@ -563,8 +569,8 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
* DMUB_STATUS_OK - success
* DMUB_STATUS_INVALID - unspecified error
*/
enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
const struct dmub_srv_fb_params *params,
enum dmub_status dmub_srv_calc_mem_info(struct dmub_srv *dmub,
const struct dmub_srv_memory_params *params,
struct dmub_srv_fb_info *out);
/**


@ -434,7 +434,7 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
uint32_t fw_state_size = DMUB_FW_STATE_SIZE;
uint32_t trace_buffer_size = DMUB_TRACE_BUFFER_SIZE;
uint32_t scratch_mem_size = DMUB_SCRATCH_MEM_SIZE;
uint32_t previous_top = 0;
if (!dmub->sw_init)
return DMUB_STATUS_INVALID;
@ -459,8 +459,15 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
bios->base = dmub_align(stack->top, 256);
bios->top = bios->base + params->vbios_size;
mail->base = dmub_align(bios->top, 256);
mail->top = mail->base + DMUB_MAILBOX_SIZE;
if (params->is_mailbox_in_inbox) {
mail->base = 0;
mail->top = mail->base + DMUB_MAILBOX_SIZE;
previous_top = bios->top;
} else {
mail->base = dmub_align(bios->top, 256);
mail->top = mail->base + DMUB_MAILBOX_SIZE;
previous_top = mail->top;
}
fw_info = dmub_get_fw_meta_info(params);
@ -479,7 +486,7 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
dmub->fw_version = fw_info->fw_version;
}
trace_buff->base = dmub_align(mail->top, 256);
trace_buff->base = dmub_align(previous_top, 256);
trace_buff->top = trace_buff->base + dmub_align(trace_buffer_size, 64);
fw_state->base = dmub_align(trace_buff->top, 256);
@ -490,11 +497,14 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
out->fb_size = dmub_align(scratch_mem->top, 4096);
if (params->is_mailbox_in_inbox)
out->inbox_size = dmub_align(mail->top, 4096);
return DMUB_STATUS_OK;
}
enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
const struct dmub_srv_fb_params *params,
enum dmub_status dmub_srv_calc_mem_info(struct dmub_srv *dmub,
const struct dmub_srv_memory_params *params,
struct dmub_srv_fb_info *out)
{
uint8_t *cpu_base;
@ -509,8 +519,8 @@ enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
if (params->region_info->num_regions != DMUB_NUM_WINDOWS)
return DMUB_STATUS_INVALID;
cpu_base = (uint8_t *)params->cpu_addr;
gpu_base = params->gpu_addr;
cpu_base = (uint8_t *)params->cpu_fb_addr;
gpu_base = params->gpu_fb_addr;
for (i = 0; i < DMUB_NUM_WINDOWS; ++i) {
const struct dmub_region *reg =
@ -518,6 +528,12 @@ enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
out->fb[i].cpu_addr = cpu_base + reg->base;
out->fb[i].gpu_addr = gpu_base + reg->base;
if (i == DMUB_WINDOW_4_MAILBOX && params->cpu_inbox_addr != 0) {
out->fb[i].cpu_addr = (uint8_t *)params->cpu_inbox_addr + reg->base;
out->fb[i].gpu_addr = params->gpu_inbox_addr + reg->base;
}
out->fb[i].size = reg->top - reg->base;
}
@ -707,9 +723,16 @@ enum dmub_status dmub_srv_sync_inbox1(struct dmub_srv *dmub)
return DMUB_STATUS_INVALID;
if (dmub->hw_funcs.get_inbox1_rptr && dmub->hw_funcs.get_inbox1_wptr) {
dmub->inbox1_rb.rptr = dmub->hw_funcs.get_inbox1_rptr(dmub);
dmub->inbox1_rb.wrpt = dmub->hw_funcs.get_inbox1_wptr(dmub);
dmub->inbox1_last_wptr = dmub->inbox1_rb.wrpt;
uint32_t rptr = dmub->hw_funcs.get_inbox1_rptr(dmub);
uint32_t wptr = dmub->hw_funcs.get_inbox1_wptr(dmub);
if (rptr > dmub->inbox1_rb.capacity || wptr > dmub->inbox1_rb.capacity) {
return DMUB_STATUS_HW_FAILURE;
} else {
dmub->inbox1_rb.rptr = rptr;
dmub->inbox1_rb.wrpt = wptr;
dmub->inbox1_last_wptr = dmub->inbox1_rb.wrpt;
}
}
return DMUB_STATUS_OK;
@ -743,6 +766,11 @@ enum dmub_status dmub_srv_cmd_queue(struct dmub_srv *dmub,
if (!dmub->hw_init)
return DMUB_STATUS_INVALID;
if (dmub->inbox1_rb.rptr > dmub->inbox1_rb.capacity ||
dmub->inbox1_rb.wrpt > dmub->inbox1_rb.capacity) {
return DMUB_STATUS_HW_FAILURE;
}
if (dmub_rb_push_front(&dmub->inbox1_rb, cmd))
return DMUB_STATUS_OK;


@ -123,7 +123,7 @@ typedef enum {
VOLTAGE_GUARDBAND_COUNT
} GFX_GUARDBAND_e;
#define SMU_METRICS_TABLE_VERSION 0x8
#define SMU_METRICS_TABLE_VERSION 0x9
typedef struct __attribute__((packed, aligned(4))) {
uint32_t AccumulationCounter;
@ -211,6 +211,14 @@ typedef struct __attribute__((packed, aligned(4))) {
//XGMI Data transfer size
uint64_t XgmiReadDataSizeAcc[8];//in KByte
uint64_t XgmiWriteDataSizeAcc[8];//in KByte
//PCIE BW Data and error count
uint32_t PcieBandwidth[4];
uint32_t PCIeL0ToRecoveryCountAcc; // The Pcie counter itself is accumulated
uint32_t PCIenReplayAAcc; // The Pcie counter itself is accumulated
uint32_t PCIenReplayARolloverCountAcc; // The Pcie counter itself is accumulated
uint32_t PCIeNAKSentCountAcc; // The Pcie counter itself is accumulated
uint32_t PCIeNAKReceivedCountAcc; // The Pcie counter itself is accumulated
} MetricsTable_t;
#define SMU_VF_METRICS_TABLE_VERSION 0x3


@ -1454,7 +1454,7 @@ static int smu_v13_0_6_register_irq_handler(struct smu_context *smu)
static int smu_v13_0_6_notify_unload(struct smu_context *smu)
{
if (smu->smc_fw_version <= 0x553500)
if (amdgpu_in_reset(smu->adev))
return 0;
dev_dbg(smu->adev->dev, "Notify PMFW about driver unload");
@ -2095,6 +2095,14 @@ static ssize_t smu_v13_0_6_get_gpu_metrics(struct smu_context *smu, void **table
smu_v13_0_6_get_current_pcie_link_speed(smu);
gpu_metrics->pcie_bandwidth_acc =
SMUQ10_ROUND(metrics->PcieBandwidthAcc[0]);
gpu_metrics->pcie_bandwidth_inst =
SMUQ10_ROUND(metrics->PcieBandwidth[0]);
gpu_metrics->pcie_l0_to_recov_count_acc =
metrics->PCIeL0ToRecoveryCountAcc;
gpu_metrics->pcie_replay_count_acc =
metrics->PCIenReplayAAcc;
gpu_metrics->pcie_replay_rover_count_acc =
metrics->PCIenReplayARolloverCountAcc;
}
gpu_metrics->system_clock_counter = ktime_get_boottime_ns();


@ -336,6 +336,12 @@ static const struct dmi_system_id orientation_data[] = {
DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Duet 3 10IGL5"),
},
.driver_data = (void *)&lcd1200x1920_rightside_up,
}, { /* Lenovo Legion Go 8APU1 */
.matches = {
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Legion Go 8APU1"),
},
.driver_data = (void *)&lcd1600x2560_leftside_up,
}, { /* Lenovo Yoga Book X90F / X90L */
.matches = {
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),


@ -14,7 +14,7 @@ struct nvkm_event {
int index_nr;
spinlock_t refs_lock;
spinlock_t list_lock;
rwlock_t list_lock;
int *refs;
struct list_head ntfy;
@ -38,7 +38,7 @@ nvkm_event_init(const struct nvkm_event_func *func, struct nvkm_subdev *subdev,
int types_nr, int index_nr, struct nvkm_event *event)
{
spin_lock_init(&event->refs_lock);
spin_lock_init(&event->list_lock);
rwlock_init(&event->list_lock);
return __nvkm_event_init(func, subdev, types_nr, index_nr, event);
}


@ -726,6 +726,11 @@ nouveau_display_create(struct drm_device *dev)
if (nouveau_modeset != 2) {
ret = nvif_disp_ctor(&drm->client.device, "kmsDisp", 0, &disp->disp);
/* no display hw */
if (ret == -ENODEV) {
ret = 0;
goto disp_create_err;
}
if (!ret && (disp->disp.outp_mask || drm->vbios.dcb.entries)) {
nouveau_display_create_properties(dev);


@ -81,17 +81,17 @@ nvkm_event_ntfy_state(struct nvkm_event_ntfy *ntfy)
static void
nvkm_event_ntfy_remove(struct nvkm_event_ntfy *ntfy)
{
spin_lock_irq(&ntfy->event->list_lock);
write_lock_irq(&ntfy->event->list_lock);
list_del_init(&ntfy->head);
spin_unlock_irq(&ntfy->event->list_lock);
write_unlock_irq(&ntfy->event->list_lock);
}
static void
nvkm_event_ntfy_insert(struct nvkm_event_ntfy *ntfy)
{
spin_lock_irq(&ntfy->event->list_lock);
write_lock_irq(&ntfy->event->list_lock);
list_add_tail(&ntfy->head, &ntfy->event->ntfy);
spin_unlock_irq(&ntfy->event->list_lock);
write_unlock_irq(&ntfy->event->list_lock);
}
static void
@ -176,7 +176,7 @@ nvkm_event_ntfy(struct nvkm_event *event, int id, u32 bits)
return;
nvkm_trace(event->subdev, "event: ntfy %08x on %d\n", bits, id);
spin_lock_irqsave(&event->list_lock, flags);
read_lock_irqsave(&event->list_lock, flags);
list_for_each_entry_safe(ntfy, ntmp, &event->ntfy, head) {
if (ntfy->id == id && ntfy->bits & bits) {
@ -185,7 +185,7 @@ nvkm_event_ntfy(struct nvkm_event *event, int id, u32 bits)
}
}
spin_unlock_irqrestore(&event->list_lock, flags);
read_unlock_irqrestore(&event->list_lock, flags);
}
void


@ -689,8 +689,8 @@ r535_gsp_rpc_get(struct nvkm_gsp *gsp, u32 fn, u32 argc)
struct nvfw_gsp_rpc *rpc;
rpc = r535_gsp_cmdq_get(gsp, ALIGN(sizeof(*rpc) + argc, sizeof(u64)));
if (!rpc)
return NULL;
if (IS_ERR(rpc))
return ERR_CAST(rpc);
rpc->header_version = 0x03000000;
rpc->signature = ('C' << 24) | ('P' << 16) | ('R' << 8) | 'V';
@ -1159,7 +1159,7 @@ static void
r535_gsp_acpi_mux_id(acpi_handle handle, u32 id, MUX_METHOD_DATA_ELEMENT *mode,
MUX_METHOD_DATA_ELEMENT *part)
{
acpi_handle iter = NULL, handle_mux;
acpi_handle iter = NULL, handle_mux = NULL;
acpi_status status;
unsigned long long value;


@ -63,7 +63,7 @@ static int dw_reg_read(void *context, unsigned int reg, unsigned int *val)
{
struct dw_i2c_dev *dev = context;
*val = readl_relaxed(dev->base + reg);
*val = readl(dev->base + reg);
return 0;
}
@ -72,7 +72,7 @@ static int dw_reg_write(void *context, unsigned int reg, unsigned int val)
{
struct dw_i2c_dev *dev = context;
writel_relaxed(val, dev->base + reg);
writel(val, dev->base + reg);
return 0;
}
@ -81,7 +81,7 @@ static int dw_reg_read_swab(void *context, unsigned int reg, unsigned int *val)
{
struct dw_i2c_dev *dev = context;
*val = swab32(readl_relaxed(dev->base + reg));
*val = swab32(readl(dev->base + reg));
return 0;
}
@ -90,7 +90,7 @@ static int dw_reg_write_swab(void *context, unsigned int reg, unsigned int val)
{
struct dw_i2c_dev *dev = context;
writel_relaxed(swab32(val), dev->base + reg);
writel(swab32(val), dev->base + reg);
return 0;
}
@ -99,8 +99,8 @@ static int dw_reg_read_word(void *context, unsigned int reg, unsigned int *val)
{
struct dw_i2c_dev *dev = context;
*val = readw_relaxed(dev->base + reg) |
(readw_relaxed(dev->base + reg + 2) << 16);
*val = readw(dev->base + reg) |
(readw(dev->base + reg + 2) << 16);
return 0;
}
@ -109,8 +109,8 @@ static int dw_reg_write_word(void *context, unsigned int reg, unsigned int val)
{
struct dw_i2c_dev *dev = context;
writew_relaxed(val, dev->base + reg);
writew_relaxed(val >> 16, dev->base + reg + 2);
writew(val, dev->base + reg);
writew(val >> 16, dev->base + reg + 2);
return 0;
}

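Brief aside (not from the patch, and a simplification since the exact barrier is architecture specific): readl()/writel() add ordering that the _relaxed variants omit, roughly as sketched below.

/* Conceptual sketch only; not how the kernel actually defines readl(). */
static inline u32 demo_readl_ordered(const void __iomem *addr)
{
	u32 val = readl_relaxed(addr);	/* plain MMIO load */

	rmb();				/* order it against subsequent reads */
	return val;
}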

@ -771,8 +771,8 @@ static int ocores_i2c_resume(struct device *dev)
return ocores_init(dev, i2c);
}
static DEFINE_SIMPLE_DEV_PM_OPS(ocores_i2c_pm,
ocores_i2c_suspend, ocores_i2c_resume);
static DEFINE_NOIRQ_DEV_PM_OPS(ocores_i2c_pm,
ocores_i2c_suspend, ocores_i2c_resume);
static struct platform_driver ocores_i2c_driver = {
.probe = ocores_i2c_probe,


@ -265,6 +265,9 @@ struct pxa_i2c {
u32 hs_mask;
struct i2c_bus_recovery_info recovery;
struct pinctrl *pinctrl;
struct pinctrl_state *pinctrl_default;
struct pinctrl_state *pinctrl_recovery;
};
#define _IBMR(i2c) ((i2c)->reg_ibmr)
@ -1299,12 +1302,13 @@ static void i2c_pxa_prepare_recovery(struct i2c_adapter *adap)
*/
gpiod_set_value(i2c->recovery.scl_gpiod, ibmr & IBMR_SCLS);
gpiod_set_value(i2c->recovery.sda_gpiod, ibmr & IBMR_SDAS);
WARN_ON(pinctrl_select_state(i2c->pinctrl, i2c->pinctrl_recovery));
}
static void i2c_pxa_unprepare_recovery(struct i2c_adapter *adap)
{
struct pxa_i2c *i2c = adap->algo_data;
struct i2c_bus_recovery_info *bri = adap->bus_recovery_info;
u32 isr;
/*
@ -1318,7 +1322,7 @@ static void i2c_pxa_unprepare_recovery(struct i2c_adapter *adap)
i2c_pxa_do_reset(i2c);
}
WARN_ON(pinctrl_select_state(bri->pinctrl, bri->pins_default));
WARN_ON(pinctrl_select_state(i2c->pinctrl, i2c->pinctrl_default));
dev_dbg(&i2c->adap.dev, "recovery: IBMR 0x%08x ISR 0x%08x\n",
readl(_IBMR(i2c)), readl(_ISR(i2c)));
@ -1340,20 +1344,76 @@ static int i2c_pxa_init_recovery(struct pxa_i2c *i2c)
if (IS_ENABLED(CONFIG_I2C_PXA_SLAVE))
return 0;
bri->pinctrl = devm_pinctrl_get(dev);
if (PTR_ERR(bri->pinctrl) == -ENODEV) {
bri->pinctrl = NULL;
i2c->pinctrl = devm_pinctrl_get(dev);
if (PTR_ERR(i2c->pinctrl) == -ENODEV)
i2c->pinctrl = NULL;
if (IS_ERR(i2c->pinctrl))
return PTR_ERR(i2c->pinctrl);
if (!i2c->pinctrl)
return 0;
i2c->pinctrl_default = pinctrl_lookup_state(i2c->pinctrl,
PINCTRL_STATE_DEFAULT);
i2c->pinctrl_recovery = pinctrl_lookup_state(i2c->pinctrl, "recovery");
if (IS_ERR(i2c->pinctrl_default) || IS_ERR(i2c->pinctrl_recovery)) {
dev_info(dev, "missing pinmux recovery information: %ld %ld\n",
PTR_ERR(i2c->pinctrl_default),
PTR_ERR(i2c->pinctrl_recovery));
return 0;
}
/*
* Claiming GPIOs can influence the pinmux state, and may glitch the
* I2C bus. Do this carefully.
*/
bri->scl_gpiod = devm_gpiod_get(dev, "scl", GPIOD_OUT_HIGH_OPEN_DRAIN);
if (bri->scl_gpiod == ERR_PTR(-EPROBE_DEFER))
return -EPROBE_DEFER;
if (IS_ERR(bri->scl_gpiod)) {
dev_info(dev, "missing scl gpio recovery information: %pe\n",
bri->scl_gpiod);
return 0;
}
/*
* We have SCL. Pull SCL low and wait a bit so that SDA glitches
* have no effect.
*/
gpiod_direction_output(bri->scl_gpiod, 0);
udelay(10);
bri->sda_gpiod = devm_gpiod_get(dev, "sda", GPIOD_OUT_HIGH_OPEN_DRAIN);
/* Wait a bit in case of a SDA glitch, and then release SCL. */
udelay(10);
gpiod_direction_output(bri->scl_gpiod, 1);
if (bri->sda_gpiod == ERR_PTR(-EPROBE_DEFER))
return -EPROBE_DEFER;
if (IS_ERR(bri->sda_gpiod)) {
dev_info(dev, "missing sda gpio recovery information: %pe\n",
bri->sda_gpiod);
return 0;
}
if (IS_ERR(bri->pinctrl))
return PTR_ERR(bri->pinctrl);
bri->prepare_recovery = i2c_pxa_prepare_recovery;
bri->unprepare_recovery = i2c_pxa_unprepare_recovery;
bri->recover_bus = i2c_generic_scl_recovery;
i2c->adap.bus_recovery_info = bri;
return 0;
/*
* Claiming GPIOs can change the pinmux state, which confuses the
* pinctrl since pinctrl's idea of the current setting is unaffected
* by the pinmux change caused by claiming the GPIO. Work around that
* by switching pinctrl to the GPIO state here. We do it this way to
* avoid glitching the I2C bus.
*/
pinctrl_select_state(i2c->pinctrl, i2c->pinctrl_recovery);
return pinctrl_select_state(i2c->pinctrl, i2c->pinctrl_default);
}
static int i2c_pxa_probe(struct platform_device *dev)


@ -2379,12 +2379,12 @@ retry_baser:
break;
}
if (!shr)
gic_flush_dcache_to_poc(base, PAGE_ORDER_TO_SIZE(order));
its_write_baser(its, baser, val);
tmp = baser->val;
if (its->flags & ITS_FLAGS_FORCE_NON_SHAREABLE)
tmp &= ~GITS_BASER_SHAREABILITY_MASK;
if ((val ^ tmp) & GITS_BASER_SHAREABILITY_MASK) {
/*
* Shareability didn't stick. Just use
@ -2394,10 +2394,9 @@ retry_baser:
* non-cacheable as well.
*/
shr = tmp & GITS_BASER_SHAREABILITY_MASK;
if (!shr) {
if (!shr)
cache = GITS_BASER_nC;
gic_flush_dcache_to_poc(base, PAGE_ORDER_TO_SIZE(order));
}
goto retry_baser;
}
@ -2609,6 +2608,11 @@ static int its_alloc_tables(struct its_node *its)
/* erratum 24313: ignore memory access type */
cache = GITS_BASER_nCnB;
if (its->flags & ITS_FLAGS_FORCE_NON_SHAREABLE) {
cache = GITS_BASER_nC;
shr = 0;
}
for (i = 0; i < GITS_BASER_NR_REGS; i++) {
struct its_baser *baser = its->tables + i;
u64 val = its_read_baser(its, baser);


@ -254,7 +254,7 @@ enum evict_result {
typedef enum evict_result (*le_predicate)(struct lru_entry *le, void *context);
static struct lru_entry *lru_evict(struct lru *lru, le_predicate pred, void *context)
static struct lru_entry *lru_evict(struct lru *lru, le_predicate pred, void *context, bool no_sleep)
{
unsigned long tested = 0;
struct list_head *h = lru->cursor;
@ -295,7 +295,8 @@ static struct lru_entry *lru_evict(struct lru *lru, le_predicate pred, void *con
h = h->next;
cond_resched();
if (!no_sleep)
cond_resched();
}
return NULL;
@ -382,7 +383,10 @@ struct dm_buffer {
*/
struct buffer_tree {
struct rw_semaphore lock;
union {
struct rw_semaphore lock;
rwlock_t spinlock;
} u;
struct rb_root root;
} ____cacheline_aligned_in_smp;
@ -393,9 +397,12 @@ struct dm_buffer_cache {
* on the locks.
*/
unsigned int num_locks;
bool no_sleep;
struct buffer_tree trees[];
};
static DEFINE_STATIC_KEY_FALSE(no_sleep_enabled);
static inline unsigned int cache_index(sector_t block, unsigned int num_locks)
{
return dm_hash_locks_index(block, num_locks);
@ -403,22 +410,34 @@ static inline unsigned int cache_index(sector_t block, unsigned int num_locks)
static inline void cache_read_lock(struct dm_buffer_cache *bc, sector_t block)
{
down_read(&bc->trees[cache_index(block, bc->num_locks)].lock);
if (static_branch_unlikely(&no_sleep_enabled) && bc->no_sleep)
read_lock_bh(&bc->trees[cache_index(block, bc->num_locks)].u.spinlock);
else
down_read(&bc->trees[cache_index(block, bc->num_locks)].u.lock);
}
static inline void cache_read_unlock(struct dm_buffer_cache *bc, sector_t block)
{
up_read(&bc->trees[cache_index(block, bc->num_locks)].lock);
if (static_branch_unlikely(&no_sleep_enabled) && bc->no_sleep)
read_unlock_bh(&bc->trees[cache_index(block, bc->num_locks)].u.spinlock);
else
up_read(&bc->trees[cache_index(block, bc->num_locks)].u.lock);
}
static inline void cache_write_lock(struct dm_buffer_cache *bc, sector_t block)
{
down_write(&bc->trees[cache_index(block, bc->num_locks)].lock);
if (static_branch_unlikely(&no_sleep_enabled) && bc->no_sleep)
write_lock_bh(&bc->trees[cache_index(block, bc->num_locks)].u.spinlock);
else
down_write(&bc->trees[cache_index(block, bc->num_locks)].u.lock);
}
static inline void cache_write_unlock(struct dm_buffer_cache *bc, sector_t block)
{
up_write(&bc->trees[cache_index(block, bc->num_locks)].lock);
if (static_branch_unlikely(&no_sleep_enabled) && bc->no_sleep)
write_unlock_bh(&bc->trees[cache_index(block, bc->num_locks)].u.spinlock);
else
up_write(&bc->trees[cache_index(block, bc->num_locks)].u.lock);
}
/*
@ -442,18 +461,32 @@ static void lh_init(struct lock_history *lh, struct dm_buffer_cache *cache, bool
static void __lh_lock(struct lock_history *lh, unsigned int index)
{
if (lh->write)
down_write(&lh->cache->trees[index].lock);
else
down_read(&lh->cache->trees[index].lock);
if (lh->write) {
if (static_branch_unlikely(&no_sleep_enabled) && lh->cache->no_sleep)
write_lock_bh(&lh->cache->trees[index].u.spinlock);
else
down_write(&lh->cache->trees[index].u.lock);
} else {
if (static_branch_unlikely(&no_sleep_enabled) && lh->cache->no_sleep)
read_lock_bh(&lh->cache->trees[index].u.spinlock);
else
down_read(&lh->cache->trees[index].u.lock);
}
}
static void __lh_unlock(struct lock_history *lh, unsigned int index)
{
if (lh->write)
up_write(&lh->cache->trees[index].lock);
else
up_read(&lh->cache->trees[index].lock);
if (lh->write) {
if (static_branch_unlikely(&no_sleep_enabled) && lh->cache->no_sleep)
write_unlock_bh(&lh->cache->trees[index].u.spinlock);
else
up_write(&lh->cache->trees[index].u.lock);
} else {
if (static_branch_unlikely(&no_sleep_enabled) && lh->cache->no_sleep)
read_unlock_bh(&lh->cache->trees[index].u.spinlock);
else
up_read(&lh->cache->trees[index].u.lock);
}
}
/*
@ -502,14 +535,18 @@ static struct dm_buffer *list_to_buffer(struct list_head *l)
return le_to_buffer(le);
}
static void cache_init(struct dm_buffer_cache *bc, unsigned int num_locks)
static void cache_init(struct dm_buffer_cache *bc, unsigned int num_locks, bool no_sleep)
{
unsigned int i;
bc->num_locks = num_locks;
bc->no_sleep = no_sleep;
for (i = 0; i < bc->num_locks; i++) {
init_rwsem(&bc->trees[i].lock);
if (no_sleep)
rwlock_init(&bc->trees[i].u.spinlock);
else
init_rwsem(&bc->trees[i].u.lock);
bc->trees[i].root = RB_ROOT;
}
@ -648,7 +685,7 @@ static struct dm_buffer *__cache_evict(struct dm_buffer_cache *bc, int list_mode
struct lru_entry *le;
struct dm_buffer *b;
le = lru_evict(&bc->lru[list_mode], __evict_pred, &w);
le = lru_evict(&bc->lru[list_mode], __evict_pred, &w, bc->no_sleep);
if (!le)
return NULL;
@ -702,7 +739,7 @@ static void __cache_mark_many(struct dm_buffer_cache *bc, int old_mode, int new_
struct evict_wrapper w = {.lh = lh, .pred = pred, .context = context};
while (true) {
le = lru_evict(&bc->lru[old_mode], __evict_pred, &w);
le = lru_evict(&bc->lru[old_mode], __evict_pred, &w, bc->no_sleep);
if (!le)
break;
@ -915,10 +952,11 @@ static void cache_remove_range(struct dm_buffer_cache *bc,
{
unsigned int i;
BUG_ON(bc->no_sleep);
for (i = 0; i < bc->num_locks; i++) {
down_write(&bc->trees[i].lock);
down_write(&bc->trees[i].u.lock);
__remove_range(bc, &bc->trees[i].root, begin, end, pred, release);
up_write(&bc->trees[i].lock);
up_write(&bc->trees[i].u.lock);
}
}
@ -979,8 +1017,6 @@ struct dm_bufio_client {
struct dm_buffer_cache cache; /* must be last member */
};
static DEFINE_STATIC_KEY_FALSE(no_sleep_enabled);
/*----------------------------------------------------------------*/
#define dm_bufio_in_request() (!!current->bio_list)
@ -1871,7 +1907,8 @@ static void *new_read(struct dm_bufio_client *c, sector_t block,
if (need_submit)
submit_io(b, REQ_OP_READ, read_endio);
wait_on_bit_io(&b->state, B_READING, TASK_UNINTERRUPTIBLE);
if (nf != NF_GET) /* we already tested this condition above */
wait_on_bit_io(&b->state, B_READING, TASK_UNINTERRUPTIBLE);
if (b->read_error) {
int error = blk_status_to_errno(b->read_error);
@ -2421,7 +2458,7 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
r = -ENOMEM;
goto bad_client;
}
cache_init(&c->cache, num_locks);
cache_init(&c->cache, num_locks, (flags & DM_BUFIO_CLIENT_NO_SLEEP) != 0);
c->bdev = bdev;
c->block_size = block_size;
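The dm-bufio change above routes every tree lock through one set of helpers that pick either a sleeping rw_semaphore or a non-sleeping rwlock, keyed off the client's no_sleep flag and a static branch. Below is a rough user-space analogue of that "one wrapper, two backing lock types" idea using pthreads; the names (struct tree_lock, tree_read_lock, ...) are invented for illustration and this is not the kernel implementation.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical analogue of struct buffer_tree's lock union. */
struct tree_lock {
	bool no_sleep;
	union {
		pthread_rwlock_t rwlock;  /* blocking, like the rw_semaphore path */
		pthread_spinlock_t spin;  /* non-blocking, like the rwlock_t + bh path */
	} u;
};

static void tree_lock_init(struct tree_lock *l, bool no_sleep)
{
	l->no_sleep = no_sleep;
	if (no_sleep)
		pthread_spin_init(&l->u.spin, PTHREAD_PROCESS_PRIVATE);
	else
		pthread_rwlock_init(&l->u.rwlock, NULL);
}

static void tree_read_lock(struct tree_lock *l)
{
	if (l->no_sleep)
		pthread_spin_lock(&l->u.spin);     /* a spinlock has no shared mode */
	else
		pthread_rwlock_rdlock(&l->u.rwlock);
}

static void tree_read_unlock(struct tree_lock *l)
{
	if (l->no_sleep)
		pthread_spin_unlock(&l->u.spin);
	else
		pthread_rwlock_unlock(&l->u.rwlock);
}

int main(void)
{
	struct tree_lock l;

	tree_lock_init(&l, true);
	tree_read_lock(&l);
	puts("holding the no-sleep variant");
	tree_read_unlock(&l);
	return 0;
}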

View File

@ -1673,7 +1673,7 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned int size)
unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
gfp_t gfp_mask = GFP_NOWAIT | __GFP_HIGHMEM;
unsigned int remaining_size;
unsigned int order = MAX_ORDER - 1;
unsigned int order = MAX_ORDER;
retry:
if (unlikely(gfp_mask & __GFP_DIRECT_RECLAIM))

View File

@ -33,7 +33,7 @@ struct delay_c {
struct work_struct flush_expired_bios;
struct list_head delayed_bios;
struct task_struct *worker;
atomic_t may_delay;
bool may_delay;
struct delay_class read;
struct delay_class write;
@ -73,39 +73,6 @@ static inline bool delay_is_fast(struct delay_c *dc)
return !!dc->worker;
}
static void flush_delayed_bios_fast(struct delay_c *dc, bool flush_all)
{
struct dm_delay_info *delayed, *next;
mutex_lock(&delayed_bios_lock);
list_for_each_entry_safe(delayed, next, &dc->delayed_bios, list) {
if (flush_all || time_after_eq(jiffies, delayed->expires)) {
struct bio *bio = dm_bio_from_per_bio_data(delayed,
sizeof(struct dm_delay_info));
list_del(&delayed->list);
dm_submit_bio_remap(bio, NULL);
delayed->class->ops--;
}
}
mutex_unlock(&delayed_bios_lock);
}
static int flush_worker_fn(void *data)
{
struct delay_c *dc = data;
while (1) {
flush_delayed_bios_fast(dc, false);
if (unlikely(list_empty(&dc->delayed_bios))) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
} else
cond_resched();
}
return 0;
}
static void flush_bios(struct bio *bio)
{
struct bio *n;
@ -118,36 +85,61 @@ static void flush_bios(struct bio *bio)
}
}
static struct bio *flush_delayed_bios(struct delay_c *dc, bool flush_all)
static void flush_delayed_bios(struct delay_c *dc, bool flush_all)
{
struct dm_delay_info *delayed, *next;
struct bio_list flush_bio_list;
unsigned long next_expires = 0;
unsigned long start_timer = 0;
struct bio_list flush_bios = { };
bool start_timer = false;
bio_list_init(&flush_bio_list);
mutex_lock(&delayed_bios_lock);
list_for_each_entry_safe(delayed, next, &dc->delayed_bios, list) {
cond_resched();
if (flush_all || time_after_eq(jiffies, delayed->expires)) {
struct bio *bio = dm_bio_from_per_bio_data(delayed,
sizeof(struct dm_delay_info));
list_del(&delayed->list);
bio_list_add(&flush_bios, bio);
bio_list_add(&flush_bio_list, bio);
delayed->class->ops--;
continue;
}
if (!start_timer) {
start_timer = 1;
next_expires = delayed->expires;
} else
next_expires = min(next_expires, delayed->expires);
if (!delay_is_fast(dc)) {
if (!start_timer) {
start_timer = true;
next_expires = delayed->expires;
} else {
next_expires = min(next_expires, delayed->expires);
}
}
}
mutex_unlock(&delayed_bios_lock);
if (start_timer)
queue_timeout(dc, next_expires);
return bio_list_get(&flush_bios);
flush_bios(bio_list_get(&flush_bio_list));
}
static int flush_worker_fn(void *data)
{
struct delay_c *dc = data;
while (!kthread_should_stop()) {
flush_delayed_bios(dc, false);
mutex_lock(&delayed_bios_lock);
if (unlikely(list_empty(&dc->delayed_bios))) {
set_current_state(TASK_INTERRUPTIBLE);
mutex_unlock(&delayed_bios_lock);
schedule();
} else {
mutex_unlock(&delayed_bios_lock);
cond_resched();
}
}
return 0;
}
static void flush_expired_bios(struct work_struct *work)
@ -155,10 +147,7 @@ static void flush_expired_bios(struct work_struct *work)
struct delay_c *dc;
dc = container_of(work, struct delay_c, flush_expired_bios);
if (delay_is_fast(dc))
flush_delayed_bios_fast(dc, false);
else
flush_bios(flush_delayed_bios(dc, false));
flush_delayed_bios(dc, false);
}
static void delay_dtr(struct dm_target *ti)
@ -177,8 +166,7 @@ static void delay_dtr(struct dm_target *ti)
if (dc->worker)
kthread_stop(dc->worker);
if (!delay_is_fast(dc))
mutex_destroy(&dc->timer_lock);
mutex_destroy(&dc->timer_lock);
kfree(dc);
}
@ -236,7 +224,8 @@ static int delay_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->private = dc;
INIT_LIST_HEAD(&dc->delayed_bios);
atomic_set(&dc->may_delay, 1);
mutex_init(&dc->timer_lock);
dc->may_delay = true;
dc->argc = argc;
ret = delay_class_ctr(ti, &dc->read, argv);
@ -282,12 +271,12 @@ out:
"dm-delay-flush-worker");
if (IS_ERR(dc->worker)) {
ret = PTR_ERR(dc->worker);
dc->worker = NULL;
goto bad;
}
} else {
timer_setup(&dc->delay_timer, handle_delayed_timer, 0);
INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
mutex_init(&dc->timer_lock);
dc->kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
if (!dc->kdelayd_wq) {
ret = -EINVAL;
@ -312,7 +301,7 @@ static int delay_bio(struct delay_c *dc, struct delay_class *c, struct bio *bio)
struct dm_delay_info *delayed;
unsigned long expires = 0;
if (!c->delay || !atomic_read(&dc->may_delay))
if (!c->delay)
return DM_MAPIO_REMAPPED;
delayed = dm_per_bio_data(bio, sizeof(struct dm_delay_info));
@ -321,6 +310,10 @@ static int delay_bio(struct delay_c *dc, struct delay_class *c, struct bio *bio)
delayed->expires = expires = jiffies + msecs_to_jiffies(c->delay);
mutex_lock(&delayed_bios_lock);
if (unlikely(!dc->may_delay)) {
mutex_unlock(&delayed_bios_lock);
return DM_MAPIO_REMAPPED;
}
c->ops++;
list_add_tail(&delayed->list, &dc->delayed_bios);
mutex_unlock(&delayed_bios_lock);
@ -337,21 +330,20 @@ static void delay_presuspend(struct dm_target *ti)
{
struct delay_c *dc = ti->private;
atomic_set(&dc->may_delay, 0);
mutex_lock(&delayed_bios_lock);
dc->may_delay = false;
mutex_unlock(&delayed_bios_lock);
if (delay_is_fast(dc))
flush_delayed_bios_fast(dc, true);
else {
if (!delay_is_fast(dc))
del_timer_sync(&dc->delay_timer);
flush_bios(flush_delayed_bios(dc, true));
}
flush_delayed_bios(dc, true);
}
static void delay_resume(struct dm_target *ti)
{
struct delay_c *dc = ti->private;
atomic_set(&dc->may_delay, 1);
dc->may_delay = true;
}
static int delay_map(struct dm_target *ti, struct bio *bio)
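The reworked flush_delayed_bios() above collects expired entries onto a private bio_list while holding delayed_bios_lock and only submits them after dropping the lock. The stand-alone C sketch below shows that drain-then-process pattern; the item structure and helper names are invented for illustration.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct delayed_item {
	struct delayed_item *next;
	time_t expires;
	int id;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct delayed_item *pending;

static void flush_expired(int flush_all)
{
	struct delayed_item *expired = NULL, **pp, *it;
	time_t now = time(NULL);

	/* Move expired items to a private list while holding the lock... */
	pthread_mutex_lock(&list_lock);
	pp = &pending;
	while ((it = *pp) != NULL) {
		if (flush_all || now >= it->expires) {
			*pp = it->next;
			it->next = expired;
			expired = it;
		} else {
			pp = &it->next;
		}
	}
	pthread_mutex_unlock(&list_lock);

	/* ...and do the (potentially slow) submission outside of it. */
	while ((it = expired) != NULL) {
		expired = it->next;
		printf("submitting item %d\n", it->id);
		free(it);
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct delayed_item *it = malloc(sizeof(*it));
		it->id = i;
		it->expires = time(NULL);  /* already expired for the demo */
		pthread_mutex_lock(&list_lock);
		it->next = pending;
		pending = it;
		pthread_mutex_unlock(&list_lock);
	}
	flush_expired(0);
	return 0;
}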

View File

@ -185,7 +185,7 @@ static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io,
{
if (unlikely(verity_hash(v, verity_io_hash_req(v, io),
data, 1 << v->data_dev_block_bits,
verity_io_real_digest(v, io))))
verity_io_real_digest(v, io), true)))
return 0;
return memcmp(verity_io_real_digest(v, io), want_digest,
@ -386,7 +386,7 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io,
/* Always re-validate the corrected block against the expected hash */
r = verity_hash(v, verity_io_hash_req(v, io), fio->output,
1 << v->data_dev_block_bits,
verity_io_real_digest(v, io));
verity_io_real_digest(v, io), true);
if (unlikely(r < 0))
return r;

View File

@ -135,20 +135,21 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
* Wrapper for crypto_ahash_init, which handles verity salting.
*/
static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
struct crypto_wait *wait)
struct crypto_wait *wait, bool may_sleep)
{
int r;
ahash_request_set_tfm(req, v->tfm);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, (void *)wait);
ahash_request_set_callback(req,
may_sleep ? CRYPTO_TFM_REQ_MAY_SLEEP | CRYPTO_TFM_REQ_MAY_BACKLOG : 0,
crypto_req_done, (void *)wait);
crypto_init_wait(wait);
r = crypto_wait_req(crypto_ahash_init(req), wait);
if (unlikely(r < 0)) {
DMERR("crypto_ahash_init failed: %d", r);
if (r != -ENOMEM)
DMERR("crypto_ahash_init failed: %d", r);
return r;
}
@ -179,12 +180,12 @@ out:
}
int verity_hash(struct dm_verity *v, struct ahash_request *req,
const u8 *data, size_t len, u8 *digest)
const u8 *data, size_t len, u8 *digest, bool may_sleep)
{
int r;
struct crypto_wait wait;
r = verity_hash_init(v, req, &wait);
r = verity_hash_init(v, req, &wait, may_sleep);
if (unlikely(r < 0))
goto out;
@ -322,7 +323,7 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
r = verity_hash(v, verity_io_hash_req(v, io),
data, 1 << v->hash_dev_block_bits,
verity_io_real_digest(v, io));
verity_io_real_digest(v, io), !io->in_tasklet);
if (unlikely(r < 0))
goto release_ret_r;
@ -556,7 +557,7 @@ static int verity_verify_io(struct dm_verity_io *io)
continue;
}
r = verity_hash_init(v, req, &wait);
r = verity_hash_init(v, req, &wait, !io->in_tasklet);
if (unlikely(r < 0))
return r;
@ -652,7 +653,7 @@ static void verity_tasklet(unsigned long data)
io->in_tasklet = true;
err = verity_verify_io(io);
if (err == -EAGAIN) {
if (err == -EAGAIN || err == -ENOMEM) {
/* fallback to retrying with work-queue */
INIT_WORK(&io->work, verity_work);
queue_work(io->v->verify_wq, &io->work);
@ -1033,7 +1034,7 @@ static int verity_alloc_zero_digest(struct dm_verity *v)
goto out;
r = verity_hash(v, req, zero_data, 1 << v->data_dev_block_bits,
v->zero_digest);
v->zero_digest, true);
out:
kfree(req);

View File

@ -128,7 +128,7 @@ extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
u8 *data, size_t len));
extern int verity_hash(struct dm_verity *v, struct ahash_request *req,
const u8 *data, size_t len, u8 *digest);
const u8 *data, size_t len, u8 *digest, bool may_sleep);
extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io,
sector_t block, u8 *digest, bool *is_zero);

View File

@ -682,10 +682,24 @@ static void xgbe_service(struct work_struct *work)
static void xgbe_service_timer(struct timer_list *t)
{
struct xgbe_prv_data *pdata = from_timer(pdata, t, service_timer);
struct xgbe_channel *channel;
unsigned int i;
queue_work(pdata->dev_workqueue, &pdata->service_work);
mod_timer(&pdata->service_timer, jiffies + HZ);
if (!pdata->tx_usecs)
return;
for (i = 0; i < pdata->channel_count; i++) {
channel = pdata->channel[i];
if (!channel->tx_ring || channel->tx_timer_active)
break;
channel->tx_timer_active = 1;
mod_timer(&channel->tx_timer,
jiffies + usecs_to_jiffies(pdata->tx_usecs));
}
}
static void xgbe_init_timers(struct xgbe_prv_data *pdata)

View File

@ -314,10 +314,15 @@ static int xgbe_get_link_ksettings(struct net_device *netdev,
cmd->base.phy_address = pdata->phy.address;
cmd->base.autoneg = pdata->phy.autoneg;
cmd->base.speed = pdata->phy.speed;
cmd->base.duplex = pdata->phy.duplex;
if (netif_carrier_ok(netdev)) {
cmd->base.speed = pdata->phy.speed;
cmd->base.duplex = pdata->phy.duplex;
} else {
cmd->base.speed = SPEED_UNKNOWN;
cmd->base.duplex = DUPLEX_UNKNOWN;
}
cmd->base.autoneg = pdata->phy.autoneg;
cmd->base.port = PORT_NONE;
XGBE_LM_COPY(cmd, supported, lks, supported);

View File

@ -1193,7 +1193,19 @@ static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata)
if (pdata->phy.duplex != DUPLEX_FULL)
return -EINVAL;
xgbe_set_mode(pdata, mode);
/* Force the mode change for SFI in Fixed PHY config.
* Fixed PHY configs need the PLL to be enabled while doing a mode set.
* When no SFP module is connected during boot, the driver assumes
* AN is ON and attempts autonegotiation. However, if the connected
* SFP comes up in Fixed PHY config, the link will not come up as
* PLL isn't enabled while the initial mode set command is issued.
* So, force the mode change for SFI in Fixed PHY configuration to
* fix link issues.
*/
if (mode == XGBE_MODE_SFI)
xgbe_change_mode(pdata, mode);
else
xgbe_set_mode(pdata, mode);
return 0;
}

View File

@ -3844,7 +3844,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
int aq_ret = 0;
int i, ret;
int i;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = -EINVAL;
@ -3868,8 +3868,10 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
}
cfilter = kzalloc(sizeof(*cfilter), GFP_KERNEL);
if (!cfilter)
return -ENOMEM;
if (!cfilter) {
aq_ret = -ENOMEM;
goto err_out;
}
/* parse destination mac address */
for (i = 0; i < ETH_ALEN; i++)
@ -3917,13 +3919,13 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
/* Adding cloud filter programmed as TC filter */
if (tcf.dst_port)
ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter, true);
aq_ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter, true);
else
ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
if (ret) {
aq_ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
if (aq_ret) {
dev_err(&pf->pdev->dev,
"VF %d: Failed to add cloud filter, err %pe aq_err %s\n",
vf->vf_id, ERR_PTR(ret),
vf->vf_id, ERR_PTR(aq_ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto err_free;
}

View File

@ -7403,15 +7403,6 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
goto err_vsi_rebuild;
}
/* configure PTP timestamping after VSI rebuild */
if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) {
if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_SELF)
ice_ptp_cfg_timestamp(pf, false);
else if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_ALL)
/* for E82x PHC owner always need to have interrupts */
ice_ptp_cfg_timestamp(pf, true);
}
err = ice_eswitch_rebuild(pf);
if (err) {
dev_err(dev, "Switchdev rebuild failed: %d\n", err);
@ -7463,6 +7454,9 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
ice_plug_aux_dev(pf);
if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG))
ice_lag_rebuild(pf);
/* Restore timestamp mode settings after VSI rebuild */
ice_ptp_restore_timestamp_mode(pf);
return;
err_vsi_rebuild:

View File

@ -256,48 +256,42 @@ ice_verify_pin_e810t(struct ptp_clock_info *info, unsigned int pin,
}
/**
* ice_ptp_configure_tx_tstamp - Enable or disable Tx timestamp interrupt
* @pf: The PF pointer to search in
* @on: bool value for whether timestamp interrupt is enabled or disabled
* ice_ptp_cfg_tx_interrupt - Configure Tx timestamp interrupt for the device
* @pf: Board private structure
*
* Program the device to respond appropriately to the Tx timestamp interrupt
* cause.
*/
static void ice_ptp_configure_tx_tstamp(struct ice_pf *pf, bool on)
static void ice_ptp_cfg_tx_interrupt(struct ice_pf *pf)
{
struct ice_hw *hw = &pf->hw;
bool enable;
u32 val;
switch (pf->ptp.tx_interrupt_mode) {
case ICE_PTP_TX_INTERRUPT_ALL:
/* React to interrupts across all quads. */
wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x1f);
enable = true;
break;
case ICE_PTP_TX_INTERRUPT_NONE:
/* Do not react to interrupts on any quad. */
wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x0);
enable = false;
break;
case ICE_PTP_TX_INTERRUPT_SELF:
default:
enable = pf->ptp.tstamp_config.tx_type == HWTSTAMP_TX_ON;
break;
}
/* Configure the Tx timestamp interrupt */
val = rd32(&pf->hw, PFINT_OICR_ENA);
if (on)
val = rd32(hw, PFINT_OICR_ENA);
if (enable)
val |= PFINT_OICR_TSYN_TX_M;
else
val &= ~PFINT_OICR_TSYN_TX_M;
wr32(&pf->hw, PFINT_OICR_ENA, val);
}
/**
* ice_set_tx_tstamp - Enable or disable Tx timestamping
* @pf: The PF pointer to search in
* @on: bool value for whether timestamps are enabled or disabled
*/
static void ice_set_tx_tstamp(struct ice_pf *pf, bool on)
{
struct ice_vsi *vsi;
u16 i;
vsi = ice_get_main_vsi(pf);
if (!vsi)
return;
/* Set the timestamp enable flag for all the Tx rings */
ice_for_each_txq(vsi, i) {
if (!vsi->tx_rings[i])
continue;
vsi->tx_rings[i]->ptp_tx = on;
}
if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_SELF)
ice_ptp_configure_tx_tstamp(pf, on);
pf->ptp.tstamp_config.tx_type = on ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
wr32(hw, PFINT_OICR_ENA, val);
}
/**
@ -311,7 +305,7 @@ static void ice_set_rx_tstamp(struct ice_pf *pf, bool on)
u16 i;
vsi = ice_get_main_vsi(pf);
if (!vsi)
if (!vsi || !vsi->rx_rings)
return;
/* Set the timestamp flag for all the Rx rings */
@ -320,23 +314,50 @@ static void ice_set_rx_tstamp(struct ice_pf *pf, bool on)
continue;
vsi->rx_rings[i]->ptp_rx = on;
}
pf->ptp.tstamp_config.rx_filter = on ? HWTSTAMP_FILTER_ALL :
HWTSTAMP_FILTER_NONE;
}
/**
* ice_ptp_cfg_timestamp - Configure timestamp for init/deinit
* ice_ptp_disable_timestamp_mode - Disable current timestamp mode
* @pf: Board private structure
* @ena: bool value to enable or disable time stamp
*
* This function will configure timestamping during PTP initialization
* and deinitialization
* Called during preparation for reset to temporarily disable timestamping on
* the device. Called during remove to disable timestamping while cleaning up
* driver resources.
*/
void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena)
static void ice_ptp_disable_timestamp_mode(struct ice_pf *pf)
{
ice_set_tx_tstamp(pf, ena);
ice_set_rx_tstamp(pf, ena);
struct ice_hw *hw = &pf->hw;
u32 val;
val = rd32(hw, PFINT_OICR_ENA);
val &= ~PFINT_OICR_TSYN_TX_M;
wr32(hw, PFINT_OICR_ENA, val);
ice_set_rx_tstamp(pf, false);
}
/**
* ice_ptp_restore_timestamp_mode - Restore timestamp configuration
* @pf: Board private structure
*
* Called at the end of rebuild to restore timestamp configuration after
* a device reset.
*/
void ice_ptp_restore_timestamp_mode(struct ice_pf *pf)
{
struct ice_hw *hw = &pf->hw;
bool enable_rx;
ice_ptp_cfg_tx_interrupt(pf);
enable_rx = pf->ptp.tstamp_config.rx_filter == HWTSTAMP_FILTER_ALL;
ice_set_rx_tstamp(pf, enable_rx);
/* Trigger an immediate software interrupt to ensure that timestamps
* which occurred during reset are handled now.
*/
wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M);
ice_flush(hw);
}
/**
@ -2037,10 +2058,10 @@ ice_ptp_set_timestamp_mode(struct ice_pf *pf, struct hwtstamp_config *config)
{
switch (config->tx_type) {
case HWTSTAMP_TX_OFF:
ice_set_tx_tstamp(pf, false);
pf->ptp.tstamp_config.tx_type = HWTSTAMP_TX_OFF;
break;
case HWTSTAMP_TX_ON:
ice_set_tx_tstamp(pf, true);
pf->ptp.tstamp_config.tx_type = HWTSTAMP_TX_ON;
break;
default:
return -ERANGE;
@ -2048,7 +2069,7 @@ ice_ptp_set_timestamp_mode(struct ice_pf *pf, struct hwtstamp_config *config)
switch (config->rx_filter) {
case HWTSTAMP_FILTER_NONE:
ice_set_rx_tstamp(pf, false);
pf->ptp.tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
break;
case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
@ -2064,12 +2085,15 @@ ice_ptp_set_timestamp_mode(struct ice_pf *pf, struct hwtstamp_config *config)
case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
case HWTSTAMP_FILTER_NTP_ALL:
case HWTSTAMP_FILTER_ALL:
ice_set_rx_tstamp(pf, true);
pf->ptp.tstamp_config.rx_filter = HWTSTAMP_FILTER_ALL;
break;
default:
return -ERANGE;
}
/* Immediately update the device timestamping mode */
ice_ptp_restore_timestamp_mode(pf);
return 0;
}
@ -2737,7 +2761,7 @@ void ice_ptp_prepare_for_reset(struct ice_pf *pf)
clear_bit(ICE_FLAG_PTP, pf->flags);
/* Disable timestamping for both Tx and Rx */
ice_ptp_cfg_timestamp(pf, false);
ice_ptp_disable_timestamp_mode(pf);
kthread_cancel_delayed_work_sync(&ptp->work);
@ -2803,15 +2827,7 @@ static int ice_ptp_init_owner(struct ice_pf *pf)
/* Release the global hardware lock */
ice_ptp_unlock(hw);
if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_ALL) {
/* The clock owner for this device type handles the timestamp
* interrupt for all ports.
*/
ice_ptp_configure_tx_tstamp(pf, true);
/* React to interrupts on all quads for E82x */
wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x1f);
if (!ice_is_e810(hw)) {
/* Enable quad interrupts */
err = ice_ptp_tx_ena_intr(pf, true, itr);
if (err)
@ -2881,13 +2897,6 @@ static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
case ICE_PHY_E810:
return ice_ptp_init_tx_e810(pf, &ptp_port->tx);
case ICE_PHY_E822:
/* Non-owner PFs don't react to any interrupts on E82x,
* neither on own quad nor on others
*/
if (!ice_ptp_pf_handles_tx_interrupt(pf)) {
ice_ptp_configure_tx_tstamp(pf, false);
wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x0);
}
kthread_init_delayed_work(&ptp_port->ov_work,
ice_ptp_wait_for_offsets);
@ -3032,6 +3041,9 @@ void ice_ptp_init(struct ice_pf *pf)
/* Start the PHY timestamping block */
ice_ptp_reset_phy_timestamping(pf);
/* Configure initial Tx interrupt settings */
ice_ptp_cfg_tx_interrupt(pf);
set_bit(ICE_FLAG_PTP, pf->flags);
err = ice_ptp_init_work(pf, ptp);
if (err)
@ -3067,7 +3079,7 @@ void ice_ptp_release(struct ice_pf *pf)
return;
/* Disable timestamping for both Tx and Rx */
ice_ptp_cfg_timestamp(pf, false);
ice_ptp_disable_timestamp_mode(pf);
ice_ptp_remove_auxbus_device(pf);

View File

@ -292,7 +292,7 @@ int ice_ptp_clock_index(struct ice_pf *pf);
struct ice_pf;
int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr);
int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr);
void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena);
void ice_ptp_restore_timestamp_mode(struct ice_pf *pf);
void ice_ptp_extts_event(struct ice_pf *pf);
s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb);
@ -317,8 +317,7 @@ static inline int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr)
return -EOPNOTSUPP;
}
static inline void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena) { }
static inline void ice_ptp_restore_timestamp_mode(struct ice_pf *pf) { }
static inline void ice_ptp_extts_event(struct ice_pf *pf) { }
static inline s8
ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)

View File

@ -2306,9 +2306,6 @@ ice_tstamp(struct ice_tx_ring *tx_ring, struct sk_buff *skb,
if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
return;
if (!tx_ring->ptp_tx)
return;
/* Tx timestamps cannot be sampled when doing TSO */
if (first->tx_flags & ICE_TX_FLAGS_TSO)
return;

View File

@ -380,7 +380,6 @@ struct ice_tx_ring {
#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2)
u8 flags;
u8 dcb_tc; /* Traffic class of ring */
u8 ptp_tx;
} ____cacheline_internodealigned_in_smp;
static inline bool ice_ring_uses_build_skb(struct ice_rx_ring *ring)

View File

@ -1088,6 +1088,7 @@ int otx2_add_flow(struct otx2_nic *pfvf, struct ethtool_rxnfc *nfc)
struct ethhdr *eth_hdr;
bool new = false;
int err = 0;
u64 vf_num;
u32 ring;
if (!flow_cfg->max_flows) {
@ -1100,7 +1101,21 @@ int otx2_add_flow(struct otx2_nic *pfvf, struct ethtool_rxnfc *nfc)
if (!(pfvf->flags & OTX2_FLAG_NTUPLE_SUPPORT))
return -ENOMEM;
if (ring >= pfvf->hw.rx_queues && fsp->ring_cookie != RX_CLS_FLOW_DISC)
/* The number of queues on a VF can be greater or less than
* the number of queues on the PF, so there is no need to check the
* queue count when the PF is installing a filter on behalf of its VF.
* Below is the expected vf_num value
* based on the ethtool commands.
*
* e.g.
* 1. ethtool -U <netdev> ... action -1 ==> vf_num:255
* 2. ethtool -U <netdev> ... action <queue_num> ==> vf_num:0
* 3. ethtool -U <netdev> ... vf <vf_idx> queue <queue_num> ==>
* vf_num:vf_idx+1
*/
vf_num = ethtool_get_flow_spec_ring_vf(fsp->ring_cookie);
if (!is_otx2_vf(pfvf->pcifunc) && !vf_num &&
ring >= pfvf->hw.rx_queues && fsp->ring_cookie != RX_CLS_FLOW_DISC)
return -EINVAL;
if (fsp->location >= otx2_get_maxflows(flow_cfg))
@ -1182,6 +1197,9 @@ int otx2_add_flow(struct otx2_nic *pfvf, struct ethtool_rxnfc *nfc)
flow_cfg->nr_flows++;
}
if (flow->is_vf)
netdev_info(pfvf->netdev,
"Make sure that VF's queue number is within its queue limit\n");
return 0;
}
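For reference, the vf_num values listed in the comment above come from how the ethtool ring_cookie packs the queue index and the VF number, which the driver reads back with ethtool_get_flow_spec_ring_vf(). The snippet below decodes a cookie with plain masks; the mask/shift constants mirror the usual ethtool UAPI layout (queue in the low 32 bits, VF in bits 32-39) and should be checked against include/uapi/linux/ethtool.h rather than taken as authoritative.

#include <stdint.h>
#include <stdio.h>

#define RING_COOKIE_RING_MASK  0x00000000FFFFFFFFULL
#define RING_COOKIE_VF_MASK    0x000000FF00000000ULL
#define RING_COOKIE_VF_SHIFT   32

static uint64_t ring_cookie_vf(uint64_t cookie)
{
	return (cookie & RING_COOKIE_VF_MASK) >> RING_COOKIE_VF_SHIFT;
}

int main(void)
{
	/* "action -1" (RX_CLS_FLOW_DISC) is all ones, so the VF field reads 255 */
	uint64_t drop_cookie = ~0ULL;
	/* "vf 2 queue 3" is encoded as (vf_idx + 1) << 32 | queue, per the comment above */
	uint64_t vf_cookie = ((uint64_t)(2 + 1) << RING_COOKIE_VF_SHIFT) | 3;

	printf("drop: vf=%llu\n", (unsigned long long)ring_cookie_vf(drop_cookie));
	printf("vf 2 queue 3: vf=%llu ring=%llu\n",
	       (unsigned long long)ring_cookie_vf(vf_cookie),
	       (unsigned long long)(vf_cookie & RING_COOKIE_RING_MASK));
	return 0;
}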

View File

@ -1934,6 +1934,8 @@ int otx2_stop(struct net_device *netdev)
/* Clear RSS enable flag */
rss = &pf->hw.rss_info;
rss->enable = false;
if (!netif_is_rxfh_configured(netdev))
kfree(rss->rss_ctx[DEFAULT_RSS_CONTEXT_GROUP]);
/* Cleanup Queue IRQ */
vec = pci_irq_vector(pf->pdev,

View File

@ -2599,9 +2599,7 @@ static void rtl_set_rx_mode(struct net_device *dev)
rx_mode &= ~AcceptMulticast;
} else if (netdev_mc_count(dev) > MC_FILTER_LIMIT ||
dev->flags & IFF_ALLMULTI ||
tp->mac_version == RTL_GIGA_MAC_VER_35 ||
tp->mac_version == RTL_GIGA_MAC_VER_46 ||
tp->mac_version == RTL_GIGA_MAC_VER_48) {
tp->mac_version == RTL_GIGA_MAC_VER_35) {
/* accept all multicasts */
} else if (netdev_mc_empty(dev)) {
rx_mode &= ~AcceptMulticast;

View File

@ -280,7 +280,7 @@ config DWMAC_INTEL
config DWMAC_LOONGSON
tristate "Loongson PCI DWMAC support"
default MACH_LOONGSON64
depends on STMMAC_ETH && PCI
depends on (MACH_LOONGSON64 || COMPILE_TEST) && STMMAC_ETH && PCI
depends on COMMON_CLK
help
This selects the LOONGSON PCI bus support for the stmmac driver,

View File

@ -1769,10 +1769,12 @@ int wx_sw_init(struct wx *wx)
wx->subsystem_device_id = pdev->subsystem_device;
} else {
err = wx_flash_read_dword(wx, 0xfffdc, &ssid);
if (!err)
wx->subsystem_device_id = swab16((u16)ssid);
if (err < 0) {
wx_err(wx, "read of internal subsystem device id failed\n");
return err;
}
return err;
wx->subsystem_device_id = swab16((u16)ssid);
}
wx->mac_table = kcalloc(wx->mac.num_rar_entries,

View File

@ -121,10 +121,8 @@ static int ngbe_sw_init(struct wx *wx)
/* PCI config space info */
err = wx_sw_init(wx);
if (err < 0) {
wx_err(wx, "read of internal subsystem device id failed\n");
if (err < 0)
return err;
}
/* mac type, phy type , oem type */
ngbe_init_type_code(wx);

View File

@ -364,10 +364,8 @@ static int txgbe_sw_init(struct wx *wx)
/* PCI config space info */
err = wx_sw_init(wx);
if (err < 0) {
wx_err(wx, "read of internal subsystem device id failed\n");
if (err < 0)
return err;
}
txgbe_init_type_code(wx);

View File

@ -966,7 +966,7 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
/* Tx Full Checksum Offload Enabled */
cur_p->app0 |= 2;
} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
} else if (lp->features & XAE_FEATURE_PARTIAL_TX_CSUM) {
csum_start_off = skb_transport_offset(skb);
csum_index_off = csum_start_off + skb->csum_offset;
/* Tx Partial Checksum Offload Enabled */

View File

@ -2206,9 +2206,6 @@ static int netvsc_vf_join(struct net_device *vf_netdev,
goto upper_link_failed;
}
/* set slave flag before open to prevent IPv6 addrconf */
vf_netdev->flags |= IFF_SLAVE;
schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
call_netdevice_notifiers(NETDEV_JOIN, vf_netdev);
@ -2315,16 +2312,18 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
}
/* Fallback path to check synthetic vf with
* help of mac addr
/* Fallback path to check synthetic vf with help of mac addr.
* Because this function can be called before vf_netdev is
* initialized (NETDEV_POST_INIT) when its perm_addr has not been copied
* from dev_addr, also try to match against its dev_addr.
* Note: On Hyper-V and Azure, it's not possible to set a MAC address
* on a VF that matches the MAC of an unrelated NETVSC device.
*/
list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
ndev = hv_get_drvdata(ndev_ctx->device_ctx);
if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr)) {
netdev_notice(vf_netdev,
"falling back to mac addr based matching\n");
if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr) ||
ether_addr_equal(vf_netdev->dev_addr, ndev->perm_addr))
return ndev;
}
}
netdev_notice(vf_netdev,
@ -2332,6 +2331,19 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
return NULL;
}
static int netvsc_prepare_bonding(struct net_device *vf_netdev)
{
struct net_device *ndev;
ndev = get_netvsc_byslot(vf_netdev);
if (!ndev)
return NOTIFY_DONE;
/* set slave flag before open to prevent IPv6 addrconf */
vf_netdev->flags |= IFF_SLAVE;
return NOTIFY_DONE;
}
static int netvsc_register_vf(struct net_device *vf_netdev)
{
struct net_device_context *net_device_ctx;
@ -2531,6 +2543,21 @@ static int netvsc_probe(struct hv_device *dev,
goto devinfo_failed;
}
/* We must get rtnl lock before scheduling nvdev->subchan_work,
* otherwise netvsc_subchan_work() can get rtnl lock first and wait
* all subchannels to show up, but that may not happen because
* netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer()
* -> ... -> device_add() -> ... -> __device_attach() can't get
* the device lock, so all the subchannels can't be processed --
* finally netvsc_subchan_work() hangs forever.
*
* The rtnl lock also needs to be held before rndis_filter_device_add()
* which advertises nvsp_2_vsc_capability / sriov bit, and triggers
* VF NIC offering and registering. If the VF NIC finished register_netdev()
* earlier, it may cause a name-based config failure.
*/
rtnl_lock();
nvdev = rndis_filter_device_add(dev, device_info);
if (IS_ERR(nvdev)) {
ret = PTR_ERR(nvdev);
@ -2540,16 +2567,6 @@ static int netvsc_probe(struct hv_device *dev,
eth_hw_addr_set(net, device_info->mac_adr);
/* We must get rtnl lock before scheduling nvdev->subchan_work,
* otherwise netvsc_subchan_work() can get rtnl lock first and wait
* all subchannels to show up, but that may not happen because
* netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer()
* -> ... -> device_add() -> ... -> __device_attach() can't get
* the device lock, so all the subchannels can't be processed --
* finally netvsc_subchan_work() hangs forever.
*/
rtnl_lock();
if (nvdev->num_chn > 1)
schedule_work(&nvdev->subchan_work);
@ -2586,9 +2603,9 @@ static int netvsc_probe(struct hv_device *dev,
return 0;
register_failed:
rtnl_unlock();
rndis_filter_device_remove(dev, nvdev);
rndis_failed:
rtnl_unlock();
netvsc_devinfo_put(device_info);
devinfo_failed:
free_percpu(net_device_ctx->vf_stats);
@ -2753,6 +2770,8 @@ static int netvsc_netdev_event(struct notifier_block *this,
return NOTIFY_DONE;
switch (event) {
case NETDEV_POST_INIT:
return netvsc_prepare_bonding(event_dev);
case NETDEV_REGISTER:
return netvsc_register_vf(event_dev);
case NETDEV_UNREGISTER:
@ -2788,12 +2807,17 @@ static int __init netvsc_drv_init(void)
}
netvsc_ring_bytes = ring_size * PAGE_SIZE;
register_netdevice_notifier(&netvsc_netdev_notifier);
ret = vmbus_driver_register(&netvsc_drv);
if (ret)
return ret;
goto err_vmbus_reg;
register_netdevice_notifier(&netvsc_netdev_notifier);
return 0;
err_vmbus_reg:
unregister_netdevice_notifier(&netvsc_netdev_notifier);
return ret;
}
MODULE_LICENSE("GPL");

View File

@ -78,7 +78,7 @@ REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
0x0001c000 + 0x12000 * GSI_EE_AP, 0x80);
static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
[R_LENGTH] = GENMASK(19, 0),
[R_LENGTH] = GENMASK(23, 0),
};
REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,

View File

@ -7,6 +7,7 @@
#include <linux/filter.h>
#include <linux/netfilter_netdev.h>
#include <linux/bpf_mprog.h>
#include <linux/indirect_call_wrapper.h>
#include <net/netkit.h>
#include <net/dst.h>
@ -68,6 +69,7 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
netdev_tx_t ret_dev = NET_XMIT_SUCCESS;
const struct bpf_mprog_entry *entry;
struct net_device *peer;
int len = skb->len;
rcu_read_lock();
peer = rcu_dereference(nk->peer);
@ -85,15 +87,22 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
case NETKIT_PASS:
skb->protocol = eth_type_trans(skb, skb->dev);
skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
__netif_rx(skb);
if (likely(__netif_rx(skb) == NET_RX_SUCCESS)) {
dev_sw_netstats_tx_add(dev, 1, len);
dev_sw_netstats_rx_add(peer, len);
} else {
goto drop_stats;
}
break;
case NETKIT_REDIRECT:
dev_sw_netstats_tx_add(dev, 1, len);
skb_do_redirect(skb);
break;
case NETKIT_DROP:
default:
drop:
kfree_skb(skb);
drop_stats:
dev_core_stats_tx_dropped_inc(dev);
ret_dev = NET_XMIT_DROP;
break;
@ -169,11 +178,18 @@ out:
rcu_read_unlock();
}
static struct net_device *netkit_peer_dev(struct net_device *dev)
INDIRECT_CALLABLE_SCOPE struct net_device *netkit_peer_dev(struct net_device *dev)
{
return rcu_dereference(netkit_priv(dev)->peer);
}
static void netkit_get_stats(struct net_device *dev,
struct rtnl_link_stats64 *stats)
{
dev_fetch_sw_netstats(stats, dev->tstats);
stats->tx_dropped = DEV_STATS_READ(dev, tx_dropped);
}
static void netkit_uninit(struct net_device *dev);
static const struct net_device_ops netkit_netdev_ops = {
@ -184,6 +200,7 @@ static const struct net_device_ops netkit_netdev_ops = {
.ndo_set_rx_headroom = netkit_set_headroom,
.ndo_get_iflink = netkit_get_iflink,
.ndo_get_peer_dev = netkit_peer_dev,
.ndo_get_stats64 = netkit_get_stats,
.ndo_uninit = netkit_uninit,
.ndo_features_check = passthru_features_check,
};
@ -218,6 +235,7 @@ static void netkit_setup(struct net_device *dev)
ether_setup(dev);
dev->max_mtu = ETH_MAX_MTU;
dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
dev->flags |= IFF_NOARP;
dev->priv_flags &= ~IFF_TX_SKB_SHARING;

View File

@ -1079,17 +1079,17 @@ static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
u16 pkt_count = 0;
u64 desc_hdr = 0;
u16 vlan_tag = 0;
u32 skb_len = 0;
u32 skb_len;
if (!skb)
goto err;
if (skb->len == 0)
skb_len = skb->len;
if (skb_len < sizeof(desc_hdr))
goto err;
skb_len = skb->len;
/* RX Descriptor Header */
skb_trim(skb, skb->len - sizeof(desc_hdr));
skb_trim(skb, skb_len - sizeof(desc_hdr));
desc_hdr = le64_to_cpup((u64 *)skb_tail_pointer(skb));
/* Check these packets */

View File

@ -1581,11 +1581,11 @@ static int ax88179_reset(struct usbnet *dev)
*tmp16 = AX_PHYPWR_RSTCTL_IPRL;
ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_PHYPWR_RSTCTL, 2, 2, tmp16);
msleep(200);
msleep(500);
*tmp = AX_CLK_SELECT_ACS | AX_CLK_SELECT_BCS;
ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, tmp);
msleep(100);
msleep(200);
/* Ethernet PHY Auto Detach */
ax88179_auto_detach(dev);

View File

@ -1289,6 +1289,7 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x19d2, 0x0168, 4)},
{QMI_FIXED_INTF(0x19d2, 0x0176, 3)},
{QMI_FIXED_INTF(0x19d2, 0x0178, 3)},
{QMI_FIXED_INTF(0x19d2, 0x0189, 4)}, /* ZTE MF290 */
{QMI_FIXED_INTF(0x19d2, 0x0191, 4)}, /* ZTE EuFi890 */
{QMI_FIXED_INTF(0x19d2, 0x0199, 1)}, /* ZTE MF820S */
{QMI_FIXED_INTF(0x19d2, 0x0200, 1)},

View File

@ -236,8 +236,8 @@ static void veth_get_ethtool_stats(struct net_device *dev,
data[tx_idx + j] += *(u64 *)(base + offset);
}
} while (u64_stats_fetch_retry(&rq_stats->syncp, start));
pp_idx = tx_idx + VETH_TQ_STATS_LEN;
}
pp_idx = idx + dev->real_num_tx_queues * VETH_TQ_STATS_LEN;
page_pool_stats:
veth_get_page_pool_stats(dev, &data[pp_idx]);
@ -373,7 +373,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
skb_tx_timestamp(skb);
if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) {
if (!use_napi)
dev_lstats_add(dev, length);
dev_sw_netstats_tx_add(dev, 1, length);
else
__veth_xdp_flush(rq);
} else {
@ -387,14 +387,6 @@ drop:
return ret;
}
static u64 veth_stats_tx(struct net_device *dev, u64 *packets, u64 *bytes)
{
struct veth_priv *priv = netdev_priv(dev);
dev_lstats_read(dev, packets, bytes);
return atomic64_read(&priv->dropped);
}
static void veth_stats_rx(struct veth_stats *result, struct net_device *dev)
{
struct veth_priv *priv = netdev_priv(dev);
@ -432,24 +424,24 @@ static void veth_get_stats64(struct net_device *dev,
struct veth_priv *priv = netdev_priv(dev);
struct net_device *peer;
struct veth_stats rx;
u64 packets, bytes;
tot->tx_dropped = veth_stats_tx(dev, &packets, &bytes);
tot->tx_bytes = bytes;
tot->tx_packets = packets;
tot->tx_dropped = atomic64_read(&priv->dropped);
dev_fetch_sw_netstats(tot, dev->tstats);
veth_stats_rx(&rx, dev);
tot->tx_dropped += rx.xdp_tx_err;
tot->rx_dropped = rx.rx_drops + rx.peer_tq_xdp_xmit_err;
tot->rx_bytes = rx.xdp_bytes;
tot->rx_packets = rx.xdp_packets;
tot->rx_bytes += rx.xdp_bytes;
tot->rx_packets += rx.xdp_packets;
rcu_read_lock();
peer = rcu_dereference(priv->peer);
if (peer) {
veth_stats_tx(peer, &packets, &bytes);
tot->rx_bytes += bytes;
tot->rx_packets += packets;
struct rtnl_link_stats64 tot_peer = {};
dev_fetch_sw_netstats(&tot_peer, peer->tstats);
tot->rx_bytes += tot_peer.tx_bytes;
tot->rx_packets += tot_peer.tx_packets;
veth_stats_rx(&rx, peer);
tot->tx_dropped += rx.peer_tq_xdp_xmit_err;
@ -1506,25 +1498,12 @@ static void veth_free_queues(struct net_device *dev)
static int veth_dev_init(struct net_device *dev)
{
int err;
dev->lstats = netdev_alloc_pcpu_stats(struct pcpu_lstats);
if (!dev->lstats)
return -ENOMEM;
err = veth_alloc_queues(dev);
if (err) {
free_percpu(dev->lstats);
return err;
}
return 0;
return veth_alloc_queues(dev);
}
static void veth_dev_free(struct net_device *dev)
{
veth_free_queues(dev);
free_percpu(dev->lstats);
}
#ifdef CONFIG_NET_POLL_CONTROLLER
@ -1796,6 +1775,7 @@ static void veth_setup(struct net_device *dev)
NETIF_F_HW_VLAN_STAG_RX);
dev->needs_free_netdev = true;
dev->priv_destructor = veth_dev_free;
dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
dev->max_mtu = ETH_MAX_MTU;
dev->hw_features = VETH_FEATURES;

View File

@ -121,22 +121,12 @@ struct net_vrf {
int ifindex;
};
struct pcpu_dstats {
u64 tx_pkts;
u64 tx_bytes;
u64 tx_drps;
u64 rx_pkts;
u64 rx_bytes;
u64 rx_drps;
struct u64_stats_sync syncp;
};
static void vrf_rx_stats(struct net_device *dev, int len)
{
struct pcpu_dstats *dstats = this_cpu_ptr(dev->dstats);
u64_stats_update_begin(&dstats->syncp);
dstats->rx_pkts++;
dstats->rx_packets++;
dstats->rx_bytes += len;
u64_stats_update_end(&dstats->syncp);
}
@ -161,10 +151,10 @@ static void vrf_get_stats64(struct net_device *dev,
do {
start = u64_stats_fetch_begin(&dstats->syncp);
tbytes = dstats->tx_bytes;
tpkts = dstats->tx_pkts;
tdrops = dstats->tx_drps;
tpkts = dstats->tx_packets;
tdrops = dstats->tx_drops;
rbytes = dstats->rx_bytes;
rpkts = dstats->rx_pkts;
rpkts = dstats->rx_packets;
} while (u64_stats_fetch_retry(&dstats->syncp, start));
stats->tx_bytes += tbytes;
stats->tx_packets += tpkts;
@ -421,7 +411,7 @@ static int vrf_local_xmit(struct sk_buff *skb, struct net_device *dev,
if (likely(__netif_rx(skb) == NET_RX_SUCCESS))
vrf_rx_stats(dev, len);
else
this_cpu_inc(dev->dstats->rx_drps);
this_cpu_inc(dev->dstats->rx_drops);
return NETDEV_TX_OK;
}
@ -616,11 +606,11 @@ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev)
struct pcpu_dstats *dstats = this_cpu_ptr(dev->dstats);
u64_stats_update_begin(&dstats->syncp);
dstats->tx_pkts++;
dstats->tx_packets++;
dstats->tx_bytes += len;
u64_stats_update_end(&dstats->syncp);
} else {
this_cpu_inc(dev->dstats->tx_drps);
this_cpu_inc(dev->dstats->tx_drops);
}
return ret;
@ -1174,22 +1164,15 @@ static void vrf_dev_uninit(struct net_device *dev)
vrf_rtable_release(dev, vrf);
vrf_rt6_release(dev, vrf);
free_percpu(dev->dstats);
dev->dstats = NULL;
}
static int vrf_dev_init(struct net_device *dev)
{
struct net_vrf *vrf = netdev_priv(dev);
dev->dstats = netdev_alloc_pcpu_stats(struct pcpu_dstats);
if (!dev->dstats)
goto out_nomem;
/* create the default dst which points back to us */
if (vrf_rtable_create(dev) != 0)
goto out_stats;
goto out_nomem;
if (vrf_rt6_create(dev) != 0)
goto out_rth;
@ -1203,9 +1186,6 @@ static int vrf_dev_init(struct net_device *dev)
out_rth:
vrf_rtable_release(dev, vrf);
out_stats:
free_percpu(dev->dstats);
dev->dstats = NULL;
out_nomem:
return -ENOMEM;
}
@ -1704,6 +1684,8 @@ static void vrf_setup(struct net_device *dev)
dev->min_mtu = IPV6_MIN_MTU;
dev->max_mtu = IP6_MAX_MTU;
dev->mtu = dev->max_mtu;
dev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS;
}
static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],

View File

@ -210,7 +210,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
*/
while (skb_queue_len(&peer->staged_packet_queue) > MAX_STAGED_PACKETS) {
dev_kfree_skb(__skb_dequeue(&peer->staged_packet_queue));
++dev->stats.tx_dropped;
DEV_STATS_INC(dev, tx_dropped);
}
skb_queue_splice_tail(&packets, &peer->staged_packet_queue);
spin_unlock_bh(&peer->staged_packet_queue.lock);
@ -228,7 +228,7 @@ err_icmp:
else if (skb->protocol == htons(ETH_P_IPV6))
icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
err:
++dev->stats.tx_errors;
DEV_STATS_INC(dev, tx_errors);
kfree_skb(skb);
return ret;
}

View File

@ -416,20 +416,20 @@ dishonest_packet_peer:
net_dbg_skb_ratelimited("%s: Packet has unallowed src IP (%pISc) from peer %llu (%pISpfsc)\n",
dev->name, skb, peer->internal_id,
&peer->endpoint.addr);
++dev->stats.rx_errors;
++dev->stats.rx_frame_errors;
DEV_STATS_INC(dev, rx_errors);
DEV_STATS_INC(dev, rx_frame_errors);
goto packet_processed;
dishonest_packet_type:
net_dbg_ratelimited("%s: Packet is neither ipv4 nor ipv6 from peer %llu (%pISpfsc)\n",
dev->name, peer->internal_id, &peer->endpoint.addr);
++dev->stats.rx_errors;
++dev->stats.rx_frame_errors;
DEV_STATS_INC(dev, rx_errors);
DEV_STATS_INC(dev, rx_frame_errors);
goto packet_processed;
dishonest_packet_size:
net_dbg_ratelimited("%s: Packet has incorrect size from peer %llu (%pISpfsc)\n",
dev->name, peer->internal_id, &peer->endpoint.addr);
++dev->stats.rx_errors;
++dev->stats.rx_length_errors;
DEV_STATS_INC(dev, rx_errors);
DEV_STATS_INC(dev, rx_length_errors);
goto packet_processed;
packet_processed:
dev_kfree_skb(skb);

View File

@ -333,7 +333,8 @@ err:
void wg_packet_purge_staged_packets(struct wg_peer *peer)
{
spin_lock_bh(&peer->staged_packet_queue.lock);
peer->device->dev->stats.tx_dropped += peer->staged_packet_queue.qlen;
DEV_STATS_ADD(peer->device->dev, tx_dropped,
peer->staged_packet_queue.qlen);
__skb_queue_purge(&peer->staged_packet_queue);
spin_unlock_bh(&peer->staged_packet_queue.lock);
}
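The wireguard hunks above switch from plain "++dev->stats.counter" to DEV_STATS_INC()/DEV_STATS_ADD() so that error and drop counters are updated atomically from concurrent contexts. A user-space sketch of the same idea with C11 atomics is shown below; the STATS_INC/STATS_ADD macros are invented stand-ins, not the kernel helpers.

#include <stdatomic.h>
#include <stdio.h>

struct dev_stats {
	atomic_ulong tx_dropped;
	atomic_ulong tx_errors;
};

#define STATS_INC(s, field)    atomic_fetch_add(&(s)->field, 1)
#define STATS_ADD(s, field, n) atomic_fetch_add(&(s)->field, (n))

int main(void)
{
	struct dev_stats stats = { 0 };

	STATS_INC(&stats, tx_dropped);        /* safe even if another thread races */
	STATS_ADD(&stats, tx_dropped, 4);     /* analogue of purging a whole queue */
	printf("tx_dropped=%lu\n", atomic_load(&stats.tx_dropped));
	return 0;
}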

View File

@ -26,10 +26,14 @@ struct virtual_nci_dev {
struct mutex mtx;
struct sk_buff *send_buff;
struct wait_queue_head wq;
bool running;
};
static int virtual_nci_open(struct nci_dev *ndev)
{
struct virtual_nci_dev *vdev = nci_get_drvdata(ndev);
vdev->running = true;
return 0;
}
@ -40,6 +44,7 @@ static int virtual_nci_close(struct nci_dev *ndev)
mutex_lock(&vdev->mtx);
kfree_skb(vdev->send_buff);
vdev->send_buff = NULL;
vdev->running = false;
mutex_unlock(&vdev->mtx);
return 0;
@ -50,7 +55,7 @@ static int virtual_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
struct virtual_nci_dev *vdev = nci_get_drvdata(ndev);
mutex_lock(&vdev->mtx);
if (vdev->send_buff) {
if (vdev->send_buff || !vdev->running) {
mutex_unlock(&vdev->mtx);
kfree_skb(skb);
return -1;

View File

@ -176,7 +176,7 @@ static struct notifier_block parisc_panic_block = {
static int qemu_power_off(struct sys_off_data *data)
{
/* this turns the system off via SeaBIOS */
*(int *)data->cb_data = 0;
gsc_writel(0, (unsigned long) data->cb_data);
pdc_soft_power_button(1);
return NOTIFY_DONE;
}

View File

@ -964,33 +964,6 @@ static const struct pci_device_id pmc_pci_ids[] = {
{ }
};
static int amd_pmc_get_dram_size(struct amd_pmc_dev *dev)
{
int ret;
switch (dev->cpu_id) {
case AMD_CPU_ID_YC:
if (!(dev->major > 90 || (dev->major == 90 && dev->minor > 39))) {
ret = -EINVAL;
goto err_dram_size;
}
break;
default:
ret = -EINVAL;
goto err_dram_size;
}
ret = amd_pmc_send_cmd(dev, S2D_DRAM_SIZE, &dev->dram_size, dev->s2d_msg_id, true);
if (ret || !dev->dram_size)
goto err_dram_size;
return 0;
err_dram_size:
dev_err(dev->dev, "DRAM size command not supported for this platform\n");
return ret;
}
static int amd_pmc_s2d_init(struct amd_pmc_dev *dev)
{
u32 phys_addr_low, phys_addr_hi;
@ -1009,8 +982,8 @@ static int amd_pmc_s2d_init(struct amd_pmc_dev *dev)
return -EIO;
/* Get DRAM size */
ret = amd_pmc_get_dram_size(dev);
if (ret)
ret = amd_pmc_send_cmd(dev, S2D_DRAM_SIZE, &dev->dram_size, dev->s2d_msg_id, true);
if (ret || !dev->dram_size)
dev->dram_size = S2D_TELEMETRY_DRAMBYTES_MAX;
/* Get STB DRAM address */

View File

@ -588,17 +588,14 @@ static void release_attributes_data(void)
static int hp_add_other_attributes(int attr_type)
{
struct kobject *attr_name_kobj;
union acpi_object *obj = NULL;
int ret;
char *attr_name;
mutex_lock(&bioscfg_drv.mutex);
attr_name_kobj = kzalloc(sizeof(*attr_name_kobj), GFP_KERNEL);
if (!attr_name_kobj) {
ret = -ENOMEM;
goto err_other_attr_init;
}
if (!attr_name_kobj)
return -ENOMEM;
mutex_lock(&bioscfg_drv.mutex);
/* Check if attribute type is supported */
switch (attr_type) {
@ -615,14 +612,14 @@ static int hp_add_other_attributes(int attr_type)
default:
pr_err("Error: Unknown attr_type: %d\n", attr_type);
ret = -EINVAL;
goto err_other_attr_init;
kfree(attr_name_kobj);
goto unlock_drv_mutex;
}
ret = kobject_init_and_add(attr_name_kobj, &attr_name_ktype,
NULL, "%s", attr_name);
if (ret) {
pr_err("Error encountered [%d]\n", ret);
kobject_put(attr_name_kobj);
goto err_other_attr_init;
}
@ -630,27 +627,26 @@ static int hp_add_other_attributes(int attr_type)
switch (attr_type) {
case HPWMI_SECURE_PLATFORM_TYPE:
ret = hp_populate_secure_platform_data(attr_name_kobj);
if (ret)
goto err_other_attr_init;
break;
case HPWMI_SURE_START_TYPE:
ret = hp_populate_sure_start_data(attr_name_kobj);
if (ret)
goto err_other_attr_init;
break;
default:
ret = -EINVAL;
goto err_other_attr_init;
}
if (ret)
goto err_other_attr_init;
mutex_unlock(&bioscfg_drv.mutex);
return 0;
err_other_attr_init:
kobject_put(attr_name_kobj);
unlock_drv_mutex:
mutex_unlock(&bioscfg_drv.mutex);
kfree(obj);
return ret;
}
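The hp_add_other_attributes() rework above moves the allocation in front of the mutex and funnels failures through labels that undo only what has been set up so far. A generic stand-alone C sketch of that goto-unwind shape, with invented names and simplified error paths, looks like this:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t drv_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Illustrative only: allocate before locking, unwind in reverse order. */
static int add_attribute(int attr_type)
{
	char *kobj;
	int ret;

	kobj = calloc(1, 64);         /* allocation done before taking the lock */
	if (!kobj)
		return -1;

	pthread_mutex_lock(&drv_mutex);

	if (attr_type < 0) {          /* unsupported type: free and unlock */
		ret = -1;
		free(kobj);
		goto unlock;
	}

	ret = snprintf(kobj, 64, "attr-%d", attr_type) < 0 ? -1 : 0;
	if (ret)
		goto put_kobj;        /* later failures drop the object here */

	pthread_mutex_unlock(&drv_mutex);
	return 0;

put_kobj:
	free(kobj);
unlock:
	pthread_mutex_unlock(&drv_mutex);
	return ret;
}

int main(void)
{
	printf("add ok: %d\n", add_attribute(1));
	printf("add bad type: %d\n", add_attribute(-1));
	return 0;
}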
