rbd_img_obj_exists_submit() and rbd_img_obj_parent_read_full() are on
the writeback path for cloned images -- we attempt a stat on the parent
object to see if it exists and potentially read it in to call copyup.
GFP_NOIO should be used instead of GFP_KERNEL here.
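As a rough illustration of the kind of change involved (a sketch, not the exact diff; num_pages stands in for the real page-count calculation):
  /* allocations on the writeback path must not recurse back into the
   * I/O path, so GFP_NOIO is used rather than GFP_KERNEL */
  pages = ceph_alloc_page_vector(num_pages, GFP_NOIO);  /* was GFP_KERNEL */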
Cc: stable@vger.kernel.org
Link: http://tracker.ceph.com/issues/22014
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: David Disseldorp <ddiss@suse.de>
Confirmed with Kailang of Realtek that the pin 0x19 is for the Headset Mic and
the pin 0x1a is for the Headphone Mic; he suggested applying
ALC269_FIXUP_DELL1_MIC_NO_PRESENCE to fix this problem, and we
verified that applying this fixup indeed fixes it.
Cc: <stable@vger.kernel.org>
Cc: Kailang Yang <kailang@realtek.com>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
When the kernel is compiled with !CONFIG_SCHED_DEBUG support, we expect all
SCHED_FEAT flags to be turned into compile-time constants which the compiler
can propagate to enable its optimizations.
Specifically, we expect code blocks like this:
  if (sched_feat(FEATURE_NAME) [&& <other_conditions>]) {
          /* FEATURE CODE */
  }
to be turned into dead code when FEATURE_NAME defaults to FALSE, and thus
removed by the compiler from the final image.
For this mechanism to work properly, the compiler needs full access, from
each translation unit, to the value defined by the sched_feat() macro.
This macro is defined as:
#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
and thus, the compiler can optimize that code only if the value of
sysctl_sched_features is visible within each translation unit.
Since commit:
029632fbb ("sched: Make separate sched*.c translation units")
the scheduler code has been split into separate translation units. However,
the definition of sysctl_sched_features lives in kernel/sched/core.c, while
for all the other scheduler modules it is visible only via
kernel/sched/sched.h as:
extern const_debug unsigned int sysctl_sched_features
Unfortunately, an extern reference does not allow the compiler to apply
constant propagation. Thus, on a !CONFIG_SCHED_DEBUG kernel we still end up
with code that loads a memory reference and (eventually) performs an
unconditional jump over a chunk of code.
This mechanism is unavoidable when sched_features can be turned on and off at
run-time. However, this is not the case for "production" kernels compiled with
!CONFIG_SCHED_DEBUG. In this case, sysctl_sched_features is just a constant value
which cannot be changed at run-time and thus memory loads and jumps can be
avoided altogether.
This patch fixes the !CONFIG_SCHED_DEBUG case by declaring a local version
of the sysctl_sched_features constant in each translation unit. This
ultimately allows the compiler to perform constant propagation and dead-code
pruning.
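A minimal sketch of the idea (not necessarily the exact patch): with
SCHED_DEBUG disabled, each translation unit that includes
kernel/sched/sched.h gets its own constant built from the per-feature
defaults in features.h, so sched_feat() folds at compile time:
  #define SCHED_FEAT(name, enabled) \
          (1UL << __SCHED_FEAT_##name) * enabled |
  static const unsigned int sysctl_sched_features =
  #include "features.h"
          0;
  #undef SCHED_FEAT

  #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))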
Tests have been done, with !CONFIG_SCHED_DEBUG on a v4.14-rc8 with and without
the patch, by running 30 iterations of:
perf bench sched messaging --pipe --thread --group 4 --loop 50000
on a 40-core Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, using the
powersave governor to rule out variations due to frequency scaling.
Statistics on the reported completion time:
count mean std min 99% max
v4.14-rc8 30.0 15.7831 0.176032 15.442 16.01226 16.014
v4.14-rc8+patch 30.0 15.5033 0.189681 15.232 15.93938 15.962
... show a 1.8% speedup in average completion time and a 0.5% speedup in the
99th percentile.
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Brendan Jackman <brendan.jackman@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: http://lkml.kernel.org/r/20171108184101.16006-1-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This change suppresses the 'dd' output and adds the '-quiet' parameter
to the mkisofs tool. It also removes the 'Using ...' messages, as none of
these messages normally matters to the user.
"make V=1" can still be used for a more verbose build.
The new build messages are now a streamlined set of:
$ make isoimage
...
Kernel: arch/x86/boot/bzImage is ready (#75)
GENIMAGE arch/x86/boot/image.iso
Kernel: arch/x86/boot/image.iso is ready
Signed-off-by: Changbin Du <changbin.du@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1510207751-22166-1-git-send-email-changbin.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Steffen Klassert says:
====================
pull request (net): ipsec 2017-11-09
1) Fix a use after free due to a reallocated skb head.
From Florian Westphal.
2) Fix sporadic lookup failures on labeled IPSEC.
From Florian Westphal.
3) Fix a stack out-of-bounds access when a socket policy is applied
to an IPv6 socket that sends IPv4 packets.
Please pull or let me know if there are problems.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Cong Wang says:
====================
net_sched: close the race between call_rcu() and cleanup_net()
This patchset tries to fix the race between call_rcu() and
cleanup_net() again. Without holding the netns refcnt,
tc_action_net_exit() in the netns workqueue could be called before
the filter destroy work in the tc filter workqueue runs. This patchset
moves the netns refcnt from tc actions to tcf_exts, without
breaking per-netns tc actions.
Patch 1 reverts the previous fix, patch 2 introduces two new
APIs to help address the bug, and the remaining patches switch
to the new APIs. Please see each patch for details.
I was not able to reproduce this bug at first, but after adding
some delay in the filter destroy work I managed to trigger the
crash. With this patchset applied, the crash is no longer reproducible
and the debugging printk's show the expected ordering.
====================
Fixes: ddf97ccdd7cb ("net_sched: add network namespace support for tc actions")
Reported-by: Lucas Bates <lucasb@mojatatu.com>
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold netns refcnt before call_rcu() and release it after
the tcf_exts_destroy() is done.
Note, on ->destroy() path we have to respect the return value
of tcf_exts_get_net(), on other paths it should always return
true, so we don't need to care.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of holding the netns refcnt in tc actions, we can minimize
the holding time by saving it in struct tcf_exts. This
means we can grab the netns refcnt right before call_rcu() and
release it after tcf_exts_destroy() is done.
However, because tcf_proto_destroy() is also called on the netns
cleanup path, we obviously cannot hold the netns when its refcnt is
already zero; in that case we have to do the cleanup synchronously.
That is fine for RCU too, since the caller cleanup_net() already
waits for a grace period.
For the other cases, the refcnt is non-zero and we can safely grab it
as usual and release it after we are done.
This patch provides two new APIs for each filter to use:
tcf_exts_get_net() and tcf_exts_put_net(). All filters can now
use the following pattern:
  void __destroy_filter() {
          tcf_exts_destroy();
          tcf_exts_put_net();     // <== release netns refcnt
          kfree();
  }

  void some_work() {
          rtnl_lock();
          __destroy_filter();
          rtnl_unlock();
  }

  void some_rcu_callback() {
          tcf_queue_work(some_work);
  }

  if (tcf_exts_get_net())         // <== hold netns refcnt
          call_rcu(some_rcu_callback);
  else
          __destroy_filter();
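A minimal sketch of what the two helpers boil down to, assuming tcf_exts
carries a struct net pointer (details may differ from the actual patch):
  static inline bool tcf_exts_get_net(struct tcf_exts *exts)
  {
          /* only succeeds while the netns refcnt is still non-zero */
          exts->net = maybe_get_net(exts->net);
          return exts->net != NULL;
  }

  static inline void tcf_exts_put_net(struct tcf_exts *exts)
  {
          if (exts->net)
                  put_net(exts->net);
  }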
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit ceffcc5e254b450e6159f173e4538215cebf1b59.
If we hold that refcnt, the netns can never be destroyed until
all actions are destroyed by the user; this breaks our netns design,
which expects all actions to be destroyed when the whole netns is
destroyed.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit baedf68a068ca29624f241426843635920f16e1d.
There is an updated version of this fix which covers
the problem more thoroughly.
Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Before we can globally change the function prototype of all timer callbacks,
we have to change those set up by DEFINE_TIMER(). Prepare for this by
casting the callbacks until the prototype changes globally.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
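For reference, the conversion pattern looks roughly like this (struct and
field names are illustrative):
  struct foo {
          struct timer_list timer;
          int pending;
  };

  /* before: the callback received an opaque unsigned long */
  static void foo_expire_old(unsigned long data)
  {
          struct foo *f = (struct foo *)data;

          f->pending = 0;
  }
  /* setup_timer(&f->timer, foo_expire_old, (unsigned long)f); */

  /* after: the callback receives the timer_list pointer and recovers
   * its container via from_timer() */
  static void foo_expire(struct timer_list *t)
  {
          struct foo *f = from_timer(f, t, timer);

          f->pending = 0;
  }
  /* timer_setup(&f->timer, foo_expire, 0); */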
Cc: Wensong Zhang <wensong@linux-vs.org>
Cc: Simon Horman <horms@verge.net.au>
Cc: Julian Anastasov <ja@ssi.bg>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: Florian Westphal <fw@strlen.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Cc: lvs-devel@vger.kernel.org
Cc: netfilter-devel@vger.kernel.org
Cc: coreteam@netfilter.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Julian Anastasov <ja@ssi.bg>
Acked-by: Simon Horman <horms@verge.net.au>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Cc: qla2xxx-upstream@qlogic.com
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Tested-by: Bart Van Assche <Bart.VanAssche@wdc.com>
The pointer clk is being initialized to ERR_PTR(-ENODEV); however,
this value is never read before it is set to clk_data->clk. Thus
the initialization is redundant and can be removed.
Cleans up clang warning:
Value stored to 'clk' during its initialization is never read
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Based on ACPI 6.2 Section 8.4.7.1.9, if the PCC register space is used,
all PCC registers, for all processors in the same performance domain
(as defined by _PSD), must be defined to be in the same subspace.
Based on Section 14.1 of the ACPI specification, it is possible to have a
maximum of 256 PCC subspace IDs. Add support for multiple PCC subspace
IDs instead of using a single global pcc_data structure.
While at it, fix the time_delta check in send_pcc_cmd() so that
last_mpar_reset and mpar_count are initialized properly.
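A rough sketch of the direction (names and signatures are illustrative, not
the exact patch): the single global pcc_data becomes per-subspace state
indexed by the subspace ID of the CPU's performance domain:
  #define MAX_PCC_SUBSPACES       256

  static struct cppc_pcc_data *pcc_data[MAX_PCC_SUBSPACES];

  static int send_pcc_cmd(int pcc_ss_id, u16 cmd)
  {
          struct cppc_pcc_data *pcc_ss_data = pcc_data[pcc_ss_id];

          if (!pcc_ss_data)
                  return -ENODEV;
          /* issue cmd through this subspace's mailbox channel ... */
          return 0;
  }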
Signed-off-by: George Cherian <george.cherian@cavium.com>
Reviewed-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Move the MAX_PCC_SUBSPACES definition to the acpi/pcc.h file in
preparation for adding subspace ID support to the cppc_acpi driver.
Signed-off-by: George Cherian <george.cherian@cavium.com>
Reviewed-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The function param_set_trace_method_name is local to the source and does
not need to be in global scope, so make it static.
Cleans up sparse warning:
symbol 'param_set_trace_method_name' was not declared. Should it be static?
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
ACPI _LID methods may depend on OpRegions and do not always properly
handle the case where handlers for those OpRegions are not present, e.g.:
  Method (_LID, 0, NotSerialized)  // _LID: Lid Status
  {
      If ((^^I2C5.PMI1.AVBL == One) && (^^GPO2.AVBL == One))
      {
          Return (^^GPO2.LPOL) /* \_SB_.GPO2.LPOL */
      }
  }
Note the missing Return (1) when either of the OpRegions is not available;
this causes (in this case) a report of the lid switch being closed,
which causes userspace to do an immediate suspend at boot.
This commit delays getting the initial state, and thus calling _LID for
the first time, until userspace opens the /dev/input/event# node. This
ensures that all drivers will have had a chance to load and register
their OpRegions before the first _LID call, fixing this issue.
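A minimal sketch of the approach, with illustrative helper names (the exact
button driver functions may differ):
  static int acpi_lid_input_open(struct input_dev *input)
  {
          struct acpi_device *device = input_get_drvdata(input);

          /* first _LID evaluation happens here, after all drivers have had
           * a chance to register their OpRegion handlers */
          return acpi_lid_update_state(device);
  }

  /* hooked up during probe: input->open = acpi_lid_input_open; */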
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Originally the Samsung quirks removed by commit 4c237371 could be covered
by commit e923e8e7 and the ec_freeze_events=Y mode. But commit 9c40f956
changed ec_freeze_events=Y back to N, making this problem re-surface.
Actually, if commit e923e8e7 is robust enough, we can freely change the
ec_freeze_events mode, so this patch fixes the issue by improving
commit e923e8e7.
Related commits listed in the merged order:
Commit: e923e8e79e18fd6be9162f1be6b99a002e9df2cb
Subject: ACPI / EC: Fix an issue that SCI_EVT cannot be detected after event is enabled
Commit: 4c237371f290d1ed3b2071dd43554362137b1cce
Subject: ACPI / EC: Remove old CLEAR_ON_RESUME quirk
Commit: 9c40f956ce9b331493347d1b3cb7e384f7dc0581
Subject: Revert "ACPI / EC: Enable event freeze mode..." to fix a regression
This patch not only fixes the reported post-resume EC event triggering
source issue, but also fixes an unreported similar issue related to
driver binding by adding an EC event triggering source in ec_install_handlers().
Fixes: e923e8e79e18 (ACPI / EC: Fix an issue that SCI_EVT cannot be detected after event is enabled)
Fixes: 4c237371f290 (ACPI / EC: Remove old CLEAR_ON_RESUME quirk)
Fixes: 9c40f956ce9b (Revert "ACPI / EC: Enable event freeze mode..." to fix a regression)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=196833
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Reported-by: Alistair Hamilton <ahpatent@gmail.com>
Tested-by: Alistair Hamilton <ahpatent@gmail.com>
Cc: 4.11+ <stable@vger.kernel.org> # 4.11+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
32-bit timestamps are deprecated in the kernel, so we should not use
get_seconds(). In this case, the 'struct cper_record_header' structure
already contains a 64-bit field, so the only required change is to use
the safe ktime_get_real_seconds() interface as a replacement.
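The change then amounts to something like the following (the record/header
access is shown for illustration):
  /* the CPER record header timestamp is 64 bits wide, so the y2038-safe
   * interface can be assigned directly */
  rcd->hdr.timestamp = ktime_get_real_seconds();  /* was: get_seconds() */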
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
'cached_raw_freq' is used to get the next frequency quickly but should
always be in sync with sg_policy->next_freq. There is a case where it is
not and in such cases it should be reset to avoid switching to incorrect
frequencies.
Consider this case for example:
- policy->cur is 1.2 GHz (Max)
- New request comes for 780 MHz and we store that in cached_raw_freq.
- Based on 780 MHz, we calculate the effective frequency as 800 MHz.
- We then see the CPU wasn't idle recently and choose to keep the next
freq as 1.2 GHz.
- Now we have cached_raw_freq is 780 MHz and sg_policy->next_freq is
1.2 GHz.
- Now if the utilization doesn't change in the next request, then the
next target frequency will still be 780 MHz and it will match
cached_raw_freq. But we will choose 1.2 GHz instead of 800 MHz here
(the fix for this is sketched below).
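A minimal sketch of the fix for the scenario above (the exact code in the
schedutil update path may differ slightly):
  if (busy && next_f < sg_policy->next_freq) {
          next_f = sg_policy->next_freq;

          /* next_freq is being kept, so cached_raw_freq no longer
           * corresponds to it; reset it to force a fresh mapping */
          sg_policy->cached_raw_freq = 0;
  }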
Fixes: b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely)
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 4.12+ <stable@vger.kernel.org> # 4.12+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com>
Acked-by: Luis R. Rodriguez <mcgrof@kernel.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Problem: the events_check_enabled flag does not currently get cleared in
the suspend or resume path in the following cases:
* In case some driver's suspend routine returns an error.
* Successful s2idle case
* etc?
Why is this a problem: the next suspend attempt could fail even though
the user did not enable the flag by writing to /sys/power/wakeup_count.
Here is one use case showing how the issue can be seen (a similar case
with a driver suspend failure is also possible):
1. Read /sys/power/wakeup_count
2. echo count > /sys/power/wakeup_count
3. echo freeze > /sys/power/state
4. Let the system suspend, then wake it up using some wake source
that calls pm_wakeup_event(), e.g. the power button.
5. Note that the combined wakeup count will have been incremented due
to the pm_wakeup_event() in the resume path.
6. After resuming the events_check_enabled flag is still set.
At this point, if the user attempts to freeze again (without writing to
/sys/power/wakeup_count), the suspend would fail even though there has
been no wake event since the last resume.
Address that by clearing the flag just before a resume is completed,
so that it is always cleared for the corner cases mentioned above.
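Conceptually the change boils down to the following; the exact clearing
point in the resume path is illustrative:
  /* run once resume has completed, so a stale value cannot make the
   * next suspend attempt fail spuriously */
  events_check_enabled = false;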
Signed-off-by: Rajat Jain <rajatja@google.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
On platforms with a large number of Pstates, the transition table, which
is an NxN matrix, can overflow beyond the PAGE_SIZE boundary.
This can be seen on POWER9, which has 100+ Pstates.
As a result, each time the trans_table is read for any of the CPUs, we
will get the following error:
---------------------------------------------------
fill_read_buffer: show+0x0/0xa0 returned bad count
---------------------------------------------------
This patch ensures that in case of an overflow, we print a warning
once in dmesg and return a FILE TOO LARGE error for this and all
subsequent accesses of trans_table.
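A sketch of the guard in the trans_table show() routine, assuming the usual
len accounting in cpufreq_stats:
  if (len >= PAGE_SIZE) {
          pr_warn_once("cpufreq transition table exceeds PAGE_SIZE. Disabling\n");
          return -EFBIG;  /* FILE TOO LARGE */
  }
  return len;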
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make these const as they are only getting passed to the functions
bL_cpufreq_{register/unregister} having the arguments as const.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make the arguments of the bL_cpufreq_{register/unregister} functions
const, as the ops pointer does not modify the fields of the
cpufreq_arm_bL_ops structure it points to. The pointer arm_bL_ops is
also initialized from ops, and it does not modify the fields either.
So, make the function argument and the structure pointer const, and
add const to the function prototypes too.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Clean up cpuidle_enable_device() to avoid doing an assignment
in an expression evaluated as an argument of if (), which also
makes the code in question more readable.
Signed-off-by: Gaurav Jindal <gauravjindal1104@gmail.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Do not fetch per CPU drv if cpuidle_curr_governor is NULL
to avoid useless per CPU processing.
Signed-off-by: Gaurav Jindal <gauravjindal1104@gmail.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
acpi_remove_pm_notifier() ends up calling flush_workqueue() while
holding acpi_pm_notifier_lock, and that same lock is taken by
the work via acpi_pm_notify_handler(). This can deadlock.
To fix the problem, let's split the single lock into two: one to
protect dev->wakeup between the work and add/remove, and
another one to handle notifier installation vs. removal.
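A sketch of the split, with illustrative lock names:
  /* protects the per-device wakeup data shared by the notify work and
   * acpi_add/remove_pm_notifier() */
  static DEFINE_MUTEX(acpi_pm_notifier_lock);

  /* only serializes notifier installation/removal (which may flush the
   * workqueue); never taken from the work item, breaking the cycle */
  static DEFINE_MUTEX(acpi_pm_notifier_install_lock);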
After commit a1d14934ea4b "workqueue/lockdep: 'Fix' flush_work()
annotation" I was able to kill the machine (Intel Braswell)
very easily with 'powertop --auto-tune', runtime suspending i915,
and trying to wake it up via the USB keyboard. The cases when
it didn't die are presumably explained by lockdep getting disabled
by something else (cpu hotplug locking issues usually).
Fortunately I still got a lockdep report over netconsole
(trickling in very slowly), even though the machine was
otherwise practically dead:
[ 112.179806] ======================================================
[ 114.670858] WARNING: possible circular locking dependency detected
[ 117.155663] 4.13.0-rc6-bsw-bisect-00169-ga1d14934ea4b #119 Not tainted
[ 119.658101] ------------------------------------------------------
[ 121.310242] xhci_hcd 0000:00:14.0: xHCI host not responding to stop endpoint command.
[ 121.313294] xhci_hcd 0000:00:14.0: xHCI host controller not responding, assume dead
[ 121.313346] xhci_hcd 0000:00:14.0: HC died; cleaning up
[ 121.313485] usb 1-6: USB disconnect, device number 3
[ 121.313501] usb 1-6.2: USB disconnect, device number 4
[ 134.747383] kworker/0:2/47 is trying to acquire lock:
[ 137.220790] (acpi_pm_notifier_lock){+.+.}, at: [<ffffffff813cafdf>] acpi_pm_notify_handler+0x2f/0x80
[ 139.721524]
[ 139.721524] but task is already holding lock:
[ 144.672922] ((&dpc->work)){+.+.}, at: [<ffffffff8109ce90>] process_one_work+0x160/0x720
[ 147.184450]
[ 147.184450] which lock already depends on the new lock.
[ 147.184450]
[ 154.604711]
[ 154.604711] the existing dependency chain (in reverse order) is:
[ 159.447888]
[ 159.447888] -> #2 ((&dpc->work)){+.+.}:
[ 164.183486] __lock_acquire+0x1255/0x13f0
[ 166.504313] lock_acquire+0xb5/0x210
[ 168.778973] process_one_work+0x1b9/0x720
[ 171.030316] worker_thread+0x4c/0x440
[ 173.257184] kthread+0x154/0x190
[ 175.456143] ret_from_fork+0x27/0x40
[ 177.624348]
[ 177.624348] -> #1 ("kacpi_notify"){+.+.}:
[ 181.850351] __lock_acquire+0x1255/0x13f0
[ 183.941695] lock_acquire+0xb5/0x210
[ 186.046115] flush_workqueue+0xdd/0x510
[ 190.408153] acpi_os_wait_events_complete+0x31/0x40
[ 192.625303] acpi_remove_notify_handler+0x133/0x188
[ 194.820829] acpi_remove_pm_notifier+0x56/0x90
[ 196.989068] acpi_dev_pm_detach+0x5f/0xa0
[ 199.145866] dev_pm_domain_detach+0x27/0x30
[ 201.285614] i2c_device_probe+0x100/0x210
[ 203.411118] driver_probe_device+0x23e/0x310
[ 205.522425] __driver_attach+0xa3/0xb0
[ 207.634268] bus_for_each_dev+0x69/0xa0
[ 209.714797] driver_attach+0x1e/0x20
[ 211.778258] bus_add_driver+0x1bc/0x230
[ 213.837162] driver_register+0x60/0xe0
[ 215.868162] i2c_register_driver+0x42/0x70
[ 217.869551] 0xffffffffa0172017
[ 219.863009] do_one_initcall+0x45/0x170
[ 221.843863] do_init_module+0x5f/0x204
[ 223.817915] load_module+0x225b/0x29b0
[ 225.757234] SyS_finit_module+0xc6/0xd0
[ 227.661851] do_syscall_64+0x5c/0x120
[ 229.536819] return_from_SYSCALL_64+0x0/0x7a
[ 231.392444]
[ 231.392444] -> #0 (acpi_pm_notifier_lock){+.+.}:
[ 235.124914] check_prev_add+0x44e/0x8a0
[ 237.024795] __lock_acquire+0x1255/0x13f0
[ 238.937351] lock_acquire+0xb5/0x210
[ 240.840799] __mutex_lock+0x75/0x940
[ 242.709517] mutex_lock_nested+0x1c/0x20
[ 244.551478] acpi_pm_notify_handler+0x2f/0x80
[ 246.382052] acpi_ev_notify_dispatch+0x44/0x5c
[ 248.194412] acpi_os_execute_deferred+0x14/0x30
[ 250.003925] process_one_work+0x1ec/0x720
[ 251.803191] worker_thread+0x4c/0x440
[ 253.605307] kthread+0x154/0x190
[ 255.387498] ret_from_fork+0x27/0x40
[ 257.153175]
[ 257.153175] other info that might help us debug this:
[ 257.153175]
[ 262.324392] Chain exists of:
[ 262.324392] acpi_pm_notifier_lock --> "kacpi_notify" --> (&dpc->work)
[ 262.324392]
[ 267.391997] Possible unsafe locking scenario:
[ 267.391997]
[ 270.758262] CPU0 CPU1
[ 272.431713] ---- ----
[ 274.060756] lock((&dpc->work));
[ 275.646532] lock("kacpi_notify");
[ 277.260772] lock((&dpc->work));
[ 278.839146] lock(acpi_pm_notifier_lock);
[ 280.391902]
[ 280.391902] *** DEADLOCK ***
[ 280.391902]
[ 284.986385] 2 locks held by kworker/0:2/47:
[ 286.524895] #0: ("kacpi_notify"){+.+.}, at: [<ffffffff8109ce90>] process_one_work+0x160/0x720
[ 288.112927] #1: ((&dpc->work)){+.+.}, at: [<ffffffff8109ce90>] process_one_work+0x160/0x720
[ 289.727725]
Fixes: c072530f391e (ACPI / PM: Revork the handling of ACPI device wakeup notifications)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: 3.17+ <stable@vger.kernel.org> # 3.17+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Commit 7744ccdbc16f0 ("x86/mm: Add Secure Memory Encryption (SME)
support") as a side-effect made PAGE_KERNEL all of a sudden unavailable
to modules which can't make use of EXPORT_SYMBOL_GPL() symbols.
This is because once SME is enabled, sme_me_mask (which is introduced as
EXPORT_SYMBOL_GPL) makes its way to PAGE_KERNEL through _PAGE_ENC,
causing imminent build failure for all the modules which make use of all
the EXPORT-SYMBOL()-exported API (such as vmap(), __vmalloc(),
remap_pfn_range(), ...).
Exporting (as EXPORT_SYMBOL()) interfaces (and having done so for ages)
that take a pgprot_t argument, while making it impossible to -- all of a
sudden -- pass PAGE_KERNEL to them, feels rather inconsistent.
Restore the original behavior and make it possible to pass PAGE_KERNEL
to all its EXPORT_SYMBOL() consumers.
[ This is all so not wonderful. We shouldn't need that "sme_me_mask"
access at all in all those places that really don't care about that
level of detail, and just want _PAGE_KERNEL or whatever.
We have some similar issues with _PAGE_CACHE_WP and _PAGE_NOCACHE,
both of which hide a "cachemode2protval()" call, and which also ends
up using another EXPORT_SYMBOL(), but at least that only triggers for
the much more rare cases.
Maybe we could move these dynamic page table bits to be generated much
deeper down in the VM layer, instead of hiding them in the macros that
everybody uses.
So this all would merit some cleanup. But not today. - Linus ]
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Despised-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
At a couple of places smatch emits warnings like this:
arch/s390/mm/vmem.c:409 vmem_map_init() warn:
right shifting more than type allows
In fact shifting a signed type right is undefined. Avoid this and add
an unsigned long cast. The shifted values are always positive.
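The pattern of the fix is simply (variable names for illustration):
  /* make the shifted value unsigned (and wide enough) before shifting */
  x = (unsigned long)value >> shift;  /* instead of: value >> shift */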
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
The current way of adding new instructions to the opcode tables is
painful and error prone. Therefore add, similar to binutils, a text
file which contains all opcodes and the corresponding mnemonics and
instruction formats.
A small gen_opcode_table tool then generates a header file with the
required enums and opcode table initializers at the prepare step of
the kernel build.
This way only a simple text file has to be maintained, which can be
rather easily extended.
Unlike before where there were plenty of opcode tables and a large
switch statement to find the correct opcode table, there is now only
one opcode table left which contains all instructions. A second opcode
offset table now contains offsets within the opcode table to find
instructions which have the same opcode prefix. In order to save space
all 1-byte opcode instructions are grouped together at the end of the
opcode table. This is also quite similar to how it was done before.
In addition, move and change code and definitions within the
disassembler. As a side effect this reduces the size required for the
code and opcode tables by ~1.5k.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
insn_to_mnemonic() was introduced ages ago for KVM debugging, but has
been unused for quite a while now. Therefore remove it.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
do_gettimeofday() is deprecated because it's not y2038-safe on
32-bit architectures. Since it is basically a wrapper around
ktime_get_real_ts64(), we can just call that function directly
instead.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[sth@linux.vnet.ibm.com: fix build]
Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Not only is it annoying to have one single flag for all pointers, as if
that was a global choice and all kernel pointers are the same, but %pK
can't get the 'access' vs 'open' time check right anyway.
So make the /proc/kallsyms pointer value code use logic specific to that
particular file. We do continue to honor kptr_restrict, but the default
(which is unrestricted) is changed to instead take expected users into
account, and restrict access by default.
Right now the only actual expected user is kernel profiling, which has a
separate sysctl flag for kernel profile access. There may be others.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A non-zero value is converted to 1 when assigned to a bool variable, so the
conditional operator in is_ima_appraise_enabled is redundant.
The value of a comparison operator is either 1 or 0 so the conditional
operator in ima_inode_setxattr is redundant as well.
Confirmed that the patch is correct by comparing the object file from
before and after the patch. They are identical.
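For illustration, the simplification is of this form (a sketch;
IMA_APPRAISE_ENFORCE and the surrounding context are taken as given):
  bool is_ima_appraise_enabled(void)
  {
          /* a non-zero value returned as bool is already normalized to
           * true, so the conditional operator adds nothing */
          return ima_appraise & IMA_APPRAISE_ENFORCE;  /* was: ... ? 1 : 0 */
  }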
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>