b05d8d9ef5
When many memory regions are being added and automatically onlined the
following lockup is sometimes observed:

INFO: task udevd:1872 blocked for more than 120 seconds.
...
Call Trace:
 [<ffffffff816ec0bc>] schedule_timeout+0x22c/0x350
 [<ffffffff816eb98f>] wait_for_common+0x10f/0x160
 [<ffffffff81067650>] ? default_wake_function+0x0/0x20
 [<ffffffff816eb9fd>] wait_for_completion+0x1d/0x20
 [<ffffffff8144cb9c>] hv_memory_notifier+0xdc/0x120
 [<ffffffff816f298c>] notifier_call_chain+0x4c/0x70
...

When several memory blocks are going online simultaneously we get
several hv_memory_notifier() runners trying to acquire the
ha_region_mutex. While this mutex is held by hot_add_req(), all the
competing acquire_region_mutex() calls do mutex_trylock(), fail, and
queue themselves in wait_for_completion(). However, when we do
complete() from release_region_mutex(), only one of them wakes up.
This could be solved by changing complete() to complete_all(), but
memory onlining can be delayed as well; in that case we can still get
several hv_memory_notifier() runners at the same time trying to grab
the mutex, only one of them will succeed, and the others will hang
forever as complete() is never called. We don't see this issue often
because there is a 5 second onlining timeout in hv_mem_hot_add() and
usually all udev events arrive within that time frame.

Get rid of the trylock path; waiting on the mutex is supposed to
provide the required serialization.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
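---

For reference, a minimal sketch of the lost-wakeup pattern described
above. ha_region_mutex, acquire_region_mutex() and
release_region_mutex() are the names used in the message; the
completion name waiter_event is hypothetical and used here only for
illustration:

#include <linux/mutex.h>
#include <linux/completion.h>

static DEFINE_MUTEX(ha_region_mutex);
static DECLARE_COMPLETION(waiter_event);	/* hypothetical name */

/* Old scheme (sketch): spin on trylock, park in a completion on failure. */
static void acquire_region_mutex(void)
{
	while (!mutex_trylock(&ha_region_mutex))
		/* Several notifiers can queue up here at the same time. */
		wait_for_completion(&waiter_event);
}

static void release_region_mutex(void)
{
	mutex_unlock(&ha_region_mutex);
	/*
	 * complete() wakes exactly one waiter; any other notifier parked
	 * in wait_for_completion() above stays blocked, which matches
	 * the hung-task trace quoted in the message.
	 */
	complete(&waiter_event);
}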
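With the trylock path gone, the same helpers reduce to plain mutex
operations (again a sketch of the idea, not the literal patch): every
contender sleeps on the mutex itself, and unlocking hands the lock to
the next waiter via the mutex's own wait queue, so no wakeup can be
lost.

static void acquire_region_mutex(void)
{
	/* Sleep directly on the mutex; it serializes all callers. */
	mutex_lock(&ha_region_mutex);
}

static void release_region_mutex(void)
{
	/* Unlocking wakes the next waiter queued on the mutex. */
	mutex_unlock(&ha_region_mutex);
}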