A deadlock can occur when autostarting an LXC domain 'guest'
because two threads attempt to take each other's locks while holding
their own (the AB-BA problem). Thread A takes and holds the 'vm' lock
while attempting to take the 'client' lock; meanwhile, thread B takes
and holds the 'client' lock while attempting to take the 'vm' lock.
The potential for this can be seen as follows:
Thread A:
virLXCProcessAutostartDomain (takes vm lock)
--> virLXCProcessStart
--> virLXCProcessConnectMonitor
--> virLXCMonitorNew
--> virNetClientSetCloseCallback (wants client lock)
Thread B:
virNetClientIncomingEvent (takes client lock)
--> virNetClientIOHandleInput
--> virNetClientCallDispatch
--> virNetClientCallDispatchMessage
--> virNetClientProgramDispatch
--> virLXCMonitorHandleEventInit
--> virLXCProcessMonitorInitNotify (wants vm lock)
Since these threads are scheduled independently and are preemptible,
the deadlock scenario can occur where each thread acquires its first
lock but neither can acquire its second, and both spin forever. You
get something like:
virLXCProcessAutostartDomain (takes vm lock)
--> virLXCProcessStart
--> virLXCProcessConnectMonitor
--> virLXCMonitorNew
<...>
virNetClientIncomingEvent (takes client lock)
--> virNetClientIOHandleInput
--> virNetClientCallDispatch
--> virNetClientCallDispatchMessage
--> virNetClientProgramDispatch
--> virLXCMonitorHandleEventInit
--> virLXCProcessMonitorInitNotify (wants vm lock but spins)
<...>
--> virNetClientSetCloseCallback (wants client lock but spins)
Neither thread ever gets the lock it needs to continue, while each
holds the lock that the other thread needs.
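For illustration only, here is a minimal, self-contained sketch of the
same AB-BA pattern using plain pthreads (hypothetical code, not
libvirt's; the comments map each lock operation to the functions in the
traces above):

  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;

  static void *thread_a(void *arg)        /* autostart path */
  {
      (void)arg;
      pthread_mutex_lock(&vm_lock);       /* virLXCProcessAutostartDomain */
      sleep(1);                           /* widen the preemption window */
      pthread_mutex_lock(&client_lock);   /* virNetClientSetCloseCallback */
      puts("thread A never gets here");
      pthread_mutex_unlock(&client_lock);
      pthread_mutex_unlock(&vm_lock);
      return NULL;
  }

  static void *thread_b(void *arg)        /* event dispatch path */
  {
      (void)arg;
      pthread_mutex_lock(&client_lock);   /* virNetClientIncomingEvent */
      sleep(1);
      pthread_mutex_lock(&vm_lock);       /* virLXCProcessMonitorInitNotify */
      puts("thread B never gets here");
      pthread_mutex_unlock(&vm_lock);
      pthread_mutex_unlock(&client_lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t a, b;
      pthread_create(&a, NULL, thread_a, NULL);
      pthread_create(&b, NULL, thread_b, NULL);
      pthread_join(a, NULL);              /* never returns: both threads deadlock */
      pthread_join(b, NULL);
      return 0;
  }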
The actual preemption window that can cause this deadlock is rather
small: between the call to virNetClientProgramNew() and the execution
of virNetClientSetCloseCallback(), both in virLXCMonitorNew(). But
real-world use shows that this small window is enough.
By moving the call to virNetClientSetCloseCallback() ahead of
virNetClientProgramNew() we close any possible chance of the deadlock
taking place. The move should have no other implications, since the
close callback (in the unlikely event it is called) will simply spin
on the vm lock. The remaining work that takes place between the old
call location of virNetClientSetCloseCallback() and the new location
is unaffected by the move.
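To make the reasoning concrete, here is a rough pthread model of the
reordering (hypothetical code, not the actual libvirt control flow;
program registration is modelled as a brief critical section on the
client lock):

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>

  static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;
  static bool program_registered;         /* guarded by client_lock */

  static void *autostart_thread(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&vm_lock);       /* autostart holds the vm lock */

      /* New order: take (and release) the client lock for the close
       * callback registration first ... */
      pthread_mutex_lock(&client_lock);
      pthread_mutex_unlock(&client_lock);

      /* ... and only then register the program, which is what allows
       * the event thread to dispatch messages that need the vm lock.
       * With the old order (registration first, callback second) the
       * event thread could grab the client lock in between and then
       * block on the vm lock, reproducing the deadlock above. */
      pthread_mutex_lock(&client_lock);
      program_registered = true;
      pthread_mutex_unlock(&client_lock);

      pthread_mutex_unlock(&vm_lock);
      return NULL;
  }

  static void *event_thread(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&client_lock);   /* incoming event */
      if (program_registered) {
          /* Dispatch needs the vm lock, but by the time the program is
           * registered the autostart thread no longer needs the client
           * lock, so this cannot deadlock. */
          pthread_mutex_lock(&vm_lock);
          pthread_mutex_unlock(&vm_lock);
      }
      pthread_mutex_unlock(&client_lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t a, b;
      pthread_create(&a, NULL, autostart_thread, NULL);
      pthread_create(&b, NULL, event_thread, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      puts("no deadlock");
      return 0;
  }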
Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
With the introduction of cgroup v2 there are new names for the
possible cgroup setups, depending on which version is in use:
- legacy: cgroup v1
- unified: cgroup v2
- hybrid: cgroup v1 and cgroup v2
Let's use 'legacy' instead of 'cgroupv1' or 'controllers' in our code.
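As a side note, here is a minimal sketch (not libvirt's actual
detection code) of how the three setups can be told apart at runtime,
assuming the conventional mount point /sys/fs/cgroup and systemd's
hybrid layout with a v2 hierarchy at /sys/fs/cgroup/unified:

  #include <stdio.h>
  #include <sys/vfs.h>
  #include <linux/magic.h>

  int main(void)
  {
      struct statfs root, unified;

      if (statfs("/sys/fs/cgroup", &root) < 0) {
          perror("statfs(/sys/fs/cgroup)");
          return 1;
      }

      if (root.f_type == CGROUP2_SUPER_MAGIC) {
          puts("unified: only the cgroup v2 hierarchy is mounted");
      } else if (statfs("/sys/fs/cgroup/unified", &unified) == 0 &&
                 unified.f_type == CGROUP2_SUPER_MAGIC) {
          puts("hybrid: cgroup v1 controllers plus a v2 hierarchy");
      } else {
          puts("legacy: only cgroup v1 controllers");
      }
      return 0;
  }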
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
They all need virCgroupV1GetMemoryUnlimitedKB(), so it's easier to
move them in one commit.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We need to update one test case because a new cgroup object will now
be created only if there is a cgroup backend available.
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We will need to extract the current cgroup v1 implementation into a
separate backend, because there will be a new cgroup v2 implementation
and both will have to co-exist.
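A rough sketch of what such a split can look like (hypothetical names
and callbacks, not the actual libvirt structures): each cgroup version
provides a table of callbacks plus a probe, and callers pick a backend
at runtime instead of calling v1 helpers directly.

  #include <stdbool.h>
  #include <stddef.h>

  typedef struct _virCgroupBackendSketch virCgroupBackendSketch;
  struct _virCgroupBackendSketch {
      bool (*available)(void);                /* is this cgroup version usable? */
      int (*detectMounts)(const char *line);  /* parse one /proc/mounts entry */
      int (*setMemoryLimit)(const char *path, unsigned long long kb);
  };

  /* Each implementation (v1 now, v2 later) fills in one table; callers
   * go through the table instead of calling version-specific helpers. */
  static virCgroupBackendSketch *backends[2];

  static virCgroupBackendSketch *
  pickBackend(void)
  {
      for (size_t i = 0; i < 2; i++) {
          if (backends[i] && backends[i]->available())
              return backends[i];
      }
      return NULL;
  }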
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This will be required once cgroup v2 is introduced. Cgroup detection
is not simple, and we will have multiple backends, so we should not
just jump into the middle of the detection code.
In order to use virCgroupNewSelf we need to create all the remaining
data files (example formats are shown below):
- {name}.cgroups represents /proc/cgroups; it is a list of the cgroup
  controllers compiled into the kernel
- {name}.self.cgroup represents /proc/self/cgroup; it describes the
  cgroups to which the process belongs
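For reference, the real /proc files that these mock files stand in for
use formats like the following (illustrative values only):

  /proc/cgroups ({name}.cgroups):

    #subsys_name  hierarchy  num_cgroups  enabled
    cpuset        10         1            1
    cpu           2          103          1
    memory        4          110          1

  /proc/self/cgroup ({name}.self.cgroup):

    10:cpuset:/
    4:memory:/user.slice
    2:cpu,cpuacct:/user.slice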
For "no-cgroups" we need to modify the expected behavior because
virCgroupNewSelf() will fail if there are no controllers available.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Because we can set which files are returned for the cgroup tests,
there is no need for a special function tailored to running them.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>