Currently the socket code will unlink any UNIX socket path which is
associated with a server socket. This is not fine-grained enough, as
we need to avoid unlinking server sockets we were passed by systemd.
To deal with this we must explicitly track whether each socket needs
to be unlinked when closed, separately from the client vs server
state.
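A minimal sketch of the idea (field and function names here are
illustrative, not the actual libvirt identifiers):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Illustrative only; not the real socket structure. */
    typedef struct {
        int fd;
        char *unixPath;       /* non-NULL only for UNIX sockets */
        bool unlinkOnClose;   /* true only for listener sockets whose
                               * path we created ourselves; false for
                               * sockets passed in by systemd */
    } SocketSketch;

    static void
    socketSketchClose(SocketSketch *sock)
    {
        if (sock->unlinkOnClose && sock->unixPath)
            unlink(sock->unixPath);
        close(sock->fd);
        free(sock->unixPath);
    }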
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Libvirtd has long had integration with avahi for advertising itself
using mDNS when TCP/TLS listening is enabled. For a long time the
virt-manager application had support for auto-detecting libvirtd
instances on the local network using mDNS, but this was removed last
year in:
commit fc8f8d5d7e3ba80a0771df19cf20e84a05ed2422
Author: Cole Robinson <crobinso@redhat.com>
Date: Sat Oct 6 20:55:31 2018 -0400
connect: Drop avahi support
Libvirtd can advertise itself over avahi. The feature is disabled by
default though and in practice I hear of no one actually using it
and frankly I don't think it's all that useful
The 'Open Connection' wizard has a disproportionate amount of code
devoted to this feature, but I don't think it's useful or worth
maintaining, so let's drop it
I've never heard of any other applications having support for using
mDNS to detect libvirtd instances. Though it is theoretically possible
something exists out there, it is clearly going to be a niche use case
in the virt ecosystem as a whole.
By removing avahi integration we can cut down the dependency chain for
the basic libvirtd install and reduce our code maintenance burden.
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Validate that the virNetServer(Client) RPC APIs are processing the
private data callbacks correctly by passing in non-NULL pointers.
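As a rough illustration of the checking pattern (hypothetical
callback names, not the real test code), the private-data constructor
hands back a distinct non-NULL pointer and the matching free callback
verifies that it receives the very same pointer:

    #include <assert.h>

    static int token;                /* any distinct non-NULL address */

    static void *
    testClientPrivNew(void *opaque)
    {
        (void)opaque;
        return &token;               /* deliberately non-NULL */
    }

    static void
    testClientPrivFree(void *data)
    {
        assert(data == &token);      /* the pointer made the round trip */
    }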
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
There is a race between virNetServerProcessClients (main thread) and
remoteDispatchAuthList/remoteDispatchAuthPolkit/remoteSASLFinish (worker
thread) that can lead to decrementing srv->nclients_unauth when it's
zero. Since virNetServerCheckLimits relies on the value of
srv->nclients_unauth, the underrun causes libvirtd to stop accepting
new connections forever.
Example race scenario (assuming libvirtd is using policykit and the
client is privileged):
1. The client calls the RPC remoteDispatchAuthList =>
remoteDispatchAuthList is executed on a worker thread (Thread
T1). Assume that execution pauses for some time just before the
line 'virNetServerClientSetAuth(client, 0)'
2. The client closes the connection abruptly. This causes the
event loop to wake up and virNetServerProcessClients to be
called (on the main thread T0). While
virNetServerProcessClients runs, the srv lock is held. The
condition virNetServerClientNeedAuth(client) is checked, and
since the authentication has not finished yet,
virNetServerTrackCompletedAuthLocked(srv) is called =>
--srv->nclients_unauth => 0
3. Thread T1 continues, marks the client as authenticated, and
calls virNetServerTrackCompletedAuthLocked(srv) =>
--srv->nclients_unauth => --0 => wrap-around, as nclients_unauth
is unsigned (see the sketch after this list)
4. virNetServerCheckLimits(srv) will disable the services forever
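A tiny standalone illustration of the wrap-around in steps 2 and 3
(plain C, not libvirt code):

    #include <stdio.h>

    int main(void)
    {
        unsigned int nclients_unauth = 1;

        --nclients_unauth;   /* main thread, step 2 */
        --nclients_unauth;   /* worker thread, step 3 */

        /* Prints 4294967295 instead of -1: the counter has wrapped. */
        printf("%u\n", nclients_unauth);
        return 0;
    }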
To fix it, add an auth_pending field to the client struct so that it
is now possible to determine if the authentication process has already
been handled for this client.
Setting the authentication method to none for the client in
virNetServerProcessClients is not a proper way to indicate that the
counter has been decremented, as this would imply that the client is
authenticated.
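A sketch of the resulting guarded decrement, with illustrative names
rather than the actual libvirt functions:

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative structures only. */
    typedef struct {
        bool auth_pending;        /* still counted in nclients_unauth */
    } ClientSketch;

    typedef struct {
        size_t nclients_unauth;
    } ServerSketch;

    /* Called with the server lock held. Safe to reach from both the
     * main thread and a worker thread: the first caller clears
     * auth_pending, every later caller becomes a no-op. */
    static void
    trackCompletedAuthLocked(ServerSketch *srv, ClientSketch *client)
    {
        if (!client->auth_pending)
            return;                   /* already accounted for */
        client->auth_pending = false;
        srv->nclients_unauth--;
    }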
Additionally, adjust the existing test cases for this new field.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
json_reformat uses two spaces when indenting nested objects; let's do
the same. The result of virJSONValueToString will be exactly the same
as json_reformat would produce.
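For example, a pretty-printed document would now nest with two spaces
per level, along these lines:

    {
      "name": "example",
      "nested": {
        "key": "value"
      }
    }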
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The admin API needs a way of addressing specific clients. Unlike
servers, which we are happy to address by name, both because a
server's name reflects its purpose (to some extent) and because we
only have two of them (so far), naming clients doesn't make any
sense, since a) each client is anonymous, i.e. not recognized after a
disconnect followed by a reconnect, b) we can't predict what kind of
requests it's going to send to the daemon, and c) there are loads of
them coming and going, so the only viable option is to use an ID
which is of a reasonably wide data type.
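A minimal sketch of that approach (hypothetical names, not the actual
admin API code): each new client is assigned the next value of a wide
monotonically increasing counter, so IDs stay unique for the daemon's
lifetime even across client reconnects.

    /* Illustrative only. */
    typedef struct {
        unsigned long long id;   /* wide enough not to wrap in practice */
        /* ... other per-client state ... */
    } AdminClientSketch;

    /* Protected by the server lock in this sketch. */
    static unsigned long long nextClientID = 1;

    static void
    adminClientSketchRegister(AdminClientSketch *client)
    {
        client->id = nextClientID++;   /* IDs are never reused */
    }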
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Since the daemon can manage and add (at fresh start) multiple
servers, we should also be able to add them from a JSON state file in
case of a daemon restart, so post-exec-restart support for multiple
servers is also provided. The patch also updates virnetdaemontest
accordingly.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since its introduction in 2011 (particularly in commit f4324e3292),
the keepalive_required option has never worked. It just effectively
disables all incoming connections. That's because the client private
data that contain the 'keepalive_supported' boolean are initialized
to zeroes, so the bool is false, and the only other place where the
bool is used is when checking whether the client supports keepalive.
Thus, according to the server, no client supports keepalive.
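A sketch of the flaw (illustrative names, not the actual libvirt
code): the private data are zero-allocated and nothing sets the flag
before the check runs, so the check can only ever see false.

    #include <stdbool.h>

    typedef struct {
        bool keepalive_supported;   /* nothing ever sets this to true */
    } ClientPrivSketch;

    static bool
    clientSupportsKeepalive(const ClientPrivSketch *priv)
    {
        /* priv came from a zero-filled allocation, and a keepalive
         * packet can only arrive after ConnectOpen has been
         * dispatched, so at this point the flag is always false. */
        return priv->keepalive_supported;
    }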
Removing this instead of fixing it is better because a) apparently
nobody has tried to use it since 2011 (one month short of 4 years)
and b) we cannot know whether the client supports keepalive until we
get a ping or pong keepalive packet, and that won't happen until
after we have dispatched the ConnectOpen call.
Two more reasons are c) keepalive_required was tracked at the server
level, but keepalive_supported lived in the client's private data, as
did the check, which was made in the remote layer, so every other
instance of virNetServer would miss this feature unless it
implemented it for itself, and d) we can always add it back if there
is a request and a use case for it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Rename the test to virnetdaemontest and use virNetDaemon objects instead
of virNetServer inside.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>