Postgres has a limitation on its NOTIFY message size (8k), and the
messages we generate for the deep copy functionality easily go over this
limit. Instead of passing a giant nested data structure across the
message bus, this change temporarily stores the JSON structure in
memcached and looks it up from *within* the task.
see: https://github.com/ansible/tower/issues/4162
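A minimal sketch of the idea, assuming a Django-style cache backend in front of memcached; `publish_copy_request`, `run_copy_task`, and the `cache_key` kwarg are illustrative names, not the actual Tower API:

```python
import json
import uuid

from django.core.cache import cache


def publish_copy_request(copy_task, nested_structure):
    # Producer side: stash the large JSON blob in the cache under a short
    # key so the pg_notify payload stays well under the 8k limit.
    key = 'deep-copy-%s' % uuid.uuid4()
    cache.set(key, json.dumps(nested_structure), timeout=3600)
    copy_task.delay(cache_key=key)  # only the short key crosses the message bus


def run_copy_task(cache_key):
    # Task side: look the full structure up from *within* the task.
    raw = cache.get(cache_key)
    if raw is None:
        return  # cache entry expired or was never written
    nested_structure = json.loads(raw)
    # ... perform the deep copy using nested_structure ...
```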
* Gather broadcast websocket metrics and push them into redis at a
configurable interval.
* Pop the metrics from redis in the web view layer to display them via
the API on demand.
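A rough sketch of the pattern, assuming a plain `redis` client; the key name, the `serialize()` helper, and the loop structure are illustrative:

```python
import json
import time

import redis

conn = redis.Redis()


def push_broadcast_metrics(stats, interval=10):
    # Broadcast layer: periodically serialize the websocket counters and
    # store them in redis; the interval is meant to come from settings.
    while True:
        conn.set('broadcast_websocket_stats', json.dumps(stats.serialize()))
        time.sleep(interval)


def get_broadcast_metrics():
    # Web view layer: read the stored metrics on demand for the API response.
    raw = conn.get('broadcast_websocket_stats')
    return json.loads(raw) if raw else {}
```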
* 100 is the default capacity for a channel. If the client doesn't read
the socket fast enough, websocket messages can and will be lost. This
increases the default to 10,000
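In Django Channels the per-channel capacity is configured on the channel layer; a sketch of the settings change, assuming the channels_redis backend (the exact location in Tower's settings may differ):

```python
# settings sketch: raise the per-channel buffer from the default of 100
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            'hosts': ['redis://localhost:6379/0'],
            # Slow readers drop messages once this buffer fills up,
            # so raise the capacity from 100 to 10,000.
            'capacity': 10000,
        },
    },
}
```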
* Do not return from a blocking unsubscribe until _after_ the received
unsubscribe message has been put on the queue, so that it can be read by
the thread of execution that was unblocked.
* New Tower nodes that are (de)registered in the Instance table are
detected by the websocket layer; the websocket broadcast backplane
connects to or disconnects from them using a polling mechanism.
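A sketch of the polling loop, assuming an asyncio-based backplane; `get_registered_hostnames`, `connect`, and `disconnect` are hypothetical helpers:

```python
import asyncio


async def poll_instances(backplane, interval=10):
    # Periodically reconcile websocket connections with the Instance table.
    while True:
        registered = set(await backplane.get_registered_hostnames())
        connected = set(backplane.connections)
        for hostname in registered - connected:
            backplane.connect(hostname)      # node was newly registered
        for hostname in connected - registered:
            backplane.disconnect(hostname)   # node was deregistered
        await asyncio.sleep(interval)
```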
* This is especially useful for OpenShift and Kubernetes. It will also be
useful for standalone Tower in the future, once restarting Tower
services is no longer required.
* Sending websocket health data over websockets is not a great idea.
* I tried sending health data via prometheus and encountered problems
that will need PRs to the prometheus_client library to solve. Circle back
to this later.
* We cannot query the dispatcher running on isolated nodes to see if the
playbook is still running, because that is the nature of isolated nodes:
they run neither the dispatcher nor the message broker. Therefore, we
query the control node that is arbitrating the isolated work. If the
controlling process in that node's dispatcher is dead, consider the
isolated job dead.
* Instead of waiting an arbitrary number of seconds, we can now wait
exactly as long as needed to KNOW that we are unsubscribed. This
changeset takes advantage of the new subscribe reply semantic.
* This change adds more than just an unsubscribe reply.
* Websockets can request to join/leave groups. They do so using a single
idempotent request. This change replies to group requests over the
websocket with the diff of the group subscription, i.e. what groups the
user is currently in, what groups were left, and what groups were
joined.
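A sketch of how the reply diff can be computed for an idempotent join/leave request; the field names are illustrative, not the exact payload Tower sends:

```python
def handle_group_request(current_groups, requested_groups):
    # The request is idempotent: the client states the full set of groups
    # it wants, and the reply describes what actually changed.
    current = set(current_groups)
    requested = set(requested_groups)
    reply = {
        'groups_current': sorted(requested),
        'groups_joined': sorted(requested - current),
        'groups_left': sorted(current - requested),
    }
    return requested, reply
```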
* The user in the channels session is a lazy user object. This does not
conform to what the generic Role ancestry code expects, which is a proper
User object. This change converts the lazy object into a proper User
object before calling the permission code path.
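A sketch of the conversion, assuming a channels consumer scope where `scope['user']` is the lazy user; the helper name is illustrative:

```python
from django.contrib.auth.models import User


def resolve_user(scope):
    # Convert the lazy user from the channels session into a concrete
    # User instance before handing it to the Role ancestry / permission
    # code, which expects a real User object.
    lazy_user = scope['user']
    if lazy_user.is_anonymous:
        return None
    return User.objects.get(pk=lazy_user.pk)
```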
* asgiref's async_to_sync was causing a new Redis connection _for each_
call to emit_channel_notification, i.e. every event that the callback
receiver processes. This is a "known" issue
https://github.com/django/channels_redis/pull/130#issuecomment-424274470
and the advice is to slow down the rate at which you call
async_to_sync. That is not an option for us. Instead, we put the async
group_send call onto the event loop for the current thread and wait for
it to be processed immediately.
The known issue has to do with the event loop + socket relationship. Each
connection to redis is achieved via a socket, and that connection can
only be waited on by the event loop that corresponds to the calling
thread. async_to_sync creates a _new thread_ for each invocation; thus, a
new connection to redis is required for each call, which explains the
excess redis connections that can be observed via netstat | grep redis | wc -l.
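A sketch of the approach, using channels' `get_channel_layer()`; rather than `async_to_sync`, the group_send coroutine is run on the calling thread's own event loop so channels_redis can keep reusing the same Redis connection:

```python
import asyncio

from channels.layers import get_channel_layer


def emit_channel_notification(group, payload):
    channel_layer = get_channel_layer()
    try:
        # Reuse this thread's event loop so the redis socket created by
        # channels_redis stays bound to it and is reused across calls.
        loop = asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    loop.run_until_complete(channel_layer.group_send(group, payload))
```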
* Under the new postgres-backed notify/listen message queue, this never
actually worked. Without using the database to store state, we cannot
provide an at-most-once delivery mechanism with multiple readers.
* With this change, work is done ONLY on the node that requested the
work. Under rabbitmq, the node that was first to get the message off the
queue would do the work; presumably the least busy node.