This commit updates all files that were failing yamllint so that they
now pass.
A new yamllint target has been added. One can run `tox -e yamllint` or
`yamllint -s .` locally to ensure YAML files still pass.
This check will be enabled in CI so that it runs on every new
contribution and prevents merging non-compliant code.
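For reference, the kind of check this target performs can also be
reproduced from Python via yamllint's module interface; the following
is a minimal sketch (the file name is a placeholder), with
`yamllint -s .` as the CLI equivalent (`-s` treats warnings as errors):

```python
# Sketch: lint one YAML file the way the new target lints the tree.
from yamllint import linter
from yamllint.config import YamlLintConfig

conf = YamlLintConfig('extends: default')
with open('playbook.yml') as f:  # placeholder file name
    for problem in linter.run(f, conf):
        print(problem.line, problem.rule, problem.desc)
```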
Signed-off-by: Yanis Guenane <yguenane@redhat.com>
- use awx-python in the shebang in the dev env
- `scl enable` where needed for RHEL 7 & container installs
- use scram-sha-256 PostgreSQL user password hashing by default
- ensure psycopg2 is built with the correct PG_CONFIG so it links
  against the right libpq version (see the sketch after this list)
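As a quick way to verify that last point, psycopg2 exposes the libpq
version it was compiled against; a minimal check (assuming
psycopg2 >= 2.7, where this attribute is available) might look like:

```python
# Print the libpq version psycopg2 was built against.
# The value is an integer, e.g. 100012 for libpq 10.12.
import psycopg2

print(psycopg2.__libpq_version__)
```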
Without this, CURRENT_UID isn't actually passed in from the host, and
/etc/passwd gets wiped out even when we're actually running as root.
I tested this as a non-root user on Linux, and on Docker for Mac.
This commit implements the bulk of `awx-manage run_dispatcher`, a new
command that binds to RabbitMQ via kombu and balances messages across
a pool of workers, similar in spirit to celeryd workers.
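For orientation, the general shape of such a consumer might look like
the sketch below, using kombu's ConsumerMixin; this is illustrative
only, not the actual implementation, and the queue/exchange names and
pool size are made up:

```python
from multiprocessing import Pool

from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin


def run_task(body):
    # A real dispatcher would look up and invoke the task named
    # in the message body here.
    print('received', body)


class Dispatcher(ConsumerMixin):
    def __init__(self, connection, queues, pool):
        self.connection = connection
        self.queues = queues
        self.pool = pool

    def get_consumers(self, Consumer, channel):
        # One consumer bound to our task queue(s).
        return [Consumer(queues=self.queues, callbacks=[self.on_message])]

    def on_message(self, body, message):
        # Hand the message to a pooled worker process, then ack it.
        self.pool.apply_async(run_task, (body,))
        message.ack()


if __name__ == '__main__':
    queues = [Queue('tasks', Exchange('dispatch', type='direct'),
                    routing_key='tasks')]
    with Connection('amqp://guest:guest@localhost//') as conn:
        Dispatcher(conn, queues, Pool(4)).run()
```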
Specifically, this includes:
- a new decorator, `awx.main.dispatch.task`, which can be used to
  decorate functions or classes so that they can be designated as
  "Tasks" (see the usage sketch after this list)
- support for fanout/broadcast tasks (at this point in time, only
`conf.Setting` memcached flushes use this functionality)
- support for job reaping
- support for success/failure hooks for job runs (i.e.,
`handle_work_success` and `handle_work_error`)
- support for an auto-scaling worker pool that scales processes up and
  down on demand
- minimal support for RPC, such as status checks and pool recycle/reload
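A hypothetical usage sketch for the decorator mentioned in the first
bullet above; the publishing API shown here (`apply_async` and its
argument format) is an assumption, not something this commit message
confirms:

```python
from awx.main.dispatch import task

@task()
def add(a, b):
    return a + b

# Runs locally, like any plain function call:
add(2, 2)

# Assumed publishing call: sends a message so that a dispatcher
# worker picks the task up and runs it asynchronously.
add.apply_async([2, 2])
```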
* Meant to be a starting point for managing work routing more
  efficiently and for balancing work across all Tower nodes
* Integrate flower as a dev tool that starts alongside other nodes.
Helpful for observing and monitoring the queues/exchanges
* For the moment, force the task manager to only run on one node (not
sure if this is needed)
* Define queues and routes for all task work (a settings sketch
  follows this list)
* Bump celery version to 3.1.23
* Expose flower through haproxy
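The queue and route definitions mentioned above might look something
like the following Celery 3.x-style settings; the queue names and task
path are illustrative, not the actual ones:

```python
from kombu import Exchange, Queue

# Illustrative queue definitions (Celery 3.x settings style).
CELERY_QUEUES = (
    Queue('jobs', Exchange('jobs', type='direct'), routing_key='jobs'),
    Queue('scheduler', Exchange('scheduler', type='direct'),
          routing_key='scheduler'),
)

# Route a (hypothetical) task to a specific queue.
CELERY_ROUTES = {
    'awx.main.tasks.run_job': {'queue': 'jobs', 'routing_key': 'jobs'},
}
```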