This commit updates all files that were not passing yamllint so that they
now pass.
A new yamllint target has been added. One can run `tox -e yamllint` or
`yamllint -s .` locally to ensure YAML files are still passing.
This check will be enabled in CI so that it runs on every new
contribution and prevents merging non-compliant code.
Signed-off-by: Yanis Guenane <yguenane@redhat.com>
Yesterday I noticed that we have awx/projects in our .gitignore. I am assuming
this pre-dates our containerized development environment. With this commit, any
project under awx/projects/ will be made available in the dev environment for
selection when creating a Manual project. This comes in super handy when
testing changes to playbooks locally.
- use awx-python in shebang in dev env
- scl enable where needed for rhel7 & container installs
- use scram-sha-256 pg user hashing by default
- ensure psycopg2 is using the correct PG_CONFIG at build time for the right libpq version
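As a quick sanity check (not part of the change itself), psycopg2 exposes the libpq version it was compiled against, which makes it easy to confirm the build picked up a libpq new enough for scram-sha-256 (PostgreSQL 10+):

    import psycopg2

    # scram-sha-256 auth requires libpq from PostgreSQL 10 or newer; psycopg2
    # reports the version it was built against as an integer, e.g. 100010
    # means libpq 10.10.
    print(psycopg2.__libpq_version__)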
Without this, CURRENT_UID isn't actually passed in from the host, and /etc/passwd gets wiped out even when we're actually running as root.
I tested this as a non-root user on Linux, and on Docker for Mac.
I wanted to pass `--user` to `docker-compose up`, but that option doesn't exist. To get around this, I had to record the uid on the host (CURRENT_UID), interpolate the variable in tools/docker-compose.yml, and detect it inside the container. I then piggy-backed on the /etc/passwd hack we use for scenarios with unpredictable uids.
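For illustration only (the real logic lives in the container entrypoint, and the function and account names below are made up), the /etc/passwd fallback amounts to something like:

    import os
    import pwd

    def ensure_passwd_entry(home='/var/lib/awx', shell='/bin/bash'):
        # If the container runs as a uid with no passwd entry (e.g. an
        # arbitrary host uid passed in via CURRENT_UID), append one so user
        # lookups keep working; if the uid is already known (root, etc.),
        # leave /etc/passwd alone.
        uid = os.getuid()
        try:
            pwd.getpwuid(uid)
        except KeyError:
            with open('/etc/passwd', 'a') as f:
                f.write('awx_dev:x:%d:0::%s:%s\n' % (uid, home, shell))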
This commit implements the bulk of `awx-manage run_dispatcher`, a new
command that binds to RabbitMQ via kombu and balances messages across
a pool of workers that are similar to celeryd workers in spirit.
Specifically, this includes:
- a new decorator, `awx.main.dispatch.task`, which can be used to
decorate functions or classes so that they can be designated as
"Tasks" (see the usage sketch after this list)
- support for fanout/broadcast tasks (at this point in time, only
`conf.Setting` memcached flushes use this functionality)
- support for job reaping
- support for success/failure hooks for job runs (i.e.,
`handle_work_success` and `handle_work_error`)
- support for an auto-scaling worker pool that scales processes up and down on demand
- minimal support for RPC, such as status checks and pool recycle/reload
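A rough usage sketch of the decorator mentioned above; the dotted path comes from this commit, but the exact call signature and publish API shown here are assumptions rather than the real implementation:

    from awx.main.dispatch import task  # assumed import location

    @task()
    def add(a, b):
        # an ordinary function becomes a dispatchable "Task"
        return a + b

    # Classes can be decorated the same way; publishing is assumed to go
    # through a celery-style async API attached by the decorator, e.g.:
    #   add.apply_async([1, 2])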
* Jupyter starts alongside the other awx services and is available on
0.0.0.0:8888
* make target: make jupyter
* default settings in settings/development.py
* Added jupyter, matplotlib, numpy to dev dependencies
* rampart_groups_setup_playbook:
Updating changelog for Instance Groups
Fix an incorrect reference on instance group jobs list
Purge remaining references to rampart groups
Simplify can_access for instance groups on job templates
Adding Instance Group permissions and tests
Increase test coverage for task scheduler inventory updates
Exit logic fixes for instance group tools
View Fixes for instance groups
New view to allow associations but not creations
Updating acceptance documentation and system docs
Updating unit tests for task manager refactoring
Update views and serializers to support instance group (ramparts)
Implementing models for instance groups, updating task manager
Updating the setup playbook to support instance group installation
Add nginx to server start and switch back to first tmux win
Fix an issue where the local queue wouldn't use the rabbitmq name
* New InstanceGroup model and associative relationship with Instances
* Associations between Organizations, Inventories, and Job
Templates and InstanceGroups
* Migrations for adding fields and tables for Instance Groups
* Adding activity stream reference for instance groups
* Task Manager Refactoring:
** Simplify task manager relationships and move away from the
interstitial hash tables
** Simplify dependency determination logic
** Reduce task manager runtime complexity by removing the partial
references and moving the logic into the task manager directly or
relying on Job model logic for determinism
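Purely to illustrate the relationships listed above (field names here are assumptions, not the actual AWX models), the schema looks roughly like:

    from django.db import models

    class Instance(models.Model):
        # simplified stand-in for the existing Instance model
        hostname = models.CharField(max_length=250, unique=True)

    class InstanceGroup(models.Model):
        name = models.CharField(max_length=250, unique=True)
        instances = models.ManyToManyField(Instance, related_name='rampart_groups')

    # Organizations, Inventories and Job Templates then gain a similar
    # many-to-many field, e.g.:
    #   instance_groups = models.ManyToManyField(InstanceGroup, blank=True)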
http://docs.celeryproject.org/en/latest/reference/celery.contrib.rdb.html
allows you to remotely debug running celery tasks with:
from celery.contrib import rdb
rdb.set_trace()
This will bind a remote Python debugger on a random TCP port between
6899 and 6999, which you can telnet into for remote task debugging
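If a predictable address is needed, celery's rdb also honours the CELERY_RDB_HOST and CELERY_RDB_PORT environment variables described in the linked docs, e.g.:

    import os

    # pin the debugger to a known host/port before the task hits rdb.set_trace()
    os.environ.setdefault('CELERY_RDB_HOST', '0.0.0.0')
    os.environ.setdefault('CELERY_RDB_PORT', '6899')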