* rampart_groups_setup_playbook:
Updating changelog for Instance Groups
Fix an incorrect reference on instance group jobs list
Purge remaining references to rampart groups
Simplify can_access for instance groups on job templates
Adding Instance Group permissions and tests
Increase test coverage for task scheduler inventory updates
Exit logic fixes for instance group tools
View Fixes for instance groups
New view to allow associations but not creations
Updating acceptance documentation and system docs
Updating unit tests for task manager refactoring
Update views and serializers to support instance groups (ramparts)
Implementing models for instance groups, updating task manager
Updating the setup playbook to support instance group installation
Add nginx to server start and switch back to first tmux win
Fix an issue where the local queue wouldn't use the rabbitmq name
* Includes top-level views for instances and instance groups, and extends
  those views to show running jobs
* Associative endpoints on Organizations, Inventories, and Job
Templates
* Related and summary field entries where appropriate
* Adding job model references to the executing instance group
* Fix up default queue properties for clustering from the settings file
* Update production and default settings for instance queues in settings
* New InstanceGroup model and associative relationship with Instances (see the model sketch after this list)
* Associations between Organizations, Inventories, and Job Templates and
  InstanceGroups
* Migrations for adding fields and tables for Instance Groups
* Adding activity stream reference for instance groups
* Task Manager Refactoring:
** Simplify task manager relationships and move away from the
interstitial hash tables
** Simplify dependency determination logic
** Reduce task manager runtime complexity by removing the partial
references and moving the logic into the task manager directly or
relying on Job model logic for determinism
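To make the model and association bullets above concrete, here is a minimal, illustrative Django sketch of an instance group and its associations; the class and field names are assumptions for illustration only, not the actual AWX/Tower code.
```python
# Illustrative sketch only -- class and field names are assumptions, not the real models.
from django.db import models


class Instance(models.Model):
    """A single cluster node capable of running jobs."""
    hostname = models.CharField(max_length=250, unique=True)


class InstanceGroup(models.Model):
    """A named group of Instances ("rampart group") that work can be routed to."""
    name = models.CharField(max_length=250, unique=True)
    # Associative relationship with Instances: a node may belong to many groups.
    instances = models.ManyToManyField(Instance, related_name='rampart_groups', blank=True)


class Organization(models.Model):
    """Example of an associative endpoint: an organization can be pinned to specific groups."""
    name = models.CharField(max_length=250, unique=True)
    instance_groups = models.ManyToManyField(InstanceGroup, blank=True)
```
Inventories and Job Templates would carry a similar many-to-many field, which is what the associative endpoints above would expose.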
Credentials now have a required CredentialType, which defines inputs
(e.g., username, password) and injectors (e.g., assign the username to
SOME_ENV_VARIABLE at job runtime); a rough model sketch follows the issue references below.
This commit only implements the model changes necessary to support the
new inputs model, and includes code for the credential serializer that
allows backwards-compatible support for /api/v1/credentials/; tasks.py
still needs to be updated to actually respect CredentialType injectors.
This change *will* break the UI for credentials (because it needs to be
updated to use the new v2 endpoint).
see: #5877
see: #5876
see: #5805
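As a rough sketch of the inputs/injectors split described above (the field layout, JSON shapes, and use of models.JSONField, which assumes a recent Django, are assumptions, not the shipped schema):
```python
# Rough illustration of the inputs/injectors idea -- not the actual Tower/AWX schema.
from django.db import models


class CredentialType(models.Model):
    """Defines which fields a credential carries and how they are injected at job runtime."""
    name = models.CharField(max_length=250, unique=True)
    # e.g. {"fields": [{"id": "username", "type": "string"},
    #                  {"id": "password", "type": "string", "secret": True}]}
    inputs = models.JSONField(default=dict, blank=True)
    # e.g. {"env": {"SOME_ENV_VARIABLE": "{{ username }}"}}
    injectors = models.JSONField(default=dict, blank=True)


class Credential(models.Model):
    """Credential values keyed by the ids declared in credential_type.inputs."""
    credential_type = models.ForeignKey(CredentialType, on_delete=models.CASCADE)  # now required
    inputs = models.JSONField(default=dict, blank=True)
```
At job runtime, tasks.py would render the injectors against the credential's inputs to build the environment handed to the job, which is the part this commit notes still needs updating.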
I had to pull the git URLs out of the main requirements files because, in order to install offline (--no-index), we need pip to install from local package archives rather than cloning repos.
The weird `cat` thing going on in the Makefile is because we need to install everything as part of a single `pip install` transaction. Without this, installing only requirements_git.txt will result in dependencies getting unintentionally updated.
I know, this sucks. I spent all day trying to get to the bottom of the CI failures that started happening the other day with no luck.
There is something going on with how we were moving the node_modules directory into the source tree from the pre-built location in /tmp. This was working, but then it broke. I hope to cycle back on this sometime next week if I have the time.
http://docs.celeryproject.org/en/latest/reference/celery.contrib.rdb.html
allows you to remotely debug running celery tasks with:
```python
from celery.contrib import rdb

rdb.set_trace()
```
This will bind a remote Python debugger on a random TCP port in the
6899-6999 range, which you can telnet into for remote task debugging.
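For context, a minimal sketch of using this inside a task (the app name, broker URL, and task are placeholders, not code from this repo):
```python
# Placeholder example of debugging a task with celery.contrib.rdb.
from celery import Celery
from celery.contrib import rdb

app = Celery('example', broker='amqp://localhost//')  # broker URL is an assumption


@app.task
def process_job(job_id):
    # The worker pauses here and listens on a port in the 6899-6999 range.
    rdb.set_trace()
    return job_id * 2
```
If the random port is inconvenient, the celery docs linked above describe pinning the base port via the CELERY_RDB_PORT environment variable.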
* This also allows disabling https mode in the nginx configuration
* Reconfigure the development container to not specifically require
https, so the haproxy cluster configuration can work
Fixes this error:
```
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./tools/docker-compose/unit-tests/docker-compose.yml", line 10, column 7
expected <block end>, but found '<scalar>'
in "./tools/docker-compose/unit-tests/docker-compose.yml", line 11, column 40
```
node_modules was being overwritten by the bind-mount when the container started. This is kind of hacky, but it’s the only way to do this without changing everything.