The main goal of this change is to make `make docker-isolated` work out
of the box:
- specify the proper version for awx-expect --version
- update some deprecated playbook bits
- change the isolated container to privileged so bwrap will work
- fix awx-manage test_isolated_connection
- expedite the first isolated heartbeat so you don't have to wait 10m;
this is accomplished by _not_ setting Instance.last_isolated_check to
now() at insertion time (which causes the next check not to happen for
10 minutes); a sketch of the idea follows below
- fix a bug that caused isolated node execution to fail when bwrap was
enabled
see: https://github.com/ansible/tower/issues/2150
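For illustration, a minimal sketch of the heartbeat-scheduling idea
(last_isolated_check is the real field; the helper names and interval
handling here are illustrative, not the exact AWX code):

    from django.utils.timezone import now

    HEARTBEAT_INTERVAL = 600  # seconds

    def register_isolated_instance(hostname):
        # Leaving last_isolated_check unset means the periodic task sees
        # the node as "never checked" and runs the first isolated
        # heartbeat immediately instead of waiting out the full interval.
        from awx.main.models import Instance  # assumed import path
        return Instance.objects.create(hostname=hostname)

    def needs_isolated_check(instance):
        last = instance.last_isolated_check
        return last is None or (now() - last).total_seconds() > HEARTBEAT_INTERVAL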
This reverts commit 9863fe71dc.
Exporting YAML would time out in dev environments (with honcho) and in
production. This was due to daphne handling the export request in dev
but not in production. This change makes network_ui use uwsgi instead of
daphne to handle the request.
The ansible-network-ui prototype project builds a standalone Network UI
outside of Tower as its own Django application. The original prototype
code is located here:
https://github.com/benthomasson/ansible-network-ui.
The prototype provides a virtual canvas that supports placing
networking devices onto a 2D plane and connecting those devices together
with connections called links. The point where the link connects
to the network device is called an interface. The devices, interfaces,
and links may each have their own names. This models physical
networking devices in a simple fashion.
The prototype implements a pannable and zoomable 2D canvas using SVG
elements and AngularJS directives. This is done by adding event
listeners for mouse and keyboard events to an SVG element that fills the
entire browser window.
Mouse and keyboard events are handled in a processing pipeline where
the processing units are implemented as finite state machines that
provide deterministic behavior to the UI.
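A minimal sketch of the FSM idea (shown here in Python for brevity; the
real implementation is AngularJS, and the state and event names are
illustrative):

    class State(object):
        def handle(self, controller, event):
            raise NotImplementedError

    class Ready(State):
        def handle(self, controller, event):
            if event == 'MouseDown':
                controller.change_state(Pressed())

    class Pressed(State):
        def handle(self, controller, event):
            if event == 'MouseMove':
                controller.pan()          # same state, side effect only
            elif event == 'MouseUp':
                controller.change_state(Ready())

    class Controller(object):
        def __init__(self):
            self.state = Ready()
        def change_state(self, state):
            self.state = state
        def pan(self):
            pass                          # pan the canvas in the real UI
        def handle(self, event):
            # Every event goes through the current state, so behavior is
            # fully determined by the (state, event) pair.
            self.state.handle(self, event)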
The finite state machines are built in a visual way that makes
the states and transitions clearly evident. The visual tool for
building FSMs is located here:
https://github.com/benthomasson/fsm-designer-svg. This tool
is a fork of this project that shares the same canvas code. The elements
on the page are FSM states and the directional connections between them
are called transitions. The FSM designer tool and network-ui were
bootstrapped in parallel; it was useful to try experimental
code in the FSM designer and then import it into network-ui.
The FSM designer tool provides a YAML description of the design
which can be used to generate skeleton code and check the implementation
against the design for discrepancies.
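For example, a hedged sketch of how such a design check could look (the
YAML keys shown are assumptions, not the exact fsm-designer-svg format):

    import yaml

    def load_design_transitions(path):
        # Collect (from_state, event, to_state) triples from the design.
        with open(path) as f:
            design = yaml.safe_load(f)
        return {(t['from_state'], t['label'], t['to_state'])
                for t in design.get('transitions', [])}

    def diff_against_design(design_transitions, implemented_transitions):
        missing = design_transitions - implemented_transitions
        extra = implemented_transitions - design_transitions
        return missing, extra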
Events supported:
* Mouse click
* Mouse scroll-wheel
* Keyboard events
* Touch events
Interactions supported:
* Panning the canvas by clicking-and-dragging on the background
* Zooming the canvas by scrolling the mouse wheel
* Adding devices and links by using hotkeys
* Selecting devices, interfaces, and links by clicking on their icon
* Editing labels on devices, interfaces, and links by double-clicking on
their icon
* Moving devices around the canvas by clicking-and-dragging on their
icon
Device types supported:
* router
* switch
* host
* rack
The database schema for the prototype is also developed with a visual
tool that makes the relationships in the snowflake schema for the models
quickly evident. This tool makes it very easy to build queries across
multiple tables using Django's query builder.
See: https://github.com/benthomasson/db-designer-svg
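As an illustration of the kind of cross-table query this enables (the
model and field names here are hypothetical, not the actual schema):

    # All devices in a given topology that have at least one link,
    # following Topology -> Device -> Interface -> Link relations.
    # Device, etc. are hypothetical Django models used for illustration.
    devices = (Device.objects
               .filter(topology__name='lab',
                       interface__link__isnull=False)
               .distinct())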
The client and the server communicate asynchronously over a websocket.
This allows the UI to be very responsive to user interaction since
the full request/response cycle is not needed for every user
interaction.
The server provides persistence of the UI state in the database
using event handlers for events generated in the UI. The UI
processes mouse and keyboard events, updates the UI, and
generates new types of events that are then sent to the server
to be persisted in the database.
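A rough sketch of the event-handler dispatch described above (message
names and fields are assumptions, not the actual network_ui protocol;
an in-memory dict stands in for the database):

    import json

    HANDLERS = {}
    DEVICES = {}   # stands in for database persistence

    def handles(msg_type):
        def register(fn):
            HANDLERS[msg_type] = fn
            return fn
        return register

    @handles("DeviceCreate")
    def device_create(client_id, data):
        # The client-generated id keys the element until it is persisted.
        DEVICES[data["id"]] = {"name": data["name"], "client": client_id}

    def on_websocket_message(client_id, raw):
        msg_type, data = json.loads(raw)
        handler = HANDLERS.get(msg_type)
        if handler is not None:
            handler(client_id, data)

    # Example: a message as the UI might send it over the websocket.
    on_websocket_message(1, json.dumps(["DeviceCreate", {"id": 7, "name": "router1"}]))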
UI elements are tracked by unique ids generated on the client
when an element is first created. This allows the elements to
be correctly tracked before they are stored in the database.
The history of the UI is stored in the TopologyHistory model,
which is useful for tracking which client made which change
and for implementing undo/redo.
Each message is given a unique id per client and has
a known message type. Message types are pre-populated
in the MessageType model using a database migration.
A History message containing all the change messages for a topology is
sent when the websocket is connected. This allows undo/redo to work
across sessions.
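A hedged sketch of what the persistence models described above might
look like (field names are illustrative, not the exact network_ui
schema):

    from django.db import models

    class MessageType(models.Model):
        # Pre-populated by a data migration.
        name = models.CharField(max_length=50, unique=True)

    class TopologyHistory(models.Model):
        topology_id = models.IntegerField()
        client_id = models.IntegerField()    # which client made the change
        message_type = models.ForeignKey(MessageType, on_delete=models.CASCADE)
        message_id = models.IntegerField()   # per-client message id
        message_data = models.TextField()    # serialized change message
        undone = models.BooleanField(default=False)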
This prototype provides a server-side test runner for driving
tests in the user interface. Events are emitted on the server
to drive the UI. Test code coverage is measured using the
istanbul library, which produces instrumented client code.
Code coverage for the server is measured by the coverage library.
The test code coverage for the Python code is 100%.
* use embedded beat rather than standalone
* dynamically set celeryd hostname at runtime
* add embedded beat flag to celery startup
* Embedded beat mode routes will piggyback off of the celery worker
setup signal
* Based on the tower topology (Instance and InstanceGroup
relationships), have celery dynamically listen to queues on boot (see
the sketch after this list)
* Add a celery task capable of "refreshing" which queues each celeryd
worker listens to. This will be used to support changes in the topology.
* Cleaned up some celery task definitions.
* Converged wrongly targeted job launch/finish messages onto the 'tower'
queue rather than a one-off queue.
* Dynamically route celery tasks destined for the local node
* Add support for a separate beat process
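A minimal sketch of the "listen to queues based on topology" idea,
using celery's celeryd_after_setup signal (the exact queue layout here
is an assumption):

    from celery.signals import celeryd_after_setup

    @celeryd_after_setup.connect
    def add_topology_queues(sender, instance, **kwargs):
        # `sender` is the worker hostname, `instance` the worker object.
        # Every node listens on the shared 'tower' queue plus one queue
        # per instance group it belongs to.
        from awx.main.models import InstanceGroup  # assumed import path
        instance.app.amqp.queues.select_add('tower')
        for group in InstanceGroup.objects.filter(instances__hostname=sender):
            instance.app.amqp.queues.select_add(group.name)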
* Jupyter starts alongside the other awx services and is available on
0.0.0.0:8888
* make target: make jupyter
* default settings in settings/development.py
* Added jupyter, matplotlib, numpy to dev dependencies
* release_3.2.0: (138 commits)
Pull Dutch and Spanish translations
Increase verbosity of CTiT Logging test error handling
fix to console error of conditional toggle showing
Fix error message when calling remove on undefined DOM element
fix ctit logging toggle from being showed for log types other than https
Remove delete and edit buttons from smart inventory host list. Only option should be view.
feedback from PR
Enhance query string in ad hoc command event save to consider smart inventory
Fixed host filter clearall
fuller validation for host_filter
On JT form, Show credential tags from summary_fields if user doesn't have view permission on the credential
Align key toggle button to role dropdown in user team permissions modal
Removed rogue console.logs
Removed extra refresh call
Enhance query string in job event save to consider smart inventory
Fix typo in scan_packages plugin
Switch running_jobs and capacity table columns
Disable insights cred when user doesn't have edit permissions
Disallow changing credential_type of an existing credential
fix bug with host_filter RBAC check
...
* release_3.2.0: (342 commits)
fail all jobs on an offline node
filtering out super users from permissions lists
removing vars from schedules for project syncs and inv syncs
update license page if user inputs a new type of license
Show IG name on job results if it comes from the socket
rename isolated->expect in script tooling
Center survey maker delete dialog in browser window
Fix job details right panel content from overflowing in Firefox
graceful killing of receiver worker processes
change imports to reflect isolated->expect move
Update smart inventory host popover content
Fix extra variable textarea scroll in Firefox
initial commit to move folder isolated->expect
Add missing super call in NotificationTemplateSerializer
Various workflow maker bug fixes
Style nodes with deleted unified job templates
Fixed job template view for user with read-only access
presume 401 from insights means invalid credential
only reap non-netsplit nodes
import os, fixing bug that forced SIGKILL
...
* Change scheme from using an event dict to a JobEvent object
* Add processing to grok object fields
* Allow override of provided formatter in case of future issues
* colorize uwsgi and celery logs; DEBUG lines are green, WARN lines
are yellow, ERROR lines (and tracebacks) are red (see the sketch below)
* pretty-print fact callback receiver JSON
* simplify the uwsgi log format so it's more legible
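Roughly, the per-level coloring amounts to something like this (a
sketch, not the exact formatter used):

    import logging

    COLORS = {'DEBUG': '\033[32m', 'WARNING': '\033[33m', 'ERROR': '\033[31m'}
    RESET = '\033[0m'

    class ColorFormatter(logging.Formatter):
        def format(self, record):
            message = super(ColorFormatter, self).format(record)
            color = COLORS.get(record.levelname)
            return color + message + RESET if color else message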
This moves the container-based code location and venvs.
The goal here is that the paths of the Tower source for isolated
vs. normal nodes match (both in prod and local development) so that we
don't have to add a bunch of additional bwrap argument logic for
<location-of-isolated-tower-venv>.
Instead of launching isolated tasks via `systemctl`, treat
`awx.main.isolated.run` as an executable that knows how to daemonize.
Additionally, add `setup.py isolated_build` for isolated Tower source
distribution.
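For reference, a custom setuptools command like `setup.py
isolated_build` is registered roughly as follows (a sketch; the real
command's packaging logic is only summarized in the comment):

    from setuptools import Command

    class isolated_build(Command):
        description = 'build a source distribution for isolated Tower nodes'
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            # The real command would collect only the files an isolated
            # node needs; reusing sdist here is just a placeholder.
            self.run_command('sdist')

    # hooked up via: setup(..., cmdclass={'isolated_build': isolated_build})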
* Task Manager logic wasn't assigning default instance group on system
jobs
* Task credential changes assumed the model would have a credential
* Fix up an innocuous error symlinking rdb.py if it already exists
* rampart_groups_setup_playbook:
Updating changelog for Instance Groups
Fix an incorrect reference on instance group jobs list
Purge remaining references to rampart groups
Simplify can_access for instance groups on job templates
Adding Instance Group permissions and tests
Increase test coverage for task scheduler inventory updates
Exit logic fixes for instance group tools
View Fixes for instance groups
new view to allow associations but no creations
Updating acceptance documentation and system docs
Updating unit tests for task manager refactoring
Update views and serializers to support instance group (ramparts)
Implementing models for instance groups, updating task manager
Updating the setup playbook to support instance group installation
Add nginx to server start and switch back to first tmux win
Fix an issue where the local queue wouldn't use the rabbitmq name
* Includes top level views for instances and instance groups and
extends those views to be able to view running jobs
* Associative endpoints on Organizations, Inventories, and Job
Templates
* Related and summary field entries where appropriate
* Adding job model references to executing instance group
* Fix up default queue properties for clustering from the settings file
* Update production and default settings for instance queues in settings
* New InstanceGroup model and associative relationship with Instances
* Associative instances between Organizations, Inventory, and Job
Templates and InstanceGroups
* Migrations for adding fields and tables for Instance Groups
* Adding activity stream reference for instance groups
* Task Manager Refactoring:
** Simplifying task manager relationships and move away from the
interstitial hash tables
** Simplify dependency determination logic
** Reduce task manager runtime complexity by removing the partial
references and moving the logic into the task manager directly or
relying on Job model logic for determinism
Credentials now have a required CredentialType, which defines inputs
(e.g., username, password) and injectors (e.g., assign the username to
SOME_ENV_VARIABLE at job runtime).
This commit only implements the model changes necessary to support the
new inputs model, and includes code for the credential serializer that
allows backwards-compatible support for /api/v1/credentials/; tasks.py
still needs to be updated to actually respect CredentialType injectors.
This change *will* break the UI for credentials (because it needs to be
updated to use the new v2 endpoint).
see: #5877
see: #5876
see: #5805
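As an illustration, an inputs/injectors definition could look something
like this (the exact schema keys are assumptions based on the
description above):

    EXAMPLE_CREDENTIAL_TYPE = {
        'name': 'Example Service',
        'inputs': {
            'fields': [
                {'id': 'username', 'label': 'Username', 'type': 'string'},
                {'id': 'password', 'label': 'Password', 'type': 'string',
                 'secret': True},
            ],
            'required': ['username', 'password'],
        },
        'injectors': {
            # At job runtime the stored username is exposed to the job
            # environment, e.g. as SOME_ENV_VARIABLE.
            'env': {'SOME_ENV_VARIABLE': '{{ username }}'},
        },
    }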
I had to pull the git URLs out of the main requirements files because in order to install offline (--no-index), we need pip to install from local package archives rather than cloning repos.
The weird `cat` thing going on in the Makefile is because we need to install everything as part of a single `pip install` transaction. Without this, installing only requirements_git.txt will result in dependencies getting unintentionally updated.
I know, this sucks. I spent all day trying to get to the bottom of the CI failures that started happening the other day with no luck.
There is something going on with how we were moving the node_modules directory into the source tree from the pre-built location in /tmp. This was working, but then it broke. I hope to cycle back on this sometime next week if I have the time.
http://docs.celeryproject.org/en/latest/reference/celery.contrib.rdb.html
allows you to remotely debug running celery tasks with:
    from celery.contrib import rdb
    rdb.set_trace()
this will bind a remote Python debugger on a random TCP port between
6899-6999, which you can telnet into for remote task debugging
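For example, dropping into a running task looks roughly like this (the
task name and port are illustrative):

    from celery import shared_task
    from celery.contrib import rdb

    @shared_task
    def example_task(x):
        rdb.set_trace()   # binds a debugger on a port in the 6899-6999 range
        return x * 2

    # then, from another shell (the actual port is printed in the worker log):
    #   telnet localhost 6900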
* This also allows disabling https mode in the nginx configuration
* Reconfigure the development container to not specifically require
https, so the haproxy cluster configuration can work