# Integration With Third-Party Log Aggregators
This feature adds the capability to send detailed logs to several kinds
of third-party external log aggregation services. Services connected to this
data feed should be useful for gaining insights into Tower usage and
technical trends. The data can be sent in JSON format in one of three ways:
over an HTTP connection, a direct TCP connection, or a direct UDP connection.
Service-specific tweaks are kept minimal and are engineered into a custom
handler or handled via an imported library.
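
As a minimal sketch of the transport concept (not Tower's actual handler),
sending a single JSON-formatted record over a direct UDP connection could
look like this; the endpoint and record contents are hypothetical:

```python
import json
import socket

# Hypothetical aggregator endpoint; in Tower these values come from settings.
HOST, PORT = "logs.example.com", 5140

record = {"logger_name": "awx.analytics.job_events", "level": "INFO"}

# UDP is fire-and-forget: serialize the record to JSON and send one datagram.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(record).encode("utf-8"), (HOST, PORT))
sock.close()
```
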
## Loggers
This feature introduces several new loggers, which are intended to
deliver a large amount of information in a predictable and structured format,
following the same structure one would expect when obtaining the data
from the API. These data loggers are the following:

- `awx.analytics.job_events` - Data returned from the Ansible callback module
- `awx.analytics.activity_stream` - Record of changes to the objects within the Ansible Tower app
- `awx.analytics.system_tracking` - Data gathered by Ansible scan modules run by scan job templates

These loggers only use the log level `INFO`. Additionally, the standard Tower
logs are deliverable through this same mechanism. It should be obvious to the
user how to enable or disable each of these sources of data without
manipulating a complex dictionary in their local settings file, as well as
how to adjust the log level consumed from the standard Tower logs.
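
For illustration, here is a sketch of what that selection might look like via
the `LOG_AGGREGATOR_LOGGERS` setting described later in this document; the
individual entry names used here are assumptions, not authoritative defaults:

```python
# local_settings.py -- a sketch only; entry names are assumptions.
LOG_AGGREGATOR_LOGGERS = [
    'awx',              # standard Tower logs
    'activity_stream',  # awx.analytics.activity_stream
    'job_events',       # awx.analytics.job_events
    'system_tracking',  # awx.analytics.system_tracking
]
```
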
## Supported Services
Committed to support:
- Splunk
- Elastic Stack / ELK Stack / Elastic Cloud

Have tested:
- Sumologic
- Loggly

Considered, but have not tested:
- Datadog
- Red Hat Common Logging via Logstash connector

### Elasticsearch Instructions
In the development environment, the server can be started up with the
log aggregation services attached via Makefile targets. This starts
up the three associated services (Logstash, Elasticsearch, and Kibana)
as separate containers.

In addition to running these services, it establishes connections to the
`tower_tools` containers as needed. This is derived from the
[`docker-elk` project](https://github.com/deviantony/docker-elk):
```bash
# Start a single server with links
make docker-compose-elk
# Start the HA cluster with links
make docker-compose-cluster-elk
```

For more instructions on getting started with the environment that this
example spins up, refer to [`/tools/elastic/README.md`](https://github.com/ansible/awx/blob/devel/tools/elastic/README.md).

If you were to start from scratch, standing up your own version of the Elastic
Stack, then the only change you should need is to add the following lines
to the Logstash `logstash.conf` file:
```
filter {
    json {
        source => "message"
    }
}
```

#### Debugging and Pitfalls
Backward-incompatible changes were introduced with Elastic 5.0.0, and
customers may need different configurations depending on what
versions they are using.

# Log Message Schema
Common schema for all loggers:
| Field | Information |
|-----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `cluster_host_id` | (string) Unique identifier of the host within the Tower cluster |
| `level` | (choice of `DEBUG`, `INFO`, `WARNING`, `ERROR`, etc.) Standard Python log level, roughly reflecting the significance of the event; all of the data loggers (as a part of this feature) use the `INFO` level, but the other Tower logs will use different levels as appropriate |
| `logger_name` | (string) Name of the logger we use in the settings, *e.g.*, "`awx.analytics.activity_stream`" |
| `@timestamp` | (datetime) Time of the log |
| `path` | (string) File path in the code where the log was generated |
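
As an illustration, a message restricted to the common fields could look like
the following, shown as a Python dict mirroring the JSON payload; all values
here are invented:

```python
# Invented values, for illustration of the common schema only.
common_fields = {
    "cluster_host_id": "towerhost",
    "level": "INFO",
    "logger_name": "awx.analytics.activity_stream",
    "@timestamp": "2016-11-18T19:20:44.000Z",
    "path": "awx/main/signals.py",
}
```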

## Activity Stream Schema
| Field | Information |
|-------------------|-------------------------------------------------------------------------------------------------------------------------|
| (common) | This uses all the fields common to all loggers listed above |
| actor | (string) Username of the user who took the action documented in the log |
| changes | (string) JSON summary of what fields changed, and their old and new values |
| operation | (choice of several options) The basic category of the change logged in the Activity Stream, for instance, "associate". |
| object1 | (string) Information about the primary object being operated on, consistent with what we show in the Activity Stream |
| object2 | (string) If applicable, the second object involved in the action |
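
Building on the hypothetical common-fields payload above, the Activity
Stream-specific portion might look like this (values invented for
illustration):

```python
# Invented values; layered on top of the common fields shown earlier.
activity_stream_fields = {
    "actor": "admin",
    "operation": "associate",
    "changes": '{"name": ["old value", "new value"]}',  # summary of field changes
    "object1": "credential",
    "object2": "organization",
}
```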

## Job Event Schema
This logger echoes the data being saved into Job Events, except where it
would otherwise conflict with expected standard fields from the logger, in
which case the fields are renamed. Notably, the field `host` on the
`job_event` model is given as `event_host`.

There is also a sub-dictionary field `event_data` within the payload,
which will contain different fields depending on the specifics of the
Ansible event.
This logger also includes the common fields.
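
Here is a sketch of the renaming and the `event_data` sub-dictionary
described above; the field values are hypothetical:

```python
# Hypothetical payload fragment: the model's `host` field arrives as
# `event_host` to avoid colliding with standard logger fields.
job_event_fields = {
    "event": "runner_on_ok",
    "event_host": "node-1.example.com",
    "event_data": {  # contents vary with the specific Ansible event
        "task": "ping",
        "res": {"changed": False},
    },
}
```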

## Scan / Fact / System Tracking Data Schema
These contain a detailed dictionary-type field for either services,
packages, or files.
| Field | Information |
|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| (common) | This uses all the fields common to all loggers listed above |
| services | (dict, optional) For services scans, this field is included and has keys based on the name of the service. **NOTE:** Periods are disallowed by Elasticsearch in names and are replaced with `"_"` by our log formatter |
| packages | (dict, optional) Included for log messages from package scans |
| files | (dict, optional) Included for log messages from file scans |
| host | (string) Name of the host that the scan applies to |
| inventory_id | (int) ID of the inventory that the host belongs to |
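
As a sketch of the period replacement mentioned in the `services` note,
assuming the formatter simply rewrites the keys of the services dictionary
(the helper name here is invented):

```python
def sanitize_service_keys(services):
    """Replace periods in service names, since Elasticsearch disallows them."""
    return {name.replace(".", "_"): value for name, value in services.items()}

# e.g. {"firewalld.service": {...}} becomes {"firewalld_service": {...}}
```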

## Job Status Changes
This is intended to be a lower-volume source of information about
changes in job states compared to Job Events, and is also intended to
capture changes to types of Unified Jobs other than Job Template-based
jobs.
In addition to common fields, these logs include fields present on
the job model.

## Tower Logs
In addition to the common fields, this will contain a `msg` field with
the log message. Errors contain a separate `traceback` field.

These logs can be enabled or disabled in CTiT by adding the corresponding
logger to, or removing it from, the setting `LOG_AGGREGATOR_LOGGERS`.

# Configuring Inside of Tower
Parameters needed to configure the connection to the log aggregation
service include most of the following for all supported services:
- Host
- Port
- The type of service, allowing service-specific customizations
- Optional username for the connection, used by certain services
- Some kind of token or password
- A flag to indicate how system tracking records will be sent
- Selecting which loggers to send
- Enabling sending logs
- Connection type (HTTPS, TCP, or UDP)
- Timeout value, if the connection type is based on the TCP protocol (HTTPS and TCP)

Some settings for the log handler will not be exposed to the user via
this mechanism; for example, threading (which is enabled).

Parameters for the items listed above should be configurable through
the Configure-Tower-in-Tower interface.

One note on configuring Host and Port: when entering a URL, it is customary
to include the port number, as in `https://localhost:4399/foo/bar`. So, for
the convenience of users, when the connection type is HTTPS, we allow the
Host field to be entered as a URL with a port number, in which case the Port
field is ignored; in other words, the Port field is optional in this case.
On the other hand, TCP and UDP connections are determined by a
`<hostname, port number>` tuple rather than a URL, so for TCP/UDP connections
the Port field must be provided and the Host field should contain only the
hostname. If a URL is entered in the Host field instead, its hostname portion
will be extracted and used as the actual hostname.
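
A sketch of that hostname extraction using the standard library (Tower's
actual implementation may differ):

```python
from urllib.parse import urlsplit

def extract_hostname(host_field):
    """If the Host field contains a URL, keep only its hostname portion."""
    parsed = urlsplit(host_field)
    # urlsplit only reports a hostname when a scheme such as https:// is present.
    return parsed.hostname if parsed.hostname else host_field

print(extract_hostname("https://localhost:4399/foo/bar"))  # -> localhost
print(extract_hostname("localhost"))                       # -> localhost
```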

# Acceptance Criteria Notes
**Connection:** Testers need to replicate the documented steps for setting up
and connecting with a destination log aggregation service, if that is
an officially supported service. That will involve 1) configuring the
settings as documented, 2) taking some action in Tower that causes a log
message from each type of data logger to be sent, and 3) verifying that
the content is present in the log aggregation service.

**Schema:** After the connection steps are completed, a tester will need to create
an index. We need to confirm that no errors are thrown in this process.
It also needs to be confirmed that the schema is consistent with the
documentation. In the case of Splunk, we need basic confirmation that
the data is compatible with the existing app schema.

**Tower logs:** Formatting of the traceback message is a known issue in several
open-source log handlers, so we should confirm that server errors result
in the log aggregator receiving a well-formatted multi-line string
with the traceback message.
Log messages should be sent outside of the request-response cycle. For
example, Loggly's examples use `requests_futures.sessions.FuturesSession`,
which does some threading work to fire the message without interfering with
other operations. A timeout on the part of the log aggregation service should
not cause Tower operations to hang.
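
For reference, here is a minimal sketch of that pattern with
`requests_futures`; the endpoint and payload are invented, and error
handling is elided:

```python
from requests_futures.sessions import FuturesSession

session = FuturesSession()  # wraps requests with a background thread pool

def emit(record):
    # Returns a future immediately; the POST runs on a worker thread, so a
    # slow or unresponsive aggregator cannot block the request-response cycle.
    return session.post(
        "https://logs.example.com/inputs/TOKEN",  # hypothetical endpoint
        json=record,
        timeout=5,
    )

emit({"logger_name": "awx", "msg": "example message"})
```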