Audit Logging

Audit logging records all user activity on the system, producing an auditable trail of user session events and of access and modification of every entity in the system. Audit logs are written to the Docker container's standard output and can be routed into a log aggregation system if needed.
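
Because the audit log goes to standard output, you can inspect it directly with the Docker CLI before wiring up any aggregation. For example (assuming the API server container is named formio; adjust to your deployment):

docker logs -f formio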

This feature is included in the Security and Compliance package. Contact sales@form.io for more details.

The Audit Log data is presented in the following format:

date EVENT uuid projectId sessionId userId [additional information]
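
For illustration only, a login event might look roughly like the following. The timestamp format and the LOGIN event name are assumptions, and the angle-bracket fields stand in for the actual identifiers:

2024-01-15T10:32:07.412Z LOGIN <uuid> <projectId> <sessionId> <userId> [additional information]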

You can turn off audit logging by setting the NOAUDITLOG flag to true in the .env file or in Docker secrets.
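
For example, add the following line to your .env file (or provide it as a Docker secret) to disable audit logging:

NOAUDITLOG=true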

Audit Logging Events

A number of events are tracked when audit logging is enabled, covering user session activity and entity access and modification. Each event is written as a line in the format shown above.

Aggregation System Deployment

Various aggregation systems can collect and manage your Docker logs. In this article, we use the open-source Elastic Stack log analysis software as an example.

We will use two Elastic Stack tools: Elasticsearch to store the log data and Kibana to visualize it. To ship the logs into Elasticsearch we will use Filebeat, an open-source data shipper.

There are many ways to set up the Elastic Stack; the easiest is with docker-compose. We will use the official Docker images and a single Elasticsearch node.

Step 1: Add the following rows to the existing docker-compose.yml:

services:
  elasticsearch:
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.2.0"
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - "discovery.type=single-node"
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data

  kibana:
    image: "docker.elastic.co/kibana/kibana:7.2.0"
    ports:
      - "5601:5601"

  filebeat:
    image: "docker.elastic.co/beats/filebeat:7.2.0"
    user: root
    volumes:
      - /MY_WORKDIR/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  mdb-data:
  elasticsearch_data:

Note that you need to replace MY_WORKDIR with the path of your working directory.

For Elasticsearch and Kibana, a basic Docker configuration is enough. They are available on ports 9200 and 5601, respectively. Elasticsearch has a volume to persist its data; Kibana doesn't need one, as it uses Elasticsearch to persist its configuration.

Filebeat, on the other hand, needs to be configured. To share its config file with the container, we mount it as a read-only volume at /usr/share/filebeat/filebeat.yml:ro. Filebeat also needs access to the Docker log files. They can usually be found in /var/lib/docker/containers, but that might depend on your Docker installation. The Docker socket /var/run/docker.sock is also shared with the container, which allows Filebeat to query the Docker daemon and enrich the logs with information that is not in the log files themselves, such as the image name or the container name.

Step 2: Create the Filebeat configuration file at /MY_WORKDIR/filebeat.yml:

filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

logging.json: true
logging.metrics.enabled: false

output.elasticsearch:
  hosts:
    - "elasticsearch:9200"
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
  - decode_json_fields:
      fields:
        - message
      overwrite_keys: true
      target: json

  • The Filebeat input type container is needed to import logs from Docker. /var/lib/docker/containers/*/*.log is the location of the log files inside the Filebeat container.

  • The output.elasticsearch setting configures the Elasticsearch address as well as the indices into which the logs are imported. The index template filebeat-%{[agent.version]}-%{+yyyy.MM.dd} includes a date, so Docker logs are imported into the index corresponding to the day they appeared.

  • add_docker_metadata adds useful information to the logs, such as the image name and the container name; by default only IDs are available.

  • decode_json_fields parses logs that are encoded as JSON (see the example below). The log entries in Filebeat, Elasticsearch, and Kibana consist of multiple fields; the message field is what the application (running inside a Docker container) writes to standard output.
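
For instance, if an application writes the following JSON line to standard output (a hypothetical message, shown only to illustrate the decoding):

{"level":"info","event":"LOGIN","msg":"user logged in"}

Filebeat stores the raw line in the message field, and decode_json_fields with target: json exposes the parsed values as json.level, json.event, and json.msg in Elasticsearch.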

Step 3: Launch the Docker containers with docker-compose
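
From the directory containing your docker-compose.yml, run:

docker-compose up -d

This starts (or recreates) the containers in the background.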

Kibana is now available at http://localhost:5601. Click Discover in the left-hand menu, then Create Index Pattern. Create a pattern using filebeat-* (to include all logs shipped by Filebeat) and select @timestamp as the time field.

Done! Click Discover again and you will see the logs. Note that with the default configuration, the logs of all containers are displayed; the audit logs described at the beginning of this article come from the formio container.
