SQL Audit Logging


SQL audit logging gives you detailed information about queries being executed against your system. This feature is especially useful when you want to log all queries that are run against a table containing personally identifiable information (PII).

To enable it, use the ALTER TABLE ... EXPERIMENTAL_AUDIT statement on each table you want to audit. This directs audit events for that table to the SENSITIVE_ACCESS logging channel.

This page provides an example of SQL audit logging in CockroachDB, including:

  • How to turn audit logging on and off.
  • Where the audit log files live.
  • What the audit log files look like.

Note that enabling SQL audit logs can negatively impact performance. As a result, we recommend using SQL audit logs for security purposes only. For more details, see Performance considerations.

Tip:

For the best visibility into security-related events on your cluster, we recommend configuring SENSITIVE_ACCESS together with the USER_ADMIN, PRIVILEGES, and SESSIONS logging channels. To learn more, see Logging Use Cases.

Note:

This feature is in preview and subject to change. To share feedback and/or issues, contact Support.

File format

Audit log messages, like all log messages, consist of two sections:

  • A payload that contains notable events structured in JSON. These can include information such as the application name, full text of the query (which may contain PII), user account that triggered the event, number of rows produced (e.g., for SELECT) or processed (e.g., for INSERT or UPDATE), status of the query, and more. For more information on the possible event types logged to the SENSITIVE_ACCESS channel, see Notable Event Types.
  • An envelope that contains event metadata (e.g., severity, date, timestamp, channel). Depending on the log format you specify when configuring logs, the envelope can be formatted either as JSON or as a flat prefix to the message.

File storage location

By default, audit logs are prefixed with cockroach-sql-audit and are stored in the same directory as the other logs generated by CockroachDB.

To store the audit log files in a specific directory, configure the SENSITIVE_ACCESS channel with a custom dir path.

Tip:

If your deployment requires particular lifecycle and access policies for audit log files, point SENSITIVE_ACCESS to a directory that has permissions set so that only CockroachDB can create/delete files.
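As a sketch, a logging configuration (passed to cockroach via --log or --log-config-file) that routes SENSITIVE_ACCESS to its own directory might look like the following; the security group name and the directory path are illustrative, not defaults:

```yaml
sinks:
  file-groups:
    security:                        # arbitrary file group name
      channels: [SENSITIVE_ACCESS]
      dir: /var/log/cockroach-audit  # hypothetical path; restrict its permissions
```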

Performance considerations

To ensure non-repudiation in audit logs, we recommend enabling auditable for the SENSITIVE_ACCESS channel. CockroachDB will then synchronously log all of the activity of every user on a cluster in a way that is durable to system failures. Note that every query that causes a logging event must access the disk of the node on which audit logging is enabled. As a result, enabling auditable on a logging channel negatively impacts performance, and we recommend using this setting for security purposes only.
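For example, auditable is set per file sink in the logging configuration. This is a sketch with an illustrative group name:

```yaml
sinks:
  file-groups:
    security:
      channels: [SENSITIVE_ACCESS]
      auditable: true   # synchronous, durable writes; slower but non-repudiable
```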

For debugging and troubleshooting on production clusters, the most performant way to log all queries is to enable the SQL_EXEC logging channel. For details, see Logging Use Cases.

Example

Step 1. Create sample tables

Use the statements below to create:

  • A customers table, which contains PII such as names and addresses.
  • An orders table with a foreign key into customers, which does not expose any PII.

Later, we'll show how to turn on auditing for the customers table.

> CREATE TABLE customers (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name STRING NOT NULL,
    address STRING NOT NULL,
    national_id INT NOT NULL,
    telephone INT NOT NULL,
    email STRING UNIQUE NOT NULL
);
> CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    product_id INT NOT NULL,
    delivery_status STRING check (delivery_status='processing' or delivery_status='in-transit' or delivery_status='delivered') NOT NULL,
    customer_id UUID NOT NULL REFERENCES customers (id)
);

Step 2. Turn on auditing for the customers table

We turn on auditing for a table using the EXPERIMENTAL_AUDIT subcommand of ALTER TABLE.

> ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;

This directs SQL audit events for the customers table into the SENSITIVE_ACCESS logging channel.

Note:

To turn on auditing for more than one table, issue a separate ALTER statement for each table.
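For example, the following sketch enables auditing on two tables and then turns it back off for one of them; accounts is a hypothetical second table, and SET OFF is the subcommand that disables auditing:

```sql
-- One ALTER statement per audited table (accounts is hypothetical):
ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
ALTER TABLE accounts EXPERIMENTAL_AUDIT SET READ WRITE;

-- Turn auditing back off for a table:
ALTER TABLE accounts EXPERIMENTAL_AUDIT SET OFF;
```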

Step 3. Populate the customers table

Now that we have auditing turned on, let's add some customer data:

> INSERT INTO customers (name, address, national_id, telephone, email) VALUES (
    'Pritchard M. Cleveland',
    '23 Crooked Lane, Garden City, NY USA 11536',
    778124477,
    12125552000,
    'pritchmeister@aol.com'
);
> INSERT INTO customers (name, address, national_id, telephone, email) VALUES (
    'Vainglorious K. Snerptwiddle III',
    '44 Straight Narrows, Garden City, NY USA 11536',
    899127890,
    16465552000,
    'snerp@snerpy.net'
);

Now let's verify that our customers were added successfully:

> SELECT * FROM customers;
                   id                  |               name               |                    address                     | national_id |  telephone  |         email
---------------------------------------+----------------------------------+------------------------------------------------+-------------+-------------+------------------------
  859c6aa1-ae36-49c8-9f12-7a952b4e6915 | Vainglorious K. Snerptwiddle III | 44 Straight Narrows, Garden City, NY USA 11536 |   899127890 | 16465552000 | snerp@snerpy.net
  90810df2-d3c1-4038-8462-132f4df5112b | Pritchard M. Cleveland           | 23 Crooked Lane, Garden City, NY USA 11536     |   778124477 | 12125552000 | pritchmeister@aol.com
(2 rows)

Step 4. Check the audit log

By default, events in the SENSITIVE_ACCESS channel are output to a log file that is prefixed with cockroach-sql-audit and stored in the same directory as the other logs generated by CockroachDB.

To store the audit log files in a specific directory, configure the SENSITIVE_ACCESS channel with a custom dir path. Like the other log files, it's rotated according to the max-file-size setting.

When we look at the audit log for this example, we see the following lines, which record every command we've run so far, as expected.

I210323 18:49:57.612823 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 1 ={"Timestamp":1616525397608639000,"EventType":"sensitive_table_access","Statement":"‹ALTER TABLE defaultdb.public.customers EXPERIMENTAL_AUDIT SET READ WRITE›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","Age":4.222,"TxnCounter":20,"TableName":"‹defaultdb.public.customers›","AccessMode":"rw"}
I210323 18:50:04.518707 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 2 ={"Timestamp":1616525404415644000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".customers(name, address, national_id, telephone, email) VALUES ('Pritchard M. Cleveland', '23 Crooked Lane, Garden City, NY USA 11536', 778124477, 12125552000, 'pritchmeister@aol.com')›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":103.066,"TxnCounter":28,"TableName":"‹defaultdb.public.customers›","AccessMode":"rw"}
I210323 18:50:07.591609 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 3 ={"Timestamp":1616525407566154000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".customers(name, address, national_id, telephone, email) VALUES ('Vainglorious K. Snerptwiddle III', '44 Straight Narrows, Garden City, NY USA 11536', 899127890, 16465552000, 'snerp@snerpy.net')›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":25.48,"TxnCounter":36,"TableName":"‹defaultdb.public.customers›","AccessMode":"rw"}
I210323 18:50:10.951550 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 4 ={"Timestamp":1616525410949087000,"EventType":"sensitive_table_access","Statement":"‹SELECT * FROM \"\".\"\".customers›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":2,"Age":2.514,"FullTableScan":true,"TxnCounter":38,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
Note:

The above example shows the default crdb-v2 log format. This can be changed to another format (e.g., JSON). For details, see Configure Logs.

Tip:

For descriptions of all SQL audit event types and their fields, see Notable Event Types.

Step 5. Populate the orders table

Unlike the customers table, orders doesn't have any PII, just a product ID and a delivery status.

Let's populate the orders table with some placeholder data using CREATE SEQUENCE:

> CREATE SEQUENCE product_ids_asc START 1 INCREMENT 1;

Run the statement below a few times to generate data. Note that the INSERT would error if the SELECT subquery returned multiple rows, but in this case it matches only one customer.

> INSERT INTO orders (product_id, delivery_status, customer_id) VALUES (
    nextval('product_ids_asc'),
    'processing',
    (SELECT id FROM customers WHERE name ~ 'Cleve')
);

Let's verify that our orders were added successfully:

> SELECT * FROM orders ORDER BY product_id;
                   id                  | product_id | delivery_status |             customer_id
---------------------------------------+------------+-----------------+---------------------------------------
  77fa8340-8a65-4ab2-8191-ed87fc049b33 |          1 | processing      | 90810df2-d3c1-4038-8462-132f4df5112b
  36c8b00d-01f0-4956-bb0e-6e9219f49bae |          2 | processing      | 90810df2-d3c1-4038-8462-132f4df5112b
  5eebf961-1e4c-41a4-b6c6-441c3d5ef595 |          3 | processing      | 90810df2-d3c1-4038-8462-132f4df5112b
(3 rows)

Step 6. Check the audit log again

Because we used a SELECT against the customers table to generate the placeholder data for orders, those queries will also show up in the audit log.

Note that two log entries are created for each query: one entry for the SELECT subquery, and one entry for the foreign key check on customer_id. Since the customers table is read twice with each query, the TableName and TxnCounter values will be duplicated across entries:

I210323 19:12:09.339065 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 5 ={"Timestamp":1616526729194157000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM \"\".\"\".customers WHERE name ~ 'Cleve'))›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":144.956,"FullTableScan":true,"FullIndexScan":true,"TxnCounter":46,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
I210323 19:12:09.339204 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 6 ={"Timestamp":1616526729194157000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM \"\".\"\".customers WHERE name ~ 'Cleve'))›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":144.956,"FullTableScan":true,"FullIndexScan":true,"TxnCounter":46,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
I210323 19:12:13.407107 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 7 ={"Timestamp":1616526733375741000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM \"\".\"\".customers WHERE name ~ 'Cleve'))›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":31.427,"FullTableScan":true,"FullIndexScan":true,"TxnCounter":52,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
I210323 19:12:13.407177 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 8 ={"Timestamp":1616526733375741000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM \"\".\"\".customers WHERE name ~ 'Cleve'))›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":31.427,"FullTableScan":true,"FullIndexScan":true,"TxnCounter":52,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
I210323 19:12:14.228906 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 9 ={"Timestamp":1616526734201401000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM \"\".\"\".customers WHERE name ~ 'Cleve'))›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":27.554,"FullTableScan":true,"FullIndexScan":true,"TxnCounter":58,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
I210323 19:12:14.228964 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 10 ={"Timestamp":1616526734201401000,"EventType":"sensitive_table_access","Statement":"‹INSERT INTO \"\".\"\".orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM \"\".\"\".customers WHERE name ~ 'Cleve'))›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":1,"Age":27.554,"FullTableScan":true,"FullIndexScan":true,"TxnCounter":58,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
Note:

The above example shows the default crdb-v2 log format. This can be changed to another format (e.g., JSON). For details, see Configure Logs.

Tip:

For descriptions of all SQL audit event types and their fields, see Notable Event Types.
