Auth events are among the most security-relevant signals in any system: who logged in, from where, using what method, and what they did after. SIEM systems (Splunk, Datadog, Elastic Security) need these events in a structured, consistent format to power detection rules, dashboards, and compliance reports. The most common failure mode is logging either too little (missing events that matter for incident investigation) or too much (logging sensitive values that turn the audit log itself into a security liability).
Required auth events
These are the minimum events for a SOC 2 Type II or ISO 27001 compliant auth audit trail:
- user.login.success — successful authentication, including MFA method used
- user.login.failure — failed authentication with reason (wrong password, locked account, MFA failed)
- user.logout — explicit logout, include session age
- user.mfa.enrolled — new MFA method added, include method type and device
- user.mfa.removed — MFA method removed, who removed it (user vs admin)
- user.password.changed — password change or reset, include whether self-service or admin-initiated
- user.email.changed — email address update with old and new values (redacted to domain in logs)
- org.member.invited — invitation sent, include invitee email domain, role
- org.member.removed — member removed from org
- org.role.changed — role change, include previous and new role
- org.sso.configured — SSO configuration added or modified
- token.issued — access token issued (log at DEBUG by default, INFO for privileged scopes)
- token.revoked — explicit token revocation
- api_key.created — new PAT or API key created
- api_key.deleted — PAT or API key deleted or expired
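The event list above can be pinned down as a closed list in code so a typo in an action name fails at compile time instead of silently creating a new event type in the SIEM. A minimal TypeScript sketch (AuditAction and isKnownAction are our names, not from the text):

```typescript
// The event names above as a closed list: the union type catches typos at
// compile time, and the runtime check validates events from other services.
const AUDIT_ACTIONS = [
  'user.login.success', 'user.login.failure', 'user.logout',
  'user.mfa.enrolled', 'user.mfa.removed', 'user.password.changed',
  'user.email.changed', 'org.member.invited', 'org.member.removed',
  'org.role.changed', 'org.sso.configured', 'token.issued',
  'token.revoked', 'api_key.created', 'api_key.deleted',
] as const;

type AuditAction = (typeof AUDIT_ACTIONS)[number];

const isKnownAction = (a: string): a is AuditAction =>
  (AUDIT_ACTIONS as readonly string[]).includes(a);
```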
Event schema
Use a consistent JSON schema for every event. SIEM parsers break when field names or types are inconsistent across event types.
// Standard auth event schema
interface AuditEvent {
  // Required on every event
  id: string;          // UUID, unique event ID for deduplication
  occurred_at: string; // ISO 8601 timestamp with milliseconds
  action: string;      // namespaced: 'user.login.success'
  outcome: 'success' | 'failure' | 'unknown';
  // Actor
  actor: {
    type: 'user' | 'service' | 'system';
    id: string; // user ID or service account ID
    org_id?: string;
    ip: string; // IPv4 or IPv6
    user_agent?: string; // browser/client UA
    session_id?: string;
    mfa_method?: string; // if this action required MFA
  };
  // Target resource
  target?: {
    type: string; // 'user', 'org', 'token', 'role'
    id: string;
    org_id?: string;
    display?: string; // human-readable label, no PII
  };
  // Context
  context?: {
    request_id: string; // correlation ID for distributed tracing
    country?: string;    // ISO 3166-1 alpha-2
    risk_score?: number; // 0-100 if risk engine ran
  };
  // Event-specific payload (varies by action type)
  payload?: Record<string, unknown>;
}
// Example: user.login.success
const loginEvent: AuditEvent = {
  id: 'evt_01HXZ...',
  occurred_at: '2021-11-29T14:23:01.432Z',
  action: 'user.login.success',
  outcome: 'success',
  actor: {
    type: 'user',
    id: 'user_ABC',
    org_id: 'org_XYZ',
    ip: '203.0.113.42',
    mfa_method: 'totp',
  },
  context: {
    request_id: 'req_...',
    country: 'US',
  },
  payload: {
    auth_method: 'password+totp',
    new_device: false,
  },
};
Log metadata, never secrets: a single console.log(req.body) on a login endpoint is enough to log every user's password in plaintext. Use structured logging and an explicit allowlist of loggable fields.

SIEM ingestion: Splunk, Datadog, Elastic
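Whatever backend you ship to, enforce the allowlist at the logging boundary so sensitive fields can never reach the SIEM. A sketch in TypeScript (sanitizeForLog and LOGGABLE_FIELDS are hypothetical names; real code should also allowlist keys inside the payload):

```typescript
// Only top-level fields named here ever reach the log pipeline; everything
// else (passwords, tokens, raw request bodies) is silently dropped.
const LOGGABLE_FIELDS: ReadonlySet<string> = new Set([
  'id', 'occurred_at', 'action', 'outcome',
  'actor', 'target', 'context', 'payload',
]);

function sanitizeForLog(event: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(event)) {
    if (LOGGABLE_FIELDS.has(key)) out[key] = value;
  }
  return out;
}
```

Applied at the boundary, a stray spread of req.body into an event can no longer leak credentials into the audit trail.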
# Shipping audit logs to Datadog via Vector
# /etc/vector/audit.yaml
sources:
  audit_events:
    # Publish audit events to a queue and let Vector consume them; tailing
    # the database directly is brittle and couples the SIEM to your schema.
    # (Broker address and topic name below are example values.)
    type: kafka
    bootstrap_servers: "kafka.internal:9092"
    group_id: "vector-audit"
    topics: ["auth-audit"]
    decoding:
      codec: json

transforms:
  normalize_audit:
    type: remap
    inputs: ["audit_events"]
    source: |
      .service = "auth"
      .env = "${ENVIRONMENT}"
      .ddtags = "service:auth,env:" + string!(.env)
      # Map to Datadog's standard log attributes
      .usr.id = .actor.id
      .usr.org_id = .actor.org_id
      .network.client.ip = .actor.ip
      .evt.name = .action
      .evt.outcome = .outcome

sinks:
  datadog_logs:
    type: datadog_logs
    inputs: ["normalize_audit"]
    default_api_key: "${DD_API_KEY}"
    compression: gzip
# Splunk: HEC (HTTP Event Collector) ingestion
# Auth service publishes to a queue; Splunk forwarder consumes
curl -X POST https://splunk.internal:8088/services/collector/event \
  -H "Authorization: Splunk ${HEC_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "time": 1638195781.432,
    "host": "auth-service-01",
    "source": "auth:audit",
    "sourcetype": "_json",
    "index": "auth_audit",
    "event": {
      "action": "user.login.success",
      "actor_id": "user_ABC",
      "org_id": "org_XYZ",
      "ip": "203.0.113.42",
      "mfa_method": "totp"
    }
  }'
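The HEC envelope can be built from the audit event in code; the one conversion that trips people up is that Splunk's "time" field is epoch seconds, while occurred_at is ISO 8601. A sketch (hecPayload is our name; the envelope fields follow Splunk's HEC event format, and host/index values are example assumptions):

```typescript
// Wrap an audit event in a Splunk HEC envelope, converting the ISO 8601
// occurred_at into epoch seconds with millisecond precision.
function hecPayload(event: {
  occurred_at: string;
  action: string;
  [k: string]: unknown;
}): Record<string, unknown> {
  return {
    time: Date.parse(event.occurred_at) / 1000, // epoch seconds, e.g. 1638195781.432
    host: 'auth-service-01',
    source: 'auth:audit',
    sourcetype: '_json',
    index: 'auth_audit',
    event,
  };
}
```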
Retention policy
Compliance frameworks specify minimum retention periods for security logs:
- SOC 2: no specific requirement, but auditors typically expect 1 year.
- PCI DSS: 1 year minimum, with 3 months immediately available.
- HIPAA: 6 years for audit documentation.
- ISO 27001: organization-defined, auditors typically expect 1 year.
- GDPR: not a retention requirement per se, but deletion requirements mean you should not retain logs with PII beyond the minimum needed.
Implement tiered storage: hot storage (database or Elasticsearch) for the last 90 days with full query capability, cold storage (S3/GCS with Glacier-class pricing) for 90 days to 7 years in compressed JSON format. Ensure cold storage is immutable and append-only — audit logs should not be modifiable even by administrators.
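The tiering rule above reduces to a small pure function worth keeping in one place, since both the query router and the lifecycle policy must agree on the cutoffs. A sketch (storageTierFor and the tier names are ours; 2557 days ≈ 7 years matches the lifecycle policy below):

```typescript
type StorageTier = 'hot' | 'cold' | 'expired';

// Decide where an event of a given age lives.
function storageTierFor(ageDays: number, retentionDays: number = 2557): StorageTier {
  if (ageDays >= retentionDays) return 'expired'; // past retention: deleted
  if (ageDays >= 90) return 'cold';               // compressed JSON in S3/GCS
  return 'hot';                                   // DB/Elasticsearch, full query
}
```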
# S3 lifecycle policy for audit log retention
aws s3api put-bucket-lifecycle-configuration \
  --bucket yourapp-audit-logs \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "AuditLogRetention",
      "Status": "Enabled",
      "Filter": { "Prefix": "auth-events/" },
      "Transitions": [
        { "Days": 90,  "StorageClass": "STANDARD_IA" },
        { "Days": 365, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 2557 }
    }]
  }'
# Enable Object Lock for WORM (Write Once Read Many) compliance.
# Requires bucket versioning; in COMPLIANCE mode the retention period
# cannot be shortened or the lock removed, even by the root account.
aws s3api put-object-lock-configuration \
  --bucket yourapp-audit-logs \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 2557 } }
  }'