Secret Server
Webhook Events
→ SIEM Pipeline
Configure Secret Server's native webhook engine to forward privileged access events — login failures, secret access, password rotations, heartbeat alerts — into your SIEM platform in real time for detection, alerting, and compliance.
How Secret Server Feeds Your SIEM
Secret Server's Event Pipeline captures every significant PAM activity — authentication attempts, secret retrievals, rotation outcomes, dependency health checks — and dispatches them to registered webhook endpoints within seconds of occurrence. Your SIEM platform ingests these structured payloads and correlates them against other security telemetry for detection and compliance reporting.
The webhook engine operates independently of the sync connector infrastructure — it is a push-only, event-driven channel that does not query the SIEM and holds no SIEM credentials. Authentication is asymmetric: Secret Server signs outbound payloads with a configurable HMAC-SHA256 signature or Bearer token; the SIEM endpoint validates on receipt.
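To make that contract concrete, here is a minimal sketch of the signing side in Python. The `X-SS-Signature` header name and `sha256=` hex-digest prefix follow the validation example later in this guide, not an official specification, and the helper itself is illustrative:

```python
import hashlib
import hmac

def sign_payload(shared_secret: bytes, raw_body: bytes) -> str:
    """Compute the HMAC-SHA256 signature the sender would attach to an
    outbound webhook delivery (sketch; 'sha256=' prefix is an assumption
    matching the validation example later in this guide)."""
    digest = hmac.new(shared_secret, raw_body, hashlib.sha256).hexdigest()
    return "sha256=" + digest

# The signature is computed over the exact raw bytes that go on the wire;
# the receiver recomputes the same HMAC and compares in constant time.
payload = b'{"eventType": "USER_LOGIN_FAILURE"}'
signature = sign_payload(b"example-shared-secret", payload)
```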
┌──────────────────────────────────────────────────────────────┐
│                      SECRET SERVER PAM                       │
├──────────────┬─────────────────┬────────────┬────────────────┤
│ Auth Events  │ Secret Access   │ Rotation   │ Heartbeat      │
├──────────────┴─────────────────┴────────────┴────────────────┤
│              Internal Event Bus (Engine Broker)              │
├──────────────────────────────────────────────────────────────┤
│        Webhook Engine · Filter Rules · Payload Builder       │
├──────────────────────────────────────────────────────────────┤
│     HMAC-SHA256 Sign · TLS 1.3 · Retry w/ Exponential BO     │
└──────────┬──────────────────────┬──────────────┬─────────────┘
           ↓                      ↓              ↓
     [Splunk HEC]          [Sentinel DCR]  [QRadar Syslog]
           ↓                      ↓              ↓
   SIEM Correlation · Detection Rules · Compliance Dashboards
Secret Server supports both outbound HTTPS webhooks (structured JSON) and legacy CEF-over-Syslog. This guide covers the modern webhook path, which delivers richer structured payloads, supports per-event filtering, and allows payload customization. Syslog/CEF is documented separately in the legacy integration module.
Event Types Available for SIEM Forwarding
Secret Server categorizes forwardable events into four primary groups: authentication events, secret access events, rotation events, and heartbeat events. Each event type carries a severity classification that maps to your SIEM's alert priority model.
Configuring Secret Server Webhooks
Webhooks are configured under Admin → Event Subscriptions → Add Subscription. Each subscription defines a target endpoint URL, the event types to forward, optional filter rules, and the authentication method for the receiving SIEM endpoint.
1. Enable the Event Pipeline. Go to Admin → Configuration → Event Pipeline and toggle Enable Event Pipeline to ON. Confirm the pipeline service is running on all Distributed Engine nodes. This enables the broker that routes internal events to registered webhook subscriptions.
2. Create the subscription. Go to Admin → Event Subscriptions → New. Name the subscription descriptively (e.g., SIEM-Splunk-SecOps-Prod). Select subscription type Webhook (HTTP POST). Enter the SIEM ingest endpoint URL — this must be HTTPS with a valid TLS certificate.
3. Select event types. Choose the events to forward, such as USER_LOGIN_FAILURE, USER_LOCKOUT, SECRET_VIEW, SECRET_CHECKOUT, HEARTBEAT_FAILURE, SECRET_ROTATION_FAILURE, EXPORT_SECRET, and ROLE_ASSIGNMENT_CHANGE. Use separate subscriptions for different SIEM alert tiers.
4. Add filter rules (optional). Examples: forward only events from the Production folder; only forward login failures with consecutive count ≥ 3; exclude service accounts matching svc-* from view events. Filters use a simple expression language documented in the admin guide.
5. Configure retry handling. Failed deliveries are retried with exponential backoff; exhausted events are written to %SS_DATA%\EventPipeline\DeadLetter\ for manual replay. Set a Delivery Timeout of 10 seconds per attempt.
6. Test and activate. Send a WEBHOOK_TEST payload to the configured endpoint and confirm the SIEM receives and indexes the test event. Click Activate Subscription. Monitor the Event Pipeline health dashboard at Admin → Event Subscriptions → Status for the first 30 minutes after activation.

Payload Format & Customisation
All webhook payloads are delivered as HTTP POST requests with Content-Type: application/json. Every payload shares a common envelope structure containing metadata fields, with an event-specific body nested within. The payload builder supports Liquid template syntax for customisation — you can reshape the JSON to match your SIEM's expected ingestion format.
Standard Payload Envelope
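The Liquid templates throughout this guide reference a consistent set of envelope fields. Assembled from those references, a representative envelope looks roughly like the following. This is a reconstruction for orientation, not an official schema, and all values are hypothetical:

```json
{
  "eventId": "1b2d3f40-5a6b-4c7d-8e9f-0a1b2c3d4e5f",
  "eventType": "SECRET_VIEW",
  "severity": "INFORMATIONAL",
  "timestamp": "2024-01-01T12:00:00Z",
  "ssInstance": "ss-prod-01",
  "version": "11.x",
  "actor": {
    "username": "jdoe",
    "domain": "CORP",
    "ipAddress": "10.0.0.5"
  },
  "secret": {
    "id": 1234,
    "name": "prod-db-sa",
    "folderPath": "\\Production\\Databases",
    "template": "SQL Server Account"
  },
  "outcome": {
    "result": "SUCCESS",
    "reason": ""
  }
}
```

The `actor`, `secret`, and `outcome` objects are exactly the paths the Liquid templates below dereference (e.g. `event.actor.ipAddress`, `event.secret.folderPath`, `event.outcome.result`).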
Liquid Template Payload Customisation
Use Liquid syntax in the Custom Payload Template field of each event subscription to reshape the JSON to your SIEM's required format. This is especially important for Splunk (which requires a specific HEC envelope) and QRadar (which expects a flat key-value structure).
{
"time": "{{ event.timestamp | date: '%s' }}",
"host": "{{ event.ssInstance }}",
"source": "secret_server",
"sourcetype": "ss:pam:event",
"index": "pam_security",
"event": {
"event_id": "{{ event.eventId }}",
"event_type": "{{ event.eventType }}",
"severity": "{{ event.severity }}",
"actor": "{{ event.actor.domain }}\\{{ event.actor.username }}",
"src_ip": "{{ event.actor.ipAddress }}",
"secret_name": "{{ event.secret.name | default: '' }}",
"folder_path": "{{ event.secret.folderPath | default: '' }}",
"outcome": "{% if event.outcome.result %}{{ event.outcome.result }}{% endif %}"
}
}
Always index the eventId UUID in your SIEM and use it as a deduplication key. Secret Server may re-deliver events after network failures or retry exhaustion. SIEMs that process the same eventId twice should discard the duplicate rather than creating duplicate alerts or inflating correlation counters.
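A minimal sketch of that deduplication guard, assuming an in-memory TTL cache. A production SIEM would typically rely on its native dedup or a persistent store; the class here is illustrative only:

```python
import time

class EventDeduplicator:
    """Track recently seen eventId values and flag re-deliveries."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._seen: dict = {}  # eventId -> first-seen monotonic time

    def is_duplicate(self, event_id: str) -> bool:
        now = time.monotonic()
        # Evict expired entries so memory stays bounded
        self._seen = {eid: t for eid, t in self._seen.items()
                      if now - t < self.ttl}
        if event_id in self._seen:
            return True  # re-delivery: caller should discard the event
        self._seen[event_id] = now
        return False

dedup = EventDeduplicator()
first = dedup.is_duplicate("evt-001")   # first delivery -> False
replay = dedup.is_duplicate("evt-001")  # re-delivery -> True
```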
Webhook Endpoint Authentication
Secret Server supports several mechanisms for authenticating outbound webhook deliveries to your SIEM endpoint. Select the method appropriate to your SIEM's ingestion API capabilities and your organisation's security policy.
- HMAC-SHA256 Signature: Secret Server signs each payload with a shared secret and sends the digest in the X-SS-Signature header. Your SIEM endpoint validates the signature before processing.
- Bearer Token: Secret Server attaches a static Authorization: Bearer <token> header. Used for SIEM ingestion APIs that require a pre-shared API key (Splunk HEC token, QRadar auth token).

HMAC-SHA256 Signature Validation (Recommended)
import hmac, hashlib, os
from flask import Flask, request, abort

app = Flask(__name__)

# Store the shared secret in an env variable — never hardcode
WEBHOOK_SECRET = os.environ["SS_WEBHOOK_SECRET"].encode()

@app.route("/ss-events", methods=["POST"])
def receive_event():
    sig_header = request.headers.get("X-SS-Signature", "")
    body = request.get_data()  # raw bytes

    # Compute expected HMAC-SHA256 over raw body
    expected = "sha256=" + hmac.new(
        WEBHOOK_SECRET, body, hashlib.sha256
    ).hexdigest()

    # Constant-time comparison prevents timing attacks
    if not hmac.compare_digest(expected, sig_header):
        abort(401)  # reject invalid signature

    event = request.get_json()
    process_event(event)  # forward to SIEM index
    return "", 200

def process_event(event):
    # Route to appropriate SIEM index by severity
    severity = event.get("severity", "INFO")
    index = "pam_critical" if severity == "CRITICAL" else "pam_security"
    # ... forward to SIEM ingestion API
Secret Server's webhook engine waits for an HTTP 200 response before marking a delivery successful. If your SIEM endpoint takes more than 10 seconds to respond (the default timeout), Secret Server will mark the delivery as failed and retry with backoff. Your endpoint should queue the event for async processing and return 200 OK immediately — do not block on SIEM indexing within the request handler.
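One common way to meet the timeout is to enqueue on receipt and index asynchronously. The sketch below uses a background worker thread; the queue and helper names are illustrative, not part of any SIEM SDK:

```python
import queue
import threading

event_queue = queue.Queue()

def index_into_siem(event: dict) -> None:
    # Placeholder for the (potentially slow) SIEM ingestion call
    pass

def siem_worker() -> None:
    """Drain the queue and index events off the request path."""
    while True:
        event = event_queue.get()
        if event is None:  # shutdown sentinel
            break
        index_into_siem(event)
        event_queue.task_done()

def handle_webhook(event: dict) -> int:
    """Request-handler body: enqueue and answer 200 immediately,
    well inside the sender's 10-second delivery timeout."""
    event_queue.put(event)
    return 200

worker = threading.Thread(target=siem_worker, daemon=True)
worker.start()
```

The HTTP handler never blocks on indexing; if the SIEM slows down, events accumulate in the queue instead of triggering webhook retries.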
SIEM Integration Configurations
Complete, production-ready configuration examples for the three most common enterprise SIEM platforms, covering endpoint configuration, source type setup, and example detection rules.
Step 1: Enable HEC in Splunk (Settings → Data Inputs → HTTP Event Collector → New Token). Create an index named pam_security and assign it to the token. Copy the HEC token GUID.
Step 2: In Secret Server, create a webhook subscription with endpoint URL https://<splunk-host>:8088/services/collector/event and add the header Authorization: Splunk <HEC_TOKEN>.
Step 3: Use the Liquid custom payload template below to wrap events in HEC format:
{
"time": "{{ event.timestamp | date: '%s' }}",
"host": "{{ event.ssInstance }}",
"source": "secret_server:webhook",
"sourcetype": "ss:pam:event",
"index": "pam_security",
"event": {
"event_id": "{{ event.eventId }}",
"event_type": "{{ event.eventType }}",
"severity": "{{ event.severity }}",
"ss_instance": "{{ event.ssInstance }}",
"actor_user": "{{ event.actor.username }}",
"actor_domain": "{{ event.actor.domain }}",
"actor_ip": "{{ event.actor.ipAddress }}",
"secret_id": "{{ event.secret.id | default: '' }}",
"secret_name": "{{ event.secret.name | default: '' }}",
"folder_path": "{{ event.secret.folderPath | default: '' }}",
"template": "{{ event.secret.template | default: '' }}",
"outcome": "{{ event.outcome.result | default: '' }}",
"raw_severity": "{{ event.severity }}"
}
}
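Before activating the subscription, it can help to verify the endpoint and header from Step 2 in isolation. The hypothetical helper below builds the HEC request without sending it, so the URL and Authorization header can be checked locally:

```python
import json
import urllib.request

def build_hec_request(splunk_host: str, hec_token: str,
                      event: dict) -> urllib.request.Request:
    """Build (but do not send) a Splunk HEC request matching the
    subscription settings above. Illustrative helper, not an SDK call."""
    url = f"https://{splunk_host}:8088/services/collector/event"
    body = json.dumps(event).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Splunk {hec_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request(
    "splunk.example.com",
    "00000000-0000-0000-0000-000000000000",  # placeholder HEC token
    {"event": {"event_type": "WEBHOOK_TEST"}},
)
```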
Detection: Privileged Account Brute Force (SS → Splunk)

index=pam_security sourcetype="ss:pam:event" event_type="USER_LOGIN_FAILURE"
| bin _time span=5m
| stats count AS failure_count, dc(actor_ip) AS unique_ips,
        values(actor_ip) AS source_ips
  BY actor_user, actor_domain, _time
| where failure_count >= 5
| eval risk_score = failure_count * unique_ips
| sort -risk_score
| table _time actor_domain actor_user failure_count unique_ips source_ips risk_score

Save this search as a scheduled alert that triggers when risk_score > 15, with
action.email.subject = "ALERT: Privileged Account Brute Force — $result.actor_user$"
Delinea publishes a free Splunk Technology Add-On (TA) on Splunkbase that includes pre-built field extractions, data models, and dashboard templates for Secret Server events. Install the TA first — it auto-configures the ss:pam:event sourcetype and CIM mappings, so your detection searches work with Splunk ES out of the box.
Step 1: Create a Custom Log Analytics table named SecretServerEvents_CL in your Log Analytics Workspace. Define the schema using the field list from Section 04.
Step 2: Create a Data Collection Rule (DCR) with a custom JSON endpoint — copy the DCR Ingestion URL and the Immutable ID from the Azure Portal.
Step 3: Configure the Secret Server webhook to post to the DCR endpoint URL using an Azure AD Bearer token obtained via the OAuth 2.0 client credentials flow.
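The token acquisition in Step 3 is a standard OAuth 2.0 client-credentials call against the Azure AD v2 token endpoint with the `https://monitor.azure.com/.default` scope. As a language-neutral sketch, the helper below just assembles the request; the function itself is illustrative:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str,
                        client_secret: str) -> tuple:
    """Build the OAuth 2.0 client-credentials token request body for
    Azure Monitor ingestion (illustrative helper; values are placeholders)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://monitor.azure.com/.default",
    }).encode()
    return url, body

# POSTing this form body to the URL returns a JSON document whose
# access_token field goes into the webhook's Authorization: Bearer header.
token_url, token_body = build_token_request(
    "contoso-tenant-id", "app-client-id", "app-client-secret")
```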
# Create the custom table schema in Log Analytics
$tableSchema = @{
    properties = @{
        schema = @{
            name    = "SecretServerEvents_CL"
            columns = @(
                @{ name = "EventId";       type = "string" },
                @{ name = "EventType";     type = "string" },
                @{ name = "Severity";      type = "string" },
                @{ name = "TimeGenerated"; type = "datetime" },
                @{ name = "ActorUser";     type = "string" },
                @{ name = "ActorDomain";   type = "string" },
                @{ name = "ActorIp";       type = "string" },
                @{ name = "SecretName";    type = "string" },
                @{ name = "FolderPath";    type = "string" },
                @{ name = "Outcome";       type = "string" },
                @{ name = "SSInstance";    type = "string" }
            )
        }
    }
}

# Obtain AAD token for DCR ingestion
$token = (Get-AzAccessToken -ResourceUrl "https://monitor.azure.com/").Token

# Post a test event to the DCR endpoint
$body = @([ordered]@{
    EventId       = [guid]::NewGuid().ToString()
    EventType     = "WEBHOOK_TEST"
    Severity      = "INFORMATIONAL"
    TimeGenerated = (Get-Date -Format "o")
    ActorUser     = "test-actor"
}) | ConvertTo-Json -AsArray

Invoke-RestMethod -Uri "$DCR_ENDPOINT" `
    -Method POST -Body $body `
    -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $token" }
// Analytic Rule: Consecutive Heartbeat Failures → Possible Out-of-Band Password Change
SecretServerEvents_CL
| where EventType == "HEARTBEAT_FAILURE"
| where TimeGenerated > ago(1h)
| summarize FailureCount = count(),
            AffectedSecrets = make_set(SecretName),
            FolderPaths = make_set(FolderPath),
            FirstFail = min(TimeGenerated),
            LastFail = max(TimeGenerated)
  by SSInstance, bin(TimeGenerated, 15m)
| where FailureCount >= 3
| extend AlertSeverity = iff(FailureCount >= 10, "High", "Medium")
| extend AlertName = strcat("SS Heartbeat Failures: ", tostring(FailureCount), " in 15m"),
         Entities = AffectedSecrets
| project TimeGenerated, AlertSeverity, AlertName, FailureCount, AffectedSecrets, FolderPaths
Option A (Recommended): Use the QRadar Universal REST API Protocol — configure a log source pointing at a lightweight middleware that pulls events from a Secret Server-populated message queue (SNS or RabbitMQ). This provides reliable, ordered delivery with native QRadar log source management.
Option B: Deploy a syslog-JSON bridge server (NGINX + Logstash) that receives Secret Server webhooks, flattens the JSON, and forwards as syslog UDP/TCP to QRadar's log source listener on port 514. Use the custom payload template below to produce QRadar-compatible output.
{
"deviceVendor": "Delinea",
"deviceProduct": "SecretServer",
"deviceVersion": "{{ event.version }}",
"eventId": "{{ event.eventId }}",
"eventClassId": "{{ event.eventType }}",
"severity": "{{ event.severity }}",
"startTime": "{{ event.timestamp }}",
"sourceUserName": "{{ event.actor.domain }}\\{{ event.actor.username }}",
"sourceAddress": "{{ event.actor.ipAddress }}",
"destinationHost": "{{ event.ssInstance }}",
"cs1Label": "SecretName",
"cs1": "{{ event.secret.name | default: '' }}",
"cs2Label": "FolderPath",
"cs2": "{{ event.secret.folderPath | default: '' }}",
"cs3Label": "Template",
"cs3": "{{ event.secret.template | default: '' }}",
"outcome": "{{ event.outcome.result | default: '' }}",
"reason": "{{ event.outcome.reason | default: '' }}"
}
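The flattening step performed by the Option B bridge can be sketched as a small recursive helper. This is illustrative only, not part of any shipped bridge:

```python
def flatten(obj: dict, parent: str = "", sep: str = ".") -> dict:
    """Flatten nested JSON into single-level dotted keys suitable for
    key=value syslog forwarding to QRadar."""
    flat = {}
    for key, value in obj.items():
        path = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, path, sep))  # recurse into objects
        else:
            flat[path] = value
    return flat

nested = {"actor": {"username": "jdoe", "ipAddress": "10.0.0.5"},
          "eventType": "SECRET_VIEW"}
print(flatten(nested))
# {'actor.username': 'jdoe', 'actor.ipAddress': '10.0.0.5', 'eventType': 'SECRET_VIEW'}
```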
-- QRadar Ariel: Detect Bulk Secret Export — High Priority Offence Rule
SELECT
    "startTime" AS event_time,
    "sourceUserName" AS actor,
    "sourceAddress" AS src_ip,
    "cs1" AS secret_name,
    "cs2" AS folder_path,
    "severity"
FROM events
WHERE "eventClassId" = 'EXPORT_SECRET'
  AND "deviceProduct" = 'SecretServer'
  AND DATEFORMAT("startTime", 'yyyy-MM-dd HH:mm:ss') > DATEADD('MINUTE', -30, NOW())
ORDER BY event_time DESC
-- Trigger offence if count >= 1 (any export is alertable)
-- Map to: Category 5012 (Data Exfiltration) · Severity HIGH
IBM provides an official Delinea Secret Server DSM on the IBM Security App Exchange. Installing the DSM auto-populates QRadar's log source type, event category mappings, and QID resolutions for all standard Secret Server event codes. This eliminates the need to manually map events in the Custom Rules Engine (CRE) and should be installed before configuring the log source.
Knowledge Check
Five questions covering webhook configuration, event handling, and SIEM integration concepts.
Q1. A heartbeat failure event fires repeatedly for the same secret over 6 hours. What is the most likely SOC interpretation and the correct escalation action?

Q2. Why must the SIEM webhook endpoint use hmac.compare_digest() rather than a standard string equality check (==) when validating HMAC-SHA256 signatures?
Answer: Standard equality (==) short-circuits on the first differing character, making the comparison time proportional to how many leading bytes match. An attacker can exploit this timing side-channel to craft valid signatures iteratively. hmac.compare_digest() runs in constant time regardless of where the strings diverge, preventing this timing attack.

Q3. Your SIEM webhook endpoint takes 12 seconds to process and index an event before returning HTTP 200. What will Secret Server's webhook engine do after 10 seconds?

Q4. Why is the eventId UUID field critical for SIEM processing, and what should happen when the SIEM receives an event with a duplicate eventId?
Answer: SIEMs should index eventId and deduplicate on it — processing the same event ID twice would inflate alert counts, distort correlation timelines, and generate false-positive detections.

Q5. An analyst notices a SECRET_ROTATION_FAILURE event immediately followed by a HEARTBEAT_FAILURE event for the same secret. What compound detection rule should fire?
Deployment Checklist
Complete all items before enabling the webhook pipeline in production.
Secret Server Configuration
- Event Pipeline service enabled and running on all Distributed Engine nodes
- Event subscription created with HTTPS endpoint (valid TLS cert — no self-signed)
- Event types selected covering minimum SOC set: auth failures, secret views, rotations, heartbeat failures
- HMAC-SHA256 signing enabled; shared secret stored in vault (not hardcoded)
- Retry policy configured: 5 retries, exponential backoff, dead letter queue enabled
- Test event delivered successfully and confirmed indexed in SIEM
- Delivery timeout set to 10 seconds; SIEM endpoint confirmed to respond within 2 seconds
SIEM Endpoint Security
- HMAC signature validated on every inbound event (constant-time comparison)
- Endpoint returns HTTP 200 immediately; event processing is asynchronous
- eventId deduplication logic implemented and tested with replay scenarios
- Ingestion endpoint access-controlled to Secret Server engine IP ranges only (WAF/firewall)
- TLS 1.2+ enforced on SIEM ingestion endpoint; weak ciphers disabled
Detection Rules & Alerting
- Brute-force detection rule active: ≥5 login failures within 5 minutes triggers HIGH alert
- Heartbeat failure rule active: ≥3 consecutive failures on same secret triggers MEDIUM alert
- Secret bulk export rule active: any EXPORT_SECRET event triggers immediate HIGH alert
- Compound detection rule: ROTATION_FAILURE + HEARTBEAT_FAILURE within 1 hour triggers HIGH escalation
- SIEM alert routing tested end-to-end to on-call SOC queue
- Dead letter queue monitored with alert if queue depth exceeds 10 undelivered events
- Runbook written for each alert type covering triage steps and escalation path