Kubernetes.audit_logs: add support for cloud providers #14554
base: main
Conversation
c66a453 to c77254f
Pinging @elastic/security-service-integrations (Team:Security-Service Integrations)
🚀 Benchmarks report. To see the full report, comment with:
@@ -1,8 +1,16 @@
 # audit-logs

-audit-logs integration collects and parses Kubernetes audit logs.
+Audit logs integration collects and parses Kubernetes audit logs.
Suggested change:
-Audit logs integration collects and parses Kubernetes audit logs.
+Audit-logs integration collects and parses Kubernetes audit logs.
@@ -0,0 +1,103 @@
+{{#unless log_group_name}}
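For context, a hypothetical sketch of what an aws-cloudwatch stream template (`.yml.hbs`) of this kind typically looks like; the variable names below are assumptions, not the actual contents of the new file:

```yaml
{{!-- Hypothetical sketch: only render the options the user actually set. --}}
{{#unless log_group_name}}
{{#if log_group_name_prefix}}
log_group_name_prefix: {{log_group_name_prefix}}
{{/if}}
{{/unless}}
{{#if log_group_name}}
log_group_name: {{log_group_name}}
{{/if}}
{{#if region_name}}
region_name: {{region_name}}
{{/if}}
```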
Wondering if those files should be placed under the relevant aws-cloudwatch integration (and similarly under azure and gcp). I am thinking it would be difficult for users who start from a CSP integration to figure out that we have audit logs support and to check the Kubernetes integration.
@zmoog wdyt on this?
After some discussion (see #5799 (comment) and the follow-up comments), that option was on the table, but it has some drawbacks, mainly in terms of maintenance.
Happy to hear your thoughts on this point.
@chemamartinez - This is amazing, and thank you for putting this PR together.
@gizas - Definitely a great question. My two cents:
For AKS at least, most Azure data sources are tied to the single Azure integration. Entra ID, Activity, Diagnostic, Platform, etc. logs all live in the Azure integration as separate data streams, rather than having their own integrations. However, Azure may be unique in this respect, since all logs, regardless of service, are forwarded to an Event Hub, which puts them in a centralized location for us to ingest from. From a detection rule perspective, if we wanted to correlate activity between - for instance - Entra ID sign-ins and K8s, two separate integration installations would be required instead of one. However, for AWS, most rules are written on CloudTrail audit logs, so we would still require a separate integration, CloudWatch, for this.
On the flip side, keeping K8s as a separate integration may make more sense: it isolates the data streams (both local and CSP-based), as is being done here, and points users with K8s requirements to a single integration where they pick their provider. We know that K8s is a very popular integration as-is, so it may be good to roll this out on top of what is already heavily adopted.
   - append:
       field: error.message
-      value: '{{{ _ingest.on_failure_message }}}'
+      value: >
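The hunk above swaps the single mustache value for a folded scalar. For reference, a sketch of the richer failure message that pattern usually expands to in these packages; the exact wording here is an assumption, not the PR's text:

```yaml
on_failure:
  - append:
      field: error.message
      # Assumed wording; combines the standard _ingest failure metadata fields.
      value: >
        Processor '{{{ _ingest.on_failure_processor_type }}}'
        with tag '{{{ _ingest.on_failure_processor_tag }}}'
        in pipeline '{{{ _ingest.on_failure_pipeline }}}'
        failed with message '{{{ _ingest.on_failure_message }}}'
```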
@@ -1,6 +1,19 @@
 ---
 description: Pipeline for processing Kubernetes audit logs.
 processors:
+  - rename:
+      field: message
I think we should do it with set, like https://github.com/elastic/integrations/blob/main/packages/aws/data_stream/cloudwatch_logs/elasticsearch/ingest_pipeline/default.yml#L9, in order not to lose the message field, right?
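A minimal sketch of that set-based approach, assuming the same shape as the linked cloudwatch_logs pipeline (not the exact processor from this PR):

```yaml
processors:
  - set:
      field: event.original
      copy_from: message
      # Keep the raw document in event.original without dropping message.
      ignore_empty_value: true
```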
What would be the default criteria for this in Obs integrations? We usually keep the original raw message in `event.original`, and `message` gets removed so it is not duplicated.
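A minimal sketch of the convention described here, i.e. moving the raw message into `event.original` so it is stored only once (an assumption of the usual shape, not the exact processor in this PR):

```yaml
processors:
  - rename:
      field: message
      target_field: event.original
      # Avoid failing when a document arrives without a message field.
      ignore_missing: true
```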
@@ -6,10 +6,322 @@ elasticsearch:
   mappings:
     dynamic: false
 streams:
+  - input: aws-cloudwatch
Similar to the above comment: https://github.com/elastic/integrations/pull/14554/files#r2218628097
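For readers following the thread, a hypothetical sketch of how a single data stream manifest can expose one stream per collection path, which is what the hunk above starts to do; the titles, template paths, and inputs other than aws-cloudwatch are assumptions:

```yaml
streams:
  - input: filestream            # self-managed clusters: read audit log files
    title: Kubernetes audit logs
    template_path: filestream.yml.hbs
  - input: aws-cloudwatch        # EKS: audit logs exported to CloudWatch
    title: Kubernetes audit logs from AWS CloudWatch
    template_path: aws-cloudwatch.yml.hbs
```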
💚 Build Succeeded
Proposed commit message
Extend the Kubernetes audit_logs data stream to support collecting audit logs from managed Kubernetes clusters in the major cloud providers (AWS, Azure, and GCP).
Checklist
- I have added an entry to my package's changelog.yml file.

Related issues
Screenshots