Kubernetes.audit_logs: add support for cloud providers #14554
Conversation
Force-pushed from c66a453 to c77254f
Pinging @elastic/security-service-integrations (Team:Security-Service Integrations)
🚀 Benchmarks report
@@ -1,8 +1,16 @@
 # audit-logs

-audit-logs integration collects and parses Kubernetes audit logs.
+Audit logs integration collects and parses Kubernetes audit logs.
Suggested change:
-Audit logs integration collects and parses Kubernetes audit logs.
+Audit-logs integration collects and parses Kubernetes audit logs.
@@ -0,0 +1,103 @@
+{{#unless log_group_name}}
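For context, the agent stream template this hunk opens would typically switch between a log group name and an ARN. A minimal handlebars sketch, where `log_group_arn` and `region_name` are assumed variable names (only `log_group_name` appears in the diff above):

```yaml
{{!-- Hypothetical aws-cloudwatch stream template; not copied from the PR. --}}
{{#unless log_group_name}}
log_group_arn: {{log_group_arn}}
{{/unless}}
{{#if log_group_name}}
log_group_name: {{log_group_name}}
region_name: {{region_name}}
{{/if}}
```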
Wondering if those files should be placed under the relevant aws-cloudwatch integration (and similarly under azure and gcp). I am thinking it would be difficult for our users to figure out that we have audit logs support and to check on k8s if they initially start from a CSP integration.
@zmoog wdyt on this?
After some discussions (see #5799 (comment) and further comments), that option was on the table, but it has some drawbacks, mainly in terms of maintenance.
Happy to hear your thoughts on this point.
@chemamartinez - This is amazing and thank you for putting in the PR.
@gizas - Definitely a great question. My two cents:
For AKS at least, most Azure data sources are tied to the single Azure integration. Entra ID, Activity, Diagnostic, Platform, etc. logs are all in the Azure integration as separate data streams rather than having their own integrations. However, Azure may be unique in this respect, since all logs, regardless of service, are forwarded to an Event Hub, putting them in a centralized location for us to ingest from. From a detection rule perspective, if we wanted to correlate activity between, for instance, Entra ID sign-ins and K8s, two separate integration installations are required instead of one. However, for AWS, most rules are written on CloudTrail audit logs, so we would still require a separate integration, CloudWatch, for this.
On the flip side, K8s as a separate integration may make more sense: it isolates the data streams (both local and CSP-based), as is being done here, and points users with K8s requirements to a single integration where they pick their provider. We know that K8s is a very popular integration as is, so it may be good to roll this out with what is already adopted heavily.
 - append:
     field: error.message
-    value: '{{{ _ingest.on_failure_message }}}'
+    value: >
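The folded scalar's body is truncated in this view; a sketch of the multi-line `on_failure` message pattern commonly used across integrations (the exact wording in the PR may differ):

```yaml
on_failure:
  - append:
      field: error.message
      # Folded scalar keeps the long failure message readable in the pipeline source.
      value: >-
        Processor '{{{ _ingest.on_failure_processor_type }}}' with tag
        '{{{ _ingest.on_failure_processor_tag }}}' in pipeline
        '{{{ _ingest.on_failure_pipeline }}}' failed with message
        '{{{ _ingest.on_failure_message }}}'
```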
@@ -1,6 +1,19 @@
---
description: Pipeline for processing Kubernetes audit logs.
processors:
  - rename:
      field: message
I think we should do it with set: https://github.com/elastic/integrations/blob/main/packages/aws/data_stream/cloudwatch_logs/elasticsearch/ingest_pipeline/default.yml#L9
In order not to lose the message field, right?
What would be the default criteria for this in Obs integrations? We usually keep the original raw message in `event.original`, and `message` gets removed so it is not duplicated.
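If the pipeline ends up following the CloudWatch approach linked above, a minimal sketch (processor options assumed, not taken from the PR) would preserve the raw event before parsing and then drop the duplicate:

```yaml
processors:
  # Keep the raw event in event.original unless it is already populated.
  - set:
      field: event.original
      copy_from: message
      ignore_empty_value: true
      if: ctx.event?.original == null
  # Remove message once the original has been preserved, to avoid duplication.
  - remove:
      field: message
      ignore_missing: true
      if: ctx.event?.original != null
```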
@@ -6,10 +6,322 @@ elasticsearch:
  mappings:
    dynamic: false
streams:
  - input: aws-cloudwatch
Similar to the comment above: https://github.com/elastic/integrations/pull/14554/files#r2218628097
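For readers unfamiliar with the manifest layout, a stream entry for the new input might look roughly like this (title, description, variables, and template file name are illustrative, not copied from the PR):

```yaml
streams:
  - input: aws-cloudwatch
    title: Kubernetes audit logs from AWS CloudWatch
    description: Collect EKS control plane audit logs from a CloudWatch log group.
    template_path: aws-cloudwatch.yml.hbs
    vars:
      - name: log_group_name
        type: text
        title: Log Group Name
        required: false
        show_user: true
```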
This looks good to me, thank you for the changes. I agree with Terrance that it's probably best to modify the K8s integration, which is already heavily used, rather than modify each individual CSP integration. This will also help us maintain a single K8s ruleset, and will make K8s audit log ingest more user friendly for everyone.
packages/kubernetes/data_stream/audit_logs/elasticsearch/ingest_pipeline/default.yml (outdated, resolved)
- append:
Do you think it's worth adding `cloud.*` fields depending on `input.type`?
Cloud metadata is not present in the events; do you mean using the `add_cloud_metadata` processor?
You are right. The metadata is not present in all inputs; I only see it in the CloudWatch input: https://github.com/elastic/beats/blob/main/x-pack/filebeat/input/awscloudwatch/processor.go#L62-L65
Eventhub defines some, but that cannot be used in `cloud.*` fields: https://github.com/elastic/beats/blob/main/x-pack/filebeat/input/azureeventhub/v2_input.go#L483-L495
We can set `cloud.provider` based on `ctx.input.type`.
```yaml
- set:
    field: cloud.provider
    value: aws
    if: ctx.input?.type == "aws-cloudwatch"
```
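The same pattern would presumably extend to the other inputs; a sketch, assuming the Azure Event Hub and GCP Pub/Sub input types:

```yaml
# Hypothetical companions to the aws-cloudwatch case above.
- set:
    field: cloud.provider
    value: azure
    if: ctx.input?.type == "azure-eventhub"
- set:
    field: cloud.provider
    value: gcp
    if: ctx.input?.type == "gcp-pubsub"
```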
packages/kubernetes/data_stream/audit_logs/_dev/test/pipeline/test-audit.log-expected.json (outdated, resolved)
LGTM. Thanks Chema.
💚 Build Succeeded
Package kubernetes - 1.81.0 containing this change is available at https://epr.elastic.co/package/kubernetes/1.81.0/
Proposed commit message
Extend the Kubernetes audit_logs data stream to support collecting audit logs from managed Kubernetes clusters in major cloud providers:
Checklist

- Added an entry to the changelog.yml file.

Related issues
Screenshots