`content/en/logs/guide/best-practices-for-log-management.md`
### Set up multiple indexes for log segmentation
Set up multiple indexes if you want to segment your logs for different retention periods or daily quotas, usage monitoring, and billing.
For example, if some logs only need to be retained for 7 days while others need to be retained for 30 days, use multiple indexes to separate the logs by the two retention periods.
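If you manage log configuration as code, the same segmentation can be scripted. The following is a minimal sketch, assuming the v1 Logs Indexes API, API and application keys in environment variables, and illustrative index names, filter queries, retention periods, and daily quota; verify the field names against the current Logs Indexes API reference before relying on them.

```python
# Sketch: create two indexes with different retention periods via the
# Datadog Logs Indexes API (v1). Index names, filter queries, retention
# values, and the daily quota below are illustrative assumptions.
import os
import requests

DD_SITE = os.environ.get("DD_SITE", "datadoghq.com")
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

indexes = [
    # Short-lived, high-volume logs: 7-day retention with a daily quota.
    {"name": "short-retention", "filter": {"query": "env:staging"},
     "num_retention_days": 7, "daily_limit": 10_000_000},
    # Logs that must be kept longer: 30-day retention, no daily quota.
    {"name": "long-retention", "filter": {"query": "env:production"},
     "num_retention_days": 30},
]

for index in indexes:
    resp = requests.post(
        f"https://api.{DD_SITE}/api/v1/logs/config/indexes",
        headers=HEADERS,
        json=index,
        timeout=10,
    )
    resp.raise_for_status()
    print(f"Created index {index['name']} with {index['num_retention_days']}-day retention")
```

Splitting by `env` here is only an example; any log query can be used as the index filter.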
If you want to store your logs for longer periods of time, set up [Log Archives][2] to send your logs to a storage-optimized system, such as Amazon S3, Azure Storage, or Google Cloud Storage. When you want to use Datadog to analyze those logs, use [Log Rehydration][3]™ to capture those logs back in Datadog. With multiple archives, you can both segment logs for compliance reasons and keep rehydration costs under control.
#### Set up max scan size to manage expensive rehydrations
Set a limit on the volume of logs that can be rehydrated at one time. When setting up an archive, you can define the maximum volume of log data that can be scanned for Rehydration. See [Define maximum scan size][4] for more information.
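If archives are also managed as code, the scan limit can be set at archive creation time. The sketch below assumes the v2 Logs Archives API with an S3 destination; the bucket, path, AWS account ID, role name, the 100 GB limit, and the `rehydration_max_scan_size_in_gb` attribute name are assumptions to verify against the Archives API reference.

```python
# Sketch: create an S3 log archive with a maximum rehydration scan size
# using the Datadog Logs Archives API (v2). Destination details and the
# 100 GB scan limit are illustrative assumptions.
import os
import requests

DD_SITE = os.environ.get("DD_SITE", "datadoghq.com")
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

archive = {
    "data": {
        "type": "archives",
        "attributes": {
            "name": "compliance-archive",
            "query": "env:production",            # which logs to archive
            "destination": {
                "type": "s3",
                "bucket": "example-log-archive",  # assumed bucket name
                "path": "/datadog/archives",
                "integration": {
                    "account_id": "123456789012",        # assumed AWS account
                    "role_name": "DatadogIntegrationRole",
                },
            },
            # Cap how much data a single rehydration can scan (in GB).
            "rehydration_max_scan_size_in_gb": 100,
        },
    }
}

resp = requests.post(
    f"https://api.{DD_SITE}/api/v2/logs/config/archives",
    headers=HEADERS,
    json=archive,
    timeout=10,
)
resp.raise_for_status()
print("Archive created:", resp.json()["data"]["id"])
```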
Create an anomaly detection monitor to alert on any unexpected log indexing spikes. For example, the notification message can include links such as:

```
1. [Check Log patterns for this service](https://app.datadoghq.com/logs/patterns?from_ts=1582549794112&live=true&to_ts=1582550694112&query=service%3A{{service.name}})
2. [Add an exclusion filter on the noisy pattern](https://app.datadoghq.com/logs/pipelines/indexes)
```
7. Click **Create**.
### Alert when an indexed log volume passes a specified threshold
3. Click **More...** and select **Create monitor**.
4. Add tags (for example, `host`, `service`, and so on) to the **group by** field.
5. Enter the **Alert threshold** for your use case. Optionally, enter a **Warning threshold**.
6. Add a notification title, for example:
```
Unexpected spike on indexed logs for service {{service.name}}
```
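The same kind of alert can also be created programmatically through the Monitors API. The sketch below takes a slightly different route from the Log Explorer steps above: it assumes a metric monitor on `datadog.estimated_usage.logs.ingested_events` grouped by `service`, and the 4-hour window and 1,000,000-log threshold are illustrative values to adapt.

```python
# Sketch: create a metric monitor that alerts when the indexed log count
# for any service exceeds a threshold. The window and threshold values
# are illustrative assumptions.
import os
import requests

DD_SITE = os.environ.get("DD_SITE", "datadoghq.com")
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

monitor = {
    "name": "Unexpected spike on indexed logs for service {{service.name}}",
    "type": "query alert",
    # datadog_is_excluded:false restricts the count to indexed (not excluded) logs.
    "query": (
        "sum(last_4h):sum:datadog.estimated_usage.logs.ingested_events"
        "{datadog_is_excluded:false} by {service}.as_count() > 1000000"
    ),
    "message": "Indexed log volume is unusually high for {{service.name}}. @your-team-handle",
    "options": {"thresholds": {"critical": 1000000}},
}

resp = requests.post(
    f"https://api.{DD_SITE}/api/v1/monitor",
    headers=HEADERS,
    json=monitor,
    timeout=10,
)
resp.raise_for_status()
print("Created monitor", resp.json()["id"])
```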
#### Alert on indexed logs volume since the beginning of the month
Use the `datadog.estimated_usage.logs.ingested_events` metric, filtered on `datadog_is_excluded:false` so that only indexed logs are counted, together with the [metric monitor cumulative window][28] to monitor the count since the beginning of the month.
{{< img src="logs/guide/monthly_usage_monitor.png" alt="Setup a monitor to alert for the count of indexed logs since the beginning of the month" style="width:70%;">}}
#### Alert on indexes reaching their daily quota
[Set up a daily quota][16] on indexes to prevent indexing more than a given number of logs per day. If an index has a daily quota, Datadog recommends that you set the [monitor that notifies on that index's volume](#alert-when-an-indexed-log-volume-passes-a-specified-threshold) to alert when 80% of this quota is reached within the past 24 hours.
An event is generated when the daily quota is reached. These events have the `datadog_index` tag, which includes the index name. After this event has been generated, you can [create a facet][17] on the `datadog_index` tag so that you can use `datadog_index` in the `group by` step when setting up a multi-alert monitor.
To set up a monitor to alert when the daily quota is reached for an index (an API-based sketch follows these steps):
1. Navigate to [Monitors > New Monitor][13] and click **Event**.
2. Enter: `source:datadog datadog_index:* "daily quota reached"` in the **Define the search query** section. Include `datadog_index:*` to ensure that only index-related events are selected.
3. In the **Count of** field, add `datadog_index` to group by index. This updates the query to read `Show Count of * by datadog_index (datadog_index)`.
4. For **Evaluate the query over**, select **current day**. For **Starting at**, select the time when indexes reset. This keeps the monitor in alert status until the quota resets. This is an example of what the search query looks like when defined in Datadog:

   {{< img src="logs/guide/daily_quota_notification_search_query.png" alt="The Datadog Alert on Index Quota Reached Search Query configuration" style="width:100%;">}}
5. In the **Set alert conditions** section, select `above or equal to` and enter `1` for the **Alert threshold**.
6. Add a notification title and message in the **Configure notifications and automations** section. The **Multi Alert** button is automatically selected because the monitor is grouped by `datadog_index(datadog_index)`.
7. Click **Save**.
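For teams that manage monitors as code, the steps above can be approximated with an event monitor created through the Monitors API. This is a sketch under stated assumptions: the `event-v2 alert` type and `events(...)` query syntax should be verified against the current Monitors API reference, and a rolling 24-hour window is used here in place of the UI's **current day** cumulative window.

```python
# Sketch: event monitor that triggers when a "daily quota reached" event is
# emitted for any index. The query syntax and the rolling 1-day window are
# assumptions; the UI steps above use a "current day" window instead.
import os
import requests

DD_SITE = os.environ.get("DD_SITE", "datadoghq.com")
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

monitor = {
    "name": "Log index daily quota reached",
    "type": "event-v2 alert",
    # Grouping by datadog_index makes this a multi-alert monitor, one alert per index.
    "query": (
        'events("source:datadog datadog_index:* \\"daily quota reached\\"")'
        '.rollup("count").by("datadog_index").last("1d") >= 1'
    ),
    "message": "An index reached its daily quota. Check the datadog_index tag on the triggering event. @your-team-handle",
    "options": {"thresholds": {"critical": 1}},
}

resp = requests.post(
    f"https://api.{DD_SITE}/api/v1/monitor",
    headers=HEADERS,
    json=monitor,
    timeout=10,
)
resp.raise_for_status()
print("Created monitor", resp.json()["id"])
```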
### Enable Sensitive Data Scanner for Personally Identifiable Information (PII) detection
If you want to prevent data leaks and limit non-compliance risks, use Sensitive Data Scanner to identify, tag, and optionally redact or hash sensitive data. For example, you can scan for credit card numbers, bank routing numbers, and API keys in your logs, APM spans, and RUM events. See [Sensitive Data Scanner][23] for how to set up scanning rules that determine what data to scan.
**Note**: [Sensitive Data Scanner][24] is a separate billable product.