pipeline/outputs/azure.md (1 addition, 1 deletion)

```diff
@@ -20,6 +20,7 @@ To get more details about how to setup Azure Log Analytics, please refer to the
 | Log_Type_Key | If included, the value for this key will be looked up in the record and, if present, will overwrite the `log_type`. If not found, the `log_type` value will be used. ||
 | Time\_Key | Optional parameter to specify the key name where the timestamp will be stored. |@timestamp|
 | Time\_Generated | If enabled, the HTTP request header 'time-generated-field' will be included so Azure can override the timestamp with the key specified by the 'time_key' option. | off |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|

 ## Getting Started

@@ -61,4 +62,3 @@ Another example using the `Log_Type_Key` with [record-accessor](https://docs.flu
```
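For context, a minimal classic-mode sketch of the property added above. All values are placeholders, and `Workers 2` simply overrides the `0` default:

```
[OUTPUT]
    # Placeholder workspace credentials; replace with real values
    Name        azure
    Match       *
    Customer_ID my-workspace-id
    Shared_Key  my-shared-key
    Workers     2
```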
pipeline/outputs/azure_blob.md (1 addition, 1 deletion)

```diff
@@ -31,6 +31,7 @@ We expose different configuration properties. The following table lists all the
 | emulator\_mode | If you want to send data to an Azure emulator service like [Azurite](https://github.com/Azure/Azurite), enable this option so the plugin will format the requests to the expected format. | off |
 | endpoint | If you are using an emulator, this option allows you to specify the absolute HTTP address of that service, e.g. [http://127.0.0.1:10000](http://127.0.0.1:10000). ||
 | tls | Enable or disable TLS encryption. Note that the Azure service requires this to be turned on. | off |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|

 ## Getting Started

@@ -128,4 +129,3 @@ Azurite Queue service is successfully listening at http://127.0.0.1:10001
```
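A similar sketch for this plugin; the account, key, and container values are placeholders, and the options besides `workers` follow the plugin's documented names:

```
[OUTPUT]
    # Placeholder account and container; replace with real values
    name                  azure_blob
    match                 *
    account_name          myaccount
    shared_key            my-shared-key
    container_name        logs
    auto_create_container on
    workers               2
```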
pipeline/outputs/bigquery.md (1 addition, 1 deletion)

```diff
@@ -59,6 +59,7 @@ You must configure workload identity federation in GCP before using it with Flue
 | pool\_id | GCP workload identity pool where the identity provider was created. Used to construct the full resource name of the identity provider. ||
 | provider\_id | GCP workload identity provider. Used to construct the full resource name of the identity provider. Currently only AWS accounts are supported. ||
 | google\_service\_account | Email address of the Google service account to impersonate. The workload identity provider must have permissions to impersonate this service account, and the service account must have permissions to access Google BigQuery resources (e.g. `write` access to tables). ||
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|

 See Google's [official documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll) for further details.

@@ -77,4 +78,3 @@ If you are using a _Google Cloud Credentials File_, the following configuration
```
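A sketch showing `workers` alongside the plugin's standard identifiers; the project, dataset, and table ids are placeholders:

```
[OUTPUT]
    # Placeholder GCP identifiers; replace with real values
    Name       bigquery
    Match      *
    project_id my-gcp-project
    dataset_id my_dataset
    table_id   my_table
    workers    2
```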
pipeline/outputs/chronicle.md (1 addition)

```diff
@@ -34,6 +34,7 @@ Fluent Bit's Chronicle output plugin uses a JSON credentials file for authentica
 | log\_type | The log type to parse logs as. Google Chronicle supports parsing for [specific log types only](https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers). ||
 | region | The GCP region in which to store security logs. Currently, there are several supported regions: `US`, `EU`, `UK`, `ASIA`. Blank is handled as `US`. ||
 | log\_key | By default, the whole log record will be sent to Google Chronicle. If you specify a key name with this option, then only the value of that key will be sent to Google Chronicle. ||
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|

 See Google's [official documentation](https://cloud.google.com/chronicle/docs/reference/ingestion-api) for further details.
```
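A sketch of a Chronicle output block with the new property. Apart from `log_type`, `region`, and `workers` from the table above, the credentials option name and all values are assumptions for illustration:

```
[OUTPUT]
    # Assumed option names and placeholder values for illustration
    name                       chronicle
    match                      *
    google_service_credentials /path/to/credentials.json
    log_type                   NIX_SYSTEM
    region                     US
    workers                    2
```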
pipeline/outputs/cloudwatch.md (1 addition, 22 deletions)

````diff
@@ -34,6 +34,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
 | profile | Option to specify an AWS Profile for credentials. Defaults to `default`|
 | auto\_retry\_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to `true`. |
 | external\_id | Specify an external ID for the STS API, can be used with the role\_arn parameter if your role requires an external ID. |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |

 ## Getting Started

@@ -80,28 +81,6 @@ The following AWS IAM permissions are required to use this plugin:
 }
 ```

-### Worker support
-
-Fluent Bit 1.7 adds a new feature called `workers` which enables outputs to have dedicated threads. This `cloudwatch_logs` plugin has partial support for workers in Fluent Bit 2.1.11 and prior. **2.1.11 and prior, the plugin can support a single worker; enabling multiple workers will lead to errors/indeterminate behavior.**
-Starting from Fluent Bit 2.1.12, the `cloudwatch_logs` plugin added full support for workers, meaning that more than one worker can be configured.
-
-Example:
-
-```
-[OUTPUT]
-    Name cloudwatch_logs
-    Match *
-    region us-east-1
-    log_group_name fluent-bit-cloudwatch
-    log_stream_prefix from-fluent-bit-
-    auto_create_group On
-    workers 1
-```
-
-If you enable workers, you are enabling one or more dedicated threads for your CloudWatch output.
-We recommend starting with 1 worker, evaluating the performance, and then enabling more workers if needed.
-For most users, the plugin can provide sufficient throughput with 0 or 1 workers.
-
 ### Log Stream and Group Name templating using record\_accessor syntax

 Sometimes, you may want the log group or stream name to be based on the contents of the log record itself. This plugin supports templating log group and stream names using Fluent Bit [record\_accessor](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor) syntax.
````
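The section removed above notes that Fluent Bit 2.1.12 and later fully support multiple workers; as a sketch, reusing that example's placeholder names with more than one worker:

```
[OUTPUT]
    Name cloudwatch_logs
    Match *
    region us-east-1
    log_group_name fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    auto_create_group On
    workers 2
```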
pipeline/outputs/datadog.md (1 addition)

```diff
@@ -25,6 +25,7 @@ Before you begin, you need a [Datadog account](https://app.datadoghq.com/signup)
 | dd_source |_Recommended_ - A human readable name for the underlying technology of your service (e.g. `postgres` or `nginx`). If unset, Datadog will look for the source in the [`ddsource` attribute](https://docs.datadoghq.com/logs/log_configuration/pipelines/?tab=source#source-attribute). ||
 | dd_tags |_Optional_ - The [tags](https://docs.datadoghq.com/tagging/) you want to assign to your logs in Datadog. If unset, Datadog will look for the tags in the [`ddtags` attribute](https://docs.datadoghq.com/api/latest/logs/#send-logs). ||
 | dd_message_key | By default, the plugin searches for the key 'log' and remaps its value to the key 'message'. If this property is set, the plugin will search for that key name instead. ||
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|
```
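A sketch of these properties together; the API key value is a placeholder, and `workers 2` is illustrative:

```
[OUTPUT]
    # Placeholder API key; replace with your Datadog key
    Name      datadog
    Match     *
    apikey    <YOUR_DATADOG_API_KEY>
    dd_source nginx
    dd_tags   env:dev,team:logging
    workers   2
```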
pipeline/outputs/elasticsearch.md (1 addition, 1 deletion)

```diff
@@ -48,7 +48,7 @@ The **es** output plugin, allows to ingest your records into an [Elasticsearch](
 | Trace\_Error | If elasticsearch returns an error, print the elasticsearch API request and response \(for diag only\)| Off |
 | Current\_Time\_Index | Use current time for index generation instead of message record | Off |
 | Suppress\_Type\_Name | When enabled, mapping types are removed and the `Type` option is ignored. Elasticsearch 8.0.0 and higher [no longer supports mapping types](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html), so this should be set to On. | Off |
-| Workers |Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. |2|
+| Workers |The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`2`|

 > The parameters _index_ and _type_ can be confusing if you are new to Elastic. If you have used a common relational database before, they can be compared to the _database_ and _table_ concepts. Also see [the FAQ below](elasticsearch.md#faq).
```
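Finally, a sketch placing the revised `Workers` property in a typical es output block; the host, port, and index values are placeholders, and `Workers 4` is an arbitrary increase over the `2` default:

```
[OUTPUT]
    # Placeholder connection details; replace with real values
    Name    es
    Match   *
    Host    192.168.2.3
    Port    9200
    Index   my_index
    Workers 4
```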
0 commit comments