If you specify a string for this parameter, the operation returns only log groups that\n have names that match the string based on a case-sensitive substring search. For example, if\n you specify DataLogs, log groups named DataLogs, aws/DataLogs, and\n GroupDataLogs would match, but datalogs, Data/log/s and\n Groupdata would not match.
\n
If you specify logGroupNamePattern in your request, then only\n arn, creationTime, and logGroupName are included in\n the response.
\n \n
\n logGroupNamePattern and logGroupNamePrefix are mutually exclusive.\n Only one of these parameters can be passed.
\n "
+ "smithy.api#documentation": "
If you specify a string for this parameter, the operation returns only log groups that\n have names that match the string based on a case-sensitive substring search. For example, if\n you specify DataLogs, log groups named DataLogs,\n aws/DataLogs, and GroupDataLogs would match, but\n datalogs, Data/log/s and Groupdata would not\n match.
\n
If you specify logGroupNamePattern in your request, then only\n arn, creationTime, and logGroupName are included in\n the response.
\n \n
\n logGroupNamePattern and logGroupNamePrefix are mutually exclusive.\n Only one of these parameters can be passed.
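A minimal boto3 sketch of the matching behavior described above; the client setup and the DataLogs pattern are illustrative assumptions:

```python
# Case-sensitive substring matching with logGroupNamePattern.
import boto3

logs = boto3.client("logs")

# Returns only log groups whose names contain "DataLogs" (case-sensitive).
# Because logGroupNamePattern is set, each result includes only
# arn, creationTime, and logGroupName.
response = logs.describe_log_groups(logGroupNamePattern="DataLogs")
for group in response["logGroups"]:
    print(group["logGroupName"], group["arn"])
```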
The actual log data content returned in the streaming response. This contains the fields and values of the log event in a structured format that can be parsed and processed by the client.
A structure containing the extracted fields from a log event. These fields are extracted based on the log format and can be used for structured querying and analysis.
Retrieves a large log object (LLO) and streams it back. This API is used to fetch the content of large portions of log events that have been ingested through the PutOpenTelemetryLogs API. \n When log events contain fields that would cause the total event size to exceed 1MB, CloudWatch Logs automatically processes up to 10 fields, starting with the largest fields. Each field is truncated as needed to keep \n the total event size as close to 1MB as possible. The excess portions are stored as large log objects (LLOs), these fields are processed separately, and LLO reference system fields (in the format @ptr.$[path.to.field]) are \n added. The path in the reference field reflects the original JSON structure where the large field was located. For example, this could be @ptr.$['input']['message'], @ptr.$['AAA']['BBB']['CCC']['DDD'], @ptr.$['AAA'], or any other path matching your log structure.
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
An internal error occurred during the streaming of log data. This exception is thrown when there's an issue with the internal streaming mechanism used by the GetLogObject operation.
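A hypothetical sketch of calling GetLogObject; the operation is new and streams its response, so the method shape, the logObjectPointer and unmask parameter names, and the fieldStream response key are assumptions drawn from the fields documented above, not a confirmed SDK signature:

```python
# Hypothetical GetLogObject call; parameter and response names are assumed.
import boto3

logs = boto3.client("logs")

response = logs.get_log_object(
    logObjectPointer="<pointer from an @ptr.$['input']['message'] field>",  # assumed name
    unmask=False,  # masked data stays masked unless the caller has logs:Unmask
)

# The response is assumed to be an event stream of structured field data.
for event in response["fieldStream"]:  # assumed stream key
    print(event)
```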
Creates an account-level data protection policy, subscription filter policy, or field\n index policy that applies to all log groups or a subset of log groups in the account.
\n
To use this operation, you must be signed on with the correct permissions depending on the\n type of policy that you are creating.
\n
\n
\n
To create a data protection policy, you must have the\n logs:PutDataProtectionPolicy and logs:PutAccountPolicy\n permissions.
\n
\n
\n
To create a subscription filter policy, you must have the\n logs:PutSubscriptionFilter and logs:PutAccountPolicy\n permissions.
\n
\n
\n
To create a transformer policy, you must have the logs:PutTransformer and\n logs:PutAccountPolicy permissions.
\n
\n
\n
To create a field index policy, you must have the logs:PutIndexPolicy and\n logs:PutAccountPolicy permissions.
\n
\n
\n
\n Data protection policy\n
\n
A data protection policy can help safeguard sensitive data that's ingested by your log\n groups by auditing and masking the sensitive log data. Each account can have only one\n account-level data protection policy.
\n \n
Sensitive data is detected and masked when it is ingested into a log group. When you set\n a data protection policy, log events ingested into the log groups before that time are not\n masked.
\n \n
If you use PutAccountPolicy to create a data protection policy for your whole\n account, it applies to both existing log groups and all log groups that are created later in\n this account. The account-level policy is applied to existing log groups with eventual\n consistency. It might take up to 5 minutes before sensitive data in existing log groups begins\n to be masked.
\n
By default, when a user views a log event that includes masked data, the sensitive data is\n replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to\n true to view the unmasked log events. Users with the logs:Unmask\n can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs\n Insights query with the unmask query command.
To use the PutAccountPolicy operation for a data protection policy, you must\n be signed on with the logs:PutDataProtectionPolicy and\n logs:PutAccountPolicy permissions.
\n
The PutAccountPolicy operation applies to all log groups in the account. You\n can use PutDataProtectionPolicy to create a data protection policy that applies to just one\n log group. If a log group has its own data protection policy and the account also has an\n account-level data protection policy, then the two policies are cumulative. Any sensitive term\n specified in either policy is masked.
\n
\n Subscription filter policy\n
\n
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to\n both existing log groups and log groups that are created later in this account. Supported\n destinations are Kinesis Data Streams, Firehose, and Lambda. When log\n events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP\n format.
\n
The following destinations are supported for subscription filters:
\n
\n
\n
An Kinesis Data Streams data stream in the same account as the subscription policy, for\n same-account delivery.
\n
\n
\n
An Firehose data stream in the same account as the subscription policy, for\n same-account delivery.
\n
\n
\n
A Lambda function in the same account as the subscription policy, for\n same-account delivery.
\n
\n
\n
A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
\n
\n
\n
Each account can have one account-level subscription filter policy per Region. If you are\n updating an existing filter, you must specify the correct name in PolicyName. To\n perform a PutAccountPolicy subscription filter operation for any destination\n except a Lambda function, you must also have the iam:PassRole\n permission.
\n
\n Transformer policy\n
\n
Creates or updates a log transformer policy for your account. You use\n log transformers to transform log events into a different format, making them easier for you\n to process and analyze. You can also transform logs from different sources into standardized\n formats that contain relevant, source-specific information. After you have created a\n transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You\n can then refer to the transformed versions of the logs during operations such as querying with\n CloudWatch Logs Insights or creating metric filters or subscription filters.
\n
You can also use a transformer to copy metadata from metadata keys into the log events\n themselves. This metadata can include log group name, log stream name, account ID and\n Region.
\n
A transformer for a log group is a series of processors, where each processor applies one\n type of transformation to the log events ingested into this log group. For more information\n about the available processors to use in a transformer, see Processors that you can use.
\n
Having log events in standardized format enables visibility across your applications for\n your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation\n for common log types with out-of-the-box transformation templates for major Amazon Web Services\n log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use\n pre-built transformation templates or create custom transformation policies.
\n
You can create transformers only for the log groups in the Standard log class.
\n
You can have one account-level transformer policy that applies to all log groups in the\n account. Or you can create as many as 20 account-level transformer policies that are each\n scoped to a subset of log groups with the selectionCriteria parameter. If you\n have multiple account-level transformer policies with selection criteria, no two of them can\n use the same or overlapping log group name prefixes. For example, if you have one policy\n filtered to log groups that start with my-log, you can't have another field index\n policy filtered to my-logpprod or my-logging.
\n
You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with\n PutTransformer and an account-level transformer that could apply to the same\n log group, the log group uses only the log-group level transformer. It ignores the\n account-level transformer.
\n
\n Field index policy\n
\n
You can use field index policies to create indexes on fields found in log events in the\n log group. Creating field indexes can help lower the scan volume for CloudWatch Logs\n Insights queries that reference those fields, because these queries attempt to skip the\n processing of log events that are known to not match the indexed field. Good fields to index\n are fields that you often need to query for and fields or values that match only a small\n fraction of the total log events. Common examples of indexes include request ID, session ID,\n user IDs, or instance IDs. For more information, see Create field indexes\n to improve query performance and reduce costs\n
\n
To find the fields that are in your log group events, use the GetLogGroupFields operation.
\n
For example, suppose you have created a field index for requestId. Then, any\n CloudWatch Logs Insights query on that log group that includes requestId =\n value\n or requestId in [value,\n value, ...] will attempt to process only the log events where\n the indexed field matches the specified value.
\n
Matches of log events to the names of indexed fields are case-sensitive. For example, an\n indexed field of RequestId won't match a log event containing\n requestId.
\n
You can have one account-level field index policy that applies to all log groups in the\n account. Or you can create as many as 20 account-level field index policies that are each\n scoped to a subset of log groups with the selectionCriteria parameter. If you\n have multiple account-level index policies with selection criteria, no two of them can use the\n same or overlapping log group name prefixes. For example, if you have one policy filtered to\n log groups that start with my-log, you can't have another field index policy\n filtered to my-logpprod or my-logging.
\n
If you create an account-level field index policy in a monitoring account in cross-account\n observability, the policy is applied only to the monitoring account and not to any source\n accounts.
\n
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. If you do so, that log\n group will use only that log-group level policy, and will ignore the account-level policy that\n you create with PutAccountPolicy.
"
+ "smithy.api#documentation": "
Creates an account-level data protection policy, subscription filter policy, field index\n policy, transformer policy, or metric extraction policy that applies to all log groups or a\n subset of log groups in the account.
\n
To use this operation, you must be signed on with the correct permissions depending on the\n type of policy that you are creating.
\n
\n
\n
To create a data protection policy, you must have the\n logs:PutDataProtectionPolicy and logs:PutAccountPolicy\n permissions.
\n
\n
\n
To create a subscription filter policy, you must have the\n logs:PutSubscriptionFilter and logs:PutAccountPolicy\n permissions.
\n
\n
\n
To create a transformer policy, you must have the logs:PutTransformer and\n logs:PutAccountPolicy permissions.
\n
\n
\n
To create a field index policy, you must have the logs:PutIndexPolicy and\n logs:PutAccountPolicy permissions.
\n
\n
\n
To create a metric extraction policy, you must have the\n logs:PutMetricExtractionPolicy and\n logs:PutAccountPolicy permissions.
\n
\n
\n
\n Data protection policy\n
\n
A data protection policy can help safeguard sensitive data that's ingested by your log\n groups by auditing and masking the sensitive log data. Each account can have only one\n account-level data protection policy.
\n \n
Sensitive data is detected and masked when it is ingested into a log group. When you set\n a data protection policy, log events ingested into the log groups before that time are not\n masked.
\n \n
If you use PutAccountPolicy to create a data protection policy for your whole\n account, it applies to both existing log groups and all log groups that are created later in\n this account. The account-level policy is applied to existing log groups with eventual\n consistency. It might take up to 5 minutes before sensitive data in existing log groups begins\n to be masked.
\n
By default, when a user views a log event that includes masked data, the sensitive data is\n replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to\n true to view the unmasked log events. Users with the logs:Unmask\n permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs\n Insights query with the unmask query command.
To use the PutAccountPolicy operation for a data protection policy, you must\n be signed on with the logs:PutDataProtectionPolicy and\n logs:PutAccountPolicy permissions.
\n
The PutAccountPolicy operation applies to all log groups in the account. You\n can use PutDataProtectionPolicy to create a data protection policy that applies to just one\n log group. If a log group has its own data protection policy and the account also has an\n account-level data protection policy, then the two policies are cumulative. Any sensitive term\n specified in either policy is masked.
\n
\n Subscription filter policy\n
\n
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to\n both existing log groups and log groups that are created later in this account. Supported\n destinations are Kinesis Data Streams, Firehose, and Lambda. When log\n events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP\n format.
\n
The following destinations are supported for subscription filters:
\n
\n
\n
A Kinesis Data Streams data stream in the same account as the subscription policy, for\n same-account delivery.
\n
\n
\n
A Firehose data stream in the same account as the subscription policy, for\n same-account delivery.
\n
\n
\n
A Lambda function in the same account as the subscription policy, for\n same-account delivery.
\n
\n
\n
A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
\n
\n
\n
Each account can have one account-level subscription filter policy per Region. If you are\n updating an existing filter, you must specify the correct name in PolicyName. To\n perform a PutAccountPolicy subscription filter operation for any destination\n except a Lambda function, you must also have the iam:PassRole\n permission.
\n
\n Transformer policy\n
\n
Creates or updates a log transformer policy for your account. You use\n log transformers to transform log events into a different format, making them easier for you\n to process and analyze. You can also transform logs from different sources into standardized\n formats that contain relevant, source-specific information. After you have created a\n transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You\n can then refer to the transformed versions of the logs during operations such as querying with\n CloudWatch Logs Insights or creating metric filters or subscription filters.
\n
You can also use a transformer to copy metadata from metadata keys into the log events\n themselves. This metadata can include log group name, log stream name, account ID and\n Region.
\n
A transformer for a log group is a series of processors, where each processor applies one\n type of transformation to the log events ingested into this log group. For more information\n about the available processors to use in a transformer, see Processors that you can use.
\n
Having log events in standardized format enables visibility across your applications for\n your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation\n for common log types with out-of-the-box transformation templates for major Amazon Web Services\n log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use\n pre-built transformation templates or create custom transformation policies.
\n
You can create transformers only for the log groups in the Standard log class.
\n
You can have one account-level transformer policy that applies to all log groups in the\n account. Or you can create as many as 20 account-level transformer policies that are each\n scoped to a subset of log groups with the selectionCriteria parameter. If you\n have multiple account-level transformer policies with selection criteria, no two of them can\n use the same or overlapping log group name prefixes. For example, if you have one policy\n filtered to log groups that start with my-log, you can't have another transformer\n policy filtered to my-logpprod or my-logging.
\n
You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with\n PutTransformer and an account-level transformer that could apply to the same\n log group, the log group uses only the log-group level transformer. It ignores the\n account-level transformer.
\n
\n Field index policy\n
\n
You can use field index policies to create indexes on fields found in log events in the\n log group. Creating field indexes can help lower the scan volume for CloudWatch Logs\n Insights queries that reference those fields, because these queries attempt to skip the\n processing of log events that are known to not match the indexed field. Good fields to index\n are fields that you often need to query for and fields or values that match only a small\n fraction of the total log events. Common examples of indexes include request ID, session ID,\n user IDs, or instance IDs. For more information, see Create field indexes\n to improve query performance and reduce costs\n
\n
To find the fields that are in your log group events, use the GetLogGroupFields operation.
\n
For example, suppose you have created a field index for requestId. Then, any\n CloudWatch Logs Insights query on that log group that includes requestId =\n value\n or requestId in [value,\n value, ...] will attempt to process only the log events where\n the indexed field matches the specified value.
\n
Matches of log events to the names of indexed fields are case-sensitive. For example, an\n indexed field of RequestId won't match a log event containing\n requestId.
\n
You can have one account-level field index policy that applies to all log groups in the\n account. Or you can create as many as 20 account-level field index policies that are each\n scoped to a subset of log groups with the selectionCriteria parameter. If you\n have multiple account-level index policies with selection criteria, no two of them can use the\n same or overlapping log group name prefixes. For example, if you have one policy filtered to\n log groups that start with my-log, you can't have another field index policy\n filtered to my-logpprod or my-logging.
\n
If you create an account-level field index policy in a monitoring account in cross-account\n observability, the policy is applied only to the monitoring account and not to any source\n accounts.
\n
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. If you do so, that log\n group will use only that log-group level policy, and will ignore the account-level policy that\n you create with PutAccountPolicy.
\n
\n Metric extraction policy\n
\n
A metric extraction policy controls whether CloudWatch Metrics can be created through the\n Embedded Metrics Format (EMF) for log groups in your account. By default, EMF metric creation\n is enabled for all log groups. You can use metric extraction policies to disable EMF metric\n creation for your entire account or specific log groups.
\n
When a policy disables EMF metric creation for a log group, log events in the EMF format\n are still ingested, but no CloudWatch Metrics are created from them.
\n \n
Creating a policy disables metrics for Amazon Web Services features that use EMF to create metrics, such\n as CloudWatch Container Insights and CloudWatch Application Signals. To prevent turning off\n those features by accident, we recommend that you exclude the underlying log groups through\n selection criteria such as LogGroupNamePrefix NOT IN [\"/aws/containerinsights\",\n \"/aws/ecs/containerinsights\", \"/aws/application-signals/data\"].
\n \n
Each account can have either one account-level metric extraction policy that applies to\n all log groups, or up to 5 policies that are each scoped to a subset of log groups with the\n selectionCriteria parameter. The selection criteria supports filtering by LogGroupName and\n LogGroupNamePrefix using the operators IN and NOT IN. You can specify up to 50 values in each\n IN or NOT IN list.
\n
The selection criteria can be specified in these formats:
\n
\n LogGroupName IN [\"log-group-1\", \"log-group-2\"]\n
\n
\n LogGroupNamePrefix NOT IN [\"/aws/prefix1\", \"/aws/prefix2\"]\n
\n
If you have multiple account-level metric extraction policies with selection criteria, no\n two of them can have overlapping criteria. For example, if you have one policy with selection\n criteria LogGroupNamePrefix IN [\"my-log\"], you can't have another metric extraction policy\n with selection criteria LogGroupNamePrefix IN [\"/my-log-prod\"] or LogGroupNamePrefix IN\n [\"/my-logging\"], as the set of log groups matching these prefixes would be a subset of the log\n groups matching the first policy's prefix, creating an overlap.
\n
When using NOT IN, only one policy with this operator is allowed per account.
\n
When combining policies with IN and NOT IN operators, the overlap check ensures that\n policies don't have conflicting effects. Two policies with IN and NOT IN operators do not\n overlap if and only if every value in the IN policy is completely contained within some value\n in the NOT IN policy. For example:
\n
\n
\n
If you have a NOT IN policy for prefix \"/aws/lambda\", you can create an IN policy for\n the exact log group name \"/aws/lambda/function1\" because the set of log groups matching\n \"/aws/lambda/function1\" is a subset of the log groups matching \"/aws/lambda\".
\n
\n
\n
If you have a NOT IN policy for prefix \"/aws/lambda\", you cannot create an IN policy\n for prefix \"/aws\" because the set of log groups matching \"/aws\" is not a subset of the log\n groups matching \"/aws/lambda\".
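To make the metric extraction guidance concrete, here is a minimal put_account_policy sketch that disables EMF metric creation while excluding the feature log groups listed above; the policy name and the policy document body are placeholders, not a documented schema:

```python
# Sketch: account-level metric extraction policy with an exclusion list.
import boto3

logs = boto3.client("logs")

logs.put_account_policy(
    policyName="disable-emf-metrics",       # placeholder name
    policyType="METRIC_EXTRACTION_POLICY",
    policyDocument="{}",                    # placeholder document
    selectionCriteria=(
        'LogGroupNamePrefix NOT IN ["/aws/containerinsights", '
        '"/aws/ecs/containerinsights", "/aws/application-signals/data"]'
    ),
)
```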
Use this parameter to apply the new policy to a subset of log groups in the\n account.
\n
Specifing selectionCriteria is valid only when you specify\n SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or\n TRANSFORMER_POLICYfor policyType.
\n
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported\n selectionCriteria filter is LogGroupName NOT IN []\n
\n
If policyType is FIELD_INDEX_POLICY or\n TRANSFORMER_POLICY, the only supported selectionCriteria filter is\n LogGroupNamePrefix\n
\n
The selectionCriteria string can be up to 25KB in length. The length is\n determined by using its UTF-8 bytes.
\n
Using the selectionCriteria parameter with\n SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more\n information, see Log recursion\n prevention.
"
+ "smithy.api#documentation": "
Use this parameter to apply the new policy to a subset of log groups in the\n account.
\n
Specifying selectionCriteria is valid only when you specify\n SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY, or\n TRANSFORMER_POLICY for policyType.
\n
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported\n selectionCriteria filter is LogGroupName NOT IN []\n
\n
If policyType is FIELD_INDEX_POLICY or\n TRANSFORMER_POLICY, the only supported selectionCriteria filter is\n LogGroupNamePrefix\n
\n
The selectionCriteria string can be up to 25KB in length. The length is\n determined by using its UTF-8 bytes.
\n
Using the selectionCriteria parameter with\n SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more\n information, see Log recursion\n prevention.
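As one hedged example of the recursion-prevention pattern above, a subscription filter policy can exclude its own destination's log group; the ARNs and log group name are placeholders:

```python
# Sketch: SUBSCRIPTION_FILTER_POLICY with a LogGroupName NOT IN exclusion.
import json

import boto3

logs = boto3.client("logs")

logs.put_account_policy(
    policyName="account-subscription-filter",  # placeholder name
    policyType="SUBSCRIPTION_FILTER_POLICY",
    policyDocument=json.dumps({
        "DestinationArn": "arn:aws:kinesis:us-east-1:111122223333:stream/example",
        "RoleArn": "arn:aws:iam::111122223333:role/example-role",
        "FilterPattern": "",
    }),
    selectionCriteria='LogGroupName NOT IN ["/example/destination-log-group"]',
)
```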
Defines the type of log that the source is sending.
\n
\n
\n
For Amazon Bedrock, the valid value is APPLICATION_LOGS and TRACES.
\n
\n
\n
For CloudFront, the valid value is ACCESS_LOGS.
\n
\n
\n
For Amazon CodeWhisperer, the valid value is EVENT_LOGS.
\n
\n
\n
For Elemental MediaPackage, the valid values are EGRESS_ACCESS_LOGS and\n INGRESS_ACCESS_LOGS.
\n
\n
\n
For Elemental MediaTailor, the valid values are AD_DECISION_SERVER_LOGS,\n MANIFEST_SERVICE_LOGS, and TRANSCODE_LOGS.
\n
\n
\n
For Entity Resolution, the valid value is WORKFLOW_LOGS.
\n
\n
\n
For IAM Identity Center, the valid value is\n ERROR_LOGS.
\n
\n
\n
For PCS, the valid values are PCS_SCHEDULER_LOGS and PCS_JOBCOMP_LOGS.
\n
\n
\n
For Amazon Q, the valid value is EVENT_LOGS.
\n
\n
\n
For Amazon SES mail manager, the valid values are APPLICATION_LOG\n and TRAFFIC_POLICY_DEBUG_LOGS.
\n
\n
\n
For Amazon WorkMail, the valid values are ACCESS_CONTROL_LOGS,\n AUTHENTICATION_LOGS, WORKMAIL_AVAILABILITY_PROVIDER_LOGS,\n WORKMAIL_MAILBOX_ACCESS_LOGS, and\n WORKMAIL_PERSONAL_ACCESS_TOKEN_LOGS.
\n
\n
\n
For Amazon VPC Route Server, the valid value is\n EVENT_LOGS.
\n
\n
",
+ "smithy.api#documentation": "
Defines the type of log that the source is sending.
\n
\n
\n
For Amazon Bedrock, the valid values are APPLICATION_LOGS and TRACES.
\n
\n
\n
For CloudFront, the valid value is ACCESS_LOGS.
\n
\n
\n
For Amazon CodeWhisperer, the valid value is EVENT_LOGS.
\n
\n
\n
For Elemental MediaPackage, the valid values are EGRESS_ACCESS_LOGS and\n INGRESS_ACCESS_LOGS.
\n
\n
\n
For Elemental MediaTailor, the valid values are AD_DECISION_SERVER_LOGS,\n MANIFEST_SERVICE_LOGS, and TRANSCODE_LOGS.
\n
\n
\n
For Entity Resolution, the valid value is WORKFLOW_LOGS.
\n
\n
\n
For IAM Identity Center, the valid value is\n ERROR_LOGS.
\n
\n
\n
For PCS, the valid values are PCS_SCHEDULER_LOGS and\n PCS_JOBCOMP_LOGS.
\n
\n
\n
For Amazon Q, the valid value is EVENT_LOGS.
\n
\n
\n
For Amazon SES mail manager, the valid values are APPLICATION_LOG\n and TRAFFIC_POLICY_DEBUG_LOGS.
\n
\n
\n
For Amazon WorkMail, the valid values are ACCESS_CONTROL_LOGS,\n AUTHENTICATION_LOGS, WORKMAIL_AVAILABILITY_PROVIDER_LOGS,\n WORKMAIL_MAILBOX_ACCESS_LOGS, and\n WORKMAIL_PERSONAL_ACCESS_TOKEN_LOGS.
\n
\n
\n
For Amazon VPC Route Server, the valid value is\n EVENT_LOGS.
\n
\n
",
"smithy.api#required": {}
}
},
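For context, a short sketch of registering a delivery source with one of the log types enumerated above; the source name, resource ARN, and chosen logType are illustrative placeholders:

```python
# Sketch: PutDeliverySource with a logType value from the list above.
import boto3

logs = boto3.client("logs")

logs.put_delivery_source(
    name="workmail-access-control-logs",  # placeholder name
    resourceArn="arn:aws:workmail:us-east-1:111122223333:organization/m-example",
    logType="ACCESS_CONTROL_LOGS",        # valid for Amazon WorkMail
)
```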
@@ -12981,7 +13110,7 @@
"smithy.api#deprecated": {
"message": "Please use the generic tagging API UntagResource"
},
- "smithy.api#documentation": "\n
The UntagLogGroup operation is on the path to deprecation. We recommend that you use\n UntagResource instead.
\n \n
Removes the specified tags from the specified log group.
CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified\n tags to log groups using the aws:Resource/key-name\n or\n aws:TagKeys condition keys.
"
+ "smithy.api#documentation": "\n
The UntagLogGroup operation is on the path to deprecation. We recommend that you use\n UntagResource instead.
\n \n
Removes the specified tags from the specified log group.
When using IAM policies to control tag management for CloudWatch Logs log groups, the\n condition keys aws:Resource/key-name and aws:TagKeys cannot be used to restrict which tags\n users can assign.
"
}
},
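Since UntagLogGroup is headed for deprecation, a minimal sketch of the recommended UntagResource replacement, which takes the log group ARN rather than its name; the ARN and tag keys are placeholders:

```python
# Sketch: UntagResource as the replacement for UntagLogGroup.
import boto3

logs = boto3.client("logs")

logs.untag_resource(
    resourceArn="arn:aws:logs:us-east-1:111122223333:log-group:example-group",
    tagKeys=["Team", "Environment"],
)
```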
"com.amazonaws.cloudwatchlogs#UntagLogGroupRequest": {
diff --git a/aws-models/mediaconvert.json b/aws-models/mediaconvert.json
index e9e6d7b6ec33..75b715fd784a 100644
--- a/aws-models/mediaconvert.json
+++ b/aws-models/mediaconvert.json
@@ -13892,7 +13892,7 @@
"FileInput": {
"target": "com.amazonaws.mediaconvert#__stringMax2048PatternS3Https",
"traits": {
- "smithy.api#documentation": "Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, \"s3://bucket/vf/cpl.xml\". If the CPL is in an incomplete IMP, make sure to use *Supplemental IMPs* to specify any supplemental IMPs that contain assets referenced by the CPL.",
+ "smithy.api#documentation": "Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. For standard inputs, provide the path to your S3, HTTP, or HTTPS source file. For example, s3://amzn-s3-demo-bucket/input.mp4 for an Amazon S3 input or https://example.com/input.mp4 for an HTTPS input. For TAMS inputs, specify the HTTPS endpoint of your TAMS server. For example, https://tams-server.example.com . When you do, also specify Source ID, Timerange, GAP handling, and the Authorization connection ARN under TAMS settings. (Don't include these parameters in the Input file URL.) For IMF inputs, specify your input by providing the path to your CPL. For example, s3://amzn-s3-demo-bucket/vf/cpl.xml . If the CPL is in an incomplete IMP, make sure to use Supplemental IMPsto specify any supplemental IMPs that contain assets referenced by the CPL.",
"smithy.api#jsonName": "fileInput"
}
},
@@ -13959,6 +13959,13 @@
"smithy.api#jsonName": "supplementalImps"
}
},
+ "TamsSettings": {
+ "target": "com.amazonaws.mediaconvert#InputTamsSettings",
+ "traits": {
+ "smithy.api#documentation": "Specify a Time Addressable Media Store (TAMS) server as an input source. TAMS is an open-source API specification that provides access to time-segmented media content. Use TAMS to retrieve specific time ranges from live or archived media streams. When you specify TAMS settings, MediaConvert connects to your TAMS server, retrieves the media segments for your specified time range, and processes them as a single input. This enables workflows like extracting clips from live streams or processing specific portions of archived content. To use TAMS, you must: 1. Have access to a TAMS-compliant server 2. Specify the server URL in the Input file URL field 3. Provide the required SourceId and Timerange parameters 4. Configure authentication, if your TAMS server requires it",
+ "smithy.api#jsonName": "tamsSettings"
+ }
+ },
"TimecodeSource": {
"target": "com.amazonaws.mediaconvert#InputTimecodeSource",
"traits": {
@@ -14247,6 +14254,42 @@
"smithy.api#documentation": "When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto. Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts."
}
},
+ "com.amazonaws.mediaconvert#InputTamsSettings": {
+ "type": "structure",
+ "members": {
+ "AuthConnectionArn": {
+ "target": "com.amazonaws.mediaconvert#__stringPatternArnAwsAZ09EventsAZ090912ConnectionAZAZ09AF0936",
+ "traits": {
+ "smithy.api#documentation": "Specify the ARN (Amazon Resource Name) of an EventBridge Connection to authenticate with your TAMS server. The EventBridge Connection stores your authentication credentials securely. MediaConvert assumes your job's IAM role to access this connection, so ensure the role has the events:RetrieveConnectionCredentials, secretsmanager:DescribeSecret, and secretsmanager:GetSecretValue permissions. Format: arn:aws:events:region:account-id:connection/connection-name/unique-id",
+ "smithy.api#jsonName": "authConnectionArn"
+ }
+ },
+ "GapHandling": {
+ "target": "com.amazonaws.mediaconvert#TamsGapHandling",
+ "traits": {
+ "smithy.api#documentation": "Specify how MediaConvert handles gaps between media segments in your TAMS source. Gaps can occur in live streams due to network issues or other interruptions. Choose from the following options: * Skip gaps - Default. Skip over gaps and join segments together. This creates a continuous output with no blank frames, but may cause timeline discontinuities. * Fill with black - Insert black frames to fill gaps between segments. This maintains timeline continuity but adds black frames where content is missing. * Hold last frame - Repeat the last frame before a gap until the next segment begins. This maintains visual continuity during gaps.",
+ "smithy.api#jsonName": "gapHandling"
+ }
+ },
+ "SourceId": {
+ "target": "com.amazonaws.mediaconvert#__string",
+ "traits": {
+ "smithy.api#documentation": "Specify the unique identifier for the media source in your TAMS server. MediaConvert uses this source ID to locate the appropriate flows containing the media segments you want to process. The source ID corresponds to a specific media source registered in your TAMS server. This source must be of type urn:x-nmos:format:multi, and can can reference multiple flows for audio, video, or combined audio/video content. MediaConvert automatically selects the highest quality flows available for your job. This setting is required when include TAMS settings in your job.",
+ "smithy.api#jsonName": "sourceId"
+ }
+ },
+ "Timerange": {
+ "target": "com.amazonaws.mediaconvert#__stringPattern019090190908019090190908",
+ "traits": {
+ "smithy.api#documentation": "Specify the time range of media segments to retrieve from your TAMS server. MediaConvert fetches only the segments that fall within this range. Use the format specified by your TAMS server implementation. This must be two timestamp values with the format {sign?}{seconds}:{nanoseconds}, separated by an underscore, surrounded by either parentheses or square brackets. Example: [15:0_35:0) This setting is required when include TAMS settings in your job.",
+ "smithy.api#jsonName": "timerange"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Specify a Time Addressable Media Store (TAMS) server as an input source. TAMS is an open-source API specification that provides access to time-segmented media content. Use TAMS to retrieve specific time ranges from live or archived media streams. When you specify TAMS settings, MediaConvert connects to your TAMS server, retrieves the media segments for your specified time range, and processes them as a single input. This enables workflows like extracting clips from live streams or processing specific portions of archived content. To use TAMS, you must: 1. Have access to a TAMS-compliant server 2. Specify the server URL in the Input file URL field 3. Provide the required SourceId and Timerange parameters 4. Configure authentication, if your TAMS server requires it"
+ }
+ },
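A hedged sketch of a job input that uses the TAMS settings defined above, combining FileInput (the TAMS server endpoint) with the new InputTamsSettings members; the endpoint URL, role, ARNs, and source ID are placeholders, and a real job also needs output groups:

```python
# Sketch: MediaConvert job input pointing at a TAMS server.
import boto3

mc = boto3.client(
    "mediaconvert",
    endpoint_url="https://abcd1234.mediaconvert.us-east-1.amazonaws.com",  # placeholder
)

mc.create_job(
    Role="arn:aws:iam::111122223333:role/MediaConvertRole",  # placeholder
    Settings={
        "Inputs": [{
            "FileInput": "https://tams-server.example.com",  # TAMS endpoint, not a media file
            "TamsSettings": {
                "SourceId": "example-source-id",
                "Timerange": "[15:0_35:0)",  # {seconds}:{nanoseconds} pair
                "GapHandling": "FILL_WITH_BLACK",
                "AuthConnectionArn": (
                    "arn:aws:events:us-east-1:111122223333:"
                    "connection/example/0a1b2c3d-0a1b-2c3d-4e5f-0a1b2c3d4e5f"
                ),
            },
        }],
        "OutputGroups": [],  # elided; a runnable job needs at least one output
    },
)
```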
"com.amazonaws.mediaconvert#InputTemplate": {
"type": "structure",
"members": {
@@ -24024,6 +24067,32 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.mediaconvert#TamsGapHandling": {
+ "type": "enum",
+ "members": {
+ "SKIP_GAPS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "SKIP_GAPS"
+ }
+ },
+ "FILL_WITH_BLACK": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "FILL_WITH_BLACK"
+ }
+ },
+ "HOLD_LAST_FRAME": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "HOLD_LAST_FRAME"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Specify how MediaConvert handles gaps between media segments in your TAMS source. Gaps can occur in live streams due to network issues or other interruptions. Choose from the following options: * Skip gaps - Default. Skip over gaps and join segments together. This creates a continuous output with no blank frames, but may cause timeline discontinuities. * Fill with black - Insert black frames to fill gaps between segments. This maintains timeline continuity but adds black frames where content is missing. * Hold last frame - Repeat the last frame before a gap until the next segment begins. This maintains visual continuity during gaps."
+ }
+ },
"com.amazonaws.mediaconvert#TeletextDestinationSettings": {
"type": "structure",
"members": {
@@ -25940,7 +26009,7 @@
"Height": {
"target": "com.amazonaws.mediaconvert#__integerMin0Max2147483647",
"traits": {
- "smithy.api#documentation": "Specify the height of the video overlay cropping rectangle. To use the same height as your overlay input video: Keep blank, or enter 0. To specify a different height for the cropping rectangle: Enter an integer representing the Unit type that you choose, either Pixels or Percentage. For example, when you enter 100 and choose Pixels, the cropping rectangle will 100 pixels high. When you enter 10, choose Percentage, and your overlay input video is 1920x1080, the cropping rectangle will be 108 pixels high.",
+ "smithy.api#documentation": "Specify the height of the video overlay cropping rectangle. To use the same height as your overlay input video: Keep blank, or enter 0. To specify a different height for the cropping rectangle: Enter an integer representing the Unit type that you choose, either Pixels or Percentage. For example, when you enter 100 and choose Pixels, the cropping rectangle will be 100 pixels high. When you enter 10, choose Percentage, and your overlay input video is 1920x1080, the cropping rectangle will be 108 pixels high.",
"smithy.api#jsonName": "height"
}
},
@@ -25954,7 +26023,7 @@
"Width": {
"target": "com.amazonaws.mediaconvert#__integerMin0Max2147483647",
"traits": {
- "smithy.api#documentation": "Specify the width of the video overlay cropping rectangle. To use the same width as your overlay input video: Keep blank, or enter 0. To specify a different width for the cropping rectangle: Enter an integer representing the Unit type that you choose, either Pixels or Percentage. For example, when you enter 100 and choose Pixels, the cropping rectangle will 100 pixels wide. When you enter 10, choose Percentage, and your overlay input video is 1920x1080, the cropping rectangle will be 192 pixels wide.",
+ "smithy.api#documentation": "Specify the width of the video overlay cropping rectangle. To use the same width as your overlay input video: Keep blank, or enter 0. To specify a different width for the cropping rectangle: Enter an integer representing the Unit type that you choose, either Pixels or Percentage. For example, when you enter 100 and choose Pixels, the cropping rectangle will be 100 pixels wide. When you enter 10, choose Percentage, and your overlay input video is 1920x1080, the cropping rectangle will be 192 pixels wide.",
"smithy.api#jsonName": "width"
}
},
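A tiny worked check of the Pixels/Percentage arithmetic in the two doc strings above, under the stated 1920x1080 overlay assumption:

```python
# Resolve a cropping-rectangle value to pixels, per the examples above.
def crop_pixels(value: int, unit: str, source_dimension: int) -> int:
    if unit == "PERCENTAGE":
        return source_dimension * value // 100
    return value  # unit == "PIXELS"

assert crop_pixels(100, "PIXELS", 1080) == 100     # 100 pixels high
assert crop_pixels(10, "PERCENTAGE", 1080) == 108  # 10% of 1080
assert crop_pixels(10, "PERCENTAGE", 1920) == 192  # 10% of 1920
```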
@@ -29511,6 +29580,12 @@
"smithy.api#pattern": "^([01][0-9]|2[0-4]):[0-5][0-9]:[0-5][0-9][:;][0-9]{2}(@[0-9]+(\\.[0-9]+)?(:[0-9]+)?)?$"
}
},
+ "com.amazonaws.mediaconvert#__stringPattern019090190908019090190908": {
+ "type": "string",
+ "traits": {
+ "smithy.api#pattern": "^(\\[|\\()?(-?(0|[1-9][0-9]*):(0|[1-9][0-9]{0,8}))?(_(-?(0|[1-9][0-9]*):(0|[1-9][0-9]{0,8}))?)?(\\]|\\))?$"
+ }
+ },
"com.amazonaws.mediaconvert#__stringPattern01D20305D205D": {
"type": "string",
"traits": {
@@ -29559,6 +29634,12 @@
"smithy.api#pattern": "^[A-Za-z]{2,3}(-[A-Za-z0-9-]+)?$"
}
},
+ "com.amazonaws.mediaconvert#__stringPatternArnAwsAZ09EventsAZ090912ConnectionAZAZ09AF0936": {
+ "type": "string",
+ "traits": {
+ "smithy.api#pattern": "^arn:aws[a-z0-9-]*:events:[a-z0-9-]+:[0-9]{12}:connection/[a-zA-Z0-9-]+/[a-f0-9-]{36}$"
+ }
+ },
"com.amazonaws.mediaconvert#__stringPatternArnAwsUsGovAcm": {
"type": "string",
"traits": {
diff --git a/aws-models/outposts.json b/aws-models/outposts.json
index fdb5ce7507c8..00238a21373c 100644
--- a/aws-models/outposts.json
+++ b/aws-models/outposts.json
@@ -276,7 +276,7 @@
"AssetId": {
"target": "com.amazonaws.outposts#AssetId",
"traits": {
- "smithy.api#documentation": "
The ID of the asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server configuration.
"
+ "smithy.api#documentation": "
The ID of the asset. An Outpost asset can be a single server within an Outposts rack or\n an Outposts server configuration.
A capacity task can have one of the following statuses:
\n
\n
\n
\n REQUESTED - The capacity task was created and is awaiting the next step\n by Amazon Web Services Outposts.
\n
\n
\n
\n IN_PROGRESS - The capacity task is running and cannot be cancelled.
\n
\n
\n
\n FAILED - The capacity task could not be completed.
\n
\n
\n
\n COMPLETED - The capacity task has completed successfully.
\n
\n
\n
\n WAITING_FOR_EVACUATION - The capacity task requires capacity to run. You must stop the recommended EC2 running instances to free up capacity for the task to run.
\n
\n
\n
\n CANCELLATION_IN_PROGRESS - The capacity task has been cancelled and is in the process of cleaning up resources.
\n
\n
\n
\n CANCELLED - The capacity task is cancelled.
\n
\n
"
+ "smithy.api#documentation": "
Status of the capacity task.
\n
A capacity task can have one of the following statuses:
\n
\n
\n
\n REQUESTED - The capacity task was created and is awaiting the next step\n by Amazon Web Services Outposts.
\n
\n
\n
\n IN_PROGRESS - The capacity task is running and cannot be\n cancelled.
\n
\n
\n
\n FAILED - The capacity task could not be completed.
\n
\n
\n
\n COMPLETED - The capacity task has completed successfully.
\n
\n
\n
\n WAITING_FOR_EVACUATION - The capacity task requires capacity to run. You\n must stop the recommended EC2 running instances to free up capacity for the task to\n run.
\n
\n
\n
\n CANCELLATION_IN_PROGRESS - The capacity task has been cancelled and is in\n the process of cleaning up resources.
The date the current contract term ends for the specified Outpost. You must start the renewal or\n decommission process at least 5 business days before the current term for your\n Amazon Web Services Outposts ends. Failing to complete these steps at least 5 business days before the\n current term ends might result in unanticipated charges.
The power connector that Amazon Web Services should plan to provide for connections to the hardware.\n Note the correlation between PowerPhase and PowerConnector.
\n CS8365C – (common in US); 3P+E, 50A; three phase
\n
\n
\n
\n
"
+ "smithy.api#documentation": "
The power connector that Amazon Web Services should plan to provide for connections to the hardware.\n Note the correlation between PowerPhase and PowerConnector.
The warm-up status of a dedicated IP address. The status can have one of the following\n values:
\n
\n
\n
\n IN_PROGRESS – The IP address isn't ready to use because the\n dedicated IP warm-up process is ongoing.
\n
\n
\n
\n DONE – The dedicated IP warm-up process is complete, and\n the IP address is ready to use.
\n
\n
",
+ "smithy.api#documentation": "
The warm-up status of a dedicated IP address. The status can have one of the following\n values:
\n
\n
\n
\n IN_PROGRESS – The IP address isn't ready to use because the\n dedicated IP warm-up process is ongoing.
\n
\n
\n
\n DONE – The dedicated IP warm-up process is complete, and\n the IP address is ready to use.
\n
\n
\n
\n NOT_APPLICABLE – The warm-up status doesn't apply to this IP address.\n This status is used for IP addresses in managed dedicated IP pools, where Amazon SES automatically\n handles the warm-up process.
Indicates how complete the dedicated IP warm-up process is. When this value equals 1,\n the address has completed the warm-up process and is ready for use.
",
+ "smithy.api#documentation": "
Indicates the progress of your dedicated IP warm-up:
\n
\n
\n
\n 0-100 – For standard dedicated IP addresses, this shows the warm-up completion percentage. A value of 100 means the IP address is fully warmed up and ready for use.
\n
\n
\n
\n -1 – Appears for IP addresses in managed dedicated pools where Amazon SES automatically handles the warm-up process, making the percentage not applicable.
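A short sketch of reading these fields back with GetDedicatedIp, including the managed-pool case where the percentage is -1; the IP address is a placeholder:

```python
# Sketch: interpreting WarmupStatus and WarmupPercentage.
import boto3

sesv2 = boto3.client("sesv2")

ip = sesv2.get_dedicated_ip(Ip="203.0.113.10")["DedicatedIp"]
if ip["WarmupPercentage"] == -1:
    print("Managed pool: Amazon SES handles warm-up automatically.")
else:
    print(f"{ip['Ip']} warm-up: {ip['WarmupPercentage']}% ({ip['WarmupStatus']})")
```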
Amazon Web Services Systems Manager is the operations hub for your Amazon Web Services applications and resources and a secure\n end-to-end management solution for hybrid cloud environments that enables safe and secure\n operations at scale.
For information about each of the tools that comprise Systems Manager, see Using\n Systems Manager tools in the Amazon Web Services Systems Manager User Guide.
Amazon Web Services Systems Manager is the operations hub for your Amazon Web Services applications and resources and a secure\n end-to-end management solution for hybrid cloud environments that enables safe and secure\n operations at scale.
For information about each of the tools that comprise Systems Manager, see Using\n Systems Manager tools in the Amazon Web Services Systems Manager User Guide.
The time the execution ran as a datetime object that is saved in the following format:\n yyyy-MM-dd'T'HH:mm:ss'Z'\n
",
+ "smithy.api#documentation": "
The time the execution ran as a datetime object that is saved in the following format:\n yyyy-MM-dd'T'HH:mm:ss'Z'\n
\n \n
For State Manager associations, this timestamp represents when the compliance status was\n captured and reported by the Systems Manager service, not when the underlying association was actually\n executed on the managed node. To track actual association execution times, use the DescribeAssociationExecutionTargets command or check the association execution\n history in the Systems Manager console.
A summary for the compliance item. The summary includes an execution ID, the execution type\n (for example, command), and the execution time.
"
+ "smithy.api#documentation": "
A summary for the compliance item. The summary includes an execution ID, the execution type\n (for example, command), and the execution time.
\n \n
For State Manager associations, the ExecutionTime value represents when the\n compliance status was captured and aggregated by the Systems Manager service, not necessarily when the\n underlying association was executed on the managed node. State Manager updates compliance status\n for all associations on an instance whenever any association executes, which means multiple\n associations may show the same execution time even if they were executed at different\n times.
Lists the parameters in your Amazon Web Services account or the parameters shared with you when you enable\n the Shared option.
\n
Request results are returned on a best-effort basis. If you specify MaxResults\n in the request, the response includes information up to the limit specified. The number of items\n returned, however, can be between zero and the value of MaxResults. If the service\n reaches an internal limit while processing the results, it stops the operation and returns the\n matching values up to that point and a NextToken. You can specify the\n NextToken in a subsequent call to get the next set of results.
\n \n
If you change the KMS key alias for the KMS key used to encrypt a parameter,\n then you must also update the key alias the parameter uses to reference KMS. Otherwise,\n DescribeParameters retrieves whatever the original key alias was\n referencing.
\n ",
+ "smithy.api#documentation": "
Lists the parameters in your Amazon Web Services account or the parameters shared with you when you enable\n the Shared option.
\n
Request results are returned on a best-effort basis. If you specify MaxResults\n in the request, the response includes information up to the limit specified. The number of items\n returned, however, can be between zero and the value of MaxResults. If the service\n reaches an internal limit while processing the results, it stops the operation and returns the\n matching values up to that point and a NextToken. You can specify the\n NextToken in a subsequent call to get the next set of results.
\n
Parameter names can't contain spaces. The service removes any spaces specified for the\n beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
\n \n
If you change the KMS key alias for the KMS key used to encrypt a parameter,\n then you must also update the key alias the parameter uses to reference KMS. Otherwise,\n DescribeParameters retrieves whatever the original key alias was\n referencing.
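The NextToken behavior described above maps directly onto a boto3 paginator, which follows tokens until the results are exhausted; a minimal sketch:

```python
# Sketch: paginating DescribeParameters instead of handling NextToken manually.
import boto3

ssm = boto3.client("ssm")

paginator = ssm.get_paginator("describe_parameters")
for page in paginator.paginate():
    for parameter in page["Parameters"]:
        print(parameter["Name"], parameter.get("Type"))
```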
Get information about a single parameter by specifying the parameter name.
\n \n
To get information about more than one parameter at a time, use the GetParameters operation.
\n "
+ "smithy.api#documentation": "
Get information about a single parameter by specifying the parameter name.
\n
Parameter names can't contain spaces. The service removes any spaces specified for the\n beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
\n \n
To get information about more than one parameter at a time, use the GetParameters operation.
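A minimal GetParameter sketch illustrating the space-handling rule above; the parameter name is a placeholder:

```python
# Sketch: GetParameter for a single name. Leading/trailing spaces are
# stripped by the service; interior spaces fail with ValidationException.
import boto3
from botocore.exceptions import ClientError

ssm = boto3.client("ssm")

try:
    result = ssm.get_parameter(Name="/example/app/db-password", WithDecryption=True)
    print(result["Parameter"]["Value"])
except ClientError as err:
    if err.response["Error"]["Code"] == "ValidationException":
        print("Parameter names can't contain interior spaces.")
    else:
        raise
```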
Retrieves the history of all changes to a parameter.
\n \n
If you change the KMS key alias for the KMS key used to encrypt a parameter,\n then you must also update the key alias the parameter uses to reference KMS. Otherwise,\n GetParameterHistory retrieves whatever the original key alias was\n referencing.
\n ",
+ "smithy.api#documentation": "
Retrieves the history of all changes to a parameter.
\n
Parameter names can't contain spaces. The service removes any spaces specified for the\n beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
\n \n
If you change the KMS key alias for the KMS key used to encrypt a parameter,\n then you must also update the key alias the parameter uses to reference KMS. Otherwise,\n GetParameterHistory retrieves whatever the original key alias was\n referencing.
Get information about one or more parameters by specifying multiple parameter names.
\n \n
To get information about a single parameter, you can use the GetParameter\n operation instead.
\n "
+ "smithy.api#documentation": "
Get information about one or more parameters by specifying multiple parameter names.
\n \n
To get information about a single parameter, you can use the GetParameter\n operation instead.
\n \n
Parameter names can't contain spaces. The service removes any spaces specified for the\n beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
Retrieve information about one or more parameters under a specified level in a hierarchy.
\n
Request results are returned on a best-effort basis. If you specify MaxResults\n in the request, the response includes information up to the limit specified. The number of items\n returned, however, can be between zero and the value of MaxResults. If the service\n reaches an internal limit while processing the results, it stops the operation and returns the\n matching values up to that point and a NextToken. You can specify the\n NextToken in a subsequent call to get the next set of results.
",
+ "smithy.api#documentation": "
Retrieve information about one or more parameters under a specified level in a hierarchy.
\n
Request results are returned on a best-effort basis. If you specify MaxResults\n in the request, the response includes information up to the limit specified. The number of items\n returned, however, can be between zero and the value of MaxResults. If the service\n reaches an internal limit while processing the results, it stops the operation and returns the\n matching values up to that point and a NextToken. You can specify the\n NextToken in a subsequent call to get the next set of results.
\n
Parameter names can't contain spaces. The service removes any spaces specified for the\n beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
A parameter label is a user-defined alias to help you manage different versions of a\n parameter. When you modify a parameter, Amazon Web Services Systems Manager automatically saves a new version and\n increments the version number by one. A label can help you remember the purpose of a parameter\n when there are multiple versions.
\n
Parameter labels have the following requirements and restrictions.
\n
\n
\n
A version of a parameter can have a maximum of 10 labels.
\n
\n
\n
You can't attach the same label to different versions of the same parameter. For example,\n if version 1 has the label Production, then you can't attach Production to version 2.
\n
\n
\n
You can move a label from one version of a parameter to another.
\n
\n
\n
You can't create a label when you create a new parameter. You must attach a label to a\n specific version of a parameter.
\n
\n
\n
If you no longer want to use a parameter label, then you can either delete it or move it\n to a different version of a parameter.
\n
\n
\n
A label can have a maximum of 100 characters.
\n
\n
\n
Labels can contain letters (case sensitive), numbers, periods (.), hyphens (-), or\n underscores (_).
\n
\n
\n
Labels can't begin with a number, \"aws\" or \"ssm\" (not case\n sensitive). If a label fails to meet these requirements, then the label isn't associated with a\n parameter and the system displays it in the list of InvalidLabels.
\n
\n
"
+ "smithy.api#documentation": "
A parameter label is a user-defined alias to help you manage different versions of a\n parameter. When you modify a parameter, Amazon Web Services Systems Manager automatically saves a new version and\n increments the version number by one. A label can help you remember the purpose of a parameter\n when there are multiple versions.
\n
Parameter labels have the following requirements and restrictions.
\n
\n
\n
A version of a parameter can have a maximum of 10 labels.
\n
\n
\n
You can't attach the same label to different versions of the same parameter. For example,\n if version 1 has the label Production, then you can't attach Production to version 2.
\n
\n
\n
You can move a label from one version of a parameter to another.
\n
\n
\n
You can't create a label when you create a new parameter. You must attach a label to a\n specific version of a parameter.
\n
\n
\n
If you no longer want to use a parameter label, then you can either delete it or move it\n to a different version of a parameter.
\n
\n
\n
A label can have a maximum of 100 characters.
\n
\n
\n
Labels can contain letters (case sensitive), numbers, periods (.), hyphens (-), or\n underscores (_).
\n
\n
\n
Labels can't begin with a number, \"aws\" or \"ssm\" (not case\n sensitive). If a label fails to meet these requirements, then the label isn't associated with a\n parameter and the system displays it in the list of InvalidLabels.
\n
\n
\n
Parameter names can't contain spaces. The service removes any spaces specified for\n the beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
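A minimal sketch of attaching a label under the restrictions listed above; labels that fail the rules come back in InvalidLabels rather than raising an error. The name, version, and label are placeholders:

```python
# Sketch: LabelParameterVersion with one label on a specific version.
import boto3

ssm = boto3.client("ssm")

response = ssm.label_parameter_version(
    Name="/example/app/config",  # placeholder parameter
    ParameterVersion=3,          # placeholder version
    Labels=["Production"],
)
if response["InvalidLabels"]:
    print("Rejected labels:", response["InvalidLabels"])
```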
The value of the yum repo configuration. For example:
\n
\n [main]\n
\n
\n name=MyCustomRepository\n
\n
\n baseurl=https://my-custom-repository\n
\n
\n enabled=1\n
\n \n
For information about other options available for your yum repository configuration, see\n dnf.conf(5).
\n ",
+ "smithy.api#documentation": "
The value of the repo configuration.
\n
\n Example for yum repositories\n
\n
\n [main]\n
\n
\n name=MyCustomRepository\n
\n
\n baseurl=https://my-custom-repository\n
\n
\n enabled=1\n
\n
For information about other options available for your yum repository configuration, see\n dnf.conf(5) on the\n man7.org website.
\n
\n Examples for Ubuntu Server and Debian Server\n
\n
\n deb http://security.ubuntu.com/ubuntu jammy main\n
\n
\n deb https://site.example.com/debian distribution component1 component2 component3\n
\n
Repo information for Ubuntu Server repositories must be specified in a single line. For more\n examples and information, see jammy (5)\n sources.list.5.gz on the Ubuntu Server Manuals website and sources.list format on the\n Debian Wiki.
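In the Rust SDK this configuration text travels as the configuration member of a PatchSource. A minimal sketch, assuming placeholder name and product values:

```rust
use aws_sdk_ssm::types::PatchSource;

// Builds a PatchSource whose configuration string is the yum repo text
// shown above. Name and product are placeholders.
fn yum_source() -> Result<PatchSource, aws_sdk_ssm::error::BuildError> {
    PatchSource::builder()
        .name("MyCustomRepository")
        .products("AmazonLinux2") // hypothetical product value
        .configuration("[main]\nname=MyCustomRepository\nbaseurl=https://my-custom-repository\nenabled=1")
        .build()
}
```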
Registers a compliance type and other compliance details on a designated resource. This\n operation lets you register custom compliance details with a resource. This call overwrites\n existing compliance information on the resource, so you must provide a full list of compliance\n items each time that you send the request.
\n
ComplianceType can be one of the following:
\n
\n
\n
ExecutionId: The execution ID when the patch, association, or custom compliance item was\n applied.
\n
\n
\n
ExecutionType: Specify patch, association, or Custom:string.
\n
\n
\n
ExecutionTime: The time the patch, association, or custom compliance item was applied to\n the managed node.
\n
\n
\n
Id: The patch, association, or custom compliance ID.
\n
\n
\n
Title: A title.
\n
\n
\n
Status: The status of the compliance item. For example, approved for patches,\n or Failed for associations.
\n
\n
\n
Severity: A patch severity. For example, Critical.
\n
\n
\n
DocumentName: An SSM document name. For example, AWS-RunPatchBaseline.
\n
\n
\n
DocumentVersion: An SSM document version number. For example, 4.
\n
\n
\n
Classification: A patch classification. For example, security updates.
\n
\n
\n
PatchBaselineId: A patch baseline ID.
\n
\n
\n
PatchSeverity: A patch severity. For example, Critical.
\n
\n
\n
PatchState: A patch state. For example, InstancesWithFailedPatches.
\n
\n
\n
PatchGroup: The name of a patch group.
\n
\n
\n
InstalledTime: The time the association, patch, or custom compliance item was applied to\n the resource. Specify the time by using the following format:\n yyyy-MM-dd'T'HH:mm:ss'Z'\n
\n
\n
"
+ "smithy.api#documentation": "
Registers a compliance type and other compliance details on a designated resource. This\n operation lets you register custom compliance details with a resource. This call overwrites\n existing compliance information on the resource, so you must provide a full list of compliance\n items each time that you send the request.
\n
ComplianceType can be one of the following:
\n
\n
\n
ExecutionId: The execution ID when the patch, association, or custom compliance item was\n applied.
\n
\n
\n
ExecutionType: Specify patch, association, or Custom:string.
\n
\n
\n
ExecutionTime: The time the patch, association, or custom compliance item was applied to\n the managed node.
\n \n
For State Manager associations, this represents the time when compliance status was\n captured by the Systems Manager service during its internal compliance aggregation workflow, not\n necessarily when the association was executed on the managed node. State Manager updates\n compliance information for all associations on an instance whenever any association executes,\n which may result in multiple associations showing the same execution time.
\n \n
\n
\n
Id: The patch, association, or custom compliance ID.
\n
\n
\n
Title: A title.
\n
\n
\n
Status: The status of the compliance item. For example, approved for patches,\n or Failed for associations.
\n
\n
\n
Severity: A patch severity. For example, Critical.
\n
\n
\n
DocumentName: An SSM document name. For example, AWS-RunPatchBaseline.
\n
\n
\n
DocumentVersion: An SSM document version number. For example, 4.
\n
\n
\n
Classification: A patch classification. For example, security updates.
\n
\n
\n
PatchBaselineId: A patch baseline ID.
\n
\n
\n
PatchSeverity: A patch severity. For example, Critical.
\n
\n
\n
PatchState: A patch state. For example, InstancesWithFailedPatches.
\n
\n
\n
PatchGroup: The name of a patch group.
\n
\n
\n
InstalledTime: The time the association, patch, or custom compliance item was applied to\n the resource. Specify the time by using the following format:\n yyyy-MM-dd'T'HH:mm:ss'Z'\n
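A hedged sketch of the corresponding aws-sdk-ssm call, populating a subset of the fields listed above for one Custom compliance item. The node ID, item ID, and timestamp are placeholders.

```rust
use aws_sdk_ssm::primitives::DateTime;
use aws_sdk_ssm::types::{ComplianceExecutionSummary, ComplianceItemEntry, ComplianceSeverity, ComplianceStatus};
use aws_sdk_ssm::Client;

// Registers a single Custom:Example compliance item. Remember that this call
// overwrites existing compliance information, so a real caller sends the
// full item list every time.
async fn report_compliance(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let summary = ComplianceExecutionSummary::builder()
        .execution_time(DateTime::from_secs(1_700_000_000)) // placeholder
        .execution_type("Custom")
        .build()?;
    let item = ComplianceItemEntry::builder()
        .id("cmp-0001")
        .title("Example check")
        .severity(ComplianceSeverity::Informational)
        .status(ComplianceStatus::Compliant)
        .build()?;
    client
        .put_compliance_items()
        .resource_id("i-1234567890abcdef0")
        .resource_type("ManagedInstance")
        .compliance_type("Custom:Example")
        .execution_summary(summary)
        .items(item)
        .send()
        .await?;
    Ok(())
}
```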
The fully qualified name of the parameter that you want to create or update.
\n \n
You can't enter the Amazon Resource Name (ARN) for a parameter, only the parameter name\n itself.
\n \n
The fully qualified name includes the complete hierarchy of the parameter path and name. For\n parameters in a hierarchy, you must include a leading forward slash character (/) when you create\n or reference a parameter. For example: /Dev/DBServer/MySQL/db-string13\n
\n
Naming Constraints:
\n
\n
\n
Parameter names are case sensitive.
\n
\n
\n
A parameter name must be unique within an Amazon Web Services Region.
\n
\n
\n
A parameter name can't be prefixed with \"aws\" or \"ssm\"\n (case-insensitive).
\n
\n
\n
Parameter names can include only the following symbols and letters:\n a-zA-Z0-9_.-\n
\n
In addition, the slash character ( / ) is used to delineate hierarchies in parameter\n names. For example: /Dev/Production/East/Project-ABC/MyParameter\n
\n
\n
\n
A parameter name can't include spaces.
\n
\n
\n
Parameter hierarchies are limited to a maximum depth of fifteen levels.
\n
\n
\n
For additional information about valid values for parameter names, see Creating Systems Manager parameters in the Amazon Web Services Systems Manager User Guide.
\n \n
The reported maximum length of 2048 characters for a parameter name includes 1037\n characters that are reserved for internal use by Systems Manager. The maximum length for a parameter name\n that you specify is 1011 characters.
\n
This count of 1011 characters includes the characters in the ARN that precede the name you\n specify. This ARN length will vary depending on your partition and Region. For example, the\n following 45 characters count toward the 1011 character maximum for a parameter created in the\n US East (Ohio) Region: arn:aws:ssm:us-east-2:111122223333:parameter/.
\n ",
+ "smithy.api#documentation": "
The fully qualified name of the parameter that you want to create or update.
\n \n
You can't enter the Amazon Resource Name (ARN) for a parameter, only the parameter name\n itself.
\n \n
The fully qualified name includes the complete hierarchy of the parameter path and name. For\n parameters in a hierarchy, you must include a leading forward slash character (/) when you create\n or reference a parameter. For example: /Dev/DBServer/MySQL/db-string13\n
\n
Naming Constraints:
\n
\n
\n
Parameter names are case sensitive.
\n
\n
\n
A parameter name must be unique within an Amazon Web Services Region.
\n
\n
\n
A parameter name can't be prefixed with \"aws\" or \"ssm\"\n (case-insensitive).
\n
\n
\n
Parameter names can include only the following symbols and letters:\n a-zA-Z0-9_.-\n
\n
In addition, the slash character ( / ) is used to delineate hierarchies in parameter\n names. For example: /Dev/Production/East/Project-ABC/MyParameter\n
\n
\n
\n
Parameter names can't contain spaces. The service removes any spaces specified for\n the beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
\n
\n
\n
Parameter hierarchies are limited to a maximum depth of fifteen levels.
\n
\n
\n
For additional information about valid values for parameter names, see Creating Systems Manager parameters in the Amazon Web Services Systems Manager User Guide.
\n \n
The reported maximum length of 2048 characters for a parameter name includes 1037\n characters that are reserved for internal use by Systems Manager. The maximum length for a parameter name\n that you specify is 1011 characters.
\n
This count of 1011 characters includes the characters in the ARN that precede the name you\n specify. This ARN length will vary depending on your partition and Region. For example, the\n following 45 characters count toward the 1011 character maximum for a parameter created in the\n US East (Ohio) Region: arn:aws:ssm:us-east-2:111122223333:parameter/.
Parameter names can't contain spaces. The service removes any spaces specified for the\n beginning or end of a parameter name. If the specified name for a parameter contains spaces\n between characters, the request fails with a ValidationException error.
"
}
},
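The naming constraints above are easy to pre-check client-side. A hypothetical helper, not part of aws-sdk-ssm, that approximates them; note the 1011-character budget also includes the ARN prefix that precedes the name, which this sketch ignores.

```rust
/// Approximate pre-check of the parameter naming constraints documented
/// above. Hypothetical helper for illustration only; the service remains
/// the source of truth.
fn validate_parameter_name(name: &str) -> Result<(), String> {
    let bare = name.trim_start_matches('/');
    let lower = bare.to_ascii_lowercase();
    if lower.starts_with("aws") || lower.starts_with("ssm") {
        return Err(r#"names can't be prefixed with "aws" or "ssm""#.into());
    }
    if name.contains(' ') {
        return Err("names can't include spaces".into());
    }
    if name.len() > 1011 {
        return Err("name exceeds the 1011-character budget".into());
    }
    if name.matches('/').count() > 15 {
        return Err("hierarchies are limited to fifteen levels".into());
    }
    if !name.chars().all(|c| c.is_ascii_alphanumeric() || "_.-/".contains(c)) {
        return Err("only a-zA-Z0-9_.- and the / hierarchy delimiter are allowed".into());
    }
    Ok(())
}
```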
"com.amazonaws.ssm#UnlabelParameterVersionRequest": {
diff --git a/examples/cross_service/rest_ses/Cargo.toml b/examples/cross_service/rest_ses/Cargo.toml
index 19dfbb669e2c..882ea6766822 100644
--- a/examples/cross_service/rest_ses/Cargo.toml
+++ b/examples/cross_service/rest_ses/Cargo.toml
@@ -30,7 +30,7 @@ tracing-bunyan-formatter = "0.3.4"
tracing-log = "0.1.3"
xlsxwriter = "0.6.0"
aws-config= { version = "1.8.2", path = "../../../sdk/aws-config" }
-aws-sdk-cloudwatchlogs= { version = "1.93.0", path = "../../../sdk/cloudwatchlogs" }
+aws-sdk-cloudwatchlogs= { version = "1.94.0", path = "../../../sdk/cloudwatchlogs" }
aws-sdk-rdsdata= { version = "1.78.0", path = "../../../sdk/rdsdata" }
aws-sdk-ses= { version = "1.79.0", path = "../../../sdk/ses" }
aws-smithy-types= { version = "1.3.2", path = "../../../sdk/aws-smithy-types" }
diff --git a/examples/examples/cloudwatchlogs/Cargo.toml b/examples/examples/cloudwatchlogs/Cargo.toml
index e9e6e8735ed8..f7b73d465116 100644
--- a/examples/examples/cloudwatchlogs/Cargo.toml
+++ b/examples/examples/cloudwatchlogs/Cargo.toml
@@ -12,7 +12,7 @@ tracing = "0.1.40"
async-recursion = "1.0.5"
futures = "0.3.30"
aws-config= { version = "1.8.2", path = "../../../sdk/aws-config", features = ["behavior-version-latest"] }
-aws-sdk-cloudwatchlogs= { version = "1.93.0", path = "../../../sdk/cloudwatchlogs", features = ["test-util"] }
+aws-sdk-cloudwatchlogs= { version = "1.94.0", path = "../../../sdk/cloudwatchlogs", features = ["test-util"] }
aws-types= { version = "1.3.7", path = "../../../sdk/aws-types" }
[dependencies.tokio]
diff --git a/examples/examples/ec2/Cargo.toml b/examples/examples/ec2/Cargo.toml
index 67413e7b3596..aadebee05765 100644
--- a/examples/examples/ec2/Cargo.toml
+++ b/examples/examples/ec2/Cargo.toml
@@ -12,7 +12,7 @@ mockall = "0.13.0"
inquire = "0.7.5"
reqwest = "0.12.5"
aws-smithy-runtime-api= { version = "1.8.3", path = "../../../sdk/aws-smithy-runtime-api" }
-aws-sdk-ssm= { version = "1.85.0", path = "../../../sdk/ssm" }
+aws-sdk-ssm= { version = "1.85.1", path = "../../../sdk/ssm" }
aws-smithy-async= { version = "1.2.5", path = "../../../sdk/aws-smithy-async" }
aws-config= { version = "1.8.2", path = "../../../sdk/aws-config", features = ["behavior-version-latest"] }
aws-sdk-ec2= { version = "1.148.0", path = "../../../sdk/ec2" }
diff --git a/examples/examples/ses/Cargo.toml b/examples/examples/ses/Cargo.toml
index 1c3a45554685..4bc0144b9d2c 100644
--- a/examples/examples/ses/Cargo.toml
+++ b/examples/examples/ses/Cargo.toml
@@ -14,7 +14,7 @@ open = "5.1.2"
aws-smithy-http= { version = "0.62.1", path = "../../../sdk/aws-smithy-http" }
aws-smithy-mocks-experimental= { version = "0.2.4", path = "../../../sdk/aws-smithy-mocks-experimental" }
aws-config= { version = "1.8.2", path = "../../../sdk/aws-config", features = ["behavior-version-latest"] }
-aws-sdk-sesv2= { version = "1.87.0", path = "../../../sdk/sesv2", features = ["test-util"] }
+aws-sdk-sesv2= { version = "1.88.0", path = "../../../sdk/sesv2", features = ["test-util"] }
[dependencies.tokio]
version = "1.20.1"
diff --git a/examples/examples/ssm/Cargo.toml b/examples/examples/ssm/Cargo.toml
index 6e129415b1f2..db55dc15c596 100644
--- a/examples/examples/ssm/Cargo.toml
+++ b/examples/examples/ssm/Cargo.toml
@@ -8,7 +8,7 @@ publish = false
[dependencies]
aws-config= { version = "1.8.2", path = "../../../sdk/aws-config", features = ["behavior-version-latest"] }
-aws-sdk-ssm= { version = "1.85.0", path = "../../../sdk/ssm" }
+aws-sdk-ssm= { version = "1.85.1", path = "../../../sdk/ssm" }
[dependencies.tokio]
version = "1.20.1"
diff --git a/sdk/auditmanager/Cargo.toml b/sdk/auditmanager/Cargo.toml
index 50ad37c6c6de..effde2ab192c 100644
--- a/sdk/auditmanager/Cargo.toml
+++ b/sdk/auditmanager/Cargo.toml
@@ -1,7 +1,7 @@
# Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
[package]
name = "aws-sdk-auditmanager"
-version = "1.78.0"
+version = "1.79.0"
authors = ["AWS Rust SDK Team ", "Russell Cohen "]
description = "AWS SDK for AWS Audit Manager"
edition = "2021"
diff --git a/sdk/auditmanager/README.md b/sdk/auditmanager/README.md
index 2d31ae612ac3..87612c4daa83 100644
--- a/sdk/auditmanager/README.md
+++ b/sdk/auditmanager/README.md
@@ -26,7 +26,7 @@ your project, add the following to your **Cargo.toml** file:
```toml
[dependencies]
aws-config = { version = "1.1.7", features = ["behavior-version-latest"] }
-aws-sdk-auditmanager = "1.78.0"
+aws-sdk-auditmanager = "1.79.0"
tokio = { version = "1", features = ["full"] }
```
diff --git a/sdk/auditmanager/src/error_meta.rs b/sdk/auditmanager/src/error_meta.rs
index 421919067426..2b1063687624 100644
--- a/sdk/auditmanager/src/error_meta.rs
+++ b/sdk/auditmanager/src/error_meta.rs
@@ -1738,6 +1738,9 @@ impl From<crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError> for Error {
Error::ResourceNotFoundException(inner)
}
+ crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError::ThrottlingException(inner) => {
+ Error::ThrottlingException(inner)
+ }
crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError::ValidationException(inner) => {
Error::ValidationException(inner)
}
diff --git a/sdk/auditmanager/src/lib.rs b/sdk/auditmanager/src/lib.rs
index 0054acf9f187..de1dae9b0e7a 100644
--- a/sdk/auditmanager/src/lib.rs
+++ b/sdk/auditmanager/src/lib.rs
@@ -44,7 +44,7 @@
//! ```toml
//! [dependencies]
//! aws-config = { version = "1.1.7", features = ["behavior-version-latest"] }
-//! aws-sdk-auditmanager = "1.78.0"
+//! aws-sdk-auditmanager = "1.79.0"
//! tokio = { version = "1", features = ["full"] }
//! ```
//!
diff --git a/sdk/auditmanager/src/operation/register_organization_admin_account.rs b/sdk/auditmanager/src/operation/register_organization_admin_account.rs
index 79829dc34c35..ae13d00cc07a 100644
--- a/sdk/auditmanager/src/operation/register_organization_admin_account.rs
+++ b/sdk/auditmanager/src/operation/register_organization_admin_account.rs
@@ -270,6 +270,8 @@ pub enum RegisterOrganizationAdminAccountError {
InternalServerException(crate::types::error::InternalServerException),
/// The resource that's specified in the request can't be found.
ResourceNotFoundException(crate::types::error::ResourceNotFoundException),
+ /// The request was denied due to request throttling.
+ ThrottlingException(crate::types::error::ThrottlingException),
ValidationException(crate::types::error::ValidationException),
/// An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
@@ -308,6 +310,7 @@ impl RegisterOrganizationAdminAccountError {
Self::AccessDeniedException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
Self::InternalServerException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
Self::ResourceNotFoundException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ThrottlingException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
Self::ValidationException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
Self::Unhandled(e) => &e.meta,
}
@@ -324,6 +327,10 @@ impl RegisterOrganizationAdminAccountError {
pub fn is_resource_not_found_exception(&self) -> bool {
matches!(self, Self::ResourceNotFoundException(_))
}
+ /// Returns `true` if the error kind is `RegisterOrganizationAdminAccountError::ThrottlingException`.
+ pub fn is_throttling_exception(&self) -> bool {
+ matches!(self, Self::ThrottlingException(_))
+ }
/// Returns `true` if the error kind is `RegisterOrganizationAdminAccountError::ValidationException`.
pub fn is_validation_exception(&self) -> bool {
matches!(self, Self::ValidationException(_))
@@ -335,6 +342,7 @@ impl ::std::error::Error for RegisterOrganizationAdminAccountError {
Self::AccessDeniedException(_inner) => ::std::option::Option::Some(_inner),
Self::InternalServerException(_inner) => ::std::option::Option::Some(_inner),
Self::ResourceNotFoundException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ThrottlingException(_inner) => ::std::option::Option::Some(_inner),
Self::ValidationException(_inner) => ::std::option::Option::Some(_inner),
Self::Unhandled(_inner) => ::std::option::Option::Some(&*_inner.source),
}
@@ -346,6 +354,7 @@ impl ::std::fmt::Display for RegisterOrganizationAdminAccountError {
Self::AccessDeniedException(_inner) => _inner.fmt(f),
Self::InternalServerException(_inner) => _inner.fmt(f),
Self::ResourceNotFoundException(_inner) => _inner.fmt(f),
+ Self::ThrottlingException(_inner) => _inner.fmt(f),
Self::ValidationException(_inner) => _inner.fmt(f),
Self::Unhandled(_inner) => {
if let ::std::option::Option::Some(code) = ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self) {
@@ -371,6 +380,7 @@ impl ::aws_smithy_types::error::metadata::ProvideErrorMetadata for RegisterOrgan
Self::AccessDeniedException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
Self::InternalServerException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
Self::ResourceNotFoundException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ThrottlingException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
Self::ValidationException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
Self::Unhandled(_inner) => &_inner.meta,
}
diff --git a/sdk/auditmanager/src/protocol_serde/shape_register_organization_admin_account.rs b/sdk/auditmanager/src/protocol_serde/shape_register_organization_admin_account.rs
index 082ae5c5602f..8642fb25bbed 100644
--- a/sdk/auditmanager/src/protocol_serde/shape_register_organization_admin_account.rs
+++ b/sdk/auditmanager/src/protocol_serde/shape_register_organization_admin_account.rs
@@ -69,6 +69,20 @@ pub fn de_register_organization_admin_account_http_error(
tmp
})
}
+ "ThrottlingException" => crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError::ThrottlingException({
+ #[allow(unused_mut)]
+ let mut tmp = {
+ #[allow(unused_mut)]
+ let mut output = crate::types::error::builders::ThrottlingExceptionBuilder::default();
+ output = crate::protocol_serde::shape_throttling_exception::de_throttling_exception_json_err(_response_body, output)
+ .map_err(crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError::unhandled)?;
+ let output = output.meta(generic);
+ crate::serde_util::throttling_exception_correct_errors(output)
+ .build()
+ .map_err(crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError::unhandled)?
+ };
+ tmp
+ }),
"ValidationException" => crate::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError::ValidationException({
#[allow(unused_mut)]
let mut tmp = {
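Taken together, the auditmanager changes above surface a new matchable variant. A hedged sketch of consuming it (the account ID is a placeholder):

```rust
use aws_sdk_auditmanager::operation::register_organization_admin_account::RegisterOrganizationAdminAccountError;
use aws_sdk_auditmanager::Client;

// Demonstrates matching the ThrottlingException variant added in this diff.
// A real caller would back off and retry instead of just logging.
async fn register_admin(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    match client
        .register_organization_admin_account()
        .admin_account_id("111122223333")
        .send()
        .await
    {
        Ok(_) => Ok(()),
        Err(sdk_err) => {
            let service_err: RegisterOrganizationAdminAccountError = sdk_err.into_service_error();
            if service_err.is_throttling_exception() {
                eprintln!("throttled: {service_err}");
                return Ok(());
            }
            Err(service_err.into())
        }
    }
}
```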
diff --git a/sdk/cloudwatchlogs/Cargo.toml b/sdk/cloudwatchlogs/Cargo.toml
index 876aa0f6b8fb..0ea6aea7ee88 100644
--- a/sdk/cloudwatchlogs/Cargo.toml
+++ b/sdk/cloudwatchlogs/Cargo.toml
@@ -1,7 +1,7 @@
# Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
[package]
name = "aws-sdk-cloudwatchlogs"
-version = "1.93.0"
+version = "1.94.0"
authors = ["AWS Rust SDK Team ", "Russell Cohen "]
description = "AWS SDK for Amazon CloudWatch Logs"
edition = "2021"
diff --git a/sdk/cloudwatchlogs/README.md b/sdk/cloudwatchlogs/README.md
index a703081f4a05..be15c6f46b8d 100644
--- a/sdk/cloudwatchlogs/README.md
+++ b/sdk/cloudwatchlogs/README.md
@@ -19,7 +19,7 @@ your project, add the following to your **Cargo.toml** file:
```toml
[dependencies]
aws-config = { version = "1.1.7", features = ["behavior-version-latest"] }
-aws-sdk-cloudwatchlogs = "1.93.0"
+aws-sdk-cloudwatchlogs = "1.94.0"
tokio = { version = "1", features = ["full"] }
```
diff --git a/sdk/cloudwatchlogs/src/client.rs b/sdk/cloudwatchlogs/src/client.rs
index 697976f6d5d8..16d3b9f79997 100644
--- a/sdk/cloudwatchlogs/src/client.rs
+++ b/sdk/cloudwatchlogs/src/client.rs
@@ -267,6 +267,8 @@ mod get_log_events;
mod get_log_group_fields;
+mod get_log_object;
+
mod get_log_record;
mod get_query_results;
diff --git a/sdk/cloudwatchlogs/src/client/get_log_object.rs b/sdk/cloudwatchlogs/src/client/get_log_object.rs
new file mode 100644
index 000000000000..b5866d127967
--- /dev/null
+++ b/sdk/cloudwatchlogs/src/client/get_log_object.rs
@@ -0,0 +1,14 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+impl super::Client {
+ /// Constructs a fluent builder for the [`GetLogObject`](crate::operation::get_log_object::builders::GetLogObjectFluentBuilder) operation.
+ ///
+ /// - The fluent builder is configurable:
+ /// - [`unmask(bool)`](crate::operation::get_log_object::builders::GetLogObjectFluentBuilder::unmask) / [`set_unmask(Option<bool>)`](crate::operation::get_log_object::builders::GetLogObjectFluentBuilder::set_unmask): required: **false**
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
+ /// - On success, responds with [`GetLogObjectOutput`](crate::operation::get_log_object::GetLogObjectOutput) with field(s):
+ /// - [`field_stream(EventReceiver)`](crate::operation::get_log_object::GetLogObjectOutput::field_stream):
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
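Putting the new operation together, a hedged sketch of fetching an LLO and draining the event stream. The individual events carry the FieldsData payload described above; they are only debug-printed here, since the stream enum's variant names are not shown in this diff.

```rust
use aws_sdk_cloudwatchlogs::Client;

// Fetches a large log object by pointer and receives events until the
// stream is exhausted. The pointer comes from a previous query or filter
// operation, per the docs above.
async fn fetch_log_object(client: &Client, pointer: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut output = client
        .get_log_object()
        .log_object_pointer(pointer)
        .unmask(false)
        .send()
        .await?;
    // recv() yields Some(event) per stream message and None at end of stream.
    while let Some(event) = output.field_stream.recv().await? {
        println!("received event: {event:?}");
    }
    Ok(())
}
```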
A data protection policy must include two JSON blocks:
The first block must include both a DataIdentifier array and an Operation property with an Audit action. The DataIdentifier array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.
The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.
The second block must include both a DataIdentifier array and an Operation property with a Deidentify action. The DataIdentifier array must exactly match the DataIdentifier array in the first block of the policy.
The Operation property with the Deidentify action is what actually masks the data, and it must contain the "MaskConfig": {} object. The "MaskConfig": {} object must be empty.
For an example data protection policy, see the Examples section on this page.
The contents of the two DataIdentifier arrays must match exactly.
In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is different than the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.
The JSON specified in policyDocument can be up to 30,720 characters long.
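A hedged sketch of such a two-block policyDocument and the PutAccountPolicy call that installs it, written against aws-sdk-cloudwatchlogs. The data identifier ARN is a placeholder, and the JSON uses the policy schema's DataIdentifier key spelling.

```rust
use aws_sdk_cloudwatchlogs::types::PolicyType;
use aws_sdk_cloudwatchlogs::Client;

// Installs an account-level data protection policy: an Audit block and a
// Deidentify block over the same data identifiers, as required above.
async fn put_data_protection_policy(client: &Client) -> Result<(), aws_sdk_cloudwatchlogs::Error> {
    let policy_document = r#"{
      "Name": "account-data-protection",
      "Version": "2021-06-01",
      "Statement": [
        {
          "Sid": "audit",
          "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
          "Operation": { "Audit": { "FindingsDestination": {} } }
        },
        {
          "Sid": "redact",
          "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
          "Operation": { "Deidentify": { "MaskConfig": {} } }
        }
      ]
    }"#;
    client
        .put_account_policy()
        .policy_name("account-data-protection")
        .policy_type(PolicyType::DataProtectionPolicy)
        .policy_document(policy_document)
        .send()
        .await?;
    Ok(())
}
```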
Subscription filter policy
A subscription filter policy can include the following attributes in a JSON block:
DestinationArn The ARN of the destination to deliver log events to. Supported destinations are:
A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
A Firehose data stream in the same account as the subscription policy, for same-account delivery.
A Lambda function in the same account as the subscription policy, for same-account delivery.
A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
RoleArn The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery.
FilterPattern A filter pattern for subscribing to a filtered stream of log events.
Distribution The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to Random for a more even distribution. This property is only applicable when the destination is a Kinesis Data Streams data stream.
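Those attributes compose into a small JSON document, sketched below with placeholder ARNs (RoleArn is omitted only when delivering to a logical cross-account destination):

```rust
// Hedged example of a subscription filter policy document built from the
// attributes listed above; all ARNs are placeholders.
const SUBSCRIPTION_FILTER_POLICY: &str = r#"{
  "DestinationArn": "arn:aws:kinesis:us-east-1:111122223333:stream/example-stream",
  "RoleArn": "arn:aws:iam::111122223333:role/CWLtoKinesisRole",
  "FilterPattern": "",
  "Distribution": "Random"
}"#;
```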
Transformer policy
A transformer policy must include one JSON block with the array of processors and their configurations. For more information about available processors, see Processors that you can use.
Field index policy
A field index filter policy can include the following attribute in a JSON block:
Fields The array of field indexes to create.
It must contain at least one field index.
The following is an example of an index policy document that creates two indexes, RequestId and TransactionId.
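A minimal index policy document matching that description, sketched as a Rust string literal:

```rust
// Creates two field indexes, RequestId and TransactionId, per the
// description above.
const FIELD_INDEX_POLICY: &str = r#"{ "Fields": [ "RequestId", "TransactionId" ] }"#;
```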
Currently the only valid value for this parameter is ALL, which specifies that the data protection policy applies to all log groups in the account. If you omit this parameter, the default of ALL is used.
Use this parameter to apply the new policy to a subset of log groups in the account.
Specifing selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICYfor policyType.
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN \[\]
If policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix
The selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.
Using the selectionCriteria parameter with SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more information, see Log recursion prevention.
Use this parameter to apply the new policy to a subset of log groups in the account.
Specifying selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICY for policyType.
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN \[\]
If policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix
The selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.
Using the selectionCriteria parameter with SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more information, see Log recursion prevention.
/// - On success, responds with [`PutAccountPolicyOutput`](crate::operation::put_account_policy::PutAccountPolicyOutput) with field(s):
/// - [`account_policy(Option)`](crate::operation::put_account_policy::PutAccountPolicyOutput::account_policy):
The account policy that you created.
/// - On failure, responds with [`SdkError`](crate::operation::put_account_policy::PutAccountPolicyError)
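A hedged sketch tying scope and selectionCriteria together for a prefix-scoped field index policy; the criteria string format follows the LogGroupNamePrefix examples shown elsewhere in this diff.

```rust
use aws_sdk_cloudwatchlogs::types::PolicyType;
use aws_sdk_cloudwatchlogs::Client;

// Installs a field index policy scoped to log groups whose names start
// with "my-log". Scope defaults to ALL when omitted.
async fn put_scoped_index_policy(client: &Client) -> Result<(), aws_sdk_cloudwatchlogs::Error> {
    client
        .put_account_policy()
        .policy_name("my-log-index-policy")
        .policy_type(PolicyType::FieldIndexPolicy)
        .policy_document(r#"{ "Fields": [ "RequestId" ] }"#)
        .selection_criteria(r#"LogGroupNamePrefix IN ["my-log"]"#) // assumed criteria format
        .send()
        .await?;
    Ok(())
}
```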
diff --git a/sdk/cloudwatchlogs/src/error_meta.rs b/sdk/cloudwatchlogs/src/error_meta.rs
index e92a59f09884..096b6ffbf113 100644
--- a/sdk/cloudwatchlogs/src/error_meta.rs
+++ b/sdk/cloudwatchlogs/src/error_meta.rs
@@ -11,6 +11,8 @@ pub enum Error {
///
PutLogEvents actions are now always accepted and never return DataAlreadyAcceptedException regardless of whether a given batch of log events has already been accepted.
An internal error occurred during the streaming of log data. This exception is thrown when there's an issue with the internal streaming mechanism used by the GetLogObject operation.
+ InternalStreamingException(crate::types::error::InternalStreamingException),
+ /// An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
+ #[deprecated(note = "Matching `Unhandled` directly is not forwards compatible. Instead, match using a \
+ variable wildcard pattern and check `.code()`:
+ \
+ `err if err.code() == Some(\"SpecificExceptionCode\") => { /* handle the error */ }`
+ \
+ See [`ProvideErrorMetadata`](#impl-ProvideErrorMetadata-for-GetLogObjectError) for what information is available for the error.")]
+ Unhandled(crate::error::sealed_unhandled::Unhandled),
+}
+impl GetLogObjectError {
+ /// Creates the `GetLogObjectError::Unhandled` variant from any error type.
+ pub fn unhandled(
+ err: impl ::std::convert::Into<::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.into(),
+ meta: ::std::default::Default::default(),
+ })
+ }
+
+ /// Creates the `GetLogObjectError::Unhandled` variant from an [`ErrorMetadata`](::aws_smithy_types::error::ErrorMetadata).
+ pub fn generic(err: ::aws_smithy_types::error::ErrorMetadata) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.clone().into(),
+ meta: err,
+ })
+ }
+ ///
+ /// Returns error metadata, which includes the error code, message,
+ /// request ID, and potentially additional information.
+ ///
+ pub fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::AccessDeniedException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::InvalidOperationException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::InvalidParameterException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::LimitExceededException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ResourceNotFoundException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::InternalStreamingException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::Unhandled(e) => &e.meta,
+ }
+ }
+ /// Returns `true` if the error kind is `GetLogObjectError::AccessDeniedException`.
+ pub fn is_access_denied_exception(&self) -> bool {
+ matches!(self, Self::AccessDeniedException(_))
+ }
+ /// Returns `true` if the error kind is `GetLogObjectError::InvalidOperationException`.
+ pub fn is_invalid_operation_exception(&self) -> bool {
+ matches!(self, Self::InvalidOperationException(_))
+ }
+ /// Returns `true` if the error kind is `GetLogObjectError::InvalidParameterException`.
+ pub fn is_invalid_parameter_exception(&self) -> bool {
+ matches!(self, Self::InvalidParameterException(_))
+ }
+ /// Returns `true` if the error kind is `GetLogObjectError::LimitExceededException`.
+ pub fn is_limit_exceeded_exception(&self) -> bool {
+ matches!(self, Self::LimitExceededException(_))
+ }
+ /// Returns `true` if the error kind is `GetLogObjectError::ResourceNotFoundException`.
+ pub fn is_resource_not_found_exception(&self) -> bool {
+ matches!(self, Self::ResourceNotFoundException(_))
+ }
+ /// Returns `true` if the error kind is `GetLogObjectError::InternalStreamingException`.
+ pub fn is_internal_streaming_exception(&self) -> bool {
+ matches!(self, Self::InternalStreamingException(_))
+ }
+}
+impl ::std::error::Error for GetLogObjectError {
+ fn source(&self) -> ::std::option::Option<&(dyn ::std::error::Error + 'static)> {
+ match self {
+ Self::AccessDeniedException(_inner) => ::std::option::Option::Some(_inner),
+ Self::InvalidOperationException(_inner) => ::std::option::Option::Some(_inner),
+ Self::InvalidParameterException(_inner) => ::std::option::Option::Some(_inner),
+ Self::LimitExceededException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ResourceNotFoundException(_inner) => ::std::option::Option::Some(_inner),
+ Self::InternalStreamingException(_inner) => ::std::option::Option::Some(_inner),
+ Self::Unhandled(_inner) => ::std::option::Option::Some(&*_inner.source),
+ }
+ }
+}
+impl ::std::fmt::Display for GetLogObjectError {
+ fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
+ match self {
+ Self::AccessDeniedException(_inner) => _inner.fmt(f),
+ Self::InvalidOperationException(_inner) => _inner.fmt(f),
+ Self::InvalidParameterException(_inner) => _inner.fmt(f),
+ Self::LimitExceededException(_inner) => _inner.fmt(f),
+ Self::ResourceNotFoundException(_inner) => _inner.fmt(f),
+ Self::InternalStreamingException(_inner) => _inner.fmt(f),
+ Self::Unhandled(_inner) => {
+ if let ::std::option::Option::Some(code) = ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self) {
+ write!(f, "unhandled error ({code})")
+ } else {
+ f.write_str("unhandled error")
+ }
+ }
+ }
+ }
+}
+impl ::aws_smithy_types::retry::ProvideErrorKind for GetLogObjectError {
+ fn code(&self) -> ::std::option::Option<&str> {
+ ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self)
+ }
+ fn retryable_error_kind(&self) -> ::std::option::Option<::aws_smithy_types::retry::ErrorKind> {
+ ::std::option::Option::None
+ }
+}
+impl ::aws_smithy_types::error::metadata::ProvideErrorMetadata for GetLogObjectError {
+ fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::AccessDeniedException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::InvalidOperationException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::InvalidParameterException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::LimitExceededException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ResourceNotFoundException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::InternalStreamingException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::Unhandled(_inner) => &_inner.meta,
+ }
+ }
+}
+impl ::aws_smithy_runtime_api::client::result::CreateUnhandledError for GetLogObjectError {
+ fn create_unhandled_error(
+ source: ::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>,
+ meta: ::std::option::Option<::aws_smithy_types::error::ErrorMetadata>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source,
+ meta: meta.unwrap_or_default(),
+ })
+ }
+}
+impl ::aws_types::request_id::RequestId for crate::operation::get_log_object::GetLogObjectError {
+ fn request_id(&self) -> Option<&str> {
+ self.meta().request_id()
+ }
+}
+
+pub use crate::operation::get_log_object::_get_log_object_output::GetLogObjectOutput;
+
+pub use crate::operation::get_log_object::_get_log_object_input::GetLogObjectInput;
+
+mod _get_log_object_input;
+
+mod _get_log_object_output;
+
+/// Builders
+pub mod builders;
diff --git a/sdk/cloudwatchlogs/src/operation/get_log_object/_get_log_object_input.rs b/sdk/cloudwatchlogs/src/operation/get_log_object/_get_log_object_input.rs
new file mode 100644
index 000000000000..ae8bcd0b6166
--- /dev/null
+++ b/sdk/cloudwatchlogs/src/operation/get_log_object/_get_log_object_input.rs
@@ -0,0 +1,75 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+
+///
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
+ pub unmask: ::std::option::Option<bool>,
+ ///
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
+ pub log_object_pointer: ::std::option::Option<::std::string::String>,
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
+ /// This field is required.
+ pub fn log_object_pointer(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+ self.log_object_pointer = ::std::option::Option::Some(input.into());
+ self
+ }
+ ///
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
+ pub fn get_log_object_pointer(&self) -> &::std::option::Option<::std::string::String> {
+ &self.log_object_pointer
+ }
+ /// Consumes the builder and constructs a [`GetLogObjectInput`](crate::operation::get_log_object::GetLogObjectInput).
+ pub fn build(
+ self,
+ ) -> ::std::result::Result<crate::operation::get_log_object::GetLogObjectInput, ::aws_smithy_types::error::operation::BuildError> {
+ ::std::result::Result::Ok(crate::operation::get_log_object::GetLogObjectInput {
+ unmask: self.unmask,
+ log_object_pointer: self.log_object_pointer,
+ })
+ }
+}
diff --git a/sdk/cloudwatchlogs/src/operation/get_log_object/_get_log_object_output.rs b/sdk/cloudwatchlogs/src/operation/get_log_object/_get_log_object_output.rs
new file mode 100644
index 000000000000..e62395605cb1
--- /dev/null
+++ b/sdk/cloudwatchlogs/src/operation/get_log_object/_get_log_object_output.rs
@@ -0,0 +1,97 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+
+///
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
+ pub fn get_field_stream(
+ &self,
+ ) -> &::std::option::Option<
+ crate::event_receiver::EventReceiver,
+ > {
+ &self.field_stream
+ }
+ pub(crate) fn _request_id(mut self, request_id: impl Into<String>) -> Self {
+ self._request_id = Some(request_id.into());
+ self
+ }
+
+ pub(crate) fn _set_request_id(&mut self, request_id: Option<String>) -> &mut Self {
+ self._request_id = request_id;
+ self
+ }
+ /// Consumes the builder and constructs a [`GetLogObjectOutput`](crate::operation::get_log_object::GetLogObjectOutput).
+ /// This method will fail if any of the following fields are not set:
+ /// - [`field_stream`](crate::operation::get_log_object::builders::GetLogObjectOutputBuilder::field_stream)
+ pub fn build(
+ self,
+ ) -> ::std::result::Result<crate::operation::get_log_object::GetLogObjectOutput, ::aws_smithy_types::error::operation::BuildError> {
+ ::std::result::Result::Ok(crate::operation::get_log_object::GetLogObjectOutput {
+ field_stream: self.field_stream.ok_or_else(|| {
+ ::aws_smithy_types::error::operation::BuildError::missing_field(
+ "field_stream",
+ "field_stream was not specified but it is required when building GetLogObjectOutput",
+ )
+ })?,
+ _request_id: self._request_id,
+ })
+ }
+}
diff --git a/sdk/cloudwatchlogs/src/operation/get_log_object/builders.rs b/sdk/cloudwatchlogs/src/operation/get_log_object/builders.rs
new file mode 100644
index 000000000000..10ef6e6e2fec
--- /dev/null
+++ b/sdk/cloudwatchlogs/src/operation/get_log_object/builders.rs
@@ -0,0 +1,175 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+pub use crate::operation::get_log_object::_get_log_object_output::GetLogObjectOutputBuilder;
+
+pub use crate::operation::get_log_object::_get_log_object_input::GetLogObjectInputBuilder;
+
+impl crate::operation::get_log_object::builders::GetLogObjectInputBuilder {
+ /// Sends a request with this input using the given client.
+ pub async fn send_with(
+ self,
+ client: &crate::Client,
+ ) -> ::std::result::Result<
+ crate::operation::get_log_object::GetLogObjectOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::get_log_object::GetLogObjectError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let mut fluent_builder = client.get_log_object();
+ fluent_builder.inner = self;
+ fluent_builder.send().await
+ }
+}
+/// Fluent builder constructing a request to `GetLogObject`.
+///
+///
Retrieves a large logging object (LLO) and streams it back. This API is used to fetch the content of large portions of log events that have been ingested through the PutOpenTelemetryLogs API. When log events contain fields that would cause the total event size to exceed 1MB, CloudWatch Logs automatically processes up to 10 fields, starting with the largest fields. Each field is truncated as needed to keep the total event size as close to 1MB as possible. The excess portions are stored as Large Log Objects (LLOs) and these fields are processed separately and LLO reference system fields (in the format @ptr.$\[path.to.field\]) are added. The path in the reference field reflects the original JSON structure where the large field was located. For example, this could be @ptr.$\['input'\]\['message'\], @ptr.$\['AAA'\]\['BBB'\]\['CCC'\]\['DDD'\], @ptr.$\['AAA'\], or any other path matching your log structure.
+///
+/// [`GetLogObjectOutput`](crate::operation::get_log_object::GetLogObjectOutput) contains an event stream field as well as one or more non-event stream fields.
+/// Due to its current implementation, the non-event stream fields are not fully deserialized
+/// until the [`send`](Self::send) method completes. As a result, accessing these fields of the operation
+/// output struct within an interceptor may return uninitialized values.
+///
+#[derive(::std::clone::Clone, ::std::fmt::Debug)]
+pub struct GetLogObjectFluentBuilder {
+ handle: ::std::sync::Arc,
+ inner: crate::operation::get_log_object::builders::GetLogObjectInputBuilder,
+ config_override: ::std::option::Option,
+}
+impl
+ crate::client::customize::internal::CustomizableSend<
+ crate::operation::get_log_object::GetLogObjectOutput,
+ crate::operation::get_log_object::GetLogObjectError,
+ > for GetLogObjectFluentBuilder
+{
+ fn send(
+ self,
+ config_override: crate::config::Builder,
+ ) -> crate::client::customize::internal::BoxFuture<
+ crate::client::customize::internal::SendResult<
+ crate::operation::get_log_object::GetLogObjectOutput,
+ crate::operation::get_log_object::GetLogObjectError,
+ >,
+ > {
+ ::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
+ }
+}
+impl GetLogObjectFluentBuilder {
+ /// Creates a new `GetLogObjectFluentBuilder`.
+ pub(crate) fn new(handle: ::std::sync::Arc) -> Self {
+ Self {
+ handle,
+ inner: ::std::default::Default::default(),
+ config_override: ::std::option::Option::None,
+ }
+ }
+ /// Access the GetLogObject as a reference.
+ pub fn as_input(&self) -> &crate::operation::get_log_object::builders::GetLogObjectInputBuilder {
+ &self.inner
+ }
+ /// Sends the request and returns the response.
+ ///
+ /// If an error occurs, an `SdkError` will be returned with additional details that
+ /// can be matched against.
+ ///
+ /// By default, any retryable failures will be retried twice. Retry behavior
+ /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
+ /// set when configuring the client.
+ pub async fn send(
+ self,
+ ) -> ::std::result::Result<
+ crate::operation::get_log_object::GetLogObjectOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::get_log_object::GetLogObjectError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let input = self
+ .inner
+ .build()
+ .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
+ let runtime_plugins = crate::operation::get_log_object::GetLogObject::operation_runtime_plugins(
+ self.handle.runtime_plugins.clone(),
+ &self.handle.conf,
+ self.config_override,
+ );
+ let mut output = crate::operation::get_log_object::GetLogObject::orchestrate(&runtime_plugins, input).await?;
+
+ // Converts any error encountered beyond this point into an `SdkError` response error
+ // with an `HttpResponse`. However, since we have already exited the `orchestrate`
+ // function, the original `HttpResponse` is no longer available and cannot be restored.
+ // This means that header information from the original response has been lost.
+ //
+ // Note that the response body would have been consumed by the deserializer
+ // regardless, even if the initial message was hypothetically processed during
+ // the orchestrator's deserialization phase but later resulted in an error.
+ fn response_error(
+ err: impl ::std::convert::Into<::aws_smithy_runtime_api::box_error::BoxError>,
+ ) -> ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::get_log_object::GetLogObjectError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ > {
+ ::aws_smithy_runtime_api::client::result::SdkError::response_error(
+ err,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse::new(
+ ::aws_smithy_runtime_api::http::StatusCode::try_from(200).expect("valid successful code"),
+ ::aws_smithy_types::body::SdkBody::empty(),
+ ),
+ )
+ }
+
+ let message = output.field_stream.try_recv_initial_response().await.map_err(response_error)?;
+
+ match message {
+ ::std::option::Option::Some(_message) => ::std::result::Result::Ok(output),
+ ::std::option::Option::None => ::std::result::Result::Ok(output),
+ }
+ }
+
+ /// Consumes this builder, creating a customizable operation that can be modified before being sent.
+ pub fn customize(
+ self,
+ ) -> crate::client::customize::CustomizableOperation<
+ crate::operation::get_log_object::GetLogObjectOutput,
+ crate::operation::get_log_object::GetLogObjectError,
+ Self,
+ > {
+ crate::client::customize::CustomizableOperation::new(self)
+ }
+ pub(crate) fn config_override(mut self, config_override: impl ::std::convert::Into) -> Self {
+ self.set_config_override(::std::option::Option::Some(config_override.into()));
+ self
+ }
+
+ pub(crate) fn set_config_override(&mut self, config_override: ::std::option::Option) -> &mut Self {
+ self.config_override = config_override;
+ self
+ }
+ ///
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation.
Currently the only valid value for this parameter is ALL, which specifies that the data protection policy applies to all log groups in the account. If you omit this parameter, the default of ALL is used.
pub scope: ::std::option::Option,
///
Use this parameter to apply the new policy to a subset of log groups in the account.
- ///
Specifing selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICYfor policyType.
+ ///
Specifying selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICY for policyType.
///
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN \[\]
///
If policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix
///
The selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.
Use this parameter to apply the new policy to a subset of log groups in the account.
- ///
Specifing selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICYfor policyType.
+ ///
Specifying selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICY for policyType.
///
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN \[\]
///
If policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix
///
The selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.
diff --git a/sdk/cloudwatchlogs/src/operation/put_account_policy/builders.rs b/sdk/cloudwatchlogs/src/operation/put_account_policy/builders.rs
index 02650678368c..34341562b10d 100644
--- a/sdk/cloudwatchlogs/src/operation/put_account_policy/builders.rs
+++ b/sdk/cloudwatchlogs/src/operation/put_account_policy/builders.rs
@@ -22,7 +22,7 @@ impl crate::operation::put_account_policy::builders::PutAccountPolicyInputBuilde
}
/// Fluent builder constructing a request to `PutAccountPolicy`.
///
-///
Creates an account-level data protection policy, subscription filter policy, or field index policy that applies to all log groups or a subset of log groups in the account.
+///
Creates an account-level data protection policy, subscription filter policy, field index policy, transformer policy, or metric extraction policy that applies to all log groups or a subset of log groups in the account.
///
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating.
To create a transformer policy, you must have the logs:PutTransformer and logs:PutAccountPolicy permissions.
///
///
To create a field index policy, you must have the logs:PutIndexPolicy and logs:PutAccountPolicy permissions.
+///
+///
To create a metric extraction policy, you must have the logs:PutMetricExtractionPolicy and logs:PutAccountPolicy permissions.
///
///
Data protection policy
///
A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.
You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logpprod or my-logging.
///
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
///
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with PutAccountPolicy.
+///
Metric extraction policy
+///
A metric extraction policy controls whether CloudWatch Metrics can be created through the Embedded Metrics Format (EMF) for log groups in your account. By default, EMF metric creation is enabled for all log groups. You can use metric extraction policies to disable EMF metric creation for your entire account or specific log groups.
+///
When a policy disables EMF metric creation for a log group, log events in the EMF format are still ingested, but no CloudWatch Metrics are created from them.
+///
Creating a policy disables metrics for AWS features that use EMF to create metrics, such as CloudWatch Container Insights and CloudWatch Application Signals. To prevent turning off those features by accident, we recommend that you exclude the underlying log-groups through a selection-criteria such as LogGroupNamePrefix NOT IN \["/aws/containerinsights", "/aws/ecs/containerinsights", "/aws/application-signals/data"\].
+///
+///
Each account can have either one account-level metric extraction policy that applies to all log groups, or up to 5 policies that are each scoped to a subset of log groups with the selectionCriteria parameter. The selection criteria supports filtering by LogGroupName and LogGroupNamePrefix using the operators IN and NOT IN. You can specify up to 50 values in each IN or NOT IN list.
+///
The selection criteria can be specified in these formats:
+///
LogGroupName IN \["log-group-1", "log-group-2"\]
+///
LogGroupNamePrefix NOT IN \["/aws/prefix1", "/aws/prefix2"\]
+///
If you have multiple account-level metric extraction policies with selection criteria, no two of them can have overlapping criteria. For example, if you have one policy with selection criteria LogGroupNamePrefix IN \["my-log"\], you can't have another metric extraction policy with selection criteria LogGroupNamePrefix IN \["/my-log-prod"\] or LogGroupNamePrefix IN \["/my-logging"\], as the set of log groups matching these prefixes would be a subset of the log groups matching the first policy's prefix, creating an overlap.
+///
When using NOT IN, only one policy with this operator is allowed per account.
+///
When combining policies with IN and NOT IN operators, the overlap check ensures that policies don't have conflicting effects. Two policies with IN and NOT IN operators do not overlap if and only if every value in the IN policy is completely contained within some value in the NOT IN policy. For example:
+///
+///
+///
If you have a NOT IN policy for prefix "/aws/lambda", you can create an IN policy for the exact log group name "/aws/lambda/function1" because the set of log groups matching "/aws/lambda/function1" is a subset of the log groups matching "/aws/lambda".
+///
+///
If you have a NOT IN policy for prefix "/aws/lambda", you cannot create an IN policy for prefix "/aws" because the set of log groups matching "/aws" is not a subset of the log groups matching "/aws/lambda".
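The recommended exclusion above, expressed as a selectionCriteria string constant; the metric extraction policyDocument schema itself isn't shown here, so only the criteria string is sketched.

```rust
// Excludes the Container Insights and Application Signals log groups from an
// account-wide EMF-disabling policy, per the recommendation above.
const METRIC_EXTRACTION_EXCLUSION: &str =
    r#"LogGroupNamePrefix NOT IN ["/aws/containerinsights", "/aws/ecs/containerinsights", "/aws/application-signals/data"]"#;
```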
CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys.
+///
When using IAM policies to control tag management for CloudWatch Logs log groups, the condition keys aws:Resource/key-name and aws:TagKeys cannot be used to restrict which tags users can assign.
#[deprecated(note = "Please use the generic tagging API UntagResource")]
#[derive(::std::clone::Clone, ::std::fmt::Debug)]
pub struct UntagLogGroupFluentBuilder {
diff --git a/sdk/cloudwatchlogs/src/primitives.rs b/sdk/cloudwatchlogs/src/primitives.rs
index 391aa9d59c9d..ec90f8121d26 100644
--- a/sdk/cloudwatchlogs/src/primitives.rs
+++ b/sdk/cloudwatchlogs/src/primitives.rs
@@ -1,4 +1,5 @@
// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+pub use ::aws_smithy_types::Blob;
/// Event stream related primitives such as `Message` or `Header`.
pub mod event_stream;
diff --git a/sdk/cloudwatchlogs/src/protocol_serde.rs b/sdk/cloudwatchlogs/src/protocol_serde.rs
index d5d90ee01b60..a0c5840173e6 100644
--- a/sdk/cloudwatchlogs/src/protocol_serde.rs
+++ b/sdk/cloudwatchlogs/src/protocol_serde.rs
@@ -127,6 +127,8 @@ pub(crate) mod shape_get_log_events;
pub(crate) mod shape_get_log_group_fields;
+pub(crate) mod shape_get_log_object;
+
pub(crate) mod shape_get_log_record;
pub(crate) mod shape_get_query_results;
@@ -321,12 +323,18 @@ pub(crate) mod shape_get_log_events_input;
pub(crate) mod shape_get_log_group_fields_input;
+pub(crate) mod shape_get_log_object_input;
+
+pub(crate) mod shape_get_log_object_output;
+
pub(crate) mod shape_get_log_record_input;
pub(crate) mod shape_get_query_results_input;
pub(crate) mod shape_get_transformer_input;
+pub(crate) mod shape_internal_streaming_exception;
+
pub(crate) mod shape_invalid_operation_exception;
pub(crate) mod shape_invalid_parameter_exception;
@@ -563,6 +571,8 @@ pub(crate) mod shape_export_task;
pub(crate) mod shape_field_index;
+pub(crate) mod shape_fields_data;
+
pub(crate) mod shape_filtered_log_event;
pub(crate) mod shape_grok;
diff --git a/sdk/cloudwatchlogs/src/protocol_serde/shape_fields_data.rs b/sdk/cloudwatchlogs/src/protocol_serde/shape_fields_data.rs
new file mode 100644
index 000000000000..9c060f1208f1
--- /dev/null
+++ b/sdk/cloudwatchlogs/src/protocol_serde/shape_fields_data.rs
@@ -0,0 +1,51 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+pub(crate) fn de_fields_data_payload(
+ input: &[u8],
+) -> ::std::result::Result<crate::types::FieldsData, ::aws_smithy_json::deserialize::error::DeserializeError> {
+ let mut tokens_owned = ::aws_smithy_json::deserialize::json_token_iter(crate::protocol_serde::or_empty_doc(input)).peekable();
+ let tokens = &mut tokens_owned;
+ let result = crate::protocol_serde::shape_fields_data::de_fields_data(tokens)?
+ .ok_or_else(|| ::aws_smithy_json::deserialize::error::DeserializeError::custom("expected payload member value"));
+ if tokens.next().is_some() {
+ return Err(::aws_smithy_json::deserialize::error::DeserializeError::custom(
+ "found more JSON tokens after completing parsing",
+ ));
+ }
+ result
+}
+
+pub(crate) fn de_fields_data<'a, I>(
+ tokens: &mut ::std::iter::Peekable<I>,
+) -> ::std::result::Result