implement X-Ray tracing for end-to-end observability (#3)
* testing out X-Ray tracing
* use dashes instead of underscores, run pre-commit and update README.md
* fix README.md instructions for demo; fix paths
* instrument cmr_query
* instrument tracing in evaluator; add payload as annotation
* fix bug
* put annotation differently
* enable active tracing on SNS
* use xray_recorder for tracing evaluator functions
* Update README.md
* capture trace using decorator
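Taken together, these commits describe a decorator-based tracing pattern: evaluator functions are wrapped with `xray_recorder.capture` and the incoming payload is attached as a trace annotation. A minimal sketch of that pattern, assuming the `aws-xray-sdk` package and illustrative function and field names (this is not the repository's actual code):

```python
# Hedged sketch of the tracing pattern the commits describe; function and
# annotation names here are illustrative, not the repository's actual code.
import json

from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()  # patch boto3/requests so downstream AWS and HTTP calls show up in the trace


@xray_recorder.capture("evaluate")  # record this function as its own subsegment
def evaluate(payload: dict) -> dict:
    # attach the payload as an annotation so traces can be filtered on it
    xray_recorder.put_annotation("payload", json.dumps(payload))
    return {"success": True, "payload": payload}


def lambda_handler(event, context):
    return evaluate(event)
```

With active tracing also enabled on the SNS topics, as one of the commits notes, the trigger, SNS, SQS, and evaluator hops can be followed end to end in the X-Ray console.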
README.md (74 additions, 26 deletions)
@@ -230,7 +230,13 @@ This guide provides a quick way to get started with our project. Please see our
 cd unity-initiator/terraform-unity/initiator/
 ```

-1. Copy a sample router configuration YAML file to use for deployment and update the AWS region and AWS account ID to match your AWS environment. We will be using the NISAR TLM test case for this demo so we also rename the SNS topic ARN for it accordingly:
+1. You will need an S3 bucket for terraform to stage the router Lambda zip file and router configuration YAML file during deployment. Create one or reuse an existing one and set an environment variable for it:
+
+```
+export CODE_BUCKET=<some S3 bucket name>
+```
+
+1. Copy a sample router configuration YAML file to use for deployment and update the AWS region and AWS account ID to match your AWS environment. We will be using the NISAR TLM test case for this demo so we also rename the SNS topic ARN for it accordingly. We then upload the router configuration file:

 ```
 cp ../../tests/resources/test_router.yaml .
@@ -239,18 +245,7 @@ This guide provides a quick way to get started with our project. Please see our
 sed -i "s/hilo-hawaii-1/${AWS_REGION}/g" test_router.yaml
 sed -i "s/123456789012:eval_nisar_ingest/${AWS_ACCOUNT_ID}:uod-dev-eval_nisar_ingest-evaluator_topic/g" test_router.yaml
 sed -i "s/123456789012:eval_airs_ingest/${AWS_ACCOUNT_ID}:uod-dev-eval_airs_ingest-evaluator_topic/g" test_router.yaml
-```
-
-1. You will need an S3 bucket for terraform to stage the router Lambda zip file during deployment. Create one or reuse an existing one and set an environment variable for it:
-
-```
-export CODE_BUCKET=<some S3 bucket name>
-```
-
-1. You will need an S3 bucket to store the router configuration YAML file. Create one or reuse an existing one (could be the same one in the previous step) and set an environment variable for it:
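In the updated walkthrough the customized `test_router.yaml` is staged to the same `${CODE_BUCKET}` that terraform reads from. Either the AWS CLI or boto3 works; a minimal boto3 sketch follows, with the object key chosen purely for illustration:

```python
# Hedged sketch: stage the customized router configuration in CODE_BUCKET.
# The object key "test_router.yaml" is an assumption made for illustration.
import os

import boto3

s3 = boto3.client("s3")
s3.upload_file("test_router.yaml", os.environ["CODE_BUCKET"], "test_router.yaml")
```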
 **Take note of the `initiator_topic_arn` that is output by terraform. It will be used when setting up any triggers.**

-#### Deploying an Example Evaluator (SNS topic->SQS queue->Lambda)
+#### Deploying Example Evaluators (SNS topic->SQS queue->Lambda)

-1. Change directory to the location of the sns_sqs_lambda evaluator terraform:
+In this demo we will deploy 2 evaluators:

+1. `eval_nisar_ingest` - evaluates ingestion of NISAR telemetry files deposited into the ISL bucket
+
+1. `eval_airs_ingest` - evaluates ingestion of AIRS RetStd files returned by a periodic CMR query
+
+##### Evaluator Deployment for NISAR TLM (via staged data to the ISL)
+1. Change directory to the location of the evaluators terraform:
 ```
-cp -rp sns_sqs_lambda sns_sqs_lambda-nisar_tlm
+cd ../evaluators
+```
+
+1. Make a copy of the `sns-sqs-lambda` directory for the NISAR TLM evaluator:
+
+```
+cp -rp sns-sqs-lambda sns-sqs-lambda-nisar-tlm
 ```

 1. Change directory into the NISAR TLM evaluator terraform:

 ```
-cd sns_sqs_lambda-nisar_tlm/
+cd sns-sqs-lambda-nisar-tlm/
 ```

 1. Set the name of the evaluator to our NISAR example:
@@ -301,7 +307,7 @@ This guide provides a quick way to get started with our project. Please see our
 1. Note the implementation of the evaluator code. It currently doesn't do any real evaluation but simply returns that evaluation was successful:

 ```
-cat data.tf
+cat lambda_handler.py
 ```

 1. Initialize terraform:
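For orientation, a stub evaluator of the kind this hunk points at (one that does no real evaluation and simply reports success) might look roughly like the sketch below. It assumes the SNS-to-SQS wiring delivers the SNS envelope in each SQS record body; see the repository's `lambda_handler.py` for the actual code.

```python
# Hedged sketch of a stub evaluator, not the repository's lambda_handler.py.
import json


def evaluate(payload: dict) -> dict:
    # no real evaluation yet: always report success for the given payload
    return {"success": True, "payload": payload}


def lambda_handler(event, context):
    results = []
    for record in event.get("Records", []):
        # each SQS record body carries the SNS envelope; the payload is its Message
        envelope = json.loads(record["body"])
        payload = json.loads(envelope["Message"])
        results.append(evaluate(payload))
    return results
```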
@@ -315,17 +321,59 @@ This guide provides a quick way to get started with our project. Please see our
 ```
 terraform apply \
 --var evaluator_name=${EVALUATOR_NAME} \
+--var code_bucket=${CODE_BUCKET} \
 -auto-approve
 ```

 **Take note of the `evaluator_topic_arn` that is output by terraform. It should match the topic ARN in the test_router.yaml file you used during the initiator deployment. If they match then the router Lambda is now able to submit payloads to this evaluator SNS topic.**

+##### Evaluator Deployment for AIRS RetStd (via scheduled CMR query)
+1. Change directory to the location of the evaluators terraform:
+```
+cd ..
+```
+
+1. Make a copy of the `sns-sqs-lambda` directory for the AIRS RetStd evaluator:
+```
+cp -rp sns-sqs-lambda sns-sqs-lambda-airs-retstd
+```
+
+1. Change directory into the AIRS RetStd evaluator terraform:
+```
+cd sns-sqs-lambda-airs-retstd/
+```
+
+1. Set the name of the evaluator to our AIRS example:
+```
+export EVALUATOR_NAME=eval_airs_ingest
+```
+
+1. Note the implementation of the evaluator code. It currently doesn't do any real evaluation but simply returns that evaluation was successful:
+```
+cat lambda_handler.py
+```
+
+1. Initialize terraform:
+```
+terraform init
+```
+
+1. Run terraform apply:
+```
+terraform apply \
+--var evaluator_name=${EVALUATOR_NAME} \
+--var code_bucket=${CODE_BUCKET} \
+-auto-approve
+```
+
+**Take note of the `evaluator_topic_arn` that is output by terraform. It should match the respective topic ARN in the test_router.yaml file you used during the initiator deployment. If they match then the router Lambda is now able to submit payloads to this evaluator SNS topic.**
+
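If you want to double-check the match between the terraform outputs and the router configuration, a small optional helper (not part of the repository) can scan `test_router.yaml` for SNS topic ARNs and confirm each one exists in your account:

```python
# Optional helper (not part of the repository): verify that every SNS topic ARN
# referenced in test_router.yaml exists in the current account/region.
import re
import sys

import boto3

sns = boto3.client("sns")
config_file = sys.argv[1] if len(sys.argv) > 1 else "test_router.yaml"
with open(config_file) as f:
    text = f.read()

for arn in sorted(set(re.findall(r"arn:aws:sns:[A-Za-z0-9_:\-]+", text))):
    try:
        sns.get_topic_attributes(TopicArn=arn)
        print(f"OK       {arn}")
    except sns.exceptions.NotFoundException:
        print(f"MISSING  {arn}")
```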
 #### Deploying an S3 Event Notification Trigger

-1. Change directory to the location of the s3_bucket_notification trigger terraform:
+1. Change directory to the location of the s3-bucket-notification trigger terraform:

 ```
-cd ../../triggers/s3_bucket_notification/
+cd ../../triggers/s3-bucket-notification/
 ```

 1. You will need an S3 bucket to configure event notification on. Create one or reuse an existing one (could be the same one in the previous steps) and set an environment variable for it:
@@ -382,10 +430,10 @@ This guide provides a quick way to get started with our project. Please see our

 #### Deploying an EventBridge Scheduler Trigger

-1. Change directory to the location of the s3_bucket_notification trigger terraform:
+1. Change directory to the location of the scheduled-task trigger terraform:

 ```
-cd ../scheduled_task/
+cd ../scheduled-task/
 ```

 1. Note the implementation of the trigger lambda code. It currently hard-codes a payload URL; in a real implementation, code would be written to query for new files from some REST API, database, etc. Here we simulate that and simply return a NISAR TLM file:
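A scheduled trigger of the kind described in that last step might look roughly like the sketch below: a hard-coded URL stands in for a real query against a REST API or database, and the message is published to the initiator topic noted earlier. The environment variable name, message format, and placeholder URL are assumptions, not the repository's actual code.

```python
# Hedged sketch of a scheduled-task trigger; env var name, message format, and
# the placeholder URL are assumptions, not the repository's actual code.
import json
import os

import boto3

sns = boto3.client("sns")

# stands in for "new file discovered by a query against a REST API, database, etc."
HARD_CODED_PAYLOAD_URL = "s3://<isl-bucket>/<some NISAR TLM file>"


def lambda_handler(event, context):
    message = json.dumps({"payload": HARD_CODED_PAYLOAD_URL})
    sns.publish(TopicArn=os.environ["INITIATOR_TOPIC_ARN"], Message=message)
    return {"payload": HARD_CODED_PAYLOAD_URL}
```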
@@ -416,10 +464,10 @@ This guide provides a quick way to get started with our project. Please see our

 #### Deploying an EventBridge Scheduler Trigger for Periodic CMR Queries

-1. Change directory to the location of the s3_bucket_notification trigger terraform:
+1. Change directory to the location of the cmr-query trigger terraform:

 ```
-cd ../cmr_query/
+cd ../cmr-query/
 ```

 1. Note the implementation of the trigger lambda code. It will query CMR for granules of a particular collection within a timeframe, check its DynamoDB table to see whether they were already submitted, and if not, submit them as payload URLs to the initiator SNS topic and save them into the DynamoDB table:
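That description maps onto a fairly small handler: query CMR's granule search for the collection over a recent time window, skip granules already recorded in DynamoDB, publish the rest to the initiator topic, and record them. A hedged sketch, in which the table schema, environment variable names, and message format are assumptions:

```python
# Hedged sketch of a cmr-query trigger; table schema, env var names, and the
# message format are assumptions, not the repository's actual code.
import json
import os
from datetime import datetime, timedelta, timezone

import boto3
import requests  # assumed to be bundled with the lambda package

CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

sns = boto3.client("sns")
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])


def lambda_handler(event, context):
    end = datetime.now(timezone.utc)
    start = end - timedelta(minutes=int(os.environ.get("LOOKBACK_MINUTES", "60")))
    resp = requests.get(
        CMR_GRANULE_SEARCH,
        params={
            "short_name": os.environ["COLLECTION_SHORT_NAME"],  # e.g. the AIRS RetStd collection
            "temporal": f"{start:%Y-%m-%dT%H:%M:%SZ},{end:%Y-%m-%dT%H:%M:%SZ}",
            "page_size": 2000,
        },
        timeout=30,
    )
    resp.raise_for_status()

    submitted = []
    for granule in resp.json()["feed"]["entry"]:
        granule_id = granule["id"]
        # skip granules that were already submitted on a previous run
        if "Item" in table.get_item(Key={"id": granule_id}):
            continue
        url = granule["links"][0]["href"]  # simplification: take the first link as the payload URL
        sns.publish(
            TopicArn=os.environ["INITIATOR_TOPIC_ARN"],
            Message=json.dumps({"payload": url}),
        )
        table.put_item(Item={"id": granule_id, "url": url})
        submitted.append(url)
    return {"submitted": submitted}
```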
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_code_bucket"></a> [code\_bucket](#input\_code\_bucket) | The S3 bucket where lambda zip files will be stored and accessed | `string` | n/a | yes |
| <a name="input_evaluator_name"></a> [evaluator\_name](#input\_evaluator\_name) | The evaluator name | `string` | n/a | yes |
| <a name="input_project"></a> [project](#input\_project) | The unity project it's installed into | `string` | `"uod"` | no |
| <a name="input_venue"></a> [venue](#input\_venue) | The unity venue it's installed into | `string` | `"dev"` | no |