As described by [Hua et al. [2022]](#1):
> A fundamental capability of an SDS is to systematically process science data through a series of data transformations from raw instrument data to geophysical measurements. Data are first made available to the SDS from GDS to be processed to higher level data products. The data transformation steps may utilize ancillary and auxiliary files as well as production rules that stipulate conditions for when each step should be executed.
In an SDS, evaluators are functions (irrespective of how they are deployed and called) that perform adaptation-specific evaluation to determine if the next step in the processing pipeline is ready for execution.
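An evaluator can be sketched as a small function over an event payload that decides whether the required inputs for the next step have all arrived. The payload shape and field names below are hypothetical illustrations, not this repository's API:

```python
import json

def evaluate(payload: dict) -> dict:
    """Hypothetical adaptation-specific evaluator: decide whether the
    next pipeline step is ready to run given the inputs seen so far."""
    # Assumed payload shape: {"required": [...], "observed": [...]}
    required = set(payload.get("required", []))
    observed = set(payload.get("observed", []))
    ready = required.issubset(observed)
    return {"ready": ready, "missing": sorted(required - observed)}

# Example: one required input has not arrived yet, so the step is not ready.
result = evaluate({
    "required": ["l0a_radar.bin", "orbit_ephemeris.xml"],
    "observed": ["l0a_radar.bin"],
})
print(json.dumps(result))
```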
As an example, the following shows the input-output diagram for the NISAR L-SAR L0B PGE (a.k.a. science algorithm):
## Contents
* [Features](#features)
* [Contents](#contents)
* [Quick Start](#quick-start)
* [Requirements](#requirements)
* [Setting Up the End-to-End Demo](#setting-up-the-end-to-end-demo)
* [Deploying the Initiator](#deploying-the-initiator)
* [Deploying an Example Evaluator (SNS topic-\>SQS queue-\>Lambda)](#deploying-an-example-evaluator-sns-topic-sqs-queue-lambda)
* [Deploying an S3 Event Notification Trigger](#deploying-an-s3-event-notification-trigger)
    ```
    export AWS_REGION=$(aws configure get region)
    sed -i "s/hilo-hawaii-1/${AWS_REGION}/g" test_router.yaml
    sed -i "s/123456789012:eval_nisar_ingest/${AWS_ACCOUNT_ID}:uod-dev-eval_nisar_ingest-evaluator_topic/g" test_router.yaml
    sed -i "s/123456789012:eval_airs_ingest/${AWS_ACCOUNT_ID}:uod-dev-eval_airs_ingest-evaluator_topic/g" test_router.yaml
    ```
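The `sed` substitutions above swap the placeholder region (`hilo-hawaii-1`) and account/topic values in `test_router.yaml` for your own. The same rewrite, sketched in Python for illustration (the ARN, region `us-west-2`, and account `999999999999` are assumed example values, not outputs from this project):

```python
import re

# Placeholder ARN as it might appear in test_router.yaml (example value).
arn = "arn:aws:sns:hilo-hawaii-1:123456789012:eval_nisar_ingest"

# Values the Quick Start reads from your AWS CLI configuration (assumed here).
aws_region = "us-west-2"          # e.g. $(aws configure get region)
aws_account_id = "999999999999"   # e.g. your $AWS_ACCOUNT_ID

# Same two substitutions the sed commands perform.
arn = re.sub(r"hilo-hawaii-1", aws_region, arn)
arn = re.sub(
    r"123456789012:eval_nisar_ingest",
    f"{aws_account_id}:uod-dev-eval_nisar_ingest-evaluator_topic",
    arn,
)
print(arn)  # arn:aws:sns:us-west-2:999999999999:uod-dev-eval_nisar_ingest-evaluator_topic
```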
1. You will need an S3 bucket for terraform to stage the router Lambda zip file during deployment. Create one or reuse an existing one, and set an environment variable for it:
1. Change directory to the location of the evaluators and make a copy of the sns_sqs_lambda evaluator terraform for the NISAR TLM example:

    ```
    cd ../evaluators/
    cp -rp sns_sqs_lambda sns_sqs_lambda-nisar_tlm
    ```

1. Change directory into the NISAR TLM evaluator terraform:

    ```
    cd sns_sqs_lambda-nisar_tlm/
    ```
1. Set the name of the evaluator to our NISAR example:
1. The deployed EventBridge scheduler runs the trigger Lambda function with a schedule expression of `rate(1 minute)`. After a minute, verify that the `eval_nisar_ingest` evaluator Lambda function was called successfully for each of those scheduled invocations by looking at its CloudWatch logs for entries similar to this:
#### Deploying an EventBridge Scheduler Trigger for Periodic CMR Queries
1. Change directory to the location of the cmr_query trigger terraform:

    ```
    cd ../cmr_query/
    ```
1. Note the implementation of the trigger Lambda code. It queries CMR for granules of a particular collection within a time range, checks its DynamoDB table to see whether they have already been submitted, and, if not, submits them as payload URLs to the initiator SNS topic and records them in the DynamoDB table:

    ```
    cat lambda_handler.py
    ```
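The query-dedup-submit flow described above can be sketched as follows. This is an illustrative outline, not the repository's `lambda_handler.py`; the helper names and granule URLs are hypothetical stand-ins for the real CMR, DynamoDB, and SNS calls:

```python
import os
import time

def temporal_window(seconds_back: int, now: float):
    """CMR temporal search range: [now - seconds_back, now]."""
    return now - seconds_back, now

def process_new_granules(granule_urls, seen, submit):
    """Submit granule URLs not seen before and record them as seen.

    `seen` stands in for the DynamoDB table; `submit` stands in for
    publishing a payload URL to the initiator SNS topic.
    """
    submitted = []
    for url in granule_urls:
        if url in seen:       # already submitted on a previous run
            continue
        submit(url)           # hand the payload URL to the initiator
        seen.add(url)         # persist so later runs skip this granule
        submitted.append(url)
    return submitted

# Look back SECONDS_BACK seconds (2 days by default here) from now.
start, end = temporal_window(int(os.environ.get("SECONDS_BACK", "172800")), time.time())

# Dedup example: granule1 was already submitted on a previous run.
seen = {"s3://bucket/granule1.nc"}
out = []
process_new_granules(["s3://bucket/granule1.nc", "s3://bucket/granule2.nc"], seen, out.append)
print(out)  # ['s3://bucket/granule2.nc']
```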
430
+
431
+
1. Set the CMR provider ID for the AIRS RetStd collection:

    ```
    export PROVIDER_ID=GES_DISC
    ```
1. Set the CMR concept ID for the AIRS RetStd collection:

    ```
    export CONCEPT_ID=C1701805619-GES_DISC
    ```
1. Set the number of seconds to look back from the current epoch for granules in the collection. For example, setting this value to 2 days (172800 seconds) means that when the CMR query Lambda kicks off, it will query for all AIRS RetStd granules using a temporal search from `now - 172800 seconds` to `now`:

    ```
    export SECONDS_BACK=172800
    ```
1. Initialize terraform:

    ```
    terraform init
    ```
1. Run `terraform apply`. Note that the DEPLOYMENT_NAME, CODE_BUCKET, and INITIATOR_TOPIC_ARN environment variables should have been set in the previous steps. If not, set them again:
1. The deployed EventBridge scheduler runs the CMR query trigger Lambda function with a schedule expression of `rate(1 minute)`. After a minute, verify that the `eval_airs_ingest` evaluator Lambda function was called successfully for each of those scheduled invocations by looking at its CloudWatch logs for entries similar to this: