diff --git a/content/rdbms-migration/migration-chapter00.en.md b/content/rdbms-migration/migration-chapter00.en.md
index 6ff8c6f..a4f45f8 100644
--- a/content/rdbms-migration/migration-chapter00.en.md
+++ b/content/rdbms-migration/migration-chapter00.en.md
@@ -6,17 +6,17 @@ weight: 10
 ---
 In this module, you will create an environment to host the MySQL database on Amazon EC2. This instance will be used to host source database and simulate on-premise side of migration architecture. All the resources to configure source infrastructure are deployed via [Amazon CloudFormation](https://aws.amazon.com/cloudformation/) template.
-There are two CloudFormation templates used in this exercise which will deploy following resources.
+There are two CloudFormation templates used in this exercise that deploy the following resources.

-CloudFormation MySQL Template Resources:
- - OnPrem VPC: Source VPC will represent an on-premise source environment in the N. Virginia region. This VPC will host source MySQL database on Amazon EC2
- - Amazon EC2 MySQL Database: Amazon EC2 Amazon Linux 2 AMI with MySQL installed and running
- - Load IMDb dataset: The template will create IMDb database on MySQL and load IMDb public dataset files into database. You can learn more about IMDb dataset inside [Explore Source Model](/hands-on-labs/rdbms-migration/migration-chapter03)
+CloudFormation MySQL Template Resources (**Already deployed**):
+ - **OnPrem VPC**: The source VPC represents an on-premises source environment in the workshop region. This VPC will host the source MySQL database on Amazon EC2
+ - **Amazon EC2 MySQL Database**: An Amazon EC2 instance (Amazon Linux 2 AMI) with MySQL installed and running
+ - **Load IMDb dataset**: The template will create an IMDb database on MySQL and load the IMDb public dataset files into the database. You can learn more about the IMDb dataset in [Explore Source Model](/hands-on-labs/rdbms-migration/migration-chapter03)

-CloudFormation DMS Instance Resources:
- - DMS VPC: Migration VPC on in the N. Virginia region. This VPC will host DMS replication instance.
- - Replication Instance: DMS Replication instance that will facilitate database migration from source MySQL server on EC2 to Amazon DynamoDB
+CloudFormation DMS Instance Resources (**Needs to be deployed**):
+ - **DMS VPC**: Migration VPC in the workshop region. This VPC will host the DMS replication instance.
+ - **Replication Instance**: The DMS replication instance that will facilitate database migration from the source MySQL server on EC2 to Amazon DynamoDB

 ![Final Deployment Architecture](/static/images/migration-environment.png)

diff --git a/content/rdbms-migration/migration-chapter02-1.en.md b/content/rdbms-migration/migration-chapter02-1.en.md
index f27ca91..b9ed3f8 100644
--- a/content/rdbms-migration/migration-chapter02-1.en.md
+++ b/content/rdbms-migration/migration-chapter02-1.en.md
@@ -5,23 +5,23 @@ date: 2021-04-25T07:33:04-05:00
 weight: 25
 ---
-Let's create the DMS resources for the workshop.
+Let's create the DMS resources for the workshop. First, we check whether the DMS service role `dms-vpc-role` is already available. Then we deploy the DMS resources.
-1. Go to IAM console > Roles > Create Role
-2. Under “Select trusted entity” select “AWS service” then under “Use case” select “DMS” from the pulldown list and click the “DMS” radio button. Then click “Next”
-3. Under “Add permissions” use the search box to find the “AmazonDMSVPCManagementRole” policy and select it, then click “Next”
-5. Under “Name, review, and create” add the role name as exactly `dms-vpc-role` and click Create role
+1. Go to IAM console > Roles > Search for `dms-vpc-role`. If you see the role, skip to the CloudFormation stack deployment. Otherwise, select **Create role** and follow the next steps.
+2. Under **Select trusted entity** select **AWS service**, then under **Use case** select **DMS** from the drop-down and click the **DMS** radio button. Then click **Next**
+3. Under **Add permissions** use the search box to find the `AmazonDMSVPCManagementRole` policy and select it, then click **Next**
+4. Under **Name, review, and create** add the role name as exactly `dms-vpc-role` and click **Create role**

 ::alert[_Do not continue unless you have made the IAM role._]

-1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=dynamodbmigration&templateURL=:param{key="lhol_migration_dms_setup_yaml"})
+1. Launch the CloudFormation template in the workshop region to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=dynamodbmigration&templateURL=:param{key="lhol_migration_dms_setup_yaml"})
 1. *Optionally, download [the YAML template](:param{key="lhol_migration_dms_setup_yaml"}) and launch it your own way*
-9. Click Next
-10. Confirm the Stack Name *dynamodbmigration* and keep the default parameters (modify if necessary)
+9. Click **Next**
+10. Confirm the Stack name `dynamodbmigration` and keep the default parameters (modify if necessary)
 ![Final Deployment Architecture](/static/images/migration18.jpg)
-11. Click “Next” twice
-12. Check “I acknowledge that AWS CloudFormation might create IAM resources with custom names.”
-1. Click Submit. The CloudFormation template will take about 15 minutes to build a replication environment. You should continue the lab while the stack creates in the background.
+11. Click **Next** twice
+12. Check ***I acknowledge that AWS CloudFormation might create IAM resources with custom names***.
+1. Click **Submit**. The CloudFormation template will take about 15 minutes to build a replication environment. You should continue the lab while the stack creates in the background.
 ![Final Deployment Architecture](/static/images/migration19.jpg)

 ::alert[_Do not wait for the stack to complete creation._ **Please continue the lab and allow it to create in the background.**]

diff --git a/content/rdbms-migration/migration-chapter02.en.md b/content/rdbms-migration/migration-chapter02.en.md
index 9e7582a..930932d 100644
--- a/content/rdbms-migration/migration-chapter02.en.md
+++ b/content/rdbms-migration/migration-chapter02.en.md
@@ -5,18 +5,21 @@ date: 2021-04-25T07:33:04-05:00
 weight: 20
 ---
 This chapter will create source environment on AWS as discussed during Exercise Overview.
+
+::alert[The MySQL environment with data has already been deployed in your workshop account. No action is required on this page; the following steps are shown for reference only. Head over to the next page to deploy the DMS resources.]
+
 The CloudFormation template used below will create Source VPC, EC2 hosting MySQL server, IMDb database and load IMDb public dataset into 6 tables.
-1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=rdbmsmigration&templateURL=:param{key="lhol_migration_setup_yaml"})
+1. Launch the CloudFormation template in the workshop region to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=rdbmsmigration&templateURL=:param{key="lhol_migration_setup_yaml"})
 1. *Optionally, download [the YAML template](:param{key="lhol_migration_setup_yaml"}) and launch it your own way*
- 4. Click Next
- 5. Confirm the Stack Name *rdbmsmigration* and update parameters if necessary (leave the default options if at all possible)
+ 4. Click **Next**
+ 5. Confirm the Stack name `rdbmsmigration` and update parameters if necessary (leave the default options if at all possible)
 ![Final Deployment Architecture](/static/images/migration6.jpg)
- 6. Click “Next” twice then check “I acknowledge that AWS CloudFormation might create IAM resources with custom names.”
- 7. Click "Submit"
+ 6. Click **Next** twice then check ***I acknowledge that AWS CloudFormation might create IAM resources with custom names***.
+ 7. Click **Submit**
 8. The CloudFormation stack will take about 5 minutes to build the environment
 ![Final Deployment Architecture](/static/images/migration7.jpg)
- 9. Go to [EC2 Dashboard](https://console.aws.amazon.com/ec2/v2/home?region=us-west-2#Instances:) and ensure the Status check column is 2/2 checks passed before moving to the next step.
+ 9. Go to [EC2 Dashboard](https://console.aws.amazon.com/ec2/v2/home#Instances:) and ensure the **Status check** column shows 2/2 checks passed before moving to the next step.
 ![Final Deployment Architecture](/static/images/migration8.jpg)

diff --git a/content/rdbms-migration/migration-chapter03.en.md b/content/rdbms-migration/migration-chapter03.en.md
index 8ed01c3..4d71755 100644
--- a/content/rdbms-migration/migration-chapter03.en.md
+++ b/content/rdbms-migration/migration-chapter03.en.md
@@ -13,10 +13,10 @@ It created a MySQL database called `imdb`, added 6 new tables (one for each IMDb
 The CloudFormation template also configured a remote MySQL user based on input parameters for the template.
 To explore the dataset, follow the instructions below to log in to the EC2 server.
- 1. Go to [EC2 console](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Instances:instanceState=running).
- 2. Select the MySQL Instance and click Connect.
+ 1. Go to [EC2 console](https://console.aws.amazon.com/ec2/v2/home#Instances:instanceState=running).
+ 2. Select the MySQL Instance and click **Connect**.
 ![Final Deployment Architecture](/static/images/migration9.jpg)
- 3. Make sure "ec2-user" is in the Username field. Click Connect.
+ 3. Make sure `ec2-user` is in the **User name** field. Click **Connect**.
 ![Final Deployment Architecture](/static/images/migration10.jpg)
 4. Elevate your privileges using the `sudo` command.
 ```bash
@@ -31,7 +31,7 @@ To explore the dataset, follow the instructions below to log in to the EC2 serve
 6. You can see all the 6 files copied from the IMDB dataset to the local EC2 directory.
 ![Final Deployment Architecture](/static/images/migration12.jpg)
 7. Feel free to explore the files.
- 8. 
Go to AWS CloudFormation [Stacks](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks?filteringStatus=active&filteringText=&viewNested=true&hideStacks=false) and click on the stack you created earlier. Go to the Parameters tab and copy the username and password listed next to "DbMasterUsername" and "DbMasterPassword". + 8. Go to [AWS CloudFormation Console](https://console.aws.amazon.com/cloudformation/home#/stacks?filteringStatus=active&filteringText=&viewNested=true&hideStacks=false) and click on the stack you created earlier. Go to the **Parameters** tab and copy the username and password listed next to **DbMasterUsername** and **DbMasterPassword**. ![Final Deployment Architecture](/static/images/migration13.jpg) 9. Go back to EC2 Instance console and login to mysql. ```bash diff --git a/content/rdbms-migration/migration-chapter04.en.md b/content/rdbms-migration/migration-chapter04.en.md index 84b797d..49e997f 100644 --- a/content/rdbms-migration/migration-chapter04.en.md +++ b/content/rdbms-migration/migration-chapter04.en.md @@ -10,6 +10,7 @@ You can often query the data from multiple tables and assemble at the presentati To support high-traffic queries with ultra-low latency, designing a schema to take advantage of a NoSQL system generally makes technical and economic sense. To start designing a target data model in Amazon DynamoDB that will scale efficiently, you must identify the common access patterns. For the IMDb use case we have identified a set of access patterns as described below: + ![Final Deployment Architecture](/static/images/migration32.png) A common approach to DynamoDB schema design is to identify application layer entities and use denormalization and composite key aggregation to reduce query complexity. diff --git a/content/rdbms-migration/migration-chapter05.en.md b/content/rdbms-migration/migration-chapter05.en.md index 3203ee0..8200de8 100644 --- a/content/rdbms-migration/migration-chapter05.en.md +++ b/content/rdbms-migration/migration-chapter05.en.md @@ -9,14 +9,28 @@ In this exercise, we will set up Database Migration Service (DMS) jobs to migrat ## Verify DMS creation -1. Go to [DMS Console](https://console.aws.amazon.com/dms/v2/home?region=us-east-1#dashboard) and click on Replication Instances. You can able to see a replication instance with Class dms.c5.2xlarge in Available Status. +1. Go to [DMS Console](https://console.aws.amazon.com/dms/v2/home?region=us-east-1#dashboard) and click on **Replication Instances**. You should be able to see a replication instance with **Class** `dms.c5.2xlarge` in `Available` **Status**. ![Final Deployment Architecture](/static/images/migration20.jpg) ::alert[_Make sure the DMS instance is Available before you continue. If it is not Available, return to the CloudFormation console to review and troubleshoot the CloudFormation stack._] +## Update inbound rules for MySQL instance security group to allow access from DMS IP + +1. Select the `mysqltodynamo-instance` DMS replication instance, copy its **Public IP address**. + +![Copy DMS Public IP](/static/images/migration52.png) + +2. Open [EC2 console](https://console.aws.amazon.com/ec2/v2/home#Instances:instanceState=running), select **MySQL-Instance**. Under the **Security** tab, select the security group of the MySQL instance (eg: `sg-xxxxx`). + +![Open MySQL EC2 Security Group](/static/images/migration53.png) + +3. Select **Edit inbound rules**, then **Add rule**. 
Select **MYSQL/Aurora** in **Type**, and paste the public IP address of the DMS replication instance (copied in step 1) with a `/32` suffix. For example, for IP `54.X.X.X`, enter `54.X.X.X/32` in **Source**. Finally, select **Save rules**.
+
+![Edit inbound rules of Security Group](/static/images/migration54.png)
+
 ## Create source and target endpoints

-1. Click on Endpoints and Create endpoint button
+1. From the DMS console, select **Endpoints** and then **Create endpoint**.
 ![Final Deployment Architecture](/static/images/migration21.jpg)
 2. Create the source endpoint. Use the following parameters to configure the endpoint:

@@ -26,17 +40,19 @@ In this exercise, we will set up Database Migration Service (DMS) jobs to migrat
 | Endpoint identifier | mysql-endpoint |
 | Source engine | MySQL |
 | Access to endpoint database | Select the "Provide access information manually" radio button |
- | Server name | From the [EC2 dashboard](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Instances:instanceState=running), select MySQL-Instance and copy Public IPv4 DNS |
+ | Server name | From the [EC2 dashboard](https://console.aws.amazon.com/ec2/v2/home#Instances:instanceState=running), select MySQL-Instance and copy Public IPv4 DNS |
 | Port | 3306 |
 | SSL mode | none |
 | User name | Value of DbMasterUsername added as parameter during Configure MySQL Environment |
 | Password | Value of DbMasterPassword added as parameter during Configure MySQL Environment |
-  ![Final Deployment Architecture](/static/images/migration22.jpg)
-  Open Test endpoint connection (optional) section, then in the VPC drop-down select DMS-VPC and click the Run test button to verify that your endpoint configuration is valid. The test will run for a minute and you should see a successful message in the Status column. Click on the Create endpoint button to create the endpoint. If you see a connection error, re-type the username and password to ensure no mistakes were made. Further, ensure you provided the IPv4 DNS name ending in amazonaws.com in the field **Server name**.
-  ![Final Deployment Architecture](/static/images/migration23.jpg)
+![Final Deployment Architecture](/static/images/migration22.jpg)

-4. Create the target endpoint. Repeat all steps to create the target endpoint with the following parameter values:
+3. Open the **Test endpoint connection (optional)** section, then in the **VPC** drop-down select **DMS-VPC** and click **Run test** to verify that your endpoint configuration is valid. The test will run for a minute and you should see a *successful* message in the **Status** column. Click **Create endpoint** to create the endpoint. If you see a connection error, re-type the username and password to ensure no mistakes were made. Further, ensure you provided the IPv4 DNS name ending in `amazonaws.com` in the field **Server name**.
+
+![Final Deployment Architecture](/static/images/migration23.jpg)
+
+4. Create the target endpoint. Repeat all steps to create the target endpoint with the following parameter values:

 | Parameter | Value |
 | ----------------------- | :-----------------------------------------------------------------------------------------------------------------------------------------------------: |
 | Endpoint type | Target endpoint |
 | Endpoint identifier | dynamodb-endpoint |
 | Target engine | Amazon DynamoDB |
 | Service access role ARN | CloudFormation template has created new role with full access to Amazon DynamoDB. 
Copy Role ARN from [dynamodb-access](https://us-east-1.console.aws.amazon.com/iamv2/home#/roles/details/dynamodb-access?section=permissions) role | - ![Final Deployment Architecture](/static/images/migration24.jpg) - Open Test endpoint connection (optional) section, then in the VPC drop-down select DMS-VPC and click the Run test button to verify that your endpoint configuration is valid. The test will run for a minute and you should see a successful message in the Status column. Click on the Create endpoint button to create the endpoint. +![Final Deployment Architecture](/static/images/migration24.jpg) + +5. Open **Test endpoint connection (optional)** section, then in the **VPC** drop-down select **DMS-VPC** and select **Run test** to verify that your endpoint configuration is valid. The test will run for a minute and you should see a *successful* message in the **Status** column. Click **Create endpoint** to create the endpoint. ## Configure and Run a Replication Task -Still in the AWS DMS console, go to Database migration tasks and click the Create Task button. We will create 3 replication jobs to migrate denormalized view, ratings (title_ratings) and regions/languages (title_akas) information. +Still in the AWS DMS console, go to **Database migration tasks** and click on **Create database migration task**. We will create 3 replication jobs to migrate denormalized view, ratings (`title_ratings`) and regions/languages (`title_akas`) information. 1. Task1: Enter the following parameter values in the Create database migration task screen: | Parameter | Value | | -------------------------------------------- | :---------------------------------------------------------------------------: | - | Task identified | historical-migration01 | + | Task identifier | historical-migration01 | | Replication instance | mysqltodynamodb-instance-\* | | Source database endpoint | mysql-endpoint | | Target database endpoint | dynamodb-endpoint | @@ -76,7 +93,7 @@ Some statistics around full dataset is give at the bottom of this chapter. Copy list of selective movies by Clint Eastwood. -``` +```json { "filter-operator": "eq", "value": "tt0309377" @@ -179,7 +196,7 @@ Copy list of selective movies by Clint Eastwood. } ``` -Below JSON document will migrate denormalized view from imdb MySQL database (Task identified: historical-migration01). +Below JSON document will migrate denormalized view from imdb MySQL database (Task identifier: `historical-migration01`). Replace the string “REPLACE THIS STRING BY MOVIES LIST” with list of movies copied earlier (Checkout following screenshot for any confusion). Then paste the resulting JSON code in to the JSON editor, replacing the existing code. ```json { @@ -238,11 +255,14 @@ Replace the string “REPLACE THIS STRING BY MOVIES LIST” with list of movies ``` ![Final Deployment Architecture](/static/images/migration36.png) -Go to the bottom and click on Create task. At this point the task will be created and will automatically start loading selected movies from source to target DynamoDB table. -You can move forward and create two more tasks with similar steps (historical-migration02 and historical-migration03). -Use the same settings as above except the Table Mappings JSON document. For historical-migration02 and historical-migration03 tasks use the JSON document mentioned below. -Below JSON document will migrate title_akas table from imdb MySQL database (Task identified: historical-migration02) +::alert[Make sure the **Turn on premigration assessment** is un-checked. 
For MySQL based source databases, AWS DMS supports running a bunch of validations as part of a premigration assessment like if binlog compression is disabled, or DMS has replication priveleges. Full list in [AWS Documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.AssessmentReport.MySQL.html).] + +Go to the bottom and click on **Create database migraton task**. At this point the task will be created and will automatically start loading selected movies from source to target DynamoDB table. +You can move forward and create two more tasks with similar steps (`historical-migration02` and `historical-migration03`). +Use the same settings as above except the Table Mappings JSON document. For `historical-migration02` and `historical-migration03` tasks use the JSON document mentioned below. + +Below JSON document will migrate title_akas table from imdb MySQL database (Task identifier: `historical-migration02`) Replace the string "REPLACE THIS STRING BY MOVIES LIST" with list of movies copied earlier. ```json @@ -301,7 +321,7 @@ Replace the string "REPLACE THIS STRING BY MOVIES LIST" with list of movies copi } ``` -Below JSON document will migrate title_ratings table from imdb MySQL database (Task identified: historical-migration03) +Below JSON document will migrate title_ratings table from imdb MySQL database (Task identifier: `historical-migration03`) Replace the string "REPLACE THIS STRING BY MOVIES LIST" with list of movies copied earlier. ```json @@ -368,7 +388,7 @@ Replace the string "REPLACE THIS STRING BY MOVIES LIST" with list of movies copi :::: ### Monitor and the restart/resume the tasks -The replication task for historical migration will start moving data from MySQL imdb.movies view, title_akas and title_ratings to DynamoDB table will start in a few minutes. +The replication task for historical migration will start moving data from MySQL `imdb.movies` view, `title_akas` and `title_ratings` to DynamoDB table will start in a few minutes. If you are loading selective records based on the list above, it may take 5-10 minutes to complete all three tasks. If you were to run this exercise again but do a full load, the load times would be as follows: diff --git a/content/relational-migration/setup/index2.en.md b/content/relational-migration/setup/index2.en.md index a5f35c6..3c20f9f 100644 --- a/content/relational-migration/setup/index2.en.md +++ b/content/relational-migration/setup/index2.en.md @@ -25,7 +25,7 @@ The Lambda source code project has been setup as follows * **chalicelib/dynamodb_calls.py** -1. Next, let's deploy the Chalice application stack. This step may take a few minutes to complete. +1. Next, let's deploy the Chalice application stack. This step may take a few minutes to complete. 
```bash chalice deploy --stage relational ``` diff --git a/design-patterns/cloudformation/C9.yaml b/design-patterns/cloudformation/C9.yaml index bee6b40..d34d480 100644 --- a/design-patterns/cloudformation/C9.yaml +++ b/design-patterns/cloudformation/C9.yaml @@ -85,8 +85,45 @@ Mappings: options: UserDataURL: "https://amazon-dynamodb-labs.com/assets/UserDataC9.sh" version: "1" + # AWS Managed Prefix Lists for EC2 InstanceConnect + AWSRegions2PrefixListID: + ap-south-1: + PrefixList: pl-0fa83cebf909345ca + eu-north-1: + PrefixList: pl-0bd77a95ba8e317a6 + eu-west-3: + PrefixList: pl-0f2a97ab210dbbae1 + eu-west-2: + PrefixList: pl-067eefa539e593d55 + eu-west-1: + PrefixList: pl-0839cc4c195a4e751 + ap-northeast-3: + PrefixList: pl-086543b458dc7add9 + ap-northeast-2: + PrefixList: pl-00ec8fd779e5b4175 + ap-northeast-1: + PrefixList: pl-08d491d20eebc3b95 + ca-central-1: + PrefixList: pl-0beea00ad1821f2ef + sa-east-1: + PrefixList: pl-029debe66aa9d13b3 + ap-southeast-1: + PrefixList: pl-073f7512b7b9a2450 + ap-southeast-2: + PrefixList: pl-0e1bc5673b8a57acc + eu-central-1: + PrefixList: pl-03384955215625250 + us-east-1: + PrefixList: pl-0e4bcff02b13bef1e + us-east-2: + PrefixList: pl-03915406641cb1f53 + us-west-1: + PrefixList: pl-0e99958a47b22d6ab + us-west-2: + PrefixList: pl-047d464325e7bf465 + Resources: -#LADV Role + #LADV Role DDBReplicationRole: Type: AWS::IAM::Role Properties: @@ -742,6 +779,11 @@ Resources: IpProtocol: tcp FromPort: 3306 ToPort: 3306 + - Description: "Allow Instance Connect" + FromPort: 22 + ToPort: 22 + IpProtocol: tcp + SourcePrefixListId: !FindInMap [AWSRegions2PrefixListID, !Ref 'AWS::Region', PrefixList] Tags: - Key: Name Value: MySQL-SecurityGroup @@ -802,6 +844,24 @@ Resources: mysql -u root "-p${DbMasterPassword}" -e "GRANT ALL PRIVILEGES ON *.* TO '${DbMasterUsername}'" mysql -u root "-p${DbMasterPassword}" -e "FLUSH PRIVILEGES" mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE app_db;" + ## Setup MySQL Tables + cd /var/lib/mysql-files/ + curl -O https://www.amazondynamodblabs.com/static/rdbms-migration/rdbms-migration.zip + unzip -q rdbms-migration.zip + chmod 775 *.* + mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE imdb;" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_akas (titleId VARCHAR(200), ordering VARCHAR(200),title VARCHAR(1000), region VARCHAR(1000), language VARCHAR(1000), types VARCHAR(1000),attributes VARCHAR(1000),isOriginalTitle VARCHAR(5),primary key (titleId, ordering));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_basics (tconst VARCHAR(200), titleType VARCHAR(1000),primaryTitle VARCHAR(1000), originalTitle VARCHAR(1000), isAdult VARCHAR(1000), startYear VARCHAR(1000),endYear VARCHAR(1000),runtimeMinutes VARCHAR(1000),genres VARCHAR(1000),primary key (tconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_crew (tconst VARCHAR(200), directors VARCHAR(1000),writers VARCHAR(1000),primary key (tconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_principals (tconst VARCHAR(200), ordering VARCHAR(200),nconst VARCHAR(200), category VARCHAR(1000), job VARCHAR(1000), characters VARCHAR(1000),primary key (tconst,ordering,nconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_ratings (tconst VARCHAR(200), averageRating float,numVotes integer,primary key (tconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.name_basics (nconst VARCHAR(200), primaryName VARCHAR(1000),birthYear VARCHAR(1000), deathYear 
VARCHAR(1000), primaryProfession VARCHAR(1000), knownForTitles VARCHAR(1000),primary key (nconst));" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_ratings.tsv' IGNORE INTO TABLE imdb.title_ratings FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_basics.tsv' IGNORE INTO TABLE imdb.title_basics FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_crew.tsv' IGNORE INTO TABLE imdb.title_crew FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_principals.tsv' IGNORE INTO TABLE imdb.title_principals FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/name_basics.tsv' IGNORE INTO TABLE imdb.name_basics FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_akas.tsv' IGNORE INTO TABLE imdb.title_akas FIELDS TERMINATED BY '\t';" Tags: - Key: Name Value: MySQL-Instance diff --git a/static/images/migration36.png b/static/images/migration36.png index 40bf8a4..46233f5 100644 Binary files a/static/images/migration36.png and b/static/images/migration36.png differ diff --git a/static/images/migration52.png b/static/images/migration52.png new file mode 100644 index 0000000..9744d7f Binary files /dev/null and b/static/images/migration52.png differ diff --git a/static/images/migration53.png b/static/images/migration53.png new file mode 100644 index 0000000..746297b Binary files /dev/null and b/static/images/migration53.png differ diff --git a/static/images/migration54.png b/static/images/migration54.png new file mode 100644 index 0000000..cd38668 Binary files /dev/null and b/static/images/migration54.png differ diff --git a/sync.sh b/sync.sh new file mode 100755 index 0000000..2d65ad2 --- /dev/null +++ b/sync.sh @@ -0,0 +1,110 @@ +#!/bin/bash + +usage() { + echo "Usage: $0 [--dry-run] -d DEST_REPO" + echo " --dry-run Show what would be synced without making changes" + echo " -d, --dest Destination repository path" + echo + echo "Example:" + echo " $0 --dry-run -d /Users/$USER/workspace/amazon-dynamodb-immersion-day" + exit 1 +} + +# Get the directory where the script is located +SOURCE_REPO="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Parse command line arguments +DRY_RUN=false +DEST_REPO="" + +while [[ $# -gt 0 ]]; do + case $1 in + --dry-run) + DRY_RUN=true + shift + ;; + -d|--dest) + DEST_REPO="$2" + shift 2 + ;; + *) + usage + ;; + esac +done + +# Validate required parameters +if [ -z "$DEST_REPO" ]; then + usage +fi + +# Define source and destination directory pairs +src_dirs=( + "content" + "static" +) + +dest_dirs=( + "content" + "static" +) + +# Define source and destination file pairs +src_files=( + "design-patterns/cloudformation/C9.yaml" +) + +dest_files=( + "static/ddb.yaml" +) + +# Function to perform sync +perform_sync() { + local rsync_options="-avz" + + if [ "$DRY_RUN" = true ]; then + rsync_options="$rsync_options --dry-run" + echo "Performing dry run..." + else + echo "Performing actual sync..." 
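+        # Without --dry-run, the rsync calls below will copy new and changed files into
+        # the destination repository and overwrite matching files (no deletions are made).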
+ fi + + echo "Source repository: $SOURCE_REPO" + echo "Destination repository: $DEST_REPO" + + # Sync directories + for i in "${!src_dirs[@]}"; do + echo "Syncing directory: ${src_dirs[$i]}/ -> ${dest_dirs[$i]}/" + mkdir -p "$DEST_REPO/${dest_dirs[$i]}" + rsync $rsync_options "$SOURCE_REPO/${src_dirs[$i]}/" "$DEST_REPO/${dest_dirs[$i]}/" + done + + # Sync individual files + for i in "${!src_files[@]}"; do + echo "Syncing file: ${src_files[$i]} -> ${dest_files[$i]}" + dest_dir=$(dirname "$DEST_REPO/${dest_files[$i]}") + mkdir -p "$dest_dir" + rsync $rsync_options "$SOURCE_REPO/${src_files[$i]}" "$DEST_REPO/${dest_files[$i]}" + done + echo "Great! Now follow instructions in the amazon-dynamodb-immersion-day README.md document to complete the sync." +} + +# Verify destination repository exists +if [ ! -d "$DEST_REPO" ]; then + echo "Error: Destination repository does not exist: $DEST_REPO" + exit 1 +fi + +# Execute sync +if [ "$DRY_RUN" = true ]; then + perform_sync +else + read -p "This will perform an actual sync. Are you sure? (y/n) " -n 1 -r + echo + if [[ $REPLY =~ ^[Yy]$ ]]; then + perform_sync + else + echo "Sync cancelled." + exit 1 + fi +fi
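+
+# Example usage (a sketch; the destination path is illustrative — point -d at your
+# local checkout of the amazon-dynamodb-immersion-day repository):
+#   ./sync.sh --dry-run -d /Users/$USER/workspace/amazon-dynamodb-immersion-day   # preview what would be synced
+#   ./sync.sh -d /Users/$USER/workspace/amazon-dynamodb-immersion-day             # perform the sync (asks for confirmation)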