content/modernizr/01-mysql-mcp/index.en.md (+6 -6)
@@ -11,7 +11,7 @@ chapter: true
Stage 1 establishes the foundation for your modernization project by conducting a systematic analysis of your existing MySQL database. This phase involves using specialized AI tools to automatically discover and document your current data architecture, relationships, and usage patterns.
-The analysis process leverages the MySQL MCP Server—a specialized AI assistant designed specifically for relational database analysis. This tool connects directly to your running MySQL instance to extract comprehensive metadata about your database schema, including table structures, relationships, indexes, and constraints.
+The analysis process leverages the MySQL MCP Server — a specialized AI assistant designed specifically for relational database analysis. This tool connects directly to your running MySQL instance to extract comprehensive metadata about your database schema.
## Key Analysis Components
@@ -28,9 +28,9 @@ The MySQL MCP Server performs automated schema discovery by querying the MySQL i
Understanding how your data entities relate to each other is crucial for designing an effective NoSQL structure. The analysis identifies:
--**One-to-Many Relationships**: Parent-child relationships that may benefit from document embedding in DynamoDB
--**Many-to-Many Relationships**: Complex associations that require careful modeling in NoSQL
--**Hierarchical Data Patterns**: Nested structures that can be optimized using DynamoDB's flexible schema
+-**One-to-Many Relationships**: Parent-child relationships that may benefit from modeling as DynamoDB item collections
+-**Many-to-Many Relationships**: Associations that require designs such as adjacency lists to model in NoSQL
+-**Hierarchical Data Patterns**: Nested structures that can be modeled with composite sort keys (see the sketch below)
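To ground these three patterns, here is a minimal sketch of the item shapes each one produces (boto3-style Python dictionaries; every name and value below is illustrative, not taken from the workshop's generated model):

```python
# Illustrative item shapes only; all names and values are hypothetical.

# One-to-many as an item collection: a user and their orders share a
# partition key and are distinguished by sort-key prefixes.
user = {"PK": "jane@example.com", "SK": "#META", "name": "Jane"}
order = {"PK": "jane@example.com", "SK": "ORDER#2024-001", "total": "59.98"}

# Many-to-many via an adjacency list: one "edge" item per association,
# queryable from the other side through a GSI that inverts (PK, SK).
product_in_category = {"PK": "PRODUCT#42", "SK": "CATEGORY#7"}

# Hierarchy via a composite sort key: begins_with("ORDER#2024-001#")
# returns the order header and all of its line items in one Query.
order_line = {"PK": "jane@example.com", "SK": "ORDER#2024-001#ITEM#1", "qty": 2}
```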
### Access Pattern Analysis
@@ -46,10 +46,10 @@ The MySQL MCP Server generates comprehensive documentation artifacts that serve
-**Entity Relationship Diagrams**: Visual representations of your current data model
-**Schema Documentation**: Detailed specifications of all database objects
--**Access Pattern Catalog**: Systematic documentation of how your application interacts with data
+-**Access Pattern Catalog**: Documentation of how your application interacts with data
## Setting Up the Analysis Environment
Before beginning the analysis, ensure your MySQL database is accessible and the MCP Server has appropriate permissions to read schema metadata. The analysis process is read-only and does not modify your production data.
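For the curious, the kind of read-only metadata query this involves looks roughly like the sketch below (using `mysql-connector-python`; the credentials and the `your_app_db` schema name are placeholders, and the MCP Server's actual queries may differ):

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholders throughout; the MCP Server's actual queries may differ.
conn = mysql.connector.connect(
    host="localhost", user="readonly_user", password="***",
    database="information_schema",
)
cur = conn.cursor()

# Foreign-key metadata reveals the relationships the analysis documents.
cur.execute(
    """
    SELECT table_name, column_name, referenced_table_name, referenced_column_name
    FROM key_column_usage
    WHERE table_schema = %s AND referenced_table_name IS NOT NULL
    """,
    ("your_app_db",),
)
for table, column, ref_table, ref_column in cur.fetchall():
    print(f"{table}.{column} -> {ref_table}.{ref_column}")
```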
-This systematic approach to database analysis provides the detailed understanding necessary to design an optimal DynamoDB architecture that preserves all existing functionality while improving performance and scalability.
+Information gathered in this step of database analysis provides the detailed understanding necessary to design an optimal DynamoDB architecture that preserves all existing functionality while improving performance and scalability.
content/modernizr/02-data-modeling/data-modeling-01.en.md (+4 -4)
@@ -20,7 +20,7 @@ Great please continue with task 3.1 available here prompts/02-dynamodb-data-mode
## Access Pattern Validations
-Task 3.1 implements a validation step that verifies your DynamoDB design supports all identified access patterns from the requirements analysis. This validation process serves as a quality assurance checkpoint, detecting potential AI-generated artifacts that don't correspond to actual system requirements—a common issue in AI-assisted development where models may extrapolate beyond provided specifications.
+Task 3.1 implements a validation step that verifies your DynamoDB design supports all identified access patterns from the requirements analysis. This validation process serves as a quality assurance checkpoint, detecting potential AI-generated artifacts that don't correspond to actual system requirements — a common issue in AI-assisted development where models may extrapolate beyond provided specifications.
Following validation, you'll receive a table-by-table analysis of your data model. Before proceeding, review the generated design document at `artifacts/stage-02/dynamodb_data_model.md` to understand the proposed architecture.
@@ -51,11 +51,11 @@ The e-commerce application presents several one-to-many relationships that infor
## DynamoDB Entity Aggregation Strategy
-These relationships reveal natural aggregation opportunities where related entities can be co-located within the same table partition. For instance, user-centric data including profile information, order history, and active cart contents share logical cohesion and similar access patterns.
+These relationships reveal natural aggregation opportunities where related entities can be co-located within the same table partition. For instance, user-centric data including profile information, order history, and active cart contents share logical cohesion and similar access patterns. Generally, a good mantra for data modeling in DynamoDB is that "data accessed together should be stored together".
## Partition Key Design Principles
-DynamoDB enables semantic partition key selection that provides both uniqueness guarantees and meaningful data organization. Rather than relying on surrogate keys, you can implement partition keys like `user_email` that enforce natural business constraints while enabling intuitive data access patterns.
+DynamoDB lets you use real business attributes as partition keys instead of generated IDs. For example, using `user_email` as the key ensures each record is unique while also making it easier to organize and query data in ways that match how the application actually uses it.
For additional query flexibility during the migration phase, Global Secondary Indexes (GSI) can provide alternate access paths based on `userID` or `username` attributes without impacting primary table performance.
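As a hedged illustration of that layout (the table, index, and attribute names below are assumptions, not the workshop's generated schema), a generic PK/SK table with one GSI could be created like this:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Illustrative only: generic PK/SK as the table key, plus one GSI (GSI1)
# giving an alternate lookup path, e.g. by userID, without touching the
# base table's key schema.
dynamodb.create_table(
    TableName="Users",
    AttributeDefinitions=[
        {"AttributeName": "PK", "AttributeType": "S"},      # e.g. user_email
        {"AttributeName": "SK", "AttributeType": "S"},      # e.g. #META, CART#...
        {"AttributeName": "GSI1PK", "AttributeType": "S"},  # e.g. userID
    ],
    KeySchema=[
        {"AttributeName": "PK", "KeyType": "HASH"},
        {"AttributeName": "SK", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "GSI1",
        "KeySchema": [{"AttributeName": "GSI1PK", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```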
@@ -68,7 +68,7 @@ Example entity co-location for the Users table:
-**User Order History**: Historical order records associated with the user
-**Active Cart Items**: Current shopping session state data
-This co-location strategy leverages DynamoDB's partition-level consistency and enables efficient retrieval of all user-related information through single query operations.
+This co-location strategy enables efficient retrieval of all user-related information through single query operations, reducing read cost and improving performance.
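Concretely, a single Query against the user's partition key returns every co-located item type in one round trip (a sketch reusing the illustrative names from above):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Users")  # illustrative name

# Profile, order history, and cart come back together because they all
# share the same partition key (the user's email).
response = table.query(
    KeyConditionExpression=Key("PK").eq("jane@example.com")  # hypothetical user
)
for item in response["Items"]:
    print(item["SK"])  # e.g. #META, ORDER#2024-001, CART#42
```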
content/modernizr/02-data-modeling/data-modeling-02.en.md (+1 -1)
@@ -36,7 +36,7 @@ During our transition from the old system to the new one, we need to be able to
## The Shopping Cart Entity - Your Digital Shopping Basket
-Imagine you're walking through a store with a shopping basket. Each item you pick up gets added to your basket with a note about what it is and how many you want. That's exactly what our shopping cart entity does digitally.
+Imagine you're walking through a store with a shopping basket. You'll select various quantities of different items to add to your cart. That's exactly what our shopping cart entity does digitally, except instead of a physical item it's a small note representing what is being purchased and in what quantity.
We link each cart item to the person's email (PK) and give each item a special label that starts with `CART#` followed by the product ID (SK).
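In code that might look like the following sketch (the `Users` table name, the user email, and the product ID are hypothetical):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Users")  # assumed table name

# Add a product to the cart: PK = the user's email, SK = CART#<product_id>.
table.put_item(Item={"PK": "jane@example.com", "SK": "CART#42", "quantity": 2})

# Read back the whole basket: everything in the user's partition whose
# sort key starts with the CART# prefix.
cart = table.query(
    KeyConditionExpression=Key("PK").eq("jane@example.com")
    & Key("SK").begins_with("CART#")
)
```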
content/modernizr/02-data-modeling/data-modeling-03.en.md (+1 -1)
@@ -14,7 +14,7 @@ When your AI assistant (the LLM) first looked at this problem, it came up with a
## Why Are We Using PK and SK Instead of Just Product ID?
-You might wonder why we're not just using `product_id` as our main identifier. The reason is like building a house with room to expand later. By using PK (Primary Key) and SK (Sort Key), we're creating space to add related information about each product in the future without having to rebuild our entire system.
+You might wonder why we're not just using `product_id` as our main identifier. The reason is like building a house with room to expand later. By using PK (Partition Key) and SK (Sort Key), we're creating space to add related information about each product in the future without having to rebuild our entire system. As a side benefit, shorter key names save storage too!
For now, our PK will be the product ID, and our SK will be `#META` (which means "this contains the main product information"). This setup is like having a filing cabinet where each product gets its own folder, and we can add different types of documents to that folder later.
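A short sketch of that filing-cabinet setup (the `Products` table name and the sample attributes are illustrative):

```python
import boto3

table = boto3.resource("dynamodb").Table("Products")  # illustrative name

# The product's "main folder document" lives at SK = "#META"; reviews,
# inventory, or pricing history could be filed later under new sort keys
# without changing the key schema.
table.put_item(Item={
    "PK": "42",        # the product ID, with no entity prefix
    "SK": "#META",
    "name": "Wireless Mouse",
    "price": "29.99",  # stored as a string here to sidestep float issues
})

product = table.get_item(Key={"PK": "42", "SK": "#META"})["Item"]
```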
content/modernizr/02-data-modeling/data-modeling-05.en.md (+1 -1)
@@ -37,7 +37,7 @@ This iterative approach reflects industry best practices where database designs
As the system progresses through task 3.3, verify that the final design adheres to established naming conventions and structural standards. Specifically, ensure no entity prefixes are applied to primary key values such as `PROD_<product_id>` or `USER_<user_id>` or `USER_<email>`.
-The validation process serves as quality assurance, catching potential issues before they propagate to subsequent implementation phases. Investing time in thorough validation at this stage prevents costly rework during later development phases.
+The validation process serves as quality assurance, catching potential issues before they propagate to subsequent implementation phases. Investing time in thorough validation at this stage prevents costly rework during later development phases, as the quality of each stage's output depends on the quality of its input.
content/modernizr/02-data-modeling/data-modeling-06.en.md (+2 -2)
@@ -43,15 +43,15 @@ During contract generation, monitor for these frequent issues that require manua
AI systems sometimes deviate from specified naming conventions, creating custom table or attribute names instead of using the standardized schema:
```bash
-You will need to validate the creation of the migration contract, I see you have defined your own table names, and you didn't used the table names I have provided, same for the GSIs I specifically asked for generic names for the GIS and the PK and SK to avoid issues or hardcoded values. To give you an example in the migration contract artifacts/stage-02/migrationContract.json the first table `UserOrdersCart` should be called `Users`, the partition key should be PK and the sort tkey SK, Please re-read the data_model and update my migration contract
+You will need to validate the creation of the migration contract. I see you have defined your own table names, and you didn't use the table names I have provided; the same goes for the GSIs. I specifically asked for generic names for the GSIs and the PK and SK to avoid issues or hardcoded values. To give you an example, in the migration contract artifacts/stage-02/migrationContract.json the first table `UserOrdersCart` should be called `Users`, the partition key should be PK and the sort key SK. Please re-read the data_model and update my migration contract.
```
### Invalid Transformation Methods
The system may generate non-existent transformation functions instead of using the supported contract specifications:
```bash
-I noticed you have a made up transformation called `json-parse` it should be `json-construction` The format of that attribute is a map so you need to use JSON contruction, can you please update that attribute name? and validate you have no created other made up methods? you need to follow the specifications as directed in the `contracts` folder
+I noticed you have a made-up transformation called `json-parse`; it should be `json-construction`. The format of that attribute is a map, so you need to use JSON construction. Can you please update that attribute name and validate you have not created other made-up methods? You need to follow the specifications as directed in the `contracts` folder.
```
content/modernizr/02-data-modeling/index.en.md (+9 -9)
@@ -7,9 +7,9 @@ chapter: true
## DynamoDB Data Modeling Workshop
-Having completed the application analysis in Stage 1, you now understand the existing MySQL schema, identified access patterns, and established performance requirements. Stage 2 focuses on translating this relational data model into an optimized DynamoDB design that supports all identified access patterns while leveraging NoSQL advantages.
+Having completed the application analysis in Stage 1, you now understand the existing MySQL schema, identified access patterns, and established performance requirements. Stage 2 focuses on translating this relational data model into an optimized DynamoDB design that supports all identified access patterns while leveraging the advantages of NoSQL.
-This stage requires active collaboration between human architectural decision-making and AI-assisted technical implementation. You'll guide the design process while leveraging specialized AI tools to ensure optimal DynamoDB data modeling practices.
+This stage requires active collaboration between you as a human architect and AI tools assisting with technical implementation. You'll guide the design process while the AI makes recommendations for DynamoDB data modeling practices.
## Interactive Design Process
@@ -33,17 +33,17 @@ You'll know the workshop is working because you'll see Cline actively creating n
-**Take your time** with each file that gets generated. Don't rush through this process! Read everything, understand what Cline is doing at each step, and don't hesitate to ask questions. This is interactive learning - if you get confused at any point, just ask Cline to explain what's happening.
+**Take your time** with each file that gets generated. Don't rush through this process! Read everything, understand what Cline is doing at each step and why, and don't hesitate to ask questions. This is interactive learning - if you get confused at any point, just ask Cline to explain what's happening.
## Understanding the Variables
-Every design session is unique, so what you see might be slightly different from someone else's experience. That's normal and expected! Instead of giving you exact screenshots to match, we'll provide guidance on the important concepts and decisions you'll encounter.
+Generative AI is by nature non-deterministic. Some "temperature" or variability in an LLM's answers allows it to be more creative in problem solving. Because of this, every design session is unique, so what you see might be slightly different from someone else's experience. That's normal and expected! Instead of giving you exact screenshots to match, we'll provide guidance on the important concepts and decisions you'll encounter.
One crucial thing to remember: **we need to support all 48 different access patterns** that were identified in Stage 1. Make sure this gets communicated clearly to Cline throughout the process.
## Providing Traffic Information
-Cline will likely ask you about expected traffic patterns - basically, "How many people will be using this system?" Here's the information you should provide when asked:
+Cline will likely ask you about expected traffic patterns — basically, "How many people will be using this system?" Here's the information you should provide when asked:
```shell
Our best marketing estimates put us with 1400 Queries per second.
```
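For a rough sense of what that number implies (the read/write split, item sizes, and consistency mode below are assumptions for illustration, not workshop requirements), the capacity math works out like this:

```python
# Back-of-the-envelope DynamoDB capacity math; all splits are assumptions.
total_qps = 1400
read_qps = total_qps * 0.9   # assume 90% reads  -> 1260 reads/sec
write_qps = total_qps * 0.1  # assume 10% writes ->  140 writes/sec

# 1 RCU = 2 eventually consistent reads/sec of items up to 4 KB.
rcus = read_qps / 2          # ~630 RCUs
# 1 WCU = 1 write/sec of an item up to 1 KB.
wcus = write_qps             # ~140 WCUs
print(f"~{rcus:.0f} RCUs and ~{wcus:.0f} WCUs at steady state")
```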
@@ -75,26 +75,26 @@ During this intensive design stage, you might occasionally see messages about ra
## Using Specialized AI Tools
-At some point, Cline will ask for permission to use the DynamoDB MCP Server - this is like accessing a specialized AI consultant who's an expert specifically in DynamoDB design. When asked, give permission for this. This expert AI will help analyze all the data we've collected and create a proper database design.
+At some point, Cline will ask for permission to use the DynamoDB MCP Server — this is like accessing a specialized AI consultant who's an expert specifically in DynamoDB design. When asked, give permission for this. This expert AI will help analyze all the data we've collected and create a proper database design.
-The DynamoDB specialist will first create a summary file called `dynamodb_requirements.md`. This is like having an architect show you the summary of everything you've discussed before they start drawing the blueprints.
+Cline will first create a summary file called `dynamodb_requirements.md`. This is like having an architect show you the summary of everything you've discussed before they start drawing the blueprints.
::alert[**Important:** Read this file carefully! Sometimes AI can accidentally add requirements that were never discussed, or miss important details. This is your chance to catch any errors before they become part of the final design.]{type="info"}
-Once you approve the requirements summary, Cline will create your actual DynamoDB data model. This is exciting - you're seeing your new database structure come to life! Cline has generated the new data model file `artifacts/stage-02/dynamodb_data_model.md` please open it and read it carefully.
+Once you approve the requirements summary, Cline will create your actual DynamoDB data model. This is exciting — you're seeing your new database structure come to life! Cline has generated the new data model file `artifacts/stage-02/dynamodb_data_model.md`; please open it and read it carefully.
-After getting your initial design, the next step is validation - making sure this design is actually good and not just something that sounds impressive but won't work in practice. We'll examine the design carefully to ensure it's based on real requirements rather than AI imagination.
+After getting your initial design, the next step is validation — making sure this design is actually good and not just something that sounds impressive but won't work in practice. We'll examine the design carefully to ensure it's based on real requirements rather than AI imagination.
Remember, this is a collaborative process where your input and decisions shape the final outcome. You're learning to be a database architect while the AI handles the technical implementation details!