6 changes: 5 additions & 1 deletion motia-content-creation/.gitignore
@@ -106,4 +106,8 @@ temp/
 .Spotlight-V100
 .Trashes
 ehthumbs.db
-Thumbs.db
+Thumbs.db
+
+# Typescript
+types.d.ts
+package-lock.json
1 change: 1 addition & 0 deletions motia-content-creation/README.md
@@ -46,6 +46,7 @@ API → Scrape → Generate → Schedule
 2. **Install project dependencies:**
    ```bash
    npm install or pnpm install
+   npx motia install
    ```

 3. **Configure environment:**
Expand Down
13 changes: 8 additions & 5 deletions motia-content-creation/package.json
@@ -18,17 +18,20 @@
     "web-scraping"
   ],
   "dependencies": {
-    "motia": "^0.4.0-beta.90",
     "@mendable/firecrawl-js": "^1.0.0",
-    "openai": "^4.90.0",
+    "axios": "^1.10.0",
     "dotenv": "^16.5.0",
-    "zod": "^3.25.67",
-    "axios": "^1.10.0"
+    "install": "^0.13.0",
+    "motia": "^0.4.0-beta.90",
+    "openai": "^4.90.0",
+    "or": "^0.2.0",
+    "pnpm": "^10.15.0",
+    "zod": "^3.25.67"
   },
Comment on lines +22 to 30
💡 Verification agent

🧩 Analysis chain

Prune likely-unused/supply-chain-risk dependencies from runtime deps.

Given the migration to Ollama in Python steps, these runtime deps look unnecessary in the Node package and can enlarge the attack surface:

  • openai (Node) — Python steps use ollama; no Node usage shown
  • pnpm — package manager should not be an app runtime dependency
  • install, or — uncommon packages; please justify usage
  • axios — only keep if the TS scheduling steps actually import it

Move truly-needed tools to devDependencies or remove entirely.

Run this script to confirm actual usage before removal:

If unused, apply this diff:

```diff
   "dependencies": {
     "@mendable/firecrawl-js": "^1.0.0",
-    "axios": "^1.10.0",
     "dotenv": "^16.5.0",
-    "install": "^0.13.0",
     "motia": "^0.4.0-beta.90",
-    "openai": "^4.90.0",
-    "or": "^0.2.0",
-    "pnpm": "^10.15.0",
     "zod": "^3.25.67"
   },
```

🏁 Script executed:

```bash
#!/bin/bash
# Verify usage of suspicious deps across the repo
set -euo pipefail
echo "Searching for imports/usages..."
rg -n -C2 -g '!**/node_modules/**' -g '!**/dist/**' -g '!**/build/**' \
  -e '\bfrom\s+["'\'']axios["'\'']' \
  -e '\brequire\(["'\'']axios["'\'']\)' \
  -e '\bfrom\s+["'\'']openai["'\'']' \
  -e '\brequire\(["'\'']openai["'\'']\)' \
  -e '\bfrom\s+["'\'']pnpm["'\'']' \
  -e '\brequire\(["'\'']pnpm["'\'']\)' \
  -e '\bfrom\s+["'\'']install["'\'']' \
  -e '\brequire\(["'\'']install["'\'']\)' \
  -e '\bfrom\s+["'\'']or["'\'']' \
  -e '\brequire\(["'\'']or["'\'']\)' \
  motia-content-creation
```

Length of output: 1385


Remove unused dependencies and retain only truly required runtime packages

The usage scan confirms:

  • axios is imported in
    • motia-content-creation/steps/schedule-twitter.step.ts (line 3)
    • motia-content-creation/steps/schedule-linkedin.step.ts (line 3)
      so it must remain in dependencies.
  • openai, pnpm, install, and or have no import/require occurrences and can be safely removed.

Apply this diff to motia-content-creation/package.json:

```diff
   "dependencies": {
     "@mendable/firecrawl-js": "^1.0.0",
     "axios": "^1.10.0",
     "dotenv": "^16.5.0",
     "motia": "^0.4.0-beta.90",
-    "openai": "^4.90.0",
-    "install": "^0.13.0",
-    "or": "^0.2.0",
-    "pnpm": "^10.15.0",
     "zod": "^3.25.67"
   },
```

This pruning reduces supply-chain risk without impacting any runtime code.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-    "axios": "^1.10.0",
-    "dotenv": "^16.5.0",
-    "zod": "^3.25.67",
-    "axios": "^1.10.0"
-    "install": "^0.13.0",
-    "motia": "^0.4.0-beta.90",
-    "openai": "^4.90.0",
-    "or": "^0.2.0",
-    "pnpm": "^10.15.0",
-    "zod": "^3.25.67"
-  },
+  "dependencies": {
+    "@mendable/firecrawl-js": "^1.0.0",
+    "axios": "^1.10.0",
+    "dotenv": "^16.5.0",
+    "motia": "^0.4.0-beta.90",
+    "zod": "^3.25.67"
+  },
```
🤖 Prompt for AI Agents
In motia-content-creation/package.json around lines 22 to 30, the dependencies
list contains unused packages; keep "axios" (used in
steps/schedule-twitter.step.ts and steps/schedule-linkedin.step.ts) and remove
"openai", "pnpm", "install", and "or" from the dependencies section; update
package.json by deleting those four entries (and any trailing commas adjusted)
so only required runtime packages remain, then run a quick install (pnpm/npm)
and a codebase search to confirm no runtime imports were missed.

   "devDependencies": {
     "@types/node": "^20.17.28",
     "@types/react": "^18.3.23",
     "ts-node": "^10.9.2",
     "typescript": "^5.8.3"
   }
-}
+}
34 changes: 18 additions & 16 deletions motia-content-creation/steps/generate-linkedin.step.py
@@ -1,15 +1,14 @@
 import os
 import json
+import ollama
+import asyncio
 from pydantic import BaseModel, HttpUrl
 from datetime import datetime
 from dotenv import load_dotenv
-from openai import AsyncOpenAI

 load_dotenv()

-OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
-
-openai_client = AsyncOpenAI(api_key=OPENAI_API_KEY)
+OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'deepseek-r1')

 class GenerateInput(BaseModel):
     requestId: str
@@ -35,21 +34,24 @@ async def handler(input, context):

     linkedinPrompt = linkedinPromptTemplate.replace('{{title}}', input['title']).replace('{{content}}', input['content'])

-    context.logger.info("🔄 LinkedIn content generation started...")
-
-    linkedin_content = await openai_client.chat.completions.create(
-        model="gpt-4o",
-        messages=[{'role': 'user', 'content': linkedinPrompt}],
-        temperature=0.7,
-        max_tokens=2000,
-        response_format={'type': 'json_object'}
+    context.logger.info(f"🔄 LinkedIn content generation started using Ollama model: {OLLAMA_MODEL}...")
+    response = ollama.chat(
+        model=OLLAMA_MODEL,
+        messages=[{'role': 'user', 'content': linkedinPrompt}],
+        options={
+            'temperature': 0.7,
+            'num_predict': 2000
+        }
     )
-
+
+    response_content = response['message']['content']
+    context.logger.info(f"Received raw response from Ollama: {response_content[:100]}...")

     try:
-        linkedin_content = json.loads(linkedin_content.choices[0].message.content)
+        linkedin_content = json.loads(response['message']['content'])
     except Exception:
-        linkedin_content = {'text': linkedin_content.choices[0].message.content}
-
+        linkedin_content = {'text': response['message']['content']}
     context.logger.info(f"🎉 LinkedIn content generated successfully!")

     await context.emit({
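One caveat with the migration above: reasoning models such as deepseek-r1 usually wrap their chain of thought in `<think>…</think>` tags and often fence the JSON, so a bare `json.loads` on the raw message content will frequently fall through to the `except` branch. A minimal pre-cleaning sketch (the helper name is illustrative, not part of this PR):

```python
import json
import re

def parse_ollama_json(raw: str) -> dict:
    """Best-effort JSON extraction from an Ollama chat message (illustrative helper)."""
    # Reasoning models like deepseek-r1 often prepend <think>...</think> blocks
    cleaned = re.sub(r'<think>.*?</think>', '', raw, flags=re.DOTALL).strip()
    # Models also tend to wrap JSON output in ```json fences
    cleaned = re.sub(r'^```(?:json)?\s*|\s*```$', '', cleaned)
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        # Same fallback shape the step uses today
        return {'text': cleaned}
```

The step's existing try/except would then operate on `parse_ollama_json(response['message']['content'])` instead of the raw string.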
25 changes: 12 additions & 13 deletions motia-content-creation/steps/generate-twitter.step.py
@@ -3,13 +3,11 @@
 from pydantic import BaseModel, HttpUrl
 from datetime import datetime
 from dotenv import load_dotenv
-from openai import AsyncOpenAI
+import ollama

 load_dotenv()

-OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
-
-openai_client = AsyncOpenAI(api_key=OPENAI_API_KEY)
+OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'deepseek-r1')

 class GenerateInput(BaseModel):
     requestId: str
@@ -37,18 +35,19 @@ async def handler(input, context):

     context.logger.info("🔄 Twitter content generation started...")

-    twitter_content = await openai_client.chat.completions.create(
-        model="gpt-4o",
-        messages=[{'role': 'user', 'content': twitterPrompt}],
-        temperature=0.7,
-        max_tokens=2000,
-        response_format={'type': 'json_object'}
-    )
+    twitter_content = ollama.chat(
+        model=OLLAMA_MODEL,
+        messages=[{'role': 'user', 'content': twitterPrompt}],
+        options={
+            'temperature': 0.7,
+            'num_predict': 2000
+        }
+    )

     try:
-        twitter_content = json.loads(twitter_content.choices[0].message.content)
+        twitter_content = json.loads(twitter_content['message']['content'])
     except Exception:
-        twitter_content = {'text': twitter_content.choices[0].message.content}
+        twitter_content = {'text': twitter_content['message']['content']}

     context.logger.info(f"🎉 Twitter content generated successfully!")
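For reference, both generation steps now read `OLLAMA_MODEL` from the environment (falling back to `deepseek-r1`), while the scrape step still requires `FIRECRAWL_API_KEY`. A sketch of the corresponding `.env` entries (values are placeholders):

```ini
# .env — placeholder values
OLLAMA_MODEL=deepseek-r1
FIRECRAWL_API_KEY=your-firecrawl-key
```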
14 changes: 7 additions & 7 deletions motia-content-creation/steps/scrape.step.py
@@ -1,6 +1,6 @@
 import os
 from pydantic import BaseModel, HttpUrl
-from firecrawl import FirecrawlApp
+from firecrawl import Firecrawl
 from datetime import datetime
 from dotenv import load_dotenv

@@ -26,15 +26,15 @@ class ScrapeInput(BaseModel):
 async def handler(input, context):
     context.logger.info(f"🕷️ Scraping article: {input['url']}")

-    app = FirecrawlApp(api_key=FIRECRAWL_API_KEY)
+    firecrawl = Firecrawl(api_key=FIRECRAWL_API_KEY)

-    scrapeResult = app.scrape_url(input['url'])
+    scrapeResult = firecrawl.scrape(input['url'], formats=["markdown"])
Comment on lines +29 to +31
🛠️ Refactor suggestion

Guard missing FIRECRAWL_API_KEY and avoid blocking the event loop; also prefer a neutral client name

  • If FIRECRAWL_API_KEY is unset, Firecrawl(api_key=None) may fail later; fail fast with a clear message.
  • firecrawl.scrape(...) is synchronous; calling it in an async def may block the event loop during network I/O. Offload to a thread.
  • Minor: naming the variable firecrawl shadows the import symbol; client is clearer.

Apply:

```diff
-    firecrawl = Firecrawl(api_key=FIRECRAWL_API_KEY)
-
-    scrapeResult = firecrawl.scrape(input['url'], formats=["markdown"])
+    if not FIRECRAWL_API_KEY:
+        raise RuntimeError("FIRECRAWL_API_KEY is not set. Configure it in your environment.")
+
+    client = Firecrawl(api_key=FIRECRAWL_API_KEY)
+
+    # Offload sync HTTP call to a worker thread to avoid blocking the event loop
+    scrapeResult = await asyncio.to_thread(client.scrape, str(input['url']), formats=["markdown"])
```

Add outside this range:

```python
import asyncio  # at the top with other imports
```
🤖 Prompt for AI Agents
In motia-content-creation/steps/scrape.step.py around lines 29 to 31, guard
against a missing FIRECRAWL_API_KEY, avoid blocking the event loop, and rename
the variable: check that FIRECRAWL_API_KEY is set and raise a clear exception
immediately if not; instantiate the client into a variable named client (not
firecrawl) and call client.scrape off the event loop by wrapping the synchronous
call in asyncio.to_thread (or run_in_executor) from inside the async function;
also ensure import asyncio is added at the top with other imports.


-    if not scrapeResult.success:
-        raise Exception(f"Firecrawl scraping failed: {scrapeResult.error}")
+    if not hasattr(scrapeResult, 'markdown'):
+        raise Exception(f"Firecrawl scraping failed: No content returned")

-    content = scrapeResult.markdown
-    title = scrapeResult.metadata.get('title', 'Untitled Article')
+    content = scrapeResult.markdown or ''
+    title = getattr(scrapeResult.metadata, 'title', 'Untitled Article') if hasattr(scrapeResult, 'metadata') else 'Untitled Article'

     context.logger.info(f"✅ Successfully scraped: {title} ({len(content) if content else 0} characters)")

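The review's `asyncio.to_thread` suggestion can be exercised without Firecrawl itself. A self-contained sketch with a stand-in synchronous scraper (`scrape_sync` and its return shape are illustrative, not the Firecrawl API):

```python
import asyncio

def scrape_sync(url: str) -> dict:
    # Stand-in for the blocking Firecrawl call (client.scrape); shape is illustrative
    return {'markdown': f'# Fetched from {url}', 'metadata': {'title': 'Example Article'}}

async def handler(url: str) -> dict:
    # Run the synchronous call in a worker thread so the event loop stays free
    result = await asyncio.to_thread(scrape_sync, url)
    title = result.get('metadata', {}).get('title', 'Untitled Article')
    return {'title': title, 'content': result.get('markdown') or ''}

print(asyncio.run(handler('https://example.com')))
```

The same pattern applies to any other synchronous SDK call made inside an `async def` step handler.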