This guide details how to integrate the CYNEPIC Architecture (Neuro-Symbolic-Causal Agentic System) into existing enterprise infrastructures, including Cloud Platforms, ERP systems, and SaaS applications.
Best for: Interactive dashboards, chat bots, and real-time decision support.
Endpoint: POST /query
Pattern:
- Client sends natural language query + context.
- CYNEPIC routes, analyzes, and reasons over the request.
- Returns structured JSON with `result`, `confidence`, and `actions`.
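The synchronous pattern above can be sketched as a small client helper. This is illustrative only: the base URL, the 30-second timeout, and the exact response field names (`result`, `confidence`) are assumptions based on the description above, not a confirmed schema.

```python
import requests

def ask_cynepic(query: str, context: dict, base_url: str = "http://localhost:8000") -> dict:
    """Send a natural-language query plus context; return the parsed JSON."""
    resp = requests.post(
        f"{base_url}/query",
        json={"query": query, "context": context},
        timeout=30,  # full causal + Bayesian analysis can take 10-20 s
    )
    resp.raise_for_status()
    return resp.json()

def summarize(response: dict) -> str:
    """Flatten the structured response for a dashboard or chat reply."""
    return f"{response['result']} (confidence: {response['confidence']:.2f})"
```

A dashboard would call `ask_cynepic(...)` per user turn and render `summarize(...)` alongside the raw `actions` list.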
Best for: High-volume transaction monitoring, automated compliance checks.
Mechanism: Kafka / Webhooks
Pattern:
- Upstream system (e.g., SAP, Salesforce) emits an event to Kafka topic `orders.created`.
- CYNEPIC consumer reads the event.
- Runs Guardian Policy Check.
- If `risk > threshold`, trigger HumanEscalation; else, publish to `orders.approved`.
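The consumer-side branch can be sketched as a pure routing function. The topic names come from the pattern above; the 0.7 default threshold and the `order_id` field are illustrative assumptions, and the Kafka wiring itself (e.g. via confluent-kafka or kafka-python) is deliberately omitted so the decision logic stays self-contained.

```python
def route_order_event(event: dict, risk_score: float, threshold: float = 0.7):
    """Return the downstream action for one `orders.created` event.

    `risk_score` is assumed to come from the Guardian Policy Check;
    the 0.7 threshold is a placeholder, not a recommended value.
    """
    if risk_score > threshold:
        # Risk above threshold triggers HumanEscalation per the pattern above.
        return ("HumanEscalation", event["order_id"])
    # Otherwise the event is republished to `orders.approved`.
    return ("orders.approved", event["order_id"])
```

In a real consumer loop, the second tuple element would become the key of the message published downstream.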
Best for: Nightly causal analysis of historical data.
Pattern:
- Upload CSV/Parquet to `/datasets`.
- Trigger `/simulations/run` with scenario config.
- Retrieve results via `/simulations/compare`.
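A nightly job built on those three endpoints might look like the sketch below. The host, the request/response field names (`id`, `dataset_id`, `scenario`, `run_id`), and the HTTP method for `/simulations/compare` are all assumptions; consult the actual API schema before use.

```python
import requests

BASE = "http://localhost:8000"  # assumed CYNEPIC API host

def simulation_payload(dataset_id: str, scenario: dict) -> dict:
    """Body for POST /simulations/run (field names are assumptions)."""
    return {"dataset_id": dataset_id, "scenario": scenario}

def run_batch(dataset_path: str, scenario: dict) -> dict:
    """Upload a CSV/Parquet file, trigger a run, then fetch the comparison."""
    with open(dataset_path, "rb") as f:
        ds = requests.post(f"{BASE}/datasets", files={"file": f}, timeout=60)
    ds.raise_for_status()
    run = requests.post(
        f"{BASE}/simulations/run",
        json=simulation_payload(ds.json()["id"], scenario),
        timeout=600,  # nightly causal analysis is allowed to be slow
    )
    run.raise_for_status()
    cmp = requests.post(
        f"{BASE}/simulations/compare",
        json={"run_id": run.json()["id"]},
        timeout=60,
    )
    cmp.raise_for_status()
    return cmp.json()
```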
Scenario: Automate "Procurement Approval" based on Supplier Risk Causal Score.
Steps:
- Data Ingestion:
  - Export Supplier Performance data to CYNEPIC via `/datasets`.
  - Variables: `delivery_delay`, `financial_health`, `region_risk`.
- API Hook:
  - In the ERP workflow, call the CYNEPIC API before final approval:

```shell
curl -X POST https://cynepic-api.internal/query \
  -H "Authorization: Bearer <token>" \
  -d '{
    "query": "Assess risk for Supplier X PO #12345",
    "context": {
      "supplier_id": "SUP-99",
      "po_amount": 50000,
      "region": "APAC"
    }
  }'
```
- Decision Logic:
  - If `response.guardian_verdict == "APPROVED"`, auto-approve in ERP.
  - If `"REJECTED"`, flag for review.
- API Service: Stateless Docker container (FastAPI). Scale horizontally.
- Dashboard: React SPA (S3/CloudFront or Nginx).
- State Store:
- Neo4j (Managed AuraDB or Self-Host EC2).
- Redis (ElastiCache).
- Identity: Use IAM roles for service-to-service auth.
- Secrets: Inject `DEEPSEEK_API_KEY` via Secrets Manager (not env vars).
- Network: Run API/DB in private subnets; expose only via Load Balancer/WAF.
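Injecting the key at startup rather than via env vars might look like this sketch. In production, `client` would be `boto3.client("secretsmanager")`; the secret id and the JSON layout of the secret are assumptions for illustration.

```python
import json

def load_deepseek_key(client, secret_id: str = "cynepic/deepseek-api-key") -> str:
    """Read DEEPSEEK_API_KEY from a Secrets Manager secret at startup.

    `client` is any object exposing get_secret_value(SecretId=...), so the
    function is testable with a stub; the secret id is a placeholder name.
    """
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])["DEEPSEEK_API_KEY"]
```

Passing the client in (rather than constructing it inside the function) keeps the API service stateless and the secret out of process environment listings.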
When planning your integration, answer these:
- Epistemic Uncertainty: "Do we have enough data validity to trust the causal model?"
- Mitigation: Use the `confidence_score` in the API response to gate automated actions.
- Latency Budget: "Can we wait 10-20 seconds for full Causal + Bayesian analysis?"
- Mitigation: Use "Clear" domain routing for simple lookups (<1s).
- Human-in-the-Loop: "Who resolves 'Disorder' or 'Chaotic' queries?"
- Mitigation: Configure HumanLayer channels (Slack/Teams) correctly in `.env`.
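Gating automated actions on `confidence_score` can be reduced to a one-branch helper. The 0.85 cutoff is illustrative, not a recommendation; tune it per domain against the cost of a wrong automated action.

```python
def gate_action(response: dict, auto_threshold: float = 0.85) -> str:
    """Decide whether a CYNEPIC response may drive an automated action.

    Missing or low confidence_score always falls back to human review,
    which is the safe default for 'Disorder'/'Chaotic' queries.
    """
    if response.get("confidence_score", 0.0) >= auto_threshold:
        return "AUTO_EXECUTE"
    return "HUMAN_REVIEW"
```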
```python
import requests

def check_po_risk(po_data):
    """Query CYNEPIC for a purchase-order risk verdict."""
    response = requests.post(
        "http://localhost:8000/query",
        json={
            "query": f"Analyze risk for PO {po_data['id']}",
            "context": po_data,
        },
        timeout=30,  # allow time for full causal analysis
    )
    response.raise_for_status()
    result = response.json()
    if result['guardian']['status'] == 'pass':
        return "APPROVE"
    return f"REVIEW: {result['guardian']['reason']}"
```

CARF exposes 18 cognitive tools via its Model Context Protocol (MCP) server for integration with external AI agents (Claude, GPT, custom agents).
| Module | Tools | Purpose |
|---|---|---|
| router | classify_query, get_routing_config | Cynefin domain classification |
| causal | causal_analyze, estimate_effect | Causal inference via DoWhy/EconML |
| bayesian | bayesian_infer, update_beliefs | Bayesian posterior estimation |
| guardian | check_policy, evaluate_risk | Policy compliance checking |
| oracle | fast_predict, compare_strategies | ChimeraOracle fast causal predictions |
| memory | query_experience_buffer, search_memory | Semantic memory retrieval |
| reflector | reflector_repair | Self-correction on rejected actions |
```shell
# Start MCP server (alongside FastAPI)
python -m src.mcp
```

MCP tools expose CARF's analytical capabilities as read-mostly services. External agents cannot use MCP tools to modify CARF's policies, configuration, or internal state. The `reflector_repair` tool allows external agents to use CARF's repair logic on their proposed actions, not on CARF's internals.
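For an external agent, invoking one of these tools means sending an MCP `tools/call` JSON-RPC message. The sketch below only builds the message shape; the transport (stdio or HTTP/SSE) depends on how `src.mcp` is configured, and the example arguments are hypothetical.

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request for one of CARF's 18 tools."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. ask the router module to classify a query:
msg = mcp_tool_call("classify_query", {"query": "Why did supplier lead times spike?"})
```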
| Method | Path | Purpose |
|---|---|---|
| POST | /world-model/counterfactual | Counterfactual reasoning from natural language |
| POST | /world-model/counterfactual/compare | Multi-scenario comparison |
| POST | /world-model/counterfactual/attribute | Causal attribution with but-for tests |
| POST | /world-model/simulate | Forward simulation with do-calculus interventions |
| POST | /world-model/neurosymbolic/reason | Full neural-symbolic reasoning loop |
| POST | /world-model/neurosymbolic/validate | Claim validation against symbolic KB |
| GET | /world-model/h-neuron/status | H-Neuron sentinel configuration |
| POST | /world-model/h-neuron/assess | Run hallucination risk assessment |
| POST | /world-model/retrieve/neurosymbolic | NeSy-augmented retrieval (3-layer) |
| POST | /world-model/analyze-deep | Combined CARF + counterfactual + NeSy + simulation |
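As a minimal sketch of the first endpoint: since `/world-model/counterfactual` takes natural language, the request body can stay simple. The `query` field name and the host are assumptions; check the API schema for the confirmed shape.

```python
import requests

def counterfactual_payload(question: str) -> dict:
    """Body for POST /world-model/counterfactual (field name assumed)."""
    return {"query": question}

def ask_counterfactual(question: str, base_url: str = "http://localhost:8000") -> dict:
    """Submit a natural-language counterfactual and return the parsed result."""
    resp = requests.post(
        f"{base_url}/world-model/counterfactual",
        json=counterfactual_payload(question),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```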
Key endpoints under /governance/*:
- `GET /governance/domains` - List governance domains
- `POST /governance/policies/extract` - LLM-assisted policy extraction
- `GET /governance/conflicts` - Cross-domain conflict detection
- `GET /governance/compliance/{framework}` - Compliance scoring (EU AI Act, CSRD, GDPR, ISO 27001)
- `GET /governance/cost/breakdown` - LLM token cost analysis
- `GET /governance/audit` - Audit trail
Production deployments use Firebase JWT authentication:
```shell
# Authenticated request
curl -X POST https://api.example.com/query \
  -H "Authorization: Bearer <firebase_jwt_token>" \
  -H "Content-Type: application/json" \
  -d '{"query": "Analyze supplier risk"}'
```

Local development bypasses auth automatically.
CARF supports three deployment profiles via the `CARF_PROFILE` environment variable:
| Profile | Auth | Rate Limit | CORS | Use Case |
|---|---|---|---|---|
| research | None | None | * | Local development, experimentation |
| staging | API key | 100 req/min | Configured origins | Testing, demo |
| production | API key + Firebase JWT | 50 req/min | Strict origins | Enterprise deployment |
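A deployment script can resolve the active profile from `CARF_PROFILE` along the lines below. The dictionary merely mirrors the table for illustration; the actual enforcement of auth, rate limits, and CORS lives inside CARF, and failing fast on an unknown profile name is a design choice of this sketch.

```python
import os

# Illustrative mirror of the profile table above.
PROFILES = {
    "research":   {"auth": None, "rate_limit": None, "cors": "*"},
    "staging":    {"auth": "api_key", "rate_limit": 100, "cors": "configured"},
    "production": {"auth": "api_key+firebase_jwt", "rate_limit": 50, "cors": "strict"},
}

def active_profile() -> dict:
    """Resolve the deployment profile from CARF_PROFILE (default: research)."""
    name = os.environ.get("CARF_PROFILE", "research")
    if name not in PROFILES:
        # Fail fast rather than silently falling back to permissive settings.
        raise ValueError(f"Unknown CARF_PROFILE: {name!r}")
    return PROFILES[name]
```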