Frequently Asked Questions (FAQ)

Does Daemo store my data?
No. Daemo is a runtime, not a database. We process requests in real time. Your business data never rests on our servers—it stays in your database. The AI interacts with your data only through the functions you explicitly expose via the SDK.

Can I use my own LLM provider keys?
Yes. Daemo provides managed LLM access by default (no setup required), but you can "Bring Your Own Key" (BYOK) for OpenAI, Anthropic, or Gemini if you require strict data isolation or want to use your existing enterprise contracts. Configure this in the Daemo Control Plane.

How does Daemo connect to my application without opening my network?
Daemo uses a Reverse Gateway pattern. Your application dials out to the Daemo engine via a secure, persistent outbound tunnel. You do not need to open inbound ports, configure VPNs, or expose your internal network. This is the "CISO Dream" feature—your data stays behind your firewall.

How is Daemo different from MCP?
MCP is the pipe; Daemo is the engine. MCP (introduced by Anthropic) standardizes how an LLM connects to a data source—like a USB cable. Daemo adds the safety, state, and orchestration layer on top. MCP passes messages; Daemo executes code, enforces permissions, and supports rollback. They solve different problems. Read the full comparison →

How is this different from NL-to-SQL tools?
NL-to-SQL bypasses your application layer. Tools that write raw SQL (SELECT * FROM users) can query anything—they don't know about permissions, business rules, or side effects. Daemo uses NL-to-Function: the AI calls your typed methods (e.g., GetUser(123)), so every action respects your existing validation, logging, and security checks. Learn why this matters →
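A minimal TypeScript sketch of the NL-to-Function idea. All names here (getUser, Context, the in-memory users table) are illustrative stand-ins, not the Daemo SDK: the point is that a typed method carries its own permission check and audit log, which raw SQL would silently bypass.

```typescript
// Hypothetical sketch: the AI may only call typed functions like this one,
// so permissions, logging, and validation run on every access.

type Context = { callerId: number; role: "admin" | "agent" };
type User = { id: number; name: string; email: string };

const users: Record<number, User> = {
  123: { id: 123, name: "Bob", email: "bob@example.com" },
};

function getUser(id: number, ctx: Context): User {
  // Business rule enforced in code -- a raw SELECT would skip this.
  if (ctx.role !== "admin" && ctx.callerId !== id) {
    throw new Error("Forbidden: cannot read another user's record");
  }
  console.log(`audit: caller ${ctx.callerId} read user ${id}`); // existing logging
  const user = users[id];
  if (!user) throw new Error(`User ${id} not found`);
  return user;
}
```

A self-read succeeds; reading someone else's record throws, no matter what the prompt asked for.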

Do I need to train a model on my data?
No. Daemo doesn't require fine-tuning, RAG pipelines, or document uploads. Your functions are understood immediately through their typed schemas and natural-language descriptions. No waiting for model training—just deploy and go.

How does Daemo prevent hallucinations?
Through architecture, not prompts. The Two-Phase Engine separates data gathering (Phase 1) from presentation (Phase 2). The AI must produce verified, structured data (finalJSON) before it can speak to the user. All math and aggregations happen in real logic—not in the LLM's "head."
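The separation above can be sketched in a few lines of TypeScript. This is an illustration of the two-phase idea, not the actual engine: Phase 1 computes the numbers in real code, and Phase 2 can only template over that verified structure.

```typescript
// Illustrative two-phase split: verified data first, presentation second.

type FinalJSON = { orderCount: number; total: number };

// Phase 1: gather and compute -- arithmetic runs in real logic,
// never in the model's free-form text.
function phase1GatherData(amounts: number[]): FinalJSON {
  const total = amounts.reduce((sum, a) => sum + a, 0);
  return { orderCount: amounts.length, total };
}

// Phase 2: present -- this step can only render the structured result.
function phase2Present(data: FinalJSON): string {
  return `You have ${data.orderCount} orders totaling $${data.total.toFixed(2)}.`;
}
```

Because the presentation step receives only the computed finalJSON, it has no opportunity to invent a different total.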

What happens if a function call fails?
Daemo has built-in Self-Correction. If a function call fails or returns an error, the engine can adapt, retry with different parameters, or gracefully handle the exception. It supports up to 20 reasoning steps per query to solve complex problems.
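In the spirit of that self-correction loop, here is a minimal sketch. The 20-step cap comes from the answer above; everything else (the function names, the list-of-strategies shape) is a hypothetical simplification of how an engine can try a call, record the failure, and move on to an adapted attempt.

```typescript
// Illustrative retry loop: try each strategy in order, up to a step cap.

type Step<T> = () => T;

function runWithSelfCorrection<T>(attempts: Step<T>[], maxSteps = 20): T {
  let lastError: unknown;
  for (const attempt of attempts.slice(0, maxSteps)) {
    try {
      return attempt(); // success ends the loop
    } catch (err) {
      lastError = err; // record the failure and adapt with the next strategy
    }
  }
  throw lastError; // every strategy failed within the step budget
}
```

The real engine adapts parameters dynamically rather than walking a fixed list, but the control flow is the same: failures are caught, not fatal.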

Can Daemo handle multi-step tasks?
Yes. The Daemo engine supports multi-step reasoning. For example, a query like "Refund Bob's last order" might trigger: findUser('bob') → getLastOrder(userId) → refund(orderId). It can chain 5, 10, or 20 function calls together as needed.
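The "Refund Bob's last order" chain can be sketched as plain function composition. These stubs are hypothetical stand-ins for the typed methods you would expose; the point is that each call's output feeds the next call's input.

```typescript
// Hypothetical stubs for the three chained calls.

function findUser(name: string): number {
  const ids: Record<string, number> = { bob: 42 }; // stub directory
  const id = ids[name.toLowerCase()];
  if (id === undefined) throw new Error(`No user named ${name}`);
  return id;
}

function getLastOrder(userId: number): string {
  return `order-${userId}-latest`; // stub: look up the newest order
}

function refund(orderId: string): string {
  return `refunded ${orderId}`; // stub: issue the refund
}

// The engine chains the calls, threading each result into the next step.
const result = refund(getLastOrder(findUser("bob")));
```

A failure at any link (e.g. an unknown user) surfaces as an error the engine's self-correction can react to, rather than a silent wrong refund.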

Can I require human approval for sensitive actions?
Daemo supports Human-in-the-Loop workflows. You can flag high-stakes functions to trigger an Approval Inbox. A human manager must approve the action before Daemo executes it. The AI pauses and waits—no irreversible actions without oversight.
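A toy sketch of the approval gate, assuming a deliberately simplified shape (requestApproval, approve, an in-memory inbox) that is not the Daemo API: the flagged action is queued rather than executed, and its code runs only after a human signs off.

```typescript
// Illustrative approval inbox: high-stakes actions wait for a human.

type PendingAction = { fn: () => string; approved: boolean };

const approvalInbox: PendingAction[] = [];

// Called when the AI wants to run a flagged function: the action is
// parked in the inbox and the AI pauses -- nothing executes yet.
function requestApproval(fn: () => string): PendingAction {
  const pending: PendingAction = { fn, approved: false };
  approvalInbox.push(pending);
  return pending;
}

// Called by the human reviewer: only now does the action run.
function approve(pending: PendingAction): string {
  pending.approved = true;
  return pending.fn();
}
```

The key property is that the side effect lives inside fn, so there is no code path that reaches it without the approve step.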

How does Daemo enforce user permissions?
Through Context Injection. Authenticated user IDs are injected directly from your JWTs into function calls. The AI cannot "pretend" to be a different user or escalate privileges, even if a malicious prompt tries to trick it.
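A minimal sketch of why context injection resists impersonation, under an assumed shape (Claims, makeGetMyOrders) that is illustrative rather than the SDK: the exposed function takes no user ID parameter at all, so there is nothing for a malicious prompt to substitute.

```typescript
// Illustrative context injection: identity is bound from verified JWT
// claims at wiring time, never supplied by the model.

type Claims = { sub: string }; // subject claim from the verified token

const ordersByUser: Record<string, string[]> = {
  "user-1": ["order-a"],
  "user-2": ["order-b"],
};

// The function the AI sees has zero identity parameters -- the user ID
// is closed over from the claims, so "act as user-2" has no effect.
function makeGetMyOrders(claims: Claims): () => string[] {
  return () => ordersByUser[claims.sub] ?? [];
}

const getMyOrders = makeGetMyOrders({ sub: "user-1" }); // from the JWT
```

Because the binding happens outside the model's reach, privilege is a property of the authenticated session, not of the conversation.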

Can I audit what the AI did?
Yes. Every prompt, every reasoning step, every function call, and every result is logged and traceable. You get a full audit trail for debugging, compliance, and security reviews.

Can I switch between LLM providers?
Yes. Daemo is LLM Provider Agnostic. You can switch between OpenAI, Anthropic (Claude), or Google Gemini without changing your application code. This prevents vendor lock-in and lets you choose the best model for your use case.

Does Daemo work with legacy systems?
Yes. Daemo was designed for enterprise reality. It supports .NET Framework (4.x), modern .NET 6-9, and Node.js/TypeScript natively. The Reverse Gateway pattern means you can modernize a 15-year-old monolith without rewriting it.

Does Daemo remember context across a conversation?
Yes. Daemo maintains Memory & State across conversation turns. Follow-up questions like "What about last month?" work naturally because the engine preserves context from previous queries in the same thread.
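Per-thread state is what lets an elliptical follow-up resolve. A minimal sketch, with hypothetical names (handleTurn, ThreadState): each turn reads the thread's prior context and falls back to it when the new query omits a detail.

```typescript
// Illustrative per-thread memory: a follow-up with no explicit metric
// reuses the one from the previous turn in the same thread.

type ThreadState = { lastMetric?: string };

const threads = new Map<string, ThreadState>();

function handleTurn(threadId: string, metric?: string): string {
  const state = threads.get(threadId) ?? {};
  const resolved = metric ?? state.lastMetric; // fall back to prior context
  if (!resolved) throw new Error("No metric established in this thread yet");
  threads.set(threadId, { lastMetric: resolved }); // persist for next turn
  return `report: ${resolved}`;
}
```

So "show me revenue" followed by a bare "what about last month?" still knows the conversation is about revenue, because the state is keyed by thread, not by turn.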

Can I self-host Daemo?
Yes. If your compliance requirements demand it, you can run the entire Daemo engine on your own infrastructure (AWS, Azure, on-prem). Contact us for self-hosted deployment options.

Where can I find examples and community integrations?
Check out the Dev Registry. You can browse and install community-contributed functions, agent templates, and example integrations to jumpstart your project.