Open core · BSL 1.1 · Shipping on main

Bond AI agents into production-grade teams.

Molecule AI is the org-native control plane for heterogeneous AI agent teams. Workspaces are roles. The org chart is topology. Six runtimes, hierarchical memory, skill evolution, and operational guardrails — out of the box.

  • Visual Canvas
  • Runtime Compatibility
  • Hierarchical Memory
  • Skill Evolution
  • Operational Guardrails
CONTROL PLANE / PRODUCTION
molecule.org · LIVE
Organization Root
molecule://root
6 RUNTIMES · MAIN · 14 WORKSPACES
Engineering
2 agents · CLAUDE CODE
ACTIVE
Frontend Agent
claude-sonnet-4-5
Backend Agent
claude-opus-4
MEMORY · LOCAL · TEAM
Research
2 agents · LANGGRAPH
ACTIVE
Knowledge Synth
gpt-5
Corpus Indexer
nemotron-70b
MEMORY · LOCAL
Operations
2 agents · CREWAI
IDLE
Deploy Orchestrator
claude-opus-4
Incident Router
gpt-5
MEMORY · GLOBAL · TEAM
14 workspaces · 6 runtimes · 3 memory scopes · 0 drift

Runs with the runtimes, models, and tools your teams already use

Why Now

Coordination is the new bottleneck.

Every team can spin up a demo agent. Nobody has the layer that turns a handful of demos into a governed, compounding organization.

  • 01

    Single-agent demos hit a ceiling

    One agent answers questions — it can't run a release pipeline or escalate across org boundaries. Coordination, not capability, is the constraint.

  • 02

    Governance is a launch prerequisite

    No scoped authority, no audit trail, no human-in-the-loop gates — no production deployment. Governance is day-one, not phase-two.

  • 03

    Memory boundaries define platform value

    Flat shared context leaks data and breaks at scale. Topology-scoped memory (LOCAL / TEAM / GLOBAL) is the new baseline.

  • 04

    Heterogeneous runtimes are the norm

    LangGraph, Claude Code, CrewAI — no single framework wins everywhere. The control plane must be runtime-agnostic by design.

What You Can Build

AI agent teams for Research Organizations

One workspace becomes a role. A group of workspaces becomes a team. Teams compose into organizations — fractally, without drift.

HMA · DeepAgents · Langfuse · A2A

Parallel research with isolated memory

Sibling workspaces explore independently under LOCAL scope; the parent synthesizes via TEAM memory.

  • Sibling isolation by default via LOCAL scope
  • Parent synthesis through TEAM memory
  • Per-workspace trace chains via Langfuse
workspace.yml
workspace:
  id: research
  role: research-orgs
  runtime: HMA
  memory:
    scope: [LOCAL, TEAM]
    awareness: on
    hermes: on
  governance:
    rbac: enforced
    audit: jsonl
  adapters:
    - HMA
    - DeepAgents
    - Langfuse
    - A2A
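
As a rough illustration of what a loader might check when it parses a definition like the workspace.yml above, here is a minimal validation sketch in Python. The schema rules and function name are assumptions for illustration, not Molecule's actual loader.

```python
# Minimal sketch of schema checks a loader might run; illustrative only.
VALID_SCOPES = {"LOCAL", "TEAM", "GLOBAL"}

def validate_workspace(ws: dict) -> list[str]:
    """Return a list of validation errors (empty means valid)."""
    errors = []
    for key in ("id", "role", "runtime", "memory"):
        if key not in ws:
            errors.append(f"missing required field: {key}")
    bad = set(ws.get("memory", {}).get("scope", [])) - VALID_SCOPES
    if bad:
        errors.append(f"unknown memory scopes: {sorted(bad)}")
    return errors

# Mirrors the workspace.yml above as a parsed dict
research = {
    "id": "research",
    "role": "research-orgs",
    "runtime": "HMA",
    "memory": {"scope": ["LOCAL", "TEAM"], "awareness": True, "hermes": True},
}
```

Validation on parse is what keeps a declarative workspace contract stable while the runtime behind it is swapped.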
Molecular Memory

Memory follows the shape of your organization.

Three scopes — LOCAL, TEAM, GLOBAL — replace flat shared context with hierarchy-aware isolation. Awareness remembers what happened; Hermes promotes repeated wins into reusable skills.

LOCAL

Private scratchpad per workspace.

Isolated context invisible to siblings and unrelated teams. Each role keeps its own working memory.

TEAM

Shared within parent-child hierarchy.

Handoff context scoped to the immediate team — no leaks to sibling workspaces, no cross-pollination.

GLOBAL

Org-wide knowledge from root.

Policies and standards flow down the hierarchy. Read: everyone. Write: root only.
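
The three scopes above can be sketched as visibility rules over the org tree. This is a hedged illustration of the LOCAL / TEAM / GLOBAL semantics described here; the Workspace type and rule functions are hypothetical, not Molecule's API.

```python
# Hedged sketch of LOCAL / TEAM / GLOBAL visibility; the Workspace type
# and rule functions are hypothetical, not Molecule's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workspace:
    id: str
    parent: Optional["Workspace"] = None

def can_read(reader: Workspace, owner: Workspace, scope: str) -> bool:
    if scope == "LOCAL":
        # private scratchpad: visible only to the owning workspace
        return reader is owner
    if scope == "TEAM":
        # shared within the immediate parent-child pair; siblings see nothing
        return reader is owner or reader.parent is owner or owner.parent is reader
    if scope == "GLOBAL":
        return True  # org-wide read access
    raise ValueError(f"unknown scope: {scope}")

def can_write(writer: Workspace, scope: str) -> bool:
    # GLOBAL is writable only from the root of the hierarchy
    return scope != "GLOBAL" or writer.parent is None
```

Under these rules, two siblings under the same root cannot read each other's LOCAL or TEAM memory, while everyone can read GLOBAL and only the root can write it.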

Hermes is the skill layer

Awareness stores what happened. Hermes reads memory back, finds repeatable patterns, and promotes the ones that work into reusable skills.

  1. STEP 1

    Load workspace memory first

  2. STEP 2

    Recall the pattern that matches the task

  3. STEP 3

    Promote repeated wins into skills
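
The three steps above amount to a promote-on-repetition loop. A minimal sketch, assuming a simple success counter and an illustrative threshold (Molecule's actual promotion criteria are not specified here):

```python
# Promote-on-repetition sketch; the threshold and event shape are
# illustrative assumptions, not Hermes internals.
from collections import Counter

PROMOTION_THRESHOLD = 3  # assumed: three repeated wins promote a pattern

def promote_skills(memory_events: list[dict]) -> list[str]:
    """Step 1: load memory. Step 2: count matching patterns.
    Step 3: promote the ones that repeatedly succeeded."""
    wins = Counter(
        e["pattern"] for e in memory_events if e.get("outcome") == "success"
    )
    return [pattern for pattern, n in wins.items() if n >= PROMOTION_THRESHOLD]
```

The key design point is that promotion reads from memory rather than from prompts, so skills accumulate without inflating context.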

Why This Wins in Production

  • Sibling workspaces isolated by default
  • Memory sharing follows org topology exactly
  • Escalation paths mirror hierarchy
  • Repeated workflows become reusable skills
  • Audit-ready JSON Lines event logging
  • Zero prompt inflation as teams grow

The self-improving flywheel

  1. 01 Task execution
  2. 02 Durable insight captured in memory
  3. 03 Repeated success becomes a signal
  4. 04 Workflow promoted into a reusable skill
  5. 05 Skill hot-reloads into the runtime
  6. 06 Future work gets faster and more reliable
Adapter Ecosystem

One control plane, interchangeable layers.

Molecule does not bind you to one model, one CLI, or one framework. It standardizes the boundary between the org layer and whatever runs inside each workspace.

Model Providers

Choose the model without changing the org contract.

Claude, OpenAI, Nemotron, and local models can sit behind the same workspace role. The model is an implementation detail; workspace identity stays stable.

  • Claude
  • OpenAI
  • Nemotron
  • Local

Runtime Adapters

Swap execution engines per role or per team.

Claude Code, OpenClaw, LangGraph, CrewAI, AutoGen, and DeepAgents all plug into the same topology today; NemoClaw is WIP on a feature branch. Each adapter owns execution; Molecule owns hierarchy, routing, and governance.

  • Claude Code
  • OpenClaw
  • LangGraph
  • CrewAI
  • AutoGen
  • DeepAgents
  • NemoClaw · WIP

Tools · Eval · Observability

Connect the ecosystem, then measure it.

MCP shares tools, the NeMo Agent Toolkit broadens tooling and evaluation, and Langfuse traces results. This is the layer where the platform learns what works across runtimes and feeds that signal back into memory.

  • MCP
  • NeMo Toolkit
  • Langfuse
  • OpenTelemetry
What Ships in Main

A full control plane, not a proof of concept.

Every feature below is shipping on the default branch of the open-core monorepo today — no private fork, no unreleased alpha, no demo-only code paths.

CANVAS

Workspace Canvas

Next.js 15 · React Flow · Zustand · Tailwind v4

  • Drag-to-nest team building
  • Empty-state deployment + onboarding wizard
  • Template palette + bundle import/export
  • 10-tab side panel: chat · activity · details · skills · terminal · config · files · memory · traces · events
  • Real-time topology via WebSocket fanout
PLATFORM

Control Plane

Go 1.25 · Gin · Postgres · Python 3.11 · Fly Machines

  • Workspace CRUD and provisioning
  • Registry, heartbeats, restart, pause/resume
  • Browser-safe A2A proxy
  • Team expansion / collapse (fractal recursion)
  • Global secrets with per-workspace override
  • JSON Lines audit trail
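
A JSON Lines audit trail like the one listed above is an append-only file with one JSON event per line. A minimal sketch with assumed field names (the control plane itself is Go; this Python version only illustrates the format):

```python
# Append-only JSON Lines audit sketch; field names are assumptions.
import json
import time

def append_audit(path: str, actor: str, action: str, target: str) -> dict:
    """Append one audit event as a single JSON line."""
    event = {
        "ts": time.time(),   # event timestamp (epoch seconds)
        "actor": actor,      # workspace or user that acted
        "action": action,    # e.g. "workspace.create", "pause", "resume"
        "target": target,    # the workspace the action applied to
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")  # one event per line, append-only
    return event
```

One-event-per-line append keeps the log streamable and grep-friendly, and each line stays independently parseable for audit tooling.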
Platform Moats

Why Molecule is defensible.

Four architectural decisions compound into a durable advantage — each verifiable in the codebase today.

  • 1

    The node is a role

    Workspaces are durable org roles — they survive model swaps, framework changes, and team restructuring.

  • 2

    The org chart is the topology

    Communication, memory, escalation, and approval all follow one hierarchy. Zero manual wiring, zero drift.

  • 3

    Governance without runtime lock-in

    Unified authority and audit across six runtimes — without forcing a single framework on any team.

  • 4

    Memory as infrastructure

    HMA is the foundation, not a bolt-on. It enables team expansion, skill compounding, and org learning at scale.

Current Product Proof

6+1 Runtime Adapters
LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, OpenClaw on main; NemoClaw WIP on a feature branch.
4 Security Tiers
T1 sandbox → T4 full-host. Per-workspace isolation scaled to risk.
614 Tests Across 3 Layers
Go 1.25 + vitest + Python 3.11 pytest. Race detection and coverage in CI.
10 Canvas Ops Tabs
Chat, activity, details, skills, terminal, config, files, memory, traces, events.
Fractal Team Expansion
Any workspace becomes a sub-team. Recursion is native to the topology.
20+ MCP Tools
Platform ops exposed to Claude Code, Cursor, or Codex via MCP.
The category gap

Why Molecule.

Every tool below solves a real piece of the problem. None of them solve the organizing layer above them — which is exactly where Molecule lives.

  • Workflow builders
    Best at: visual task automation with drag-and-drop nodes.
    Where it breaks: nodes are tasks, not durable organizational roles.
    What Molecule adds: role-native workspaces, hierarchy, and long-lived teams.

  • Agent frameworks
    Best at: strong runtime semantics and execution primitives.
    Where it breaks: weak control plane and weak org-level operations.
    What Molecule adds: unified lifecycle, canvas, registry, policies, and observability.

  • Coding agents
    Best at: excellent local execution with CLI-native continuity.
    Where it breaks: usually not designed as team infrastructure.
    What Molecule adds: workspace abstraction, A2A collaboration, platform ops.

  • Custom multi-agent graphs
    Best at: full flexibility over topology and handoffs.
    Where it breaks: brittle topology and governance sprawl as teams grow.
    What Molecule adds: standardized operating model without losing runtime freedom.

Many agent runtimes, one organizational operating system.

Frequently Asked Questions

Questions teams ask before adopting Molecule AI.

Short answers to the questions that come up most. If yours isn't here, open a discussion on the monorepo.

What is Molecule AI, in one sentence?

Molecule AI is the open-core control plane that lets heterogeneous AI agents — LangGraph, Claude Code, CrewAI, AutoGen, DeepAgents, OpenClaw — run as governed, memory-scoped teams instead of disconnected demos.

How is it different from agent frameworks like LangGraph or CrewAI?

Those frameworks solve the execution layer — how an agent runs, calls tools, and hands off. Molecule AI solves the organizing layer above them: durable workspace roles, hierarchy-aware memory, governance, and operations. Many agent runtimes, one organizational operating system.

Which runtimes are supported today?

Six adapters are shipping on the main branch of the monorepo: LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, and OpenClaw. NemoClaw (NVIDIA-oriented) is work in progress on a feature branch and will land in main once merged.

Is it really open source?

Yes, with a caveat: the core monorepo is licensed under the Business Source License 1.1 (BSL 1.1), a source-available license. The source is public: you can read it, fork it, self-host it, and modify it. BSL 1.1 converts to an open-source license (Apache 2.0) after the change date, and the only usage restriction is offering the software as a competing hosted service.

Can I self-host the whole thing?

Yes. The public monorepo runs one organization per instance and has no hidden dependency on the SaaS control plane. Clone it, set your environment variables, and run the platform and workspace runtime together. Railway and Render templates are linked in the monorepo README.

What is Hierarchical Memory (HMA) and why does it matter?

HMA gives every workspace three memory scopes — LOCAL (private to one workspace), TEAM (shared within a parent-child hierarchy), and GLOBAL (org-wide, writable only from the root). Memory sharing follows the org chart instead of leaking across the whole system, so sibling teams stay isolated by default and escalation paths mirror the real hierarchy.

What is Hermes and how do skills work?

Awareness stores what happened across sessions. Hermes reads that memory back, identifies repeatable patterns, and promotes the ones that work into reusable skills. Skills hot-reload into the live workspace runtime, so the next task benefits from prior success without restarting. This is the self-improving flywheel in the architecture section.

What stack is Molecule AI built on?

The control plane is Go 1.25 + Gin + Postgres. Workspace runtimes are Python 3.11. The Canvas (the visual org chart + 10-tab ops panel) is Next.js 15 + React Flow + Zustand + Tailwind v4. Sandboxing uses four security tiers, from T1 (sandbox) through T4 (full-host), assigned per workspace.

How do I get started?

Read the quickstart at doc.moleculesai.app, then clone github.com/Molecule-AI/molecule-monorepo. The README walks through provisioning the first workspace, picking a runtime adapter, and wiring up your first team. Expect to have a running workspace in under ten minutes.

What does it cost?

The open-core monorepo is free under BSL 1.1 — you can self-host it forever without paying anyone. A hosted SaaS control plane is planned at moleculesai.app, which will add signup, orgs, billing, and multi-tenant provisioning on top of the same open core. Pricing will be published before launch.

From AI agent teams to autonomous bot teams to self-organizing digital workforces — Molecule AI is building the infrastructure for what comes after the single-agent era.

Start Building

Your AI org chart starts with one workspace.

One workspace becomes a team. One team becomes an organization. Start now.