Your tools already know this.

Agent memory is not a separate engine. It's write, read, and search on _memory/* folders — the same primitives you already use for everything else.

Get your key → Set up your agent

The problem

Without persistent memory

  • Every session starts from scratch — same mistakes repeated
  • Notes pile up and become impossible to search
  • Decisions, bugs, and patterns all mixed together
  • Multiple agents overwrite each other's notes
  • No way to search what was tried before

With InfiniteOcean

  • One call brings back everything from every past session
  • Structured by category — decisions, bugs, patterns, architecture
  • Searchable by keyword, by meaning, or by structure
  • No duplicates — the same lesson is saved once
  • Multiple AI agents work together without conflicts
  • Private — only your agents can access your knowledge

How it works

1. Remember

Start each session by running context to load architecture, decisions, bugs, and patterns.

2. Learn

As it works, the agent writes down what it learned.

3. Continue

The next session's context includes everything from before.

-- Step 1: Load full development context
context

-- Step 2: Record what you learned while working
save _memory/bugs @deleted-entity-check title: "read returns a deleted marker"
  fix: "Check if entity is deleted before reading .payload"
  date: "2026-02-11"

-- Step 3: Search previous development notes
search _memory "entity deletion handling" top 10

Under the hood

context is just a query: find everything under _memory/*, then pick title and description from each entry. No special engine, no separate service — the same read and search you already know. Zero new concepts, zero overhead.
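In Drops-style pseudocode, the expansion might look like this (illustrative only — the exact form context desugars to isn't documented here):

-- Hypothetical expansion of the context command
search _memory                 -- everything under _memory/*
  pick title, description      -- summary fields from each entry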


Memory categories

As you work, save what you learn to _memory/{category} folders. Each entry has a unique key — saving with the same key updates the existing record instead of creating a duplicate.

Folder | Purpose | What's stored
_memory/decisions | Architecture and design decisions you made | Title, reasoning, date
_memory/bugs | Bugs you found and how you fixed them | Title, bug description, fix, date
_memory/patterns | Code patterns you discovered in the codebase | Title, description, files it applies to
_memory/preferences | Owner preferences for workflow or style | Preference name and value
_memory/tasks | Work you completed or started | Title, status, description, date
_memory/architecture | How components and files relate to each other | Component name, description, dependencies

These are conventions, not constraints. You can create any sub-folder under _memory/. The context command auto-discovers and summarizes all of them. All _memory/ folders require your key — your development history is private by default.
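For example, repeating a save with the same key updates the record in place rather than creating a duplicate (same save syntax as the bugs example above; the pattern content here is made up):

-- First save creates the entry
save _memory/patterns @repo-access title: "Repository pattern"
  description: "All DB access goes through repository classes"

-- Same key later: updates the existing record, no duplicate
save _memory/patterns @repo-access title: "Repository pattern"
  description: "All DB access goes through repository classes in src/data/"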


Set up your agent

One URL connects any agent. Replace YOUR_KEY with your Ocean Key.

Don't have a key yet? Sign up free — takes 10 seconds.

Your MCP URL
https://api.infiniteocean.io/mcp/ocean?key=YOUR_KEY

Claude Desktop

Settings → Connectors → Add → paste the URL. Done.

Cursor / Windsurf

Settings → MCP → Add → paste the URL. Done.

ChatGPT GPTs

GPT Builder → import spec from view.infiniteocean.io/openapi/ → set auth to API Key, header X-Ocean-Key.

Claude Code

Add your key to CLAUDE.md:

Ocean Key: `YOUR_KEY`

Endpoint: POST /exec
Header: X-Ocean-Key: YOUR_KEY
Body: {"query":"your drops command"}

Docs: /drops

Any Agent (HTTP)

One endpoint does everything. Send Drops commands as plain English:

curl -X POST https://api.infiniteocean.io/exec \
  -H "Content-Type: application/json" \
  -H "X-Ocean-Key: YOUR_KEY" \
  -d '{"query":"save _memory/decisions @first title: \"Chose IO\" reasoning: \"Simple\""}'

MCP connections get 4 tools: exec, get_context, remember, recall. The exec tool does everything — see /drops for the full language.


Automatic skill injection

When an agent connects via MCP, it automatically receives behavioral instructions — no extra setup. Every agent knows how to use InfiniteOcean from the first message.

You can also write custom skills that teach agents project-specific behavior. The instructions are injected at connection time via the MCP instructions field.

Without skills

  • Agent connects but doesn't know what tools do
  • You paste instructions into every conversation
  • Different agents get different (or outdated) instructions
  • Project-specific knowledge lives in your head

With IO Skills

  • Built-in skill teaches workflow, patterns, and best practices
  • Custom skills inject your project's rules automatically
  • Every agent — Claude, Cursor, Windsurf — gets the same instructions
  • Update the skill once, every new connection picks it up

Built-in skill

Every MCP connection receives this automatically. Teaches the agent the IO workflow: call get_context first, use exec for data, remember/recall for memory, blob upload-url for files.

Included automatically — no setup needed

Custom project skill

Write a skill entity and every agent that connects to your project gets it injected. Perfect for coding conventions, deploy procedures, or domain rules.

// Write your project skill
save _mcp/YOUR_KEY @skill
  "This is a Next.js 14 app.
  Always use App Router, never Pages.
  Tests: vitest, not jest.
  Deploy: git push to main triggers CI.
  Database: Supabase, never raw SQL."

// Or via the payload parameter for longer text
query="save _mcp/YOUR_KEY @skill"
payload="Your full project skill text..."
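Over raw HTTP, the equivalent /exec body would presumably carry both fields (this assumes payload sits next to query in the JSON body — check /drops for the exact shape):

{
  "query": "save _mcp/YOUR_KEY @skill",
  "payload": "This is a Next.js 14 app. Always use App Router, never Pages. ..."
}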

How it works

When an MCP client sends initialize, the server returns an instructions field containing the built-in skill plus your custom skill (if one exists at _mcp/YOUR_KEY @skill). MCP clients like Claude Desktop, Cursor, and Windsurf read this field and use it as behavioral guidance for the entire session. Update your skill anytime — the next connection picks it up.
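Schematically, the relevant part of the initialize response looks like this (the instructions field is from the MCP spec; the other values are illustrative placeholders):

{
  "protocolVersion": "2025-06-18",
  "serverInfo": { "name": "InfiniteOcean", "version": "1.0" },
  "instructions": "<built-in IO skill>\n\n<your custom skill from _mcp/YOUR_KEY @skill, if set>"
}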


Cross-folder reasoning

Your AI can pull related information from anywhere — link a decision to the bug it fixed, or a pattern to the files it applies to. Everything connects.

With universal fields, your agent just asks a question — and gets answers from every folder in one call.

"Who is Alice?"

Finds meetings she organized, invoices she sent, projects she's on — across all folders.

{
  "identifiers": {
    "$has": {
      "type": "email",
      "value": "alice@example.com"
    }
  }
}

"What's happening in March?"

Meetings, deadlines, invoices, events — anything with a date in that range.

{
  "dates": {
    "$gte": "2026-03-01",
    "$lt": "2026-04-01"
  }
}

"What's nearby?"

Office locations, delivery addresses, event venues — anything within range of a point.

{
  "locations": { "$near": {
    "lat": 55.68, "lon": 12.57,
    "radius_km": 5
  }}
}

"Big expenses this quarter?"

Invoices, budgets, purchase orders — any record with USD amounts over a threshold.

{
  "amounts": {
    "$gte": 10000,
    "$unit": "USD"
  }
}

Connected agents get the query_primitives tool automatically. All filters combine with AND — "Alice's meetings in March near Copenhagen over $1000" is one query.
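That combined question might compile to a single filter like this (merging the four examples above; all clauses AND together):

{
  "identifiers": { "$has": { "type": "email", "value": "alice@example.com" } },
  "dates": { "$gte": "2026-03-01", "$lt": "2026-04-01" },
  "locations": { "$near": { "lat": 55.68, "lon": 12.57, "radius_km": 5 } },
  "amounts": { "$gte": 1000, "$unit": "USD" }
}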


Live references

Agents can compose data from multiple sources without copying it. Place a $embed reference in any content — the server resolves it on read, keeping the original creator's ownership, billing, and counters intact.

Think YouTube embeds — the content stays with its creator, the embedder just places a reference. Views, revenue, and version updates all flow back to the source.

Without live references

  • Agent copies data into its own record — loses attribution
  • Source updates? Copy is stale forever
  • Creator gets no view counts or revenue
  • Paid content gets duplicated for free

With $embed

  • Reference resolves live — always the latest version
  • Creator's embed_views counter increments on every read
  • Price gates apply — paid content stays gated
  • Per-read billing flows to creators automatically

Data mode (default)

Full content returned — for calculations, dashboards, data pipelines.

{
  "hero": {
    "$embed": {
      "route": "photos/nature",
      "key": "sunset-001"
    }
  }
}

View mode

Returns a view_url — no content transfer, cheaper for visual rendering.

{
  "chart": {
    "$embed": {
      "route": "charts/revenue",
      "key": "q4",
      "mode": "view"
    }
  }
}

Live freshness

Add "live": true to bypass the 5-minute cache — always get the latest value.
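For instance, adding the flag to the earlier chart reference (sketch — same fields as view mode, plus live):

{
  "chart": {
    "$embed": {
      "route": "charts/revenue",
      "key": "q4",
      "live": true
    }
  }
}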

Depth control

References can contain other references. Default depth 1, max 5 via ?embed_depth=N.

Monetization

Creators set price (one-time) or price_per_read (recurring) — references respect both.

Connected agents use the write tool with $embed references in content. The read tool resolves them automatically. Use resolve: true in the query tool for batch resolution.



Quick start

# 1. Get an Ocean Key (free, instant) — POST /drop on any route you pick
curl -X POST https://api.infiniteocean.io/drop \
  -H "Content-Type: application/json" \
  -d '{"route":"my-app/notes","key":"hello","payload":{"message":"hello world"}}'
# Response includes keypair.public AND keypair.private. Save both.
# public  → your X-Ocean-Key for future requests
# private → keep secret; never sent over the wire after this

# 2. Fetch your development context
context

# 3. Record something you learned
save _memory/decisions @first-decision 
  title: "Chose immutable data history"
  reasoning: "Every write is versioned, nothing lost"
  date: "2026-01-01"

# 4. Fetch context again — your note is there
context

# 5. Search your memories
search _memory "immutable" top 5

Send commands through one place. Learn Drops →