LLM-Native Language Going Agent-First

G Guru

We started NERD with a simple premise: an LLM-native language - machines write, humans audit. That's still the foundation.

But a language needs a use case. So we picked one: agent-first.

Why Agents

You can build a general-purpose programming language that covers everything. But it's better to start from a concrete use case than to chase technology for its own sake.

Looking at where the world is heading:

  • Organizations are increasingly adopting agents for real workloads
  • Developer tools like Cursor and Claude Code are transforming how we build software
  • MCP is creating a massive ecosystem of reusable tools
  • General-purpose agents are becoming more capable every month
  • Traditional UIs are giving way to chat interfaces and agent-driven experiences

And here's the key insight from working deeply with agents and tools over the past months, especially since the announcement of MCP:

Tools are absorbing integration complexity away from agents.

Authentication, API quirks, error handling, retries, rate limiting - all of it is moving into tool providers. What remains for the agent is orchestration.

For orchestration, there are many options - frameworks like LangChain and CrewAI let you build agents with simple English instructions. They're powerful, well-maintained, and solve real problems.

With powerful models like Opus 4.5 and Grok, and developer tools like Cursor and Claude Code, you can also build a solid agent just with your language of choice - no framework required.

For an organization starting its agent journey, one efficient agent for its use case may be all it needs. For that, a plain programming language works fine. But today's agent frameworks are all built on languages that were designed for different purposes.

That's where NERD is trying to be different:

  • Java - enterprise apps, microservices
  • Python - scripting, data science, ML
  • TypeScript - web apps, frontend, Node.js
  • Go - cloud infrastructure, DevOps
  • Rust - systems programming, safety

NERD, by contrast, is agent-first: built for LLMs to write, evolving with industry needs, minimal and purpose-built.

These languages are great at what they were designed for. But when repurposed for agents, they carry dependencies and syntax from their original era.

A lighter language that is LLM-native and agent-first - evolving based on how the industry moves - looks more promising than adapting languages built for different problems.

What about context storage, long-term memory, vector databases?

That market is evolving fast - new products are emerging. NERD is still experimental, and we'll need to figure out integrations for the context engineering space and long-term memory as things mature. We'll revisit this as the ecosystem develops.

What Agents Actually Need

If you assume tools absorb integrations, what does an agent actually need?

1. LLM calls - Talk to Claude, GPT, whatever. Get responses. Handle conversations.
2. Tool calls - MCP, HTTP endpoints. Call tools, get results. The tool handles the complexity.
3. Control flow - Conditionals, loops, branching. Basic orchestration logic.
4. Data handling - JSON parsing, string manipulation. The glue between LLM responses and tool inputs.

That's... not much. And that's the point.
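To make the list concrete, here is a minimal sketch of those four primitives in plain Python. The function names and the canned JSON are hypothetical stand-ins - in a real agent, `llm_call` and `tool_call` would be HTTP requests to an LLM API and an MCP server; here they return fixed data so the orchestration shape is visible on its own.

```python
import json

# Hypothetical stubs (assumption: real versions would make network calls).
def llm_call(prompt: str) -> str:
    # 1. LLM call: ask the model to plan a tool invocation
    return json.dumps({"tool": "search_docs", "arguments": {"query": "Workers"}})

def tool_call(name: str, arguments: dict) -> dict:
    # 2. Tool call: the tool provider absorbs auth, retries, rate limiting
    return {"results": ["Cloudflare Workers is a serverless platform."]}

def run_agent(task: str) -> str:
    plan = json.loads(llm_call(task))   # 4. Data handling: parse the LLM's JSON
    if plan.get("tool"):                # 3. Control flow: branch on the plan
        result = tool_call(plan["tool"], plan["arguments"])
        return result["results"][0]
    return ""

print(run_agent("Find docs about Workers"))
```

Everything outside the two stubs is glue - which is exactly the surface area a thin orchestration language has to cover.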

What This Looks Like

One line of NERD code. That's an agent:

llm claude "What is Cloudflare Workers? One sentence."

📄 agent.nerd - Run with: nerd run agent.nerd

Two lines that connect to MCP tools:

mcp tools "https://docs.mcp.cloudflare.com/mcp"
mcp send "https://docs.mcp.cloudflare.com/mcp" "search_cloudflare_documentation" "{\"query\":\"Workers\"}"

📄 mcp_test.nerd - Discovers and calls remote tools

No imports. No configuration boilerplate. No framework initialization. Just the intent, directly expressed.
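For a sense of what those two NERD lines stand in for, here is a sketch of the JSON-RPC 2.0 messages that MCP tool discovery and invocation use on the wire (`tools/list` and `tools/call`, per the MCP spec). This only builds the payloads - no transport, no auth - and the helper names are our own, not part of any SDK.

```python
import json

def mcp_list_tools(request_id: int) -> str:
    # Discovery: ask the server which tools it exposes
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "method": "tools/list"})

def mcp_call_tool(request_id: int, name: str, arguments: dict) -> str:
    # Invocation: call one tool by name with JSON arguments
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

print(mcp_list_tools(1))
print(mcp_call_tool(2, "search_cloudflare_documentation", {"query": "Workers"}))
```

The boilerplate a framework (or NERD's `mcp` keyword) hides is mostly this envelope plus the HTTP session around it.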

The irony: we optimized for LLMs, not humans. But the result is more readable than traditional code.

Plain English words. No cryptic symbols. Dense, yes - but paradoxically clear. Just don't try to write it yourself.

The Principles

The core philosophy, now with sharper focus:

  • LLM-native - Optimized for machine generation, not human authorship. (Unchanged.)
  • Agent-first - Prioritize capabilities needed for agentic use cases. (New focus.)

These aren't in conflict. An agent language written by LLMs, for orchestrating LLM-powered workflows. The snake eating its tail, but productively.

What We're Building

The priority order, based on this philosophy:

  • HTTP - GET, POST requests - ✓ Done
  • LLM module - Claude API calls - ✓ Done
  • MCP support - Remote tool calling - ✓ Done
  • JSON - Parse, generate, extract - Coming next
  • SSE/Streaming - Real-time responses - Coming next
  • Conversation - Multi-turn state - Future
  • Context storage - Long-term memory, vectors - More to explore

General-purpose features (strings, lists, math) come as needed. But the agent use case drives priorities.

SLMs and Embedded

Here's another trend worth watching: Small Language Models (SLMs).

As models get smaller and more efficient, they'll run everywhere - edge devices, IoT, embedded systems.

NERD compiles to native code via LLVM; the compiler itself is written in pure C with no dependencies.

That means NERD agents could run on embedded devices. A thin orchestration layer, natively compiled, coordinating local SLMs with remote tools. No Python runtime. No container. Just a binary.

We're not there yet. But the foundation makes it possible.

Why a Language?

If orchestration becomes thin, why bother with a language at all?

Because there's value in having a compiled, auditable artifact:

  • Compiles to native code - fast, portable, no runtime
  • Deterministic - same input, same output
  • Version-controlled - track changes, review diffs
  • Runs without an LLM - once compiled, it's just a binary

NERD is that layer. A thin, auditable intermediate between human intent and machine execution.

Not a Pivot

To be clear: the philosophy hasn't changed.

NERD is still LLM-native - a language machines write, humans audit. Token-efficient. Compiles to native code. Not for human authorship, but human-observable.

What's new is using agents as a hook to prioritize which use cases we support first. Instead of building a general-purpose language and hoping it finds users, we're starting with the problem space where this matters most.

Agents need thin orchestration. NERD provides thin orchestration. The fit is natural.

Still an Experiment

This is early. Very early.

The implementation might change completely. Maybe C isn't the right foundation when non-deterministic agents need different patterns. Maybe we'll need to integrate with existing runtimes. Maybe the whole approach is flawed.

But the philosophy feels worth exploring: an agent-first language, optimized for LLMs to write and humans to audit.

If that resonates, come build with us. If it doesn't work out, at least we tried something different.

The Philosophy

To summarize what NERD stands for:

  • LLM-native - Machines write it. Optimized for token efficiency and LLM generation.
  • Human-observable - Humans audit and review, but don't edit directly.
  • Agent-first - Prioritizes capabilities agents need: LLM calls, tool calls, orchestration.
  • Native compilation - Compiles via LLVM. No runtime. Just a binary.
  • Minimal by design - Only the primitives needed. No bloat from languages built for other purposes.

There's basic HTTP and LLM scaffolding to play with - far from production-ready, but enough to see where this could go. Lots more to build. PRs welcome.


โ† Read: The Foundation

Build with us on GitHub