2026

Escaping ouroboros: build systems that maximize interaction

The early adopters of the AI wave had a gift they didn't fully appreciate: clean ground truth. The pre-2018 web was messy but human. The early quants traded against human intuition. But that world is fading.

We have entered the era of the data ouroboros: models trained on the outputs of models. If you keep training on the raw fire hose, you aren't learning reality. You're photocopying a photocopy.

The way out is not more data. It's better interaction.

This piece explains why the convergence of quant shops and AI labs matters, and how to build systems that maximize interaction to escape the loop.

Backpressure: making subagents 10x more effective

Subagents are great at exploring, but terrible at stopping. If you've ever spun up a helper agent and watched it spray dozens of tool calls, you know the pain: high latency, noisy output, and a growing chance it drifts into nonsense.

The fix isn't more prompts. It's backpressure.

In this article, I'll walk through how I made subagents roughly 10x more effective by applying three levers:

  • Reduce tool calls without hurting accuracy
  • Enrich error logs so agents recover faster
  • Cap total steps before a run goes off the rails
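
The three levers above can be sketched as a small budget wrapper around a subagent's tool calls. This is a minimal illustration, not code from the projects discussed; the names (`ToolBudget`, `BudgetExceeded`) are hypothetical:

```python
class BudgetExceeded(RuntimeError):
    """Raised when the subagent hits its tool-call or step cap."""

class ToolBudget:
    """Applies backpressure to a subagent: caps tool calls and steps,
    and enriches errors so the agent can recover instead of drifting."""

    def __init__(self, max_tool_calls=20, max_steps=50):
        self.max_tool_calls = max_tool_calls  # lever 1: cap tool calls
        self.max_steps = max_steps            # lever 3: cap total steps
        self.tool_calls = 0
        self.steps = 0

    def step(self):
        self.steps += 1
        if self.steps > self.max_steps:
            raise BudgetExceeded(f"step cap {self.max_steps} reached")

    def call(self, tool, *args, **kwargs):
        self.tool_calls += 1
        if self.tool_calls > self.max_tool_calls:
            raise BudgetExceeded(
                f"tool-call cap {self.max_tool_calls} reached"
            )
        try:
            return tool(*args, **kwargs)
        except Exception as exc:
            # lever 2: enrich the error log with context the agent
            # can act on, rather than surfacing a bare stack trace
            raise RuntimeError(
                f"tool {tool.__name__} failed on call "
                f"{self.tool_calls}/{self.max_tool_calls} "
                f"with args={args!r}: {exc}"
            ) from exc
```

The key design choice is that hitting a cap raises a distinct exception type, so the orchestrator can distinguish "out of budget, summarize and stop" from an ordinary tool failure the agent should retry.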

With reference to projects I've worked on, you'll learn what it takes to evolve an out-of-the-box subagent into a successful, token-efficient one.