VayuAI

From robots to agents: what factory floors taught me about LLMs

Training a six-axis robot and tuning an LLM agent aren't that different. Both reward intent, repeatability, and a deep respect for edge cases.

The first system I ever shipped was a six-axis arm welding car frames in southern Ontario. The newest is a multi-agent research assistant. Eighteen years between them. And honestly — not as different as you’d think.

The intuition

A robot learns a path. You teach it a sequence of points, tune its speed and acceleration, and surround it with sensors that catch the cases your script didn’t cover. The path is the easy part. The edge cases are the engineering.

An LLM agent learns a policy. You write a prompt, give it tools, and surround it with evals that catch the cases your prompt didn’t cover. The prompt is the easy part. The edge cases are the engineering.

In both, the operator’s job is the same: define intent precisely enough that the machine can’t plausibly misinterpret it, and design the surrounding rails so that when it does, nothing bad happens.
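The policy-plus-rails idea above can be sketched in a few lines. This is a minimal illustration, not a real framework; every name in it is hypothetical. The rail is an allowlist: when the agent's intent gets misinterpreted, the worst outcome is a refusal.

```python
# Minimal sketch of "policy plus rails": the agent proposes an action,
# and a rail around it refuses anything outside a small toolset.
# All names here are hypothetical, not a real agent framework.

ALLOWED_TOOLS = {"search", "summarize"}  # the small toolset: the rail

def run_step(proposed_tool: str, arg: str) -> str:
    """Execute one agent step, but only if the tool is on the allowlist."""
    if proposed_tool not in ALLOWED_TOOLS:
        # The rail: when the machine misinterprets intent, nothing bad happens.
        return f"refused: '{proposed_tool}' is not an allowed tool"
    return f"ran {proposed_tool}({arg!r})"
```

The interesting part isn't the happy path; it's that the refusal branch exists at all, the way an e-stop exists whether or not the path is correct.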

What carries over

  • Tight loops beat clever code. A robot cell with a fast e-stop and good logs beats a clever motion planner that fails silently. The same goes for an agent with a small toolset and great traces.
  • Sensors are everything. You don’t trust the robot; you trust the sensor that confirms what the robot did. Same with agents and their evals.
  • The edge case is the system. Anyone can demo a happy path. The engineering is in everything else.

If you’ve led a robot through ten thousand parts, you already know half of what you need to ship reliable AI. The vocabulary is new. The discipline isn’t.