AI Agents vs RPA
Dec 19, 2024
I've noticed a concerning trend in the AI-agent space over the last two years: the widespread adoption of hardcoded workflows.
The thinking goes that by constraining agents to pre-defined steps, we can make them more reliable and performant. We were guilty of this at Superagent (YC W24).
I now disagree fundamentally. Here's why:
First, it misunderstands what large language models are useful for. These are not RPA systems. RPA excels at automating repetitive, rule-based processes with clearly defined steps. That's valuable, but agents operate in a fundamentally different problem space - one of reasoning, adaptation, and handling novel situations.
Even Daniel Dines, Founder & CEO of UiPath, emphasized this distinction on the 20VC podcast: AI agents and RPA workflows serve different purposes and solve different problems.
So why is the agent community mimicking RPA approaches when building agents, when not even the RPA community itself sees this as the right path forward?
Second, let me be brutally honest: when it comes to designing ways for agents to solve problems, I'm dumber than they are. After two years of building agents, every time I've tried to prescribe how they should think through a problem, the approach they would have come up with on their own has consistently proven better than mine.
It's time we get over ourselves and let the agents make these decisions themselves.
Look, I get it. I've seen plenty of agents perform poorly in the wild. And yes, for unconstrained agents to work well, we need better evaluation frameworks, guardrails, fine-tuning approaches, and memory systems. But isn't building those capabilities exactly what we're here for?