AI agents: less magic, more graphs
You’re spot on: most modern “AI agents” boil down to directed graphs, often with cycles.
Common structure:
- Nodes: actions, LLM calls, tool executions
- Edges: routing logic (conditional or LLM-decided)
- Shared state: memory/context
- Loops/feedback: what makes it feel non-linear
Add:
- LLM-based routing
- cycles (reflection, retries)
➡️ and a boring DAG suddenly feels agentic.
That’s why frameworks like LangGraph are explicit about this abstraction.
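The graph-with-cycles idea can be sketched in a few lines of plain Python, with no framework. All names here are illustrative, and the "LLM" routing decision is mocked as a deterministic function:

```python
# Minimal sketch: an "agent" as a directed graph with a cycle.
# Nodes mutate shared state; a router picks the next node at runtime.

def draft(state):
    # Node: produce an initial answer (stand-in for an LLM call).
    state["draft"] = f"answer to: {state['question']}"
    return state

def reflect(state):
    # Node: critique/retry step; counting attempts creates the cycle.
    state["attempts"] = state.get("attempts", 0) + 1
    return state

NODES = {"draft": draft, "reflect": reflect}

def route(state):
    # In a real agent this decision would come from an LLM;
    # here it is a deterministic stand-in: reflect once, then stop.
    if "draft" not in state:
        return "draft"
    if state.get("attempts", 0) < 1:
        return "reflect"
    return "END"

def run(state, entry="draft", max_steps=10):
    node = entry
    for _ in range(max_steps):   # bounded loop = guardrail
        state = NODES[node](state)
        node = route(state)
        if node == "END":
            return state
    raise RuntimeError("step budget exceeded")

result = run({"question": "what is an agent?"})
```

Swap `route` for an actual LLM call and the same skeleton becomes "agentic"; keep it deterministic and it is just a workflow engine.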
Why the Hacker News frustration makes sense
On HN, “agent” is criticized as a marketing label:
anything with ≥1 tool call + a loop gets branded as an agent,
even when execution is mostly predefined.
Many production “agents” are actually quite deterministic.
The 2025–2026 practical consensus: a spectrum
Pure workflow / orchestration
- Fully predefined DAG
- LLM used for fixed tasks
- Highly predictable, low cost
→ Zapier/n8n + LLM node, classic RAG
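A pure workflow is just a fixed pipeline where the LLM fills a predefined slot. A minimal sketch (both steps mocked, names illustrative):

```python
# Pure workflow: the pipeline order is fixed; the "LLM" only
# performs a predefined task. No runtime routing decisions.

def retrieve(query):
    # Stand-in for retrieval (e.g. vector search in classic RAG).
    return ["doc about " + query]

def summarize(docs):
    # Stand-in for a fixed LLM call: summarize the retrieved docs.
    return "summary: " + "; ".join(docs)

def pipeline(query):
    # Predefined DAG: retrieve -> summarize, always in this order.
    return summarize(retrieve(query))

print(pipeline("graphs"))  # summary: doc about graphs
```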
Agentic workflow (where most real value is today)
- Mostly structured
- LLM decides branches, retries, tool choice within guardrails
- Bounded loops and planning
→ Used by many real products
→ Andrew Ng's term: agentic systems
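"Within guardrails" typically means a tool whitelist plus a hard step budget. A sketch under those assumptions, with the LLM's tool choice mocked:

```python
# Agentic workflow: an "LLM" (mocked) chooses the next tool from a
# whitelist; hard guardrails bound the loop. Names are illustrative.

TOOLS = {
    "search": lambda q: f"results for {q}",
    "calc":   lambda q: str(len(q)),
}

def llm_choose_tool(task, history):
    # Stand-in for an LLM routing call: returns a tool name or "done".
    return "search" if not history else "done"

def agentic_step(task, max_steps=5):
    history = []
    for _ in range(max_steps):       # guardrail: bounded loop
        choice = llm_choose_tool(task, history)
        if choice == "done":
            return history
        if choice not in TOOLS:      # guardrail: tool whitelist
            raise ValueError(f"disallowed tool: {choice}")
        history.append(TOOLS[choice](task))
    return history                   # step budget exhausted

out = agentic_step("find graphs")
```

The LLM steers the branches, but it can never escape the whitelist or the step budget, which is what keeps this class of systems shippable.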
Fully autonomous agent
- LLM controls its own process
- Open-ended planning and self-correction
- Rare in production at scale today
The pragmatic definition
Many experienced engineers say:
“If the DAG is largely steered by the LLM at runtime, it’s agentic.”
The winning approach:
➡️ start with workflows
➡️ add agentic behavior where needed
➡️ scale autonomy as models improve and costs drop