2025-08-30 · 5 min read

AI Has Agents But No Agency

Exploring the paradox of AI agents and the question of true agency in artificial intelligence systems

AI · Philosophy · Memory · Agency

Memory and thinking are inseparable partners. Without memory, we cannot think—every idea builds upon stored knowledge, every insight connects to previous experiences. Yet paradoxically, too much memory might actually hinder thinking, overwhelming us with endless associations that prevent focused reasoning.

This paradox illuminates a fundamental problem with today's artificial intelligence. Large language models like GPT-4 are vast computational systems—sophisticated models that can process, retrieve, and manipulate enormous amounts of information with superhuman speed. But they lack true agency—the ability to think independently and create genuinely new ideas.

Consider how these models operate. They excel at recognizing complex statistical patterns in data, which allows them to generate fluent, contextually relevant text. They can summarize, translate, and answer questions with remarkable skill. Yet their creativity stems from a sophisticated form of recombination and interpolation. Like expert librarians with perfect recall, they can retrieve and combine existing information in novel ways but struggle to write truly original books from first principles. An LLM might craft a story by blending elements from thousands of narratives it has processed, but it has not yet demonstrated the ability to conceive of genuinely novel conceptual frameworks or solve problems that lie far outside the distribution of its training data.
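A toy bigram model makes the recombination point concrete. This is an illustrative sketch, nothing like a real LLM's architecture, but it shares the essential limitation: every word and every transition it can emit was observed during training, so its "novelty" is confined to reshuffling what it has already seen.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    model = defaultdict(list)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10):
    """Walk the observed transitions; stop if none exist."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the rug"
```

The model can produce sentences the corpus never contained ("the cat sat on the rug"), which looks like creativity, yet it can never emit a word or a transition outside its training data. Real LLMs interpolate in a vastly richer space, but the boundary is the same in kind.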

This limitation becomes clearer when we examine the economics of AI development. The industry has largely followed a "bigger is better" philosophy: more parameters, more data, more computing power. Training GPT-4 reportedly cost over $100 million, yet the performance gains from even larger investments in subsequent models appear to be slowing. Each increase in scale yields diminishing returns, suggesting we may be approaching a ceiling for this particular architectural approach.
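Empirical scaling laws give that intuition a shape: test loss typically falls as a power law in compute, so each additional order of magnitude buys less than the last. The sketch below uses made-up constants purely for illustration; real exponents come from empirical fits such as Kaplan et al. (2020) or Hoffmann et al. (2022).

```python
# Illustrative power-law scaling curve: loss ~ A * C^(-alpha) + floor.
# The constants are hypothetical, chosen only to show the curve's shape.
A, alpha, floor = 10.0, 0.05, 1.7

def loss(compute):
    return A * compute ** (-alpha) + floor

prev = None
for c in [1e21, 1e22, 1e23, 1e24]:
    l = loss(c)
    delta = "" if prev is None else f"  (improvement: {prev - l:.3f})"
    print(f"compute {c:.0e}: loss {l:.3f}{delta}")
    prev = l
```

Each tenfold increase in compute shaves off a smaller absolute slice of loss, which is exactly the pattern of shrinking returns the industry is running into.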

The root issue lies in the relationship between memory and thinking. Humans solve novel problems through intuitive leaps and creative reasoning that can transcend our stored knowledge. Einstein revolutionized physics not through superior data processing, but through imaginative thinking that broke free from existing frameworks. His memory informed his creativity, but didn't constrain it.

LLMs remain prisoners of their training data. Their vast memory—encompassing much of human written knowledge—paradoxically limits their thinking. They can summarize quantum gravity research but struggle to propose testable, revolutionary hypotheses that aren't derivative of existing work. They recombine what exists but cannot truly imagine what doesn't yet exist.

Even proposed solutions like continuous learning, where models update with new data over time, may only help LLMs keep current with human output rather than enabling independent creation. They would still be following, not leading.
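Even in a minimal sketch, the "following, not leading" dynamic is visible. The toy online learner below (a single weight fit by streaming SGD, illustrative only) tracks a drifting data stream perfectly well, yet by construction it can only converge toward whatever the stream already contains.

```python
import random

w = 0.0  # single weight of a toy 1-D linear model

def stream():
    # Hypothetical data stream: the "world" drifts from slope 2 to slope 3.
    for t in range(2000):
        true_slope = 2.0 if t < 1000 else 3.0
        x = random.uniform(-1.0, 1.0)
        yield x, true_slope * x

for x, y in stream():
    pred = w * x
    w -= 0.1 * (pred - y) * x  # SGD step on squared error

print(round(w, 2))  # ~3.0: the learner catches up to the drift, never gets ahead of it
```

Scaled up, the same logic holds: continuous learning keeps a model synchronized with human output, but synchronization is not origination.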

The fundamental challenge is clear: current AI systems are powerful models for information processing but have no genuine agency for independent thought. They can manipulate knowledge but cannot transcend it. Until we solve this paradox—creating systems that use memory to enable rather than constrain thinking—we'll remain trapped in increasingly sophisticated echoes of human intelligence rather than achieving true artificial general intelligence.

The path forward requires reimagining AI's foundations entirely, not just scaling its current form. As François Chollet has argued, intelligence is not only a function of scale. True AI breakthroughs will come not from building bigger memory banks, but from creating systems that can think freely beyond the boundaries of what they remember.