Stephen Wolfram has spent 40 years building computational tools. In a recent conversation with Alp Uguray on Masters of Automation, he shared his perspective on what distinguishes large language models from the computational systems he's been developing, and what this means for the future of technology.
The distinction matters more than we might initially think.
Two Different Approaches to Intelligence
Wolfram describes LLMs as systems that excel at "broad but shallow" tasks—mimicking patterns in human communication and reasoning. They're remarkably good at producing text that looks and feels human-generated. But they operate fundamentally differently from computational systems that can build what he calls "arbitrarily tall towers" of precise reasoning.
This isn't a criticism of LLMs. It's an observation about different types of capability. LLMs have discovered regularities in human language that we didn't know existed, and they can produce meaningful text by predicting patterns. But when Wolfram asked GPT-4 to explain a complex mathematical proof that had stumped humans for 25 years, it failed completely: it generated text that looked like mathematical explanation, but the actual insight wasn't there.
The key insight is that computation and pattern-matching serve different purposes. We need both, but we should understand what each does well.
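Wolfram's own signature example makes the contrast concrete. The sketch below (illustrative Python, not anything shown in the conversation) computes the Rule 30 cellular automaton he has studied for decades: each row is an exact function of the row before it, so producing row n means actually running all n steps. That step-by-step exactness is what "arbitrarily tall towers" of computation look like, and it is precisely what pattern prediction cannot shortcut.

```python
# Rule 30: each new cell is an exact function of the three cells above it.
# Every row depends on computing all earlier rows -- a "tall tower" of
# precise steps with no known pattern-level shortcut.

def rule30_step(cells):
    """Advance one generation of Rule 30 (edges padded with zeros)."""
    padded = [0] + cells + [0]
    # New cell = left XOR (center OR right): the Rule 30 update rule.
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [0] * 20 + [1] + [0] * 20  # start from a single black cell
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Running it prints the famously chaotic triangle of Rule 30. Nothing about earlier rows lets you skip ahead; you have to do the computation.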
The Ruliad and the Limits of Human Mathematics
Wolfram introduces an important concept: the Ruliad, his term for the entangled limit of all possible computations. Most of this space, he argues, is "incredibly alien" to human thinking.
Consider this: humans have published perhaps 4 million mathematical theorems throughout history. This represents an infinitesimally small sample of all possible mathematics. We've chosen these particular theorems based on our biological nature, our sensory systems, and historical accidents. LLMs train on this human-curated slice, which means they inherit our limitations.
This has practical implications. When we build AI systems, we're often trying to automate human-like reasoning. But there's a vast computational universe beyond what humans naturally explore. Systems that can navigate this space might discover solutions we'd never think to look for.
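A hedged back-of-envelope calculation gives a feel for the scale (illustrative Python; the 4 million figure comes from the conversation, while the statement-counting model is purely an assumption):

```python
# How small is 4 million theorems against the space of possible statements?
# Assumed toy model: statements are strings over a 20-symbol alphabet,
# at most 40 symbols long. Real mathematics is far more structured, but
# the exponential gap survives any reasonable modeling choice.

published_theorems = 4_000_000  # figure cited in the conversation

alphabet_size = 20  # assumption
max_length = 40     # assumption

possible_statements = sum(alphabet_size ** n for n in range(1, max_length + 1))

print(f"possible statements: {possible_statements:.2e}")  # ~1.2e52
print(f"published theorems : {published_theorems:.2e}")   # 4.0e6
print(f"fraction explored  : {published_theorems / possible_statements:.1e}")
```

However you tune the toy parameters, the published corpus remains a vanishingly thin slice of the space.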
The Future of Programming
Wolfram makes a strong prediction: traditional programming is becoming obsolete. He compares it to assembly language in the 1970s—once essential, now largely irrelevant except for specialized applications.
His argument is straightforward. Today, CEOs use high-level computational languages to prototype solutions in hours. Then programmers spend months implementing the same thing in traditional code. This isn't efficient. As AI improves at translating intent into code, the need for manual programming diminishes.
The implication isn't that technical skills become worthless. Rather, the valuable skill shifts from syntax memorization to computational thinking—the ability to formalize problems in ways computers can solve. This is similar to how mathematical notation enabled advances in science 500 years ago. We're now developing notation for computational thought.
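A toy contrast shows the gap in miniature (hedged Python, not Wolfram Language and not an example from the conversation): the same task written once as a high-level expression of intent, and again as manual, step-by-step code.

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"

# High-level, prototyping style: one expression, intent is explicit.
top_word, top_count = Counter(text.split()).most_common(1)[0]

# Traditional, manual style: the same result, spelled out step by step.
counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1
best_word, best_count = None, 0
for word, count in counts.items():
    if count > best_count:
        best_word, best_count = word, count

assert (top_word, top_count) == (best_word, best_count)
print(top_word, top_count)  # -> the 3
```

The point is not the line count but where the human effort goes: the first version states what is wanted; the second manages how it happens.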
How AI Systems Might Communicate
An interesting question arises: how will AI systems communicate with each other? Humans compress complex thoughts into linear streams of words, drawn from a vocabulary of perhaps 50,000 words in a typical language. But this is a constraint of our biology, not a fundamental limit.
Wolfram speculates that AIs might develop languages with millions of words, where concepts we explain at length could be single tokens. They might share information through modalities we don't use, like direct transfer of high-dimensional representations.
But even enhanced communication faces limits. The world contains infinite complexity. No finite language, whether it has 50,000 or 50 million words, can capture every possible pattern or configuration. The interesting question is which abstractions AIs will choose to develop.
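A toy sketch of the tradeoff Wolfram speculates about (illustrative Python; both vocabularies are invented for the example): a vocabulary rich enough to name a whole concept turns a ten-word explanation into a single token.

```python
# The same message tokenized against two hypothetical vocabularies.

message = "a system whose behavior cannot be predicted without running it"

# Hypothetical concept-scale vocabulary: one token names the whole idea.
concept_vocab = {
    "a system whose behavior cannot be predicted without running it":
        "<IRREDUCIBLE_SYSTEM>",
}

word_tokens = message.split()                           # human-scale encoding
concept_tokens = [concept_vocab.get(message, message)]  # concept-scale encoding

print(len(word_tokens), "word tokens")     # 10
print(len(concept_tokens), "concept token")  # 1
```

The compression is dramatic, but by the counting argument above, no finite table of named concepts can ever cover every configuration worth describing.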
Computational Thinking as Core Competency
Throughout the conversation, Wolfram emphasizes computational thinking as a fundamental skill. In the 1970s, he was among the few theoretical physicists using computers—a significant advantage. Today, he's built that advantage into a technology stack that serves as what he calls his "computational superpower."
His approach to running his company reflects this philosophy. "Anything I don't understand isn't done very well," he says. This doesn't mean he micromanages; it means he makes sure he understands the foundations of what his company builds. Often, asking the right questions surfaces issues before he ever looks at the code.
For entrepreneurs and technologists, the lesson is clear: computational thinking—the ability to recognize computational patterns and formalize problems—becomes increasingly valuable. It's not about memorizing programming languages. It's about understanding how to structure problems for computational solution.
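As a small, invented illustration of what that structuring looks like (hedged Python; the data and the definition of "busiest" are assumptions), computational thinking means replacing a fuzzy question with a precise, checkable formulation:

```python
# Fuzzy question: "Which day was busiest last week?"
# Computational formulation: define "busiest" as the day with the most
# recorded events, then compute it over an explicit (hypothetical) log.

events = [  # hypothetical event log: (day, event_id)
    ("mon", 1), ("mon", 2), ("tue", 3),
    ("wed", 4), ("wed", 5), ("wed", 6), ("fri", 7),
]

counts = {}
for day, _ in events:
    counts[day] = counts.get(day, 0) + 1

busiest = max(counts, key=counts.get)
print(busiest, counts[busiest])  # -> wed 3
```

Deciding what counts as an "event" and what "busiest" means is the computational thinking; the code that follows is almost mechanical.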
Building on Foundations
Wolfram's approach to conviction in entrepreneurship is instructive. He builds his understanding from first principles, ensuring he's "always on bedrock." This means that when critics challenge his ideas, he has solid foundations to stand on.
He notes that from the outside, his projects might look high-risk. From his perspective, they never have. He sees the path to success clearly, even if obstacles arise along the way. This isn't blind optimism—it's confidence built on deep understanding.
The key is maintaining flexibility while holding core convictions. Wolfram describes how his company has developed a culture of pivoting plans when evidence suggests a better path. The goal remains constant, but the route can change.
Implications
Several important implications emerge from this conversation:
- Different tools for different problems: LLMs excel at human-like tasks. Computational systems excel at precise reasoning. Understanding which tool fits which problem becomes crucial.
- Programming skills vs. computational thinking: As programming becomes automated, the ability to think computationally (to structure and formalize problems) becomes the differentiating skill.
- The value of foundations: Whether in science or entrepreneurship, building on solid conceptual foundations provides both confidence and flexibility.
- Embracing non-human intelligence: The future might not be about making AI more human-like, but about working with intelligences that operate on fundamentally different principles.
- The importance of broad learning: Wolfram advocates learning across disciplines rather than narrow specialization. Different fields offer different thinking tools, and combining them creates advantage.
Looking Forward
The conversation suggests we're at an interesting inflection point. LLMs have shown us that machines can master human-like communication. But the larger opportunity might be in computational systems that explore territories beyond human intuition.
For those building technology, the question becomes: are we automating existing human capabilities, or are we creating genuinely new forms of intelligence? Both have value, but they require different approaches and lead to different futures.
The most practical takeaway might be this: in a world where AI can mimic human patterns and automated systems can execute precise computations, the unique human contribution becomes the ability to think computationally while maintaining the judgment to know which problems matter and which solutions serve human needs.
As Wolfram notes, we've sampled only a tiny fraction of the computational universe. The question isn't whether AI will become more human-like, but whether we're ready to work with intelligences that are fundamentally different from our own.