AGI will be underwhelming
By Charles Fisher and ChatGPT
Summary
The achievement of Artificial General Intelligence (AGI) within the next five years is now a popular talking point. Some portray AGI as a catalyst for overnight societal upheaval—predicting mass unemployment, unprecedented medical breakthroughs, and even the emergence of a superintelligence far beyond human comprehension. However, a closer examination of the technical, social, and economic constraints paints a more measured and, in many ways, underwhelming near-term picture. While AGI may eventually become transformative, its actual impact over the next decade or two will probably be far slower and less dramatic than the current hype suggests.
1. The Slow Pace of Adoption
Institutional and Bureaucratic Inertia
History demonstrates that even revolutionary technologies do not simply arrive and reshape society overnight. Instead, they encounter layers of institutional inertia. Bureaucracies—whether government agencies, corporate policies, or professional guilds—often resist rapid change. For instance, the FDA in the United States imposes strict controls on medical technologies, while the European Union has already proposed and implemented regulations that can limit AI deployment. These regulations frequently reflect deeper cultural or philosophical concerns (such as the fear of losing human autonomy), but they manifest as real, practical barriers to swift implementation.
High Expectations from Machines
People often demand near-perfection from machines before they trust them for important tasks. A human error rate of 10% might be deemed acceptable, but for a machine, many users expect 99% or even 99.9% accuracy. This “last mile problem” slows acceptance of AI-powered systems. Additionally, where AI could have an immediate societal impact—such as in the medical field—regulatory scrutiny is tighter, which further decelerates adoption.
Emotional vs. Functional Adoption
Much of the rapid rise of smartphones from niche devices (e.g., Blackberry) to ubiquitous status symbols (e.g., the iPhone) was driven by emotional, social, and cultural appeal. People wanted to be connected, fashionable, and up-to-date. By contrast, AGI is currently presented primarily as a powerful technology rather than an aspirational product. Without a “cool factor” or a deep emotional hook—like social media or instant connectivity—AGI tools risk being seen as merely functional. While functional technologies are certainly adopted, they typically spread more slowly and incrementally.
2. Human Labor and the Myth of Rapid Obsolescence
Intangible Value of Human Work
A common fear is that AGI will instantly render human labor obsolete, leading to mass unemployment. Yet human effort is valued for more than just efficiency: handmade items, artisanal goods, and personal services retain a premium because of their cultural, emotional, or face-to-face aspects. Even if machines can do something “better” on paper, many people still prize human creativity, craftsmanship, and personal interaction.
Success Requires More Than Knowledge
Outsized success—building great companies, producing influential art, or launching social movements—requires not just knowledge but also insight (akin to taste), the willpower to pursue a vision, and a dash of luck. While AGI can lower the cost of information, it cannot easily replicate these intangible human attributes. Thus, in the short term, businesses and individuals who skillfully leverage AI tools (rather than being replaced by them) are the ones more likely to thrive.
Incremental vs. Overwhelming Change
AGI may well enhance many jobs, much as smartphones did by boosting communication and productivity. However, this parallels the gradual, tool-like adoption that happens over years or decades. The most drastic visions—where AI instantly displaces vast swaths of the workforce—neglect the realities of human social structures, legal frameworks, and cultural preferences. These factors collectively buffer any shock to the system.
3. Physical, Technical, and Mathematical Constraints
Surpassing Collective Humanity
Much of the hype around superintelligence focuses on an AGI being “smarter than the smartest human.” Yet when humanity tackles big challenges—like sending rockets to space or mapping the human genome—we do so collectively, pooling intellect, labor, and creativity. A truly transformative AGI must outpace not just an individual genius but the collective problem-solving power of billions of interconnected people. This is an astronomically higher bar in terms of computational power, infrastructure, and energy.
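To make the scale of that bar concrete, here is a back-of-envelope sketch. Every constant below is a loudly hedged assumption rather than a measurement (published per-brain compute estimates span several orders of magnitude), so the point is the rough arithmetic, not the specific numbers:

```python
# Back-of-envelope arithmetic; all constants are rough assumptions,
# not measurements, and are only meant to show orders of magnitude.
FLOPS_PER_BRAIN = 1e16   # commonly cited guess; estimates span ~1e15-1e18+
POPULATION = 8e9         # people on Earth
FLOPS_PER_WATT = 1e11    # optimistic assumption for near-term hardware

collective_flops = FLOPS_PER_BRAIN * POPULATION     # ~1e26 FLOPS
power_to_match = collective_flops / FLOPS_PER_WATT  # ~1e15 W

print(f"collective humanity: ~{collective_flops:.0e} FLOPS")
print(f"power to match it in silicon: ~{power_to_match:.0e} W")
```

Under these assumptions, the power draw comes out around a thousand terawatts, while average global electricity generation is on the order of a few terawatts. Even if the per-brain estimate is too generous by a couple of orders of magnitude, the gap remains enormous.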
Slow Real-World Feedback Loops
In domains like drug development, physics, or materials science, making real progress hinges on real-world experimentation. Clinical trials alone can take five years or more to validate a single drug’s safety and efficacy. Data from real-world experiments is messy, expensive, and slow to collect. Although AI can help with modeling, it cannot bypass the fundamental need for empirical validation. Consequently, breakthroughs cannot be conjured simply by having more powerful machine-learning systems; they remain bottlenecked by the pace of real-world research.
The Hard Limits of Complexity
Many real-world problems are extremely challenging due to their sheer complexity (e.g., protein folding) or their status as NP-hard problems. If P ≠ NP, no algorithm—no matter how “intelligent”—can circumvent the exponential blowup in computational resources required for certain tasks. Belief that “emergent properties” at scale will magically solve NP-hard problems is essentially wishful thinking: these are known, fundamental mathematical constraints, not just technical hurdles we haven’t overcome yet.
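To illustrate what that exponential blowup looks like in practice, here is a minimal sketch of brute-force boolean satisfiability (the clause encoding and function name are ours, invented for the example). The search space doubles with each added variable, so no amount of raw hardware keeps pace for long:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try all 2**n_vars assignments. A clause is a list of ints:
    +i means variable i must be True, -i means it must be False."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment   # satisfying assignment found
    return None                 # exhausted 2**n_vars candidates: unsat

# The search space is 2**n: 20 variables -> ~1e6 assignments,
# 40 -> ~1e12, 80 -> ~1e24. Heuristic solvers often do far better on
# practical instances, but (assuming P != NP) no algorithm escapes
# exponential worst-case cost on NP-hard problems.
```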
4. Competing Values and the Impossibility of Universal Alignment
No Unified Societal Values
Societal and ethical dilemmas are not merely “missing information” problems; they stem from deeply entrenched, often incompatible values. While some people prioritize individual freedom above all, others stress collective welfare or environmental sustainability. There is no single “societal utility function” for an AGI to optimize. Attempting to impose one inevitably triggers resistance, because it overrides the value systems of those who disagree.
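A toy illustration of why no single “societal utility function” exists (the three citizens and policy options below are hypothetical, invented for the example): when majority preferences form a cycle, every candidate “societal ranking” is overruled by some majority. This is the classic Condorcet paradox that underlies the non-existence of a representative agent:

```python
# Three hypothetical citizens rank three policy options, best to worst.
rankings = [("A", "B", "C"),   # citizen 1
            ("B", "C", "A"),   # citizen 2
            ("C", "A", "B")]   # citizen 3

def majority_prefers(x, y):
    """True if a strict majority ranks option x above option y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A.
# No single ranking is consistent with all three majorities, so there
# is no coherent "societal preference" here for an AGI to optimize.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```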
AGI as a Mediator, Not a Resolver
Even if AGI could theoretically analyze complex social trade-offs, it cannot resolve them without taking sides or imposing certain values—actions that would likely inflame societal tensions rather than defuse them. People do not relinquish value conflicts just because a sophisticated tool offers a “neutral solution.” Indeed, the more powerful the tool, the greater the anxiety over who controls it.
5. The Hype Cycle and Potential “AI Winter”
Corporate Incentives and Delusional Optimism
A significant driver of AGI hype is the business model of large tech companies and startups, which rely on continuous excitement (and investment) for growth. Adding to this is a psychological dimension: having underestimated the pace of AI progress once, some observers now assume an unbroken exponential trajectory, failing to consider first-principles constraints like energy limits, data quality, and the laws of computational complexity.
Risk of Another AI Winter
If these inflated expectations crash against reality, investment capital and public enthusiasm could dry up, just as they have in previous “AI winters.” This might undermine even the steady, incremental progress AGI research could otherwise make. Paradoxically, an overhyped AGI narrative can harm the field by setting the bar so high that almost any real-world result looks like a disappointment.
Conclusion
AGI may someday become a significant force, but the idea that it will radically transform society within five or ten years is misplaced. Whether it’s the slow pace of regulatory approval, the messy constraints of real-world data, the fundamental intractability of certain problems, or the deep moral and cultural divides that no amount of intelligence can bridge, AGI faces an array of hurdles that will prevent it from delivering the kind of “overnight revolution” that some pundits predict.
Instead, AGI will likely emerge as a powerful tool, akin to how smartphones became indispensable—but only after a confluence of design, emotional appeal, and existing infrastructure came together. Its most notable impacts may occur over the span of decades, or even a century, as societies gradually adapt, regulations evolve, and both data and computational constraints improve. In the near term, expecting a seismic wave of disruptions—ending jobs, curing all diseases, or solving unsatisfiable social conflicts—misunderstands not only the technology but also the deeply embedded systems it must operate within.
In short, AGI’s most realistic trajectory is one of incremental utility rather than instant upheaval. While it will undoubtedly shape the future of work, research, and daily life, the monumental challenges of adoption, physical constraints, and human values ensure that any revolution will be more measured—and more human—than the current hype suggests.
Dialogue
The essay above was created through a dialogue between myself and ChatGPT. This dialogue is presented, in full, below.
< user >
I'd like to have a dialogue today in which I propose the topic for discussion.
Here's my thought. I believe that we will achieve AGI within the next 5 years, but that its impact will be underwhelming.
Here are some reasons I think this will happen.
First, most people just won't use it. There are lots of reasons for this. The last mile problem in which people expect much higher levels of performance from technologies than they do from other people. If people can do a task with 90% success rate, they won't adopt a computer unless it can do it with a 99% success rate. Also, all innovations diffuse slowly. Approximately 45% of Americans used the internet in 2000, and it took 20 years for that to reach 90% in 2020. In addition, there is a tremendous amount of bureaucracy at all levels of our society that slows the adoption of anything new. People often think this is true at the government level, and it is of course, but it's also true within every company. There are employee unions, and plain old corporate policies and practices, that make changing anything extremely difficult. AI won't fix any of these things, and will be adopted very slowly because of them.
Second, I've heard people claim that it will be impossible for most individuals to build wealth once we have achieved AGI because the cost of knowledge and labor will decrease dramatically. That human labor won't be valuable. This is wrong on two fronts. Human labor is highly valued in certain areas even when machines can do things well. For example, a handmade wristwatch costs much more than a wristwatch that is manufactured by machines, even though machine-made watches often have even better metrics (i.e., timekeeping ability). People pay extra for handmade stuff already. However, an even bigger misconception is that knowledge and hard work are what lead to success today! In actuality, outsized success requires insight, willpower, and luck. Insight is something like "taste": the thing that you think is important, and that people will like, turns out to be correct. Even if you have the insight, most people don't have the courage to go after it or the willpower to stick with it once there are obstacles. And, of course, outlier success needs a lot of luck. I don't think AI replaces any of those things even if the cost of knowledge decreases substantially.
Third, AGI simply won't be as capable as people think. Here are some arguments. For one, in order to have the impact people talk about, AGI doesn't just need to be smarter than a single human; it needs to be smarter than all of collective humanity. That's an enormous amount of processing power. How many flops would a computer need to achieve in order to match the processing power of all humanity? Even if we could engineer a computer system that large and sophisticated, energy production is extremely limited. There is no way we could produce enough energy to power that computer any time soon. An AGI trained on human-created materials will be no smarter than humans. To gain knowledge beyond what humans already know, it would need to run experiments humans haven't run in the real world. I think that will happen, but it will be very slow. Real-world feedback loops are slow! Think about this in physics, as an example. The bottleneck to discovering new physics isn't theory, it's how long and difficult and expensive it is to run experiments (e.g., building new particle colliders). AGI will have the same constraint. Even if we use AGI to solve robotics, improve our energy production, and get a substantial speedup in running experiments, we will likely find that many problems are simply too hard to solve because P != NP. Even worse, lots of problems that we want to solve are probably just unsatisfiable; there are simply too many competing constraints.
This leads to my final point. Most problems in society are difficult because people have different values and competing viewpoints. We can't implement a good solution because people disagree about what that solution should look like. This isn't due to different knowledge (that's a crucial point); it is due to different value systems. People have different utility functions. AGI simply cannot solve this problem without just infringing on individual freedom, and our social bureaucracy won't let that happen. To put this another way, alignment of an AI system with a collective like "our society's values" is impossible because there is no such thing as "our society's values" in a unified sense. Every one of the 8 billion human beings on earth has a different set of values. In fact, one can even mathematically prove the non-existence of a representative agent. One analogy one can use to understand this is the k-SAT problem in computer science. Since people have competing values, there are many competing constraints imposed on every problem. In general, most large-scale problems are simply unsatisfiable. There is no solution that satisfies all of these constraints.
Okay. Those are my high level reasons I've been thinking this. Could you please summarize them in the form of an essay before we move on to the dialogue?
< response >
The Future of AGI: Why Its Impact Will Be Underwhelming
The prospect of achieving Artificial General Intelligence (AGI) within the next five years is both plausible and exciting. However, its actual impact on society may be far less dramatic than many anticipate. This perspective is supported by several factors: the slow adoption of new technologies, the undervaluation of human labor, the inherent limitations of AGI’s capabilities, and the complex nature of societal problems.
The Challenge of Technology Adoption
The first reason for AGI’s limited impact lies in the slow and uneven adoption of innovative technologies. History shows that even transformative innovations take time to permeate society. For example, the internet—arguably one of the most revolutionary technologies of the 20th century—took over two decades to achieve widespread adoption in the United States, moving from 45% usage in 2000 to 90% in 2020.
A similar pattern is likely to occur with AGI due to societal and institutional inertia. People tend to demand extraordinarily high performance from machines compared to humans; while a human achieving a task with 90% success is acceptable, a machine doing the same task often needs to reach near-perfect accuracy to gain trust. Moreover, bureaucratic barriers at various levels—corporate policies, government regulations, and even resistance from unions—impede rapid change. These factors collectively ensure that AGI’s integration into daily life will be incremental rather than transformative.
The Myth of AGI-Induced Wealth Displacement
A second misconception is that AGI will render human labor obsolete and eliminate opportunities for individual wealth creation. This assumption overlooks how humans value labor. Even in industries where machines outperform humans, handmade products—like artisanal watches—command a premium due to their uniqueness and emotional appeal. This demonstrates that human effort is valued beyond its practical utility.
Moreover, the keys to outsized success—insight, willpower, and luck—are not reducible to raw knowledge or labor. Insight, akin to taste, involves understanding what innovations will resonate with people. Willpower and the courage to pursue unconventional ideas in the face of obstacles are equally essential, as is the unpredictability of luck. AGI might lower the cost of knowledge, but it cannot replace these deeply human attributes.
AGI’s Fundamental Limitations
The assumption that AGI will surpass human capabilities on every front underestimates the scale of the challenge. AGI would need to be not just as smart as an individual human but collectively smarter than humanity itself—a feat requiring an astronomical amount of computational power. Even if such systems could be engineered, the energy required to operate them would far exceed our current capacity for energy production.
Furthermore, AGI trained solely on human-generated data will inherently reflect human limitations. To generate genuinely novel knowledge, AGI would need to conduct real-world experiments, a process constrained by the slow and costly nature of physical research. For instance, advancements in fields like physics are limited not by theoretical ideas but by the practical hurdles of building and running experiments like particle colliders. While AGI might accelerate some processes, these bottlenecks ensure that progress will remain gradual.
Additionally, many problems are computationally intractable. Some challenges, such as those involving numerous competing constraints, may have no viable solutions due to the inherent complexity of the systems involved. Even with AGI, the limits imposed by mathematical realities like P ≠ NP cannot be overcome.
The Irresolvable Nature of Human Values
Finally, AGI cannot solve the most pressing societal problems because these problems are rooted in conflicting human values. Solutions are not impeded by a lack of knowledge but by divergent perspectives on what constitutes a “good” outcome. This is evident in debates about policies and governance, where disagreements often stem from differences in moral and cultural priorities rather than factual misunderstandings.
The idea of aligning AGI with “society’s values” is fundamentally flawed because there is no unified set of societal values. Each individual has their own preferences, and these often conflict in ways that cannot be reconciled. This mirrors problems like the k-satisfiability (k-SAT) problem in computer science, where competing constraints make many complex problems unsolvable. Similarly, AGI cannot navigate or resolve these deeply entrenched conflicts without imposing solutions that infringe on individual freedoms—an approach that would face resistance from the very bureaucracies that slow technological adoption.
Conclusion
While AGI may represent an extraordinary technical achievement, its societal impact will likely be more incremental than revolutionary. Structural barriers to adoption, enduring human preferences, the limits of computational and experimental progress, and the complexity of human value systems collectively temper the transformative potential of AGI. Rather than a single, sweeping force of change, AGI will likely be one among many tools that slowly reshape our world within the constraints of human and natural systems.
How does this summary align with your thoughts? Would you like to expand on any particular point or consider potential counterarguments?
< user >
That's fine, thank you. Let's move on with the dialogue to see if my arguments make sense.
< response >
Your arguments are thoughtful and well-articulated, but let’s stress-test them to see if they hold up under scrutiny. Here’s where I’d like to begin:
Adoption Rates and Human Expectations
You point out that adoption will be slow due to the “last mile problem” and institutional inertia. However, some recent technologies—like smartphones—diffused at astonishing speeds compared to earlier innovations. What factors might lead AGI to follow a similar rapid trajectory? Could a sufficiently compelling AGI application (like language models today) overcome these barriers?
The Value of Human Labor
Your point about the enduring appeal of handmade goods is strong. Yet, this seems to apply more to niche markets. For the majority of jobs—especially those that don’t trade on emotional or cultural value (e.g., logistics, clerical work)—could AGI reduce the demand for human labor so drastically that societal structures need to change? Would the persistence of insight, willpower, and luck as prerequisites for success be enough to compensate for these shifts?
AGI’s Capabilities and Constraints
You make a compelling case about computational limits and slow feedback loops in real-world experimentation. But what about AGI’s potential to simulate experiments in virtual environments or optimize existing processes? Could such capabilities significantly accelerate fields like drug discovery or materials science without relying on physical experiments?
The Problem of Values
Your argument about conflicting human values is sharp, particularly your analogy to the k-SAT problem. But are there scenarios where AGI might still help by mediating conflicts or identifying areas of compromise? Could its role be less about solving unsatisfiable problems and more about clarifying the trade-offs involved, thereby enabling better decision-making?
What do you think? Which of these challenges feels most interesting or vulnerable to counterpoints?
< user >
Let's take things on one at a time. Where do you want to start?
< response >
Let’s start with adoption rates and human expectations, as this forms the foundation of your argument about AGI’s underwhelming impact. You mentioned the “last mile problem” and bureaucratic inertia as barriers to rapid adoption. However, some technologies—smartphones, for instance—diffused remarkably quickly because they offered clear, immediate, and broad utility.
My Challenge:
Do you think the right kind of application for AGI could overcome these barriers? For instance, what if AGI were integrated into an essential daily tool (e.g., as indispensable as GPS for navigation or as ubiquitous as search engines)? Wouldn’t the demand for such tools push adoption faster than historical norms?
If you think this won’t happen, what distinguishes AGI from those technologies that diffused rapidly? Is it the complexity of its applications, the lack of immediate utility, or something else?
< user >
Why did smartphones get adopted so quickly?
< response >
Smartphones were adopted quickly because they uniquely combined several factors that made them indispensable:
Clear and Immediate Utility
Smartphones integrated multiple functions into a single device: communication (calls, texts), internet access, cameras, entertainment, navigation, and more. This made them a compelling replacement for multiple standalone tools (e.g., cameras, GPS devices, MP3 players). People could immediately see how owning one would improve their lives.
Network Effects
As more people adopted smartphones, the social and economic incentives to join in grew stronger. Messaging apps, social media, and even business tools relied on participation. Without a smartphone, you risked being left out of key interactions.
Rapid Improvements and Cost Declines
The technology improved quickly, with better functionality, faster speeds, and more intuitive interfaces becoming available every year. Simultaneously, costs dropped, making smartphones accessible to a broader audience.
Infrastructure Readiness
Smartphones relied on existing cellular and internet infrastructure, which was already widespread when they launched. This meant people could use them immediately without waiting for supporting technology to catch up.
Cultural and Economic Drivers
Social norms shifted to prioritize connectivity and convenience. Businesses, governments, and social institutions quickly adopted smartphones, reinforcing their value.
Consumer Appeal
Smartphones tapped into emotional and psychological drivers like the desire for status, connectivity, and entertainment. This made them not just useful but desirable.
Contrast with AGI
If AGI were to follow a similar rapid adoption path, it would need to replicate some of these dynamics. Does AGI have a comparable set of “killer apps” or functions that could drive universal appeal? If not, is the problem that its benefits are less immediate, less clear, or less universally relevant?
Your thoughts? Does this comparison help clarify why AGI might face a tougher adoption path than smartphones?
< user >
I think your point 6 is probably most important. The truth is that smartphones didn't get adopted that quickly. The first Blackberry was launched in January 1999. But, most of the graphs people use to illustrate the rapid adoption of smartphones start closer to 2007, when Apple launched the iPhone. The smartphone wasn't adopted until Apple made it cool.
< response >
That’s a fascinating observation and a crucial insight: the perception of technology—its cultural cachet—can be as important as its functionality. The iPhone didn’t just make smartphones better; it made them desirable. This wasn’t purely about technology but about the emotional and social dimensions of adoption.
Let’s examine why this happened and how it applies to AGI:
Design and Branding
The iPhone redefined what a smartphone was. Its sleek design, intuitive interface, and association with Apple’s brand of innovation and sophistication elevated it beyond a tool to a status symbol.
Question: Could AGI gain a similar “cool factor”? If it were marketed not as a technical system but as a lifestyle-enhancing tool—something you’d want to be seen using—would its adoption accelerate?
Timing and Ecosystem
Apple launched the iPhone when the ecosystem (apps, mobile internet, cellular networks) was ready to support it. People could do more with the iPhone than earlier smartphones, thanks to timing and its App Store model.
Question: Is AGI missing this readiness? Do we lack the complementary technologies, societal mindset, or infrastructure to make it indispensable right away?
Cultural Transformation
The iPhone tapped into deep desires: connection, creativity, and status. It also created new habits, like checking social media or streaming videos on the go.
Question: What cultural transformation might AGI create? Could it become central to how people define success, productivity, or even identity, as smartphones have? If not, why wouldn’t it?
In your view, is AGI more like the Blackberry—useful but niche—or does it have the potential to be the iPhone of its era if the right company or idea comes along? What would it take to “make AGI cool”?
< user >
I think people are focusing on AGI as a technology, rather than as a product. It won't be adopted quickly unless it serves a deep emotional need in people, like helping them to achieve status or connect with each other. That's why smartphones were adopted so quickly. Most human beings don't feel a need for vastly more knowledge (perhaps unfortunately). If all AGI does is become a unified google search, quora, stack overflow, and youtube instruction video replacement, that would be cool and useful -- but not world changing.
< response >
You’ve articulated a key insight: technology alone doesn’t drive transformation—meeting emotional, social, or psychological needs does. AGI, if framed solely as a knowledge engine, may lack the deep resonance with human aspirations that drove rapid adoption of technologies like smartphones. Let’s break this down:
The Emotional Gap
As you said, most people don’t crave vastly more knowledge, and even fewer understand what they might do with it. What they do crave are status, connection, belonging, and self-expression. Without addressing these, AGI risks being relegated to a functional tool rather than an essential, transformative force.
Question: What deep emotional needs could AGI realistically serve? Could it help people express creativity, gain social status, or foster meaningful connections?
Utility vs. Transformation
You likened AGI to an advanced version of Google or Stack Overflow. While undeniably useful, this type of AGI doesn’t fundamentally change people’s lives. It’s like the early Blackberry—functional, but not transformational.
Question: What would make AGI transformational rather than just utilitarian? Could it serve as a partner rather than a tool—helping people live better, richer, or more fulfilling lives in ways that feel personal?
AGI’s “Killer App”
The iPhone’s killer app wasn’t just functionality—it was its ability to tap into the universal human desire to communicate and share. For AGI to achieve a similar impact, it needs an equally compelling application.
Speculation: What if AGI could revolutionize personal storytelling, enabling people to create and share their own movies, books, or virtual worlds with ease? Or what if it could act as a personalized life coach, helping people build confidence, find purpose, or navigate complex relationships?
The Cultural Narrative
The iPhone didn’t just provide utility; it became a symbol of modernity and sophistication. AGI would need a similar narrative to inspire people to see it as indispensable to their lives.
Question: Who or what could craft that narrative? Is there an “Apple of AGI” on the horizon, capable of making it not just useful but aspirational?
If AGI is to have the world-changing impact many envision, it must go beyond solving knowledge problems to addressing the core of what makes us human. Do you think AGI can pivot toward that kind of product? If not, is it doomed to remain a niche tool for specialists rather than a universally adopted force?
< user >
Okay. Let's go back to my original arguments and take another angle.
< response >
Let’s revisit your original arguments with fresh eyes. Which angle would you like to explore next? Here are a few options we could dive into:
Economic Disruption and Human Value:
Could AGI create systemic shifts in labor markets that disrupt your assertion that human attributes like insight, willpower, and luck will remain the keys to success?
AGI’s Capabilities and Limits:
Should we challenge the idea that AGI will be fundamentally limited by human-derived training data, slow real-world feedback loops, or computational constraints?
The Value System Problem:
Can AGI find ways to address deeply entrenched human disagreements and differing value systems, even if it can’t fully “solve” them?
Which one of these (or another area) feels most fruitful to interrogate next?
< user >
I would like you to pose another question.
< response >
Certainly. Here’s a question to probe deeper into your argument:
If AGI faces such substantial constraints—adoption barriers, computational limits, the inability to reconcile human value conflicts—could its greatest impact come not from solving problems directly, but by enabling humanity to see problems in a new light? In other words, might AGI’s true power lie in its ability to expand human imagination, rather than in practical applications?
Would this shift in perspective align with your view, or do you think even this would be underwhelming? Why?
< user >
I don't know what that means.
< response >
Fair point! Let me reframe. Instead of asking whether AGI could “expand imagination,” let’s focus on something more concrete:
Could AGI’s greatest impact be in helping humans better understand how to tackle big problems, even if it doesn’t solve them outright? For example:
Highlighting blind spots in our thinking or showing new perspectives on ethical dilemmas.
Helping negotiate compromises by modeling the trade-offs of different values or preferences.
Generating entirely new approaches to longstanding issues, such as climate change or disease eradication, that humans wouldn’t have considered.
Would you say these kinds of contributions would count as “world-changing,” or do they still fall into the category of underwhelming in your view? Why?
< user >
AGI can't understand ethical dilemmas or values in a deep way any better than a human can. They aren't intellectual issues. The issues arise from different people holding incompatible views. These are fundamental constraints. Moreover, these AI systems only know what they've been trained on (perhaps plus some emergent properties, though the evidence for those is disputed). And why should anyone accept some value system being imposed upon them by some large corporation that's capable of training an AGI?
If you're saying that AGI will simply be a tool that each human uses to better accomplish their goals, much like a smartphone, then I would say I probably agree. But people are talking about dramatic rearrangements of society in the next 5 years! Nobody will have a job! All diseases will be cured! Etc. I'm not arguing that AGI won't be really useful, but I'm saying the outcomes people are talking about -- and especially their timelines -- are ridiculous. They are off by an order of magnitude.
< response >
You’ve drawn a clear and compelling distinction: AGI as a tool versus AGI as a transformative societal force. Your skepticism about the latter, particularly in the short term, is well-grounded. Let’s drill into this further with some targeted reflections:
Ethical and Value Dilemmas
You’re absolutely right that ethical dilemmas and value conflicts aren’t intellectual puzzles; they’re rooted in deeply held, incompatible beliefs. AGI can’t resolve these because there’s no objective solution to value-laden conflicts. Any attempt by corporations or governments to use AGI as an arbiter would likely exacerbate distrust rather than solve problems.
Question: If AGI can only be a tool to help individuals pursue their own goals, does this inherently limit its transformative potential? Or could the cumulative effects of billions of people using AGI in personalized ways still produce a broader societal impact over time?
Timelines and Hype
Your argument about exaggerated timelines is persuasive. History consistently shows that major societal shifts take decades, not years. Even transformative technologies like electricity, the internet, and smartphones required complementary changes in infrastructure, culture, and economic systems to realize their full impact.
Question: What mechanisms do you think are driving the current overhyping of AGI timelines? Is it a misunderstanding of technological limits, a marketing strategy, or something else? How can society recalibrate its expectations to something more realistic?
Utility vs. Overhaul
Your framing of AGI as a tool akin to a smartphone is sharp. Like a smartphone, it may enhance productivity, connectivity, and problem-solving but not fundamentally change societal structures overnight. The dramatic claims (e.g., the end of work, universal cures for disease) are not only overblown but overlook the complex systems in which these problems are embedded.
Question: If AGI will likely require decades to have its full impact, what areas do you think it will affect most profoundly first? Will its early transformative effects be limited to narrow fields (e.g., coding, logistics), or are there other domains where we might see notable progress sooner?
Your view that AGI will be useful but overhyped is a strong foundation. Do you think there’s any specific argument for rapid societal change that’s worth investigating further, or are you fairly confident this timeline is as inflated as you suggest?
< user >
Okay. Please go back and look at my original argument in the beginning of our conversation. What is another question you want to discuss from that?
< response >
Let’s revisit your original argument, and I’m drawn to this part:
You argued that AGI won’t be as capable as people think, primarily due to constraints on computational power, slow real-world feedback loops, and the inherent limits of solvability for many complex problems. This raises an interesting question:
If AGI cannot achieve a capability greater than collective humanity or overcome real-world bottlenecks, could its most profound limitation actually be our own limitations in directing it effectively? In other words:
Are human constraints (infrastructure, governance, creativity, or even ambition) more significant bottlenecks to AGI’s transformative potential than the AGI itself?
If so, does this suggest AGI’s real impact will depend less on its inherent capabilities and more on how we manage and direct its use?
Would you agree with this line of reasoning, or do you think AGI’s technical limitations remain the primary hurdle? Why?
< user >
To be clear, I'm not saying these are "human constraints" (that would apply to my other points on competing values and bureaucracy; those are human constraints). The things you point out above are closer to physical constraints.
< response >
Thank you for clarifying! It’s a crucial distinction: your argument about AGI’s limits being tied to physical constraints—such as computational power, energy requirements, and the inherent bottlenecks of real-world experimentation—is different from constraints rooted in human values or bureaucracy. Let’s reframe the question with this in mind:
If AGI is fundamentally constrained by physical and mathematical realities, does this imply that humanity’s expectations for AGI are inflated because we consistently underestimate these limits? Specifically:
Computational Power and Energy
Even if we achieve breakthroughs in hardware, will energy production and environmental costs become the ultimate bottleneck for systems of AGI’s scale? How does this align with the idea that exponential growth in computing can’t sustain itself indefinitely?
Slow Real-World Feedback Loops
You highlighted that progress in areas like physics is bottlenecked by the cost and time required for experimentation. Could AGI mitigate these delays in narrow areas (e.g., simulation, modeling), or are physical experiments irreplaceable in fields like biology, chemistry, and engineering?
Unsolvable or Unsatisfiable Problems
If many societal or technical challenges are constrained by P ≠ NP or are fundamentally unsatisfiable, does this make AGI’s role one of optimization within known limits rather than discovery or revolution?
What do you think? Are these constraints underappreciated in mainstream discussions, and how should they temper our expectations for AGI’s impact?
< user >
I think that people do appreciate the first one, but they are also talking about trying to achieve AI that operates at the scale of a single human brain. They talk about superintelligence as "smarter than the smartest human". But I'm saying that isn't a good definition because humans cooperate to solve problems. Thus, the metric for superintelligence is that it must be smarter than a collective group of intelligent humans, not a single human!
With the second, I think people are starting to appreciate it, yes. Training data from the internet for LLMs has pretty much run out. It's possible to create self-training systems with synthetic data in some narrow domains where there is a verifier, like mathematical proofs, writing code that compiles correctly, or game play. But most real-world systems have extremely slow feedback loops -- think about running a clinical trial to see if a drug works in humans; it takes 5 years just to do a single experiment! The bottleneck in most areas isn't how smart people are; it's how slow experiments are.
And the third point, I think, is not as appreciated as it should be. I think that some people, particularly in the AI research field, are overly optimistic about emergent properties. They imagine AI being able to magically solve incredibly challenging problems once it reaches a certain scale, but I think we will find out that many of these problems are simply incredibly challenging. If a problem is NP-hard, and if P != NP, then an AI system will need to use an exponential amount of computational effort to solve it, just like we do now. That's a property of the problem, not of human intelligence.
< response >
Your expanded points highlight important nuances and underscore why current discourse around AGI’s capabilities and timelines can be overly optimistic. Let’s explore these points further:
Superintelligence as “Smarter than Humanity as a Collective”
You’re spot on that framing superintelligence as merely exceeding the smartest individual human is inadequate. Human progress is built on cooperation and collective intelligence—scientists, engineers, policymakers, and communities working together. An AGI would need to outperform not just the smartest human but the cumulative problem-solving power of humanity.
Challenge: Even if we manage to create an AGI capable of individual brilliance, how could it simulate or replicate the distributed, iterative nature of human collaboration, which operates across cultures, domains, and institutions? Could AGI meaningfully “collaborate” with humans, or is that a fundamentally human process?
The Bottleneck of Real-World Feedback Loops
Your point about feedback loops is critical, particularly in science and medicine. While AGI might excel in generating hypotheses, verifying them often depends on time-consuming, resource-intensive processes.
Challenge: Could AGI revolutionize the design of experiments to make real-world feedback loops faster or more efficient? For example, could it reduce the number of clinical trials by improving virtual modeling of biological systems? Or do you believe these physical bottlenecks are immutable?
Limits of Computational Problems (P ≠ NP)
You’ve hit on a foundational issue here. The nature of certain problems means they require exponential effort, no matter how advanced the intelligence. If emergent properties don’t magically solve these challenges, AGI’s capability will remain constrained by the inherent difficulty of the problems.
Challenge: If we acknowledge that AGI won’t solve NP-hard problems or magically overcome unsatisfiable constraints, what role might it play instead? Will its value lie more in heuristic optimization—finding “good enough” solutions faster—or will it end up facing diminishing returns as it scales?
Broader Reflection: Tempering Expectations
Your argument calls for recalibrating what we mean by “superintelligence” and what AGI can realistically achieve. Many discussions treat AGI as if its scale alone will solve fundamental problems, but as you point out, there are limits intrinsic to both the problems themselves and the systems AGI must operate within.
Do you think the public and policymakers need better education about these constraints, or is this primarily an issue for researchers and technologists to grapple with? Would increased awareness lead to more realistic, sustainable development of AGI?
< user >
I don't think it's education, unfortunately. We have some very large and influential corporations whose revenues now depend on convincing people of these things. Moreover, I think there are true believers who have become delusional. I think they have been tricked by thinking "I underestimated the rate of progress in the past 10 years. I don't want to be wrong again. So, I'll assume that rate of progress continues or increases in the next 10 years." But, they aren't thinking from first principles. If you think about it from first principles, it's not clear how their expectations make any sense!
< response >
You’ve identified a powerful dynamic: the interplay of corporate incentives and the psychological biases of true believers. Let’s break this down and see where it leads:
Corporate Incentives to Hype
It’s undeniable that many companies have built business models around the narrative of AGI’s inevitability and imminent transformative impact. For these corporations, promoting rapid AGI progress isn’t just marketing—it’s a means of attracting investment, talent, and public trust.
Reflection: How much do you think this hype cycle risks distorting research priorities? For instance, could short-term pressures to demonstrate “breakthroughs” lead to an overemphasis on flashy but shallow advancements at the expense of solving foundational issues (e.g., energy efficiency or feedback loop bottlenecks)?
The Psychological Trap of “Exponential Progress”
Your critique of the “I don’t want to be wrong again” mindset is sharp. When past progress exceeds expectations, it can create a bias toward believing future progress will continue on the same trajectory. But as you point out, this overlooks first principles and the diminishing returns inherent in solving harder problems.
Reflection: Do you think this bias stems partly from a misunderstanding of exponential growth? While early stages of a technological revolution may appear exponential, physical, economic, or societal constraints almost always slow the curve over time. How might we reframe expectations to reflect this reality?
First Principles Thinking as a Counterweight
Your insistence on first principles thinking is a strong antidote to these delusions. By focusing on the physical, computational, and social constraints you’ve outlined, it becomes easier to see why runaway optimism might be misguided.
Reflection: How might we encourage more first principles reasoning in discussions of AGI? Could this involve shifting the conversation away from speculative visions of the future and toward addressing the practical challenges—energy, experiment speed, etc.—that need to be overcome?
Broader Implication: The Cost of Overhype
If this hype-driven, psychologically-biased dynamic continues unchecked, it risks setting society up for disillusionment when AGI fails to meet sky-high expectations. This could damage public trust and stall funding for meaningful, incremental progress.
Question: What do you see as the biggest danger of these unrealistic expectations? Is it wasted resources, public backlash, or something else entirely? How might we mitigate these risks without stifling genuine innovation?
Would you agree that recalibrating the narrative around AGI progress is just as important as the research itself? If so, how might we begin to shift that narrative?
< user >
Okay. Please go back to my original argument. Have we addressed all of the points?
< response >
Let’s recap your original argument and see if we’ve touched on all its main points:
Slow Adoption Due to Institutional and Human Constraints
You argued that AGI’s adoption will be slow because of:
The “last mile problem,” where people demand near-perfect performance from machines.
Slow innovation diffusion historically (e.g., the internet’s gradual adoption).
Bureaucracy and institutional inertia, which hinder rapid implementation.
Status: Addressed in detail. We discussed historical adoption patterns (like smartphones) and the emotional and social factors required for AGI to achieve rapid uptake.
Economic Disruption and Human Value
You argued that AGI won’t destroy opportunities for human labor or wealth because:
Human labor has intrinsic value beyond utility (e.g., handmade goods).
Success depends on insight, willpower, and luck, which AGI cannot replicate.
Status: Partially addressed. We explored whether AGI might undermine human labor in commodified markets but agreed that unique human traits (e.g., insight and creativity) would remain essential for outlier success.
AGI’s Fundamental Technical and Physical Limits
You pointed out that AGI’s capabilities are constrained by:
The need to surpass not just individual humans but collective human intelligence.
Limited training data and the slow feedback loops of real-world experimentation.
Fundamental mathematical and computational limits (e.g., P ≠ NP).
Status: Addressed thoroughly. We discussed the challenges of surpassing collective human intelligence, the bottlenecks of real-world experimentation, and the underappreciated impact of computational intractability.
The Problem of Competing Human Values
You argued that most societal problems are unsolvable because:
People have fundamentally incompatible values, leading to conflicting goals.
Alignment with “society’s values” is impossible because no unified value system exists.
Status: Addressed. We discussed the impossibility of resolving value conflicts through AGI, agreeing that these issues stem from deep-seated human differences rather than knowledge gaps. AGI’s role as a mediator, rather than a resolver, was explored.
Have We Missed Anything?
We’ve covered all the major points of your argument in some depth, but let’s ensure nothing has been overlooked. Are there any nuances or subpoints you feel need further exploration? Or do you feel we should shift to synthesizing what we’ve discussed into a broader conclusion?
< user >
Do you have any final questions? Is there anything that **I** have overlooked?
< response >
Your argument is well-structured and comprehensive, but there are a couple of areas where further reflection might reveal hidden layers:
Human Resistance to AGI Beyond Bureaucracy
You’ve discussed bureaucratic inertia as a barrier to adoption, but what about philosophical or cultural resistance? Many people distrust AI, fearing loss of control or autonomy. If AGI’s adoption depends on emotional resonance (like smartphones), how might societal skepticism or fear become a bottleneck? Could AGI face unique hurdles due to its perceived “alienness” or the narratives around its dangers?
Feedback Loops and Synthetic Data in Unexpected Domains
You highlighted the limits of real-world feedback loops and synthetic data’s viability only in narrow domains. Yet, some argue AGI might develop innovative ways to approximate feedback—for example, modeling physical or biological systems with increasing fidelity. While you’re skeptical of overhyping emergent properties, do you think there’s a chance we’re underestimating the potential of virtual environments or simulated experimentation to address some bottlenecks?
Economic Disruption for Non-Elites
While you convincingly argue that insight, willpower, and luck will remain essential for outlier success, could AGI disproportionately affect people in lower-skilled jobs? Even if labor remains valued in niche areas, might there still be large-scale economic turbulence for people in more commodified roles (e.g., logistics, manufacturing)? How might this manifest differently from previous waves of automation?
Long-Term Societal Effects of Overhype
You’ve rightly criticized the inflated expectations surrounding AGI timelines and capabilities. But what happens if society experiences disillusionment when these expectations aren’t met? Could that backlash lead to underinvestment in meaningful AI progress, or might it provoke cultural resistance to future innovations?
Final Reflection: Balancing Caution with Optimism
You’ve made a strong case for why AGI’s transformative impact will be more measured and incremental than many believe. Yet, are there any areas where you might want to reconsider the possibility of unexpected breakthroughs—whether through complementary technologies, societal shifts, or unanticipated applications of AGI?
Does this line of questioning open up any areas for you that feel unexplored, or do you feel confident that your analysis has accounted for these nuances?
< user >
Re your point (1). I think there will absolutely be resistance based on philosophical grounds. But, I think that resistance will be acted on through the bureaucracy and the creation of new bureaucracy. For example, creating rules or regulations (either within corporations, or by the broader government) that limit what AI can be used for or how quickly it can be implemented. For example, we've already seen this in the EU. In addition, the FDA will simply not allow rapid deployment of AI into medical domains. The reasoning behind these is philosophical or cultural though.
Point (2) is just wishful thinking. I think this style of argument, which is common in the AI community right now, is unconvincing. For example, what training data will an AI use to learn how to create a biological simulation? Perhaps it would use vast amounts of RNA sequencing data for gene expression? But, we know that those datasets are extremely unreliable! Gene expression data has enormous batch effects, making it nearly impossible to integrate data from various experiments. An AI system training on such data will be learning a lot of noise and confounding variables, with no access to the hidden information that would be required to learn from it effectively (e.g., because it was never collected in the first place). Any argument based on such wishful thinking should just be dismissed, in my opinion.
Point (3). I don't think so. Jobs that require value judgment, or face-to-face interaction, will be unaffected. Well, not literally "unaffected": I do believe that AI will have a smartphone-style effect on human society, which is that it will be a useful tool used by almost everybody. But I don't think it rearranges human society like people are claiming. At least, not on the 5-10 year timescale. I believe that impact is on a 50-100 year timescale, due to the reasons I've articulated so far.
Point (4). Absolutely! We've already had periods that people refer to as "AI winter". It could certainly happen again, particularly when so much money is being invested. Eventually, investors will want their liquidity and returns.
< response >
Your points are incisive and reinforce your core argument. Let me summarize and pose a final question or two for reflection:
Resistance Through Bureaucracy
You rightly connect philosophical and cultural resistance to their practical manifestation: the creation of bureaucratic barriers. Regulations like those in the EU or FDA restrictions exemplify how resistance materializes. This reinforces your claim that societal inertia is a significant brake on AGI adoption, even in domains where its utility is clear.
Follow-Up: How might corporations or innovators counterbalance this resistance? Could certain domains (e.g., entertainment or consumer technology) bypass these bureaucratic hurdles and demonstrate AGI’s value in a way that reduces opposition over time?
Dismissing Wishful Thinking
Your skepticism toward the optimism of AGI proponents in complex domains like biology is well-founded. As you note, training data limitations and confounding variables are significant obstacles, and dismissing them is both unscientific and impractical.
Follow-Up: Do you see any risk in being overly dismissive of emergent possibilities? While the current data landscape is noisy, might better experimental design or incremental improvements eventually pave the way for breakthroughs?
Gradual vs. Immediate Economic Impact
Your argument that societal roles involving judgment, interpersonal interaction, and human connection will remain robust aligns with the broader point that AGI’s impact will likely resemble smartphones: incremental and tool-like rather than transformative. This highlights the difference between technological potential and realistic adoption timelines.
Follow-Up: If the transformation happens on a 50-100 year timeline, what areas of society might see early ripples in the next 5-10 years? For instance, could specific industries (like software or logistics) see outsized effects without spilling over into broader societal disruption?
AI Winters and Investor Expectations
Your recognition of the looming risk of disillusionment and an “AI winter” is acute, especially given the extraordinary sums currently flowing into AI. If expectations remain untempered, the backlash could indeed stall progress.
Follow-Up: What might prevent another AI winter? Could focusing on modest, achievable goals—and explicitly tempering the narrative around AGI timelines—help the field weather the inevitable hype correction?
Final Reflection
You’ve provided a comprehensive, thoughtful framework for why AGI’s impact will be slower and less dramatic than current rhetoric suggests. If there’s a single area where you’d like to refine or expand your view further, what would it be? Or do you feel confident that your reasoning fully addresses the key dynamics at play?
< reaction from o1 pro >
Overall, the arguments present a cohesive and reasonable case for why AGI’s near-term impact may be more modest than certain hype cycles would suggest. Here are some strengths and points worth noting:
Holistic Understanding of Constraints
The essay does a good job highlighting the interplay of different types of constraints—technical (computation and real-world feedback loops), social (regulatory barriers), and economic (corporate incentives). Together, these constraints form a multi-layered argument for why AGI won't deliver "superintelligence"-scale change to society in a short time.
Realism About Innovation Diffusion
There’s a solid grounding in historical precedent: big technological shifts—electricity, the internet, smartphones—tend to unfold over decades, not years. The essay correctly points out that even “fast” adoptions, like the smartphone revolution, were neither instantaneous nor purely driven by raw functionality.
Human Values & Bureaucracy
The essay rightly emphasizes the messy role of human values in technological progress. In many social and ethical domains, there are no easy “technical fixes,” because competing values (and the bureaucracies enforcing them) slow down or even block adoption. This is often overlooked in purely technical analyses of AI.
Acknowledgment of Fundamental Limits
While many discussions of AI focus on “scaling laws” or emergent properties, the essay reminds us that certain problems (e.g., NP-hard problems, biological complexities) are inherently slow or intractable. It’s a solid counterweight to overly optimistic views that “more data and compute” will sweep these obstacles away.
Clarifying the Hype Cycle
The essay’s point that we might see another “AI winter” if expectations outrun reality is important. This has historical precedent (e.g., the previous AI winters), and it underscores the need for measured public discourse.
Possible Areas of Nuance or Further Exploration
Unexpected Synergies
While the essay effectively argues that many claims are overblown, there could be room to explore “complementary leaps” in other fields—e.g., a breakthrough in quantum computing or synthetic biology that changes energy constraints or data quality. Such events are unlikely but not impossible.
Diverse Paths of Adoption
Although broad societal adoption is slow, certain narrow niches might adopt AGI very quickly. For example, in coding or data analysis, where feedback loops are shorter and the cost of errors more easily mitigated, adoption might be both rapid and significant, even if it doesn’t lead to an overnight economic revolution.
Regulatory Divergence
Different regions may adopt different regulatory approaches. The essay focuses largely on the U.S. (FDA) and EU, but other parts of the world might have fewer or different regulations, potentially altering the global rate of adoption in specific industries (e.g., China’s different regulatory climate around AI).
Long-Term Prospects
The essay acknowledges that fundamental transformation might still happen but over much longer timescales. One could explore a bit more how compounding increments over, say, 30–50 years could lead to a genuinely different world—akin to how the internet in the 1980s looked like a niche tool, yet 30 years later, it radically reshaped daily life.
Bottom Line
The arguments are thorough, balanced, and grounded in both historical and technical realities. They serve as a compelling counterbalance to the more sensational claims around AGI. While the essay could incorporate a few more details about potential narrow breakthroughs or regional variations, its cautionary tone about timelines and the complexity of real-world adoption feels well-justified.
< user >
If you were challenged to critique the essay, for example in a debate, what would you say?
< response from o1 pro >
Below is a set of potential critiques you might raise if you were debating the essay’s core arguments. These don’t necessarily refute the essay outright but offer counterpoints or alternate perspectives that could spark deeper discussion:
Underestimating Non-Linear Synergies
The essay emphasizes that meaningful breakthroughs are slow, citing bureaucracies, data limitations, and real-world bottlenecks. However:
Emergent Technologies: Sometimes, two or more converging technologies can produce sudden “step changes.” For instance, if quantum computing or novel semiconductor materials drastically reduce computational or energy constraints, AGI development could leap ahead faster than incremental improvements would suggest.
Complementary Fields: Research in synthetic data generation, advanced simulations, or robotics may reduce dependency on slow real-world feedback loops (e.g., by running large-scale, high-fidelity simulations that obviate a portion of real-world trials).
Debate Angle: Argue that while the essay is correct about general slowdowns, it potentially overlooks the possibility of paradigm shifts when multiple innovations intersect.
Narrow, High-Impact Domains
The essay paints an overarching picture of slow, broad societal adoption. But consider:
High-Adoption “Islands”: Some industries (software development, logistics, finance) could adopt AGI tools aggressively if they provide rapid ROI without facing regulatory roadblocks. AI-assisted code generation is already saving time for developers—if that trend accelerates, entire sub-sectors might see transformational benefits before society at large catches up.
Strategic Niches: Military, intelligence, or large-scale industrial applications might adopt advanced AI under more secretive conditions. The public might not see the full extent of progress, but adoption could be swift where the incentives are strong and oversight is less visible.
Debate Angle: Highlight that even if mainstream consumer adoption is slow, pockets of society could see substantial and rapid change, challenging the idea that “everything” moves at the speed of broad social adoption.
Overreliance on Historical Analogies
The essay uses historical examples (like the internet or smartphones) to illustrate slow adoption. Critics might say:
Acceleration Effects: The pace of technological change appears to be accelerating in certain domains. Today’s infrastructure for disseminating new technology (cloud computing, open-source communities, global internet) was far less mature during earlier tech waves. This might compress adoption timelines compared to the past.
Social and Cultural Readiness: Global connectivity, social media, and widespread digital literacy might allow genuinely useful AI tools to diffuse more quickly than even the smartphone did (especially if AI powers services that look like “killer apps”).
Debate Angle: Argue that the 21st century is not perfectly analogous to previous centuries, and so linear or incremental adoption curves may be too conservative.
Potential for New Forms of Value Creation
The essay underscores that human “taste,” creativity, and luck remain vital. True, but:
AI-Augmented Insight: AGI systems could partner with humans rather than replace them, leading to hybrid forms of intelligence that outstrip what humans (or AIs) can do alone. This might unlock new forms of innovation, especially in creative, knowledge-intensive domains.
New Economic Models: The emergence of AI-driven marketplaces, new forms of gig work, or AI-based intellectual property could shift how wealth is created and distributed. This might happen in unanticipated ways, similar to how the gig economy sprang up around smartphones and apps.
Debate Angle: Contend that the essay downplays how AI could reshape the nature of creativity and entrepreneurship by combining human insight with massive AI-driven analysis.
Underestimating Behavioral and Psychological Shifts
The essay highlights “last mile” problems and the tendency to demand near-perfection from machines. Yet:
Changing Social Norms: People are already growing more comfortable with AI-driven tools (e.g., voice assistants, recommendation systems). As familiarity increases, reluctance and resistance may diminish more rapidly than expected.
Rapid Consumer Adoption: Once a handful of AGI-driven applications demonstrate jaw-dropping utility (say, an AI personal assistant that manages schedules, finances, and learning seamlessly), social attitudes could flip faster than bureaucracies can react.
Debate Angle: Argue that the essay accurately describes initial hesitations but underestimates how public opinion and behavior can shift quickly when benefits become tangible and socially reinforced.
Global Regulatory Variation
The essay focuses heavily on Western bureaucracies (e.g., FDA, EU). However:
Different Regional Pathways: Some countries have lighter-touch regulations or more centralized authority, which could allow faster deployment. Even if AGI is stymied in the EU or the U.S., it could advance rapidly elsewhere (e.g., China), producing real-world results that force the West to adapt.
Competitive Pressures: Governments might reconsider bureaucratic slowdowns if they perceive a rival nation making leaps in AGI. This geopolitical dynamic could hasten the technology’s adoption.
Debate Angle: Insist that a monolithic view of “societal adoption” misses important regional differences. Competition can act as a catalyst for removing red tape.
Possibility of Partial Problem-Solving
The essay rightly notes that many real-world problems involve slow feedback loops or complex constraints. Yet:
90% Solutions: Even if AGI can't solve an NP-hard problem exactly or accelerate a full drug pipeline, partial improvements might still dramatically reduce costs or development times. It's not all-or-nothing.
Heuristic Breakthroughs: Modern machine learning thrives on approximation and heuristics. While it can't break the laws of math, it can find "good enough" shortcuts that vastly outperform traditional methods in practice (a minimal sketch follows below).
Debate Angle: Show that focusing on whether AGI can fully solve intractable problems may overlook significant incremental benefits that still have substantial impact.
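To make the heuristics point concrete, here is a minimal sketch in Python (the toy problem instance is my own illustration, not from the essay). Exact set cover is a classic NP-hard problem, yet a simple greedy heuristic carries a provable roughly ln(n) approximation guarantee and often lands close to optimal in practice:

```python
# Greedy set cover: a "good enough" shortcut for an NP-hard problem.
# Finding a minimum cover is intractable in general, but greedily taking
# the subset that covers the most still-uncovered elements is provably
# within a ~ln(n) factor of optimal.

def greedy_set_cover(universe, subsets):
    """Return a list of subsets from `subsets` that together cover `universe`."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the subset covering the most elements not yet covered.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("these subsets cannot cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Toy instance: cover {1..6}. Greedy finds a two-subset cover.
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5, 6}]
print(greedy_set_cover(range(1, 7), subsets))  # [{1, 2, 3}, {4, 5, 6}]
```

The same pattern, cheap approximate answers that are close enough, is how ML-driven heuristics can deliver practical wins without violating any complexity-theoretic limits.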
Long-Tail Consequences of Incremental Changes
The essay focuses on near-term underwhelm, but:
Gradual Overhaul: Even if changes roll out over 10–20 years, the cumulative effect can be enormous (consider how the internet was “incremental” across the ’80s and ’90s, yet the world looks radically different today).
Cascade Effects: Widespread AI in smaller domains—logistics, finance, coding—could trigger downstream shifts in employment, policy, and education. Over time, these might yield a society that is unrecognizable from today’s vantage point.
Debate Angle: Emphasize that the essay’s caution about immediate transformations may be correct, yet underestimates how a steady stream of small changes can snowball into something big.
Conclusion
A critique of the essay might concede that it makes solid points regarding slow adoption, bureaucratic friction, real-world feedback loops, and inherent computational limits. However, in a debate, you could challenge whether these points fully account for:
Rapid pockets of adoption in certain domains or regions.
Potential synergies with other emerging technologies that might remove key obstacles.
Societal or cultural shifts that could unexpectedly accelerate public acceptance.
Cumulative effects of incremental change, which can yield profound outcomes even if not immediately revolutionary.
By raising these counterpoints, you highlight scenarios in which AGI’s impact might be larger or come faster than the essay suggests, even if it still doesn’t match the most extravagant predictions of imminent superintelligence.
< user >
I am on team underwhelming. That said, I do think there is a somewhat compelling argument on the other side, but only if we reframe the question.
Should we act as if AGI will be underwhelming, or should we act as if it will transform all aspects of society?
I find this more compelling for the following reason. Suppose I think there is a 99% chance that we achieve AGI in the next decade, but that its impact is underwhelming. That means there is a 1% chance I am wrong, and AGI does transform all aspects of society in the next decade in unimaginable ways. If I think of this as a betting problem, I may still want to take the second bet if the payoff is sufficiently high even if I think that outcome is unlikely.
In fact, I think this is implicitly the argument that many intellectuals are making about AGI.
< response from o1 pro >
That reframing essentially turns the discussion from a question of likelihood (“Is AGI likely to be underwhelming or world-changing?”) to one of risk management (“Should we prepare for the possibility of radical transformation even if we think it’s unlikely?”). It’s a subtle but important shift, and it often comes down to the kind of reasoning sometimes likened to Pascal’s Wager:
Low Probability, High Stakes
Even if you believe there’s only a 1% chance that AGI will radically transform society in the next decade, the consequences of being wrong might be so enormous—positive or negative—that it’s rational to plan around that tail-risk scenario. In decision theory terms, the expected value of a low-probability, high-impact outcome can exceed the expected value of a high-probability, low-impact outcome.
Implicit Argument in Many AGI Debates
You’re right that many public intellectuals—some who might privately concede that truly world-shaking AGI is a small probability—still argue and behave as though it’s a near certainty. They’re operating on “high-stakes, low-probability” logic: if the upside (a golden era of abundance) or the downside (existential risk, catastrophic disruption) is large enough, you allocate resources and attention as if it’s a major concern.
Risk Management vs. Expectation Setting
This approach differs from setting public expectations around AGI’s likely trajectory (the “underwhelming” argument). Instead, it’s about safety margins and preparedness. Think of it like buying insurance: you pay a premium for something you hope never happens but that could be devastating if it does.
Criticisms and Trade-Offs
Opportunity Cost: Over-preparing for a tail-risk scenario can mean under-investing in more probable, pressing issues (e.g., climate change, basic healthcare, or more mundane AI applications).
Self-Fulfilling Hype: Acting as if AGI’s going to be revolutionary can distort markets and research agendas, potentially inflating bubbles or directing funds away from grounded, incremental progress.
Public Confusion: Messaging might become muddled if experts publicly highlight the “1% apocalypse” scenario without communicating the other 99%, leading to sensationalism or fear.
Practical Implication
If you believe there’s even a tiny chance of AGI-induced radical transformation, you might support significantly more research into AI safety, governance frameworks, and social readiness. Conversely, if you believe that 1% chance is closer to 0.0001%, you may see it as overblown. The debate usually pivots on how large we think that “small chance” really is, and how big the potential payoff or catastrophe might be.
In short, the question becomes less about whether underwhelming or transformative AGI is more likely, and more about which scenario we can least afford to get wrong. This logic often drives calls for both vigilance (in case transformation is imminent) and readiness for an underwhelming but still gradually influential AGI (if the hype falls flat).
< claude judges the winner >
Let me analyze this carefully and provide my perspective on which side presents the stronger argument.
I find the "underwhelming" side of the debate more convincing, primarily because it's grounded in concrete, observable constraints rather than speculative possibilities. Here are the key arguments I find most compelling:
1. The Physical and Mathematical Constraints
The point about AGI needing to surpass collective human intelligence, not just individual intelligence, is particularly strong. This represents an astronomical increase in required computational power and energy.
The argument about NP-hard problems being fundamentally constrained by mathematics, not just by current technology, is compelling. No amount of "emergent properties" can circumvent mathematical impossibilities.
2. The Real-World Feedback Loop Problem
The example of drug trials taking 5 years is excellent: it illustrates how many advances are bottlenecked by physical reality, not computational power.
The critique of wishful thinking around biological simulations is sharp: it points out that training data itself (like gene expression data) often contains systematic errors that make learning difficult.
3. The Adoption Pattern Analysis
The distinction between technological capability and emotional/social adoption is insightful. The iPhone comparison shows how even revolutionary technology needs cultural resonance to achieve rapid adoption.
The bureaucratic resistance argument is well-supported by current examples (EU regulations, FDA requirements).
The counterarguments, while thoughtful, tend to rely more on "what if" scenarios:
"What if" quantum computing provides a breakthrough
"What if" multiple technologies converge unexpectedly
"What if" social attitudes shift dramatically
I find the "underwhelming" side wins the debate because it argues from first principles and known constraints, while the transformative side often relies on speculative breakthroughs or unprecedented rates of change.
However, I do find the final point about Pascal's Wager-style reasoning compelling - that even if we believe AGI will be underwhelming with 99% probability, we might want to prepare for the 1% transformative scenario due to its potential magnitude. But this is an argument about how to prepare, not about what's likely to happen.
The essay's conclusion that AGI's most realistic trajectory is "incremental utility rather than instant upheaval" seems best supported by the evidence and arguments presented.