AI Development's Magnitude of Change, Status Quo Bias, and Countering Tyler Cowen

2025/01/17

zenx.blog

Summary: A good framework for testing your status quo bias is "how revolutionary and world-changing would X technology/development be 10, 20, 50, or 100 years ago?" There is no status quo in the world right now; this is the fastest pace of change in human history, and we have become desensitized and assimilated to it. Optimists about AI are probably right: AI will be the next big society-altering event, at least a full order of magnitude bigger than the industrial revolution. Reasoning below.

The majority of people are skeptics of the current and upcoming revolution caused by AI. If you make a new Twitter account (so its algorithm is not tailored to you) and look at any AI art post, the replies are full of people complaining about the use of AI, the lack of human authenticity, or the non-consensual artwork used to train the image model that generated the piece. Similar negative narratives run through all of AI, from the size of AI/AI-focused companies, to the methods used in training, to energy usage, to AI doomerism.

To clarify, I am not raising these issues to discredit them. I do believe many of these arguments hold some level of merit or truth, but I hate what most people using them are attempting to imply: that we should 'shut down' or 'discourage' AI use and development because of the aforementioned issues.

I cannot help but be confused at how incompetent your thinking has to be to come to that conclusion; reaching it requires abandoning any order-of-magnitude thinking. The only argument whose logic I can follow is AI doomerism: that AI could end our lives or society. Doomerism at least requires thinking long-term about the technology's implications, just with those implications judged as negative, which is the only place we differ; I argue human institutions and systems would likely force the outcome to be positive. People genuinely go from "LLMs were trained on human work without consent" straight to hating AI as a whole. I see why you can morally dislike that and think of yourself as a good person for being anti-AI, but you are skipping many, many steps in your thinking and opinion-forming to arrive at the conclusion that way. My last post walks through an example of how most people think incorrectly and how to start developing effective learning, thinking, and opinion-forming skills. I am not some all-knowing deity that can easily spew objective truth; I am just good at thinking long-term and at higher orders of magnitude through a lot of work, studying, reading, and practice. I am still improving and far from the best, but it is easier to be better at this than the median person when the median skill level is incompetence. If you are in the anti-AI camp (again, besides maybe doomerism), skim my last post, then self-reflect on your faults with an LLM chatbot: prompt it with that post and this essay and ask how to improve on your psychological biases.

Back on topic from my tangent: most people reading this piece do not hold the opinion described above. In fact, most influential, growth-oriented institutions and people have incorporated AI developments into their outlook positively, because they see the implications. Despite this, most are still experiencing some level of status quo bias, the human tendency to adhere to the status quo and remain unintentionally blind to incoming change. McKinsey, the #1 ranked consulting firm and arguably the most important company for shifting institutional outlook on AI development, estimates AI adding $2.6 to $3.4 trillion annually to the global economy in the upcoming years. The upper end of that, $3.4T, is an extra ~2.85% on top of current 2-3% global GDP growth rates, which is large but still a severe underestimate of the impact. McKinsey offers good analysis of AI developments yet fails to grasp the change they can bring to the status quo. I understand McKinsey's incentives to underestimate AI's change and understate its outlook to worldwide institutions, but if you truly understood AI development, you would feel it imperative to catch the world up, because too many are unaware.
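As a quick sanity check on where that ~2.85% figure comes from, here is a minimal back-of-the-envelope sketch. The ~$119 trillion world GDP figure is my assumption (a rough mid-2020s nominal estimate), chosen only to show the division; the $2.6T-$3.4T range is the McKinsey estimate quoted above.

    # Back-of-the-envelope: McKinsey's annual AI value-add as a share of world GDP.
    # Assumption: world GDP of roughly $119 trillion (nominal, mid-2020s estimate).
    world_gdp_tn = 119.0           # trillions of USD (assumed)
    ai_value_add_tn = (2.6, 3.4)   # McKinsey's estimated annual addition, low/high, in $T

    for added in ai_value_add_tn:
        share = added / world_gdp_tn * 100
        print(f"${added}T per year ~= {share:.2f}% of world GDP")
    # High end: ~2.86%, i.e. the ~2.85 extra percentage points cited above,
    # roughly a doubling of today's 2-3% global growth rate.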

$3.4T yearly, while significant, is disrespectful to the voices of those who are aware of what is coming. You could even argue that AI has already revolutionized the world and its impact is simply unrealized. Ever since ChatGPT's introduction, the world (the average person) has assimilated and become desensitized to AI's wonders. In the meantime, for good and bad, machine learning has completely transformed the academic landscape in a million ways; made hyperaddictive social media apps and tailored content; given us self-driving tech (Waymo, Tesla); revolutionized healthcare with drug discovery and diagnostic tools; and much more. I could write an essay on just the current impacts that go unrecognized because we have assimilated them into the status quo, things that would have been science fiction 5-10 years ago. Many of these changes are not reflected in financial metrics, like being able to make people scroll TikTok for hours with ML, but that is not the point. This is where you absolutely need long-term, orders-of-magnitude thinking. Look at the technological change and its impacts (financial or not) that have occurred every, let's say, 5 years. Has this rate of change increased over time? Yes. Is it likely that the rate and magnitude of change will soon be large enough to show up in GDP and economic growth? Yes. The only counterargument I will accept here is that you can differ on the rate and exponentiality of this change and therefore conclude that growth will come, but slowly, and compound. My thesis comes from the AI industry itself, which argues the change will be society-altering, and very soon. My personal thesis is that this phenomenon is an iteration of the dot-com bubble, which I will write about in more depth in a separate essay at a later date, but the idea is pretty self-explanatory anyway.

Tyler Cowen provides one of the best arguments for why AI growth will come but will be slow and compounding, like the industrial revolution (https://www.youtube.com/watch?v=GT_sXIUJPUo). I think he makes a good argument but still suffers from a small amount of status quo bias. His argument is that the industrial revolution produced 1-2% GDP growth rates over 100+ years, and that AI will do something similar at a somewhat higher growth rate, which I can understand. Where I think he fails is in how he relates historical events to the present day.

In World War II, roughly 3% of the global population died. If a world war were to break out now, you could argue that 3% of the current population could die. This is the equivalent of what Tyler Cowen is arguing: yes, it can be valid, but it is probably wrong. If a world war broke out now and incentives were strong enough, nukes would come into play and we would go well past 3%. You could say "but it's unfair to point out that we have nukes now," yet that is not the point. If you believe that, you are not thinking at the proper order of magnitude and should refer back to earlier. The point is not nukes; it is the rate of change in the world. In World War I, roughly 1% of the world population died, in a war that was far more Europe-centric. From World War I to World War II, the rate and magnitude of change increased, producing a more globalized, interconnected world, and deaths as a share of global population roughly tripled, from ~1% to ~3%. You could argue nuance, that the two wars were separated by specific circumstances and cannot actually be compared, but that is still beside the point: no world event like WWI or WWII had happened until each respective war's time period. The overall point, again, is the rate and magnitude of change. You can make the same comparison from the Napoleonic Wars to WWI; the pattern holds.

All of this counters Tyler Cowen's point. If, prior to the industrial revolution, humanity averaged 0-0.1% yearly GDP growth and reached 1-2% yearly growth post-IR (a 10x-20x magnitude change), then it is entirely possible, if you follow the rate and magnitude of change at which humanity operates, that we achieve another 10x-20x jump during the next economic revolution, and that 25-40% yearly GDP growth (or more) is genuinely a possible statistic. It sounds stupid to write down, but it is literally possible. Economics functions on the basic constraints of land, labor, and capital, and AI breaks all three: effectively infinite labor, which leads to effectively infinite capital, while 'land' (which we already have a surplus of) becomes the cloud and the computers AI runs on. Those are the same computers that have followed crazy exponential efficiency growth for the past 30 years and counting (but high-magnitude exponential change like that is probably contained to the tech/hardware/chip industry, right? it could never leak out into other systems in the real world and cause massive societal change, right?) and may accelerate further as AI revives Moore's law. If you are aiming to foresee change and impact, zoom out and look at the increasing rate and average magnitude of each crop yield, not the nature of any single harvest.
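To make the arithmetic in that extrapolation explicit, here is a minimal sketch. All figures (0-0.1% pre-industrial growth, 1-2% modern growth, the 10x-20x multiplier) come from the paragraph above; the only assumption I add is using the 0.1% upper bound as the pre-industrial baseline so the ratios are well-defined.

    # Worked version of the 10x-20x growth-rate extrapolation described above.
    pre_ir_growth = 0.1            # % per year, upper end of the essay's 0-0.1% pre-IR range
    modern_growth = (1.0, 2.0)     # % per year, the essay's post-industrial-revolution range

    # The jump the industrial revolution produced: 1-2% divided by 0.1% -> 10x-20x.
    ir_multiplier = (modern_growth[0] / pre_ir_growth, modern_growth[1] / pre_ir_growth)

    # Apply the same jump to today's baseline to get the next-revolution range.
    next_low = modern_growth[0] * ir_multiplier[0]    # 1% * 10 = 10% per year
    next_high = modern_growth[1] * ir_multiplier[1]   # 2% * 20 = 40% per year

    print(f"Industrial-revolution jump: {ir_multiplier[0]:.0f}x to {ir_multiplier[1]:.0f}x")
    print(f"Same jump applied again: {next_low:.0f}% to {next_high:.0f}% yearly GDP growth")
    # The resulting 10-40% band contains the 25-40% figure the essay treats as possible.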

I'm not going to keep practicing my science fiction worldbuilding skills here, so I will just leave my TL;DR counterpoint to Tyler Cowen: he is miscorrelating history with the present and underestimating the rate and magnitude of change that humanity has been operating under for its entire history, one big exponential curve.

Try to understand that the status quo is imaginary; the world is changing faster right now than at any point in its history. There is no status quo! One month in 2025 is equivalent to centuries of societal events pre-year 1000. AI is likely coming in many forms, including adjacent technologies such as automation and robotics, and there are very likely other unforeseen changes I am not reading into. All I am going to do is anticipate faster and greater change.

Also, I introspect and improve my writing by creating counterpoints to my own arguments in my head while I write. When I go on tangents and lack nuance, it is usually because I am countering those counterarguments in my mind and writing without much care for nuance, hoping the reader makes the same assumptions I did. Try to look past any depth or nuance I may have glossed over and focus on the takeaway point of my essays; it would save me some insecurity. Thank you.