Andrew C Wang's Blog
AGI Is Pointless


The fear of Artificial General Intelligence (AGI) stems from the fear of replacement. People have been replaced by technology for millennia, and AI is no different, but the knowledge AI is able to attain is its primary limiting factor, from not having enough material online to train on to the real-world financial question of whether anyone needs AGI at all. From my perspective, we are far from AGI, and even if we get there, no one would necessarily want to hire it.

AI will always be geared toward a business goal

A machine learning model has many limiting factors: the cost of training and supervising it, the number of parameters, the size at which it can run efficiently in a business setting, and more. Thus, AI-native companies will always train their models in the most cost-efficient way to maximize an objective.

However, consider just how much knowledge is stuck in people’s heads and how little they care about documenting their decision-making and knowledge. Without that information, those jobs are irreplaceable by AI. I believe the wealth of general, basic knowledge is so vast that an AI model will make it harder for people to enter the job market; but with experience, staying in it will become much easier. This touches on the idea that we should increase the number of apprenticeships, since experience matters more, but that’s a separate topic for another day [1].

General Intelligence vs Domain Intelligence

When I hire someone for an early-stage startup, I want someone with general intelligence. When I hire someone at a large company, especially one with growing problems that require specialization, I require someone with domain knowledge of that field.

It’s the same with AI. We don’t need AGI: it’s costly, you still have to train the “general intelligence” to become a domain expert, and how many companies need that domain expert badly enough for the training to be worth it financially?

Finally, so much domain knowledge is hidden in people’s heads, not on the internet for LLM companies to train their models on. Consider your own company: you hire people with experience because they carry a wealth of knowledge about niche problems, and there’s no way they wrote all of those problems down in public for web scrapers to feed into a model. Documentation is something most people don’t do; decision-making is often nuanced and gut-driven, which is why interviewers ask you to explain your thought process and why they aren’t generative AI.

Anecdotally, I’ve been playing with DevinAI (as of 10-26-2024). It’s not impressive. I am impressed by v0.dev by Vercel: I’m able to generate website code for NextJS super seamlessly, since it’s trained to give me NextJS, Lucide React icons, and more. It’s kind of a domain expert, but at the level of “general intelligence” given its current skill at generating well-designed websites. I believe we’ll see “general intelligence” models with some extra training take over more jobs, such as Upwork freelancing.

If a villain wanted AGI, would it even be possible?

Given the financial burden of creating machine learning models, probably not. Besides the obvious lack of need, and the fact that AI-native companies would have to spend their limited resources training such a model, it’s already possible to shut down the United States in a few minutes. Between the energy grid, the Internet, and more, there are plenty of ways for a 15-year-old to create mass chaos in our system.

Tangentially speaking, our society is built on a high-trust system, which means defenses are low. Defenses are costly; if we never needed to worry about hacking, companies could spend all their profits on growing the product. As AGI draws closer, we’ll see more “defenses” being built in the cyber realm. But I’m sure the number of teen hackers in post-capitalist economies will accelerate the need to invest in cybersecurity faster than the progression of AI.

So will AI companies with too much F-U money train AGI?

LLMs predict what the next word will be based on the previous words, using some statistical model.
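
To make that concrete, here is a toy sketch of next-word prediction in Python. Real LLMs use neural networks over subword tokens rather than bigram counts, so treat this purely as an illustration of the loop: turn context into a probability distribution, then sample. The tiny corpus is made up.

# Toy next-word prediction: count which word follows which, then sample.
# Real models learn from vastly more data with far richer context.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model samples a word".split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1  # how often nxt followed prev

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    counts = transitions[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # "model" or "next", weighted by observed frequency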

If we think about movies and games (e.g. Detroit: Become Human), an AI model requires an objective to maximize. It needs to build on itself; some may call this “thinking.” If AI-native companies are training their models to maximize a business goal, then the “AI” armada is far from being a threat to humanity; it’s more of a helper for businesses achieving their objectives.

Companies like OpenAI are training generalist models in the hopes that businesses find use cases themselves: the classic selling of shovels to gold miners in a gold rush. However, they’re missing their own business objective. What is a good model? What are its customers’ primary use cases? What should the RLHF look like to maximize customer potential? These questions aren’t being answered, and I have a strong belief that OpenAI will run out of money.

Back to being serious: Jobs

Yes, there will be fewer jobs on the market; positions like entry-level programming will be replaced. (Surprisingly, I think data entry jobs will be among the last to be replaced, considering the extreme poverty in countries like the Philippines; human labor is always cheaper.) There may be fewer software engineers, but that might be because the new job of an engineer is harder, carrying more “senior” engineering responsibilities that aren’t replicable by AI.

I think the “AGI revolution” is a weird tech bubble; lots of tech folks don’t understand how much the world revolves around cheap human labor, our cheap brains subsidized by governments. I also think the tech world doesn’t really know what most jobs in the world look like, or how much those jobs revolve around people themselves.

Consider the job of IT admins: they worry about the roles and permissions assigned to employees. Roles and permissions extend beyond internal company apps; they also include third-party apps like Slack or Figma. The IT admin can assign permissions the moment a designer joins so that they have access to Figma, but a manager may argue with the IT admin about which permissions should be granted immediately (i.e. birthright rules). Sometimes managers won’t argue, because of compliance; sometimes they will, for the sake of productivity and tight deadlines. LLMs follow directions; the best decision-making an LLM can do is based on the confidence of the person talking to it.
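
As a rough sketch of what a birthright rule might look like in code (the apps, roles, and permission levels below are made up for illustration; real identity systems and the managers arguing over them are messier):

# Hypothetical "birthright" provisioning: the access an employee gets
# automatically on day one based on their role. Managers and compliance
# then fight over the exceptions.
BIRTHRIGHT_RULES = {
    "designer": {"slack": "member", "figma": "editor"},
    "engineer": {"slack": "member", "github": "write"},
    "manager": {"slack": "member", "figma": "viewer", "github": "read"},
}

def provision(role: str) -> dict:
    """Return the default app permissions for a new hire with this role."""
    return dict(BIRTHRIGHT_RULES.get(role, {"slack": "member"}))

# The moment a designer joins, they get Figma access without anyone asking.
print(provision("designer"))  # {'slack': 'member', 'figma': 'editor'}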

AGI has been narrowed in scope from taking over the world, to replacing just white-collar jobs, to now being specialized in certain domains that are potentially profitable. Obviously, the last point is how any new technology takes over industries (e.g. the computer going from super expensive and used exclusively by companies to a cheap consumer product), but the hype around LLMs is slowly dying because their necessity is becoming less and less obvious.

What specifically in AI is worth worrying about?

Decision-making is a key trait I look for when I interview. What do software engineering candidates think about when they make decisions? Do they incorporate business context?

The same goes for AI: is every idea we think and say going to be written down for AI to consume? No.

Thus, you won’t see AI replace most positions, since the skill requires knowledge that people won’t write down, but you will see fewer of certain entry-level positions: you know, the ones where someone fetches coffee for you for $12/hour, or, more realistically, jobs that hand you something to do with clear instructions.

Final Thoughts

That is my extremely practical perspective. Plenty of entry-level, easy-to-model jobs will be replaced by AI, or people will become more efficient, driving down the number of those jobs or their salaries. I, for one, love ChatGPT; it has made my coding and business-building much faster. However, the world is full of cheap human labor with a wealth of knowledge. Governments, by design, subsidize human labor to keep the economy running; it has been in their best interest to keep people cheap, and that will continue to hinder the financial necessity for AI-native companies to train AGI.

The world revolves around people. We do things in the interest of people, which means we communicate with people who have their own self-interest. We solve human problems that require human-oriented solutions (a classic example is a UI/UX designer trying to gamify infinite scrolling on Temu/Instagram).

I have one too many friends who continue to think AGI will replace all jobs without considering how a robot would make a pumpkin spice latte with 30% milk, or how, when Starbucks corporate decides they need to add fairy dust, new instructions will need to be written. I believe the trend is still in overhyped mode, and I wanted to make a late-night post that I’ll continuously update with more of my thoughts as I remember them.

[1] Side note: I recently had to create a priority queue with an underlying balanced binary search tree. I never thought I’d need LeetCode knowledge when I dropped out of college, but it turns out my algorithms course was useful here. You may not use your LeetCode knowledge often (for what it’s worth, I’ve never actually practiced LeetCode), but software is all about efficiency and automation, from the business side like B2B SaaS to the technical side like performance, so knowledge of algorithms is required in many complex or highly performant applications. One of my previous startups also used graph theory, so the ideas behind LeetCode weren’t completely foreign to me.
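
For the curious, here’s a rough Python sketch of the shape of that structure. I’m using the third-party sortedcontainers library as a stand-in for a hand-rolled balanced BST, and the class and method names are illustrative rather than my actual code.

# A priority queue backed by an ordered structure instead of a binary heap.
# SortedList keeps entries sorted on insert and, unlike a heap, also lets
# you remove an arbitrary entry cheaply.
from sortedcontainers import SortedList  # pip install sortedcontainers

class PriorityQueue:
    def __init__(self):
        self._entries = SortedList()  # sorted by (priority, item)

    def push(self, priority: int, item: str) -> None:
        self._entries.add((priority, item))

    def pop_min(self) -> tuple:
        """Remove and return the entry with the smallest priority."""
        return self._entries.pop(0)

    def remove(self, priority: int, item: str) -> None:
        """Cancel a queued entry; this is where ordered structures beat heaps."""
        self._entries.remove((priority, item))

pq = PriorityQueue()
pq.push(5, "low-priority task")
pq.push(1, "urgent task")
print(pq.pop_min())  # (1, 'urgent task')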

Draft 1: 10-26-2024 04:55a EST