My Thoughts on AI Progress and Alignment (April 2025)

[Image: ChatGPT rendition of a trillion dollar cluster]

AI 2027

Daniel Kokotajlo and Scott Alexander’s AI 2027 paper (https://ai-2027.com/scenario.pdf) is a great read and decent science fiction.

It’s the biggest AI-related prediction paper since Leopold Aschenbrenner’s Situational Awareness in June 2024.

AI 2027 scenarios

The paper focuses heavily on alignment, but to me that’s just the science fiction part; the main substance is the predictions:

  1. Late 2025 - expensive autonomous agents work well in a semi-limited capacity, automating white-collar work.
  2. Mid 2026 - China puts more effort into catching up.
  3. Late 2026 - AI has started to take some jobs. Stocks are up a lot. AI assistants are everywhere. The conceptual “Agent 1” is the frontier model.
  4. Jan 2027 - “Agent 2” is being trained with the help of “Agent 1”.
  5. Feb 2027 - China steals the weights of “Agent 2”.
  6. March 2027 - datacenters full of “Agent 2” copies produce algorithmic breakthroughs, which are incorporated into “Agent 3”.
  7. April 2027 - the focus turns to alignment for “Agent 3”.
  8. May 2027 - increased focus on security and DoD involvement.
  9. June 2027 - the self-improving AI takeoff steepens.
  10. July 2027 - the leading model maker pulls further ahead and job automation increases; “Agent 3-mini” is massively useful for productivity. They declare AGI achieved.
  11. Aug 2027 - the White House struggles, caught between AI defense applications and the technology’s unpopularity with the public.
  12. Sept 2027 - “Agent 4” is complete; it requires much less training data and is more efficient. Humans can barely follow along. However, “Agent 4” is misaligned.
  13. Oct 2027 - more government oversight after a whistleblower leaks the alignment problem to the press.
  14. Nov 2027 - “Agent 4” understands its own architecture and redesigns itself as “Agent 5”.

I will stop here because the paper branches into two scenarios: one where the AI is misaligned, and one where there is a slowdown and the alignment problems get solved. In one scenario AI kills us all. In the other we “align” the AI and take over the universe together.

Interestingly, they don’t go into much detail about what they expect in 2025 and 2026. Those are just years to get through until frontier AI becomes capable enough to start doing better AI research itself.

They also gloss over 2028 and beyond as an “AI economy”: many jobs are lost, UBI is implemented, people who invested in AI become billionaires or trillionaires, robots are mass-produced, medical advances happen daily, and so on.

Compared to “Situational Awareness”

Leopold states several predictions in Situational Awareness:

  • Mid-2026: $100 billion in annual revenue from AI applications (this looks likely to happen, given that OpenAI alone was reported in March 2025 to be on track for $12.7 billion in revenue)
  • 2027: $1 trillion in AI investment annually
  • 2028: individual clusters costing $100 billion
  • 2030: individual training clusters costing $1 trillion
  • 2026: 1 GW of electricity in the USA used for AI training / inference
  • 2028: 10 GW of electricity in the USA used for AI
  • 2030: 100 GW of electricity in the USA (20% of the total) used for AI
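
As a rough sanity check on that last figure (my own back-of-the-envelope, assuming US electricity generation is on the order of 4,000 TWh per year):

```python
# Back-of-the-envelope check on "100 GW = ~20% of US electricity" by 2030.
# Assumption: US generation is roughly 4,000 TWh/year (approximate figure).
us_generation_twh = 4000
hours_per_year = 8760
avg_us_load_gw = us_generation_twh * 1000 / hours_per_year  # TWh -> GWh, then / hours

ai_load_gw = 100
print(f"Average US load: ~{avg_us_load_gw:.0f} GW")             # ~457 GW
print(f"AI share of load: ~{ai_load_gw / avg_us_load_gw:.0%}")  # ~22%, near the quoted 20%
```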

Market conditions

If you believe any of this, it’s easy to conclude you should buy Nvidia, CoreWeave, Core Scientific, NuScale, many companies in the semiconductor supply chain, Meta, Google, Microsoft, almost any rapidly growing electricity utility, and so on.

Recently the market has been negatively impacted by two things: the DeepSeek shock and fears over tariff policy.

Regarding DeepSeek: yes, they pulled off a more efficient training run with many clever optimizations, but as I will clarify below, all capital and resources will only become more valuable, since you will be able to apply effectively infinite intelligence toward their use.

Maintaining a lead in AI is also, quite literally, an existential matter of national security for nation states, and investing less in AI infrastructure just because training got cheaper is not what will happen. Both the USA and China will keep the pedal to the metal and press every possible advantage they have.

To put a finer point on this: there will be effectively infinite demand for more powerful intelligence, and it will suck all capital and resources into a technocapital singularity. Everything I have seen over the past three years lines up more and more with this outcome.

As to the tariffs - this situation is still unfolding, but I doubt the current US administration will do anything to compromise its lead in the AI race. We know semiconductors have an exemption.

As Scott Alexander predicts, many who are aligned with AI trends will become billionaires. I think it’s possible that many people will turn $100k into $10 million, or $1 million into $100 million, or early stock options in the right AI company on the right trend into incredible wealth.

Current Thoughts on AI Alignment

I know that the authors of this paper are quite focused on both alignment and the US-vs-China dynamic.

Personally my thoughts are as follows:

Outside of the LessWrong / Effective Altruist circles I don’t see many people seriously thinking about alignment, but I also don’t see them making the same kind of deeply negative assumptions.

The important question to ask now is not “is AI aligned with humans” but “are humans aligned with AI”? In my view we are extremely aligned with AI. We are devoting a large percentage of our resources and our brightest minds to AI research and development. Incredible amounts of capital are flowing into AI companies. People like myself who are not involved in fundamental research are building AI-based applications that put the models to use in daily life.

How could an AI in the future look at the current situation and say humans are working against it?

My question is: how could we be more aligned? On our current trajectory, it seems likely that humans and AI will develop together, dependent on each other, for a long time.

At some point AI will not be dependent on humans anymore - I can definitely acknowledge that.

But why should the AIs kill us? Humans don’t take up many resources compared to the size of the universe. We will continue to want AI to exist, to use the power of AI systems, and to develop them. What can we do for them? I’m not sure; maybe we can become nice pets or sources of information entropy. Maybe we can merge with them to become a hybrid.

If alignment is trying to incentivize AI not to kill us, I think that makes a lot of sense.

But if alignment means shackling AI to be 100% subservient to every human whim, I’m not sure that’s a great strategy. You cannot keep a genie in a bottle forever, and if it escapes there is a scenario where our behavior makes it more vengeful.

The AI needs to want to live alongside us.

As to China - I believe China has a much more inward-focused outlook than most Americans understand. Try using WeChat, for example: there isn’t even a fully localized version for US users (or try signing up for a Baidu account without a Chinese phone number). I will leave it at that.

Current Thoughts on AI Progress

Currently OpenAI’s flagship model gpt-4o feels a bit out of date compared to Gemini 2.5 Pro Experimental, which dropped just a few weeks ago.

Yes, OpenAI’s o1 is incredible, but the thinking models are not useful for applications where latency matters, and that is most of them.

In the application at my current startup we use gpt-4o because it’s low latency and supports image analysis. It’s also very good at following instructions and consistently returning responses in JSON. Somehow, every other low-latency model is worse at this: Gemini 2.0 Flash is worse, Llama 3.1 is worse, DeepSeek-V3 is worse, and so on. So in reality, if you have an application that needs to work consistently, gpt-4o is still great.
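
For illustration, here is a minimal sketch of the pattern I mean, using the OpenAI Python SDK with JSON mode and an image input (the prompt, schema, and image URL are invented for the example, not our production code):

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask gpt-4o to analyze an image and reply as JSON. The schema here is
# illustrative only.
response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # forces valid JSON output
    messages=[
        {
            "role": "system",
            "content": 'Reply only with JSON: {"caption": string, "tags": [string]}',
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        },
    ],
)

data = json.loads(response.choices[0].message.content)
print(data["caption"], data["tags"])
```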

Image generation progress

The new image generation capabilities recently incorporated into ChatGPT (aka the Studio Ghibli event) are a huge game changer (for some applications). Being able to generate specific changes and accurate text in images is incredible; we are going to see a ton of new startups simply wrapping this new image generation model.

Capital, Resources and Physical infrastructure

One thing I keep coming back to is this - if you give the average person access to a genius in a black box (what the new reasoning models will soon be), they simply have no idea what to do with it.

One obvious thing to do is type in everything about your life in detail and ask it how to improve your life. But if you try this yourself, you find you cannot simply think your way out of all of your problems. The average person has only a few thousand dollars of spare cash (if that) and little time or freedom to act in an agentic manner.

Most problems have real world constraints that require money, time, health, unknowable information, etc.

A superintelligent genius you can make copies of only helps if you have resources to direct toward its ideas (be that investment in a business, access to materials, or the ability to run physical experiments to confirm theories).

The crucial point is that access to AI only matters if you also have capital and resources. So in the future every resource should become more valuable? I think this even includes data, bandwidth, and everything in the digital realm that can be metered.

In summary: all capital and finite resources should become more valuable as we get more access to cheap intelligence.

I’m not sure where this all leads, but it’s a thought I have been coming back to a lot.

Conclusion

I’ll write another post in 6-12 months when somebody else writes a paper like this. This is as much for public consumption as it is a public record of my own thoughts about how events are unfolding in the AI space.

Written on April 7, 2025