My last blog post was February 7, 2016. Since then my blog has just slowly accumulated technical debt and dust. Today I had Claude [1], an AI from Anthropic, clean up all the rust and migrate this blog from Google App Engine to GitHub Pages in a couple of hours. The fact that an AI helped me migrate this blog is a neat summary of how much has changed while I wasn’t looking.

The AI State of the World in 2016

When I last posted, AI’s public milestones looked like this:

  • DeepMind’s AlphaGo had just beaten Fan Hui (October 2015) [2] and was about to beat Lee Sedol in March 2016 [3]
  • Siri could barely set timers reliably
  • Google Translate was still using phrase-based statistical machine translation [4]
  • Deep learning had conquered ImageNet with ResNet (2015) [5], but most people didn’t care
  • GPT-1 wouldn’t exist for another two years
  • Self-driving cars were promised by Elon Musk to arrive by 2017, then 2018, then 2019 [6]

Back then, there were companies claiming they’d help you generate blogs and code automatically, but they were universally terrible—just templates and boilerplate. The idea that I’d have an AI actually understand and migrate my blog, write real code, and help create content would have seemed like science fiction.

The AI Explosion

The GPT Timeline That Changed Everything

  • 2016: AI can play Go [3]
  • 2018: GPT-1 was released with 117M parameters—and nobody really noticed [7]
  • 2019: GPT-2 with 1.5B parameters—OpenAI worried it was “too dangerous” to release [8]
  • 2020: GPT-3 with 175B parameters—suddenly AI could write coherent essays [9]
  • 2022: ChatGPT launched—the world changed overnight [10]
  • 2023: GPT-4 passed the bar exam [11] and medical licensing exams, and could see images
  • 2024: Claude 3.5 Sonnet [12], Gemini, and Llama 3 [13]—AI coding assistants became pair programmers
  • 2025: AI agents are starting to do independent work, though not very well

What AI Does For Me Now

Today, while migrating this blog, an AI assistant:

  • Diagnosed that my domain resolved to Squarespace, which was redirecting to Google App Engine (see the sketch after this list)
  • Wrote the migration scripts
  • Created dark mode CSS that respects system preferences
  • Cleaned up nine years of technical debt
  • Even wrote the first draft of this post
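
For the curious, that DNS diagnosis boils down to two checks: where the domain’s DNS points, and where HTTP requests actually end up after redirects. Here’s a minimal Python sketch using only the standard library, with example.com as a hypothetical stand-in for my actual domain:

    import socket
    import urllib.request

    DOMAIN = "example.com"  # hypothetical stand-in for my actual domain

    # Check 1: where does DNS point? (For me, a Squarespace IP.)
    print(socket.gethostbyname(DOMAIN))

    # Check 2: where do requests actually land? urlopen follows
    # redirects by default, so geturl() reports the final URL
    # (the App Engine app, in my case).
    response = urllib.request.urlopen(f"https://{DOMAIN}/")
    print(response.geturl())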

This would have taken me days in 2016. It took two hours in 2025.

The Coding Revolution

The way we write code has fundamentally changed:

  • Copilot/Cursor can autocomplete entire functions from comments
  • ChatGPT/Claude debug error messages more effectively than Stack Overflow
  • AI code review often surfaces bugs before humans review the code
  • Documentation can be generated and kept current with the right tooling
  • Junior developers with AI can ship production code that would have required a whole team in 2016

AI Capabilities That Still Blow My Mind

  • Multimodal understanding: I can paste a niche meme and get the joke explained
  • Code translation: “Convert this Python 2.7 to modern TypeScript” just works
  • Contextual awareness: With large context windows, AI can track our conversations and project state
  • Creative writing: AI can match any writing style or tone
  • Image generation: DALL-E, Midjourney, and Stable Diffusion have displaced much of the stock photography market

What AI Still Can’t Do (But Probably Will Soon)

  • Run continuously—they still need humans to invoke them
  • Improve their own code—recursive self-improvement and AGI remain elusive
  • Replace senior developers—but it’s getting uncomfortably close
  • Do physical-world tasks—they can’t fix your plumbing or build furniture (yet)
  • Understand true context beyond their context window
  • Generate truly novel scientific breakthroughs—though they’re great at synthesis

The Biggest Surprise: No Skynet

Here’s what I didn’t expect: we have AI that can pass the bar exam, write symphonies, and code better than most humans, but it hasn’t taken over the world. No Skynet. No Ultron. No HAL 9000. Not even a “Her”-style AI that we fall in love with (though Grok’s Ani is trying) [14].

Instead, we got… really smart autocomplete.

These AIs are incredibly capable but fundamentally passive. They wait for our prompts. They don’t have desires or goals. Yes, there are agentic systems and SWE agents being developed everywhere, but they’re still fundamentally reactive—they execute tasks we define rather than pursuing their own objectives.

I expected this phase—powerful but not autonomous AI—to last maybe a year or two before someone figured out how to make them into agents. But here we are in 2025, and Claude still needs me to press Enter. ChatGPT still waits patiently for the next prompt. Copilot suggests code but doesn’t write entire programs while I sleep.

But let’s not get too comfortable. The pace hasn’t slowed—it’s accelerating. Just this year:

  • OpenAI [15] and Google’s Gemini [16] both reached IMO gold-medal-level performance in mathematics
  • Genie 3 can generate playable 3D worlds from a single image [17]
  • GPT-5 can reason through complex multi-step problems that would have stumped GPT-4 [18]
  • AI that seemed miraculous in 2023 now feels quaint

Looking back at ChatGPT from just two years ago is like watching a flip phone after using an iPhone. It was impressive then, but today’s models make it look like a toy. If this pace continues, my “extended tool phase” observation might age very poorly, very quickly.

It’s like we’ve built a Formula 1 engine but we’re still figuring out how to attach wheels. The capability is there, but the autonomy isn’t. And honestly? That’s probably a good thing. This extended “tool phase” is giving us time to figure out alignment, safety, and what we actually want from AI before it starts wanting things from us. But that time might be shorter than we think.

The Unexpected Consequences

The world is drowning in AI-generated content. We can produce homework essays, generic creative writing, passably good artwork, and corporate boilerplate faster than anyone can consume it. The internet is filling up with what people call “slop”—technically correct but soulless content that nobody asked for. Meanwhile, Stack Overflow traffic has reportedly dropped by 50% since 2022 [19]. Why wade through outdated answers when AI gives you a personalized solution in seconds? The entire model of community-driven knowledge sharing is evaporating.

The tech optimists insist this is all progress—creative destruction at its finest. Old inefficiencies swept away by superior technology. The invisible hand optimizing knowledge transfer. But tell that to the Stack Overflow contributors who spent decades building a commons, or the junior developers who will never develop deep expertise, or the teachers watching their entire pedagogical model collapse. The market may be efficient, but efficiency isn’t everything.

This shift has fundamentally broken how we learn. Junior developers now skip entire learning curves, jumping straight from “Hello World” to building production apps with AI assistance. They never struggle through the fundamentals that build intuition. It’s like GPS navigation—incredibly useful, but we’ve forgotten how to read maps. We need AI to be productive, but we need expertise to know when AI is wrong. The very tool that lets juniors code like seniors also prevents them from developing the judgment to spot AI hallucinations, judge architectural decisions, or debug things on their own.

Everything looks the same now. AI tends to give similar answers to similar prompts, so everyone’s code has converged on the same patterns, the same variable names, the same comments. We’re losing the quirky, personal style that made codebases unique. Academic institutions are in crisis mode trying to detect AI-generated assignments. Code reviews now include checking if the implementation is “too perfect” to be human-written. We have to specify “written by a human” like it’s artisanal bread. This post? Co-written with Claude. My code? Codex helped. The line between human and AI creation has blurred beyond recognition.

Perhaps most insidiously, the expectations have completely recalibrated. What used to take a week should now take a day. What took a day should take an hour. Clients and managers have internalized AI-assisted productivity as the new baseline. “Just have ChatGPT do it” has become the new “just Google it,” except the expectations are 10x higher. We’ve built a treadmill that only goes faster.

What Didn’t Change

Despite the AI revolution:

  • Haskell still isn’t mainstream (but AI can explain monads better than any human)
  • Jekyll still works great for blogs (though AI now writes the posts)
  • Git still tracks our changes (now mostly AI-generated)
  • Python 2 vs 3 matters less when AI can translate between them quickly
  • We still need humans to decide what to build

Looking Forward (If That Even Makes Sense Anymore)

If AI progress stopped completely, right now, today—we’d still have a decade of upheaval as current AI spreads everywhere. Every industry, every job, every creative endeavor would be transformed just by what already exists. GPT-5 alone, frozen in time, would still revolutionize education, medicine, law, and programming as it reaches the billions who haven’t touched it yet.

But AI progress isn’t stopping. It’s accelerating.

2034 is going to be a completely alien world. I can barely imagine 2027.

The gap between 2016 and 2025 feels huge, but it might be nothing compared to 2025 to 2027. We could have AGI by then. Or we could discover fundamental limits that slow everything down. We could have AI agents running companies. Or we could have regulations that keep AI as a supervised tool. We could solve alignment and create beneficial superintelligence. Or we could be living through the final scenes of a cautionary tale.

The only prediction I’m confident in: the future is going to get very weird, very fast. The kind of weird where this blog post feels like it was written in the stone age. That’s assuming Yudkowsky isn’t right and we all stop writing blog posts entirely.

But hey, at least I finally got dark mode working.


P.S.—The migration to GitHub Pages was remarkably smooth. The hardest part was remembering that my domain’s DNS resolved to Squarespace (after Google sold Google Domains to them [20]), which was redirecting to Google App Engine, which was serving my static Jekyll site through a Python Flask app. Why was I serving static files through Python? Because in 2016 I was playing around with App Engine, not because it made any sense. Sometimes the simplest solution (static files served statically) is the one you should have started with.
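
For the record, that old stack bottomed out in something like the following. This is a minimal sketch reconstructed from memory, not the original source; _site is just Jekyll’s default build output directory:

    # app.py: roughly what the old App Engine setup amounted to.
    # A Flask app whose only job is serving Jekyll's static output.
    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/")
    @app.route("/<path:path>")
    def serve(path="index.html"):
        # Hand back the prebuilt file from Jekyll's _site directory.
        return send_from_directory("_site", path)

    if __name__ == "__main__":
        app.run()

GitHub Pages replaces that entire stack with a plain static file server, which is all the content ever needed.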


References
