The Christmas break gave me something I rarely get during the year: time to learn without urgency. Two threads of exploration converged in interesting ways - one practical, one theoretical, both about AI’s role in how we work.
Hands-On: Claude Code
I spent part of the break experimenting with Claude Code, finally putting it through its paces on a problem that had defeated me for years: refactoring my Emacs configuration. Seven minutes of AI-assisted work accomplished what months of Sunday afternoons couldn’t.
But the real lesson wasn't about speed. It was about suitability - recognizing that some tasks are fundamentally better suited to AI collaboration than to human grinding. Pattern-based refactoring across large codebases. Systematic documentation. Consistent application of organizational schemes. These aren't human strengths, and pretending they are wastes time.
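To make "pattern-based refactoring" concrete, here's the shape of change I mean - an invented before/after, not lifted from my actual config:

```emacs-lisp
;; Before: scattered, imperative settings (invented example)
(require 'recentf)
(setq recentf-max-saved-items 200)
(recentf-mode 1)

;; After: the same behavior as one declarative use-package block - a
;; pattern an assistant can apply uniformly across hundreds of stanzas
(use-package recentf
  :custom (recentf-max-saved-items 200)
  :config (recentf-mode 1))
```

Trivial in isolation; the value is in applying it consistently across an entire file without fatigue or drift.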
Working directly with Claude Code made AI assistance tangible in a way that abstract discussions never could. It's one thing to read about AI capabilities. It's another to watch it hold an entire configuration structure in "mind" while making granular, contextually aware changes across hundreds of lines.
Theory: Supremacy
While Claude Code gave me hands-on experience with AI’s present capabilities, Parmy Olson’s Supremacy: AI, ChatGPT, and the Race That Will Change the World reframed my understanding of how we got here.
I knew AI had been powering Google Search, recommendation algorithms, spam filters, and voice assistants for years. What I didn't grasp was the competitive landscape that shaped today's systems - the internal battles at Google between research and product teams, the exodus of talent to OpenAI, the philosophical splits over AI safety, and the race dynamics that pushed ChatGPT's public release.
The book revealed that today's AI landscape isn't just the result of technological progress. It's the product of corporate rivalries, personality clashes, strategic (mis)calculations, and competing visions of how AI should be developed and deployed. Google had the talent and technology but hesitated to deploy publicly. OpenAI moved faster with fewer resources. Each move shaped the next.
Moving Forward
The break taught me three things:
First, AI collaboration can be genuinely transformative when matched to suitable problems.
Second, we’ve barely begun to understand how AI has already transformed the tools we use daily.
Third, and perhaps most practically: plain text workflows matter more than ever. Claude Code’s effectiveness with my Emacs configuration wasn’t just about AI capability - it was about working with plain text files that AI can parse, understand, and modify systematically. This has implications for how I structure all my work going forward. Documentation in markdown. Configuration in readable formats. Notes in org-mode. The more I keep my workflows in plain text, the more I can leverage AI collaboration effectively.
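For concreteness, here's the flavor of structure I have in mind - a hypothetical fragment, not my actual config, where every choice keeps the file trivially parseable:

```emacs-lisp
;; A sketch of "plain text all the way down" in practice
;; (paths and package choices are illustrative, not prescriptive)
(use-package org
  :mode ("\\.org\\'" . org-mode)    ; notes live in plain-text org files
  :custom (org-directory "~/org")) ; hypothetical location

(use-package markdown-mode
  :ensure t
  :mode ("\\.md\\'" . markdown-mode)) ; documentation stays in markdown
```

Nothing exotic - and that's the point. Declarative, self-describing blocks like these are exactly what an AI assistant can read, reorganize, and extend reliably.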
Back to work soon, but with clearer eyes about what we are all working with.
Have you had similar realizations about AI’s role in tools you use daily? I’d love to hear about them.