There’s something appropriately meta about using an AI coding assistant to build an AI-powered news aggregator. QuantumExecBrief monitors quantum computing industry news, filters for business relevance, and generates weekly executive briefings using Claude. The whole thing was built with Claude Code as my primary development partner. The story here isn’t the subject matter; that could be anything. Here’s what I learned about the workflow.

The Project in Brief

QuantumExecBrief does a few things:

  • Monitors 22 RSS feeds for quantum computing industry news
  • Filters for business relevance (not academic research papers)
  • Generates weekly AI-synthesized executive briefings
  • Deploys as a static site to Azure

The stack is Python, SQLite, Jinja2, the Claude API, and GitHub Actions for automation. About 3,800 lines of Python with a comprehensive test suite.

Not a massive project, but enough to develop patterns worth sharing.

Workflow Tips

Start with Structure, Not Code

The temptation with AI coding assistants is to dive straight into “write me a function that does X.” Resist this. Describe what you want in natural language first. I first asked ChatGPT to generate a requirements document from my idea, saved it in the working directory, and fed it straight in. Claude then read it and built a plan around it.

Let Claude Teach You

Don’t just ask for code; ask for explanations too. Some of the most valuable moments came from Claude proactively adding things I didn’t know I needed.

A simple question like this:

Me: Are there any security concerns in this repo?

Led to a prioritised security review that surfaced issues I wouldn’t have thought to look for - including XXE vulnerabilities in RSS parsing. Claude added defusedxml for protection. I hadn’t asked for it. I learned something about XML security I would have missed otherwise.
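For illustration, the pattern looks roughly like this - a minimal sketch rather than the project’s actual fetcher, with a placeholder feed URL:

```python
import requests
from defusedxml import ElementTree  # drop-in parser that refuses XXE payloads


def fetch_titles(feed_url: str) -> list[str]:
    """Fetch an RSS feed and pull out item titles, rejecting malicious XML."""
    response = requests.get(feed_url, timeout=10)
    response.raise_for_status()
    # defusedxml raises on entity-expansion and external-entity tricks
    # instead of quietly processing them like a naive parser might.
    root = ElementTree.fromstring(response.content)
    return [el.text for el in root.iter("title") if el.text]
```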

The duplicate detection uses Jaccard similarity for fuzzy title matching. That came from a conversation where I asked “how should I handle near-duplicate headlines?” Not “implement Jaccard similarity” - I didn’t know that was the answer until Claude explained it.
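The technique itself fits in a few lines. This is a sketch of the idea, not the exact code in the repo, and the 0.8 threshold is illustrative:

```python
def jaccard_similarity(title_a: str, title_b: str) -> float:
    """Jaccard similarity of the titles' word sets (intersection over union)."""
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def is_near_duplicate(title_a: str, title_b: str, threshold: float = 0.8) -> bool:
    """Treat two headlines as duplicates if their word overlap is high enough."""
    return jaccard_similarity(title_a, title_b) >= threshold
```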

Iterate in Small Chunks

Build module by module. Test each piece before moving on. The speed of iteration is the superpower here - you can try ideas fast and throw away what doesn’t work.

The keyword matching system for relevance filtering went through three or four iterations in a single session. First pass was too strict, second was too loose, third added negative keywords, fourth refined the scoring. Each iteration took minutes. That kind of rapid experimentation changes what you’re willing to try. You know what good output looks like - your job is to guide the tool and the project towards it.
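The end state looked something like this - a sketch under assumptions, where the keywords, weights, and threshold are invented for illustration:

```python
# Illustrative keyword weights - the real lists are longer and were tuned
# over those three or four iterations.
POSITIVE_KEYWORDS = {"quantum computing": 3, "qubit": 2, "funding": 2, "partnership": 1}
NEGATIVE_KEYWORDS = {"arxiv": 2, "preprint": 2, "phd thesis": 1}  # academic signals


def relevance_score(text: str) -> int:
    """Positive keywords add to the score, negative keywords subtract."""
    lowered = text.lower()
    score = sum(w for kw, w in POSITIVE_KEYWORDS.items() if kw in lowered)
    score -= sum(w for kw, w in NEGATIVE_KEYWORDS.items() if kw in lowered)
    return score


def is_business_relevant(text: str, threshold: int = 2) -> bool:
    return relevance_score(text) >= threshold
```

Tweaking those weights and the threshold is exactly the kind of minutes-long iteration described above.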

Use Claude for the Boring Bits

Test fixtures. Boilerplate. CI/CD configs. Requirements files. All the glue that holds a project together but isn’t interesting to write.

Let Claude generate the pytest.ini, the GitHub Actions workflows, the requirements.txt. Save your mental energy for the interesting decisions - what the system should do, not how to configure pytest markers.

Prompts Are Product

For a project that uses the Claude API, the prompts themselves are core product. The synthesis prompt and editorial prompt in QuantumExecBrief evolved significantly through iteration.

The target audience kept shifting as I refined my thinking:

Me: Please review AI prompts - I want to be clear this is for execs potentially outside the quantum space (ie CTOs in manufacturing or CIOs in pharma). Does the prompt give that view?

Me: Ok, please update the prompt to cover people outside quantum - that is the target audience.

Each iteration improved the output. Treat prompt engineering as a collaborative writing exercise. Store prompts in version-controlled files (mine are .txt files in config/). Review changes like you’d review code changes. The prompts matter as much as the Python.
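In practice that can be as small as loading the template and handing it to the API. Here’s a sketch using the anthropic Python SDK, where the file name, the {articles} placeholder, and the model string are assumptions on my part:

```python
from pathlib import Path

import anthropic  # official SDK; picks up ANTHROPIC_API_KEY from the environment


def synthesise_briefing(articles_text: str) -> str:
    """Fill a version-controlled prompt template and ask Claude for a briefing."""
    # Assumes the template contains an {articles} slot.
    template = Path("config/synthesis_prompt.txt").read_text()
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=2000,
        messages=[{"role": "user", "content": template.format(articles=articles_text)}],
    )
    return message.content[0].text
```

Because the template lives in config/ as a plain file, a prompt change shows up in a diff like any other change.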

Ask for Graceful Degradation

Explicitly ask: “What happens if X fails?” This kind of production thinking emerges from the right questions.

Me: I need to be able to test the newsletter without sending it to everyone - what is the right approach here?

This led to a discussion of test audiences, environment flags, and confirmation steps before broadcast. Production concerns I hadn’t fully thought through.
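The shape that fell out of that conversation was roughly this - a hypothetical sketch, where NEWSLETTER_MODE, CONFIRM_BROADCAST, and the recipient lists are invented names rather than the project’s actual config:

```python
import os

# Hypothetical recipient lists - real addresses would live in config, not code.
TEST_RECIPIENTS = ["me@example.com"]
FULL_AUDIENCE = ["subscribers@example.com"]


def resolve_recipients() -> list[str]:
    """Default to the test audience; broadcasting needs an explicit double opt-in."""
    if os.environ.get("NEWSLETTER_MODE", "test") != "broadcast":
        return TEST_RECIPIENTS  # safe default: never mass-mail by accident
    if os.environ.get("CONFIRM_BROADCAST") != "yes":
        raise RuntimeError("Broadcast requested without CONFIRM_BROADCAST=yes")
    return FULL_AUDIENCE
```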

Claude suggested fallback patterns throughout QuantumExecBrief. The synthesis works without an API key (you just get unprocessed articles). The editorial pass is optional. The fetchers handle partial failures gracefully. None of this was in my original spec - it came from asking the right questions during development.
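The no-API-key fallback, for instance, reduces to a guard like this - a sketch of the pattern, reusing the synthesise_briefing sketch from earlier:

```python
import os


def build_briefing(articles: list[str]) -> str:
    """Synthesise with Claude when possible; fall back to raw articles otherwise."""
    raw = "\n\n".join(articles)
    if not os.environ.get("ANTHROPIC_API_KEY"):
        return raw  # degrade gracefully: the briefing still ships, just unprocessed
    try:
        return synthesise_briefing(raw)  # see the earlier sketch
    except Exception:
        # A failed API call shouldn't kill the weekly run.
        return raw
```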

With these tools you’re outsourcing some of the mundane. You still need to think and understand what is happening - and that understanding is what frees you to think more strategically.

What I Learned

AI Coding Assistants Change How You Think

The mental model shifts. Less “how do I implement X?” and more “what should X do?” You spend more time on requirements and architecture, less on syntax and boilerplate.

You become a reviewer and director rather than just a writer. This is a genuine shift in how programming feels, not just faster typing.

Speed Creates Different Tradeoffs

When iteration is fast, you try more things. Some code gets thrown away. That’s fine. The first solution isn’t precious when the second is cheap.

This changes risk tolerance. I tried ideas I would have dismissed as “too much work” in a traditional workflow. Some didn’t work out. The cost of finding out was low.

The Meta-Irony is Real

Using Claude Code to build prompts for the Claude API. Asking an AI how to ask an AI. It’s turtles all the way down.

And it works. Claude is genuinely good at critiquing and improving prompts for Claude. The meta-level doesn’t create problems - it creates useful feedback loops.

What I’d Do Differently

A few things for next time:

Start with tests earlier. Retrofitting tests is less satisfying than building them alongside the code. Test-first works well with AI assistants because you can describe the expected behaviour before implementing it.
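For example, the near-duplicate behaviour could have been pinned down before any implementation existed. These tests target the is_near_duplicate sketch from earlier (the module name is hypothetical) and run under pytest:

```python
from dedupe import is_near_duplicate  # hypothetical module name


def test_identical_titles_are_duplicates():
    assert is_near_duplicate("IBM announces new qubit record",
                             "IBM announces new qubit record")


def test_reworded_titles_are_still_duplicates():
    assert is_near_duplicate("IBM announces new 1,000-qubit processor",
                             "IBM announces new 1,000-qubit chip",
                             threshold=0.6)


def test_unrelated_titles_are_not_duplicates():
    assert not is_near_duplicate("IBM announces new qubit record",
                                 "Quarterly funding roundup for quantum startups")
```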

Sketch the data model first. I dove into fetchers before properly thinking through the article model, and that came back to bite me later.
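Even a minimal dataclass, written on day one, would have forced the right questions early. This is my guess at the shape, not the repo’s actual model:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Article:
    """Minimal article model - hypothetical fields, for illustration."""
    title: str
    url: str                  # natural dedupe key, alongside fuzzy title matching
    source: str               # which of the 22 RSS feeds it came from
    published: datetime
    summary: str = ""
    relevance_score: int = 0  # filled in by the keyword filter
```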

Document architectural decisions as you make them. Claude conversations capture the reasoning, but extracting it afterwards is tedious. A lightweight ADR (Architecture Decision Record) practice would help.

The Result

QuantumExecBrief is live and running. Weekly briefings synthesised from industry news, deployed automatically, no manual intervention required.

The workflow patterns transfer to other projects. The specific domain was quantum computing news, but the approach - structure first, iterate fast, let Claude teach you - works for anything.

If you’re interested in the quantum computing briefings, you can find them at QuantumExecBrief. If you’re building projects with Claude Code, I’d be curious to hear what workflow patterns you’ve developed.