Discussion about this post

Pawel Jozefiak

The IRB data warning is one I keep seeing people skip over, and it matters. Research workflows have a specific trust problem: you're often working with data that's one mistake away from a compliance issue, and Claude Code's default behavior is to read everything in the project directory.

The context window degradation point is also underappreciated (most people just keep going until outputs get weird). Good that you're building the Markus Academy series around this. Researchers need the setup guide more than developers do, tbh; the stakes for getting it wrong are higher.

Aadarshkumar

This is such a well-structured breakdown, thank you for putting it together. The ladder analogy hit close to home. I spent months at Level 0-1, convinced I was "using AI" when I was really just copy-pasting between tabs.

The part about context window degradation is something I wish someone had told me earlier. I've had sessions where Claude started giving noticeably worse answers and I assumed it was a model issue, not a context problem. That mental model shift completely changes how you work with it.

What you've described here is exactly the gap we're trying to bridge with a course we're building, "Master Claude in the Real World." Not the theory of what Claude can do, but the actual workflows: prompting, Claude Code, Cowork, and integrations. We just launched on Kickstarter for anyone who wants to follow along as we build it out: https://shorturl.at/ZrG8p

Excited to see where this series goes. The jump from Level 2 to Level 3 is where things genuinely start to feel different.

