My Month with Claude Code

I pretty much lived in a terminal interface for a month. Here’s what I learned.

This past month I went very deep with Claude Code, spending most of each working day in a terminal interface. Not a flex, I assure you, but I have emerged with a clearer sense of where Claude Code is most effective for designers.

Over this past month I felt like I had regained the same energy I had for work in my early twenties. I was working long days but I didn’t feel burned out. I felt energised, empowered and I was enjoying work more than ever. This is a phenomenon that Nikhyl Singhal talked about in a recent episode of Lenny’s podcast. He describes this idea of ‘finding your moment of joy’ with AI and that it’s a tipping point for getting people to adopt it in their day-to-day work.

That said, working with AI often delivered speed and a feeling of efficiency, but rarely true progress.

Here are the key takeaways, as short as I can make them:

Search still exists, try that first

Be very mindful when using Claude Code to search large datasets. It uses a lot of tokens and takes a relatively long time compared to a plain search. When I used the Slack MCP I often found that it could have just been a search in Slack itself. Where it excelled was using Claude to analyse a Slack channel full of customer feedback. I was able to ask it to read the channel and list the most common pain points. It was particularly good at dismissing feedback that was irrelevant to the project.

Searches of entire codebases on GitHub felt valuable for understanding how things worked, especially when familiarising myself with products that were new to me. Honestly, these searches could have just been a short conversation with an engineer, and it was hard to judge accuracy as I lacked any true understanding to challenge the AI.

Research synthesis is okay. Not amazing.

As much as we strive for purity in our research process, applying intuition can really help you move faster. I found that where I had personally conducted user interviews my hand-written notes were good enough to act on. For context, I was doing a process of rapid iterative testing of a prototype. For greenfield research with many hour-long conversations across multiple researchers there are certainly efficiencies to be gained with AI. My take here though is not to have AI do all the work and spit out the answers but to instead use AI to build tools that aid the researchers in cutting through the data. Can we point our intuition at the data and reveal new insight?

My advice therefore is to pay attention to your motivation before prompting a big synthesis of a bunch of transcripts. Are you looking for something you already know or intuit? Can you not just simply act on your intuition now? Is a research report really necessary and will anyone read it?

Be mindful of over documentation

Writing documentation is now so easy and it feels like you’re creating valuable content. But ask yourself honestly, is anyone going to read it and if they do, will it help your team move forward? I can already feel the weight of the Markdown files I’ve had Claude generate this past month and they’re already gathering dust.

What would often happen is Claude would reveal something to me I believed was super valuable; wanting to share this information, I had it generate a Markdown document. No one wants to receive a Markdown document because it’s just another thing to read. It’s also less meaningful to them because it wasn’t something they arrived at. A better approach is to ask yourself: why do I want to share this, and what do I want the other person to do? That’ll reveal what you actually need to do, which is often just a question in Slack.

It’s tempting to automate status updates

I automated some status updates and had Claude post them to Slack on my behalf. Most of the time it was pretty bad. It’s helped me get a sharper view on when it’s more valuable to have things written by me.

Whilst we may view writing status updates as a chore, there is value to be gained in taking the time here:

  • The content: there is certain nuance that’s hard for the agent to gather, and you could lose a lot of time trying to ensure it’s documented for the AI to work with.

  • The process: writing it myself actually forces me to think about it, and that’s always valuable.

  • The connection: an authentic voice leads to more engagement, replies and conversation, which is surely the point of posting something to Slack.

There are many kinds of status updates. Summarising “top things we heard from customers this week” is perfect because there’s a single line from data to update, and it’s simply moving the information into a shared place where we can converse about it. My updates failed because they required nuance, lacked accuracy, or didn’t capture the lens of what I wanted to say.

Image generation for storyboarding saved a ton of time

I’ve been around the block. Until recently I would hand draw storyboards to help communicate the vision of the product to the team and stakeholders. Storyboards are effective because they communicate how our product is experienced by the user and they can showcase a ton of future ideas without anyone getting confused about whether they exist yet. It’s very clearly a sketch and not a prototype that looks real.

My process for hand drawing them would take about 1–2 hours per sketch and would involve printing out stock images of people holding phones or sitting in scenes like coffee shops or airports. I would then trace these photos and overlay those drawings with either hand-drawn or computer-drawn elements, like notifications popping up from a device.

Now I can use AI (Gemini in this case, not Claude Code) to generate these storyboards rapidly and in a consistent style. Try to generate each frame of the storyboard independently, being tight about what you want it to generate. Don’t try to have it generate all the scenes in one shot; it’ll take you far longer trying to fix everything.

Do you really need its design ideas?

Eager to play around with Figma Console MCP and Pencil.dev, I had Claude generate a bunch of different variations on a screen to explore different approaches I might take. I’m probably not going to do this all that much in the future. 90% of the output was useless, 10% was something I was already considering. Given that AI is a sort of amalgamation of all that already exists, it’s not all that different than screenshotting ideas off Mobbin. Which can actually be an enjoyable process, during which I tend to take time forming an opinion on what I like and don’t like.

Building prototypes is magical 

This is where Claude shines for designers. I can now build chat, voice, animations, databases of dummy content, complex interactions. It feels limitless. I can’t imagine ever wiring up screens into a prototype ever again. That was always tedious work and building them in Claude feels so much better.

Two things to watch out for:

  1. If you don’t tell Claude it’s a prototype it’ll get cooking on a whole lot of stuff you don’t need it to. Be pragmatic about what you need to actually work and what can continue to remain off the cards. For example: Do I actually need user authentication or just a button that makes it look like the user is logging in?

  2. Avoid the fidelity trap. Worrying too much about aesthetic details during early prototyping is easily done. Ignore everyone else’s highly polished prototypes, instead focus on your problem, build just enough function and fidelity to learn what you need to learn and move on. 
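To make the first point concrete: a prototype’s “login” can be a stub that does nothing but flip state so the rest of the flow can render. A minimal sketch in TypeScript (the names here are hypothetical, not from any library):

```typescript
// Hypothetical prototype-only auth stub: no server, no validation.
// The goal is to *look* logged in, not to be secure.
type Session = { user: string; loggedIn: boolean };

function mockLogin(name: string): Session {
  // A real app would call an auth service here; the prototype just
  // returns a canned session so subsequent screens can render.
  return { user: name || "Demo User", loggedIn: true };
}

const session = mockLogin("Ana");
console.log(session.loggedIn); // true
```

Wiring a button to a stub like this takes minutes, whereas real authentication could eat a whole day of the prototyping budget.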

Coded prototypes are not a linear path to production code. They are as disposable as a sketch; remembering this will ensure you don’t over-engineer them or get too precious about them.

Building your own tools feels incredible but can be a time suck

When I was building a prototype I missed having a canvas to see screens side by side. So I built one that rendered each view of the prototype on a canvas and allowed me to move the screens around. Just like Figma but instead of a picture of a thing it was the actual thing. When I made a change to the prototype it was reflected in the frames on the canvas. This helped me to see what I was building in a way that felt more familiar and helped me communicate the user journey at a higher level, rather than requiring someone to click through the prototype each time. 
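The core of a canvas like that is surprisingly little state: each frame is a live prototype view plus coordinates you can drag. A hypothetical sketch of that state in TypeScript (all names are mine, not from my actual tool):

```typescript
// Hypothetical canvas state: each frame pairs a prototype route
// (e.g. rendered in an iframe pointing at the dev server) with a position.
type Frame = { id: string; route: string; x: number; y: number };

// Lay frames out left-to-right so each view appears beside the last.
function layoutFrames(routes: string[], width = 400, gap = 40): Frame[] {
  return routes.map((route, i) => ({
    id: `frame-${i}`,
    route,
    x: i * (width + gap),
    y: 0,
  }));
}

// Dragging a frame only updates its coordinates; the view itself
// re-renders live whenever the underlying prototype changes.
function moveFrame(frames: Frame[], id: string, dx: number, dy: number): Frame[] {
  return frames.map((f) => (f.id === id ? { ...f, x: f.x + dx, y: f.y + dy } : f));
}
```

Because the frames reference the running prototype rather than static images, any change to the prototype shows up in every frame at once, which is the whole appeal over screenshots.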

I got quite far with it quite fast, but when I tried to add connectors between the screens to really show the user journey, things got a lot more complicated. I sensed it was going to take a lot more effort to get that working how I imagined, so I put the brakes on it for the time being. If I get a sense that the feature will really help me in future, I’ll carve out time to build it. But it’s worth remembering that some problems can be solved just as easily in a lo-fi way. For instance, it doesn’t take long to screen-grab a bunch of views, drop them in Miro and add some connector lines. Again, if it’s a disposable artefact for solving a communication problem, we needn’t over-engineer the solution.

So if you’re building your own tools, ask yourself whether this is a problem you’re repeatedly encountering and whether the existing methods and tools are just as quick.

It’s very good at thinking like an engineer

There’s a point in the design process where it is vital to start thinking like an engineer, and that’s right before you’re going to do a developer review. I wrote a skill that used the Figma MCP to point Claude at my design file and review my design as an engineer would. It was incredibly detailed, to the point where I’m scared to have it review my next design. I had a good way to test this too. I pointed it at a design that had just been through developer review, where an engineer had already created a Google Doc of their own that we were working in to define logic and solve other problems. The AI arrived at many of the same conclusions and questions that revealed gaps in logic and issues with the design. I plan to use this in future to do a pre-developer review so I can fix some of the issues beforehand and save time. This is not a moment to replace, though, as the review helps the engineer familiarise themselves with the design.

I wrote another small skill that was really just a prompt I committed to Claude’s MEMORY.md. It helps by “schooling me” on my technical/engineering language. Whenever I’m talking to Claude it’ll continue to act as normal, but it’ll inject an FYI beneath its message whenever I use technical language incorrectly. Here’s the prompt:

Add this to your MEMORY.md (~/.claude/projects/-Users-<username>/memory/MEMORY.md):

## Communication Preferences

### Technical Terminology Corrections

When I use a technical term imprecisely in a way that could cause real confusion with another developer (i.e., the wrong term would lead someone to understand something meaningfully different), add a brief, gentle correction at the end of the response:

  > FYI: you said **X** — that actually means **Y**. Next time, **Z** is the right term, because **A**

FYI: I didn’t make a skill because, in Claude’s words...

A skill is overkill here because skills are for reusable processes — things with steps, decision logic, or a specific trigger that kicks off a workflow. This is just a standing instruction: a preference that shapes how Claude responds all the time, in the background. That's exactly what MEMORY.md (or CLAUDE.md) is for. Wrapping it in a skill would add ceremony with no benefit.

Building for reals 

This is where I get less confident in what to recommend. Engineers are obviously using Claude Code in their day-to-day work, so we know it’s more than capable of writing production code. But we also know that it’s difficult to assess the quality of AI’s output when it’s outside our expertise. So whilst I’m now more capable of helping to build and ship our work, I’m wary of generating useless code that needs to be rewritten. But what’s the alternative? Creating every screen and component in Figma is also a waste, and beyond that, it’s a fictional rendering.

In terms of process or ways of working, this is where the waters are most murky. It’s the overlap of this new Venn diagram of responsibility between design and engineering. Getting clarity here will require close collaboration with engineering and will also depend on what it is you’re trying to design. Working on relatively small changes to an established system with a pattern library in place feels like safe ground for a designer to start creating branches and submitting pull requests. Engineering new capabilities for a less established product might require getting some foundations in place with an engineer first.

This step in the process is what I’m trying to figure out now. What I’m striving for is moving from prototyping to production code in the most efficient way possible. I have some thoughts on what that might look like but I’m going to hold off until I’ve gone further down that path.

In summary

If there’s one thing to take away, it’s that AI is incredibly good at making you feel like you’re making progress. More than ever it’s important to be vigilant and ask yourself whether what you’re doing is really having an impact, or whether you’re simply prompting things into existence because you can.