My employer offers effectively unlimited[1] use of Claude Code for work, and we’ve been encouraged to explore and increase our use of it. However, I’m somewhat hesitant to adopt Claude into my work.
Claude withdrawal symptoms
Even though Claude Code may be free within my company, I won’t have access to it outside of work. Probably unsurprisingly, I’m currently spending more time programming at work than on my personal projects, which means that my programming habits are now grown and nurtured in the comfort of my workplace.
If I get used to relying on Claude Code for menial tasks like asking questions, generating boilerplate, or debugging, then I worry I’ll find myself longing for Claude when working on personal projects, where I won’t have the same luxury, and this could discourage me from working on my own projects—perhaps to my employer’s benefit. Ultimately, I don’t want to develop a dependence on Claude.
Ooh, shiny
At my company, we’re also encouraged to familiarize ourselves with Claude, and AI tools in general, and adopt them into our workflow to stay ahead of those who can’t use AI effectively. Maybe I’m naive—this recommendation comes from people with far more years of industry experience than I have—but I remain skeptical.
Right now, there is still plenty of hype for AI, so technologies continue to change quickly. Who’s to say that Claude Code will remain the industry leader a year from now? Effort spent customizing tooling (e.g. creating skills, “writing” CLAUDE.md) and optimizing prompts and workflows might go to waste once everyone switches to the new shiny AI agent on the market.
I feel like learning to use Claude isn’t really a skill that requires dedicated effort to master either. Claude itself had a fairly painless installation and setup process, and it comes with a flashy terminal UI. Just by typing in a short prompt, Claude can immediately get to work with minimal configuration, and its results are adequate. The companies behind these AI agents are incentivized to keep the barrier to entry low, to make their tools easy to learn and use, and I don’t see that going away any time soon. LLMs are made for human use, so we don’t need to change ourselves or our systems to serve LLMs.
Insert anti-AI argument here
In general, I’m still not completely sold that AI can improve our productivity that much.
Like sure, an LLM can blast away the most menial, tedious parts of coding, like writing test cases and boilerplate for a React component or API route. But you know that saying that 80% of the work is done in 20% of the time? The LLM is great at knocking out that 80% of the work—the parts you could already do easily without much thought or effort—but that only yields 20% time savings. The hard parts—defining requirements, coordinating with other teams, code review[2], infrastructure, meetings—each take up equal slices of the remaining pie.
Plus, while AI agents can now reliably generate code that works, I still doubt the quality of that code. Without significant handholding (such as handwriting examples for Claude to work off of), LLMs tend to generate much more code than humans do for the same task—after all, writing code is so much faster for an LLM than for a human. This likely comes from the same tendency that lets LLMs turn a short sentence into a novel: in code, they may handle errors or edge cases[3] that a human would opt not to. While the code works, this does a disservice to humans: more code means more for humans to review, and longer documentation means more words for humans to read.[4] As LLMs take over writing code, we’ll be spending more of our time reviewing it, so having to review functional but verbose code will hit harder than before.
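To make the verbosity point concrete, here’s a hypothetical sketch (the function names and scenario are my own, not from any real tool’s output) contrasting the kind of defensive wrapper an LLM might produce with the leaner version a human might write when an outer layer already handles errors:

```python
def parse_port_verbose(value):
    """LLM-style: catch everything and return a fallback.

    The failure is silenced here, so the original traceback never
    reaches the caller's error-handling layer or the logs.
    """
    try:
        port = int(value)
        if port < 0 or port > 65535:
            return None
        return port
    except (ValueError, TypeError):
        return None  # stack trace discarded at this point


def parse_port_lean(value):
    """Human-style: let bad input raise and propagate.

    A caller that already has an outer error handler gets the full
    traceback for free, with no extra code at this layer.
    """
    port = int(value)  # raises ValueError with context on bad input
    if not 0 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


print(parse_port_verbose("not-a-port"))  # None -- the failure is silent
try:
    parse_port_lean("not-a-port")
except ValueError as e:
    print(type(e).__name__)  # ValueError -- surfaces with traceback intact
```

The verbose version is longer, and its extra lines are exactly the ones a reviewer has to stop and question: is swallowing this error actually correct here, or is it hiding a bug?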
And with recent research suggesting it may be counterproductive to let Claude auto-generate its own CLAUDE.md, I wonder how many of the “best practices” around agentic coding are just placebo, or come from people being led on by the LLM’s sycophantic overconfidence.
Conclusion
I don’t really want to get used to using Claude Code, even when my employer provides it for free:
- I don’t want to develop a dependence on Claude Code that dissuades me from working on my personal projects.
- A new, shiny, better competitor to Claude will probably show up sooner or later.
- I’m still not confident that Claude can improve our productivity that much.
I’ll still keep an eye on Claude and its developments, though, so I’ll cautiously stay ten steps behind Claude.
1. There’s like an organization/team-wide limit, but it’s unlimited in the sense that I don’t really need to set limits on my own use.
2. I guess even the greatest AI enthusiast does not completely trust LLM-generated code without human review. Only when our team trusts AI enough to do away with code review will I see AI take a threatening step towards replacing our jobs.
3. Unnecessary handling of errors and edge cases may not just hamper reviewability but also plant a footgun. For example, errors might already be caught by an outer layer, so explicit error handling might discard valuable stack traces and add noise to error logs. A human may know that an edge case is unlikely or impossible in practice, while this knowledge may lie outside the LLM’s context window. Sometimes, it may be better to fail noisily on an edge case that is not expected to happen than to incorrectly handle a still-hypothetical one.
4. Here’s a thought: one might say that verbose code isn’t a problem if only agents will be looking at it in the future, but since LLMs are trained on human text, I wonder whether overly wordy code and documentation also reduce the effectiveness of LLMs down the line.