If anyone from Anthropic is reading this, your billing for Claude Code is hostile to your users.
Why doesn’t Claude Code usage count against the same plan that usage of Claude.ai and Claude Desktop are billed against?
I upgraded to the $200/month plan because I really like Claude Code but then was so annoyed to find that this upgrade didn’t even apply to my usage of Claude Code. So now I’m not using Claude Code so much.
This would put Anthropic in the business of minimizing context to increase profits, same as Cursor and others who cheap out on context and try to RAG their way around it. That would quickly make it worse, so I hope they stay on API pricing.
Some base usage included in the plan might be a good balance
I’ve been using codemcp (https://github.com/ezyang/codemcp) to get “most” of the functionality of Claude code (I believe it uses prompts extracted from Claude Code), but using my existing pro plan.
It’s less autonomous, since it’s based on the Claude chat interface, and you need to write “continue” every so often, but it’s nice to save the $$
I totally agree with this, I would rather have some kind of prediction than using the Claude Code roulette. I would definitely upgrade my plan if I got Claude Code usage included.
Claude.ai/Desktop is priced based on average user usage. If you have 1 power user sending 1,000 requests per day and 99 users sending 5 (many sending none), you can afford a single $10/month plan for everyone to keep things simple.
But every Claude Code user is a 1000 requests per day user, so the economics don't work anymore.
I would accept a higher-priced plan (which covered both my use of Claude.ai/Claude Desktop AND my use of Claude Code).
Anthropic makes it seem like Claude Code is in the same product category as Claude Desktop (usage of which gets billed against your Claude.ai plan). This is how it signs off all its commits:
Generated with [Claude Code](https://claude.ai/code)
At the very least, this is misleading. It misled me.
Once I had purchased the $200/month plan, I did some reading and quickly realized that I had been too quick to jump to conclusions. It still left me feeling like they had pulled a fast one on me.
Maybe you can cancel your subscription or charge back?
I think it's just oversight on their part. They have nothing to gain by making people believe they would get Claude Code access through their regular plans, only bad word of mouth.
Well, take that into consideration then. Just make it an option. Instead of getting 1000 requests per day with code, you get 100 on the $10/month plan, and then let users decide whether they want to migrate to a higher tier or continue using the API model.
I am not saying Claude should stop making money; I'm just advocating for giving users some Claude Code coverage when they migrate from the basic plan to Pro or Max.
Claude Pro and other website/desktop subscription plans are subject to usage limits that would make it very difficult to use for Claude Code.
Claude Code uses the API interface and API pricing, and it writes and edits code directly on your machine; this is a level past simply interacting with a separate chatbot. It seems a little disingenuous to say it's "hostile" to users, when the reality is that yeah, you do pay a bit more for a more reliable usage tier, for a task that requires it. It also shows you exactly how much it's spent at any point.
The writing of edits and code directly on my machine is something that happens on the client side. I don't see why that usage would be subject to anything but one-time billing or how it puts any strain on Anthropic's infrastructure.
No, that's the whole point: predictability. It's definitely a trade-off, but if we could save the work as-is, we'd have the option to continue the iteration elsewhere or, even better, to fall back to the current API model from that point on.
A nice addition would be having something like /cost but to check where you are in regards to limits.
$200/month isn’t that much. Folks I’m hanging around with are spending $100 to $500 USD daily as the new norm; it's the cost of doing business and remaining competitive. That might seem expensive, but it's cheap... https://ghuntley.com/redlining
$100/day seems reasonable as an upper-percentile spend per programmer. $500/day sounds insane.
A 2.5 hour session with Claude Code costs me somewhere between $15 and $20. Taking $20/2.5 hours as the estimate, $100 would buy me 12.5 hours of programming.
It sounds insane until you drive full agentic loops/evals. I'm currently making a self-compiling compiler; no doubt you'll hear/see about it soon. The other night, I fell asleep and woke up with interface dynamic dispatch using vtables with runtime type information and generic interface support implemented...
Asking very specific questions to Sonnet 3.7 costs a couple of tenths of a cent every time, and even if you're doing that all day it will never amount to more than maybe a dollar at the end of the day.
On average, one line of, say, JavaScript represents around 7 tokens, which means there are around 140k lines of JS per million tokens.
On Openrouter, Sonnet 3.7 costs are currently:
- $3 / one million input tokens => $100 = 33.3 million input tokens ≈ 4.7 million lines of JS code
- $15 / one million output tokens => $100 = 6.7 million output tokens ≈ 950k lines of JS code
For one developer? In one day? It seems that one can only reach such amounts if the whole codebase is sent again as context with each and every interaction (maybe even with every keystroke for type completion?) -- and that seems incredibly wasteful?
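For what it's worth, a quick back-of-the-envelope sketch of that arithmetic (using the quoted OpenRouter prices and the ~7 tokens per line of JS assumption from above):

```python
# Back-of-the-envelope: what $100 buys at the quoted OpenRouter prices,
# assuming ~7 tokens per line of JS (the figure from the comment above).
BUDGET = 100.0                           # USD
INPUT_PER_M, OUTPUT_PER_M = 3.0, 15.0    # USD per million tokens
TOKENS_PER_LINE = 7

for label, price in [("input", INPUT_PER_M), ("output", OUTPUT_PER_M)]:
    tokens = BUDGET / price * 1_000_000
    lines = tokens / TOKENS_PER_LINE
    print(f"${BUDGET:.0f} of {label}: {tokens / 1e6:.1f}M tokens ~ {lines / 1e6:.2f}M lines of JS")
# $100 of input:  33.3M tokens ~ 4.76M lines of JS
# $100 of output:  6.7M tokens ~ 0.95M lines of JS
```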
That's how it works: everything is recomputed again with every additional prompt. But the provider can cache the state of things and restore it for a lower fee, and re-ingesting what was formerly output is cheaper than generating new output (a serial bottleneck), so sometimes there is a discount there.
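A hedged sketch of how that plays out over a session. The cache multipliers below are assumptions (cache writes at ~1.25x the base input price and cache reads at ~0.1x are commonly cited for Anthropic prompt caching); check current pricing before trusting the exact numbers:

```python
# Rough model of a multi-turn session, with and without prompt caching.
# Assumed multipliers: cache write ~1.25x base input, cache read ~0.1x base input.
# Ignores per-turn user prompt tokens for simplicity.
BASE_INPUT = 3.0 / 1e6    # USD per input token (Sonnet 3.7 via OpenRouter, as above)
OUTPUT = 15.0 / 1e6       # USD per output token
CACHE_WRITE = 1.25 * BASE_INPUT
CACHE_READ = 0.10 * BASE_INPUT

def session_cost(turns, context_tokens, reply_tokens, cached):
    cost, history = 0.0, context_tokens
    for _ in range(turns):
        if cached:
            # Prior history is a cache hit; only newly appended tokens get written.
            cost += history * CACHE_READ + reply_tokens * CACHE_WRITE
        else:
            cost += history * BASE_INPUT   # the whole history is re-sent at full price
        cost += reply_tokens * OUTPUT      # the new output
        history += reply_tokens            # the conversation grows every turn
    return cost

print(f"20 turns over 50k tokens of context, uncached: ${session_cost(20, 50_000, 1_000, False):.2f}")
print(f"20 turns over 50k tokens of context, cached:   ${session_cost(20, 50_000, 1_000, True):.2f}")
```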
Consider what engineers cost at Google per year, just on salary/stock, before fixed overheads such as insurance, leave, issues like ramp-up time and the cost of their manager:
- L3 SWE II - $193,712 USD
- L4 SWE III - $297,124 USD
- L5 Senior SWE - $377,797 USD
In the hands of a Staff+ engineer, these tools enable replication of Staff+ engineers and don't sleep. My 2c: the funding for the new norm will come from compressing the manager layer, the engineering layer, or both.
These tools and foundational models get better every day, and right now, they enable Staff+ engineers and businesses to have less need for juniors. I suspect there will be [short-to-medium-term] compression. See extended thoughts at https://ghuntley.com/screwed
I wonder what will happen first: will companies move to LLMs, or to programmers from abroad? Because ultimately, the latter will be cheaper than using LLMs - you've said ~$500 per day, while in Poland ~$1,500 is a good monthly wage, and even that still makes us expensive! How about moving to India, then? Nigeria? LATAM countries?
Do you have a link to some of this output? A repo on Github of something you’ve done for fun?
I get a lot of value out of LLMs but when I see people make claims like this I know they aren’t “in the trenches” of software development, or care so little about quality that I can’t relate to their experience.
Usually they’re investors in some bullshit agentic coding tool though.
I will shortly; I'm building a serious self-compiling compiler right now out of a brand-new esoteric language. Meaning the LLM is able to program itself without training data about the programming language...
Honestly, I don't know what to make of it. Stage 2 is almost complete, and I'm (right now) conducting per-language benchmarks to compare it to the Titans.
Using the proper techniques, Sonnet 3.7 can generate code in the custom lexical syntax/stdlib. So, in my eyes, the path to Stage 3 is unlocked, but it will chew lots and lots of tokens.
Surprised that "controlling cost" isn't a section in this post. Here's my attempt.
---
If you get the hang of controlling costs, it's much cheaper. If you're exhausting the context window, I would not be surprised if you're seeing high cost.
Be aware of the "cache".
Tell it to read specific files (and only those!); if you don't, it'll read unnecessary files, repeatedly re-read sections of files, or even search through files.
Avoid letting it search - even halt it when it starts. find / rg can produce thousands of tokens of output depending on the search.
Never edit files manually during a session (that'll bust cache). THIS INCLUDES LINT.
The cache also goes away after 5-15 minutes or so (not sure) - so avoid leaving sessions open and coming back later.
Never use /compact (that'll bust the cache; if you feel you need to, you're going back and forth too much or using too many files at once).
Don't let files get too big (it's good hygiene too) to keep the context window sizes smaller (a rough way to estimate this is sketched after this comment).
Have a clear goal in mind and keep sessions to as few messages as possible.
Write / generate markdown files with needed documentation using claude.ai, save those as files in the repo, and tell it to read the relevant file as part of a question.
I'm at about ~$0.5-0.75 for most "tasks" I give it. I'm not a super heavy user, but it definitely helps me (it's like having a super focused smart intern that makes dumb mistakes).
If I need to feed it a ton of docs etc. for some task, it'll be more in the few-dollar range rather than < $1. But I really only do this to try some prototype with a library Claude doesn't know about (or is outdated on).
For hobby stuff, it adds up - totally.
For a company, massively worth it. Insanely cheap productivity boost (if developers are responsible / don't get lazy / don't misuse it).
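As a rough companion to the "read specific files only" and file-size tips above, here's a small sketch (my own, not part of Claude Code) for estimating how many tokens a set of files will eat before you hand them over; the 4-characters-per-token ratio is a crude heuristic, not a real tokenizer:

```python
import sys
from pathlib import Path

CHARS_PER_TOKEN = 4      # crude heuristic; use a real tokenizer for exact counts
INPUT_PRICE_PER_M = 3.0  # USD per million input tokens, as quoted above

def estimate_tokens(paths):
    """Rough token count for the files you plan to tell Claude to read."""
    total_chars = sum(len(Path(p).read_text(errors="ignore")) for p in paths)
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    files = sys.argv[1:]
    tokens = estimate_tokens(files)
    print(f"{len(files)} files ~ {tokens:,} tokens "
          f"(~${tokens / 1e6 * INPUT_PRICE_PER_M:.2f} per uncached prompt at $3/M input)")
```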
Yeah, I tried CC out and quickly noticed it was spending $5+ for simple LLM capable tasks. I rarely break $1-2 a session using aider. Aider feels like more of a precision tool. I like having the ability to manually specify.
I do find Claude Code to be really good at exploration though - like checking out a repository I'm unfamiliar with and then asking questions about it.
Some tools take more effort to hold properly than others. I'm not saying there's not a lot of room for improvement - or that the UX couldn't hold the user's hand more to force things like this in some "assisted mode" - but at the end of the day, it's a thin, useful wrapper around an LLM, and LLMs require effort to use effectively.
I definitely get value out of it - more than any other tool like it that I've tried.
So I have been using Cursor a lot more in a vibe-coding way lately, and I have been running into what a lot of people report: sometimes the model will rewrite perfectly working code that I didn't ask it to touch and break it.
In most cases, it is because I am asking the model to do too much at once. Which is fine, I am learning the right level of abstraction/instruction where the model is effective consistently.
But when I read these best practices, I can't help but think of the cost. The multiple CLAUDE.md files, the files of context, the URLs to documentation, the planning steps, the tests. And then the iteration on the code until it passes the test, then fixing up linter errors, then running an adversarial model as a code review, then generating the PR.
It makes me want to find a way to work at Anthropic so I can learn to do all of that without spending $100 per PR. Each of the steps in that last paragraph is an expensive API call for us ISVs, and each requires experimentation to get the right level of abstraction/instruction.
I want to advocate to Anthropic for a scholarship program for devs (I'd volunteer, lol) where they give credits to Claude in exchange for public usage. This would be structured similar to creator programs for image/audio/video gen-ai companies (e.g. runway, kling, midjourney) where they bring on heavy users that also post to social media (e.g. X, TikTok, Twitch) and they get heavily discounted (or even free) usage in exchange for promoting the product.
Why do you think it's supposed to be cheap? Developers are expensive. Claude doesn't have to be cheap to make software development quicker and cheaper. It just has to be cheaper than you.
There are ways to use LLMs cheaply, but it will always be expensive to get the most out of them. In fact, the top end will only get more and more costly as the lengths of tasks AIs can successfully complete grows.
I am not implying in any sense a value judgement on cost. I'm stating my emotions at the realization of the cost and how that affects my ability to use the available tools in my own education.
It would be no different than me saying "it sucks university is so expensive, I wish I could afford to go to an expensive college but I don't have a scholarship" and someone then answers: why should it be cheap.
So, allow me the space to express my feelings and propose alternatives, of which scholarships are one example and creator programs are another. Another one I didn't mention would be the same route as universities force now: I could take out a loan. And I could consider it an investment loan with the idea it will pay back either in employment prospects or through the development of an application that earns me money. Other alternatives would be finding employment at a company willing to invest that $100/day through me, the limit of that alternative being working at an actual foundational model company for presumably unlimited usage.
And of course, I could focus my personal education on squeezing the most value for the least cost. But I believe the balance point between slightly useful and completely transformative usage levels is probably at a higher cost level than I can reasonably afford as an independent.
The issue with many of these tips is that they require you to use Claude Code (or Codex CLI, doesn't matter) to spend way more time in it, feed it more info, generate more outputs --> pay more money to the LLM provider.
I find LLM-based tools helpful and use them quite regularly, but not at $20+, let alone the $100+ per month that Claude Code would require to be used effectively.
what happened to the "$5 is just a cup o' coffee" argument? Are we heading towards the everything-for-$100 land?
On a serious note, there is no clear evidence that any of the LLM-based code assistants will contribute to saving developer time. Depends on the phase of the project you are in and on a multitude of factors.
No, it doesn't. If you are still looking for product market fit, it is just cost.
Two years after GPT-4's release, we can safely say that LLMs don't make finding PMF that much easier, nor do they improve the general quality/UX of products, as we still see a general enshittification trend.
If this spending was really game-changing, ChatGPT frontend/apps wouldn't be so bad after so long.
Finding product-market fit is a human directional issue, and LLMs absolutely can help speed up iteration time here. I've built two RoR MVPs for small hobby projects, spending ~$75 in Claude Code to make something in a day that would have previously taken me a month plus. Again, absolutely bizarre that people can't see the value here, even as these tools are still working through their kinks.
If they just helped you to ship something valueless, you paid $75 for entertainment, like betting.
> Again, absolutely bizarre that people can’t see the value here, even as these tools are still working through their kinks.
Far from that, I use AI daily, regularly. I just won't pay more than 20 dollars/month for it, and I'm definitely not going to pay for usage because, having used it for a long time, I know that I will waste money in the long run. Generating code stopped being the bottleneck in my current project a long time ago (AI indeed helped me get there), but I'm not spending $100 per session.
I mostly work in neovim, but I'll open cursor to write boilerplate code. I'd love to use something cli based like Claude Code or Codex, but neither of them implement semantic indexing (vector embeddings) the way Cursor does. It should be possible to implement an MCP server which does this, but I haven't found a good one.
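For what it's worth, the core of such an index is fairly small. A minimal sketch, assuming a local embedding model via sentence-transformers (the model name, chunk size, and file glob are illustrative choices, and the MCP server plumbing is left out entirely):

```python
# Minimal semantic code search: embed fixed-size chunks of source files once,
# then retrieve the closest chunks for a query by cosine similarity.
from pathlib import Path
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed local embedding model

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def chunks_of(text, lines_per_chunk=40):
    lines = text.splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk]) for i in range(0, len(lines), lines_per_chunk)]

def build_index(root, pattern="**/*.py"):
    chunks = [(str(p), c)
              for p in Path(root).glob(pattern)
              for c in chunks_of(p.read_text(errors="ignore"))]
    vectors = model.encode([c for _, c in chunks], normalize_embeddings=True)
    return chunks, np.asarray(vectors)

def search(query, chunks, vectors, k=5):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity, since vectors are unit-normalized
    return [chunks[i] for i in np.argsort(-scores)[:k]]

# Usage: chunks, vectors = build_index("src"); hits = search("where is auth handled?", chunks, vectors)
```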
I use a small plugin I've written myself to interact with Claude, Gemini 2.5 Pro or GPT. I've not really seen the need for semantic searching yet. Instead I've given the LLM access to LSP symbol search, grep, and the ability to add files to the conversation. It's been working well for my use cases, but I've never tried Cursor so I can't comment on how it compares. I'm sure it's not as smooth though. I've tried some of the more common Neovim plugins and for me it works better, but the preference here is very personal. If you want to try it out it's here: https://github.com/isaksamsten/sia.nvim
Tool-calling agents with search tools do very well at information retrieval tasks in codebases. They are slower and more expensive than good RAG (if you amortize the RAG index over many operations), but they're incredibly versatile and excel in many cases where RAG would fall down. Why do you think you need semantic indexing?
Unfortunately I can only give an anecdotal answer here, but I get better results from Cursor than the alternatives. The semantic index is the main difference, so I assume that's what's giving it the edge.
Is it a very large codebase? Anything else distinctive about it? Are you often asking high-level/conceptual questions? Those are the questions that would help me understand why you might be seeing better results with RAG.
What's the Gemini equivalent of Claude Code and OpenAI's Codex? I've found projects like reugn/gemini-cli, but Gemini Code Assist seems limited to VS Code?
There's Aider, Plandex and Goose, all of which let you choose various providers and models. Aider also has a well-known benchmark[0] that you can check out to help select models.
- Aider - https://aider.chat/ | https://github.com/Aider-AI/aider
- Plandex - https://plandex.ai/ | https://github.com/plandex-ai/plandex
- Goose - https://block.github.io/goose/ | https://github.com/block/goose
[0] https://aider.chat/docs/leaderboards/
I would also like to know — I think people are using Cursor/Windsurf/Roo(Cline) for IDEs that let you pick the model, but I don't know of a CLI agentic editor that lets you use arbitrary models.
Hey, I'm the creator of Plandex (https://github.com/plandex-ai/plandex), which takes a more agentic approach than Aider, and combines models from Anthropic, OpenAI, and Google. You might find it interesting. I did a Show HN for it a few days ago: https://news.ycombinator.com/item?id=43710576
Claude Code works fairly well, but Anthropic has lost the plot on the state of market competition. OpenAI tried to buy Cursor and now Windsurf because they know they need to win market share; Gemini 2.5 Pro is better at coding than their Sonnet models, has huge context and runs on their TPU stack; but somehow Anthropic is expecting people to pay $200 in API costs per functional PR to vibe code. Ok.
The only problem is that this loss is permanent! As far as I can tell, there's no way to go back to the old conversation after a `/clear`.
I had one session last week where Claude Code seemed to have become amazingly capable and was implementing entire new features and fixing bugs in one-shot, and then I ran `/clear` (by accident no less) and it suddenly became very dumb.
You can ask it to store its current context to a file, review the file, ask it to emphasize or de-emphasize things based on your review, and then use `/clear`.
Then, you can edit the file at your leisure if you want to.
And when you want to load that context back in, ask it to read the file.
Works better than `/compact`, and is a lot cheaper.
Edit: It so happens I had a Claude Code session open in my Terminal, so I asked it:
Save your current context to a file.
Claude produced a 91-line md file... surely that's not the whole of its context? This was a reasonably lengthy conversation in which the AI implemented a new feature.
And there's CLAUDE.md. It's like cursorrules. You can also have it modify its own CLAUDE.md.
Yep I learned this the hard way after racking up big bills just using Sonnet 3.7 in my IDE. Gemini is just as good (and not nearly as willing to agree with every dumb thing I say) and it’s way cheaper.
If everyone used the plan to the limit, the plan would cost the same as the API with usage equal to the limit.
If your staff engineers are mostly doing things AI can do, then you don't need staff engineers. Probably don't even need seniors.
1. My company cannot justify this cost at all.
2. My company can justify this cost but I don't find it useful.
3. My company can justify this cost, and I find it useful.
4. I find it useful, and I can justify the cost for personal use.
5. I find it useful, and I cannot justify the cost for personal use.
That aside -- $200/day/dev for a "nice-to-have service that sometimes makes my work slightly faster" is a lot of money in the majority of the world.
I use Aider. It's awesome. You explicitly specify the files. You don't have to do work to limit context.
I find this argument very bizarre. $100 pays for 1-2 hours of developer time. Doesn't it save at least that much time in a whole month?