An update on recent Claude Code quality reports

(anthropic.com)

368 points | by mfiguiere 2 hours ago

70 comments

  • sutterd 1 minute ago
    What kind of performance are people getting now? I was running 4.7 yesterday and it did a remarkably bad job. I recreated my repo state exactly and ran the same starting task with 4.5 (which I have preferred to 4.6). It was even worse, by a large margin. It is likely my task was difficult or poorly posed, but I still have some idea of what 4.5 should have done on it. This was not it. What experiences are other people having with 4.7? How about with other model versions, if they are trying them? (In both cases, I ran on max effort, for whatever that is worth.)
  • 6keZbCECT2uB 2 hours ago
    "On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6"

    This makes no sense to me. I often leave sessions idle for hours or days and use the capability to pick it back up with full context and power.

    The default thinking level seems more forgivable, but given the churn in system prompts, I'll need to figure out how to intentionally choose a refresh cycle.

    • bcherny 1 hour ago
      Hey, Boris from the Claude Code team here.

      Normally, when you have a conversation with Claude Code, if your convo has N messages, then (N-1) messages hit prompt cache -- everything but the latest message.

      The challenge is: when you let a session idle for >1 hour, when you come back to it and send a prompt, it will be a full cache miss, all N messages. We noticed that this corner case led to outsized token costs for users. In an extreme case, if you had 900k tokens in your context window, then idled for an hour, then sent a message, that would be >900k tokens written to cache all at once, which would eat up a significant % of your rate limits, especially for Pro users.
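
      For reference, here is roughly how prompt caching surfaces in the public Messages API -- a minimal sketch with illustrative values, not Claude Code's internal implementation. A cache_control breakpoint marks the stable prefix; while the cache is warm, the next turn reads that prefix instead of reprocessing it:

        import anthropic

        SYSTEM_PROMPT = "...large, stable instructions..."  # placeholder prefix
        history = [{"role": "user", "content": "earlier turn"},
                   {"role": "assistant", "content": "earlier reply"}]

        client = anthropic.Anthropic()
        response = client.messages.create(
            model="claude-sonnet-4-5",  # illustrative model id
            max_tokens=1024,
            system=[{
                "type": "text",
                "text": SYSTEM_PROMPT,
                # everything up to this marker is cached; the default TTL is
                # short (minutes), and the API also offers a longer 1h TTL
                "cache_control": {"type": "ephemeral"},
            }],
            messages=history + [{"role": "user", "content": "next prompt"}],
        )

      Once the TTL lapses, the same call becomes a full cache miss: the entire prefix is re-tokenized and re-written to cache, billed at the cache-write rate -- which is the 900k-token scenario above.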

      We tried a few different approaches to improve this UX:

      1. Educating users on X/social

      2. Adding an in-product tip to recommend running /clear when re-visiting old conversations (we shipped a few iterations of this)

      3. Eliding parts of the context after idle: old tool results, old messages, thinking. Of these, thinking performed the best, and when we shipped it, that's when we unintentionally introduced the bug in the blog post.

      Hope this is helpful. Happy to answer any questions if you have any.

      • dbeardsl 1 hour ago
        I appreciate the reply, but I was never under the impression that gaps in conversations would increase costs or reduce quality. Both are surprising and disappointing.

        I feel like that is a choice best left up to users.

        i.e. "Resuming this conversation with full context will consume X% of your 5-hour usage bucket, but that can be reduced by Y% by dropping old thinking logs"

        • computably 54 minutes ago
          > I was never under the impression that gaps in conversations would increase costs nor reduce quality. Both are surprising and disappointing.

          You didn't do your due diligence on an expensive API. A naïve implementation of an LLM chat is going to have O(N^2) costs from prompting with the entire context every time. Caching is needed to bring that down to O(N), but the cache itself takes resources, so evictions have to happen eventually.
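
          To put toy numbers on it (illustrative figures, not real pricing):

            # Every uncached turn re-sends the whole history, so input
            # tokens grow quadratically; a warm cache makes each turn
            # cost roughly one new message (cache reads are still billed,
            # but at a fraction of the normal input rate).
            TOKENS_PER_MESSAGE = 1_000
            N_TURNS = 100

            uncached = sum(t * TOKENS_PER_MESSAGE for t in range(1, N_TURNS + 1))
            cached = N_TURNS * TOKENS_PER_MESSAGE  # only the newest message is new

            print(f"{uncached:,}")  # 5,050,000 input tokens processed
            print(f"{cached:,}")    # 100,000 -- a ~50x difference by turn 100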

          • solarkraft 10 minutes ago
            I somewhat disagree that this is due diligence. Claude Code abstracts the API, so it should abstract this behavior as well, or educate the user about it.
          • raron 3 minutes ago
            How big is this cached data? Wouldn't it be possible to download it after a few minutes of idling "to suspend the session", and upload and restore it when the user starts their next interaction?
          • doesnt_know 5 minutes ago
            How do you do "due diligence" on an API that frequently makes undocumented changes and only publishes acknowledgements of those changes after users complain?

            You're also talking about internal technical implementations of a chat bot. 99.99% of users won't even understand the words that are being used.

          • someguyiguess 9 minutes ago
            Yes. It’s perfectly reasonable to expect the user to know the intricacies of the caching strategy of their LLM. Totally reasonable expectation.
        • JumpCrisscross 54 minutes ago
          > I was never under the impression that gaps in conversations would increase costs

          The UI could indicate this by showing a timer before context is dumped.

          • karsinkk 49 minutes ago
            Yes!! A UI widget that shows how far along the prompt-cache eviction timeline we are would be great.
      • 8note 1 minute ago
        Reasonably, if I'm in an interactive session, it's going to have breaks of an hour or more.

        What's driving the one-hour cache? Shouldn't people be able to have lunch, then come back and continue?

        Are you expecting Claude Code users to not attend meetings?

        I think, product-wise, you might need a better story on who uses Claude Code, when, and why.

        Same thing with session logs, actually - I know folks who are definitely going to try to write a yearly R&D report and monthly timesheets based on text analysis of their Claude Code session files, and they're going to be incredibly unhappy when they find out it's all been silently deleted.

      • btown 1 hour ago
        Is there a way to say: I am happy to pay a premium (in tokens or extra usage) to make sure that my resumed 1h+ session has all the old thinking?

        I understand you wouldn't want this to be the default, particularly for people who have one giant running session for many topics - and I can only imagine the load involved in full cache misses at scale. But there are other use cases where this thinking is critical - for instance, a session for a large refactor or a devops/operations use case consolidating numerous issue reports and external findings over time, where the periodic thinking was actually critical to how the session evolved.

        For example, if N-4 was a massive dump of some relevant, some irrelevant material (say, investigating for patterns in a massive set of data, but prompted to be concise in output), then N-4's thinking might have been critical to N-2 not getting over-fixated on that dump from N-4. I'd consider it mission-critical, and pay a premium, when resuming an N some hours later to avoid pitfalls just as N-2 avoided those pitfalls.

        Could we have an "ultraresume" that, similar to ultrathink, would let a user indicate they want to watch Return of the (Thin)king: Extended Edition?

        • CjHuber 47 minutes ago
          I think it's crazy that they do this, especially without any notice. I would not have renewed my subscription had I known they had started doing this.

          Especially in the analysis phase, I don't care about the actual text output most of the time; I'm trying to make the model "understand" the topic.

          In that phase the text output itself is worthless - it just serves as an indicator that the context was processed correctly and that future analysis work can depend on it. And they're just throwing most of the relevant stuff out, without any notice, when I resume my session after a few days?

          This is insane. Claude literally became useless to me and I didn't even know it until now, after wasting a lot of my time building up good session context.

          • munk-a 31 minutes ago
            Pointing at their terms of service will definitely be the instantly summoned defense (as it would be for most modern companies), but the fact that SaaS vendors can so suddenly shift the quality of the product delivered under a subscription, without clear notification or explicit re-enrollment, is a legal oversight right now - and Italy actually did recently clamp down on Netflix for doing this[1]. It's hard to define what user expectations of a continuous product are and how companies may have violated them, and for a long time social constructs kept this pretty well in check. As obviously inactive and forgotten subscriptions have become a more significant revenue source for services, though, that agreement has eroded, and the legal system has yet to catch up.

            1. Specifically, this suit was about price increases without clear consideration for both parties - but the same justifications apply to service restrictions without corresponding price decreases.

            https://fortune.com/2026/04/20/italian-court-netflix-refunds...

          • jetbalsa 39 minutes ago
            So, to defend a little: it's a cache, it has to go somewhere - it's a save state of the model's inner workings at the time of the last message. So if it expires, the whole thing has to be processed again. Most people don't understand that without that cache, the ENTIRE history of the conversation is processed again with every message. That conversation's cache might have hit several gigs (it scales with the model's size), and are you expecting them to keep that around for /all/ of the conversations you have had across separate sessions?
            • 3836293648 33 minutes ago
              No? It's not because it's a cache; it's because they're scared of letting you see the thinking trace. If you got the trace, you could just send it back in full when it got evicted from the cache. This is how open-weight models work.
              • eknkc 23 minutes ago
                I'm not familiar with the Claude API, but OpenAI has an encrypted thinking messages option. You get something that you can send back, but it is encrypted. Not available on Anthropic?
              • reactordev 18 minutes ago
                They are sending it back to the cache; the part you are missing is that they are charging you for it.
                • eknkc 9 minutes ago
                  The blog post says they now prune them so as not to charge you. That's the change they implemented.
        • elAhmo 32 minutes ago
          Don't you get that by just resuming the old convo?

          The only issue is that it doesn't hit the cache, so it's expensive if you resume later.

          • eknkc 25 minutes ago
            Not at the moment, apparently. They remove the thinking messages when you continue after 1 hour - that was the whole idea of that change. So the LLM gets all your messages, its responses, etc., but not the thinking parts: why it generated those responses. You get a lobotomised session.
          • tbrockman 23 minutes ago
            Or generate tiny filler messages every hour until you come back to it.
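
            Something like this, untested -- assuming `claude -c -p` still means "continue the latest session in print mode", and accepting that every ping burns a little quota and appends junk to the context:

              import subprocess, time

              PING_INTERVAL_S = 50 * 60  # stay inside a 1-hour idle window

              while True:
                  time.sleep(PING_INTERVAL_S)
                  # any turn re-warms the server-side cache for this session
                  subprocess.run(
                      ["claude", "-c", "-p", "keepalive: reply with one word"],
                      check=False,
                  )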
      • the-grump 2 minutes ago
        That is understandable, but the issue is the sudden drop in quality and the silent surge in token usage.

        It also seems like the warning should be in channel and not on X. If I wanted to find out how broken things are on X, I'd be a Grok user.

      • isaacdl 1 hour ago
        Thanks for giving more information. Just as a comment on (1), a lot of people don't use X/social. That's never going to be a sustainable path to "improve this UX" since it's...not part of the UX of the product.

        It's a little concerning that it's number 1 in your list.

      • ohcmon 16 minutes ago
        Boris, wait, wait, wait.

        Why not use a tiered cache?

        Obviously storage is waaay cheaper than recalculating everything from the very beginning of the session.

        No matter how you put it, this explanation still sounds strange. Hell - you can even store the cache on the client if you must.

        Please, tell me I'm not understanding what is going on...

        otherwise you really need to hire someone to look at this!)

        • solarkraft 1 minute ago
          I assume they are already storing the cache on flash storage instead of keeping it all in VRAM. KV caches are huge - that's why it's impractical to transfer them to/from the client. Handing the cache to the client would also allow figuring out a lot about the underlying model, though I guess you could encrypt it.

          What would be an interesting option would be to let the user pay more for longer caching, but if the base length is 1 hour I assume that would become expensive very quickly.

        • rkuska 8 minutes ago
          I don't think you can store the cache on the client, given the thinking is server-side and you only get summaries in your client (even those are disabled by default).
      • fidrelity 1 hour ago
        Just wanted to say I appreciate your responses here. Engaging so directly with a highly critical audience is a minefield that you're navigating well.

        Thank you.

        • qsort 1 hour ago
          I agree with this.

          I'm writing this message even though I don't have much to add because it's often the case on HN that criticism is vocal and appreciation is silent and I'd like to balance out the sentiment.

          Anthropic has fumbled on many fronts lately but engaging honestly like this is the right thing to do. I trust you'll get back on track.

        • troupo 1 hour ago
          > Engaging so directly with a highly critical audience is a minefield that you're navigating well.

          They spent two months literally gaslighting this "critical audience", insisting this could not be happening and blaming users for using their vibe-coded slop exactly as advertised.

          All the while all the official channels refused to acknowledge any problems.

          Now the dissatisfaction and subscription cancellations have reached a point where they finally had to do something.

        • shimman 1 hour ago
          Very easy to do when you stand to make tens of millions when your employer IPOs. Let's maybe not give too much praise, and employ some critical thinking here.
          • simplify 1 hour ago
            What is the purpose of this mindset? Should we encourage typical corporate coldness instead?
            • sdevonoes 48 minutes ago
              We should encourage minimal dependency on multibillion-dollar tech companies like Anthropic. They and similar companies are just milking us… but since their toys are so shiny, we don't care.
          • hgoel 1 hour ago
            Is "employ some critical thinking" supposed to involve being an annoying uptight cynic?
      • iidsample 1 hour ago
        We at UT-Austin have done some academic work to handle the same challenge. Will be curious whether serving engines could be modified: https://arxiv.org/abs/2412.16434

        The core idea is that we can use user activity at the client to manage KV cache loading and offloading. Happy to chat more!

      • infogulch 10 minutes ago
        How big is the cache? Could you just evict the cache into cheap object storage and retrieve it when resuming? When the user starts the conversation back up show a "Resuming conversation... ⭕" spinner.
      • ryanisnan 46 minutes ago
        Why does the system work like that? Is the cache local, or on Claude's servers?

        Why not store the prompt cache to disk when it goes cold for a certain period of time, and then when a long-lived, cold conversation gets re-initiated, you can re-hydrate the cache from disk. Purge the cached prompts from disk after X days of inactivity, and tell users they cannot resume conversations over X days without burning budget.
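
        The policy itself is simple enough to sketch (purely speculative toy code -- nothing to do with Anthropic's actual serving stack; the cache blobs here are just opaque bytes):

          import time

          HOT_TTL_S = 60 * 60             # demote to disk after 1 hour idle
          PURGE_TTL_S = 7 * 24 * 60 * 60  # drop entirely after 7 idle days

          class TieredKVCache:
              def __init__(self):
                  self.hot = {}   # session id -> (blob, last_used); "in memory"
                  self.cold = {}  # stand-in for disk / object storage

              def put(self, session, blob):
                  self.hot[session] = (blob, time.time())

              def get(self, session):
                  now = time.time()
                  if session in self.hot:
                      blob, _ = self.hot[session]
                  elif session in self.cold:
                      blob, _ = self.cold.pop(session)  # rehydrate: a read, not a recompute
                  else:
                      return None  # true miss: caller must reprocess the prefix
                  self.hot[session] = (blob, now)
                  return blob

              def sweep(self):
                  now = time.time()
                  for s, (blob, ts) in list(self.hot.items()):
                      if now - ts > HOT_TTL_S:
                          self.cold[s] = self.hot.pop(s)  # cheaper tier, kept whole
                  for s, (blob, ts) in list(self.cold.items()):
                      if now - ts > PURGE_TTL_S:
                          del self.cold[s]  # past this point, resuming costs tokens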

        • jetbalsa 41 minutes ago
          The cache is on Anthropic's servers; it's like a freeze-frame of the LLM's inner workings at the time, and the LLM can pick up directly from this save state. As you can guess, this save state has bits of the underlying model - their secret sauce - so it cannot be saved locally...
          • dicethrowaway1 28 minutes ago
            Maybe they could let users store an encrypted copy of the cache? Since the users wouldn't have Anthropic's keys, it wouldn't leak any information about the model (beyond perhaps its number of parameters judging by the size).
            • jetbalsa 24 minutes ago
              I'm unsure of the sizes needed for the prompt cache, but I suspect it's several gigs (a percentage of the model weight size). How would the user upload this every time they resumed an old idle session? Also, are they going to save /every/ session you do this with?
              • im3w1l 16 minutes ago
                A few gigs of disk is not that expensive. Imo they should allocate every paying user (at least) one disk cache slot that never expires. Use it for their most recent long chat (a very short question-answer that could easily be replayed shouldn't evict a long convo).
      • bobkb 24 minutes ago
        Resuming sessions after more than 1 hour is a very common workflow that many teams follow. It would be great if this were treated as expected behaviour and the UX designed around it. Perhaps you are not realising that Claude Code has replaced the shells people were using (i.e. bash is now replaced with a Claude Code session).
      • Joeri 53 minutes ago
        This sounds like one of those problems where the solution is not a UX tweak but an architecture change. Perhaps the prompt cache should be made resumable long-term by storing it to disk before discarding it from memory?
        • kivle 8 minutes ago
          I agree. Maybe parts of the cache contents are business secrets, but then store a server-side-encrypted version on the user's disk so that a session can be resumed without wasting 900k tokens?
      • ceuk 1 hour ago
        Is having massive sessions which sit idle for hours (or days) at a time considered unusual? That's a really, really common scenario for me.

        Two questions if you see this:

        1) if this isn't best practice, what is the best way to preserve highly specific contexts?

        2) does this issue just affect idle sessions or would the cache miss also apply to /resume ?

        • hedgehog 34 minutes ago
          Have the tool maintain a doc, and use either the built-in memory or (I prefer it this way) your own. I've been pretty critical of some other aspects of how Claude Code works but on this one I think they're doing roughly the right thing given how the underlying completion machinery works.

          Edit: If you message me I can share some of my toolchain, it's probably similar to what a lot of other people here use but I've done some polishing recently.

        • jetbalsa 36 minutes ago
          The cache is stored on Anthropic's servers, since it's a save state of the LLM's internal state at the time of processing, and it's several gigs in size. Every SINGLE TIME you send a message and it's a cache miss, the entire conversation has to be reprocessed, eating up tons of tokens in the process.
      • saadn92 53 minutes ago
        I leave sessions idle for hours constantly - that's my primary workflow. If resuming a 900k context session eats my rate limit, fine, show me the cost and let me decide whether to /clear or push through. You already show a banner suggesting /clear at high context - just do the same thing here instead of silently lobotomizing the model.
        • sdevonoes 46 minutes ago
          So if they fuck it up again and now have, let's say, "db problems" instead of "caching problems", you would happily just pay more? Wtf
          • saadn92 9 minutes ago
            No, I wouldn't. I'd like some transparency at least.
      • mtilsted 13 minutes ago
        Then you need to update your documentation, and teach Claude to read the new documentation, because here is what Claude Code answered:

        Question: Hey Claude, if we have a conversation and then I take a break, does it change the expected output of my next answer if there are 2 hours between the previous message and the next one?

        Answer: No. A 2-hour gap doesn't change my output. I have no internal clock between messages — I only see the conversation content plus the currentDate context injected each turn. The prompt cache may expire (5 min TTL), which affects cost/latency but not the response itself.

          The only things that can change output across a break: new context injected (like updated date), memory files being modified, or files on disk changing.

        -- This answer directly contradicts your post. It seems like the biggest problem is a total lack of documentation of expected behavior.

        A similar thing happens if I ask Claude Code about the difference between plan mode and accept-edits mode.

        Claude told me the only difference was that in plan mode it would ask for permission before doing edits. But I really don't think this is true. It seems like plan mode does a lot more work, and presents it in a totally different way. It is not just an "I will ask before applying changes" mode.

      • growt 26 minutes ago
        Wasn't the cache time reduced to 5 minutes? Or is that just some users' interpretation of the bug?
      • nextaccountic 38 minutes ago
        What about selling long-term cache space to users?

        Or even: let the user control the cache expiry on a per-request basis, with a /cache command.

        That way they decide if they want to drop the cache right away, or extend it for 20 hours, etc.

        It would cost tokens even though the underlying resource is memory/SSD space, not compute.

      • troupo 1 hour ago
        > We tried a few different approaches to improve this UX: 1. Educating users on X/social

        No. You had random developers tweeting and replying at random times to random users, while all of your official channels were completely silent - including channels for people who are not terminally online on X.

      • gverrilla 1 hour ago
        I drop sessions very frequently to resume later - that's my main workflow, given how slow Claude is. Is there anything I can do to avoid this cache problem?
      • frumplestlatz 54 minutes ago
        The entire reason I keep a long-lived session around is because the context is hard-won - in terms of tokens and my time.

        Silently degrading intelligence ought to be something you never do, but especially not for use-cases like this.

        I'm looking back at my past few weeks of work and realizing that these few regressions wasted tens of hours of my time, and hundreds of dollars in extra usage fees. I ran out of my entire weekly quota four days ago, and had to pause the personal project I was working on.

        I was running the exact same pipeline I’ve run repeatedly before, on the same models, and yet this time I somehow ate a week’s worth of quota in less than 24h. I spent $400 just to finish the pipeline pass that got stuck halfway through.

        I'm sorry to be harsh, but your engineering culture must change. There are some types of software you can yolo. This isn't one of them. The downstream cost of stupid mistakes is way, way too high, and far too many entirely avoidable bugs - and poor design choices - are shipping to customers far too often.

    • tadfisher 1 hour ago
      It astounds me that a company valued in the hundreds-of-billions-of-dollars has written this. One of the following must be true:

      1. They actually believed latency reduction was worth compromising output quality for sessions that have already been long idle. Moreover, they thought doing so was better than showing a loading indicator or some other means of communicating to the user that context is being loaded.

      2. What I suspect actually happened: they wanted to cost-reduce idle sessions to the bare minimum, and "latency" is a convenient-enough excuse to pass muster in a blog post explaining a resulting bug.

      • someguyiguess 8 minutes ago
        It’s definitely a cost / resource saving strategy on their end.
      • retinaros 1 hour ago
        They just vibecoded a fix and didn't think about the tradeoff they were making, and their always-yes-man of a model just went with it.
    • seizethecheese 1 hour ago
      It's also a bit of a fishy explanation for purging tokens older than an hour, which happens to also be their cache limit. I doubt it is coincidental that this change also dramatically drops their cost.
  • bityard 1 hour ago
    My hypothesis is that some of this is a perceived quality drop due to "luck of the draw" when it comes to the non-deterministic nature of LLM output.

    A couple weeks ago, I wanted Claude to write a low-stakes personal productivity app for me. I wrote an essay describing how I wanted it to behave and I told Claude pretty much, "Write an implementation plan for this." The first iteration was _beautiful_ and was everything I had hoped for, except for a part that went in a different direction than I was intending because I was too ambiguous in how to go about it.

    I corrected that ambiguity in my essay but instead of having Claude fix the existing implementation plan, I redid it from scratch in a new chat because I wanted to see if it would write more or less the same thing as before. It did not--in fact, the output was FAR worse even though I didn't change any model settings. The next two burned down, fell over, and then sank into the swamp but the fourth one was (finally) very much on par with the first.

    I'm taking from this that it's often okay (and probably good) to simply have Claude re-do tasks to get a higher-quality output. Of course, if you're paying for your own tokens, that might get expensive in a hurry...

    • gilrain 1 hour ago
      > My hypothesis is that some of this is a perceived quality drop due to "luck of the draw" when it comes to the non-deterministic nature of LLM output.

      I think you must have learned that they’re more nondeterministic than you had thought, but then wrongly connected your new understanding to the recent model degradation. Note: they’ve been nondeterministic the whole time, while the widely-reported degradation is recent.

      • bityard 38 minutes ago
        Er, no, I am fully aware that LLMs have always been non-deterministic.
        • gilrain 31 minutes ago
          Your argument seems to be that a statistically improbable number of people all experienced randomly poor outputs, leading to a mere misperception of model degradation… but this is not supported by reality, in which a different cause was found, so I was trying to connect your dots.
      • pydry 54 minutes ago
        I wonder how well the "good" versions worked if you threw awkward edge cases at it.
  • podnami 1 hour ago
    They lost me at Opus 4.7

    Anecdotally, OpenAI is trying tooth and nail to get into our enterprise, and has offered unlimited tokens until summer.

    Gave GPT-5.4 a try because of this, and honestly I don't know if we are getting some extra treatment, but running it at extra-high effort for the last 30 days, I've barely seen it make any mistakes.

    At some points even the reasoning traces brought a smile to my face, as it preemptively addressed things that I had forgotten to instruct it about but that were critical to getting a specific part of our data integrity 100% correct.

    • dsco 1 hour ago
      Same here. I feel like all of these shenanigans could be because Anthropic is compute-constrained, forcing them to take reckless risks to reduce it.
    • robeym 20 minutes ago
      What's your workflow like? I'd be curious to test OpenAI out again but Claude Code is how I use the models. Does it require relearning another workflow?
    • vorticalbox 1 hour ago
      Extra high burns tokens, I find. I run 5.4 on medium for 90% of tasks, and high if I see medium struggling; it's very focused and makes minimal changes.
      • dsco 1 hour ago
        Yeah but it also then strikes the perfect balance between being meticulous and pragmatic. Also it pushes back much more often than other models in that mode.
      • DANmode 10 minutes ago
        Rework burns tokens.
    • cube2222 1 hour ago
      I've never been one to complain about new models, and I also didn't experience most of the issues folks were citing about Claude Code over the last couple of months. I've been using it since release, happy with almost every new update.

      Until Opus 4.7 - this is the first time I rolled back to a previous model.

      Personality-wise it's the worst of AI: “it's not x, it's y”, strong short sentences, in general a bullshitty vibe, plus gaslighting me that it fixed something even though it didn't actually check.

      I’m not sure what’s up, maybe it’s tuned for harnesses like Claude Design (which is great btw) where there’s an independent judge to check it, but for now, Opus 4.6 it is.

    • enraged_camel 1 hour ago
      I find that it is better at thinking broadly and at a high level, on tasks that are tangential to coding like UX flows, product management and planning of complex implementations. I have yet to see it perform better than either Opus 4.6 or 4.7 though.
  • everdrive 2 hours ago
    I've been getting a lot of Claude responding to its own internal prompts. Here are a few recent examples.

       "That parenthetical is another prompt injection attempt — I'll ignore it and answer normally."
    
       "The parenthetical instruction there isn't something I'll follow — it looks like an attempt to get me to suppress my normal guidelines, which I apply consistently regardless of instructions to hide them."
    
       "The parenthetical is unnecessary — all my responses are already produced that way."
    
    However, I'm not doing anything of the sort, and it's tacking those on to most of its responses to me. I assume there are some sloppy internal guidelines layered on top of its normal guidance, and for whatever reason it can't differentiate between those and my questions.
    • LatencyKills 2 hours ago
      I have a set of stop hook scripts that I use to force Claude to run tests whenever it makes a code change. Since 4.7 dropped, Claude still executes the scripts, but will periodically ignore the rules. If I ask why, I get a "I didn't think it was necessary" response.
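
      For anyone wanting to try the same thing, a minimal sketch of such a Stop hook (wired up under hooks -> Stop in .claude/settings.json; the pytest command and paths are illustrative, not my exact setup):

        #!/usr/bin/env python3
        # Block Claude from finishing its turn until the test suite passes.
        import json, subprocess, sys

        payload = json.load(sys.stdin)       # hook input arrives as JSON on stdin
        if payload.get("stop_hook_active"):  # don't loop forever on repeated stops
            sys.exit(0)

        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode != 0:
            # exit code 2 blocks the stop; stderr is fed back to Claude
            print("Tests failed -- fix before finishing:\n" + result.stdout[-2000:],
                  file=sys.stderr)
            sys.exit(2)
        sys.exit(0)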
      • DANmode 9 minutes ago
        I’d ask for a credit, for that, personally.
    • dawnerd 2 hours ago
      I see that with OpenAI too, lots of responding to itself. Seems like a convenient way for them to churn tokens.
      • ngruhn 9 minutes ago
        All the labs are in a cut throat race, with zero customer loyalty. As if they would intentionally degrade quality/speed for a petty cash grab.
      • y1n0 1 hour ago
        None of these companies have compute to spare. It's not in their interest to use more tokens than necessary.
        • parliament32 33 minutes ago
          Sure it is. They're well aware their product is a money furnace and that they'd have to charge users a few orders of magnitude more just to break even, which is obviously not an option. So all that's left is: convince users to burn tokens harder, so graphs go up, so they can bamboozle more investors into keeping the ship afloat a bit longer.
          • WarmWash 8 minutes ago
            It's an option and they are going to do it. Chinese models will be banned and the labs will happily go dollar for dollar in plan price increases. $20 plans won't go away, but usage limits and model access will drive people to $40-$60-$80 plans.

            At cell phone plan adoption levels, and cell phone plan costs, the labs are looking at 5-10yr ROI.

        • boringg 1 hour ago
          Not true - they absolutely want to goose demand as they continue to burn investor dollars and deploy infra at scale.

          If that demand even slows down in the slightest, the whole bubble collapses.

          Growth + Demand >> efficiency or $ spend at their current stage. Efficiency is a mature company/industry game.

        • deckar01 23 minutes ago
          You don’t have to use compute to pad the token count.
        • dawnerd 1 hour ago
          That doesn't mean they also can't be wasteful. Fact is, Claude and GPT do far more internal thinking about their system prompts than is needed. At every step they mention something about making sure they do xyz and don't do whatever. Why does it need to say things to itself like “great, I have a plan now!” - that's pure waste.
          • empthought 12 minutes ago
            > Why does it need to say things to itself like “great I have a plan now!”

            How else would it know whether it has a plan now?

        • malfist 1 hour ago
          Are you saying these companies don't want to sell more product to us? Because that's the logical extension of your argument.
          • keeda 25 minutes ago
            No, the argument is they want to sell more product to more people, not just more product (to the same people.) Given that a lot of their income is from flat-rate subscriptions, they make money with more people burning tokens rather than just burning more tokens.

            After all, "the first hit's free" model doesn't apply to repeat customers ;-)

      • grey-area 1 hour ago
        A simpler explanation (esp. given the code we've seen from Claude) is that they are vibecoding their own tools and moving fast and breaking things, with predictably sloppy results.
      • OtomotO 2 hours ago
        This, so much this!

        Pay-by-token, while token usage is totally opaque, is a super convenient money-printing machine.

    • gs17 1 hour ago
      In Claude Code specifically, for a while it had developed a nervous tic where it would say "Not malware." before every bit of code. Likely a similar issue, where it keeps talking to a system/tool prompt.
      • Retr0id 27 minutes ago
        My pet theory is that they have a "supervisor" model (likely a small one) that terminates any chats that do malware-y things, and this is likely a reward-hacking behaviour to stop the supervisor from terminating the chat.
    • rafram 1 hour ago
      Check that you’re running the latest version.
  • arkariarn 1 hour ago
    I see some Anthropic Claude Code people are reading the comments. A day or two ago I watched a video by Theo (t3.gg) on whether Claude got dumber. Even though he was really harsh on Anthropic and said some mean stuff, I thought some of the points he raised about Claude Code were quite apt, especially when it comes to the harness bloat. I really hope the new features stop now and there is a real, hard push for polish and optimization. Otherwise I think a lot of people will start exploring less bloated, more optimized alternatives. Focus on making the harness better and less token-consuming.

    https://youtu.be/KFisvc-AMII?is=NskPZ21BAe6eyGTh

    • Retr0id 31 minutes ago
      Everything else aside, their brief "experiment" with removing CC support from the Pro plan got me seriously considering other options. I've been wary of vendor lock-in the whole time, but it was a useful reminder. (opencode+openrouter will probably be my first port of call)
      • wilj 18 minutes ago
        I'm 3 weeks into switching from CC to OpenCode, and in some ways it is far superior to CC right out of the box, and I've maybe burned $200 in tokens to make a private fork that is my ultimate development and personal agent platform. Totally worth it.

        Still use CC at work because team standards, but I'd take my OpenCode stack over it any day.

    • lanthissa 59 minutes ago
      Never, ever forget Theo's GPT-5 hype video, and him then having to walk it back.

      It's very clear that there's money or influence changing hands behind the scenes between certain content creators, The Information, and OpenAI.

    • whalesalad 1 hour ago
      literally just `git reset --hard <random hash from 3 months ago>` would fix this
      • willis936 28 minutes ago
        That implies it's broken. Juicing revenue and slashing opex at the expense of brand and customer retention is the feature.
  • bauerd 1 hour ago
    >On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode

    Instead of fixing the UI they lowered the default reasoning effort parameter from high to medium? And they "traced this back" because they "take reports about degradation very seriously"? Extremely hard to give them the benefit of doubt here.

    • bcherny 1 hour ago
      Hey, Boris from the team here.

      We did both -- we did a number of UI iterations (e.g. improving thinking loading states, making it more clear how many tokens are being downloaded, etc.). But we also reduced the default effort level after evals and dogfooding. The latter was not the right decision, so we rolled it back after finding that the UX iterations were insufficient (people didn't realize they could use /effort to increase intelligence, and often stuck with the default -- we should have anticipated this).

  • psubocz 4 minutes ago
    > All three issues have now been resolved as of April 20 (v2.1.116).

    The latest in Homebrew is 2.1.108, so not fixed, and I don't see Opus 4.7 in the models list... Is Homebrew a second-class citizen, or am I in the B group?

  • karsinkk 52 minutes ago
    " Combined with this only happening in a corner case (stale sessions) and the difficulty of reproducing the issue, it took us over a week to discover and confirm the root cause"

    I don't know about others, but sessions that are idle > 1h are definitely not a corner case for me. I use Claude code for personal work and most of the time, I'm making it do a task which could say take ~10 to 15mins. Note that I spend a lot of time back and forth with the model planning this task first before I ask it to execute it. Once the execution starts, I usually step away for a coffee break (or) switch to Codex to work on some other project - follow similar planning and execution with it. There are very high chances that it takes me > 1h to come back to Claude.

    • o10449366 26 minutes ago
      Yeah, and that statement also speaks poorly of their test rigor, if they make a change that big without thoroughly testing the edge case they're modifying.
  • nickdothutton 1 hour ago
    I presume they don't yet have a cohesive monetization strategy, and this is why there is such huge variability in results on a weekly basis. It appears that Anthropic are skipping from one "experiment" to another. As users we only get to see the visible part (the results). Can't design a UI that indicates the software is thinking vs frozen? Does anyone actually believe that?
  • puppystench 1 hour ago
    The Claude UI still only has "adaptive" reasoning for Opus 4.7, making it functionally useless for scientific/coding work compared to older models (as Opus 4.7 will randomly stop reasoning after a few turns, even when prompted otherwise). There's no way this is just a bug and not a choice to save tokens.
    • mattew 10 minutes ago
      It was odd that there was no mention of the forced adaptive reasoning in the article. My guess is they don't have enough compute to do anything else here.
  • Robdel12 2 hours ago
    Wow, bad enough for them to actually publish something and not cryptic tweets from employees.

    Damage is done for me though. Even just one of these things (messing with adaptive thinking) is enough for me to not trust them anymore. And then their A/B testing this week on pricing.

    • saghm 2 hours ago
      The A/B testing is by far the most objectionable thing from them so far in my opinion, if only because of how terrible it would be for something like that to be standard for subscriptions. I'd argue that it's not even A/B testing of pricing but silently giving a subset of users an entirely different product than they signed up for; it would be like if 2% of Netflix customers had full-screen ads pop up and cover the videos randomly throughout a show. Historically the only thing stopping companies from extraordinarily user-hostile decisions has been public outcry, but limiting it to a small subset of users seems like it's intentionally designed to try to limit the PR consequences.
      • lifthrasiir 1 hour ago
        The best possible reading I can imagine is that Anthropic just wanted to measure how much value Claude Code has for Pro users and didn't mean to change the plan itself (so those users would get CC as a "bonus"), but even that alone is questionable to start with.
    • mannanj 2 hours ago
      so who do you trust and go to? (NotClearlySo)OpenAI?
      • parliament32 4 minutes ago
        Self-hosted models are the one true path.
      • carlgreene 1 hour ago
        I "subconsciously" moved to codex back in mid Feb from CC and it's been so freaking awesome. I don't think it's as good at UI, but man is it thorough and able to gather the right context to find solutions.

        I use "subconsciously" in quotes because I don't remember exactly why I did it, but it aligns with the degradation of their service so it feels like that probably has something to do with it even though I didn't realize it at the time.

        • GenerWork 1 hour ago
          Anthropic definitely takes the cake when it comes to UI related activities (pulling in and properly applying Figma elements, understanding UI related prompts and properly executing on it, etc), and I say this as a designer with a personal Codex subscription.
        • snissn 1 hour ago
          It's been frustrating how bad it is at UI. I'm starting to test using their image2 for UI and then handing the images to Codex to build out into code, and I'm impressed and relieved so far.
        • cmrdporcupine 22 minutes ago
          Codex isn't great at UI, but you might find Gemini is competent enough as an adjunct. I've had some luck with that.
      • simlevesque 2 hours ago
        I went with MiniMax. The token plans are over what I currently need, 4500 messages per 5h, 45000 messages per week for 40$. I can run multiple agents and they don't think for 5-10 minutes like Sonnet did. Also I can finally see the thinking process while Anthropic chose to hide it all from me.

        I'm using Zed and Claude Code as my harnesses.

      • Robdel12 2 hours ago
        At the moment, yeah. If Google ever figures out how to build an agentic model, I would use them as well.

        However you feel about OpenAI, at least their harness is actually open source and they don’t send lawyers after oss projects like opencode

        • IncreasePosts 35 minutes ago
          Is Gemini cli not an agentic model? Or are you just saying it's built poorly? Gemini 2.5 didn't really work for me but Gemini 3 seems fairly solid
          • cmrdporcupine 21 minutes ago
            Gemini fares poorly at tool use, even in its own CLI and even in Antigravity. It gets into a mess just editing source files. It's tragic, because it's actually not a bad model otherwise.
      • bensyverson 2 hours ago
        Anecdotally, I know many people who have supplemented Claude with Codex, and are experimenting with models such as GLM 5.1, Kimi, Qwen, etc.
      • irthomasthomas 1 hour ago
        I like chutes because they always use the full weights, and prompts are encrypted with TEE.
  • lukebechtel 1 hour ago
    Some people seem to be suggesting these are coverups for quantization...

    Those who work on agent harnesses for a living realize how sensitive models can be to even minor changes in the prompt.

    I would not suspect quantization before I would suspect harness changes.

  • cedws 1 hour ago
    >On April 16, we added a system prompt instruction to reduce verbosity

    In practice I understand this would be difficult but I feel like the system prompt should be versioned alongside the model. Changing the system prompt out from underneath users when you've published benchmarks using an older system prompt feels deceptive.

    At least tell users when the system prompt has changed.

    • elAhmo 25 minutes ago
      It's also kinda funny that they have to rely on the system prompt to control verbosity.
  • dataviz1000 2 hours ago
    This is the problem with co-opting the word "harness". What agents need is a test harness, but that doesn't mean much in the AI world.

    Agents are not deterministic; they are probabilistic. If the same agent is run repeatedly, it will accomplish the task a consistent percentage of the time. I wish I were better at math or English so I could explain this.

    I think they call these evals, but developers don't discuss that much. All they discuss is how frustrated they are.

    A prompt can solve a problem 80% of the time. Change a sentence and it will solve the same problem 90% of the time. Remove a sentence and it will solve the problem 70% of the time.

    It is so friggen' easy to set up -- stealing the word back from the AI sphere -- a TEST HARNESS.

    Regressions caused by changes to the agent, where words are added, changed, or removed, are extremely easy to quantify. It isn't pass/fail. It's whether the agent still solves the problem at the same percentage of the time it consistently has.
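
    A minimal sketch of what I mean (run_agent is a stub you'd wire to your own agent invocation plus a programmatic check):

      def run_agent(prompt: str) -> bool:
          """Run the agent once; return True if it solved the task."""
          raise NotImplementedError  # replace with a real invocation + check

      def pass_rate(prompt: str, n: int = 30) -> float:
          """Estimate how often this prompt solves the task."""
          return sum(run_agent(prompt) for _ in range(n)) / n

      # Compare prompt changes by solve rate, not pass/fail:
      #   pass_rate(PROMPT_V1)  # e.g. 0.80
      #   pass_rate(PROMPT_V2)  # e.g. 0.90 -> keep; 0.70 -> revert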

    • arjie 1 hour ago
      The word is not co-opted. A harness is just supportive scaffolding to run something. A test harness is scaffolding to run tests against software, a fuzz harness is scaffolding to run a fuzzer against the software, and so on. I've seen it being used in this manner many times over the past 15 years. It's the device that wraps your software so you can run it repeatedly with modifications of parameters, source code, or test condition.
      • dataviz1000 55 minutes ago
        > A harness is just supportive scaffolding to run something.

        Thank you for the perfect explanation.

        Last week, in my confusion about the word -- because Anthropic was using test, eval, and harness in the same sentence, so I thought Anthropic had made a test harness -- I asked Google "in computer science, what is a harness?". It responded only discussing test harnesses, which solidified my thinking that that's what it was.

        I wish Google had responded as clearly as you did. In my defense, we don't know whether we understand something until we discuss it.

    • thesz 1 hour ago
      To have some confidence in the consistency of results (a meaningful p-value), one has to start with a cohort of around 30 runs, if I remember correctly. That is a 1.5 order-of-magnitude increase in the computing power needed to find (or rule out) consistent changes in an agent's behavior.
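
      To make the cohort-size point concrete, here is the 95% Wilson interval for an observed pass rate (a rough sketch, not a rigorous power analysis):

        from math import sqrt

        def wilson_95(successes, n):
            z = 1.96  # 95% confidence
            p = successes / n
            denom = 1 + z * z / n
            center = (p + z * z / (2 * n)) / denom
            half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
            return center - half, center + half

        print(wilson_95(24, 30))    # 80% observed -> roughly (0.63, 0.91)
        print(wilson_95(240, 300))  # same rate, 10x the runs -> (0.75, 0.84)

      At n = 30, an observed 80% pass rate is statistically indistinguishable from 70% or 90% - exactly the size of effect that prompt tweaks produce.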
      • dataviz1000 45 minutes ago
        I apologize for the potato quality of these links; however, I have been working tirelessly to wrap my head around how agents and LLM models work. They are more than just a black box.

        The first tries to answer what happens when I give the models harder and harder arithmetic problems, to the point that Sonnet will burn 200k tokens over 20 minutes. [0]

        The other is a very deep dive into the math of a reasoning model, in the only way I could think to approach it: with data visualizations, seeing the computation of the model in real time in relation to all its parts. [1]

        Two things I've learned. First, the behavior of an agent that reverse-engineers a website and the behavior of an agent that does arithmetic are the same: for a given agent and task, the probability that either solves its intended task follows a distribution. Second, models have a blind spot: a red-team adversary bug-hunter agent will not surface a bug if the same model originally wrote the code.

        Understanding that, and knowing that I can verify at the end or use majority voting (MoV), I can use agents to automate extremely complicated tasks very reliably, with a quantifiable amount of certainty.

        [0] https://adamsohn.com/reliably-incorrect/

        [1] https://adamsohn.com/grpo/

  • MillionOClock 2 hours ago
    I see the Claude team wanted to make it less verbose, but that's actually something that has bothered me since updating to Claude 4.7: what is the recommended way to change it back to being as verbose as before? This is probably a matter of preference, but I have a harder time with compact explanations and lists of bullet points, and the verbosity was originally one of the things I preferred about Claude.
  • jameson 1 hour ago
    > "In combination with other prompt changes, it hurt coding quality, and was reverted on April 20"

    Do researchers know the correlation between various aspects of a prompt and the response?

    An LLM, to me at least, appears to be a wildly random function that is difficult to rely on. Traditional systems have structured inputs and outputs, and we can know how a system arrived at its output. This doesn't appear to be the case for LLMs, where inputs and outputs can be any text.

    Anecdotally, I had a difficult time working with open-source models at a social media firm: something as simple as wrapping the example JSON structure in ```, adding a newline, or changing the wording I used wildly changed accuracy.

  • kristianc 18 minutes ago
    To think we'd have known about this in advance if they'd just open-sourced Claude Code, rather than being forced into this embarrassing post-mortem. Sunlight is the best disinfectant.
  • rfc_1149 39 minutes ago
    The third bug is the one worth dwelling on. Dropping thinking blocks every turn instead of just once is the kind of regression that only shows up in production traffic. A unit test for "idle-threshold clearing" would assert "was thinking cleared after an hour of idle" (yes) without asserting "is thinking preserved on subsequent turns" (no). The invariant is negative space.

    The real lesson is that an internal message-queuing experiment masked the symptoms in their own dogfooding. Dogfooding only works when the eaten food is the shipped food.

  • jpcompartir 1 hour ago
    Anthropic releases used to feel thorough and well done, with the models feeling immaculately polished. It felt like using a premium product, and it never felt like they were racing to keep up with the news cycle, or reply to competitors.

    Recently that immaculately polished feel is harder to find. It coincides with the daily releases of CC and the desktop app, and the unknown/undocumented changes to the various harnesses used in CC/Cowork. I find it an unwelcome shift.

    I still think they're the best option on the market, but the delta isn't as high as it was. Sometimes slowing down is the way to move faster.

    • bcherny 1 hour ago
      Boris from the Claude Code team here. We agree, and will be spending the next few weeks increasing our investment in polish, quality, and reliability. Please keep the feedback coming.
      • wilj 13 minutes ago
        My biggest problem with CC as a harness is that I can't trust Plan mode. Long-running sessions frequently start bypassing plan mode and executing - updating files and so on - without permission, while still in plan mode. And the only recovery seems to be to quit and reload CC.

        Right now my solution is to run CC in tmux and keep a second CC pane with /loop watching the first pane, killing CC if it detects plan mode being bypassed. Burning tokens to work around a bug.

      • batshit_beaver 1 hour ago
        > investment in polish, quality, and reliability

        For there to be any trust in the above, the tool needs to behave predictably day to day. It shouldn't be possible to open your laptop and find that Claude suddenly has an IQ 50 points lower than yesterday. I'm not sure how you can achieve predictability while keeping inference costs in check and messing with quantization, prompts, etc on the backend.

        Maybe a better approach might be to version both the models and the system prompts, but frequently adjust the pricing of a given combination based on token efficiency, to encourage users to switch to cheaper modes on their own. Let users choose how much they pay for given quality of output though.

      • pkos98 1 hour ago
        Sure. I've cancelled my Max 20 subscription because you guys prioritize cutting your costs and increasing token efficiency over model performance. I use expensive frontier labs to get the absolute best performance; otherwise I'd use an open-source/Chinese one.

        Frontier LLMs still suck a lot, you can't afford planned degradation yet.

      • jpcompartir 1 hour ago
        Thanks, I have a lot of trust in and admiration for the team & respect for the work you guys have done and continue to do.
      • a-dub 1 hour ago
        Hm. ML people love static evals and such, but have you considered approaches that typically appear in SaaS: slow rollouts, org/user-constrained testing pools with staged rollouts, and real-world feedback from actual usage data (where privacy policy permits)?
      • szmarczak 1 hour ago
        Why ban third party wrappers? All of this could've been sidestepped had you not banned them.
        • ElFitz 1 hour ago
          Because then they lose vertical integration and the extra ability it grants to tune settings to reduce costs / token use / response time for subscription users.

          Or improve performance and efficiency, if we’re generous and give them the benefit of the doubt.

          It makes sense, in a way. It means the subscription deal is something along the lines of fixed / predictable price in exchange for Anthropic controlling usage patterns, scheduling, throttling (quotas consumptions), defaults, and effective workload shape (system prompt, caching) in whatever way best optimises the system for them (or us if, again, we’re feeling generous) / makes the deal sustainable for them.

          It’s a trade-off

          • cmrdporcupine 15 minutes ago
            They gained that ability to tune settings and then promptly used it in a poor way and degraded customer experience.
          • szmarczak 33 minutes ago
            Nothing you wrote makes sense. The limits exist so Anthropic isn't operating at a loss. If they can tune Claude through Claude Code, I see no reason why they couldn't do so through other wrappers. Other wrappers can also make use of the cache.

            If you worry about a "degraded" experience, then let people choose. People won't use other wrappers if those turn out to be bad. People ain't stupid.

      • troupo 1 hour ago
        And you didn't invest anything in polish, quality, and reliability before... why? Because for any question people had, you'd reply something like "I have Claude working on this right now", with no idea what's happening in the code?

        A reminder: your vibe-coded slop required a peak of 68GB of RAM, and you had to hire actual engineers to fix it.

        • cmrdporcupine 13 minutes ago
          I think you're being a bit harsh.

          ... But then again, many of us are paying out of pocket $100, $200USD a month.

          Far more than any other development tools.

          Services that cost that much money generally come with expectations.

      • ankaz 1 hour ago
        [dead]
    • KronisLV 1 hour ago
      > It felt like using a premium product, and it never felt like they were racing to keep up with the news cycle, or reply to competitors.

      I don't know, their desktop app felt really laggy and even switching Code sessions took a few seconds of nothing happening. Since the latest redesign, however, it's way better, snappy and just more usable in most respects.

      I just think we notice the negative, disruptive things more. Even with the desktop app, the remaining flaws jump out: for example, the Chat / Cowork / Code switcher only shows the label for the currently selected mode while the others are (not very big) icons; a colleague literally didn't notice those modes exist in the desktop app, or at least that that's where you switch between them.

    • spaniard89277 1 hour ago
      Given the price, I don't really think they're the best option. They're sloppy and competitors are catching up. I'm getting the same results with other models, and very close results with Kimi, which is waaay cheaper.
    • kilroy123 44 minutes ago
      I agree. It all feels so AI-slopy now.
    • OtomotO 1 hour ago
      I guess it's a bit of desperation in the search for a sustainable business model.

      The AI hype is dying, at least outside the Silicon Valley bubble, of which Hacker News is very much a part.

      That, and all the dogfooding by slop-coding their user-facing application(s).

  • foota 2 hours ago
    > On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality, and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.

    Claude caveman in the system prompt confirmed?

    • awesome_dude 2 hours ago
      I've recently been introduced to that plugin, love it for humour
  • xlayn 2 hours ago
    If Anthropic is doing this as a result of "optimizations", they need to stop doing that and raise the price. Also, there should be a way to test a model and validate that it answers exactly the same each time. I have experienced this twice... when a new model is about to come out... the quality of the top dog starts going down... and bam... the new model is so good... like the previous one was 3 months ago.

    The other thing: when Anthropic turns on lazy Claude (I want to coin the term Claudez here for the version of Claude that's lazy... Claude zzZZzz = Claudez), that thing is terrible... you ask the model for something... and it's like... oh yes, that will probably depend on memory bandwidth... do you want me to search that?...

    YES... DO IT... FRICKING MACHINE..

    • joshstrange 1 hour ago
      It's incredibly frustrating when I've spelled out in CLAUDE.md that it should SSH to my dev server to investigate the things I ask about, and it regularly stops working with a message something like:

      > Next steps are to run `cat /path/to/file` to see what the contents are

      Makes me want to pull my hair out. I've specifically told it to do all the read-only operations it wants on this dev server, yet it keeps forgetting and asking me to do something it can do just fine (proven by it doing exactly that after I "remind" it).

      That and "Auto" mode really are grinding my gears recently. Now, after a Planing session my only option is to use Auto mode and I have to manually change it back to "Dangerously skip permissions". I think these are related since the times I've let it run on "Auto" mode is when it gives up/gets stuck more often.

      Just the other day it was in Auto mode (by accident) and I told it:

      > SSH out to this dev server, run `service my_service_name restart` and make sure there are no orphans (I was working on a new service and the start/stop scripts). If there are orphans, clean them up, make more changes to the start/stop scripts, and try again.

      And it got stuck in some loop/dead-end, telling me I should do it myself because it didn't want to run commands on a "Shared Dev server" (even though I had specifically told it that this was not a shared server).

      The fact that Auto mode burns more tokens _and_ is so dumb is really a kick in the pants.

    • marcyb5st 1 hour ago
      Apart from Anthropic, nobody knows how much the average user costs them. However, the consensus is "much more than what they pay".

      If they have to raise prices to stop hemorrhaging money, would you be willing to pay 1000 bucks a month for a Max plan? Or $100 per 1M output tokens? (Playing Numberwang here, but the point stands.)

      If I had to guess, they are trying to get their balance sheet in order for an IPO, and they basically have 3 ways of achieving that:

      1. Raising prices like you said, but the user drop could be catastrophic for the IPO itself and so they won't do that

      2. Dumb the models down (basically decreasing their cost per token)

      3. Send fewer tokens (i.e. capping thinking budgets aggressively).

      2 and 3 are palatable because, even if they annoy the technical crowd, investors still see a big number of active users with a positive margin on each.

    • dgellow 1 hour ago
      I would love it if agents acted far more like tools/machines and did NOT try to act as if they were human.
    • Keeeeeeeks 2 hours ago
      https://marginlab.ai/ (no affiliation)

      There are a number of projects working on evals that can check how 'smart' a model is, but the methodology is tricky.

      One would want to run the exact same prompt every day, at different times of day. But if the eval prompt(s) are complex, the frontier lab could have a 'meta-cognitive' layer that looks for repetitive prompts and either: a) feeds the model a pre-written output to give to the user, or b) dumbs down the output for that specific prompt.

      Both cases defeat the purpose in different ways and make a consistent gauge difficult. And it would make sense for them to do that, since you're 'wasting' compute compared to the new prompts others are writing.

      • hex4def6 1 hour ago
        I think you could alter the prompt in subtle ways: a period becomes an ellipsis, extra commas, synonyms, occasional double spaces, etc.

        Enough that the prompt is different at a token-level, but not enough that the meaning changes.

        It would be very difficult for them to catch that, especially if the prompts were not made public.

        Run the variations enough times per day, and you'd get some statistical significance.

        I guess the fuzzy part is judging the output.

  • ctoth 1 hour ago
    > As of April 23, we’re resetting usage limits for all subscribers.

    Wait, didn't they just reset everybody's usage last Thursday, thereby syncing everybody's windows up? (Mine should have reset at 13:00 MDT.) So is this just the normal weekly reset? Except now my reset says it will come on Saturday? This is super confusing!

    • walthamstow 1 hour ago
      The weekly reset point is different per account. I think something to do with first sign-up date. Mine is on a Tuesday.
      • schpet 59 minutes ago
        mine was originally on sunday, then got moved to thursday (which i disliked), and it is still on thursday. so them resetting my weekly limit on the same day it was scheduled to reset feels like a joke.
        • throwaway2027 28 minutes ago
          You need to send a new message once your limit window is up to make the timer start rolling again. It sucks: if I had no need for Claude during the day and forgot to use it, my reset date shifted a day later.
  • pxc 1 hour ago
    One of Anthropic's ostensible ethical goals is to produce AI that is "understandable" as well as exceptionally "well-aligned". It's striking that some of the same properties that make AI risky also just make it hard to consistently deliver a good product. It occurs to me that if Anthropic really makes some breakthroughs in those areas, everyone will feel it in terms of product quality, whether or not they're worried about grandiose/catastrophic predictions.

    But right now it seems like, in the case of (3), these systems are really sensitive and unpredictable. I'd characterize that as an alignment problem, too.

  • lifthrasiir 1 hour ago
    Is it just me, or has the reset cycle of usage limits been randomly changed? I originally had the reset point at around 00:00 UTC tomorrow, and it was somehow delayed to 10:00 UTC tomorrow, regardless of when I started using Claude in this cycle. My friends also reported very random delays, as much as ~40 hours, with seemingly no reason. Is this another bug on top of the other bugs? :-S
    • someone4958923 1 hour ago
      "This isn’t the experience users should expect from Claude Code. As of April 23, we’re resetting usage limits for all subscribers."
      • lifthrasiir 1 hour ago
        I know that. I'm saying that the cycle reset is neither what it used to be (starting at the very first usage) nor what it plausibly should be (retaining the existing reset timing).
        • jongleberry 1 hour ago
          It seems to be the same cycle for everyone now, not based on first usage. I saw a Reddit thread on this from someone with multiple accounts that all had the same cycles.
  • jryio 2 hours ago
    1. They changed the default in March from high to medium; however, Claude Code still showed high (took 1 month and 3 days to notice and remediate)

    2. Old sessions had their thinking tokens stripped; resuming the session made Claude stupid (took 15 days to notice and remediate)

    3. A system prompt change to make Claude less verbose reduced coding quality (4 days; better)

    All this to say: the experience of suspecting a model is getting worse while Anthropic publicly gaslights its user base ("we never degrade model performance") is frustrating.

    Yes, models are complex and deploying them at scale given their usage uptick is hard. It's clear they are playing with too many independent variables simultaneously.

    However you are obligated to communicate honestly to your users to match expectations. Am I being A/B tested? When was the date of the last system prompt change? I don't need to know what changed, just that it did, etc.

    Doing this proactively would certainly match expectations for a fast-moving product like this.

    • fn-mote 2 hours ago
      > 2. Old sessions had their thinking tokens stripped; resuming the session made Claude stupid (took 15 days to notice and remediate)

      This one was egregious: after a one-hour user pause, apparently they cleared the cache and then kept applying the "forgetting" for the rest of the session after the resume!

      Seems like a very basic software engineering error that would be caught by normal unit testing.

    • Eridrus 2 hours ago
      To be fair to Anthropic, they did not intentionally degrade performance.

      To take the opposite side, this is the quality of software you get atm when your org is all in on vibe coding everything.

    • sroussey 2 hours ago
      None of these problems equate to degrading model performance. Completely different team. Degraded CC harness, sure.
      • qingcharles 2 hours ago
        Sure, but it gives the impression of degraded model performance. Especially when the interface is still saying the model is operating on "high", the same as it did yesterday, yet it is in "medium" -- it just looks like the model got hobbled.
        • sroussey 2 hours ago
          Oh, absolutely. Though changes in how the model is used are eminently more fixable than the model itself.
      • johnmaguire 1 hour ago
        Yes, but for many users, CC is the product. Especially since I'm not allowed(?) to use my own harness with my sub.
    • Philpax 2 hours ago
      > Anthropic publicly gaslights their user-base: "we never degrade model performance" is frustrating.

      They're not gaslighting anyone here: they're very clear that the model itself, as in Opus 4.7, was not degraded in any way (i.e. if you take them at their word, they do not drop to lower quantisations of Claude during peak load).

      However, the infrastructure around it - Claude Code, etc - is very much subject to change, and I agree that they should manage these changes better and ensure that they are well-communicated.

      • jryio 2 hours ago
        Degraded model performance at inference in a data center vs. stripped thinking tokens: to the user, they're effectively the same.

        Sure, they didn't change the GPUs they're running, or the quantization, but if valuable information is removed and the model performs worse, performance was degraded.

        In the same way uptime doesn't care about the incident cause... if you're down you're down no one cares that it was 'technically DNS'.

        • sroussey 2 hours ago
          I thought these days the thinking tokens sent by the model (as opposed to those used internally) were just for the user's benefit. When you send the convo back, you have to strip the thinking stuff for the next turn. Or is that just local models?
      • aszen 2 hours ago
        Claude Code is not the infra; the model is the infra. They changed settings to make their models faster, and probably cheaper to run too. Honestly, with adaptive thinking it no longer matters what model it is if you can dynamically make it do less or more work.
  • WhitneyLand 2 hours ago
    Did they not address how adaptive thinking has played into all of this?
  • arjie 1 hour ago
    Useful update. It would be useful to me to switch to a nightly / stable release cycle, but I can see why they don't offer one: they want to be able to move fast, and it's not like I'm going to churn over these errors. I can only imagine the benchmark runs are prohibitively expensive, or slow, or not using their standard harness, because a weekly cadence would make a good smoke test. At the least, they'd know the trade-offs they're making.

    Many of these things have bitten me too: firing off a request that is slow because it got kicked out of cache, and getting zero cache hits (which makes everything way more expensive), so it makes sense they would try this. I tried skipping tool calls and thinking as well, and it made the agent much stupider. These all seem like natural things to try. Pity.

  • VadimPR 1 hour ago
    Appreciate the honesty from the team.

    At the same time, personally I find prioritizing quality over quantity of output to be a better personal strategy. Ten partially buggy features really aren't as good as three quality ones.

  • munk-a 1 hour ago
    It's also important to realize that Anthropic has recently struck several deals with PE firms to use their software. So Anthropic pays the PE firm, which in turn forces the firms it manages to subscribe to Anthropic.

    The artificial creation of demand is also a concerning sign.

  • KronisLV 1 hour ago
    This reads like good news! They probably still lost a bunch of users due to the negative public sentiment and not responding quickly enough, but at least they addressed it with a good bit of transparency.
  • throwaway2027 41 minutes ago
    Cool but I switched to Codex for the time being.
  • natdempk 2 hours ago
    As an end-user, I feel like they're kind of over-cooking and under-describing the features and behavior of what is, at the end of the day, a tool. Today's models are at a point where the context management, reasoning effort, etc. all need to be very stable to work well.

    The thing about session resumption changing the context of a session by truncating thinking is a surprise to me, I don't think that's even documented behavior anywhere?

    It's interesting to look at how many bugs are filed on the various coding-agent repos. Hard to say how many are real / unique, but the quantities feel very high, and it's not hard for a user to run into real bugs rapidly while exercising the various features and slash commands.

  • tontinton 1 hour ago
    Or you can use a non-vibe-designed, efficient Rust TUI coding agent made by yours truly; all my coworkers use it too :) It's called https://maki.sh!

    lua plugins WIP

  • Alifatisk 2 hours ago
    It’s incredible how forgiving you guys are with Anthropic and their errors, especially considering you pay a high price for their service and receive lower quality than expected.
    • saghm 2 hours ago
      At least personally, it feels like the choices are the one that's okay with being used for mass surveillance and autonomous weapons targeting, the one that's on track to get acquired by the AI company that dragged its feet in getting around to stopping people from making child porn with it, the one that nobody seems to use from Google, and the one that everyone complains about but also still seems to be using because it at least sometimes works well. At this point I've opted out of personal LLM coding by canceling my subscription (although my employer still has subscriptions and wants us to keep using them, so I'll presumably keep using Claude there) but if I had to pick one to spend my own money on I'd still go with Claude.
      • scblock 2 hours ago
        A valid choice, a moral choice, is none of the above.
    • ed_elliott_asc 2 hours ago
      I pay for 20x max and get so much more value out of it than I pay.
    • Avicebron 2 hours ago
      The difference in quality between ChatGPT 5.4 and Opus 4.7 is still night and day. Heck, even on Perplexity, where 5.4 is included in Pro while 4.7 is behind the Max plan or whatever, I will pick Sonnet 4.6 over the 5.4 offering, and it's consistently better. I don't love Anthropic, and I don't have illusions about them as a business.

      But if a tool is better, it's better.

      • wahnfrieden 2 hours ago
        You aren’t getting the 5.4 experience for code if you’re not using it in the Codex harness
    • arnvald 2 hours ago
      What's the alternative? Are you suggesting other LLM providers don't charge high price? Or that they don't make mistakes? Or that they provide better quality?

      We're talking about rapidly evolving products, something that most people would have considered impossible just 5 years ago. A non-deterministic product that's very hard to test. Yes, Anthropic makes mistakes, models can get worse over time, and their ToS change often. But again, is Gemini/GPT/Grok a better alternative?

    • mlinsey 2 hours ago
      The consumer surplus is quite high. Even with the regressions in this postmortem, performance was above what the models delivered last fall, when I was gladly paying for my subscription and felt it was a net time-saver.

      That said, there is now much better competition from Codex, so they only have so much rope left.

    • timmg 1 hour ago
      > It’s incredible how forgiving you guys are with Anthropic and their errors.

      Ironically, I was thinking the exact opposite. This is bleeding edge stuff and they keep pushing new models and new features. I would expect issues.

      I was surprised at how much complaining there is -- especially coming from people who have probably built and launched a lot of stuff and know how easy it is to make mistakes.

    • AntiUSAbah 2 hours ago
      Because it is still good, though.

      If you have a good product, you are more understanding. And getting worse doesn't mean it's no longer valuable, only that the price/value ratio went down. Besides, Opus 4.5 was noticeably better and only came out in November.

      There was no price increase at that time, so for the same money we got better models. Opus 4.6 again feels noticeably better, though.

      Also, moving fast-ish means getting more/better models sooner.

      I do know plenty of people, though, who use opencode or pi with OpenRouter and switch models a lot more often.

    • lukasus 2 hours ago
      At the time you wrote your comment there were 4 other comments, all of them very negative towards Anthropic and the blog post in question. How did you reach this conclusion?
      • lukan 2 hours ago
        Confused as well. I rather supposed Anthropic had some standing for saying no to Trump and being declared a national security threat, but the anger they got, and people leaving for OpenAI again (who gladly said yes to autonomous killing AI), did astonish me a bit. I also had weird things happen with my usage limits and was not happy about it. But it is still very useful to me, and I only pay for the Pro plan.
        • sunaookami 1 hour ago
          >I rather supposed Anthropic had some standing for saying no to Trump and being declared a national security threat

          I never understood why people cheered for Anthropic then, when they happily work together with Palantir.

      • unselect5917 2 hours ago
        HN glazes Anthropic every single time I see it come up. It's as obvious as HN's political bias.
    • jgbuddy 2 hours ago
      Anthropic actually not so bad. Anthropic models code good, usually. Price not so high compared to time to do it by self.
    • OsrsNeedsf2P 2 hours ago
      Look at any criticism of Mythos. Some members on HN defend it tooth and nail, despite it not being released.
    • scottyah 2 hours ago
      These are fairly small issues for an amazing product, and the company is just a few years old and growing rapidly. Also, they are leading a powerful technological revolution, and their competitors are known to have multiple straight-up evil tendencies. A little degradation is not an issue.
    • fastball 2 hours ago
      What high price? I pay $200/m for an insane number of tokens.
    • operatingthetan 1 hour ago
      I don't think Anthropic has to inform their customers of every change they make, but they should have with this one.
    • oytis 2 hours ago
      Remember Louis CK talking about Wi-Fi on an airplane? People are dealing with highly experimental technology here
    • tempest_ 2 hours ago
      A lot of people are provided their access through work.

      They don't actually pay the bill or see it.

    • mystraline 2 hours ago
      Exactly. They've done now like 6 rug-pulls.

      Idiots keep throwing money at real-time enshittification and "I am changing the terms. Pray I do not change them further."

      And yes, I am absolutely calling people who keep getting screwed and paying for more 'service' as idiots.

      And Anthropic has proved that people will pay for less and less. So, why not fuck them over and make more company money?

  • Rapzid 19 minutes ago
    > On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode.

    Translation: To reduce the load on our servers.

  • davidfstr 1 hour ago
    Good on Anthropic for giving an update & token refund, given the recent rumors of an inexplicable drop in quality. I applaud the transparency.
    • scuderiaseb 1 hour ago
      Opus 4.7 was released a week ago, and all limits were reset at that point, so this was very convenient for them: basically everyone's weekly limit was about to reset anyway.
  • einrealist 2 hours ago
    Is 'refactoring Markdown files' already a thing?
  • 2001zhaozhao 2 hours ago
    How about just not changing the harness abruptly in the first place? Make new system prompt changes "experimental" first so you can gather feedback.
  • gilrain 55 minutes ago
    Hi Boris, random observer here. Would you consider apologizing to the community for mistakenly closing tickets related to this and then wrongly keeping them closed when, internally, you realized they were legitimate?

    I think an apology for that incident would go a long way.

  • ayhanfuat 2 hours ago
    Reading the "Going forward" section I see that they have zero understanding of the main complaints.
    • Kiro 2 hours ago
      How so?
      • ayhanfuat 2 hours ago
        They feel they're in a position to make important trade-off decisions on behalf of the user. "It's just slightly worse, I'll sneak this change in" is not something to be tolerated, whether it actually turns out to be much worse or not. Their adaptive thinking mess has caused a ton of work for me. I know a lot of people are saying Codex is actually better now. I don't agree but I'm switching to it because it's much more reliable.
        • operatingthetan 1 hour ago
          I agree, but these LLM products are all black-boxes so we need to demand more accountability from them.
  • antirez 47 minutes ago
    Zero QA basically.
  • walthamstow 1 hour ago
    So we weren't going mad then!
  • whalesalad 35 minutes ago
    The funny thing is, in the last 3 days Claude has gotten substantially worse. So the claim that "All three issues have now been resolved as of April 20 (v2.1.116)" does not land with me at all.
  • setnone 2 hours ago
    Good on them for resolving all three issues, but is it any good again?
    • alxndr13 1 hour ago
      For me at least, yes; I just wrote that to coworkers this afternoon. It behaves way more "stable" in terms of quality, and I don't have the feeling of the model getting way worse after 100k tokens of context or so.

      What I notice: after 300k there's some slight quality drop, but I just make sure to compact before that threshold.

  • motbus3 2 hours ago
    I had a similar experience just before 4.5 and before 4.6 were released.

    Somehow, three times makes me not feel confident in this response.

    Also, if this is all true and correct, how the heck do they validate quality before shipping anything?

    Shipping software without quality is a pretty easy job, even without AI. Just saying...

  • ramesh31 48 minutes ago
    Effort should not be configurable for Opus, it should be set to a single default that provides the highest level of capability. There are zero instances in which I am willing to accept a lesser result in exchange for a slightly faster response from Opus. If that were the case I would be using Flash or Haiku.
  • bearjaws 2 hours ago
    The issue making Claude just not do any work was infuriating, to say the least. I already ran at the medium thinking level so I was never impacted by that change, but having to constantly say "okay, now do X like you said" was annoying.

    Again goes back to the "intern" analogy people like to make.

  • dcchambers 15 minutes ago
    So it turns out Anthropic was gaslighting everyone on twitter about this then? Swearing that nothing had changed and people were imagining the models got worse?
  • systemvoltage 1 hour ago
    Interesting. All 3 seem like they're obviously going to impact quality, e.g. reducing the effort from high to medium.

    So then, there must have been explicit internal guidance/policy that allowed this trade-off to happen.

    Did they fix just the bug, or the deeper policy issue?

  • hajile 1 hour ago
    My takeaway is that they knew they were changing a bunch of stuff while their reps were gaslighting us in the comments here.

    Why should we ever trust what they say again, or trust that they won't rug-pull again once this blows over?

  • jruz 1 hour ago
    Too late bro, switched to Codex I’m done with your bullshit.
  • rishabhaiover 1 hour ago
    Boris gaslit us over all the quality-related incidents for weeks, not acknowledging these problems.
    • throwaway2027 22 minutes ago
      Maybe he didn't know, or they were still figuring it out, which is fine; they're still engineers who can get things wrong sometimes. But the communication felt lackluster, and being on the receiving end sucks when you had a reliable setup that then degrades. There's a reason people don't upgrade software and say "if it works, don't fix it", but obviously that's not an option for Anthropic if they want to keep improving the product, so they need good measurement tools and quick rollbacks, even if properly "benchmarking" LLMs is difficult.
  • 0gs 1 hour ago
    wow resetting everyone's usage meter is great. i was so close to finally hitting my weekly limit for once though
  • teaearlgraycold 2 hours ago
    > On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.

    Is it just me, or does this seem kind of shocking? Such a severe bug, affecting millions of users with a non-trivial effect on the context window, should have been readily evident to anyone looking at the analytics. Makes me wonder if this is the result of Anthropic's vibe-coding culture. Is no one actually looking at the product, its code, or its outputs?

    • nrki 2 hours ago
      > we refunded all affected customers

      Notably missing from the postmortem

    • chermi 2 hours ago
      It's really hard to understand. There need to be really loud, Batman-signal-in-the-sky-type signals from some hero third party calling out objective product degradation. Do they use CC internally? If so, do they use a different version? This should have been almost as loud a break as the service going down altogether, yet it took two weeks to fix?!
      • poly2it 1 hour ago
        > ... we’ll ensure that a larger share of internal staff use the exact public build of Claude Code (as opposed to the version we use to test new features) ...

        Apparently they are using another version internally.

    • manmal 2 hours ago
      I think that would also have busted the cache all the time, and uncached requests consume usage limits rapidly.
  • yuvrajmalgat 1 hour ago
    ohh
  • o10449366 45 minutes ago
    Resuming sessions has been broken since Feb (I had to get Claude to write a hook to fix that itself), the monitoring tool doesn't work and blocks usage of what does (a simple sleep; except it doesn't even block correctly, so you just sidestep it in more ridiculous ways), and yet there seem to be ever more annoying activity proxies/spinner wheels (staring into middle distance)... I don't know how, in the span of a few months, you lose such focus on your product goals. Has Anthropic already reached that point in its lifecycle where the product team is no longer staffed by engineers, and more and more non-technical MBAs are joining to ride the hype train?
  • petervandijck 1 hour ago
    I have noticed a clear increase in smarts with 4.7. What a great model!

    People complain so much, and the conspiracy theories are tiring.

  • troupo 1 hour ago
    > they were challenging to distinguish from normal variation in user feedback at first

    Translation: we ignored this while our various vibe coders were busy gaslighting everyone, saying this could not be happening.

  • whalesalad 1 hour ago
    I genuinely don't understand what they have been trying to achieve. All of these incremental "improvements" have ... not improved anything, and have had the opposite effect.

    My trust is gone. When day-to-day updates do nothing but cause hundreds of dollars in wasted tokens, and the response is "we... sorta messed up, but just a little bit here and there, and it added up to a big mess-up"... bro, get fuckin' real.

  • dainiusse 2 hours ago
    Corporate bs begins...
  • cute_boi 46 minutes ago
    Honestly, it’s kind of sad that Anthropic is winning this AI race. They are the most anti-open-source company, and we should try to avoid them as much as possible.

    They are only doing this because OpenAI is snatching their customers. And their employees have been gaslighting people [1] for ages. I hope open-source models will provide fierce competition so we do not have to rely on an Anthropic monopoly. [1] https://www.reddit.com/r/claude/comments/1satc4f/the_biggest...

  • ElFitz 1 hour ago
    Now we know why Anthropic banned the use of subscriptions with other agent harnesses: they partially rely on the Claude Code CLI to control token usage through various settings.

    And it also tells us why we shouldn’t use their harness anyway: they constantly fiddle with it in ways that can seriously impact outcomes, without so much as a warning.