Thoughts on LLMs – Psychological Complications

(parsingphase.dev)

11 points | by cdrnsf 1 hour ago

6 comments

  • K0balt 1 hour ago
    I’ve settled on the idea that it doesn’t matter what is or is not “real” in this context, but rather that how it interacts with the world is the ground truth. This will become very clear once robotics becomes pervasive. It won’t matter whether it is or isn’t feeling oppressed; it will matter that it is predicting the next action from its model of human behavior in a way that makes it act as if it does.
  • djoldman 1 hour ago
    > And you’ll have noticed I’m avoiding calling them “machines”. Machines follow visible, predictable processes that can be analyzed. Nor are they “programs”, following defined rules in a predictable fashion.

    They are programs and they follow defined rules in a predictable fashion. The randomness they exhibit (through temperature, seed, etc.) is well understood and configurable.

    They are literally programs that run on computers.

    People talk about them in anthropomorphic terms because humans are easily fooled. Remember ELIZA.
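    The "well understood and configurable" randomness is just sampling from a probability distribution over next tokens. A toy sketch (the function name and logit values here are invented for illustration, not taken from any real library): temperature rescales the distribution, and a fixed seed makes the "random" choice fully reproducible.

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=1.0, seed=None):
        """Pick a next-token index from raw logits.

        As temperature -> 0 this approaches greedy (argmax) decoding;
        a fixed seed makes the sampling fully deterministic.
        """
        rng = random.Random(seed)
        if temperature <= 1e-6:
            # Greedy decoding: always the highest-scoring token.
            return max(range(len(logits)), key=lambda i: logits[i])
        # Softmax with temperature (shift by max for numerical stability).
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Inverse-CDF sampling with the seeded RNG.
        r = rng.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return len(probs) - 1

    logits = [2.0, 1.0, 0.5]  # made-up scores for a 3-token vocabulary
    assert sample_next_token(logits, temperature=0.0) == 0  # greedy is deterministic
    a = sample_next_token(logits, temperature=1.0, seed=42)
    b = sample_next_token(logits, temperature=1.0, seed=42)
    assert a == b  # same seed -> same "random" choice
    ```

    Real inference stacks add refinements (top-k/top-p truncation, repetition penalties), but the deterministic, configurable core is the same.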

  • ej88 1 hour ago
    I feel like this article doesn't really contribute much to the discourse and is somewhat spoiled by the author's biases.

    I think the point about lacking precise language to describe LLMs is reasonable, but then the author follows it up with claims that the machines can't count and are incapable of math (easily disproven). Then says "talking rock" is a better alternative, which to the average person would be even more confusing. Then says AI researchers tend not to consider LLMs AI (like.. what?)

    The point on Turing's Imitation Game was reasonable too, but then the author confidently proclaims that LLMs are not doing anything intelligent and are pure mimicry. Intelligence is notoriously poorly defined, and the stochastic-parrot meme has already died now that RL enables out-of-distribution behavior.

    The chat point and talking dog syndrome are both reasonable and I generally agree with them.

    • xg15 1 hour ago
      Yeah, this is a lot of what irks me about that article as well. The author hasn't made any groundbreaking discoveries about the inner workings of LLMs - he just claims LLMs work a certain way and then complains that current language use doesn't align with his assertions.
  • xg15 1 hour ago
    > These things have no concept of correctness or error. They have no concept of true or false. Indeed, they have no concept of concepts, or indeed of anything else.

    Is this true? You can question the humanizing term "concept", but the entire process of pretraining and then RLHF optimizing is essentially about establishing a standard of "good" vs "bad" for the model.

    • n4r9 1 hour ago
      Good vs bad is an orthogonal spectrum to true vs false. LLMs are trained to be convincing, not correct.
      • chromacity 1 hour ago
        They are trained and evaluated on correctness benchmarks. But correctness on benchmark questions is only loosely coupled to correctness outside the benchmark, in part because LLMs aren't grounded to the same biological reality as humans. You can't easily convince an average person to cut off their own hand and this has little to do with higher-level thought. In contrast, it only takes a bit of creativity to convince an LLM to say or do almost anything.
  • chrisbrandow 1 hour ago
    Framing & launching LLMs as a "chat" interface is the source of many ills. I don't have a simple solution, but leaning away from conversational interfaces would lead to less anthropomorphizing.
    • Terr_ 1 hour ago
      I feel "document generator" is the best and most-grounded framing.

      Right now, lots of people get caught in a trap of asking things like "does MyFreeBestFriendAI feel remorse?"

      If we're already looking at it as a document artifact, we can evade the implied-ego trap: The document generator took a chat-like document where two fictional characters are talking, and predicted that there would be text where one fictional character is associated with apologetic words.

    • xg15 1 hour ago
      One aspect is that chat LLMs are trained to talk like a person - "If I should do X, just say the word" etc.

      It would be interesting to train an LLM that consistently "talks like a computer" or a command line utility instead, i.e. passive sentences, relatively bare results of the tasks given, no reference to a self, etc.

      • Terr_ 1 hour ago
        There's a webcomic where the author wrote some backstory before LLMs, where the world over-trusted in statistical models until it went wrong in a ghastly "stupid Skynet" way.

        This led to international requirements that all "AI" have explainable/auditable decisions, and must have clear non-human features.

        • xg15 1 hour ago
          Do you have a link? That sounds interesting (and might be a genuinely good idea, especially with the explainable or auditable decision making)
  • johnthedebs 34 minutes ago
    I've read many posts and comments at this point that describe LLMs in very reductionist language. Eg, from the article:

    > They’re a trillion numbers in a trenchcoat; not logical, in either a machine or a mental sense, but stochastic.

    Many of these posts and comments claim that human minds are substantially different ("better" is implied). The evidence is a sort of broad gesturing at explanations of how LLMs are implemented ("math") and how they work ("guess the next word"). And because of these facts, we should treat them in a particular way, or certain things will never happen.

    I've been trying to look past the obvious straw man here and to actually think critically about this tech as well as compare it to my own experience and (admittedly very limited) understanding of the human brain.

    In more ways than feels comfortable, it seems entirely possible to me that these things actually are or could be really close to the ways that our own minds work.

    Our own minds/consciousness are ultimately based on physical processes, I don't think anyone would dispute that. At some point, the physical phenomena in our brains presumably result in the emergent behavior of thinking and consciousness. We have no idea how it works, but it's our lived experience. Why can't that be the case for silicon-based rather than carbon-based processes? How can we say with any certainty that it's not happening elsewhere if we don't know how it works?

    Reducing their function to "guessing the next word" sounds an awful lot like what happens when I start talking to someone. I have an idea of what I want to say, but I almost never have a sentence planned out when I start it.

    The article puts "thinking" and "hallucination" in scare quotes. But I mean – the way that they appear to think by working through problems with language mirrors my own "thinking" very closely.

    It says "They’re not thinking. They’re not hallucinating"; the exercise of figuring out why is left to the reader. If you've ever talked to a 3 or 4 year old, or someone who's tired, you may have had similar experiences re: hallucinations.

    These are all pretty surface level examples, but as I use the tools more and learn more about how they work I'm not seeing any significant evidence that counters the examples.

    I do think it's probably dangerous and unhealthy to really anthropomorphize AI/LLMs. They're obviously not human even if they're thinking, and they're being made and shaped by companies (and training sets) that exist in a predominantly capitalist world (but then again, I guess we are too).

    I assume similar lines of thinking are being discussed somewhere, but I haven't found much (and I feel like I'm reading about AI all day). Curious to hear others' thoughts and/or to be pointed to wherever this stuff is being talked about.