AI/LLM/thoughts

About

For now, this is mostly a space for taking notes towards a coherent essay about LLMs and Gen3 "AI" in general. There's a lot of hype, and a lot of pushback against the hype -- pushback that is generally legitimate, but that tends to splash too widely and condemn LLMs in general too harshly.

I see tremendous possibilities, and I also see the abuses -- and I see the potential to "use the master's tools" to take down the master, or at least to help counter the worst of the master's abuses. Humane and informed uses of LLMs could save us, while allowing the techbros to monopolize the technology (legally or just de facto) will likely doom us.

Notes

  • big "AI" companies are using so much energy because they're in a capitalist arms race, not because LLMs inherently demand it
    • ...although training LLMs does use a lot of power -- it just doesn't have to be on the scale of, e.g., re-commissioning Three Mile Island (see the back-of-envelope arithmetic after this list)
  • LLMs as a disability aid for professional writers with brain-damage
    • One writer in particular can still write but has difficulty moderating her output. With LLMs, she can get quick, low-effort critiques of a rambling draft, which help her pare out the redundant bits and sequence the rest more comprehensibly.
  • not underestimating the significance of the "hidden layers", which go beyond what we usually think of as "statistics" -- from what I can see, they are able to conceptualize, to some degree, even if they're still not "AI" (see the toy embedding sketch after this list)
  • "conceptualizing" vs. "reasoning" -- but there are arguably some ways in which LLMs do reason, and even the capitalists are finally putting some resources into developing this further
  • scale has a quality of its own
  • my dialogue with ChatGPT where I walked it through correcting an erroneous math answer
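
On the energy point above: a quick back-of-envelope sketch in Python. The numbers are rough published estimates, not measurements -- the training figure is Patterson et al.'s 2021 estimate for GPT-3, the reactor capacity is public record for Three Mile Island Unit 1, and the capacity factor is my assumption.

    # Back-of-envelope: one large-model training run vs. one reactor-year.
    TRAINING_RUN_MWH = 1_287   # est. energy to train GPT-3 (Patterson et al., 2021)
    REACTOR_MW = 819           # Three Mile Island Unit 1, net capacity
    CAPACITY_FACTOR = 0.90     # assumed; typical for US nuclear plants
    HOURS_PER_YEAR = 8_760

    reactor_annual_mwh = REACTOR_MW * HOURS_PER_YEAR * CAPACITY_FACTOR
    runs_per_reactor_year = reactor_annual_mwh / TRAINING_RUN_MWH

    print(f"reactor output: {reactor_annual_mwh:,.0f} MWh/year")
    print(f"GPT-3-scale training runs that would cover: {runs_per_reactor_year:,.0f}")
    # ~6.5 million MWh/year vs. ~1,300 MWh/run -- roughly 5,000 runs per year.
    # A single training run is big, but reactor-scale demand comes from the
    # arms race (constant retraining at ever-larger scale, plus mass
    # inference), not from the technology as such.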
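
On the "hidden layers" point: a minimal toy sketch of what "conceptualizing" might mean at the lowest level. The vectors below are hand-made stand-ins for learned embeddings (real ones have hundreds of unlabeled dimensions), but they illustrate the classic word2vec result: directions in the learned space encode something concept-like, which is already more than word-frequency "statistics".

    # Toy analogy-solving with embedding arithmetic. Hand-made vectors,
    # NOT real model weights; dimension labels are purely illustrative.
    import numpy as np

    emb = {
        # dims: [royalty, maleness, fruitness]
        "king":  np.array([0.9,  0.8, 0.0]),
        "queen": np.array([0.9, -0.8, 0.0]),
        "man":   np.array([0.1,  0.8, 0.0]),
        "woman": np.array([0.1, -0.8, 0.0]),
        "apple": np.array([0.0,  0.0, 0.9]),
    }

    def nearest(vec, skip=()):
        """Vocabulary word whose embedding is most cosine-similar to vec."""
        def cos(a, b):
            return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return max((w for w in emb if w not in skip), key=lambda w: cos(emb[w], vec))

    # king - man + woman lands nearest to queen:
    target = emb["king"] - emb["man"] + emb["woman"]
    print(nearest(target, skip={"king", "man", "woman"}))  # -> queen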

LLM Skepticism

  • 2025-06-09 Trusting your own judgement on ‘AI’ is a huge risk (Baldur Bjarnason)
    • I think a lot of the cognitive biases and self-deceptions he lists are things I documented early on for Issuepedia, and that I've been semi-consciously training myself to watch out for ever since (and that the {debate/truth}-mapper is specifically intended to disarm).
    • I'm not sure I like his model of "avoiding cognitive hazards"; I think it's more effective to learn to recognize them, so you know to pay extra-close attention to what you find yourself thinking after encountering one. He needed to propose a process, though -- and before you can avoid a hazard, you have to be able to detect it...
    • For all his warnings against accepting gossip as evidence, it seems like he's asking the reader to accept his gossip as evidence.
    • (It also reminds me of some of the reasonable-seeming shibboleths on LessWrong, especially the admonition against "happy death spirals" -- which seemed to boil down to something like "if you keep coming up with the same solution to a variety of problems, and you're really starting to think it might be really good, it's probably actually terrible".)
    • That said, he still makes a lot of good points. The risk of self-deception when encountering a new technology like this is high. Makes me think of the Black-Scholes model...
      • (The Wikipedia article on B-S throws in a lot of scary-looking calculus before getting to the risks, and I'm not sure it ever mentions that heavy reliance on B-S was one of the contributing causes of the 2007-2008 financial crisis.)
    • "Making a broken thing go faster only makes it worse." ...but... but... "move fast and break things" is supposed to be good, right? 🥺 (/s)
  • hachyderm.io/@jenniferplusplus: «There's no understanding, or intent, or knowledge, or learning, or truth in there. The best mental model to understand what it's doing is something like mad-libs. It's a mad-lib solver.» I think this goes too far. It's very much an oversimplification: it ignores the abstraction and conceptualization of which LLMs are clearly capable, as well as how surprisingly often they can pull a reasonably correct answer out of a very abstract or vague request.

Other Links