• Voroxpete@sh.itjust.works · +17/-1 · 5 days ago

    From a nerdy perspective, LLMs are actually very cool. The problem is that they’re grotesquely inefficient. That means that, practically speaking, whatever cool use you come up with for them has to work in one of two ways: either a user runs it themselves, typically very slowly or on a pretty powerful computer, or it runs as a cloud service, in which case that cloud service has to figure out how to be profitable.
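
    To make the “run it yourself” option concrete, here’s a minimal sketch using the llama-cpp-python bindings; the model file and settings are placeholder assumptions, not a recommendation:

    ```python
    # Minimal local-inference sketch using llama-cpp-python.
    # The GGUF model path below is hypothetical; bring your own download.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",
        n_ctx=2048,       # context window
        n_gpu_layers=-1,  # offload all layers to the GPU, if it fits
    )

    out = llm("Summarize the trade-offs of local vs. cloud LLM inference:",
              max_tokens=128)
    print(out["choices"][0]["text"])
    ```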

    Right now we’re not being exposed to the true cost of these models. Everyone is in the “give it out cheap / free to get people hooked” stage. Once the bill comes due, very few of these projects will be cool enough to justify their costs.

    Like, would you pay $50/month for NotebookLM? However good it is, I’m guessing it’s probably not that good. Maybe it is. Maybe that’s a reasonable price to you. It’s probably not a reasonable price to enough people to sustain serious development on it.

    That’s the problem. LLMs are cool, but mostly in a “hey, this is kind of neat” way. They do things that are useful but not essential, yet they do so at an operating cost that only works for things that are essential. You can’t run them on fun money, but you also can’t make a convincing case for selling them at serious money.
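
    For a rough sense of what “the true cost” might look like, here’s a back-of-envelope sketch; every number in it is an assumption for illustration, not real pricing:

    ```python
    # Back-of-envelope cloud serving cost per user. All figures assumed.
    gpu_cost_per_hour = 2.50   # assumed hourly rate for one inference GPU
    users_per_gpu = 40         # assumed concurrent users one GPU can serve
    hours_per_month = 730

    cost_per_user = gpu_cost_per_hour * hours_per_month / users_per_gpu
    print(f"~${cost_per_user:.2f} per user per month, before any margin")
    ```

    Under those made-up numbers you land right around that $50/month mark, which is the whole problem.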

    • MagicShel@lemmy.zip · +7/-1 · 5 days ago

      Totally agree. It comes down to how often this thing is efficient for me if I pay the true cost. At work, yes, it would save over $50/mo if it works well. At home it would be difficult to justify that cost, but I’d also use it less, so the cost could be lower. I currently pay $50/mo between ChatGPT and NovelAI (and the latter doesn’t operate at a loss), so it’s worth a bit to me just to nerd out over it. It certainly doesn’t save me money, except in the sense that it’s time and money I don’t spend on some other endeavor.

      My old video card is painfully slow for local LLMs, but I dream of spending on a big card that runs closer to cloud speeds, even if the quality is lower, for easier tasks.
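
      For anyone pricing out that big card, a rough VRAM estimate is just parameter count times bytes per parameter at your quantization, plus some overhead. A sketch with assumed numbers:

      ```python
      # Rough VRAM needed just to load the weights. The overhead factor is
      # a guess to cover the KV cache and runtime buffers.
      def vram_gb(params_billion: float, bits_per_param: float,
                  overhead: float = 1.2) -> float:
          weight_bytes = params_billion * 1e9 * bits_per_param / 8
          return weight_bytes * overhead / 1e9

      print(f"8B  @ 4-bit: ~{vram_gb(8, 4):.1f} GB")   # fits a 24 GB consumer card
      print(f"70B @ 4-bit: ~{vram_gb(70, 4):.1f} GB")  # beyond any consumer card
      ```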

      • xavier666@lemm.ee · +2 · 3 days ago

        > but I dream of spending on a big card that runs closer to cloud speeds

        Nvidia’s new motto: “An A100 at every home”

    • AA5B@lemmy.world · +1 · 4 days ago

      I’ll pay a bit more for the next model of my phone if it promises on-device AI; actually, I already did. We’ll see if that turns into something useful.

      So far the bits and pieces I’ve played with are not generative AI, but natural language processing and inference. The improved features definitely make my phone a more useful piece of hardware, but nothing revolutionary.