pcouy@lemmy.pierre-couy.fr · 8 days ago

You’ve probably read about language-model AIs being essentially black boxes, uncontrollable even to the very people who created them.

When OpenAI wants to stop ChatGPT from saying certain things, they can fine-tune the model to reduce the likelihood that it outputs forbidden words or sentences, but this offers no guarantee that the model will actually stop saying them.

The only way to actually prevent such an agent from saying something is to check the output after it is generated, and withhold it from the user if it trips a content filter.
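To illustrate, here’s a minimal sketch of that output-side check in Python. Everything in it (generate_reply, the regex blocklist) is a hypothetical stand-in; real moderation layers use trained classifiers rather than pattern lists, but the structure is the same: generate first, inspect the result, and only then decide whether the user sees anything.

```python
# Minimal sketch of an output-side content filter. generate_reply() and the
# regex blocklist are hypothetical stand-ins, not any real API.
import re

BANNED_PATTERNS = [
    re.compile(r"\bforbidden phrase\b", re.IGNORECASE),
    re.compile(r"\banother banned term\b", re.IGNORECASE),
]

def generate_reply(prompt: str) -> str:
    """Stand-in for the actual language model call."""
    return "This reply happens to contain a forbidden phrase."

def violates_policy(text: str) -> bool:
    # The check runs on the *generated* text: fine-tuning only lowers the
    # probability of bad output, so nothing upstream can guarantee safety.
    return any(p.search(text) for p in BANNED_PATTERNS)

def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    if violates_policy(reply):
        # Withhold the model's output entirely rather than trusting it.
        return "Sorry, I can't help with that."
    return reply

print(safe_reply("tell me something"))  # -> "Sorry, I can't help with that."
```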

My point is that AI researchers have found a way to simulate a kind of artificial brain, from which some “intelligence” emerges in a way that those same researchers are far from deeply understanding.

If we live in a simulation, my guess is that life was not manually designed by the simulation’s creators, but rather emerged from the simulation’s rules (what we Sims call physics), just as people studying the origins of life mostly hypothesize. If so, the creators are probably as clueless about the inner details of our consciousness as we are about the inner details of LLMs.