There’s a bit more to it than that
London is full of amazing things, but they’re spread out over such an absurdly large area that doing anything is a pain. And everyone who lives there is so numb to it! They’ll happily indulge in 3-4 hours of public transport every day as if this is a rational way to live.
I’m very happy that they have a reasonably decent transit system, but fuck me, I wanted those 4 hours of my life, actually.
That’s true, I only relied on my phone until the first time it just didn’t work for no reason haha
I would have put forward “weg” but I’ll accept “weeg”
Well of course we’re going to throw poo at him
Not that much different to losing your wallet before you could pay with your phone
Oh, sorry, the 45-page document is for something else. The only person who understands this dataset is Dave, and he was made redundant 5 years ago. Anyway, can you get this done today?
ChatGPT is not designed to fool us into thinking it’s a human. It produces language with a specific tone & direct references to the fact it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn’t be intelligent, in my view.
The Turing test is flawed, because while it is supposed to test for intelligence it really just tests for a convincing fake. Depending on how you set it up I wouldn’t be surprised if a modern LLM could pass it, at least some of the time. That doesn’t mean they are intelligent, they aren’t, but I don’t think the Turing test is good justification.
For me, the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn’t plan a whole sentence out in advance; it works token by token… The input to each prediction is just everything so far, up to the last word. When it starts writing “As…” it has no concept of the fact that it’s going to write “…an AI language model” until it gets through those words.
Frankly, given that fact it’s amazing that LLMs can be as powerful as they are. They don’t check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token… An incredible piece of technology, despite its obvious flaws.
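To make the point concrete, here’s a minimal sketch of that generation loop. A hypothetical bigram lookup table stands in for the neural network (real LLMs predict from the entire context with a learned model, not a table); the thing to notice is the loop itself: one token predicted at a time, each prediction conditioned only on the text produced so far.

```python
# Toy autoregressive generation: a stand-in "model" (a bigram table)
# produces exactly one token per step, conditioned on the context so far.
# The table is made up for illustration; only the loop structure matters.

BIGRAMS = {
    "As": "an",
    "an": "AI",
    "AI": "language",
    "language": "model",
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        last = tokens[-1]          # the model only sees what exists so far
        nxt = BIGRAMS.get(last)    # predict exactly one next token
        if nxt is None:            # no known continuation: stop generating
            break
        tokens.append(nxt)         # the new token joins the context for the next step
    return " ".join(tokens)

print(generate("As"))  # "As an AI language model"
```

At the step where it emits “an”, the loop has no representation of the sentence it will eventually produce — the future tokens simply don’t exist yet, which is the point being made above.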
Well he seems to survive every time so… Penultimate destination?
Jesus Christ I may have dysphoria but I don’t have thatphoria
That’s Ms Sucksthechoicestdicksforcashonly to you
I prefer to be choosy about the dicks I suck
Yes, as long as it isn’t one person 8 votes, another person 9. Although the electoral college is somewhat that anyway… A vote in one state is not equivalent to a vote in another.
Bookmaker gives estimates
Ah yes, the only notable thing about Luxembourg is the US military. Nothing else.
Why would the model be trained on outdated prices? I’m not talking about LLMs, but a separate model designed to parse visual information - specifically websites - and extract particular elements like prices. My comment about ChatGPT was in reference to the newer models which can relay visual information; I’m not suggesting that would be the right approach for training a new model.
The applications would be broader than just prices - this would allow you to scrape any human-readable website without needing to do bespoke development.
This kind of small-scale optimization is not really the best use case for AI anyway. Considering the actual cost of running that kind of code at a large scale… I’m not convinced the savings are worth it, even setting aside the petrol issue.
AI doesn’t need to be in the hands of consumers. It should be a step removed, working behind the scenes to make all those basic foods cheaper before you even go shopping. It should be optimizing supply chains, reducing production costs, and otherwise making us more efficient at a societal level.
Which, well, in some cases it already is. Sadly many companies just use it to optimise their marketing 🙄
Yeah I’ve been migrating away steadily. And what a shame! Now Google don’t get to use my emails to train their shit AI.