Discussion about this post

DRF

“Trained on 20 years of blues, an LLM won’t produce rock ’n’ roll.”

A badly remembered comment from one of Zitron’s blog posts.

David Raynor

It strikes me that an LLM must inevitably get dreadfully confused, because a lot of what it has been trained on will be fiction or simply incorrect. Even if the builders tried to prevent it, for example by deliberately excluding the works of Shakespeare, there would still be all the English Literature study guides that discuss those works. I wouldn’t be surprised if they are also being trained on documents written by other AI systems, so the errors will accumulate, like persistent pollutants in the food chain. And in any case, as you rightly point out, they’re not actually thinking anyway, just picking a likely next word or action based on the probabilities of what has come before. When something stupid becomes a hot topic (like Mr T’s “why not try bleach?” COVID cure), the fact it is repeated so much must surely make it even more likely to be picked. So even articles that say something is wrong may end up reinforcing the likelihood of the same wrong thing being put out again.

We are doomed. Our only hope is that it makes such a mess of something expensive that those in charge finally say “Ouch!” and give up on the idea.

