irockasingranite:

himbofisher:

unclefather:

new states just dropped and white supremacist sharks live there

Actually not a bad example of the problems with LLMs.

LLMs work roughly by performing a sort of textual interpolation. The prompt corresponds to a sampling point, and the model interpolates between points in its training data to produce output at that point.

When you prompt an LLM for novel information, you’re making it perform the textual equivalent of extrapolation. And anyone who’s ever done data analysis with machine learning (remember, linear fits are machine learning!) knows the perils of extrapolation.
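To make that concrete, here’s a minimal sketch with an ordinary linear fit. The function, ranges, and numbers are made up purely for illustration and have nothing to do with any particular LLM; the point is just that a model which looks accurate inside the region it was fit on can be wildly wrong outside it.

```python
# A minimal sketch of why extrapolation is risky, using a plain linear fit.
# (numpy only; the "true" function and ranges are invented for illustration)
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is gently curved, but we only observe a narrow slice of it.
def truth(x):
    return np.sin(x)

x_train = np.linspace(0.0, 1.0, 20)                        # sampled region
y_train = truth(x_train) + rng.normal(0, 0.01, x_train.shape)

# Fit a straight line: machine learning at its most basic.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Inside the sampled region the fit is fine; far outside it,
# the line keeps climbing while the truth turns over.
for x in (0.5, 5.0):
    pred = slope * x + intercept
    print(f"x={x}: prediction={pred:+.2f}, truth={truth(x):+.2f}")
```

Run it and the prediction at x=0.5 lands close to the true value, while at x=5.0 the fitted line says roughly +4 when the truth is about -1. Same model, same coherent-looking output, completely different reliability depending on whether you stayed inside the data.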

Whatever you get out might look like coherent text, but the information it contains is going to be the fancy equivalent of “Morth Carolina”.