socialist reformism in the age of artificial intelligence –
- all human knowledge belongs to the public commons, and so does ai trained on it. all companies are forced to open-source both model weights and training logs; ai companies can only sell support and expertise.
- fingerprinting model output, with penalties for not disclosing model use.
- public logs and public deliberation for any compute time on medium- to university-sized gpu clusters.
- any personal data (~GDPR definition) in any ai model (i.e., by default, most LLMs) confers a degree of ownership over said model. the right to delete myself from ai models should be granted, necessitating retraining. we bury really large public-facing models in hell.
I think this sounds great; it’s approximately what I would expect to fall out of applying ideas about stakeholder-centric government and cooperative economics to the issue of AI-as-it-currently-exists. Unfortunately, I don’t think AI-as-it-currently-exists is going to exist for very long; I expect it to evolve fairly quickly into something different (probably a succession of somethings-different) over the next few decades, and since reformism is by definition not an immediate process, that seems important to plan for. We don’t live in a cooperative, stakeholder-centric economy right now, and by the time (if we’re lucky) we live under that or any analogous form of socialism, the view of AI these policies are working from will likely be obsolete. So I’m not sure how much space there is for policies like these, even if they would be desirable.