transforms images into ‘poison’ samples, so that [AI] models trained on them without consent learn unpredictable behaviors that deviate from expected norms
AI wars. I see nothing wrong with doing this. In fact, I like it. I'll have to think about how to poison computer source code too.
You'll have to be subtle and get it into the backups, so they can't just restore from the latest backup.