

I feel like they need a test case to figure out how to define derivative work when the creator is not human.
If I make a painting and you see it and then make one in a similar style, it would be considered derivative and not a violation. In your head is a distillation of my image; it doesn't contain the image, and your output would be lossy. Similarly, the LLM contains statistics, not verbatim content. So the question is: how is human synthesis different from AI synthesis?
Until that is resolved, a class action would probably fall apart. Individual damages would need to be determined, and even a single example of "you put your stuff out to the public and aren't going after Joe, who made derivative work…" would derail the case.
But then I'd ask: how do you outlaw human systematic consumption of information? If the camera on my car can't watch 24/7, then why should YOU be allowed to watch 24/7? What you're outlawing is the literal methodology.
This has always been an issue with my thoughts on AI. If the computer became sentient, does the LLM learning rule go out the window? Or is it because they're made of metal?