I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those who would understand what that means.
Because it is harmful to the creators who rely on the value of their work to make a living.
There already exists a choice in the marketplace: creators can attach a permissive license to their work if they want to. Some do, but many do not. Why do you suppose that is?
you think authorship is so valuable or so special that one should be granted a legally enforceable monopoly at the loosest notions of authorship
Yes, I believe creative works should be protected: that expression has value, and in a digital world it is too easy to copy a work and deprive the original author of its value. This applies equally to Disney and Tumblr artists.
I think without some agreement on the value of authorship / creation of original works, it's pointless to respond to the rest of your argument.
AI can “learn” from and “read” a book in the same way a person can and does
The emphasized part is incorrect. It is not the same, yet your argument seems to rest on that claimed equivalence: because (you claim) it is the same, training is no different from a human reading all of these books.
Regarding your last point, copyright law doesn't just kick in because you try to pass something off as an original (by, for example, marketing a book as being from a best-selling author). It applies based on similarity, whether you mention the original author or not.
AI can “learn” from and “read” a book in the same way a person can and does
This statement is the basis for your argument and it is simply not correct.
Training LLMs and similar AI models is much closer to a sophisticated lossy compression algorithm than it is to human learning. The processes are not at all similar given our current understanding of human learning.
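To make the lossy-compression analogy concrete, here is a toy sketch (my own illustration, not how any real model or codec works): quantization discards fine detail, so decompression yields something plausible but not exact, much as a trained model reproduces the gist of its training data rather than a verbatim copy.

```python
# Toy lossy compression: keep only the coarse bucket each value falls in.
def compress(values, step=10):
    return [round(v / step) for v in values]

# Reconstruction is approximate: the fine detail was thrown away.
def decompress(buckets, step=10):
    return [b * step for b in buckets]

original = [3, 17, 24, 98, 101]
restored = decompress(compress(original))
# restored is close to original in shape, but not identical.
```

The point of the analogy is exactly that gap: the output resembles the input statistically, without storing it exactly.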
AI doesn’t reproduce a work that it “learns” from, so why would it be illegal?
The current Disney lawsuit against Midjourney is illustrative - literally, it includes numerous side-by-side comparisons - of how AI models are capable of recreating iconic copyrighted work that is indistinguishable from the original.
If a machine can replicate your writing style because it could identify certain patterns, words, sentence structure, etc then as long as it’s not pretending to create things attributed to you, there’s no issue.
An AI doesn't create works on its own; a human instructs it to do so. Attribution is also irrelevant. If a human uses AI to recreate the exact tone, structure, and other nuances of, say, some best-selling author, they harm the marketability of the original works, which fails fair use tests (at least in the US).
Even if it didn't outright display the code you need to enter, my guess is this and similar implementations hide further vulnerabilities: the numbers aren't generated with a cryptographically secure random number generator, the validation call isn't resistant to a simple brute force that quickly guesses every possible number, the number is known client-side for validation, etc.
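For contrast, a minimal sketch of what a sane server-side version might look like (hypothetical names and limits, just illustrating the three points above): a CSPRNG for the code, an attempt cap against brute force, and a comparison that only ever happens server-side.

```python
import secrets
import hmac

def generate_code() -> str:
    # secrets.randbelow is a cryptographically secure RNG,
    # unlike random.randint, whose output can be predicted.
    return f"{secrets.randbelow(10**6):06d}"

MAX_ATTEMPTS = 5  # assumption: lock out after a handful of guesses

def verify(submitted: str, expected: str, attempts: int) -> bool:
    # Server-side only: the expected code never reaches the client.
    if attempts >= MAX_ATTEMPTS:
        return False
    # Constant-time comparison avoids leaking matches via timing.
    return hmac.compare_digest(submitted, expected)
```

With a 6-digit code and 5 attempts, a guesser's odds are 5 in 1,000,000 per lockout window, versus certainty when the code is shipped to the client.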
I have zero sympathy for those who only regret their vote for Trump now that his policies - which were always obvious - come back to haunt them personally.
Exactly. This headline could also have easily been "Republicans hold government funding hostage to force first federal anti-LGBTQ legislation in nearly 30 years."
Tesla believes it is better at reporting crash data than its competitors, and so the discrepancy in numbers makes them look bad.
It's almost as though leaving safety and associated reporting requirements in the hands of private business doesn't work out for consumers. If only there was some public institution that would hold all vehicle manufacturers accountable and enforce reporting requirements. I cannot possibly imagine how that would work though. /s