When corporations dabble in philosophy, you know they're trying to muddy the waters and skirt an ethical issue. There's no genuine inquiry going on here; it's a "whatever argument serves the bottom line" situation.
I guess there's no such thing as intellectual property either, when you really think about it. Hence there's nothing wrong with me making and selling pirated Samsung phones.
The statement that "There is no such thing as a real picture" isn't wrong; it just misses the point. It's true that even when a photo attempts the most faithful representation possible, it can only approximate the scene. The sensors all have flaws and idiosyncrasies, and the software that processes the images makes different decisions in different situations to produce a good result (see the sketch below). Trying to draw a line between a "real" picture and a "fake" picture is like trying to define where the beach ends and the ocean begins. The line can be drawn in many places for many reasons.
That said, the editing the S24 is going to allow may be going a little far in the "fake" direction, from the sounds of things. I'm not sure whether that is good or bad, but it does scare me that photos can't really be relied upon to give an accurate representation of a scene anymore. Everyone having access to this kind of AI is going to make it tremendously difficult to distinguish between realistic and misleading images.
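To make the point about processing decisions concrete, here is a toy Python sketch (made-up numbers, not any vendor's actual pipeline): the same raw sensor values come out as different, equally defensible pictures depending on white-balance and gamma choices.

    import numpy as np

    # Toy stand-in for raw sensor data: a 4x4 patch of linear light values in [0, 1].
    raw = np.array([
        [0.10, 0.12, 0.30, 0.32],
        [0.11, 0.13, 0.31, 0.33],
        [0.50, 0.52, 0.80, 0.82],
        [0.51, 0.53, 0.81, 0.83],
    ])

    def develop(raw, white_balance, gamma):
        # One of many defensible ways to turn raw values into display values.
        balanced = np.clip(raw * white_balance, 0.0, 1.0)
        return np.round(255 * balanced ** (1.0 / gamma)).astype(np.uint8)

    # Two cameras (or two firmware versions) making different, equally reasonable choices.
    print(develop(raw, white_balance=1.0, gamma=2.2))
    print(develop(raw, white_balance=1.3, gamma=1.8))  # same sensor data, different picture

Neither output is more "real" than the other; they are just different choices along the same continuum.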
“There was a very nice video by Marques Brownlee last year on the moon picture,” Chomet told us. “Everyone was like, ‘Is it fake? Is it not fake?’ There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop.”
If your epistemological resolution to the question of whether the moon photos are fake is to just assert that all photos are in a sense fake, case closed, then I feel like you aren't even wrong about the right thing.
There are certainly purposes for which one wants as much of the raw sensor readings as possible. Other than science, evidence for legal proceedings is the only thing that comes to mind, though.
I'm more disturbed by the naive views so many people have of photographic evidence. Can you think of any historical photograph that proves anything?
A momentous occasion is illustrated by the photograph of Red Army soldiers raising the Soviet flag over the Reichstag. The rubble of Berlin in the background gives it more evidentiary value, but it is manipulated. It was not only staged but actually doctored: smoke was added in the background, and an extra watch on a soldier's arm (evidence of looting) was removed.
Closer to now: as you are aware, anti-American operatives are trying to destroy the constitutional order of the republic. After the last election, they claimed to have video evidence of fraud during ballot counting. In one short snippet of video, one sees a woman talking to some people and then, after they leave, pulling a box out from under a table. It's quite inconspicuous, but these bad actors invented a story around this video snippet in which a "suitcase" full of fraudulent ballots is taken out of hiding after observers leave.
As psychologists know, people do not think in strictly rational terms. We do not take in facts and draw logical conclusions. Professional manipulators, such as advertisers, know that we tend to think in "narratives". If a story is compelling, we twist neutral snippets of fact into evidence for it. We see what we believe.
Edits made using this generative AI tech will result in a watermark and metadata changes.
The metadata is easy to erase. And it's only a matter of time until open source projects come out that can remove the watermarking the AI players are starting to introduce.
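For a sense of how little effort that takes, here is a rough sketch using the Pillow library (filenames hypothetical); a command-line tool such as exiftool ("exiftool -all= photo.jpg") does the same thing in one step.

    from PIL import Image

    src = Image.open("ai_edited.jpg")

    # Copy only the pixel values into a fresh image; EXIF/XMP blocks are
    # simply not carried over unless you explicitly pass them along.
    clean = Image.new(src.mode, src.size)
    clean.putdata(list(src.getdata()))
    clean.save("stripped.jpg")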
As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture.
I understand this as talking about a definitive original, as you get with traditional analog photography. With photographic film, you have a thin coat (a film) of a light-sensitive substance on top of a strip of plastic. Taking an analog picture means exposing this substance to light. The film is then developed, meaning the substance is chemically altered so it is no longer light sensitive. This gives you a physical object that is, by definition, the original. It is the real picture. Anything that happens afterward is manipulation.
An electronic sensor gives you numbers: 1s and 0s that can be copied at will. These numbers are then used to control little lights in a display.
As far as I understand him, he is not being philosophical but literal. There is no (physically) real picture, just data.
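A small sketch of that literal reading (hypothetical filenames): the capture is just bytes, and a copy of those bytes is indistinguishable from the "original".

    import shutil

    shutil.copyfile("capture.jpg", "capture_copy.jpg")

    with open("capture.jpg", "rb") as a, open("capture_copy.jpg", "rb") as b:
        print(a.read() == b.read())  # True: nothing marks one file as "the original"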
They do have a point when they say AI is here to stay, and what they propose (a 'watermark' in the metadata for AI-edited content) is at least a way forward. For this to be effective, though, there should also be some electronic seal or signature (MD5?); metadata so far is easy to tweak.
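As a sketch of what such a seal could look like, assuming a signing key held by the camera vendor or a signing service, and SHA-256 rather than MD5 (which is considered broken for integrity purposes); a real system would use public-key signatures so verifiers don't need the secret. Filenames and key are hypothetical.

    import hmac
    import hashlib

    # Hypothetical key that only the camera vendor / signing service would hold.
    SIGNING_KEY = b"vendor-held-secret"

    def seal(image_bytes):
        # Keyed MAC over the image bytes; an editor without the key cannot
        # recompute a matching seal after changing even one bit.
        return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

    with open("photo.jpg", "rb") as f:
        original = f.read()

    signature = seal(original)

    tampered = original[:-1] + bytes([original[-1] ^ 0x01])  # flip one bit
    print(hmac.compare_digest(signature, seal(original)))    # True
    print(hmac.compare_digest(signature, seal(tampered)))    # False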
I don't really understand what is unethical about AI photo editing to begin with. Photo editing existed before AI, and you could already make anything you wanted with Photoshop.