I've been re-watching Star Trek: Voyager recently, and I've heard that when filming, they didn't clear the wide-angle framing of filming equipment, so releasing a widescreen version isn't as simple as just going back to the original film. With the advancement of AI, is it only a matter of time until older programs like this are released in more modern formats?
And if so, do you think AI could also upscale them to 4K?
So theoretically you could take an SD 4:3 program and make it 4K 16:9.
I'd imagine it would be easier for the early episodes of Futurama, for example, since it's a cartoon and therefore less detailed.
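For a sense of scale, here's a back-of-the-envelope sketch (plain Python, my own arithmetic, not from anyone in the thread) of how much of a 4K 16:9 frame an AI would have to invent outright when widening 4:3 material this way:

```python
from fractions import Fraction

def outpaint_fraction(src_ar: Fraction, dst_ar: Fraction) -> Fraction:
    """Fraction of the destination frame's width that must be newly
    generated when a narrower source is scaled to the destination's
    full height and extended sideways."""
    return 1 - src_ar / dst_ar

# SD 4:3 scaled to UHD height (2160 px) is 2880 px wide; a 16:9 UHD
# frame is 3840 px wide, so 960 px of every row must be invented.
print(outpaint_fraction(Fraction(4, 3), Fraction(16, 9)))  # → 1/4
```

So a full quarter of every widened frame would be brand-new, AI-invented picture, which is worth keeping in mind for the framing debate below.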
I think it would be possible. But adding previously unseen stuff would be changing/redirecting the movie/show.
Each scene is set up and framed deliberately by the director; should AI just change that? It's a similar problem to pan-and-scan, where content was removed so widescreen films would fit 4:3 TVs.
You wouldn't want to add content to the left and right of the Mona Lisa, would you? And if you did, what would it be? Continuing the landscape just adds more uninteresting space; now she sits in a vast emptiness, and you've already changed the tone of the painting. Adding other people pulls the focus away from her, which is even worse. And that's just a single-frame example; moving pictures bring even more problems.
It would be an interesting experiment, but imo it wouldn't improve the quality of the medium; on the contrary.
> But adding previously unseen stuff would be changing/redirecting the movie/show.
You could see this with The Wire's 16:9 remaster. They rescanned the original negatives, which had been shot wider but framed and cropped for 4:3. As a result the framing felt off, and the whole thing came across as a bit awkward and amateurish.
Sometimes that's true, but not everything in a shot is important. There may be buildings, plants, or people whose exact placement doesn't matter; they only exist to communicate that the film takes place in a real, living world. 99% of directors don't care where a tree in the background is, unless the tree is the subject of the shot.
Whether AI improves a shot is debatable, but it is definitely possible. 4:3 media on a 16:9 display is pretty annoying to most people because of the black bars on the sides. Even if the AI only adds backgrounds or landscapes, simply removing the black bars would be enough of an improvement for most viewers.
I think you're looking at it from the wrong direction. Instead of adding new material at the edges to get the width, you could have AI stretch the image to fit 16:9 and then redraw everything so it no longer looks stretched out: slim the people and text back down, slim a bottle on a table back to normal proportions but draw the table a bit longer to fill the space, and so on.
Done this way, there would be a minimal amount of material the AI would have to invent that wasn't in the original 4:3 frame; it would mostly just be fixing things that look wider than they should.
I covered that with the bottles-on-tables example I gave earlier. Also, the framing of shots would change very little.
If they were close enough in a 4:3 shot to do that, the stretch needed to reach 16:9 would be very minimal. Aside from that, the AI could avoid altering the spacing between people and objects that physically interact.
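A minimal sketch (plain Python, with a stand-in resolution I picked, not from anyone in the thread) of the uniform-stretch step being described, to show how mild the distortion actually is before the hypothetical AI "slimming" pass:

```python
# Mapping a 4:3 frame directly onto a 16:9 frame of the same height
# stretches everything horizontally by one constant factor; the AI
# pass would then slim objects back down and fill the freed space.
src_w, src_h = 720, 540   # stand-in 4:3 SD frame
dst_w, dst_h = 960, 540   # same height, 16:9

stretch = dst_w / src_w   # how much wider every object becomes
print(round(stretch, 4))       # → 1.3333
print(round(100 * stretch))    # a 100 px-wide bottle becomes → 133 px
```

A uniform 1.33x widening is well-defined and easy to invert per object, which is presumably why this feels more tractable than inventing a quarter of the frame from nothing.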