All this pearl-clutching over photos

Nilay Patel and The Verge spend a lot of time worrying about what a photo is. In the latest post, "Let’s compare Apple, Google, and Samsung’s definitions of ‘a photo’", there is a lot of pearl-clutching over statements by Apple, Google, and Samsung about what each company considers a "real" photo.
There is an accusation of "pure nihilism" about the statement from Samsung's EVP of customer experience, Patrick Chomet, who said:
Actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene — is it real? Or is it all filters? There is no real picture, full stop.
I think he's right. What does constitute a "real picture"? The raw output of a camera sensor looks nothing like what you were looking at when you snapped the photo; it has always required a lot of massaging to look right. Samsung, Google, and Apple have all been working toward making pictures look even more like what you actually saw, using machine learning techniques, sometimes with questionable results.
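To give a sense of what that "massaging" involves, here is a toy sketch of two of the simplest steps, white balance and a gamma curve, applied to linear sensor data. This is an illustration only, not any vendor's pipeline; real processing also demosaics, denoises, and tone-maps, and the gain and gamma values below are invented:

```python
import numpy as np

def develop_raw(raw, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Toy 'development' of linear raw RGB data (values in 0..1).

    wb_gains and gamma are made-up illustrative numbers. Real
    pipelines do far more; this just shows why untouched sensor
    data looks flat, dark, and off-color.
    """
    balanced = np.clip(raw * np.array(wb_gains), 0.0, 1.0)  # per-channel white balance
    return balanced ** (1.0 / gamma)  # gamma curve lifts shadows for display

# A flat, dark patch straight off the "sensor"...
raw_patch = np.full((2, 2, 3), 0.1)
developed = develop_raw(raw_patch)  # ...comes out brighter and color-corrected
```

Even these two crude steps change every pixel value, which is the point: no viewable photo is untouched sensor data.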
All of this worry over AI features to remove things and so on...why? How is that any different from my slaving away for hours in Photoshop years ago to accomplish the same thing, to make the picture I wanted? Will AI functionality help bend reality? I'm sure it will, but reality has been bent since Ansel Adams burned and dodged his landscape photos. Will it be much easier? Absolutely, and that makes it easier for us mere mortals to get the photos we wanted when we snapped them, not just the ones the camera captured.
The photo at the top of this post is an example. I took this photo in late October of 2016. That is what I saw, not what the camera took. Here is the RAW version of that photo:

That looks nothing like what I saw. The colors are washed out, it's too bright, and it skews very cool, with too much blue. Here is the JPEG version that the iPhone produced at the time:

It's closer to what I actually saw with my eyes, but still not close enough. The colors are still washed out, it needs adjustments to contrast and brightness, and it's still too cool.
Here is what I actually saw, after I adjusted the picture with Adobe Lightroom:

An absolute explosion of color. The tree almost looked to me like it was on fire, which is what made me snap the photo in the first place. The rest of the image is much warmer, as it should be when that much light is bouncing off thousands of yellow, orange, and red leaves.
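The kind of edit described here, warming the white balance and adding contrast, can be sketched in a few lines. This is a rough stand-in for what sliders in Lightroom do, not Adobe's actual math, and the parameter values are made up for illustration:

```python
import numpy as np

def warm_and_punch(img, warmth=0.08, contrast=1.3):
    """Toy version of a 'warm it up, add contrast' edit on RGB data
    in 0..1. warmth and contrast are invented illustrative values.
    """
    out = img.copy()
    out[..., 0] = np.clip(out[..., 0] + warmth, 0, 1)  # boost red
    out[..., 2] = np.clip(out[..., 2] - warmth, 0, 1)  # cut blue
    # Stretch values away from mid-gray to increase contrast
    return np.clip((out - 0.5) * contrast + 0.5, 0, 1)

# A flat, bluish (too cool) pixel...
cool_flat = np.array([[[0.45, 0.5, 0.6]]])
edited = warm_and_punch(cool_flat)  # ...shifts warmer, with more punch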
So what is "real" in this instance? My last edit is the closest I could get to what I actually saw, and to me that is more real than what the camera captured. I suspect that if this photo were taken today, the machine learning iOS applies to photos would produce a much closer approximation of what my eye really saw.
We seem to forget that our eyes are constantly making adjustments to what we are seeing. Cameras and phones can't do that without large amounts of processing. So here we are: phones now snap many pictures with various parameters tweaked and intelligently combine them into something a bit closer to reality. I like where we are, thank you very much.
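That multi-shot combining can be sketched as a naive exposure fusion: blend bracketed captures per pixel, favoring whichever frame exposed that pixel best. This is a minimal illustration of the idea only; real phone pipelines add alignment, denoising, and learned weighting, and the mid-gray weighting constant here is invented:

```python
import numpy as np

def fuse_exposures(frames):
    """Naively fuse bracketed grayscale frames (values in 0..1) by
    weighting each frame per pixel by closeness to mid-gray, i.e.
    how well exposed that pixel is. Constants are illustrative.
    """
    stack = np.stack(frames)                         # (n_frames, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / 0.08)   # favor mid-tones
    weights /= weights.sum(axis=0)                   # normalize per pixel
    return (weights * stack).sum(axis=0)             # weighted blend

# One underexposed and one overexposed frame of the same scene:
dark = np.array([[0.05, 0.4]])
bright = np.array([[0.6, 0.95]])
fused = fuse_exposures([dark, bright])  # each pixel leans on the better-exposed frame
```

The fused result pulls shadows from the bright frame and highlights from the dark one, which is the basic trick behind multi-frame HDR modes.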