I think you have a basis, but for the wrong reasons.
You assume that there is some 'limit' to how real you can make something look in CGI, whereas I pointed out that somebody probably said the same thing five years ago, and five years before that, and five years before that...
There are two factors beyond simple graphical fidelity (your angle) that drive the need for more and more computing power: physics and resolution.
To be honest, games have barely scratched the surface of simulating a physical environment. Pre-broken walls and crap in stuff like Battlefield 4 are the equivalent of pre-baked ambient occlusion versus true AO. How far off are we from breaking things, and by things I mean all things, down to a minimal (read: visible within the confines of gameplay) scale? Assets will need internally mapped materials just as they already have externally mapped ones, and the depth of possible detail is, naturally, upped by a factor of a third dimension tangential to the surface. The processing required to properly simulate, for instance, a tank round penetrating a mixed wood-and-brick building is rather unfathomable by today's standards. Your assertion is hilariously shortsighted in this respect.
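That "third dimension of detail" point is easy to put in rough numbers. A minimal sketch, assuming a hypothetical 2 m cube asset and naive dense grids (real engines would use adaptive structures; the sizes here are arbitrary illustrations):

```python
# Compare element counts for surface-mapped vs. volumetrically mapped
# detail on a 2 m cube, at a few feature sizes. Dense-grid arithmetic
# only -- purely illustrative, not how a real engine would store this.

def surface_elements(per_edge):
    """Texels covering the 6 faces of a cube, per_edge texels along each edge."""
    return 6 * per_edge ** 2

def volume_elements(per_edge):
    """Voxels filling the same cube at the same detail size."""
    return per_edge ** 3

# 2 m cube at 10 cm, 1 cm, and 1 mm feature sizes:
for detail_mm, per_edge in ((100, 20), (10, 200), (1, 2000)):
    s, v = surface_elements(per_edge), volume_elements(per_edge)
    print(f"{detail_mm:>4} mm features  surface: {s:>13,}  volume: {v:>15,}  ({v // s}x)")
```

The gap between the two columns widens linearly as the feature size shrinks, which is the extra dimension showing up in the cost.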
Second, take any game of the current generation and run it at 4K resolution. Hell, crank it up to retina-grade pixels-per-degree (PPD) for an extreme screen size, or, a step further, something like the Oculus Rift at that PPD. You're facing a massive increase in processing to approach the latter examples, since pixel count grows with the square of linear resolution.
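The raw pixel arithmetic behind that claim, assuming shading cost scales roughly linearly with pixels per frame (and ignoring geometry, bandwidth, and VR's need to render two eyes at high frame rates):

```python
# Rough shading-cost comparison by raw pixel count per frame.

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),  # approaching retina-grade PPD on a large screen
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name:>6}: {px:>10,} px  ({px / base:.1f}x the shading work of 1080p)")
```

4K is 4x the pixels of 1080p and 8K is 16x, before any per-pixel effects get more expensive on their own.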
This thus falls back to a car analogy: people think car engines are getting more powerful, when in fact car engines are actually getting more efficient. It just so happens that when you're more efficient with a resource, you end up producing more power when consuming the same amount of said resource. If you tune your car's engine to make more power, you're really making each revolution more efficient rather than just cramming in more air and fuel. You could certainly have had the performance of today's enthusiast-class PC in 1999; it just would have been the size of an office building and sucked down megawatts of power.
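The efficiency-versus-power point reduces to simple arithmetic: performance is efficiency times power budget. The figures below are round, hypothetical numbers chosen for illustration, not benchmarks of any real hardware:

```python
# Perf-per-watt arithmetic: at a fixed power budget, efficiency gains
# show up as raw performance. All numbers are illustrative assumptions.

def throughput(gflops_per_watt, watts):
    """Performance delivered by a part at a given efficiency and power draw."""
    return gflops_per_watt * watts

budget_w = 250  # a typical enthusiast GPU power envelope (assumed)

old = throughput(0.05, budget_w)   # hypothetical ~1999-era efficiency
new = throughput(10.0, budget_w)   # hypothetical 2013-era efficiency

print(f"Same {budget_w} W budget: {old:.1f} GFLOPS then vs {new:.0f} GFLOPS now")
print(f"Power needed for {new:.0f} GFLOPS at the old efficiency: {new / 0.05 / 1000:.0f} kW")
```

Same watts in, vastly more work out; that's the "office building in 1999" point in miniature.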
tl;dr You're rather naive for a 3D engine developer if you think games are just graphics and that the uncanny valley is the last frontier.
Where you might have a point, again for the wrong reason, is the art pipeline. There's only so much art that can plausibly be produced for a game, so extreme visual fidelity becomes a matter of money.
At the same time, stuff like normal maps, subsurface scattering, and hypotheticals like intelligent materials that map themselves to a shape are functions that substitute processing power for art effort, ergo you can have raw processing power determine what an object looks like rather than having an artist design it manually. We already sort of started down this road with 3D rendering in the first place: we no longer draw every angle of a subject (sprites), but instead tell the CPU/GPU its shape and let it draw the subject from a camera's perspective. It's exactly the same concept in that you can always spend processing power to, circling back to an earlier point, simulate the appearance of an object more precisely than even an artist can author it.

Edited, Sep 8th 2013 12:18pm by Raelix
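As a toy illustration of deriving appearance from computation instead of hand-authored art, here's a minimal value-noise texture sketch. This is simple lattice noise with smoothstep interpolation; real engines use Perlin/simplex noise and far richer material models, and the lattice-hashing constants are arbitrary:

```python
# Procedural texture from pure computation: value noise on an integer
# lattice, bilinearly interpolated with a smoothstep fade. Toy example.
import math
import random

def value_noise(x, y, seed=0):
    """Smoothly interpolated pseudo-random values in [0, 1)."""
    def lattice(ix, iy):
        # Deterministic per-point value via a hashed seed (arbitrary constants).
        random.seed((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return random.random()
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    # smoothstep fade keeps the interpolation continuous at cell edges
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(x0, y0) * (1 - fx) + lattice(x0 + 1, y0) * fx
    bot = lattice(x0, y0 + 1) * (1 - fx) + lattice(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

# Render a tiny grayscale swatch as ASCII shades -- no artist involved.
shades = " .:-=+*#%@"
for row in range(8):
    print("".join(shades[int(value_noise(col * 0.2, row * 0.2) * (len(shades) - 1))]
                  for col in range(32)))
```

Every output pixel here is computed, not drawn, which is the whole trade: CPU/GPU cycles standing in for artist hours.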