drothgery wrote:GPU performance has improved by huge margins (4K gaming is possible with current high-end cards) and doesn't seem likely to slow down any time soon (probably because it's mostly an insanely parallel problem).
Compare the speed of improvement for GPUs in the late 90s/early 00s with today's, and the current pace feels like a snail on glue.
No kidding. Back then, GPU performance doubled or more every one or two years.
If you track the gains over time, you will find a fairly steady decline in the rate of improvement.
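To put rough numbers on how much a slower doubling cadence matters, here is a quick sketch. The intervals are illustrative assumptions, not real benchmark data: 1.5 years stands in for the late-90s pace and 4 years for a slower modern one.

```python
# Illustrative only: hypothetical doubling intervals, not measured benchmarks.
# Shows how much cumulative speedup a decade yields at two different paces.

def cumulative_speedup(years, doubling_interval):
    """Relative performance after `years` if it doubles every `doubling_interval` years."""
    return 2 ** (years / doubling_interval)

fast_pace = cumulative_speedup(10, 1.5)   # doubling every 1.5 years
slow_pace = cumulative_speedup(10, 4.0)   # doubling every 4 years
print(f"every 1.5y: {fast_pace:.0f}x over a decade; every 4y: {slow_pace:.1f}x")
```

Same ten years, roughly 100x versus under 6x; small changes in the doubling interval compound into huge differences.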
drothgery wrote:Flash-based mass storage becoming mass-market over the last few years was a huge deal
Still waiting for that massive drop in price tags that was promised LAST year though.
At least now they're reliable and have decent longevity.
drothgery wrote:and if something similar happens with 3DXPoint or a related memory tech, it will be just as big.
We can hope, but 3DXP is still nowhere near the latencies needed to be used as RAM.
drothgery wrote:Single-threaded CPU performance arguably plateaued a decade ago (really, closer to 5 years ago; Sandy Bridge is noticeably faster clock per clock than Conroe).
That's due to choice rather than technology, however. Intel intentionally chose to go with the Nehalem architecture after Core2, knowing perfectly well that it was not optimal for desktop or single-threaded performance.
Essentially, if Intel had taken the Core2 and applied only the improvements from Nehalem that actually helped single-threaded work, the result would probably have been a CPU with 15-25% better single-thread performance.
Also, even despite that, every generation of Nehalem derivative has still had better single-thread performance than the previous one.
Really, you could get markedly improved single-thread performance as soon as Intel or AMD designed a new CPU maximized for it. IF you were willing to pay the price tag it's going to rack up.
For example, some Broadwell models are an excellent showcase: fitted with a huge L4 cache that was only about twice as fast as the RAM, they still saw a VERY nice effect on single-threaded performance.
Basically, you could combine all the little optimisations: improved branch prediction; bigger and faster L1/L2/L3/L4 caches; a trace cache (and variations on it); more execution units (like Intel added with every Nehalem upgrade, but especially Skylake); more capable execution units (like AMD's superior FPU in the K7 and K8 series); more specialised execution units (Intel's Nehalem derivatives again)...
If you were willing to pay, you could probably get something with twice the single-thread performance from current tech.
But expect triple the price tag, minimum.
drothgery wrote:CPU efficiency has improved tremendously in the last decade (hence laptops with sub-15W CPUs offering 'good enough' performance these days, 20+-'big' core server CPUs that draw under 200W
Sort of...
Thing is, a HUGE part of those improvements has nothing to do with CPU compute efficiency as such, but simply with the development of power-saving technologies, and to some degree even manufacturing and materials tech (SOI is a good example).
Another thing: those official numbers, especially from Intel, have very little to do with reality, because they define TDP not as how much wattage a CPU can draw, but as how much it "should" draw at most.
Case in point: the CPU I'm running, a 4790K, had lots of people having trouble with the boost feature. Guess why?
Because this officially 95W CPU, if it wants to max out, needs the motherboard to allow it to draw more than 95W (capping it at 95W was a standard setting on several motherboard models); when it does max out, it draws more like 120-150W, depending on the individual CPU.
(This is also why you should NEVER EVER use the stock Intel cooler: you end up with the CPU clock-throttling up in the 90s C, while my CPU never goes above 75C, and rarely above 60C, thanks to a good cooler.)
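Just to make the gap above concrete, a bit of quick arithmetic on the figures from this post (the 120-150W range is my observed spread; individual chips vary):

```python
# Rough arithmetic: official 95W TDP vs the 120-150W peak draw reported above.
# These are the post's figures, not a spec; actual draw varies per chip.

tdp_watts = 95
observed_draw = (120, 150)  # reported range when the 4790K maxes out

for draw in observed_draw:
    over = (draw / tdp_watts - 1) * 100
    print(f"{draw}W is {over:.0f}% above the {tdp_watts}W TDP")
```

So "maxing out" means running roughly 26-58% over the number on the box, which is exactly why a board that enforces 95W kills the boost.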
drothgery wrote:and cell phones with CPUs that perform similarly to ten-year-old desktop CPUs
Extremely unlikely to continue though, as we are just two or three die shrinks away from hitting physical limitations.
And most semiconductor fabs are no longer getting upgraded to newer nodes, and the next wafer-size increase is probably never going to happen at all, because the costs are no longer offset enough by the ability to produce faster.
We would need another fully populated Earth (preferably 2 or 3) for the next wafer upscale to be profitable.
And it's still very uncertain whether they will even try to get the last 1 (maybe even 2) die shrinks into mass production.