‘The next decade will see the way people interact with computers and electronic devices “humanised”, thanks to widely available massive processing power and a variety of new technologies,’ began an EW story in 2005, ‘at least that’s Intel’s vision of the best application for powerful multi-core, multi-threaded processors, delivered at this year’s Intel Developer Forum in San Francisco. Multi-core, multi-threaded processors were themselves cited by Craig Barrett earlier this week as the best use of the vast transistor budget available from modern processes.’
‘In outlining the plan Justin Rattner, director of Intel’s Corporate Technology Group, ran through a slew of novel ideas and technologies that will have to combine in order to perform demanding generic tasks, or ‘workload models’, such as recognition, data mining, and synthesis, that Intel has identified.’
“Imagine a phone that can translate languages in real time so you can talk to people in other countries more easily, or finding a photo of your children playing with a pet from among the thousands of photos you have stored in multiple computers in your house,” said Rattner. “These tasks might seem simple, but they require levels of performance, sophistication and intelligence in both hardware and software that don’t exist today.”
“Something we’re particularly interested in is being able to predict the future. We envision an age in which computing is less prescriptive and more predictive… We’ve identified a very large number of platform ingredients.”
‘Among those ingredients are virtual platforms, photonics, special purpose hardware, speech, vision, low power, 3D packaging, and sensors. Massive parallelism through the use of multiple cores is crucial.’
“I see general agreement this [parallel processing] is the way we’re going, whether we like it or not. So we better address the programmability issues. The big ‘if’ will be can the software technology come on quickly enough to drive a significant increase in core count?”
“We’re talking about potentially hundreds of cores per processor,” continued Rattner, although he later added: “I can see late in the decade four cores, and probably staying [at that number] for a long time.”
‘He then described an approach called ‘domain specific parallel programming’, in which a compiler fed with code optimised to exploit the parallelism matches the task at hand to the resources at runtime. A demonstration showed code written in a domain-specific language Intel calls ‘Baker’ executing to control eight simulated cores.’
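Intel has not published details of ‘Baker’, but the idea Rattner describes can be illustrated in general terms: the programmer expresses only the domain-specific, per-element work, and a runtime partitions the data and maps the pieces onto however many cores are available. A minimal sketch in Python, with the kernel and function names being illustrative inventions rather than anything from Intel’s demonstration:

```python
from multiprocessing import Pool, cpu_count

def kernel(chunk):
    # The domain-specific part: the programmer describes only the
    # per-element computation (here, squaring each value).
    return [x * x for x in chunk]

def run_parallel(data, workers=None):
    # The runtime's job: match the task to the resources at hand.
    # Worker count is chosen at runtime, not fixed in the source.
    workers = workers or cpu_count()
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        results = pool.map(kernel, chunks)
    # Stitch the partial results back together in order.
    return [x for chunk in results for x in chunk]

if __name__ == "__main__":
    print(run_parallel(list(range(8))))
```

The point of the demonstration, on this reading, is that the same source can drive eight simulated cores or eight hundred without change: only the runtime's partitioning decision varies.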
‘Applications also need to be written to take advantage of multiple cores, and some applications will be more suitable than others. “I think Photoshop filters will be one of the first things that takes advantage of high core counts,” Rattner said.’
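Image filters are a natural early target because many of them are embarrassingly parallel: each output pixel depends only on nearby input pixels, so the image can be cut into bands and each band processed on a different core. A sketch of that pattern, assuming a simple per-pixel brightness filter on an image held as nested lists (the filter and names are illustrative, not Photoshop’s):

```python
from concurrent.futures import ProcessPoolExecutor

def brighten_rows(rows, amount=40):
    # Per-pixel filter: each output value depends only on the matching
    # input value, so row bands are fully independent of each other.
    return [[min(255, p + amount) for p in row] for row in rows]

def filter_image(image, bands=4):
    # Split the image into horizontal bands, filter each band on its
    # own core, then reassemble the bands in order.
    h = len(image)
    step = max(1, h // bands)
    parts = [image[i:i + step] for i in range(0, h, step)]
    with ProcessPoolExecutor() as ex:
        filtered = list(ex.map(brighten_rows, parts))
    return [row for part in filtered for row in part]

if __name__ == "__main__":
    img = [[0, 100, 250], [10, 200, 255]]
    print(filter_image(img))
```

Filters with wider neighbourhoods (blurs, convolutions) need overlapping bands at the seams, but the core-count scaling argument is the same.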
‘Alongside all this, memory bandwidth must be increased to feed multiple cores. Intel is advocating a wafer-on-wafer 3D stacking process to mate DRAM cells with processors. Such an approach could create millions of connections, and Rattner said maybe 256Mbit of memory could realistically sit directly on one CPU.’
“That’s the kind of increase in memory-to-processor connections we need to meet the memory bandwidth,” said Rattner. “The fundamental problem is you literally can’t run enough wires on and off the chip.”
Electronics Weekly
