Intel Chief Discusses Building the Metaverse’s Plumbing


Former AMD graphics boss and current Intel graphics chief Raja Koduri has posted a puff piece on Intel's PR site about the company's plans to build the "plumbing" of the metaverse, and it contains some interesting predictions for the future. Of course, being Intel PR, the piece leans heavily on how Intel can enable all kinds of amusing interactions, and has been doing so for a long time (no argument there). But what Koduri's article makes clear is that the metaverse, as recently demonstrated by Mark Zuckerberg, is both a big deal and very, very far from becoming reality.

To begin with, Koduri describes the metaverse as "a utopian convergence of digital experiences driven by Moore's Law – an effort to enable rich, globally networked real-time environments for virtual and augmented reality that will enable billions of people to work, play, collaborate and socialize in a whole new way." Yes, that's a lot of buzzwords in one sentence, but it's essentially what we've all seen so far: a 3D world in which our avatars interact with each other in an imaginative setting while we sit at home wearing some type of HMD, or perhaps even augmented-reality glasses. As trite as demonstrations of this technology have been so far, Raja explains that he sees the metaverse as the next big platform in computing, similar to how mobile and the internet revolutionized computing in the modern age.

So what do we need to get there? First, Raja describes what it would take to create such a world: "Convincing and detailed avatars with realistic clothing, hair and skin tones – all rendered in real time and based on sensor data capturing real 3D objects, gestures, audio and much more; data transmission at super-high bandwidths and extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements." He concludes by asking how the company can solve this problem at scale – for hundreds of millions of users simultaneously. The only logical answer: "We need computing capacity that is several orders of magnitude more powerful, accessible at much lower latencies, across a multitude of device form factors," he writes.

Intel's next-generation Ponte Vecchio GPUs will power its first exascale supercomputer, Aurora. (Image: Stephen Shankland/CNET)

To do this, Intel has broken the problem of powering the metaverse into three "layers," and says the company is working on all three: intelligence, ops, and compute. The "intelligence" layer is the software and tools available to developers, which Raja says must be open and based on a unified programming model so they are easy to deploy. The "ops" layer is essentially computing power made available to users beyond what they can access locally, and the "compute" layer is simply the raw horsepower needed to run it all, which is where Intel's CPUs and upcoming Arc GPUs come into play. Remember, this is a public-relations article, after all.

This is where things get interesting. Raja says that to do all of this we will need 1,000 times the processing power today's cutting-edge technology offers, and he notes that Intel has a roadmap to achieve zettascale performance by 2025. That's a pretty daring claim, given that Intel will hit exascale for the first time in 2022, when it ships its Aurora supercomputer to the Department of Energy. Similarly, the United States' first exascale computer, Frontier, is reportedly being installed at Oak Ridge National Laboratory now, but won't be operational until next year. Having only just arrived at exascale, how long will it take us to reach zettascale? According to Wikipedia, it took 12 years to go from terascale to petascale, and 14 years to go from petascale to exascale, so it's reasonable to assume it could be at least a decade before we reach zettascale.
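Taking those historical gaps at face value, a naive extrapolation (our own back-of-the-envelope math, not Intel's roadmap) lands well past 2025. The milestone years below are assumptions chosen to reproduce the 12- and 14-year gaps cited above:

```python
# Naive extrapolation from the milestone gaps cited above. The years
# (1996 terascale, 2008 petascale, 2022 exascale) are assumptions that
# reproduce the article's 12- and 14-year figures.
tera, peta, exa = 1996, 2008, 2022

gaps = [peta - tera, exa - peta]    # 12 and 14 years
avg_gap = sum(gaps) / len(gaps)     # 13 years on average

print(f"Historical gaps: {gaps} years")
print(f"Naive zettascale estimate: ~{exa + avg_gap:.0f}")  # ~2035, not 2025
```

By this crude measure, Intel's 2025 target would mean compressing a historically 12-to-14-year jump into about three years.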

Koduri concludes his piece with an end goal he says is achievable: "We believe the dream of delivering a petaflop of computing power and a petabyte of data to every person on earth within a millisecond is within reach." Then again, we were supposed to hit exascale in 2020, and while we did get there, it was a year late. Note, too, that the power problem was never resolved.
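It's worth pausing on what that goal implies in aggregate. The sketch below is our own arithmetic, not Koduri's, and assumes roughly 8 billion people each naively provisioned with a dedicated petaflop (no sharing or oversubscription):

```python
# Back-of-the-envelope sketch (not from Koduri's piece): the aggregate
# compute implied by "a petaflop per person," assuming ~8 billion people
# and naive peak provisioning with no sharing.
PETA = 1e15   # FLOP/s in one petaflop
EXA = 1e18
ZETTA = 1e21

population = 8e9                      # rough world population
aggregate_flops = PETA * population   # 8e24 FLOP/s

print(f"Aggregate demand: {aggregate_flops:.1e} FLOP/s")
print(f"...or {aggregate_flops / ZETTA:,.0f} zettaFLOPS")
print(f"...or {aggregate_flops / EXA:,.0f} Aurora-class exascale machines")
```

Even granting heavy oversubscription in practice, the naive figure is thousands of zettascale machines' worth of compute, which underscores how far beyond today's roadmaps the "dream" sits.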


Earlier this year, when China was reported to have quietly built two exascale systems, power consumption was said to be ~35MW per system – significantly higher than the original 20MW target. We can assume those numbers will improve over time as machines become more efficient, but any push toward zettascale will require a major rethink of power usage and scaling.
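To see why, consider a hypothetical zettascale machine built at the same energy efficiency as those reported ~35MW exascale systems. The 35MW figure comes from the article; everything else below is our own illustrative assumption:

```python
# Hypothetical scaling sketch: a zettascale machine at today's reported
# exascale efficiency. The 35 MW figure is from the article; the rest
# is a naive linear extrapolation for illustration only.
exa_power_mw = 35.0          # reported per-system draw at exascale
scale_factor = 1_000         # zetta = 1,000 x exa
zetta_power_gw = exa_power_mw * scale_factor / 1_000  # MW -> GW

print(f"Naive zettascale draw: {zetta_power_gw:.0f} GW")
# A large nuclear reactor produces roughly 1 GW, so efficiency has to
# improve by orders of magnitude, not percentages.
```

In other words, without dramatic efficiency gains, a single zettascale system would draw on the order of dozens of full-size power plants.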
