
Intel Unveils XeSS Image Upscaling Technology

In addition to a sneak peek of their upcoming Xe-HPG architecture, the other big reveal today from Intel’s consumer graphics group comes from the software side of the business. Along with preparing Intel’s software stack for the launch of the first Arc products in 2022, the group has also been hard at work on its own take on modern, neural network-driven image upscaling techniques. The product of this research is Xe Super Sampling, or XeSS, which Intel is positioning as its solution for delivering high image quality while upscaling at a low processing cost.

As Intel briefly indicated earlier this week with the announcement of its Arc graphics brand, the company has developed its own take on image upscaling. As it turns out, that work is fairly far along, so today Intel is not only announcing XeSS, but also showing footage of the technology in action. Better still, the first version of the SDK will be shipping to game developers later this month.

XeSS (pronounced “ex-ee-ess-ess”) is a high-level, combined spatial and temporal AI image upscaling technique that uses trained neural networks to integrate both image and motion data in order to produce a superior, higher-resolution image. This is an area of research that has been heavily explored over the past half decade, and it was brought to the forefront of the consumer space a few years ago by NVIDIA with their DLSS technology. Intel’s XeSS, in turn, is designed for similar use cases and, from a technical standpoint, resembles NVIDIA’s current DLSS 2.x technology.

As with NVIDIA and AMD, when it comes to graphics rendering performance Intel wants to have its cake and eat it too. 4K monitors are getting cheaper and more plentiful, but the kind of performance needed to render modern AAA games natively at 4K is beyond the reach of all but the most expensive discrete graphics cards. The search for ways to drive these 4K monitors with more modest graphics cards – and without the traditional drop in image quality – is what has driven recent research into intelligent image upscaling techniques, and ultimately led to DLSS, FSR, and now XeSS.

In choosing their approach, Intel appears to have gone in a similar direction to NVIDIA’s second attempt at DLSS. That is, they use a combination of spatial data (neighboring pixels) and temporal data (motion vectors from previous frames) to feed a (seemingly generic) neural network that has been pre-trained to upscale frames from video games. Like many other aspects of today’s GPU-related announcements, Intel isn’t offering a lot of details here, so there are many unanswered questions about how XeSS deals with ghosting, aliasing, and other artifacts that can arise from these upscaling solutions. That said, what Intel is promising is not out of reach, assuming they have done their homework.
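
To make the general approach more concrete, here is a loose, illustrative sketch of a combined spatial-plus-temporal upscaler – emphatically not Intel’s actual algorithm, and the function names, buffer layout, and fixed blend weight are our own assumptions. It shows the two inputs described above: the current low-resolution frame (spatial) and the previous high-resolution output reprojected along motion vectors (temporal). In XeSS and DLSS 2.x, a trained neural network replaces the fixed blend at the end, deciding per pixel how to combine the samples and when to reject stale history.

```python
# Conceptual sketch only (not Intel's algorithm): combine a spatially upscaled
# current frame with a motion-reprojected history buffer.
import numpy as np

def upscale_temporal(low_res, history, motion_vectors, scale=2, blend=0.9):
    """low_res:        (h, w, 3) current frame rendered at low resolution
       history:        (h*scale, w*scale, 3) previous high-resolution output
       motion_vectors: (h*scale, w*scale, 2) per-pixel motion in high-res pixels
       Returns the new high-resolution frame."""
    h, w, _ = low_res.shape
    H, W = h * scale, w * scale

    # 1) Spatial term: naive nearest-neighbor upscale of the current frame.
    spatial = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)

    # 2) Temporal term: reproject last frame's output along the motion vectors.
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip((ys - motion_vectors[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip((xs - motion_vectors[..., 0]).astype(int), 0, W - 1)
    reprojected = history[src_y, src_x]

    # 3) Fixed exponential blend; a trained network would instead predict
    #    per-pixel weights (and reject bad history to avoid ghosting).
    return blend * reprojected + (1.0 - blend) * spatial
```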

Given the use of a neural network to handle parts of the upscaling process, it should come as no surprise that XeSS was designed to take advantage of Intel’s new XMX matrix math units, which are making their debut in the Xe-HPG graphics architecture. As we saw in that sneak peek, Intel has built quite a bit of matrix math performance into its hardware, and the company is undoubtedly interested in putting it to good use. Neural network-based image upscaling remains one of the best ways to use this hardware in a gaming context, because the workload maps well onto these systolic arrays, and their high performance keeps overall frame rendering times low.
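
Some rough, back-of-envelope arithmetic helps show why a matrix engine is a natural fit. The layer sizes below are purely hypothetical – Intel hasn’t disclosed anything about XeSS’s network – but even a small convolutional network run over a 4K output implies tens of billions of multiply-accumulate operations per frame, which is exactly the kind of dense, regular work that systolic arrays such as XMX are built to churn through.

```python
# Back-of-envelope illustration with a hypothetical network (not XeSS's actual one).
width, height = 3840, 2160             # 4K output resolution
layers = [(3, 32), (32, 32), (32, 3)]  # (in_channels, out_channels) per 3x3 conv
kernel = 3 * 3

macs_per_frame = sum(width * height * kernel * cin * cout for cin, cout in layers)
print(f"{macs_per_frame / 1e9:.1f} GMACs per frame")      # ~91 GMACs with these assumptions
print(f"{macs_per_frame * 60 / 1e12:.1f} TMACs/s at 60 fps")  # ~5.4 TMACs/s
```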

With that said, Intel has gone a step further and is also developing a version of XeSS that does not require dedicated matrix math hardware. Because the installed base for their matrix hardware starts at zero, because they want XeSS to run on Xe-LP integrated graphics, and because they want to do whatever it takes to encourage game developers to adopt the technology, the company is building a version of XeSS that instead uses the 4-element vector dot product instruction (DP4a). DP4a support is found in Xe-LP as well as recent generations of discrete GPUs, which makes its presence nearly ubiquitous. And while DP4a doesn’t offer the kind of performance of a dedicated systolic array – or the same range of precision – it is still a faster way to do the necessary math, good enough for a somewhat slower (and presumably somewhat less polished) version of XeSS.
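
For reference, DP4a computes a dot product of two packed vectors of four 8-bit integers, accumulated into a 32-bit result, and GPUs execute it as a single instruction. The snippet below simply emulates that arithmetic in plain Python (using the signed-operand variant) to show what the instruction does and why it is a useful building block for int8 neural-network inference on hardware without dedicated matrix engines.

```python
# Plain-Python emulation of what a DP4a instruction computes (signed variant).
import struct

def dp4a(a_packed: int, b_packed: int, acc: int) -> int:
    """a_packed, b_packed: 32-bit words holding four signed 8-bit lanes each."""
    a = struct.unpack("4b", a_packed.to_bytes(4, "little"))
    b = struct.unpack("4b", b_packed.to_bytes(4, "little"))
    return acc + sum(x * y for x, y in zip(a, b))

# Example: (1, 2, 3, 4) . (5, 6, 7, 8) + 10 = 70 + 10 = 80
a = int.from_bytes(struct.pack("4b", 1, 2, 3, 4), "little")
b = int.from_bytes(struct.pack("4b", 5, 6, 7, 8), "little")
print(dp4a(a, b, 10))  # 80
```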

By offering a DP4a version of XeSS, game developers can use XeSS on virtually any modern piece of hardware, including competitors’ products. In this respect, Intel is taking a page out of AMD’s playbook: optimizing for its own hardware while still letting competitors’ customers benefit from the technology – albeit not quite as much. Ideally, this is a powerful carrot for enticing game developers to implement XeSS in addition to (or even instead of) other upscaling techniques. And while we won’t put the cart before the horse, should XeSS meet all of Intel’s performance and image quality goals, Intel would be in the unique position of offering the best of both worlds: an upscaling technology with the compatibility of AMD’s FSR and the image quality of NVIDIA’s DLSS.

As an added kicker, Intel also plans to eventually open up the XeSS SDK and tools. At this point there are no further details on what that will entail – presumably Intel wants to finalize and refine XeSS before releasing the technology to the world – but it would be another feather in Intel’s cap if the company can deliver on that promise as well.

In the meantime, game developers can get their first glimpse of the technology later this month when Intel releases the first, XMX-only version of the XeSS SDK. This is followed by the DP4a version, which will be released later this year.

Finally, along with today’s technology disclosure, Intel has also released some videos of XeSS in action using an early version of the technology built into a custom Unreal Engine demo. The roughly one minute of footage shows multiple comparisons of image quality between native 4K rendering and XeSS, which is upscaled from a native 1080p image.

As with all manufacturer demos, you should take Intel’s with an appropriate grain of salt. We don’t have any specific frame rate data, and Intel’s demo is fairly limited. In particular, I would have liked to have seen something with more object movement – which tends to be harder on these upscalers – but that’s what we have for now.

Nevertheless, at first glance the image quality with XeSS is quite good. In some ways, it’s almost suspiciously good; as Ian quickly noted, the clarity of the “Ventilation” text in the frame above nearly rivals the native 4K render, making it massively clearer than the illegible clutter in the original 1080p frame. This is solid evidence that, as part of XeSS, Intel is also doing something beyond pure image scaling to improve texture clarity, possibly by forcing a negative LOD bias in the game engine.
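
To illustrate the negative-LOD-bias theory (our speculation; Intel hasn’t confirmed how XeSS handles textures), the sketch below uses the standard mip-selection rule: the mip index grows with the log2 of the texel-to-pixel footprint, and rendering at 1080p for a 4K output doubles that footprint per axis, pushing the sampler one mip level blurrier. Biasing the LOD by -1 restores the mip that native 4K rendering would have selected.

```python
# Hypothetical illustration of why a negative LOD bias sharpens textures.
import math

def mip_level(texels_per_pixel: float, lod_bias: float = 0.0) -> float:
    """Standard mip selection: log2 of the texel footprint, plus any bias."""
    return max(0.0, math.log2(texels_per_pixel) + lod_bias)

# Rendering at 1080p for a 4K target doubles the texel footprint per axis,
# so the sampler picks one mip level blurrier than native 4K would.
print(mip_level(2.0))               # 1.0 -> mip native 4K rendering would pick
print(mip_level(4.0))               # 2.0 -> mip chosen when rendering at 1080p
print(mip_level(4.0, lod_bias=-1))  # 1.0 -> bias restores the sharper mip
```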

In any case, like the rest of Intel’s upcoming GPU technologies, this won’t be the last time we hear about XeSS. What Intel has demonstrated so far certainly looks promising, but in the end what will matter is their ability to deliver on those promises to both game developers and gamers. And if Intel can indeed deliver, then they will be a very welcome third player in the image upscaling race.

Performance improvements for Intel’s core graphics driver

Last but not least, while XeSS was the star of the show for Intel’s graphics software group, the company also provided a quick update on the state of its core graphics driver, which contained some interesting tidbits.

As a quick refresher, these days Intel uses a unified core graphics driver across its full range of modern GPUs. As a result, the work that has gone into the driver to prepare it for the Xe-HPG rollout also benefits existing Intel products (e.g., Xe-LP parts), and vice versa. While this is no different from how competitor AMD operates, Intel’s expansion into discrete graphics has forced the company to re-focus on the health of its graphics drivers – what was good enough for an integrated product in terms of performance and features won’t do for discrete graphics, where customers spending hundreds of dollars on a graphics card have higher expectations on both fronts.

Recently, Intel completely redesigned both its GPU memory manager and its shader compiler. The net effects of these changes include improving game load times by up to 25% and improving the throughput of CPU-bound games by up to 18%. The former comes from getting smarter about how and where shaders are compiled – including eliminating redundant compilations and better scheduling of compiler threads. In addition, Intel has also revised parts of its memory management code to better optimize VRAM utilization on its discrete graphics products. Of course, Intel only launched its first discrete product, the DG1, at the beginning of this year, so this is a good example of the extra tuning work Intel faces as it expands into discrete graphics cards.
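
As a rough illustration of what “eliminating redundant compilations” typically means (a generic sketch, not Intel’s driver code), the snippet below caches compiled shaders by a hash of their source and compile options, so repeated requests for the same shader skip the expensive backend compile; spreading the remaining unique compiles across worker threads is the scheduling half of the equation.

```python
# Generic shader-compilation cache sketch (not Intel's driver code).
import hashlib

_shader_cache = {}

def compile_shader(source: str, options: str = "") -> bytes:
    """Return a compiled shader, compiling only the first time a given
    source/options combination is seen."""
    key = hashlib.sha256((options + "\0" + source).encode()).hexdigest()
    if key not in _shader_cache:
        _shader_cache[key] = _backend_compile(source, options)  # expensive step
    return _shader_cache[key]

def _backend_compile(source: str, options: str) -> bytes:
    # Stand-in for the real (slow) compiler backend.
    return source.encode()

# The second request for "shader_a" is served from the cache.
for src in ["shader_a", "shader_b", "shader_a"]:
    compile_shader(src)
```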

Finally, in terms of features and functionality, the software group is also planning to roll out a number of new driver features. Chief among them will be the integration of all of the company’s power and overclocking controls directly into its Graphics Command Center application. Intel will also be taking a page from the current NVIDIA and AMD feature sets by adding new features for game streamers, including a fast stream capture path using Intel’s QuickSync encoder, automatic game highlights, and support for AI-assisted cameras. These features should be available in time for the Intel Arc launch in the first quarter of next year.
