At the ACM SIGGRAPH Conference Next Week, Fyusion Will Showcase Groundbreaking Advances in Light Field Technology That Make It Possible for Marketers to Quickly and Easily Create 3D Photo-Realistic Views of Complex Real-World Scenes and Products.
Fyusion, a worldwide leader in 3D computer vision and machine learning, will present and demonstrate new 3D imaging software next week at the ACM SIGGRAPH conference in Los Angeles. The software breaks new ground in the area of light field technology, allowing users to render stunning scenes with a degree of realism previously thought impossible. Fyusion’s software is already in-market in a number of commercial applications, including automotive and fashion, where it is helping retailers increase and enhance engagement with online shoppers.
For some time now, retailers have been using 3D imaging as part of their digital sales and marketing programs. The goal is to provide consumers with a vivid and realistic view of a product, so they feel like they’re experiencing it before buying. Earlier this year, Fyusion received a strategic investment from Cox Automotive, a leading digital wholesale marketplace for used vehicles, which is using Fyusion’s software to display 3D images of cars on its websites. And last month, Fyusion announced an investment from Itochu, one of the largest Japanese e-commerce fashion and apparel companies, which is using Fyusion to show images of models wearing outfits on its brands’ retail sites. Fyusion’s tech effortlessly handles notoriously difficult scenarios, including fine-grained textures like grass and foliage, transparent surfaces and reflections.
3D imaging isn’t new, but making the technology accessible to the masses has proven elusive. Traditional 3D imaging methods rely on costly laser-scanning hardware and manual studio touch-ups; others eschew 3D entirely in favor of simple photo stitching. Fyusion’s technology is groundbreaking because it is low-cost and produces high-quality results. It’s also conceptually simple. First, Fyusion uses a deep network to promote each source view to a layered representation of the scene, advancing recent work on the multiplane image (MPI) representation. Fyusion then synthesizes novel views by blending renderings from adjacent layered representations. The result is a 4,000x reduction in the number of images needed to produce a 3D image, making it easy for anyone to create high-quality 3D images using only a smartphone.
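To make the two-step pipeline above concrete, here is a minimal sketch of the core rendering idea behind a multiplane image (MPI): a stack of fronto-parallel RGBA layers at fixed depths that is alpha-composited back-to-front, with novel views blended from adjacent layered representations. The function names, array shapes, and the premultiplied-alpha convention are illustrative assumptions, not Fyusion's actual implementation.

```python
import numpy as np

def composite_mpi(layers):
    """Alpha-composite MPI layers back-to-front with the "over" operator.

    layers: (D, H, W, 4) array ordered back (index 0) to front, with
            premultiplied-alpha RGB in channels [:3] and alpha in [3].
    Returns an (H, W, 3) rendered image.
    """
    d, h, w, _ = layers.shape
    out = np.zeros((h, w, 3))
    for i in range(d):  # walk from the back layer to the front layer
        rgb, alpha = layers[i, ..., :3], layers[i, ..., 3:4]
        out = rgb + (1.0 - alpha) * out  # "over" compositing step
    return out

def blend_renderings(render_a, render_b, weight_a):
    """Blend novel views rendered from two adjacent layered representations.

    weight_a: scalar or (H, W, 1) map, typically favoring the nearer
              source view.
    """
    return weight_a * render_a + (1.0 - weight_a) * render_b
```

In a full system each layer would first be re-projected into the novel camera via a per-plane homography before compositing; that warp is omitted here to keep the sketch focused on the layered-representation idea itself.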
“These new advancements are a big step for light field research, and as they continue to get incorporated into our products, will give us a big new competitive advantage,” says Dr. Radu B. Rusu, CEO and co-founder at Fyusion. “Our goal in announcing the work is to show that we are continuing to be at the forefront of the technology and implicitly that our products will continue to be the leading products in the space.”
“The most compelling virtual experiences completely immerse the viewer in a scene, and a hallmark of such experiences is the ability to view the scene from a close interactive distance,” says Pantelis Kalogiros, co-founder and VP of Web at Fyusion. “This is currently possible with synthetically rendered scenes, but a high level of image ‘intimacy’ has been very difficult to achieve for virtual experiences of real-world scenes. When creating 3D images from complex real-world scenes, it is critical to maintain the accuracy and photographic fidelity of the digitized 3D scene as the camera viewpoint changes. Our new technology is the best in the world at this.”