StyleScan’s technology has been described as “digital dressing on demand.” Our algorithms understand the human form in three dimensions and can therefore render apparel across different poses and rotations, an output that current two-dimensional graphic design tools cannot achieve.
Our algorithms also model apparel physics, including draping, stretching, and movement, which makes the output images lifelike. Our competitive advantages include extreme accuracy, ultra-high resolution, and the ability to render garments on natural human poses, even in motion.
StyleScan's upcoming software release will also allow users to digitally dress people in videos as well as photos.
At StyleScan, we approach visual content generation through the lens of machine learning and game theory, applying flexible statistical models to computer vision problems. Rather than having humans manually composite product images onto photos of people, StyleScan’s algorithms create the visually superior effect of users “virtually wearing” the garment in just a few seconds.
Beyond augmented apparel try-on, our proprietary AI breakthroughs have been applied across a wide array of other fields, including financial markets, gaming, and medical diagnostics.
Prior to StyleScan, our scientists co-developed a 3-D body-scanning technology for the Bill and Melinda Gates Foundation that was named one of the world-changing innovations of 2018.