Microsoft for Startups is proud to support Sync Labs, a pioneering startup specializing in training controllable and consistent generative models for creating and editing human video content. Through a strategic partnership with Y Combinator, Sync Labs gains exclusive access to a dedicated Azure GPU cluster, providing the advanced computing power necessary to drive their groundbreaking AI-assisted video editing technology.
Sync Labs is on a mission to revolutionize video creation, enabling anyone to produce professional-quality videos in a single take. With their innovative lip-syncing models, users can record once, effortlessly edit speech and expressions, and publish polished videos in minutes. Their models, accessible via a web app and API, can generate thousands of videos rapidly, making high-quality video production more accessible than ever.
Scaling with Azure GPUs
Access to Azure’s powerful A100 GPUs has been crucial for Sync Labs in training their cutting-edge models. By leveraging these high-end GPU clusters, Sync Labs has been able to process large-scale video datasets, consisting of thousands of hours of footage, at unprecedented speeds and resolutions. This capability has allowed Sync Labs to release new models almost every month, significantly enhancing their product offerings and accelerating their growth trajectory.
Azure’s robust infrastructure has been pivotal in Sync Labs’ ability to scale. The dedicated GPU instances with 8 A100s, coupled with extensive CPU cores and RAM, enable efficient parallelization and mixed precision training using PyTorch’s advanced features. The seamless integration with tools like TensorBoard, Weights and Biases, and Jupyter Notebooks ensures that Sync Labs can monitor, track, and log all training experiments with ease.
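For readers curious what a training loop along these lines might look like, here is a minimal sketch of mixed precision training in PyTorch with metrics logged to TensorBoard. The model, dataset, and hyperparameters are placeholders rather than Sync Labs’ actual code, and multi-GPU parallelization (for example, DistributedDataParallel across the eight A100s) is omitted for brevity.

```python
# Minimal sketch: mixed precision training with TensorBoard logging.
# The model and data are placeholders, not Sync Labs' actual code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.tensorboard import SummaryWriter

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder network and dataset standing in for a video generation model.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512)).to(device)
dataset = TensorDataset(torch.randn(1024, 512), torch.randn(1024, 512))
loader = DataLoader(dataset, batch_size=64, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device.type == "cuda"))
writer = SummaryWriter(log_dir="runs/mixed-precision-sketch")  # TensorBoard logs

for epoch in range(2):
    for step, (inputs, targets) in enumerate(loader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad(set_to_none=True)
        # Autocast runs the forward pass in reduced precision where it is safe.
        with torch.autocast(device_type=device.type, enabled=(device.type == "cuda")):
            loss = nn.functional.mse_loss(model(inputs), targets)
        # GradScaler keeps small float16 gradients from underflowing.
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        writer.add_scalar("train/loss", loss.item(), epoch * len(loader) + step)

writer.close()
```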
The technological advancements made possible by Azure have translated into tangible business results for Sync Labs. The improvements in their models and technology have led to a remarkable 30x increase in revenue and a 100x expansion of their customer base, underscoring the scalability and effectiveness of their AI-driven solutions. Sync Labs continues to push the boundaries of video editing technology, with new model offerings such as sync 1.5, 1.6, and 1.7, all developed and trained on Azure.
Responsible AI Practices
Sync Labs has been applying responsible AI practices to ensure that AI-generated content is correctly attributed. They automatically moderate content that could constitute harmful deepfakes. They also embed a signature into every video generated on their platform, identifying it as AI-modified by Sync Labs and attributing it to the user who generated it. The focus is on enabling people to modify only content and people they have consent to edit, rejecting content that is harmful in nature, and making sure every generated video is identifiable as AI-modified and attributable back to an end user on their platform.
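To illustrate how a generated video can be made identifiable and attributable, the sketch below signs an attribution record and attaches it as container metadata using the ffmpeg CLI. This is a hypothetical example, not Sync Labs’ actual signature or watermarking scheme; the signing key, field names, and helper functions are assumptions for illustration only.

```python
# Illustrative sketch of attaching a signed attribution record to a video's
# container metadata. Hypothetical approach, not Sync Labs' actual scheme.
# Assumes the ffmpeg CLI is installed and on PATH.
import hashlib
import hmac
import json
import subprocess
import time

SIGNING_KEY = b"replace-with-a-platform-secret"  # hypothetical platform key

def build_attribution_payload(video_path: str, user_id: str) -> str:
    """Create a signed JSON record tying the video to the generating user."""
    with open(video_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "generator": "ai-modified",
        "user_id": user_id,
        "content_sha256": content_hash,
        "timestamp": int(time.time()),
    }
    message = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return json.dumps(record, sort_keys=True)

def tag_video(src: str, dst: str, user_id: str) -> None:
    """Copy the video streams unchanged and write the payload as metadata."""
    payload = build_attribution_payload(src, user_id)
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-codec", "copy",
         "-metadata", f"comment={payload}", dst],
        check=True,
    )

# Example usage: tag_video("generated.mp4", "generated_tagged.mp4", "user_1234")
```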
If you are interested in trying out the product, you can sign up for free here.