Mosaic Faces

Project MOSAIC

A Generative AI experience designed to capture and dynamically display public discourse around AI as Art.

Vision

AI is evolving incredibly fast, and its impact on society promises to be as profound as that of the Industrial Revolution. As AI seeps into daily life, so does our experience with it. People’s perceptions of AI, and how they see it changing their lives today, are a critical area of inquiry for researchers. Our challenge was that traditional survey methods are often static, less inclusive, and ill-suited to capturing the breadth and depth of this moment in AI.

To address this issue, we introduced Project Mosaic – a Generative AI experience designed to capture and dynamically display public discourse around AI. It acted as a live visual barometer of public sentiment and, when sampled alongside other input metrics (economic, sociopolitical), served to infer collective predictions, helping to measure and shape the societal impact of AI.

Experience

Mosaic leveraged the speed and creativity of Generative AI to elevate and highlight narratives around public sentiment while promoting a more inclusive experience. It invited the public to engage by answering survey questions and seeing each response reflected as responsive art. The framework acted as a living survey model that posed questions over time, creating new data verticals and generative experiences for ongoing societal engagement and research. Its novel approach of visualizing an individual’s response with AI and telling a visual story through the interactive Mosaic experience created a unique public display of interactive art.

Technical approach

The technical approach involved a modular architecture that leveraged Azure cloud services for scalability and reliability. The system consisted of a responsive web-based front end, a serverless back-end API implementing an AI orchestrator, and a scalable data-storage layer. At its core, the orchestrator coordinated and executed multiple AI services that performed sentiment analysis and art generation.
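
To make the orchestration flow concrete, here is a minimal sketch of how the back-end might chain sentiment analysis into art generation. The `SentimentService` and `ArtService` interfaces, the `Orchestrator` class, and the prompt format are all illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass
from typing import Protocol


class SentimentService(Protocol):
    """Any model that can score a free-text survey answer (hypothetical interface)."""
    def analyze(self, text: str) -> dict: ...


class ArtService(Protocol):
    """Any image-generation model that accepts a text prompt (hypothetical interface)."""
    def generate(self, prompt: str) -> bytes: ...


@dataclass
class Orchestrator:
    """Coordinates the AI services behind the serverless API."""
    sentiment: SentimentService
    art: ArtService

    def handle_response(self, survey_answer: str) -> dict:
        # Step 1: extract sentiment from the participant's answer.
        sentiment = self.sentiment.analyze(survey_answer)
        # Step 2: fold the answer and its sentiment into an art prompt
        # (a made-up format for illustration only).
        prompt = f"{survey_answer} | mood: {sentiment.get('label', 'neutral')}"
        # Step 3: generate the artwork and return both artifacts for storage.
        image = self.art.generate(prompt)
        return {"sentiment": sentiment, "image": image}
```

In a design like this, each step is a swappable service behind a narrow interface, which is what lets the orchestrator scale out in a serverless runtime.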

To support research experimentation, the solution was extensible in three areas: visualization, AI models, and data storage. Visualizations were driven by a data API backed by a content delivery network (CDN), enabling scalable delivery of large volumes of artwork and metadata artifacts for interactive, near-real-time rendering and exploration. The orchestrator used a plug-and-play mechanism (e.g., Semantic Kernel) to coordinate multiple multi-modal sentiment-extraction and image-generation agents. The storage layer supported easy addition of new data points without significant changes to the underlying information architecture, through document-oriented storage paired with Azure storage.
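
As one way the plug-and-play mechanism could look, the sketch below registers two agents as plugins with the Semantic Kernel Python SDK. The plugin classes, their method bodies, and the placeholder return values are assumptions for illustration; only the kernel/plugin registration pattern comes from Semantic Kernel itself:

```python
import semantic_kernel as sk
from semantic_kernel.functions import kernel_function


class SentimentPlugin:
    """Hypothetical sentiment-extraction agent; the real model call is stubbed out."""

    @kernel_function(name="extract_sentiment", description="Extract sentiment from a survey answer.")
    def extract_sentiment(self, text: str) -> str:
        return "positive"  # placeholder for a multi-modal sentiment model call


class ImagePlugin:
    """Hypothetical image-generation agent; the real model call is stubbed out."""

    @kernel_function(name="generate_image", description="Generate artwork for a prompt.")
    def generate_image(self, prompt: str) -> str:
        return "https://example.com/art.png"  # placeholder for an image-model call


kernel = sk.Kernel()
# New agents plug in as additional plugins without touching the orchestrator's core logic.
kernel.add_plugin(SentimentPlugin(), plugin_name="sentiment")
kernel.add_plugin(ImagePlugin(), plugin_name="art")
```

Because agents register declaratively, researchers can swap in a new sentiment model or image generator by adding a plugin, which is the extensibility property the architecture was designed around.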