Faculty Summit 2019
July 17–18, 2019

Faculty Summit 2019 | The future of work

Location: Redmond, Washington, USA

Productivity

  • Presenters: Pamela Bhattacharya [Video]

    Abstract: Scheduling meetings is tedious. It gets even more challenging when people use different calendaring systems or meet across different time zones. People often need multiple rounds of dialogue to find an optimal meeting time, which takes away their ability to focus on more demanding tasks. Surveys of hundreds of information workers, across a wide range of industries and roles, have cited scheduling meetings as the most cumbersome task. While online calendar-sharing tools like Outlook and Google Calendar and polling tools like Doodle make scheduling less cumbersome, users still do not have a seamless scheduling experience. Leveraging state-of-the-art artificial intelligence (AI), we created a virtual assistant that can handle the conversational back-and-forth required for scheduling meetings, much the same way that executive admins schedule meetings for CEOs. Calendar.help is a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant.

    Learn more at https://calendar.help/about
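
    The core computation behind such an assistant, finding times at which no attendee is busy, can be illustrated in a few lines. The sketch below is a hypothetical illustration, not Calendar.help's implementation; it normalizes everything to UTC and scans candidate half-hour slots.

    ```python
    from datetime import datetime, timedelta

    # Hypothetical illustration, not Calendar.help's implementation: find common
    # free slots by intersecting every attendee's busy intervals, all in UTC.
    def free_slots(busy_by_attendee, day_start, day_end, length=timedelta(minutes=30)):
        """Yield start times at which no attendee has a conflicting busy interval."""
        slot = day_start
        while slot + length <= day_end:
            conflict = any(
                start < slot + length and slot < end  # standard interval-overlap test
                for busy in busy_by_attendee.values()
                for start, end in busy
            )
            if not conflict:
                yield slot
            slot += length

    busy = {
        "alice": [(datetime(2019, 7, 17, 16), datetime(2019, 7, 17, 17))],  # UTC
        "bob":   [(datetime(2019, 7, 17, 15), datetime(2019, 7, 17, 16))],
    }
    for start in free_slots(busy, datetime(2019, 7, 17, 15), datetime(2019, 7, 17, 18)):
        print(start.isoformat())  # 17:00 and 17:30 remain free for everyone
    ```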

  • Presenters: Nikolay Trandev

    Abstract: The networked and digitized world we live in has fundamentally changed how we work. The pace of change is quicker than ever before, and with change comes uncertainty. Effective responses to uncertainty require people and companies to continuously learn and refine their mental models. This demands understanding and facilitating efficient flows of information, and enabling agile execution. We are using large-scale collaboration networks, machine learning, and management science to create a visual data experience that brings to life information flows in the enterprise, helps executives see their companies in new ways, and lets them assess whether they have the right structural foundation to achieve their business goals.

  • Presenters: Ben Zorn [Video]

    Abstract: Spreadsheets allow users to combine data, computation, and presentation in compelling ways that enable analysis, exploration, insight, and communication. By applying machine learning and AI based on deep neural networks to spreadsheets, and combining that signal with traditional program analysis and program synthesis, we are able to extract user intent from weak signals like formatting and presentation, and to use this information to make spreadsheets more correct and easier to use.
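
    As a concrete, hypothetical illustration of extracting intent from weak signals (not the deep models used in this work): even a simple score over formatting (bold cells) and content type can guess which row of a table is its header.

    ```python
    # Hypothetical sketch of intent from weak signals: a row is likely a header
    # if it is mostly bold text while the rows below it are mostly numeric.
    def looks_like_header(cells, bold):
        textual = sum(isinstance(c, str) for c in cells) / len(cells)
        boldness = sum(bold) / len(bold)
        return 0.6 * boldness + 0.4 * textual > 0.5  # weights are illustrative

    rows = [["Region", "Sales"], ["West", 1200], ["East", 950]]
    bold = [[True, True], [False, False], [False, False]]
    print([looks_like_header(r, b) for r, b in zip(rows, bold)])
    # -> [True, False, False]
    ```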

  • Presenters: Shi Han [Video]

    Abstract: Ideas in Excel aims at one-click intelligence: when a user clicks the Ideas button on the Home tab of Excel, the intelligent service empowers the user to understand his or her data via automatic recommendation of visual summaries and interesting patterns. The user can then insert the recommendations into the spreadsheet to support further analysis, or use them directly as analysis results. Enabling such one-click intelligence poses underlying technical challenges. In the Data, Knowledge and Intelligence area of Microsoft Research Asia, we have been conducting long-term research on spreadsheet intelligence and automated insights. Through close collaboration with Excel product teams, we transferred a suite of technologies and shipped Ideas in Excel together. In this demo, we will show this intelligent feature and the corresponding technologies.

    Learn more at https://www.microsoft.com/en-us/research/project/spreadsheet-intelligence/

  • Presenters: Derek Johnson, Kostas Seleskerov [Video]

    Abstract: PowerPoint now includes AI technologies to help people create better presentations and become better presenters. Come see how AI helps make creating presentations quicker and easier with Designer and Presenter Coach. Learn how PowerPoint can listen to you practice and provide helpful tips for improvement.

  • Presenters: Bongshin Lee, Kate Lytvynets [Video]

    Abstract: Charticulator is an interactive authoring tool that enables the creation of bespoke and reusable chart layouts. Charticulator is our response to most existing chart construction interfaces that require authors to choose from predefined chart layouts, thereby precluding the construction of novel charts. In contrast, Charticulator transforms a chart specification into mathematical layout constraints and automatically computes a set of layout attributes using a constraint-solving algorithm to realize the chart. It allows for the articulation of novel layouts with expressive glyphs and links between these glyphs, without requiring any coding or knowledge of constraint satisfaction. Furthermore, thanks to the constraint-based layout approach, Charticulator can export chart designs into reusable templates that can be imported into other visualization tools such as Microsoft Power BI. In this exhibit, we will demo how easily and quickly you can create a wide range of charts using Charticulator, and then reuse the chart designs as custom visuals in Microsoft Power BI.
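
    To show the constraint-based layout idea in miniature (a toy sketch, not Charticulator's solver, which supports a much richer constraint vocabulary): a specification such as "four glyphs, evenly spaced, anchored at both ends of the axis" becomes a small linear system that a solver resolves in one shot.

    ```python
    import numpy as np

    # Illustrative only: express a layout spec as linear constraints over glyph
    # x-positions and solve them together.
    # Constraints: x0 = 0, x3 = 300, and equal spacing between neighbors.
    A = np.array([
        [1,  0,  0, 0],   # x0 = 0          (anchor at axis start)
        [0,  0,  0, 1],   # x3 = 300        (anchor at axis end)
        [1, -2,  1, 0],   # x1 - x0 = x2 - x1  (equal spacing)
        [0,  1, -2, 1],   # x2 - x1 = x3 - x2  (equal spacing)
    ], dtype=float)
    b = np.array([0.0, 300.0, 0.0, 0.0])

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x)  # -> [  0. 100. 200. 300.]
    ```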

  • Presenters: Jethro Seghers, Juliano Menegazzo Souza [Video]

    Abstract: See how researchers from the Heart Research Institute collaborate on documents in real time, meet online, and chat with confidence that their intellectual property is protected using Microsoft Teams. Save your grant funding: Microsoft Teams is free for education and can also integrate third-party apps to streamline workflows.

  • Presenters: Arpita Verma, Faith Allington, Meg Grounds [Video]

    Abstract: We are seeing a shift in the way people work and collaborate across spaces and locations. Designed for modern workplaces, Microsoft Whiteboard is a freeform intelligent canvas where teams can brainstorm, plan, and iterate. It enhances teamwork by allowing all team members to collaborate directly on the canvas from any device in real time, no matter where they are.

  • Presenters: Dan Marshall [Video]

    Abstract: Immersive visualization and storytelling techniques allow insights into complex data to be communicated to users more effectively.

Intelligence & Society

  • Presenters: Yan Xu, Eric Chang [Video]

    Abstract: Fungal diseases are an increasing threat to human health around the world; they have been estimated to kill more than 1.5 million people and impact over 1 billion people worldwide each year. Accurate diagnosis and treatment of fungal infections remain challenging in many regions around the world due to a lack of knowledge and training among healthcare professionals. In this collaborative project between Peking Union Medical College Hospital, Beihang University, Pfizer, and Microsoft, we are using computer vision techniques to help doctors diagnose the fungal infections most commonly seen in hospitals in China. The goal is to create a system that helps fungal infection specialists diagnose the types of fungal infection more quickly and accurately, and provides relevant information.
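
    A minimal sketch of a standard transfer-learning setup of the kind such a system might use (our assumption; the project's actual models and label set are not described here): fine-tune a pretrained CNN on labeled microscopy images.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Assumed setup, not the project's model: replace the classification head of
    # a pretrained CNN and fine-tune it on fungal-species labels.
    NUM_SPECIES = 5  # hypothetical label set
    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(8, 3, 224, 224)           # stand-in for a real batch
    labels = torch.randint(0, NUM_SPECIES, (8,))
    loss = loss_fn(model(images), labels)          # one training step
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```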

  • Presenters: Dan Morris [Video]

    Abstract: Microsoft’s AI for Earth program develops tools at the intersection of machine learning and environmental science, through both external grants and first-party development. This demo will showcase our work on using computer vision to break the “annotation logjam” facing ecologists, who collectively spend millions of hours and millions of dollars each year annotating images to estimate wildlife populations.

  • Presenters: Chinmay Singh, Jake Hofman, Dan Goldstein [Video]

    Abstract: The Perspective Engine is an AI tool that automates the process of putting giant and unfamiliar numbers into familiar context, such as conveying the capacity of a CD-ROM as a stack of paper hundreds of feet tall. It is already live in Bing, where it improves answers to questions like “How big is Afghanistan?” by supplementing “251,827 square miles” with the context “about the size of Texas” for US readers. It puts demographics, nutritional information, fuel efficiency, and much more into perspective. Our demo will feature interactive touchscreen guessing games about unfamiliar numbers and will show people how the Perspective Engine can help make these numbers easier to understand. We will demonstrate the Perspective Engine in Bing, Microsoft Edge, and Office 365.
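
    The core idea can be sketched in a few lines (a hypothetical re-implementation, not the Perspective Engine's code): compare the unfamiliar quantity against a table of familiar references on a log scale, and phrase the closest match as a multiple.

    ```python
    import math

    # Illustrative sketch of the core idea: pick the familiar reference whose
    # magnitude is closest to the unfamiliar value.
    REFERENCES_SQ_MILES = {      # hypothetical reference table for US readers
        "Texas": 268_596,
        "California": 163_695,
        "a football field": 0.002,
    }

    def perspective(value, references):
        name, ref = min(references.items(),
                        key=lambda kv: abs(math.log(value / kv[1])))
        ratio = value / ref
        if 0.9 <= ratio <= 1.1:
            return f"about the size of {name}"
        return f"about {ratio:.1f} times the size of {name}"

    print(perspective(251_827, REFERENCES_SQ_MILES))  # -> about the size of Texas
    ```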

  • Presenters: Ece Kamar, Besmira Nushi

    Abstract: Developing and maintaining reliable and unbiased machine learning systems requires a deep understanding of system failures and rigorous evaluation processes. Aggregated, high-level evaluation methods such as single-score performance numbers or even multi-class confusion matrices often hide important conditions of failure and thus do not provide guidance for improvement. In this demo, we present ongoing work to build an error analysis tool, which helps engineers accelerate the development process by moving beyond aggregate scores to reveal a broader error terrain. The tool uses interpretable machine learning to discover combinations of patterns in data that can separate model successes from failures, and it provides multiple views that developers can interact with.
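
    One plausible mechanism for discovering such patterns (a sketch under our assumptions, not necessarily the tool's algorithm) is to fit a shallow, interpretable tree that predicts failure from instance features, so that each branch reads as a human-checkable error cohort.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic data where failures concentrate in low-light, occluded images.
    rng = np.random.default_rng(0)
    n = 1000
    features = np.column_stack([
        rng.integers(0, 2, n),   # is_low_light
        rng.integers(0, 2, n),   # is_occluded
    ])
    failed = (features[:, 0] & features[:, 1]) | (rng.random(n) < 0.05)

    # A shallow tree keeps the failure conditions human-readable.
    tree = DecisionTreeClassifier(max_depth=2).fit(features, failed)
    print(export_text(tree, feature_names=["is_low_light", "is_occluded"]))
    ```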

Developer Tools

  • Presenters: Alejandro Caraza Duhne

    Abstract: The workplace is changing. People are collaborating more than ever, with more than 70% of work now involving multiple people. Remote work is rising and is a constant part of most work across industries and functions. And companies are under continued pressure to be more agile, cost-efficient, and flexible. To meet the needs of this modern workplace, we're launching the Surface Hub 2S: a device built for collaboration, from premium, flexible hardware to custom-tailored software. Surface Hub 2S enables immersive collaboration, bringing Teams and Whiteboard to life and providing unmatched group productivity to users.

  • Presenters: Badrish Chandramouli [Video]

    Abstract: Management of large application state is one of the hardest problems for cloud and edge apps today. We demonstrate FASTER, a new open-source concurrent key-value store from Microsoft Research that supports larger-than-memory data while providing unprecedented performance for the hot working set in main memory. FASTER achieves up to orders-of-magnitude better throughput than systems deployed widely today. FASTER is available in C# and C++ and can work with any storage backend, such as local SSD or cloud storage. FASTER is currently being integrated into services such as Azure Stream Analytics. Our demonstration focuses on: (1) the ease with which cloud applications and state stores can deeply integrate state management into their C# or C++ logic at low overhead; and (2) the innovative system design and the resulting high performance, adaptability to varying memory capacities, durability, and natural caching properties of our system.
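
    As a language-agnostic illustration of the larger-than-memory idea only (FASTER itself is written in C# and C++, and its hybrid-log, epoch-protected, lock-free design is far more sophisticated than this toy), a store can keep a bounded hot set in memory and spill cold records to storage.

    ```python
    import os
    import pickle
    from collections import OrderedDict

    # Toy illustration of "hot set in memory, cold records on storage" only;
    # this is NOT FASTER's design or API.
    class TinyStore:
        def __init__(self, path, hot_capacity=2):
            self.hot = OrderedDict()           # in-memory hot working set
            self.hot_capacity = hot_capacity
            self.path = path
            os.makedirs(path, exist_ok=True)

        def _spill(self, key, value):          # write a cold record to storage
            with open(os.path.join(self.path, str(key)), "wb") as f:
                pickle.dump(value, f)

        def upsert(self, key, value):
            self.hot[key] = value
            self.hot.move_to_end(key)
            if len(self.hot) > self.hot_capacity:   # evict least-recently-used
                old_key, old_val = self.hot.popitem(last=False)
                self._spill(old_key, old_val)

        def read(self, key):
            if key in self.hot:
                self.hot.move_to_end(key)      # keep hot keys in memory
                return self.hot[key]
            with open(os.path.join(self.path, str(key)), "rb") as f:
                return pickle.load(f)

    store = TinyStore("/tmp/tinystore")
    for i in range(4):
        store.upsert(i, i * i)
    print(store.read(0), store.read(3))        # cold read from disk, then hot read
    ```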

  • Presenters: Chetan Bansal, Rahul Kumar, Chandra Maddila [Video]

    Abstract: With the shift from boxed products to services, rich data is available from all stages of the Software Development Life Cycle. By leveraging this data, AI can assist software engineers, break down organizational boundaries and make our products more robust. We will demonstrate several AI powered features like reviewer recommendation, test load reduction and automated root causing for boosting developer and infrastructure productivity.
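
    As a hypothetical sketch of one such feature (not the production system): reviewer recommendation can be bootstrapped from commit history by scoring authors on how often they have touched the files a pull request changes.

    ```python
    from collections import Counter

    # Illustrative heuristic: recommend reviewers who most often edited the
    # files touched by the pull request.
    history = [                   # (author, files_changed) from past commits
        ("ana",  ["src/auth.py", "src/db.py"]),
        ("ben",  ["src/auth.py"]),
        ("ana",  ["src/auth.py"]),
        ("carl", ["docs/readme.md"]),
    ]

    def recommend_reviewers(pr_files, history, k=2):
        scores = Counter()
        for author, files in history:
            scores[author] += len(set(files) & set(pr_files))
        return [author for author, score in scores.most_common(k) if score > 0]

    print(recommend_reviewers(["src/auth.py"], history))  # -> ['ana', 'ben']
    ```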

  • Presenters: Lars Liden, Swadheen Shukla [Video]

    Abstract: Project Conversation Learner aims to revolutionize the way task-oriented bots are built by applying the tenets of machine teaching to empower non-developers to create and maintain task-oriented bots. By leveraging machine learning models to drive conversations, researchers from Microsoft Research abstracted the process of building a bot, from coding a set of rules to simply providing a set of example interactions.

    To realize this vision, the team first introduced Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge, and demonstrated the system's capability on an industry-standard sample data set. Later, by collaborating with product and customer-facing teams, we created a customer-ready SDK and shipped it through a pre-preview release channel.
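
    A schematic of the HCN idea (the shapes, features, and masking rules below are illustrative assumptions, not the published architecture in full): a recurrent network scores the next system action from featurized dialogue turns, while domain-specific code masks out actions that are currently invalid.

    ```python
    import torch
    import torch.nn as nn

    FEATURES, HIDDEN, ACTIONS = 16, 32, 4

    class TinyHCN(nn.Module):
        """Toy RNN dialogue policy with domain-specific action masking."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.LSTM(FEATURES, HIDDEN, batch_first=True)
            self.head = nn.Linear(HIDDEN, ACTIONS)

        def forward(self, turn_features, action_mask):
            out, _ = self.rnn(turn_features)    # one hidden state per turn
            logits = self.head(out[:, -1])      # score actions at latest turn
            return logits.masked_fill(~action_mask, float("-inf"))

    model = TinyHCN()
    turns = torch.randn(1, 3, FEATURES)                # 3 featurized turns
    mask = torch.tensor([[True, True, False, True]])   # action 2 ruled out by code
    print(model(turns, mask).softmax(dim=-1))          # action 2 gets probability 0
    ```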

  • Presenters: Ashish Tiwari, Sumit Gulwani [Video]

    Abstract: PROSE (PROgram Synthesis using Examples) is a program synthesis technology that can generate programs from a few input-output examples. It is a general-purpose technique that can be instantiated to specific domains by designing a domain-specific language (DSL) over which programs are synthesized.

    In the past, program synthesis was mainly used as a black box to perform data transformation tasks. Recently, there has been interest in opening up the technique by introducing interactivity into the program synthesis process and exposing the synthesized program to the user. This leads us to two problems. The first pertains to finding the input on which to query the user so that the program synthesis engine can quickly converge to the user-intended program. The second is concerned with finding programs that are easier to translate into readable code.

    For the first problem, we use an information-theoretic approach to find the input that provides the most additional information about the identity of the user-intended program. The second problem is an instance of multi-objective program synthesis. We present a three-phase modular approach for this problem that combines classical program synthesis with a second phase that performs a global search for the optimal program and a third phase that performs a local search to further improve the discovered program.

    We have built these new approaches into PROSE, and we now have a Python SDK for PROSE. The Python SDK allows users to synthesize Python code that they can see, edit, execute, and incorporate into their larger projects. To illustrate the power of these new developments, we demonstrate the use of the PROSE Python SDK on a data preparation task. The PROSE Python SDK can significantly expedite tasks pertaining to data ingestion, data inspection, data transformation, and datatype detection.
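
    To make the programming-by-example setting concrete (a toy sketch, not the PROSE algorithm): enumerate a tiny DSL of string programs, keep those consistent with every input-output example, and prefer the simplest survivor, a multi-objective flavor in miniature.

    ```python
    from itertools import product

    # Toy programming-by-example: a tiny DSL of substring programs, filtered
    # by input-output examples. This is illustrative, not PROSE's search.
    examples = [("Sumit Gulwani", "Sumit"), ("Ada Lovelace", "Ada")]

    def candidates():
        yield ("first_word", lambda s: s.split()[0])
        for i, j in product(range(6), range(1, 14)):
            yield (f"slice[{i}:{j}]", lambda s, i=i, j=j: s[i:j])

    consistent = [(name, f) for name, f in candidates()
                  if all(f(inp) == out for inp, out in examples)]
    best = min(consistent, key=lambda p: len(p[0]))  # shortest description wins
    print(best[0])                                   # -> first_word
    ```

    No fixed slice works for both examples, so only the generalizing program survives; ranking survivors by description length stands in for the "easier to read" objective discussed above.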

  • Presenters: Mark Wilson-Thomas, Shengyu Fu, Gustavo Soares [Video]

    Abstract: Visual Studio IntelliCode brings the wisdom of your coding community to your aid right inside your development environment, delivering context-sensitive, machine-learning-assisted recommendations in your code editor, and soon delivering PROSE-based tools that discover, as you edit, cases where you may have missed a refactoring when making repeated edits. Mark, Shengyu, and Gustavo will demonstrate these capabilities and explain how they have been developed.

  • Presenters: Alan Geller, Andres Paz, Chris Granade, Bettina Heim, Guang Hao Low, Mariia Mykhailova [Video]

    Abstract: Microsoft’s Quantum Development Kit (QDK) and its programming language, Q#, are poised to empower both research and education in quantum computing and quantum programming. We show how the resource estimator provided with the QDK is used to profile quantum algorithms, allowing researchers to report accurate costs for quantum algorithms and helping assess commercial applications of quantum computing. For example, we show how to import descriptions of chemistry problems into Q# with our chemistry library (developed by QuArC researchers), and how to process them further by applying several different quantum algorithms. Finally, we show how to use the quantum katas, each a sequence of programming tasks of increasing complexity, to expand and empower the quantum community through education. The QDK ships with kata-based tutorials that help teach basic concepts of quantum computing, such as superposition and measurement, as well as quantum algorithms (e.g., Grover's search algorithm), providing a valuable tool for developers looking to get started in quantum computing.
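
    A minimal sketch of resource profiling through the QDK's Python interop, assuming the qsharp package and the IQ# kernel are installed (method names reflect our reading of the interop and may vary by QDK version):

    ```python
    import qsharp  # assumes the qsharp interop package and IQ# are installed

    # Compile a small Q# operation from Python (2019-era Q# block syntax).
    prepare_bell = qsharp.compile("""
    operation PrepareBell() : Unit {
        using (qs = Qubit[2]) {
            H(qs[0]);
            CNOT(qs[0], qs[1]);
            ResetAll(qs);
        }
    }
    """)

    # Profile the operation without running it on hardware; the estimator
    # reports costs such as gate counts, depth, and qubit width.
    print(prepare_bell.estimate_resources())
    ```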

  • Presenters: Tom Ball, James Devine (Lancaster University), Peli de Halleux, Steve Hodges, Teddy Seyed (University of Calgary) [Video]

    Abstract: The goal of Project Alava is to enable the easy construction, programming and debugging of microcontroller-based systems, followed by refinement to an optimized realization. High-level programming of the desired system in TypeScript using capability-based APIs translates into requirements on the hardware, allowing for many kinds of hardware components to be used with no code changes. A web-based hardware simulator allows for program testing, debugging and performance estimation before deployment. Alava compiles the high-level program into an executable containing a compatible hardware manifest, which greatly aids the configuration and debugging of the resulting system. To facilitate rapid prototyping, Alava leverages JACDAC (https://jacdac.org), which provides a lightweight but powerful bus-based communication protocol for embedded systems. JACDAC allows boards and modules to be combined simply and quickly. The first application of Project Alava to fashion technology (Project Brookdale) took place in May 2019.

Accessibility

  • Presenters: Gopal Srinivasa [Video]

    Abstract: We have all heard the saying that a picture is worth a thousand words. It is particularly true in STEM fields where technical diagrams are used universally as a core construct for communicating ideas. And people with visual impairments are denied the wealth of information that is present in these diagrams simply because they are inaccessible. Drawiz aims to remedy this situation by generating detailed, navigable, and audible descriptions of technical diagrams for people with visual impairments.

  • Presenters: Adam Fourney [Video]

    Abstract: People with visual impairments are expert users of audio interfaces, including voice-activated virtual assistants and screen readers. Through interviews and surveys of this population, we learned that virtual assistants are convenient and accessible, but lack the ability to deeply engage with content (for example, to read beyond the first sentence of a Wikipedia article), and the ability to present a quick overview of the information landscape (for example, to list other search results and search verticals). In contrast, traditional screen readers are powerful and allow for deeper engagement with content (when content is accessible), but at the cost of increased complexity and decreased walk-up-and-use convenience. Our prototype, VERSE (Voice Exploration, Retrieval, and SEarch), combines the positive aspects of virtual assistants and screen readers to better support free-form, voice-based web search. As with screen readers, VERSE addresses the need to provide shortcuts and accelerators for common actions. Specifically, VERSE allows users to perform gestures on a companion device such as a phone or smart watch. These companion devices are not strictly necessary, but help overcome the long activation phrases that can become tedious when repeated to smart speakers.

  • Presenters: Ryen W. White, Robert Sim [Video]

    Abstract: The Multi-Device Experiences (MDX) toolkit is an extensible cloud AI service hosted in Azure that enables research and development on task-focused scenarios spanning multiple devices. The toolkit enables developers to create intelligent, interactive experiences that capitalize on the strengths of using multiple devices simultaneously, such as an Echo plus an iPad. It provides support for user authentication and authorization, session management, and state reflection across devices. The toolkit leverages machine intelligence for intent understanding and the contextual answering of questions. Our demo features a proof-of-concept cooking scenario powered by the MDX toolkit, which allows users to follow a recipe hands-free, with voice navigation and contextual Q&A. The toolkit is designed to be extended to scenarios beyond cooking, including education, gaming, and accessibility. Our exhibit also describes the MDX research platform, which enables researchers to stand up new multi-device scenarios with minimal infrastructure investment, easily onboard new devices and device form factors, and access collated and synchronized logs from the service, model providers, and devices.

  • Presenters: Mar Gonzalez Franco

    Abstract: As devices become more multimodal and render higher resolutions, they create new capabilities for perceptual experiences that drive our interactions. We present a series of visuo-tactile illusions inside virtual reality that can help us touch the void, but that might also be applicable to pen-and-touch or other visuo-tactile interactions.

  • Presenters: Michel Pahud

    Abstract: We realize fine-grained posture sensitivity through a combination of sensors—including orientation (tilt) sensitivity, raw capacitance image data from the touchscreen, and a bezel-integrated electric field sensor for detecting hand proximity above and around the screen.

    Our demo will show that these sensors enable devices such as tablets, which transition frequently between various types of stationary and mobile use, to adapt their interfaces and interactive behavior to uniquely suit the user—however the user is gripping, holding, or using the device.

    In general, this notion of posture awareness is important to the next wave of more natural, hybrid desktop/mobile computing that uniquely fits Microsoft’s productivity-on-the-go vision, and it is appearing in various forms in Microsoft’s line of products, as well as in new form factors being considered for the future.
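
    A hypothetical illustration of how such signals might be fused (not the actual sensing pipeline): even a few rules over tilt and grip readings can distinguish postures that warrant different interface layouts.

    ```python
    # Illustrative rules only; real posture sensing combines tilt, the raw
    # capacitance image, and the electric field sensor described above.
    def classify_posture(tilt_degrees, left_grip, right_grip):
        held = left_grip or right_grip
        if tilt_degrees < 15 and not held:
            return "flat-on-table"   # e.g., offer a two-handed inking layout
        if held and tilt_degrees > 60:
            return "held-upright"    # e.g., move controls toward the gripping thumb
        return "propped"

    print(classify_posture(tilt_degrees=70, left_grip=True, right_grip=False))
    # -> held-upright
    ```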

  • Presenters: Eyal Ofek [Video]

    Abstract: We present SeeingVR, which enables users with low vision to better enjoy VR and complete tasks more quickly and accurately. It consists of a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. SeeingVR enables developers to increase their applications’ accessibility using a simple Unity Toolkit, or modify existing VR applications via a plugin.