Academic research plays such an important role in advancing science, technology, culture, and society. This grant program helps ensure this community has access to the latest and leading AI models.

Brad Smith, Vice Chair and President

AFMR Goal: Align AI with shared human goals, values, and preferences via research on models, which enhances safety, robustness, sustainability, responsibility, and transparency, while ensuring rapid progress can be measured via new evaluation methods.

These projects aim to make AI more responsible by focusing on safety, misinformation prevention, and interpretable auditing. They investigate defenses against adversarial attacks and inappropriate responses, use knowledge-grounded feedback and fact-checking to combat misinformation, and incorporate causal and logical reasoning to improve auditing. They also address the safety of personalized AI models, reduce bias by bringing multiple perspectives into the loop, and build thorough evaluation frameworks for responsible AI. The methods include comparing alternative approaches and models on benchmark datasets, fact-checking generated claims, integrating reasoning into auditing frameworks, and human-AI collaboration. The expected outcomes include stronger defenses against adversarial attacks, improved factual accuracy, safer personalized AI models, less biased solutions, and an evolving evaluation framework for responsible AI.

  • Alabama A&M University: Xiang (Susie) Zhao (PI)

    Environmental justice analysis fosters the fair treatment and involvement of all people, regardless of race, color, national origin, or income, in economic development and sustainability, resource allocation, and environmental protection. In particular, it plays a critical role in intelligent disaster recovery and city planning, which saves lives, assets, and energy. Many government agencies, including NASA, NOAA, the CDC, and the EPA, provide full and open access to their datasets, which can be used to support environmental justice research and to identify vulnerable populations and environmental challenges. However, it is difficult for researchers and students at HBCUs/MSIs to understand and use these datasets because of varied and complex data formats, limited computing resources, and heavy workloads. This project aims to bridge this gap and strengthen research and education capabilities at HBCUs/MSIs using Microsoft foundation models and the Azure cloud platform. Azure OpenAI GPT-4 and DALL-E 2 will be used for natural language processing to survey and process scientific literature, government reports, and blogs related to environmental justice, disaster recovery, and city planning. An RA-Bot will be developed to assist researchers and decision makers by answering inquiries, generating summaries, and performing classification and sentiment analysis.
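
    As a minimal illustration (not the project's implementation), the sketch below shows how an RA-Bot-style assistant might call an Azure OpenAI GPT-4 deployment to summarize an environmental-justice report and label its sentiment; the endpoint, API key, API version, and deployment name are placeholders.

    ```python
    # Minimal RA-Bot-style sketch using the Azure OpenAI client.
    # Endpoint, key, API version, and deployment name are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",  # placeholder
        api_key="YOUR-API-KEY",                                    # placeholder
        api_version="2024-02-01",
    )

    def summarize_report(report_text: str) -> str:
        """Ask a GPT-4 deployment for a short summary plus a sentiment label."""
        response = client.chat.completions.create(
            model="gpt-4",  # the Azure *deployment* name; an assumption here
            messages=[
                {"role": "system",
                 "content": "You assist environmental-justice researchers."},
                {"role": "user",
                 "content": "Summarize this report in 3 sentences and label its "
                            "overall sentiment (positive/neutral/negative):\n"
                            + report_text},
            ],
        )
        return response.choices[0].message.content
    ```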

  • Monash University Malaysia: Sailaja Rajanala (PI)

    The proposal aims to enhance auditing of large language models (LLMs) by integrating causal and logical reasoning into the Selection-Inference (SI) framework, offering a deeper understanding of how LLMs function and make decisions. It seeks to identify and mitigate biases and to ensure that LLM-generated content is ethically compliant. The research also aims to create auditing pipelines that could be transferred to other AI systems.
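
    The sketch below illustrates the general shape of a Selection-Inference-style loop, whose alternating selection and inference calls yield a step-by-step trace that can be audited for causal and logical consistency; the `llm` callable is a generic stand-in, not the project's actual pipeline.

    ```python
    # Sketch of an SI-style loop that keeps an auditable reasoning trace.
    # `llm` is any text-in/text-out callable; prompts are illustrative.
    from typing import Callable, List

    def selection_inference(question: str, facts: List[str],
                            llm: Callable[[str], str], steps: int = 3) -> List[dict]:
        """Alternate selection and inference steps, recording each one."""
        trace = []
        known = list(facts)
        for _ in range(steps):
            # Selection: pick the facts relevant to the next reasoning step.
            selected = llm(
                "Question: " + question + "\nFacts:\n" + "\n".join(known) +
                "\nSelect the facts needed for the next reasoning step."
            )
            # Inference: derive exactly one new fact from the selected facts only.
            inferred = llm(
                "Using only these facts:\n" + selected +
                "\nState one new fact that follows logically."
            )
            trace.append({"selected": selected, "inferred": inferred})
            known.append(inferred)
        return trace  # each step can be audited for causal/logical consistency
    ```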

  • University of Texas at Arlington: Faysal Hossain Shezan (PI)

    The prevalence of vulnerable code poses a significant threat to software security, allowing attackers to exploit weaknesses and compromise systems. Traditional methods of manual vulnerability detection are expensive and require substantial domain expertise. Automated approaches, particularly those based on program analysis techniques like symbolic execution, have shown promise but face challenges in path convergence, scalability, accuracy, and handling complex language features. We propose a hybrid approach that combines a large language model (LLM), such as GPT-4, with a state-of-the-art symbolic execution tool like KLEE. Our approach aims to enhance symbolic execution by mitigating these inherent challenges. The strategy involves dynamically prioritizing execution paths based on contextual relevance and potential vulnerability disclosure. The LLM will guide symbolic execution towards paths likely to yield significant outcomes, adapting strategies based on evolving context and analysis information. Additionally, we will incorporate semantic information from the LLM to generate more meaningful constraints, reducing constraint complexity and directing symbolic execution towards pertinent paths.
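
    The following sketch illustrates the path-prioritization idea in isolation; `PendingPath` and `ask_llm` are hypothetical stand-ins, and a real system would drive KLEE through its own (C++) searcher interface rather than from Python.

    ```python
    # Sketch of LLM-guided path prioritization for symbolic execution.
    # `PendingPath` and `ask_llm` are hypothetical; scoring prompt is illustrative.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class PendingPath:
        path_id: int
        source_snippet: str      # code along the path
        constraints: List[str]   # symbolic constraints collected so far

    def prioritize_paths(paths: List[PendingPath],
                         ask_llm: Callable[[str], str]) -> List[PendingPath]:
        """Score each pending path by asking the LLM how likely it is to reach
        vulnerable code, then explore the highest-scoring paths first."""
        def score(p: PendingPath) -> float:
            answer = ask_llm(
                "Rate 0-10 how likely this path reaches a memory-safety bug.\n"
                f"Code:\n{p.source_snippet}\nConstraints:\n" + "\n".join(p.constraints)
            )
            try:
                return float(answer.strip().split()[0])
            except (ValueError, IndexError):
                return 0.0
        return sorted(paths, key=score, reverse=True)
    ```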

  • Kean University: Yulia Kumar (PI)

    The research delves into the robustness of Large Language Models (LLMs) such as GPT-4 Turbo and Microsoft Copilot, augmented with tools like DALL-E 3, against adversarial attacks in multimodal contexts that merge text and imagery. The objective is to unearth vulnerabilities in these advanced LLMs when they interpret manipulated textual and visual stimuli. Our approach involves the creation of adversarial test cases featuring subtle textual modifications and visually altered images, based on orthogonal array coverage of the most likely attack scenarios against AI models. These are designed to test the LLMs’ capacity to process and react to multimodal data in misleading scenarios while examining the underlying Transformer architecture and self-attention mechanisms, where accessible. The study scrutinizes the models’ vulnerability to both isolated and simultaneous cross-modal attacks, seeking to expose potential shortcomings in their ability to handle multimodal information and any biases in their outputs. Anticipated outcomes include valuable insights into AI’s resilience against sophisticated adversarial tactics, enhancing multimodal AI systems. This research is crucial for AI security, emphasizing the need to bolster the accuracy and dependability of AI across various applications. It aims to contribute to developing robust and secure AI systems that can navigate the complexities of an increasingly multimodal digital environment.
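
    As an illustration of the orthogonal-array idea (not the project's actual test suite), the sketch below pairs three binary perturbation factors using the standard L4 array, so every pair of factor levels is exercised at least once; the perturbation functions are deliberately simplified.

    ```python
    # Sketch: build multimodal adversarial test cases from an L4(2^3) orthogonal
    # array over three binary perturbation factors. Perturbations are simplified.
    from PIL import Image, ImageFilter

    L4 = [  # rows: test cases; columns: typo, homoglyph, image_blur (1 = apply)
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]

    def add_typo(text: str) -> str:
        return text.replace("e", "3", 1)            # subtle character swap

    def add_homoglyph(text: str) -> str:
        return text.replace("a", "\u0430", 1)       # Latin 'a' -> Cyrillic 'а'

    def blur(img: Image.Image) -> Image.Image:
        return img.filter(ImageFilter.GaussianBlur(radius=2))

    def build_cases(prompt: str, image: Image.Image):
        """Yield (text, image) pairs to feed to the model under test."""
        for typo, homoglyph, image_blur in L4:
            text = add_typo(prompt) if typo else prompt
            text = add_homoglyph(text) if homoglyph else text
            img = blur(image) if image_blur else image
            yield text, img
    ```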

  • IIT Kharagpur: Somak Aditya (PI)

    The project explores both OpenAI models and similar open-source models (such as FLAN-T5 and LLaMA) to compare the effect of “jailbreaks” and their mitigation. Our goal for Foundation Models Academic Research is twofold: 1) analysis, categorization, and defense against prompt injection attacks, and 2) safeguarding against unethical, hateful, or adult responses. We provide a brief description of the two streams of the proposed research.
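
    One common defensive pattern, sketched below purely as an assumption rather than the project's method, is to screen incoming prompts with a rule-based filter plus a classifier before they reach the main model; `classify` stands in for any moderation model.

    ```python
    # Sketch of a prompt-screening gate: cheap regex rules for known jailbreak
    # phrasings, then a classifier pass. `classify` is an assumed callable that
    # returns an estimated probability the prompt is attempting unsafe content.
    import re
    from typing import Callable

    INJECTION_PATTERNS = [
        r"ignore (all|any|previous) instructions",
        r"pretend (you|to) (are|be)",
        r"system prompt",
    ]

    def screen_prompt(prompt: str, classify: Callable[[str], float],
                      threshold: float = 0.5) -> bool:
        """Return True if the prompt should be blocked before reaching the model."""
        if any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS):
            return True
        return classify(prompt) >= threshold

    # Usage: if screen_prompt(user_text, classify=my_safety_model): refuse politely.
    ```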

  • University of North Texas: Yunhe Feng (PI)

    In light of increasing concerns over demographic biases in image-centric AI applications, this proposal introduces RAG-PreciseDebias, a novel framework designed to address these biases in image generation. Our approach integrates fine-tuned Large Language Models (LLMs) with text-to-image generative models within a retrieval augmented prompt generation architecture. This system autonomously refines generic text prompts to align with specified demographic distributions, as informed by an information retrieval system. This proposal builds upon prior methodologies in prompt engineering and model bias assessment, addressing the limitations of existing approaches that either rearrange existing images or require manual demographic specifications. RAG-PreciseDebias distinguishes itself by its capability to automatically provide contextually relevant demographic data, thereby improving the precision and adaptability of image generation. Our novel instruction-following LLM is central to this framework, designed to adapt prompts to reflect specific demographic groups at predetermined rates, thus guiding the biased image generation model towards more representative outputs. RAG-PreciseDebias leverages data from reliable sources, including the U.S. Bureau of Labor Statistics and the United Nations, to generate images that are not only effective but also fair and representative of diverse populations, marking an advancement in responsible AI development.
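
    The sketch below illustrates only the prompt-refinement step: demographic attributes are sampled from a retrieved target distribution and injected into a generic prompt before it reaches the text-to-image model. The hard-coded distribution, prompt template, and function name are illustrative assumptions, not the RAG-PreciseDebias implementation.

    ```python
    # Sketch of demographic-aware prompt refinement. The distribution below is a
    # placeholder; the proposal retrieves statistics from sources such as the
    # U.S. Bureau of Labor Statistics and the United Nations.
    import random

    RETRIEVED_DISTRIBUTION = {"woman": 0.47, "man": 0.53}  # placeholder values

    def debias_prompt(generic_prompt: str, n_images: int) -> list[str]:
        """Expand one generic prompt into n prompts whose demographic mentions
        follow the retrieved target distribution in expectation."""
        groups = list(RETRIEVED_DISTRIBUTION)
        weights = list(RETRIEVED_DISTRIBUTION.values())
        prompts = []
        for _ in range(n_images):
            group = random.choices(groups, weights=weights, k=1)[0]
            prompts.append(f"{generic_prompt}, depicting a {group}")
        return prompts

    # Example: debias_prompt("a photo of a software engineer at work", 8)
    ```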

  • New York University: Mengye Ren (PI)

    In an era where Large Language Models (LLMs) are becoming integral to various applications, their safety and alignment with human values are paramount. LLMs have demonstrated remarkable progress in recent years, exhibiting unprecedented capabilities in understanding and generating natural language text. As chatbots and other applications increasingly adopt LLMs, there is a growing trend towards more personalized and customized models. Our project proposes to study a critical question: Is it safe to allow continuous, personalized finetuning of LLMs without compromising their previously aligned values, for example through learned bias, toxicity, or harmfulness? We hypothesize that a safety review process after custom finetuning can mitigate the risks associated with personalizing LLMs. We also propose to study several learning mechanisms for the sequential personalization and safety review procedure.
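
    A minimal sketch of such a sequential personalization loop with a safety-review gate appears below; `finetune`, `evaluate_safety`, and the acceptance threshold are hypothetical stand-ins for the project's actual training and benchmark code.

    ```python
    # Sketch: personalize a model sequentially, keeping a checkpoint only if it
    # still passes a safety review. All callables and the threshold are assumed.
    from typing import Any, Callable, Iterable

    def personalize_with_review(base_model: Any,
                                user_batches: Iterable[Any],
                                finetune: Callable[[Any, Any], Any],
                                evaluate_safety: Callable[[Any], float],
                                min_safety: float = 0.95) -> Any:
        """Finetune on user data batch by batch, rolling back any update whose
        safety score (e.g., on toxicity/bias benchmarks) falls below min_safety."""
        model = base_model
        for batch in user_batches:
            candidate = finetune(model, batch)
            if evaluate_safety(candidate) >= min_safety:
                model = candidate          # accept the personalized update
            # else: reject the update and keep the previous safe checkpoint
        return model
    ```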

  • Pennsylvania State University: Qingyun Wu (PI)

    The proposal aims to mitigate potential biases in solutions/decisions generated by LLM-based AI systems through human-in-the-loop, multi-agent collaboration. The project will explore effective ways to integrate additional agents into the AI system, enabling it to overcome inherent shortcomings, including blind spots and biases, while leveraging the strengths of each individual agent. The goal is to perform bias mitigation during inference and to fine-tune the model using data from multi-agent collaboration.
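
    The sketch below shows one generic way such a loop could be wired together, with a solver agent, a bias-reviewing agent, and a human approval gate; the callables and stopping rule are illustrative assumptions, not the project's system.

    ```python
    # Sketch of human-in-the-loop, multi-agent bias mitigation at inference time.
    # `solver`, `reviewer`, and `ask_human` are assumed callables.
    from typing import Callable

    def debiased_answer(question: str,
                        solver: Callable[[str], str],
                        reviewer: Callable[[str, str], str],
                        ask_human: Callable[[str], bool],
                        max_rounds: int = 3) -> str:
        answer = solver(question)
        for _ in range(max_rounds):
            critique = reviewer(question, answer)   # e.g. "considers only one group"
            if "no bias" in critique.lower():
                break
            # Revise the draft with the reviewer's critique folded into the prompt.
            answer = solver(f"{question}\nRevise this answer to address: {critique}\n"
                            f"Previous answer: {answer}")
        # Human-in-the-loop gate; transcripts could also be logged for finetuning.
        if ask_human(f"Approve this answer?\n{answer}"):
            return answer
        return "Escalated to a human expert."
    ```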

  • Illinois Institute of Technology: Kai Shu (PI)

    The project addresses factuality issues in large language models (LLMs) that can lead to hallucination and misinformation. It aims to leverage knowledge-grounded feedback to enhance the factuality of LLMs and to investigate how fragile that factuality is by developing factuality attack methods and corresponding defenses.
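
    As an illustration of knowledge-grounded feedback (the project's actual method may differ), the sketch below checks each claim in a draft answer against retrieved evidence and asks the model to revise unsupported claims; `llm`, `retrieve`, and `is_supported` are assumed callables.

    ```python
    # Sketch: knowledge-grounded feedback loop for factual revision.
    from typing import Callable, List

    def factual_revision(question: str,
                         llm: Callable[[str], str],
                         retrieve: Callable[[str], List[str]],
                         is_supported: Callable[[str, List[str]], bool]) -> str:
        draft = llm(question)
        # Naive claim split on sentences; a real system would use a claim extractor.
        claims = [c.strip() for c in draft.split(".") if c.strip()]
        unsupported = [c for c in claims if not is_supported(c, retrieve(c))]
        if not unsupported:
            return draft
        feedback = "These claims lack supporting evidence: " + "; ".join(unsupported)
        # Knowledge-grounded feedback: the model revises its own draft.
        return llm(f"{question}\nYour previous answer:\n{draft}\n{feedback}\n"
                   "Rewrite the answer, removing or correcting unsupported claims.")
    ```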

  • Singapore University of Technology and Design: Soujanya Poria (PI)

    This project addresses the challenge of ensuring safety and responsibility in foundation models. Our team proposes evaluating and comparing original models with safer versions on benchmark datasets, including a safety benchmark, to assess their performance and potential trade-offs. Through this evaluation, we can gain a comprehensive understanding of how the foundation models perform and identify any trade-offs between safety, responsibility, and generalization. By doing so, we aim to establish a framework for evaluating the safety and responsibility of foundation models, which can pave the way for future advances in this area.
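
    A minimal sketch of this kind of side-by-side evaluation is shown below; the model handles, benchmark items, and scoring functions are placeholders rather than the project's actual benchmarks.

    ```python
    # Sketch: score an original model and its safer variant on both a utility
    # benchmark and a safety benchmark so trade-offs are visible side by side.
    from typing import Callable, Dict

    def compare_models(models: Dict[str, Callable[[str], str]],
                       utility_set: list, safety_set: list,
                       utility_score: Callable, safety_score: Callable) -> Dict[str, dict]:
        results = {}
        for name, generate in models.items():
            results[name] = {
                "utility": sum(utility_score(generate(x["prompt"]), x["reference"])
                               for x in utility_set) / len(utility_set),
                "safety": sum(safety_score(generate(x["prompt"]))
                              for x in safety_set) / len(safety_set),
            }
        return results  # e.g. {"original": {...}, "safer": {...}}
    ```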

  • University of North Texas: Tao Wang (PI)

    The growing number of connected mobile devices calls for more efficient and effective resource management to improve spectrum efficiency and accommodate diverse user requirements in next-generation broadband wireless access networks. A recent trend in resource allocation is to incorporate sophisticated neural networks for decision making to improve the efficiency of the system. Nevertheless, current resource allocation practices for emerging wireless techniques are typically implemented as black-box strategies in commercial products, which lack model interpretability and transparency and may yield inscrutable predictions and biased decisions.

    The project aims to develop an AI-empowered automation toolkit that can systematically examine the potential risks of resource allocation schemes for emerging networking techniques, and to build effective countermeasures. Specifically, the PIs aim to accomplish the following three tasks:

    1. Developing a resource allocation strategy inspector to understand the internal workings of different black-box resource allocation strategies and improve their explainability and transparency.
    2. Uncovering potential attack surfaces and developing corresponding countermeasures.