Funding Opportunities
The Fidelity Center for Applied Technology (FCAT) University Research Award provides up to $50,000 in funding for six-month fundamental and applied research projects.
The program aims to explore emerging technologies and sociocultural trends—such as AI, quantum computing, and cybersecurity—that could significantly impact the future of the financial services industry. By partnering researchers with FCAT teams, the award seeks to develop innovative tools and proof-of-concept solutions that address complex business challenges.
The four research theme areas for the 2026 FCAT University Research Award are:
- The Adaptive Horizon – Navigating the Age of Artificial Intelligence: This theme explores the broad implications and advancements of AI technology.
- Explainability, Observability, and Monitoring of Generative AI in Financial Services: This area focuses on the technical oversight and transparency of generative AI models within the financial sector.
- Future of Savings and Investments in the Age of AI Agents: This theme investigates how autonomous AI agents will change consumer behavior and financial management.
- Security, Forensics, and Oversight in the Age of AI: This priority area addresses the safety, security, and regulatory challenges posed by modern artificial intelligence.
Ohio State may submit two proposals per theme, for a maximum of eight proposals.
KEY DATES
- April 1, 2026 at 5PM ET: Internal Deadline
- April 27, 2026: Fidelity Application Deadline
NOMINEE SELECTION PROCESS
**Rather than offering this as a competitive limited submission opportunity this cycle, we are designating nominees on a first-come, first-served basis among eligible applicants. Eligibility will be determined based on whether the proposed research is responsive to the priorities of the Fidelity Center for Applied Technology (FCAT) University Research Award. The opportunity will close once eligible nominees are identified.**
APPLICATION INSTRUCTIONS
This process includes a concise fillable application form that integrates the proposal summary with key information selected from a PI’s C.V. By standardizing the application format, we have enhanced the review process, ensuring greater efficiency, equity, and alignment with other internal faculty recognition procedures.
The fillable application includes both mandatory fields (highlighted in red) and optional ones. Complete the optional fields only if they are relevant to the opportunity.
Include the following in a single PDF and upload using the link at the bottom of the funding opportunity page. Please download the fillable form to make your edits.
- Application Fillable Form
- One page of relevant figures (optional).
Contact Email: gilley.34@osu.edu
- Health & Life Sciences: This includes projects aimed at decoding the fundamental mechanisms of life, such as functional genomics, brain mapping, cellular dynamics, and the modeling of molecular interactions to accelerate drug discovery and precision medicine.
- Climate Resilience & Environmental Science: This covers research that addresses critical questions about the planet's living systems, including biodiversity mapping, climate system modeling, disaster resilience, and sustainable agricultural innovations.
Amount: $100,000 - $500,000
Due: 04/30/2026
This RFP seeks proposals for AI-driven methods that can reliably predict properties governed by defects, interfaces, disorder, and multi-scale dynamics, including methods that bridge length and time scales that conventional simulation cannot connect, for any technologically relevant material or molecule. We particularly encourage submissions from ongoing efforts where additional resources would enable a step-change in ambition.
This RFP has two funding tracks: Track I: up to $100k (up to 12 months); Track II: $100k–$500k (12–18 months).
Informational Webinar: April 7, 2026, 4-5pm ET. Register here
Amount: $50,000 - $750,000
Due: 04/30/2026
This is a pilot program. We are seeking evidence that unconventional compute hardware can tackle substantial, real-world problems that go beyond toy metrics. Schmidt Sciences seeks to fund research that can catalyze hardware fundamentally different from today's CPU–GPU paradigm, and the training/inference methods co-designed to operate under the constraints imposed by such hardware.
This RFP has two funding tracks: Track I: $50k–$150k; Track II: $150k–$750k.
Informational Webinar: April 7, 2026, 3-4pm ET. Register here
Amazon Research Awards (ARA) is announcing the Spring 2026 call for proposals (CFP) for the AI for Information Security, AWS Agentic AI, Amazon 2030, Amazon Security, Build on Trainium: Accelerating Post-Training, Build on Trainium: Kernels for ML Acceleration, and Robotics research areas. The deadline for submissions is May 6, 2026 (11:59PM Pacific Time).
Proposals will be reviewed for the quality of their scientific content, their creativity, and their potential for impact at scale. Proposals related to theory, practice, and novel techniques are all welcome.
ARA provides grant recipients unrestricted funds and AWS promotional credits. Funded projects are assigned an Amazon research contact, and recipients also receive training resources, including AWS tutorials and hands-on sessions with Amazon scientists and engineers.
Before applying, researchers are encouraged to visit the ARA website and read the frequently asked questions for more specific program information.
Amount: $1,000,000 - $5,000,000
Due: 05/17/2026
This program supports technical research that improves our ability to understand, predict, and control risks from frontier AI systems while enabling their trustworthy deployment. It spans three aims: Aim 1: Characterize and forecast misalignment in frontier AI systems; Aim 2: Develop generalizable measurements and interventions; Aim 3: Oversee AI systems with superhuman capabilities and address multi-agent risks.
Tier 1: up to $1M; Tier 2: $1M–$5M+
Informational Webinar: April 15, 2026, 2-3pm ET. Register here
Amount: $300,000 - $1,000,000
Due: 05/26/2026
This RFP seeks new methods for detecting and mitigating deceptive behaviors from AI models, such as when models knowingly give misleading or harmful advice to users. If this pilot uncovers signs of meaningful progress, it may unlock a significantly larger investment in this space. The goal of this program is to develop interpretability methods that (1) detect deceptive behaviors exhibited by LLMs and (2) steer their reasoning to eliminate these behaviors.
Informational Webinar: