OpenAI Safety Fellowship 2026–2027: AI Alignment Fellowship for Global Researchers
The OpenAI Safety Fellowship 2026–2027 is a highly selective program for researchers and professionals committed to ensuring the safe and beneficial development of artificial intelligence. Hosted by OpenAI, the fellowship offers a unique opportunity to work at the forefront of AI safety, alignment, and responsible deployment.
As AI capabilities continue to advance rapidly, the need for dedicated experts in AI safety has never been greater. This fellowship provides a platform for individuals to tackle some of the most pressing technical and societal challenges in the field.
Program Overview
- Duration: 5 months (September 14, 2026 – February 5, 2027)
- Format: Flexible — fully remote or in-person in Berkeley, California
- Level: Advanced research training (suitable for pre-PhD, PhD candidates, and experienced professionals)
- Focus Areas:
  - AI Safety & Alignment
  - Machine Learning Robustness
  - AI Evaluation & Oversight
  - Privacy-Preserving AI
  - Cybersecurity for AI Systems
  - AI Governance, Ethics & Policy
  - Oversight of Autonomous Agents
Fellows are expected to produce tangible research outputs such as technical papers, safety benchmarks, datasets, or comprehensive reports.
Benefits
- Competitive monthly stipend
- Access to significant compute resources and OpenAI API credits
- Direct mentorship from leading OpenAI researchers
- In-person workspace support through Constellation (for Berkeley-based fellows)
- Opportunity to collaborate on cutting-edge AI safety projects
- Networking with top minds in AI research and policy
Eligibility Criteria
The fellowship is open to applicants worldwide with no nationality restrictions. Ideal candidates possess:
- Strong technical or research background in AI, machine learning, or related fields
- Demonstrated ability to execute complex research projects
- Passion for AI safety and alignment challenges
- Strong analytical, programming, and problem-solving skills
- Proven track record (publications, projects, or practical contributions)
Formal academic degrees are not required; exceptional self-taught researchers and professionals with strong portfolios are encouraged to apply.
Application Requirements
- Updated CV / Resume
- Research portfolio or description of past relevant work
- Letters of recommendation
- Completed online application form
Important Dates
- Application Deadline: May 3, 2026
- Results Announcement: July 25, 2026
Published: April 18, 2026
The OpenAI Safety Fellowship represents a rare opportunity to contribute meaningfully to one of the most important technological challenges of our time. Whether you are an emerging researcher or an experienced professional, this program offers the resources, mentorship, and platform to advance AI safety research with real-world impact.
Frequently Asked Questions (FAQs)
Is the OpenAI Safety Fellowship fully funded? The fellowship provides a monthly stipend, compute resources, API credits, and mentorship. While not a traditional fully funded scholarship, it offers substantial financial and technical support.
Who can apply? The program is open to candidates from any country. Strong technical/research experience in AI or related fields is more important than formal credentials.
Can international applicants participate? Yes. The fellowship welcomes global talent and supports both remote and in-person participation.
What kind of output is expected? Fellows are expected to deliver meaningful contributions such as research papers, safety benchmarks, datasets, or technical reports.