Geoff Ralston (the former president of Y Combinator) has re-entered the venture capital landscape with the launch of a new fund focused on artificial intelligence safety and responsible deployment, named the Safe Artificial Intelligence Fund (SAIF). SAIF is dedicated to supporting research and initiatives that ensure AI technologies are developed and implemented in ways that are safe, ethical, and aligned with human values.
“We’ll invest early, help shape go-to-market, and use our experience to increase the odds of downstream success. I will be hands-on with each startup, including weekly office hours in order to help them find product market fit, get into Y Combinator, and/or raise seed capital,” Ralston said in his statement.
This newly introduced fund aims to address the growing concerns about the potential risks associated with advanced AI systems, including issues related to autonomy, decision-making, and long-term societal impacts.
Ralston plans to invest in startups using $100,000 checks through SAFEs (Simple Agreements for Future Equity) — a type of investment instrument originally developed by Y Combinator. A SAFE is not an immediate purchase of equity but rather an agreement that converts into equity at a later date, typically when the startup raises its next round of financing.
Importantly, these SAFEs include a valuation cap set at $10 million. This means that when the SAFE eventually converts into equity, Ralston will receive shares as if the company were valued at no more than $10 million, regardless of its actual valuation at that time.
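The effect of a valuation cap is easiest to see with numbers. Below is a minimal, illustrative sketch of how a capped SAFE might convert at a priced round; it uses the $100,000 check size and $10 million cap mentioned above, but the round valuation and share price are hypothetical, and it deliberately ignores real-world details such as discount rates, pre- versus post-money mechanics, and dilution.

```python
def safe_shares(investment: float, valuation_cap: float,
                round_valuation: float, round_price_per_share: float) -> float:
    """Shares a capped SAFE holder receives when the SAFE converts
    at a priced financing round (simplified model).

    The SAFE converts as if the company were valued at the lower of
    the round valuation and the cap, so the holder's conversion price
    is discounted proportionally whenever the round valuation exceeds
    the cap.
    """
    effective_valuation = min(round_valuation, valuation_cap)
    conversion_price = round_price_per_share * effective_valuation / round_valuation
    return investment / conversion_price


# Hypothetical example: a $100k SAFE with a $10M cap converting at a
# $50M round priced at $5.00/share. The cap makes the SAFE convert at
# an effective price of $1.00/share instead of $5.00/share.
capped = safe_shares(100_000, 10_000_000, 50_000_000, 5.00)
uncapped = 100_000 / 5.00  # what the same check would buy at the round price
print(capped)    # 100000.0 shares
print(uncapped)  # 20000.0 shares
```

In this hypothetical scenario the cap gives the early investor five times the shares a new investor would get for the same check, which is the mechanism that rewards investing before the company's valuation climbs.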
However, despite these efforts, Geoff Ralston's new AI safety fund may face challenges in a rapidly growing yet underregulated field. Globally, AI safety research receives less than 2% of the estimated $200 billion invested annually in AI, highlighting a significant funding gap. Reports suggest that in 2023 an estimated $500 million was allocated to AI safety initiatives, while broader AI research funding surged past $50 billion. Earlier this year, President Donald Trump scrapped a 2023 AI safety executive order introduced by Joe Biden, arguing that it hindered progress and innovation in artificial intelligence.
Meanwhile, several other initiatives have been launched to promote AI safety. The AI Safety Fund (AISF), for example, is a collaborative effort by leading AI developers, including Anthropic, Google, Microsoft, and OpenAI, alongside philanthropic partners. It aims to accelerate AI safety research by providing grants to independent researchers studying critical safety risks associated with frontier AI models.
In fact, last month, Schmidt Sciences launched a $10 million program supporting 27 projects developing foundational methods for testing and evaluating large language models (LLMs).