India pushes in, raising concerns about AI safety
  • Elena
  • February 27, 2026

In the last two years, many new nonprofit organisations have been created to focus on the risks of artificial intelligence (AI). These are not AI companies building products, but groups working to make AI safer.

These organisations include Fathom, Current AI, the International Association for Safe and Ethical AI (IASEAI), and the AI Futures Project. They work with governments and companies to create rules and safety guidelines for a technology that is rapidly reshaping industries and economies.

As AI becomes more widely used, safety has become a central concern. AI researchers, founders, and governments are now debating how to manage the risks.

Call for Global AI Regulation

At a recent AI summit, Sam Altman, co-founder and CEO of OpenAI, said powerful technologies need strong safeguards. He suggested the world may need a global body for AI, similar to the International Atomic Energy Agency (IAEA), to coordinate safety efforts.

Andrew Freedman, co-founder and CEO of Fathom, said AI is developing so fast that governments are struggling to keep up. He believes AI governance must be different from traditional regulation.

Concerns from AI Experts

Well-known AI researchers such as Stuart Russell and Yoshua Bengio (often called one of the "godfathers of AI") have warned about serious risks from advanced AI.

Stuart Russell, a professor at UC Berkeley and president of IASEAI, said there are two main safety problems:

  1. How safe AI systems actually are
  2. What level of risk is acceptable when using them

He argues that unsafe AI systems should not be released, and that systems should undergo proper testing and licensing before deployment.

Yoshua Bengio has started a nonprofit called LawZero, which works on technical solutions to ensure AI systems are developed and used safely.

Different Approaches to AI Safety

Fathom is developing legal and technical frameworks to assess whether AI systems are safe. It works with verification platforms such as Avery, METR, and Apollo, particularly in regulated sectors such as insurance, finance, healthcare, and construction.

Current AI is focusing on inclusive AI. In partnership with Bhashini, India's government-backed language-technology initiative, it has launched a hardware device built on open-source AI models. The device helps rural users and people with disabilities communicate in their native languages. The project will later be opened to startups so they can build further tools on top of it.

Funding and Impact

Most of these organisations are funded by AI companies, donations, or philanthropic groups. Current AI received $400 million in initial funding at the AI Action Summit in Paris. Fathom is funded mainly by donations.

Some of their work is already influencing policy. For example, model AI legislation drafted by Fathom is under discussion in six US states, including California, Virginia, and Ohio. One law, California's SB 53 (the Transparency in Frontier Artificial Intelligence Act), has already been adopted and came into effect on January 1.

However, experts say it is still early days: discussions are under way, but building robust AI safety systems will take time.