Software Engineer, AI Safety Systems

Nucleus AI
3 hours ago
Full-time
On-site
India
Engineer

Help build the systems that make advanced AI safer, more trustworthy, and more resilient in the real world. At Nucleus, we are building large-scale intelligent systems that people and organizations can rely on. As a Software Engineer, AI Safety Systems, you will design and operate the infrastructure that helps detect abuse, support content moderation, and improve alignment between model behavior and our safety standards. This role sits at the intersection of engineering, policy, and product, translating safety goals into robust production systems.


In this role, you will
  • Build backend systems and pipelines for abuse detection, moderation, policy enforcement, and safety evaluation.
  • Develop services that support real-time and offline detection of harmful or policy-violating behavior.
  • Design tooling that helps researchers, product teams, and operations teams investigate and respond to safety issues.
  • Improve the reliability, latency, and scalability of safety-critical systems across Nucleus products and APIs.
  • Partner closely with research, policy, and trust teams to turn evolving safety requirements into production-grade engineering.


You may be a good fit if you have
  • Strong software engineering fundamentals in backend or distributed systems.
  • Experience building moderation, detection, fraud, abuse prevention, or trust-and-safety systems.
  • Comfort working through ambiguity, where product, policy, and engineering constraints intersect.
  • A thoughtful approach to safety, reliability, and user impact.


What makes Nucleus different

Safety is not a layer added after the fact. At Nucleus, it is part of how we build. You’ll work on foundational systems that shape how advanced models behave in production and how trust is earned over time. If you care about building AI systems that are both powerful and responsible, we’d love to hear from you.