Six months in: Trust and Safety Jobs recap.

Machine Learning Engineer, GenAI Applied ML

Scale AI
On-site
New York, New York, United States
Engineer
About the role:
This role will lead the development of machine learning systems to detect fraud, abuse, and trust violations across Scale’s contributor platform. As a core part of our Generative AI data engine, these systems are critical to ensuring the quality, safety, and reliability of the data used to train and evaluate frontier models. You will build scalable ML services that analyze behavioral and content signals, incorporating both classical models and advanced LLM-based techniques. This is a high-impact, product-focused role where you’ll collaborate across engineering, product, and operations teams to proactively surface misuse, defend against adversarial behavior, and ensure the long-term health of our human-in-the-loop data workflows.

Responsibilities:
- Design and deploy machine learning models to detect fraud, quality issues, and violations in large-scale contributor workflows
- Build real-time and batch detection systems that evaluate account-, behavioral-, and content-level signals
- Combine traditional ML techniques with LLMs and neural networks to improve detection capabilities and reduce false positives
- Create robust evaluation frameworks and actively tune for extremely imbalanced detection scenarios (see the sketch after this listing)
- Collaborate closely with product and engineering teams to embed detection systems into contributor-facing workflows and backend infrastructure

Requirements:
- 3+ years of experience building and deploying ML models in production environments
- Experience with trust & safety, fraud detection, abuse prevention, or adversarial modeling
- Proficiency in ML and deep learning frameworks such as scikit-learn, PyTorch, TensorFlow, or JAX
- Familiarity with LLMs and experience applying foundation models to structured downstream tasks
- Strong software engineering fundamentals and excellent communication skills

Nice to have:
- Hands-on experience designing or scaling trust & safety detection systems
- Familiarity with data quality pipelines or contributor platform risk analysis
- Contributions to open-source LLM fine-tuning efforts or internal LLM alignment projects
- Research or published work in top ML venues

Compensation and location:
Compensation includes base salary, equity, and benefits such as comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. The role requires office presence 3x per week in SF or NYC.
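The evaluation responsibility above (tuning for extremely imbalanced detection) is the kind of problem where plain accuracy is meaningless: if roughly 1% of contributor activity is fraudulent, a model that flags nothing is still 99% "accurate." Below is a minimal sketch of what that evaluation might look like, using scikit-learn (one of the frameworks the posting lists). The synthetic data, the 90% precision floor, and all names are hypothetical illustrations, not Scale's actual pipeline.

```python
# Minimal sketch: threshold tuning for a highly imbalanced fraud detector.
# All data and parameters here are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for contributor signals: ~1% positive (fraud) rate.
X, y = make_classification(
    n_samples=50_000, n_features=20, weights=[0.99, 0.01], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0
)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

# Average precision summarizes the whole PR curve; far more informative
# than accuracy when positives are ~1% of traffic.
print(f"average precision: {average_precision_score(y_test, scores):.3f}")

# Pick the operating threshold that maximizes recall subject to a
# precision floor (i.e., a false-positive budget).
precision, recall, thresholds = precision_recall_curve(y_test, scores)
floor = 0.90
ok = precision[:-1] >= floor       # thresholds aligns with precision[:-1]
if ok.any():
    best = np.argmax(recall[:-1] * ok)  # highest recall meeting the floor
    print(f"threshold={thresholds[best]:.3f} "
          f"precision={precision[best]:.3f} recall={recall[best]:.3f}")
else:
    print("no threshold meets the precision floor; revisit the model")
```

Choosing the threshold by maximizing recall under a precision floor, rather than defaulting to a 0.5 cutoff, treats false positives as a budgeted cost (contributor appeals, reviewer load) instead of an afterthought, which matches the posting's emphasis on reducing false positives.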