
Research Engineer, Evals

Variance
Full-time
On-site
San Francisco, California, United States
Engineer

Role

At Variance, we are teaching machines to make the hardest judgment calls at scale. That means building AI agents for the high-stakes gray area of risk investigations, fraud, and identity reviews.

We’re a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments. We focus on high-consequence systems problems where the edge cases matter most.

We’re looking for a Research Engineer to help define how we measure and improve model quality. You’ll build the benchmarks, datasets, tooling, and evaluation loops that tell us whether our systems are actually getting better on the tasks that matter. This role sits at the center of research, product, and engineering: creating rigorous, domain-specific evaluations that reflect real customer workflows, expose meaningful failure modes, and drive the next generation of model and agent improvements.

You’re a fit if you:

  • Care deeply about craftsmanship and have strong opinions about model quality, measurement, and experimental rigor

  • Want to work on core model and agent behavior, not just surface-level product metrics

  • Are excited by the challenge of defining what “good” looks like in messy, high-stakes environments

  • Think in tight loops: hypothesis, benchmark design, evaluation, failure analysis, iteration

  • Have strong engineering fundamentals and like building robust systems around ambiguous research problems

  • Thrive in environments where success criteria are initially underspecified and need to be sharpened through work

  • Are willing to do the work in the trenches: reviewing outputs, grading edge cases, curating datasets, and refining tasks until the evaluation actually measures what matters

  • Care deeply about building systems that protect people from fraud, scams, and abuse

What you’ll do

  • Build proprietary benchmarks and datasets to evaluate models and model systems on fraud, identity, and risk workflows

  • Design and run offline and online evals that measure model performance on real customer tasks, not just abstract benchmarks

  • Define quality metrics for judgment systems, including precision, calibration, consistency, abstention, and failure handling

  • Study where models and agents break, and turn those failures into better evals, better datasets, and better training loops

  • Build reusable evaluation tools and quality building blocks that can be used across different product surfaces and workflows

  • Partner closely with research, engineering, product, and design to improve system quality through rigorous experimentation

  • Help create a strong culture of scientific experimentation, clear measurement, and continuous iteration

  • Push the boundary of how AI systems are evaluated in regulated, adversarial, and high-consequence environments

What success looks like

  • We have a clear, trusted view of how our systems perform across the workflows that matter most

  • Our evals predict real-world quality better than generic benchmarks

  • We identify meaningful failure modes earlier and improve system behavior faster

  • We develop differentiated datasets, benchmarks, and quality loops that compound over time

  • Research and engineering teams use your work to make better decisions about what to train, ship, and improve next

  • Variance becomes known for rigorous, domain-specific evaluation of judgment systems

Preferred background

  • Experience training, evaluating, or improving modern ML systems

  • Strong programming skills and comfort working in research-heavy codebases

  • Experience building benchmarks, datasets, evaluation pipelines, or quality systems

  • Familiarity with LLMs, agent systems, retrieval, post-training, or adjacent areas

  • Ability to design clean experiments and draw reliable conclusions from noisy results

  • Strong engineering judgment and a bias toward building

  • Interest in fraud, risk, trust and safety, compliance, or other regulated and adversarial domains

Our culture

We believe in ownership, urgency, and craft. We enjoy spirited debate, wild ideas, and building things we’re proud of. We’re fully in-person in San Francisco.

What we offer

  • Competitive salary and meaningful equity

  • Platinum-level medical, dental, and vision insurance

  • Unlimited PTO, sick leave, and parental leave

  • Up to $100 per month in reimbursement for personal health and wellness expenses

  • 401(k) plan