
Senior AI/ML Engineer (7+ years only) || Hyderabad

Crafter
Full-time
Remote
India
Engineer

Nova is a communication intelligence company.


We build systems that understand how humans communicate, not just what they say.

Nova’s work applies to dating today, but the underlying technology is much bigger: emotionally aware messaging, intent clarity, boundary detection, trust signals, and respectful exits in human conversations. If you’ve ever felt that modern communication tools are fast but emotionally dumb, this is exactly the problem we’re solving.


We’re looking for an AI Engineer who wants to build real-time, emotionally aware, safety-first communication systems that sit inside live human conversations.


This role is not about growth hacks, swipe mechanics, or engagement tricks. It’s about building intelligence that reduces anxiety, miscommunication, manipulation, and emotional harm — while still keeping conversations human.


Role summary

You’ll design and ship AI systems that analyse live conversations for tone, intent, emotional risk, and safety signals. These systems run in real time, inside active chats, influencing nudges, boundaries, pacing, and trust decisions. Your work will directly affect how people feel while communicating.


This is a part-time role to start, with a minimum commitment of 20 hours per week. This is non-negotiable. We’re raising our pre-seed in the next three months, and this role is designed to become a full-time founding position post-raise.


If you’re exploring casually, undecided, or spreading yourself thin across multiple gigs, this won’t be a fit. We’re building with intent.


What you’ll work on

You’ll build NLP and LLM-based systems for real-time message analysis across tone, intent, consent boundaries, manipulation patterns, and emotional escalation. You’ll fine-tune and integrate LLaMA-based models focused on communication safety and emotional clarity, not generic chatbot behaviour.
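In practice, the PEFT/QLoRA fine-tuning mentioned above is usually wired up along these lines with HuggingFace transformers and peft. This is a configuration sketch only; the base model identifier, target modules, and hyperparameters are illustrative assumptions, not Nova's actual training setup.

```python
# Illustrative QLoRA setup: 4-bit quantised base model + LoRA adapters.
# Model name and hyperparameters are assumptions for this sketch; loading
# the weights also requires access to the gated Llama repository.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantise base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",           # assumed base model
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16,                                   # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections only
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)  # only adapter weights train
model.print_trainable_parameters()
```

The point of the 4-bit base plus small adapters is that a safety-focused fine-tune of an 8B-class model fits on a single GPU, which matters for a small team iterating quickly.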

You’ll design hybrid systems where ML models and rule-based policy engines work together, because not every human decision should be left to a probability score. You’ll build low-latency inference APIs that can safely operate inside live conversations without creating friction or fear.
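The hybrid pattern described above can be sketched as a model emitting per-category risk scores and a JSON policy supplying thresholds and escalation logic on top. Every name and threshold here is illustrative, not Nova's actual schema:

```python
import json

# Illustrative JSON policy: per-category thresholds and escalation actions.
POLICY = json.loads("""
{
  "categories": {
    "manipulation": {"warn": 0.6, "escalate": 0.85},
    "boundary_violation": {"warn": 0.5, "escalate": 0.8}
  },
  "default_action": "allow"
}
""")

def decide(scores: dict) -> dict:
    """Combine model scores with rule-based thresholds.

    Rules win: any category over its 'escalate' threshold forces
    escalation, so the final decision is never left to a probability
    score alone.
    """
    decision = {"action": POLICY["default_action"], "triggered": []}
    for category, score in scores.items():
        rules = POLICY["categories"].get(category)
        if rules is None:
            continue  # no policy for this category; the model alone can't act
        if score >= rules["escalate"]:
            decision["action"] = "escalate"
            decision["triggered"].append(category)
        elif score >= rules["warn"] and decision["action"] != "escalate":
            decision["action"] = "warn"
            decision["triggered"].append(category)
    return decision
```

For example, `decide({"manipulation": 0.9, "boundary_violation": 0.4})` escalates on the manipulation score alone; a category the policy doesn't know about is deliberately ignored rather than acted on.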

You’ll implement vector-based memory and context retrieval to help the system understand conversational history, not just isolated messages. You’ll monitor model behaviour in production, study false positives carefully, and continuously refine the system so it intervenes less — but more meaningfully.
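The retrieval idea above reduces to ranking stored message embeddings by similarity to the current message. A minimal pure-Python sketch, with a toy in-memory store standing in for FAISS or ChromaDB and embeddings assumed to come from some sentence encoder:

```python
import math

class ConversationMemory:
    """Toy in-memory vector store standing in for FAISS/ChromaDB.

    Stores (message, embedding) pairs and retrieves the messages whose
    embeddings are most similar to a query embedding, so downstream
    models see relevant history rather than isolated messages.
    """

    def __init__(self):
        self._items = []  # list of (text, embedding) tuples

    def add(self, text, embedding):
        self._items.append((text, embedding))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def retrieve(self, query_embedding, k=2):
        """Return the k stored messages most similar to the query."""
        ranked = sorted(
            self._items,
            key=lambda item: self._cosine(item[1], query_embedding),
            reverse=True,
        )
        return [text for text, _ in ranked[:k]]
```

In production the brute-force sort would be replaced by an approximate nearest-neighbour index, but the interface, add messages and retrieve the k most relevant, stays the same.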

Above all, you’ll think in second-order effects. When the system nudges a user, does it create calm or pressure? Clarity or defensiveness? That judgement matters here as much as technical correctness.


Core technical requirements

Strong Python experience with production-grade systems

PyTorch and HuggingFace Transformers used beyond experimentation

Hands-on experience with PEFT / QLoRA fine-tuning

Experience working with LLaMA 3.x-class models (e.g., 8B or 70B)

Safety-oriented models like LLaMA-Guard or equivalents

Lightweight NLP classifiers using BERT or RoBERTa

Hybrid ML + rule-based policy systems

JSON-based policy schemas, thresholds, and escalation logic

FastAPI-based inference services with async pipelines

REST and WebSocket communication flows

Vector databases like FAISS or ChromaDB

Docker-based deployments

AWS EC2 experience with CPU/GPU inference

Basic CUDA understanding and performance trade-offs

Experiment tracking with MLflow or Weights & Biases


Nice to have

Experience with trust, safety, or moderation systems

Work on chat, messaging, or social communication platforms

Model optimisation, quantisation, or latency tuning

Strong intuition for restraint — knowing when not to intervene


How we work

You’ll work closely with the founder and core product team. There’s no “throw it over the wall” culture. You’ll be part of product thinking, ethical decisions, and system design discussions.

This role requires emotional maturity. You’ll be building systems that interact with silence, rejection, vulnerability, and boundaries. If you only enjoy optimising metrics, this role will feel uncomfortable. That’s by design.


Logistics

Part-time initially, minimum 20 hours per week

Remote, India time preferred

Founding-role trajectory post pre-seed

Equity-first mindset expected


One clear boundary

Nova is serious about what it’s building. This is not an experiment in social apps. It’s an attempt to redefine how technology supports human communication.


If that excites you and you’re ready to commit, we should talk.


If you’re unsure, indecisive, or just browsing opportunities, please pass on this role.