Location: Onsite / San Francisco
Stage: Seed
Type: Full-time, founding team
attunement.ai
Engineering Observability and Accountability into AI for Behavioral Health
Attunement is building the compliance infrastructure for AI in behavioral health. Our goal is to make AI systems in clinical settings auditable, explainable, and accountable by design.
Today, clinics using Attunement cut audit preparation time by 80% and documentation costs by 40%. We are building the technical standard for safety and integrity in AI-assisted behavioral care.
What You'll Do
As an early engineer, you’ll design and implement the technical foundation for compliant, reliable AI in healthcare, working alongside our forward-deployed engineer and product designer to make compliance and transparency operational.
Your work will include:
- The core compliance intelligence layer: secure, explainable, and continuously learning from real clinical workflows.
- Data pipelines that connect with EHRs and healthcare APIs (FHIR, HL7) to create real-time, auditable feedback loops.
You Might Be Right for This If
- You’ve built production systems end-to-end, backend to frontend, in security-sensitive or regulated environments (HIPAA, SOC 2, or similar).
- You’ve worked with healthcare data standards (FHIR, HL7, or EHR integrations) and understand the nuance of data lineage, auditability, and interoperability.
- You have experience with LLMs or MLOps, particularly in designing explainability, safety, or audit systems around AI models.
- You’re fluent in React / Next.js and Python / FastAPI (or equivalent frameworks), with strong fundamentals in database architecture, API design, and observability.
- You care deeply about reliability, data integrity, and user trust.
- (Bonus) You have a background or strong interest in clinical psychology, AI safety, or human-centered systems design, and you want to build software that genuinely improves human wellbeing.
Why This Matters
This role shapes how AI systems are integrated into healthcare. You’ll collaborate with a founding team with backgrounds in neuroscience, AI safety, and clinical psychology to define the technical and ethical standards for responsible AI in clinical environments.
You’ll have meaningful ownership, early equity, and the opportunity to influence not only the product architecture but also the principles that govern how AI supports human decision-making in care.