About Mystic
At Mystic, we enable companies to deploy ML models anywhere with just a few lines of code. We abstract away the infrastructure required to deploy ML models efficiently, so that data scientists can focus on their models, not servers.
We currently have two products: Pipeline Catalyst and Pipeline Core.
Pipeline Catalyst helps developers and startups get their models deployed quickly. They upload a model to our platform and get an endpoint they can use to run inference in their products. It currently powers over 8,000 models for thousands of users.
Pipeline Core helps scale-ups and enterprises deploy their models in the infrastructure of their choice. Our production-ready platform brings over 3 years of engineering experience, adds only 40ms of overhead to ML inference, and allows them to manage thousands of models and environments at scale.
Engineers are responsible for developing our application in line with our roadmap and customer needs, and for designing and implementing robust, scalable development practices. They will set the direction of our product, culture, and company.
As a DevOps Engineer
As a DevOps Engineer at Mystic, you will play a crucial role in maintaining our on-premises infrastructure and helping us design and deploy our solutions to AWS, GCP, Azure, and other cloud vendors. You will work closely with our software engineering team to ensure smooth deployment, monitoring, and management of our API across various platforms, and contribute to the overall performance and reliability of our system.
Essential requirements
- BS/MS in Computer Science or a related field;
- 3+ years of professional software engineering experience;
- Experience programming in Python;
- Expertise in cloud platforms like AWS, GCP, and Azure, and their respective services, tools, and APIs;
- Deep understanding of containerization and orchestration platforms (e.g., Docker, Kubernetes, Amazon EKS, Google Kubernetes Engine);
- Familiarity with IaC tools such as Terraform or CloudFormation;
- Experience setting up, configuring, and optimizing CI/CD pipelines using tools like Jenkins or GitHub Actions;
- Experience in using configuration management tools like Ansible, Puppet, or Chef;
- Knowledge of monitoring, log aggregation, and visualization tools like Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), or Loki;
- Understanding of network protocols, DNS, load balancing, firewalls, and security best practices; and
- Knowledge of security best practices, regulatory requirements, and implementing compliance standards (e.g., GDPR, HIPAA, SOC2).
Benefits
- Competitive salary;
- Equity compensation;
- Huge impact: at an early-stage startup, your daily work will have a direct impact on the company's success;
- Whatever you want to learn about, we'll make it happen;
- Cycle to work scheme;
- Flexible work-hours; and
- The job is hybrid (in-office at least 3 days a week), based at our offices in London.