Overview
Come and join our multicultural team!
* 5 locations
* 27 languages
We are looking to fill this role immediately and are reviewing applications daily. Expect a fast, transparent process with quick feedback.
Why join us?
We are a European deep-tech leader in quantum and AI, backed by major global strategic investors and strong EU support. Our groundbreaking technology is already transforming how AI is deployed worldwide — compressing large language models by up to 95% without losing accuracy and cutting inference costs by 50–80%.
Joining us means working on cutting-edge solutions that make AI faster, greener, and more accessible — and being part of a company often described as a “quantum-AI unicorn in the making.”
We offer
* Competitive annual salary starting from €55,000, based on experience and qualifications.
* Two unique bonuses: a signing bonus at incorporation and a retention bonus at contract completion.
* Relocation package (if applicable).
* Fixed-term contract ending in June 2026.
* Hybrid role and flexible working hours.
* Be part of a fast-scaling Series B company at the forefront of deep tech.
* Equal pay guaranteed.
* International exposure in a multicultural, cutting-edge environment.
As an MLOps Engineer, you will
* Deploy cutting-edge ML/LLM models to Fortune Global 500 clients.
* Join a world-class team of Quantum experts with an extensive track record in both academia and industry.
* Collaborate with the founding team in a fast-paced startup environment.
* Design, develop, and implement Machine Learning (ML) and Large Language Model (LLM) pipelines, encompassing data acquisition, preprocessing, model training and tuning, deployment, and monitoring.
* Employ automation tools such as GitOps, CI/CD pipelines, and containerization technologies (Docker, Kubernetes) to enhance ML/LLM processes throughout the Large Language Model lifecycle.
* Establish and maintain comprehensive monitoring and alerting systems to track Large Language Model performance, detect data drift, and monitor key metrics, proactively addressing any issues.
* Conduct ground-truth analysis to evaluate the accuracy and effectiveness of Large Language Model outputs against known, accurate data.
* Collaborate closely with Product and DevOps teams and Generative AI researchers to optimize model performance and resource utilization.
* Manage and maintain cloud infrastructure (e.g., AWS, Azure) for Large Language Model workloads, ensuring both cost-efficiency and scalability.
* Stay updated with the latest developments in MLOps/LLMOps, integrating these advancements into generative AI platforms and processes.
* Communicate effectively with both technical and non-technical stakeholders, providing updates on Large Language Model performance and status.
Required Qualifications
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* Mid or Senior: 3+ years of experience as an ML/LLM engineer on public cloud platforms.
* Proven experience in MLOps, LLMOps, or related roles, with hands-on experience in managing machine/deep learning and large language model pipelines from development to deployment and monitoring.
* Expertise in cloud platforms (e.g., AWS, Azure) for ML workloads, MLOps, DevOps, or Data Engineering.
* Expertise in model parallelism for model training and serving, as well as data parallelism and hyperparameter tuning.
* Proficiency in programming languages such as Python, distributed computing tools such as Ray, and model-parallelism frameworks such as DeepSpeed, Fully Sharded Data Parallel (FSDP), or Megatron-LM.
* Expertise with generative AI applications and domains, including content creation, data augmentation, and style transfer.