We are hiring an MLOps Engineer for a fixed-term contract ending in June 2026, based in Zaragoza or Barcelona (hybrid working model).
This opportunity is with a leading European deep-tech company operating at the intersection of AI and advanced computing. The role focuses on deploying and operating large-scale machine learning and large language model systems used by global enterprise clients, including Fortune 500 companies.
What is offered:
* Competitive annual salary starting from 45,000 EUR, depending on experience and qualifications
* Signing bonus upon joining and a retention bonus at contract completion
* Relocation support if applicable
* Fixed-term contract until June 2026
* Hybrid working model with flexible hours (3 days onsite, 2 days remote)
* International, highly technical environment within a fast-scaling Series B company
Role responsibilities:
* Design, build, deploy, and monitor ML and LLM pipelines across the full model lifecycle
* Deploy production-grade ML and LLM solutions to enterprise customers
* Implement CI/CD, GitOps, containerization, and orchestration using tools such as Docker and Kubernetes
* Monitor model performance, data drift, and system health, including alerting and ground-truth analysis
* Manage and optimize cloud infrastructure (AWS and/or Azure) for scalability and cost efficiency
* Collaborate closely with Product, DevOps, and AI research teams
* Communicate model performance and system status to technical and non-technical stakeholders
Required experience:
* Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
* 3+ years of experience in MLOps, ML Engineering, LLMOps, or DevOps roles
* Strong experience with public cloud platforms (AWS and/or Azure)
* Proficiency in Python and distributed ML frameworks such as Ray, DeepSpeed, FSDP, or Megatron-LM
* Solid understanding of LLM architectures, deployment patterns, and retrieval-based systems
* Experience with CI/CD pipelines, containerized environments, and Kubernetes
* Fluent English; Spanish is a plus
Preferred experience:
* Experience with Mixture-of-Experts models
* Multi-cloud or hybrid cloud environments
* Real-time or streaming ML systems
* LLM observability, inference optimization, and API management