Job Description: Senior MLOps Engineer

Location: Remote from Spain (Spanish employment contract)

We are seeking an experienced MLOps Engineer with expertise in Google Cloud Platform (GCP) to design, build, and optimize end-to-end AI, ML, and data engineering pipelines. This role involves deploying machine learning models, LLMs, and traditional AI models, as well as managing data processing workflows in a GCP-first environment.

The ideal candidate will have experience working with Google Kubernetes Engine (GKE), Apache Spark, Dataproc, Terraform, Vertex AI, and Airflow (Cloud Composer) to ensure scalable and efficient AI/ML operations. While Amazon Web Services (AWS) experience is a plus, it is not required.

Requirements:

- 4-year degree preferred; relevant experience will be considered.
- 3+ years of MLOps/DevOps/Data Engineering experience, with expertise in Google Cloud Platform (Vertex AI, Dataproc, BigQuery, Cloud Functions, Cloud Composer, GKE).
- Hands-on experience building AI/ML pipelines and data engineering workflows using Apache Airflow (Cloud Composer), Spark, Databricks, and distributed data processing frameworks.
- Experience working with LLMs and traditional AI/ML models, including fine-tuning, inference optimization, quantization, and serving.
- Proficiency in CI/CD for ML, version control (Git), and workflow orchestration (Airflow, Kubeflow, MLflow).
- Strong experience with Terraform for infrastructure automation.
- Strong knowledge of Apigee for deploying, managing, and securing machine learning APIs at scale.
- Production-ready AI/ML solutions: proven ability to build, deploy, and maintain AI models in real-world production environments.
- Programming skills: proficiency in Python and familiarity with Bash, Scala, or Terraform scripting.
- Experience with security best practices for ML models, including IAM, data encryption, and model governance.

Bonus Qualifications/Experience:

- Experience with multi-cloud AI/ML solutions.
- Familiarity with AWS AI/ML services (SageMaker, EMR, Lambda, EKS, DynamoDB).
- Knowledge of Feature Stores (Feast, Vertex AI Feature Store, AWS Feature Store).
- Understanding of AIOps and ML observability tools.
- Experience with real-time AI inference pipelines and low-latency model serving.
- GitLab CI/CD, with a focus on CI/CD for GCP deployments.
- Experience working with PHI/PII in HIPAA- and/or GDPR-compliant environments.

Responsibilities:

- Build, deploy, and automate AI and ML pipelines on Google Cloud Platform (GCP) using tools such as Vertex AI, BigQuery, Dataproc, Cloud Functions, and GKE.
- Deploy, optimize, and scale Large Language Models (LLMs) and other AI/ML models using platforms such as Hugging Face Transformers, OpenAI API, Google Gemini, Meta Llama, TensorFlow, and PyTorch.
- Design and manage data ingestion, transformation, and processing workflows using Apache Airflow (Cloud Composer), Spark, Databricks, and ETL pipelines.
- Deploy AI/ML models and data services using Docker, Kubernetes (GKE), Helm, and serverless architectures including Cloud Run.
- Automate and manage ML/AI deployments using Infrastructure as Code tools such as Terraform and CI/CD pipelines with GitHub Actions or GitLab.
- Develop scalable, fault-tolerant ML pipelines to train, deploy, and monitor models in production environments.
- Deploy AI models using TensorFlow Serving, TorchServe, FastAPI, Flask, and GCP-native serverless technologies such as Cloud Run.
- Implement monitoring, drift detection, and performance tracking for AI/ML models using MLflow, Prometheus, Grafana, and Vertex AI Model Monitoring.
- Ensure security, governance, access control, and compliance best practices across AI and ML workflows.
- Design cloud-native architectures with GCP as the core platform, utilizing its AI/ML and data engineering tools.