
Senior Data Engineer

Madrid
Holcim
Published 2 April
Description

SUMMARY OF THE JOB

We are seeking a seasoned Senior Data Engineer to design, build, and optimize our next-generation data platform. You will be responsible for architecting scalable data pipelines, managing large-scale distributed systems, and ensuring our data infrastructure in AWS and Databricks is robust and efficient. The ideal candidate is a Spark expert with a deep understanding of the AWS ecosystem and a passion for automation.

MAIN ACTIVITIES / RESPONSIBILITIES

1. Pipeline Architecture: Design and implement complex batch and streaming ETL/ELT pipelines using Python, SQL, and Spark to process massive datasets.

2. Cloud Infrastructure: Leverage AWS Data Analytics services to build scalable, secure, and cost-effective data solutions.

3. Orchestration & DevOps: Manage and automate data workflows using Airflow, while utilizing Docker and ECS for containerized application deployment.

4. System Optimization: Monitor and tune the performance of distributed systems (Spark Cluster) to ensure high availability and low latency.

5. Infrastructure as Code: Utilize AWS CloudFormation or Terraform to manage data infrastructure, ensuring repeatable and version-controlled environments.

6. Cost Optimization: Monitor and optimize AWS spend by selecting appropriate instance types (Spot vs. On-Demand) and refining data storage strategies.

7. Security & Compliance: Implement IAM roles, bucket policies, and encryption (KMS) to ensure data is secure at rest and in transit.

8. Collaboration: Work within an Agile framework to deliver iterative value, collaborating closely with Data Scientists and Stakeholders to translate business needs into technical reality.
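As an illustration of the batch ETL/ELT shape described in responsibility 1, here is a minimal sketch. It is not Holcim's code: stdlib `sqlite3` stands in for Spark SQL, and the table and column names (`raw_events`, `user_totals`) are hypothetical.

```python
import sqlite3

def run_batch_etl(conn: sqlite3.Connection) -> int:
    """Toy ETL step: filter raw rows, aggregate per user, load a serving table."""
    # Load step rebuilds the serving table from scratch, so the job is idempotent.
    conn.execute("DROP TABLE IF EXISTS user_totals")
    conn.execute(
        """
        CREATE TABLE user_totals AS
        SELECT user_id, SUM(amount) AS total
        FROM raw_events
        WHERE status = 'completed'
        GROUP BY user_id
        """
    )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM user_totals").fetchone()[0]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL, status TEXT)")
    conn.executemany(
        "INSERT INTO raw_events VALUES (?, ?, ?)",
        [("a", 10.0, "completed"), ("a", 5.0, "completed"), ("b", 3.0, "failed")],
    )
    print(run_batch_etl(conn))  # one aggregate row per user with completed events
```

In a real pipeline the same extract-filter-aggregate-load shape would run as a Spark job over partitioned data rather than an in-memory database.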
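Responsibility 3 centers on Airflow, which models a workflow as a directed acyclic graph of tasks and runs each task only after its upstream dependencies have succeeded. A toy stand-in for that idea using stdlib `graphlib` (the task names are hypothetical; real Airflow DAGs use operators and a scheduler rather than plain callables):

```python
from graphlib import TopologicalSorter

def run_dag(tasks, dependencies):
    """Run each task after all of its upstream dependencies, Airflow-style.

    tasks: name -> callable; dependencies: name -> set of upstream task names.
    """
    order = list(TopologicalSorter(dependencies).static_order())
    for name in order:
        tasks[name]()
    return order

if __name__ == "__main__":
    log = []
    tasks = {
        "extract": lambda: log.append("extract"),
        "transform": lambda: log.append("transform"),
        "load": lambda: log.append("load"),
    }
    # transform depends on extract; load depends on transform
    deps = {"transform": {"extract"}, "load": {"transform"}}
    print(run_dag(tasks, deps))  # ['extract', 'transform', 'load']
```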

JOB DIMENSIONS

List of direct reports:

Up to 2 direct reports and around 15 external staff

Key interfaces, stakeholders and relationships:

Internal:

GDS: product manager, application manager, data & analytics & AI team

Country business stakeholders

External: 3rd-party vendors

PROFILE REQUIRED

Experience: 4+ years of hands-on experience in active Big Data environments and 2+ years specializing in Data Analytics within AWS.

Compute & Processing:

Amazon EMR: Architecting and managing Spark clusters for large-scale distributed processing.

AWS Glue: Developing serverless ETL jobs, managing the Data Catalog, and implementing Glue Crawlers.

Storage & Warehousing:

Amazon S3: Implementing "Data Lake" best practices, including partitioning, compression (Parquet/Avro), and lifecycle policies.

Amazon Redshift: Designing star/snowflake schemas and optimizing query performance for high-volume data warehousing.

Amazon Athena: Performing ad-hoc SQL analysis directly on S3 data.

Experience with open table formats (Iceberg/Delta).
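The S3 data-lake practices listed above rely on Hive-style `key=value` prefixes, which let Athena, Spark, and Glue prune partitions at query time instead of scanning the whole table. A minimal sketch of that key layout (bucket-relative; the table and file names are hypothetical):

```python
from datetime import date

def partition_key(table: str, event_date: date, filename: str) -> str:
    """Build a Hive-style partitioned object key for an S3 data lake."""
    return (
        f"{table}/year={event_date.year:04d}"
        f"/month={event_date.month:02d}"
        f"/day={event_date.day:02d}"
        f"/{filename}"
    )

if __name__ == "__main__":
    print(partition_key("events", date(2024, 4, 2), "part-0000.parquet"))
    # events/year=2024/month=04/day=02/part-0000.parquet
```

A query filtering on `year`, `month`, and `day` then touches only the matching prefixes, which is where most of the cost and latency savings come from.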

Orchestration & Integration:

Amazon MWAA (Managed Workflows for Apache Airflow): Deploying and scaling Airflow environments.

AWS Lambda: Building event-driven data triggers and micro-services.

Streaming (advantage): Amazon Kinesis or MSK (Managed Streaming for Apache Kafka) for real-time data ingestion.
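The Lambda point above refers to event-driven triggers: S3 can invoke a function with a notification event listing newly created objects. A minimal handler sketch, assuming the standard S3 notification event shape; what a real function does with each object (start a Glue job, publish to a queue, etc.) is left as a comment:

```python
def handler(event, context=None):
    """Collect (bucket, key) pairs from an S3 ObjectCreated notification event."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    # A real function would trigger downstream processing here,
    # e.g. start a Glue job or publish each key to a queue.
    return {"processed": len(objects), "objects": objects}

if __name__ == "__main__":
    event = {
        "Records": [
            {"s3": {"bucket": {"name": "my-bucket"}, "object": {"key": "raw/data.json"}}}
        ]
    }
    print(handler(event))
```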

Core Engineering: Expert-level proficiency in Spark, Python, and SQL.

Infrastructure & Tooling: Proven experience with Airflow for orchestration and Docker/ECS for containerization.

Good knowledge of Databricks and data mesh architectures, and a solid understanding of how to implement and maintain Lakehouse data models (bronze/silver/gold layers) using Delta Lake for reliability, ACID transactions, time travel, and schema evolution.
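The bronze/silver/gold layering mentioned above moves data from raw ingestion (bronze) through cleaning and typing (silver) to business-level aggregates (gold). A toy plain-Python sketch of that flow; in practice each layer would be a Delta Lake table, and the field names here are hypothetical:

```python
def to_silver(bronze):
    """Silver layer: drop malformed rows and cast amount to float."""
    silver = []
    for row in bronze:
        if row.get("user_id") and row.get("amount") is not None:
            silver.append({"user_id": row["user_id"], "amount": float(row["amount"])})
    return silver

def to_gold(silver):
    """Gold layer: business aggregate, total amount per user."""
    gold = {}
    for row in silver:
        gold[row["user_id"]] = gold.get(row["user_id"], 0.0) + row["amount"]
    return gold

if __name__ == "__main__":
    bronze = [
        {"user_id": "a", "amount": "10"},   # raw, string-typed
        {"user_id": None, "amount": "1"},   # malformed: dropped in silver
        {"user_id": "a", "amount": 2},
    ]
    print(to_gold(to_silver(bronze)))  # {'a': 12.0}
```

Keeping bronze immutable and re-deriving silver and gold from it is what makes features like schema evolution and time travel practical at the table level.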

Solid software engineering practices: Git, CI/CD for data pipelines, automated testing, code quality, and documentation.

Communication: Excellent written and spoken English, with the ability to explain complex technical concepts to non-technical audiences.

Degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience.

PREFERRED “PLUS” QUALIFICATIONS

Real-time Processing: Experience with streaming and distributed messaging frameworks such as Flink and Kafka.

Core Tech: Java programming.

ML Industrialization: Experience industrializing machine-learning use cases.

Data Visualization: Experience with QlikView or Qlik Sense to support BI initiatives.

Agile: Experience working in a fast-paced Scrum or Kanban environment.

Certifications: AWS Certified Data Engineer (Associate/Professional), AWS Certified Solutions Architect, or Databricks Certified Data Engineer (Associate/Professional).

DevOps: Experience with OpenShift, GitHub Actions, or Jenkins for CI/CD of data workflows.



© 2026 Jobijoba - All rights reserved
