Empowering people. Unlocking innovation.
With 1,000+ professionals and over a decade of experience, we’ve built an environment where talent is trusted, supported and continuously challenged to grow.
If you would like to learn about the requirements for this role, read on for all the relevant details.
People-first culture built on trust and real proximity.
Stable environment with turnover clearly below industry average.
International, high-impact projects powered by modern tech stacks.
€1,200 annual training budget per employee.
Real flexibility, not just a promise.
Continuous feedback culture with monthly follow-ups and annual 360º reviews.
Private health insurance, flexible compensation and Wellhub.
Active tech communities where knowledge is shared and innovation evolves.
A team that delivers and celebrates together.
Ready to grow with us? Take a look at this opportunity!
We are looking for a Data Engineer to join an international German client in the automotive sector. This role is ideal for someone with strong experience in Azure-based data environments, hands-on expertise in Spark, and a solid background working with distributed data processing and large-scale datasets.
Key Responsibilities
- Design, develop, and maintain robust data pipelines using Azure Data Factory and related orchestration tools to ingest, transform, and process data from multiple sources into Azure Data Lake and other target systems.
- Work with Databricks and Spark environments for distributed data processing, transformation, and analytics.
- Collaborate with cross-functional teams to translate business and technical requirements into scalable data solutions.
- Optimize and troubleshoot existing data workflows to improve performance, reliability, and scalability.
- Contribute to data documentation, governance, and good engineering practices across the platform.
- Stay up to date with Azure data engineering best practices and contribute to continuous improvement initiatives.
- Participate actively in code reviews, documentation, and knowledge sharing within the team.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Proven experience working in Azure-based data environments.
- Strong hands-on experience with Spark in distributed processing environments.
- Experience working with large volumes of data and building scalable data solutions.
- Solid experience with Databricks, Azure Data Lake, SQL, and Python.
- Strong problem-solving skills and attention to detail.
- Ability to work both independently and collaboratively in a dynamic environment.
- Strong communication and interpersonal skills.
Nice to Have
- Experience with Scala, especially in Spark-based environments.
- Experience with Azure Data Factory or other orchestration tools.
- Certification in Azure Data Engineering or a related area.
- Knowledge of Agile methodologies and experience working in agile teams.
- Experience with CI/CD pipelines and version control workflows, especially GitHub.
Location: 100% Remote
Schedule: 40h/week. Flexible, with reduced hours on Fridays
Language: English (C1), Spanish (B2)
Want to know more? Click here and find out!
See what people say about us on Glassdoor Reviews.
Feel free to send us your profile; we are excited to meet you!
The employee will adhere to information security policies:
- Will have access to confidential information related to Capitole and the project they are working on.
- Must comply with the security policies and internal policies of the company and the client.
- Must sign an NDA.