Your Benefits
*Agile Environment*
Embrace a dynamic work culture, where you and your team adapt quickly to meet evolving needs.
*Continuous Training*
Sharpen your skills and advance your expertise with our professional development programs.
*Great Team*
It's truly the people that make the difference — and with us you'll join the best team around.
*Mobile Work*
Enjoy the freedom to work from home.
*Personalized Growth*
Accelerate your career with development opportunities tailored to your talents and interests.
*Room for Innovation*
Turn your ideas into action in an environment that champions creativity and empowers you to lead change.
Job Description
About the Role:
We are looking for an experienced and solution-oriented Azure Data Platform Engineer to develop, operate, and optimize our modern Azure-based data platform. In this role, you will focus on Azure, Databricks, data infrastructure, and CI/CD, supporting multi-tenant environments and enabling reliable, scalable data solutions.
What You Can Expect:
- Develop a Modern Azure Data Platform: Design, build, and operate end-to-end data solutions using Azure Data Factory, Azure Data Lake Storage Gen2, Databricks, and Azure Synapse Analytics.
- Create Data Pipelines: Develop and maintain scalable ETL/ELT pipelines using PySpark and Spark, with a strong focus on data quality, reliability, and performance.
- Multi-Tenant & Environment Support: Support and operate multi-tenant data platforms across multiple environments (development, test, production) with clear separation and governance.
- Infrastructure & Platform Operations: Provision, configure, and maintain Azure data infrastructure, ensuring stability, security, and scalability.
- CI/CD for Data Platforms: Build and maintain CI/CD pipelines for data pipelines and Databricks workloads, enabling automated deployments across environments.
- Cost-Efficient & Best-Practice Azure Usage: Apply Azure best practices to optimize performance and cost, including resource sizing, lifecycle management, and cost monitoring.
- Collaboration with BI & Data Teams: Work closely with BI and data teams to support efficient data models and reporting solutions.
- Data Governance & Security Basics: Support data governance requirements such as access control, secure data handling, and basic metadata management.
What you bring
- Azure Data Platform Experience: Several years of hands-on experience with Azure Data Factory, ADLS Gen2, Databricks, and Azure Synapse Analytics.
- PySpark & Spark: Strong experience building distributed data processing pipelines using PySpark and Spark.
- ETL / ELT Knowledge: Solid understanding of ETL/ELT concepts and data modeling practices.
- CI/CD & Automation: Experience with CI/CD pipelines for data workloads and basic automation of deployments.
- SQL Skills: Strong SQL skills and experience optimizing analytical queries.
- Data Formats: Practical experience with Parquet and/or Avro.
- Infrastructure Awareness: Good understanding of Azure resource structure, environments, and operational best practices.
- Analytical & Team-Oriented Mindset: Solution-focused approach with the ability to work independently and collaboratively.
- Language Skills: Fluency in English is required; knowledge of German is an advantage.
This is a remote role within the European Union. Candidates must be based in a country where Rhenus Overland Transport is already established.