Responsibilities:
* Develop and maintain ETL pipelines for batch and near-real-time data processing, as well as analytical and machine learning workflows, using a modern stack that includes Airflow, Kubernetes, Starburst, dbt, and Kafka.
* Design, implement, and extend our enterprise data warehouse to support analytics and a data-driven business philosophy.
* Extract data from various transactional databases, third-party tools, and APIs, including Google Analytics (BigQuery), Google AdWords, Salesforce, and MSSQL.
* Write code that automatically generates and extends pipelines (see the sketch after this list).
* Maintain and expand CI/CD pipelines using GitLab to automate testing and deployment.
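To illustrate the kind of work meant by automatically generating pipelines, here is a minimal sketch of programmatic DAG generation in Airflow. The source names, commands, and schedule are hypothetical placeholders rather than a description of the actual setup, and the imports assume Airflow 2.4 or later.

```python
# Minimal sketch: generate one Airflow DAG per source system.
# Source names and commands are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical source list; in practice this might come from configuration.
SOURCES = ["salesforce", "google_analytics", "mssql_orders"]


def build_dag(source: str) -> DAG:
    """Create a simple extract-and-load DAG for one source system."""
    with DAG(
        dag_id=f"extract_{source}",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(
            task_id="extract",
            bash_command=f"echo 'extracting {source}'",  # placeholder command
        )
        load = BashOperator(
            task_id="load_to_warehouse",
            bash_command=f"echo 'loading {source} into the warehouse'",
        )
        extract >> load
    return dag


# Register one DAG per source in the module's global namespace,
# which is how the Airflow scheduler discovers generated DAGs.
for _source in SOURCES:
    globals()[f"extract_{_source}"] = build_dag(_source)
```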
Minimum Requirements:
* At least a bachelor's degree in a quantitative subject, e.g., computer science, physics, mathematics, engineering, or a comparable field.
* Strong knowledge of SQL, particularly in combination with dbt.
* Conceptual knowledge of relational (RDBMS) and columnar databases is a plus.
* Fluency in Python and experience with Linux platforms.
* Working knowledge of Airflow is a plus.
* Experience with cloud data architectures (AWS or Azure) is preferred.
* Willingness to learn and stay updated with relevant technologies.
* Ability to work independently and rigorously while improving processes.
* Excellent English skills.
Additional Skills:
* German language skills are a plus.
Benefits:
* Join our success story in an innovative working environment.
* Flexible hybrid or full remote work options.
* Employee discounts, attractive salary, and 30 days of vacation.
* Inclusive workplace celebrating diversity and unique talents.