HAYS is collaborating with a leading company in the pharmaceutical sector, a top 10 pharma group. The company creates innovative medicines for people and animals, adding value through innovation across its three business areas: human pharmaceuticals, animal health, and contract manufacturing of biopharmaceuticals. The animal health area, created three years ago, is also growing. Our client is building a technology hub that will serve the entire group internationally; it is expanding and already has more than 500 specialists.
We are looking for a Data Engineer to join the team. You’ll play a key role in implementing and maintaining transformation logic using dbt, working within a Snowflake environment to support downstream analytics, reporting, and other use cases.
What will your duties be?
Implement and maintain data transformation logic using dbt, following already defined models and specifications.
Write clean, modular, and efficient SQL code tailored for Snowflake, focusing on data cleaning, normalization, and enrichment.
Enforce data quality standards through testing, monitoring, and integration with data observability practices.
Orchestrate and manage pipeline execution using Apache Airflow, ensuring reliability and reusability.
Participate in documentation efforts, including technical specs, transformation logic, and metadata definitions.
Contribute to CI/CD pipelines, version control workflows (Bitbucket), and best practices for data development.
Continuously optimize transformation processes for performance, cost-efficiency, and maintainability in Snowflake.
What are the requirements?
3+ years of professional experience in data engineering or a related field, with a strong focus on data transformation and quality assurance.
Proficiency in dbt, including hands-on experience writing and managing models, tests, and macros.
Demonstrated ability to write clean, efficient, and high-performance SQL in Snowflake, particularly for complex data transformation and cleaning workflows.
Experience with Apache Airflow or similar pipeline orchestration tools.
Familiarity with Bitbucket, Git workflows, and DevOps/CI/CD practices.
Solid understanding of data quality frameworks, testing methodologies, and data observability principles.
Excellent verbal and written communication skills, with a proven ability to collaborate effectively in a remote, global, and cross-functional environment.
Fluency in English (both spoken and written) is required.
Preferred Qualifications:
Familiarity with Jira and Confluence for task and knowledge management.
Knowledge of Data Vault modeling principles.
Experience with AWS cloud services.
What do we offer?
Participate in international projects.
Hybrid work model: remote.
Location: Sant Cugat del Vallès.
Flexible schedule (Monday to Friday, starting between 8 and 9 a.m. and finishing between 5 and 6 p.m.).