Location: Barcelona

Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing, and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives.
About the job
We are seeking an experienced Data Engineering Specialist interested in challenging the status quo to ensure the seamless creation and operation of the data pipelines needed by Sanofi's advanced analytics, AI, and ML initiatives, for the betterment of our global patients and customers.

Main responsibilities:
- Establish technical designs that meet Sanofi requirements, aligned with architectural and data standards
- Own the entire back end of the application, including the design, implementation, testing, and troubleshooting of the core application logic, databases, data ingestion and transformation, data processing and pipeline orchestration, APIs, CI/CD integration, and other processes
- Fine-tune and optimize queries using Snowflake platform and database techniques
- Optimize ETL/data pipelines to balance performance, functionality, and other operational requirements
- Assess and resolve data pipeline issues to ensure performance and timeliness of execution
- Assist with technical solution discovery to ensure technical feasibility
- Assist in setting up and managing CI/CD pipelines and developing automated tests
- Develop and manage microservices using Python
- Conduct peer reviews for quality, consistency, and rigor of production-level solutions
- Design application architecture for efficient concurrent user handling, ensuring optimal performance during high-usage periods
- Own all areas of the product lifecycle: design, development, test, deployment, operation, and support

About you
Key Requirements & Qualifications:
- 5+ years of relevant experience developing backend, integration, data-pipelining, and infrastructure solutions
- Expertise in database optimization and performance improvement
- Expertise in Python, PySpark, and Snowpark
- Experience with data warehousing and object-relational databases (Snowflake and PostgreSQL) and writing efficient SQL queries
- Experience with cloud-based data platforms (Snowflake, AWS)
- Proficiency in developing robust, reliable APIs using Python and the FastAPI framework
- Understanding of data structures and algorithms
- Understanding of dbt is a plus
- Experience with modern testing frameworks (SonarQube; K6 is a plus)
- Strong collaboration skills and a willingness to work with others to ensure seamless integration of the server side and client side
- Knowledge of DevOps best practices and associated tools is a plus, especially in the setup, configuration, maintenance, and troubleshooting of:
  - Containers and containerization technologies (Kubernetes, Argo, Red Hat OpenShift)
  - Infrastructure as code (Terraform)
  - Monitoring and logging (CloudWatch, Grafana)
  - CI/CD pipelines (JFrog Artifactory)
  - Scripting and automation (Python, GitHub, GitHub Actions)
  - Issue tracking and documentation (JIRA & Confluence)
  - Workflow orchestration (Airflow)
  - Message brokers (RabbitMQ)
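To give candidates a concrete feel for the kind of work involved: much of this role revolves around designing and orchestrating dependency-ordered data pipelines in Python. The sketch below is purely illustrative (the task names, sample data, and dependency graph are invented for this example, not Sanofi's actual stack); it shows a minimal ingest → transform → load flow run in dependency order using the standard-library `graphlib`:

```python
from graphlib import TopologicalSorter
from typing import Callable

# Illustrative pipeline steps. Each step reads from and writes to a
# shared state dict; names and data are hypothetical.

def ingest(state: dict) -> None:
    # Pretend these rows arrived from an upstream source.
    state["raw"] = [" 10", "20 ", "x", "30"]

def transform(state: dict) -> None:
    # Clean and validate: strip whitespace, drop malformed rows.
    cleaned = []
    for row in state["raw"]:
        row = row.strip()
        if row.isdigit():
            cleaned.append(int(row))
    state["clean"] = cleaned

def load(state: dict) -> None:
    # Aggregate as a stand-in for writing to a warehouse table.
    state["loaded"] = sum(state["clean"])

# Dependency graph: each task maps to the set of tasks it depends on.
TASKS: dict[str, set] = {
    "ingest": set(),
    "transform": {"ingest"},
    "load": {"transform"},
}

STEPS: dict[str, Callable[[dict], None]] = {
    "ingest": ingest,
    "transform": transform,
    "load": load,
}

def run_pipeline() -> dict:
    """Execute all tasks in topological (dependency) order."""
    state: dict = {}
    for task in TopologicalSorter(TASKS).static_order():
        STEPS[task](state)
    return state
```

In production, dedicated orchestrators such as Airflow (listed above) handle the same dependency-ordering idea at scale, adding scheduling, retries, and monitoring on top.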
Education:
Bachelor's degree in computer science, engineering, or a similar quantitative field of study

Languages:
English is a must

Pursue Progress. Discover Extraordinary.

Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let's be those people.

At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, ability or gender identity.

Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

#LI-Hybrid #BarcelonaHub #SanofiHubs