Working at Freudenberg: We will wow your world!
Responsibilities:
Design and Build Data Pipelines: Construct and maintain robust, scalable data pipelines to seamlessly integrate data from multiple sources.
Data Acquisition and Storage: Develop efficient database solutions to collect, store, and secure data, ensuring accessibility and protection.
Data Transformation: Create algorithms to convert raw data into actionable insights, aligning with our business objectives.
Pipeline Optimization: Continuously evaluate and enhance existing data systems and processes for improved efficiency and reliability.
Quality Assurance: Implement systems to validate and ensure the accuracy and consistency of data sets.
Collaboration: Work closely with machine learning engineers, data scientists, analysts, and other stakeholders to meet the organization's data-centric needs.
Qualifications:
Educational Background: Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's degree preferred).
Proven Experience: Demonstrated experience as a Data Engineer or in a similar role.
Skills needed to be successful in this position:
Programming and Database Expertise: Proficiency in programming languages such as Python, strong knowledge of SQL, and experience with both relational and non-relational databases.
Pipeline and Data Tools: Experience in building and optimizing data pipelines, architectures, and data sets; familiarity with tools such as Databricks, Azure Data Factory, and Celonis.
Cloud and Infrastructure: Knowledge of infrastructure as code (e.g., Terraform) and state-of-the-art cloud technologies, particularly the Azure platform.
Team Player: Bring your creativity and motivation to the team’s vision, and work well in a collaborative environment.
Communication Skills: Strong communication skills in English (German is a plus).