We are your Energy Technology Partner. We electrify, automate, and digitalize every industry, business, and home, driving efficiency and sustainability for all.

At Schneider Electric, our values – IMPACT (Inclusion, Mastery, Purpose, Action, Curiosity, Teamwork) – are the foundation of everything we do. Becoming an Impact Maker means turning sustainability ambitions into actions at the intersection of automation, electrification, and digitization.

Are you ready to lead the digital transformation to create a more sustainable world? If you are up for the challenge and want to make an impact, we are excited to welcome you!

Schneider Digital is the digital department of Schneider Electric, leading the company's digital transformation by supporting our internal teams and our clients globally.
Schneider Digital consists of six Digital Hubs worldwide, strategically located to ensure 24/7 support across the company (France, China, India, the USA, Mexico, and Spain).
Our Digital Hub in Barcelona is formed by more than 450 employees working on strategic projects in roles such as Data, Cybersecurity, ERP, Cloud, Infrastructure, IT Project Management, and Digital Marketing.

The Barcelona Digital Technology Center is part of Schneider Digital, enabling Schneider Electric's digital transformation by delivering on business requirements.

We are looking for a skilled Data Integration & Data Warehousing Engineer to design, build, and manage scalable data pipelines and enterprise data warehouse solutions using AWS Glue, AWS Lambda, and Informatica. This role focuses on enabling reliable data ingestion, transformation, modelling, and delivery across the organization while ensuring performance, quality, governance, and scalability. The ideal candidate has strong experience in ETL/ELT frameworks, data warehouse design, advanced SQL development, and cloud-based data engineering.

Key Responsibilities:

Data Integration & Pipeline Development
- Design, develop, and maintain scalable data pipelines using Python, AWS Glue, and Informatica PowerCenter
- Build robust ETL/ELT workflows for ingesting structured data from multiple sources (databases, APIs, files, SaaS systems)
- Develop reusable ingestion frameworks to support data migration from on-prem to cloud

Data Warehousing & Modelling
- Develop and maintain dimensional models (fact/dimension tables, SCD handling)
- Optimize data models for analytics and reporting performance

Cloud Data Engineering
- Develop and manage AWS Glue jobs using PySpark / Python
- Build and orchestrate workflows across the AWS ecosystem (Redshift, Lambda)
- Ensure efficient compute and storage utilization for performance and cost optimization

Monitoring & Operations
- Monitor ETL pipelines and job performance
- Troubleshoot data failures and performance bottlenecks
- Implement logging, alerting, and recovery mechanisms

Data Engineering & Processing
- Build scalable data processing solutions using Spark and distributed computing techniques
- Perform data profiling, reverse engineering, and ad-hoc analysis
- Support downstream systems by delivering clean, reliable, and optimized datasets

What qualifications will make you successful for this role?

Experience & Qualifications
- Bachelor's or Master's degree in Engineering, Computer Science, or a related field
- Typically 5 years of experience in data engineering, ETL, or data warehousing roles
- Experience working in enterprise-scale data environments
- Strong hands-on experience in Python / PySpark
- Strong proficiency in SQL (querying, optimization, transformations)
- Experience building and maintaining data pipelines (ETL/ELT)
- Experience with AWS services (AWS Glue, AWS Lambda, S3)
- Version control using Git

Good to Have
- Hands-on experience with Informatica PowerCenter
- Experience with workflow orchestration tools (Airflow / Step Functions)
- Strong understanding of data warehousing concepts (EDW, ODS, staging layers)
- Knowledge of CI/CD pipelines for data engineering
- Exposure to cloud-based data lake architectures
- Knowledge of BI tools such as Oracle Analytics Cloud and ThoughtSpot

Key Competencies
- Strong analytical and problem-solving skills
- Deep understanding of data structures and relationships
- Attention to data quality and accuracy
- Ability to work in cross-functional teams
- Ownership mindset for delivery and operations

What will you get?

We adapt to you:

- With our adaptable schedule, you'll have the freedom to adjust your work hours to accommodate your personal needs and responsibilities.
- We know how great it is to work from home. With our hybrid work plan, you can enjoy working from the comfort of your home.
- Need more time to relax and disconnect? With our Holy Pack, you can purchase additional vacation days to recharge when you need it most.