Overview
Who We Are
We are looking for a skilled Data Engineer with strong experience in Databricks and the Azure ecosystem to help build and optimise modern data pipelines for our client, one of the UK's leading energy providers. The role focuses on developing scalable, high-performance data processing solutions, implementing robust ETL workflows, and enabling advanced analytics across large-scale operational and customer datasets. Your expertise will support the client's efforts to modernise its data landscape, enhance real-time insights, and drive key initiatives in energy distribution, sustainability, and grid innovation within a highly regulated environment.
What You'll Be Doing
* Client Engagement & Delivery
* Data Pipeline Development (Batch and Streaming)
* Fabric and Azure Architectures
* Data Modelling & Optimisation
* Collaboration & Best Practices
* Quality, Governance & Security
* Client stakeholders up to Head of Data Engineering, Chief Data Architect, and Analytics leadership
* Delivery of high-performing, scalable, and secure data pipelines aligned to client requirements
* High client satisfaction and successful adoption of Fabric and Azure based solutions
* Improvement of data engineering practices
* Contribution to the growth of the practice through reusable assets, accelerators, and technical leadership
What You'll Bring Along
* 3–8 years' experience in data engineering, data warehousing, or data architecture roles, including at least 3 years working with Fabric
* BSc/MSc in Computer Science, Data Engineering, or related field
* Proven experience in data engineering and pipeline development on Fabric, Azure and cloud-native platforms
* Familiarity with Fabric Workflows and other orchestration tools
* Proficiency in ETL/ELT tools such as DBT, Matillion, Talend, or equivalent
* Strong SQL and Python (or equivalent language) skills for data manipulation and automation
* Exposure to AI/ML workloads desirable
* Proficiency in cloud ecosystems (specifically Azure; AWS and GCP experience is an advantage) and infrastructure-as-code (e.g., Terraform)
* Knowledge of data modelling methodologies (star schemas, Data Vault, Kimball, Inmon)
* Familiarity with medallion architectures, data lakehouse principles and distributed data processing
* Experience with version control tools (GitHub, Bitbucket) and CI/CD pipelines
* Understanding of data governance, security, and compliance frameworks
* Strong consulting values with ability to collaborate effectively in client-facing environments
* Hands-on expertise across the data lifecycle: ingestion, transformation, modelling, governance, and consumption
* Strong problem-solving, analytical, and communication skills
* Experience leading or mentoring teams of engineers to deliver high-quality scalable data solutions
* Fabric and Azure certifications highly desirable