Position Summary:
We are seeking an experienced and versatile Data Engineer to join our Data Analytics team. This role focuses on designing and maintaining data pipelines that integrate Oracle Cloud applications and other operational systems with both AWS Data Lake and Microsoft Fabric environments. The successful candidate will play a critical role in enabling enterprise reporting and advanced analytics through robust data ingestion, transformation, and integration frameworks.

Key Responsibilities:
Build and maintain data pipelines to extract and move data between Oracle Cloud (ERP, HCM, SCM) and other operational systems, as well as between AWS Data Lake (S3, Glue, Redshift) and Microsoft Fabric (OneLake, Lakehouse, Data Factory).
Design and optimize ETL/ELT processes for large-scale, multi-source data ingestion and transformation.
Integrate with external cloud-based systems (e.g., Salesforce, ServiceNow, MS Business Central) using APIs, flat files, or middleware.
Utilize Oracle Integration Cloud (OIC), FBDI, BIP, and REST/SOAP APIs for data extraction and automation.
Leverage Microsoft Fabric components, including Data Factory, Lakehouse, Synapse-style notebooks, and KQL databases, to enable structured data availability.
Collaborate with BI developers to enable Power BI semantic models, apps, and enterprise-wide reporting.
Implement monitoring, logging, and error-handling strategies to ensure the reliability and performance of data pipelines.
Adhere to best practices in data governance, security, lineage, and documentation.
Partner with data architects, analysts, and business stakeholders to translate business needs into scalable data solutions.

Required Skills & Qualifications:
~ Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
~ 3+ years of experience in data engineering, including cloud data integrations and enterprise data pipeline development.
~ Experience with Oracle Cloud (ERP, HCM, or SCM) and its integration mechanisms (FBDI, BIP, REST APIs, OIC).
~ Familiarity with AWS Data Lake architecture: S3, Glue, Redshift, Athena, Lambda, etc.
~ Hands-on experience in the Microsoft Fabric ecosystem, including Data Factory (Fabric), OneLake, Lakehouse, Notebooks, and integration with Power BI.
~ Proficiency in SQL and Python, and experience with ETL orchestration tools (e.g., Airflow, Step Functions).
~ Strong knowledge of data modeling, data quality, and pipeline optimization.
~ Experience with Power BI datasets and reporting enablement, particularly in semantic model design.

Preferred/Desirable Skills:
Familiarity with streaming data tools (e.g., Kafka, AWS Kinesis, Fabric Real-Time Analytics).
Experience with Git-based version control, CI/CD for data pipelines, and infrastructure as code (e.g., Terraform, CloudFormation).
Knowledge of metadata management, data lineage tracking, and data governance frameworks.
Cloud certifications (e.g., AWS Certified Data Analytics, Microsoft Certified: Fabric Analytics Engineer, Oracle Cloud Certified) are a strong plus.

Soft Skills:
Strong problem-solving and analytical thinking.
Excellent communication skills, with the ability to collaborate across business and technical teams.
Highly organized, detail-oriented, and self-motivated.
Comfortable in fast-paced environments with shifting priorities.