- We are seeking Expert Data Engineers to join our team, responsible for designing, developing, testing, and deploying robust data solutions. The ideal candidates will have hands-on experience with AWS data services such as S3, Glue, Athena, EMR, EC2, and Lambda, and a strong understanding of modern data architectures, including data lakes, lakehouses, and data mesh. They will collaborate with cross-functional teams to deliver scalable data pipelines, data warehouses, and data marts while ensuring data quality, security, and compliance.
- 4 Expert Data Engineers
Job Responsibilities / Role
- Design, develop, test, and deploy data pipelines and data platforms using AWS services
- Implement and maintain data lake, lakehouse, or data mesh architectures
- Collaborate with data engineers, analysts, and DevOps teams to deliver business solutions
- Ensure compliance with data governance, security, and records management policies
- Troubleshoot and resolve data-related issues, supporting change management processes
- Demonstrate ownership of projects from inception to delivery
Mandatory Skills
- In-depth understanding of when to use a range of tools, rather than limited practical experience with a specific set of tools:
- A minimum of 3 years of working experience in the cloud
- Spark/Scala
- Python
- SQL
- AWS expertise in services such as S3, Glue, Athena, EMR, EC2, and Lambda
- ETL design and data modeling
- Analytical thinking: ability to anticipate edge cases and future scalability issues
- Problem solving skills
- Attention to detail
Nice To Have
- Familiarity with CI/CD tools (e.g., GitHub Actions)
- Experience with Apache Iceberg or similar table formats
- Exposure to data quality frameworks (e.g., Great Expectations, Deequ)
- Knowledge of Bash and HCL would be an added advantage
- Strong communication skills
- Proactive and self-motivated
- Commitment to continuous learning and improvement
- University degree in technology, preferably in computer science