Spark Scala Developer for an International Project
Are you a Big Data expert looking for a global challenge? Join an international financial sector project, working with cutting-edge technologies in a dynamic and collaborative environment.
Responsibilities include:
1. Design, develop, and optimize large-scale data pipelines.
2. Create and maintain systems for processing structured and unstructured data.
3. Implement scalable, high-performance solutions in Python and Scala on Spark.
4. Collaborate with data scientists and analysts to optimize workflows.
5. Ensure data quality, security, and integrity across all systems.
6. Implement and deploy infrastructure using cloud architectures.
Mandatory Requirements:
1. 5+ years of experience as a Big Data Developer.
2. Strong proficiency in Hadoop, Spark, and Hive.
3. Expertise in Python or Scala and advanced SQL.
4. Solid understanding of distributed computing and cloud architectures (AWS, Azure, GCP).
5. Strong analytical and problem-solving skills with a focus on efficiency.
6. Fluent communication skills for working in international environments.
Nice to Have:
* Experience in data governance and security standards.
* Familiarity with DevOps and CI/CD practices in Big Data environments.
* Knowledge of Machine Learning frameworks such as TensorFlow and PyTorch.
What We Offer:
* 100% remote work from any location.
* Engagement in a financial-sector project built on modern Big Data technologies.
* Collaboration with a global, multicultural team.
* Work in a challenging and dynamic environment.