Gestamp is an international group dedicated to the design, development and manufacture of metal components for the automotive industry. We currently have more than 100 plants, 13 R&D centers and more than 41,000 employees across 21 countries.
Please verify that you have the appropriate level of experience and qualifications by reading the full description of this opportunity below.
We are looking for a highly motivated individual to join a growing team working on exciting Industry 4.0 projects. You will help build Gestamp's path towards a SMART FACTORY by integrating new technologies and maximizing their potential to achieve connected, smart and highly efficient manufacturing plants.
About the role
We are looking for a Senior Data Software Engineer with a strong background in designing and building scalable data systems. This role is ideal for someone who enjoys hands-on technical work while mentoring others and influencing the technical direction of the team. You will be a key player in developing reliable data pipelines and systems while ensuring high performance and scalability.
About you
* Strong expertise in Java (8/11+), functional programming, concurrency, and microservices architecture.
* 5+ years of backend software engineering experience, with at least 2 years in data streaming/real-time systems.
* Proven experience leading and mentoring small engineering teams.
* Strong problem-solving skills in low-latency, fault-tolerant distributed systems.
* Hands-on in both system design and production troubleshooting.
* Effective communicator, capable of collaborating with data engineers, DevOps, and product teams.
* Ownership mentality with the ability to drive initiatives end-to-end and balance delivery with technical quality.
What you will do
* Team Leadership & Delivery: Lead and mentor a team of 4–5 engineers, setting technical direction and best practices for streaming platforms. Drive sprint planning, technical decision-making, and delivery of scalable real-time data solutions. Promote a culture of ownership, continuous improvement, and operational excellence. Collaborate cross-functionally with product, platform, and data teams to align on priorities and architecture.
* Stream Processing & Fault Tolerance: Own and guide the implementation of real-time pipelines using Apache Flink, including stateful processing and CEP. Define best practices for checkpointing, savepoints, and exactly-once guarantees. Ensure platform reliability, resilience and high availability.
* Time Semantics & Windowing: Lead the design of event-time processing strategies, watermarking, and windowing for accurate and scalable computations. Establish standards for handling out-of-order and late-arriving data.
* Messaging & Event Systems: Architect and oversee systems based on Apache Kafka (brokers, Kafka Streams, Kafka Connect, Schema Registry) and RabbitMQ (exchanges, queues, routing strategies).
* Industrial Protocols: Apply a strong understanding of industrial communication protocols, with hands-on use of OPC UA, MQTT, and AMQP, to integrate IoT devices, messaging systems, and real-time data pipelines.
* Infrastructure & Deployment: Guide the adoption of Docker, CI/CD pipelines, and Kubernetes-based deployments. Collaborate with DevOps to ensure robust, automated, and scalable infrastructure.
* Observability & Optimization: Use Prometheus, Grafana, and the ELK stack to monitor, tune, and debug streaming pipelines.
Nice to have:
* Infrastructure-as-Code (e.g., Terraform, Helm).
* Knowledge of Chaos Engineering or fault injection testing.