Deep Learning Engineer

A fantastic opportunity for a driven Deep Learning Engineer to join a fast-growing deep-tech company that provides hyper-efficient software to global companies across finance, energy, manufacturing, and cybersecurity, helping them gain an edge with quantum computing and artificial intelligence. You will have the opportunity to work on challenging projects, contribute to cutting-edge research, and shape the future of LLM and AI technologies.

***This is initially a Fixed-Term Contract until the end of June 2026, with hybrid working from sites in Madrid, Barcelona, or Zaragoza.***

Responsibilities

- Design, train, and optimize deep learning models from scratch (including LLMs and computer vision models), working end-to-end across data preparation, architecture design, training loops, distributed compute, and evaluation.
- Apply and further develop state-of-the-art model compression techniques, including pruning (structured/unstructured), distillation, low-rank decomposition, quantization (PTQ/QAT), and architecture-level slimming.
- Build reproducible pipelines for large-model compression, integrating training, re-training, search/ablation loops, and evaluation into automated workflows.
- Design and implement strategies for creating, sourcing, and augmenting datasets tailored to LLM pre-training and post-training and to computer vision models.
- Fine-tune and adapt language models using methods such as SFT, prompt engineering, and reinforcement or preference optimization, tailoring them to domain-specific tasks and real-world constraints.
- Conduct rigorous empirical studies to understand trade-offs between accuracy, latency, memory footprint, throughput, cost, and hardware constraints across GPU, CPU, and edge devices.
- Benchmark compressed models end-to-end, including task performance, robustness, generalization, and degradation analysis across real-world workloads and business use cases.
- Perform deep error analysis and structured ablations to identify failure modes introduced by compression, guiding improvements in architecture, training strategy, or data curation.
- Design experiments that combine compression, retrieval, and downstream fine-tuning, exploring the interaction between model size, retrieval strategies, and task-level performance in RAG and agentic AI systems.
- Optimize models for cloud and edge deployment, adapting compression strategies to hardware constraints, performance targets, and cost budgets.
- Integrate compressed models seamlessly into production pipelines and customer-facing systems.
- Maintain high engineering standards, ensuring clear documentation, versioned experiments, reproducible results, and clean, modular codebases for training and compression workflows.
- Participate in code reviews, offering thoughtful, constructive feedback to maintain code quality, readability, and consistency.

Qualifications

- Master’s or Ph.D. in Computer Science, Machine Learning, Electrical Engineering, Physics, or a related technical field.
- 3+ years of hands-on experience training deep learning models from scratch, including designing architectures, building data pipelines, implementing training loops, and running large-scale distributed training jobs.
- Proven experience in at least one major deep learning domain where training from scratch is standard practice, such as computer vision (CNNs, ViTs), speech recognition, recommender systems (DNNs, GNNs), or large language models (LLMs).
- Strong expertise with model compression techniques, including pruning (structured/unstructured), distillation, low-rank factorization, and architecture-level optimization.
- Demonstrated ability to analyze and improve model performance through ablation studies, error analysis, and architecture- or data-driven iterative improvements.
- In-depth knowledge of foundational model architectures (computer vision and LLMs) and their lifecycle: training, fine-tuning, alignment, and evaluation.
- Solid understanding of training dynamics, optimization algorithms, initialization schemes, normalization layers, and regularization methods.
- Hands-on experience with Python, PyTorch, and modern ML stacks (HuggingFace Transformers, Lightning, DeepSpeed, Accelerate, NeMo, or equivalent).
- Experience building robust, modular, scalable ML training pipelines, including experiment tracking, reproducibility, and version-control best practices.
- Practical experience optimizing models for real-world deployment, including latency, memory footprint, throughput, hardware constraints, and inference-cost considerations.
- Excellent problem-solving, debugging, performance analysis, test design, and documentation skills.
- Excellent communication skills in English.

By applying to this role you understand that we may collect your personal data and store and process it on our systems. For more information, please see our Privacy Notice (