Senior Machine Learning Engineer (LLM GPU Architecture)

This is a great opportunity to join one of the fastest-growing tech start-ups in Spain. Well funded and one of the best-known quantum software companies in Europe, they provide hyper-efficient software to companies seeking an edge through quantum computing and artificial intelligence across finance, energy, manufacturing, defence, cybersecurity, life sciences, and chemistry, delivering practical applications and tangible value with their new LLM product.

Job Overview

In this role you will leverage cutting-edge quantum and AI technologies to lead the design, implementation, and improvement of our language models, and work closely with cross-functional teams to integrate these models into our products. You will work on challenging projects, contribute to cutting-edge research, and help shape the future of LLM and NLP technologies.

Responsibilities

- Design and develop new techniques to compress Large Language Models based on quantum-inspired technologies to solve challenging use cases across various domains.
- Conduct rigorous evaluations and benchmarks of model performance, identify areas for improvement, and fine-tune and optimise LLMs for enhanced accuracy, robustness, and efficiency.
- Use your expertise to assess the strengths and weaknesses of models, propose enhancements, and develop novel solutions to improve performance and efficiency.
- Act as a domain expert in the field of LLMs, understanding domain-specific problems and identifying opportunities for quantum-AI-driven innovation.
- Maintain comprehensive documentation of LLM development processes, experiments, and results.
- Participate in code reviews and provide constructive feedback to team members.

Required Qualifications

- Master's or Ph.D. in Artificial Intelligence, Computer Science, Data Science, or a related field.
- 3+ years of hands-on experience with deep learning models and neural networks, preferably Large Language Models and Transformer architectures, or computer vision models.
- Hands-on experience with LLM and Transformer models, with excellent command of libraries such as HuggingFace Transformers, Accelerate, and Datasets.
- Solid mathematical foundations and expertise in deep learning algorithms and neural networks, covering both training and inference.
- Excellent problem-solving, debugging, performance-analysis, test-design, and documentation skills.
- Strong understanding of the fundamentals of GPU architectures.
- Excellent programming skills in Python and experience with relevant libraries (PyTorch, HuggingFace, etc.).
- Experience with cloud platforms (ideally AWS) and containerization technologies (Docker), and experience deploying AI solutions in a cloud environment.
- Excellent written and verbal communication skills, with the ability to work collaboratively in a fast-paced team environment and communicate complex ideas effectively.
- Previous research publications in deep learning are a plus.

Key Words: Large Language Models / LLM / Machine Learning / AI / Quantum Computing / GPU Architecture / GPGPU / GPU Farms / Multi-GPU / AWS / Kubernetes Clusters / DeepSpeed / SLURM / RAY / Transformer Models / Fine-tuning / Mistral / Llama

By applying to this role you understand that we may collect your personal data and store and process it on our systems. For more information, please see our Privacy Notice.