<p><b><u>Senior Machine Learning Engineer (LLM GPU Architecture)</u></b></p>
<p>This is a great opportunity to work with one of the fastest-growing tech start-ups in Spain. Well funded and among the best-known quantum software companies in Europe, they provide hyper-efficient software to companies seeking an edge through quantum computing and artificial intelligence across finance, energy, manufacturing, defence, cybersecurity, life sciences, and chemistry, delivering practical applications and tangible value through their new AI LLM product.</p>
<p><b><u>Job Overview</u></b></p>
<p>In this role you will leverage cutting-edge quantum and AI technologies to lead the design, implementation, and improvement of our language models, working closely with cross-functional teams to integrate these models into our products. You will work on challenging projects, contribute to cutting-edge research, and shape the future of LLM and NLP technologies.</p>
<p><b><u>Responsibilities</u></b></p>
<ul>
<li>Design and develop new techniques to compress Large Language Models based on quantum-inspired technologies, solving challenging use cases across various domains.</li>
<li>Conduct rigorous evaluations and benchmarks of model performance, identify areas for improvement, and fine-tune and optimise LLMs for enhanced accuracy, robustness, and efficiency.</li>
<li>Use your expertise to assess the strengths and weaknesses of models, propose enhancements, and develop novel solutions to improve performance and efficiency.</li>
<li>Act as a domain expert in the field of LLMs, understanding domain-specific problems and identifying opportunities for quantum AI-driven innovation.</li>
<li>Maintain comprehensive documentation of LLM development processes, experiments, and results.</li>
<li>Participate in code reviews and provide constructive feedback to team members.</li>
</ul>
<p><b><u>Required Qualifications</u></b></p>
<ul>
<li>Master&#39;s or Ph.D. in Artificial Intelligence, Computer Science, Data Science, or a related field.</li>
<li>3+ years of hands-on experience with deep learning models and neural networks, preferably Large Language Models and Transformer architectures, or computer vision models.</li>
<li>Hands-on experience with LLM and Transformer models, with excellent command of libraries such as HuggingFace Transformers, Accelerate, Datasets, etc.</li>
<li>Solid mathematical foundations and expertise in deep learning algorithms and neural networks, covering both training and inference.</li>
<li>Excellent problem-solving, debugging, performance analysis, test design, and documentation skills.</li>
<li>Strong understanding of the fundamentals of GPU architectures.</li>
<li>Excellent programming skills in Python and experience with relevant libraries (PyTorch, HuggingFace, etc.).</li>
<li>Experience with cloud platforms (ideally AWS), containerization technologies (Docker), and deploying AI solutions in a cloud environment.</li>
<li>Excellent written and verbal communication skills, with the ability to work collaboratively in a fast-paced team environment and communicate complex ideas effectively.</li>
<li>Previous research publications in deep learning are a plus.</li>
</ul>
<p>Key Words: Large Language Models / LLM / Machine Learning / AI / Quantum Computing / GPU Architecture / GPGPU / GPU Farms / Multi-GPU / AWS / Kubernetes Clusters / DeepSpeed / SLURM / RAY / Transformer Models / Fine-tuning / Mistral / Llama</p>
<p><em>By applying to this role you understand that we may collect your personal data and store and process it on our systems. For more information please see our Privacy Notice (</em></p>