LLM Engineer – Quantum AI / Model Compression / Deep Learning

Want to apply? Make sure your CV is up to date, then read the following job specifications carefully before applying.

We are currently partnered with a fast-growing deep-tech company operating at the intersection of artificial intelligence and quantum-inspired computing. As part of its expanding AI engineering organisation, the company is looking to hire LLM Engineers to develop next-generation Large Language Model technologies focused on efficiency, optimisation, and real-world deployment.

Key responsibilities
- Design and develop novel techniques for compressing and optimising Large Language Models using advanced AI and quantum-inspired approaches
- Train, fine-tune, evaluate, and optimise transformer-based models for performance, robustness, and efficiency
- Conduct benchmarking and rigorous evaluation of model accuracy and inference performance
- Develop innovative solutions to improve model scalability, portability, and deployment efficiency
- Act as a technical expert within the LLM domain, identifying opportunities for AI-driven innovation across multiple industries
- Collaborate closely with cross-functional teams to integrate AI models into production-grade products and platforms

Key requirements
- Master's or PhD in Artificial Intelligence, Computer Science, Data Science, or a related field
- 2-5+ years of experience designing, training, or fine-tuning deep learning and transformer-based models
- Strong practical experience with Hugging Face ecosystem tools (Transformers, Accelerate, Datasets, etc.)
- Strong theoretical understanding of deep learning, neural networks, and modern AI training/inference workflows
- Strong understanding of GPU architectures and high-performance AI workloads
- Excellent programming skills in Python with frameworks such as PyTorch

Keywords: LLM / Large Language Models / AI Engineering / Deep Learning / Transformer Models / Hugging Face / PyTorch / NLP / Model Compression / Quantum AI / GPU Computing / AI Optimisation / RAG / TensorRT / vLLM / MLOps / HPC / AWS / Docker / Generative AI

If you are interested in this position, please send a CV. By applying to this role you understand that we may collect your personal data and store and process it in line with our privacy policy.