Multiverse Computing
All candidates should read the following job description and information carefully before submitting an application.
Multiverse is a well‑funded, fast‑growing deep‑tech company founded in 2019. We are the largest quantum software company in the EU and have been recognized by CB Insights (2023 and 2025) as one of the 100 most promising AI companies in the world. With 180+ employees and an international, multicultural team, we deliver hyper‑efficient software for companies seeking a competitive edge through quantum computing and artificial intelligence.
Our flagship products, CompactifAI and Singularity, address critical needs across various industries:
CompactifAI is a groundbreaking compression tool for foundation AI models, based on Tensor Networks. It compresses large AI systems, such as language models, to make them significantly more efficient and portable.
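To give a flavor of model compression by factorization (this is an illustrative sketch only; CompactifAI's actual tensor-network method is proprietary and more sophisticated), here is a minimal low-rank compression of a single weight matrix using truncated SVD:

```python
import numpy as np

def compress_weight(W, rank):
    """Approximate W (m x n) with two smaller factors A (m x rank), B (rank x n).

    Illustrative only: sketches the general idea of trading a small
    approximation error for far fewer parameters.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
A, B = compress_weight(W, rank=32)

original = W.size            # 65,536 parameters
compressed = A.size + B.size  # 16,384 parameters, a 4x reduction
```

Replacing `W @ x` with `A @ (B @ x)` then costs proportionally less compute and memory, at the price of a controlled approximation error.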
Singularity is a quantum‑ and quantum‑inspired optimization platform used by blue‑chip companies to solve complex problems in finance, energy, manufacturing, and beyond. It integrates seamlessly with existing systems and delivers immediate performance gains on classical and quantum hardware.
You will work alongside world‑leading experts to develop solutions that tackle real‑world challenges. We’re looking for passionate individuals eager to grow in an ethics‑driven environment that values sustainability and diversity. We’re committed to building a truly inclusive culture—come and join us.
Job Overview
We are looking for a Machine Learning Engineer with a strong background in Large Language Models (LLMs) to join our team. In this role you will leverage cutting‑edge quantum and AI technologies to lead the design, implementation, and improvement of our language models, and work closely with cross‑functional teams to integrate these models into our products. You will work on challenging projects, contribute to state‑of‑the‑art research, and shape the future of LLM and NLP technologies.
Responsibilities
Design and develop new techniques to compress Large Language Models based on quantum‑inspired technologies to solve challenging use cases in various domains.
Conduct rigorous evaluations and benchmarks of model performance, identify areas for improvement, and fine‑tune and optimise LLMs for enhanced accuracy, robustness, and efficiency.
Build LLM‑based applications such as retrieval‑augmented generation (RAG) systems and AI agents.
Use your expertise to assess the strengths and weaknesses of models, propose enhancements, and develop novel solutions to improve performance and efficiency.
Act as a domain expert in the field of LLMs, understanding domain‑specific problems and identifying opportunities for quantum AI‑driven innovation.
Design, train and deliver custom deep‑learning models for our clients.
Work in diverse areas beyond LLMs, e.g., computer vision.
Maintain comprehensive documentation of LLM development processes, experiments, and results.
Share your knowledge and expertise with the team to foster a culture of continuous learning; guide junior team members in their technical growth and help them develop their skills in LLM development.
Participate in code reviews and provide constructive feedback to team members.
Stay up to date with the latest advancements and emerging trends in LLMs and recommend new tools and technologies as appropriate.
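One responsibility above is building RAG applications. As a minimal, illustrative sketch of the pattern (real systems use vector embeddings for retrieval and an LLM for generation; here retrieval is plain keyword overlap and "generation" is a template):

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query, documents):
    # A real RAG system would embed `context` and `query` into an LLM prompt.
    context = " ".join(retrieve(query, documents))
    return f"Based on: {context!r}"

docs = [
    "Singularity targets optimization problems in finance and energy.",
    "CompactifAI compresses large language models with tensor networks.",
]
print(answer("how are language models compressed?", docs))
```

The key design point is the same at any scale: ground the model's answer in retrieved context rather than relying on parametric knowledge alone.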
Required Minimum Qualifications
Bachelor’s, Master’s or Ph.D. in Artificial Intelligence, Computer Science, Data Science, or related fields.
2+ years of hands‑on experience designing, training, or fine‑tuning deep‑learning models, preferably transformer or computer‑vision models.
2+ years of experience using transformer models, with excellent command of libraries such as HuggingFace Transformers, Accelerate, Datasets, etc.
Solid mathematical foundations and theoretical understanding of deep‑learning algorithms and neural networks, both training and inference.
Excellent problem‑solving, debugging, performance analysis, test design, and documentation skills.
Strong understanding of GPU architectures and LLM hardware/software infrastructures.
Excellent programming skills in Python and experience with relevant libraries (PyTorch, HuggingFace, etc.).
Experience with cloud platforms (ideally AWS), containerization technologies (Docker), and deploying AI solutions in a cloud environment.
Excellent written and verbal communication skills, with the ability to work collaboratively in a fast‑paced team environment and communicate complex ideas effectively.
Previous research publications in deep learning or any other technical field are a plus.
Fluent in English.
Work location: San Sebastian (relocation required).
Preferred Qualifications