PhD position: Development of a standardized framework for AI threat models and robustness (Project GENOME)

Plaza
i2CAT Research Centre
Published 15 March
Description

IMPORTANT: HOW TO APPLY

Applications must be submitted exclusively through the official GENOME project website. Please note that applications sent via LinkedIn, the i2CAT website, or any other platform will not be considered. To ensure your candidacy is valid, you must follow the formal application process on the GENOME portal.

[Apply here on the GENOME Website]



Please read all the information about this opportunity carefully, then use the application button below to submit your CV and candidacy.

Artificial Intelligence is no longer just a tool; it is the backbone of our critical infrastructure. However, as AI integrates into the core of our society, it brings unique vulnerabilities—from adversarial attacks to data poisoning—that traditional cybersecurity cannot handle.

As part of the MSCA DN GENOME project, you will join the Cybersecurity & Blockchain group at i2CAT in Barcelona. Your mission is to bridge the gap between AI innovation and security by developing a standardized framework that ensures the next generation of AI-enabled systems is not only intelligent but provably robust and trustworthy. You will be part of an elite network of 15 Doctoral Candidates (DCs) across Europe, gaining unparalleled research and training opportunities.

What you will do

As a PhD Candidate in this project, your responsibilities will include:

* Develop a standardized framework for AI threat modeling, integrating global knowledge bases such as MITRE ATLAS and the MIT AI Risk Repository.
* Design specialized security frameworks for agentic AI systems, focusing on the MAESTRO architecture to monitor memory integrity and adaptation logic.
* Research and implement robust AI techniques, including adversarial training (e.g., projected gradient descent, PGD) and certified robustness methods.
* Enhance scalability in formal verification by exploring relaxation methods such as abstract interpretation for Deep Neural Networks (DNNs).
* Validate your research through extensive system-level simulations and deployment on a specialized AI-security testbed.
* Collaborate with an international network of researchers and industry partners within the GENOME consortium.
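For orientation on the adversarial-training topic above: PGD builds on an inner attack loop that maximizes the loss within a small L-infinity ball around each input. A minimal sketch against a logistic-regression model in plain NumPy; the function name, model, and hyperparameters are illustrative choices, not details of the project:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD attack on a logistic model sigmoid(w.x + b):
    repeatedly step in the sign of the loss gradient, then project
    back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # model prediction
        grad = (p - y) * w                          # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)       # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project to eps-ball
    return x_adv
```

In adversarial training, each minibatch would be replaced (or augmented) with such perturbed inputs before the usual gradient step on the model weights.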


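Similarly, abstract interpretation for DNNs (one of the relaxation methods named above) is easiest to see in its simplest instance, interval bound propagation: an input box is pushed through each layer with interval arithmetic, yielding sound but loose output bounds. A hedged sketch; representing the network as weight/bias lists is an assumption made here for illustration:

```python
import numpy as np

def ibp_bounds(W_list, b_list, lo, hi):
    """Propagate an input box [lo, hi] through a ReLU network by
    interval arithmetic, returning sound (if loose) output bounds."""
    for i, (W, b) in enumerate(zip(W_list, b_list)):
        center = (lo + hi) / 2
        radius = (hi - lo) / 2
        mid = W @ center + b          # affine map of the box center
        rad = np.abs(W) @ radius      # worst-case spread of the box
        lo, hi = mid - rad, mid + rad
        if i < len(W_list) - 1:       # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi
```

Certified robustness follows when such bounds prove the correct-class output stays dominant over the entire perturbation box.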

Where to apply

Applications must be submitted exclusively through the official GENOME project website:


MUST HAVE:

* Master's degree in Computer Science, Artificial Intelligence, Cybersecurity, Telecommunications Engineering, or a related field.
* Fluency in English.
* Hands-on experience with Python and AI/ML frameworks (e.g., PyTorch, TensorFlow).
* Solid understanding of cybersecurity principles and interest in Adversarial Machine Learning (AML).
* Track record of research excellence (e.g., Master's thesis or publications).

NICE TO HAVE:

* Knowledge of formal verification tools (e.g., SMT/MILP solvers) or abstract interpretation techniques.
* Familiarity with existing AI risk taxonomies such as MITRE ATLAS, NIST AI RMF, or STRIDE-AI.
* Knowledge of cloud-native technologies (e.g., Docker, K8s).
* Evidence of working on agentic or autonomous systems during or after the Master's degree.
