LLM Evaluator (Model Response Analyst)

Odixcity Consulting
Published 8 March

Description

Job Title: LLM Evaluator (Model Response Analyst)

Location: Remote (Worldwide)

Job Summary: We are seeking a detail-oriented and analytical LLM Evaluator to assess, analyze, and improve the performance of large language models (LLMs). In this role, you will evaluate AI-generated content for accuracy, coherence, factual reliability, bias, safety, and alignment with defined guidelines.

Responsibilities
* Evaluate and rank model-generated text based on complex rubrics covering dimensions such as factuality, coherence, safety, instruction‑following, and creativity.
* Review multiple model responses to the same prompt and determine which output a human would prefer, providing justifications for your choices.
* Provide clear, concise feedback to the modeling and training teams regarding recurring failure modes observed during evaluation sessions.
* Attempt to “break” the model by crafting prompts designed to elicit biased, harmful, or insecure outputs to help patch safety vulnerabilities.
* Collaborate with the quality assurance team to suggest improvements to evaluation guidelines when you encounter ambiguous or unclassifiable edge cases.
* Participate in regular “cross-checking” sessions with other evaluators to calibrate scoring standards and ensure inter‑rater reliability across the global team.
* When a model underperforms, dig deeper than the surface score to hypothesize “why” the model made a specific error (e.g., training data vs. prompt misinterpretation).
* Identify and flag novel or unexpected model behaviors to the research team, contributing to a living library of unique model outputs and failure modes.
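The pairwise-comparison workflow described above is typically captured as structured preference records for downstream RLHF-style training. A minimal sketch of what one such record might look like — all field names here are illustrative assumptions, not any specific platform's schema:

```python
import json

# One pairwise preference judgment. Field names and rubric dimensions
# are illustrative assumptions, not a specific vendor's data format.
record = {
    "prompt": "Summarize the meeting notes in exactly three bullet points.",
    "response_a": "- Budget approved\n- Launch moved to Q3\n- Hiring freeze lifted",
    "response_b": "The meeting covered the budget, the launch date, and hiring.",
    "preferred": "a",  # which output the human evaluator preferred
    "scores": {        # per-dimension rubric scores on a 1-5 scale
        "a": {"factuality": 5, "coherence": 4, "instruction_following": 5},
        "b": {"factuality": 5, "coherence": 4, "instruction_following": 2},
    },
    "justification": "Response B ignores the three-bullet-point constraint.",
}

print(json.dumps(record, indent=2))
```

Note that the justification references a concrete rubric dimension (instruction-following), which is what makes the record useful to the training team rather than a bare preference label.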
Requirements
* Minimum of 2 years of professional experience in a relevant field such as computational linguistics, data analysis, technical writing, quality assurance (specifically for NLP/AI), or cognitive science.
* Bachelor’s degree in Computer Science or a related field.
* Deep understanding of how to craft prompts to elicit specific behaviors and test model limits.
* Ability to look at a text output and explain “why” it is “good” or “bad” based on logic, tone, factuality, and instruction adherence.
* Experience working with Reinforcement Learning from Human Feedback (RLHF) data collection.
* Proven experience monitoring and improving consistency among evaluation teams. Ability to analyze IAA scores and conduct calibration sessions to align judgment.
* Experience sourcing, cleaning, and annotating datasets specifically for fine‑tuning or evaluating LLMs. Understanding of data distribution and its impact on model performance.
* Familiarity with A/B testing concepts applied to AI. Ability to help design experiments to test if a new model version is truly “better” than the previous one.
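The IAA (inter-annotator agreement) requirement above is commonly quantified with Cohen's kappa, which corrects raw two-rater agreement for agreement expected by chance. A minimal self-contained sketch (the label values are illustrative):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labeled the same.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    if p_e == 1.0:  # degenerate case: both raters always use one label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

rater_1 = ["good", "good", "bad", "good", "bad", "bad"]
rater_2 = ["good", "bad", "bad", "good", "bad", "good"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # → 0.333
```

Here the raters agree on 4 of 6 items (0.667 raw agreement), but with balanced labels chance agreement is already 0.5, so kappa is only 0.333 — the kind of gap a calibration session would aim to close.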