
Data engineer

Staq.io
Published 27 February
Description

About Us

Staq is a leading Banking-as-a-Service (BaaS) and embedded finance platform, transforming the way businesses integrate banking and financial services. At Staq, we empower our clients to innovate, expand, and streamline their financial services offerings using our cutting-edge platform. Our mission is to bridge the gap between traditional banking and the digital era by providing seamless, scalable, and secure financial solutions.

The Role

Our agents, recommendation systems, and automations are only as good as the data they consume. An agent giving financial advice needs rich, accurate, timely context about a user's accounts, transactions, spending patterns, and financial goals. A recommendation engine needs well-structured feature data. An automation trigger needs reliable signals.

Right now that data plumbing doesn't have a dedicated owner. As we scale from one product to an SDK that multiple banking applications use, the data layer becomes a shared dependency that every AI feature builds on top of. This role owns the pipelines that feed the intelligence platform, the evaluation data that tells us whether our AI is working, and the infrastructure that lets us iterate on data quality without slowing down AI development.
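To illustrate the kind of "enriched context" described above, here is a minimal sketch of turning raw transactions into an agent-ready summary. Every name in it (`Transaction`, `build_spending_context`, the field and category names) is hypothetical, not part of Staq's actual stack:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical raw transaction shape; real Plaid records carry many more fields.
@dataclass
class Transaction:
    account_id: str
    amount: float
    category: str

def build_spending_context(transactions: list[Transaction]) -> dict:
    """Aggregate raw transactions into a compact, structured summary
    (spend per category plus totals) that an agent prompt or
    recommendation engine could consume directly."""
    by_category: dict[str, float] = defaultdict(float)
    for tx in transactions:
        by_category[tx.category] += tx.amount
    total = sum(by_category.values())
    return {
        "total_spend": round(total, 2),
        "spend_by_category": {k: round(v, 2) for k, v in by_category.items()},
        "top_category": max(by_category, key=by_category.get) if by_category else None,
    }
```

The point of the sketch is the shape of the work: raw records go in, a small structured payload comes out, and that payload is what separates a generic chatbot from an assistant that knows the user's finances.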
Key Responsibilities

Context & Feature Pipelines for AI

  • Build and maintain the data pipelines that transform raw financial data (Plaid transactions, bank accounts, credit data, subscription records) into the enriched context that agents consume at runtime
  • Design the feature store or context layer that serves real-time and batch features to agents, recommendation engines, and automation triggers
  • Ensure data freshness, quality, and consistency across all pipelines feeding the intelligence platform
  • Build the context enrichment that makes the difference between a generic chatbot and a financial assistant that actually understands a user's financial situation

Evaluation & Observability Data

  • Build the data infrastructure for AI evaluation: collecting agent decisions, recommendation results, automation outcomes, and user feedback into queryable, analyzable datasets
  • Own the LLM observability data layer: structured collection of call latencies, token usage, cost per flow, error rates, and model performance metrics across all agent and automation flows
  • Create dashboards and data products that let the AI team measure agent quality, recommendation relevance, automation success rates, and LLM operational health
  • Support A/B testing and experiment-tracking data infrastructure so we can iterate on AI behavior with evidence, not intuition

SDK Data Contracts

  • Design data contracts and schemas that serve both Zeen and future banking applications that plug into the intelligence platform SDK
  • Own the ingestion layer for partner and third-party data sources; as the SDK expands to other banks, each will bring its own data formats and integration patterns
  • Build the feedback loops that connect production outcomes back to agent and recommendation improvement

Data Quality & Operations

  • Own data quality monitoring, validation, and alerting across all pipelines
  • Build data lineage tracking so we can trace any agent decision back to the data that informed it
  • Ensure PII handling in data pipelines aligns with platform policy; financial data requires careful treatment, and the AI layer has strict boundaries around what data reaches LLMs

Technical Environment

  • Python for pipeline development; SQL for analytics and data modeling
  • Financial data sources: Plaid, partner APIs, internal domain services (banking, credit, subscriptions, journal/ledger)
  • OpenTelemetry traces and structured artifacts as data sources for AI evaluation
  • Cloud-native infrastructure; containerized services
  • Financial data with strict handling requirements

What We Are Looking For

Must Have

  • 3+ years building and operating production data pipelines
  • Strong Python and SQL; experience with data transformation frameworks
  • Experience designing schemas and data contracts for consumption by application services or ML/AI systems
  • Understanding of data quality practices: validation, monitoring, alerting on pipeline failures
  • Comfort working with sensitive financial data and an understanding of why data-handling discipline matters

Strong Signals

  • Experience building data infrastructure that feeds AI/ML systems (feature stores, context pipelines, evaluation datasets)
  • Fintech or financial services background
  • Familiarity with observability data (OpenTelemetry, structured logs) as a data source
  • Experience building monitoring and analytics for LLM systems: latency tracking, cost attribution, and performance dashboards
  • Experience with data lineage, audit trails, or data governance
  • Exposure to real-time streaming alongside batch processing
  • Experience designing data contracts for multi-tenant or multi-product platforms
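The data-contract and validation responsibilities above can be sketched in a few lines. This is a toy illustration of the idea, not Staq's implementation; every field name, type, and record is hypothetical:

```python
# A minimal data contract: required fields and their expected Python types.
CONTRACT = {
    "user_id": str,
    "amount": float,
    "currency": str,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of contract violations (empty list means valid).
    Checks required fields and their types; a production pipeline would
    also emit alerts and record lineage metadata on failures."""
    errors = []
    for field, expected in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"bad type for {field}: expected {expected.__name__}")
    return errors
```

Running each incoming record through a check like this at the ingestion boundary is what makes multi-tenant data contracts enforceable: partner data that violates the schema is caught and alerted on before it reaches agents or feature stores.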



© 2026 Jobijoba - All Rights Reserved
