## Data Engineer 2

Are you the right candidate for this opportunity? Make sure to read the full description below.

- Remote type: Remote
- Locations: Madrid, ESP
- Time type: Full time
- Posted on: Posted today
- Job requisition ID: REQ485485

JLL empowers you to shape a brighter way. Our people at JLL are shaping the future of real estate for a better world by combining world-class services, advisory and technology for our clients. We are committed to hiring the best, most talented people and empowering them to thrive, grow meaningful careers and find a place where they belong.
Whether you've got deep experience in commercial real estate, skilled trades or technology, or you're looking to apply your relevant experience to a new industry, join our team as we help shape a brighter way forward.

JLL as a company is committed to equal opportunities for men and women.

Find your next move at JLL and build a fulfilling career. At JLL, we value what makes you unique, and we're committed to giving you the opportunity, knowledge and tools to own your success. Explore opportunities to advance your career from within, whether you're looking to move up, broaden your experience or deepen your expertise.

## Data Engineer P2 Job Description

### About JLL Technologies (JLLT)

- JLL Technologies is a specialized group within JLL that delivers unparalleled digital advisory, implementation, and services solutions to organizations globally
- We provide best-in-class technologies to bring digital ambitions to life by aligning technology, people, and processes
- Our goal is to leverage technology to increase the value and liquidity of the world's buildings while enhancing the productivity and happiness of those who occupy them

### What the Job Involves

- We are seeking a Data Engineer P2 who is a self-starter to work in a diverse and fast-paced environment as part of our Capital Markets Data Engineering team
- This individual contributor role is responsible for designing and developing data solutions that are strategic to the business and built on the latest technologies and patterns
- This is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, utilizing in-depth knowledge of data, infrastructure, technologies, and data engineering experience

### Responsibilities

#### Technical Development

- Design and implement robust, scalable data pipelines using Databricks, Apache Spark, and Delta Lake, as well as BigQuery
- Design and implement efficient data pipeline frameworks, ensuring the smooth flow of data from various sources to data lakes, data warehouses, and analytical platforms
- Troubleshoot and resolve issues related to data processing, data quality, and data pipeline performance
- Document data infrastructure, data pipelines, and ETL processes, ensuring knowledge transfer and smooth handovers
- Create automated tests and integrate them into testing frameworks

#### Platform Engineering
- Configure and optimize Databricks workspaces, clusters, and job scheduling
- Work in a multi-cloud environment including Azure, GCP and AWS
- Implement security best practices including access controls, encryption, and audit logging
- Build integrations with market data vendors, trading systems, and risk management platforms
- Establish monitoring and performance tuning for data pipeline health and efficiency

#### Collaboration & Mentorship

- Collaborate with cross-functional teams to understand data requirements, identify potential data sources, and define data ingestion
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet their needs

### Requirements

#### Education & Experience

- Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's degree preferred)
- Minimum 3-5 years of experience in data engineering or full-stack development, with a focus on cloud-based environments

#### Technical Skills

- Strong expertise in big data technologies (Python, SQL, PySpark, Spark), with a proven track record of working on large-scale data projects
- Strong Databricks experience
- Strong database/backend testing skills, with the ability to write complex SQL queries for data validation and integrity
- Strong experience in designing and implementing data pipelines, ETL processes, and workflow automation
- Familiarity with data warehousing concepts, dimensional modeling, data governance best practices, and cloud-based data warehousing platforms (e.g., Google BigQuery, Snowflake)
- Experience with