Sr. ML Kernel Performance Engineer, AWS Neuron, Annapurna Labs
A large number of candidates may apply for this role, so please submit your CV and application as early as possible.
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team works at the forefront of maximizing performance for AWS's custom ML accelerators, crafting high‐performance kernels for ML functions and ensuring every FLOP counts in delivering optimal performance for our customers' demanding workloads.
As part of the broader Neuron Compiler organization, our team works across multiple technology layers—from frameworks and compilers to runtime and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance. This role offers a unique opportunity to work at the intersection of machine learning, high‐performance computing, and distributed architectures where you will help shape the future of AI acceleration technology.
You will architect and implement business-critical features, publish cutting-edge research, and mentor a team of experienced engineers. We operate in large problem spaces, yet our teams remain small and agile, inventing and experimenting in a distinctive learning culture. The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.
Key Responsibilities
* Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models.
* Analyze and optimize kernel‐level performance across multiple generations of Neuron hardware.
* Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
* Implement compiler optimizations such as fusion, sharding, tiling, and scheduling.
* Work directly with customers to enable and optimize their ML models on AWS accelerators.
* Collaborate across teams to develop innovative kernel optimization techniques.
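To illustrate the kind of tiling optimization listed above, here is a minimal sketch of a tiled matrix multiply. This is generic NumPy code for demonstration only, not Neuron or NKI code; the tile size of 64 and the function name are assumptions chosen for the example.

```python
import numpy as np

def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
    """Multiply a (M, K) matrix by a (K, N) matrix in tile-sized blocks.

    Tiling keeps each working set small enough to stay resident in fast
    on-chip memory -- the same idea a kernel library applies when mapping
    an operation onto an accelerator's local SRAM.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    # Iterate over output tiles, accumulating partial products along K.
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                out[i:i + tile, j:j + tile] += (
                    a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                )
    return out
```

On real hardware, the tile size would be chosen to match the accelerator's on-chip buffer capacity, and the loop order chosen to maximize data reuse.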
A Day in the Life
* Design and code solutions that drive efficiencies in our software architecture: creating metrics, implementing automation and other improvements, and resolving the root causes of software defects.
* Build high‐impact solutions for our large customer base.
* Participate in design discussions, code reviews, and communicate with internal and external stakeholders.
* Work cross‐functionally to help drive business decisions with your technical input.
* Operate in a startup‐like development environment, focusing on the most important work.
About the Team
* Diverse Experiences – AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed, we encourage you to apply.
* Why AWS – Amazon Web Services is the world's most comprehensive and broadly adopted cloud platform, pioneering cloud computing and continuously innovating.
* Inclusive Team Culture – We embrace our differences and are committed to furthering a culture of inclusion, supporting diverse perspectives and continuous learning.
* Work/Life Balance – The team values work‐life balance and offers flexibility in working hours to help you bring energy to both personal and professional life.
* Mentorship & Career Growth – We support new members with a broad mix of experience levels and leverage knowledge sharing and mentorship to enable growth.
Basic Qualifications
* 5+ years of non‐internship professional software development experience.
* 5+ years of programming with at least one software programming language.
* 5+ years of leading design or architecture (design patterns, reliability, and scaling) of new and existing systems.
* Experience as a mentor, tech lead, or leading an engineering team.
Preferred Qualifications
* 5+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations.
* Bachelor's degree in computer science or equivalent.
* Expertise in accelerator architectures for ML or HPC such as GPUs, CPUs, FPGAs, or custom architectures.
* Experience with GPU kernel optimization and GPGPU computing such as CUDA, NKI, Triton, OpenCL, SYCL, or ROCm.
* Demonstrated experience with NVIDIA PTX and/or AMD GPU ISA.
* Experience developing high‐performance libraries for HPC applications.
* Proficiency in low‐level performance optimization for GPUs.
* Experience with LLVM/MLIR backend development for GPUs.
* Knowledge of ML frameworks (PyTorch, TensorFlow) and their GPU backends.
* Experience with parallel programming and optimization techniques.
* Understanding of GPU memory hierarchies and optimization strategies.
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Location & Salary: Canada (ON) – 150,700.00 – 251,700.00 CAD annually.
Compensation – Amazon's total compensation package may include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on experience, qualifications, and location. Amazon offers comprehensive benefits, including health insurance, retirement savings plans, paid time off, and resources to improve health and well-being.