
Job description
Huawei Canada has an immediate 12-month contract opening for a Developer.
About the team:
Founded in 2012, the Noah’s Ark Lab has evolved into a prominent research organization with notable achievements in academia and industry. The lab’s mission is to advance artificial intelligence and related fields to benefit the company and society. Through impactful, long-term projects, the lab aims to push the state of the art in research while integrating innovations into the company’s products and services. Research areas include large language models (LLMs), reinforcement learning (RL), natural language processing (NLP), computer vision, AI theory, and autonomous driving.
About the job:
Design, implement, and optimize high-performance Triton and CUDA kernels to accelerate LLM training and inference.
Collaborate with researchers to prototype, integrate, and evaluate novel kernel-level optimizations for large-scale AI workloads.
Contribute to applied research projects at Huawei by proposing efficient solutions, developing implementations, and conducting experiments.
Assist in training and fine-tuning models, building scalable prototypes, and enabling cutting-edge research through kernel-level enhancements.
Deliver high-impact results through project contributions, presentations, and publications in top AI/ML venues.
Stay current with the latest advances in NLP, AI systems, GPU acceleration, and compiler technologies to bring new insights and opportunities to the team.
Job requirements
About the ideal candidate:
Bachelor’s, Master’s, or PhD in Computer Science, Electrical/Computer Engineering, or a related field, with a strong emphasis on high-performance computing, GPU programming, or machine learning systems.
Solid background in CUDA programming, GPU architecture, and performance optimization, with experience in writing and debugging custom kernels (CUDA/Triton).
Strong coding skills in Python and C++, with the ability to bridge between high-level ML frameworks and low-level GPU implementations.
Familiarity with PyTorch and experience integrating custom GPU kernels into large-scale ML training pipelines.
Understanding of the fundamentals of machine learning, deep learning, and LLM architectures, and of how low-level optimizations impact scalability and efficiency.
Strong interest in systems for AI (compilers, distributed training, model parallelism) and staying at the forefront of LLM/ML acceleration research.