Job description
Huawei Canada has an immediate, permanent opening for a Technical Expert.
About the team:
The Data and Privacy Protection Technology Lab is dedicated to enabling user data flows while preserving privacy. Researchers focus on key areas such as user identity authentication, data integrity, privacy protection, large-model privacy assessment, multi-modal data identification, differential privacy, and federated learning. The lab supports deep research and encourages publication in leading journals. Research outcomes are applied across various Huawei product lines, including mobile phones, smart devices, and communications technologies.
About the job:
- Research and analyze state-of-the-art AI data security technologies applicable to scenarios such as consumer applications and cloud environments, covering all stages of AI training and inference, including both traditional AI and large language model (LLM) scenarios.
- Ensure the integrity, confidentiality, and availability of AI systems, and guard against misuse.
- Design and implement technology prototypes to validate and demonstrate feasibility, and support their integration into data centers, network equipment, or consumer-facing devices.
- Write design documentation and publish research results at well-known conferences.
- Lead industry analysis and insights, and plan new features and strategic directions.
Job requirements
About the ideal candidate:
- PhD or equivalent experience in computer engineering, electrical engineering, or a related field, with a research mindset and 4+ years of industry-relevant R&D experience.
- Proficiency in at least one programming language: C++, Python, or Java.
- Extensive experience in system architecture design, with a proven track record of leading the security and privacy aspects of large-scale AI systems (e.g., speech assistants, advertising, fraud detection, etc.).
- Expertise in detection and prevention techniques for training data extraction, such as Model Distillation, Knowledge Extraction Limiting and Feature Selection & Data Pruning.
- Specialized knowledge in model theft protection, preventing malicious users from stealing models through reverse engineering, inference attacks, repeated API calls, and other methods.
- Specialized knowledge in removing memorized information or nullifying the impact of problematic training data on a model, such as Model Disgorgement and Machine Unlearning.
- Expertise in adversarial attack detection, the design of modification-resistant models, and the detection and authorization of dual-use capabilities during inference.