
Technical Expert - AI Data Security

    • Waterloo, Ontario

Job description

Our team has an immediate, permanent opening for a Technical Expert.

Responsibilities:

  • Research and analyze state-of-the-art AI data security technologies applicable to scenarios such as consumer applications and cloud environments, covering all stages of AI training and inference, including both traditional AI and large language model (LLM) scenarios.
  • Ensure the integrity, confidentiality, and availability of AI systems, and guard against misuse.
  • Design and implement technology prototypes to validate and demonstrate feasibility, and support their integration into data centers, network equipment, or consumer-facing devices.
  • Write design documentation and publish research results at well-known conferences.
  • Lead industry analysis and insights, and plan new features and strategic directions.

Job requirements

What you’ll bring to the team:

  • PhD or equivalent experience in computer or electrical engineering or a related field, with a research mindset and 4+ years of industry-relevant R&D experience.
  • Proficiency in at least one programming language: C++, Python, or Java.
  • Extensive experience in system architecture design, with a proven track record of leading the security and privacy aspects of large-scale AI systems (e.g., speech assistants, advertising, fraud detection).
  • Expertise in detection and prevention techniques for training data extraction, such as Model Distillation, Knowledge Extraction Limiting, and Feature Selection & Data Pruning.
  • Specialized knowledge in model theft protection, preventing malicious users from stealing models through reverse engineering, inference attacks, repeated API calls, and other methods.
  • Specialized knowledge in removing memorized information or nullifying the impact of problematic training data on a model, such as Model Disgorgement and Machine Unlearning.
  • Expertise in adversarial attack detection, the design of modification-resistant models, and the detection and authorization of dual-use capabilities during inference.
