
GenAI Security Technical Lead

Ramat Gan, Tel Aviv District, IL / Remote

About the position

As a GenAI Security Research Technical Lead, you’ll push our redteaming efforts to a new level, while supporting teammates in their growth and tackling difficult technical challenges.

You will innovate new ways of tackling AI safety challenges, further enhancing our GenAI redteaming doctrine.

You will conduct redteaming operations to find and address risks, ensuring AI models are robust, secure, and future-proof.

As a Security Research Technical Lead, you will:

  • Push the boundaries of AI safety by conducting sophisticated black-box redteaming operations to identify and exploit vulnerabilities in generative AI models and related infrastructure.
  • Innovate new techniques to bypass state-of-the-art AI security mechanisms and set new standards for redteaming practices.
  • Assess and strengthen the security of our AI models and infrastructure by identifying weaknesses and collaborating to implement effective solutions.
  • Automate security testing processes and contribute to establishing best practices in partnership with cross-functional teams.
  • Stay at the forefront of AI security trends, ethical hacking techniques, and emerging cyber threats to ensure we remain cutting edge.
  • Document redteam activities meticulously, prepare detailed findings, and present actionable reports to senior management and other key stakeholders.
  • Provide guidance, training, and technical expertise to both technical and non-technical teams, helping them navigate the evolving landscape of Generative AI.

Requirements

Key Qualifications:

  • 5+ years of experience in offensive cybersecurity with a strong focus on web application and API security.
  • Proficient programming and scripting skills (e.g., Python, JavaScript) that are directly applicable to AI security contexts.
  • Deep understanding of AI technologies, particularly generative models such as GPT, DALL-E, etc.
  • Strong knowledge of AI vulnerabilities and effective mitigation strategies.
  • Exceptional problem-solving, analytical, and communication skills to articulate findings clearly and lead technical discussions.
  • Experience in end-to-end product development, including infrastructure and system design.
  • Hands-on experience with cloud development.

Preferred Skills to Set You Apart:

  • Great interpersonal skills.
  • Experience presenting security findings to various audiences, including executives.
  • Certifications in offensive cybersecurity (e.g., OSWA, OSWE, OSCE3, SEC542, SEC522) are highly desirable.
  • Familiarity with AI security frameworks, compliance standards, and ethical guidelines.
  • Ability to excel in a fast-paced, evolving environment, with a passion for pushing the boundaries of AI security.

About ActiveFence

ActiveFence is the leading tool stack for Trust & Safety teams worldwide. By relying on ActiveFence’s end-to-end solution, Trust & Safety teams of all sizes can keep users safe from the widest spectrum of online harms, unwanted content, and malicious behavior, including child safety violations, disinformation, fraud, hate speech, terror, nudity, and more.

Using cutting-edge AI and a team of world-class subject-matter experts to continuously collect, analyze, and contextualize data, ActiveFence ensures that in an ever-changing world, customers are always two steps ahead of bad actors. As a result, Trust & Safety teams can be proactive and provide maximum protection to users across a multitude of abuse areas, in 70+ languages. 

Backed by leading Silicon Valley investors such as CRV and Norwest, ActiveFence has raised $100M to date, employs 300 people worldwide, and has contributed to the online safety of billions of users across the globe.