
GenAI Security Researcher

Ramat Gan, IL, Tel Aviv District / Full-time / Hybrid

About the position

As a GenAI Security Researcher, you’ll dive deep into the challenges of AI safety, conducting red-teaming operations to identify vulnerabilities in generative AI systems and their infrastructure. You will find and address these risks to ensure AI models are robust, secure, and future-proof.

As a Security Researcher, you will:

  • Conduct sophisticated black-box red-teaming operations to uncover vulnerabilities in generative AI models and infrastructure.
  • Design new techniques to bypass the latest AI security mechanisms.
  • Evaluate and strengthen the security of AI systems, identifying weaknesses and collaborating to implement improvements.
  • Work with cross-functional teams to automate security testing processes and establish best practices.
  • Stay ahead of emerging trends in AI security, ethical hacking, and cyber threats to ensure we’re at the cutting edge.

Requirements

Key Qualifications:

  • 3+ years in offensive cybersecurity, with a focus on web application and API security.
  • Strong programming and scripting skills (e.g., Python, JavaScript) relevant to AI security.
  • In-depth understanding of AI technologies, particularly generative models such as GPT and DALL-E.
  • Solid knowledge of AI vulnerabilities and mitigation strategies.
  • Excellent problem-solving, analytical, and communication skills.

Preferred Skills That Set You Apart:

  • Certifications in offensive cybersecurity (e.g., OSWA, OSWE, OSCE3, SEC542, SEC522) are a big plus.
  • Experience in end-to-end product development, including infrastructure and system design.
  • Proficiency in cloud development.
  • Familiarity with AI security frameworks, compliance standards, and ethical guidelines.
  • Ability to thrive in a fast-paced, rapidly evolving environment.

About ActiveFence

ActiveFence is the leading tool stack for Trust & Safety teams worldwide. By relying on ActiveFence’s end-to-end solution, Trust & Safety teams of all sizes can keep users safe from the widest spectrum of online harms, unwanted content, and malicious behavior, including child safety violations, disinformation, fraud, hate speech, terror, nudity, and more.

Using cutting-edge AI and a team of world-class subject-matter experts to continuously collect, analyze, and contextualize data, ActiveFence ensures that in an ever-changing world, customers are always two steps ahead of bad actors. As a result, Trust & Safety teams can be proactive and provide maximum protection to users across a multitude of abuse areas, in 70+ languages. 

Backed by leading Silicon Valley investors such as CRV and Norwest, ActiveFence has raised $100M to date, employs 300 people worldwide, and has contributed to the online safety of billions of users across the globe.