Stay ahead of the curve. Learn about this year's latest trends. Download the State of Trust & Safety 2024 Report.
Manage and orchestrate the entire Trust & Safety operation in one place - no coding required.
Take fast action on abuse. Our AI models contextually detect 14+ abuse areas - with unparalleled accuracy.
Watch our on-demand demo and see how ActiveOS and ActiveScore power Trust & Safety at scale.
The threat landscape is dynamic. Harness an intelligence-based approach to tackle the evolving risks to users on the web.
Don't wait for users to see abuse. Proactively detect it.
Prevent high-risk actors from striking again.
For a deep understanding of abuse
To catch the risks as they emerge
Disrupt the economy of abuse.
Mimic the bad actors - to stop them.
Online abuse takes countless forms. Understand the types of on-platform risks Trust & Safety teams must keep users safe from.
Stop toxic and malicious online activity in real time to keep your video streams and users safe from harm.
The world expects responsible use of AI. Implement adequate safeguards for your foundation model or AI application.
Implement the right AI guardrails for your unique business needs, mitigate safety, privacy, and security risks, and stay in control of your data.
Our out-of-the-box solutions support platform transparency and compliance.
Keep up with T&S laws, from the Online Safety Bill to the Online Safety Act.
Over 70 elections will take place in 2024: don't let your platform be abused to harm election integrity.
Protect your brand integrity before the damage is done.
From privacy risks, to credential theft and malware, the cyber threats to users are continuously evolving.
Here's what you need to know.
As the Ukraine war grinds on, the Kremlin has created increasingly complex fabrications online to discredit Ukraine’s leader and undercut aid. Some have a Hollywood-style plot twist.
Online platforms face unavoidable responsibilities for Trust and Safety, particularly in maintaining election integrity by combating disinformation and other dangers. AI further complicates these challenges, not only through the creation of deepfakes but also by empowering more malicious entities.
Amid the conflict between Hamas and Israel, a disturbing surge in antisemitic and Islamophobic hate speech has swept across social media platforms, with extremist influences fueled by the ongoing conflict playing a significant role in exacerbating this alarming rise.
The shift away from in-house trust and safety teams has created an opportunity for consultancies and startups to introduce something novel: trust and safety as a service.
When the militant group Hamas launched a devastating surprise attack on Israel on Oct. 7, some fighters breached the country’s defenses in motorized paragliders. In the following days, photos and illustrations of Hamas forces coasting by wing became highly charged, controversial symbols: an emblem of Palestinian resistance to some, a glorification of terrorism to others.
The startup ActiveFence, a trust and safety provider for online platforms, is one company sounding the alarm about how predators are abusing generative AI, and helping others in the tech industry navigate the risks posed by these models.
TikTok became the world’s window into the conflict in Israel. Clips from a music festival in southern Israel, where 260 attendees were killed and more taken hostage according to Israeli rescue agency Zaka, broke through the algorithm’s regularly scheduled lighthearted programming. For the most part, Noam Schwartz thinks TikTok has played a positive role in the conflict. “People would not believe the magnitude of this event without it being amplified in social media,” he said.
ActiveFence, one of the bigger startups building tech for trust and safety teams, has acquired Spectrum Labs, another key startup in the space building AI tools to track online toxicity.
Russian propaganda is spreading into the world’s video games. Propaganda is appearing in Minecraft and other popular games and discussion groups as the Kremlin tries to win over new audiences.
The revolution in artificial intelligence has sparked an explosion of disturbingly lifelike images showing child sexual exploitation, fueling concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse.
Child safety experts are growing increasingly powerless to stop thousands of "AI-generated child sex images" from being easily and rapidly created, then shared across dark web pedophile forums. This explosion of disturbingly realistic images could normalize child sexual exploitation, lure more children into harm's way, and make it harder for law enforcement to find actual children being harmed.
Child predators are exploiting generative artificial intelligence technologies to share fake child sexual abuse material online and to trade tips on how to avoid detection, according to warnings from the National Center for Missing and Exploited Children and information seen by Bloomberg News.
Noam Schwartz provides key strategies for the US government to counter Russian disinformation campaigns targeting Ukraine. By implementing a comprehensive approach, the US can effectively combat the spread of false narratives. This article offers valuable insights and recommendations for policymakers and those invested in countering disinformation.
Companies have become accustomed to the EU’s General Data Protection Regulation (GDPR), but a new European regulation coming into effect soon will introduce new challenges for them.
Live content moderation is a well-known challenge to Trust & Safety teams. Read how combining AI and human expertise can be the solution.
Seeing false and toxic information as a potentially expensive liability, companies in and outside the tech industry are angling to hire people who can keep it in check, ActiveFence being one of them.
Worried parents? Rightfully so. While our children are exposed to offensive content such as bullying, incitement to terrorism, the spread of disinformation, and sexual abuse, Trust & Safety teams are training the artificial intelligence that will know how to locate malicious content before it reaches platforms.
Incitement, violence, and fake news are frequently distributed on social networks with the clear aim of boosting user engagement and increasing exposure to advertisements. CEO and Co-Founder Noam Schwartz: "It's a combination of conversation between humans and an algorithm that spreads what people say to each other."
Online platforms and their users are susceptible to a barrage of threats – from disinformation to extremism to terror. Daniel and Chris chat with Matar Haller, who is using a combination of AI technology and leading subject matter experts to provide Trust & Safety teams with tools to protect users and ensure safe online experiences.
Are the accusations that Musk is leading Twitter into the toxic realm of unmoderated content legitimate? Noam Schwartz, CEO and Co-Founder of ActiveFence gives his commentary on the changing face of Twitter.
Professional phishing attacks and fake e-commerce websites exist year-round - but the high-traffic, high-stakes shopping season makes them even more prominent, sophisticated, and dangerous. ActiveFence’s VP Mobile, Sandra Grodensky, explains what buyers should be looking out for and the safety precautions they should take as they shop online.
Disinformation has long been a feature of politics. Yet wading through the muck ahead of this year’s midterm elections in one fiercely contested state, Pennsylvania, shows just how thoroughly it now warps the American democratic process.
Federal officials are warning that China is working to interfere in November's midterm elections. Rachael Levy, Director of Geopolitical Risk at ActiveFence, joined CBS News to discuss the Communist Party's tactics in attempting to influence U.S. politics.
On top of widespread disinformation around election fraud, ActiveFence has detected online discourse promoting military intervention and suggesting the military should play a more active role in the electoral process.
In this episode of Reckoning, Kathryn Kosmides speaks with Noam Schwartz about the history of trust and safety on the internet, why companies are investing millions of dollars into Trust & Safety, and proactive vs. reactive online harm prevention.
Dennis Kahn, research lead at ActiveFence, talks about extremist online content in Brazil, saying he is most concerned about calls for military intervention and a violent coup in favor of Bolsonaro, threats that have appeared on Telegram, Gettr, and local platform PatriaBook.
Exposing your child's photos on social networks is already known to be fertile ground with many dangers. Here are the measures recommended to take before uploading a photo of your children online.
Tune in to hear about the struggles of building a startup and working with the largest internet platforms to detect and moderate harmful content.
Amit Dar, senior director of strategy at ActiveFence, adds to the conversation about the vulnerabilities of cross-chain bridges.
Inbal Goldberger, ActiveFence VP of Trust & Safety, shares how scaled detection of online abuse can reach near-perfect precision by combining the power of innovative technology, off-platform intelligence collection, and the prowess of subject-matter experts.
Metaverse and Web3 have become terms that describe aspects of the future internet; these technologies are building immersive worlds that intersect digital and real life. As more people migrate to the metaverse, real-world complications are bound to arise.
Armed demonstrators and extremist groups have increasingly gathered at abortion-related protests in the aftermath of the Supreme Court’s overturning of Roe v. Wade, causing analysts to warn of a rising threat of violence.
Noam Schwartz, Co-Founder and CEO at ActiveFence, discusses how to fight online toxicity with technology.
Creating inclusive online spaces is at the heart of user trust and safety. In celebration of Pride Month, ActiveFence shares eleven ways Trust & Safety teams can facilitate inclusivity online.
ActiveFence ranked as the #6 most promising startup of 2022!
An interview with CEO and Co-Founder Noam Schwartz on the importance of proactive content detection in preventing online harm.
Today’s guest is Noam Schwartz, the CEO and Co-Founder of ActiveFence, which raised $100M for the software that helps keep the internet safe.
We often hear and read about digital security, but digital safety concerns have also become a key issue for online platforms, creating a need for services and tools to address online integrity.
In the early stages of the internet, moderators of small platforms may have been able to hire a few people to ensure the content users were sharing was both truthful and non-violent. Today, there’s so much information being shared every second, the field of content moderation requires constant innovation to keep up and continue doing its job.
ActiveFence was chosen as one of Globes' 10 most promising startups of 2021 for helping Internet companies deal with dangerous, malicious content, from pedophilia to Nazism.
You might want to change all your passwords after reading this.
Online abuse, disinformation, fraud and other malicious content are growing and getting more complex to track. Today, a startup called ActiveFence is coming out of the shadows to announce significant funding on the back of a surge of large organizations using its services.
"Even if all European QAnons support the standard narrative, that is to say they support Trump and far-right ideas, each group adapts these messages to local circumstances," said the director of strategy at the Israeli cybersecurity company ActiveFence, Nitzan Tamari.
The “metaverse” is no longer a far-off concept in Sci-Fi novels. With this new reality, here are four evolving areas to watch as online platforms grapple with new and growing abuse vectors and the new phase of accountability.