Protect the most vulnerable users on your platform
Technology has greatly advanced, changing how children access the internet. In the past, kids could easily bypass age restrictions by simply ticking a box to access restricted websites or play mature-rated games.
However, in late 2023, industry groups began advocating for US officials to approve facial age estimation technology to protect children’s privacy. This tool could be integrated into games and websites alongside traditional age verification methods. Its purpose would be to act as a gatekeeper for age-restricted activities, like a bouncer at a nightclub or a clerk at a liquor store.
Facial age estimation technology would also help obtain parental consent for children to access online sites and services. Current regulations require parental consent for collecting or using personal information from children under 13. By using age-estimation technology, the process of securing and enforcing this consent could be streamlined.
Face age estimation is a computer vision technique that uses deep learning and convolutional neural networks (a machine learning algorithm used for image classification and object recognition, among others) to analyze facial features and predict a person’s age in images or videos. By training on large datasets, these models learn to identify patterns in facial attributes like wrinkles and skin texture that correlate with specific age ranges.
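One common way such a model produces a final number is to treat each age as a class, let the network output a probability for each, and take the probability-weighted average. The sketch below illustrates that last step only, with hypothetical model scores (the `logits` values are invented for illustration, not real model output):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_age(logits, age_classes):
    """Predict age as the probability-weighted average over age classes,
    a common refinement over picking only the single most likely class."""
    probs = softmax(logits)
    return sum(p * a for p, a in zip(probs, age_classes))

# Hypothetical scores for age classes 10, 20, 30, and 40:
age_classes = [10, 20, 30, 40]
logits = [0.1, 2.0, 1.5, 0.2]
print(round(expected_age(logits, age_classes), 1))  # roughly 24 years
```

Averaging over the whole distribution smooths out uncertainty between neighboring age classes, which is why it often yields lower error than a hard classification.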
Face age estimation and facial recognition use similar technologies but serve different purposes. Facial recognition involves matching faces with databases and has become notorious for privacy violations and racial biases, especially when used by government agencies for law enforcement.
In contrast, face age estimation predicts a person’s age without identifying them, focusing solely on age-related features. This method does not link faces to personal identities, making it more privacy-friendly and suitable for applications like age verification.
The difference also lies in the databases used for each technology. Facial recognition databases contain labeled photos with identities (like full names, citizenships, etc.), facilitating the identification of faces. Age estimation databases, however, label photos with age estimates (e.g., “mid-40s”), avoiding any personal details.
Face age estimation is something people naturally do all the time, consciously or unconsciously. For the technology, the focus on facial features rather than full-body images is driven by several factors, each shaped by the specific requirements and applications at hand.
While face-centric age estimation is widely adopted, it has its limitations. In certain scenarios, full-body images or a combination of facial and body features might be more appropriate. The choice between face and body depends on the specific application and the information available in a given context. For instance, estimating age in surveillance footage from facial features alone is often impractical, given the lower resolution and less controlled conditions. In such cases, alternative cues like clothing or gait can be essential, since the face is not always visible.
With regulations like the U.S. Children’s Online Privacy Protection Act (COPPA) and the Kids Online Safety Act (KOSA) requiring age assurance, estimation software is emerging as a privacy-focused solution.
However, the question remains: How accurate are these algorithms?
A recent study from the National Institute of Standards and Technology (NIST) revealed that while software estimating age from facial features has improved, there is still a margin of error of about 3-4 years, with errors particularly pronounced for women and for people from some ethnic backgrounds.
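The error figure reported in such evaluations is typically the mean absolute error (MAE): the average gap, in years, between predicted and true ages. A minimal sketch, using a small invented evaluation sample:

```python
def mean_absolute_error(true_ages, predicted_ages):
    """Average absolute gap between true and predicted ages, in years."""
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

# Hypothetical sample of true vs. predicted ages:
true_ages      = [16, 25, 34, 52, 61]
predicted_ages = [19, 22, 37, 49, 58]
print(mean_absolute_error(true_ages, predicted_ages))  # 3.0
```

An MAE of 3 means that, on average, estimates land within about three years of the real age, which matters most near legal thresholds like 13, 18, or 21.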
While humans naturally estimate age from faces, teaching computers to do so is challenging. Training such algorithms requires huge sets of images labeled with ages, and since these aren’t always available, human annotators are used to estimate ages, which can produce inconsistent results. Annotators often provide “apparent age” labels or estimate age ranges, making data labeling less precise.
Facial features themselves complicate age estimation: ethnic and racial variation, facial expressions, and other factors like glasses, makeup, facial hair, and cosmetic procedures (such as Botox) can introduce bias or obscure the characteristics that signal age. To overcome this, training data should be robust and highly diverse.
Finally, as in any computer vision task, it’s essential to use “clean” training data, which means the data must be of high quality and free from errors, inconsistencies, or biases that could affect the algorithm’s performance. Factors like lighting conditions, shadows, angle of capture, and partial occlusions present plenty more challenges.
Another wrinkle in age verification technology (pun intended) is public concern over privacy. Digital rights groups argue that age recognition systems present significant privacy challenges: while major platforms use face-based age detection to identify minors, these systems collect and store biometric data at scale, which can lead to privacy violations and misuse as surveillance mechanisms. Government and civil organizations worldwide are actively debating these issues, weighing the benefits of protecting minors against the potential intrusion into everyone’s personal privacy.
Face age estimation technology is steadily improving in both accuracy and privacy safeguards. With true positive rates exceeding 98%, reduced biases across genders and different skin tones, and decreasing mean absolute errors (MAE) due to growing and diverse face databases, the technology is becoming more reliable.
Essentially, as more people use this technology, the databases become bigger and more diversified, enabling algorithms to train better, make fewer mistakes, and reduce bias. This makes face-age estimation an increasingly dependable method for automatically verifying age.
In real-life immediate scenarios where accuracy is crucial, companies implement a safety buffer to account for potential errors. This ensures an added layer of safety tailored to their specific needs and regulatory requirements.
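One simple way to implement such a buffer is to require the estimated age to clear the legal threshold by a margin that covers the model's typical error, routing borderline cases to stricter verification. The threshold and buffer values below are illustrative assumptions, not any specific company's settings:

```python
def passes_age_gate(estimated_age, legal_age=18, buffer_years=3):
    """Approve automatically only when the estimate clears the legal age
    plus a buffer covering the model's typical error; anyone below the
    combined threshold falls back to stricter checks (e.g., an ID scan)."""
    return estimated_age >= legal_age + buffer_years

print(passes_age_gate(25))  # well above the threshold -> True
print(passes_age_gate(19))  # of age, but within the error margin -> False
```

The 19-year-old in the example is legally an adult, yet with a 3-4 year error margin the estimate alone cannot rule out a minor, so the gate declines to auto-approve.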
Currently, face age estimation is often used to block minors from accessing age-restricted goods and services, such as tobacco or alcohol sales, websites with adult content like violence, nudity, or gambling, and even common websites that require financial details like credit card credentials. However, its application extends beyond access control. It plays a crucial role in automatically identifying graphic content like Child Sexual Abuse Material (CSAM). The technology’s ability to accurately differentiate between adults and minors, coupled with its capacity to swiftly process large volumes of content, can reduce the workload of human moderators who previously had to review such material manually.
ActiveFence’s detection solutions use face age to automate real-world tasks like identifying CSAM or violations of child safety policies on platforms. Our algorithms classify individuals into age groups based on visual cues in images or video. By integrating face age estimation with other image analyses, our tools are better able to understand content in context, accurately identifying violations and preventing potential harm.
For example, an alcohol-detecting model can identify liquor bottles in images. Using this model alone might flag a picture of adults drinking alcohol, which is generally considered benign. However, by adding an age detector, we can determine whether the people in the image are minors. In this case, the image can be flagged for further review to assess potential violations. This approach is similarly effective for images containing weapons, drugs, nudity, or other illegal content involving minors.
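The combination described above can be sketched as a simple rule over the outputs of two detectors. This is an illustrative reconstruction of the logic, not ActiveFence's actual pipeline, and the category names are assumptions:

```python
def should_flag(detected_objects, estimated_ages, minor_threshold=18):
    """Flag an image for human review only when restricted content
    co-occurs with a person estimated to be a minor."""
    restricted = {"alcohol", "weapons", "drugs", "nudity"}
    has_restricted = any(obj in restricted for obj in detected_objects)
    has_minor = any(age < minor_threshold for age in estimated_ages)
    return has_restricted and has_minor

# Adults drinking: benign. A minor in the same scene: escalate.
print(should_flag({"alcohol"}, [34, 29]))  # False
print(should_flag({"alcohol"}, [34, 15]))  # True
```

Requiring both signals keeps the false-positive rate down: neither an alcohol bottle nor a minor alone triggers review, only the risky combination does.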
Additionally, in a recent update to our underage detector, we’ve enhanced our tools to identify minors based on their online conversations. This involves analyzing user interactions and integrating visual age estimation with textual analysis. By combining these capabilities, we gain a more precise understanding of a user's age.
Accurately determining the age of an online user is challenging for a computer. However, by layering detection algorithms and integrating various technologies together with face age estimation, we can empower automated tools to navigate this complexity more effectively. This combination of technologies significantly enhances our ability to ensure trust and safety online. Face age estimation technology positions itself as a key component in keeping the online world safe from harm, and its continuous improvement in accuracy and privacy safeguards makes it an invaluable tool for age verification and content moderation.
Editor’s Note: The article was originally written by Damian Kaliroff, Data Scientist at ActiveFence, and published on ActiveFence’s Engineering Blog on Medium. It has been modified, updated with new information and edited for clarity.