
Back in the old days, you’d snag an older sibling’s expired license or put on some makeup and try your best to sneak into a bar or an 18-and-over venue. Well, it’s 2026 and kids are no different. They’re using someone else’s IDs and drawing on facial hair to get into the hottest venue in town: the internet.
A new report from Internet Matters revealed that a third of U.K. children have found ways to get past age verification systems designed to protect them online, with some resorting to creative workarounds including drawing facial hair on themselves to fool age-estimation technology.
The report, The Online Safety Act: Are Children Safer Online?, published by Britain’s leading not-for-profit for online child safety, examines the early impact of the U.K.’s Online Safety Act on families. While new safety measures are becoming more visible across children’s online spaces, the systems meant to enforce them are widely seen as weak and easy to circumvent. Nearly half of children report experiencing harm online, including exposure to violent and hateful content, despite the Act’s protections having come into force.
Earlier this week the U.K. government said it would impose some form of age or functionality restrictions on social media for under-16s, and pressure is mounting as other countries, including Australia, move to ban children from platforms outright.
Fake birthdays, borrowed logins and false moustaches
The research, which surveyed 1,270 U.K. children aged 9–16 and their parents, found that nearly a third (32%) of children admitted to bypassing age checks in just a two-month period. The most common method was simply entering a fake birthday (13%), followed by using someone else’s login (9%) or someone else’s device (8%). Others used a VPN or submitted photos and videos of other people, or even fictional characters, to trick facial age-estimation tools.
One of the more striking findings involved children physically altering their appearance to deceive the technology. One mother told researchers: “I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.” The report noted this technique was reported as working in multiple instances.
Nearly half (46%) of children said they believed age checks were easy to bypass, with older children even more confident: 52% of those aged 13 and over said getting past the systems was straightforward. “I don’t class it as being a deterrent,” another parent told researchers. “If anything, because they’ve had a barrier put up, kids will do everything they can to be the first one to get through it.”
The report also found that parents are not always working against these workarounds. A quarter (26%) of parents said they had let their child get past age checks: one in six (17%) actively helped their children get around the restrictions, while a further 9% allowed it or turned a blind eye.
Beyond the age verification gaps, the report details children’s ongoing exposure to harmful material. Almost half (49%) of children said they had experienced harm online soon after the new measures came into force, including seeing violent content (12%), material promoting unrealistic body types (11%) and hateful content including racist or homophobic material (10%), all categories that are prohibited under the Act’s Protection of Children Codes. Some children in focus groups described being exposed to graphic content through their social media feeds, including footage of the assassination of Charlie Kirk, which left some deeply distressed.
Children frequently encounter AI-generated videos and images, some of which are difficult to identify as artificial, raising worries about misinformation and inappropriate content. One 16-year-old girl told researchers: “I had something happen to one of my friends where someone took her face and made her nude.”
“Children are smart, and they will test the limits of any age check. That’s why basic checks are not enough,” Ricardo Amper, founder and CEO of fraud-prevention and biometric-authentication company Incode Technologies, told Fortune. “The technology has to be trained for fraud, with liveness and deepfake detection built in, so it can tell the difference between a real child, a replayed video, an altered face, or an AI-generated attempt to bypass the system.”
Some signs of progress
It’s not all bleak. Around seven in ten children (68%) and parents (67%) report seeing more safety measures online, including improved reporting tools, content filters, and restrictions on functions such as live streaming. Over half of children (53%) say they have recently been asked to verify their age, and the majority (54%) report that online content has become more child-friendly. Some 39% of parents and 42% of children feel the online world has become safer recently, though 28% of parents and 16% of children believe it has become less safe.
Children also described struggling to regulate their own screen time, with platforms’ addictive design features compounding the problem. “I definitely say I spend a lot of time on my phone. I’m on it at 3AM on a school night,” said one 16-year-old girl. Another, aged 12, described the pull of short-form video: “With TikTok or YouTube shorts… it’s just the endless cycle of scrolling. It never has a point where it stops.”
The findings suggest the Online Safety Act has not yet delivered the step change families were promised. Only 22% of parents and 31% of children believe the government is doing enough to protect children online. Support for a blanket ban on social media for under-16s is strong among parents (62%), though many doubt it would work in practice and worry it could remove important social connections. Stronger enforcement of the existing law, stricter age checks and restricting harmful platform features were the most popular alternatives to a ban.
“This report offers an early snapshot of how the Online Safety Act is affecting children’s safety and wellbeing online,” read a statement to Fortune from Rachel Huggins, CEO of Internet Matters. “While some families are beginning to see improvements, progress is patchy and far too slow. Children are still being exposed to harmful content at unacceptable levels, and their experience of age verification systems show they are too often weak or easily tricked.”
“Just one in five parents, and fewer than a third of children, think the government is doing enough to keep children safe online,” she continued. “Parents are also clear that social media companies must do more and be properly held to account.”
The report calls for online services to be built on safety-by-design principles, with children’s access to platforms determined by the level of risk they pose rather than through blanket bans. It urges highly effective age assurance that actually works in practice, stronger media literacy support for families, and robust enforcement of existing legislation.
