What We Told the UK Government About Children and Social Media

This week Stamplo submitted a response to the UK government's national consultation on children's digital wellbeing. The consultation asks important questions about age restrictions, platform design, verification and the role of artificial intelligence in children's online lives. We wanted to contribute not as commentators, but as practitioners who have built a children's platform from the ground up with safety as the founding principle rather than an afterthought.
What we actually see when children write letters online
Most contributions to this debate are grounded in research, theory or lived experience as a parent. Stamplo offers something slightly different: direct observation of how children behave when digital communication is designed to be slow, supervised and deliberate rather than instant and algorithmic.
Because letters on Stamplo take time to write, require parental approval before delivery, and arrive days later rather than seconds later, the patterns we observe differ markedly from what mainstream platforms report. Children typically take several days to reply rather than minutes. Friendships develop across long threads of letters rather than fragmented bursts of chat. Messages are longer and more considered. Conversations that began months ago are still active today.
None of this behaviour is enforced through rules. It emerges from the structure of the platform itself. When communication is framed as writing a letter rather than sending a message, children approach it differently. There is no follower count, no algorithmic feed, no notification pulling them back. The result is something closer to traditional pen pal correspondence than modern social media, and the children using it are forming friendships that last. We currently have families across 41 countries, with some pen pal pairs having exchanged letters consistently for more than five months. That is the behaviour that thoughtful platform design can produce.
On the age of digital consent
We believe the age of digital consent should be raised, with proportionate controls that reflect the actual risks of different services rather than a single threshold applied uniformly. Our view is that social platforms with algorithmic feeds and stranger interaction should require verified parental consent up to 16, with unrestricted access only at 18. Lower risk services, including platforms like Stamplo that are designed specifically for children with built-in parental oversight, should be accessible at any age provided verified parental consent is in place. The threshold matters less than whether the verification behind it is genuine.
On Stamplo, every parent must complete a real-time liveness check before their child can access the platform. This is not a checkbox or a self-declaration. It is a genuine identity verification step that links a real adult to every child account on the platform. The result is a community where accountability is structural rather than assumed. Even so, a small number of parents, all of whom chose Stamplo specifically for its safety model, have raised the verification process as a friction point. If motivated, safety-conscious parents find identity verification burdensome, the dropout rates on mainstream platforms that have not made safety their primary value proposition will be considerably higher. Raising the age of consent without addressing verification usability will not produce the outcomes policymakers intend.
A tiered approach to platform regulation
We argued strongly in our response that a single age threshold applied uniformly across all online services is both blunt and unworkable. The risks associated with a pen pal platform for children are categorically different from those of an algorithmically driven social network or a platform where anonymous strangers can communicate in real time. Regulation should reflect that difference.
The closest analogy we have in the UK is the film certification system. Everyone understands it, it scales with the level of risk, and it is consistently applied. A similar tiered framework for online services, where the required level of verification and parental oversight is proportionate to the potential for harm, would be more effective than a single threshold that treats a children's educational platform the same as a social media network with two billion users.
The verification infrastructure problem
One observation we shared, which we feel is underrepresented in the wider debate, is that the technical barrier to implementing robust age verification is lower than the industry often claims. For a small platform, integrating a reputable third-party identity verification service such as Didit is not technically complex. The real hurdles are incentives and UX friction: platforms are not commercially motivated to introduce steps that reduce sign-up conversion, and parents will abandon flows that feel cumbersome even when they understand the reason for them. Regulation that sets clear, enforceable technical standards would change that calculation for platforms of all sizes.
More broadly, we believe the current model of platform-by-platform verification is fundamentally inefficient and will always contain gaps. Your bank already knows your age. Your phone contract is tied to your identity. A device-level or browser-level verified age signal could allow a child to verify once rather than hundreds of times across hundreds of services. This would need to be built on privacy-preserving foundations, sharing a verified age range rather than personal data, with proper governance and open standards to avoid creating a centralised identity database that carries its own significant risks. The technical foundations for something like this exist. What is missing is the policy coordination and industry commitment to make it a standard rather than a proposal.
On artificial intelligence and children
Early in Stamplo's development, when our user base was small and children could not always find pen pals quickly, we built a restricted AI chatbot to bridge the gap. It operated within tightly defined guardrails, and it served a practical purpose during that period. When the platform grew large enough for children to connect with real peers, we removed it without hesitation.
What prompted us to act was not a safety incident but something quieter. Parents began messaging to say their child had enjoyed talking to it. Children were forming an attachment to a language model. The interaction was pleasant, even enriching in some respects, but it felt fundamentally misaligned with what Stamplo exists to do. We are built on the belief that real human connection, however slow and imperfect, is irreplaceable. An AI that approximates friendship well enough to satisfy a child is not a feature. It is a distraction from the thing that actually matters.
Our view is that embedded AI assistants operating within transparent, restricted environments, where parents can see the prompts and guardrails governing responses, can be appropriate for children in an educational context. General purpose AI systems with open-ended conversational capability should be restricted to adults. The capacity of these systems to adapt to individual users and sustain what feels like a genuine relationship is precisely what makes them powerful, and precisely what makes them inappropriate for children without safeguards that do not yet exist at scale.
A note on news and the online watershed
One area we raised that falls outside the typical scope of these discussions is the accessibility of distressing news content to children. A child loading a major news homepage today may be immediately confronted with graphic headlines about violence, disaster or conflict, with no warning and no age check. Television broadcasting in the UK operates a watershed precisely because society has decided that certain content should not reach children without intention. That principle has not travelled with us into the online world, and we believe it should.
Why we submitted
We submitted to this consultation because we think the voice of people who have actually built things, made real decisions under real constraints, and watched real children use what they created, has value in a conversation that can otherwise become abstract. Stamplo is a small platform. But it is a working one, and the decisions we made to keep it safe were not complicated or expensive. They simply required the commitment to treat child safety as a non-negotiable rather than a feature to be balanced against growth.
The generation growing up online today will not remember a world without the internet. What they will remember is whether the adults who built it took their safety seriously. That is the real question facing the industry, and it is one that has gone unanswered for too long.