
As I write, the jury is deliberating on its verdict in the months-long social media addiction trial in LA. After a decade of rising demands for legislation — and steadily falling student achievement scores — the trial marks the first time representatives of the major platforms, Google’s YouTube and Meta’s Instagram and Facebook, have appeared in court.
They did what exploiters always do: they blamed the victim. As effective as that strategy generally is, especially when the accuser is a girl, it is going to be hard for the jury to ignore the thousands of pages of confidential internal documents the case brought to light. They document a toxic and deliberate culture of exploitation.
And no matter what the verdict is in this case, there are many more to come. Another precedent-setting trial starts in May, and another in July. Of the 1,600 lawsuits pending in the U.S., 20 have been selected as “bellwether” trials: representative cases that stand in for the rest and pool their evidence. The verdicts will set legal precedents and guide legislation.
As for the evidence, there’s an avalanche.
Much concerns the targeting of children. While the platforms say you need to be at least 13 to sign up, don’t you believe it. Children certainly don’t.
A November 2020 document called “Business Case for Kids and Families at Google” declares (after the first three pages, which are blacked out): “Solving Kids is a Massive Opportunity.”
“Kids under 13 are the fastest-growing internet audience in the world,” it crows. “40 percent of new internet users going online are kids. 170k kids around the world go online for the first time every day. Digital Marketing aimed at kids is growing 25 percent year over year.”
The Business Case establishes just how lucrative Google’s “Family Group” construct will be. You thought you were being offered the opportunity to supervise your child’s internet use? Think again.
You were offered an opportunity to get them onto Google young.
A confidential TikTok document on underage metrics counts 38 million global users under 13, and a further 24 million under 16.
When Mark Zuckerberg was called to a U.S. Senate hearing in 2021, he was about to launch something called Instagram Youth, aimed at capturing 9-to-11-year-olds just getting onto the internet.
The name “Instagram Youth” was a misnomer. An internal memo notes that the term “youth,” which sounds something like “teen,” was meant to cover kids from 8 to 18.
These marketing campaigns put a whole new spin on George Orwell’s famous observation in 1984: “Who controls the past controls the future. Who controls the present controls the past.”
In this case it’s who controls the young.
Nothing is going to change without legislation. Australia’s ban on social media for kids under 16, the first of its kind in the world, has had dramatic effects. Almost 5 million kids in that country were on social media platforms. After years of dithering and delay by tech companies that couldn’t seem to enforce their own age restrictions, the offending accounts were shut down when the ban took effect in December 2025.
What is Canada doing? In 2024, after four years that included a national commission, four citizens’ assemblies, an expert panel and wide public consultation, Bill C-63, the Online Harms Act, was tabled, only to be shelved along with the Trudeau government.
That legislation, however, was excellent. The trend, following Australia, is to restrict kids’ access to the platforms, but that alone changes little about the social media companies themselves. It is not as though Instagram suddenly loses its allure when a teen turns 16.
Nonetheless, Denmark, France, the EU, Germany, Greece, India, Indonesia, Italy, Malaysia, Norway, Poland, Slovenia, Spain, and the U.K. either have already implemented, or are in the process of drafting, age-restricting legislation.
That is just one piece of what needs to be a multi-faceted approach.
Canada’s Bill C-63 proposes a regulatory framework for social media platforms. It puts the duty of responsibility and care on the companies: they must manage the risks of harm to children and be transparent about what they do.
It makes it clear that if the content made available on a platform results in harm to a child, the platform is liable.
Last week, the Carney government reconvened the advisory group that helped draft Bill C-63 “to engage on new and emerging issues related to online harms.”
Even in the short time since the bill was shelved in 2025, the landscape has changed. As the social media addiction trial made clear, online harms are not just the result of bad content. They are embedded in the design of the whole experience: push notifications, autoplaying reels, tailored algorithms that give you more of what you want and keep you waiting for it, ephemeral content, the endless scroll, the rabbit holes of misinformation and disinformation, the fake accounts, the likes, the followers.
It’s been likened to a casino on the one hand and to tobacco on the other. But social media platforms are far more pernicious, and far more sophisticated, than either.
And it’s all about to get exponentially worse with the advent of AI’s freewheeling chatbots, happy to give advice about how to make a cake, have a relationship, or conduct a mass school shooting. No questions asked. No one responsible.
The Online Harms Act forgoes an age limit in favour of making it clear that the global corporations profiting from social media are liable for their effects. That is a far-reaching approach. But given how much these companies depend on recruiting kids young, a marketing strategy that guarantees their future, both approaches should be on the table.
Demand that these companies clean up their algorithms, and prevent those most vulnerable from logging on in the first place.