NZ Herald: Meta’s NZ charm offensive should worry every parent
As published in The New Zealand Herald, Saturday 7 March 2026
Over the past six weeks two very different narratives have been playing out in New Zealand about young people and social media.
One comes from scientists, educators and parents raising the alarm about the harms young people are experiencing online. Their concerns are increasingly being echoed in courts and parliaments around the world.
The other comes from the platforms themselves, and from those funded by them, suggesting the risks are manageable and that responsibility sits largely with families.
These narratives cannot both be true.
This week New Zealand’s Education and Workforce Committee released the final report from its inquiry into online harms facing young people. After receiving more than 400 submissions and hearing nearly 14 hours of evidence, the committee concluded that the degree of social, psychological and physiological harm warrants urgent government action.
Among its recommendations are age restrictions for social media platforms, stronger liability for technology companies, regulation of algorithmic recommendation systems and a ban on apps that create non-consensual deepfake sexual imagery.
Parliament has formally acknowledged that something serious is happening.
At almost exactly the same moment, Meta has been running a carefully managed public relations campaign in New Zealand.
Influencers have been paid thousands of dollars to promote features and “safety tools” on Meta platforms as part of what the company describes as “safety camps”. Panels have appeared with carefully selected experts discussing digital wellbeing. Messaging has centred on parental controls, balance and responsible use.
On the surface it looks constructive. Collaborative, even.
But context matters.
Meta is currently defending itself in landmark litigation in the United States brought by families and children alleging its products are addictive and harmful. The company disputes those claims. But testimony emerging in court has revealed internal research that raises serious questions about the parental tools now being promoted here.
One internal initiative, known as Project MYST, reportedly found that parental supervision and controls had little association with teens’ ability to moderate their social media use. It also found that teenagers who had experienced adverse life events were less able to moderate their use. In other words, the young people most vulnerable offline may also be the most vulnerable online.
That finding sits uncomfortably alongside the message currently being promoted to parents: that the solution lies largely in the settings menu.
This is where incentives matter.
Influencers are typically paid according to the size and value of the audience they reach. The larger the following, the higher the fee they can command for sponsored posts. But audience composition matters too. Influencers whose followers skew young are particularly valuable to advertisers trying to reach teenagers and young adults, a demographic increasingly difficult to access through traditional media.
That creates a simple commercial reality. The larger an influencer’s youth audience, the more valuable they become. In other words, the system quietly rewards those who capture and hold the attention of young people.
Youth attention is not a by-product of the system. It is the asset being packaged and sold to advertisers. That makes youth safety an awkward topic when youth engagement drives revenue.
A growing body of global research suggests this tension has real-world consequences.
A major report from Sapien Labs analysed data from 2.5 million people across 85 countries to examine mental health trends across generations. Young adults used to report better mental health than older generations. That pattern has now reversed: in every country examined, young adults now report worse mental health than older adults.
Sapien Labs identified four factors that together predict most of this generational decline: diminished family bonds, diminished spirituality, increased consumption of ultra-processed food and smartphones being introduced at increasingly younger ages.
One finding should give policymakers pause. A younger age of first smartphone ownership is associated with increased suicidal thoughts, aggression and other mental health difficulties later in life.
Gen Z is the first generation to grow up with a smartphone in their pocket, and among this cohort earlier access is linked not only to anxiety and sadness but also to detachment from reality, suicidal ideation and aggression.
Researchers outline plausible mechanisms. Early smartphone access disrupts sleep, increases exposure to harmful or explicit content and heightens the risk of cyberbullying during critical developmental years. Time spent online also displaces opportunities to develop social cognition: the ability to read facial expressions, interpret tone and navigate complex social dynamics.
Place these findings alongside what is emerging from the courtroom in California and a troubling pattern begins to appear.
According to plaintiffs’ filings, Meta once tested a feature known internally as Project Daisy that would hide “likes” on Instagram after research suggested doing so would make teenagers significantly less likely to feel worse about themselves. The initiative was later reportedly shelved after internal assessments concluded it was negative for key platform metrics, including advertising revenue.
Meta disputes aspects of the plaintiffs’ claims. But the episode illustrates a deeper dilemma: when safety and engagement pull in different directions, which prevails?
Algorithmic feeds are engineered to optimise engagement. Intermittent variable rewards, such as unpredictable likes and comments, are powerful behavioural drivers. Notifications are designed to pull users back. These are not neutral design choices.
If a product is deliberately designed to be difficult to put down, it is disingenuous to suggest responsibility rests primarily with families.
For years the dominant narrative has been parental responsibility. If a child is struggling online, the assumption is that parents simply need to supervise more closely or set stricter limits. But if internal research suggests parental controls have limited association with teens’ ability to moderate their use, that argument weakens considerably.
If this were alcohol or tobacco, we would recognise the pattern immediately.
When the harms of smoking became undeniable, tobacco companies funded scientists, amplified alternative explanations and promoted “smoke responsibly” messaging that shifted the debate toward individual choice.
Alcohol companies have long promoted moderation campaigns while continuing aggressive marketing.
The parallels are difficult to ignore. A powerful industry facing growing scrutiny emphasises personal responsibility while continuing to design and promote products that maximise consumption.
New Zealand sits squarely in the group of countries the Sapien Labs research identifies as most exposed to these risks. We are wealthy, highly connected and give smartphones to children early.
Our own Parliament has now acknowledged the scale of harm and called for urgent action.
The data is global. The litigation is public. The parliamentary scrutiny is formal. The lobbying is local.
The question is not whether reform is coming. It is how quickly we are willing to move before more children pay the price.