A key focus of the Bill is to force companies to assess the risks children face while on their sites or platforms and to put strategies in place to minimise those risks.
It will cover social media platforms and search engines, but also thousands of other online companies, which can be fined for failing in their duties.
Ofcom will be the designated regulator of the regime, so we'll know more about how it will work in practice once it becomes law.
What is the Online Safety Bill?
The Online Safety Bill is a wide-ranging piece of draft legislation that aims to improve the safety of children and adults online.
At the time of writing, the passage of the Online Safety Bill has stalled at the report stage, so there is some way to go before it can become law. Changes in Government departments may also affect the Bill's progress.
The basic aims of the Online Safety Bill in relation to children are to force companies to:
- Assess the risks faced by children online
- Take action to tackle illegal activity that threatens the safety of children online
- Prevent access to material that is harmful to children
- Ensure there are strong systems in place to protect against activities that are harmful to children
It should also be straightforward for parents and children to report harmful content or activity, and platforms must take action on these reports.
Companies will also have a duty to report any child sexual exploitation or abuse on their platforms to the National Crime Agency (NCA).
Communications regulator Ofcom is set to be given oversight of the Online Safety regime and will be able to impose sanctions on companies that don't meet their responsibilities.
This could include fines of up to £18m or 10% of a company's annual global turnover (whichever is greater) for egregious breaches.
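To make the "whichever is greater" mechanics concrete, here's a minimal sketch in Python using hypothetical turnover figures; the function name and numbers are illustrative only, and how turnover is actually assessed would depend on the final wording of the Act and Ofcom's enforcement guidance.

```python
def max_osb_fine(annual_turnover_gbp: float) -> float:
    """Sketch of the Online Safety Bill's maximum penalty:
    the greater of £18m or 10% of annual turnover.
    Illustrative only, not a definitive reading of the Bill."""
    return max(18_000_000, 0.10 * annual_turnover_gbp)

# A smaller platform with £50m turnover: the £18m floor applies,
# since 10% of turnover would only be £5m.
print(f"£{max_osb_fine(50_000_000):,.0f}")     # £18,000,000

# A major platform with £2bn turnover: 10% of turnover applies.
print(f"£{max_osb_fine(2_000_000_000):,.0f}")  # £200,000,000
```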
Which platforms are affected by the Online Safety Bill?
The short answer to this is that we don't yet know exactly which companies will be covered under the Online Safety Bill. This will only become clear after the Bill becomes law.
However, an impact assessment published by the Government says it expects over 25,000 platforms to fall within the framework's scope. This includes:
- User-to-user services that allow users to generate and share content (social media platforms, online marketplaces, games with chat options etc)
- Search services (search engines and other search options for scouring multiple websites/databases)
So, we can expect to see big names like Facebook, Twitter, Google, Snapchat and TikTok brought into scope, but there will also be plenty of smaller names required to abide by the rules too.
There are different thresholds and categories depending on the risk of harm, with Category 1 covering the services with the highest risk of harm and the greatest reach (i.e. the major social media platforms).
Ofcom could have up to 18 months to work through all this once the Bill has received Royal Assent, so however quickly the legislation passes, there may still be significant delays before companies know which responsibilities they need to adhere to.
How will the Online Safety Bill protect children?
Under the Online Safety Bill, services that are likely to be accessed by children must undertake risk assessments and manage any risks of harm they identify.
Services will need to:
- Ensure children are made aware of (and understand) the terms of service
- Provide higher standards of protection for children than adults
- Consider the different needs of children in different age groups
They'll have a duty to keep their children's risk assessment up to date and to notify Ofcom when new or heightened risks to children emerge.
If this all sounds a little woolly, that's partly because the Bill is written in legislative language and partly because Ofcom will need to set out details of how the regime will work in practice and what steps companies will need to take to comply.
The basic premise in the Online Safety Bill in terms of protecting children could be summed up as:
- Platforms must take greater responsibility for protecting children from harm online and can be sanctioned if they fail to do so
So, this covers direct harm that is classed as illegal content, such as child sexual abuse and hate crimes, but it also covers legal content that might cause psychological or physical harm in the future.
As an example, this could include eating disorder content.
It also seeks to target underage exposure to content and activity that could result in future harm, with underage access to pornography and violent content mentioned as specific examples.
Whether the Online Safety Bill will truly improve protections for children online may depend on the final guidelines drawn up by Ofcom and how they are applied to companies.
What do children's organisations say?
Children's organisations and charities are frustrated by the ongoing delays to the Online Safety Bill.
There are also criticisms that it still doesn't go far enough.
For example, the NSPCC is pressing for a statutory watchdog to advocate for the rights of children online. Under its proposals, this would be funded by a levy on the tech industry.
It ran a poll that found 88% of adults believed it was necessary for the Online Safety Bill to introduce an independent body to protect children at risk from online harm. The poll also showed that:
- 72% believe children should receive at least the same amount of representation from an independent body as customers in other regulated sectors (such as postal services and transport)
- 79% of those with an opinion think it's likely tech companies will avoid having to comply fully with regulations while 77% think it's likely social media companies will seek to downplay the impact of their products on children
- 58% believe that children will be less protected from harm if the Government doesn't commit to an independent body
However, during the Committee stage of the Bill, the Digital Minister ruled out the idea of an independent body - at least until the regime has been running for a few years.
Evolution of online child protection
The protection of children online has been high on the Government's agenda since 2008 when then Children's Secretary Ed Balls announced the launch of the UK Council for Child Internet Safety (UKCCIS).
This morphed into the UK Council for Internet Safety (UKCIS) in October 2018, with a broader remit than just protecting children online.
UKCIS played a role in developing the Online Harms White Paper that became the Online Safety Bill, and the idea that a regulator should be appointed for online harms has become reality (even if we didn't know when we discussed it in 2018 that the regulator would be Ofcom).
Yet the Online Harms White Paper and the Online Safety Bill have progressed at a glacial pace. It's been over four years since the Government first put the idea in motion, and two prime ministers have come and gone in that time.
While there is obviously a need to ensure the legislation is effective and doesn't overreach, it could be argued that the time it's taken to bring it into law means many children have encountered harmful content or behaviour online that might have been preventable.
That said, changes have been brought in over the years to improve child safety online.
Parental controls
The expansion of parental controls means that the four biggest broadband providers are required to offer router-level controls to households.
So, we see BT, Sky, Virgin Media and TalkTalk encouraging customers to switch on parental controls at the time of set-up, although Sky Broadband Shield has long been opt-out rather than opt-in (meaning it's on by default and customers have to act to switch it off).
Parental controls are contentious, and some children can find a way around them.
For example, Ofcom research in 2022 found that 6% of children had circumvented parental controls, while 19% had deleted their browsing history and 21% had used "incognito mode" to browse the internet.
This suggests that relying on router-level controls to stop children seeing inappropriate online content is not wholly effective, although it might be one piece in a larger puzzle.
Age Appropriate Design Code
The Age Appropriate Design Code (also known as The Children's Code) came into force in September 2020 and was fully implemented a year later.
The Code complements the aims of the Online Safety Bill, prompting companies to put children's privacy at the heart of online services likely to be accessed by children, such as apps, games and news services.
It includes 15 standards such as:
- Considering the best interests of the child when designing and developing services
- Factoring in different ages and development needs of children
- Minimising the amount of data needed to access the service
- Providing online tools to help children report concerns about their data privacy
The Information Commissioner's Office (ICO) says that major companies such as Facebook, YouTube and TikTok made significant changes to their child safeguarding measures during the Code's transition period.
However, critics argue that the Code places the onus on safety tech and sweeping decisions on how children develop at different ages rather than improving parental involvement and helping children learn the skills they need to respond to potentially risky situations online.
What are the risks for children online?
To effectively protect kids online, it's important to understand what specific risks there are for children when they're browsing and interacting.
There are three major types of risk children run into online:
- Content risk where children receive mass-distributed content which exposes them to inappropriate material such as violence, hate speech and pornography.
- Conduct risk where children participate in a situation which causes harm, whether that's harm to themselves or someone else. For example, imitating dangerous behaviour such as drug taking or bulimia can be harmful to the child themselves while bullying, sexting, online aggression and the promotion of harmful behaviour are further instances where dangerous interactive situations can develop.
- Contact risk where children are at risk of becoming victims in situations such as bullying, meeting strangers and privacy threats.
While these umbrella terms are useful, it's also helpful to look at specific types of harm children are at risk of online to show how varied these can be. So, children can be at risk of:
- Cyberbullying
- Viewing pornographic material
- Sexting
- Peer pressure
- Stranger danger
- Self-harm or suicide
- Illegal online gambling
It's clear that the Online Safety Bill can start to address some of these problems by making tech companies and online platforms more responsible for what children encounter online.
However, it isn't a catch-all solution and the fact that many children have profiles on social media when they're below the required age demonstrates that tech tools can only go so far.
Summary: Will the Online Safety Bill protect kids?
The aims behind the Online Safety Bill can be summed up in a few key points:
- It introduces a duty of care for companies to safeguard children on their sites and platforms.
- Ofcom will take on the role of regulator and set a framework for companies to follow.
- Companies can be fined if they fail to live up to their responsibilities.
So far, so good, yet we won't know how all this will work in practice until after the Online Safety Bill becomes law and Ofcom has published a Code of Practice covering it.
We've seen various initiatives over the years to protect children online, with some falling by the wayside such as the pornography age verification policy that was scrapped in 2019.
Getting to the point where the Online Safety Bill is close to becoming law is a step in the right direction and the provisions it sets in place should be welcomed.
However, it's also important to consider feedback from organisations like the NSPCC that say it doesn't go far enough - and that companies may simply try to find ways around it.