How much of our freedom on the internet should we be denied to guard against online harms?
In the Online Safety Bill, the government gives its tortuous and inconclusive answer to this fiendishly tricky question.
This Bill aims to prevent harm to internet users in the UK from online content, by imposing ‘duties of care’ on service providers (SPs) in relation to certain ‘illegal’ and ‘harmful’ content.
They will have to restrict or remove some such content and be regulated by Ofcom under the direction of the government.
Article 10 of the European Convention on Human Rights, protecting freedom of expression, gives rights to disseminate information and ideas, including via the internet. Public authorities must justify any interference with these rights resulting from legislation, and any such interference must be necessary in a democratic society.
I believe the Bill, if enacted, will result in such interferences.
There is no issue about the need to restrict some of the content which has been discussed in the Bill’s evolution, such as content harmful to children or which may encourage self-harm.
But the Bill has the potential to restrict many other types of content.
Is it sufficiently, and sufficiently clearly, targeted at content which can justifiably be restricted in Article 10 terms?
How will the SPs, especially the big tech platforms, respond to such legislation under the regulatory influence of Ofcom and the Department of Culture?
And when writers and speakers have their content restricted unjustifiably will they, and their audience, be able to obtain swift and straightforward redress?
The Bill is long and complex. The following are merely the headline provisions:
‘Illegal content’ is content which the SP has ‘reasonable grounds to believe’ amounts to a ‘relevant offence’, or is such an offence when disseminated. Relevant offences are of four types: terrorism; child sexual exploitation and abuse (CSEA); offences in, or ‘of a description’ in, regulations made by the Culture Secretary (‘CSec’); or any other offence where the intended victim is an individual. Where an offence is covered by the regulations, the content is ‘priority illegal content’.
All regulated SPs would have to maintain an ‘illegal content risk assessment’, covering the risks of users encountering illegal content on the service, the level of risk of harm to individuals from such content, and how the design and operation of the service may increase or decrease those risks.
Various ‘safety duties’ would be imposed. Proportionate steps would be required to mitigate the identified risks of harm from such content and to operate the service so as to minimise priority illegal content. User-to-user (‘U2U’) providers would have to take down any illegal content of which they become aware, and search services would have to minimise the risk of such content being encountered on their service.
Content would be harmful to adults (‘HTA’) if it is priority content of a ‘description’ specified in regulations made by CSec; or if the SP has ‘reasonable grounds to believe’ either that the ‘nature of the content is such that there is a material risk of [it] having, or indirectly having, a significantly adverse physical or psychological impact on an adult of ordinary sensibilities’, or that its dissemination creates such a ‘material risk’.
There are also three corresponding categories of content harmful to children (‘HTC’). These are defined in the same way as for adults, but with a child in place of an adult in the definition.
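Purely by way of illustration, a provider operationalising this two-limb test might end up with decision logic of the following shape. This is a minimal Python sketch, not anything the Bill prescribes: the Item fields, the scoring inputs and the 0.5 threshold standing in for ‘reasonable grounds to believe’ a ‘material risk’ are all my own assumptions, and the same shape would apply for HTC content with a child in place of an adult.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A piece of user content plus whatever signals the provider holds about it."""
    text: str
    in_priority_regulations: bool   # matches a 'description' in CSec regulations (assumed flag)
    risk_of_adverse_impact: float   # provider's own estimate of the 'material risk' (0.0 to 1.0)
    risk_from_dissemination: float  # estimated risk created by how the content is spread (0.0 to 1.0)

# Purely illustrative stand-in for 'reasonable grounds to believe' there is a 'material risk'.
MATERIAL_RISK_THRESHOLD = 0.5

def is_harmful(item: Item) -> bool:
    """Sketch of the test: priority content by description, or content the provider
    believes carries a material risk of a significantly adverse impact, either in
    itself or through its dissemination."""
    if item.in_priority_regulations:
        return True
    return (item.risk_of_adverse_impact >= MATERIAL_RISK_THRESHOLD
            or item.risk_from_dissemination >= MATERIAL_RISK_THRESHOLD)
```

The only point of the sketch is that everything turns on how the provider chooses to estimate ‘material risk’ and where it sets the threshold; the Bill supplies neither.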
The duties relating to HTA content would apply to ‘Category 1’ (‘Cat1’) services. There are duties: to maintain a risk assessment (like that for illegal content) and to notify Ofcom of the kinds, and incidence, of non-priority HTA content on the service; and to ‘protect adults’ online safety’ by specifying in clear and accessible terms of service how HTA content is to be dealt with by the service.
All regulated SPs would have to assess whether it is possible for children to access the service. Access is presumed possible unless there are systems which mean children are ‘not normally able’ to access it. Where access is possible, SPs would assess whether ‘the child user condition’ is met, that is, whether the service has, or is likely to attract, a significant number of child users. If so, the service is deemed ‘likely to be accessed by children’.
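Read as a decision procedure, the access assessment looks something like the following hedged sketch. The function and parameter names are mine, and ‘significant number’ is a judgement the Bill leaves unquantified.

```python
def likely_to_be_accessed_by_children(children_not_normally_able_to_access: bool,
                                      has_significant_child_users: bool,
                                      likely_to_attract_child_users: bool) -> bool:
    """Sketch of the access assessment: access by children is presumed possible
    unless systems mean they are 'not normally able' to access the service; if
    access is possible, the 'child user condition' (a significant number of child
    users, actual or likely) makes the service 'likely to be accessed by children'."""
    if children_not_normally_able_to_access:
        return False  # access is not considered possible, so the condition is not reached
    child_user_condition = has_significant_child_users or likely_to_attract_child_users
    return child_user_condition
```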
There must then be a ‘children’s risk assessment’ of the service (again like that for illegal content), with any incidence of HTC content being notified to Ofcom. There are four duties to ‘protect children’s online safety’. Proportionate steps would be required to mitigate and manage the risks and impact of harm to children in different age groups from the content; and to have systems and processes designed to protect such children. For U2U services these should prevent children ‘of any age’ from encountering primary HTC content. For search services they should ‘minimise the risk’ of such an encounter. Again, clear and accessible terms of service or policy statements should specify how protection is given.
When implementing safety policies, all regulated SPs would ‘have regard to the importance of’ protecting user rights to freedom of expression ‘within the law’ and protecting them from ‘unwarranted infringements of privacy’. There are duties on providers of Cat1 services to ‘take into account’ the importance of freedom of expression in relation to content of ‘democratic importance’ and ‘journalistic content’.
There are also user reporting and redress duties. All regulated services must have systems that allow users and ‘affected persons’ easily to report illegal content. Cat1 services must have these for HTA content. Services that can be accessed by children must have them for HTC content.
All SPs would have complaints procedures that provide for ‘appropriate action’ to be taken by the provider in response. These would be either complaints about content and/or about an SP’s failure to comply with the duties of care, or complaints by users whose content has been taken down or restricted, or who have had their ability to access the service restricted by the SP because of their content.
Ofcom would consult on, and draft, codes of practice for SP compliance, but CSec could direct changes to a code, to reflect government policy, before it is laid before Parliament. Ofcom would enforce compliance. A compliance notice could be enforced in civil proceedings by court order, and a penalty of up to £18m or 10% of its qualifying worldwide revenue could be imposed on the respondent in any such proceedings.
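For a sense of scale, the cap might be computed as below. This sketch assumes the maximum is the greater of the two figures, which is a reading of the drafting rather than anything stated in this summary, and the revenue numbers are invented.

```python
def maximum_penalty(qualifying_worldwide_revenue: float) -> float:
    """Assumed reading: the cap is the greater of £18m and 10% of qualifying
    worldwide revenue (the summary above says only 'up to £18m or 10%')."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

# Illustrative figures only.
print(maximum_penalty(50_000_000))     # smaller provider: cap stays at £18m
print(maximum_penalty(2_000_000_000))  # large platform: cap rises to £200m
```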
Each of the two approaches to defining the targeted content is concerning.
It is not clear how the identification of such content via CSec regulations will be achieved in practice, or how wide the regulations will go. A power to identify by ‘description’ implies a wide SP discretion to decide whether and when the statutory description is met. The wider the description, the wider the discretion.
The ‘reasonable grounds to believe’ approach is even more concerning. The content does not have to be illegal or harmful as defined, just content which might be (according to this threshold).
It is also questionable whether SPs should be deciding questions to do with criminality, rather than passing concerns to the police and Crown Prosecution Service. You only need to skim the wording of the terrorism/CSEA offences to see the expertise required. Section 13(1A) of the Terrorism Act 2000, for example, criminalises publication of an image of an article ‘in such a way or in such circumstances… as to arouse suspicion’ that a person is a supporter of a proscribed organisation. This could be a T-shirt or banner with text. How qualified is the SP to make the required contextual judgement here?
The definition of harmfulness is elusive and vague, turning as it does on a ‘material risk’ of the specified consequences to a hypothetical person.
The scope for SPs getting it wrong here, including in ECHR Article 10 terms, is considerable. The judgements required would be difficult for humans. If they are left in the first instance to artificial intelligence, the scope is even greater. I doubt if AI really gets satire, ironic humour or polemical comment.
The Bill’s freedom of expression duty does not protect against this. SPs are not told to apply Article 10 principles. It identifies a more limited freedom, where the speech is adjudged to be ‘within the law’ (whatever this may mean). Article 10 is a fundamental right and not simply what is left outside of domestic law prohibitions. And the Bill’s limited right does not have to be respected. The duty is simply to ‘have regard’ to its importance when restricting and removing content.
There is a risk that users whose speech is restricted will struggle to get redress, or will simply not bother, chilling online free speech. There is very little in the Bill about redress for them, still less providing comfort that their rights will be properly respected.
Of course, SPs could engage in ‘light touch’ regulation. On close scrutiny, the Bill gives them much discretion, with little compulsion to restrict or remove; indeed, none at all for HTA content. Much will depend on precisely where Ofcom and the government decide to lead them. But the messaging in the Bill suggests there will be some pressure to be proactive in restricting and removing content that may be illegal or harmful under the legislation.
The big SPs may want to be seen to be doing this, to find favour with government given concerns about being taxed in the UK and protecting their monopolies from competition legislation. But the costs burden of compliance could be considerable for start-ups and SMEs, which may inhibit competition and innovation.
It looks as though the government has decided on a potentially wide-reaching and flexible piece of legislation to attract public approval (reflecting growing disapproval of big tech platforms) and as a tester for Facebook, Google et al. A simpler piece, or pieces, of legislation, aimed at more narrowly defined problematic content, would have been preferable, with a clear statement that Article 10 will apply.
How all this will play out remains to be seen. But it seems clear that the days of the wild west, unregulated domestic internet are ending. One way or another.