This week the European Parliament Culture and Education (CULT) Committee voted on the European Commission's draft directive on "combating the sexual abuse, sexual exploitation of children and child pornography" (repealing Framework Decision 2004/68/JHA).
This directive is intended to be a comprehensive look at child protection, ranging from ensuring that possession of child pornography is considered a serious offence across the European Union, to making sure that a registered sex offender in one member state cannot get a job working closely with children in another.
There is, of course, strong agreement about the appalling nature of child abuse on the internet and the need to tackle it across Europe. The internet knows no boundaries, and therefore the problem has to be addressed internationally.
Images of children being abused are not simply evidence of the abuse, but actually represent a further abuse of their rights every time that image is viewed or downloaded from the internet. The police and internet service providers should be doing everything they can to stop people accessing illegal images and track down and prosecute the creators, publishers and the people downloading them.
Ideally, once such images are discovered, they would be deleted from their host server and investigated by the relevant police force. This is indeed what happens in many instances. But getting images deleted is sometimes difficult, because internet service providers in other countries are not necessarily responsive when made aware that there is illegal content on their servers. There is little if anything we can do at the moment to compel servers in third countries to remove illegal images. This means that though the authorities may be aware of abuse images, they cannot stop them from being viewed within their own territory. So in effect they must stand by and allow images of child abuse to continue to be viewed and downloaded within their own country, hoping that the images will eventually be removed, which might never happen.
There has been an often heated debate in the CULT Committee about whether or not blocking of internet sites should be allowed to combat online sexual abuse images. Blocking would allow a wall to go up around the site that contains illegal images. This would mean an internet user within a country that utilises blocking technology would be unable to access the site and would be confronted with a warning message when they tried to access illegal material.
Blocking is not without its problems, but as a temporary or 'last resort' measure, when deletion is difficult or impossible, it can be effective. There is an argument that blocking is too easy to circumvent. I don't buy this argument: circumvention is in fact difficult, and I don't believe most people would know how to do it. Furthermore, there is strong evidence that blocking does stop people from being able to access illegal content. We have this evidence because a number of countries already block illegal sites, including the UK, where the main children's charities strongly support blocking as one tool in the anti-child abuse armoury.
In 2009 BT announced that they estimated their blocking solution was preventing up to 40,000 attempts per day to access known child abuse web sites over their broadband network. Extrapolated across the whole of the UK broadband network, this suggests that blocking is preventing up to 58 million attempts per year. Five months after blocking was launched in Denmark in 2006, the Danish police estimated that 238,000 users had attempted to reach known illegal child abuse images on the web. It was estimated that in Norway blocking was stopping between 10,000 and 12,000 attempts per day, and in Sweden in the order of 20,000 to 30,000 per day. These are substantial numbers, which give us an insight into the scale of offending that blocking addresses. The Danish police referred to the number of users, while the UK, Norwegian and Swedish numbers refer to attempts, many of which will be "bots" (bits of software that scour the internet). Even so, each and every attempt, whether by a human or an automated system, represents a criminal act of some sort being prevented.
So it seems obvious that blocking can be an important tool in the fight against the distribution of child abuse images over the internet, and that is why the Commission's directive would make it mandatory for all member states to use it.
In the very difficult process of negotiations in the run-up to our vote in CULT, Ms. Kammerevert replaced mandatory blocking with a non-mandatory measure subject to strict regulations. I'm not against having more safeguards in place to stop innocent sites being blocked, but removing the mandatory element from the directive means that the situation in the EU does not substantively change. Member states can already introduce blocking; the point is to make it an EU-wide measure, to make sure everything that can be done is being done to stop child abuse images being available.
Nevertheless, non-mandatory blocking is what the Socialist and Democrat group on CULT wanted and, though I am the coordinator of the group, this is what they voted for. I thank Petra for all her hard work on this opinion, but I do hope that in the subsequent stages of this directive's passage through the parliament, blocking will be reintroduced as a mandatory action for member states. The Civil Liberties committee is responsible for producing the main report, and I know that the rapporteur, Angelilli, is in favour of blocking, so it remains a distinct possibility that the final report the parliament produces will contain it as a mandatory measure.