Hot on the heels of the Home Secretary’s speech, which focused on the sexual exploitation of children in the online world, technology seems to have been in the headlines most days. Sajid Javid spoke of his ambition to hold ‘internet giants’ to account in the fight against online exploitation; however, just this last week we have seen further frustration from the police regarding the hoops they have to jump through to access data relevant to abuse cases.
According to National Police Chiefs’ Council figures, referrals relating to the popular Kik app have risen from 48 per month in 2016 to 195 per month in 2018 – resulting in police forces opening more than 1,500 investigations involving Kik so far this year. Senior police officers and operational staff alike have commented on both the unsafe nature of apps like Kik and the difficulty of working with the company to gather evidence in cases of online exploitation. It remains to be seen whether the Home Secretary can bring enough pressure to bear on tech companies to tighten up their operating procedures whilst improving cooperation with law enforcement agencies.
The second headline-grabbing story relating to technology within the field of child sexual exploitation originated in a Guardian article highlighting several local authorities that have begun to use ‘big data’ to try to predict which children and families may require intervention to prevent child abuse. Data, and the algorithms which interrogate it, are already used in a wide range of industries to target resources effectively. However, most of those industries do not work to protect children from harm, and the data analysed is not personal information from vulnerable families.
Commentators have raised legitimate concerns about this ‘Minority Report-style’ approach to predicting which children may be at risk of abuse or exploitation. We do children and young people an injustice if we take a fatalistic approach to their opportunities based on their early life experiences, which this use of technology could be accused of doing. However, there is a compelling counter-narrative around the impact of multiple and sustained adverse childhood experiences in terms of the increased risk of harm later in life.
Further concerns have been raised regarding the skewed nature of the data being analysed: because it is drawn from the records of children and families already known to services, it could lead to an overly narrow focus on families from particular socio-economic backgrounds, stigmatising those groups whilst overlooking the risk to children from other backgrounds.
Regardless of whether technology is used to harvest data, what is critical to any analysis is human oversight of the process, to identify these types of limitations and ultimately to support knowledgeable professionals in making decisions, rather than have those decisions made for them by algorithms or processes. The Information Commissioner’s Office is taking a keen interest in the use of families’ personal data and may yet render all of the above debate futile if it finds data protection issues with the practice.
What remains crystal-clear to me is the need for our multi-agency responses to exploitation to be better supported by technology. At the moment, perpetrators are hands-down winning the battle in terms of using cutting-edge innovations to facilitate the abuse of children. Whilst some of the initiatives mentioned above may, or may not, be the solution, we need to focus the minds of the tech industry on the safeguarding implications of its innovations.
CSE Response Unit Lead