THE DEAL between Google's DeepMind and the Royal Free London NHS Foundation Trust has been criticised for its lack of transparency and its failure to inform patients about what it's doing with their data.
The stinging criticism comes in a new academic study, titled 'Google DeepMind and healthcare in an age of algorithms', written by Cambridge University academic Julia Powles and The Economist's Hal Hodson.
It asks why DeepMind was given easy access to millions of NHS patient records and suggests that there were many mistakes made when the two organisations signed an agreement back in 2015 to develop the acute kidney injury alert app, Streams.
The paper emphasised that patients "should not be hearing about these things only when they become front page scandals".
"The public's situation is analogous to being interrogated through a one-way mirror: Google can see us, but we cannot see it," it said.
In response to the claims, both the Royal Free and DeepMind have hit back, stating that the paper "completely misrepresents the reality of how the NHS uses technology to process data. It makes a series of significant factual and analytical errors, assuming that this kind of data agreement is unprecedented".
They argue that the data is being used for 'direct care', whereby the trust retains control of the data in what has become common practice between NHS trusts and digital health suppliers. Under 'direct care', the trust and the company assume that the individual has given implied consent for their data to be shared in a bid to prevent, investigate or treat illnesses.
But the study states that the data transferred includes individuals who have never had a blood test, never been tested or treated for kidney injury, or indeed patients who have since left the constituent hospitals or even passed away.
"The position that Royal Free and DeepMind assert—that the company is preventing, investigating or treating kidney disease in every patient—seems difficult to sustain on any reasonable interpretation of direct patient care," it said.
The authors said the deal should serve as a "cautionary tale and a call to attention" for other companies. They warned that while artificial intelligence and machine learning may well offer great promise, the 'special' relationship between Royal Free and DeepMind "does not carry a positive message".
"Digital pioneers who claim to be committed to the public interest must do better than pursue secretive deals and specious claims in something as important as the health of populations. For public institutions and oversight mechanisms to fail in their wake would be an irrevocable mistake".
In a response to DeepMind and Royal Free's claims that the paper was factually inaccurate, the authors have called for the parties to speak on record in an open forum. They intend to write a second paper on the topic in due course.
DeepMind's deal with the Royal Free has been under scrutiny for a few years. It began when Hal Hodson, then of the New Scientist, found that the deal went far beyond what had been announced up to that point, including that DeepMind would have access to a wide range of healthcare data on the 1.6 million patients passing through the three London hospitals run by the Royal Free London NHS Foundation Trust: Barnet, Chase Farm and the Royal Free.
This would include information about people who are HIV-positive, as well as details of drug overdoses and abortions.
Despite DeepMind and the NHS Trust updating the initial deal in November 2016, the first agreement is still being investigated by both the Information Commissioner's Office and the National Data Guardian.