AI has no place in the NHS if patient privacy isn’t assured

Posted on 6/09/2017 by

DeepMind is working on a technical solution to boost transparency when it comes to AI in healthcare – but it's a long road to machines gaining patient trust

Tech companies are asking to step into doctors' offices with us, and eavesdrop on all the symptoms and concerns we share with our GPs. While doctors and other medical staff are bound by confidentiality and ethics, we haven't yet figured out what it means when a digital third party — the apps and algorithms — is allowed in the room, too.

Healthcare isn't the place to mimic Facebook's former motto to "move fast and break things", or push regulations to see where they bend, a la Uber. Instead, patients need to trust who's in the consultation room with them, says Nathan Lea, senior research associate at UCL's Institute of Health Informatics and the Farr Institute of Health Informatics Research. "You want the individual to be able to share with the doctor or clinical team as much detail as necessary without the anxiety that someone else will be looking at it," he says.

Artificial intelligence, machine learning and algorithms are crunching through medical data, helping to read images more accurately and even suggesting a diagnosis. But the use of such technology in healthcare has proven problematic. An agreement between DeepMind, the London-based machine learning firm, and the Royal Free London NHS Foundation Trust was ruled unlawful by the Information Commissioner's Office earlier this year.

The failure by the Royal Free to ask for patient consent isn't just bad news for privacy, it could also slow or even prevent potentially life-saving machine learning advancements from being used in healthcare. In fact, it's already happened. Before DeepMind, the now-collapsed NHS care.data digital patient records scheme was shelved not for technical faults — though it did leak patient data online — but because it failed to ask for informed consent from patients. Lea worries machine learning will fall at the same hurdle, making the public "technophobic".

That concern has occurred to DeepMind, which is working on a technical solution to boost transparency and, in turn, trust, revealing a tracking project called Verifiable Data Audit (VDA). "It is designed to allow partners to check on who has accessed data, when, and for what reason," says Andrew Eland, engineering lead for health at DeepMind. "It increases transparency by ensuring accurate ‘spot checks’ can be made on data access, creating real accountability."

VDA differs from existing audit systems as it uses cryptography to protect data from being changed without anyone noticing. "Whilst VDA will be useful for auditing access to health data, it could also be used to build trust in systems more generally," Eland says. "For example, creating unforgeable timestamps to make clear when something was written or created, or ‘watermarking’ data sets to ensure they have not been tampered with – something very important for the machine learning and academic communities." Such a system sounds ideal for keeping tabs on public health data as it meanders through private companies, helping to build trust and confidence, but there's a problem. VDA doesn't yet exist.
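VDA does not yet exist and DeepMind has not published its internals, but the core idea of a tamper-evident access log can be sketched with a simple hash chain: each entry commits to the hash of the one before it, so rewriting any past entry breaks every hash that follows. The Python below is a generic illustration, not DeepMind's design; the `AuditLog` class and the example field names are hypothetical.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log in which each entry commits to the previous one,
    so any retroactive edit breaks every later hash."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the first entry

    def append(self, who: str, record_id: str, reason: str) -> dict:
        entry = {
            "who": who,               # e.g. a clinician or service account
            "record_id": record_id,   # which patient record was touched
            "reason": reason,         # the stated purpose of the access
            "timestamp": time.time(),
            "prev_hash": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; returns False if anything was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("clinician_42", "patient/881", "review kidney-function alert")
assert log.verify()
log.entries[0]["reason"] = "something else"  # tamper with history
assert not log.verify()
```

A real deployment would also need to anchor the latest hash somewhere the log's operator cannot rewrite; otherwise an insider could simply rebuild the entire chain after editing it.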

While DeepMind is already handling real patient data from NHS hospitals, its technical measures to ensure privacy and confidentiality remain works in progress. "This is complex technology and while DeepMind is making strong progress in the underlying technology, there are many practical challenges to overcome before VDA systems will be ready to use in practice," Eland says, pointing to interoperability issues with data formats in particular.

There's another problem. The initial implementation would only allow partner hospitals to check that DeepMind is using patient data for approved purposes; extending that ability to patients, patient groups, or presumably regulators would raise "complex design questions", Eland says. That's problematic: in the DeepMind privacy case, it was in fact the Royal Free London NHS Foundation Trust that was called out by the ICO, so hospitals are also at fault when it comes to patient privacy.

But it's a start, and if DeepMind's smart auditing fails to do the trick, there are other technologies available, such as the blockchain. DeepMind's VDA is in some ways similar to a public ledger system, and while the company notes that blockchain's distributed nature means it would be unwieldy for most hospitals and eat up too much energy, its open nature could address the access issues Eland highlights.

A Royal Society report published earlier this year suggested other technical solutions for the privacy concerns raised by machine learning, using machine learning itself. "Privacy-preserving machine learning offers an interesting technological approach to addressing questions about data access and governance," says Peter Donnelly, Chair of the Royal Society Machine Learning Working Group.

"If a patient can understand what's being proposed, can see the benefits, you'll find they'll be more compelled to participate in something like this"


Nathan Lea, UCL

For example, differential privacy introduces randomness into aggregated data, reducing the risk of re-identification while preserving conclusions made from the data, explains Dr Mark Wardle, a consultant neurologist and health informatics expert.
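A minimal sketch of that idea, assuming a simple counting query (adding or removing one patient changes the answer by at most 1) and the standard Laplace mechanism; the function name and epsilon value are illustrative, not taken from any NHS system:

```python
import random

def private_count(true_count: int, epsilon: float) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    For a counting query the sensitivity is 1 (one patient changes the
    count by at most 1), so Laplace noise with scale 1/epsilon is enough
    to mask any single individual's presence in the data.
    """
    scale = 1.0 / epsilon
    # The difference of two exponentials is a sample from Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# e.g. how many patients in a cohort have a given diagnosis
print(private_count(1284, epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy, and repeated queries spend a cumulative privacy budget, which is why the technique suits aggregate statistics rather than individual records.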

Another technique is homomorphic encryption, which "allows information such as private medical data to be encrypted and subsequently processed without needing decryption," Wardle explains, adding: "such technology is, as far as I am aware, at a very early stage as it is extremely computationally-intensive."
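Wardle does not name a particular scheme, but the additively homomorphic Paillier cryptosystem gives a concrete flavour of the idea: multiplying two ciphertexts produces a ciphertext of the sum of the plaintexts, so simple aggregates can be computed without decrypting anyone's individual value. The sketch below uses tiny hard-coded primes purely for illustration; real keys are thousands of bits long.

```python
import math
import secrets

# Toy Paillier key pair with tiny hard-coded primes (illustration only).
p, q = 293, 433
n = p * q                                          # public modulus
n_sq = n * n
g = n + 1                                          # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1), the private key
mu = pow(lam, -1, n)                               # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt an integer 0 <= m < n with fresh randomness."""
    while True:
        r = secrets.randbelow(n)
        if r > 0 and math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Recover the plaintext from a ciphertext."""
    x = pow(c, lam, n_sq)
    return (((x - 1) // n) * mu) % n               # the classic L(x) = (x - 1) / n step

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
a, b = 20, 22
c_sum = (encrypt(a) * encrypt(b)) % n_sq
assert decrypt(c_sum) == a + b                     # 42, computed on encrypted values
```

Fully homomorphic schemes, which allow arbitrary computation rather than just addition, are where the heavy computational cost Wardle describes really bites.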

In the meantime, the Wellcome Trust's policy adviser Natalie Banner points to the UK Anonymisation Network, which offers frameworks to help guide anyone working with personal data, as well as to the use of synthetic data sets, so there's no risk to patient data when companies are training their AI. "There are different kinds of controls that you can insert to try and ensure you're protecting privacy," she says.
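As a minimal sketch of the synthetic-data idea (the records, field names and resampling approach below are invented for illustration), new rows are drawn from the distributions of the real columns, so a model can be trained without any row corresponding to an actual patient:

```python
import random
from collections import Counter

# Stand-in for protected patient rows; in practice these never leave the trust.
real = [
    {"age_band": "60-69", "hba1c": 48, "smoker": False},
    {"age_band": "50-59", "hba1c": 41, "smoker": True},
    {"age_band": "60-69", "hba1c": 53, "smoker": False},
    {"age_band": "70-79", "hba1c": 59, "smoker": False},
]

def synthesise(records, n):
    """Resample each column independently from its empirical distribution,
    so no synthetic row maps back to a real patient."""
    columns = {k: [r[k] for r in records] for k in records[0]}
    return [{k: random.choice(v) for k, v in columns.items()} for _ in range(n)]

synthetic = synthesise(real, 10)
print(Counter(r["age_band"] for r in synthetic))  # marginals roughly preserved
```

Sampling columns independently throws away the correlations between them; production synthetic-data generators model the joint structure, and still need checking to make sure rare combinations cannot identify real people.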

But those technical measures protect data from misuse once it's already in the hands of third parties, which happens — or should happen — only after one key aspect of our data protection laws is followed: informed consent. While DeepMind is seeking to build trust in its systems by engineering verification into them, that doesn't solve the problem of asking patients whether they'll allow its machine-learning apps and tools into the consultation room with them.

Answering that aspect of data protection and privacy may not need a technological answer. While there are plenty of tech tools to help improve communication and ask a yes/no question, ensuring people genuinely understand the risks and benefits of any tech system may well require no more or less than a conversation with their GP — an idea mentioned in the National Data Guardian's review of the subject. "You already have a basis to communicate what you're doing with the patient, and that's through their carer, their practitioner," says UCL's Lea. "If a patient can understand what's being proposed, can see the benefits, and can make the balance for themselves of the risk to their privacy versus the benefit to their health, I think you'll find they'll be more compelled to participate in something like this."

In the end, the obstacles to AI in medicine won't be cleared by technology or legislation alone, argues Wardle. "The main conclusion is that there is no single answer but a combination of solutions: give people choice, engender public trust by appropriate regulatory and legislative frameworks, adopt appropriate technical safeguards and de-identify information at a level that minimises re-identification risk and satisfies the specific requirements for analytics on that occasion, use verifiable logs to ensure access to confidential information is logged and capable of inspection."

That may seem a long list of challenges, but if we can teach machines to search for signs of cancer in photos of our eyes or collate streams of data into life-saving predictions, they're hurdles we can clear in order to welcome AI and its kin into the consultation room. "Medical data is terrifically valuable, powerful and offers tremendous scope to do good, but we also have a great responsibility to protect that data and ensure access is safe, secure and transparent," Wardle says.

Source: Wired