Data privacy: advertising, Amazon and artificial intelligence
The security of patient healthcare data has never felt so precarious, and large corporations have seemingly unfettered access to personal information about millions of people. But are these concerns quite as pressing as they seem?
Many people don’t understand quite how their data is used, or why, and a lot of companies aren’t forthcoming with this information. Credit: Shutterstock
Menstrual tracking apps are some of the most popular on the digital health market. By collecting intimate data about users – when their period is due, when they last had sex, their mood and general health, even which sanitary products they use – these apps estimate when a user is most fertile and when to expect their next period.
According to a study recently carried out by Privacy International (PI), several menstrual tracking apps have been sharing user data with Facebook via the social network’s software development kit (SDK).
The SDK allows software developers to receive analytics on which aspects of their app are most popular with users. Developers using the SDK can also use Facebook services to monetise their apps through the Facebook Audience Network. Facebook may also use SDK data to serve individual users more personalised adverts when they access apps through the ‘Login with Facebook’ function available on many applications, with the Facebook user’s prior consent.
Whether this consent is given knowingly is questionable, as the relevant clause is often buried in lengthy terms and conditions pages that consumers are unlikely to read.
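For readers curious what this data sharing can look like in practice, the sketch below shows how an Android app using the Facebook SDK might log a custom in-app event. The event name and parameter are invented for illustration and are not taken from any app named in PI’s report; only the AppEventsLogger calls are real parts of the Facebook Android SDK.

```kotlin
import android.content.Context
import android.os.Bundle
import com.facebook.appevents.AppEventsLogger

// Minimal sketch, assuming a hypothetical period-tracking app. Once an
// event is logged, it is available to the developer as analytics and to
// Facebook, subject to the platform's terms of service.
fun reportCycleEntry(context: Context) {
    val logger = AppEventsLogger.newLogger(context)   // Facebook SDK call
    val params = Bundle().apply {
        putString("screen", "cycle_entry")            // hypothetical parameter
    }
    logger.logEvent("period_logged", params)          // hypothetical event name
}
```

A single call like this is all it takes for an in-app action to leave the user’s device, which is why PI’s findings about which apps send such events matter.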
Breaching the privacy of millions
PI found that the most popular menstrual apps on the market – two apps both named Period Tracker, from developers Leap Fitness Group and Simple Design respectively, along with Flo and Clue – did not share data with Facebook. However, smaller apps that still boast millions of users, such as Maya by Plackal Tech, MIA by Mobapp and My Period Tracker by Linchpin Health, all did.
Facebook later told the BBC that its terms of service prohibit app developers who use SDK from sharing sensitive health information, and that it actively seeks to remove this data from the platform.
However, it’s hard to ignore that Maya, MIA and My Period Tracker alone have over seven million downloads between them, with all of the resulting information made available to Facebook via the SDK. This isn’t a case of one small app with only a few hundred users flying under the radar, but of the sensitive data of millions of people being shared with the targeted advertising arm of one of the biggest social media giants in the world.
In its report, PI stated: “The responsibility should not be on users to worry about what they are sharing with the apps they have chosen. The responsibility should be on the companies to comply with their legal obligations and live up to the trust that users will have placed in them when deciding to use their service.”
Artificial intelligence and patient data
It’s understandable that tech users don’t want sensitive information about their health in the hands of big conglomerates, and it’s not just Facebook’s activities that have raised eyebrows. In the UK, news that the Amazon Alexa home assistant was going to start responding with NHS-ratified information when asked healthcare questions by users sparked concerns of a data protection disaster waiting to happen.
But some large corporations do appear to be taking steps to protect patient data. Google and the Mayo Clinic recently signed a ten-year partnership in which patient data will be stored in the cloud, to support the building of high-tech patient care products.
Despite Mayo’s data being stored in Google’s cloud, only the clinic itself will have access to patient information. Mayo may decide to share de-identified patient data with Google and other parties for specific research projects, but this data won’t be used to target advertising – unlike the data shared by the collective of menstrual tracker apps.
AI in medicine itself raises several significant legal and ethical concerns surrounding data and privacy. For instance, data broking giants in the US, such as LexisNexis and Acxiom, are known to mine personal data and engage in AI development.
These companies could theoretically sell their AI-acquired healthcare data to third parties such as marketers, employers and insurers. While the US Health Insurance Portability and Accountability Act (HIPAA) forbids healthcare providers and insurers from obtaining or disclosing patient information without consent, it doesn’t apply to other types of businesses.
Nothing to fear?
“The assumption that patient data is put at risk through AI is not necessarily true,” says AI echocardiography analysis firm Ultromics co-founder Professor Paul Leeson. “AI medical applications are being developed appropriately through regulatory systems to pretty much mirror what is required for any medical practitioner. Any AI applications that fail to start from that principle will fail to get adopted or get killed by authorities.”
Leeson isn’t wrong about this: regulators have already shown they are prepared to intervene.
The UK Information Commissioner’s Office (ICO) recently found that an agreement between Google’s DeepMind AI platform and the Royal Free London NHS Foundation Trust was in breach of the law.
Royal Free London was providing vast quantities of patient data to DeepMind for the development of its Streams platform, without adequately informing patients that their data was being used in this way.
The Trust was required to establish a proper legal basis for future data processing, complete a privacy impact assessment, and commission an independent audit into how patient data was processed during the implementation of Streams over the breach period. The partnership has continued, with no ICO objection to the new state of affairs.
As 2021 draws to a close, patient data finds itself in undeniably strange waters. Many people don’t understand quite how their data is used, or why, and a lot of companies aren’t forthcoming with this information either. Facebook, Amazon and Google all claim the vast amounts of healthcare data they’re acquiring are being used to improve services or for other innocuous reasons, but that can be hard to believe when you’re constantly bombarded with targeted adverts.
But if something as seemingly harmless as a period tracking app can be caught up in a data protection scandal, it might be time to start looking at the terms and conditions a little more closely.