GDPR and increasing sensitivity over biometric data mean organisations need to fully understand the risks associated with losing facial data.
Whilst we continue to take and share a huge number of photos, there is increasing concern about how people’s biometric data is kept and used. This reflects both a degree of consumer ignorance about what can happen to their biometric data, and a growing realisation among governments and civil liberty groups that facial identities need to be protected. Indeed, the General Data Protection Regulation (GDPR) covers biometric data, placing the responsibility for protecting facial data on the organisation holding the information. The following blog provides a perspective on the increasing use of facial recognition technology, the consequences of losing data and how to protect against it.
The number of photos continues to rise
It is unsurprising that the number of photos taken each year continues to grow. Mobile device penetration continues to increase, the use of data-sharing platforms has grown, and our appetite to openly share our experiences online shows no sign of slowing down. Several online sources estimate that around 1.4 trillion photos will be taken in 2019 alone. The number of photos shared through online platforms provides further context and suggests the true figure may be even higher: 300 million photos are uploaded to Facebook daily, 95 million to Instagram, whilst Snapchat sees 3 billion “snaps” per day. Regardless of whether it is 1.4 trillion photos or many more, it is a huge number. People readily share photos on a range of social platforms, handing over personal data as the price of the service, and share their facial image with governments and commercial organisations in return for a service – whether the ability to cross a border or to open a bank account.
Increases in facial recognition accuracy and availability
Facial recognition technology is expanding rapidly alongside the growth in the number of facial images, improving in accuracy and becoming increasingly available to all. A comparison of seven different analyst firms that track facial recognition estimates an industry growth rate of 23% between 2015 and 2025.
Chart 1: Facial Recognition Analytics Market Growth
In addition to its increasing use, the software is becoming more accurate. The National Institute of Standards & Technology (NIST), which has become the de facto global evaluator of facial recognition efficacy, reported that its latest test showed a considerable reduction in error rates. In other words, the technology is getting better.
“Between 2014 and 2018, facial recognition software got 20 times better at searching a database to find a matching photograph, according to the National Institute of Standards and Technology’s (NIST) evaluation of 127 software algorithms from 39 different developers—the bulk of the industry.”
The improving accuracy has coincided with increasing availability. Digital platforms such as Google Images have facial recognition capabilities that allow consumers to search for photos of themselves on the internet. Betaface and PimEyes provide similar services, whilst PicTriev and FaceApp allow users to change their appearance. In addition to these platforms, open-source software such as OpenFace provides developers with the tools to build new software and applications, which means the technology is available to all, including organised crime networks.
There are strong reasons why facial recognition deployment will continue to grow. The face is a unique identifier and is being used in an increasing number of critical applications as proof of identity. A well-known use case is in border control, where biometrics match a traveller’s face to their records to provide entry to a country. It also has key applications in KYC (Know Your Customer), where financial institutions need to be sure of the identity of their customers as part of Anti-Money Laundering best practice. From a consumer perspective, facial recognition helps users access phones and laptops, and this is likely to extend to accessing homes and cars.
Facial images also help to build trust. In large corporations spread across the world, putting a face to a name can help build rapport between distant colleagues. This will inevitably lead to employee databases containing photos and personal data. Consumer facing organisations are also collecting more customer data and images to help them to improve and tailor experiences, thereby increasing loyalty and sales per head.
The growing number of facial image use cases is leading to an increase in biometric data being stored on corporate and government databases. This can be a problem for organisations if the data is not adequately protected.
How to lose a face and why it’s not good news
Many people assume that large organisations that are financially stable and outwardly trustworthy will be holding data securely. This is not always the case. Although regulated industries holding confidential customer data have invested significantly in cyber security technologies, policies and training, the risk of data loss remains, and it is a problem that comes in several guises. Although in the majority of cases data loss is perpetrated by a malicious outsider intent on stealing data, disgruntled employees and accidental leaks can also result in organisations losing confidential biometric data.
Chart 2: Sources of Data Loss
Source: Risk Based Security Inc. (Data Breach Intelligence Report)
The consequences of losing data, or a face, can be severe. Reputational damage can have a long-term impact on organisations, leading to the loss of customers, investors and partners. Equally, the financial consequences of data loss can be crippling, from poor stock market performance, to a loss of revenues and regulatory fines that impact company profitability.
There are an increasing number of case studies that highlight the data security challenges organisations face. Although the Equifax breach dates back to 2017, it is very relevant now that the financial consequences are clear. The attack by an external actor resulted in the loss of Social Insurance Numbers and account numbers and affected around 1 million people. The damage to the organisation has been significant. The share price fell from $141 to $93 in the space of two weeks and has taken two years to recover. The organisation has also agreed a settlement of fines and customer support amounting to around $700 million.
British Airways also lost a significant amount of customer data in September 2018 and has been fined £183.9m for its breach of GDPR. The Information Commissioner went on record as saying:
“People’s personal data is just that – personal. When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”
In August 2019 two clear cases emerged where biometric data was lost. Binance, a cryptocurrency exchange, lost customer verification data – a Know Your Customer use case – that included facial images. The Suprema story broke in the same month, when researchers discovered a route into the databases of the South Korean organisation, which holds biometric records for many global customers. It is not clear how much data was stolen, though the researchers estimated it could be as many as 30 million records, including authentication data for around 1 million users. The consequences for Suprema are not yet clear, though the share price dipped 20% and the organisation’s reputation has been severely damaged.
Regulation and consumer fears
The objective of GDPR, which came into force on May 25th, 2018, is to ensure organisations protect customer information, including biometric data. Those that fail to do so face fines of up to 4% of global annual turnover or €20 million, whichever is greater. The wording related to biometrics in GDPR is set out below.
“‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”
Although there is no federal law in the United States, several state acts are now focussed on the protection of personal data. The financial consequences of losing biometric data and the increasing sensitivity about the potential misuse of biometrics have recently resulted in some interesting case studies.
In May 2019 San Francisco voted to ban facial recognition across the city. A month later Microsoft deleted a database of 10 million images of 100,000 people, used to train facial recognition systems. In August 2019 the Irish Government was asked by the Data Protection Commission to delete a database containing the facial biometric data of 3.2 million people. A few weeks later the Swedish Data Protection Authority fined a Swedish school for trialling facial recognition to track class attendance, deeming it in contravention of GDPR.
What can we make of these trends? Westlands Advisory believes that whilst facial recognition will continue to grow, as will the size of databases containing facial images, the sensitivity around biometric data will not diminish, and the threat of financial and reputational damage to organisations that lose data will persist. Organisations need to be better at protecting customer information.
Protecting facial images
There are many positive applications for facial recognition. The industry is still relatively immature and will require frequent evaluation of practices and revision to policies to ensure that the use of the technology remains relevant and proportionate. The focus of this article is not to debate what the norms should be, but rather to raise the issue of how to protect facial data. Following a review of suggested approaches, Westlands Advisory recommends that organisations should consider the following.
1. Conduct a Risk Assessment, including a Data Protection Impact Assessment, that specifically reviews how facial data is stored, where the vulnerabilities are and how to mitigate the risk of facial data loss
2. Ensure network, application and endpoint security policies are effective
3. Deploy multi-factor authentication
4. Ensure databases are not public facing
5. Anonymise data through encryption and hashing
6. Apply image de-identification techniques
7. Deploy Data Loss Prevention tools and Behavioural Analysis to protect against deliberate or accidental internal data loss
8. Ensure Incident Response and Data Recovery policies and plans are in place to reduce the impact of a data breach
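To make point 5 concrete, the sketch below shows one common way to anonymise a sensitive identifier (such as the account or ID number linked to a facial record) with a keyed hash. This is a minimal illustration, not a complete data-protection control: the function name and the sample identifiers are hypothetical, and facial images themselves would additionally need encryption at rest and de-identification as per points 5 and 6. It uses only the Python standard library.

```python
import hashlib
import hmac
import secrets

def anonymise_identifier(identifier: str, key: bytes) -> str:
    """Return a keyed hash (HMAC-SHA256) of a sensitive identifier.

    The token can be stored in place of the raw value: records can still
    be linked and deduplicated, but the original identifier cannot be
    recovered without the secret key.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The key must live outside the database (e.g. in a key vault), so that a
# leaked table of tokens cannot be reversed by brute force.
key = secrets.token_bytes(32)

token_a = anonymise_identifier("AB123456C", key)  # hypothetical ID number
token_b = anonymise_identifier("AB123456C", key)
token_c = anonymise_identifier("ZZ999999Z", key)

# Same input and key give the same token, so joins across tables still work;
# different inputs give different tokens.
print(token_a == token_b, token_a != token_c)
```

The design choice here is a *keyed* hash rather than a plain one: a plain SHA-256 of a structured identifier (ID numbers draw from a small, guessable space) can be reversed by enumeration, whereas the HMAC construction ties reversal to possession of the key.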
The benefits of biometric data are significant, and organisations should not be afraid of exploring how the technology can improve operational performance or customer experience. However, it is also clearly the organisation’s responsibility to protect the biometric data that it holds. Equal consideration should be given to how customer service can be delivered whilst limiting the risk of losing data, and the trust that goes with it.