Each year the Privacy Commissioner hosts Privacy Week, and last week's theme was privacy rights in the digital age, a theme that highlights the widening gap between the pace of technological change and the stagnation of privacy law. The Privacy Act 2020 is now nearly three years old. That may seem recent, but measured against the technologies introduced over those three years, the Act falls well short of protecting an individual's privacy.
Michael Webster, the Privacy Commissioner, stated that "further legislative changes are needed to ensure New Zealand's privacy law is fit for purpose in the digital age." The issue then becomes how we change the law so that it can advance ahead of technology, or at the very least keep pace with it.
Artificial Intelligence (AI) is a prime example of where technology has advanced beyond the protection of the current privacy principles.
AI has become a widely used tool, and in some contexts an invaluable one, because of its efficiency and its ability to link pieces of information that, to the human mind, appear unrelated. Here lies one of the most significant risks AI poses to privacy: its capacity to change what counts as personal information. For information to attract privacy protection, it must be personal information. Under the current legislation, personal information is essentially information that can be used to identify someone; the obvious examples are a person's email or home address. AI, however, can combine ordinary pieces of information that, taken alone, are not personal information but that together identify an individual. Without AI those connections probably could not be made, or at least would not be made, leaving such combined information unprotected by New Zealand's privacy law.
The interpretation of personal information is not the only issue AI presents. Information privacy principles 1 to 4 of the Privacy Act 2020 govern how personal information is collected. To collect it, an employer must:
Have a purpose for collecting the information;
Collect it directly from the individual concerned, unless an exception applies, such as the individual authorising otherwise or the information being publicly available;
Take reasonable steps to inform the individual of the collection; and
Collect the information lawfully and in a fair and reasonable manner.
The operation of AI sits uneasily with these four principles. When an AI system is tasked with finding information about an individual, it collects and analyses data well beyond what a person could. In doing so, it may retain that data and later use it for purposes outside the original one.
One of the key issues AI presents in the employment sector arises at the pre-employment, hiring stage. Employers are using AI to run checks on prospective employees, and while the employer may be following the principles, the AI may not be. During this process an AI system may collect data from sources that are not practically accessible, or information that falls outside the original purpose, which could disadvantage the candidate whether or not they are successful in getting the role. The solution is not to stop using AI but to adapt the privacy principles so that AI operates within the bounds of the law, for example by requiring employers to inform individuals that AI will be used, or by giving individuals the opportunity to choose how an organisation receives their personal information.
Michael Webster has said that "there's work to be done around how we regulate artificial intelligence, and we need to look at how organisations that fail to protect people's privacy are held to account." Webster's suggestion to penalise organisations for such failures may force them to rethink how they use AI, but should organisations be punished for the behaviour of a system they may not be able to control?