Against the dehumanisation of decision-making: Algorithmic decisions at the crossroads of intellectual property, data protection, and freedom of information

Noto La Diega, Guido (2018) Against the dehumanisation of decision-making: Algorithmic decisions at the crossroads of intellectual property, data protection, and freedom of information. Journal of Intellectual Property, Information Technology and Electronic Commerce Law, 9 (1). ISSN 2190-3387

Full text (Accepted Version): Noto La Diega - Against the Dehumanisation of Decision-making.docx (160kB)
Official URL: https://www.jipitec.eu/issues/jipitec-9-1-2018/467...

Abstract

This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, the psychology of conformity, the power of sanctions, the civilising force of hypocrisy, pluralism, empathy, and technocracy. Nowadays algorithms can decide whether one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (natural language processing and machine learning in the first place) enable private and public decision-makers to analyse big data in order to build profiles, which are then used to make decisions in an automated way. The lack of transparency of algorithmic decisions does not stem merely from the characteristics of the techniques used, which can make it impossible to access the rationale of the decision. It also depends on the abuse of, and overlap between, intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years and the trend is upward. To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms. First, copyright and patent exceptions, as well as trade secrets, are discussed. Second, the EU General Data Protection Regulation is critically assessed. In principle, algorithms are not allowed to process personal data to take decisions that have legal effects on the data subject’s life or similarly significantly affect them. However, when they are allowed to do so, the data subject still has the rights to obtain human intervention, to express their point of view, and to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision. Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction ordering the release of the source code of the computer program that implements an algorithm. Only an integrated approach – one that takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy, as required by the Charter of Fundamental Rights of the EU.

Item Type: Article
Uncontrolled Keywords: algorithmic decision-making, algorithmic bias, right not to be subject to an algorithmic decision, GDPR, software copyright exceptions, patent infringement defences, freedom of information request, algorithmic transparency, algorithmic accountability, algorithmic governance, Data Protection Act 2018
Subjects: G500 Information Systems
G700 Artificial Intelligence
M200 Law by Topic
Department: Faculties > Business and Law > Northumbria Law School
Related URLs:
Depositing User: Paul Burns
Date Deposited: 05 Apr 2018 13:21
Last Modified: 10 Oct 2019 20:31
URI: http://nrl.northumbria.ac.uk/id/eprint/33906
