August 11, 2020

Artificial Intelligence in Criminal Investigation

For centuries, human beings have strived to automate and simplify tasks in order to make them more efficient: achieving the desired objective at lower cost and higher speed. Today that effort continues as the optimization of how we handle high volumes of information.

Artificial Intelligence (AI) is the combination of algorithms designed with the purpose of creating systems with capabilities similar to those of human beings. It is a technology that is beginning to appear in everyday life, in applications as common as the voice assistants Siri and Alexa, or the facial recognition systems used by the Argentine government in agencies such as ANSES (National Social Security Administration, “Administración Nacional de la Seguridad Social”) and AFIP (Federal Administration of Public Revenue, “Administración Federal de Ingresos Públicos”).

Stuart Russell and Peter Norvig, authors of a classic academic text in Computer Science, defined the “types” of artificial intelligence according to their application in the following categories:

– Systems that “think” like humans (e.g., artificial neural networks).

– Systems that act like humans (e.g., robots).

– Systems that learn and generate new knowledge (e.g., expert systems).

Within the category of systems that emulate the human way of thinking, two techniques are increasingly used: Machine Learning and Deep Learning.

It can be said that Deep Learning is a branch of Machine Learning. While both terms refer to systems capable of learning on their own, Deep Learning is more complex, sophisticated, and autonomous: once the system has been programmed, human intervention is minimal.
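To make the distinction concrete, here is a minimal, hypothetical sketch in Python (scikit-learn and its synthetic dataset are my assumptions for illustration; the article itself names no tools). A classic machine-learning model works on the features exactly as given, while even a small “deep” network with hidden layers learns its own intermediate representations of the input:

# Minimal sketch: a classic ML model vs. a (toy) deep model on the same data.
# Assumes scikit-learn is installed; the dataset is synthetic, illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary-classification data standing in for a real problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Machine Learning": a linear model over the features exactly as given.
ml = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Deep Learning" at toy scale: hidden layers learn intermediate
# representations of the input on their own.
dl = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                   random_state=0).fit(X_train, y_train)

print("linear model accuracy:", ml.score(X_test, y_test))
print("neural network accuracy:", dl.score(X_test, y_test))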

More dangerous than the famous ‘fake news’, ‘deepfakes’ are videos manipulated using artificial intelligence techniques such as those cited. The results are extremely realistic.

Another example is Deepfakeapp, published as an application that allowed any computer novice to manipulate videos: a tool especially suited to what is popularly known as ‘revenge porn’ (*), that is, the unauthorized and malicious publication of intimate images.

In 2018, a video in which a supposed Barack Obama called Donald Trump an imbecile traveled around the world. It was a fake recording in which actor Jordan Peele and BuzzFeed CEO Jonah Peretti were trying to raise awareness of the danger of unverified information and the deepfake. In any case, one of the first steps when investigating the origin of a video or image is to verify the source: Who sent this? Who signs it? Is it reliable? Tracing the path of the alleged deepfake, seeing where it was first shared, and identifying who published it are basic steps that do not require advanced knowledge, just common sense.
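Part of that tracing can begin with very simple tooling. As a hypothetical sketch (the file name, the use of Python's standard library, and the ffprobe utility from FFmpeg are my assumptions, not steps from any official protocol), an investigator can fingerprint the file, so any later copy can be matched to it, and inspect its container metadata for leads:

# Hypothetical first steps in documenting a suspicious video file.
# Assumes Python 3 and, optionally, ffprobe (part of FFmpeg) on the PATH.
import hashlib
import json
import subprocess
import sys

def sha256_of(path):
    """Cryptographic fingerprint: proves two copies are, or are not,
    bit-for-bit the same file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def container_metadata(path):
    """Container and stream metadata (encoder, creation time, codecs).
    Metadata is easily forged, so it is a lead, never proof by itself."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. suspicious_clip.mp4 (hypothetical name)
    print("SHA-256:", sha256_of(path))
    print(json.dumps(container_metadata(path)["format"], indent=2))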

In 2019, what was classified as the “first crime committed with artificial intelligence” was discovered in the United Kingdom and brought to justice in that country. A short article published by The Wall Street Journal tells the story of a group of cybercriminals who managed to impersonate the voice of the chief executive of an energy company and demanded an urgent transfer of 243,000 euros, a deception that worked. The CEO of the company reportedly thought he was on the phone with the CEO of the parent company, who asked him for the money for a supposed supplier in Hungary. The cybercriminal made the request seem extremely urgent, saying the money needed to be transferred within an hour. The victim, in subsequent statements, said that he even recognized his boss's slight German accent and tone of voice.

The predictions about this type of attack are not very encouraging: the voice recordings needed to train the algorithm on high-profile people are very easy to obtain. Television interviews, radio appearances, social networks, and WhatsApp audios provide enough minutes of recording for the algorithm to be able to replace any voice with that of the person one wants to impersonate.
How will we validate false “confessions” made with these techniques? How will we argue that someone did not say what we are hearing? Will videos purporting to prove a person's presence in a place, offered to exonerate them, remain valid in light of these techniques?

In the framework of a criminal investigation, we must begin to request the technical opinion of experts from the Scientific Police. We can no longer treat an image or a video, on its own, as proof. In our prospective analysis we must include the acquisition of forensic image and video tools, just as today practically all investigative agencies understand that tools for the analysis of mobile devices are necessary.
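To give a concrete, if simplified, idea of what such tools do, one well-known image-forensics technique is error level analysis (ELA): re-save a JPEG at a known quality and look at where the recompression error differs, since pasted or retouched regions often stand out against the rest of the picture. The sketch below uses the Pillow library (my assumption; no tool is named in this article) and illustrates only the principle; real forensic suites are far more sophisticated, and ELA alone is never conclusive evidence:

# Hypothetical sketch of error level analysis (ELA) on a JPEG image.
# Assumes the Pillow library (pip install Pillow); illustrative only,
# not a substitute for a forensic expert's report.
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG and return the amplified difference.
    Regions edited after the original compression often show an error
    level different from the rest of the picture."""
    original = Image.open(path).convert("RGB")

    # Recompress in memory at a known quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Pixel-wise difference, scaled up so faint patterns become visible.
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda px: min(255, px * 15))

if __name__ == "__main__":
    # File names are hypothetical examples.
    error_level_analysis("evidence.jpg").save("evidence_ela.png")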

And what happens when we apply these techniques to Robotics?

How do we deal with the “responsibility” or “attribution” of a crime when the one who commits it is a robot? A robot is nothing more and nothing less than a machine (hardware) running an operating system (software) that performs operations through different algorithms. Much progress has been made since the first robotic arms used to handle materials.

One aspect of this area that especially worries the field of law is civil liability: the obligation to compensate a third party for damage caused involuntarily. The problem is that, under current legislation, a robot cannot be held responsible for acts or omissions that cause harm to third parties. Judges judge people, not robots, let alone algorithms.

It seems reasonable for the responsible party to be “the manufacturer”, but as observed in various international legal discussions on this topic, producers are liable for the damage caused by their products only when those products are defective.

Therefore, what happens if the damage caused is not the consequence of a manufacturing defect? What happens if the damage stems from a rule the robot learned through Deep Learning and Machine Learning techniques? What happens if someone “teaches” or, as we said above, “trains” the algorithm into behavior the manufacturer never intended, and it causes damage? What if the robot suffers a cyberattack and its learning and inference rules are altered?

Different options are being evaluated around the world to determine what type of “legal status” should apply to a robot or an algorithm. As an example of these proposals on possible “legal natures”, we can cite the opinion of María José Santos González, coordinator of the Legal Department of the National Institute of Cybersecurity in Spain, who, based on existing European legislation, offers a very interesting rundown and analysis of the familiar legal figures, summarized for the Ibero-American Legal News Review:

a) “(…) Robot as a natural person. This possibility does not seem adequate given that article 30 of the Civil Code determines that a live birth is necessary to acquire personality. Therefore, this cannot happen in a robot.”
b) “Robot as a legal person. Nor does it seem appropriate to endow robots with this type of personality, because robots can interact directly with the environment and even cause damage, while, in the case of a legal person, it will always be the company's representatives who make the decisions in the last resort and will therefore be responsible.”
c) “Robot as an animal. The fact that a robot has no biological or genetic basis, and the fact that a robot today cannot have feelings, make it impossible to equate a robot to an animal.”
d) “Robot as a thing. For the Civil Code, specifically in article 333, a thing is an inanimate being, devoid of life, characteristics that a robot does not have, given that it can move and interact with the environment (…)”

Given that neither a robot nor an algorithm fits into any of these categories, will a new legal framework be necessary for these issues? Should we rethink the concept of life, as some propose?
Let's imagine for a moment a robot or an algorithm as a subject of law. What would the penalty be? Who would apply it? Where is the data stored so that it can be “turned off”?

I would love to give the reader answers to these questions, but that is not the idea at all: let these pages serve as a trigger for investigators, justice operators, and legislators. The time to think and imagine is now.

Dear reader, after this review of the probable evolution of the many legal loopholes that exist, some of which we have dealt with here along with their everyday application: Do you still believe that artificial intelligence is something from the future? Do you think we are ready to use it in investigations? Do you consider that we are prepared to analyze the crimes committed with it? And to prevent them?

About Pablo Lázaro

Pablo Lázaro is a Computer Science Engineer specialized in cybersecurity and cybercrime, with a master's degree in National Strategic Intelligence from the UNLP and a postgraduate degree in International Relations and Public Policy. He is currently CEO of Cyber Oprac. Previously, he was Director of Cybercrime Investigations at the Ministry of National Security; he also served as Director of Technology for the Airport Security Police (PSA) and as cybersecurity coordinator of the four federal forces through the CSIRT (Cybersecurity Incident Response Center) and the SOC (Cybersecurity Operations Center) of the Ministry of Security.
