What if I told you that humans can now predict a crime before it even happens? I know what your first response will be: you will dismiss it as a ridiculous joke, unless, of course, psychics are actually a thing. But this is not a joke, and the so-called psychic in our case is artificial intelligence.
According to The Intercept, the Michigan State Police (MSP) has purchased software that will supposedly allow it to predict violence and unrest. Naturally, the first question that comes to mind is, “How does it even work?”
ShadowDragon’s software allows cops to aggregate information from social media and other online sources, such as Amazon, dating apps, and the dark web, to hunt down suspects and map out their networks during investigations.
The company promises to reduce profiling time from months to minutes by enabling powerful searches across more than 120 different internet platforms and a decade’s worth of archives.
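To make the idea of cross-platform aggregation concrete, here is a minimal, purely illustrative sketch. Every platform name, function, and record in it is invented for illustration; none of it reflects ShadowDragon’s actual design or APIs.

```python
# Toy illustration of cross-platform profile aggregation.
# All platform names and records are invented for illustration only.

def search_platform(platform: str, username: str) -> dict:
    """Pretend lookup: a real system would query each platform's API or archive."""
    fake_records = {
        "social_site": {"username": username, "posts": 42},
        "marketplace": {"username": username, "purchases": 7},
    }
    return fake_records.get(platform, {})

def aggregate_profile(username: str, platforms: list[str]) -> dict:
    """Merge whatever each source returns into a single profile."""
    profile = {"username": username, "sources": {}}
    for platform in platforms:
        record = search_platform(platform, username)
        if record:
            profile["sources"][platform] = record
    return profile

profile = aggregate_profile("example_user", ["social_site", "marketplace", "forum"])
print(sorted(profile["sources"]))
```

Even in this toy version, the core problem discussed below is visible: the aggregator merges whatever each source happens to return, with no notion of whether any of it is relevant or reliable.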
When used in conjunction with other similar platforms and companion applications, the ShadowDragon programme begins to resemble what the article refers to as “algorithmic crime-fighting.”
Now, the major issue with this kind of crime-fighting, and with the broader hype around artificial intelligence (AI), is that both assume that human behaviour and experience can be encapsulated in computer code.
Indeed, the most ardent proponents of AI argue that it can and will learn in the same way that humans do, and that it will eventually outperform our capacities.
Now, in some respects, computers have surpassed humans. They are significantly faster at computations and can perform even the most complex ones far faster than people using pencil and paper, or even a calculator.
Furthermore, computers, as well as their machine and automatic extensions, do not become fatigued. They are capable of doing complex, repetitive activities with incredible accuracy and speed.
But this does not mean they are capable of interpreting human behaviour. Even thousands of years of history and literature have not managed to fully capture human behaviour and experience, so it is not rational to think that they can be summed up in lines of code.
This kind of algorithmic crime-fighting can lead to hazardous and misleading conclusions, such as racial and ethnic profiling and gender-biased judgements.
We would also face the unavoidable problem of determining the relevance of each piece of information available. Since the software compiles data from many disparate sources, separating the relevant from the irrelevant is difficult, which is exceptionally troublesome for a tool that is supposed to keep the peace and tackle unrest and chaos.
Such systems may lure us with the promise of detecting possible troublemakers. Still, they are untrustworthy: in the end, we risk creating an even more dangerous environment by erecting unneeded lines of defence, and may even face the fate depicted in the film “War on Terror”, in which a massive number of innocents lost their lives to the obsessive desire to dodge the inevitable.
Because artificial intelligence lacks common sense, the only way to rely on it is to keep humans in the loop. Its success depends on how it is used: it should be merely a tool, a means to an end rather than an end in itself.
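The “human in the loop” principle can be sketched as a simple gate: the automated system may only flag a case, and action requires explicit human agreement. This is a minimal illustration of the idea, not any real system’s design; the threshold and labels are invented.

```python
# Minimal human-in-the-loop sketch: an automated score alone never
# triggers action; a human decision is always required.

def model_flag(score: float, threshold: float = 0.8) -> bool:
    """The automated system may only *flag* a case for review."""
    return score >= threshold

def final_decision(score: float, human_approves: bool) -> str:
    """Action is taken only when BOTH the model flags and a human agrees."""
    if model_flag(score) and human_approves:
        return "act"
    elif model_flag(score):
        return "escalate for further human review"
    return "no action"

print(final_decision(0.9, human_approves=False))  # the model alone cannot act
```

The design choice here is the point of the paragraph: the model’s output is an input to a human decision, never a decision in itself.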
As a result, it should be underlined that the future and scope of artificial intelligence may not be as broad as we imagined, because, in the end, humans need to remain at the top of the food chain, the ultimate decision-makers, to avoid mishaps and chaos.