
What If People Profiling Goes Wrong
June 16, 2016

When someone said “You’re being profiled”, it used to mean either that you were at a job interview or that your job involved spying.

That has changed over recent years. Being profiled has become part of our everyday lives. It could mean a trip to the store for groceries. It could mean the movies you watch, or the food you eat. All your movements, preferences and purchase decisions are collected, tabulated, analysed and used to create the convenience you have come to expect.

Of course this type of intrusion is an acceptable evil for a better lifestyle, but it is an intrusion nonetheless. After all, what does buying something from the store have to do with where I live, or my hand-phone number, or what my race is?

But have we missed something in all our bustle to get on with life, and is there a price to pay for this intrusion on privacy?

You may argue that profiling in the modern age is the norm, and that it does help narrow down the search canvas. Whether you are buying a second-hand car or a home, or even looking for a partner, a search engine helps the medicine go down easier. The more information you enter, the more precise the results you can hope to reap.

The emergence of Big Data analytics is the catalyst responsible for changing the game somewhat. By ‘somewhat’ I mean drastically. The data from all the profiling is enormous and constant. Without Big Data tools, there would simply be no harnessing it.

So Big Data tools for analysing and profiling can tackle problems as simple as which word you prefer while texting, or as convoluted as predicting disease and crime.

Big Data algorithms were deployed at the Chicago Police Department to look for heat areas or areas prone to crime. A ‘hit list’ of individuals with a “higher chance of committing a crime” is also generated by algorithms based on profiling.

This predictive profiling is based on algorithms created by Miles Wernick, professor of electrical engineering at the Illinois Institute of Technology, who was quoted as saying that he believes it does not have any racial, neighbourhood, or other such connotations, and that it is unbiased and quantitative.

Although Wernick’s claim can be accepted on some level, does it indemnify the algorithm against making mistakes? Against building a profile pattern it takes to be true and then sending its own calculated predictions to the authorities? Since its calculations are made from data entries, those entries too need to be flawless, and that itself has its own pitfalls and faults.
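The point can be made concrete with a toy sketch. The scoring formula, names and numbers below are entirely hypothetical (they describe no real police system, Chicago’s included): two residents with identical behaviour are scored differently purely because one lives in a more heavily patrolled area, so more of their conduct ends up recorded as “prior contacts”. The arithmetic is quantitative throughout, yet the output inherits the bias of its entries.

```python
def risk_score(prior_contacts: int, age: int) -> float:
    """Hypothetical linear risk score: more recorded contacts and
    younger age both push the score up. Illustrative only."""
    return 0.1 * prior_contacts + 0.02 * max(0, 30 - age)

# Two residents with the same actual behaviour (say, 2 incidents each).
# Resident B's area is patrolled three times as intensively, so three
# times as many incidents make it into the records.
resident_a = {"age": 25, "prior_contacts": 2}  # lightly patrolled area
resident_b = {"age": 25, "prior_contacts": 6}  # heavily patrolled area

score_a = round(risk_score(**resident_a), 2)
score_b = round(risk_score(**resident_b), 2)

print(score_a)  # 0.3
print(score_b)  # 0.7  -- same person, in effect, more than twice the score
```

Nothing in the formula mentions race or neighbourhood, yet the skewed input data smuggles the bias in anyway, which is exactly why flawless entries matter.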

So now it comes down to ethics, where much remains to be understood about where algorithms hold up and where they falter. A number of such experiments have been run on algorithms through machine learning, and the results have been less than impressive. The argument in their defense is that an algorithm isn’t biased because it’s quantitative.

The technology to predict disease, crime or natural catastrophes is in our midst. The technology to eradicate famine and bring peace in our time has quite possibly arrived. When it will actually be used as such may take a while to answer. Before the machine learns the importance and rules of ethics, though, it would be good if humans did so first. That might take a bit longer.