Are We Becoming Slaves of Algorithms?
Farhad Hussain
“When I considered what people generally want in calculating, I found that it is always a number.” So begins the famous book “The Algebra of Mohammed ben Musa”, written by the ninth-century polymath Muhammad ibn Musa al-Khwarizmi. A millennium later, people want a great deal of calculating, and luckily al-Khwarizmi lent his name to the simple mathematical concept that helps us meet that need: the algorithm.
An algorithm is only a set of step-by-step instructions for carrying out a task. Yet algorithms are more powerful than they first appear. Google’s internet search is based on algorithms. Amazon’s book recommendations are based on algorithms. Facebook’s news feed is based on algorithms. It is not just our online lives that are heavily influenced by algorithms: they have huge and increasing reach into the stock market, law enforcement, immigration controls, and even the most private spaces of our personal lives.
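To make the definition concrete, here is a minimal sketch of what "a set of step-by-step instructions" looks like in practice. The task and the code are invented for illustration; any algorithm, from Google search to Amazon recommendations, is ultimately built from steps like these.

```python
def find_largest(numbers):
    """A simple algorithm: step-by-step instructions for finding the largest number."""
    largest = numbers[0]          # Step 1: assume the first number is the largest
    for n in numbers[1:]:         # Step 2: examine each remaining number in turn
        if n > largest:           # Step 3: if it is bigger, remember it instead
            largest = n
    return largest                # Step 4: report the result

print(find_largest([3, 41, 7, 19]))  # prints 41
```

The instructions are unambiguous and mechanical, which is precisely what lets a machine carry them out at scale.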
How much of your holiday shopping this year did you do online? Did you go online with a specific product in mind and purchase precisely what you wanted? Or did you outsource the thinking and ask Alexa to do your shopping for you?
For more than a decade, organizations have focused on search as a key discovery mechanism in digital, but with the emergence of new interfaces, such as messaging, chatbots, and voice powered by artificial intelligence, customers have new ways to explore their possibilities.
This poses a potential problem for consumer packaged goods companies and retailers. If there is no longer a physical place where a brand might exist and come to life for consumers, how will brands connect with shoppers? Managers must learn to navigate and engage with the algorithms and face facts about their brands. Brands that depend on visual cues may suffer in an auditory world where voice-enabled recommendations and transactions are the norm.
As the algorithms behind those interfaces become ever more powerful, their impact on marketing grows exponentially, especially in the product space, where optimizing for algorithms will soon become an important task.
A key consideration will be for whom an algorithm works. Does it work for the user or for the platform? (Alexa, for example, works for Amazon, often prioritizing Amazon Prime products.)
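A toy sketch of this conflict of interest, entirely hypothetical since real platform ranking algorithms are proprietary: a ranker that quietly boosts the platform's own products will serve the platform even when a better-rated third-party item exists. All product names and weights below are invented.

```python
# Hypothetical illustration of a ranker that favors the platform's own products.
# Names, ratings, and the boost value are invented; no real system is described.
products = [
    {"name": "ThirdPartyHeadphones", "rating": 4.8, "platform_owned": False},
    {"name": "PlatformHeadphones",   "rating": 4.2, "platform_owned": True},
]

def score(product, platform_boost=1.0):
    # The boost is invisible to the user but decides what gets recommended first.
    bonus = platform_boost if product["platform_owned"] else 0.0
    return product["rating"] + bonus

ranked = sorted(products, key=score, reverse=True)
print(ranked[0]["name"])  # the platform's product wins despite its lower rating
```

The user sees only the final ordering, never the `platform_boost` term, which is exactly why asking "for whom does this algorithm work?" matters.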
How do we tell if a news story we read online is true or not, for example? A common technique is to go to Twitter to see if that topic is trending. If it is in the trending topics list, if that many people are discussing it, then it must be true, we reason. But who knows how the trending topics list is generated on Twitter? The answer is that nobody does, because Twitter does not tell us.
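Twitter does not disclose its method, but even a crude, invented trending formula shows how much the designer's choices matter. In this purely hypothetical sketch, weighting sudden spikes over steady volume decides which topic "trends", and therefore what we take to be widely believed.

```python
# Hypothetical trending score: an invented formula, not Twitter's actual algorithm.
def trending_score(mentions_last_hour, mentions_daily_avg):
    # Reward sudden spikes relative to a topic's normal volume.
    return mentions_last_hour / (mentions_daily_avg + 1)

topics = {
    "steady_news": trending_score(500, 480),   # large topic, always discussed
    "sudden_rumor": trending_score(300, 10),   # small topic, sharp spike
}
print(max(topics, key=topics.get))  # the spike "trends", not the bigger story
```

Change one term in the formula and a different topic trends; the list we treat as a barometer of truth is just the output of such a choice.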
Or consider what we expect to get when we search for things on Google. The search term “Where can I get a cardiac intervention”, for example, will take you to some websites, but not necessarily to the information the user was looking for or expecting the Google search algorithm to deliver.
There is increasing evidence that relying on algorithms to make decisions is problematic due to their inherent bias. Nobody knows how any of the algorithms actually work, except for a handful of people in the organizations that develop them. Sometimes not even they know; machine learning is increasingly leading to those algorithms evolving in ways that are not easy to predict. Why is this happening, and what does it mean?
So as we increasingly rely on algorithms to help us in our lives, or to sort through the wealth of information that is available to us online, we need to interrogate more actively what these algorithms are and what biases they might have. Indeed, in many cases we first need to recognize that these decisions are being made by algorithms at all.
We need to understand the data we are looking at, the algorithms we are using and how these algorithms are designed and programmed. And this is becoming increasingly important as trust in technology and digital tools is being questioned more and more.
We are now more aware and conscious of the technology we are using than ever before. In the early days of Facebook we may not have considered that things we shared, seemingly with our friends, were also available for others to see, or for advertisers (and others) to access and use. As technology has become a more prevalent part of our lives, and as the role of technology is increasingly discussed in the media, we are now more aware of this. We have become less trusting of technology and more skeptical of what firms are doing with our data and how they are acting.
And as we become more conscious of technology so we become more skeptical about it, more questioning of what it does and how it works and more demanding of the platforms and services that provide it. We are more aware that we are being targeted with advertising, for example, and we want to know how that works.
So we should be questioning the algorithm more. We should expect people to question the algorithm more. And we should expect that the full benefits of the many new technological advances will only be realized at large once we understand not just what they do but how they do it.
The algorithmic future also goes hand in hand with a tidal wave of automation – self-driving cars, for example, rely on algorithms – which are predicted to destroy jobs on an unprecedented scale. In a futile attempt to make our robot overlords less terrifying, some suggest that automation is least likely to affect jobs that are based on human relations, such as care work with old or sick people.
Ideally the humanitarian economy is relational, based on solidarity. Solidarity can only be built on a foundation of human relations, but automation threatens to undermine that foundation by accelerating the transition of the humanitarian economy from a relational model to a transactional model.
A transactional model is not built on solidarity: it is built on contracts. Solidarity means trust; contracts indicate a lack of trust. The transactional model therefore has to be built on technical standards and key performance indicators and logical frameworks – all of which are desirable, but none of which are sufficient to satisfy the humanitarian imperative, which risks being swept away.
Algorithmic humanitarianism does not have to be apocalyptic for the humanitarian sector, but only if we invest in ensuring that our algorithms reflect our values. That means that rather than be overtaken by software companies, we may need to become software companies – otherwise our lack of computer literacy means that the coding is going to be left to the hyperactive imagination of the hackathon.
The end result could be much worse than just overhyped, underperforming, and outright bullshit mobile apps; it could be a hollow humanitarianism, its essential humanity discounted by machines of loving grace.
Algorithms increasingly inform and are part of many of the tasks we do online and many parts of our lives. We are willing to put our trust in them to find information on Google, to find news via trending topics on Twitter, or to recommend products for us to buy. They help to drive autonomous cars and to target us with advertising that is thought to be right for us.
“But we do not really know how they work,” stated Mark Hansen, Professor of Journalism at Columbia University. There can be a temptation to think that things that are programmed, that are powered by the machine, lack the bias that human analysis might bring. And whilst, when we think about it, it might be patently clear that this is not true, we continue to put our faith in algorithms without interrogating how they work.
It is a common perception today that people rely more and more on technology rather than on their own brains to solve problems. In saying so, we tend to ignore the fact that these growing technologies are also the invention of great human brains. Even if they solve problems more easily than humans can, that does not mean the work given to the human brain decreases. We will truly become “slaves” of technology only when machines take over the world and start ruling us, a sneak peek of which we have seen in The Matrix and other such flicks.
Though it is true that most people who own smartphones want to see the world through the camera, the basis of that desire is human experience. Humans want their experiences to be recorded, shared, relived, and reflected upon. When technology aids that, it seems as though we have fallen prey to it; but if there were a way for the brain to use WhatsApp or Facebook telepathically, we would have done it already.
Facebook and other social media platforms are bringing people together today. There is a self-righteous judgment passed on them, but at heart they have helped people connect, and they reflect a collective resonance of brain vomit. Today, social media and latching on to a smartphone are purely a human choice, not a necessity. More and more humans are making that choice. It is far easier to click a button and support Digital Bangladesh than to go to an underprivileged village in Bangladesh and fund or set up a free Wi-Fi zone there. At the same time, that collective brain vomit does tell us what is trending in human minds today, no doubt.
There are still many people who want no social presence on the Internet. They are not on LinkedIn, Facebook, Twitter, or anywhere else. The big question is: are they missing out? The truth is yes. More people have started realizing this and have now attempted to start a small profile in some corner, which is amusing and astounding at the same time. That said, we must immediately stop judging people for owning smartphones and posting food pictures on social media. They are living their lives just as fully as someone who does not. Life experience is relative.
If you use Google Maps to seek directions instead of asking a passerby, it does not mean you are using your brain less. On the contrary, you could be using it more in order to operate the technology. In the early 90s, making a school project involved a lot of research: referring to various magazines, newsletters, and so on. Nowadays, kids just go to Google and get ready-made data, pictures, and more. Is it a bad trend? The answer is not as straightforward as we might think. On the other side of the coin, kids could be saving manual research time and using it for study or other constructive work. Maybe that is why today we see kids doing wonders in the field of innovation.
The goal is to get machines to do mundane activities and let humans focus on better things. Our increasingly connected world, combined with low-cost sensors and distributed intelligence, will have a transformative impact on industry, producing more data than humans would ever be able to process. Internet of Things (IoT) and Artificial intelligence (AI) are changing how industries and customer-oriented companies are doing business.
Artificial intelligence and machine learning are not novelty innovations. Examples of everyday applications of machine learning include recommendations made by online services (Amazon, Netflix) and automatic credit ratings by banks. Google’s self-driving vehicles are hardly news to anyone today. By combining the advanced features of modern cars (speech recognition, adaptive cruise control, lane assistant, navigator, and parking assistants), we are close to a completely autonomous vehicle.
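Everyday recommendation systems of the kind mentioned above often start from something as simple as counting which items are bought together. A minimal sketch follows; the data and logic are invented and assume nothing about Amazon's or Netflix's actual systems.

```python
# Minimal item-based recommendation sketch using co-purchase counts.
# Baskets and item names are invented for illustration only.
purchases = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_b", "book_c"},
]

def recommend(item):
    # Count how often each other item was bought together with `item`.
    counts = {}
    for basket in purchases:
        if item in basket:
            for other in basket - {item}:
                counts[other] = counts.get(other, 0) + 1
    # Recommend the most frequent co-purchase.
    return max(counts, key=counts.get)

print(recommend("book_a"))  # prints book_b (bought together in two baskets)
```

Real systems layer far more signals on top, but the principle is the same: past behavior, aggregated across many users, decides what you see next.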
As companies continue to increase their digital footprints, “identify and diagnose” capabilities alone are not enough to remediate a growing fundamental business challenge for organizations of all shapes and sizes.
We are heading in a direction where our day-to-day survival will be in the hands of machines. A machine will drive us to work. A machine will order our food. A machine will tell us what to do with life. And yet, it is not easy to make a slave out of the human brain.