
The word “algorithm” has become ubiquitous and contentious in the world of digital technology. It has come to symbolize the dangers and consequences of an automated world driven by commercial logic. But before judging its impact and role in Google’s search, Facebook’s news feed or Amazon’s recommendations as positive or negative, it is worth first defining what an algorithm is.

An algorithm is “a series of instructions that allow a certain result to be obtained,” explains sociologist Dominique Cardon in his book “À quoi rêvent les algorithmes” (What Do Algorithms Dream Of). The notion was once familiar only to mathematicians, before the development of information technology made it commonplace and turned it into a key element of how the internet works.
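To make that definition concrete, here is a classic textbook example (not one taken from Cardon’s book): Euclid’s algorithm, a handful of instructions that always produce the greatest common divisor of two integers.

    # A few instructions that reliably produce a result:
    # Euclid's algorithm for the greatest common divisor of two integers.
    def gcd(a: int, b: int) -> int:
        while b != 0:          # repeat one simple step...
            a, b = b, a % b    # ...replacing (a, b) with (b, a mod b)
        return a               # after finitely many steps, a is the result

    print(gcd(48, 36))  # -> 12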

“We are literally surrounded by algorithms,” explains Olivier Ertzscheid, a specialist in information science. “Every time we open Facebook, Google or Twitter, we’re directly exposed to choices” that algorithms make for us, and “sometimes we’re influenced by them.”

They are omnipresent on trading floors, where they place buy and sell orders, and they have become auxiliaries to the police thanks to their ability to predict where offences might be committed.

The use of algorithms keeps growing as companies and governments create and analyze data on a massive scale. Some consider this the age of algorithms, closely linked to machine learning and deep learning, technologies that are evolving at a very rapid pace.

Google is built on algorithms. PageRank (PR) was created in the 1990s by Larry Page and Sergey Brin at Stanford and ranks webpages by popularity. PageRank is the foundation of the Google search engine, which responds within a split second to a query formed of keywords. Today, Google uses “tens of algorithms, each one including thousands of parameters and variables,” said Olivier Ertzscheid.
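The idea behind PageRank can be sketched in a few lines (a simplified illustration on a hypothetical link graph, not Google’s production system): rank is repeatedly redistributed along links, so pages that receive links from well-ranked pages end up with higher scores.

    # Simplified PageRank by power iteration on a tiny, hypothetical link graph.
    links = {
        "A": ["B", "C"],    # page A links to B and C
        "B": ["C"],
        "C": ["A"],
    }
    damping = 0.85          # damping factor used in the original PageRank paper
    rank = {page: 1 / len(links) for page in links}

    for _ in range(50):     # iterate until the scores settle
        new_rank = {}
        for page in links:
            incoming = sum(rank[src] / len(links[src]) for src in links if page in links[src])
            new_rank[page] = (1 - damping) / len(links) + damping * incoming
        rank = new_rank

    print(rank)  # pages receiving links from well-ranked pages score higher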

Facebook, meanwhile, uses sophisticated algorithms to offer personalized content to its roughly 1.2 billion daily users, in particular through the News Feed, which compiles friends’ posts, shared articles, photos and more, selected according to each user’s activity and contacts.

The main danger here is the “filter bubble,” a notion the American activist Eli Pariser explained in his book of the same name. Fed information filtered by algorithms according to their friends, tastes and previous digital choices, users unwittingly enter a “cognitive bubble” that reinforces their perception of the world and their personal beliefs.
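The feedback loop behind the bubble can be sketched in a few lines (a hypothetical toy feed, not Facebook’s actual algorithm): content matching what the user already clicked on is pushed up, which in turn makes it more likely to be clicked on again.

    # Toy feed illustrating the filter-bubble feedback loop (hypothetical data).
    from collections import Counter

    clicks = Counter({"sports": 5, "politics": 1})   # past engagement by topic

    feed = [
        {"title": "Local sports results", "topic": "sports"},
        {"title": "Election analysis",    "topic": "politics"},
        {"title": "New physics result",   "topic": "science"},
    ]

    # Items matching past clicks float to the top; unfamiliar topics sink,
    # get even fewer clicks, and the bubble reinforces itself.
    ranked = sorted(feed, key=lambda item: clicks[item["topic"]], reverse=True)
    print([item["title"] for item in ranked])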

Another danger is the circulation of fake news and hoaxes. Facebook’s algorithms were not designed to tell true information from false, a task that remains difficult for artificial intelligence. However, Mark Zuckerberg’s platform, which defines itself as a service rather than a media outlet, introduced tools at the end of 2016 that allow users to report questionable information.

Algorithms can be classified into four categories according to their function: measuring the popularity of webpages, ranking their authority, evaluating the reputation of social media users and predicting the future.

Sociologist Dominique Cardon believes that the fourth category attempts to anticipate our behavior based on the traces we leave online. This is, for example, how Amazon recommends new books to customers based on their recent reading history.
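A bare-bones version of that idea (a sketch built on made-up data, not Amazon’s actual recommender) simply counts which books appear together in past purchase histories and suggests the most frequent companions:

    # "Bought together" recommender built from co-occurrence counts (hypothetical data).
    from collections import Counter
    from itertools import combinations

    purchases = [                       # each list is one customer's book history
        ["1984", "Brave New World", "Fahrenheit 451"],
        ["1984", "Brave New World"],
        ["Dune", "Foundation"],
    ]

    co_occurs = Counter()
    for history in purchases:
        for a, b in combinations(set(history), 2):
            co_occurs[(a, b)] += 1
            co_occurs[(b, a)] += 1

    def recommend(book, n=2):
        # books most often bought together with the given one
        scores = Counter({other: c for (first, other), c in co_occurs.items() if first == book})
        return [title for title, _ in scores.most_common(n)]

    print(recommend("1984"))  # -> ['Brave New World', 'Fahrenheit 451']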

Algorithms have also had harmful consequences in several fields and industries. For example, some local authorities allocate resources to hotspots chosen according to these mathematical formulas. But the way the underlying data is entered can skew the results and create a vicious circle for disadvantaged communities.

In finance, decisions about loans and insurance are made on the basis of algorithms, which increases the risk of further discrimination against those who are already the most penalized.

Frank Pasquale, professor of law at the University of Maryland, argues that unfair uses of algorithms should face the penalties provided for by consumer-protection laws. Yet even though algorithms are used in sectors that affect entire societies, no company has so far been held accountable or responsible for their actions.

Algorithms do not have only negative consequences, however; positive outcomes have been recorded in many cases. They have helped spark breakthroughs in science and technology, assist people with their daily tasks and thereby enhance productivity. Algorithms can execute many tasks that humans cannot and can support them in numerous ways.

Their positive impact is visible in the healthcare, transportation and advertising sectors, where algorithms play an essential role in assisting staff, improving services and promoting products.

Drawing up a list of a technology’s pros and cons before adopting it is crucial. Exploring the consequences before building new business models and strategies matters more than pointing them out after adoption. That way, companies and individuals will know where the technology is best applied and where its limits lie.

Some believe that people should not submit blindly to algorithms but should know when to rely on the choices made for them. Such technologies and systems are not meant to dehumanize the workforce but to provide assistance with certain tasks.

To make good use of algorithms, governance and accountability are essential, because algorithms are driven by consumer data that must be protected. Misuse of the data feeding algorithmic processes can have very serious consequences. In addition, consumers must be educated and informed about how algorithms work and where they are applied.
