One of the benefits of experts gathering to discuss a broad range of inter-related issues, as at the World Economic Forum (WEF) in Davos, is the dissemination of multifaceted views on any given topic. One topic WEF gatherings have addressed for two consecutive years is artificial intelligence (AI). We have learned plenty about the AI upside, but downside narratives have also figured prominently.
Scopus columns in this newspaper have consistently portrayed the upside (July 13, 2017; February 15 & 19, 2018), particularly how, by shifting to AI substitutes, corruption can be effectively handled in a country like Bangladesh: passport issuance, visa acquisition, traffic control, and disaster coordination can all be administered more efficiently by displacing greedy hands, nonchalant observers, and an entire range of middle-men (or 'dalals'). The downside, though, deserves equal discussion. At least two types demand attention: the instrumental and the methodological.
As is evident, the instrumental type exploits AI-facilitated discourses for extraneous, even ulterior, motives. Government 'spying' on any kind of social media is one example: the less democratic the government, the closer the scrutiny, with escalating punitive actions as a consequence. Even as this is being written, the Russian government's penetration of privileged Democratic Party exchanges through social media exposes the damage that can be wrought. The eventual beneficiaries of such hacking tend to be the less-desired personnel, policies, and platforms rather than the more-desired: Donald J. Trump was elected into office with fewer popular votes than Hillary R. Clinton, aided by strategically-timed information leaks made possible by AI contraptions; and though the public did not vote as it did because of Russian interference alone, the damage was done by exposing the vulnerabilities at stake.
It gets worse when individuals are targeted with harmful or injurious consequences. This is hardly the only example, since intelligence agencies in more 'civilised' and democratic countries also indulge in such practices in the name of security; but China, the world's largest communist country and heir to an ideology born in next-door Russia (one that ultimately consumed the Soviet Union itself), indulges in them rampantly and massively.
Blocking websites, as Xiao Qiang notes in a relevant piece (The rise of China as a digital totalitarian state, Washington Post, February 21, 2018), has become a growth industry in China, under what is dubbed "the Great Firewall of China." The blocking extends not only to mainstream platforms like Facebook, Google, Twitter, and YouTube, but also to the virtual private networks (VPNs) used to circumvent it, and it has expanded constantly as going online has become ever more fashionable and popular. In this sense, the Internet's attractiveness in less developed countries breeds circumstances for continued authoritarian practices and persecution wherever democracy has not fully transitioned from dictatorial rule, as in Egypt, or where populist moods have hijacked democratic institutions, as in India.
China's cases again illustrate the damage being done. Through surveillance mechanisms such as WeChat, which 'spies' on social media postings, and pro-government Internet commentators like the '50 Cent Army', opponents are identified, often baited into a trap, and then captured for punishment or officially sanctioned 'disappearances'.
As one of the foremost technological 'equalisers' sociologically, politically, economically, intellectually, and innovatively, the Internet has also helped authoritarian, totalitarian, terroristic, and genocidal instincts and their perpetrators not merely to survive the monumental civilisational transformations underway, but to adapt to them successfully: China exemplifies many of the above; Al Qaeda gained notoriety by recruiting online; and much of what has been happening in Myanmar and Syria shows how perpetrator groups have profited from acquiring information online, disseminating it, and converting it into instruments.
Shifting to the built-in methodological biases possible in AI contraptions, we can see how old divisions, along gender and race particularly, continue, if not deepen, our discriminatory trajectories. General-purpose facial analysis illustrates the point (Larry Hardesty's MIT News piece of February 14, 2018 is recommended reading: https://www.weforum.org/agenda/2018/02/study-finds-gender-and-sk...): the software produced more errors in recognising female faces than male ones, and darker-skinned individuals than lighter-skinned ones. Precedent-setting innovative applications, unlike the proverbial rolling stone, can only gather more moss, in this case, discrimination-nurturing moss.
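The disparity the study describes can be made concrete with a simple audit: tally a system's error rate separately for each demographic group and compare. The sketch below is purely illustrative, the group labels and sample records are hypothetical, not data from the study, and the point is only the shape of the comparison.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates from (group, predicted, actual) triples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records: (demographic group, predicted label, true label)
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # a misclassification
    ("darker-skinned female", "female", "female"),
]
rates = error_rates_by_group(sample)
print(rates)  # {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

Audits of exactly this form, run over balanced test sets, are how the gender and skin-tone gaps in commercial facial-analysis systems were surfaced.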
Clearly these discriminatory outcomes were not intended by the contraptions' designers: they just happened, and with conscious remedying and redesigned inputs they can be corrected. Yet the very idea of using this wherewithal for discrimination, hardly new, since racial or intelligence purists have long flirted with genetic engineering to similar ends, may further feed the discriminatory industries wherever they operate. In a paradoxical age of breathtaking technological advancement and stiffly resurgent populism directed particularly, and in some cases officially, at immigrants, a Pandora's Box of discriminatory practices and policies has just opened. What must follow is a string of effective countervailing measures, though, paradoxically, the original fear of the Internet's globalising reach only begets thicker nationalistic responses.
Passport controls in some countries are already experimenting with iris recognition, among other identity-establishing tools. While the need for border controls grows with the explosive smuggling industry and the shuddering spread of terrorism, the scope for innocent passengers and travellers to proceed fret-free also gets clipped in one way or another. Without socially-conscious and politically-neutral administrators, designers, reformers, and leaders, and without rigorous cross-checking oversight, all the massive AI gains possible can be reduced to naught. Somehow the lines drawn between what is national and what is global will need to be sensitive enough not to clip the burgeoning legitimate trade in merchandise and services if AI developments are to become more user-friendly.
Ultimately, how much all this empowers the government over the free-flowing citizen desired by globalists, regional integrationists, business corporations, and individuals themselves will become the million-dollar question of the future. Since we end up confounding our own better senses with bugs we ourselves have failed to cleanse, we may have to turn to those very contraptions to bail us out with acceptable solutions.
Dr. Imtiaz A. Hussain is Professor & Head of the newly-built Department of Global Studies & Governance at Independent University, Bangladesh.