Pause for Thought: Ethical Issues in Artificial Intelligence

Consider Cambridge Analytica – the British consulting firm which allegedly carried out psychometric profiling of US audiences to enable targeted distribution of material to voters. Simply by creating a Facebook app that was downloaded by about 300k users, it gathered information about around 87 million people (the friends and connections of those who installed the app). The power of even simple AI to make deductions from the quiz-takers and the ‘likes’ of their friends brings home the astonishing implications of the technology. Access to data, combined with a very simple algorithm, may have altered the results of key democratic processes.

Data harvesting, and the way it was used by a well-funded and determined group, is an ethical issue. One almost needs to revisit what being a democracy means; people have always relied on market surveys, making deductions and targeting audiences. People will look for the failings in the chain of data privacy, but I feel the issue is far deeper, and it will only become more convoluted and complex in an era where big data enables so much. Incidentally, the US regulator (the Federal Trade Commission (FTC)) has approved a $5bn fine on Facebook to settle claims of data privacy violations, by a 3-2 vote (a fairly close call in my view).

The ethics of AI is becoming a big subject; cursory research indicates that more than 17,000 scientific and academic articles have been written on it since 2018. There are centres of excellence around the globe and an increasing number of books and blogs devoted to it. The issue is also of tremendous importance to companies such as Capita. As an introductory guide, I recommend The Hitchhiker’s Guide to AI Ethics.

Many people would say the main ethical issues around AI fall into the following categories:

  1. Unemployment: consider automation – self-driving vehicles; the automation possible in 78% of manual repetitive processes; the democratisation of knowledge (as tackled in one of my earlier posts); and so on. At Capita, we probably could, and will attempt to, automate around 80% of our repetitive procedures and tasks. There are studies indicating that AI will create more jobs than it eliminates, but the jury is still out. In 2014, the three biggest companies in Detroit and the three biggest companies in Silicon Valley generated roughly the same revenues – only Silicon Valley did it with ten times fewer employees. There will be positive spins on the subject, like “one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live”, but we will have to redefine what work means.
  2. Inequality Created by AI: global commerce depends on goods and services, and the economies and corporate entities that can effectively make use of new technologies can outdo all others. This gives those able to invest (the richer countries, companies and individuals) the potential to dominate the future landscape of wealth. Inequality breeds revolutions, insecurity, and schisms.
  3. Machines Affecting Human Behaviour: as we enter the era when machines can mimic human responses (e.g., Eugene Goostman) and beat humans in traditional tests of intelligence (e.g., chess, Go, poker), more of us are becoming aware of how machines are altering our behaviour. Look around when you are on the tube or in a restaurant: people are glued to their mobile devices, be it playing games or endlessly checking their social media. Clickbait, and tremendously optimised A/B testing, ensure that our reward centres keep us addicted to the need to be connected – all in aid of targeted marketing. Algorithms increasingly and clandestinely affect everything we do, from how we shop to how we vote. They will increasingly shape how we learn and what we feel is the purpose of existence.
  4. Mistakes Embedded in Learning Algorithms: machines learn from examples – just like humans. But humans cannot be scaled endlessly and have finite lifetimes. When machines learn and become good at particular tasks, they are endlessly scalable and have the potential to dominate a particular domain – say, assessing insurance claims. In the not too distant future, these algorithms will be so accurate and advanced that no human will be able to compete with their logic. That will make it increasingly difficult to challenge such algorithms even when they contain mistakes. Have a look at some of the biggest failures in AI last year.
  5. Bias Embedded in AI: humans have memes and cultures that embed particular biases and prejudices in their being. With machines and AI, it is the data. If the data has bias, then the AI will have bias (see the first sketch after this list). The vast majority of AI applications today are based on the category of algorithms known as deep learning, and it is precisely this class of algorithms that finds patterns in data. Algorithms can perpetuate injustice in hiring, retail, and security, and may already be doing so in the criminal legal system. Indeed, there have been many examples where the current generation of ‘fair’ algorithms perpetuates discrimination.
  6. Keeping AI Away from ‘Bad’ Use: there are always people underground (for instance on the dark web) who research the use of AI for nefarious ends. These include gaining control of financial systems, weapon systems, and personal and commercially sensitive data; disrupting due judicial process; and much more. Have a look at this much-shared YouTube video to get a relatively naïve and sensationalist feel for the use of AI in warfare. The use of AI in cybersecurity, and next-generation neural and quantum cryptography, will be fields of increasing importance in ensuring that the institutions we know and trust can continue to exist. Given how many state actors are involved in the bad uses of AI, it is almost a non-subject: the technology is largely democratised, and ultimately everything turns on motivation and need. I believe this is an impossible ask – keeping AI away from bad use.
  7. Unintended Consequences, Singularity and Humane Treatment of AI: intelligence in today’s systems is superficial, and the current generation of deep-learning systems is little more than tensor algebra and calculus with particular ways of efficient optimisation (see the second sketch after this list). However, research is underway on the building blocks of generalised artificial intelligence that may lead to a form of self-consciousness. Just as we value the rights of animals and the planet as a whole, procedures and declarations of the rights of machines will emerge. This is to enable machines to be classified according to rights and responsibilities, rather than for humans to be seen as some form of overlords of all creation. It may be that machines come to have vastly more intelligence (theoretically 10k times more than is possible for biological systems – just in terms of speed, copper conducts signals 10k times faster than biological neural connections), but that their form of consciousness (feelings of reward and aversion) may be very different from ours. We may start by implementing our value system in neuromorphic hardware, but the neuroplasticity of such systems (their ability to modify their own mechanisms for learning) may ensure that they find their own rightful place in nature, with the possibility of self-assembly, evolution and purpose. Depending on the value-sets such systems deem necessary for their own sustenance, humans will have to ponder the issue of “pulling the plug” if we begin to become irrelevant or a hindrance to such intelligence.
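
To make point 5 concrete, here is a minimal sketch of how bias in data becomes bias in a model. Everything in it is invented for illustration – the synthetic ‘hiring’ data, the group penalty, the tiny logistic regression – so it is not any real system, just a demonstration that a model trained on skewed history reproduces the skew:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a 'skill' score and a group flag (0 or 1).
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Biased historical labels: equally skilled members of group 1
# were hired less often in the past.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train a tiny logistic regression by gradient descent on (skill, group, bias).
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted hire probability
    w -= 0.1 * X.T @ (p - hired) / n    # gradient step on the log-loss

# Score two identical candidates who differ only in group membership.
for g in (0, 1):
    x = np.array([0.5, g, 1.0])
    print(f"group {g}: P(hired) = {1.0 / (1.0 + np.exp(-x @ w)):.2f}")
```

The model has done nothing ‘wrong’: it has faithfully learned the pattern in the data, and the pattern is the prejudice.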
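
And to make the ‘tensor algebra and calculus’ remark in point 7 concrete, here is a minimal sketch of one complete deep-learning training loop – a toy one-hidden-layer network on made-up data, written out by hand so that the three ingredients (matrix products, the chain rule, gradient descent) are visible:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 4))              # a batch of inputs
y = np.sin(X.sum(axis=1, keepdims=True))  # an arbitrary target function

W1 = rng.normal(size=(4, 16)) * 0.5       # hidden-layer weights
W2 = rng.normal(size=(16, 1)) * 0.5       # output-layer weights

for step in range(500):
    # Forward pass: nothing but matrix products and a nonlinearity.
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule from calculus, written out by hand.
    d_yhat = 2.0 * (y_hat - y) / len(X)
    dW2 = h.T @ d_yhat
    dW1 = X.T @ ((d_yhat @ W2.T) * (1.0 - h ** 2))

    # Optimisation: plain gradient descent.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(f"final loss: {loss:.4f}")
```

Everything a modern framework does is an industrialised version of this loop; there is no understanding in it, only optimisation.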

In an old, and not widely cited, paper, Nick Bostrom of the Future of Humanity Institute (FHI) at the University of Oxford concludes that “Although current AI offers us few ethical issues that are not already present in the design of cars or power plants, the approach of AI algorithms toward more humanlike thought portends predictable complications. Social roles may be filled by AI algorithms, implying new design requirements like transparency and predictability. Sufficiently general AI algorithms may no longer execute in predictable contexts, requiring new kinds of safety assurance and the engineering of artificial ethical considerations. AIs with sufficiently advanced mental states, or the right kind of states, will have moral status, and some may count as persons—though perhaps persons very much unlike the sort that exist now, perhaps governed by different rules. And finally, the prospect of AIs with superhuman intelligence and superhuman abilities presents us with the extraordinary challenge of stating an algorithm that outputs superethical behavior. These challenges may seem visionary, but it seems predictable that we will encounter them; and they are not devoid of suggestions for present-day research directions.” The FHI divides its work into four main categories: Macrostrategy, AI Safety, the Center for the Governance of AI, and Biotechnology.

I hope that by now the reader sees that the issues around the ethics of AI are deep and wide. AI has the potential to change us and the fabric of our society. The governance and regulatory frameworks of traditional institutions are no match for the technologies and means made possible by AI. One only has to look at the debacle around Brexit, and at elections and selections around the globe, to see the impact of using information and advertising selectively and in a targeted manner. Couple this with the nefarious uses of AI, as well as the possible emergence of superintelligence (read Nick Bostrom’s Superintelligence and James Lovelock’s Novacene: The Coming Age of Hyperintelligence), and one has enough material for a Shakespearean tragedy.
