Why AI Could Be Detrimental to the Future of Media
Written by Macie Shadbolt.
It is no secret that artificial intelligence is one of the most talked-about topics of the moment. But what exactly is AI? Artificial intelligence, known as AI, is a branch of computer science concerned with building computers and robots that can carry out tasks which normally require human intelligence and judgement.
Why should we fear the developing evolution of AI?
In June 2020, OpenAI, an AI research laboratory co-founded by Elon Musk, introduced the Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model available for developer and enterprise use. GPT-3 was the most powerful language model created to that point, with 175 billion parameters compared to the 1.5 billion of OpenAI's previous model, GPT-2. To put this into perspective, GPT-2 had only been released in February 2019. That pace of advancement is one reason to fear what is yet to come. Automation is already reshaping our lives, and it is becoming more advanced by the year. We have been astonished by self-driving cars, but alongside them we now have autonomous weapons: artificial intelligence systems programmed to kill. What if these fell into the wrong hands? Such machines could easily cause mass casualties.
There are jobs people do today that these machines will take over. This will be a huge transition for society and will require major changes to training and education programmes to prepare us for the future ahead. Elon Musk has described AI as our "biggest existential risk" and argued that it could one day lead to mass human extinction. Many AI systems that already exist surpass human capability in their domains. "The development of full artificial intelligence could spell the end of the human race," said Stephen Hawking. Anyone setting out to build "safe" AI systems must first explain why AI systems can be dangerous in the first place, and these risks touch almost every part of our day-to-day lives, from security to politics. "You're probably not an evil ant-hater who steps on ants out of malice," Hawking said, describing how an AI programmed to do good can still cause danger and harm unintentionally. If the scientists creating these programmes and machines can openly admit the harm they can do, what reason do we have not to fear the rapid evolution of AI?
The algorithms behind AI can inherit the biases of the people who build them and of the data they are trained on: if an algorithm is trained on biased or discriminatory data, that is what the AI will learn. This was seen when Microsoft's Twitter chatbot, Tay, turned racist after learning from the abusive messages users sent it, with major consequences. A survey found that opinions on artificial intelligence differ by sex. Only 17% of women felt optimistic about its development, compared with 28% of men. 13% of men believe they could be friends with a robot, as opposed to 6% of women. Shockingly, more than one in five men (22%) think that super-intelligent robots could be trusted to carry out sex work, versus 13% of women.
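The mechanism behind learned bias is easy to illustrate. The toy sketch below is a hypothetical example (not Tay's actual code or any real system's): a trivial word-scoring "model" is trained on a tiny dataset in which one made-up term, `group_x`, only ever appears in negative examples. The model then scores a perfectly neutral sentence containing that term as negative, purely because of the bias baked into its training data.

```python
from collections import Counter

# Toy training data. The invented term "group_x" appears only in
# negative examples -- a stand-in for a real-world corpus in which
# some group or dialect is over-represented in negative contexts.
training = [
    ("great service friendly staff", +1),
    ("lovely helpful team", +1),
    ("group_x caused trouble again", -1),
    ("group_x ruined the event", -1),
]

# "Train": tally how often each word co-occurs with each label.
scores = Counter()
for text, label in training:
    for word in text.split():
        scores[word] += label

def sentiment(text):
    """Sum the learned per-word scores; the sign is the prediction."""
    return sum(scores[w] for w in text.split())

# A neutral sentence is scored negative solely because of the
# biased association the model absorbed during training.
print(sentiment("group_x attended the meeting"))  # -3 (negative)
```

No real machine-learning library is needed to make the point: the model has no malice of its own, it simply reproduces whatever pattern, fair or unfair, its training data contains.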
It is clear that the negatives outweigh the positives when it comes to the development of AI. With the future of the world already uncertain because of climate change, AI is not the only danger we could face in the coming years. Given the many ambiguities surrounding AI, we should devote serious effort and attention to laying the foundations for the safety of future systems, and to building clearer knowledge and awareness of the implications of these developments and advancements.