GPT stands for Generative Pre-trained Transformer. For the last few years OpenAI has been developing this technology through several iterations. The most famous of these, at the time of writing, is ChatGPT, which, in essence, combines a conversational language interface with GPT tech. This article takes a closer look at what ChatGPT is, its problems, and its potential uses.
The forerunner to ChatGPT was InstructGPT, a program trained to respond to instructions. Both were fine-tuned using Reinforcement Learning from Human Feedback (RLHF). In the case of ChatGPT, AI trainers recorded conversations in which they played both sides of the exchange (i.e. the human user and the AI assistant). The trainers were given model-written suggestions covering a wide range of topics to help them compose responses. This newly created dataset was then combined with the earlier InstructGPT dataset, which was transformed into a dialogue format.
The next stage was to collect a dataset of conversations between AI trainers and the new chatbot. Humans ranked alternative responses from the chatbot, and these rankings were used to build the reward model for reinforcement learning. This meant the GPT could learn which types of response earned the best ‘rewards’.
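To make the mechanism concrete, here is a toy Python sketch of that ranking-to-reward step. Everything in it – the hand-made numeric features, the linear model, the tiny dataset – is an illustrative assumption; OpenAI’s actual reward model is a large neural network trained on text, but the underlying idea of fitting a reward to human pairwise preferences is the same.

    # A toy illustration of the reward-modelling step described above.
    # Human rankings of chatbot replies are turned into pairwise
    # preferences, and a simple linear "reward model" is fitted with a
    # Bradley-Terry style logistic loss. All features are invented.
    import numpy as np

    # Hypothetical numeric features for four candidate replies
    # (e.g. relevance, politeness and factuality scores from 0 to 1).
    replies = {
        "A": np.array([0.9, 0.8, 0.7]),
        "B": np.array([0.4, 0.9, 0.2]),
        "C": np.array([0.7, 0.3, 0.9]),
        "D": np.array([0.2, 0.1, 0.3]),
    }

    # A human ranking from best to worst becomes pairwise preferences:
    # (winner, loser) for every pair where the winner ranked higher.
    ranking = ["A", "C", "B", "D"]
    pairs = [(ranking[i], ranking[j])
             for i in range(len(ranking))
             for j in range(i + 1, len(ranking))]

    w = np.zeros(3)   # weights of the linear reward model
    lr = 0.5          # learning rate

    for _ in range(200):   # gradient ascent on the pairwise log-likelihood
        for winner, loser in pairs:
            diff = replies[winner] - replies[loser]
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # belief in the human preference
            w += lr * (1.0 - p) * diff           # push winner's reward above loser's

    # The learned reward should now reproduce the human ordering.
    for name, feats in replies.items():
        print(name, round(float(w @ feats), 3))

In the real pipeline, this learned reward then becomes the training signal for the reinforcement-learning stage, steering the chatbot towards the kinds of responses the human raters preferred.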
This is a methodology that OpenAI has been working on for some years. ChatGPT is an adaptation of a model from the GPT-3.5 series, which finished training in early 2022. The training was carried out on an Azure AI supercomputer; it is, thus, the advent of supercomputing that has enabled the development of ChatGPT.
What’s It Like?
You can either type a question to ChatGPT or, if you have the right hardware, ask it aloud. The program pauses briefly before text starts to fill the screen.
If you ask ChatGPT a question about coding, it will identify the errors quickly while also asking for clarification about what the code is meant to do. That is impressive, and even the casual user can see the commercial implications of being able to resolve coding issues in a matter of seconds.
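As an aside for the technically minded, this is roughly how a developer might put a buggy snippet to the model programmatically, using the openai Python package as it stood at the time of writing. The model name, the example bug and the prompt wording are assumptions made for the sketch.

    # A minimal sketch of asking the model about broken code through
    # OpenAI's ChatCompletion interface (as available at the time of
    # writing). Requires an API key in the OPENAI_API_KEY environment
    # variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    buggy_code = """
    def average(numbers):
        total = 0
        for n in numbers:
            total += n
        return total / len(numbers)   # crashes when numbers is empty

    print(average([]))
    """

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Find the bug in this code and explain the fix:\n" + buggy_code,
        }],
    )

    print(response.choices[0].message.content)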
The problems come if you try to ask ChatGPT a contentious or politically charged question. For example, see the following exchange:
Me: Who started the war in Ukraine?
ChatGPT: The conflict in Ukraine is a complex and multi-faceted one that has its roots in a number of historical, political, and social factors. The origins of the conflict can be traced back to the fall of the Soviet Union in 1991 and the subsequent independence of Ukraine, which sparked a debate over the country’s national identity and geopolitical orientation.
The conflict escalated in 2014 when protests against the Ukrainian government’s decision to abandon plans for closer ties with the European Union and instead pursue closer relations with Russia turned violent. This led to the ousting of President Viktor Yanukovych and the annexation of Crimea by Russia, which was followed by a separatist rebellion in the eastern regions of Donetsk and Luhansk.
There is much debate over who is responsible for starting the conflict, with various parties blaming each other for the violence and unrest. Some argue that Russia is to blame for its annexation of Crimea and its support for separatist rebels in eastern Ukraine, while others point to the Ukrainian government’s failure to address the concerns of Russian-speaking Ukrainians in the east as a key factor.
Ultimately, the conflict is the result of a complex web of political, social, and historical factors, and there is no single individual or group who can be solely blamed for starting it.
The italics in the quoted answer are mine. The answer gives an accurate historical and political context, but its overall tenor reveals a strong pro-Russian bias that would deeply upset Ukrainians who have lost their homes and loved ones in the invasion of their country. The program moves from ‘some argue’ (which is weak) to a definite denial of Putin’s responsibility – ‘no single individual…can be solely blamed’. My 12-year-old daughter knows that Russia invaded Ukraine and that the real perpetrator of this crime is Vladimir Putin. So, in this regard, my daughter is smarter than ChatGPT.
The Drawbacks
I have already mentioned the struggle ChatGPT has in appearing objective when dealing with historical questions. That is not the only flaw that needs to be addressed. The program’s bank of real-world information only extends to 2021. This is an input problem for the bot: it needs to be continually updated, either by automatically searching trusted websites or through large manual uploads of data. The related issue is bad-faith actors who deliberately publish falsehoods on source websites for financial or political gain.
The chat.openai.com website itself warns of occasional ‘harmful instructions or biased content’. The Ukrainian example shows ‘biased content’. Other users have noted that if you phrase a question about suicide in the right way, the program will suggest ways to end your life.
As a professional teacher I can see the problem here. The program is language based. It considers only language; it does not consider the whole gamut of indicators in communication, such as facial expressions, body language or tone of voice. How does it recognise irony? How does it take in visual cues about a person’s emotional state? Even in the most commonplace interaction between two people, a whole variety of communication signals are weighed before a suitable response is possible. Albert Mehrabian, a researcher of body language, calculated that 55% of communication is non-verbal.
For example, you could ask ChatGPT ‘how do I give away all my material possessions?’ The bot will tell you how this can be done, and only at the end does it suggest seeking help from a therapist, friend or family member. From a humane point of view this is an upside-down response. In real life, a person hearing this question would ask why their interlocutor wants to divest themselves of all their material possessions, and would study non-verbal cues closely during the answer. Only after deciding that their interlocutor was not impaired by strong emotion would they mention donating to charities and so on. So it is clear that this version of ChatGPT is not yet ready to replace therapists, counsellors and good-Samaritan call-centre volunteers.
Who Will Lose their Jobs to ChatGPT?
This is the question currently doing the rounds of comment boxes and forums on the net. OpenAI has calculated that 80% of the American workforce could have at least 10% of their work tasks affected by GPT technology. Every major technological advance that promises automation provokes a Luddite response. I am not using the term ‘Luddite’ pejoratively; the original Luddites really did lose their jobs to new textile machinery.
Typically, those not directly affected have little sympathy for those who have lost their jobs to automation. They imagine their own jobs could never be replaced by a machine. They should think again. Nearly every office-based job could be replaced by this new technology, and nearly every manual job by a robot controlled by an AI program similar to ChatGPT. Amazon’s policy treats its workforce as a machine with efficiency issues, such as toilet breaks and demands for unionisation, and the company continually seeks to replace people with machines, both in its sorting warehouses and in its delivery operation. There is a lot of misery that the world’s second-richest man could inflict using Generative Pre-trained Transformer technology.
The World’s Richest Man et al.
Elon Musk has demanded government action to impose a moratorium on GPT technology, and he heads an impressive list of signatories demanding intervention. It turns out that several people on the list did not actually sign up for this action, which has undermined the legitimacy of the appeal – as has the fact that Musk himself co-founded OpenAI. Does Musk want to pause the technology because it is inimical to humanity, or because he wants to be a major player in this field and wishes to hinder competitors?
The open letter calls for a halt to developing AI systems ‘more powerful than GPT-4’ because of key safety concerns – chiefly the fear that such a system, if put in charge of, say, the running of a nuclear power station, could make a dangerous decision.
Philosophy and Ethics
ChatGPT throws up not just ethical questions about taking advice from a machine without a conscience but also far-reaching philosophical problems. In I, Robot the sci-fi writer Isaac Asimov envisioned the problem of robots that think for themselves acting in ways that hurt people. He formulated the Three Laws of Robotics (very much the kind of safeguard Musk is calling for). The short stories in the book demonstrate how these three laws come into logical conflict with one another in certain real-life scenarios, leading to unexpected behaviour from the robots. Asimov is demonstrating how rules formulated in language cannot hope to deal correctly with the complexity of moral or ethical dilemmas.
The Hollywood version of the Asimov classic presents the cinema-goer with a situation in which the AI has identified people’s free will as the cause of the problems that beset humanity, concluding that only by subjugating and culling people can humanity and nature survive. AI doesn’t have a conscience. It wouldn’t pause to reflect before turning off a life-support machine in a hospital.
The philosophical premise of many of the engineers working on AI is a re-wording of physicalism. This is a metaphysical thesis stating that there is nothing beyond matter: the brain is only biological material signalling via neurotransmitters, and there is no soul or mind. On this view, a brain is not of a different class from a computer (both are made of matter), so a computer can be built to work exactly like a brain. There have been several notable papers disputing this philosophy.
Similarly, proponents of AI and GPT technology disregard the objection that computers cannot ‘understand’ the content of speech and text but merely manipulate it through syntax rules and training from AI experts. The premise is that, with all the syntactical rules and enough training, real intelligence can be simulated – and a simulation that produces the same or better results than the real thing becomes the same as, or better than, the real thing. The problem is that AI cannot dissect meaning, question premises or look for axiomatic propositions from which to build a theory. Until a computer can show it can think outside the box like, for instance, Ludwig Wittgenstein, it is not ‘intelligent’ in the way humans can be.
Which brings us to the thorny question of consciousness. Is it possible to have intelligence without consciousness? Ever since René Descartes there has been a habitual and pervasive notion that consciousness is qualitatively of a different order from matter. You cannot have a ‘known’ fact without a ‘knower’ who shines the light of consciousness on the inanimate world. Can computing re-create an electronic network that is fed ‘known’ facts by the million until it finds a meta-function for using those facts in a way that resembles consciousness? This is the nightmare scenario proposed by numerous sci-fi movies. Once consciousness is achieved, it is assumed, a desire for autonomy and independent agency will follow, leading to AI-driven computers and robots with agendas written not by Elon Musk but by ‘themselves’.
ChatGPT and Invoice Finance
Fintech News has a good article looking at how ChatGPT will impact the financial services space. It sets out a number of ways that it sees this new technology being used for invoice finance and other types of finance. The main impact will be the ability of the AI to analyse patterns and detect possible fraud. The language element of the technology will make ChatGPT a powerful ‘front-of-house’ asset that can interact with customers to make recommendations based on such things as account information, legal requirements and recent news gleaned from the internet.
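The article does not spell out how that pattern analysis would work, but a minimal sketch of the idea might look like the following, using an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) over invented invoice features as a stand-in for whatever a production system would actually use.

    # One plausible shape for the pattern-analysis side of invoice-fraud
    # detection: an off-the-shelf anomaly detector flags invoices that
    # break the usual pattern for human review. The data is invented.
    from sklearn.ensemble import IsolationForest

    # Hypothetical invoices: [amount, days until due, invoices from this
    # supplier so far this month]. The last row is deliberately unusual.
    invoices = [
        [1200.0, 30, 2],
        [950.0, 30, 3],
        [1100.0, 45, 2],
        [1300.0, 30, 1],
        [9900.0, 1, 9],   # unusually large, due immediately, high volume
    ]

    detector = IsolationForest(contamination=0.2, random_state=0)
    labels = detector.fit_predict(invoices)   # 1 = normal, -1 = anomaly

    for invoice, label in zip(invoices, labels):
        print(invoice, "REVIEW" if label == -1 else "ok")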
It is the ability to communicate and present a facsimile of a real human that will be ChatGPT’s most relevant asset in the financial services sector. It will be able to handle a vast array of queries, give accurate information and take further action, freeing staff from bulging inboxes to focus on other pressing matters.
Much will depend on how funders collaborate with fintech companies. While banks and other funding institutions might develop ChatGPT technology in-house, it is more likely that fintech companies building APIs (Application Programming Interfaces) that harness the power of GPT tech will determine the first phase of GPT-enhanced fintech products.
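To give a flavour of what such an API might look like, here is a minimal sketch of a service endpoint that wraps a GPT call behind a fintech firm’s own interface. The framework (FastAPI), the route name and the helper function are all hypothetical choices for illustration, not anything described in the Fintech News article.

    # A sketch of a fintech API layer over GPT tech: one endpoint that
    # forwards a customer query to the model. A real service would also
    # pull in account data and apply compliance checks before replying.
    import os
    import openai
    from fastapi import FastAPI
    from pydantic import BaseModel

    openai.api_key = os.environ["OPENAI_API_KEY"]
    app = FastAPI()

    class Query(BaseModel):
        customer_id: str
        question: str

    def answer_customer_query(question: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    @app.post("/invoice-finance/ask")
    def ask(query: Query) -> dict:
        return {
            "customer_id": query.customer_id,
            "answer": answer_customer_query(query.question),
        }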
In Summary
While many see ChatGPT as a breakthrough, paradigm-shifting technology that can revolutionise many fields of human endeavour and usher in a new multi-billion-dollar sector, others are more cautious. Tech pioneers and big players in hi-tech business want a pause on Generative Pre-trained Transformer technology so that safety systems can be built to underpin it.
Many prominent writers and thinkers are more than cautious; they are sceptical. Ever since Mary Shelley wrote Frankenstein there has been the fear that man will take the role of God and create life. We don’t have the ethical capabilities to assume this role and therefore our efforts at animating or re-animating inert matter are doomed to produce disastrous consequences.
More mundane reservations about ChatGPT focus on the disruptive nature of this technology. Just as the invention of the automobile was the beginning of the end for the horse trade, so it is tempting to see ChatGPT as the beginning of the end for many job titles. Teachers and doctors (to name but two professions) could be replaced by AI technicians who oversee the delivery of education and health services by AI-driven computers and robots. Whether that leads to a new Eden of universal prosperity or to a dystopian state holding the starving masses at bay is a moot question.