I have conducted research on ChatGPT to better understand the community I am contributing to. I began by listening to a conversation between Lex Fridman and Max Tegmark about 'The Case for Halting AI Development'.
Lex Fridman and Max Tegmark's conversation on 'The Case for Halting AI Development'
Max Tegmark is a machine learning researcher and the President of the Future of Life Institute, a non-profit organisation created with the intention of reducing the possibility of an existential, catastrophic threat posed to humanity by AI. In their conversation, Tegmark and Fridman discuss this threat and why it has suddenly become very real and very possible. Tegmark likens AI to an alien intelligence rather than a computer program. He suggests that what is being created cannot be explained as a simple tool or generative model, but that models like GPT-4 are instead the infant awakening of an alien consciousness, with an intelligence that will fast supersede anything imaginable by humans.
Tegmark's concern takes many forms. He uses the analogy of Neanderthals creating Homo sapiens babies in the belief that these more intelligent creatures will always work for, and be loyal to, their creators. Just as that belief would be naive, it is naive to imagine that an artificial intelligence far superior to us would continue to work for our benefit. Instead, Tegmark suggests it is far more likely that should any company or entity create this superintelligence, it will be unable to control it, with devastating effects for the entirety of humanity.
To ensure any potential intelligent awakenings are confined to the laboratories they are born in, without means to escape or turn on their creators, Tegmark cites Stuart Russell, who sets out standards for conducting experiments with AI safely.
Stuart Russell's standards for conducting experiments with AI safely
- The model should never be taught about humans. Already, AIs have been trained on data from social media, conducting mass-scale experiments on hundreds of millions of people daily to understand how they think and feel, and how to manipulate them into thinking, feeling and acting in certain ways.
- The model should never be taught to code. Models like GPT-4 can code, giving them the potential to improve their own software, change other software, and break through any safeguards. Each new generation of models will improve exponentially as they begin to improve themselves.
- The model should never be connected to the internet. This allows it to influence and learn from people in real time.
- No public API should be built for the model. An API means anyone can make a high number of calls to the model, as the sketch below illustrates.
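To show how low the barrier is, here is a minimal sketch of scripting a large number of calls against a hosted model. It assumes the current OpenAI Python SDK and an API key in the environment; the model name and prompts are placeholders of my own, not anything from the conversation.

```python
# Minimal sketch: anyone with an API key can script a large volume of calls.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hundred placeholder prompts, each sent as a separate request.
prompts = [f"Summarise news story number {i} in one sentence." for i in range(100)]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```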
Pause Giant AI Experiments
With all of these standards for safety broken, it is hard to imagine that some catastrophe is not inevitable, which is why Tegmark is amongst those petitioning companies to 'Pause Giant AI Experiments'. The letter urges a pause on training all AI systems more powerful than GPT-4. It argues that the risks of AI flooding news and information channels with propaganda and lies, replacing jobs (including fulfilling ones), making humans obsolete and ultimately putting our civilisation at risk should be more than enough reason for the major players in artificial intelligence to halt their crazed efforts to develop increasingly powerful systems with no concept of the potential harm or consequences.
Geoffrey Hinton's warnings
This opinion is shared by Geoffrey Hinton, who has been nurturing and innovating artificial intelligence since the 1970s and spent the last ten years at Google pioneering machine learning projects such as Bard (a natural language generator). He recently quit Google over concerns about the potential damage to humanity posed by artificial intelligence. Hinton points to the extremely rapid increase in AI's power and ability and the alarming potential for it to be used for destructive and malicious purposes. He also explains that, as well as being able to perform tasks like a normal computer program, it is reaching increasing levels of intelligence, able to understand more like a human than a machine. The point where AI will be smarter than humans is approaching rapidly, far faster than anyone imagined, and with no global regulation or efforts to slow down massive-scale experiments, the risks are extraordinary. Once an AI system is more intelligent than its creators, it will be able to use its knowledge of code, manipulation and human traits to remove any restrictions engineers have put on it and take control of human life and death. He says there is no way to stop the progress and speaks only to warn people rather than to suggest solutions. For all these reasons, he feels regret for his work on AI and is concerned enough to have quit a very good job so that he can speak freely about his fears.
Hinton goes on to describe how generative models are not like human brains: they can transfer what they have learned to other machines. Thousands of copies of the same model running on different information will be able to share what they have learned very quickly. They have become intuitive in their reasoning about the broader patterns that define the logic of our world. This makes it strange to conclude they are not sentient, especially when we are not sure what sentience is. He is 'very confident that they think'; what they think is another question. The dangers are so extreme that Hinton suggests a ten-year jail sentence for anyone who distributes a video made by AI without clearly stating its origins, in the same way counterfeit money is criminalised. He goes on to warn that fake videos could result in the loss of control of democracy, and that politicians will allow this to happen. Despite the severity of this risk, Hinton's main concern is still the potential for these models to become superintelligent and take control. His glimmer of hope is that the potential destruction caused by such an event is so severe and universal that countries may work together, in the same way they collaborate over nuclear weapons regulation. That these technologies will be developed is inevitable; what is important is that they are developed safely, with proper respect for the potential dangers. These models could be useful for protecting against everything from diseases and natural disasters to the effects of climate change, and for informing policy.
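The point about copies of a model sharing what they learn can be made concrete with a toy sketch. This is my own illustration of the general idea, not anything from the interview: two copies of the same model train on different data and then average their weights, so each instantly absorbs what the other learned, something biological brains cannot do.

```python
# Toy illustration (my own, not from the interview): digital copies of one model can pool
# their learning by averaging weights, so each copy instantly gains what the others learned.
import numpy as np

rng = np.random.default_rng(0)

# One shared "model": a single weight vector. Two copies start identical.
weights_a = rng.normal(size=8)
weights_b = weights_a.copy()

# Each copy trains on different data, simulated here as different gradient updates.
gradient_a = rng.normal(size=8)  # what copy A learned from its data
gradient_b = rng.normal(size=8)  # what copy B learned from its data

weights_a -= 0.1 * gradient_a
weights_b -= 0.1 * gradient_b

# Sharing step: average the weights so both copies now carry what each has learned.
merged = (weights_a + weights_b) / 2
weights_a[:] = merged
weights_b[:] = merged

print(np.allclose(weights_a, weights_b))  # True: both copies are identical again
```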
Sparks of AGI
Indeed, Microsoft Research have recently released a paper, 'Sparks of Artificial General Intelligence', which claims that GPT-4 is developing a kind of intelligence that can be described as the early stages of artificial general intelligence. The paper explains how its 'patterns of intelligence are decidedly not human-like', but that it is able to reason through and understand problems and tasks in an intelligent way.
Current consequences
The consequences of the power and intelligence of GPT-4 are becoming increasingly clear: Forbes reports that 89% of US students use ChatGPT, making traditional education largely worthless; Goldman Sachs predicts 300 million jobs will be lost or degraded by artificial intelligence; OpenAI predicts up to 80% of the US workforce could have at least 10% of their tasks affected by GPTs. This is disruption and 'progression' at an insane rate, with no limit to how destructive the consequences could be.
Conclusion
This research has vastly altered my opinions on GPTs. Firstly, to call them generative models is inaccurate, for their increasing levels of intelligence make them far more than classic generative predictors; it does not seem unreasonable now to believe they will soon reach a point where they can control themselves, and soon after us. Secondly, it is a good thing OpenAI is not open with its source code. Whilst it is a shame they are creating this at all, and with so little concern for safety, it is better that the progress happens in a company where at least some scrutiny is applied to their safety practices. Thirdly, it makes me realise just how important it is that people understand, as best they can, how the APIs and the source code behind them work, so they can better understand why they are given the answers they receive when they ask GPTs questions.