The Clouded Separation Between Algorithms and Intelligence

Published by: James Liang

Source: Australian Photography – AI generated image wins Australian photo comp

 

In the contemporary landscape, Artificial Intelligence (AI) has already become interwoven with our modern world. From powering the search recommendations on your phone to diagnosing patients in the medical field and identifying malicious patterns in the financial industry, the applications of AI have transformed the way that we live.

 

‘Moore’s Law: The principle that the speed and capability of computers can be expected to double every two years, due to increases in the number of transistors a microchip can contain. (Britannica, 2023)’
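
To make that doubling concrete, here is a minimal Python sketch of the growth rate Moore’s Law describes; the starting transistor count and time span are illustrative assumptions, not historical data.

```python
def transistors(initial: float, years: float) -> float:
    """Projected transistor count after `years`, doubling every two years."""
    return initial * 2 ** (years / 2)

# Illustrative only: a chip with 1 million transistors, projected 20 years ahead.
print(f"{transistors(1_000_000, 20):,.0f}")  # 1,024,000,000 (a 1024x increase)
```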

 

From a holistic perspective, the rising prominence of AI is a relatively recent occurrence, driven by the relentless growth of technology and its milestone achievements. For example, in 1997, IBM’s Deep Blue chess computer defeated the reigning world champion Garry Kasparov. Soon after, Cynthia Breazeal at MIT developed Kismet, a robot that was the first of its kind to recognise and display emotions (Harvard, 2017).

 

However, as AI becomes more ‘intelligent’, concerns are growing about the potential displacement of humans, particularly in domains where innovation and creativity traditionally thrive. In a world where computers can create novel content, from domain expertise to artistic expression, what truly separates us from these ‘intelligent’ machines?

 

In this article, we seek to examine the separation of human and artificial intelligence, and in doing so, explore how one might navigate a world where AI is an increasingly tangible and pervasive presence.

Source: BodyCare – Image of AI and Human

 

What is Artificial Intelligence?

Artificial intelligence is a field at the intersection of computer science and big data, where algorithms seek to make increasingly accurate predictions or classifications based on input data (IBM, 2022). Whilst the topic of AI extends to subfields such as machine learning, deep learning, and computer vision, in this article we will only be exploring its most recent and hyped application: Large Language Models (LLMs).
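
To ground that definition, below is a minimal sketch of an algorithm learning to classify from input data, using scikit-learn and its bundled iris dataset; the dataset and model choice are illustrative assumptions rather than anything specific from this article.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Input data: flower measurements (features) with known species (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple learning algorithm
model.fit(X_train, y_train)                # learn patterns from labelled examples
print(f"test accuracy: {model.score(X_test, y_test):.2f}")  # quality on unseen data
```

The more (and better) data such a model sees, the more accurate its predictions tend to become, which is the core idea behind the field.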

 

Large Language Models – A race for supremacy

LLMs are deep learning algorithms that can recognise, summarise, translate, predict, and generate content using very large datasets (NVIDIA, 2024). One of the most prominent examples is OpenAI’s ChatGPT, a state-of-the-art AI chatbot that took a mere 5 days to reach 1 million users (Statista, 2023) and remains popular to this day. Its rapid development was made possible by the transformer architecture, which Google introduced in its 2017 research paper, “Attention is All You Need”.
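
For the curious, the heart of that transformer architecture is the attention operation. Below is a toy NumPy sketch of scaled dot-product attention as described in the paper; the shapes and random inputs are illustrative assumptions only.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax turns scores into probabilities
    return weights @ V                              # each token becomes a weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, 8-dimensional vectors
print(attention(Q, K, V).shape)  # (4, 8): one context-aware vector per token
```

Stacking many such attention layers and training them on vast text corpora is what gives LLMs their ability to model language.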

 

So, why are LLMs such a big deal? Perhaps the source of the hype stems from the broad, general-purpose capabilities exhibited by these models. For the first time, we are seeing AI models gain the ability to generate human-like content with human-like intelligence. The applications for these LLMs therefore span a plethora of use cases, such as online question answering (as used on Quora), sentiment analysis for retailers, summarisation in legal settings, or even practical medical diagnoses (NVIDIA, 2024). Unsurprisingly, various companies are already making major investments in this field, such as Google, which reportedly made a $2 billion investment in the artificial intelligence space in 2023 (Bloomberg, 2023).
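
As a taste of one such use case, sentiment analysis, here is a brief sketch using the open-source Hugging Face transformers library; the library choice and example text are assumptions of ours, since the article does not prescribe a specific tool.

```python
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model on first use
print(classifier("The delivery was fast and the product works perfectly!"))
# Example output: [{'label': 'POSITIVE', 'score': 0.999...}]
```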

 

So, where is this technology headed? Most recently, in December 2023, Google released its first multimodal model, Gemini, which reportedly outperforms OpenAI’s GPT-4 by 3.6% when evaluated on general capabilities (Google, 2023). Furthermore, by being multimodal, Gemini has the potential to transform any type of input into any type of output. If you would like to use Gemini to generate images or answer prompts of your own, you can check it out here:

https://gemini.google.com/app

Source: Google Deep Mind – Sample input and output of Google Gemini
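
For developers, the same model can also be queried programmatically. Below is a minimal sketch using Google’s google-generativeai Python SDK; the model name, API surface, and placeholder key are assumptions that may change as the SDK evolves.

```python
# Requires: pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")      # placeholder: supply your own key
model = genai.GenerativeModel("gemini-pro")  # a text-capable Gemini model
response = model.generate_content("Briefly explain what a multimodal model is.")
print(response.text)
```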

 

More recently, in February 2024, OpenAI announced Sora, a text-to-video model (inspired by the LLM architecture) that can generate videos up to a minute long whilst maintaining visual quality and adherence to the user’s prompts (OpenAI, 2024). With the alarmingly quick progression of large language models within the tech industry, it is natural that both excitement and fear have been elicited from the public. As it interacts more and more with the reality around us, artificial intelligence is becoming an increasingly tangible presence that we cannot ignore. So, as we stand at the cusp of the AI revolution, it is important to contemplate how this technology should be used – and whether we should be afraid of it.

 

What is the role of AI in our future?

The most common belief surrounding this technology is that AI will alter the way we work by complementing and augmenting human capabilities rather than displacing them (Harvard Business Review, 2018). In what is being termed collaborative intelligence, it is expected that the greatest benefits of AI will only be realised by people who know how to utilise it most effectively.

 

For example, as part of its aims for Gemini, Google envisions that the general-purpose model will be taken up by developers and enterprises, who can then build on it for their specific use cases (Google, 2024). Even within research and academia, LLMs are being used in an exploratory manner, to see how these models can support and improve the results of traditional research problems. Moreover, the creation and development of AI systems also requires more machine learning engineers, data scientists and specialised researchers (Forbes, 2023).

 

Outside of the tech space, it is predicted that any role involving creating, using, or exchanging information has the potential to be made more efficient. This includes roles as diverse as doctors, lawyers, plumbers and carpenters. For example, in a study conducted by the National Bureau of Economic Research, customer support agents using generative AI tools saw a nearly 14% increase in their productivity (Harvard Business Review, 2023).

 

Change has always been an inevitability in our tumultuous history, but what we have learnt is that, in the long run, technology adoption tends to create a multitude of new jobs and unleash demand for existing ones, ultimately creating more jobs than it destroys (McKinsey, 2017). According to the World Economic Forum’s “The Future of Jobs Report 2020”, AI is expected to replace 85 million jobs worldwide by 2025 and create 97 million new jobs in that same timeframe (Built In, 2023). Perhaps that is some reassurance as we head towards an increasingly AI-driven future.

 

Looking further ahead

The human capacity for growth has often been described as limitless. Whilst the boundary between artificial and human intelligence may be increasingly hazy, it is important to remember that AI is ultimately just a tool designed to improve the way that we live. Perhaps one possible future to consider is one where strengthened collaboration between AI and human intellect creates mutual growth, empowering us to achieve the very best of our collective intelligence.

Source: DALL·E 2 – generated via the prompt:
“An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula.”

This article is published by CCA, a student association affiliated with Monash University. Opinions published are not necessarily those of the publishers. CCA and Monash University do not accept any responsibility for the accuracy of information contained in the publication.
