Globant Presents its Vision on the Future of Artificial Intelligence

“RevolutionAI: The Future of Artificial Intelligence” covers the myths, domains, model-building approaches, risks, and regulation of today’s most talked-about technology.

During the presentation “RevolutionAI: The Future of Artificial Intelligence” by Globant, a team of experts formed by Juan José López Murphy, Head of Data Science and Artificial Intelligence at Globant; Gonzalo Zarza, CDO at LaLiga Tech; Fede Constantino, Head of Product and Platforms for Europe at Globant X; and José María San José, AI Expert at Globant, offered the company’s vision on the fundamental aspects to take into account when building Artificial Intelligence models and applying them to real use cases.

To open the presentation, this same team of Globant experts defined Artificial Intelligence as a field dedicated to creating computers and machines that perform tasks that would normally require human intelligence to reason, learn and act, or that involve data at a scale beyond what humans can analyse.

Having defined the concept of Artificial Intelligence, and noting that definitions vary widely depending on the use to which the technology is put, Juan José López Murphy opened his talk by debunking common myths about AI and contrasting them with the current reality of the technology.

AI: Myths vs Reality

In recent years, Artificial Intelligence (AI) has become increasingly present in our daily lives, but many myths still circulate about this technology. One of the most common is that AI will replace human creativity and thinking. López Murphy explained that, in reality, AI has been developed to enhance human capabilities and optimise everyday processes.

Another common myth is that AI is 100% accurate. In reality, human intervention is still needed to verify accuracy and context. While AI models can learn and improve based on the datasets used to train them, their effectiveness ultimately depends on the human ability to monitor and control their performance. It is important to understand that AI does not develop on its own without human support.

Another myth is that only large companies can use AI. However, AI tools are constantly being released and developed, which makes access easier and reduces costs. This means that large, medium and small organisations alike can take advantage of AI. Moreover, AI has broad applications across different business units, which contradicts the myth that it is only applicable to repetitive and automated tasks. In short, it is important to understand the reality of AI to make the most of its potential and avoid falling into unfounded myths.

What can be done to humanise AI?

AI research is exploring different domains to replicate human intelligence and develop applications that allow computers to interact more naturally and effectively with the world.

One of these domains is the sensory aspect, where work is being done to give computers the ability to perceive the world through vision, hearing, touch, taste and smell. The aim is for AI to engage with its environment in a more natural, human-like way.

Another domain being worked on is the sense of sequence. This involves the ability to anticipate and predict, modelling cause and effect to understand the implications of actions. Techniques such as linear optimisation, simulation and robotics are being used to achieve this; a small optimisation sketch follows below. In addition, work is being done on the sense of the whole, where AI is trained to see the disparate parts of a system so that it can plan and optimise more effectively.

Finally, work is also being done in the domain of formal logic, where code is being developed to teach a computer to carry out complex logical operations, so that it can be rational and use logic to make decisions.
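To make the optimisation techniques mentioned above concrete, here is a minimal sketch of a linear optimisation problem of the kind used for planning and resource allocation, using SciPy’s linprog. The numbers and constraints are invented purely for illustration.

```python
from scipy.optimize import linprog

# Toy planning problem: choose quantities x1, x2 to maximise 3*x1 + 2*x2
# subject to resource limits. linprog minimises, so the objective is negated.
c = [-3, -2]
A_ub = [[1, 1],   # x1 + x2 <= 4   (e.g. available machine hours)
        [2, 1]]   # 2*x1 + x2 <= 5 (e.g. available raw material)
b_ub = [4, 5]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x)  # optimal plan: approximately [1.0, 3.0]
```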

Generative AI

López Murphy continued his presentation by talking about the generative AI layer, which is able to produce, in its own domain, a representation of what is happening in the world. In other words, it is no longer just about processing information and making decisions, but about generating concrete artefacts that other humans or machines can build on.

This generative layer underpins tools such as ChatGPT and DALL-E, and is used to describe images, tell stories or converse, rather than simply interacting with the world. It is a new way of interacting with Artificial Intelligence and is expected to have a big impact in the future.

Generative AI models

Regarding how this generative layer can create concrete materials in different domains, including text, audio, image and code, López Murphy explained that, within the text domain, generative AI can produce text from instructions, such as summaries and restaurant recommendations. In the audio domain, it can generate the voice of any person in any language, which has been used in political campaigns and in accessibility.

On the image side, generative AI can create realistic 2D and 3D images from descriptions, including in the context of gaming. In addition, it can generate code from instructions, which can have a major impact on how we learn and on our ability to optimise our own performance.

Main concepts of Generative AI

Regarding the main concepts of Generative AI, López Murphy talked first about self-supervised learning, which allows models to learn from data on the internet, including data that is not catalogued or clean, because the data itself supplies the training signal.
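As an illustration of that idea, here is a toy sketch of self-supervised learning on raw text: no labels are provided, because each word acts as the “label” for the words that precede it. A real model would train a neural network on web-scale data; a simple bigram count stands in here.

```python
from collections import Counter, defaultdict

# Raw, unlabelled text stands in for uncurated internet data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Train" by counting which word tends to follow each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Most frequent continuation observed during training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (the most common word after "the")
```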

The second concept is the use of foundation models, which are general-purpose enough to be reused across different projects and use cases. The third is reinforcement learning from human feedback, which allows the model to receive an evaluation of the responses it has given, in order to refine its learning process.
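A heavily simplified sketch of that feedback loop follows, assuming a toy “policy” that picks between canned responses and a simulated human rater: rewarded choices have their probability increased via a REINFORCE-style update. Real systems train a reward model on human preference data and fine-tune a large network, but the loop is the same in spirit.

```python
import math
import random

# Toy "policy": one preference weight per canned response.
responses = ["Answer A", "Answer B", "Answer C"]
weights = [0.0, 0.0, 0.0]

def softmax(ws):
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

def human_reward(index):
    # Simulated human feedback: the rater prefers "Answer B".
    return 1.0 if responses[index] == "Answer B" else 0.0

learning_rate = 0.5
for _ in range(200):
    probs = softmax(weights)
    choice = random.choices(range(len(responses)), weights=probs)[0]
    reward = human_reward(choice)
    # REINFORCE-style update: raise the log-probability of rewarded choices.
    for j in range(len(weights)):
        indicator = 1.0 if j == choice else 0.0
        weights[j] += learning_rate * reward * (indicator - probs[j])

print(softmax(weights))  # probability mass shifts toward the preferred answer
```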

Finally, he discussed the concept of prompt engineering, which refers to the way of interacting with a text model to obtain different results depending on what is asked of it and how it is asked. Prompt engineering also allows for more natural interaction with text models, which makes them more useful in different situations.
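A small sketch of what prompt engineering means in practice, assuming a placeholder generate() function in place of any real text-generation API: the same model, asked two different ways, yields very different outputs.

```python
def generate(prompt: str) -> str:
    # Placeholder for a call to a real text-generation model; here it just
    # echoes the prompt so the script runs without any external service.
    return f"[model output for: {prompt[:50]}...]"

# A bare prompt leaves the format and focus entirely up to the model.
plain = generate("Summarise the meeting notes.")

# An engineered prompt fixes the role, the format and the constraints.
engineered = generate(
    "You are a project manager. Summarise the meeting notes below in three "
    "bullet points of at most 15 words each, and end with one action item."
)

print(plain)
print(engineered)
```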

Overall, these concepts allow Generative AI models to learn autonomously and refine themselves as they receive feedback from people. However, Globant also recognises that these technological advances have the potential to be dangerous if put to illegal or improper uses. It is therefore important to be aware of the risks and use the technology responsibly.

Building Models

López Murphy also referred to the different types of models that can be built, stressing the importance of understanding that generative models are just the tip of the iceberg, and that earlier models made their evolution possible.

Although generative models can be fascinating and have many capabilities, other statistical or machine learning models are more effective in certain business contexts, such as analysing data tables to find relationships. It is important to apply expert judgement when choosing which model to use, rather than relying on what is flashy or popular at the time, as doing so can create more problems than benefits.

Applications of Generative AI

Regarding the applications of Generative AI, López Murphy broke down the different types of AI models and their capabilities. Traditional models take a table of data as input and output a value, which allows forecasts, predictions and classifications to be made. Deep learning models can process images, audio and text, among other inputs, and their output is still a value, but with a higher degree of accuracy. Finally, generative models can transform any type of input into any type of output, which gives them great power and differentiation; however, their use should be carefully considered, as they can generate more pain than effectiveness when the problem does not actually require creating that kind of link between modalities and the sensory world.
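As a point of contrast with generative models, here is a minimal sketch of the “traditional” case described above, using scikit-learn: a table of data goes in, and a single predicted value per row comes out. The dataset and model choice are purely illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A traditional model: tabular input, one predicted value per row as output.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

print(model.predict(X_test[:3]))    # predicted class for three held-out rows
print(model.score(X_test, y_test))  # classification accuracy
```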

Risks of AI

In the final part of his speech, López Murphy explained the risks that the use of generative Artificial Intelligence may entail: biases, bubbles and power dynamics; legal exposure; rug-pulls; stale data, hallucination, non-retrieval and loss of meaning; costs and dependency; and unwarranted moats.

Starting with biases, bubbles and power dynamics, these models tend to privilege certain outcomes based on characteristics that should not be considered, such as gender, age, ethnicity or religion, which can have serious consequences. Furthermore, the use of such models can create information bubbles and narrow our experience of the world, as content and recommendations are tailored to what the algorithm thinks we want to see. This can lead to increasing polarisation and an imbalance of power in favour of the large organisations that deploy these algorithms.

In terms of the risk of legal exposure, algorithms have the potential to negatively affect privacy and the balance of power between organisations and individuals. López Murphy mentioned the example of a comic book whose artwork could not be registered for copyright in the US because it was considered to have been generated by an algorithm, which raises questions about intellectual property and authorship. He also discussed the problem of style replication and the copyright and intellectual property infringements that can arise from it.

López Murphy also discussed the risk of “rug-pulls” in projects where over-dependence on an external entity can be problematic, as in the case of companies that rely on third-party models or code that later cease to exist or must adapt to new owners. This dependency also arises when access to an application can suddenly disappear due to regulatory issues or network saturation.

One of the most striking risks is hallucination, which López Murphy described as the possibility that generative AI models invent things that do not exist, whether in natural-language statements or in citations of scientific publications. This can spread false information and cause confusion in research, so he emphasised the importance of always verifying the sources and references that generative AI produces.

On the other hand, López Murphy also talked about the risks of cost and dependency with AI models such as ChatGPT: the external entity controlling the model may decide to raise prices or change conditions, which could affect the development of projects. There are several cases of startups that were built on an earlier version of ChatGPT but were no longer viable when OpenAI launched a newer version at a much lower price. It is therefore important to understand the dependencies and alternatives when using these models.

Finally, López Murphy explained the risk of unwarranted moats, i.e. the danger of basing a company’s competitive advantage on a technology such as AI without a clear and justified reason for doing so. A company’s differential value cannot depend solely on implementing a technology like AI, as competitors can easily replicate it. Instead, AI should be used where it can genuinely make a difference and bring unique value to customers.

Globant’s concerns

In terms of concerns related to Artificial Intelligence and its impact on society, López Murphy, supported by Globant’s group of experts, pointed to the generation of deepfakes as the first of two major concerns: fake artefacts that can be indistinguishable from reality, and which can lead to misinformation and the manipulation of public opinion.

The second concern relates to the social effects AI could have, such as fostering personal bubbles that erode our ability to interact with others and to handle frustration.

As a solution, Globant’s team of experts pointed to the growing need to educate people on how to deal with the information they receive online, as it is increasingly difficult to know what is true and what is not. López Murphy suggested that instead of slowing down the development of AI, we should reorient its use and give people the tools to verify information and to handle data effectively.

Need to Regulate AI?

In the face of recent European regulations that aim to limit the use of Artificial Intelligence, Globant’s AI team believes that we need to educate rather than regulate. From his personal perspective, López Murphy stated: “What I think we do need to regulate is a principle of responsibility. That is, whoever deploys the algorithm is responsible for the consequences of the algorithm. If I as a company say that it was the algorithm that decided not to give you access to this health service, the company is not off the hook: there is no responsibility on the algorithm; the responsibility lies with whoever uses it and whoever deploys it. And yes, it is necessary to regulate that, because it gives clarity to the rules of the game for companies.”