Veridas: ‘Biometrics is the Only Factor that Guarantees Your Real Identity’

In this interview, Eduardo Azanza, CEO of Veridas, analyses the role of biometrics in digital authentication, its adoption by businesses and its regulatory framework.
Biometric technology has established itself as one of the most effective and secure solutions for identity verification in digital and physical environments. It is based on the recognition of unique and non-transferable characteristics of each person, such as their face, voice or fingerprint. Its adoption has grown significantly in sectors such as banking, public administration, transport and security. This evolution has been driven by the need to combat fraud, improve the user experience and comply with stricter regulatory requirements. Today, biometrics not only allows for accurate identification of a person, but also guarantees fast, convenient and privacy-friendly authentication in a context marked by the rise of deepfakes and the fragmentation of digital identity.
In this context, Eduardo Azanza, CEO and co-founder of Veridas, reviews the company’s evolution from its beginnings in 2012 as a response to document fraud to its consolidation as a global benchmark in facial, voice and proof-of-life biometrics. He highlights the growing role of biometrics as an authentication standard and also emphasises its positive impact on user experience, security and cost savings. From a regulatory point of view, he points out that the new European Artificial Intelligence Regulation establishes a clear framework. And, in line with Europe, Azanza is committed to the self-sovereign identity model, based on personal digital wallets.
Origins and evolution of Veridas
– How was Veridas born and what milestones would you highlight on the path that has led you to become a global benchmark in digital identity?
Veridas was founded on 25 May 2012, 13 years ago. We started out fighting fraud in the world of currency and identity document printing, which is probably one of the industries that has been dealing with fraud for the longest time: from the day after currency appeared, there were already counterfeits. In those early years, the focus was on developing technology for that sector. At the end of 2012, we began to apply artificial intelligence and deep learning models to that world. It was in that context that the first major change arose. After three years, we crossed paths with BBVA, which presented us with a challenge they had as a bank: how to combat fraud in a 100% digital environment, that is, how to know who is on the other side of the screen or the phone.
That need marked Veridas’ entry into the world of digital identity. We believed we had enough ideas and knowledge in AI, identity documents and biometrics to take on that challenge. We spent two years working on a project with them and, at the end of that period, we jointly created Veridas. Since then, the evolution has been continuous. When BBVA presented its selfie onboarding eight years ago, we were already behind it, providing the technology. Today, what we do is very clear: we distinguish a false identity from a true one. Our mission is to ensure that who you say you are is who you really are, and not a deepfake or someone impersonating you.
– How does your biometric technology work?
Veridas works by verifying documents, extracting data, validating security features, and comparing the image on the document with a selfie of the user using facial biometrics and proof of life technologies. We verify that the person is alive, that they are not a mask or a deepfake. If there is a chip in the ID card, we read its data and compare it with the subject’s face. We check everything to make sure they are who they say they are.
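The verification flow described here can be sketched as a simple pipeline. This is a conceptual illustration only, not Veridas’ actual API: the function names and the dictionary-based inputs are invented for the example, and each placeholder check stands in for what would be a trained model in practice.

```python
from dataclasses import dataclass

# Placeholder checks standing in for the real engines (document forensics,
# liveness detection, face matching). All inputs are toy dictionaries.
def check_security_features(doc_photo: dict) -> bool:
    # e.g. MRZ checksums, holograms, fonts on the identity document
    return doc_photo.get("mrz_checksum_ok", False) and doc_photo.get("hologram_ok", False)

def check_liveness(selfie: dict) -> bool:
    # Verifies the subject is a live person, not a mask, replay or deepfake
    return selfie.get("liveness_score", 0.0) >= 0.5

def compare_faces(doc_photo: dict, selfie: dict) -> float:
    # Stand-in for a biometric similarity score in [0, 1]
    return selfie.get("match_score", 0.0)

@dataclass
class VerificationResult:
    document_valid: bool
    liveness_passed: bool
    face_match_score: float
    verified: bool

def verify_identity(doc_photo: dict, selfie: dict,
                    match_threshold: float = 0.8) -> VerificationResult:
    """All three checks must pass for the identity to be verified."""
    document_valid = check_security_features(doc_photo)
    liveness_passed = check_liveness(selfie)
    score = compare_faces(doc_photo, selfie)
    return VerificationResult(
        document_valid, liveness_passed, score,
        verified=document_valid and liveness_passed and score >= match_threshold,
    )
```

The key design point is that the decision is a conjunction: a genuine document with a failed liveness check, or a live subject whose face does not match the document, both yield an unverified result.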
In Spain, this process is not connected to official databases, but in other countries it is. In Mexico, for example, we have a service for the government that allows you to send the selfie photo with the ID number, and the comparison is made directly within the government system using our technology. Based on this ‘genesis of identity’, Veridas offers authentication: once we know that they are who they say they are, it is possible to access a website with your face, a call centre with your voice, or enter a physical space (a stadium, a corporate environment, a sports club) with your face. All the technology is 100% our own. We do not use open source. Everything is done by us: the engines, the neural networks, the experience. And organisations such as NIST in the US have repeatedly ranked us in the top 3 worldwide in facial biometrics, voice biometrics and liveness testing.
– Which sectors are you bringing your biometric technology to?
Veridas works in a wide range of sectors: banking, insurance, public administration, mobility, tourism, prevention of underage gambling, transport… anywhere identity or age needs to be reliably verified. The company currently operates in 25 countries, with offices in eight and a team of around 225 people. The largest office is in Navarre, where we have almost 200 people, but there are also teams in Mexico, the United States, Colombia, Argentina, Brazil, Italy, the United Kingdom and the Netherlands. In 2024, the company had a turnover of around 20 million euros and we expect to grow by more than 30% this year.
The internet was born without an identity layer. We provide a solution to that gap: determining whether an identity is real or fake, in a convenient, secure and private way.
Biometrics as an authentication standard
– And how has the business perception of biometrics as an authentication standard evolved?
It has clearly grown. For example, in Spain, in 2016, the Bank of Spain, through SEPBLAC, authorised banks to open accounts remotely. This regulation enabled the development of these technologies. First it was banking, then insurance and online gaming, which also needed to validate identities. Today, public administrations are adopting it: for example, we provide technology to the National Mint so that you can obtain a digital certificate without having to go there in person.
– Why is biometrics key and how do you guarantee the accuracy of your technology?
Because compared with other authentication factors, such as passwords or tokens, which are only presumed to be yours, biometrics is the only one that guarantees your real identity. That is why we insist: in any authentication scheme, make sure one factor is biometric. In Europe, for example, the PSD2 regulation requires multiple factors for payments over €50. We always recommend that one of them be biometric.
Adoption is growing. For example, at Endesa’s call centre, you can now identify yourself with your voice in three seconds, saying whatever you want, without having to answer security questions. As for how we guarantee accuracy, we use AI engines that we continuously train through reinforcement learning. When the model gets it right, it receives a mathematical reward; when it gets it wrong, a penalty. This is how we improve the models every day.
In addition, we are training systems that detect deepfakes, both voice and face. Because fraud has become democratised. Today, anyone can generate a fake face or voice. The industry, as with counterfeit banknotes, is responding with new security measures. Our technological recipe, unlike Coca-Cola’s, changes every day.
– And how do you justify the ROI to your customers?
We measure three key variables: user experience, security and cost. The experience improves because you can carry out transactions without having to go anywhere. Security translates into fraud prevention: 0.5% of our transactions are fraud attempts. Stopping them can mean avoiding losses of up to 8,000 euros per attempt. And in terms of costs, if you authenticate with your voice in three seconds, you save minutes in call centres, which equates to significant financial savings. That is our contribution: experience, security and efficiency.
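The arithmetic behind these figures can be made explicit. A minimal sketch using the numbers cited in the interview (a 0.5% fraud-attempt rate and losses of up to €8,000 per attempt); the call-centre parameters are illustrative assumptions, not figures from Veridas.

```python
def fraud_savings(transactions: int,
                  fraud_rate: float = 0.005,
                  loss_per_attempt: float = 8000.0) -> float:
    """Upper bound on losses avoided if every fraud attempt is blocked."""
    return transactions * fraud_rate * loss_per_attempt

def call_centre_savings(calls: int,
                        seconds_saved_per_call: float,
                        cost_per_agent_hour: float) -> float:
    """Illustrative: agent time freed by 3-second voice authentication,
    priced at an assumed hourly cost."""
    return calls * seconds_saved_per_call / 3600 * cost_per_agent_hour

# 100,000 transactions -> 500 expected fraud attempts -> up to 4,000,000 EUR avoided
print(fraud_savings(100_000))  # 4000000.0
```

On these assumptions, even a modest transaction volume makes the prevented-fraud figure dominate, which is why the interview frames security as the largest of the three ROI components.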
Regulation and Privacy
– How does your biometric technology fit into the current European regulatory context?
There is very good news in this area. The European regulatory framework is setting a clear line, and in 2024 the Artificial Intelligence Regulation was approved, which is now being implemented across the Member States. In Europe, a regulation is direct law: Spain, for example, cannot adapt it to suit its own preferences. If you want to change something, you have to go to Europe, discuss it for years and modify it for everyone. Therefore, the framework we have is solid, consistent and legally binding.
Furthermore, this regulation is based on a risk analysis approach, which is key. It focuses on how technology is used, not on banning it per se. A knife can be used to peel an apple or to commit a crime. The same is true of biometrics: it can improve security, reduce fraud and make people’s lives easier, but it could also be used for mass surveillance. And that risk is what is regulated.
– What exactly does the European regulation regulate in relation to biometrics?
The regulation establishes that biometrics used remotely, in open public spaces, without consent or voluntariness, is prohibited even for security forces, except in very limited cases: terrorist threats, missing children or serious crimes with judicial authorisation. There is no place for mass surveillance as might be imagined in other models of society.
However, when biometrics are used with explicit consent and in controlled environments, such as in a banking app, they are considered low or no risk. This is because there is no inherent threat in transforming your face into a modern biometric vector: it is renewable, irreversible and non-interoperable. The idea that a leaked biometric template compromises your identity is a myth, a point endorsed even by the National Cryptology Centre.
In addition, the General Data Protection Regulation (GDPR) remains fully in force and applies in parallel with the AI Regulation. Both have the same regulatory status. Added to this are sectoral regulations: banking (EBA), digital signatures, gaming, etc. Taken together, Europe offers a fairly clear and secure legal framework for operating with biometrics.
– And what do you think of the debate on mandatory identification on social media using ID cards?
That’s another discussion altogether, which is no longer technical but philosophical and political. I believe that anonymity on the internet must be defended. Just as I can walk down the street without identifying myself, I should also be able to do so in cyberspace. The problem is not biometrics, mobile phones or IP addresses. The real debate is what model of society we want. A model of absolute control like the Chinese one may work for some, but I don’t agree with it. The day you dissent, you simply disappear. That is why the debate on digital identity must be approached with great caution.
The future of digital identity
– What is the identity model we are moving towards?
The future of identity involves solving a fundamental problem: the internet was born without an identity layer. And what we have had since then are systems based on information silos. We give our personal data, name, ID number, address, to the bank, the insurance company, the leasing company, the hotel… to everyone. And they all store it with varying degrees of security, some even commercialise it. It is estimated that each person has an average of 54 digital profiles. In other words, our identity is completely fragmented.
In 2010, federated identities appeared. The term may not ring a bell, but if we talk about ‘Log in with Google or Facebook’, we know what it means. These giants act as brokers: they facilitate access in exchange for your data. From the user’s point of view, it’s convenient. But from a privacy perspective, it involves a significant transfer of data. And that’s the cost: your data is the commission.
Europe is betting on the self-sovereign identity model, based on an ‘identity wallet’. It’s like a digital wallet where you store your credentials. You have a main credential, such as your ID card or passport, and from there you can generate derivative credentials: university student, club member, etc. The interesting thing is that you control what you share and with whom, and everything is recorded. If, for example, you buy wine online, the system can verify that you are over 18 without revealing your date of birth.
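The over-18 example rests on selective disclosure: the wallet evaluates a predicate locally and shares only the result. Real wallets do this with cryptographic proofs (in ecosystems built on standards such as OpenID for Verifiable Credentials); the sketch below is a hypothetical, non-cryptographic illustration of the interface only, with invented function names.

```python
from datetime import date

def age_on(dob: date, today: date) -> int:
    """Completed years of age on a given day."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def present_age_claim(dob: date, minimum_age: int, today: date) -> dict:
    """The wallet evaluates the predicate on-device and discloses only the
    boolean outcome, never the date of birth itself (proofs omitted here)."""
    return {"claim": f"age>={minimum_age}",
            "satisfied": age_on(dob, today) >= minimum_age}

# Buying wine online: the verifier learns only that the holder is over 18.
print(present_age_claim(date(2000, 6, 15), 18, today=date(2025, 1, 1)))
# {'claim': 'age>=18', 'satisfied': True}
```

The privacy property is in the return value: the relying party receives a yes/no answer tied to a verifiable credential, not the underlying attribute.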
– Is this model already a reality?
Yes. We have already developed it technically. We have a credential container, issuance and verification systems, and a user-friendly interface. We are currently working on projects with INCIBE, the Government of Navarre and ten Latin American countries. The model is technically self-sovereign, but its actual implementation will depend on each government. The technology is ready. The question is: how far do states want to go? It is probably one of the most important projects in Europe in the coming years, comparable to the euro in terms of strategic impact.
– How does it differ from current wallets such as those from Apple or Google?
Commercial wallets, such as those from Apple or Google, do not know your real identity, cannot verify specific attributes and are not interoperable. The European model does. It is designed so that you are the sole custodian of your data, and credentials are verifiable according to a common standard. The system is based on protocols such as OpenID for Verifiable Credentials, which guarantee privacy and security.
By 2027, all Europeans should have at least one digital identity wallet. The FNMT will issue it on behalf of the National Police, and it will contain the DNI as a verifiable credential. From there, other credentials can be generated, just as we now add airline tickets or boarding passes to our mobile phones.
Impact of quantum computing on biometrics
– How will the emergence of quantum computing affect the development of biometric technology?
Quantum computing is generating a lot of interest and also some noise, but we need to put things into context. On the one hand, there is a bit of a bubble: people talk about quantum computing as if it were going to revolutionise everything, and that is not the case. Quantum computing is not going to solve all problems; it will solve certain very complex problems very well, but not all of them. Today, 95% of tasks will continue to be solved by conventional computing. Perhaps by 2040 we will see 5% of very critical problems solved with quantum computing: climate simulations, DNA or protein studies, geological calculations to predict earthquakes… things like that. Quantum computing has its niche, and it will not be particularly large.
In security, the discourse is similar. Quantum-resistant encryption algorithms already exist. They are more computationally intensive and are not yet widely used, but they will eventually become standard. Therefore, in the field of cryptography, there will be an evolution towards these algorithms, but it will be gradual.
– What about biometrics?
Well, honestly, it affects us very little. Because in our biometric system we don’t store your image or your voice as such, but rather a mathematical representation, what we call a ‘string of numbers’, which is renewable, unique and does not allow the original image to be reconstructed. So even if a quantum computer accessed that string, it would have no way of reversing it. And even if it somehow could, the most it would get is your photo, which is probably already on LinkedIn or some other social network. In other words, there is no real impact on biometric security as we conceive it.
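The ‘string of numbers’ is a fixed-length vector (an embedding), and verification is a similarity comparison between two such vectors. A toy sketch of that comparison, with made-up four-dimensional templates (real ones have hundreds of dimensions) and an illustrative threshold:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Similarity between two template vectors; the raw image is never needed."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(template_a: list, template_b: list, threshold: float = 0.8) -> bool:
    # A leaked template only supports this comparison; the mapping from
    # image to vector is many-to-one, so the face cannot be reconstructed.
    return cosine_similarity(template_a, template_b) >= threshold

enrolled = [0.12, 0.88, -0.45, 0.30]   # toy template stored at enrolment
probe    = [0.10, 0.90, -0.40, 0.28]   # toy template from a new selfie
print(same_person(enrolled, probe))    # True for these nearby vectors
```

This is why the quantum threat model barely applies here: there is no encrypted secret to break, only a one-way representation whose value lies in comparison, not in recovery.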
– ISACA spoke of ‘a world without secrets’ with regard to quantum decryption.
Regarding these predictions that in a quantum world all encryption keys will be broken and we will live in a world without secrets, a clarification must be made. In theory, yes, with sufficient quantum capacity, current 256-bit algorithms that would take millions of years to decrypt today could be broken. But that is theoretical. It’s like saying: if I could travel on an electron, I would go at 300,000 km/s. Well, yes, but get on it and see if you can. Sometimes possibilities are extrapolated without taking practical feasibility into account.
Even so, it is important to invest and stay informed, because quantum computing will undoubtedly have spectacular applications. But it is not going to immediately and broadly transform the security ecosystem, let alone biometrics. In our case, the impact is minimal. As I often say, if fear were traded on the stock market, I would invest. Because it is very easy to sell fear with this quantum computing thing.