The global cybercrime industry reached an estimated US$8 trillion in 2023, alarming financial institutions that see technology both as a threat and as an ally, Mastercard says.
New technologies such as physical biometrics, quantum computing and generative artificial intelligence (GenAI) are at the forefront of the financial industry’s latest efforts to stop the rapid expansion of cybercrime.
At roughly US$8 trillion globally last year, if cybercrime were a country it would have the world’s third-largest economy, behind only the United States and China, Mastercard officials said this week. Beyond hindering financial inclusion, cybercrime is expected to cost banks, fintechs and payment processors an estimated US$100 billion by 2030.
George Maddaloni, Mastercard’s chief technology officer, emphasizes that the challenge is to explore tools that strengthen the security of money movement without sacrificing its agility. “With quantum computing, payments should become more secure, because quantum computers will create more opportunities for fraud, as well as for encryption. So we need to increase our encryption capabilities to ensure quantum-resistant payments,” said the CTO during a press tour of the brand’s technology center in New York, to which iupana was invited.
Although the company acknowledges that quantum computing, used to solve more complex problems faster, is still in its infancy, it says it is already testing payments with it. Maddaloni added that blockchain, API development and the cloud are also major tools in its labs, proof that technology is playing an ever-larger role in the financial industry.
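Mastercard has not published the scheme it is testing, but the idea behind quantum-resistant payments can be illustrated with hash-based signatures, a family of algorithms believed to resist quantum attacks because they rely only on one-way hash functions rather than the factoring and discrete-log problems a quantum computer could break. The sketch below is a minimal Lamport one-time signature in Python; it is an illustration of the concept, not production cryptography, and any resemblance to Mastercard's actual approach is an assumption.

```python
import hashlib
import os

def keygen():
    # Private key: 256 pairs of random 32-byte secrets, one pair per bit
    # of the SHA-256 message hash.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    # Public key: the SHA-256 hash of each secret.
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message hash, most significant bit first.
    digest = hashlib.sha256(msg).digest()
    return [(byte >> (7 - i)) & 1 for byte in digest for i in range(8)]

def sign(sk, msg):
    # Reveal one secret per hash bit. A key pair must sign only ONE message:
    # signing twice leaks enough secrets for forgery.
    return [pair[b] for pair, b in zip(sk, bits(msg))]

def verify(pk, msg, sig):
    # Each revealed secret must hash to the published commitment for that bit.
    return all(hashlib.sha256(s).digest() == pair[b]
               for pair, b, s in zip(pk, bits(msg), sig))

sk, pk = keygen()
payment = b'{"from":"A","to":"B","amount":100}'  # hypothetical payment message
sig = sign(sk, payment)
print(verify(pk, payment, sig), verify(pk, b"tampered", sig))  # → True False
```

Real deployments use stateless many-time descendants of this idea, such as SPHINCS+, which NIST standardized in 2024 as SLH-DSA.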
Biometrics of how you type on your phone
Chris Reid, executive vice president of Identity Solutions, explains that with the help of artificial intelligence the company has identified physical signals that suggest coercion or fear in users who are falling victim to digital fraud. “Simply put, it is the way we behave with our devices: the angle at which we hold the device, the way we place our thumbs (…) The challenge was to know when someone who is about to become a victim of fraud shows changes in their behavior while sending money, while interacting with their phone or tablet,” he explains.
“And yes, there are several identifiable anomalies in how people interact with their devices when they are scared. That allows us to flag it to the banks,” he said.
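Reid does not detail Mastercard’s models, but the gist of behavioral-biometric anomaly detection can be sketched as a per-user baseline plus a deviation score. The Python sketch below is a hypothetical illustration: the feature names (`hold_angle_deg`, `inter_key_ms`), the sample readings and the threshold are all invented, and real systems use far richer features and models.

```python
import statistics

def fit_baseline(sessions):
    """Per-feature mean and stdev from a user's normal sessions.
    Each session is a dict of behavioral features."""
    features = sessions[0].keys()
    return {f: (statistics.mean([s[f] for s in sessions]),
                statistics.stdev([s[f] for s in sessions]))
            for f in features}

def anomaly_score(baseline, session):
    """Largest absolute z-score across features: how far this session
    deviates from the user's own typical behavior."""
    return max(abs(session[f] - mu) / sigma if sigma else 0.0
               for f, (mu, sigma) in baseline.items())

# Hypothetical readings: the user normally holds the phone at ~40 degrees
# and types at a steady pace.
normal = [{"hold_angle_deg": 40 + d, "inter_key_ms": 180 + 5 * d}
          for d in (-2, -1, 0, 1, 2)]
baseline = fit_baseline(normal)

calm = {"hold_angle_deg": 41.0, "inter_key_ms": 183.0}
scared = {"hold_angle_deg": 70.0, "inter_key_ms": 90.0}  # sharp deviation

# Sessions scoring above a threshold (here 3 standard deviations)
# would be flagged to the bank.
print(anomaly_score(baseline, calm) < 3 < anomaly_score(baseline, scared))  # → True
```

The design choice mirrors the article’s point: the model compares a user against their own history rather than against a global norm, so the same grip angle can be normal for one person and anomalous for another.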
Looking ahead, Andrew Reiskind, the company’s chief data officer, emphasized the importance of having data “ready” to take advantage of generative AI, whether to build financial-services copilots such as education and investment chatbots, or platforms on the security front.
“When we moved beyond traditional data analytics, a lot of AI models came to be described as black boxes: you don’t know how they arrived at their results, and that has become much worse with generative AI, especially with hallucinations. So one of the things we have to do is make sure we have the data ready (…) that it is fresh, high-quality, reliable and understandable,” he listed.
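Reiskind’s “data readiness” criteria can be imagined as automated gates that records must pass before feeding an AI model. The Python sketch below is a hypothetical illustration of such a gate: the field names, the mini data catalog and the 24-hour freshness window are assumptions, not Mastercard’s actual pipeline.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data catalog: a field is "understandable" only if documented.
SCHEMA_DOCS = {"merchant_id": "Unique merchant identifier",
               "amount": "Transaction amount in USD",
               "updated_at": "Last refresh timestamp (UTC)"}

def is_ready(record, max_age=timedelta(hours=24)):
    """Return (ready, issues) for one record, mirroring the quoted
    criteria: fresh, complete (a proxy for quality), documented."""
    issues = []
    # Fresh: refreshed within the allowed window.
    age = datetime.now(timezone.utc) - record["updated_at"]
    if age > max_age:
        issues.append("stale data")
    # High-quality: no missing values.
    missing = [k for k, v in record.items() if v is None]
    if missing:
        issues.append(f"missing values: {missing}")
    # Understandable: every field appears in the data catalog.
    undocumented = [k for k in record if k not in SCHEMA_DOCS]
    if undocumented:
        issues.append(f"undocumented fields: {undocumented}")
    return (len(issues) == 0, issues)

good = {"merchant_id": "M-1", "amount": 25.0,
        "updated_at": datetime.now(timezone.utc)}
bad = {"merchant_id": None, "amount": 25.0,
       "updated_at": datetime.now(timezone.utc) - timedelta(days=3)}
print(is_ready(good)[0], is_ready(bad)[0])  # → True False
```

Gating records this way does not open the black box itself, but it addresses the input side of the problem the CDO describes: a model fed stale or undocumented data is harder to trust or audit.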
Major banks in the region recently agreed that GenAI hallucinations, i.e. when a model’s output does not correspond to any verifiable information, are an obstacle to implementing the tool. In addition, institutions must build governance principles that ensure algorithms are not built on gender, religious or racial biases, and that data is protected as a fundamental right of customers, Reiskind stressed.