Online fraud is a significant and costly issue, causing billions of dollars in losses annually for ASEAN economies. Traditional banks and fintech companies have faced substantial threats from rapidly evolving fraud techniques. A study in 2024 reported that the average cost of a data breach in ASEAN hit a record high of USD 3.33 million. There is a clear recognition from regulators and businesses alike of the need for effective fraud management strategies.
Various authorities have already taken steps to improve fraud management. For instance, the Monetary Authority of Singapore announced the Shared Responsibility Framework in October 2024, implementing relevant duties for financial institutions and telecommunications companies to mitigate phishing scams and compensate affected victims. In parallel, Malaysia’s efforts have focused on establishing a fraud intelligence network across ASEAN banks to enable real-time sharing of data on fraudulent activities, enhance threat detection, and foster a unified response. Collaborative efforts, such as the partnership between Globe Telecom and the Bankers Association of the Philippines, have also contributed to reducing financial scams.
The Need for Privacy-First Collaborative AI
Fraud management requires intelligence that is traceable, timely, and fresh. However, using AI to build continuous fraud management systems is constrained by three factors: the quality of available data, privacy concerns, and the high cost of training large language models (LLMs). To address these constraints, Human Managed proposes federated learning (FL) combined with privacy-preservation techniques, an approach that gives institutions access to better fraud intelligence without centralizing sensitive data.
One of the greatest barriers to adopting AI and LLMs for fraud management is the quality of available data. In a survey of 600 data leaders, 42% cited "quality of data" as the top obstacle to adopting generative AI and LLMs, followed by data privacy and protection concerns at 40%. Experts predict that, if current trends continue, the datasets available for training AI models may be exhausted between 2026 and 2032. Ensuring a supply of high-quality data therefore remains a formidable challenge for leveraging AI against fraud.
Protecting data privacy is equally essential for enterprises. According to Cisco's 2024 Data Privacy Benchmark study, 94% of organizations reported that they would lose customer trust if they failed to adequately protect data. Privacy investment is also seen as worthwhile: 95% of respondents agreed that its benefits exceed its costs, and organizations realized an average 1.6x return on their privacy spending.

The high cost of training LLMs is another major obstacle. Current models cost around USD 100 million to train; the next generation may cost USD 1 billion, and subsequent iterations could reach USD 10 billion. Specialized technologies and mixtures of models are being developed to reduce processing time and cost, but these expenses remain a significant barrier.
Blockchain and Cryptocurrencies: A Limited Solution
Blockchain and cryptocurrencies have also attracted regional interest. Initiatives like Project Inthanon explored blockchain for cross-border payments in 2018, while the Payment Services Act in Singapore provided regulatory guidance for digital token services in 2019. Although these technologies are of interest, they do not address the primary requirement of better quality data and AI models.
Federated learning (FL) with privacy-preservation techniques offers a viable solution. FL allows models to be trained across decentralized devices or institutions while the underlying data stays localized and secure: Privacy-First, Collaborative AI. This approach paves the way for continuous and consistent fraud intelligence built from the collective analysis of vast data, yielding timely, insightful, and relevant information.
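The core mechanism can be sketched in a few lines. The following is a minimal, illustrative simulation of federated averaging (FedAvg), not any specific production system: each participating institution trains a simple fraud-scoring model on its own private data, and a coordinating server aggregates only the model weights. All names and parameters are hypothetical.

```python
# Minimal FedAvg sketch: three "institutions" train a logistic fraud-scoring
# model locally; only weights are shared, never the underlying records.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps on the logistic loss
    over its private data, which never leaves the institution."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid fraud scores
        grad = X.T @ (preds - y) / len(y)       # logistic loss gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average the weights, weighted by each
    client's dataset size. Raw data is never transmitted."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulated round-trip: synthetic private datasets stand in for bank data.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
for _ in range(10):  # ten communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

In a real deployment the aggregation step would typically be hardened with secure aggregation or differential privacy so that individual updates cannot be inspected, but the division of labor, local training versus central averaging, is the same.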
Use Cases and Perspectives on Federated Learning
There are several compelling use cases and perspectives on FL’s application in fraud management. One notable example is the application of FL in projects at PayNet. PayNet’s head, Aloysius Chong Kin Faa, emphasized the potential of FL as a secure and collaborative approach to fraud detection within their ecosystem. The National Fraud Portal (NFP) in Malaysia, developed in collaboration with Bank Negara Malaysia (BNM), aims to enhance collaboration and data sharing across financial institutions to combat online financial fraud and scams. This initiative exemplifies the use of FL to strengthen fraud management across multiple institutions.
Another example of FL’s application is in centralized intelligence for fraud management. NFP serves as a centralized platform that consolidates incidents received by the National Scam Response Centre (NSRC) and financial institution complaint channels, tracing and intercepting victim funds. FL applications enhance collaborative model development and fraud detection, especially when customer data resides in financial institutions’ silos. This centralized approach enables comprehensive analysis and more efficient detection of fraudulent activities.
FL also demonstrates potential in electronic Know Your Customer (eKYC) solutions. Deploying eKYC facial recognition and proofing solutions across devices ensures that sensitive biometric data like facial images remain secure on users’ devices, thus addressing privacy concerns while enhancing identity verification processes. Additionally, FL supports credit risk scoring by enabling banks to share insights on risky or fraudulent loan applications. This collaborative effort creates a standardized dataset for credit assessment and may facilitate the development of a federated learning global model as an alternate reference for internal credit risk scoring models.
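For use cases like federated credit risk scoring, a common safeguard is to privatize each institution's model update before it is shared, so that no single customer's record can be inferred from the contributed weights. The sketch below is illustrative only (it is not PayNet's or NFP's implementation) and shows one standard technique: clipping each update's norm and adding Gaussian noise, as in differentially private FL. All names and parameter values are assumptions.

```python
# Illustrative privacy-preservation step for federated updates:
# clip each institution's update, then add calibrated Gaussian noise.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Bound the update's L2 norm, then add Gaussian noise so individual
    records cannot be reverse-engineered from the shared weights."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def secure_mean(updates):
    """The server averages only noisy, clipped updates, never raw data."""
    return np.mean(updates, axis=0)

# Five hypothetical banks contribute updates to a shared scoring model.
rng = np.random.default_rng(42)
raw_updates = [rng.normal(size=8) for _ in range(5)]
noisy = [privatize_update(u, rng=rng) for u in raw_updates]
global_update = secure_mean(noisy)
```

The clipping threshold and noise scale trade model accuracy against privacy strength; in practice they would be tuned to a target privacy budget rather than the arbitrary values shown here.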
Overcoming Challenges for Broader Adoption
One major obstacle to FL adoption in fraud management is socio-economic: motivating participation, ensuring sound centralized management, and navigating regulation. This raises two key questions: why should organizations cooperate, and who controls the centralized intelligence system? Companies need strong economic incentives to join an FL system, given the costs of aligning their data with standardized schemas.
For FL to thrive, central bodies like Central Banks or national health authorities should spearhead and regulate the platform, guaranteeing better results for the public and requiring organizations to participate. Successfully scaling FL in ASEAN demands modular technologies that tackle communication, computation, data, and model heterogeneity issues. Modular data platforms provide notable advantages such as interoperability, agility, relevance, and privacy, enabling efficient data processing and real-time information exchange.
At Human Managed, the hm.works platform exemplifies such a collective intelligence system, delivering AI-native solutions through federated learning and privacy-preserving techniques. The platform processes relevant data within contextual frameworks, structures it for efficient exchange, and preserves privacy through collaborative training in which raw data is never shared.
In conclusion, collective intelligence frameworks are vital for improving fraud detection and mitigation, and for unlocking the potential of the ASEAN digital economy. With digital financial services in Southeast Asia expected to generate substantial revenue, and the region's digital payments market projected to exceed USD 1 trillion by 2025, adopting advanced AI and collaborative frameworks is essential. Federated learning with privacy protection improves fraud management outcomes, but realizing it requires a shift towards collective intelligence and sustained industry cooperation.