
No smurfs: Revealing fraud chains in mobile money transfers

 
Authors: Zhdanova, M.; Repp, J.; Rieke, R.; Gaber, C.; Hemery, B.

Published in:

Institute of Electrical and Electronics Engineers -IEEE-; IEEE Computer Society:
Ninth International Conference on Availability, Reliability and Security, ARES 2014 : Fribourg, Switzerland, 8 - 12 September 2014; Including workshops
Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2014
ISBN: 978-1-4799-4223-7
ISBN: 978-1-4799-7876-2
pp. 11-20
International Conference on Availability, Reliability, and Security (ARES) <9, 2014, Fribourg>
English
Conference paper
Fraunhofer SIT

Abstract
Mobile Money Transfer (MMT) services provided by mobile network operators enable funds transfers on the mobile devices of end-users, using a digital equivalent of cash (electronic money) without any bank accounts involved. MMT simplifies banking relationships and facilitates financial inclusion and is therefore rapidly expanding all around the world, especially in developing countries. MMT systems are subject to the same controls as those required for financial institutions, including the detection of Money Laundering (ML), a source of concern for MMT service providers. In this paper we focus on an often-practiced ML technique known as micro-structuring of funds, or smurfing, and introduce a new method for the detection of fraud chains in MMT systems. Whereas classical detection methods are based on machine learning and data mining, this work builds on Predictive Security Analysis at Runtime (PSA@R), a model-based approach for event-driven process analysis. We provide an extension to PSA@R which allows us to identify fraudsters in an MMT service by monitoring the network behavior of its end-users. We evaluate our method on simulated transaction logs containing approximately 460,000 transactions for 10,000 end-users and compare it with classical fraud detection approaches. With 99.81% precision and 90.18% recall, we achieve better recognition performance than the state of the art.
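The paper's PSA@R-based approach is model-driven and is not reproduced here. Purely as a hedged illustration of the smurfing pattern the abstract describes (many sub-threshold transfers funneled towards a common beneficiary), the following Python sketch flags such fan-in over a transaction log. All names and parameters (REPORT_THRESHOLD, WINDOW, MIN_MULES) are hypothetical and are not taken from the paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical transaction record: (timestamp, sender, receiver, amount).
Transaction = tuple[datetime, str, str, float]

# Assumed (illustrative) parameters, not taken from the paper: transfers below
# REPORT_THRESHOLD that converge on one beneficiary within WINDOW are suspicious.
REPORT_THRESHOLD = 1000.0
WINDOW = timedelta(days=2)
MIN_MULES = 3


def flag_possible_smurfing(log: list[Transaction]) -> set[str]:
    """Return beneficiaries that receive sub-threshold transfers from at least
    MIN_MULES distinct senders inside one time window (a naive fan-in check)."""
    by_receiver: dict[str, list[Transaction]] = defaultdict(list)
    for tx in log:
        if tx[3] < REPORT_THRESHOLD:
            by_receiver[tx[2]].append(tx)

    flagged: set[str] = set()
    for receiver, txs in by_receiver.items():
        txs.sort(key=lambda t: t[0])
        # Slide a window over the incoming transfers and count distinct senders.
        for i, first in enumerate(txs):
            senders = {t[1] for t in txs[i:] if t[0] - first[0] <= WINDOW}
            if len(senders) >= MIN_MULES:
                flagged.add(receiver)
                break
    return flagged


if __name__ == "__main__":
    # Three "smurfs" each send a small amount to the same collector within a day.
    log = [
        (datetime(2014, 9, 8, 10, 0), "mule1", "collector", 900.0),
        (datetime(2014, 9, 8, 11, 0), "mule2", "collector", 850.0),
        (datetime(2014, 9, 9, 9, 0), "mule3", "collector", 990.0),
    ]
    print(flag_possible_smurfing(log))  # -> {'collector'}
```

A detector along the lines of PSA@R would instead track process models of end-user behavior at runtime; this sketch only shows the kind of structuring pattern such systems are meant to surface.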

URL: http://publica.fraunhofer.de/dokumente/N-351174.html