Good morning Community!
I recently became interested in philosophy because it tackles the nature of reality and how we make sense of it. Our present reality now gives us another challenge - AI deepfakes! Here are some updates on this.
Denmark has become one of the first countries to introduce legislation specifically targeting the use of deepfakes and AI-generated content, aiming to strengthen copyright protections in the digital age. The new law, passed in June 2025, requires creators of AI-generated media to clearly label synthetic content and ensures that artists and rights holders are compensated when their work is used to train AI models.
This law means that your body, face, and voice legally belong to you! Oh my...
This move reflects growing global concern over the misuse of generative AI in media and the need for clearer legal frameworks to protect intellectual property and public trust. Read the full Guardian article here.
There is also an SSRN paper, "AI generated deepfakes and financial system", which explores the emerging risks that AI-generated deepfakes pose to the global financial system, particularly in the context of market manipulation, fraud, and investor deception. It argues that deepfakes, especially those impersonating CEOs, analysts, or financial influencers, can distort market signals and erode trust in digital communications. The authors call for urgent regulatory attention, proposing a framework that includes mandatory disclosure of synthetic media, enhanced digital identity verification, and cross-border cooperation to mitigate systemic risks. The paper also highlights the need for financial institutions to invest in detection technologies and update internal controls to stay ahead of this fast-evolving threat.
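To make the idea of "enhanced digital identity verification" and updated internal controls a little more concrete, here is a minimal, purely illustrative Python sketch of an out-of-band callback rule for payment instructions received over remote channels. Every name, field, and threshold below is my own assumption for illustration, not something taken from the paper.

# Minimal sketch (hypothetical): an out-of-band verification gate for
# payment instructions received over video or voice channels.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str            # person making the request
    channel: str              # e.g. "video_call", "voice_call", "email"
    amount: float             # requested amount in the firm's base currency
    callback_confirmed: bool  # confirmed via a known-good channel (registered phone number)?

# Hypothetical policy: any request above this amount that arrives over a
# remote channel must be confirmed by calling back a pre-registered number.
CALLBACK_THRESHOLD = 10_000.0
REMOTE_CHANNELS = {"video_call", "voice_call", "email"}

def requires_callback(req: PaymentRequest) -> bool:
    """High-value requests arriving over remote channels always need a callback."""
    return req.channel in REMOTE_CHANNELS and req.amount >= CALLBACK_THRESHOLD

def approve(req: PaymentRequest) -> bool:
    """Release the payment only if the callback rule is satisfied."""
    if requires_callback(req) and not req.callback_confirmed:
        return False
    return True

# Example: a USD 25 million "CFO" request made over a video call would be
# held until the finance worker confirmed it through a known phone number.
req = PaymentRequest("CFO", "video_call", 25_000_000.0, callback_confirmed=False)
print(approve(req))  # False -> payment held pending out-of-band confirmation

The point of the sketch is only that a simple, enforced procedural control can break the deepfake attack chain regardless of how convincing the video is, which complements rather than replaces detection technology.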
How concerned are you about this?
------------------------------
Aya Pariy
------------------------------
Original Message:
Sent: 01-05-2025 15:19
From: Aya Pariy
Subject: Deepfake scams are getting smarter - are we ready?
Hello everyone,
I am usually an optimistic, cheerful lass who looks to the future with a lot of hope. But after watching the Netflix series "Years and Years", my optimism somewhat subsided.
The latest news carries more and more stories about AI investment fraud, so I am bringing this up here to raise awareness. Deepfake-driven scams are a growing concern in finance and tech, and the videos are remarkably convincing. Roughly 15 seconds of recorded audio is enough for AI to clone a voice and build an extremely convincing video around it. The victims of this type of fraud include sophisticated investors and specialists. Some examples:
CNN
A finance worker at a multinational firm was tricked into paying USD 25 million to fraudsters who used deepfake technology to pose as the company's CFO.
GBP 1.24 million was lost to fraudsters using deepfake technology on the Isle of Man.
Bolster article on how these investment scams are done
This raises important questions about risk, verification, and trust for investors. Let's discuss what it means for investment professionals:
What technologies or protocols could help prevent fraud using AI-generated impersonations?
Should investment professionals be trained to detect deepfakes - and how?
What role should regulation play in combatting deepfake-related financial crime?
How might the risk of deepfake scams affect trust in digital client relationships?
Are there any tools or vendors you've seen that are doing a good job of detecting or preventing this type of fraud?
Keen to hear from you.
------------------------------
Aya Pariy
------------------------------