NEWS

Your face will have copyright: the pioneering law that will protect our biometric data


The future Danish Intellectual Property Law will recognize the voice, face, and body of its citizens as cultural creations to protect them from AI-generated montages

People are reflected in a hotel window at the Davos Promenade in Davos, Switzerland. AP

In 2014, the Danish Consumer Council came up with a clever campaign to raise awareness about the risks of giving away personal data online. The organization, which safeguards consumer rights in the Nordic country (Forbrugerrådet Tænk), rented a bakery in Copenhagen, installed hidden cameras inside, and hired a saleswoman who did not charge customers money for the goods. Instead, she proposed a barter.

A loaf of bread in exchange for a peek at the last five messages received on the phone.

Half a dozen muffins for a family member's phone number.

A cinnamon roll if the buyer agreed to reveal where and with whom they had spent the previous afternoon.

The experiment aimed to highlight how, in the digital world, we handle valuable information far more carelessly than in the real world: our contact networks, our geolocated routines, the special events that end up in the phone's camera roll.

As seen in a video from the time, with the unmistakable air of Dogme 95 cinema, the reaction of most customers was the same: "Nej, tak" (No, thank you). They preferred to go home empty-handed rather than give that inquisitive saleswoman even a crumb of privacy. That attitude contrasts with the one they might have shown had tech giants like Meta (Instagram, WhatsApp, Facebook) or Alphabet (Google, Gmail, YouTube) asked for the same data in exchange for access to their services.

Thanks to that campaign, Forbrugerrådet Tænk collected over 10,000 signatures from citizens demanding greater protection against Silicon Valley's big platforms. And the prop bakery became an iconic reference, at least for those working in defense of security and privacy on the internet.

Denmark is once again making headlines for rebelling against Big Tech. This time the push comes from the Ministry of Culture, in the context of the hyper-accelerated development of generative artificial intelligence (GenAI), which makes it easier than ever to create fake videos, images, or audio that appear credible and often serve degrading purposes. However, 2026 is not 2014. Creating a digital montage and circulating it now requires as little effort as, to return to the analogy, going out to buy bread. And its impact is no longer limited to the confines of social media.

As Margaret Mitchell, former co-head of Google's Ethical AI team, summarized in this same newspaper a few days ago: "There is a singular and growing risk: the increasing difficulty in distinguishing between reality and fiction. Generative AI has simplified the creation of content that appears authentic but is not. The lack of disclosure standards or watermarks makes it impossible to know if something was generated by AI, and in fact, much of the public does not even want to know," she emphasized.

Coinciding with its EU presidency, Denmark decided last summer to reform the Intellectual Property Law (Ophavsretsloven) to curb deepfakes. Concerned not only about the widespread dissemination of photos and clips with sexually explicit content, defamatory speeches against any minority, fake news, and manipulated political statements, but also by the increase in cyberattacks using social engineering and cutting-edge technology to steal personal or financial information, Minister Jakob Engel-Schmidt set out to pioneer a regulation in Europe: one that places matters related to biometric identity protection within the framework of copyright. Under this, the voice, face, and physical appearance of each Dane will have the legal-economic consideration (copyright) of a book or a movie.

After incorporating various amendments, the new law was set to come into force on March 31. The early parliamentary elections announced a couple of weeks ago forced the temporary suspension of its application. Nevertheless, given that the draft had received unanimous multi-party support, it would be logical for the incoming government to pick it up again, even though it will have to be reintroduced in Parliament.

Copenhagen aims to make this protection a European standard. "We are sending a clear message: everyone has the right to their own body," declared the Culture Minister when presenting the draft. "Humans can be put into a photocopier and misused for all kinds of purposes, and I am not willing to accept that."

The Danish approach is based on several key issues. For example, the introduction of specific sections for the general public (article 73-a) that restrict unauthorized use of another person's image or voice in AI-generated montages. Or the separate provision for artists or performers (article 65-a) that extends the protection of their works against unauthorized copies or manipulations for 50 years after their death.

"We risk turning our democracy into a 'deepfake' battle"

The Statute of Anne, which came into effect in England in 1710, is considered the first modern copyright law. It was a pioneer in recognizing the author's exclusive right (not the printer's) over their works, providing legal coverage and introducing the concept of the public domain (free use after a specified time). Since then, copyright has been the shield protecting the highest expressions of human creativity from plagiarism and plunder: novels, paintings, scores, designs, blueprints...

The aforementioned Article 73-a of the Danish law makes an exception for reproductions that are "an expression of caricature, satire, parody, pastiche, power criticism, social criticism, etc.," unless the imitation constitutes disinformation that could seriously harm the rights or interests of third parties. By shifting the focus from privacy to ownership, Denmark opens a parallel path within that tradition and adopts a singularly empowering and coercive approach. According to its proponents, the regulation will make it easier to take legal action against these harmful deepfakes. Citizens will have the right to demand that the hosting tech platform remove infringing material. If the platforms fail to do so promptly after being notified, they could face severe fines from the Danish regulatory body and claims for substantial economic compensation from affected users.

"We are prepared to take additional measures," Engel-Schmidt made clear last summer. If the tech megacorporations turn a blind eye, the matter could even reach the European Commission. "That's why I think they will take it seriously," the minister predicted.

Danish technologists had noticed that requests to remove content on platforms like Instagram and TikTok are rarely executed. The restorative justice demanded by the common citizen often ends up in limbo. "Some providers actively encourage the generation of deepfakes; others passively allow it," admits Professor Anders Søgaard, an expert in Natural Language Processing and Machine Learning at the University of Copenhagen.

"We believe that this protection will provide us with better conditions to enforce user rights," agrees Thomas Heldrup, Head of Content Protection and Compliance at the Danish Rights Alliance.

The reform of the Ophavsretsloven could be seen as a safe haven against the shortcomings of the Digital Services Act (DSA), approved by the EU in 2022 and criticized for being too vague. It could also be seen as an almost desperate firewall against technology used for nefarious purposes.

"There are things I thought were clear since Roman Law. But if we are granting legal personality to the Mar Menor and rights to animals..."

Serious as it is, vishing (voice phishing using cloned voices) seems rudimentary compared with the feats generative AI can now perform. A year ago, the consulting firm Arup was defrauded of $25 million after an employee in its accounting department believed they were on a video call with a senior executive rather than a digital copy. The carmaker Ferrari nearly suffered a similar scam, thwarted at the last minute when an employee asked the fake executive a question only the real one could answer.

What's coming is even more perilous. In mid-2023, the startup Worldcoin, co-founded by Sam Altman, the creator of ChatGPT, managed to get thousands of young people worldwide to have their irises scanned in exchange for tokens valued at around €100. The Spanish Data Protection Agency (AEPD) was later forced to issue a warning against Worldcoin.

"What the Danes are proposing is extraordinarily interesting. We need to move towards greater regulation of digital identity and greater protection of our image," acknowledges Ricard Martínez, Professor of Constitutional Law at the University of Valencia and Director of the Chair of Privacy and Digital Transformation. "We run a systemic risk by allowing large corporations to control aspects of our personalities that have required the coercive and regulatory power of the State and its institutions to guarantee everyone's freedoms. We risk turning our democracy into a battle of deepfakes and Goebbelsian manipulations. Personally, I align myself with the states that are beginning to set limits. The implementation of this law should be immediate and urgent."

Martínez explains that we have arrived at the current situation after a series of unfortunate events. First, social media operators convinced us to readily hand over our data for user profiling and market research. Later, some of that information fueled a multi-million dollar business for those operators (personalized advertising), against which regulators failed to act. Finally, the failure to build traceable identifiers into images, combined with generative AI's rapid development, has unleashed a flood of mass impersonation. "It's 20 years of a poorly resolved problem," he summarizes.

Borja Adsuara, a lawyer specializing in Digital Law, believes that the Danish law is "misguided" and argues that the rights to honor, privacy, and one's own image are sufficiently protected in Spain by Article 18 of the Constitution, Organic Law 1/1982, and the Penal Code. "Your image is not your own creation, no matter how much of an artist you are or how much you use makeup to create a persona," he clarifies. "There are things I thought were clear since Roman Law. But if we're giving legal personhood to the Mar Menor lagoon and rights to animals..."

The Danish strategy also carries risks and could have unforeseen consequences. The rights to honor and to personal and family privacy are inalienable, but image rights can be exploited commercially, as footballers and models do. To avoid a situation like Scarlett Johansson's (a victim of both deepfake porn and the brazenness of OpenAI, which gave ChatGPT a voice very similar to the one she lent the operating system in the film Her), Matthew McConaughey went to the US Patent and Trademark Office (USPTO). There he became the first actor to register his voice, his image, and the phrase ("Alright, alright, alright") that catapulted him to fame. McConaughey is no algorithm hater; in fact, he is an investor in ElevenLabs, a company whose catalog makes celebrity voices available with full legal guarantees.

His Danish colleague, Mads Mikkelsen, has fared much worse. Last December, the U.S. Department of Homeland Security posted a clip of the actor dancing drunkenly in the film "Another Round" on its X account to celebrate the success of its raids against undocumented immigrants. The studio Zentropa demanded the clip's removal, emphasizing that the unauthorized use of the scene from their film violated copyright law. Homeland Security stated in another comment that it would continue posting "about Mads" until "the atmosphere" improved, echoing President Trump's messages on Truth Social.

But institutional shitposting is a whole different ballgame.