The ever-evolving landscape of AI innovation means we are constantly navigating new challenges. One issue currently in focus is the inadequacy of our copyright frameworks when it comes to AI-generated materials and the training data used in the development of AI models.
These frameworks, relics of a pre-digital era, struggle to regulate the complexities of AI-generated content, revealing gaps in their adaptability to technological advancements.
This comes with a real risk: without legal certainty, will innovative businesses have the confidence to take positive economic steps forward with generative AI?
Current copyright laws, many dating back to the 1990s, were not designed to address issues around AI-generated content. As a result, only human-produced content can be reliably copyrighted, leaving companies that leverage generative AI vulnerable, as their output may fall outside IP protection altogether.
The lack of copyright protection leaves creators vulnerable, potentially discouraging them from their work and resulting in a loss of innovation for everyone.
The emergence of AI has also revealed shortcomings in how existing copyrighted material may be used to train Large Language Models, as shown by lawsuits from authors such as George R.R. Martin. In that case, the author alleges his work was used as training data for ChatGPT without his approval.
Any related output arguably cannot be considered original, as it was based on his work. These legal battles underscore the urgent need for updated frameworks in a landscape where human and AI creativity intersect.
Issues also arise in the tech space, where 44% of developers reportedly use AI tools in their development process. This could easily lead to copyright issues further down the line for businesses whose developers use code written with the help of an LLM. Applications produced with GenAI coding tools, for example, may well not be protected under current law.
Businesses are exposing themselves to potential copyright sanctions. Murkier issues still arise if a developer uses LLM-written code for only a small part of a project. Can a business copyright the whole program, or could it be subject to litigation further down the line?
What needs to be done?
Businesses must carefully assess the legal standing of any of their GenAI output to be fully aware of how, or whether, their content is protected by current law. It is also critical that businesses understand the potential risk and liability of using content, knowingly or unknowingly, trained using someone else’s work.
The widespread use of AI in content creation has raised concerns about potential copyright infringement and intellectual property theft. AI algorithms can generate vast quantities of content, much of which may inadvertently incorporate copyrighted material. Without clear guidelines and regulations in place, businesses risk unknowingly violating copyright laws, exposing themselves to costly legal battles and reputational damage.
The UK is positioned to lead the charge in crafting legislative frameworks to address these challenges. By establishing regulations around when AI-generated content can be recognised as copyrightable, businesses can innovate with greater confidence, secure in the knowledge that their intellectual property is legally protected.
Parliamentary committees should actively engage with professionals working in AI businesses, gathering their input on the advantages and disadvantages of current copyright frameworks. While the UK’s IPO working group was unable to agree to a voluntary code, this shouldn’t mark the end of the government’s attempt at drafting legislation. By harnessing the collective expertise of industry leaders, policymakers can develop effective regulatory frameworks that foster innovation while safeguarding intellectual property rights.
Simultaneously, lawmakers need to thoroughly reevaluate existing legislation, ensuring that it aligns with the realities of AI innovation. Failure to do so risks stalling the potential of AI while leaving creators vulnerable to exploitation.
Balancing governance without limiting innovation
We can’t lose sight of the delicate balance between innovation and risk when drafting AI regulations. Striking this balance successfully requires policymakers to have a nuanced understanding of what the government can do to help UK AI businesses thrive, as well as of the broader global context of AI regulation.
The EU AI Act serves as a good model of what innovative AI regulation might look like. Its risk-based approach outlines prohibited uses of AI while providing clarity on acceptable practices. Similarly, refining copyright rules for AI will create a need for viable alternatives and clear guidance to ensure businesses remain compliant.
Unlike traditional forms of creation, which can be easily attributed to human authors, AI-generated content blurs the lines of authorship and ownership. Determining the rightful ownership of this content presents a unique set of challenges, requiring lawmakers to rethink traditional notions of copyright and intellectual property.
In response to these challenges, policymakers must prioritise transparency and accountability in the development and implementation of AI-related regulations. This includes providing clear guidelines and standards for AI developers and users, as well as establishing mechanisms for monitoring and enforcing compliance.
Ultimately, the challenges posed by outdated copyright frameworks in the AI era are complex and multifaceted, requiring comprehensive and forward-thinking solutions.
As the issues surrounding LLM-created content show, this is not a black-and-white matter. By adopting a proactive and collaborative approach to regulation, policymakers can ensure that copyright frameworks are equipped to address the unique challenges of AI-generated content while fostering innovation and protecting intellectual property rights.
In doing so, they can lay the groundwork for a future where AI technologies drive positive change and enrich content development processes globally.
Harry Borovick is general counsel at Luminance.