FFmpeg Developer Accuses OxideAV of Using AI to Launder Code License
An FFmpeg contributor claims OxideAV used generative AI to rewrite their LGPL-licensed code and distribute it as proprietary software, circumventing the original license terms.
On May 6th, a developer from FFmpeg opened a public issue in the OxideAV repository pointing to something specific and serious: their MagicYUV codec implementation, originally published under an LGPL license, appears to have been processed with generative AI to produce a "new" version that OxideAV distributes as proprietary code. The thread is available on GitHub and was highlighted on Hacker News the same day.
The accusation has a blunt name: AI-assisted license laundering. The mechanism, according to the affected developer, allegedly consists of passing the original source code through a language model to obtain a syntactically different but functionally identical rewrite, obscuring its origin and evading LGPL obligations, among them publishing the source code of modifications and maintaining attribution notices.
What is being alleged exactly
The developer is not accusing OxideAV of copying their code verbatim. The claim is more subtle: AI allegedly acted as an intermediary to produce a derivative work that appears original to the naked eye but reproduces the same logic, the same design decisions, and, in some fragments, structures nearly identical to FFmpeg's source code.
This distinction matters. If the rewrite were purely manual and sufficiently transformative, the legal debate would already be complicated enough. With AI in the mix, an additional layer is added: who is the author of the model's output? Can a model "infringe" a license? And what about the company instructing it with others' code?
For now, no court has definitively ruled on whether the output of an LLM trained on, or prompted with, protected code constitutes a derivative work for copyright purposes. But that doesn't make the practice legally neutral: copyleft obligations hinge on tracing provenance, not on proving intent.
Why this goes beyond an isolated case
FFmpeg is one of the foundations of the open-source multimedia ecosystem. Its components appear, explicitly or not, in media players, streaming platforms, editing tools, and video-processing pipelines worldwide. Its community has spent decades policing license compliance with a diligence unusual in the open-source world.
What makes this case relevant, beyond the specific conflict with OxideAV, is that it illustrates an evasion vector the free-software community hadn't needed to consider until relatively recently: using AI models as a license laundromat. If the practice proves effective, or merely tolerated, it creates a structural incentive for less scrupulous actors to adopt it at scale.
On the AI-tooling side, the problem is equally relevant. Claude Code, for example, can generate reimplementations of functions from descriptions or even from existing code. Anthropic's terms of use restrict using the model to infringe third-party rights, but practical oversight falls on the operator and end user. There is as yet no standard technical mechanism to prevent prompting an LLM with someone else's code to produce a "clean" version of it.
For whom this debate matters
For teams integrating open-source code into commercial products, this case should serve as a reminder that using AI in the development process doesn't neutralize license obligations. If a model's input is code under LGPL, GPL, or Apache 2.0, the restrictions of that license don't disappear because the output looks different.
For maintainers of free projects, the case raises the question of whether current mechanisms for detecting violations—source code comparison, structural similarity analysis—are sufficient when the infringer uses AI to transform the form but not the substance.
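Structural-similarity analysis of this kind typically normalizes away the surface details an AI rewrite changes (identifier names, literals) before comparing token patterns. Below is a minimal sketch of the idea, written in Python purely for illustration (FFmpeg itself is C, and production tools such as MOSS or JPlag use far more robust fingerprinting):

```python
import io
import keyword
import tokenize

def normalized_tokens(source: str) -> list[str]:
    """Tokenize Python source, replacing identifiers and literals with
    placeholders so a renamed-but-structurally-identical rewrite still matches."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            out.append("ID")          # any identifier collapses to ID
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            out.append("LIT")         # any literal collapses to LIT
        elif tok.type == tokenize.OP:
            out.append(tok.string)    # operators kept verbatim
        elif tok.type == tokenize.NAME:
            out.append(tok.string)    # keywords kept verbatim
    return out

def shingles(tokens: list[str], k: int = 5) -> set[tuple]:
    """k-length sliding windows over the normalized token stream."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def similarity(a: str, b: str, k: int = 5) -> float:
    """Jaccard similarity of normalized token shingles."""
    sa, sb = shingles(normalized_tokens(a), k), shingles(normalized_tokens(b), k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical example: same structure, every name changed.
original = "def decode(frame, width):\n    total = width * 3\n    return frame[:total]\n"
rewrite  = "def unpack(buf, w):\n    n = w * 3\n    return buf[:n]\n"
print(similarity(original, rewrite))  # high despite every identifier being renamed
```

The point of the sketch is the limitation it exposes: renaming-level rewrites are trivially caught, but an LLM that also reorders statements, inlines helpers, or restructures control flow can drive this kind of score down while preserving the logic, which is exactly the gap the FFmpeg developer is pointing at.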
And for the AI ecosystem in general, it adds a concrete case to a legal debate that moves slower than the technology it aims to regulate.
---
EP Opinion: The OxideAV case is neither the first nor the last. What is striking is the transparency with which the affected developer has documented the accusation in the open. That this kind of conflict plays out on GitHub before it reaches the courts is, at least for now, a sign that the open-source community still trusts public pressure as its first mechanism of accountability.