Contributors

Re: Guidelines for LLM generated contributions



On Fri, Sep 19, 2025, 5:22 AM Matthieu Mequignon <notifications@odoo-community.org> wrote:

Hi!

While I understand the concern and the need for compromise here (because I know this is going to happen, no matter what is decided), I'm gonna be «this guy»: I am totally against LLM generated contributions.

Regarding migrations, we already have great tools to facilitate development, such as oca-port, which does migrations in seconds.
I would be OK with community scripts doing the boring/automated changes, such as `tree` -> `list`, `_` -> `self.env._`, etc.
From there, the remaining work is the hardest part, and an LLM can (at best) only assist.

The point is that LLMs such as Claude Code or Gemini-CLI now use tools like bash. So you can instruct them to run oca-port as the starting point, do the remaining code adjustments, run the tests, and even get the pre-commit green. So it's not about using an LLM alone anymore.

You can also feed the AI context with OpenUpgrade analyses and scripts, as well as the migration diffs of the dependency modules.

So in the end the LLM has plenty of migration context and can do A LOT extremely fast.

I would say that, more and more, it is you who will assist the AI and not the reverse...

About basic functional knowledge: well, maybe it's hard to admit, but they have more functional ERP knowledge out of the box than most Odoo developers do...

And when you feed the LLM with the OCA code before asking in the prompt, it can quickly "understand" what the code does. I would say the module authors still beat it, but if you are not the author/main contributor, chances are the LLM will "understand" the module's details quicker than you (disclaimer: you really need to pass the module code and its dependencies to the LLM for this to happen, though; otherwise, yes, you get mostly hallucinations).

About installing, screenshots, etc.: AI such as Gemini-CLI can run inside a GitHub Action where Odoo is installed exactly like the OCA CI is. At the moment I run it in a virtualenv where I have my Odoo installed, and it can run pytest-odoo until it makes the tests pass. I would say it's a matter of just a few weeks/months before we can get screenshots of the changes from the LLM+tools directly (people using popular stacks like React have this already).

But meanwhile, full human control and responsibility is what I advocate for.

Also, Gemini-CLI tends to work better than Claude Code, because the Odoo codebase is huge: if you don't want the LLM to hallucinate, you need to feed its context window with all the relevant Odoo/OCA code. And with 1 million tokens, Gemini beats Claude and most other AIs by a fair margin for Odoo. (Claude is only 256k tokens, and just account/models from odoo/odoo will consume 350k tokens and max it out.) I'll talk more about that in another thread.

In the end, yes, I think we should forbid lazy AI-made PRs by people who just ask anything of ChatGPT or similar tools without any testing/critical thinking.

But my point is: an OCA specialist with an LLM will do 10x more than what they do today, with the same quality or better (yes, it already really writes better code than you for the 50% of easy code).

So when the entire industry is making the shift, when our customers will no longer accept a 5-day quote for something that could now be done in 1 day, and when they will not want to use an Odoo version lagging 2 or 3 versions behind while Odoo gets exponentially better, we will have to use AI or die. Just like mechanical engineering now uses computer simulation instead of only human calculations on sheets of paper.

In this context, it's easy to claim AI will not do it as well and to boycott it entirely. But sadly, that's simply not true.

Finally, about all the ethical and energy issues raised in other answers: yes, I agree this is absolutely a major concern. I think we should act politically on this, if it is even possible, to prevent people from doing the bad things and submitting from other countries/jurisdictions.

by Raphaël Akretion - 02:41 - 19 Sep 2025

Reference

  • Guidelines for LLM generated contributions
    Dear all,
    
    at least one contributor is planning again to flood the OCA projects 
    with PRs for module migrations: https://github.com/OCA/web/issues/3285. 
    This volume is likely made possible through automation, with an LLM 
    generating the actual migration code (on top of, hopefully, a more 
    deterministic tool like OCA's odoo-module-migrator).
    
    Regardless of the volume and the submitter, if the submitter has 
    reviewed, refined and tested the code generated by an LLM, this should 
    not be a problem but as a reviewer I'd like to know what I can expect. 
    Holger Brunn pointed out to me that in other projects, this may be 
    covered by a demand in the guidelines to disclose LLM usage and its 
    extent. For an example, see 
    https://github.com/ghostty-org/ghostty/pull/8289/files.
    
    I would very much like to see such an addition to the OCA guidelines. 
    Additionally, I would like to suggest that the basic premise (the 
    generated code is indeed first self-reviewed, refined and tested) is 
    also made explicit, and that it is unacceptable to pass on reviewer 
    comments to the LLM only to copy back the LLM's response (which has 
    happened to me on one or two occasions).
    
    Can I have a temperature check for your support for such an addition to 
    the guidelines? Or do you have other ideas or perspectives on the matter?
    
    Cheers,
    Stefan
    
    
    -- 
    Opener B.V. - Business solutions driven by open source collaboration
    
    Stefan Rijnhart - Consultant/developer
    
    mail: stefan@opener.amsterdam
    tel: +31 (0) 6 1447 8606
    web: https://opener.amsterdam
    
    
    

    by Stefan Rijnhart - 09:40 - 18 Sep 2025