Re: Guidelines for LLM generated contributions
On Fri, Sep 19, 2025, 5:22 AM Matthieu Mequignon <notifications@odoo-community.org> wrote:
> Hi!
> While I understand the concern and the need for compromise here (because I know this is going to happen, no matter what is decided), I'm gonna be «this guy»: I am totally against LLM generated contributions. Regarding migrations, we already have great tools to facilitate developments, such as oca-port doing migrations in seconds.
> I would be ok with community scripts doing the boring/automated changes, such as `tree` -> `list`, `_` -> `self.env._` etc…
> From there, the remaining work is the hardest, and an LLM can (at best) only assist.
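A minimal sketch of the kind of deterministic rename script mentioned in the quoted message above, assuming a standard OCA module layout; the replacement pairs and the naive handling of `_()` are illustrative assumptions, not an official OCA tool:

```python
# Hypothetical bulk-rename helper for the mechanical parts of a migration.
# Assumes the usual OCA module layout (views/*.xml, models/*.py); the
# replacement pairs are examples only, not a complete or official rule set.
import re
import sys
from pathlib import Path

XML_RULES = [
    (re.compile(r"<tree\b"), "<list"),
    (re.compile(r"</tree>"), "</list>"),
]
PY_RULES = [
    # Naive: assumes `_(` is the translation helper used inside model methods;
    # module-level `_()` calls would still need manual review after this pass.
    (re.compile(r"(?<![\w.])_\("), "self.env._("),
]


def rewrite(path: Path, rules) -> bool:
    """Apply the rules to one file; return True if the file changed."""
    text = path.read_text(encoding="utf-8")
    new = text
    for pattern, replacement in rules:
        new = pattern.sub(replacement, new)
    if new != text:
        path.write_text(new, encoding="utf-8")
        return True
    return False


def main(module_dir: str) -> None:
    root = Path(module_dir)
    changed = [p for p in root.rglob("*.xml") if rewrite(p, XML_RULES)]
    changed += [p for p in root.rglob("*.py") if rewrite(p, PY_RULES)]
    for path in changed:
        print(f"rewrote {path}")


if __name__ == "__main__":
    main(sys.argv[1])
```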
The point is that LLMs such as Claude Code or Gemini CLI now use tools, like bash. So you can instruct them to run oca-port as the starting point, do the remaining code adjustments, run the tests, and even get the pre-commit green. So it's not about using an LLM alone anymore.
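As an illustration, here is a rough sketch of what that "run oca-port first, then iterate until green" instruction amounts to, driven from Python; the oca-port arguments are an assumption (check `oca-port --help`), and running the module's tests assumes pytest-odoo and an Odoo configuration are already set up:

```python
# Sketch of the "oca-port first, then iterate until green" loop described above.
# The oca-port arguments are an assumption; the test invocation assumes a
# working pytest-odoo setup. An agent edits the code between iterations.
import subprocess

MODULE = "web_responsive"   # hypothetical module name
SRC, DST = "17.0", "18.0"   # hypothetical source/target series


def run(*cmd: str) -> int:
    print("+", " ".join(cmd))
    return subprocess.call(cmd)


# 1. Let the deterministic tool do the first pass.
run("oca-port", SRC, DST, MODULE)

# 2. Iterate until both linting and tests are green.
for attempt in range(1, 6):
    lint_ok = run("pre-commit", "run", "--all-files") == 0
    tests_ok = run("pytest", MODULE) == 0
    if lint_ok and tests_ok:
        print(f"green after {attempt} iteration(s)")
        break
    # <- the remaining, non-mechanical adjustments happen here
```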
You can also feed the AI context with the OpenUpgrade analysis and scripts, and with the migration diffs of the dependency modules.
So in the end the LLM will have plenty of migration context and can do A LOT, extremely fast.
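Concretely, collecting that kind of context can be as simple as dumping the relevant diffs into files the agent can read; the repository layout, branch names and module names below are placeholders, not a recommended tool:

```python
# Sketch: collect the migration diffs of dependency modules as extra LLM
# context. Assumes a local clone of the OCA repository with both version
# branches fetched; branch and module names are placeholders.
import subprocess
from pathlib import Path

REPO = Path(".")                               # an OCA repository clone
DEPENDENCIES = ["web_widget_foo", "web_bar"]   # hypothetical dependency modules
SRC_BRANCH, DST_BRANCH = "origin/17.0", "origin/18.0"

context_dir = Path("llm_context")
context_dir.mkdir(exist_ok=True)

for module in DEPENDENCIES:
    # How each dependency was changed between the two series.
    diff = subprocess.run(
        ["git", "-C", str(REPO), "diff", f"{SRC_BRANCH}..{DST_BRANCH}", "--", module],
        capture_output=True, text=True, check=True,
    ).stdout
    (context_dir / f"{module}.diff").write_text(diff, encoding="utf-8")
    print(f"saved {module}.diff ({len(diff)} characters)")
```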
I would say that, more and more, it is you who will assist the AI and not the reverse...
About basic functional knowledge: well, maybe it's hard to admit, but they have more functional ERP knowledge out of the box than most Odoo developers have...
And when you feed the LLM with the OCA code before asking in the prompt, it can quickly "understand" what the code does. I would say the module authors still beat it, but if you are not the author/main contributor, chances are the LLM will "understand" the module's details quicker than you (disclaimer: you really need to pass the module code and its dependencies to the LLM for this to happen, otherwise yes, you mostly get hallucinations).
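A hedged sketch of what "passing the module code and its dependencies" can look like in practice: walk the `depends` key declared in `__manifest__.py` and bundle everything into one file to prepend to the prompt. The addons path and the module name are assumptions:

```python
# Sketch: bundle a module and its declared dependencies into one text file to
# prepend to the prompt. Assumes an addons directory containing the modules;
# only the `depends` key of __manifest__.py is read, Odoo itself is not imported.
import ast
from pathlib import Path

ADDONS = Path("addons")   # hypothetical addons path


def manifest_depends(module: str) -> list:
    manifest = ADDONS / module / "__manifest__.py"
    if not manifest.exists():
        return []
    data = ast.literal_eval(manifest.read_text(encoding="utf-8"))
    return data.get("depends", [])


def bundle(module: str, out: Path) -> None:
    out.parent.mkdir(parents=True, exist_ok=True)
    seen, todo = set(), [module]
    with out.open("w", encoding="utf-8") as handle:
        while todo:
            mod = todo.pop()
            if mod in seen or not (ADDONS / mod).exists():
                continue  # core Odoo dependencies may live elsewhere; skip them
            seen.add(mod)
            todo.extend(manifest_depends(mod))
            for path in sorted((ADDONS / mod).rglob("*.py")):
                handle.write(f"\n# ===== {path} =====\n")
                handle.write(path.read_text(encoding="utf-8"))


bundle("web_responsive", Path("llm_context/module_bundle.txt"))
```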
About installing, screenshots etc.: an AI such as Gemini CLI can run inside a GitHub Action where Odoo is installed exactly like in the OCA CI. At the moment I run it in a virtualenv where I have my Odoo installed, and it can run pytest-odoo until it makes the tests pass. I would say it's a matter of just a few weeks or months before we can get screenshots of the changes from the LLM+tools directly (people using popular stacks like React have it already).
But meanwhile, full human control and responsibility is what I advocate for.
Also, Gemini CLI tends to work better than Claude Code because the Odoo codebase is huge: if you don't want the LLM to hallucinate, you need to feed its context window with all the relevant Odoo/OCA code. And with 1 million tokens, Gemini beats Claude and most other AIs by a fair margin for Odoo (Claude is only 256k tokens, and just account/models from odoo/odoo will consume 350k tokens and max it out). I'll talk more about that in another thread.
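To make that context-window argument concrete, here is a rough back-of-the-envelope check one could run before choosing a model; the 4-characters-per-token ratio is a crude assumption, not a real tokenizer, and the paths are hypothetical:

```python
# Rough context-budget check: estimate how many tokens a set of source trees
# would consume before picking a model. The 4 characters per token ratio is a
# crude assumption; real tokenizers vary.
from pathlib import Path

CHARS_PER_TOKEN = 4


def estimate_tokens(roots: list) -> int:
    total_chars = 0
    for root in roots:
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix in {".py", ".xml", ".js"}:
                total_chars += len(path.read_text(encoding="utf-8", errors="ignore"))
    return total_chars // CHARS_PER_TOKEN


# Hypothetical paths: the module being migrated plus odoo/addons/account.
needed = estimate_tokens(["addons/web_responsive", "odoo/addons/account"])
for window in (256_000, 1_000_000):
    print(f"{needed:,} tokens needed -> fits in {window:,}? {needed < window}")
```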
In the end, yes, I think we should forbid lazy AI-made PRs by people who just ask anything to ChatGPT or similar tools without any testing or critical thinking.
But my point is: an OCA specialist with an LLM will do 10x more than what they do today, with the same quality or better (yes, it already writes better code than you for the 50% of easy code).
So when the entire industry is making the shift, when our customers will no longer accept a quote of 5 days for something that can now be done in 1 day, and when they will not want to run an Odoo version lagging 2 or 3 versions behind a product that keeps getting exponentially better, we will have to use AI or die. Just like mechanical engineering now uses computer simulation instead of only human calculations on sheets of paper.
In this context, it's easy to claim that AI will not do it as well and to boycott it entirely. But sadly, that's simply not true.
Finally, about all the ethical and energy issues raised in other answers: yes, I agree this is absolutely a major concern. I think we should act politically on this, if it is even possible, to avoid people doing the bad things anyway and submitting from other countries/jurisdictions.
by Raphaël Akretion - 02:41 - 19 Sep 2025
Reference
Guidelines for LLM generated contributions
Dear all,

At least one contributor is planning again to flood the OCA projects with PRs for module migrations: https://github.com/OCA/web/issues/3285. This volume is likely made possible through automation, with an LLM generating the actual migration code (on top of, hopefully, a more deterministic tool like OCA's odoo-module-migrator). Regardless of the volume and the submitter, if the submitter has reviewed, refined and tested the code generated by an LLM, this should not be a problem, but as a reviewer I'd like to know what I can expect.

Holger Brunn pointed out to me that in other projects, this may be covered by a demand in the guidelines to disclose LLM usage and its extent. For an example, see https://github.com/ghostty-org/ghostty/pull/8289/files. I would very much like to see such an addition to the OCA guidelines. Additionally, I would like to suggest that the basic premise (the generated code is indeed first self-reviewed, refined and tested) is also made explicit, and that it is unacceptable to pass on reviewer comments to the LLM only to copy back the LLM's response (which has happened to me on one or two occasions).

Can I have a temperature check for your support for such an addition to the guidelines? Or do you have other ideas or perspectives on the matter?

Cheers,
Stefan

--
Opener B.V. - Business solutions driven by open source collaboration
Stefan Rijnhart - Consultant/developer
mail: stefan@opener.amsterdam
tel: +31 (0) 6 1447 8606
web: https://opener.amsterdam
by Stefan Rijnhart - 09:40 - 18 Sep 2025