Contributors mailing list archives


Re: Proposal for new workflow, incorporating "Optimistic Merging"

By Open Source Integrators Europe, LDA, Daniel Reis
- 28/06/2016 23:51:57

Raphael, about incubator repos/rings of maturity, I agree, but the big question is "how?".
I believe that the workflows behind "satellite repo integration" can be an important piece of the solution for this.
I'm working on a proof of concept for that.
I hope that in a second iteration it can grow to also embrace/support those concepts.

Wrapping up the optimistic merging discussion, it ends up being about having some OM-friendly code review guidelines.
I'm putting together ideas on that, looking closely at what GitHub offers us for this, and I will probably draft a proposal.

On the tooling and workflow side, I don't think there's much to change here, other than the "satellite repo integration" possibility.

We should embrace the existing ones and expand them.
I read that best practice is to automate code reviews as much as possible, and in this area the OCA is very well served:
as of today the MQT tools perform checks on the code that I didn't think would be possible,
and more sophisticated tests are being added, thanks to the great work of Moises Lopez.
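To make the idea concrete, here is a minimal sketch of what such an automated style check does. The rules, messages, and threshold below are illustrative only, not MQT's actual checks:

```python
# Illustrative sketch of an automated style check, in the spirit of the
# MQT linters. The rules and messages here are made up for this example.

def check_source(text, max_len=79):
    """Return a list of (line_number, message) findings for source text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if len(line) > max_len:
            findings.append(
                (lineno, "line too long (%d > %d)" % (len(line), max_len)))
        if line != line.rstrip():
            findings.append((lineno, "trailing whitespace"))
    return findings

sample = "short line\n" + "x" * 90 + "\npadded   \n"
for lineno, message in check_source(sample):
    print("%d: %s" % (lineno, message))
```

The real tools run many such rules on every PR, so a human reviewer never has to argue about them.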

Instead of being the bad guy, MQT tests should be seen as our most hard-working reviewer, producing very good advice
to improve everyone's code - newcomers and seasoned contributors alike.



Citando Raphaël Valyi <>:

Some comments from a pre-OCA old timer: I think this again boils down to the scope of the OCA. I think it is perfectly fine to have a strongly reviewed OCA core, just like the Odoo core by Odoo SA today.

Now I think the OCA sometimes fails to define its borders and tries to embrace the world. IMHO this is doomed to fail. Open source ecosystems have their own economics behind them. Until recently Odoo wasn't stable or full-featured enough to be appealing to governments or large companies (unlike Linux, OpenOffice, etc.). And unlike some web toy, it was also too complex for occasional users to contribute good stuff. That means it was mostly pushed by specialized integrator companies who derive their revenues from being specific project leaders and getting marketing visibility with it. That's a classical pattern. These integrators can agree on many things, but there is no money flowing in that market yet to afford making them agree on everything. And I think this is fine. Just like the Rails ecosystem has a strongly reviewed core of 4-5 gems, the rest of the scope is very active, very professional and very attractive without any official review bureaucracy.

So again, I advise having several rings of compliance in the OCA, with possible incubator-style repos for stuff far from that common core.
Finally, do not underestimate the possibility of doing cross-team reviews and PRs. We do that a lot at Akretion. For instance, somebody makes a PR with interesting ideas in a repo we review, but it's not enough to be merged and/or the author is not reactive enough: so we fork that PR into our own repo, make additional changes, and possibly review and merge it into the repo. We also have partners making PRs against our own repos with work-in-progress topics.
I cannot imagine a world where we would depend on looser review for critical code; we already have enough problems with the monthly regressions in the core of Odoo.
Another thing I suggest is electing revocable "super committers" who may need fewer reviews or have their reviews count double. Very often 80% of the code is made by 20% of the contributors or less. I think the most important thing is making sure these people are productive and don't go crazy with the bureaucracy imposed on them.
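The weighted-review idea above could be sketched like this; the names, the threshold, and the exact weighting are hypothetical, made up purely to illustrate the mechanism:

```python
# Hypothetical sketch of the "super committer" idea: reviews from elected
# super committers count double toward the approval threshold.
# Names, weights, and the threshold are invented for illustration.

def review_score(reviewers, super_committers):
    """Total review weight: 2 per super committer, 1 per regular reviewer."""
    return sum(2 if r in super_committers else 1 for r in reviewers)

def approved(reviewers, super_committers, threshold=2):
    """A PR is approved once the weighted review count reaches the threshold."""
    return review_score(reviewers, super_committers) >= threshold

supers = {"alice"}
print(approved(["alice"], supers))         # a single super committer suffices
print(approved(["bob"], supers))           # one regular review does not
print(approved(["bob", "carol"], supers))  # two regular reviews do
```

The "revocable" part would simply be removing a name from the super committer set.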
On Fri, Jun 17, 2016 at 6:23 PM, Daniel Reis <> wrote:

Hi David, that's off topic for this discussion.

For what it's worth, we want code style conformity.
Whether you do it by hand or use some code-formatting tool is up to you.

Personally, I'm not fond of automatic formatting.
For example, IMO difficulties with line lengths are usually a symptom
that some refactoring is needed, such as isolating pieces of logic
inside a function, or using variables as aliases for longer expressions.
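A small sketch of that refactoring, with made-up model names (not actual Odoo records): instead of fighting the line-length limit, the over-long expression is split into named pieces of logic.

```python
# Illustrative example of Daniel's point, with invented names.
from collections import namedtuple

Line = namedtuple("Line", "price_unit qty discount state")
Order = namedtuple("Order", "lines")

# One hard-to-read, over-long line:
#   total = sum(l.price_unit * l.qty * (1 - l.discount / 100.0)
#               for l in order.lines if l.state == 'confirmed')
# becomes two named, testable pieces:

def line_subtotal(line):
    """Net amount of one order line, discount applied."""
    return line.price_unit * line.qty * (1 - line.discount / 100.0)

def order_total(order):
    confirmed = (l for l in order.lines if l.state == "confirmed")
    return sum(line_subtotal(l) for l in confirmed)

order = Order(lines=[Line(10.0, 2, 10.0, "confirmed"),
                     Line(5.0, 1, 0.0, "draft")])
print(order_total(order))  # 18.0
```

Each piece now fits comfortably within the line limit and carries a name that documents its intent.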




Citando David Arnold <>:

Hi Daniel
As a big part of the endless discussions and the time-consuming, discouraging ping-pong (to many, just a waste of time) is code style, you should have a look at this tool I already proposed some time ago: - it was made to end code-formatting pedantry.
Best, David
On Wed, Jun 8, 2016 at 17:57, David Arnold (<>) wrote:
Daniel, I thank you for the initiative. You are definitely speaking from my heart when you reveal the problems! I really like your proposal. To me, Bidoul's analysis incorporating Maxime's feedback about the wheelhouse was the most complementary one: having more distributed responsibility in smaller units is a definite win! There is a common wisdom in management cybernetics: "you need to meet chaos with chaos, if you want to succeed"! I also want to highlight what Jairo said about the labeling, its psychological effect of feeling "welcome", and its very positive effects on self-education.
From what I was reading, the following triad might satisfy all relevant concerns:
- Easy Onboarding: quicker and less restrictive merges; encourage basic self quality assurance first, via the proposed labeling approach.
- Distributed Maintenance Burden: smaller units of responsibility.
- Quality: release workflows (aka tags and wheelhouse builds - focus on reducing the burden by improving tooling).
Best Regards, and I really don't have enough thumbs to put up for this initiative!
On Wed, Jun 8, 2016 at 5:08, Daniel Reis (<>) wrote:


> I just talked to one of the contributors of OpenUpgrade, and we agreed this repo might be a good testbed

I agree that optimistic merging is a good fit for OpenUpgrade.
The tests + merge on green sounds like a good strategy, and it can be implemented regardless of whatever comes out of this discussion.
I can't find much to add, other than mentioning the project's specific merge criteria somewhere in the docs.

While it is certainly an interesting experiment, I'm afraid it won't be able to provide evidence about many of the concerns presented for "regular" repos.



Citando Holger Brunn <>:

> The goal would be for a few volunteers to drive an *experiment* with
> optimistic merging, in a couple of selected repos. It should keep having
> stable versions with peer reviews and mandatory passing tests. Quality of
> published modules must not be compromised in the process.

I just talked to one of the contributors of OpenUpgrade, and we agreed this 
repo might be a good testbed: Here, it's in the nature of the topic that a 
contribution only covers some specific cases (you often only migrate the parts 
of a module you actually use), PRs take outrageously long to get 
merged because we're way too few people, and things are not helped by the fact 
that you need in-depth knowledge of Odoo n, Odoo n+1, the code of both 
versions, and a deep functional understanding of the domain to say something 
useful about the code in question.

So what we talked about was:

- introduce rigid tests
- everything that doesn't fail the tests can be merged more or less at once

This will help us with a problem quite notorious for this repo: the 
migration scripts for module X are partially done, but given there's a pending 
PR for months, other people are discouraged from adding fixes because it is 
clumsy to PR against the PR's branch.
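The "merge on green" rule above could be sketched as follows; the check names and statuses here are hypothetical, not actual OCA or OpenUpgrade tooling:

```python
# Hedged sketch of "merge on green": a PR becomes mergeable as soon as
# every required automated check reports success. Check names are invented.

def can_merge(statuses, required=("tests", "lint")):
    """Merge on green: all required checks must report 'success'."""
    return all(statuses.get(name) == "success" for name in required)

print(can_merge({"tests": "success", "lint": "success"}))  # True
print(can_merge({"tests": "failure", "lint": "success"}))  # False
print(can_merge({"tests": "success"}))                     # False: lint missing
```

The point is that the gate is purely mechanical: no human judgment is needed before the first merge, which removes the months-long wait described above.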

Would the proponents of optimistic merging maybe be willing to write out their 
exact vision on ? I plan to start 
with the testing part in the coming days (we need that independently of using 
this approach or any other). Another beauty of this repo is that there's no 
real issue of deployment here. It seems to me that many people in this thread 
tie the question of the merge process to the process of deployment, which 
complicates stuff unnecessarily in my opinion, because they are pretty much 
disparate topics.

PS: I still think for most repos this is not the solution. In my opinion, the 
solution is rather some kind of karma/kudo system where you need to have done 
reviews before your stuff is reviewed and subsequently merged. Sounds simple, 
but I fear it's hard to come up with a measurement of what a 'sensible review' 
is. That put aside, it would make reviewing part of the submitting process, 
and this way give us a lot more reviewers. And as soon as you've done a few 
reviews, you'll notice how much you learn from it and do it anyways 
afterwards. If you want to talk about this, please start a new thread in order 
not to hijack this one.

Therp - Custom work in open development

Holger Brunn - Design and implementation


Post to:


Raphaël Valyi
Founder and consultant
+55 21 3942-2434
