Add AI policy to committer's guide after D54366 merged.
Diff Detail
- Repository: R9 FreeBSD doc repository
- Lint: No Lint Coverage
- Unit: No Test Coverage
- Build Status: Buildable 70046, Build 66929 (arc lint + arc unit)

Event Timeline
Frankly, I don't think this policy is strong enough. "You are responsible" doesn't have any teeth. What happens when somebody makes a contribution that they're "responsible" for that plagiarizes from copyrighted code and/or violates a license? How is FreeBSD protected when somebody "considers" licensing and decides they don't care?
I like the proposed policy in D50650. What was wrong with it?
It's the same thing we face when people copy Linux drivers. It's nothing new. Responsibility lies with the tool's users.
> I like the proposed policy in D50650. What was wrong with it?
It was unworkable. AI isn't some binary 'one shot that produced code' but a thousand different ways that AI can help the developer. The vast majority of them are fine and pose no risk whatsoever outside the system that the AI is being run on. So banning it entirely is a non-starter. That ship has sailed. Outlining responsible use is really the only path forward today.
Inline comments on documentation/content/en/articles/committers-guide/_index.adoc:

| Line | Comment |
|---|---|
| 2418 | I'd add 'and haven't tested' to the last sentence even though it's repetitive. |
| 2419 | Refining the prompt can re-introduce problems corrected in prior iterations, so check it all each iteration as best you can. |
It is not at all the same thing, because the risk associated with it isn't properly assigned. You can say that the person who uses the tool bears responsibility for the contribution that used it, but there is no way to assign accountability. For the sake of argument, what happens if a person's "due diligence" fails to turn up the fact that their AI-contributed code infringes on copyrighted code (perhaps because that code isn't publicly visible but was nonetheless present in the coding agent's training set; perhaps because the person just didn't do proper due diligence). Who bears legal and financial responsibility for that infringement?
> I like the proposed policy in D50650. What was wrong with it?

> It was unworkable. AI isn't some binary 'one shot that produced code' but a thousand different ways that AI can help the developer. The vast majority of them are fine and pose no risk whatsoever outside the system that the AI is being run on. So banning it entirely is a non-starter. That ship has sailed. Outlining responsible use is really the only path forward today.
There is no currently accepted way to "prove" that an AI-assisted contribution is non-infringing, and there is still significant uncertainty around whether AI-assisted code can even be copyrighted at all. Accepting AI-assisted code contributions now, before a relatively stable legal framework is established around them, introduces legal and financial risk both to the project that accepts the code and to the end users of that project. I don't know how you're determining that a "vast majority" of uses are fine, or how you can claim that they "pose no risk," because literally all I see is risk. Accepting AI contributions before there's a way to "prove" them is an unworkable problem.
And sure, you can say "it's good enough for IBM and Red Hat / Canonical and Ubuntu / Microsoft / Amazon," but they have vastly more resources to deal with the legal, financial, or coding challenges that may later appear as precedents are established. FreeBSD is already a very small player in comparison to that landscape, and I think the potential risks are not worth the perceived value. If you think FreeBSD should accept those contributions, then before you do that you need to figure out a game plan for how you can prove code is acceptable and copyrightable, how you can pay the legal fees if you get sued, and how you can deal with the sudden absence of an accepted feature if the only way to resolve an infringement is to yank out the change and revert to how things were before it was accepted.