ZDNet

The new rules for AI-assisted code in the Linux kernel: What every dev needs to know

Read the full article, "The new rules for AI-assisted code in the Linux kernel: What every dev needs to know," on ZDNet.

What Happened

Linus Torvalds and the kernel maintainers have finalized the Linux kernel's new AI policy, but it may not address the biggest challenge with AI-generated code. Here's why.

Our Take

Linux kernel maintainers now require AI-generated patches to carry explicit 'AI-generated' tags and a human sign-off, banning opaque model output from kernel contributions entirely.
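As a rough sketch of how tooling could pre-check that combination of disclosure tag plus human sign-off before a patch goes out, here is a minimal helper. The trailer name `Assisted-by:` and the function itself are illustrative assumptions, not the kernel's official wording:

```python
import re

def has_required_trailers(commit_msg: str) -> bool:
    """Hypothetical pre-flight check: does the commit message carry both
    an AI-disclosure trailer and a human Signed-off-by?

    'Assisted-by' is an assumed trailer name for illustration only."""
    # Collect "Key: value" trailer-style lines from the message.
    trailers = dict(
        line.split(":", 1)
        for line in commit_msg.splitlines()
        if re.match(r"^[A-Za-z-]+:", line)
    )
    has_disclosure = any(k.lower() == "assisted-by" for k in trailers)
    return has_disclosure and "Signed-off-by" in trailers
```

A message missing either trailer would fail the check, prompting the contributor to add the disclosure before submitting.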

This kills the lazy workflow of pasting Copilot suggestions into kernel drivers: each AI-assisted patch needs manual review comparable to a 200-line security audit, turning your '5-minute fix' into a 45-minute liability.

Small driver teams can largely ignore this; anyone touching core mm/ or net/ code needs to budget 3x review time or stick to handcrafted patches.

What To Do

Use a locally run model such as CodeLlama for kernel prototyping instead of cloud-based Copilot, because you'll need full code provenance for the mandatory sign-off tags.
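One way to keep that provenance is to record, for every locally generated suggestion, which model produced it and hashes of the prompt and output. A minimal sketch, where the helper and all field names are illustrative assumptions rather than any official format:

```python
import hashlib
import json
import time

def provenance_record(model: str, prompt: str, output: str) -> dict:
    """Hypothetical provenance entry for a locally generated snippet,
    kept alongside the patch so the disclosure trailer can be written
    accurately later. Field names are illustrative."""
    return {
        "tool": model,
        # Hashes rather than raw text, so the log stays small and
        # doesn't leak proprietary prompts.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "timestamp": int(time.time()),
    }

entry = provenance_record(
    "codellama-13b (local)",
    "fix null deref in foo_probe",
    "if (!dev)\n\treturn -ENODEV;",
)
print(json.dumps(entry))
```

Appending each entry to a JSON-lines file next to your working tree gives you an auditable trail when a maintainer asks how a patch was produced.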

Builder's Brief

Who

developers contributing AI-assisted patches to open-source projects

What changes

disclosure requirements and review friction increase for any AI-assisted kernel contribution starting now

When

now

Watch for

GCC, LLVM, or FreeBSD adopting or explicitly rejecting a similar policy within 60 days

What Skeptics Say

The policy adds disclosure overhead without any enforcement mechanism: maintainers cannot reliably detect AI-generated code, so it functions as liability theater rather than a real quality gate for the kernel.

1 comment

Mikael Strand

linus blessing any ai code at all is the news. the restrictions are fine print


