Engineering Throughput Per Developer Rose 116% Across Six Big Tech Organizations Between Q1 2025 and Q1 2026
Five quarters of commit-level analysis from Navigara Research show output composition shifting toward growth work, with most of the rise concentrated in a single quarter
Average engineering throughput per software engineer rose 116% across the public-repository activity of Cloudflare, Vercel, OpenAI, Google, Meta, and Microsoft between Q1 2025 and Q1 2026, according to research published today by Navigara. The figure is descriptive. The study makes no causal claim about why it moved. This is the first edition from Navigara’s open source research team. Navigara has a commercial interest in the conclusion that engineering work is difficult to measure, and the paper says so on the first page. Methodology, sample boundaries, and known limitations are open.
The Headline Number
Across the open cohort of 565 to 676 qualifying engineers, developer-weighted mean Engineering Throughput Value (ETV) rose 116% year over year, with a 95% confidence interval of [+84%, +148%]. On a fixed panel of 418 engineers active in every quarter of the window, the same metric rose 98% (95% CI [+63%, +140%]). About 85% of the headline survives holding the population constant. The remainder is cohort growth as new engineers cleared the qualifying threshold.
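The open-cohort versus fixed-panel comparison can be sketched in a few lines. All engineer IDs, ETV values, and cohort sizes below are invented for illustration; they are not Navigara's data.

```python
# Hypothetical sketch of the open-cohort vs. fixed-panel comparison.
# Engineer IDs and quarterly ETV scores are invented for illustration.

def mean_etv(observations):
    """Developer-weighted mean: each engineer contributes one observation."""
    return sum(observations.values()) / len(observations)

# engineer_id -> quarterly ETV score (arbitrary units)
q1_2025 = {"a": 10.0, "b": 12.0, "c": 8.0}
q1_2026 = {"a": 22.0, "b": 20.0, "c": 18.0, "d": 30.0}  # "d" newly qualified

# Open cohort: everyone who qualifies in each quarter.
open_cohort_change = mean_etv(q1_2026) / mean_etv(q1_2025) - 1

# Fixed panel: only engineers present in both quarters of the comparison.
panel = q1_2025.keys() & q1_2026.keys()
panel_change = (mean_etv({e: q1_2026[e] for e in panel})
                / mean_etv({e: q1_2025[e] for e in panel}) - 1)

print(f"open cohort: {open_cohort_change:+.0%}, fixed panel: {panel_change:+.0%}")
```

The gap between the two numbers is exactly the cohort-growth effect the report describes: engineers who newly clear the qualifying threshold can lift the open-cohort mean without any change in the fixed panel.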
The Trajectory, Composition, and Mechanism
The five-quarter series is +0%, +9%, +18%, +25%, +116%. Most of the rise sits in Q1 2026. A steady-slope rise and a late-window inflection have different causal stories, and the report does not attempt to choose between them. Whether the Q1 2026 inflection holds, decays, or accelerates is a question for the next edition.

Maintenance share fell 10 percentage points, from 56% of classified output in Q1 2025 to 46% in Q1 2026. Growth share rose 7 points, from 29% to 36%. Fixes share rose 3 points, from 15% to 18%. Maintenance remains the largest classification across all organizations in the sample, and maintenance and fixes together account for between 57% and 70% of Q1 2026 output, depending on the organization.

Commits per qualifying engineer rose 35% over the window, while ETV per commit rose 51%: the intensity of each unit of work moved more than its frequency. A commit in Q1 2026 carries 51% more weight than a commit in Q1 2025. The data describes the shift without claiming the cause.
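One subtlety in reading the 35% and 51% figures together: averaged across engineers, the commit-count factor and the per-commit factor need not multiply exactly to the cohort-mean throughput change, because a mean of per-engineer products is not the product of the means. A toy sketch with invented numbers:

```python
# Toy decomposition of per-engineer throughput into commit count and
# per-commit weight. All numbers are invented for illustration.

engineers = [
    # (commits_2025, etv_per_commit_2025, commits_2026, etv_per_commit_2026)
    (40, 1.0, 60, 1.6),
    (20, 1.2, 24, 1.7),
    (30, 0.9, 39, 1.3),
]

def mean(xs):
    return sum(xs) / len(xs)

commit_factor = mean([c26 / c25 for c25, _, c26, _ in engineers])
weight_factor = mean([w26 / w25 for _, w25, _, w26 in engineers])
throughput_factor = mean([(c26 * w26) / (c25 * w25)
                          for c25, w25, c26, w26 in engineers])

# The product of the mean factors approximates, but does not equal,
# the mean throughput factor.
print(f"commits x{commit_factor:.2f}, per-commit x{weight_factor:.2f}, "
      f"throughput x{throughput_factor:.2f}")
```

This is why the two component figures should be read as a description of where the movement sits, not as an exact factorization of the +116% headline.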
The Spread
Per-developer movement varies by organization, rebased to each org’s first qualifying quarter:
- OpenAI: +373%
- Cloudflare: +157%
- Microsoft: +137%
- Vercel: +92%
- Google: +56%
- Meta: +51%
OpenAI’s cohort entered the report in Q2 2025 and grew from 6 to 31 qualifying engineers across the window. Read its line as a ramp, not a baseline. The AI-forward sample (Cloudflare, Vercel, OpenAI) moved more than the incumbent sample (Google, Meta, Microsoft). The direction is consistent with the AI-adoption hypothesis. It is not proof of it. Cohort size and movement are inversely correlated in this sample.
“The productivity gains from AI tools are real, and the speed-up is not linear. Teams are shipping substantially faster. But every company and team differs, and developers are still learning how to achieve high, sustainable performance over a longer period of time,” said Jirka Bachel, co-founder and CEO of Navigara. “Velocity is subjective and breaks at scale. Engineering Throughput Value (ETV) gives you a consistent, data-driven view of engineering output.”
Sample and Limits
The six organizations were not sampled randomly. Cloudflare, Vercel, and OpenAI were selected because each frequently makes public claims about AI productivity gains within its engineering organization. Google, Meta, and Microsoft were selected as incumbents of substantially larger scale with strong public reputations for engineering talent density.

Public-repository activity at Google, Meta, and Microsoft is concentrated in developer tooling, SDKs, and open-source frameworks (TypeScript, React, VSCode, Playwright, Perfetto, Hermes, Buck2, and similar) rather than the internal product codebases where most engineering at those companies happens. Public-repo activity at Cloudflare, Vercel, and OpenAI is closer to their core product surface but is still public by construction. Findings describe activity in the in-scope public repositories. Generalization to private internal engineering at any of these organizations is not supported by the data.

The engine cannot distinguish AI-assisted commits from non-assisted commits when the author is a person. Bots are excluded by pattern match on email and display name. Whether the work in the sample shipped the right things is a separate question. This study cannot answer it.
The Methodology

A per-commit scoring engine produces three sub-scores for every merged commit to the default branch of 66 public repositories: Growth, Maintenance, and Fixes. Each begins with a context-complexity signal, scaled by an engagement multiplier that weights targeted modifications in complex areas above equivalent changes in trivial code. Decay factors discount specific patterns: a similarity dampener reduces credit for mechanical refactors and copy-paste; a blame decay factor discounts changes that overwrite the same author's recent work within a short business-day window; and a copy decay factor reduces credit for commits that include a high proportion of lines duplicated from elsewhere in the codebase. For Fixes, a waste multiplier reflects how long the original code existed and how often the affected area had been modified. Machine-learning components tune thresholds and coefficients within this structure, and a large-language-model classifier resolves ambiguous work classifications where pattern-based signals are insufficient.

The report layer sums the three sub-scores into a single scalar per commit, the Engineering Throughput Value (ETV). The cross-org headline is the developer-weighted mean ETV across qualifying Software Engineers, with each engineer contributing one observation per quarter regardless of organization size. A contributor qualifies as a SWE in a given quarter when the role classifier assigns Software Engineer and the contributor has recorded commit activity in at least 10 weeks of the measurement window. Per-organization results appear only when a cohort reaches the 20-SWE sample floor. Confidence intervals are computed by bootstrap over SWEs within each quarter (1,000 iterations, seeded for reproducibility). Single-quarter movements should be read against the band; the multi-quarter slope is the signal.
About the Research
This is the first edition from Navigara’s open source research team, which studies engineering output at the commit level across public code. Code, data definitions, methodology, and the full technical appendix for this study are available at research.navigara.com.
About Navigara
Navigara is a performance measurement platform for engineering organizations. Based in San Francisco and Prague, it connects to GitHub, Jira, and Linear to translate engineering execution into signals leadership can act on.
Contact Information
Michal Habdank
Chief Storyteller
Navigara
hello@navigara.com
