Monday, September 22, 2025

When Your Agent Writes Code Faster Than You Can Review


Agentic coding promises a turbocharged workweek, cranking out pull requests faster than any keyboard warrior could manage by hand. The real bottleneck isn’t how quickly code appears—it's how quickly humans can keep up with the reviews. If you’re planning to coast on those newly liberated hours, beware: the pace is relentless, and those who snooze will get lapped. Adapt, hustle, and wield your extra time, or get comfortable watching the leaderboard from the bottom.

PS! I continue to recommend human code reviews and remain open to updating my perspectives as AI evolves :)

Short take: review is the constraint; ship more anyway.

A PR takes ~6 hours to write and 15–20 minutes to review—per reviewer. Agentic coding (Copilot and friends) doesn’t cancel your review policy; it makes it matter. As adoption spreads, you spend more time reviewing and still ship more.
Whether the agentic multiplier is 1.2×, 2×, 4× or more, your team will still benefit. But when does that flip, and review start choking the flow? Read on.

It doesn’t “flip” mathematically, but in practice review starts choking the flow when m·r ≈ 1 — about 10× coding speed with two reviewers (r ≈ 0.097) or ~20× with one (r ≈ 0.049), given 6 h/PR and 15–20 min per reviewer.

Assumptions (and please argue my assumption data)

  • One author. We model one‑reviewer and two‑reviewer policies.
  • Coding per PR ≈ 6h. Review per reviewer ≈ 15–20m. (Yes, short. Keep it that way.)
  • That gives a review/coding ratio r per reviewer of ~0.042–0.056; we use ~0.049. Two reviewers ≈ double r.
  • Agentic coding is a multiplier m on coding speed (1.2×, 1.5×, 2×, 3×, 4×).

Light math, heavy impact

Split the week between writing and reviewing. If writing gets m× faster, more PRs arrive, so review minutes go up. A PR that used to cost 1 unit of authoring time plus r of review now costs 1/m + r, which gives a back‑of‑napkin formula for team throughput vs. today:

gain ≈ m·(1+r) / (1+m·r)

r is the total human review time for a PR divided by its authoring time. Bigger r → more RAT (Review-Added Tapering): the gains flatten as review effort eats into the coding speedup.
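Under the assumptions above, the formula is easy to poke at yourself. Here is a minimal sketch in Python (the function and constant names are mine, not from any library):

```python
# Back-of-napkin model: a PR costs 1 unit of authoring time plus r of review.
# With agentic multiplier m, authoring drops to 1/m, so throughput vs. today is
# gain = m*(1+r) / (1 + m*r).

def gain(m: float, r: float) -> float:
    """Team throughput relative to today, given coding multiplier m and
    review/coding ratio r (total human review time / authoring time)."""
    return m * (1 + r) / (1 + m * r)

R_ONE = 17.5 / 360   # ~0.049: one reviewer, midpoint of 15-20 min over 6 h
R_TWO = 2 * R_ONE    # ~0.097: two required approvals

print(round(gain(2.0, R_ONE), 2))   # ~1.91: 2x coding speed, one reviewer
print(round(gain(2.0, R_TWO), 2))   # ~1.84: same, with two reviewers

# Review starts choking the flow around m*r = 1, i.e. m = 1/r:
print(round(1 / R_ONE, 1), round(1 / R_TWO, 1))   # ~20.6x and ~10.3x
```

Note the shape of the curve: at m = 1 the gain is exactly 1 regardless of r, and as m grows the gain approaches the ceiling (1 + r)/r, which is why small review times keep so much headroom.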

Case A — One human reviewer

Throughput climbs and review takes a bigger slice of the week, but at these review times you still have plenty of headroom.

Figure 1 — Team throughput vs. agentic multiplier (one reviewer; 15–20 min sensitivity band).


Figure 2 — Share of team time spent on review (one reviewer).

Case B — Two human reviewers

Two required approvals roughly double review time per PR, so you feel the constraint sooner.
Figure 3 — Team throughput vs. agentic multiplier (two reviewers; sensitivity band).

 

Figure 4 — Share of team time spent on review (two reviewers).

Clearer throughput tables

Two quick tables; both assume a team of eight and the same PR size/quality.

Table A — Adoption curve (8 people; each adopter is 1.2×)

Adopters (of 8) | Avg multiplier (m̄) | Throughput × (1 reviewer) | Throughput × (2 reviewers)
0 | 1.000 | 1.00 | 1.00
1 | 1.025 | 1.02 | 1.02
2 | 1.050 | 1.05 | 1.05
3 | 1.075 | 1.07 | 1.07
4 | 1.100 | 1.09 | 1.09
5 | 1.125 | 1.12 | 1.11
6 | 1.150 | 1.14 | 1.13
7 | 1.175 | 1.17 | 1.16
8 | 1.200 | 1.19 | 1.18

Table B — Throughput as average team multiplier grows (to 4×)

Avg multiplier (m̄) | Throughput × (1 reviewer) | Throughput × (2 reviewers)
1.0× | 1.00× | 1.00×
1.2× | 1.19× | 1.18×
1.5× | 1.47× | 1.44×
2.0× | 1.91× | 1.84×
3.0× | 2.75× | 2.55×
4.0× | 3.51× | 3.16×
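Both tables fall out of the same gain formula. A quick Python sketch reproduces them (assuming, as in Table A, that each adopter raises the team's average multiplier by 0.2/8; the names are my own):

```python
def gain(m: float, r: float) -> float:
    # Throughput vs. today: m*(1+r) / (1 + m*r)
    return m * (1 + r) / (1 + m * r)

R1, R2 = 17.5 / 360, 35.0 / 360  # review/coding ratio: one vs. two reviewers

# Table A: k of 8 adopters, each at 1.2x -> average multiplier 1 + 0.025*k
for k in range(9):
    m_bar = 1 + 0.2 * k / 8
    print(f"{k}  {m_bar:.3f}  {gain(m_bar, R1):.2f}  {gain(m_bar, R2):.2f}")

# Table B: throughput as the average team multiplier grows
for m_bar in (1.0, 1.2, 1.5, 2.0, 3.0, 4.0):
    print(f"{m_bar:.1f}x  {gain(m_bar, R1):.2f}x  {gain(m_bar, R2):.2f}x")
```

Swap in your own PR sizes and review times for R1/R2 to see where your team's curve bends.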

 

About those “free” minutes

Agentic coding frees time. If it doesn’t become more (or better) PRs, it still has to land somewhere:

  • Best: quality. More tests, tighter diffs, cleaner commits. Review gets faster when diffs are tidy.
  • Good: non-code work. Specs, comms, reviews, mentoring—all the glue.
  • Worst: Xbox/Netflix. Teams that reward shipped work will see the reinvestors lap the coasters.

Push the pivot out

  • Keep PRs small—cheap to read is cheap to ship.
  • Let CI and AI do pre-review: lint/tests/security, AI summaries and risk flags.
  • Block time for review—don’t background it.
  • Require two approvals only when they truly buy down risk.

Bottom line

Manual review doesn’t kill agentic gains. With 6h write / 15–20m per reviewer, one-approval teams have plenty of headroom; two-approval teams still gain but hit the review wall sooner. Convert freed minutes into smaller, better PRs—or more of them—and the meter on shipped work keeps spinning faster.

Wednesday, August 13, 2025

Governing Feedback Data Sharing in Microsoft 365 Apps and Copilot



Microsoft treats user feedback as confidential and uses it to improve the product experience. That does, however, involve real people reading what’s submitted. For many organizations that’s fine; for others it’s something they want to control—particularly when feedback is sent from Copilot chat, where a message might contain sensitive business context.

Good news: you can govern what can (and cannot) be sent in the feedback dialog.

Note: Prompts and responses used inside a Copilot conversation are not viewed by Microsoft. What I’m covering here is only the optional feedback form users can submit.

Where to configure Feedback policies


These settings live in the Microsoft 365 Apps admin center as Cloud Policy:

  • Go to config.office.com → Customization → Policy Management.

  • Create or edit a policy configuration for Microsoft 365 Apps and scope it to the users/groups you want.

  • Search for Feedback to find the policies below.

Official docs for Cloud Policy: https://learn.microsoft.com/microsoft-365-apps/admin-center/overview-cloud-policy.

These policies apply to apps and web experiences that use the standard “Send feedback to Microsoft” UX from Microsoft 365 Apps (including Copilot surfaces that use that dialog). Teams uses its own policy model for feedback, so manage Teams separately.

Also important: these settings are not restrictive by default. Unless you explicitly disable them, users can include logs and content samples.

See https://learn.microsoft.com/en-us/microsoft-365/admin/manage/manage-feedback-ms-org for a list of products covered by feedback policies set at config.office.com.

The four knobs that matter (set to Disabled to restrict)

I’m listing them in the order I recommend you evaluate them.

1) Allow Microsoft to follow up on feedback submitted by users

  • What changes: the small text at the bottom of the dialog that says Microsoft may contact the user.

  • Why it matters: disabling it removes the consent for follow-up, so Microsoft won’t contact users about their feedback. Many orgs prefer no outbound follow-ups to end users.

2) Allow users to include screenshots and attachments when they submit feedback to Microsoft

  • What changes: hides the Include a screenshot control in the form.

  • Why it matters: screenshots often contain customer content.

  • Gotcha: this does not cover log/content attachments (that’s the next policy).

3) Allow users to include log files and relevant content samples when feedback is submitted to Microsoft

  • What changes: removes the option that shares the prompt, generated response, relevant content samples, and additional log files with the feedback.

  • Why it matters: this is the big one for Copilot. When enabled, users can (and by default will) include parts of the conversation and context data. Setting it to Disabled prevents those from being sent.

4) Allow users to submit feedback to Microsoft

  • What changes: blocks the feedback dialog from appearing at all.

  • Why it matters: the nuclear option. If you don’t want any feedback sent from the product UI, turn this off.

What users will see

Here’s how the UI shifts as you apply the policies:

  • With default settings, users can add a screenshot and (by default) include prompt/response + logs/content samples.


  • Disable follow-up → the small “Microsoft may contact you…” text is removed/changed.       


  • Disable screenshots → the screenshot checkbox/button disappears (users cannot attach images), but log/content sharing may still be available unless you disable it too.


  • Disable log files and content samples → the “Share prompt, generated response, … and additional log files” option is removed, so no conversation context is shared.


  • Disable submit feedback → the dialog doesn’t show.

Recommended approaches

Pick the level that matches your risk tolerance:

  • Leave as is: the default behavior lets Microsoft capture valuable feedback to improve products and experiences for the people using them.

  • Balanced: Disable screenshots and log files/content samples, allow feedback, and optionally disable follow-ups.

  • Strict: Disable screenshots, log files/content samples, and follow-ups.

  • Locked down: Disable submit feedback entirely.

If you do allow feedback, consider disabling the log files and relevant content samples option for Copilot users. While Microsoft handles this data securely, turning it off ensures that no conversation snippets or contextual information are included in feedback—something some organizations prefer for peace of mind or to align with their internal data-handling practices.

Final thoughts

I work at Microsoft, and I know there are actual people reading feedback to make our products better. That’s a feature, not a bug—but each organization has different requirements for customer data. With Cloud Policy you can decide what’s appropriate for your tenant, from light filtering to full lockdown.

Again, this doesn’t change how Copilot processes prompts during normal use—those aren’t viewed by Microsoft. We’re only talking about the separate, optional act of sending product feedback, and the options exist so you can match it to your organization’s level of comfort.

If you’ve got a mixed environment (e.g., Teams), remember to set feedback controls where that app expects them.

Happy governing!

Tuesday, May 27, 2025

Taming the Context-Switch Hydra: How an AI Sidekick Saved My Sanity



TL;DR:

GitHub Copilot (in Agent mode with Codespaces) dramatically cuts down the cost of context switching, perfectly fitting my chaotic, multitasking workstyle. It lets me stay productive and in flow, even as I juggle coding, helping others, and specs—all while being 51 and still loving the craft.

For Those Who Prefer the Director’s Cut… keep on reading