Starting April 24, GitHub will use interaction data from Free, Pro, and Pro+ Copilot users to train AI models — unless you opt out. Business and Enterprise customers are exempt.
On March 25, 2026, GitHub quietly announced a significant change to its Copilot data usage policy. Individual users on Free, Pro, and Pro+ plans will have their interaction data used for AI model training by default. If you don't want your code helping train the next generation of Copilot, you have until April 24 to opt out.
How to Opt Out (30 Seconds)
1. Go to github.com/settings/copilot
2. Scroll to the Features section
3. Find "Allow GitHub to use my data for AI model training"
4. Toggle it off
5. Save
If you previously opted out of data collection for product improvements, that preference carries over and you don't need to do anything. Verify anyway, though: settings can reset during plan changes.
What Data Is Collected?
GitHub's updated policy specifies that "interaction data" includes:
- Code snippets — the code you write and the context around Copilot suggestions
- Accepted suggestions — which completions you kept vs. dismissed
- Navigation patterns — how you move through files during coding
- File names and repository structure — project layout metadata
- Feedback signals — thumbs up/down, edits to suggestions
This goes beyond just "code completions." It's a detailed picture of how you code, what you're working on, and how you interact with AI assistance.
Who Is Affected?
- Copilot Free ($0/mo) — affected, opt-out available
- Copilot Pro ($10/mo) — affected, opt-out available
- Copilot Pro+ ($39/mo) — affected, opt-out available
- Copilot Business ($19/seat/mo) — NOT affected, data never used for training
- Copilot Enterprise ($39/seat/mo) — NOT affected, data never used for training
Individual developers paying up to $39/month get weaker privacy protections than corporate users. If data privacy matters to you, only the Business and Enterprise tiers guarantee your code stays out of training data, regardless of any opt-out setting.
Why This Matters for Your Tool Choice
This policy change adds a new dimension to the "which AI coding tool?" decision. If you're privacy-conscious:
- GitHub Copilot — requires active opt-out for individual plans. Business/Enterprise are safe.
- Cursor — uses third-party models (OpenAI, Anthropic, Google). Offers a privacy mode that keeps your code from being stored server-side; the Business tier adds zero-retention.
- Claude Code — Anthropic's usage policy states API inputs are not used for training by default. Max subscribers are likewise opted out by default.
- Windsurf — offers zero-retention mode on enterprise plans. FedRAMP certified for government use.
- Tabnine — markets itself on privacy. Enterprise tier runs entirely on-premise.
For developers handling sensitive or proprietary code, this is a meaningful differentiator. Our team comparison guide breaks down privacy features by tool.
The Broader Trend
GitHub isn't the first to make training opt-out rather than opt-in. But doing it for a tool that sits inside your IDE — with access to your code, your file structure, and your workflow — raises the stakes.
Expect other tools to face similar pressure. The question isn't whether AI tools will want your data — it's which ones are transparent about it and which make opting out easy.
What to Do Now
- Opt out at github.com/settings/copilot if you don't want your data used for training
- Check your plan — Business and Enterprise users are automatically exempt
- Evaluate alternatives if data privacy is a hard requirement — see our Copilot pricing breakdown and full tool comparison
- Set a calendar reminder for April 24 if you want to verify your settings before the change takes effect
Compare privacy features across all AI coding tools
Team & Enterprise Comparison →