The Default Trap: Why Anthropic's Data Policy Change Matters
Read the terms of service. Don't make assumptions. Don't accept the defaults.

Yesterday, Anthropic quietly flipped a switch. If you're a Claude user, your conversations are now training data unless you actively say no. Not when you give feedback. Not when you explicitly consent. By default, from day one.

Here's what changed: previously, Claude didn't train on consumer chat data without your explicit thumbs up or down. Clean, simple, respectful. Now? Everything you type becomes model training fodder unless you opt out.
Read more at natesnewsletter.substack.com