An insider’s reflection from Trust & Safety
One of the most uncomfortable moments in moderation is this:
You’re looking at content that is factually accurate.
And it’s still causing harm.

People often assume moderation is about stopping misinformation. False claims. Fabricated stories. Manipulated media.
But some of the hardest decisions involve content that is technically true.
So the question becomes uncomfortable very quickly.
If something is true, should it always stay online?
Truth and Harm Are Not Opposites
In Trust & Safety, we learn early that truth does not automatically equal safety.
A real video of a violent incident can retraumatize victims’ families.
Publishing someone’s real home address can put them at physical risk.
Sharing accurate but private medical information about a person can destroy their livelihood.
The information may be correct.
The impact can still be harmful.
Platforms are not courts. They are not journalism outlets in the traditional sense. They operate at massive scale, where amplification changes consequences.
And amplification is where the trouble starts.
The Scale Problem
If a harmful truth is spoken in a small room, the damage is limited.
If it’s pushed to millions through algorithms, the impact multiplies.
I’ve seen content that resurfaces people’s real past mistakes. Old records. Arrest reports. Personal incidents.
Technically true.
But shared in a way designed to harass, shame, or mobilize targeted abuse.
At scale, truth can become a weapon.
That’s where moderation decisions get complicated.
Public Interest vs Private Harm
There’s an important distinction between public interest and private exposure.
Reporting on corruption by a public official serves a civic purpose.
Exposing the home address of a private individual does not.
Both could be true.
But the intent and the public value are very different.
Moderation policies often rely on this distinction. They assess whether the content contributes to public understanding or primarily enables harassment, exploitation, or risk.
Truth is not the only factor.
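To make that distinction concrete, here is a minimal sketch in Python of the kind of rubric a reviewer might walk through. It is a hypothetical illustration, not any platform’s actual policy; the fields, categories, and outcomes are assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Item:
    is_accurate: bool               # the claim checks out factually
    about_public_figure: bool       # subject is acting in a public capacity
    serves_public_interest: bool    # documents corruption, abuse, civic matters
    exposes_private_data: bool      # home address, medical records, etc.
    framed_to_mobilize_abuse: bool  # calls to harass, shame, or target

def review(item: Item) -> str:
    """Hypothetical rubric: accuracy opens the question, it does not settle it."""
    if not item.is_accurate:
        return "remove: false or fabricated"
    # Accurate content can still be removed or limited when the harm
    # outweighs the public value.
    if item.exposes_private_data and not item.about_public_figure:
        return "remove: private exposure of a non-public person"
    if item.framed_to_mobilize_abuse:
        return "limit: accurate, but packaged as harassment"
    if item.serves_public_interest:
        return "keep: contributes to public understanding"
    return "escalate: needs human context review"

# A true post exposing a private person's address and a true report on a
# public official's corruption land in very different places.
print(review(Item(True, False, False, True, False)))  # remove
print(review(Item(True, True, True, False, False)))   # keep
```

The ordering is the point of the sketch: accuracy is checked first, but it only opens the question. The harm and public-value checks are what decide the outcome.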
Context Is Everything
A factual statement shared in a documentary has a different impact than the same statement posted with a call to harass someone.
A real image shared by a human rights group to document abuse differs from the same image reposted to glorify violence.
The surrounding context shapes the harm.
Automation struggles with this. It can verify facts in limited ways, but it cannot fully measure consequences.
That’s why human review remains essential.
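A toy sketch of that gap, under my own simplifying assumptions: a checker that only sees the text of a claim treats two identical, accurate posts the same, even though their contexts point to very different consequences.

```python
# Hypothetical illustration: a "fact checker" that only looks at the claim text.
def verify_claim(text: str) -> bool:
    # Stand-in for a real lookup against reliable records.
    known_facts = {"The incident outside the courthouse happened on 3 May."}
    return text in known_facts

claim = "The incident outside the courthouse happened on 3 May."

documentary_post = {"text": claim, "context": "human rights documentation"}
harassment_post = {"text": claim, "context": "thread targeting a survivor's identity"}

# Both posts pass the truth test; nothing in this check sees the context,
# so nothing in it can weigh who gets hurt and how.
print(verify_claim(documentary_post["text"]))  # True
print(verify_claim(harassment_post["text"]))   # True
```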
What I’ve Learned From the Inside
After years in Trust & Safety, I’ve stopped seeing moderation as a battle between truth and censorship.
It’s more nuanced than that.
The real question is not whether something is true.
It’s whether the way it’s shared creates disproportionate harm.
Platforms have a responsibility to protect users from targeted abuse, exploitation, and real-world danger.
Sometimes that means limiting content that is technically accurate but operationally harmful.
That decision is never comfortable.
But in a world where amplification is instant and global, truth without responsibility can cause real damage.
And balancing truth against responsibility is one of the most difficult parts of the job.