Silence and Noise: Why Self-Censorship Happens on Social Media and How People Share Honest Opinions
Silence and Noise: Self-censorship and Opinion Expression on Social Media
HCI Today summarized the key points
- This article summarizes a study of self-censorship on social media, where people express thoughts differently from what they truly think.
- The research team compared publicly stated opinions with private thoughts through a survey of 390 participants and interviews with 20 people.
- People tended to hold back more when they felt they would not receive support or when they believed many others were watching; even when they did speak, they often adjusted their comments to match the surrounding mood.
- The more conflict-laden the topic, such as politics and social issues, the more frequently silence and opinion adjustment appeared.
- The study suggests that this phenomenon can narrow the range of visible opinions and make it easier for extreme ideas and misinformation to spread.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article clearly shows why people on social media either say nothing at all or subtly adjust what they say. For HCI and UX practitioners, it is important to treat not just post counts or engagement metrics but also the invisible experiences users go through before speaking (the sense of burden, signals of support, and the surrounding atmosphere) as design variables. In particular, it prompts us to consider what kind of interface is needed to support both safe expression and healthy participation.
CIT's Commentary
What is especially interesting is that this study does not treat silence as mere 'inactivity'; instead, it brings silence into the decision-making process that unfolds right before someone speaks. If you look only at whether something gets posted, you can easily miss toned-down utterances or cautious participation, yet these may be the core behaviors. This perspective has major implications when translating findings into product design. For example, fact-checking can be reinterpreted not only as post-hoc enforcement but also as a confidence-support mechanism right before posting. And in environments with large community sizes, buffering mechanisms such as small-group modes or signals of support may be necessary. However, such interventions also risk feeling like excessive control even as they aim to increase expression, so transparency, reversible choices, and a safe way to back out if something goes wrong should be designed together.
Questions to Consider While Reading
- Q. What interface interventions, however minimal, could effectively reduce the anxiety and silence that occur right before posting?
- Q. How can we measure the side effects of devices like support signals or fact-checking tools that might instead suppress expression?
- Q. On platforms with different contexts, such as Korea's Naver, Kakao, and domestic communities, how do 'community size' and 'social pressure' manifest differently?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.