
Does Reducing Social Media Time Improve Mental Health?

Whether or not social media harms youth mental health remains an issue of furious debate. At the moment, the evidence from social science is weak (though often overblown), and it's hard to attribute patterns in teens' mental health to social media, since those patterns largely mirror similar ones in adult mental health.

This might suggest something broader is happening in society affecting everyone. During times of moral panic, people often reduce their critical thinking, accepting even weak evidence for a pattern they already believe in.

A recent study sought to examine the issue experimentally. What would happen if we took one group of college students and randomly assigned them to reduce their social media time, while a second control group continued using social media as normal?

According to a new study from Iowa State University, the group that reduced their social media time reported feeling less depressed and anxious. At first blush, this would seem like important data: In an experiment, reducing social media time reduced depression and anxiety. But does it actually tell us anything useful?

This isn't the only study with this basic design. Other studies have done similar things, involving either abstinence from or a reduction of social media time. Results have been somewhat mixed, but whether or not they find effects, they all share a fairly basic flaw: it's obvious to the participants how they are supposed to respond.

This issue, often called demand characteristics, is well known in research. It's why, in most good medical experiments, the control group is given a placebo or an older medication, and nobody knows which treatment a given participant is receiving—in a double-blind trial, often not even the nurses administering the medications. (Assignment is, of course, tracked by the scientists running the study, who don't interact with the participants.)

But in these social media studies, everyone knows what group they are in. Being able to guess the study’s hypotheses is generally associated with false positive results—meaning, unfortunately, that this type of study is without much value.

There are some other odd issues with this particular study. The experimental group (99 participants) is much smaller than the control group (131). An imbalance that large seems unlikely to have arisen through random assignment alone.
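To put a rough number on that intuition: assuming participants were assigned by the equivalent of a fair coin flip (an assumption on my part; the paper may have used a different allocation scheme), an exact two-sided binomial test on a 99-versus-131 split looks like this sketch:

```python
from math import comb

def two_sided_binomial_p(k: int, n: int) -> float:
    """Exact two-sided binomial test against p = 0.5.

    With a fair coin the distribution is symmetric, so the
    two-sided p-value is twice the smaller tail probability.
    """
    smaller_tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2**n
    return min(1.0, 2 * smaller_tail)

# 99 of 230 total participants landing in one arm under 50/50 assignment
pval = two_sided_binomial_p(99, 230)
print(f"two-sided p = {pval:.3f}")
```

The result comes out a bit under 0.05, which is consistent with the suspicion above: a split this lopsided is possible under simple randomization, but unlikely enough that differential dropout is worth considering.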

One possibility is that more people dropped out of the experimental group. Unfortunately, if true, that also calls the results into question. It could mean that only participants who liked reducing their social media time stayed, but those who felt their mental health worsening dropped out. Failing to include their data would, of course, cause a false positive result. But it’s hard to know since dropout data don’t appear to have been reported.

It’s also not really clear if the student participants actually reduced their time or just said they did, telling the researchers what they wanted to hear. There appears to be no actual monitoring of the participants’ social media time.

The other thing I look for in studies is something called “citation bias.” This occurs when authors cite only studies fitting their worldview and ignore studies that do not. Citation bias is often a red flag for potential researcher expectancy effects that can inflate effect sizes.

In this study, I was concerned that the authors linked social media use to mental health concerns with no qualifications whatsoever, even though many studies don't find this purported effect. This is misinformative. To be fair, when they discuss prior experimental studies, they are more honest about discrepancies in the findings, and that's great. However, when they state, "Social media use is associated with increases in anxiety, depression, loneliness, and FoMO (fear of missing out). In general, spending extensive time on social media can have negative consequences on psychological well-being," that's just not an honest appraisal of the evidence, which remains mixed and correlational.

I'm somewhat sympathetic to people trying to design experiments in this area. I admit that it's difficult to imagine a proper randomized controlled trial that wouldn't give away the study hypotheses. Unfortunately, the studies we have at present (whatever effects they find) just aren't it.

Even worse, at present the way people are looking at data fits the classic patterns of a moral panic: cherry-picking suicide data to fit an ecological fallacy, ignoring null results, hyping poor-quality studies, and a reverse burden of proof that disincentivizes critical thinking. There are relatively few smart quality control checks on “the narrative.”

Fundamentally, we probably need to step back and consider suicide across all age groups, which, according to CDC data, show a similar pattern—nothing unique to teens.

But that will take people switching out of a confirmatory mode into a proper scientific mode and, as we know, that’s difficult to get people to do during a moral panic.
