Surgeon General Warns of Increasing 'Harm' Linked to Social Media

— Vivek Murthy Advocates for Greater Accountability and Deeper Research on Social Media's Impact on Health

In an exclusive video interview, Jeremy Faust, MD, Editor-in-Chief of MedPage Today, engages in conversation with U.S. Surgeon General Vivek Murthy, MD, MBA, to discuss his recent advisory on social media's effects on youth mental health.

The following is a transcript from their discussion:

Faust: Hi, I’m Jeremy Faust, Editor-in-Chief at MedPage Today, and I’m thrilled to be sitting today with Dr. Vivek Murthy.

Dr. Murthy has served two terms as Surgeon General of the United States, as the 19th and 21st person to hold that office. He has been at the forefront of issuing several key public health advisories. Dr. Murthy, thanks for being with us today.

Murthy: It’s a pleasure to be here, Jeremy.

Faust: Lately, there’s been growing attention on the role of social media in our lives, particularly regarding your recent advisory on this issue. You’ve approached the topic with a lot of nuance. The key takeaway I saw is that, like many things, it’s about the dose and context – that certain uses of social media are more damaging than others. On page five of your report, there’s a note stating both positive and negative outcomes of social media use for adolescents.

Before delving into the negative, let's start with the positives. What do you believe are the beneficial aspects of social media for young individuals?

Murthy: You’re right, Jeremy. There are certainly benefits that some young people derive from social media. Among them are the opportunities for greater self-expression and creativity. They can connect with communities they otherwise might not have access to – groups that share similar interests, challenges, or life experiences. It’s also used to maintain bonds with friends, such as those from school or college, long after they’ve moved away. These are undoubtedly positive developments.

However, like with any product – and this is something our field understands well – we need to weigh the benefits against the risks. One thing that concerns me is how binary the conversation around social media has become. People frequently ask, “Is it good or bad?”

In medicine, we know it’s never that simple. Even a treatment with benefits for many might be risky enough for some that it’s no longer recommended. Take Vioxx, for instance. It helped many with chronic pain, but the rising evidence of cardiovascular risks led to reevaluating its place in therapy. It became apparent that the risks outweighed the benefits for the population at large.

In the case of social media, while some benefits are evident, the growing body of research documenting its harms is alarming. And what struck me is how frequently I’ve been hearing concerns not just from parents, but from the youth themselves. That motivated my work on a comprehensive Surgeon General’s Advisory in 2023, focusing on the mental health impact of social media on young people.

Faust: The report is filled with data and research insights, but what happens now? For instance, with tobacco, we got a warning label, which led to significant positive change. I’m curious about what you envision. In a few years’ time, will simply clicking "I agree" on terms and conditions on social media platforms still cut it, or will we see bigger efforts at accountability?

Murthy: I’m glad you asked. This isn’t a straightforward problem. With tobacco, the warning label, for instance, helped raise awareness, but it was just one part of a broader strategy.

Similarly, social media also requires a multi-faceted approach. I’ve previously called for safety labels to ensure parents know two things: firstly, that we lack sufficient evidence to confidently say that social media is safe for children, and secondly, that there is strong evidence linking social media use with increased mental health issues among adolescents.

Beyond that, I’ve called for comprehensive regulations designed to make social media safer. These could include adjustments to how platforms are designed – like limiting features that drive excessive use, such as infinite scroll mechanisms and autoplay functions. Importantly, I’ve also pushed for more transparency from these platforms. They need to release the data they collect on the mental health effects of their services.

At the heart of this is something deeply concerning to me, and not just as a doctor, but as a parent: researchers continually find themselves unable to access key information about the mental health impacts of social media, because companies choose not to share it. As a father, it worries me that details about the platforms my children use are being held back by the companies that produce them.

So, there are several measures we need to consider. And about the warning label itself: we’ve seen with tobacco and alcohol that determining the most effective label requires a rigorous scientific process. Each aspect – font size, placement, visual aids – needs to be tested thoroughly. A similar approach should inform how social media platforms present digital warnings to users.

Faust: I see what you mean. But how do you plan to enforce that transparency and regulation? For instance, one parent might find certain content on social media utterly objectionable, based on personal or religious values, while another wouldn’t care. How can we navigate that gray area, especially when it involves more controversial material?

Murthy: You’re touching on a real challenge here, Jeremy – the subjective nature of content. What’s acceptable or harmful can vary significantly between individuals.

But when it comes to children, I believe we can reach a consensus on specific categories of harmful content. For example, I think most agree that minors shouldn’t be exposed to pornography. We have clear content rating systems for movies and TV that serve as a guide. Yet currently, children can easily stumble upon explicit sexual material through their social feeds, which is clearly unacceptable.

Another category involves content promoting self-harm or suicide. Although it may seem shocking, many parents have shared heartbreaking stories of their children being shown algorithm-suggested videos providing instructions on harming themselves.

So, while there is a lot of space for debate, there are clear boundaries we should all be able to agree on. And, importantly, these platforms have a responsibility. They’ve built these algorithms, and I believe that those who create products should be accountable for ensuring their safety.

Imagine, Jeremy, if you and I built a hospital where patients continuously faced unnecessary risks like infections or accidents. No one would excuse us just because we were also providing essential care. We’d be held accountable to ensure the environment’s safety – and social media companies should be held to a similar standard.

Faust: I completely agree. The analogy with hospital safety standards really hits home. But in terms of research, what kind of studies are necessary to determine safety, in your view? We know how certain studies, like those assessing cancer treatments, look. But for social media, what does a rigorous safety assessment look like?

Murthy: That’s an excellent question. To begin answering it, we need more investment in independent research around these issues. While companies are performing their own internal studies, what we need is a broader, more transparent body of evidence.

There are several avenues to explore, such as observing the impacts of limiting social media use or assessing the influence of new platform features designed to reduce harm. Additionally, controlled studies in which participants reduce or discontinue social media use could track both subjective well-being and measurable clinical mental health outcomes.

We’ve heard many anecdotal stories from young people. Numerous college students, for instance, report feeling significantly better after stepping away from social platforms. Initial withdrawal symptoms are common, but after a few days, many report improved mental health.

But these effects need to be validated on a broader scale. And ultimately, it’s about giving parents and youth the tools to use social media in responsible, balanced ways.

Faust: It’s insightful that you raise the importance of nuance in this. For instance, an LGBTQ young person may find life-saving community and support online – and surely, we wouldn’t want parental consent restricting that, right? What’s your view on this?

Murthy: Absolutely, Jeremy. I’ve spoken with many LGBTQ youth whose online connections served as a vital lifeline when they couldn’t find that kind of support in their physical communities. It can truly be life-changing.

On the flip side, studies have shown that LGBTQ youth are also more likely to experience cyberbullying and harassment. So, while social media can offer a critical space for connection, it can also expose them to harmful experiences.

The tragic reality is that currently, these youth often face a difficult choice: accept the potential benefits of online connection but at the cost of being vulnerable to significant harms. That’s an unfair dilemma. A safer online environment should ensure these young people can continue finding support while minimizing the harms they’re exposed to.

Ultimately, the goal isn’t to eliminate social media but to build a balanced, safer space where those benefiting can do so without unnecessary risk.

Disclosure: Faust disclosed being a paid writer for Bulletin, a newsletter platform owned by Meta, between 2021 and 2023.
