When Facebook’s recent promise to help users get accurate health information hit my newsfeed, the words “medical misinformation” caught my eye (see Facebook to Limit Posts Promoting Medical Misinformation). While misinformation on Facebook has gotten much attention in the realm of politics, there has been far less coverage of medical misinformation on the internet in general, and on Facebook in particular.
In general, it’s impossible to know how much of the information on the internet qualifies as “medical information.” What we do know is that roughly 7% of Google’s daily searches are health-related, amounting to one billion health questions every day (see Google Receives More Than 1 Billion Health Questions Every Day), and that 40% of consumers use social media for health information (see 24 Outstanding Statistics & Figures on How Social Media has Impacted the Health Care Industry). Yet in a recent study reported in The Guardian, seven of the ten most-shared health articles contained false information (see Health Articles Shared on Facebook Include False Information, Researchers Say). The risk that consumers are “believing everything they read” is real.
No one knows better than those of us in the health and human service field what can happen when these stories go viral, especially when consumers or their families so desperately want answers. One troubling example has been autism “cures,” as reported by NBC News earlier this year: ”Some parents credit turpentine or their children’s own urine as the secret miracle drug for reversing autism. One of the most sought-after chemicals is chlorine dioxide — a compound that the Food and Drug Administration warns amounts to industrial bleach, and doctors say can cause permanent harm” (see Parents are Poisoning Their Children with Bleach to “Cure” Autism).
How widespread is this kind of fake medical news? Haider Warraich, a fellow in heart failure and transplantation at Duke University Medical Center, noted late last year, “While misinformation has been the object of great attention in politics, medical misinformation might have an even greater body count” (see ‘Fake Medical News’ Has A ‘Body Count,’ One Doctor Warns. Here’s How To Fight Back). As a result, internet platforms are making some efforts to address the issue. Last year, Google made a concerted effort to curb medical misinformation by embargoing addiction treatment search advertising in the U.S. after news articles reported fraudulent patient brokering and deceptive marketing practices, a move it expanded worldwide a few months later (see Google Suspends Addiction Treatment Ads – How Do You Compete On An Uneven Playing Field?). Now, Facebook is proposing a fix on its own platform. Its approach tweaks the typical ranking process to watch for posts with “exaggerated or sensational health claims,” as well as posts trying to sell products or services based on such claims. Once the system identifies these potentially false or misleading posts, they appear lower in the user’s newsfeed than they otherwise would.
In a recent blog post (see Addressing Sensational Health Claims), Facebook product manager Travis Yeh explained the move: ”In order to help people get accurate health information and the support they need, it’s imperative that we minimize health content that is sensational or misleading.”
Mr. Yeh goes on to explain that Facebook has made two updates to how posts are “ranked” on the site. Ranking is the result of Facebook’s algorithm, a programming “recipe” that aggregates all available posts that could display in a user’s news feed and orders them by how likely that user is to have a positive reaction.
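To make the mechanism concrete, here is a minimal sketch of what ranking with a down-ranking penalty could look like. All names and penalty values here are hypothetical illustrations; Facebook has not published its actual scoring logic, so this only shows the general idea of demoting, rather than removing, flagged posts.

```python
# Hypothetical illustration of "down-ranking" flagged health content.
# The class names, fields, and penalty values are invented for this
# sketch; they are NOT Facebook's actual implementation.
from dataclasses import dataclass

SENSATIONAL_PENALTY = 0.2    # assumed multiplier for sensational health claims
PRODUCT_CLAIM_PENALTY = 0.3  # assumed multiplier for selling on such claims

@dataclass
class Post:
    post_id: str
    base_score: float               # predicted likelihood of a positive reaction
    sensational_health_claim: bool  # flagged by a (hypothetical) classifier
    sells_on_health_claim: bool     # promotes a product via such a claim

def rank_feed(posts):
    """Order posts by score, demoting (not removing) flagged health content."""
    def adjusted(post):
        score = post.base_score
        if post.sensational_health_claim:
            score *= SENSATIONAL_PENALTY
        if post.sells_on_health_claim:
            score *= PRODUCT_CLAIM_PENALTY
        return score
    return sorted(posts, key=adjusted, reverse=True)

feed = rank_feed([
    Post("cat-video", 0.8, False, False),
    Post("miracle-cure", 0.9, True, True),  # highest engagement, but flagged
    Post("local-news", 0.5, False, False),
])
print([p.post_id for p in feed])  # → ['cat-video', 'local-news', 'miracle-cure']
```

Note the design choice this models: the flagged post still appears in the feed, just lower down, which is exactly why the approach has limits when users seek content out directly.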
But is this enough? Sure, these “down-rankings” mean users may miss flagged posts as they fall down the priority list, but what about when a Facebook user is a member of a group that shares misinformation? The ranking changes have no effect there. And Facebook groups are prevalent; recently, groups have also come under fire for appearing to be confidential while sharing information with third parties (see Facebook Makes Changes to Health Support Groups to Better Protect Users’ Privacy). I think these groups of like-minded information seekers and sharers are actually the bigger problem, especially when a user goes directly to a group rather than browsing the news feed Facebook has served up. After all, who do you trust more than your “friends”?
This issue is part of a broader challenge of how people get “information” in the age of the internet. Google and Facebook executives cling tenuously to the assertion that their platforms are just that: platforms, with no responsibility for the information that appears on them. That is why these platforms’ efforts to curb health care misinformation will always be limited.
Both Google and Facebook are doing other interesting work in the health and human service space. For example, we recently covered Facebook’s use of artificial intelligence (AI) tools to proactively recognize and intervene when suicidal posts are detected (see Facebook Uses AI Tools for Suicide Prevention). And Google’s AI network is 94% accurate at predicting hospital inpatient death risks (see Google AI Predicts Hospital Inpatient Death Risks With 94% Accuracy). Tech-enabled automation, driven in large part by the need to deliver on value-based reimbursements, is coming to pharmaceutical companies; and a series of HIPAA-compliant Amazon Alexa Skills that allow users to consult the digital assistant with health-focused questions are now available (see What Should Keep You Up At Night?). But for all of these positive contributions, I think their impact will be dwarfed by the “medical misinformation problem” unless more effective action is taken.
And for more on using and understanding social media, check out these resources from the OPEN MINDS Industry Library:
- Bot, Anyone? The Question—What Services Can You Automate?
- Getting, & Keeping, Consumers Engaged With Technology
- Oh, Those Consumer Reviews
- Don’t Let The Big Disruptors Out Of Your Sight
- 40%+ Of Consumers Won’t Know You Without A Social Media Plan
- Social Media Listening As Consumer Engagement Strategy
- When Consumers Find Your Organization Online, Will They Pick You?
- What Are Your Online Professional Standards?
- Social Media – The Dog With The Bite Or Best Friend?
- Best Practice Online Marketing On A Budget!
Looking for more on the exciting world of health care tech? Then don’t forget to mark your calendars for The 2019 OPEN MINDS Technology & Informatics Institute on October 28-30, 2019, at the Loews Philadelphia Hotel, in Philadelphia, Pennsylvania.