I was only a little surprised to read yesterday, during my commute, about fake news. About a month ago I saw an online ad titled "Bigger Than Snowden." Someone actually paid money for an online ad campaign purporting that electromagnetic weapons are being used by the government to covertly target, monitor, and torture victims. Seriously. The link above goes to Google search results, where you can see how this nonsense has been smeared around the internet as if it were true. It's not.
Did fake news catapult It (I'm not saying Its name) into the White House? The NYTimes seems to think so. And you know what, I think so too. As a User Experience designer, I know how to pull the exact same tricks that the creators of fake news pulled. Any of us web professionals could easily fool a lot of people with fake stories.
Then I was a little surprised to read today about a group of college students who already solved the fake news problem in a hackathon.
It took only 36 hours for these students to solve Facebook's fake-news problem
Good for them.
For months leading up to the election I was flagging content on Facebook: content that was offensive but not technically abusive by Facebook's terms. There were a lot of mean, belittling, inaccurate posts designed to stoke people's anger (or fear) that are perfectly OK to post on Facebook. I saw people being really mean with an image composed of two photos, one of Michelle Obama and the other of Beyonce. This wasn't fake news. It's perfectly fine to put this on Facebook. A lot of meanness and vitriol was cultivated about the Obamas on social media, and I didn't see the point of cultivating all that anger, until November 9th.
Facebook Needs to Make an Effort
Frankly, it diminishes the User Experience of Facebook to visit the site and see a litany of angry memes. I like that Facebook lets me unfollow friends who are bothering me, but the flagging could be a lot more robust.
- Bring the humans back. Just re-hire human moderators. You can afford it.
- There should be an "inappropriate" flag that aligns with standard etiquette. Maybe "inappropriate" content should persist, but with the overlay of a scarlet letter, so people can continue to share it knowing that they are sharing something deemed rude.
- There should be a "false information" flag.
- I would love it if the originators of content were revealed. There should be origination information about a post, including the publishing website, country of origin, and other metadata.
- Display suspected fake content in front of people NOT tuned to like it. Let people vet news posts they normally would not see because those posts live outside their personalized experience, and then allow those users to apply the "false information" flag.
- Remove fake news content from Facebook. The only way to stop fake news from circulating in the first place is to discourage it entirely by making the effort of posting it too tedious.
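To make the flagging ideas above concrete, here is a minimal sketch of how the "false information" flag, the scarlet-letter overlay, and eventual removal could fit together. All names, classes, and thresholds are hypothetical illustrations, not anything Facebook actually exposes:

```python
from collections import defaultdict

class FlagTracker:
    """Hypothetical sketch: count 'inappropriate' and 'false_information'
    flags per post, and decide whether a post stays visible, gets a
    warning overlay (the 'scarlet letter'), or is removed entirely.
    The threshold values are arbitrary, chosen only for illustration."""

    LABEL_THRESHOLD = 10    # flags before the post gets a warning overlay
    REMOVE_THRESHOLD = 50   # flags before the post is taken down

    def __init__(self):
        # post_id -> counts for each flag kind
        self.flags = defaultdict(lambda: {"inappropriate": 0, "false_information": 0})

    def flag(self, post_id, kind):
        # Record one user flag of the given kind against a post.
        self.flags[post_id][kind] += 1

    def status(self, post_id):
        # Only 'false_information' flags drive labeling/removal here;
        # 'inappropriate' flags could drive a separate overlay.
        false_count = self.flags[post_id]["false_information"]
        if false_count >= self.REMOVE_THRESHOLD:
            return "removed"
        if false_count >= self.LABEL_THRESHOLD:
            return "labeled"
        return "visible"
```

The point of the sketch is the middle state: a post that crosses the first threshold keeps circulating, but with a visible mark, so sharers know they are passing along something flagged as false.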
Google Needs to Make an Effort Too
The problem I am seeing in Google Search is pretty simple but still bad. Some of the more outrageous lies are spread via Google, sort of like its algorithms have run amok and nobody is paying attention to what they're doing. Here's what I saw, and I brushed it off at the time: I was searching for a popular news story, and the top result was something completely different. The #1 result was a scandalous political YouTube post completely unrelated to my search. Google started including comments on YouTube videos in its search results, which enabled people to game Google's search and get a piece of content, like a YouTube video, to appear for an unrelated search term.
It's against Google's terms of use to trick web pages into ranking higher in search, but it still works. Google is going to have a serious problem of scale on its hands if it allows this ecosystem of misinformation to persist. I remember the ad farms that used to pollute my search results in 2006, before Google tamped down on websites that were really only collections of ads. My search results are becoming polluted now, and again, this is bad User Experience and makes me want to use Google less.
I know Google has the capability to apply an extra layer of information to its search results to make it obviously clear that the content on a website is deemed fake. After all, here is the Denver Guardian, a fake news site. The way this site is presented in Google's search results makes it appear completely legitimate. This screenshot was taken today, November 16th, a day after the NY Times article called it out as a fake site. Here is Google giving the impression that this "FBI agent..." story is legit.
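That "extra layer of information" could be as simple as checking a result's domain against a list of known fabrication sites and attaching a visible warning. This is purely an illustrative sketch: the domain list, function, and warning text are my assumptions, not Google's actual system (the Denver Guardian is the example named above):

```python
# Illustrative list of domains identified as publishing fabricated
# stories; in practice this would come from vetted fact-checking data.
KNOWN_FAKE_DOMAINS = {"denverguardian.com"}

def annotate_result(result):
    """Return a copy of a search result dict, adding a 'warning' field
    when its domain is on the known-fake list, so the result is no
    longer presented as if it were completely legitimate."""
    # Extract the domain from a URL like "http://host/path".
    domain = result["url"].split("/")[2]
    if domain in KNOWN_FAKE_DOMAINS:
        return {**result,
                "warning": "This site has been identified as a source of fabricated news."}
    return result
```

A labeled result still appears, so Google is not deciding what you may read, only refusing to lend a fake site the visual legitimacy of an unmarked listing.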
If Google is "Organizing the World's Information," I would have to give it a D grade. Does Google expect newspapers to fight back with stories? If these tech companies don't fix these loopholes of misinformation, then rational, normal content creators will suffer and we will all be poorer as a result.