Thursday 17 November 2016

Google and Facebook crack down on fake news sites

Channel NewsAsia, 16 Nov 2016

WASHINGTON: Google and Facebook moved on Tuesday (Nov 15) to cut off advertising revenue to fake news sites, after a wave of criticism over the role misinformation played in the US presidential election.

The move by the two tech giants aims to choke off funds to an industry fuelled by bogus, often sensational "news" circulating online and seen as a potential influence on public opinion.



A Google statement to AFP said new policies "will start prohibiting Google ads from being placed on misrepresentative content, just as we disallow misrepresentation in our ads policies."

The shift will mean Google restricts ads "on pages that misrepresent, misstate, or conceal information about the publisher, the publisher's content, or the primary purpose of the web property," the statement said.

Google chief executive Sundar Pichai said the company receives billions of queries daily and admitted errors had been made.

"There have been a couple of incidences where it has been pointed out and we didn't get it right. And so it is a learning moment for us and we will definitely work to fix it," he said in a BBC interview.

Pichai said there should be "no situation where fake news gets distributed" and committed to making improvements.

"I don't think we should debate it as much as work hard to make sure we drive news to its more trusted sources, have more fact checking and make our algorithms work better, absolutely," he said.

On Monday, internet users searching on Google were delivered a bogus report saying Republican Donald Trump had won the popular vote in addition to the electoral college.

The numbers on a blog called 70News - contradicting official results tallied so far by states - said Trump received 62.9 million votes to 62.2 million for Hillary Clinton.

The blog urged supporters of a petition calling for electoral college members to switch their votes to reflect the popular will to abandon their effort.

EXPLICIT BAN

Facebook is implementing a similar policy, a spokesman said.

"In accordance with the Audience Network Policy, we do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news," a Facebook statement said.

"While implied, we have updated the policy to explicitly clarify that this applies to fake news."

One report said Facebook had developed a tool to weed out fake news but did not deploy it before the US election, fearing a backlash from conservatives after a controversy over its handling of "trending topics." Facebook denied the report.



Some critics have gone so far as to blame Facebook for enabling Trump's victory, saying it did not do enough to curb bogus news that appeared to help rally his supporters.

Stories that went viral in the run-up to the vote included such headlines as "Hillary Clinton Calling for Civil War If Trump Is Elected" and "Pope Francis Shocks World, Endorses Donald Trump for President."

The prevalence of fake news has prompted calls for Facebook to consider itself a "media" company rather than a neutral platform, a move which would require it to make editorial judgments on articles.

Facebook executives have repeatedly rejected this idea, but since the election have pledged to work harder to filter out hoaxes and misinformation.

In a weekend post, Facebook chief Mark Zuckerberg dismissed the notion that fake news helped sway the election, and said that "more than 99 percent of what people see is authentic."

Still, he said that "we don't want any hoaxes on Facebook" and pledged to do more to curb fake news without censoring content. "Identifying the 'truth' is complicated," he said.

"While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted."

Ken Paulson, a former USA Today editor who is dean of the media school at Middle Tennessee State University, said Facebook and other platforms should not be required to filter out fake news, but that doing so would be good for business.

"My hunch is that most of Facebook's loyal customers would welcome a cleaning up of the town square," he said.

- AFP/de

Facebook chief lists ways of dealing with fake news
Warning labels, third-party verification among methods named by Zuckerberg
The Straits Times, 21 Nov 2016

SAN FRANCISCO • A week after trying to reassure the public that it was "extremely unlikely" that hoaxes had changed the outcome of the United States presidential election, Facebook founder Mark Zuckerberg outlined several ways the firm might try to stop the spread of fake news on the platform in the future.

"We've been working on this problem for a long time and we take this responsibility seriously. We've made significant progress, but there is more work to be done," Mr Zuckerberg wrote in a post on his own Facebook page.

He then named seven approaches the company was considering to address the issue, including warning labels on false stories, easier user reporting methods and the integration of third-party verification.

"The problems here are complex, both technically and philosophically," he cautioned, repeating Facebook's long-standing aversion to becoming the "arbiters of truth".

Instead, it prefers to rely on third parties and users to make those distinctions. "We need to be careful not to discourage sharing of opinions or mistakenly restricting accurate content," he said.

While none of the listed ideas are particularly specific, his post does provide more details on the firm's thinking about the problem.



Facebook's concern with fake news predates the 2016 elections. Hoaxes have long plagued the site's algorithms, which encourage the creation of content that its users would like to share, true or not.

Among the fake news reports that circulated ahead of the election were those erroneously alleging Pope Francis had endorsed Republican candidate Donald Trump and that a US federal agent who had been investigating Democratic candidate Hillary Clinton was found dead.

However, fake news - and specifically, Facebook's role in spreading it - became a story of wide interest just after the elections, when critics accused the platform of influencing voters by allowing political hoaxes to regularly go viral, particularly those favourable to Mr Trump, now the President-elect.



Mr Zuckerberg has strongly denied that this was true, saying that fake news "surely had no impact" on the outcome.

He did not contradict this denial in his post, but the post reflects Facebook's growing acknowledgment that it will have to do a lot more about the plague of hoaxes and fake stories on the platform.

Facebook had announced it was going to crack down on fake news sites that use its ad services to profit off hoaxes.

One idea Mr Zuckerberg presented in his post indicates that the firm wants to go further in "disrupting fake news economics". It is also considering more policies, along with stronger "ad farm detection". Another idea promises stronger detection of misleading content.

News Feed can already make some guesses about whether a post is authentic or not based on the user behaviour around it.

Mr Zuckerberg said in his post that Facebook currently watches for things like "people sharing links to myth-busting sites such as Snopes" to decide if a post might be misleading or false.

He did not go into specifics about what more Facebook might be looking to do on this front.
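Facebook has not published how this signal works, so the following is only a speculative sketch of the kind of heuristic the post hints at: counting how often debunking sites such as Snopes appear among the links people post around a story, and flagging the story for review when that fraction crosses a threshold. The domain list, function names and threshold below are illustrative assumptions, not Facebook's actual system.

```python
# Speculative sketch only: the signal described above, in which a post that
# attracts many links to myth-busting sites may itself be misleading.
# The domain list, threshold and data shapes are assumptions, not Facebook's system.

DEBUNKING_DOMAINS = {"snopes.com", "factcheck.org", "politifact.com"}

def debunk_link_ratio(reaction_urls):
    """Fraction of URLs posted in reaction to a story that point to debunking sites."""
    if not reaction_urls:
        return 0.0
    hits = sum(1 for url in reaction_urls
               if any(domain in url for domain in DEBUNKING_DOMAINS))
    return hits / len(reaction_urls)

def flag_for_review(reaction_urls, threshold=0.2):
    """Mark a post as possibly misleading when the debunk-link signal is strong."""
    return debunk_link_ratio(reaction_urls) >= threshold

# Example: half the reaction links point to Snopes, so the post is flagged.
urls = ["https://www.snopes.com/fact-check/some-claim", "https://example.com/story"]
print(flag_for_review(urls))  # True
```

In practice such a signal would presumably be only one input among many, combined with the user-reporting and third-party verification ideas listed elsewhere in the post.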

Facebook also indicated it is trying to find ways to rely more on users and third parties to help flag and classify fake stories.

Mr Zuckerberg listed "stronger reporting" methods for users, and listening more to "third party verification" services such as fact-checking sites.

He also said Facebook was considering how to use third-party and user reports of fake news as a source for displaying warnings on fake or misleading content.

The firm would also work with third-party verification organisations and journalists on fact-checking efforts.

WASHINGTON POST, REUTERS

Facebook not sole culprit of filter bubbles - users at fault too

By Philip Seargeant and Caroline Tagg, Published The Sunday Times, 11 Dec 2016

Following the shock results of Brexit and the Trump victory, a lot of attention has focused on the role that Facebook might have played in creating online political ghettos in which false news can easily spread.

Facebook now has serious political influence, thanks to its development from a social networking tool into a primary source of news and opinions. And for many, the way it manages this influence is in need of greater scrutiny. But to put the blame solely on the company is to overlook how people use the site and how they themselves create a filter-bubble effect through their actions.

Much of this debate has focused on the design of Facebook itself. The site's personalisation algorithm, which is programmed to create a positive user experience, feeds people what they want. This creates what the chief executive of viral content site Upworthy, Mr Eli Pariser, calls "filter bubbles", which supposedly shield users from views they disagree with.

People are increasingly turning to Facebook for their news - 44 per cent of American adults now report getting news from the site - and fake news is not editorially weeded out. This means that misinformation can spread easily and quickly, hampering the chance people have for making informed decisions.

Over the past few weeks, there have been frequent calls for Facebook to address this issue. United States President Barack Obama himself has weighed in on the issue, warning of the perils that rampant misinformation can have for the democratic process.

But much of the debate around this has had an element of technological determinism to it, suggesting that users of Facebook are at the mercy of the algorithm. In fact, our research shows that the actions of users themselves are still a very important element in the way that Facebook gets used.



Our research has been looking specifically at how people's actions create the context of the space in which they communicate. Just as important as the algorithm is how people use the site and shape it around their own communication. We have found that most users have an overwhelming view that Facebook is not ideally suited to political debate and that posts and interactions should be kept trivial and light-hearted.

This is not to say that people do not express political opinions on Facebook. But for many people, there is a reluctance to engage in discussion and a sense that anything that might be contentious is better handled in face-to-face conversation. People report that they fear the online context will lead to misunderstandings because written communication lacks some of the non-linguistic cues of spoken communication, such as tone of voice and facial expressions.

There is strong evidence in our research that people are actually exposed to a great deal of diversity through Facebook. This is because their network includes people from all parts of their life, a finding that echoes other research. In this respect, the algorithm does not have a marked influence on the creation of filter bubbles. But because people often want to avoid conflict, they report ignoring or blocking posts, or even "unfriending" people, when confronted with views which they strongly disagree with.

They also report taking care over what they say so as not to antagonise people - such as family members or work colleagues - whose views differ from theirs but whose friendship they wish to maintain. And finally, they talk of making a particular effort to present a positive persona on social media, which again stops them from engaging in debate that might lead to argument.

NOT SO EASY TO FIX

The idea that algorithms are responsible for filter bubbles suggests the problem should be easy to fix (by getting rid of the algorithms), which makes it an appealing explanation. But this perspective ignores the part played by users themselves, who effectively create their own filter bubbles by withdrawing from political discussions and hiding opinions they disagree with.

This is not done with the intention of sifting out diversity but is instead because of a complex mix of factors. These include the perceived purpose of Facebook, how users want to present themselves in an effectively public form, and how responsible they feel for the diverse ties that make up their online network.

The fact that manipulation by the algorithm is not the only issue here means that other solutions - for example, raising people's awareness of the possible consequences that their online actions have - can help encourage debate. We have to recognise that the impact of technology comes not just from the innovations themselves but also from how we use them, and that solutions have to come from us as well.

Philip Seargeant is senior lecturer in applied linguistics and Caroline Tagg is lecturer in applied linguistics and English language at The Open University. This article first appeared in theconversation.com, a website of analysis from academics and researchers.

Fake news: Don't make the cure worse than the disease

By Margaret Sullivan, Published The Straits Times, 17 Dec 2016

When a guy with an assault rifle walks into a pizza joint to "self-investigate" the made-up conspiracy theory he found on the Internet about a non-existent child-prostitution ring, there is no doubt we have got a problem.

And regular folks are reasonably alarmed.

A new Pew Research Centre study finds that two in three United States adults say that fabricated news stories cause "a great deal of confusion about the basic facts of current issues and events". This sense is shared widely across incomes, education levels, political affiliations and most other demographic characteristics, according to the study.

Pope Francis agreed, memorably comparing the consumption of fake news to the eating of excrement. (A much-shared fake story said he had endorsed Mr Donald Trump for president.) President Barack Obama has chimed in on the dangers, too: "When there's so much active misinformation and it's packaged very well," he said, it poisons political discourse.

Facebook, initially reluctant to step into the fray, announced on Thursday that it would take some first steps. "We've focused our efforts on the worst of the worst," wrote Mr Adam Mosseri, a Facebook vice-president.

Those efforts include testing ways for users to report what they suspect is fake news; working with the Poynter Institute's International Fact-Checking Network to provide users with verified information on "disputed" stories; and reducing the financial incentives to spammers.

The idea is to slow the spread of fake news without turning Facebook into a worldwide censor.

It is a promising start, given Facebook's outsize role as a purveyor of fake news to its nearly 2 billion users.


And it certainly beats one of the ideas that surfaced in the Pew survey - that the government or politicians should act to stop the spread of fake news. (Asked who should tackle the problem, respondents gave about equal weight to the government, tech companies such as Facebook and Google, and the public.)

Government involvement is a seriously bad idea. It could put the question of what constitutes real news and what constitutes fake news in the hands of those who may be most affected by it.

And given the ascendancy of Mr Trump, who traffics in falsehoods on a regular basis - and has been clear about wanting to limit long-established press rights - it is an even worse notion.

"We need to be very rigorous in defining the problem and thinking through the implications," said Mr Jameel Jaffer, director of the Knight First Amendment Institute at Columbia University.

For one thing, he says, "there's a very narrow category of content that everybody would agree upon".

The term "fake news" is fuzzy. It can refer to a multitude of problems, including disinformation, propaganda, conspiracy-mongering or what Mr Jaffer calls "very biased takes on public affairs".

"I don't think we want the government - or, for that matter Facebook - to be the arbiter of what's true and what's false," Mr Jaffer told me.

So what else can be done?

Mr Eli Pariser, founder of the viral-news site Upworthy, has set up an online clearinghouse for potential solutions. One of these: verified news-media pages. A news organisation would have to apply to be verified as a credible news source, after which its stories would be published with a "verified" mark, similar to Twitter's check mark.

Another is adding a "fake news" flag to questionable articles. This could be user-generated or crowdsourced; it would mean that readers would at least see a warning box before they click through, thus potentially slowing the spread.

As the Guardian noted in a recent survey, this could be vulnerable to gaming of the system: "Users could spam real articles with fake tags." All of these ideas are open to claims of bias.
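One hedged way to picture how a crowdsourced flag might resist the spam-tagging problem the Guardian describes is to show the warning box only when enough independent reports arrive, weighted by how often each reporter's past flags were later confirmed. The sketch below is a hypothetical illustration; the class names, weights and thresholds are invented for this example and do not describe any platform's real design.

```python
# Hypothetical illustration of a spam-resistant crowdsourced "fake news" flag:
# a warning is shown only when reports are numerous enough and come from users
# whose earlier flags were usually confirmed. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Report:
    user_id: str
    reporter_accuracy: float  # 0.0-1.0: share of this user's past flags later confirmed

def should_show_warning(reports, min_reports=5, min_weighted_score=3.0):
    """Show the warning box only for sufficiently many, sufficiently credible reports."""
    if len(reports) < min_reports:
        return False
    weighted_score = sum(r.reporter_accuracy for r in reports)
    return weighted_score >= min_weighted_score

# A burst of flags from users whose past reports were rarely confirmed does not
# trigger the warning, while a smaller number of credible reports does.
spam_wave = [Report(f"u{i}", 0.1) for i in range(10)]
credible = [Report(f"v{i}", 0.8) for i in range(5)]
print(should_show_warning(spam_wave))  # False
print(should_show_warning(credible))   # True
```

Weighting by past accuracy is only one possible mitigation, and it still leaves open the bias concerns the column goes on to raise.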

In a world increasingly plagued by social-media filter bubbles and partisan echo chambers, it is tough to get agreement even on the colour of the sky - much less the role of Russian cyber-intrusion into the American presidential election.

Ms Amy Mitchell, Pew's director of journalism research, told me that the new survey reinforces earlier research findings: "Americans have a hard time agreeing on the facts." But the Pew survey - mostly done before the gunman walked into the aforementioned Washington, DC, pizza shop, Comet Ping Pong - makes clear that people find the proliferation of fake news confusing, and want action.

Facebook and other tech giants need to keep moving on this, while being ever-mindful of legitimate free-speech concerns. That is a very tricky balance, with hazards everywhere.

The answers do not lie in government oversight, which can quickly turn to censorship.

Perhaps, more importantly, we all must get smarter about what we are reading and viewing.

Schools should be redoubling their efforts to teach news literacy, civics and history. News literacy organisations deserve more support than ever. Fact-checking, and good judgment informed by radical scepticism, matter most. And yes, a slower trigger finger on the share buttons will help as well.


Truth may indeed be hard to pin down. But facts do exist - and underground tunnels at Comet Ping Pong do not.

WASHINGTON POST

New Facebook tools to expose fake news

Users and third-party fact checkers can flag stories that have dubious content
The Straits Times, 17 Dec 2016

WASHINGTON • Facebook Inc is changing its powerful news feed in an effort to stamp out fake stories following a firestorm around the social network's role in spreading false information.

The new features, rolled out to select users in the United States on Thursday, add options for readers and third-party fact-checkers to flag articles, tweak Facebook's algorithm, and place more restrictions on advertising. A month ago, chief executive officer Mark Zuckerberg said these changes were coming, responding to extensive criticism in the wake of the US presidential election.

The issue has grown more heated since. A Pew Research Centre survey released on Thursday revealed that almost one-quarter of Americans believed they had shared fake news, and a greater percentage were concerned about its consequences.

Still, Facebook framed its moves carefully, showing its aversion to being seen as taking an editorial or political stance. "Fake news means different things to different people," said Mr Adam Mosseri, Facebook's vice-president of product management. "What we're focused on is the worst of the worst. We're not looking to get into the grey area of opinion." 




Two of the incoming changes are very visible. Facebook users will be able to flag content on the site as a "fake news story". Articles deemed false by Facebook's partner, the Poynter Institute's International Fact-Checking Network, will carry a new tag: "Disputed by 3rd Party Fact-Checkers." Publishers behind these articles will no longer be able to promote them as paid Facebook advertisements.

The social network will also be working with fact-checking organisations PolitiFact, Snopes and FactCheck.org, as well as ABC News and the Associated Press, to identify articles as fake. But Facebook is not scrubbing these articles from its site altogether. "If something is being disputed, we're going to let you know," Mr Mosseri said. "But you can still share it because we believe in giving people a voice."
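Read together, the two paragraphs above describe a simple workflow: reader flags send a story to third-party fact-checkers, a "disputed" verdict attaches the warning label and blocks paid promotion, yet the story remains shareable. The sketch below is only a schematic reading of that description; the class, method names and flag threshold are assumptions made for illustration, not Facebook's implementation.

```python
# Schematic sketch of the workflow described above: reader flags prompt a
# third-party fact check; a "disputed" verdict attaches the label and blocks
# paid promotion, but sharing stays possible. All names/thresholds are assumed.

from dataclasses import dataclass

@dataclass
class Story:
    url: str
    user_flags: int = 0
    disputed: bool = False

    def flag(self):
        """A reader reports the story as a possible fake news story."""
        self.user_flags += 1

    def needs_fact_check(self, flag_threshold=10):
        return self.user_flags >= flag_threshold and not self.disputed

    def record_fact_check(self, found_false: bool):
        """Apply a third-party fact-checker's verdict."""
        self.disputed = found_false

    def label(self):
        return "Disputed by 3rd Party Fact-Checkers" if self.disputed else ""

    def can_be_promoted_as_ad(self):
        return not self.disputed   # disputed stories lose paid promotion

    def can_be_shared(self):
        return True                # but users may still share them

story = Story("https://example.com/article")
for _ in range(10):
    story.flag()
if story.needs_fact_check():
    story.record_fact_check(found_false=True)
print(story.label(), story.can_be_promoted_as_ad(), story.can_be_shared())
```

The point the article stresses sits in the last two methods: the label and the advertising restriction change how a story is presented and monetised without removing it from the site.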



Facebook also said it is cutting financial incentives for publishers of fake news. It plans to bar content producers from hoax domains from buying on its ad networks. Mr Mosseri said these websites represent a negligible part of Facebook's advertising revenue. He noted that the adjustments are algorithmic and will not rely on editors employed by Facebook.

He also played down the likelihood that users would abuse the new features to report articles or publications they merely disagree with, rather than stories they find blatantly false.

These incidents "happen many times less - orders of magnitude less", Mr Mosseri said. "Most people aren't going to report anything negative."


BLOOMBERG
