The Ministry said these channels and websites belong to a coordinated disinformation network operating from Pakistan. Naya Pakistan Global, The Punch Line, Historical Facts and 17 more channels posted divisive content in a coordinated manner on topics like Kashmir, the Indian Army, minority communities in India, the Ram Mandir and General Bipin Rawat, among other sensitive subjects related to India, the ministry said.
With over 35 lakh subscribers, the Pakistan-based Naya Pakistan Group (NPG) and some other standalone channels were spreading fake information.
The ministry also informed that some of the YouTube channels of the Naya Pakistan Group were being operated by 'anchors of Pakistani news channels'. The decision to ban these channels was taken by the ministry in close coordination with the intelligence agencies.
A picture claiming that the Indian government has announced a two-year relaxation in the age limit for Army recruitment in 2022 was in circulation recently. However, a fact-check agency of the government has debunked the fake news.
The PIB Fact Check has shared a screenshot of the fake news on its Twitter handle. “A picture claims that the Indian government has given 2 years relaxation in the age limit for the 2022 army recruitment. #PIBFactCheck This claim is fake. There is no such change in the age limit,” the PIB Fact Check said in the tweet.
A picture claims that the Indian government has given a two-year relaxation in the age limit for the 2022 Army recruitment. #PIBFactCheck
— PIB Fact Check (@PIBFactCheck) November 3, 2021
➡️ This claim is fake.
➡️ No such change has been made in the age limit.
➡️ Please do not share such fake messages/pictures.
पढ़ें:https://t.co/4YFdn3U5o3 pic.twitter.com/SA7wQpA8VJ
It further stated that no revision has been made in the existing age limit to apply for the various posts in the Indian Army. It also advised people to avoid such fake news.
People fall prey to such fake news when they are uninformed and do not verify claims against an authentic source or official site. Hence, candidates applying for jobs in the Indian Army must go through the official website for details of the recruitment process.
Check Eligibility & Other Details Related To Recruitment In Indian Army
A bench headed by Chief Justice N. V. Ramana said: "On web portals, there is no control of anybody, they can publish anything...If you go to YouTube, you will find how fake news is freely circulated and anyone can start a channel on YouTube."
The bench also comprising Justices Surya Kant and A. S. Bopanna observed that the content shown in a section of private media bears a communal tone.
The Chief Justice told Solicitor General Tushar Mehta, "Ultimately, this country is going to get a bad name. Have you made an attempt for a self-regulatory mechanism (for these private channels)?"
Mehta submitted before the bench that the Centre has come out with new Information Technology Rules, which address the concerns flagged by the top court. He added that many petitions challenging the new rules have been filed in various high courts, and that the Centre has filed a plea to transfer all these petitions to the Supreme Court.
The Chief Justice added that the social media platforms do not respond if an issue is raised in connection with the content. "I have not come across any public channel, Twitter, Facebook or YouTube ... they never respond to us and there is no accountability, about the institutions they have written badly about, and they don't respond and say this is their right," said the Chief Justice.
He added, "Do not know who to approach...they are only concerned with the people who are powerful... judges, common man, they are not bothered."
The top court made these sharp observations while hearing a Jamiat Ulama-i-Hind petition against fake and motivated news in connection with the Nizamuddin Markaz incident in the national capital.
According to a report in USA Today late Tuesday, Facebook "swiftly removed misinformation, such as posts and memes, urging Republicans and Democrats to vote on the wrong day and claims that federal immigration agents would be patrolling polling places".
Twitter said it was combating voter suppression but "declined to share any details about what tweets were removed".
According to a CNET report, Facebook took action against inaccurate posts and memes that told Republicans and Democrats to vote on different days.
"The team is closely monitoring the election from our war room and are in regular contact with our partners in government. So far we haven't seen anything unexpected," a Facebook spokesperson was quoted as saying.
"We will continue to monitor activity closely and act quickly against content that violates our policy."
According to Carlos Monje, Twitter's Director of Policy and Philanthropy for the US and Canada, attempts to game its systems or to spread deliberately malicious election content will be removed from Twitter.
"We continue to have success in this regard and are enforcing our policies vigilantly, particularly against automation and voter suppressive content on the service. As always, we encourage users to think before sharing," Monje said in a statement.
Facebook on Monday said it blocked 30 accounts on its platform and 85 accounts on Instagram that may have been engaged in "coordinated inauthentic behaviour" by foreign entities ahead of the midterms.
According to the social networking giant, US law enforcement contacted them about online activity that they recently discovered and which they believe may be linked to foreign entities.
"We immediately blocked these accounts and are now investigating them," Facebook said in a blog post.
More than 80 per cent of the Twitter accounts linked to the spread of disinformation during the 2016 election are still active, said a study released by the Knight Foundation on Thursday.
These accounts continue to publish more than a million tweets in a typical day, the study said.
Using tools and mapping methods from Graphika, a social media intelligence firm, the researchers studied more than 10 million tweets from 700,000 Twitter accounts that linked to more than 600 fake and conspiracy news outlets.
Twitter, along with other social media platforms including Facebook came under intense scrutiny of policymakers in the US for their failure to stop the spread of misinformation on their platforms during the 2016 election.
The microblogging site since then has stepped up its efforts to curb the spread of divisive messages and fake news on its platform.
To further protect the integrity of elections, Twitter earlier this week announced that it will now delete fake accounts engaged in a variety of emergent, malicious behaviours.
As platform manipulation tactics continue to evolve, the micro-blogging platform said it is expanding rules to better reflect how it identifies fake accounts and what types of inauthentic activity violate its guidelines before the US mid-term elections in November.
As part of the new rules, accounts that deliberately mimic or are intended to replace accounts that Twitter previously suspended for violating its rules may be identified as fake accounts, Twitter said.
The Knight Foundation study found more than 6.6 million tweets linking to fake and conspiracy news publishers in the month before the 2016 election.
Yet disinformation continues to be a substantial problem postelection, with 4.0 million tweets linking to fake and conspiracy news publishers found in a 30-day period from mid-March to mid-April 2017, the study said.
Sixty-five percent of fake and conspiracy news links during the election period went to just the 10 largest sites, a statistic unchanged six months later.
"Machine Learning models estimate that 33 percent of the 100 most-followed accounts in our postelection map -- and 63 percent of a random sample of all accounts -- are "bots," or automated accounts," the study said.
"Because roughly 15 per cent of accounts in the postelection map have since been suspended, the true proportion of automated accounts may have exceeded 70 per cent," it added.
In a post on Friday, Zuckerberg said that Facebook started its platform clean-up project in 2017, and that while "this work will extend through 2019, I do expect us to end this year on a significantly better trajectory than when we entered it".
"My personal challenge for 2018 has been to fix the most important issues facing Facebook -- whether that's defending against election interference by nation states, protecting our community from abuse and harm, or making sure people have control of their information," the Facebook founder wrote.
After his grilling in the US Congress in April over the Cambridge Analytica data scandal and the Russian interference in the 2016 US presidential election, COO Sheryl Sandberg again testified at the US Senate Intelligence Committee hearing on election security on September 5.
Along with Twitter CEO Jack Dorsey, she faced the committee, which is probing the Russian interference and seeking to publicly hold Facebook and Twitter accountable for allowing Russian operatives on their platforms.
"I'm spending a lot of time on these issues, and as the year winds down I'm going to write a series of notes outlining how I'm thinking about them and the steps we're taking to address them," said Zuckerberg.
The first note will be about the steps Facebook is taking to prevent election interference on Facebook, which is timely with the US mid-terms and Brazilian presidential elections approaching.
"I'll write about privacy, encryption and business models, and then about content governance and enforcement as well in the coming months," he added.
Another part of its strategy in some countries is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook, Tessa Lyons, a Facebook product manager on News Feed focused on false news, said in a statement on Thursday.
The social media giant is facing criticism for its role in enabling political manipulation in several countries around the world. It has also come under the scanner for allegedly fuelling ethnic conflicts owing to its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.
"False news is bad for people and bad for Facebook. We're making significant investments to stop it from spreading and to promote high-quality journalism and news literacy," Lyons said.
Facebook CEO Mark Zuckerberg on Tuesday told the European Parliament leaders that the social networking giant is trying to plug loopholes across its services, including curbing fake news and political interference on its platform in the wake of upcoming elections globally, including in India.
Lyons said Facebook's three-pronged strategy roots out the bad actors that frequently spread fake stories.
"It dramatically decreases the reach of those stories. And it helps people stay informed without stifling public discourse," Lyons added.
Although false news does not violate Facebook's Community Standards, it often violates the social network's policies in other categories, such as spam, hate speech or fake accounts, which it removes.
"For example, if we find a Facebook Page pretending to be run by Americans that's actually operating out of Macedonia, that violates our requirement that people use their real identities and not impersonate others. So we'll take down that whole Page, immediately eliminating any posts they made that might have been false," Lyons explained.
Apart from this, Facebook is also using machine learning to help its teams detect fraud and enforce its policies against spam.
"We now block millions of fake accounts every day when they try to register," Lyons added.
A lot of the misinformation that spreads on Facebook is financially motivated, much like email spam in the 90s, the social network said.
If spammers can get enough people to click on fake stories and visit their sites, they will make money off the ads they show.
"We're figuring out spammers' common tactics and reducing the distribution of those kinds of stories in News Feed. We've started penalizing clickbait, links shared more frequently by spammers, and links to low-quality web pages, also known as 'ad farms'," Lyons said.
"We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution," Lyons said.
Facebook said it does not want to make money off of misinformation or help those who create it profit, and so such publishers are not allowed to run ads or use its monetisation features like Instant Articles.
Experts from the research lab will work closely with Facebook's security, policy and product teams to get real-time insights and updates on emerging threats and disinformation campaigns from around the world.
"This will help increase the number of 'eyes and ears' we have working to spot potential abuse on our service - enabling us to more effectively identify gaps in our systems, pre-empt obstacles, and ensure that Facebook plays a positive role during elections all around the world," Katie Harbath, Global Politics and Government Outreach Director at Facebook, said in a blog post.
According to Facebook CEO Mark Zuckerberg, it is important to make sure no one interferes in any more elections, including in India.
"Our goals are to understand Facebook's impact on upcoming elections-like Brazil, India, Mexico and the US midterms-and to inform our future product and policy decisions," he said while testifying before the US Congress in April.
"The most important thing I care about right now is making sure no one interferes in the various 2018 elections around the world," he told a panel of 44 Senators.
Facebook is doubling the number of people who work on safety and security and using technology like Artificial Intelligence (AI) to more effectively block fake accounts during elections.
Facebook will also use the Atlantic Council's Digital Research Unit Monitoring Missions during elections and other sensitive moments.
"This will allow us to focus on a particular geographic area - monitoring for misinformation and foreign interference and also working to help educate citizens as well as civil society," Harbath said.
The Atlantic Council and Facebook's partnership will promote and supplement @DFRLab's existing #ElectionWatch efforts and allow for greater capacity building with journalists and civil society to incorporate similar methods into their own work.
"Through the innovative work of the Digital Forensic Research Lab, we are building a digital solidarity movement, a community driven by a shared commitment to protect democracy and advance truth across the globe," noted Fred Kempe, Atlantic Council President and CEO.
Last month, Facebook announced an independent commission to help fund and organise research into the impact of social media on society - starting with elections.
As part of its new strategy to combat fake news, Facebook wants its users to overlook these stories while scrolling their News Feed, without withdrawing them altogether, so as to walk a fine line "between censorship and sensibility", according to a media report.
When an article is verified as inaccurate by the social network's third-party fact-checkers, Facebook will shrink the size of the link post in the News Feed, TechCrunch reported on Saturday.
"We reduce the visual prominence of feed stories that are fact-checked false," a Facebook spokesperson was quoted as saying.
To combat the menace of fake news, Facebook earlier introduced red warning labels. But this led some users to share the false stories even more aggressively, forcing the social network to ditch the red flag.
Facebook then started showing "Related Articles" from trusted news sources in the hope of offering its users the correct perspective.
The move to reduce the visual prominence of inaccurate stories is another effort in the same direction.
Facebook detailed its new tactics to fight fake news at its Fighting Abuse @Scale event in San Francisco, according to TechCrunch.
Facebook, the report said, is also now using machine learning to look at newly published articles and scan them for signs of falsehood.
"We use machine learning to help predict things that might be more likely to be false news, to help prioritize material we send to fact-checkers (given the large volume of potential material)," a Facebook spokesperson was quoted as saying.
According to The Verge, a hacker hijacked the verified Twitter account of Vadim Lavrusik, a Product Manager at the video-sharing platform, and used the profile to implicate a YouTube broadcaster in the shooting.
The hoax tweets reportedly began 20 minutes after the executive, who was apparently near the shooting site, marked himself safe.
The hoax tweets were reported several times and Twitter removed them shortly afterwards.
Twitter CEO Jack Dorsey later said the company was "on it" and taking care of the situation.
"Even after Twitter executives became aware of the compromise, hoax tweets were still being regularly posted to Lavrusik's account, only to be instantly deleted," the report said.
One person was killed and four others were wounded in a shooting on Tuesday at the YouTube headquarters in San Bruno in the US state of California.
Police officer Ed Barberini said the suspect, a female shooter, appeared to have shot herself after injuring multiple people at the campus of the YouTube facility on Tuesday, Xinhua news agency reported.
The shooting took place in an outdoor cafe at the YouTube campus which houses at least 1,700 employees.
But following widespread criticism, Prime Minister Narendra Modi ordered the withdrawal of the order earlier in the day.
The order, issued on Monday night, said a journalist's accreditation would be suspended once a charge of fake news was registered against him or her, with the charge to be determined later by the Press Council of India and the News Broadcasters Association (NBA).
If found that the news was indeed fake, he or she could also lose their accreditation for a limited period or permanently and thus be denied access to government institutions.
Journalists and opposition parties took serious note of the order issued by the Information and Broadcasting Ministry and described the guidelines as an attack on the freedom of the press.
On Tuesday afternoon, the I&B Ministry said in a press release that the "Guidelines for Accreditation of Journalists amended to regulate Fake News issued on April 2 stand withdrawn".
A joint statement issued by the Press Club of India, Indian Women's Press Corps, Press Association and Federation of Press Clubs of India expressed their "deep concern" over the Monday order.
They said: "There is ample scope for introspection and reform of journalistic practices; yet a government fiat restraining the fourth pillar of our democracy is not the solution.
"The Press Council of India was primarily set up to protect the freedom of the press, not to clamp down on it."
The associations also welcomed the retraction of Monday's order.
Speaking at an event at the Press Club of India here, where the statement was released, senior journalist Rajdeep Sardesai said that though Monday's order had been withdrawn, there was very little to celebrate.
"What was the need of the circular in the first place? That the government would even consider such a circular is worrisome," Sardesai said.
He said that the government was itself in the "business of propaganda", which was also "fake news", and added that the government should be kept out of any discussion on the subject of fake news, a point also raised by other speakers.
He also said that the government had to step in as the media had failed to rein in fake news. "We should name and shame serial offenders of fake news."
TV journalist Ravish Kumar said the opinion of journalists was not taken before Monday's circular was issued and added that such attacks would not stop.
Press Club of India President Gautam Lahiri told IANS that they would explore legal options on the arbitrary way in which the government was constituting the Central Press Accreditation Committee.
He also said that they would form a group of senior journalists to act as media watchdogs.
According to Adam Mosseri, Head of News Feed, Facebook will sample a new set of users each day.
"There is always the risk that people will try and game any system. In this case we (1) randomly sample people, (2) actively work to de-bias the data so it's representative of the population and (3) re-run the surveys every day," Mosseri tweeted on Thursday.
"It's worth noting this isn't a rating system, nobody can opt into rating a publisher as trustworthy.
"We randomly sample new people each day, and only their responses are used. I'm sure some bad actors will try and game the system, but it's not as easy as you suggest," he tweeted to one user who criticised the survey.
The two survey questions are: "Do you recognise the following website? Yes or No" and "How much do you trust each of these domains? Entirely, A lot, Somewhat, Barely, Not at all".
Facebook CEO Mark Zuckerberg, in a lengthy post last week, revealed the survey to determine sources that are "broadly trusted".
"As part of our ongoing quality surveys, we will now ask people whether they're familiar with a news source and, if so, whether they trust that source," he posted.
"The idea is that some news organisations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don't follow them directly," he added.
"This update will not change the amount of news you see on Facebook. It will only shift the balance of news you see towards sources that are determined to be trusted by the community," Zuckerberg noted.
"Instead of Disputed Flags, we'll use Related Articles to help give people more context about the story," Tessa Lyons, Product Manager at Facebook, wrote in a blog post on Friday.
Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs.
"Related Articles, by contrast, are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts," Facebook said.
Facebook also announced a new initiative to better understand how people decide whether information is accurate based on the news sources they depend upon.
"This will not directly impact News Feed in the near term. However, it may help us better measure our success in improving the quality of information on Facebook over time," the company said.
"Demoting false news (as identified by fact-checkers) is one of our best weapons because demoted articles typically lose 80 per cent of their traffic.
"This destroys the economic incentives spammers and troll farms have to generate these articles in the first place," Facebook posted.
In their bid to fight fake news and help readers identify trustworthy news sources, Facebook, Google, Twitter and several media organisations have joined the non-partisan "The Trust Project".
"The Trust Project" is led by award-winning journalist Sally Lehrman of Santa Clara University's Markkula Centre for Applied Ethics.
As part of the initiative, an icon will appear next to articles in the Facebook News Feed. When you click on the icon, you can read information on the organization's ethics and other standards, the journalists' backgrounds, and how they do their work.
The additional information about a news article will be pulled from across Facebook and other sources to identify and remove false news.
The other sources are information from the news publisher's Wikipedia entry, a button to follow their Page, trending articles or related articles about the topic and information about how the article is being shared by people on Facebook, the social media platform said in a blog post on Friday.
"In some cases, if that information is unavailable, we will let people know, which can also be helpful context," the Facebook blog post added.
The move is important in the wake of the Las Vegas shooting, during which Google, Facebook and Twitter failed to stop fake news from spreading on their platforms.
Facebook's "Safety Check" page -- which lets people involved in disasters and accidents post messages for friends and loved ones -- published a blog post from "Alt-Right News" claiming "the killer may have been a Trump-hating American television host Rachel Maddow fan", in an apparent reference to the misidentified killer's Facebook page.
Facebook said its security staff saw the post and removed it.
"However, its removal was delayed by a few minutes, allowing it to be screen captured and circulated online. We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused," Fast Company quoted the social media giant as saying.
"The new button reflects feedback from our community, including many publishers who collaborated on its development as part of our work through the Facebook Journalism Project," Facebook said in the new blog post.
Helping people access this important contextual information can help them evaluate if articles are from a publisher they trust, and if the story itself is credible, it added.
According to a report in The Washington Post late on Sunday, in November last year, Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously.
"Unless Facebook and the government did more to address the threat, Obama warned, it would only get worse in the next presidential race," the report added.
Zuckerberg acknowledged the problem posed by fake news but "told Obama that those messages weren't widespread on Facebook and that there was no easy remedy".
Now, after an extensive legal and policy review, the social media giant has announced it will share those 3,000 Russian ads with Congressional investigators.
Facebook announced it had found more than 3,000 ads addressing social and political issues that ran in the US between 2015 and 2017 and that appear to have come from accounts associated with a Russian entity known as the Internet Research Agency.
Facebook earlier handed over the details to American Special Counsel Robert Mueller that included copies of the ads and details about the accounts that bought them and the targeting criteria they used.
Zuckerberg, who returned from parental leave after the birth of his second daughter, said in a post last week that he deeply cares about the democratic process and protecting its integrity.
Zuckerberg said: "We will continue our investigation into what happened on Facebook in this election. We may find more, and if we do, we will continue to work with the government."
The Facebook CEO noted that the company has already shared information on this issue with the special counsel and has also briefed Congress about it.
He stressed the company would now make political advertising more transparent. When someone buys political ads on TV or other media, they are required by law to disclose who paid for them.
The company would also strengthen the review process for political ads, and increase its investment in security and, specifically, election integrity.
"In the next year, we will more than double the team working on election integrity. In total, we'll add more than 250 people across all our teams focused on security and safety for our community," Zuckerberg said.
As part of its ongoing push to build relationships with local publishers, Facebook is testing products that meaningfully engage with their community, a report on Poynter.org website said on Monday.
The tests, part of the Facebook Journalism Project, have just begun and are on three products, Poynter quoted a Facebook spokesperson as saying.
"One points users in community-linked Facebook groups to additional local news. Another, launching Tuesday, offers users who make their cities of residence public a badge identifying them as a local when they comment on a local publisher's stories. A third helps people find local groups," the spokesperson said.
The social media giant is looking to establish baseline metrics for the availability and discoverability of news and identify the levers that move users to consume, share, comment and form community around local news.
"This test allows administrators of groups that regularly discuss local news to add a news unit to the group. This unit will be dynamically populated based on our local news classifier with recent articles from publications that serve the location of the group. Members can then easily share an article from the unit as a link share in the group," he said.
According to the spokesperson, publishers want new ways to connect local stories to people in the communities they serve and are encouraged to see the company testing product ideas specific to local news distribution.
Apart from academics and non-profits, Mozilla and Wikipedia founder Jimmy Wales are also part of the consortium.
The News Integrity Initiative aims to develop tools that will help people critically evaluate the stories they read online.
"The City University of New York Graduate School of Journalism will administer the initiative and will spearhead news literacy efforts that aim to increase trust in journalism around the world," a news release on the website of the CUNY Graduate School of Journalism said.
After US President Donald Trump was declared the winner of the 2016 presidential election, Facebook came under severe criticism for promoting and breeding fake news on its platform.
To fight fake news and rebuild its credibility, the networking giant announced a number of projects and enlisted users to help identify and weed out fake stories.
"The initiative will address the problems of misinformation, disinformation and the opportunities the internet provides to inform the public conversation in new ways," Facebook's head of news partnerships, Campbell Brown, said in a statement.
Simon Hegelich, professor of political science at the Technical University of Munich and who was asked by Merkel to brief the CDU executive committee on the fake news movement, told Xinhua news agency that fake news became high priority for German politicians after the US elections.
Hegelich believes if fake news is distributed in high frequency, say, by social bots, trolls or algorithms, it could change public perception of a topic for a short amount of time, and that high-frequency fake news before the election or at times of strategical importance could be dangerous.
"Over time, fake news contributes to an atmosphere of uncertainty and angst, which could help populist parties," said Hegelich.
Subsequently, Merkel and her party plan to deal with social bots and Internet trolls, which they deem as "the biggest threat to disseminating high frequency fake news".
To help combat open misinformation channels on social media sites, Merkel and her CDU party plan to give Facebook and other social platforms users more flexibility in registering complaints about fake news and any offensive content.
Any victim of fake news would also have the right to know who wrote the source material. To ensure the action plan is followed, any news portal that does not comply with the proposed terms will be fined -- the currently suggested amount is 500,000 euros.
Companies such as Facebook and Google have already started to clamp down on fake news. However, Facebook continues to be heavily criticised in Berlin, for failing to deal with racist hate speech on its news feeds.
In response, the social media giant is implementing new filtering tools tailored specifically for Germany, which include using a third-party fact checker.
Nadine Schoen, a senior CDU MP and one of the politicians directly involved in the CDU fake news action plan, does not think that companies like Facebook go far enough.
"The platform operators have simply not established the necessary mechanisms that allow for fake news stories to be investigated promptly and to help those affected find legal redress,"she said.
A fake news white paper published by news aktuell earlier this week showed that 68 per cent of Germans have come across fake news from traditional media or social media in the last 12 months, and 63 per cent of Germans use the Internet as a main source for the news.
The paper also raises concerns about the speed at which news can be shared on social media through likes and shares, without any barriers or application of traditional journalism standards.
It warns that any solution to curbing fake news must be rigorously tested if it is to effectively control the spread.
The new feature uses non-partisan third-party organisations like Snopes and Politifact to assess the factual accuracy of stories reported as fake by users.
On its help centre page, Facebook has added a question "How is news marked as disputed on Facebook?" However, the section noted that this feature is not yet available to everyone.
It is unclear how many people currently have access to the "fake news" debunking feature, rt.com reported.
The new tool was first revealed by users on Twitter, who shared screenshots which identified links to sites known to produce misinformation.
Facebook had introduced a solution to false stories last December amid outcries that so-called fake news influenced the outcome of the US presidential election.
Thus, the technology giant partnered with fact checkers that are signatories of the journalism non-profit Poynter's International Fact Checking Code of Principles and included ABC News, FactCheck.org, Snopes and Politifact, the report said.
Stories flagged by Facebook users as 'fake news' are passed on to these fact checkers for verification. If the fact-checkers agree that the story is misleading, it will appear in News Feeds with a "disputed" tag, along with a link to a corresponding article explaining why it might be false.
These posts then appear lower in the news feed and users will receive a warning before sharing the story.
Similar efforts are planned in Europe amid pressure from the European Union to reduce the spread of misinformation. The social networking site recently revealed fact-checking partnerships in Germany and France ahead of elections in each country.
Presently, Facebook bans content that directly calls for violence, but the new policy will cover fake news that has the potential to stir up physical harm, including both written posts and manipulated images, CNET reported.
Facebook has been accused of helping to spur violence in Myanmar, Sri Lanka and India. The social network also has drawn intense criticism for its policies surrounding misinformation in general. The social network said last week that it would not ban InfoWars, a right-wing website known for pushing conspiracy theories, the report said.
In India, Facebook-owned messaging service WhatsApp is facing flak for allowing the circulation of a large number of irresponsible messages filled with rumours and provocation that have led to growing instances of lynching of innocent people.
The company will work with local organisations to help judge which posts fall under that category. If Facebook can't make a definitive call working with one organisation, it might bring in other organisations to help, the report said.
"There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down," a Facebook spokeswoman was quoted as saying. "We will be begin implementing the policy during the coming months."
Yesterday, in trying to explain Facebook's stances on fake news, CEO Mark Zuckerberg sparked outrage by saying the company would not ban content from Holocaust deniers from the platform, because, "I don't think that they're intentionally getting it wrong," he said.
Hours later, he tried to clarify his comments by saying he finds Holocaust denial "deeply offensive," and Facebook would suppress content like that by making sure fewer people see it on their news feeds.
Facebook has been accused of not doing enough to remove anti-Muslim posts and fake news that has been linked to violence against the minority Rohingya Muslims in Myanmar.
Such fake news and online hate have also added to sectarian violence in Sri Lanka.
As for the new policy on removing misinformation that could lead to violence, Facebook said it has already begun trying it out.
Last month, the company removed content that alleged Muslims are poisoning food that is given and sold to Buddhists in Sri Lanka. Facebook worked with a local group that said the post could contribute to potential violence, and removed the post.
According to the PMO, any decision on fake news should be taken by bodies like the Press Council of India and the News Broadcasters Association, official sources said.
The Information and Broadcasting Ministry had yesterday announced measures to contain fake news, saying the accreditation of a journalist could be permanently cancelled if the scribe is found generating or propagating fake news.
Terming the circulation of photographs and video as ‘fake news,’ Nilambar categorically stated that Daringbadi as well as Kandhamal is absolutely a ‘safe’ destination for tourists.
It is pertinent to mention here that pictures of a tiger attacking picnickers had triggered panic on social media as it was claimed that the incident occurred somewhere in Daringbadi. Following such reports, a detailed investigation was carried out by district police as well as the Forest department, sources said.
“The pictures are not from Daringbadi. Such fake news was spread to instil a sense of fear in the minds of picnickers.
Daringbadi as well other locations in Kandhamal are absolutely safe for tourists,” said Nilambar.
Nilambar further informed that preliminary discussions with senior police officials have been conducted and action will be taken against the culprits who circulated such fake news on social media.
He clarified that such information floating on the internet is fake and asked people not to be misled by such false news. The senior official clarified that the Special Relief Commissioner's office will issue the required guidelines at the appropriate time. SRC Pradeep Jena also warned miscreants to refrain from spreading incorrect information on social media.
He has also urged the Odisha DGP to carry out an investigation into the matter to ward off unnecessary confusion among the public.
As part of the partnership, WhatsApp and NASSCOM Foundation will train nearly 1,00,000 Indians to spot false information and provide tips and tricks to stay safe on WhatsApp.
The co-created curriculum, which includes real-world anecdotes, tools that can be used to verify a forwarded message, and actions users can take such as reporting problematic content to fact checkers and law enforcement agencies, will be disseminated in multiple regional languages.
"We are excited to expand our partnerships with civil society to advance crucial digital literacy skills that can help combat misinformation share on WhatsApp," Abhijit Bose, Head of India, WhatsApp, said in a statement.
"This training educates people throughout India to be mindful of the messages they receive and to verify the facts before forwarding," he added.
The training will be imparted by volunteers from NASSCOM Foundation who will launch the "each one teach three" campaign that mandates every volunteer to share their learnings with three more persons leading to a network effect.
These volunteers will post their takeaways from the workshops on their social media handles to increase the reach of these safety messages.
The first training will be on March 27 in Delhi and will be followed by more planned interventions like hosting training workshops for representatives from rural and urban areas along with roadshows across numerous colleges.
"The use of technology platforms like WhatsApp are inherently meant to foster social good, harmony, and collaboration, but are sadly being used by a small number of miscreants to entice anger and hatred by spreading false and doctored information," Ashok Pamidi, CEO, NASSCOM Foundation, said.
"I would like to urge all the connected citizens who want to join this fight against the spread of fake information, to come and help volunteer towards the cause," Pamidi added.
Aspiring volunteers can register at www.mykartavya.nasscomfoundation.org
NASSCOM Foundation is the social arm of the industry body, National Association of Software and Services Companies (NASSCOM).
In addition to the earlier TV, print and radio ads, the new campaign would educate people on the controls available in WhatsApp so they are empowered to stop the spread of misinformation, the company said in a statement.
The first phase of the campaign successfully reached hundreds of millions of Indians in both rural and urban areas, claimed the company, adding that the messaging platform is building on the campaign with a second round focused on supporting a safe election process.
"Proactively working with the Election Committee and local partners for a safe election is our top priority. Expanding our education campaign to help people easily identify and stop malicious messages is another step towards improving the safety of our users," said Abhijit Bose, Head of India, WhatsApp.
WhatsApp's digital literacy partners, including DEF and NASSCOM, would share these videos to grow awareness among the people, while the print ads are intended to act as reminders on how to spot, verify and stop the sharing of misinformation that can cause harmful outcomes during the sensitive polling period.
Over the last several months, WhatsApp has made a series of changes, including labeling forwarded messages to inform users when they have received something not from their immediate contacts, and setting a limit on the number of chats to which a message can be forwarded.
In addition, WhatsApp bans accounts that engage in unwanted automated activity.
WhatsApp, including other social media firms, will now have to process any request from the Election Commission of India to take down content within three hours during the 48-hour period before voting days.
For the research, the team from Lancaster University in the UK, compiled a novel dataset, or corpus, of more than 500 April Fools articles sourced from more than 370 websites and written over 14 years.
Using a Machine Learning 'classifier', they sorted articles into three categories: April Fools hoaxes, fake news, and genuine news stories.
April Fools hoaxes and fake news articles tend to contain less complex language, be easier to read, and use longer sentences than genuine news.
Important details for news stories, such as names, places, dates and times, were found to be used less frequently within April Fools hoaxes and fake news.
However, proper nouns such as the names of prominent politicians are more abundant in fake news than in genuine news articles or April Fools hoaxes, which have significantly fewer.
First person pronouns, such as 'we', are also a prominent feature for both April Fools and fake news. This goes against traditional thinking in deception detection, which suggests liars use fewer first person pronouns, the researchers said.
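The stylistic signals reported above (sentence length, reading difficulty, first-person pronoun use) can be sketched as a minimal feature extractor. This is an illustrative reconstruction under stated assumptions, not the Lancaster team's actual pipeline; the function name and the word-length proxy for complexity are invented for illustration.

```python
import re

# Assumed word list; the study itself cites 'we' as an example.
FIRST_PERSON = {"i", "we", "me", "us", "my", "our", "mine", "ours"}

def stylometric_features(text: str) -> dict:
    """Compute the kinds of surface features the study reports as
    differing between hoaxes/fake news and genuine news."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        # hoaxes and fake news reportedly use longer sentences
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # first-person pronouns are reportedly more prominent in both
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / max(len(words), 1),
        # crude proxy for language complexity / reading difficulty
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

genuine = "The minister met officials in Delhi on Monday. Talks ended at noon."
hoax = ("We honestly believe that we have finally discovered something "
        "that everyone truly wants and we think you will love it too.")
print(stylometric_features(genuine))
print(stylometric_features(hoax))
```

Feature vectors like these would then be fed to any standard classifier to separate the three categories.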
"Our findings suggest that there are certain features in common between different forms of disinformation and exploring these similarities may provide important insights for future research into deceptive news stories," said Alistair Baron, from the varsity.
The research will be presented at the forthcoming 20th International Conference on Computational Linguistics and Intelligent Text Processing in La Rochelle, France.
In a discussion with Mathias Dopfner, CEO of Europe's largest publisher Axel Springer, Facebook CEO Mark Zuckerberg on Monday discussed how the platform could deliver more high-quality news to its over 2 billion users globally.
"I don't know how many fake accounts you think Facebook has, but it seems to be quite a big amount. Some people are saying 700 million. I have no clue, but that has to be dealt with as a very serious problem," said the 34-year-old CEO.
"We have to make a business in order to finance investigative journalists and correspondents, and big foreign networks, they cannot afford to do that for free," he added.
Zuckerberg said he would focus on making the offering and its structure on Facebook attractive for the hundreds of thousands of journalists, bloggers, digital-native publishers and legacy publishers, so that they are attracted to put their best content on the platform.
"We're not going to have journalists making news. What we want to do is make sure that this is a product that can get people high-quality news," said the Facebook co-founder.
Facebook could have a direct relationship with publishers in order to make sure that the content is really high-quality.
"There's a whole set of questions around how do we build a service that is contributing to high-quality journalism through increasing monitorisation," said the American tech entrepreneur.
The Menlo Park-based online social media and social networking service company is battling the menace of fake news and misinformation on its platform, especially during election times, including in India where it has removed thousands of fake accounts, groups and pages linked with political parties.
Launched by PROTO, a media skilling start-up, the tipline will help create a database of rumours to study misinformation during elections for Checkpoint -- a research project commissioned by WhatsApp, the Facebook-owned company said in a statement.
People in India can submit misinformation or rumours to the "Checkpoint Tipline on WhatsApp" at +91-9643-000-888.
Dig Deeper Media and Meedan, who have previously worked on misinformation-related projects around the world, are helping PROTO to develop the verification and research frameworks for India.
"The goal of this project is to study the misinformation phenomenon at scale -- natively in WhatsApp," said PROTO's founders Ritvvij Parrikh and Nasr ul Hadi.
When a WhatsApp user shares a suspicious message with the tipline, PROTO's verification centre will seek to respond and inform the user if the claim made in the message shared is verified or not.
The response will indicate if information is classified as true, false, misleading, disputed or out of scope and include any other related information that is available.
"The centre can review rumours in the form of pictures, video links or text and will cover four regional languages including Hindi, Telugu, Bengali and Malayalam, other than English," said WhatsApp.
Following the project, PROTO aims to submit learnings to the International Center for Journalists to help other organisations learn from the design and operations of this project.
"The research from this initiative will help create a global benchmark for those wishing to tackle misinformation in their own markets," said Fergus Bell, Founder and CEO, Dig Deeper Media.
According to Proto, a media skill start-up WhatsApp has partnered with for its "Checkpoint Tipline" initiative, the service is "not a helpline" but a research project.
"The 'Checkpoint Tipline' is primarily used to gather data for research, and is not a helpline that will be able to provide a response to every user," Proto posted in an FAQ on its website.
A WhatsApp spokesperson confirmed to Buzzfeed News that the announcement hadn't meant to imply that every request would receive a response.
Unfortunately, it is too late for WhatsApp to spot and take action on fake news for the elections via this project as the first phase of voting begins on April 11.
In a statement, WhatsApp said that it had clarified in the very beginning that the tipline was meant to help create a database of rumours to study misinformation during elections for Checkpoint.
When the tipline was first announced on April 2, the Facebook-owned WhatsApp said its users in India would be able to share messages with the tipline, in order to help Proto verify their authenticity.
"This combined effort by WhatsApp and industry organisations will help contribute to the safety of the elections, by giving people means to know if the information is verified and deter people from sharing rumours that have no basis in fact," said the company.
Proto clarified that "over the next four months, we expect to aggregate these signals at scale, to better understand how misinformation during large events of public interest in India such as the elections spreads across languages, regions, even issues".
This means the project is of no use when it comes to spotting and removing misinformation in the upcoming general elections.
The response time will vary based on the complexity of the submissions. However, verifications will not be instant.
"If the new rumour is both within scope and verifiable, the verification centre will prioritize requests based on their urgency.
"Finally, if the new rumour is both within scope, verifiable and prioritized, the verification centre may take up to 24 hours to send back a report," said Proto in its FAQ.
In a nutshell, the WhatsApp project is to gain insights into how fake news spread and is not going to help the Indian government curb misinformation in the April 11-May 23 election period.
For Lyric Jain, a 22-year-old Cambridge and MIT graduate, social media platforms and other stakeholders, including the government, may design solutions to fight fake news but there will be glitches, as is the case with any technology.
"India needs to prepare better as the stakes are high. Facebook is taking the problem of fake news seriously but there are many other digital platforms that aren't working towards that direction," Jain told IANS.
Conceived by Jain, Logically has been developed by a diverse team of data scientists, coders, designers and journalists.
Adding a layer of credibility to the Internet to battle misinformation, the Logically platform acts as a real-time, user-friendly filter, ensuring users can quickly consume information that is fair, authentic, credible and trusted (FACT).
"News isn't just limited to media houses anymore. The idea is to create 'responsible sharing' among people," said Jain.
"Logically will analyse whether the information is fake or not, even if the information is being provided by a well-known journalist from a credible publication," he added.
When asked how the technology works, Jain replied: "It is a human-centric AI effort".
"We analyse the content or text, check the metadata that is being mined and see how information is being circulated across networks.
"We then combine these indicators and conclude whether the news is credible or not. Also, our human fact-checking team complies with the international fact-checking standards," Jain said, adding that it was the fake news and political interference debate around Brexit and the 2016 US Presidential elections that drove him to launch the Logically platform.
As the India elections inch closer, Jain said the platform will try its best to analyse the flow of information.
Logically taps deep learning algorithms and web graphs of millions of web sites from top publishers around the world to identify top quality sources for trending, quality news, per category, query, or article.
"Logically will look for information that is misleading, distorting or interfering with the elections," he added.
(Story by Vivek Singh Chauhan)
The Facebook-owned platform is already working on two features that will help its over 1.5 billion users know how many times a message has been forwarded.
The "Forwarding Info" and "Frequently Forwarded" features are not available yet but WhatsApp is working on these features in its Beta update for Android, said wabetainfo.com that tracks WhatsApp updates.
Now, in the "2.19.97 beta update, WhatsApp is testing a new feature in groups that allows to choose to stop sending Frequently Forwarded Messages in the group," it said.
The option will be available in Group Settings and only administrators can see and edit it.
When the feature is enabled, nobody will be able to send a Frequently Forwarded message in the group.
A user can copy the Frequently Forwarded message and send it as a new message but this will slow the process.
A message is Frequently Forwarded when it has been forwarded more than four times.
Currently, WhatsApp has limited the forwards to a maximum of five in India.
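The forwarding rules described above can be modelled in a short sketch. The thresholds (Frequently Forwarded after more than four forwards, a five-chat limit in India) come from the article; the class and function names are invented for illustration and do not reflect WhatsApp's internal code.

```python
FREQUENTLY_FORWARDED_THRESHOLD = 4  # more than four forwards, per the article
MAX_CHATS_PER_FORWARD = 5           # India-specific forward limit

class Message:
    def __init__(self, text: str):
        self.text = text
        self.forward_count = 0

    @property
    def frequently_forwarded(self) -> bool:
        return self.forward_count > FREQUENTLY_FORWARDED_THRESHOLD

def forward(msg: Message, chats: list, group_blocks_ff: bool = False):
    """Forward a message to several chats, honouring the limits above.
    group_blocks_ff models the new admin-only group setting."""
    if len(chats) > MAX_CHATS_PER_FORWARD:
        raise ValueError("can forward to at most five chats at once")
    if group_blocks_ff and msg.frequently_forwarded:
        raise PermissionError("group has disabled Frequently Forwarded messages")
    msg.forward_count += 1
    return [f"sent to {chat}" for chat in chats]

m = Message("viral rumour")
for _ in range(5):
    forward(m, ["chat-a"])
print(m.frequently_forwarded)  # prints: True (more than four forwards)
```

Copying the text into a new message resets the counter, as the article notes, which is why the block only slows rather than stops resharing.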
San Francisco: In yet another bid to tame fake news, Facebook will crack down on Groups that repeatedly share misinformation by reducing that Group's overall News Feed distribution.
The social media platform will also hold admins of the Facebook Groups more accountable for the Community Standards violations.
"When people in a group repeatedly share content that has been rated false by independent fact-checkers, we will reduce that group's overall News Feed distribution. Starting today, globally," Guy Rosen, Vice President of Integrity at Facebook said in a blog post on late Wednesday.
Facebook said that, starting in the coming weeks, when reviewing a Group to decide whether or not to take it down, it will look at admin and moderator content violations in that Group -- including member posts they have approved -- as a stronger signal that the group violates its standards.
"We're also introducing a new feature called Group Quality, which offers an overview of content removed and flagged for most violations, as well as a section for false news found in the group," added Tessa Lyons, Head of News Feed Integrity at Facebook.
The company has incorporated a "Click-Gap" signal into News Feed ranking.
"Click-Gap" looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph.
"This can be a sign that the domain is succeeding on News Feed in a way that doesn't reflect the authority they've built outside it and is producing low-quality content," said Facebook.
The company is also expanding the Context Button to images on Instagram.
Launched in April 2018, the Context Button feature provides people more background information about the publishers and articles they see in News Feed so they can better decide what to read, trust and share.
"We're testing enabling this feature for images that have been reviewed by third-party fact-checkers," said Facebook.
Facebook said it will bring the "Verified Badge" into its Messenger service.
"This tool will help people avoid scammers that pretend to be high-profile people by providing a visible indicator of a verified account," said the company.
Social media "warriors" are busy propagating "reworked" and "reoriented" content related to political news, government scams, historical myths, patriotism and nationalism on Facebook, WhatsApp and Twitter.
"Apart from the fake news and doctored contents, chatbots are sending bulk WhatsApp messages on active mobile numbers, not only on WhatsApp but on Facebook Messenger as well," nation's leading social media expert Anoop Mishra told IANS.
There are several cases where people who joined Facebook renamed their Pages, Groups and accounts later, only to use it for spreading their political agenda in the election season.
Despite Facebook's efforts, such misinformation is thriving and has reached mammoth levels, say experts.
"Over 90,000 groups on WhatsApp and more than 200 fake Facebook Pages and accounts are currently influencing the group members and followers with biased political content," said Mishra.
The content ranges from fake statistics about the government's policies and news prompting regional violence to manipulated political news, government scams, historical myths, propaganda, patriotism and Hindu nationalism.
Two main political parties are leaving no stone unturned in reaching out to voters via various social media platforms.
Social media platforms, however, say they are proactively disabling bots and fake accounts being used for political interference in India.
Facebook said recently it is removing or blocking about one million abusive accounts a day with the use of Artificial Intelligence (AI) and Machine Learning (ML) tools.
The social media giant has also removed nearly 700 Pages, Groups and accounts in India for violating its policies on what it calls "coordinated inauthentic behaviour" and spam.
It now has Ad Library, a searchable database, in place in India. Indians spent around Rs 7 crore for running political ads on Facebook and Instagram in the first 20 days of April, while the amount spent on these platforms in February-March was about Rs 10 crore.
On the other hand, Twitter has announced a new tool within the platform to make it easier for users to report attempts to interfere in the general elections in India through spread of misleading information about voting.
It earlier launched an Ads Transparency Centre that allows anyone to view details on political campaigning ads and billing information in India.
WhatsApp has also launched a research project in India where over 200 million users in the country can report fake news, misinformation and rumours related to elections.
Launched by PROTO, a media skilling start-up, the tipline will help create a database of rumours to study misinformation during elections for Checkpoint -- a research project commissioned by WhatsApp.
Are these measures sufficient to curb the fake news in the world's biggest election?
"The social media giants began their work towards safeguarding the India elections a bit late and some of the measures were announced near to the poll dates. It is difficult to tell if these measures will bear fruits and tame the fake news factories or not," lamented Mishra.
The legislation titled, "Protection from Online Falsehoods and Manipulation Bill", is not a political tool for the ruling party to wield power, said Home Affairs and Law Minister K. Shanmugam after it was passed late Wednesday night, but is about shaping the kind of society that Singapore should be, The Straits Times reported.
"(Debates) should be based on a foundation of truth, foundation of honour, and foundation where we keep out the lies, that's what this is about.
"It's not about the Workers' Party or the PAP (People's Action Party) or today, it's about Singapore," Shanmugam said responding to the 31 MPs who spoke during the debate on the draft law aimed at protecting society from fake news that harms public interest.
At around 10.20 p.m., the Bill was passed with 72 MPs saying "yes", nine WP MPs saying "no", and three Nominated MPs abstaining.
The legislation will allow the government to decide what information is listed as false based on two criteria: when a false statement or announcement is issued, and when this action is considered to be of public interest.
The law, which according to the authorities will not apply to opinion, criticism, satire or parody, provides for a maximum penalty of 10 years' imprisonment and fines of up to $733,000.
Pritam Singh, leader of the Workers' Party -- Singapore's lone opposition party in parliament -- who had strenuously objected to the new law for giving ministers too much power, had called for a division in which each MP's vote is recorded.
The Bill was also criticised by Amnesty International's East and South East Asia Regional Director Nicholas Bequelin, who said it will give the authorities more power to repress their critics, reports Efe news.
Asia Internet Coalition said it was "deeply disappointed" by the lack of public consultation during the drafting process.
Prime Minister Lee Hsien Loong last week defended the measures by describing them as necessary to protect Singapore with the aim of avoiding "hostile" interference intended to "cause disorder in our society".
Singapore, one of the most prosperous countries in the world, has been criticised on numerous occasions for its tight control of public and private media.
In 2018, Singapore was ranked 151st out of 180 countries in Reporters Without Borders' Press Freedom Index, behind countries such as Afghanistan, Russia and Myanmar.
In no time, she joined WhatsApp and started getting updates from family, relatives and friends. Then started the flood of forwarded messages from people in her contact list.
These forwards, many of which contained fake news, surged during the election time. Arora had no idea that these could be propaganda material. She became aware of the problem only after one of her sons alerted her about a fake political message she had forwarded.
"I never knew how a post could be fake or bogus. Photoshopped? I could never figure out if the message loaded with political information was right or wrong. For me, it was just information, which I kept sharing with friends and family members," Arora told IANS.
Arora is among an estimated 300 million users -- mostly first-time smartphone users, from the smaller towns and rural areas with no prior digital experience -- who are particularly vulnerable to sharing fake information on social media platforms.
"The biggest challenge to fighting fake new is that over 300 million of the 550 million smartphone and broadband users in the country are low on literacy and digital literacy and are especially gullible," leading tech policy and media consultant Prasanto K. Roy told IANS.
"For them, we need prominent messaging and public education on the dangers -- that fake news kills," Roy emphasised.
The country has 366 million Internet subscribers in urban locations and 194 million in rural areas, says the latest TRAI report.
The "ICUBETM 2018" report from market research firm Kantar IMRB said that the number of Internet users in the country will reach 627 million by the end of 2019.
According to Govindraj Ethiraj, Founder - BOOM, which has collaborated with Facebook, Google and Twitter, among others, to fight misinformation, educating new social media users about the dangers of fake news is a major challenge.
"Although millennials are no less vulnerable to fake news, they could be taught about its dangers through the introduction of education programmes in schools or advertisements. Reaching out to the old people, who are newly getting introduced to smartphones and social media is a greater challenge," Ethiraj told IANS.
He, however, noted that once awareness increases among the general population, old people could also be educated.
"Many times, children teach their grandparents a lot of things," Ethiraj said, while adding that fighting fake news is a daunting challenge.
"The spread of fake news reached an all-time high in the run up to the 2019 general election, despite social media platforms fighting them back by combining people (fact checkers) and technology," Ethiraj added.
But this "fight back" has warned the organised fake news peddlers as they run the risk of getting exposed by fact checkers.
The number of eligible voters in the Lok Sabha elections this year was around 900 million. Both Facebook and WhatsApp have nearly 300 million users each in India.
Facing flak from different quarters for the spread of misinformation on its platform that was linked to dozens of lynching cases in India last year, Facebook-owned WhatsApp also introduced an education programme through advertisements in over 10 languages.
All these efforts, however, had only limited success in curbing spread of disinformation during this election season.
"Fake news has been a primary and significant driver of sentiment and passion through this election," Roy said.
"Even now, on the eve of the counting day, fake news is being seeded by political influencers on Twitter (for example, Bollywood actress Payal Rohatgi saying Khan Market in Delhi is named after a Mughal invader and must be renamed Valmiki Market) and instantly being circulated on WhatsApp," he added.
A part of the problem is that for many of the social media platforms India is a bigger market than their "home" market, said Ethiraj.
"These products were probably not originally designed to deal with the diversity and vastness of the India market, but they are now trying to adapt to the Indian situation and deal with the unique challenges that the country poses," he said.
Facebook remains flooded with fake review groups, despite being ordered to take urgent action by UK regulator Competition and Markets Authority (CMA), the study found.
"Our latest findings demonstrate that Facebook has systematically failed to take action while its platform continues to be plagued with fake review groups generating thousands of posts a day," Natalie Hitchins, Which? Head of Products and Services, said in a statement.
"It is deeply concerning that the company continues to leave customers exposed to poor quality or unsafe products boosted by misleading and disingenuous reviews," Hitchins added.
Which? found dozens of groups on the social networking site in the UK that are recruiting people to write fake or incentivised reviews, with sellers offering free products in exchange for highly-rated reviews for products listed on Amazon.
During the investigation, researchers joined ten of these Facebook review groups and found 3,511 new posts generated in just one day, and more than 55,000 posts over a 30-day period.
The true overall figure could well be higher as Facebook caps the number of posts it displays, Which? said.
In June, UK's Competition and Markets Authority warned Facebook and eBay to conduct an urgent review of their sites after it found "troubling evidence" of a thriving marketplace for fake online reviews.
The platforms were told to remove and prevent these groups from reappearing.
While eBay seems to have largely eradicated listings offering five star reviews for sale, Facebook continues to be full of fake review groups, the research found.
The rise in fake reviews increases the chance of people being duped into buying poor quality or even unsafe products that have been boosted by disingenuous reviews.
A Which? survey of the public showed that 97 per cent of people use online reviews when researching a purchase.
"Facebook must immediately take steps to not only address the groups that are reported to it, but proactively identify and shut down other groups, and put measures in place to prevent more from appearing in the future," Hitchins said.
Issuing a clarification on this, Puri Raja Nahar (the palace) sources stated, "The photo circulated on social media claiming coronation of new Puri King is fake. The person in the traditional attire is Tikayet Janmejay Mardaraj of Raj-Nilagiri, the only child of Nilagiri Raja Saheb who is on his (Puri Gajapati) left."
"This photo was taken at Nilagiri Palace on the occasion of the Tilak (wedding engagement) ceremony of Janmejay in 2016," the Nahar sources added.
Even Nilagiri King Jayant Chandra Mardaraj Harichandan has clarified saying, "The person picturised as the new Puri King is my son Tikayet Janmejay Mardaraj of Nilgiri. Alongside him are Puri Gajapati and Raja Kamakhya Prasad Singh Deo of Dhenkanal and their families during his (Janmejay's) engagement ceremony at Nilagiri Palace in December 2016."
It is pertinent to mention here that Gajapati Dibyasingha Deb was crowned 49 years ago, on July 7, 1970, and has since been the chief servitor of the sibling deities in Puri.
The government will not take any step that may curb media freedom, the minister said and suggested there should be some kind of regulation on over-the-top platforms (OTT), as there is for the print and electronic media as well as films.
OTT platforms include news portals and also 'streamers' such as Hotstar, Netflix and Amazon Prime Video, which are accessible over the internet or ride on an operator's network.
In an interaction with PTI journalists at the news agency's headquarters here, Javadekar said several mainstream media outlets have conveyed to the government that there was no level-playing field with OTT platforms being completely unregulated.
"I have sought suggestions on how to deal with this because there are regular feature films coming on OTT -- good, bad and ugly. So how to deal with this, who should monitor, who should regulate. There is no certification body for OTT platforms and likewise news portals also," he said.
At the same time, he said the government has not taken any decision on the matter.
The Press Council of India takes care of the print media, the News Broadcasters Association (NBA) monitors news channels, the Advertising Standards Council of India is for advertising while the Central Board of Film Certification (CBFC) takes care of films, he said.
"However, there is nothing for the OTT platforms," the I & B minister said.
There has been a spurt in news portals in India with several of them seeing a rise in the number of online subscribers.
Javadekar also expressed concern over fake news, saying it is "more dangerous than paid news".
"Fake news has to be stopped and that is our joint work. It is not just the government's job, it is everybody's job. Those who are in the business of genuine news, they all must strive hard (to combat it)," the minister said.
He said several media channels are tackling the menace by showing the truth with programmes such as "Viral Sach", and added that the print media should also carry columns on similar lines uncovering the truth of fake news.
"We have seen in the last few months that fake news on social media and gossip, rumours on child lifting have resulted in the deaths of more than 20-30 people in mob violence," he said.
Javadekar said the government is doing its bit to combat the menace and has run programmes on Doordarshan News such as 'Kashmir ka Sach' to tackle fake news about Kashmir, where Article 370 provisions were abrogated on August 5.
"We will be fast enough to react if there is fake news on government matter but the government is also concerned about overall public order. We are also asking state administrators to act (in countering fake news)," he said.
Many district magistrates have placed facts before the public to counter fake news about their area, he said.
Talking about paid news, the minister said it is unethical and the media community has to stop it.
"It has to give us (the government) suggestions so that we all can act together to ensure that the small percentage of media that indulges in this are punished and this practice goes away," Javadekar said.
On the 10 per cent customs duty to be levied on imported newsprint and the subsequent demands for a rollback from the print media industry, Javadekar said discussions have taken place on the issue involving all stakeholders and the matter will be settled.
Speaking at an event organised by the Press Council of India to observe National Press Day and confer National Awards for Excellence in Journalism, the union minister for information and broadcasting also touched upon the issue of "lynching" and how only "one kind of lynching" was being talked about.
"A fake news was circulated about child-lifters and it claimed 20 lives. When there is discussion about lynching, only one lynching is talked about but these 20 people also died due to lynching. That is not talked about," he said.
Talking about how fake news goes viral on social media, he said even if a tweet is deleted, it stays somehow.
"Press freedom has to be responsible freedom. We need responsible freedom. Mediapersons need to introspect. Fake news has more TRPs," he said, adding fake news was a bigger menace than paid news.
"There should be discussion about how to deal with the menace of fake news. You can give suggestions to the government," he told the gathering.
Javadekar also spoke about how press freedom was curbed during the British rule and the Emergency in 1975.
He said as a student activist of the RSS-affiliated Akhil Bharatiya Vidyarthi Parishad, he had fought for press freedom during the Emergency and had even gone to jail.
"Received a forward that looks too good to be true!!! or maybe came across a piece of news that you want verified !! Send it across and we will Fact Check it for you, no questions asked," the Ministry of Information and Broadcasting said on its official Twitter handle.
"Ever wondered if a WhatsApp forward is true or just fake news? Or if a tweet/FB (Facebook) post is real? Fret no more!" the Ministry said in the banner along with the hashtag 'PIB Fact Check'.
The Information and Broadcasting Ministry also urged the people to email snapshots of any "dubious material" they come across on any platform including social media and they will get it checked, but added that only material related to government ministries, departments and schemes will be fact-checked.
On a number of occasions, Information and Broadcasting Minister Prakash Javadekar has called for combating fake news, terming it more "dangerous" than "paid news".
(IANS)
The report claimed that a shutdown had been announced in the Byasanagar and Chandikhol areas of Jajpur from 8 pm on June 30 to 8 pm on July 6 in view of the Covid-19 pandemic.
OTV has issued a clarification that the report is false and that no such news has been aired by the news channel. The graphics of an old news item aired by OTV were craftily edited and used in the fake news.
The news channel appealed to its viewers not to believe such fake news and not to forward it to anyone on WhatsApp or other social media platforms.
It may be mentioned that spreading such fake news during the Covid-19 pandemic is a punishable offence under the Epidemic Act.
This is not the first time such misinformation has been spread using the name of OTV.
On May 1, Jajpur Police had arrested Himansu Baral, the prime accused who spread rumours about a Jajpur shutdown by posting fake news using the OTV logo and graphics on social media. A case was registered against him U/S 188, 269, 270, 505(b), 34 (IPC), 3 ED Act, 54 DM Act.
Ahead of the Patkura poll last year, a fake video clipping with OTV logo purportedly showing a clash between supporters of BJP national vice president Jay Panda and party candidate Bijoy Mohapatra was circulated on social media.
Similarly, Commissionerate Police had detained a youth on charges of circulating fake news on Odisha Chief Minister Naveen Patnaik’s health by misusing the logo of OTV on social media in August, 2019.
(Edited By Suryakant Jena)
Panic gripped people as fake news of a shutdown in Paralakhemundi and the block headquarters from July 28 to July 30 spread like wildfire yesterday.
This morning, acting without any verified information, thousands of people rushed to markets to buy essentials. However, the district administration swung into action, and both the Gajapati district collector and the SP rushed to the markets to allay panic among people.
Later, speaking to the media, Gajapati SP Tapan Kumar Patnaik said, "We will investigate the matter and trace the person who spread such hoax. We will definitely identify them and take stringent action. I appeal people not to believe any such fake news."
(Photo: Gajapati SP Tapan Kumar Patnaik)
He further said that people need not panic, as the administration will inform them well in advance about any shutdown or lockdown, should it have any such plans in future, taking into account the COVID-19 situation in the area.
#FakeNewsAlert Don't believe this 'FAKE' forwarded message on #Shutdown extension! There is NO such official order extending the shutdown in #Cuttack city (CMC Area). pic.twitter.com/sujp2o3EP4
— CMC,Cuttack (@CMCCuttack) July 27, 2020
It is not the first time that the OTV graphics and logo have been misused for such fake news regarding shutdowns during the COVID-19 pandemic.
After coming across such a morphed and fake piece of news, the Cuttack Municipal Corporation (CMC) today tweeted, “Don't believe this 'FAKE' forwarded message on #Shutdown extension! There is NO such official order extending the shutdown in #Cuttack city (CMC Area).”
On July 23, a fake news snapshot with the OTV logo, claiming the arrest of three individuals from Bhubaneswar for sharing fake messages about a flour brand being infected with the COVID-19 virus, had gone viral.
This picture carrying OTV Twitter handle logo is FAKE. We request our viewers not to believe/forward/share such FAKE news
OTV is in the process of tracking the source of such fake pictures & action will be taken with help of police against such mischief mongers pic.twitter.com/s1NSxuRdUk
— OTV (@otvnews) July 23, 2020
Earlier, on July 9, a similar piece of fake news using the OTV logo was found circulating on social media. In another instance, on June 30, one more fake picture prepared using the OTV logo and graphics, regarding a shutdown in some areas of Jajpur district, was found doing the rounds on social media platforms. Later, Jajpur Police arrested Himansu Baral, the prime accused who spread rumours about the Jajpur shutdown by posting fake news using the OTV logo and graphics on social media.
#FakeNewsAlert : This news plate being circulated with OTV logo is fake. We would request our viewers not to believe in such fake news or forward it to anyone.
Anyone found peddling & circulating such fake news will be traced and action initiated with help of police pic.twitter.com/J03lL8MATf
— OTV (@otvnews) July 9, 2020
On May 2, two youths were detained by police at Buden Police Station in Bargarh district for spreading rumours regarding the spread of COVID-19 by posting fake news using the OTV logo and graphics on social media.
Jajpur Police arrests one Himansu Baral, the prime accused who created rumours on Jajpur shutdown by posting fake news using OTV logo & graphics on social media. Case registered against him U/S 188, 269, 270, 505 (b), 34 (IPC), 3 ED Act, 54 DM Act, informs Jajpur SP#COVID19 pic.twitter.com/3vEdoP9hNx
— OTV (@otvnews) May 1, 2020
OTV has continuously alerted its users and subscribers not to believe, forward or share such fake news. At the same time, OTV is making every effort to track the sources of such fake pictures and news, and has initiated, and will continue to initiate, action with the help of police against such mischief mongers.
The accused, identified as Niranjan Senapati of Khurda, was reportedly stealing various OTV content without any authorisation or consent.
An FIR in this connection was lodged by the OTV Network at Infocity Police Station, citing the unauthorised uploading of exclusive digital content from the network's OTT platform, Tarang PLUS, to a YouTube channel in infringement of copyright.
The accused youth said, “I uploaded Tarang’s Tara Tarini TV show episodes on my YouTube channel when I saw many other channels doing so without facing any action. But my channel was suspended on charges of copyright infringement, so I urge others not to do any such mistakes.”
Moreover, with frequent circulation of fake news & hoaxes with morphed OTV logo and graphics on social media, the Commissionerate Police has assured that exemplary action will be taken against people found involved in such unscrupulous acts. “A case has been registered for posting such unethical content in the name of OTV. Action will be taken and the culprits will be nabbed,” said Bhubaneswar DCP, Umashankar Dash.
Speaking in this connection, Ranjan Satpathy, Creative Head of the Odisha Television Limited Network, said the arrest of the youth for piracy is a warning to all those who are blatantly using the network's exclusive content on their channels.
“Despite awareness, those who are still uploading our content don’t understand that it is leading to huge revenue losses. We want to make it clear that they should immediately stop such acts or else face legal action,” said Satpathy.
(Edited By Bikram Keshari Jena)
Badu said Dey is being treated at a private hospital in CDA area of Cuttack for illness and his health condition remains stable.
He slammed Odisha Law Minister Pratap Jena for reportedly spreading false news on social media about Dey's death.
“The fake news that went viral on social media made many of his supporters angry and uncomfortable. The person who shared the news should have thought about the consequence before writing such things on social media,” Badu said.
No comments could be obtained from Pratap Jena in this connection.
(Edited By Ramakanta Biswas)
"Social media is flooded with misinformation. Many a time, the information comes on social media and even on mainstream media that is not true. So, you have to fight against the misinformation. It is now the greatest enemy to social harmony. If we won't remain alert, it will destroy our society and our unique identity," said Patnaik during BJD's state executive meeting.
The BJD chief, who attended the meeting through video conference, called upon the students' wing to fight the fake news and said, "It's your duty to fight against the misinformation and bring the truth before people."
The BJD chief recently said that right from Sarpanchs to Chairmen at Block levels, Zilla Parishad Presidents, MLAs, Ministers including the Chief Minister and government officials- from entry-level to the State Chief Secretary- will make their property details public.
“Corruption is the biggest enemy of development and the Odisha government is giving adequate focus on transparency. All such efforts will go in vain if we do not give emphasis on transparency and eliminate corruption," said the BJD supremo adding that action has been taken against 91 corrupt officials in the State in the past one year.
"There is freedom of speech but Article 19A says that this is subject to reasonable restrictions," said Communications, Electronics & Information Technology Minister Ravi Shankar Prasad.
"We respect social media a lot, it has empowered common people. Social media has a big role in the Digital India programme. However, if social media are misused to spread fake news and violence, then action will be taken on the misuse of social media in India whether Twitter or else," said Prasad.
He said all the social media platforms will have to adhere to the constitution of India. The Indian constitution allows criticism of the government and the Prime Minister, but spreading fake news will not be allowed, he said.
Prasad said, "We have flagged certain issues to Twitter and social media has to take into consideration of the Indian Laws if they want to do business in the country.
"Different parameters can't be allowed for different countries. It can't be different for the Capitol Hill incident and some other parameters and for the Red Fort incident."
The Indian government on Wednesday expressed displeasure over Twitter's delayed compliance on its order to remove "provocative" tweets amid the ongoing farmers' protests.
IT Secretary Ajay Prakash Sawhney expressed Centre's displeasure to Twitter's management.
An official statement issued late Wednesday night said the Secretary told Monique Meche, Vice President, Global Public Policy and Jim Baker, Deputy General Counsel and Vice President Legal of Twitter that delayed compliance to lawfully passed orders are "meaningless".
"Lawfully passed orders are binding on any business entity. They must be obeyed immediately. If they are executed days later, it becomes meaningless," Sawhney was quoted as saying in the statement.
The official expressed his deep disappointment to Twitter leadership about the manner in which Twitter has unwillingly, grudgingly and with great delay complied with the substantial parts of the order, the statement said.
He also told Twitter that in India its Constitution and laws are supreme.
A bench headed by Chief Justice S.A. Bobde issued the notice on a plea by BJP leader Vinit Goenka seeking a mechanism to check content on Twitter spreading hatred through fake news and instigative content through bogus accounts. The top court tagged the matter with similar pending petitions seeking social media regulation.
The plea, filed through advocate Ashwani Kumar Dubey, argued that the total number of Twitter handles in India is currently around 35 million and the total number of Facebook accounts is 350 million, and that experts say around 10 per cent of Twitter handles (3.5 million) and 10 per cent of Facebook accounts (35 million) are duplicate, bogus or fake.
The petitioner argued that these fake Twitter handles and bogus Facebook accounts have been created in the names of eminent people and high dignitaries, including the President of India, the Vice President of India, the Prime Minister of India, Chief Ministers, Cabinet Ministers, the Chief Justice of India and judges of the Supreme Court and High Courts.
Goenka said that these fake Twitter handles and Facebook accounts use real photos of constitutional authorities and eminent citizens. As a result, the common man relies upon the messages published from these Twitter handles and Facebook accounts.
The petitioner insisted that fake news is the root cause of many riots, including the one in Delhi earlier this year. Moreover, bogus accounts are used to promote communalism, which is a threat to the unity of the country. The plea argued that political parties use fake social media accounts to tarnish the image of opponents during elections, as well as for self-promotion.
The plea urged the apex court to direct the Centre, the Ministry of Information & Broadcasting, the Ministry of Law & Justice, and Twitter India to set up a mechanism to act against social media accounts spreading hateful, fake, instigative and other content that violates the law of the country.
The plea also sought directions to frame a law enabling action against Twitter and its representatives in India for wilfully abetting and promoting anti-India tweets, and penalising them.
The petitioner cited that the Ministry of Home Affairs banned Sikhs for Justice (SFJ) under the Unlawful Activities (Prevention) Act on July 10, 2019, yet the banned organisation continues to have an active presence on Twitter, promoting its agenda. The petitioner argued that despite a representation being sent to the authority concerned, no action has been taken against the social media giant.
The Facebook page ‘Political Drama’ claimed in a post that Lekhashree made a shocking statement when asked about the rising cooking gas price. “Women should cook fewer items, that means only rice and dal,” the Facebook page quoted Lekhashree as saying.
However, the BJP leader denied making any such statement and alleged that such fake news was intended to blackmail people and extort money from them.
“I have never made such a statement. There is a disturbing trend of circulating fake news with intent to blackmail & extortion. I request immediate action from the police,” Lekhashree tweeted.
.@cpbbsrctc @CIDOdisha @DGPOdisha
This Facebook page "ପଲିଟିକାଲ ଡ୍ରାମା" is promoting fake and derogatory news against me. I have never made such statement. There is a disturbing trend of circulating fake news with intent to blackmail & extortion.
Request immediate action. pic.twitter.com/kEbU5aDM5l— Lekha Samantsinghar (@DrLekhaShree) March 13, 2021
Speaking to reporters on the issue, she said, “Lies have been spread against me and ill attempts made to defame me. I strongly condemn such efforts. I have tweeted about the matter, tagging the Director General of Police, the Crime Branch cyber cell and the Twin City Police Commissionerate.”
“I request the officials to treat the tweet as an FIR, conduct a probe into the matter and take stringent action against those behind it,” she added.
(Edited By Suryakant Jena)