This article was originally published at Activist Post.
We have previously covered the many weighty claims made by the progenitors of A.I. algorithms who insist that their technology can stop crime before it happens. Similar predictive A.I. is increasingly being used to stop the spread of misinformation, disinformation, and general “fake news” by analyzing trends in behavior and language used across social media.
However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular — Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia — noted that, from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”
Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that — similar to their real-world counterparts in street-level pre-crime — these systems most likely will be rolled out across social media (if they haven’t been already) regardless, until their inherent flaws, biases, and their own brand of disinformation are further exposed.
AI can predict Twitter users likely to spread disinformation before they do it
A new artificial-intelligence-based algorithm that can accurately predict which Twitter users will spread disinformation before they actually do it has been developed by researchers from the University of Sheffield.
A team of researchers, led by Yida Mu and Dr. Nikos Aletras from the University’s Department of Computer Science, has developed a method for predicting whether a social media user is likely to share content from unreliable news sources. Their findings have been published in the journal PeerJ.
The researchers analyzed over 1 million tweets from approximately 6,200 Twitter users by developing new natural language processing methods – ways to help computers process and understand huge amounts of language data. All of the tweets studied were publicly available for anyone to see on the platform.
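To make that description concrete: a common way to turn raw tweet text into numbers a model can learn from is a bag-of-words representation such as TF-IDF. The sketch below is a generic stand-in for "natural language processing methods," not the paper's actual pipeline; the example tweets and every parameter choice are illustrative assumptions.

```python
# Hypothetical sketch of a text-processing step, assuming a standard
# TF-IDF bag-of-words in place of the paper's actual NLP methods.
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [  # made-up examples, not data from the study
    "The government and the media are hiding the truth",
    "So excited for my birthday, gonna celebrate with friends",
]

vectorizer = TfidfVectorizer(
    lowercase=True,
    stop_words="english",  # drop very common function words
    ngram_range=(1, 2),    # single words plus two-word phrases
    max_features=50_000,   # cap the vocabulary for 1M+ tweets
)
X = vectorizer.fit_transform(tweets)  # sparse matrix: tweets x vocabulary
print(X.shape)
```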
Twitter users were grouped into two categories as part of the study – those who have shared unreliable news sources and those who only share stories from reliable news sources. The data was used to train a machine-learning algorithm that can predict with 79.7 percent accuracy whether a user will repost content from unreliable sources sometime in the future.
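A minimal sketch of that setup might look like the following, assuming each user's tweets are aggregated into one text and labelled 1 (has shared unreliable sources) or 0 (reliable only). The toy data, the logistic-regression model, and the train/test split are all assumptions for illustration; the authors' actual features and classifier may differ.

```python
# Hedged sketch: binary classification of users from their tweet text.
# Toy data and model choice are illustrative, not the study's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# One aggregated tweet history per user (made-up), with labels:
# 1 = has shared unreliable sources, 0 = only reliable sources.
user_texts = [
    "government media liberal politics Islam Israel",
    "mood wanna gonna excited birthday friends",
    "media government religion politics hostile takes",
    "so excited for the weekend gonna see friends",
] * 10  # repeat so the train/test split has enough rows
user_labels = [1, 0, 1, 0] * 10

X = TfidfVectorizer().fit_transform(user_texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, user_labels, test_size=0.2, random_state=42, stratify=user_labels
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```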
Results from the study found that Twitter users who shared stories from unreliable sources are more likely to tweet about either politics or religion and to use impolite language. They often posted tweets containing words such as ‘liberal’, ‘government’, and ‘media’, and their tweets frequently related to politics in the Middle East and Islam, often mentioning ‘Islam’ or ‘Israel’.
In contrast, the study found that Twitter users who shared stories from reliable news sources often tweeted about their personal life, such as their emotions and interactions with friends. This group of users often posted tweets with words such as ‘mood’, ‘wanna’, ‘gonna’, ‘I’ll’, ‘excited’, and ‘birthday’.
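One common way to surface word lists like those above is to inspect the coefficients of a trained linear classifier: strongly positive weights point to one group, strongly negative weights to the other. The sketch below illustrates the idea on toy data; it is an assumption about the kind of analysis involved, not a reproduction of the authors' statistics.

```python
# Hedged sketch: reading off the words most associated with each group
# from a linear model's coefficients (toy data, illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "government media liberal islam israel politics",  # class 1 (toy)
    "mood wanna gonna excited birthday friends",       # class 0 (toy)
] * 5
labels = [1, 0] * 5

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

terms = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])  # ascending: class-0 words first
print("most 'reliable-sharer' words:  ", terms[order[:5]])
print("most 'unreliable-sharer' words:", terms[order[-5:]])
```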
Findings from the study could help social media companies such as Twitter and Facebook develop ways to tackle the spread of disinformation online. They could also help social scientists and psychologists improve their understanding of such user behavior on a large scale.
Dr. Nikos Aletras, Lecturer in Natural Language Processing at the University of Sheffield, said:
Social media has become one of the most popular ways that people access the news, with millions of users turning to platforms such as Twitter and Facebook every day to find out about key events that are happening both at home and around the world. However, social media has become the primary platform for spreading disinformation, which is having a huge impact on society and can influence people’s judgement of what is happening in the world around them.
As part of our study, we identified certain trends in user behaviour that could help with those efforts – for example, we found that users who are most likely to share news stories from unreliable sources often tweet about politics or religion, whereas those who share stories from reliable news sources often tweeted about their personal lives.
We also found that the correlation between the use of impolite language and the spread of unreliable content can be attributed to high online political hostility.
Yida Mu, a Ph.D. student at the University of Sheffield, said:
Studying and analysing the behaviour of users sharing content from unreliable news sources can help social media platforms to prevent the spread of fake news at the user level, complementing existing fact-checking methods that work on the post or the news source level.
The study, Identifying Twitter users who repost unreliable news sources with linguistic information, is published in PeerJ. To access the paper in full, visit: https://doi.org/10.7717/peerj-cs.325
That doesn’t sound like any A.I. that I ever heard of.
Sounds more like some Yankee Queer.
So people who are concerned with the real world are likely to post “fake” news, but people who tweet about their “emotions” and interactions with friends are reliable.
Riiiiiiiight. Liberals are so inside out and upside down and backwards I can’t tell if they’re just taking the piss or if they’re actually serious.
I think it’s a legitimate discussion: what is subjective (opinion) vs. objective (palpable reality)?
But narcissistic personalities will never agree on the basic facts or a customary level of politeness. Radicals will never agree to a baseline standard for reality, and neither will the AI.
https://en.wikipedia.org/wiki/Gaslighting
“Gaslighting depends on “first convincing the victim that [the victim’s] thinking is distorted and secondly persuading [the victim] that the victimizer’s ideas are the correct and true ones”. Gaslighting induces cognitive dissonance in the victim, “often quite emotionally charged cognitive dissonance”, and makes the victim question their own thinking, perception, and reality testing, and thereby tends to evoke in them low self-esteem and disturbing ideas and affects, and may facilitate development of confusion, anxiety, depression, and in some extreme cases, even psychosis. After the victim loses confidence in their mental capacities and develops a sense of learned helplessness, they become more susceptible to the victimizer’s control. Victims tend to be people with less power and authority.”
Speaking of learned helplessness, what does most online discussion empower you to do, palpably, objectively? Did you gain any useful leverage from the discussion?
I would like to have a quiet and productive year.
I can guarantee you that while you are still processing (reality-checking) the next troll, he will have made 10 or 20 more top posts and gone on with his life, victoriously. We are still stuck on processing people who are not entitled to an opinion.
“This group of users often posted tweets with words such as ‘mood’, ‘wanna’, ‘gonna’, ‘I’ll’, ‘excited’, and ‘birthday’.”
I got a good laugh out of that one. That’s just like those stunted-development adult children and their adult-preschool style infographics. The permanent lower caste of the future is going to be composed of such babies, their critical thinking faculties forever crippled by childhood traumas induced by state-funded educational facilities.
After sleeping on it, I blame the machine for spreading disinfo and accusing other people. As an adversarial presence, at the scene of every thought crime, it has the motive and opportunity.