Sunday, January 28, 2018

Why Should Russian Computer Bots Scare Us?



Robot

Not all persuasion involves people giving speeches. Computers also express political opinions, and may have affected the 2016 United States election. Personally, I want to know what people think, but don't care what robots think.

Russian Bots Are Out There
Twitter recently announced that computer bots linked to Russian accounts retweeted Donald Trump’s tweets about 470,000 times near the end of the 2016 presidential campaign. Russian or Russian-linked accounts were responsible for at least 48% of WikiLeaks’ activity at that time. Russian bots posted more than 2 million tweets during the campaign’s closing month, and those tweets were viewed about 455 million times. Accounts tied to the Russian government also made personal contact with a great many news organizations, and over 100 news agencies published stories that were at least partly based on this activity. Twitter is now trying to suppress much of this activity, albeit a bit late.

Similarly, over 60,000 Americans responded to Facebook events that Russian computer bots had created about the campaign. More than one out of every twenty-five retweets of Trump’s messages was the work of Russian systems. Altogether, an estimated 120 million or more Americans received information from online Russian disinformation campaigns.

Congressman Devin Nunes’s Republican staff has prepared a secret memo purporting to give evidence of FBI misconduct in the Trump-Russia investigation. Should this memo, whose partisanship is obvious and whose credibility is questionable, be released to the public? The Twitter hashtag “ReleaseTheMemo” has been spread by about 600 Russian-linked accounts. The obvious purpose is to discredit the FBI investigation and thus protect Mr. Trump. One also sees readers’ comments that mix the Roman and Cyrillic alphabets; it is hard not to suspect that Russian bots are behind these.

Bots Don't Bother Conservatives - Yet!
Yet, few conservatives seem troubled. Writing on a conservative website, David Harsanyi questions the “veracity” of a report that Russian bots were behind “ReleaseTheMemo.” He asks, “has anyone yet produced a single voter who lost his free will during the 2016 election because he had a Twitter interaction with an employee of a St. Petersburg troll farm? Or do voters tend to seek out the stories that back their own worldviews?” Fair questions. Should the Russian bots worry us? Yes, they should, and here's why:

Bots, Which Are Not Real People, Persuade Real People
Bots can be persuasive. 

First, people are vulnerable to the bandwagon effect. When pro-Trump messages flood your Twitter account by the zillion, it is hard to avoid feeling that Trump is on a public relations roll. If many of those messages are actually fakes produced by a foreign government, you are jumping on a bandwagon that isn’t real. It just looks real.

Second, according to cognitive dissonance theory, people whose beliefs are disproven by events are reluctant to change those beliefs. Instead, under certain conditions, they seek to evangelize nonbelievers to their point of view. In other words, as long as many people seem to agree with you, you are likely to keep believing that unlikely or silly things are actually true. You're OK with that, because you have social support from other people who believe the same silly things. However, if those other people are just bots, your social support exists only in your own imagination. It isn't real.

So, no, people don’t lose their free will when they read nonsense published by Russian bots. They can, however, get conned.

How to Detect Bots 
I generally oppose censorship. I do, however, think that people should have enough sense not to let bots persuade them. Here, from medium.com, are some ways to spot bots (a rough code sketch after the list shows how a few of them might be checked):

  1. A bot is likely to post far more often than a person could. Look at how many posts a day the account publishes. More than 50 posts a day is very suspicious. Real people don’t post that much.
  2. Bots give little personal information. Real social media accounts usually say things about the author. My accounts on Blogger, Twitter, and Facebook give personal information about me. Bots don't usually bother.
  3. Bots tend to publish posts that show little real thought; instead, they repost headlines and links.
  4. Bots are likely to publish lots of retweets and little original content.
  5. Bots that belong to a network tend to post the same content at the same time.
  6. Bots often use profile pictures copied from the Internet.
  7. Look for handles or IDs that are meaningless strings of letters or symbols rather than actual names.
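
For readers who like to tinker, here is a minimal Python sketch of how a few of these heuristics might be checked automatically. Everything in it is an illustrative assumption: the Account fields, the 50-posts-a-day and 90-percent-retweet thresholds, and the handle test are stand-ins I made up for the example, not anything Twitter itself provides.

```python
# Minimal sketch: scoring an account against a few of the heuristics above.
# The Account fields and thresholds are illustrative assumptions, not a real
# Twitter API schema; real detection would need actual account data.

from dataclasses import dataclass

@dataclass
class Account:
    posts_last_30_days: int      # total posts in the last 30 days
    retweets_last_30_days: int   # how many of those posts were retweets
    has_bio: bool                # profile includes a bio or personal details
    has_custom_photo: bool       # profile photo is not a default or stock image
    handle: str                  # e.g. "@jqx83k1z"

def bot_likelihood_score(account: Account) -> int:
    """Count how many of the bot heuristics an account triggers (0 to 4)."""
    score = 0

    # Heuristic 1: real people rarely average more than ~50 posts a day.
    if account.posts_last_30_days / 30 > 50:
        score += 1

    # Heuristics 2 and 6: little personal information, generic profile picture.
    if not account.has_bio or not account.has_custom_photo:
        score += 1

    # Heuristic 4: mostly retweets, little original content.
    if (account.posts_last_30_days > 0
            and account.retweets_last_30_days / account.posts_last_30_days > 0.9):
        score += 1

    # Heuristic 7: handle looks like a meaningless jumble of letters and digits.
    name = account.handle.lstrip("@")
    if any(ch.isdigit() for ch in name) and not any(ch in "aeiou" for ch in name.lower()):
        score += 1

    return score

# Example: an account posting 2,400 times a month, almost all retweets,
# with no bio, a stock photo, and a random-looking handle.
suspect = Account(2400, 2300, has_bio=False, has_custom_photo=False, handle="@xqzk17rp")
print(bot_likelihood_score(suspect))  # -> 4: worth a closer look
```

Even a crude score like this only flags accounts that deserve a second look; as the list suggests, no single sign is proof by itself.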
It can actually be hard to identify a bot. I have on occasion caught myself arguing with bots on Twitter. Oops! I might just be wasting my energy, since the bot isn’t a person at all. Still, maybe my replies help set the record straight against the bot’s disinformation campaign. What do you think?

Yes, political bots are dangerous, and they can have a persuasive effect. No, they don't take your free will away. Yes, you need to be vigilant not to think of them as people. 

Look at the robot at the top of this post. Doesn't it look cute and innocent? Of course! But political bots are not cute and innocent. No one creates political bots for innocent purposes. Bots look real, but they are just part of a con.

Public Domain Image from Wikimedia Commons
