On Fake News

April 1, 2021

I have a very fun story on this topic, actually. Here on the Russian internet we have a news website called Panorama. Nothing on this website is true, except the tiny, almost invisible words “satirical periodical” under the logo. Not only do they post fake news, they deliberately invent the most absurd, over-the-top headlines and body copy.

That’s not what’s funny about this story, though. What’s really funny is that, more often than not, people on social media mistake the news from Panorama for the real thing and repost it in all seriousness. It happens so often that, looking at some arbitrary repost with an over-the-top headline, people ask in the comments whether it is the crazy imagination of one of the Panorama authors or something that actually happened.

To be frank, I would really like to say that it also happens the other way around, that some story from that website eventually came true, but no: so far, as far as I know, they are too over-the-top for our reality to bear.

Back in 2008 there was one interesting scandal related to the topic. A group of scientists used SCIgen, a program invented at MIT that generates pseudoscientific gibberish (with figures and formulas!). They produced a paper with this tool, ran it through machine translation (which, believe me, was nowhere near as good 13 years ago as it is nowadays), and submitted the result for publication in a scientific journal. The paper was published successfully, which some time later amused everybody except the journal’s editorial team.

All this raises some interesting questions on truth and faith. What should one believe in the age of the web?

I won’t dare speculate on how people solved trust issues before the printing press existed. I can guess that afterwards, once the concept of “books” appeared, it was safe to assume that a literate person was telling the truth. Someone who had spent so much time and effort learning to write and to construct comprehensible text was probably ethical enough not to lie deliberately. Then the bar was raised slightly when newspapers appeared. Written forgery may have been commonplace, but at least newspaper publishers were trusted because of their reputation. Publishing a paperback book took significant effort too, so the person doing it was probably not a phony either.

Even after the web appeared, there was still hope left. Making and publishing a website was still an effort, so even though the majority of websites, especially the “news” ones, were not to be trusted to the same degree as established, respectable newspapers, it was still fairly safe to assume that nobody would waste their time setting all that up just to post some bullshit.

I watched the birth of Web 2.0. I was there, in late 2010 and early 2011, when the very concept was finalized and the technologies supporting it truly sprawled. The smartphone tsunami had just started swallowing up the world, and almost nobody could believe that something more dynamic than phpBB-based forums and imageboards was possible.

“Proofpic or it didn’t happen” was born on the imageboards. Anonymous knew from the very beginning how dangerous it is to let anybody post anything. And here we are, in 2021, completely buried under the flood of information billions of users dump onto us from their social media pages.

How should a layman react to the news that Bitcoin is on the rise? Should we buy? Should we care? Which source of this information is to be trusted: somebody claiming to be an insider, or a respectable financial institution? Is an “insider” even a thing in the world of cryptocurrency? If a family member actively promotes electric cars to you because of an educational video they watched, it becomes a pretty complicated task to decide whether you should buy into this information or whether it’s better to remain sceptical for another decade. Does that video include important information about the infrastructure required to support such transportation? Does it gloss over the environmental impact of making the batteries? Do you even know such details exist in the first place?

I was but a teenager when an old acquaintance of mine came up with an idea of enormous importance to him, which he proudly presented to me, a diehard man of truth back then: “Is it even a lie if the only one who knows the truth is you?” He didn’t sway my morals then, and not once since have I seen a web of lies so elaborate that it could never be untangled, but the mere fact that it may be possible still haunts me to this day.

I see myself as pragmatic and problem-oriented by nature. What matters to me is whether a thing helps me make decisions or hinders my plans. I don’t even normally watch the “news”. To protect oneself from fake and biased information, one would have to cross-check every fact one encounters and re-test every discovery oneself. I don’t intend to waste my resources doing that.

Instead, what I do, and what I suggest everybody do as well, is set a baseline of belief: a set of facts which you simply take for granted and don’t waste effort questioning, unless undeniable evidence appears that something in your baseline is false. In that case, you incorporate the correction into your baseline and move forward.

This may look exactly like the kind of thing people use to discredit other points of view and start flame wars, but my intention is to reduce the mental burden of decision making, as long as the baseline actually helps me solve problems.

I don’t necessarily believe in crypto, for example. I save enormous amounts of mental energy by simply ignoring all information about it, automatically considering it a biased fraud. That lets me concentrate on safer investments, which are numerous enough as it is. All in all, it’s not a part of my life: I don’t work with cryptocurrencies professionally, nothing around me accepts payments in this form, and I have no savings in crypto. But if, say, ETH becomes as ubiquitous as smartphones one day, or quantum computing arrives and cryptocurrencies survive it, it would be unwise not to take them into account.

To give another example, I have decided for myself that the human mind is something more than a machine replicable by technical means. This helps me not worry about AI replacing my job, or anybody else’s for that matter, and ignore all the fear-mongering articles on the topic, sparing me a lot of mental health as a result. I would be really happy, though, for an AI to replace the jobs that explicitly ask to be replaced: mundane, repetitive tasks a robot could be tailored to perform. Otherwise, I think we are at least a whole century away from merely passing the Turing test. The most advanced chatbot in the world, “Alice” from Yandex, is still leagues away from how a living human behaves. But if something truly groundbreaking appears that completely shifts our understanding of AI, and it actually demonstrates its usefulness in solving problems, I’ll happily incorporate it into my life.

Incidentally, despite the glaring similarities with religious belief, this is exactly how scientific progress works.
