Why We Should Support Snapchat’s Good Old-Fashioned Approach to Content Quality Control
Snapchat, best known for its self-deleting messaging service and augmented-reality selfie filters, is investing in becoming a reliable source of news by employing real-life editors and journalists. Facebook, meanwhile, continues to eliminate human resources and replace them with artificial intelligence (AI): a dangerous precedent and a portent of things to come.
In my short story The Cort Syndrome, by the year 2035 the journalism and marketing professions had ceased to exist, at least in the way most of us would remember them. ‘Writers had been replaced by algorithm-wielding computers and, in an age of total automation and widespread connectivity, marketers had been disconnected, cut out of the information loop.
By 2030, 100 billion things were connected, yielding $40 trillion in new revenues, most of which accrued to private-sector corporations such as iScribe, which had found a way to marry pattern-recognition software with the revolution in natural language generation to create algorithms that resembled a writer. Within two years iScribe’s computers were providing most of the world’s online news platforms with content. Within five years iScribe had achieved world domination. Ninety-five per cent of online news content was being churned out by iScribe.’
Of course, this is only my interpretation of a future dominated by AI, but recent events at Facebook, which has become the world’s largest distributor of news, suggest it may only be a matter of time before scientists, enabled and motivated by Facebook’s billion-dollar research budget, crack the code to machine thinking, judgement, problem solving, openness, self-control and self-analysis: everything humans could be.
Like UIX Corp, a company born out of iScribe in The Cort Syndrome, Facebook’s top-secret machine intelligence unit at Menlo Park in California arguably has the potential to eventually develop next-generation humanoid robots with levels of emotional intelligence greater than those of many humans.
Yann LeCun, the brilliant, some would say obsessed, director of AI research at Facebook AI Research (FAIR), is pioneering advances in machine intelligence such as EntNet, a system capable of reasoning about text. He has the full backing of megalomaniac Facebook founder Mark Zuckerberg, whose vision of a world without poverty, war, angst and solitude would be made possible by AI controls, to advance the field of machine intelligence at all costs.
Fellow tech billionaire Elon Musk (Tesla, SpaceX) dismisses Zuckerberg’s understanding of the threat posed by artificial intelligence as “limited,” and warns that AI development needs to be regulated before it is too late, adding that the technology represents a “fundamental risk to the existence of civilisation.” Zuckerberg, however, remains defiant, and his management team continues to put AI development before the needs of human beings, including their own people.
Twelve months ago, Facebook’s entire trending team, whose job it was to choose which links would appear in the “trending” box on the right-hand side of a user’s newsfeed, was fired without notice in a meeting with a security guard present.
The 40 former employees were replaced by machine intelligence: a fully automated trending module designed to tackle so-called fake news more effectively and efficiently than a group of humans. The module soon slipped up, pushing out an article to Facebook users entitled “BREAKING: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.” Megyn Kelly is still employed by Fox News and has not endorsed Hillary Clinton for president.
Facebook removed the offending article, which was published by a website called Ending the Fed and linked to another little-known site, Conservative 101. Under Facebook’s old guidelines, human news curators stuck to a list of trusted media sources. Neither of these sources was on that list.
In stark contrast, Snapchat is increasing its in-house team of journalists, who play a key role in helping the smaller social network cover the news accurately. For example, Snapchat told the BBC it had fact-checked user-generated coverage of the recent Charlottesville protests with local police before highlighting the posts.
Snapchat’s audience may be a fraction of Facebook’s (173 million daily active users compared with two billion monthly ones), but its reputation as a publisher of quality, trustworthy content is skyrocketing. CNN has recently started streaming a daily show on the platform, and several other mainstream media outlets, including The New York Times, The Wall Street Journal, The Washington Post and The Daily Telegraph, are publishing there too.
Snapchat has a bona fide news team: experienced journalists who work with editorial staff from contributing media outlets to review content that comes into the platform, evaluate it, determine whether it is accurate and relevant, and decide how to add context. Facebook’s former trending team, by contrast, had little journalistic experience, if any at all.
Whether Snapchat’s human news team is a cost-effective way of delivering news and stories, however, is a different matter. Any human writer will tell you that producing a high-quality story or news piece takes time.
That is why the combination of robotics, automation, AI and machine learning will outperform humans in most activities, including writing stories, in the next 20 to 30 years. It is also why Google is funding a $1 million AI-based service called RADAR (Reporters And Data And Robots), which harnesses natural language generation (NLG) to create 30,000 news stories every month; why South Korea’s Yonhap news agency has introduced an automated reporting system, “Soccerbot”, to produce news on football games; and why the Associated Press and Thomson Reuters are using machine-learning algorithms to write stories.
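To make that concrete, automated reporting of this kind typically turns structured data (scores, scorers, timings) into readable copy. The sketch below is a hypothetical, heavily simplified illustration of a template-based “robot reporter” in the spirit of Soccerbot; the team names, player names and function names are invented for the example, and real systems such as RADAR rely on far richer NLG pipelines.

```python
# Hypothetical sketch: template-based natural language generation for a
# football match report. Not the actual Soccerbot or RADAR implementation.

from dataclasses import dataclass, field


@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int
    scorers: list = field(default_factory=list)  # list of (player, minute) tuples


def write_report(match: MatchResult) -> str:
    """Turn structured match data into a short, readable news item."""
    if match.home_goals > match.away_goals:
        headline = f"{match.home} beat {match.away} {match.home_goals}-{match.away_goals}"
    elif match.home_goals < match.away_goals:
        headline = f"{match.away} win {match.away_goals}-{match.home_goals} away at {match.home}"
    else:
        headline = f"{match.home} and {match.away} draw {match.home_goals}-{match.away_goals}"

    if match.scorers:
        scorer_text = ", ".join(f"{player} ({minute}')" for player, minute in match.scorers)
        body = f"Goals came from {scorer_text}."
    else:
        body = "Neither side found the net."

    return f"{headline}. {body}"


if __name__ == "__main__":
    result = MatchResult("Seoul FC", "Busan IPark", 2, 1,
                         [("Kim", 23), ("Lee", 58), ("Park", 77)])
    print(write_report(result))
```

The appeal to publishers is obvious: once the templates exist, a system like this can file thousands of such items an hour at near-zero marginal cost, which is precisely the economics a human newsroom cannot match.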
In The Cort Syndrome, set in 2035, I write about an “upside to this new technological world order, where the machines had virtually eradicated humankind’s propensity for self-destruction. The manufacturing, agricultural and transportation sectors of the economy were almost entirely automated and employed very few humans. Across the world, poverty, war and disease were almost non-existent thanks to technology alleviating want. Computer intelligence had become superior to human intelligence in all areas.
In many ways, the world was a better place. The concept of life expectancy had become irrelevant to humans and machines thanks to medical immortality and advanced computers. If the scientists employed by the world’s wealthiest corporations had their way, machines would eventually, within another ten years, attain equal legal status with humans. Humans and machines would merge in the physical and mental realms, thanks to the development of cybernetic brain implants enabling humans to fuse their minds with robot humanoids.”
The consequence would be that clear distinctions between humans and machines would no longer exist, and that, I believe, should not be allowed to happen.
As Steve Jobs once famously said, “It’s not faith in technology, it’s faith in people.”
Click here to read The Cort Syndrome
Originally published at https://www.linkedin.com on August 31, 2017.