Nyhus Lede
Digging through the headlines to give you the news, ideas and best practices we love.
Nov 2017
We, like much of the world, have been watching reports of Russian election tampering closely. Maybe even obsessively. The sheer volume of information can make it difficult to understand just how the United States and its allies have been affected.
Our aim in this month’s newsletter is to focus entirely on this issue in order to improve understanding of what happened and what it means for all of us.
Certainly, foreign meddling can affect how we vote. But this issue also has broad implications for how we consume information and share stories. How do we know what’s true? How will stricter social media regulation impact our usage? Who, exactly, is responsible for preventing foreign influence in the future? Let’s explore.
First, outdated laws created an opening…
In 2015, Mother Jones reporter Russ Choma made a prediction that online political ads would skyrocket during the 2016 presidential election, thanks in large part to “antiquated campaign finance laws.” These laws, designed with billboard, radio and newspaper ads in mind, were no match for social media. Virtually anyone can break the internet with viral videos of baby goats in pajamas—or salacious and misleading attack ads on political opponents—and remain completely anonymous.
…that foreign bots and trolls exploited…
The scope of the damage becomes more apparent almost daily. We’ve already added “bots,” “trolls” and “fake news” to our vocabularies. If you’re unclear on just how these digital actors worked together to undermine our election, take a few minutes to watch this video. It’s estimated that Russian influence reached 126 million users…on Facebook alone.
So Congress called Big Tech to the Hill to get some answers.
In late October, representatives from Facebook, Google and Twitter appeared before Congress to discuss how Russian entities were able to reach so many users, and why their online trolling spree was never reined in. With the world watching, the Big Three admitted their open systems and automated ad platforms weren’t without risk. One analyst called those risks “existential.”
Tech company approaches are a bit all over the place.
The tech companies are scrambling to respond. No one expects an instant fix to what we see as inadequate campaign finance laws and the spread of fake or misleading news. But businesses must start somewhere. Twitter is actively trying to remove some of the thousands of bots on its platform, yet you can still buy armies of bots to retweet your message. President Trump might have 13 million fake followers. Facebook will add a disclaimer to identify some political advertisements. And those “dark posts” placed on Facebook anonymously? Facebook will now require them to live online, like an archive, on their owners’ pages, improving transparency.
So what should digital marketers do next?
The internet is essential, powerful and—sometimes—dangerous. Fake news. Bots. Trolls. Digital marketers have been working hard to protect their brands and clients. We believe sanity online can be restored. It’s time for us to rebuild trust with care and genuine engagement. In our blog, we offer ideas to get you started.
Sign up to receive the Nyhus Lede using the submit form in the footer below.