Data dictating leadership?
Data-driven decision-making is increasingly at the top of a business leader’s job specification. Years of judgement, experience, and networking are now complemented by a layer of data science, one that supports moves to invest, divest, or restructure the strategic future of entire businesses. So what, then, is the tipping point between human leadership and leadership by data analytics? Where and when do we draw the line?
A complex trade-off
The mining of customer data, on the one hand, can help increase the customisation of services. It can also improve the efficiency of our marketing, delivery, and product innovation processes. On the other hand, it can be, as INSEAD has pointed out, a source of economic inequality, creating power imbalances between those who hold the data and those whose data is collected.
In this complex trade-off, how can we ensure that the economics of privacy and data will result in a future that is fair and sustainable for the next generation? Will our children become mere pawns in a game they cannot escape and are powerless to control?
In 2018, it emerged that the information of as many as 87 million Facebook users, most of them Americans, had allegedly been unethically gathered by a company called Cambridge Analytica. The story was covered by The New York Times, Forbes, and just about every other media outlet in the world.
In a Guardian interview with Cambridge Analytica’s former director turned whistleblower, and Worldwebforum 2020 speaker, Brittany Kaiser, it emerged that the data was allegedly used to target swing voters in favour of Donald Trump and the Leave.EU campaign. The company allegedly used its datasets and, in her words, “bombarded them (‘the persuadables’) through blogs, websites, articles, videos, ads, every platform you can imagine. Until they saw the world the way we wanted them to… Until they voted for our candidate.”
In her book Targeted, Brittany goes inside the secretive meetings with Trump campaign personnel and details the promises Cambridge Analytica made to win.
Data freely given by users on the social media platform in question was used to expose them to tailor-made content designed to sway their opinions about a certain person and political cause. In the 2019 Netflix documentary “The Great Hack”, it was revealed that the “Crooked Hillary” and “Lock her up” campaigns were the product of Cambridge Analytica’s efforts to shift the behaviour of voters in swing states.
Data-driven behavior change
But isn’t mass manipulation by new media as old as mass media itself? Every media technology known to man has been used to provoke paranoia, incite fear, create clamour, and toy with the emotions of the masses. Every marketing technique and tactic is essentially designed to drive behavioural change in favour of the capitalist and the investor.
How, then, is Cambridge Analytica’s use of data any different from the well-executed campaign of Obama’s team of digital experts back when he was telling the entire world, “Yes, we can!”? Was the uproar over the Cambridge Analytica scandal about the result of the elections, or about the allegedly dubious means used to procure our data?
The “persuadables”, or the undecided, are a common subset within any persona category in every industry imaginable, and so is creating a campaign to target them. When ethics are in question, though, how do we set the threshold between what is socially acceptable and what is deplorable?
Data rights are human rights
Are we outraged because we are coming to the realisation that there is a major player centralising all of our data? Is it because its power and control are far too overreaching and all-encompassing? Are we enraged because Facebook feels like a pair of velvet handcuffs? It seems we just couldn’t pass up the benefits of sharing, despite knowing we are being reduced to mere numbers in an equation where we are on the losing end.
Are we mad because we feel that we, too, should have a share in the pie? If we own the data that social platforms are selling to their clients, why are we not incentivised for sharing it?
Worldwebforum 2019 speaker Sabine Seymour’s company SUPA, for instance, is incentivising the youth with tokens. Her company produces wearable devices that gather young people’s healthcare data, and her customers can choose how to spend the tokens based on their personal interests. Matthew Hjelmstedt of Utopia is using artificial intelligence to ensure that everyone involved in creating a piece of music is paid fairly for their contribution. Matthew makes a strong point: “Once you see your data, you see your worth.” Arif Khan of Singularity.net, the Worldwebforum 2020 Artificial Intelligence Track Moderator, is democratising the use of AI to benefit all.
Should the regulation then be more hinged on the end result and societal benefits of the use of the new technology? Should data givers be remunerated for giving up a piece of their privacy?
Imagine a world where data is centralised but the incentives are distributed to benefit both the giver and the user, with the results mitigated and monitored by government and civil society. Perhaps control will then be diffused, and the radical change brought by new technology will create more value for those who will live through its effects.