A recent report from the Global Web Index (GWI) took a hard look at people's complicated relationship with the media, analyzing our wants, needs, and habits while forecasting what's in store for the foreseeable future. Along with some unique lessons learned about brick-and-mortar stores in the wake of the coronavirus, GWI delivered game-changing insights about the myriad ways both AI and data affect our lives.
In 2019, 61% of internet users reported a growing concern about personal privacy under the omnipotent influence of the internet. When the same question was posed six years earlier, in 2013, 56% of internet users said the same. A five-percentage-point rise in privacy concerns over six years may seem inconsequential, but it makes for a pretty compelling argument against Big Tech's access to our personal information.
Fast-forward to 2020, where a decrease in privacy concerns around the world suggests a chicken-or-the-egg scenario: has the erosion of privacy softened public concern about it, or vice versa? Like most of the profound changes that occurred in the last year, the pandemic is to blame — or at least partially to blame.
GWI asked respondents worldwide what spurred this sudden trust in tech, with 30% pointing to telehealth as the reason they regained their trust in technology. For the average internet user who feels torn between security and privacy, the question remains: who benefits from data sharing? We looked to Dirigo Collective's Digital Director, Matty Oates, for guidance.
How exactly do companies benefit from artificial intelligence and data-collection on a widespread scale?
We may feel – as any human has when faced with a mechanical or automated form of themselves – that AI threatens jobs, livelihoods, and, in the worst-case scenario, some form of dystopian "Matrix" world.
But it really is opening doors in ways we can't yet imagine, by taking the tedious, time-sucking jobs off our task list and freeing us up to do the jobs that humans are best at. Marketing AI can write ten versions of an email subject line to test out, while the writer who would have been tasked with that can now spend that time writing the meaningful stuff.
And it's not only in marketing: although a lot of auto assembly plants have turned to robotics and AI to take on most of the assembly, there are numerous cases in manufacturing where humans simply have to remain because of that unexplainable, learned muscle memory and bird's-eye view of a task. We often forget that AI, in its current state, is usually a system that's trained to do one task, and only one task, very well. That's where humans have the clear advantage.
Let’s talk more about that human advantage
Like any new technology, it all comes down to the person(s) wielding it. Compare it to nuclear physics; in one set of hands it can power cities, in another it can destroy them. Too often, we let the excitement of a new technology run ahead of us, before we’ve put the rules in place to use it responsibly.
Although the term 'AI' has been in use since 1955, it only became an effective reality in the mid-90s, when a mix of large data sets, powerful processors, open-source libraries, and advances in neural networks converged. And looking at how quickly technology has developed over the last thirty years, we feel it's more important than ever to reach agreement on how the tech we have today should be responsibly used.
Give me some examples
The use of AI throughout 2020 and the global pandemic was abundant, from helping with contact tracing, to comparing transmission rates in relation to population density and aggregated device movement in all 50 states.
Conversely, we have to remember that an AI system is only as good as the data it's been fed. When a very well-known e-commerce company (which shall remain unnamed here) tried using AI to do its hiring and diversify its workforce – free from personal bias – it was astounded when the system kept recommending resumes that were essentially the Board of Directors from the bank in Mary Poppins: middle-aged white males. The company realized it's because all the examples of 'ideal' candidates fed into the system beforehand were essentially middle-aged white males. Like our own opinions, an AI is only as good and as useful as its education.
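To see why this happens, here is a minimal, hypothetical sketch (toy data and a made-up scoring rule, not the real company's system): a naive "hiring model" that scores candidates by how closely they resemble past hires. Because every training example shares one profile, the lookalike candidate always wins.

```python
# Toy illustration of training-data bias (all data here is invented).
from collections import Counter

# Historical 'ideal' hires the system was trained on -- all one profile.
training = [
    {"age_band": "45-55", "gender": "male"},
    {"age_band": "40-50", "gender": "male"},
    {"age_band": "50-60", "gender": "male"},
]

def score(candidate, training):
    """Score = how often the candidate's attributes appear among past hires."""
    seen = Counter()
    for example in training:
        for key, value in example.items():
            seen[(key, value)] += 1
    return sum(seen[(key, value)] for key, value in candidate.items())

lookalike = {"age_band": "45-55", "gender": "male"}
outsider = {"age_band": "25-35", "gender": "female"}
print(score(lookalike, training))  # 4 -- matches the historical profile
print(score(outsider, training))   # 0 -- penalized for being different
```

The "model" never sees age or gender as protected attributes; it simply rewards similarity to its skewed examples, which is exactly how a system trained on a homogeneous history reproduces that homogeneity.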
You mentioned an AI code of ethics for Dirigo Collective. What does that look like?
It's currently based on three DOs and three DON'Ts. We were inspired by a company called Phrasee out of the UK, and took our first cues from them. They're listed below:
Things we won’t do:
Things we will do:
The first one can be easily illustrated through Facebook. In 2019, Facebook began to limit targeting capabilities for certain campaigns that offered housing, employment, or credit, after finding that people were using existing targeting groups to include or exclude groups of people from ad campaigns. For example, certain high-end housing developments were excluding people of color from their ad sets, and the same was true of ads promoting job opportunities. Similarly, ads for credit offers were targeting those who had just turned 18 and were in college — an age when people are least financially educated and most susceptible to racking up credit card debt.
As a result, Facebook no longer allows you to target by age, gender, or neighborhood for these three product categories. Although that made some of our own ads harder to target (a banking client of ours has a Home Equity Loan product that is likely of little use to an 18-year-old), we recognize that it's for the greater good, and we support it fully.
The negative-emotion item is also an important one, and one that's been in use throughout the history of advertising, most notably in the form of 'Buy now before it's too late!' Even in its most innocent form, that line preys on fear. But it is preying on fear nonetheless, and we will not run an ad campaign through our system that searches for those people and capitalizes on their emotions.