
Striking a balance: ethics in the data age

Posted by Perceptive Team - 14 June, 2021

By 2025, humans will create 463 exabytes of data every day¹. That’s 463,000,000 terabytes. And new technologies mean we can explore and gather this data in sophisticated and exciting ways, allowing us to connect, innovate, modernise and fuel our economy.

But for all the opportunities data brings, businesses need to be aware of its ethical use. To borrow from New Zealand’s data strategy and road map: just because data can be used in new and innovative ways does not always mean that it should be.

We’ve seen that data, and the insights it brings, have the potential to be abused. Many of our staff have mentioned a product aloud, only to find advertisements for that product following them around social media for days afterwards.

Even our data scientists find it creepy.

However, businesses need data to operate. Without it, they might as well be shouting into the void, with no notion of who their customers are or whether anyone is even seeing their marketing. Meanwhile, customers don’t want their privacy breached, but they also want services to be connected and as easy to use as possible.

So what gives? Privacy or economy? Customer or business?

While data might be ones and zeros, its application and ethics are not so black and white. Which is why businesses and consumers must find a middle ground.


The problem with “balance”

It’s easy to think that responsible data usage should lie with both businesses and consumers. Consumers should be aware of what data they generate and sign away; businesses should be transparent with what data they collect and how it is used. That’s balanced and fair, right?

The reality is that the onus of ethical data use lies with businesses, not consumers.

While businesses might argue it is the responsibility of the consumer to be aware of how their data is being used, most consumers simply aren’t. Most don’t read the T&Cs. And while you can require them to explicitly opt into things, half the time they’ll just tick the box without reading it.

Coupled with the sheer amount of data we individually generate (about 146.88 gigabytes per person every day), it’s neither feasible nor reasonable to expect a consumer to know how every bit of data they generate is being used.
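For a sense of scale, that daily figure lines up with the oft-cited Raconteur estimate of roughly 1.7 MB of data created per person every second: 1.7 MB × 86,400 seconds in a day = 146,880 MB, or about 146.88 GB.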

Which is why businesses must weigh up what is and isn’t ethical data use, because often your customers simply don’t have oversight of how their data is being used.

 

The two guiding principles of data ethics

Data ethics shares many principles with research ethics, such as informed consent, protecting confidentiality and providing the right to withdraw. However, in our view, data ethics boils down to two overarching principles:

  1. Be beneficial – will this benefit the people you’re collecting data from? That could include anything from improving a service, product or experience for your customers to providing useful information to consumers when they need it (e.g. a sale on an item they were looking for).
  2. Do no harm – will this disadvantage or be detrimental to consumers in any way? This could include gathering data without consent, collecting data you don’t need, or using gender, sexuality, racial, political or religious stereotypes to drive outcomes.

For example, if a bank wanted to create an algorithm to determine the creditworthiness of a person, it might build variables such as income and credit history into its analysis. However, the bank should not incorporate gender, age or ethnicity variables. If it did, those variables would be given certain weights that would influence the outcome. If the bank weighted its ethnicity variable to match census data, which broadly indicates that Māori and Pacific Islanders are financially worse off, then an applicant who identified as a Pacific Islander would automatically receive a lower score than a Pākehā applicant or someone of Asian descent, regardless of how strong their income and credit history are.
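To make that concrete, here is a minimal sketch of the idea in Python. The feature names, weights and normalisation are hypothetical, not any real bank’s model; the point is structural: if protected attributes are never accepted as inputs, they can never be weighted into the outcome.

```python
# Hypothetical credit-scoring sketch (illustrative weights, not any real
# bank's model). The score is driven only by financial variables; protected
# attributes such as gender, age and ethnicity are never accepted as
# inputs, so they can carry no weight in the outcome.

FEATURE_WEIGHTS = {
    "income": 0.6,
    "credit_history": 0.4,
}

def credit_score(income: float, credit_history: float) -> float:
    """Return a score in [0, 1] from financial features alone.

    income is normalised against a $100k reference point; credit_history
    is assumed to be a 0-1000 bureau-style score.
    """
    income_norm = min(income / 100_000, 1.0)
    history_norm = min(credit_history / 1000, 1.0)
    return (FEATURE_WEIGHTS["income"] * income_norm
            + FEATURE_WEIGHTS["credit_history"] * history_norm)

print(round(credit_score(income=85_000, credit_history=720), 2))  # 0.8
```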

The good news is that this kind of data discrimination is illegal in the banking world, but it illustrates how certain variables can be incorporated into data algorithms to the detriment of certain groups of people.

 

The line between useful and creepy

There are data laws around information protection, security and privacy (the New Zealand Privacy Act and the GDPR, to name two), and these go a long way towards reducing risk and harm. However, while the laws are reasonably black and white, data ethics is often grey. It’s not illegal to use data to increase your return on investment, but the methods you use to do it could be unethical.

Here’s an example:

Say you’re a home appliances retailer collecting data from your website in near real-time. If you have a washing machine sale coming up, you might decide to run online advertising targeted at the people who recently browsed your washing machine range. So far, nothing unethical here. You’re only collecting IP addresses, and doing so benefits both you and the customer: they get what they want, when they want it, and at a discount too; you get your product in front of them and a potential sale.
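Mechanically, building that target audience is little more than filtering recent page views. Here is a rough sketch; the log format and field names are assumptions for illustration, not a real system:

```python
from datetime import datetime, timedelta

# Hypothetical page-view log: (visitor IP, page category, timestamp).
# The fields and format are assumptions for illustration only.
page_views = [
    ("203.0.113.7", "washing-machines", datetime(2021, 6, 10)),
    ("203.0.113.9", "dishwashers", datetime(2021, 6, 11)),
    ("203.0.113.7", "washing-machines", datetime(2021, 6, 12)),
]

def retargeting_audience(views, category, days=30, now=None):
    """Return the visitor IPs that browsed a category within `days`."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return {ip for ip, cat, seen in views if cat == category and seen >= cutoff}

audience = retargeting_audience(page_views, "washing-machines",
                                now=datetime(2021, 6, 14))
print(audience)  # {'203.0.113.7'}: only recent washing-machine browsers
```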

Of course, this kind of data use has a lot of grey areas. It’s very easy to slip from being beneficial to being creepy, such as when your phone seemingly listens in on an offline conversation. And the line between the two is blurry: what one person sees as useful, another will perceive as having gone too far.

 

How far is too far?

The benefit/harm checks mentioned earlier are a good way to assess whether you’re starting to straddle the fence between ethical and unethical data use. However, as we see it, the clincher that will pitch any business into the unethical yard is gathering and using data without consumers being aware of it.

To return to the home appliances retailer from earlier: let’s say this retailer has built an app for customers to control their smart home appliances. A customer will likely agree to give the app access to those appliances. However, some apps can also tap into system data, allowing them to track information that is completely irrelevant to the app’s purpose: what TV shows you watch, what school your kids go to, even your heart rate while you sleep.

That same app might also trawl through your social media contacts and collect data on them. From one person, the app might gather data from 100 different social media profiles, none of whom authorised the app to gather this information. From that data, the retailer can start identifying trends across a group of people and use those insights to sell its products to them.

Of course, this is highly unethical. There is no benefit to consumers here at all. Moreover, it’s harmful: the app is gathering data it doesn’t need and doing so without consumers being aware of it.

And before you say “that won’t happen”, it already has.

In fact, this kind of unethical data use can be downright dangerous. The amount of sway advertising has on a consumer is already massive. Power it up with deep data and insights and you can steer people in a certain direction—as we saw in the Cambridge Analytica scandal.

To quote Uncle Ben: “With great power comes great responsibility.” Ultimately, data ethics comes down to the morals of the business and your intentions with the data. Are you truly trying to benefit the consumer, or is that just lip service so you can aggressively increase your ROI?

So before you cross the line, stop and ask yourself: who are you serving?

 


Does the idea of data daunt you? It doesn't need to. Learn the basics of how you can start using data to make smarter business decisions with our free guide: Get Data Smart.



 

1. Raconteur, 2019. A Day in Data.


