Data is everywhere. We generate it, share it and rely on it every second of the day. From how we shop online to the content we interact with on social media, data plays a part in every decision, click and scroll. Businesses know this. Platforms depend on it. And that’s why big data is growing faster than we can manage it.
But with that growth come serious responsibilities and ethical challenges. Who collects the data? What are they doing with it? Are they protecting it? Are they using it fairly? These are no longer questions for the future. They are urgent problems we need to face now.
Surveillance Culture & the Death of Privacy
Surveillance is no longer hidden; it’s embedded. We invite it in with every smart device, and such devices now sit in over 1.8 billion homes globally as of 2024. A fitness tracker logs your heart rate. A voice assistant records your commands. A fridge tracks your shopping. Surveillance today is bought, not imposed.
These systems quietly profile us, turning data into predictions, and predictions into behavioural nudges. Facebook once used exactly these systems to manipulate user emotions in an experiment on 689,000 people. Google, meanwhile, can track you across roughly 80% of websites. Users aren’t really users; they’re the dataset, processed, analysed and manipulated from the second they go online. No longer is the product the product. You are now the product.
This extraction fuels some of the biggest ethical issues in technology and underpins a trillion-dollar economy for big tech, while users get nothing in return. Instead, their identities and interests are mined for profit.
These extractions also bake bias into data. For example, if a disproportionate number of negative searches or reports come from a certain area, that area gets flagged, and systems trained on those flags can discriminate at scale. Algorithms running on these datasets then misidentify minorities, reinforce hiring bias and over-police marginalised communities. The tech isn’t neutral; it encodes past injustices, as the sketch below illustrates.
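To make that feedback loop concrete, here is a minimal sketch in Python with entirely hypothetical numbers: two areas with identical true crime rates, where one simply starts with twice the recorded incidents because it was historically patrolled more heavily.

```python
# A minimal sketch (hypothetical data) of how a system trained on biased
# historical records reproduces that bias. Areas A and B have the same
# true crime rate, but A starts with twice the recorded incidents because
# it was patrolled twice as heavily.

historical_incidents = {"A": 200, "B": 100}  # recorded, not actual, crime

def patrol_allocation(incidents, total_patrols=30):
    """Naively allocate patrols in proportion to recorded incidents."""
    total = sum(incidents.values())
    return {area: round(total_patrols * n / total) for area, n in incidents.items()}

incidents = dict(historical_incidents)
for year in range(3):
    patrols = patrol_allocation(incidents)
    # More patrols -> more recorded incidents, regardless of true crime rate:
    # assume each patrol records ~10 incidents in any area (equal true rates).
    incidents = {area: p * 10 for area, p in patrols.items()}
    print(f"Year {year + 1}: patrols={patrols} recorded={incidents}")

# Area A keeps absorbing two-thirds of the patrols indefinitely, purely
# because of the skewed starting data: the loop launders the original bias.
```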
There are laws, but they lag way behind. For instance…
- The UK and EU’s GDPR regimes have depth but are enforced inconsistently.
- The US has no comprehensive federal privacy law in place.
- Fines, such as the €1.2B penalty imposed on Meta, are a mere drop in the ocean for companies valued in the trillions.
Surveillance isn’t inevitable. It’s a choice: coded, funded, and sold. The issue isn’t data collection in itself; it’s how the data is used. Vast datasets can do real good as well as real harm. Unfortunately, right now, profits seem to matter more than privacy.
The Rise of Voluntary Exposure & Algorithmic Influence
Recommendation engines now guide over 75% of what we watch and 35% of what we buy. But they aren’t just recommending anymore; they’re influencing our decisions, slowly and subtly.
These systems don’t understand preferences or needs; they are fuelled by engagement. What you do online is what you are. And when profits are tied to watch time, the algorithms can’t differentiate between what interests you and what harms you.
A sex addict pausing on adult content is logged as a niche interest. A compulsive buyer, stopping for a quick look at a pair of shoes, gets bombarded with shoe ads for the next 10 days. TikTok reportedly learns your preferences within 40 minutes.
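As a rough sketch of why that happens, consider a toy engagement-based ranker. All field names, weights, and items below are hypothetical, but the key property is real: there is no input for wellbeing, only for attention.

```python
# A minimal sketch of an engagement-optimised ranker. Names and weights are
# invented. Note there is no signal for harm or wellbeing: a long pause on
# damaging content scores exactly like a long pause on helpful content.

from dataclasses import dataclass

@dataclass
class Interaction:
    item_id: str
    watch_seconds: float
    paused: bool
    clicked: bool

def engagement_score(event: Interaction) -> float:
    """Score an interaction purely on engagement signals."""
    score = event.watch_seconds / 60          # longer watch = higher score
    score += 0.5 if event.paused else 0.0     # a pause reads as interest
    score += 1.0 if event.clicked else 0.0
    return score

history = [
    Interaction("cooking_clip", watch_seconds=45, paused=False, clicked=True),
    Interaction("compulsive_shopping_ad", watch_seconds=90, paused=True, clicked=True),
]

# Rank by engagement alone: the potentially harmful item wins.
for e in sorted(history, key=engagement_score, reverse=True):
    print(f"{e.item_id}: {engagement_score(e):.2f}")
```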
For instance, Instagram’s own research revealed it made 40% of teenage girls feel worse about their bodies, yet Instagram kept pushing the same content. The model doesn’t care. It’s not ethical. It’s statistical.
Are Phones Listening? Ambient Data & Ethical Boundaries
Smart devices such as phones, TVs, and speakers all have embedded microphones that constantly scan for trigger phrases. But research from Northeastern University (2019) and many other studies suggest these devices may capture short audio bursts even without intentional activation. Once collected, this “ambient data” can include private conversations, background noise, and emotional tone, and is often stored, analysed, and fed into machine learning pipelines.
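A simplified sketch, with an invented wake-word check, shows why “only listens after the trigger” still exposes ambient speech: detectors keep a short rolling buffer of pre-trigger audio, and a false activation uploads whatever that buffer happens to contain.

```python
# A hypothetical sketch of pre-roll capture. Wake-word detectors keep a
# rolling buffer of recent audio so the start of a command isn't clipped;
# a false trigger then uploads speech from *before* the activation.

from collections import deque

PRE_ROLL_CHUNKS = 4  # amount of audio retained *before* any trigger

def sounds_like_wake_word(chunk: str) -> bool:
    # Stand-in for an acoustic model; real ones misfire on similar sounds.
    return "alexa" in chunk or "election" in chunk  # false trigger!

buffer = deque(maxlen=PRE_ROLL_CHUNKS)
stream = ["private chat", "about finances", "the election", "results tonight"]

for chunk in stream:
    buffer.append(chunk)
    if sounds_like_wake_word(chunk):
        # Everything in the pre-roll buffer is sent upstream, including
        # speech from before the (mis)trigger and from anyone in the room.
        print("uploaded:", list(buffer))
```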
According to a Business Insider report, the global voice assistant market is worth $33.47 billion and is expected to hit $104.37 billion by 2032. Ambient voice data is especially valuable: it reveals spontaneity, urgency, and intent better than typed queries.
Even when anonymised, data can be re-identified. A 2021 study published on Computing.UK showed that 99.98% of individuals in anonymised datasets can be re-identified with just a few cross-referenced data points.
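A toy linkage attack, using invented records, shows how few attributes this takes: joining an “anonymised” dataset with a public register on just postcode, birth year, and sex can already produce unique matches.

```python
# A toy linkage attack on hypothetical data. The "anonymised" health records
# carry no names, but a handful of quasi-identifiers is enough to join them
# to a public register and recover identities.

anonymised_health = [
    {"postcode": "SW1A", "birth_year": 1987, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "M1",   "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Jane Doe", "postcode": "SW1A", "birth_year": 1987, "sex": "F"},
    {"name": "John Roe", "postcode": "M1",   "birth_year": 1990, "sex": "M"},
]

KEYS = ("postcode", "birth_year", "sex")

for record in anonymised_health:
    matches = [p for p in public_register
               if all(p[k] == record[k] for k in KEYS)]
    if len(matches) == 1:  # a unique match re-identifies the record
        print(f"{matches[0]['name']} -> {record['diagnosis']}")
```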
Legal protections vary. GDPR offers consent-based safeguards in the EU, but in the U.S., laws lag behind. No federal regulation fully addresses passive voice data collection.
The microphones are live. And yes, you have likely accepted terms agreeing to this. But what about the people who come over to your house for dinner? Have they agreed to be listened to by your tech? This is one of the biggest ethical concerns in tech: the device can’t distinguish who has consented to being listened to and who hasn’t.
The Myth of Ethical Objectivity in Data
Data isn’t neutral; it’s curated. As Nietzsche said, “There are no facts, only interpretations.” Every dataset reflects choices: what’s included, what’s ignored, and whose perspective drives the selection. These choices shape systems that feel objective but aren’t.
In the U.S., a 2016 ProPublica study found that a commercial recidivism risk algorithm was nearly twice as likely to falsely label Black defendants as high-risk compared to white ones. The model wasn’t broken; it was trained on biased historical data. That’s not just a technical oversight. It’s systemic replication.
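The disparity behind findings like this is straightforward to measure. A minimal sketch, using hypothetical counts chosen to echo the roughly two-to-one gap the study reported, compares false positive rates across groups:

```python
# A minimal sketch of the disparity metric behind findings like ProPublica's:
# compare false positive rates (labelled high-risk but did NOT reoffend)
# across groups. The counts below are hypothetical, not the actual data.

counts = {
    # group: (labelled high-risk & did not reoffend, total who did not reoffend)
    "group_a": (805, 1795),
    "group_b": (349, 1488),
}

for group, (false_positives, negatives) in counts.items():
    fpr = false_positives / negatives
    print(f"{group}: false positive rate = {fpr:.1%}")

# If one group's rate is ~2x the other's, the model burdens that group with
# far more wrongful high-risk labels, even at identical overall accuracy.
```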
Roughly 85% of AI research datasets come from North America and Western Europe. This skews global AI development toward Western norms, values, and social assumptions. It’s a subtle but powerful form of digital colonisation, biased by omission.
Ethical data work demands more than fairness checklists. It means interrogating data provenance, purpose, and power. Ignoring context doesn’t neutralise bias; it just hides it. Objectivity, in practice, often becomes the aesthetic of a dominant worldview.
Striving for Ethical Data Use in a Complex Reality
Perfect systems don’t exist. But better ones do. Companies can take concrete steps to improve how they handle data:
- Regular checks for bias.
- Clear consent forms.
- Faster response times for data requests.
- Local teams to shape local solutions.
- Stronger rights for users to delete or move their data.
- Partnering with Enablis to build a secure environment for the effective use of data and AI in your business.
None of this is easy. But it is necessary. Because every step towards transparency is a step towards trust, and towards resolving the ethical issues in information technology today.
