The scene in the 1973 Woody Allen film Sleeper, where Miles Monroe wakes up from his cryogenic freeze, is absurd, chaotic, and obvious. Everything is exaggerated and loud. You can clearly tell something is wrong.
That is not the future we got.
The future we got is subtle, quiet, and polite, wrapped in convenience and framed as “customer centric.”
I was reminded of that recently when a client emailed me late one evening. He had just read a Substack article titled “51% Confidence: The Price of Living Online.” It scared the heck out of him and he wanted my take.
My first reaction wasn’t fear, it was frustration.
I have been talking about versions of this problem for well over a decade, and most people don’t seem to care. The benefits of free services, social media, personalization, and convenience are immediate. The tradeoffs around privacy, fairness, and free will feel abstract and distant. Until they don’t.
Back in 2010, I was working with a fast-growing online retail company evaluating platforms to power their next phase of growth. During one vendor demo, the sales team proudly explained how their system could automatically pull in Facebook and other social media profiles.
If a customer called into support or opened a chat, the representative would instantly see inferred interests, preferences, and behaviors. The goal was to guide the conversation in a “more powerful way.”
All of this happened without the customer’s knowledge.
I remember sitting there thinking, wait, what?
The buyer had no idea they were being profiled, categorized, and influenced based on their digital exhaust. This was framed as a customer-centric experience.
I actually love the idea of customer centricity. When I walk into a local shop and the owner knows me, asks what I’m working on and makes thoughtful suggestions, that feels human. There is shared context and both parties understand the rules of engagement.
But I realized what was being discussed in this demo was different. Here, the salesperson held asymmetric power. They knew things about the customer that the customer did not know were known. The assumptions being made, how those assumptions shaped messaging, pricing, and options, all of it was invisible to the customer.
Imagine a salesperson saying, “Hey John, I see from your social profile that you’re into aggressive hiking. You should really consider this higher-end boot.” That would at least be honest.
Instead, the influence happens silently.
The Substack article that scared my client leans into corporate malfeasance and government surveillance, and then offers familiar advice: use a private browser session, use a privacy VPN, and avoid public computers and Wi-Fi.
None of that actually solves the problem it describes. It’s a total fallacy that any of this provides real privacy.
The real issue is not that companies know who you are. They don’t need to!
They track the middle-aged white male in zip code 02465 who likes cybersecurity, tennis, and certain brands. They don’t need your name. Once the profile is accurate enough, the distinction becomes meaningless.
And increasingly, that distinction is gone anyway.
Facial recognition companies like Clearview AI can take a single photo and match it against images scraped from social media and the open internet. From there, they can often positively identify a person, their home address, their family, and their history. This can then be tied to a public digital profile. While there are legitimate uses for this technology, such as finding a murderer using a Ring doorbell image, there are also many terrifying ones, like stalking someone simply by snapping their picture at a bar.
The same data exhaust that corporate America uses to market shoes to a targeted audience can also be used by governments. Governments no longer have to break the law to track their citizens and dissidents; they simply have to pay the private sector for the information. And commercial surveillance tools do not stop at national borders.
This is what makes the conversation so uncomfortable. Not because it’s science fiction, but because it’s mundane and because it’s now.
AI is supercharging all of this. Physical movement is tracked through mobile devices, vehicles, foot traffic sensors, and beacons placed on private property that happens to sit next to public roads. That data is sold, aggregated, enriched, and resold. What was once “anonymous” becomes trivially identifiable when enough dots are connected.
So yes, privacy as we understood it is gone, but panic is not the right response.
What we still have is agency, awareness and the ability to shape rules.
We have data protection laws, imperfect as they are, in places like Massachusetts, California, and the EU. And here in the US, we still have due process. This must be where we begin.
The most dangerous outcome is resignation: the belief that nothing can be done, so nothing should be attempted.
The future didn’t arrive with a bang. It began with free services, better recommendations, slightly better ads, and just enough comfort to keep us from asking hard questions. We must start asking them anyway.
