I look at apps like Grindr and Tinder and see how they’ve rewritten sex culture — by creating a sexual landscape filled with vast amounts of incredibly graphic site-specific data — and I can’t help but wonder why there isn’t an app out there that rewrites political culture in the same manner. I don’t think there is. Therefore I’m inventing an app to do so and I’m calling it Wonkr — which somehow seems appropriate for a politically geared app. I dropped the “e” to make it feel more appy.
What does Wonkr do? Primarily, you put Wonkr on your phone and it asks you a quick set of questions about your beliefs. Then, the moment there are more than a few people around you (who also have Wonkr), it tells you about the people you’re sharing the room with. You’ll be in a crowded restaurant in Nashville and you can tell that 73 per cent of the room is Republican. Go into the kitchen and you’ll see that it’s 84 per cent Democrat. You’ll be in an elevator in Manhattan and, the higher you go, the smaller the percentage of Democrats becomes. Go to Germany — or France or anywhere, really — and Wonkr adapts to local politics.
The thing to remember is: Wonkr only activates in crowds. If you’re at home alone, with the app switched off, nobody can tell anything about you. (But then maybe you want to leave it on . . . Many political people are exhibitionists that way.)
Wonkr’s job is to tell you the political temperature of a busy space. “Am I among friends or enemies?” But then you can easily change the radius of testability: instead of just the room you’re standing in, make it the block or the whole city — or your country. Wonkr is a de facto polling app. Pollsters are suddenly out of a job: Wonkr tells you — with astonishing accuracy — who believes what, and where they do it.
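For the curious, the room-reading described above is not much more than a distance check plus a tally. Here is a back-of-the-envelope sketch in Python; everything in it (the tuple layout, the radius, the party labels) is invented for illustration, not how any real app works:

```python
from math import radians, sin, cos, asin, sqrt
from collections import Counter

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def political_temperature(users, here, radius_km):
    """Percentage breakdown of declared affiliations near `here`.

    `users` is a list of (lat, lon, affiliation) tuples for people who
    have the app switched on; returns {} if nobody is within range.
    """
    nearby = [aff for lat, lon, aff in users
              if haversine_km(lat, lon, here[0], here[1]) <= radius_km]
    total = len(nearby)
    if not total:
        return {}
    return {aff: round(100 * n / total, 1)
            for aff, n in Counter(nearby).items()}
```

Widening the “radius of testability” from the room to the city is then just a change to `radius_km` — which is why the leap from party trick to polling machine is so short.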
Here’s an interesting fact about politics: people with specific beliefs only want to meet and hang out with people who believe the same things as themselves. It’s like my parents and Fox News . . . it’s impossible for me to imagine my parents ever saying, “What? You mean there are liberal folk nearby us who have differing political opinions? Good Lord! Bring them to us now and let’s have a lively and impartial dialogue, after which we all agree to cheerfully disagree . . . maybe we’ll even have our beliefs changed!” When it comes to the sharing of an ethos, history shows us that the more irrational a shared belief is, the better. (The underpinning maths of cultism is that when two people with self-perceived marginalised views meet, they mutually reinforce these beliefs, ratcheting up the craziness until you have a pair of full-blown nutcases.)
So back to Wonkr . . . Wonkr is a free app, but why not help it along by paying, say, 99 cents to let it link you with people who think just like you? Remember, to sign on to Wonkr you have to take a relatively deep quiz. Maybe 155 questions, like the astonishingly successful eHarmony.com. Dating algorithms tell us that people who believe exactly the same things find each other highly attractive in the long run. So have a coffee with your Wonkr hook-up. For an extra 29 cents, you can watch your chosen party’s attack ads together . . . How does Wonkr ensure you’re not a trouble-seeking millennial posing as a Marxist at a Ukip rally? Answer: build some feedback into the app. If you get the impression there’s someone fishy nearby, just tell Wonkr. After a few notifications, geospecific algorithms will soon locate the imposter. It’s like Uber: you rate them; they rate you. Easily fixed.
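The Uber-style mutual-rating fix sketched above is, mechanically, just a count of distinct complainers. A minimal illustration, with an entirely made-up threshold of three reports:

```python
from collections import defaultdict

class ImposterFilter:
    """Sketch of crowd-sourced imposter flagging: users report profiles
    that feel 'fishy'; enough reports from distinct users and the profile
    is marked suspicious. The threshold of 3 is an invented number."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.reports = defaultdict(set)  # profile_id -> set of reporter ids

    def report(self, profile_id, reporter_id):
        """Record a report; self-reports are ignored. Returns current status."""
        if profile_id != reporter_id:
            self.reports[profile_id].add(reporter_id)
        return self.is_suspicious(profile_id)

    def is_suspicious(self, profile_id):
        return len(self.reports[profile_id]) >= self.threshold
```

Note that duplicate reports from the same user count once — which is the whole trick: it takes a small crowd, not one grudge-holder, to unmask the Marxist at the Ukip rally.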
What we’re discussing here is the creation of data pools that, until recently, have been extraordinarily difficult and expensive to gather. However, sooner rather than later, we’ll all be drowning in this sort of data. It will be collected voluntarily in large doses (using the Wonkr, Tinder or Grindr model) — or involuntarily or in passing through other kinds of data: your visit to a Seattle pot store; your donation to the SPCA; the turnstile you went through at a football match. Almost anything can be converted into data — or metadata — which can then be processed by machine intelligence. Quite accurately, you could say, data + machine intelligence = Artificial Intuition.
Artificial Intuition happens when a computer and its software look at data and analyse it using computation that mimics human intuition at the deepest levels: language, hierarchical thinking — even spiritual and religious thinking. The machines doing the thinking are deliberately designed to replicate human neural networks, and connected together form even larger artificial neural networks. It sounds scary . . . and maybe it is (or maybe it isn’t). But it’s happening now. In fact, it is accelerating at an astonishing clip, and it’s the true and definite and undeniable human future.
So let’s go back to Wonkr.
Wonkr may, in some simple senses, already exist. Amazon can tell if you’re straight or gay within seven purchases. A few simple algorithms applied to your everyday data (internet data alone, really) could obviously discern your politics. From a political pollster’s perspective, once you’ve been pegged, then you’re, well, pegged. At that point the only interest politicians might have in you is if you’re a swing voter.
Political data is valuable data, and at the moment it’s poorly gathered and not necessarily well understood, and there’s not much out there that isn’t quickly obsolete. But with Wonkr, the centuries-long, highly expensive political polling drought would be over and now there would be LOADS of data. So then, why limit the app to politics? What’s to prevent Wonkr users from overlapping their data with, for example, a religious group-sourcing app called Believr? With Believr, the machine intelligence would be quite simple. What does a person believe in, if anything, and how intensely do they do so? And again, what if you had an app that discerns a person’s hunger for power within an organisation, let’s call it Hungr — behavioural data that can be cross-correlated with Wonkr and Believr and Grindr and Tinder? Taken to its extreme, the entire family of belief apps becomes the ultimate demographic Klondike of all time. What began as a cluster of mildly fun apps becomes the future of crowd behaviour and individual behaviour.
Wonkr (and Believr and Hungr et al) are just imagined examples of how Artificial Intuition can be enhanced and accelerated to a degree that’s scientifically and medically shocking. Yet this machine intelligence is already morphing, and it’s not just something simple like Amazon suggesting books you’d probably like based on the one you just bought (suggestions that are often far better than the book you just bought). Artificial Intuition systems already gently sway us in whatever way they are programmed to do. Flying in coach not business? You’re tall. Why not spend $29 on extra legroom? Guess what — Jimmy Buffett has a cool new single out, and you should see the Tommy Bahama shirt he wears on his avatar photo. I’m sorry but that’s the third time you’ve entered an incorrect password; I’m going to have to block your IP address from now on — but to upgrade to a Dell-friendly security system, just click on the smiley face to the right . . . And none of what you just read comes as any sort of surprise. But 20 years ago it would have seemed futuristic, implausible and in some way surmountable, because you, having character, would see these nudges as the trivial commerce they are, and would be able to disregard them accordingly. What they never could have told you 20 years ago, though, is how boring and intense and unrelenting this sort of capitalist micro-assault is, from all directions at all waking moments, and how, 20 years later, it only shows signs of getting much more intense, focused, targeted, unyielding and galactically more boring. That’s the future and pausing to think about it makes us curl our toes into fists within our shoes. It is going to happen. We are about to enter the Golden Age of Intuition and it is dreadful.
I sometimes wonder, How much data am I generating? Meaning: how much data do I generate just sitting there in a chair, doing nothing except existing as a cell within any number of global spreadsheets, and as a mineable nugget lodged within global memory storage systems — inside the Cloud, I suppose. (Yay Cloud!)
Did I buy a plane ticket online today? Did I get a speeding ticket? Did my passport quietly expire? Am I unwittingly reading a statistically disproportionate number of articles on cancer? Is my favourite shirt getting frayed and is it in possible need of replacement? Do I have a thing for short blondes? Is my grammar deteriorating in a way that suggests certain subcategories of dementia?
In 1998, I wrote a book in which a character working for the Trojan Nuclear Power Plant in Oregon is located using a “misspellcheck” programme that learnt how users misspell words. It could tell my character if she needed to trim her fingernails or when she was having her period, but it was also used down the road to track her down when she was typing online at a café. I had an argument with an editor over that one: “This kind of program is simply not possible. You can’t use it. You’ll just look stupid!” In 2015 you can probably buy a misspellcheck as a 49-cent app from iTunes . . . or upgrade to Misspellcheck Pro for another 99 cents.
What a strange world. It makes one long for the world before DNA and the internet, a world in which people could genuinely vanish. The Unabomber — Theodore “Ted” Kaczynski — seems like a poster boy for this strain of yearning. He had literally no data stream, save for his bombs and his manifesto, which ended up being his undoing. How? He promised The New York Times and Washington Post that he’d stop sending bombs if they would print his manifesto, which they did. Then his brother recognised his writing style and turned him in to the FBI. Machine intelligence — Artificial Intuition — steeped in deeply rooted language structures, would have found Kaczynski’s writing style in under one-tenth of a second.
Kaczynski really worked hard at vanishing but he got nabbed in the 1990s before data exploded. If he existed today, could he still exist? Could he unexist himself in 2015? You can still live in a windowless cabin these days but you can’t do it anonymously any more. Even the path to your shack would be on Google Maps. (Look, you can see a stack of red plastic kerosene cans from satellite view.) Your metadata stream might be tiny but it would still exist in a way it never did in the past. And don’t we all know vanished family members or former friends who work hard so as to have no online presence? That mode of self-concealment will be doomed soon enough. Thank you, machine intelligence.
But wait. Why are we approaching data and metadata as negative? Maybe metadata is good, and maybe it somehow leads to a more focused existence. Maybe, in future, mega-metadata will be our new frequent flyer points system. Endless linking and embedding can be disguised as fun or practicality. Or loyalty. Or servitude.
Last winter, at a dinner, I sat across the table from the VP of North America’s second-largest loyalty management firm (explain that term to Karl Marx), the head of their airline loyalty division. I asked him what the best way to use points was. He said, “The one thing you never ever use points for is for flying. Only a loser uses their miles on trips. It costs the company essentially nothing while it burns off swathes of points. Use your points to buy stuff, and if there isn’t any stuff to buy,” (and there often isn’t: it’s just barbecues, leather bags and crap jewellery) “then redeem miles for gift cards at stores where they might sell stuff you want. But for God’s sake, don’t use them to fly. You might as well flush those points down the toilet.”
Glad I asked.
And what will future loyalty data deliver to its donors, if not barbecues and Maui holidays? Access to the business-class internet? Prescription medicines made in Europe not in China? Maybe points could count towards community service duty?
Who would these new near-future entities be that want all of your metadata anyway? You could say corporations. We’ve now all learnt to reflexively think of corporations when thinking of anything sinister, but the term “corporation” now feels slightly Adbustery and unequipped to handle 21st-century corporate weirdness. Let’s use the term “Cheney” instead of “corporation”. There are lots of Cheneys out there and they are all going to want your data, whatever their use for it. Assuming these Cheneys don’t have the heart to actually kill or incarcerate you in order to garner your data, how will they collect it, even if only semi-voluntarily? How might a Cheney make people jump on to its loyalty programme (data aggregation in disguise) instead of viewing it with suspicion?
Here’s an idea: what if metadata collection was changed from something spooky into something actually desirable and voluntary? How could you do that and what would it be? So right here I’m inventing the metadata version of Wonkr, and I’m going to give it an idiotic name: Freedom Points. What are Freedom Points? Every time you generate data, in whatever form, you accrue more Freedom Points. Some data is more valuable than other data, so points would be ranked accordingly: a trip to Moscow, say, would be worth a million times more points than your trip to the 7-Eleven.
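Ranking data by value is just a weighting table. A toy sketch — every event name and every weight below is invented, the essay only stipulates that Moscow outweighs the 7-Eleven a million to one:

```python
# Assumed event weights: the only ratio given is Moscow vs the 7-Eleven.
EVENT_WEIGHTS = {
    "convenience_store_visit": 1,
    "online_purchase": 5,
    "speeding_ticket": 50,
    "trip_to_moscow": 1_000_000,
}

def accrue_points(events, weights=EVENT_WEIGHTS):
    """Total Freedom Points for a stream of data-generating events.
    Unrecognised event types still earn a default single point,
    because in this scheme all data is worth something."""
    return sum(weights.get(event, 1) for event in events)
```

The default-one-point rule is the quietly sinister bit: there is no event that earns nothing, so the only way to stop accruing is to stop generating data at all.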
Well then, what do Freedom Points allow you to do? They would allow you to exercise your freedom, your rights and your citizenship in fresh modern ways: points could allow you to bring extra assault rifles to dinner at your local Olive Garden restaurant. A certain number of Freedom Points would allow you to erase portions of your criminal record — or you could use Freedom Points to remove hours from your community service. And as Freedom Points are about mega-capitalism, everyone is involved, even the corn industry — especially the corn industry. Big Corn. Big Genetically Modified corn. Use your Freedom Points to earn discounted visits to Type 2 diabetes management retreats.
The thing about Freedom Points is that if you think about them for more than 12 seconds, you realise they have the magic ring of inevitability. The idea is basically too dumb to fail. The larger picture is that you have to keep generating more and more and more data in order to embed yourself ever more deeply into the global community. In a bold new equation, more data would convert into more personal freedom.
At the moment, Artificial Intuition is just you and the Cloud doing a little dance with a few simple algorithms. But everyone’s dance with the Cloud will shortly be happening together in a cosmic cyber ballroom, and everyone’s data stream will be communicating with everyone else’s and they’ll be talking about you: what did you buy today? What did you drink, ingest, excrete, inhale, view, unfriend, read, lean towards, reject, talk to, smile at, get nostalgic about, get angry about, link to, like or get off on? Tie these quotidian data hits within the longer time framework matrices of Wonkr, Believr, Grindr, Tinder et al, and suddenly you as a person, and you as a group of people, become something that’s humblingly easy to predict, please, anticipate, forecast and replicate. Tie this new machine intelligence realm in with some smart 3D graphics that have captured your body metrics and likeness, and a few years down the road you become sort of beside the point. There will, at some point, be a dematerialised, duplicate you. While this seems sort of horrifying in a Stepford Wife-y kind of way, the difference is that instead of killing you, your replicant meta-entity, your synthetic doppelgänger will merely try to convince you to buy a piqué-knit polo shirt in tones flattering to your skin at Abercrombie & Fitch.
This all presupposes the rise of machine intelligence wholly under the aegis of capitalism. But what if the rise of Artificial Intuition instead blossoms under the aegis of theology or political ideology? With politics we can see an interesting scenario developing in Europe, where Google is by far the dominant search engine. What is interesting there is that people are perfectly free to use Yahoo or Bing yet they choose to stick with Google and then they get worried about Google having too much power — which is an unusual relationship dynamic, like an old married couple. Maybe Google could be carved up into baby Googles? But no. How do you break apart a search engine? AT&T was broken into seven more or less regional entities in 1982 but you can’t really do that with a search engine. Germany gets gaming? France gets porn? Holland gets commerce? It’s not a pie that can be sliced.
The time to fix this data search inequity isn’t right now, either. The time to fix this problem was 20 years ago, and the only country that got it right was China, which now has its own search engine and social networking systems. But were the British or Spanish governments — or any other government — to say, “OK, we’re making our own proprietary national search engine”, that would somehow be far scarier than having a private company running things. (If you want paranoia, let your government control what you can and can’t access — which is what you basically have in China. Irony!)
The tendency in theocracies would almost invariably be one of intense censorship, extreme limitations of access, as well as machine intelligence endlessly scouring its system in search of apostasy and dissent. The Americans, on the other hand, are desperately trying to implement a two-tiered system to monetise information in the same way they’ve monetised medicine, agriculture, food and criminality. One almost gets misty-eyed looking at North Koreans who, if nothing else, have yet to have their neurons reconfigured, thus turning them into a nation of click junkies. But even if they did have an internet, it would have only one site to visit, and its name would be gloriousleader.nk.
To summarise. Everyone, basically, wants access to and control over what you will become, both as a physical and metadata entity. We are also on our way to a world of concrete walls surrounding any number of niche beliefs. On our journey, we get to watch machine intelligence become profoundly more intelligent while, as a society, we get to watch one labour category after another be systematically burped out of the labour pool. (Doug’s Law: An app is only successful if it puts a lot of people out of work.)
The darkest thought of all may be this: no matter how much politics is applied to the internet and its attendant technologies, it may simply be far too late in the game to change the future. The internet is going to do to us whatever it is going to do, and the same end state will be achieved regardless of human will. Gulp.
Do we at least want to have free access to anything on the internet? Well yes, of course. But it’s important to remember that once a freedom is removed from your internet menu, it will never come back. The political system only deletes online options — it does not add them. The amount of internet freedom we have right now is the most we’re ever going to get.
If our lives are a movie, this is the point where the future audience is shouting at the screen, “For God’s sake, load up on as much porn and gore and medical advice, and blogs and film and TV and everything as you possibly can! It’s not going to last much longer!”
And it isn’t.
Douglas Coupland is currently artist in residence at the Google Cultural Institute in Paris.
Prints by Douglas Coupland. Acrylic on archival pigment print 49’x37’ (2014); courtesy of The Daniel Faria Gallery, Toronto