It was after midnight and many of the guests had already gone to bed, leaving behind their amber-tailed tumblers of high-end whiskey. The poker dealer who had been hired for the occasion from a local casino had left a half hour earlier, but the remaining players had convinced her to leave the table and cards so that they could keep playing. The group still hovering over the felt and chips was dwarfed by the vaulted, wood-timbered ceiling, three stories up. The large wall of windows on the far side of the table looked out onto a long dock, bobbing on the shimmering surface of Lake Tahoe.
Sitting at one end of the table, with his back to the lake, twenty-nine-year-old Erik Voorhees didn’t look like someone who three years earlier had been unemployed, mired in credit card debt, and doing odd jobs to pay for an apartment in New Hampshire. Tonight Erik fit right in with his suede oxfords and tailored jeans, and he bantered easily with the hedge fund manager sitting next to him. His hairline was already receding, but he still had a distinct, fresh-faced youthfulness to him. Showing his boyish dimples, Erik joked about his poor performance at their poker game the night before, and called it a part of his “long game.”
“I was setting myself up for tonight,” he said with a broad toothy smile, before pushing a pile of chips into the middle of the table.
Erik could afford to sustain the losses. He’d recently sold a gambling website that was powered by the enigmatic digital money and payment network known as Bitcoin. He’d purchased the gambling site back in 2012 for about $225, rebranded it as SatoshiDice, and sold it a year later for some $11 million. He was also sitting on a stash of Bitcoins that he’d begun acquiring a few years earlier when each Bitcoin was valued at just a few dollars. A Bitcoin was now worth around $500, sending his holdings into the millions. Initially snubbed by investors and serious business folk, Erik was now attracting a lot of high-powered interest. He had been invited to Lake Tahoe by the hedge fund manager sitting next to him at the poker table, Dan Morehead, who had wanted to pick the brains of those who had already struck it rich in the Bitcoin gold rush.
For Voorhees, like many of the other men at Morehead’s house, the impulse that had propelled him into this gold rush had both everything and nothing to do with getting rich. Soon after he first learned about the technology from a Facebook post, Erik predicted that the value of every Bitcoin would grow astronomically. But this growth, he had long believed, would be a consequence of the multilayered Bitcoin computer code remaking many of the prevailing power structures of the world, including Wall Street banks and national governments—doing to money what the Internet had done to the postal service and the media industry. As Erik saw it, Bitcoin’s growth wouldn’t just make him wealthy. It would also lead to a more just and peaceful world in which governments wouldn’t be able to pay for wars and individuals would have control over their own money and their own destiny.
It was not surprising that Erik, with ambitions like these, had had a turbulent journey since his days of unemployment in New Hampshire. After moving to New York, he had helped convince the Winklevoss twins, Tyler and Cameron, of Facebook fame, to put almost a million dollars into a startup he helped create, called BitInstant. But that relationship ended with a knock-down, drag-out fight, after which Erik resigned from the company and moved to Panama with his girlfriend.
More recently, Erik had been spending many of his days in his office in Panama, dealing with investigators from the US Securities and Exchange Commission—one of the top financial regulatory agencies—who were questioning a deal in which he’d sold stock in one of his startups for Bitcoins. The stock had ended up providing his investors with big returns. And the regulators, by Erik’s assessment, didn’t seem to even understand the technology. But they were right that he had not registered his shares with regulators. The investigation, in any case, was better than the situation facing one of Erik’s former partners from BitInstant, who had been arrested two months earlier, in January 2014, on charges related to money laundering.
Erik, by now, was not easily rattled. It helped that, unlike many passionate partisans, he had a sense of humor about himself and the quixotic movement he had found himself in the middle of.
“I try to remind myself that Bitcoin will probably collapse,” he said. “As bullish as I am on it, I try to check myself and remind myself that new innovative things usually fail. Just as a sanity check.”
But he kept going, and not just because of the money that had piled up in his bank account. It was also because of the new money that he and the other men in Lake Tahoe were helping to bring into existence — a new kind of money that he believed would change the world.
How Music Got Free
As I was browsing through my enormous list of albums one day a few years ago, a fundamental question struck me: where had all this music come from, anyway? I didn’t know the answer, and as I researched it, I realized that no one else did either. There had been heavy coverage of the mp3 phenomenon, of course, and of Apple and Napster and the Pirate Bay, but there had been little talk of the inventors, and almost none at all of those who actually pirated the files.
I became obsessed, and as I researched more, I began to find the most wonderful things. I found the manifesto from the original mp3 piracy clique, a document so old I needed an MS-DOS emulator just to view it. I found the cracked shareware demo for the original mp3 encoder, which even its inventors had considered lost. I found a secret database that tracked thirty years of leaks—software, music, movies—from every major piracy crew, dating back to 1982. I found secret websites in Micronesia and the Congo, registered to shell corporations in Panama, the true proprietors being anyone’s guess. Buried in thousands of pages of court documents, I found wiretap transcripts and FBI surveillance logs and testimony from collaborators in which the details of insidious global conspiracies had been laid bare.

My assumption had been that music piracy was a crowdsourced phenomenon. That is, I believed the mp3s I’d downloaded had been sourced from scattered uploaders around the globe and that this diffuse network of rippers was not organized in any meaningful way. This assumption was wrong. While some of the files were indeed untraceable artifacts from random denizens of the Internet, the vast majority of pirated mp3s came from just a few organized releasing groups. By using forensic data analysis, it was often possible to trace those mp3s back to their place of primary origination. Combining the technical approach with classic investigative reporting, I found I could narrow this down even further. Many times it was possible not just to track the pirated file back to a general origin, but actually to a specific time and a specific person.
That was the real secret, of course: the Internet was made of people. Piracy was a social phenomenon, and once you knew where to look, you could begin to make out individuals in the crowd. Engineers, executives, employees, investigators, convicts, even burnouts — they all played a role.
I started in Germany, where a team of ignored inventors, in a blithe attempt to make a few thousand bucks from a struggling business venture, had accidentally crippled a global industry. In so doing, they became extremely wealthy. In interviews, these men dissembled, and attempted to distance themselves from the chaos they had unleashed. Occasionally, they were even disingenuous, but it was impossible to begrudge them their success. After cloistering themselves for years in a listening lab, they had emerged with a technology that would conquer the world.
Then to New York, where I found a powerful music executive in his early 70s who had twice cornered the global market on rap. Nor was that his only achievement; as I researched more, I realized that this man was popular music. From Stevie Nicks to Taylor Swift, there had been almost no major act from the last four decades that he had not somehow touched. Facing an unprecedented onslaught of piracy, his business had suffered, but he had fought valiantly to protect the industry and the artists that he loved. To my eyes, it seemed unquestionable that he had outperformed all of his competitors; for his trouble, he’d become one of the most vilified executives in recent memory.
From the high-rises of midtown Manhattan I turned my attention to Scotland Yard and FBI headquarters, where dogged teams of investigators had been assigned the thankless task of tracking this digital samizdat back to its source, a process that often took years. Following their trail to a flat in northern England, I found a high-fidelity obsessive who had overseen a digital library that would have impressed even Borges. From there to Silicon Valley, where another entrepreneur had also designed a mind-bending technology, but one that he had utterly failed to monetize. Then to Iowa, then to Los Angeles, back to New York again, London, Sarasota, Oslo, Baltimore, Tokyo, and then, for a long time, a string of dead ends.
Until finally I found myself in the strangest place of all, a small town in western North Carolina that seemed as far from the global confluence of technology and music as could be. This was Shelby, a landscape of clapboard Baptist churches and faceless corporate franchises, where one man, acting in almost total isolation, had over a period of eight years cemented his reputation as the most fearsome digital pirate of all. Many of the files I had pirated—perhaps even a majority of them—had originated with him. He was the Patient Zero of Internet music piracy, but almost no one knew his name.
Over the course of more than three years I endeavored to gain his trust. Sitting in the living room of his sister’s ranch house, we often talked for hours. The things he told me were astonishing — at times they seemed almost beyond belief. But the details all checked out, and once, at the end of an interview, I was moved to ask:
“Dell, why haven’t you told anybody any of this before?”
“Man, no one ever asked.”
From How Music Got Free by Stephen Witt, published on June 16, 2015 by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright by Stephen Witt, 2015.
Losing the Signal
The students at Prince of Wales Public School had long since stopped paying attention to Reg Nicholls squeaking away on the blackboard. Every few minutes the math teacher frowned, erasing part of his work. Then: more numbers, a spiraling out-of-control formula, and that awful scraping of chalk on blackboard. Finally, the classroom fell silent. Poor Nicholls stood motionless. “Can anyone tell me where I went wrong?” he asked.
An answer came from the back of the room: “When you were born.”
The room erupted. Nicholls raced to the back of the class, dragging his heckler into the hallway. The sputtering, mottle-faced instructor pinned twelve-year-old Jim Balsillie against a wall of lockers. Balsillie stared right back at Nicholls. Balsillie’s real punishment came the next day when he was kicked out of math. He’d have to study on his own for the rest of term. See how far that gets you, his teacher said. Oh, and you’re still going to have to join classmates for the compulsory provincewide math test in a couple of weeks.
Later that month, Balsillie rejoined his class for the big test at the Peterborough, Ontario, school. The smart-ass, it turns out, really was smart. Studying all on his own, the lippy twelve-year-old math castoff scored first in the grade 7 test, not just at Prince of Wales but in the entire province. A regional superintendent traveled to the school to bestow the 1974 math honor on him. When he raced home to tell his mom, Laurel, about winning the award, she just shook her head, laughing, repeating a line she often used to sum up her difficult middle child: “Jim, you always fall in shit and come up smelling like roses.”
Getting in trouble was relatively easy in Peterborough’s working-class west end, where houses were small and ambitions were oversized; where lawns doubled as parking lots and sports games frequently ended in fights. Young Jim, the middle of three children born within three years, fit right in with the time and territory. “I was always a troublemaker,” he says, “mouthy and cocky.” Growing up, Balsillie played a lot of hockey and lacrosse and loved watching Peterborough Petes junior hockey games at Memorial Centre with his father, who had season tickets. Many Petes players made it to the NHL—including Bob Gainey and Steve Yzerman—and Balsillie dreamed of one day following them and returning to his hometown with hockey’s greatest trophy, the Stanley Cup.
Even more important to Balsillie than Petes players was the team’s coach. “The leading figure in my eyes was Roger Neilson—an innovative coach in so many ways.” Neilson was junior hockey’s infamous trickster. When pulling his goalie for an extra attacker, Roger had his netminder leave his stick across the mouth of the crease to stop long shots. When he was managing a local baseball team, Neilson had a catcher hide a pared apple in his equipment. When a runner for the other team dangled off third base, the catcher fired the apple over his third baseman’s head. The jubilant runner then dashed home, smiling, only to be touched out with the real ball by Roger Neilson’s catcher at home plate.
When he wasn’t pulling a fast one, Neilson fought the rules. That’s how he became known as “Rule Book Roger.” The establishment—referees and umpires, who were league officials—hated Rule Book Roger. Not teenage Jim Balsillie: he loved the maverick as much as he loved the game. Neilson’s skirmishes mirrored the deep-rooted conflicts with authority that defined Balsillie’s teenage years. He was close to his mother and her parents, but he sparred frequently with his father; he was a bright student who alienated teachers with a razor-sharp tongue. Although suspicious of figures of power, Balsillie also aspired to join Canada’s business establishment. Balsillie would struggle throughout his career to make peace with his warring two-headed demon: the positive force of ambition versus a deep-rooted distrust of authority.
Predictably, perhaps, Balsillie’s trouble with those in charge first became manifest in dealings with his father, Ray Balsillie, a descendant of French Métis, Canadian aboriginals of mixed European and indigenous ancestry who trace their roots to the fur trade. The Balsillies were a complicated bunch. One wing of the family worked at Saskatchewan’s fabled Cumberland House, a northern Hudson’s Bay Company trading post that once housed the ill-fated Franklin expedition to the Arctic—Scottish explorers who perished in the far north in the 1840s. The Balsillie clan shares both Scot and Métis blood. All of which explains Jim Balsillie’s piercing blue eyes, sharp cheekbones, and olive skin.
Ray Balsillie, whose family moved from Manitoba to a small town south of Waterloo when he was a boy, left the family home as a teenager to make a fresh start in Seaforth, Ontario, with the Royal Canadian Air Force. As an adult Ray Balsillie seldom spoke of his native heritage, and his two sons and daughter were discouraged from raising the subject. It was only when Jim traveled as an adult to Winnipeg that he learned that an aunt was one of that city’s most notorious residents. Gladys Balsillie, who died in 1987, began her career as a pilot before opening a popular restaurant and music venue, the Swinging Gate. When the restaurant closed, she made her mark managing exotic dancers at Winnipeg hotels. At her peak, the “Queen of the Strippers” managed more than one hundred male and female performers. Ray may have tried to hide his family’s colorful past under the lush blue-green carpet of Ontario cottage country, but there was a strain of restless adventure in Balsillie blood—a history of flesh and fur traders.
Jim was born in 1961 in Seaforth, a small town near Lake Huron. Shortly after, Ray began moving the family around, accepting positions as an electrical repairman with various Ontario companies. Eventually the Balsillies settled in Peterborough, a small, conservative city in the heart of Ontario that, apart from their neighborhood, was straight as an accountant’s ruler. When Jim was growing up, Peterborough was a predominantly white, churchgoing community defined by Trent University, a handful of U.S. manufacturing branch plants, and the summer influx of affluent Toronto cottagers. According to Jim, Ray Balsillie viewed himself as an outsider in the upbeat town; he gradually adopted a forlorn, Willy Loman–like air of defeat. “He grappled with insecurities,” Balsillie says of his father. His relationship with his dad “wasn’t all hugs and kisses.”
As Ray Balsillie withdrew from social activity, devoting his spare time to storing found objects and oddities in the family house, Jim flew in the opposite direction, growing increasingly ambitious. He cut his teeth as a salesman at age seven, selling Christmas cards door-to-door as his mother supervised from the sidewalk. Soon there were multiple paper routes, a painting business, and a job manning the lift at a nearby ski hill.
“I wanted the independence. I wanted nice things. If you wanted books, records, a car, athletic gear, you had to go earn it,” he says.
What Balsillie really wanted was to be someone. Upon reading Peter C. Newman’s seminal 1975 study of Canada’s cozy business aristocracy, The Canadian Establishment, the tradesman’s son decided that he had to join the country’s most inbred club. Tracing the education and early career paths of powerful corporate chieftains mapped out in Newman’s book, Balsillie realized he needed to take three giant steps: first, be accepted by an elite undergraduate school; second, land an accounting job at the establishment firm of Clarkson Gordon; and third, graduate from Harvard Business School. Balsillie had been an indifferent student who, except for his grade 7 home run in math, earned only average marks. He threw himself into studies his final year of high school. Upon being accepted by the University of Toronto’s prestigious Trinity College, Balsillie replaced his childhood dreams of professional hockey with a new yearning. “I remember deciding I was going to be the best student in the history of the University of Toronto, set every academic record imaginable, prepare for every assignment, get 100 percent on everything,” Balsillie says. “I was pretty sure they were going to put up a statue of me.”
Excerpted from LOSING THE SIGNAL: The Untold Story Behind the Extraordinary Rise and Spectacular Fall of BlackBerry. Copyright © 2015 by Jacquie McNish and Sean Silcoff. Excerpted by permission of Flatiron Books, a division of Macmillan Publishers. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Early in my teaching career I managed to get most of the students in my class mad at me. A midterm exam caused the problem.
I wanted the exam to sort out the stars, the average Joes and the duds, so it had to be hard and have a wide dispersion of scores. I succeeded in writing such an exam, but when the students got their results they were in an uproar. Their principal complaint was that the average score was only 72 points out of 100.
What was odd about this reaction was that I had already explained that the average numerical score on the exam had absolutely no effect on the distribution of letter grades. We employed a curve in which the average grade was a B+, and only a tiny number of students received grades below a C. I told the class this, but it had no effect on the students’ mood. They still hated my exam, and they were none too happy with me either. As a young professor worried about keeping my job, I wasn’t sure what to do.
Finally, an idea occurred to me. On the next exam, I raised the points available for a perfect score to 137. This exam turned out to be harder than the first. Students got only 70 percent of the answers right, but the average numerical score was 96 points. The students were delighted!
I chose 137 as a maximum score for two reasons. First, it produced an average well into the 90s, and some students scored above 100, generating a reaction approaching ecstasy. Second, because dividing by 137 is not easy to do in your head, I figured that most students wouldn’t convert their scores into percentages.
Striving for full disclosure, in subsequent years I included this statement in my course syllabus: “Exams will have a total of 137 points rather than the usual 100. This scoring system has no effect on the grade you get in the course, but it seems to make you happier.” And, indeed, after I made that change, I never got a complaint that my exams were too hard.
In the eyes of an economist, my students were “misbehaving.” By that I mean that their behavior was inconsistent with the idealized model at the heart of much of economics. Rationally, no one should be happier about a score of 96 out of 137 (70 percent) than 72 out of 100, but my students were. And by realizing this, I was able to set the kind of exam I wanted but still keep the students from grumbling.
This illustrates an important problem with traditional economic theory. Economists discount any factors that would not influence the thinking of a rational person. These things are supposedly irrelevant. But unfortunately for the theory, many supposedly irrelevant factors do matter.
Economists create this problem with their insistence on studying mythical creatures often known as Homo economicus. I prefer to call them “Econs”— highly intelligent beings that are capable of making the most complex of calculations but are totally lacking in emotions. Think of Mr. Spock in “Star Trek.” In a world of Econs, many things would in fact be irrelevant.
No Econ would buy a larger portion of whatever will be served for dinner on Tuesday because he happens to be hungry when shopping on Sunday. Your hunger on Sunday should be irrelevant in choosing the size of your meal for Tuesday. An Econ would not finish that huge meal on Tuesday, even though he is no longer hungry, just because he had paid for it. To an Econ, the price paid for an item in the past is not relevant in making the decision about how much of it to eat now.
An Econ would not expect a gift on the day of the year in which she happened to get married, or be born. What difference do these arbitrary dates make? In fact, Econs would be perplexed by the idea of gifts. An Econ would know that cash is the best possible gift; it allows the recipient to buy whatever is optimal. But unless you are married to an economist, I don’t advise giving cash on your next anniversary. Come to think of it, even if your spouse is an economist, this is not a great idea.
Of course, most economists know that the people with whom they interact do not resemble Econs. In fact, in private moments, economists are often happy to admit that most of the people they know are clueless about economic matters. But for decades, this realization did not affect the way most economists did their work. They had a justification: markets. To defenders of economic orthodoxy, markets are thought to have magic powers.
There is a version of this magic market argument that I call the invisible hand wave. It goes something like this. “Yes, it is true that my spouse and my students and members of Congress don’t understand anything about economics, but when they have to interact with markets …” It is at this point that the hand waving comes in. Words and phrases such as high stakes, learning and arbitrage are thrown around to suggest some of the ways that markets can do their magic, but it is my claim that no one has ever finished making the argument with both hands remaining still.
Hand waving is required because there is nothing in the workings of markets that turns otherwise normal human beings into Econs. For example, if you choose the wrong career, select the wrong mortgage or fail to save for retirement, markets do not correct those failings. In fact, quite the opposite often happens. It is much easier to make money by catering to consumers’ biases than by trying to correct them.
Perhaps because of undue acceptance of invisible-hand-wave arguments, economists have been ignoring supposedly irrelevant factors, comforted by the knowledge that in markets these factors just wouldn’t matter. Alas, both the field of economics and society are much worse for it. Supposedly irrelevant factors, or SIFs, matter a lot, and if we economists recognize their importance, we can do our jobs better. Behavioral economics is, to a large extent, standard economics that has been modified to incorporate SIFs.
SIFs matter in more important domains than keeping students happy with test scores. Consider defined-contribution retirement plans like 401(k)s. Econs would have no trouble figuring out how much to save for retirement and how to invest the money, but mere humans can find it quite tough. So knowledgeable employers have incorporated three SIFs in their plan design: they automatically enroll employees (who can opt out), they automatically increase the saving rate every year, and they offer a sensible default investment choice like a target date fund. These features significantly improve the outcomes of plan participants, but to economists they are SIFs because Econs would just figure out the right thing to do without them.
These retirement plans also have a supposedly relevant factor: Contributions and capital appreciation are tax-sheltered until retirement. This tax break was created to induce people to save more. But guess what: A recent study using Danish data has compared the relative effectiveness of the SIFs and a similar tax subsidy offered in Denmark. The authors attribute only 1 percent of the saving done in the Danish plans to the tax breaks. The other 99 percent comes from the automatic features.
They conclude: “In sum, the findings of our study call into question whether tax subsidies are the most effective policy to increase retirement savings. Automatic enrollment or default policies that nudge individuals to save more could have larger impacts on national saving at lower social cost.” Irrelevant indeed!
Notice that the irrelevant design features that do all the work are essentially free, whereas a tax break is quite expensive. The Joint Economic Committee estimates that the United States tax break will cost the government $62 billion in 2015, a number that is predicted to grow rapidly. Furthermore, most of these tax benefits accrue to affluent taxpayers.
Here is another example. In the early years of the Obama administration, Congress passed a law giving taxpayers a temporary tax cut and the administration had to decide how to carry it out. Should taxpayers be given a lump sum check, or should the extra money be spread out over the year via regular paychecks?
In a world of Econs this choice would be irrelevant. A $1,200 lump sum would have the same effect on consumption as monthly paychecks that are $100 larger. But while most middle-class taxpayers spend almost their entire paycheck every month, if given a lump sum they are more likely to save some of it or pay off debts. Since the tax cut was intended to stimulate spending, I believe the administration made a wise choice in choosing to spread it out.
The field of behavioral economics has been around for more than three decades, but the application of its findings to societal problems has only recently been catching on. Fortunately, economists open to new ways of thinking are finding novel ways to use supposedly irrelevant factors to make the world a better place.
Rise of the Robots
On January 2, 2010, the Washington Post reported that the first decade of the twenty-first century resulted in the creation of no new jobs. Zero. This hasn’t been true of any decade since the Great Depression; indeed, there has never been a postwar decade that produced less than a 20 percent increase in the number of available jobs. Even the 1970s, a decade associated with stagflation and an energy crisis, generated a 27 percent increase in jobs. The lost decade of the 2000s is especially astonishing when you consider that the US economy needs to create roughly a million jobs per year just to keep up with growth in the size of the workforce. In other words, during those first ten years there were about 10 million missing jobs that should have been created — but never showed up.
Income inequality has since soared to levels not seen since 1929, and it has become clear that the productivity increases that went into workers’ pockets back in the 1950s are now being retained almost entirely by business owners and investors. The share of overall national income going to labor, as opposed to capital, has fallen precipitously and appears to be in continuing free fall. Our Goldilocks period has reached its end, and the American economy is moving into a new era.
It is an era that will be defined by a fundamental shift in the relationship between workers and machines. That shift will ultimately challenge one of our most basic assumptions about technology: that machines are tools that increase the productivity of workers. Instead, machines themselves are turning into workers, and the line between the capability of labor and capital is blurring as never before.
All this progress is, of course, being driven by the relentless acceleration in computer technology. While most people are by now familiar with Moore’s Law — the well-established rule of thumb that says computing power roughly doubles every eighteen to twenty-four months — not everyone has fully assimilated the implications of this extraordinary exponential progress.
As someone who has worked in software development for more than twenty-five years, I’ve had a front-row seat when it comes to observing that extraordinary acceleration in computing power. I’ve also seen at close hand the tremendous progress made in software design, and in the tools that make programmers more productive. And, as a small business owner, I’ve watched as technology has transformed the way I run my business — in particular, how it has dramatically reduced the need to hire employees to perform many of the routine tasks that have always been essential to the operation of any business.
It’s a good bet that nearly all of us will be surprised by the progress that occurs in the coming years and decades. Those surprises won’t be confined to the nature of the technical advances themselves: the impact that accelerating progress has on the job market and the overall economy is poised to defy much of the conventional wisdom about how technology and economics intertwine.
One widely held belief that is certain to be challenged is the assumption that automation is primarily a threat to workers who have little education and lower skill levels. That assumption emerges from the fact that such jobs tend to be routine and repetitive. Before you get too comfortable with that idea, however, consider just how fast the frontier is moving. At one time, a “routine” occupation would probably have implied standing on an assembly line. The reality today is far different. While lower-skill occupations will no doubt continue to be affected, a great many college-educated, white-collar workers are going to discover that their jobs, too, are squarely in the sights as software automation and predictive algorithms advance rapidly in capability.
The fact is that “routine” may not be the best word to describe the jobs most likely to be threatened by technology. A more accurate term might be “predictable.” Could another person learn to do your job by studying a detailed record of everything you’ve done in the past? Or could someone become proficient by repeating the tasks you’ve already completed, in the way that a student might take practice tests to prepare for an exam? If so, then there’s a good chance that an algorithm may someday be able to learn to do much, or all, of your job. That’s made especially likely as the “big data” phenomenon continues to unfold: organizations are collecting incomprehensible amounts of information about nearly every aspect of their operations, and a great many jobs and tasks are likely to be encapsulated in that data — waiting for the day when a smart machine learning algorithm comes along and begins schooling itself by delving into the record left by its human predecessors.
The upshot of all this is that acquiring more education and skills will not necessarily offer effective protection against job automation in the future. As an example, consider radiologists, medical doctors who specialize in the interpretation of medical images. Radiologists require a tremendous amount of training, typically a minimum of thirteen years beyond high school. Yet, computers are rapidly getting better at analyzing images. It’s quite easy to imagine that someday, in the not too distant future, radiology will be a job performed almost exclusively by machines.
In general, computers are becoming very proficient at acquiring skills, especially when a large amount of training data is available. Entry-level jobs, in particular, are likely to be heavily affected, and there is evidence that this may already be occurring. Wages for new college graduates have actually been declining over the past decade, while up to 50 percent of new graduates are forced to take jobs that do not require a college degree. Indeed, as I’ll demonstrate in this book, employment for many skilled professionals — including lawyers, journalists, scientists, and pharmacists — is already being significantly eroded by advancing information technology. They are not alone: most jobs are, on some level, fundamentally routine and predictable, with relatively few people paid primarily to engage in truly creative work or “blue-sky” thinking.
As machines take on that routine, predictable work, workers will face an unprecedented challenge as they attempt to adapt. In the past, automation technology has tended to be relatively specialized and to disrupt one employment sector at a time, with workers then switching to a new emerging industry. The situation today is quite different. Information technology is a truly general-purpose technology, and its impact will occur across the board. Virtually every industry in existence is likely to become less labor-intensive as new technology is assimilated into business models — and that transition could happen quite rapidly. At the same time, the new industries that emerge will nearly always incorporate powerful labor-saving technology right from their inception. Companies like Google and Facebook, for example, have succeeded in becoming household names and achieving massive market valuations while hiring only a tiny number of people relative to their size and influence. There’s every reason to expect that a similar scenario will play out with respect to nearly all the new industries created in the future.
All of this suggests that we are headed toward a transition that will put enormous stress on both the economy and society. Much of the conventional advice offered to workers and to students who are preparing to enter the workforce is likely to be ineffective. The unfortunate reality is that a great many people will do everything right — at least in terms of pursuing higher education and acquiring skills — and yet will still fail to find a solid foothold in the new economy.
Beyond the potentially devastating impact of long-term unemployment and underemployment on individual lives and on the fabric of society, there will also be a significant economic price. The virtuous feedback loop between productivity, rising wages, and increasing consumer spending will collapse. That positive feedback effect is already seriously diminished: we face soaring inequality not just in income but also in consumption. The top 5 percent of households are currently responsible for nearly 40 percent of spending, and that trend toward increased concentration at the top seems almost certain to continue. Jobs remain the primary mechanism by which purchasing power gets into the hands of consumers. If that mechanism continues to erode, we will face the prospect of having too few viable consumers to continue driving economic growth in our mass-market economy.
As this book will make clear, advancing information technology is pushing us toward a tipping point that is poised to ultimately make the entire economy less labor-intensive. However, that transition won’t necessarily unfold in a uniform or predictable way. Two sectors in particular — higher education and health care — have, so far, been highly resistant to the kind of disruption that is already becoming evident in the broader economy. The irony is that the failure of technology to transform these sectors could amplify its negative consequences elsewhere, as the costs of health care and education become ever more burdensome.
Technology, of course, will not shape the future in isolation. Rather, it will intertwine with other major societal and environmental challenges such as an aging population, climate change, and resource depletion. It’s often predicted that a shortage of workers will eventually develop as the baby boom generation exits the workforce, effectively counterbalancing — or perhaps even overwhelming — any impact from automation. Rapid innovation is typically framed purely as a countervailing force with the potential to minimize, or even reverse, the stress we put on the environment. However, as we’ll see, many of these assumptions rest on uncertain foundations: the story is sure to be far more complicated. Indeed, the frightening reality is that if we don’t recognize and adapt to the implications of advancing technology, we may face the prospect of a “perfect storm” where the impacts from soaring inequality, technological unemployment, and climate change unfold roughly in parallel, and in some ways amplify and reinforce each other.
In Silicon Valley the phrase “disruptive technology” is tossed around on a casual basis. No one doubts that technology has the power to devastate entire industries and upend specific sectors of the economy and job market. The question I will ask in this book is bigger: Can accelerating technology disrupt our entire system to the point where a fundamental restructuring may be required if prosperity is to continue?
Real flexibility — the kind that gives you at least a measure of control over when and how you work in a week, a month, a year, and over the course of a career — is a critical part of the solution to the problem of how to fit work and care together. In most workplaces, however, flex policies exist largely on paper. It is often exceedingly difficult, if not impossible, for employees to avail themselves of them. Even when firms and their managers actually support flex policies, employees often don’t ask. And for good reason. In a work culture in which commitment to your career is supposed to mean you never think about or do anything else, asking for flexibility to fit your work and your life together is tantamount to declaring that you do not care as much about your job as your co-workers do. Dame Fiona Woolf, a British solicitor and former lord mayor of London, puts it succinctly: “Girls don’t ask for [flexible work] because they think it’s career suicide.”
This assessment resonated with one young woman who wrote to me after my article in The Atlantic came out. Kathryn Beaumont Murphy was already a mother when she became a junior associate — a rarity at big law firms. She had a generous maternity leave and took advantage of their flextime policy when she returned after having her second child. She was allowed to work “part-time” for six months after her maternity leave ended, though it was still forty hours a week. “At the end of that six-month period,” Murphy says, “I was told by the all-male leadership of my department that I could not continue on a flexible schedule as it would hurt my professional growth.” She ended up leaving the firm entirely for a less prestigious job that pays much less, but where she has more control over her schedule. Her children are now five and eight, and after her experience she now believes that “flexibility is as valuable as compensation.”
In 2013, the Journal of Social Issues published a special issue on “flexibility stigma” that included several studies showing that workers who take advantage of company policies specifically designed to let them adjust their schedules to accommodate caregiving responsibilities may still receive wage penalties, lower performance evaluations, and fewer promotions. If anything, men who try to take advantage of flexible policies have it even worse. Joan Blades and Nanette Fondas, co-authors of The Custom-Fit Workplace, give the example of a law firm associate named Carlos who tried to arrange for paternity leave and was told that his company’s parental leave policy was really meant just for women. But even if his firm had said yes, it’s likely that he would then have paid an even bigger price in terms of his chances for advancement, and suffered more mistreatment, than his female colleagues did.
A Catalyst study showing that men and women tend to use flexibility policies differently underlines the dangers that men perceive. The men and women in their study were equally likely to use a variety of flextime arrangements, from flexible arrival and departure times to compressing work in various ways across the week. But women were 10 percent more likely than men to work from home, and men were almost twice as likely as women to say that they had never telecommuted over the course of their careers. Patterns like these suggest that men understand the risk: when a certain kind of flexibility means a lot less face time at the office, they simply won’t chance being penalized for taking advantage of it.
To be stigmatized means to be singled out, shamed, and discriminated against for some trait or failing. Stigma based on race, creed, gender, or sexual orientation is sharply and explicitly disapproved of in contemporary American society. Why should stigma based on taking advantage of company policy to care for loved ones be any different? Workers who work from home or even take time off do not lose IQ points. Their choice to put family alongside or even ahead of career advancement does not necessarily affect the quality of their work, even if it reduces the quantity.
However effective flexibility policies may seem in theory, flexibility cannot be the solution to work-life issues as long as it is stigmatized. The question that young people should be asking their employers is not what kinds of family-friendly policies a particular firm has. Instead, they should ask, “How many employees take advantage of these policies? How many men? And how many women and men who have worked flexibly have advanced to top positions in the firm?”
While we’re working to remove the stigma from flexibility, we must also recognize that even the limited flexibility that white-collar workers take for granted doesn’t exist at all for low-income hourly workers. Schedules that are too rigid to accommodate family needs on the one hand are often too flexible on the other. More than 70 percent of low-wage workers in the United States do not get paid sick days, which means that they risk losing their jobs when a childcare or health issue arises. In fact, nearly one-quarter of adults in the United States have been fired or threatened with job loss for taking time off to recover from illness or care for a sick loved one.
For millions of American workers, then, flexibility is not the solution but the problem. Women and men at the top cannot advocate for more flexibility without insisting that these policies be implemented in ways that help workers rather than hurt them.
The kind of flexibility we need does not stigmatize or exploit. Ending flexibility stigma in the workplace must be about making room for care in all our lives, not an additional excuse to stop caring about the human impact of our policies.