Morality and the Idea of Progress in Silicon Valley
Eric Giannella
Silicon Valley’s amorality problem arises from the blind faith many place in progress. The narrative of progress provides moral cover to the tech industry and lulls people into thinking they no longer need to exercise moral judgment.
Brian Mayer didn’t expect to become the “most hated person in San Francisco, for a day.” After spending thirty minutes in line for a food truck, he built a service that would book reservations at Bay Area restaurants under fictional names and then sell those reservations to people willing to pay to get a table at the last minute. To horrified observers, it was scalping and gentrification all in one. It even led a writer at TechCrunch, which normally channels the go-go anxiety in Silicon Valley, to post an article titled “Stop the Jerk Tech.” When replying to critics on his blog, Mayer wrote, “Is this even legal? Is it ethical? … To be honest, I haven’t spent a lot of time thinking through these questions. I built this site as an experiment in consumer demand for a particular product…”
Part of the reason for the backlash against Mayer is that his service epitomized a shift toward amorality in Silicon Valley. It hasn’t always been this way. For much of its history as the heart of information technology, people in Silicon Valley had robust conversations about morality and ethics. In the 70s and early 80s, debates about software (free vs. commercial) and competing visions of technological utopias (e.g., libertarian vs. socialist) were relatively commonplace and infused engineering choices with a distinctly moral dimension. Even as recently as a decade ago, the Google founders insisted they understood the distinction between opportunism and ethical business through their motto, “Don’t be evil.” At a minimum, debate forced people to think about and articulate moral views. Yet, over time, dramatic examples such as the personal computer, the internet, and search engines seem to have convinced those of us in Silicon Valley that information technology is generally a force for good.[1] Moreover, the fact that these technologies happen to be beneficial to people and successful in the marketplace has lulled many into thinking that market success is ethics enough. As Mayer puts it, “If someone does pay for it willingly, is it really unethical?”
Critiques of recent scandals in Silicon Valley rightly place the blame on a culture that supports amorality, thoughtlessness, and ignorance rather than ill intent.[2] But the problem runs much deeper, because Silicon Valley’s amorality problem arises from the implicit and explicit narrative of progress that companies use for marketing and that people use to find meaning in their work. By accepting this narrative of progress uncritically, imagining that technological change equals historic human betterment, many in Silicon Valley excuse themselves from moral reflection. Put simply, the progress narrative short-circuits moral reflection on the consequences of new technologies.
The progress narrative has a strong hold on Silicon Valley for business and cultural reasons. The idea that technology will bring about a better world for everyone can be traced back to the Enlightenment aspiration to “master all things by calculation,” in the words of Max Weber.[3] The successes of science and technology give rise to a faith among some that rationality itself tends to be a force for good.[4] This faith makes business easier because companies can claim to be contributing to progress while skirting the moral views of the various groups affected by their products and services. Most investors would rather not see their firms get mired in the fraught issue of defining what is morally better according to various groups; they prefer objective benefits, measured via return on investment (ROI) or other metrics. Yet, the fact that business goals and cultural sentiments go hand in hand so well ought to give us pause.
The idea of progress is popular because it ends up negating itself, and as a result, makes almost no demands upon us. In Silicon Valley, progress gets us thinking about what is objectively better, which suggests that we come up with some rational way to define better (e.g., ROI). But the only way to say that something is better in the sense we associate with progress is to first ask whether it is moral. Morality is inherently subjective and arational. Suggesting that a technology represents progress in any meaningful, moral sense would require understanding the values of the people affected by the technology. Few businesses and investors would be willing to claim they contributed to progress if held to account by this standard. If people are concerned with assessing whether specific technologies are helpful or harmful in a moral sense, they should abandon the progress narrative. Progress, as we think of it, invites us to cannibalize our initial moral aspirations with rationality, thus leaving us out of touch with our moral intuitions. It leads us to rely on efficiency as a proxy for morality and makes moral discourse seem superfluous.
Why progress and rationality are so closely linked in our imagination
We need to look to our cultural history to see why our understanding of progress is so bound up with rationality. Silicon Valley’s faith in progress is the purest distillation of Enlightenment ideas that Max Weber saw embodied in early Americans like Ben Franklin.[5] Weber was interested in the rapidly growing role of rationality in changing how people lived and experienced life.[6] People like Ben Franklin not only thrived on a pragmatic, rational approach to life, they celebrated it. They took the rational and calculating style of thought that made the sciences so successful and applied it to every aspect of life. Because worldly success demonstrated one’s grace (in Protestant America), productivity became a moral issue and rationality was its engine. This allowed early Americans to view a purely means-ends approach to life as praiseworthy rather than shallow.
Once this means-ends approach to life was introduced, Weber thought that there was no going back. Rationally designed and managed firms would spread because they would outcompete firms that were run on more traditional bases – such as a mixture of family obligation and devotion to craft. Henry Ford’s manufacturing system for the Model T would beat any other system for producing cars. Yet it was not just businesses that saw rationality applied in greater measure. In late 19th-century Germany, professional administrators following explicit rational procedures allowed the government to attain a previously unimaginable level of speed, coordination, and power. The rapidly expanding use of rationality in planning and running human affairs could also be seen in religion, the law, and even the university.
While it had innumerable practical benefits, applying more rationality to more of life took an existential toll. Combined with scientific explanations of the natural world, the observation that so much of life could be controlled through systematization reduced, for some, the power of traditional sources of meaning – superstition, religion, and pre-modern ethics like honor. With science able to explain so much, and technology able to control so much, the world had become disenchanted.
Why progress became a source of meaning
Weber knew that people need narratives to provide coherence between their lives and their understanding of the world. He wondered what new beliefs modern people would invent to find meaning in their lives. Ironically, with no common ground left but the tools of disenchantment, we have enchanted those tools. John Gray describes the general pattern:
Modern myths are myths of salvation stated in secular terms. What both kinds of myths have in common is that they answer to a need for meaning that cannot be denied. In order to survive, humans have invented science. Pursued consistently, scientific inquiry acts to undermine myth. But life without myth is impossible, so science has become a channel for myths – chief among them, a myth of salvation through science.[7]
To put it another way, progress is the only myth left when rationality has eviscerated other sources of meaning. Because of our faith in progress we have granted rationality itself a positive moral valence.
This problem of meaning is brought to a head in Silicon Valley. In trying to answer the question, “what does all this new technology mean for us?” Silicon Valley executives, investors, and journalists often default to a story about human progress. Moreover, many in Silicon Valley are so privileged and talented that they can ask themselves what they would like their work to mean beyond simply making them richer. Venture capitalists (VCs) and entrepreneurs regularly invoke phrases like “make a difference,” “have an impact,” or “change the world,” which suggest that they at least partially view their work in moral terms – in terms of beneficence. Of the thousands of potential investments VCs might screen per year, they end up funding less than one percent. Yet, it is troublingly hard to glean consistent moral criteria from their investment choices. For people with so much discretion, one would think a robust concern with “changing the world” in any meaningful, moral sense would at least preclude them from investing in companies such as Zynga; or, for that matter, cause them to fire the management team of Uber.
The narrative of progress proves very useful here. One way to claim moral credit and disavow blame is to equate economic benefits with moral benefits. If productivity improves, that is morally good. If productivity does not improve, it is not good, but it is also not bad. The rhetoric around innovation relies on this logic. The harshest condemnation one can receive is “not innovative.” Talking about innovation provides a means for having a pseudo-moral discourse. It celebrates the good but fails to condemn the bad. Moreover, by placing all technologies within the same category, the innovation rhetoric legitimates each new technology product, however frivolous, by association with major beneficial technologies such as email and databases. The halo cast by a tiny minority makes inquiring about the moral implications of new technologies appear less urgent.
Another business benefit of finding meaning in a story about innovation is that it can motivate people. Imagining that major technological change might occur at any moment keeps buyers attentive. Journalists would rather write about the significance of historical trends than incremental changes in a business. Entrepreneurs would like to believe that the technology they are commercializing will be of tremendous consequence. Some engineers can indulge in the knowledge that they do the hard and under-appreciated real work of building celebrated products and services. Innovation justifies purchases, assigns roles, and allows people to have something bigger and more interesting to talk about than the fortunes of a company, but it will never lead to serious moral evaluation of a technology.
When the only means are rationality, the ends become more rationality
A common business model today is to optimize some activity. Information technology is perfectly suited as a tool to make activities more efficient (i.e., rational in a technical sense). Only a few ideologues would flat-out claim that more rationality is, as a rule, good.[8] Yet, because we’ve gotten so adept at using information technology to rationally plan, we’d like to be able to claim that making things more rational is good. This Enlightenment motif that “more rationality = progress” justifies the countless products and services whose origins can be traced to someone noticing an opportunity for optimization. But, if we put this default assumption aside for a moment, there are many cases in which we need to question whether making activities more rational in a technical sense is moral. Is workforce-scheduling software that makes single parents’ lives even more demanding a good thing? Is automating someone’s job if we know they will struggle to find other work a good thing?
This brings us back to why our notion of progress is self-negating. We would like progress to be defined in moral terms. Yet, because not everyone shares the same morals, businesses and governments try to redefine progress in objective terms. Because we fear charges of subjectivity, we look to rational means and rational measures for pursuing objective goals. Besides, moral goals would, in many cases, make it impossible to serve everyone (to “scale,” in the local parlance). As a result, we take a technocrat’s approach to progress: we try to define it in objective terms and pursue it through rational means.[9] Yet, the only criteria we have for better (i.e., progress) are informed by subjective, moral intuitions. How we might define and measure better, even in an economic sense (e.g., cost-of-living adjusted income or reduced income disparity), is informed by moral intuitions. If we deny the importance of these moral intuitions, we cannot say much, if anything, about whether something is good or bad.[10] In our culture, progress is self-negating because we define and pursue progress solely in objective, rational terms, thus ignoring our inherently subjective moral intuitions and allowing them to atrophy. It is a classic story of the means overtaking the ends.
We need to talk more openly about moral consequences of new technologies
What if we allowed ourselves to reflect on and talk about morality a bit more? A more robust public moral discourse would make it less likely that a company such as Zynga, which has a history of treating employees and customers terribly, would receive venture capital funding or find qualified job applicants. Mark Pincus, the longtime CEO, was known by many as extremely focused on “winning” – dominating competitors and going public. He convinced some of the most prestigious venture capital firms in the Bay Area to bankroll his efforts. Apart from rampant copying of other developers’ games, extreme overwork of engineers, and vicious treatment of some employees, the design of Zynga’s games also revealed an astounding disrespect for its users. It used an understanding of behavioral psychology, even hiring a psychologist, to design games to be more addictive. Had public conversation in Silicon Valley been more focused on moral issues, it would have been more difficult for Pincus to get venture capital funding and hire sought-after engineers.
There are, of course, cases more nuanced than Zynga that would have benefitted from a more robust discourse about morality. Until the FDA barred the company from selling a product with unproven health claims, many celebrated 23andMe for doing something “innovative.” (In fact, many then complained that the ban represented a “government threat to innovation.”) The company found a way to sell personal genetic tests for $99. People argued for the benefits that the company would bring: it would lower the cost of other forms of genetic testing; it would provide a massive repository of genetic data for researchers; it would bring the promise of genomic medicine closer to reality.
Despite 23andMe’s seeming aspirations to make money while helping people, any concrete benefits to consumers were far off in the future – and there were potential harms to consumers that were buried in Silicon Valley’s excitement. In other words, there were moral implications of selling the tests that we might have attended to if not for our desire to have another example of a commercially successful technology that helps people. First, do we think it is good for people to obtain hard-to-interpret genetic test results? Some patients might get unjustified medical tests, experience unwarranted anxiety, change their lifestyles or, in the worst cases, decide to stop taking medications. Of course, 23andMe’s investors would not want to see the product marketed as a novelty (though some consumers treated it this way). Many more people would be willing to purchase the product if it represented a cheap medical screening.
The FDA had been trying for at least four years to get 23andMe to prove its health claims or stop making them. In its letter telling the company it could no longer sell its product in the U.S., the FDA noted:
…your company’s website at www.23andme.com/health (most recently viewed on November 6, 2013) markets the PGS [Personal Genome Service] for providing “health reports on 254 diseases and conditions,” including categories such as “carrier status,” “health risks,” and “drug response,” and specifically as a “first step in prevention” that enables users to “take steps toward mitigating serious diseases” such as diabetes, coronary heart disease, and breast cancer.
With its product banned from sale in the U.S., 23andMe just began selling it in the U.K. In a recent interview on the BBC, CEO Anne Wojcicki could be heard again explaining the potential health benefits of her firm’s genetic tests.
Second, and perhaps just as troubling, is what 23andMe has long planned to do with genetic data. In order to justify the low price of its tests to investors, 23andMe will recoup money by selling aggregated data to other companies. The recently announced $60 million deal with Genentech suggests the real money will be made here. Unfortunately, because genomics is such an immature field, it is unclear what information is truly anonymous and what information might someday provide clues for exposing personal genetic data. The marketing emphasis on medical relevance rather than fun novelty makes consumers much more willing to compromise on their privacy.
23andMe should have been a long-term research project, not a Silicon Valley startup. Given its business model, investors should not have funded it, nor should the media have celebrated it. There were too many questions to be resolved in terms of consumers’ use of the test results and potential misuse of genomic data by other firms.
My overall point is that the progress narrative is counterproductive. We ought to abandon it. A simple step is to stop using obfuscating terms that prop up a progress narrative. Words like innovation, impact, and disruption invite an abstract style of thinking and talking that leaves little room for moral reflection.[11] Talking about technology in terms of progress invites a technocratic and uncritical approach to thinking about the human good. It quickly moves from real benefits for real people to abstract systems upon systems that may someday benefit people. By encouraging this hyper-analytical thinking, the idea of progress desensitizes us to the use of moral judgment. It allows our moral intuitions to become dull.[12] It serves a function: it preserves the false connection between what some Silicon Valley firms do, in terms of consequences for real people, and what they claim to do in terms of ushering in a better future. The progress narrative shrouds the tech industry in virtue for playing a key role in technological change while weakening moral evaluation of new products and services.
We ought to treat the tech industry as any other industry and put aside the association with human progress. Some technologies do improve our lives in general, but the assumption that technology is a force for good has proved harmful. Letting go of the idea of progress would allow us to talk more clearly about the moral consequences of new products and services.
An alternative narrative about contribution
There are alternatives to the progress narrative for making work in technology meaningful. Many people find meaning in their work through a narrative about making a contribution. Rather than thinking about contribution in a historic sense (i.e., progress), contribution can be thought of in terms of specific groups of people.[13] People in many fields – teachers, cooks, doctors, among others – find meaning in their work through making a contribution to specific people. In tech, some might define the affected group more broadly, for example, programmers who rely on a software development tool, the users of a word processor, or the people who enjoy a particular game. The point is that knowing who will be affected by our work keeps us honest in terms of what we think is a contribution.
There is a second benefit to thinking of contribution in terms of specific people or groups rather than human progress generally. Knowing a group through individuals rather than via market segments prevents professionals from inadvertently imposing one set of values on groups with disparate values. In other words, thinking about contributions in terms of specific groups encourages understanding the people within those groups.[14] Many of the complaints about Silicon Valley’s service and social media tools focus on the fact that they reflect the concerns and interests of privileged young urbanites. Tools developed with the idea of contributing to specific groups would do less to encourage convergence of views about what constitutes the “good life.”[15]
None of this is to say that there are no do-gooders in tech. There are people who have a clear idea of how the technologies they are developing will serve specific groups – whether pursuing social justice in the United States or providing better medical care in poorer nations. The morality of these causes does not stem from their association with progress – it flows from the desire to bring about real benefits that the real people affected would say are good. Although it would be ideal if everyone could pursue such causes, that day is a long way off.
That does not leave everyone else off the hook. Everyone can, at a minimum, ask whether they are doing more harm than good. The trouble in Silicon Valley is that many talented, highly educated young people seem relatively unconcerned with the potential for harm. To be more aware of not harming people, much less helping them, we need to cultivate moral intuitions by discussing the consequences of our work for specific people.[16] The search for solidarity with specific people, not some objectively better moment in human history, keeps us exercising our moral intuitions.
References and Footnotes
- I refer to “us” and “we” in this essay because having grown up in Silicon Valley, with my parents and now many of my friends in tech, I consider myself a member of this culture. (There are large disagreements in any culture.) ↩
- For example: Bilton, Nick. 2014. “The Slippery Slope of Silicon Valley.” The New York Times. ↩
- Weber, Max. 1958 [1919]. “Science as a Vocation.” Daedalus 87, no. 1: 117. ↩
- For example: Weber, Max. 1949. Pp. 34–47 of “The Meaning of Ethical Neutrality in Sociology and Economics,” in The Methodology of the Social Sciences, edited by Edward A. Shils and Henry A. Finch. Free Press. ↩
- Weber, Max. 2012 [1905]. The Protestant Ethic and the Spirit of Capitalism. Dover. ↩
- Brubaker, Rogers. 1984. The Limits of Rationality: An Essay on the Social and Moral Thought of Max Weber. HarperCollins; Schluchter, Wolfgang. 1985. The Rise of Western Rationalism: Max Weber's Developmental History. University of California Press. ↩
- Gray, John. 2013. The Silence of Animals: On Progress and Other Modern Myths. Farrar, Straus and Giroux. P. 82 ↩
- See so-called “rationalist” groups in the Bay Area, such as rationality.org or the meet-ups promoted by lesswrong.com. ↩
- For example, Marcuse argues that the rise of technocratic thinking curtails our ability to reflect and criticize. Marcuse, Herbert. 1964. One-Dimensional Man. Beacon Press. ↩
- On the idea of moral intuitions, see the work of Charles Taylor, such as chapter one of Sources of the Self: The Making of the Modern Identity. 1989. Harvard University Press. ↩
- In terms of communicating any underlying substance that people might attach to these words – one could replace innovation with “change” or “improvement,” impact with “effects” or “consequences,” and disruption with “gaining market share.” ↩
- The language of benevolence and emphasis on real people is borrowed from chapter nine in: Taylor, Charles. 1991. The Ethics of Authenticity. Harvard University Press. ↩
- This discussion largely parallels Richard Rorty’s pragmatist defense of scientific contributions being made to a specific community versus to a universal history of scientific progress. See: Rorty, Richard. 1991. “Solidarity or Objectivity?” in Objectivity, Relativism and Truth. Cambridge University Press. ↩
- Weber’s notion that formal and substantive rationality are often in competition captures an important tension here. Weber called the use of rationality to make activities more internally coherent rationalization. Rationalization is a process of logically orchestrating a set of activities in order to pursue certain ends. (Note that this is exactly what many information technologies help us do). Weber was particularly interested in the spread of formal rationality in recent history. Formal rationality strives for consistent, objective logic across contexts – it needs no reference to culture, time or place. As reliance on formal rationality expands, it often conflicts with substantive rationality. Substantive rationality is about whether something is reasonable in a particular context – it relies on subjective understandings of the people in that context. The spread of a single, technically efficient way of doing things might trample on a variety of local norms and values. The conflict in this essay might be reframed as faith in progress leading people to embrace formal rationality at the expense of substantive rationality. ↩
- See the work of Paul Feyerabend for an argument about the link between the veneration of rationality and the convergence and narrowing of human experience (e.g., the final chapters of Against Method or the introduction to Conquest of Abundance): Feyerabend, Paul. 1975. Against Method. Verso; Feyerabend, Paul. 1999. Conquest of Abundance: A Tale of Abstraction versus the Richness of Being. University of Chicago Press. ↩
- I am not claiming that we should only evaluate actions in light of their consequences (i.e., consequentialism). I am saying that, as a starting point for moral reflection, people might ask themselves whether they expect a new product or service to be beneficial or harmful to the people affected. Respect for individuals and groups matters. ↩