- The software industry faces wicked problems by developing complex software systems and by managing them as part of a larger social, economic, and environmental fabric.
- The power we have as technologists comes with greater responsibility. Don’t forget the asymmetry of power between large organizations, SMEs, and the consumer. The higher the power, the higher the responsibility.
- Software engineers must be prepared to continuously take a probe-sense-respond approach for monitoring and acting on unintended consequences that emerge from their systems, guided by ethical principles.
- The responsibility starts even before designing a system, when conceiving ideas. An ethical approach for responsible innovation is needed, either anticipatory or experimental/participatory.
- Software engineers should educate themselves about the system they serve to be able to make informed decisions on how they want to develop their careers.
Every day we receive news about the unethical use of technology, whether it is the irreversible leakage of personal data, the manipulation of opinion, or the invasion of privacy. Many companies still do not analyze or react to this visible type of impact, often because there is not enough social pressure or government regulation.
However, this reality is changing. People are becoming increasingly aware of the negative impact of technologies such as social networks, mobile phones, and artificial intelligence. Sometimes they stop using the services of these organizations; sometimes employees stop working for them. These are initial actions and reactions to this (wicked) problem, but how far can we go?
Whatever role you are playing – consumer, employee, entrepreneur, manager – the purpose of a business is to serve society. When you try to cut corners, sooner or later the negative consequences will come. Let’s not forget the asymmetry of power between large organizations, SMEs, and the consumer: the higher the power, the higher the responsibility.
Organizations don’t need to go down in flames before they start doing the obvious: seeking win-win solutions, considering sustainability and ethics in all phases, and learning how to scan for and deal with the unintended consequences of each intervention.
Because this topic is easier said than done, this article presents approaches for dealing with wicked problems, with examples of actions depending on the role you play in society.
The software industry is facing “wicked” problems
“Wicked problem” is a term introduced by the theorists Rittel and Webber (1973) to describe problems that cannot be definitively described, with no “solutions” in the sense of definitive and objective answers. It is also understood as a super-category of “complexity”, problems that overwhelm us in some sense.
There is also a class of “super-wicked” problems: climate change, poverty, food security, energy supply, education policy and public health. They all have many interdependent factors making them seem impossible to solve.
The software industry faces wicked problems in different ways: by developing complex software systems and by managing them as part of a larger social, economic, and environmental fabric. Wicked problems always existed in our industry, but the internet and globalization undoubtedly created conditions for new forms of interaction, thus expanding the universe of related wicked problems.
Examples of wicked problems closely associated with software include social networks, sharing-economy platforms, and air traffic control. In business, a new strategy (e.g. re-branding) or a modification to a product (e.g. introducing a new version of a video game) are classic examples.
Software engineers must be prepared to make ethical decisions, think critically, and act systemically.
There is an element of responsibility inherent to tackling wicked problems, as those who present “solutions” to them are “liable for the consequences of the solutions they generate; the effects can matter a great deal to the people who are touched by those actions” (Rittel and Webber, 1973).
A software engineer is involved in the design and evolution of digital systems/platforms that provide new ways of seeing and navigating the world, new ways of organizing the economy, cities and social lives.
Due to the inherently complex nature of these ecosystems, there is no way to control or predict the varied behaviors and consequences that daily (design/operational/management/strategic) decisions can cause. Software engineers must be prepared to continuously take a probe-sense-respond approach for monitoring and acting on unintended consequences that emerge. By taking long-term responsibility, these actions should be guided by ethical principles.
There are many examples of software engineers making ethical decisions. A famous one is the case of Chef: a group of developers found that some customers were using their product to support the deportation and separation of families in the USA, and raised the question of whether the company should maintain the contract. After protests, the company decided to let the ICE contract expire. Another case was the 50 million Facebook profiles harvested for Cambridge Analytica in a major data breach. This breach enabled the manipulation of voters during campaigns and elections, damaging established democracies.
More recently, Twitter permanently suspended the account of former US President Donald Trump “due to the risk of further incitement of violence”, raising the question of whether the decision was ethical or not. Differentiating ethics from law is important here, as the law often sets only the moral minimum. Experts are discussing the short- and long-term implications of tech companies’ decisions, which of course are difficult to predict.
A simpler example of an ethical decision is when a developer adopts the precautionary principle of designing a system not to maintain unnecessary user data, protecting them from possible future data breaches. We know that once released to the public, data cannot be taken back, opening up a number of unintended consequences.
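As a sketch of this precautionary principle, data minimization can be enforced at the storage boundary. The following Python example is illustrative only (all names are hypothetical): the service persists just the fields it needs, so data it never stored can never leak.

```python
from dataclasses import dataclass

# Hypothetical signup flow illustrating data minimization.
@dataclass
class SignupForm:
    email: str
    display_name: str
    birth_date: str   # collected by the form, but not needed by the service
    phone: str        # same: "nice to have" data is a liability, not an asset

@dataclass
class StoredAccount:
    email: str
    display_name: str

def minimize(form: SignupForm) -> StoredAccount:
    """Persist only what the service needs; drop everything else at the boundary."""
    return StoredAccount(email=form.email, display_name=form.display_name)

account = minimize(SignupForm("ada@example.org", "Ada", "1815-12-10", "+44 000"))
# account carries no birth date or phone number, so a future breach cannot expose them
```

The design choice is structural: because `StoredAccount` has no field for the extra data, no later code path can accidentally persist it.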
In a broader sense, there is no way to approach ethics and tech without questioning money, power, progress, and unconstrained/exponential growth. So software engineers should educate themselves about the system they serve to be able to make informed decisions on how they want to develop their careers. A good starting point is the ACM Code of Ethics and Professional Conduct, which was updated in 2018 to reflect the debate and make clear why we should care about technology ethics.
This also opens a discussion on how we need to evolve the Software Engineering and Computer Science curricula to better prepare students for our century. The same applies to professionals: how to keep broadening systemic knowledge (social, economic, environmental) so we can be more effective software engineers in tackling complex problems.
Making ethical decisions: underlying values behind ethical theories
I am not an ethicist; I usually adopt Prof. Colaner’s explanation of the underlying values behind the main ethical theories. These values are important because they serve as guidance for reflecting on ethical issues. We can identify five basic values that contribute to human well-being:
- Autonomy: a person’s ability to govern himself/herself and to order his/her own life, so s/he is not used as a means to an end that s/he did not choose (most closely associated with deontology).
- Non-suffering/harm: perhaps the most basic moral instinct is that suffering (whether physical or emotional) is bad because it takes away from a person’s happiness (most closely associated with utilitarianism).
- Equality: a person must not be regarded or treated as inferior or superior to other people.
- Trust: when people cooperate, they form trust, and living in a high-trust environment makes possible many of the things we associate with a good life. So trust is the goal, and cooperation is the way to get it (most closely associated with contractarianism).
- Character Excellence: having a goal to build a character of virtues is often related to human well-being. Forming ethical habits by doing moral actions is the way to get it (most closely associated with virtue ethics).
Dealing with wicked problems
COVID-19 is an example of a super-wicked problem. It is not just a disease, but an entangled system of dynamic, interdependent problems, in which interconnected causes and consequences go beyond health and encompass psychological, educational, environmental, and economic dimensions. It involves different perspectives, interests, and worldviews, implying many different interpretations and opinions about how actions should happen and be coordinated.
In a simplistic view, once we recognize we are dealing with a wicked problem, we need to engage with the system to gain a deeper understanding of how things are connected, which can only be done by probing it and making sense of it through experimentation. We then learn and adapt as we go, because these problems are never “finished” in a cause-and-effect sense (which is why linear methods don’t work well). Instead, we recognize some patterns that are manageable and start multiple small interventions, with the intention of shifting the system in the desired direction.
Because we are intervening with the “system”, we must do it responsibly, as interventions change the system itself. When running experiments, nudges, or interventions, we need to think about “safe-to-fail” experiments, a concept explained by Snowden and Boone (2007). The software industry is increasingly working with safe-to-fail experiments through an agile approach. This can also be understood as effectively applying the Precautionary Principle, a “broad epistemological, philosophical and legal approach to innovations with potential for causing harm when extensive scientific knowledge on the matter is lacking”.
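One common way the industry makes an intervention safe-to-fail is a gradual rollout behind a feature flag: expose a change to a small, stable cohort, sense what emerges through metrics, and respond by expanding or rolling back. A minimal Python sketch, with hypothetical names and percentages (not tied to any specific flagging framework):

```python
import hashlib

def in_cohort(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically assign a stable cohort of users to an experiment.

    Setting rollout_pct to 0.0 acts as an instant rollback: the
    intervention disappears without redeploying anything.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    return bucket < rollout_pct

# Probe: expose the change to ~1% of users; sense via metrics; respond by
# expanding the rollout or dialing it back to zero.
exposed = [u for u in (f"user{i}" for i in range(1000))
           if in_cohort(u, "new-ranking", 0.01)]
```

Hashing makes the assignment deterministic, so the same user sees the same behavior on every request, which keeps the probe observable and the rollback clean.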
Approaches for designing solutions: the role of technologists
As technologists, we must consider the power we have in the digital age. We know how digital systems work behind the scenes and some of their consequences way before they are articulated or understood by the general public. This power comes with greater responsibility. It means being more active in sharing knowledge with other stakeholders in our society, from management of organizations to our government leaders, so we are able to work together on complex or wicked problems.
The responsibility starts even before designing a system, when we are conceiving ideas. An ethical approach for responsible innovation is needed, either anticipatory or experimental/participatory.
Anticipatory approaches combine ethical analysis with various techniques for forecasting plausible futures (e.g. scenarios, trend analysis, Delphi panels, horizon scanning), as well as various methods of technology assessment (Brey, 2017). These techniques are used to project products, applications, uses and impacts that may result from the further development and introduction of an emerging technology.
Ethical issues in these future applications and uses are subsequently identified and subjected to ethical analysis. For instance, “Technology X is likely to lead to applications and uses that harm privacy. Therefore, it should be developed and introduced in such a way as to minimize such harms”.
We can go deeper and check the results of a research project like ETICA, a 26-month European initiative including 12 partners who engaged in a foresight activity to explore developments in ICT to inform policy development. They developed a specific methodology to identify emerging ICTs and evaluate their expected benefits and undesired side effects or controversies. They selected 11 emerging technologies likely to be socially and economically relevant in the next 10 to 15 years. Figure 1 illustrates the ethical issues they forecasted for Cloud Computing:
Figure 1. Synthesis example: ethical issues of cloud computing, from the ETICA project
There are some toolkits organizations can use to run anticipatory analysis with their teams. Consequence Scanning – an agile practice for responsible innovators – offers a technique to adopt within an iterative development cycle.
The Ethical OS Toolkit brings:
“1) a checklist of 8 risk zones to help you identify the emerging areas of risk and social harm most critical for your team to start considering now;
2) 14 scenarios to spark conversation and stretch your imagination about the long-term impacts of tech you’re building today; and
3) 7 future-proofing strategies to help you take ethical action today”.
Another ethical approach for responsible innovation is the experimental one, which recognizes that some ethical issues will only be uncovered during the process of introducing a technology into society: a social experiment whose outcomes emerge from the co-evolution of technology and society. The idea is to postpone the question “Is technology X morally acceptable?”, which for the most part we cannot answer before the technology has been fully introduced into society. Instead, we try to answer the question “Is it ethically acceptable to experiment with technology X in society?”
An example of this approach is described in the book White Hat UX | The Next Generation in User Experience. It takes an adaptive approach through which an organization can improve the user experience of its products and services – what the authors call a “White Hat”, honest user experience.
The role of entrepreneurs and organizations
Entrepreneurs have a challenge (and opportunity) to redesign their business models to be more sustainable and to contribute to addressing existing wicked problems. Organizations will also need to redesign their governance mechanisms to be ethical by design (e.g. adopting anticipatory and responsive ethical assessments), build ethical capabilities, and make sure they are able to build diverse, inclusive, multi-disciplinary teams.
In this context, Jutta Eckstein and I recently wrote a book chapter (Sustainability: Delivering Agility’s Promise) describing ideas that organizations can adopt when designing solutions, and examples of entrepreneurs who established new business models which are more ethical and sustainable. We based our ideas on the Agile Manifesto, hoping that agile organizations and teams can leverage the agile principles to better navigate wicked problems.
- Brey, P. (2017). Ethics of Emerging Technologies. In S. O. Hansson (Ed.), Methods for the Ethics of Technology. Rowman and Littlefield International.
- Rittel, H.W.J., Webber, M.M. Dilemmas in a general theory of planning. Policy Sci 4, 155–169 (1973).
- Snowden, D.J., Boone, M.E. A leader’s framework for decision making. Harv Bus Rev. 2007 Nov;85(11):68-76, 149.
About the Author
Dr. Claudia Melo is currently an enterprise agile coach/software engineer at an international organization (United Nations), and an advisory board member at Mulheres na Tecnologia (/MNT). Over the past 20 years, Melo has joined different organizations with a focus on software engineering, connecting delivery, research, and education. She conducts extensive work in Agile Methods/DevOps and Organization Design in collaboration with companies, universities, entrepreneurs, government, and international organisations. She was ThoughtWorks’ global head of tech learning development, a member of its Technology Advisory Board, and CTO for Latin America. Since 2016, she has been working on ICT for Sustainability topics, aligned with the 2030 Agenda for Sustainable Development. Melo received her Ph.D. in Computer Science from the University of São Paulo (USP), in collaboration with the Norwegian University of Science and Technology (NTNU). Besides contributing to scientific research, books, and industry reports, in 2015 she received the USP Outstanding Thesis Award.