Designing for Automation vs Augmentation

How should anticipatory design strike a balance between automation and augmentation?

Historical perspective
The three waves

AI-based solutions have long been used by business-to-business (B2B) companies to automate routine tasks in operations and logistics. Historically, however, the first business transformation involved standardizing processes.

Daugherty and Wilson [3] call this era the first wave; it was ushered in by the first Industrial Revolution and by Fordism, Taylorism, and Toyotism.

The second wave of B2B transformation consisted of automated processes; it began in the 1970s and reached its peak in the 1990s. This era emerged with the business process reengineering movement, thanks to advances in information technology [3]. It was propelled by the ubiquity of computers, large databases, and the automation of numerous back-office tasks. Many people were replaced by machines.

Currently, Daugherty and Wilson [3] invoke a third wave, involving adaptive processes. This wave builds on the previous two, yet is a completely new way of doing business: it allows businesses to adapt to people's behaviors, preferences, and needs at a given moment. It is powered by real-time data rather than by a pre-organized sequence of steps.

They envision that when this last wave is fully optimized, it will allow B2B and B2C businesses to take full advantage of AI. Businesses will be able to produce individualized products and services that satisfy customers beyond the capabilities of past mass production, and that deliver more profit.

Automation versus Augmentation

New conventions in the experience economy, alongside new technological developments such as AI, are shaping how businesses invest in integrating user-centered design and user experience (UX) as a crucial part of their overall service design strategy.

Furthermore, when designing AI-driven products, designers should understand whether the service should focus on augmentation or automation. This duality will be the rule of thumb when designing for anticipation or automation.

Automation implies that machines take over a human task; augmentation means that humans collaborate closely with machines to perform a task.

Augmentation cannot be neatly separated from automation in the design domain. These dual AI applications are interdependent across time and space, creating a paradoxical tension [4]. In any case, if done right, both can work together to simplify and improve the outcome of a long, complicated process.

Taking a normative stance, several authors accentuate the benefits of augmentation while taking a more negative viewpoint on automation [3–5].

Their combined advice is that organizations should prioritize augmentation, which they relate to superior performance. Their views suggest that businesses focused solely on automation will only see short-term benefits. Consequently, Davenport and Kirby advise businesses to prioritize augmentation, which they hail as "the only path to sustainable competitive advantage" [5]. Table 1 summarizes why we may support their vision, by analyzing why we should augment humans' capabilities instead of simply automating their actions.
Table 1: Strengths between Humans and Algorithms [3].
If tasks are robot-like, focused on speed, accuracy, or repetition, humans are more than happy to delegate them to AI. Nevertheless, there will also be tasks that people want to control themselves rather than rely on automation for [3]. Humans may delegate repetitive tasks to AI because it can handle those tasks faster, more efficiently, and sometimes even more creatively. In the end, algorithms don't see opportunities for improvement. But humans do. We are the only creatures capable of innovating.

Levels of Automation

Human–algorithm interactions are ubiquitous in our everyday life. Accordingly, designers should understand how the level of automation of these systems can impact the user experience, because poor choices during the design process can affect users' performance, security, and even safety. The design process needs to take into account human factors, as well as the limitations and abilities of both humans and algorithms, across the levels of automation. It is therefore necessary, when designing such systems, to conduct a preliminary study of the possible interactions between humans and algorithms.

Levels of automation range from complete human control to complete system control. In 1978, Sheridan & Verplank introduced a scale of 10 levels of automation, representing a continuum between low automation, in which the human performs the task manually, and full automation, in which the computer acts entirely on its own (Table 2).
Table 2: 10 levels of Automation by Sheridan & Verplank [15].
We may infer that anticipatory design is only possible from level 6 and above.
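As an illustration, the scale and the level-6 threshold inferred above can be sketched as a simple lookup. The level descriptions are paraphrased from common summaries of the Sheridan & Verplank scale [15], and the helper function is hypothetical:

```python
# Paraphrased Sheridan & Verplank levels of automation (see Table 2 / [15]).
LEVELS_OF_AUTOMATION = {
    1: "The human does everything; the computer offers no assistance.",
    2: "The computer offers a complete set of action alternatives.",
    3: "The computer narrows the selection down to a few alternatives.",
    4: "The computer suggests one alternative.",
    5: "The computer executes that suggestion if the human approves.",
    6: "The computer allows the human limited time to veto before acting.",
    7: "The computer acts automatically, then necessarily informs the human.",
    8: "The computer acts and informs the human only if asked.",
    9: "The computer acts and informs the human only if it decides to.",
    10: "The computer decides and acts autonomously, ignoring the human.",
}

def supports_anticipation(level: int) -> bool:
    """Hypothetical check: per the inference above, anticipatory design
    becomes possible only once the system may act without explicit,
    per-action approval, i.e. from level 6 upward."""
    if level not in LEVELS_OF_AUTOMATION:
        raise ValueError(f"unknown automation level: {level}")
    return level >= 6
```

The cut-off at level 6 mirrors the point where the system stops waiting for approval and merely offers a veto window, which is what acting "in advance" requires.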

Parasuraman, Sheridan, and Wickens took the Sheridan & Verplank scale and introduced the idea of associating levels of automation with functions. These functions are based on a four-stage model of human information processing and can be translated into equivalent system functions [15]:
- Information acquisition
- Information analysis
- Decision selection
- Action implementation
We can use these 10 levels of automation to make distinct design choices for each of these four types of automation.
Figure: Flow chart showing the application of the model of types and levels of automation. Adapted from Parasuraman, Sheridan & Wickens, 2000 [15].
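One way to read the model in practice is that a designer chooses a (possibly different) automation level for each of the four function types. A minimal sketch, using a hypothetical `AutomationProfile` record and an invented spam-filter example:

```python
from dataclasses import dataclass

@dataclass
class AutomationProfile:
    """One automation level (1-10) per function type in the
    Parasuraman, Sheridan & Wickens model [15]."""
    information_acquisition: int
    information_analysis: int
    decision_selection: int
    action_implementation: int

    def __post_init__(self):
        for name, level in vars(self).items():
            if not 1 <= level <= 10:
                raise ValueError(f"{name} must be between 1 and 10, got {level}")

# Invented example: an email spam filter might automate sensing and
# analysis heavily, but keep the final action vetoable by the user.
spam_filter = AutomationProfile(
    information_acquisition=9,   # scans all incoming mail on its own
    information_analysis=8,      # scores messages without being asked
    decision_selection=7,        # labels "spam"/"not spam" automatically
    action_implementation=6,     # moves mail, but the user can veto/undo
)
```

The point of the record is that the four levels need not match: a system can be highly automated at acquisition yet conservative at implementation.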

What will happen when anticipation is inaccurate?

Anticipatory design will result in a number of accepted filters applied to the information flow that pours over users every day. This leads to a perception of the future affecting the present. Given these ideas of speculation and prediction, anticipatory design is not without critics.

Despite its promise, little research has been done on the possible implications of anticipatory design. Further research connecting well-identified issues such as information and experience bubbles [6], serendipity [7], and trust silos [8] should be conducted along the current trajectory of anticipatory design.

This is especially true if, at a given moment, a "system model built with anticipatory design principles is speculating about user needs and attempting to fill in the blanks" [9]. Is the system filling them in correctly or incorrectly? How can we measure the risks of a bad prediction?

Some authors caution that when one enters into modes of prediction, one introduces constraints to understanding as well as to serendipity, since the system could deprive us of the capacity to make fortunate discoveries by chance [2, 9, 10].

All human activities can be described by five high-level components [11]:
- Data
- Prediction
- Judgment
- Action
- Outcome

Is the value of human prediction skills decreasing as machine learning provides an improved and cheaper substitute for human prediction? Nevertheless, one of the most exciting opportunities for AI and ML is helping humans make better decisions more often. The best AI–human partnership enables better decisions than either party could make on their own.

As mentioned in Table 1, the value of human judgment skills is increasing. Judgment is a complement to prediction, and therefore, when the cost of prediction falls, the demand for judgment rises.
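This division of labor — machine prediction, human judgment — can be sketched as a tiny pipeline over the five components [11]. All function names here are hypothetical:

```python
def run_activity(data, predict, judge, act):
    """Five-component loop [11]: data -> prediction -> judgment -> action
    -> outcome. `predict` stands in for the machine (cheap and improving);
    `judge` for the human (a complement whose value rises as prediction
    gets cheaper)."""
    prediction = predict(data)        # machine: estimate what is likely
    decision = judge(prediction)      # human: weigh costs, values, context
    outcome = act(decision)           # acting on the decision yields the outcome
    return outcome

# Toy usage: a crude model predicts rain; a human applies judgment about
# the cost of getting wet before acting.
forecast = lambda d: d["humidity"] > 0.8          # "rain likely?" model
judgment = lambda rainy: "take umbrella" if rainy else "leave umbrella"
action = lambda decision: decision                # acting = carrying it out

print(run_activity({"humidity": 0.9}, forecast, judgment, action))
# prints "take umbrella"
```

Swapping in a better `predict` leaves `judge` untouched, which is exactly the sense in which cheaper prediction raises the value of good judgment.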

- If automation fails, will the user be able to easily step in and course-correct?
- And if not, what are the consequences?
- Which mechanisms can designers design to anticipate system failures?
- What will become of humans' experience, deprived of opportunities for serendipity, if all future digital services become more and more automated?
- Will anticipatory design and automation rob us humans of the fulfilment that serendipity brings?

When going down the path of anticipatory design and automating decision-making processes, there are three design principles that should be taken into consideration [12].

1. Transparency: Making sure that there is clarity about the decisions being made on behalf of the user. This way, users are more conscious of when to step in, regain control over the situation, or reverse decisions;

2. Curation: Providing recommendations in a more humanized way, with context, can improve the decision-making process. People are skeptical of algorithms, so human curation should aim for quality instead of quantity;

3. Trust: To make successful decisions on behalf of users, it is necessary to gather a large amount of user data. For users to willingly give access to their private data, designers need to design a system that is grounded in reciprocal trust in every input and action, and that constantly informs users of how their personal data is being used. Transparency will lead to trust in the system.
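A minimal sketch of how principles 1 and 3 could surface in code: every automated decision carries a human-readable rationale (transparency) and stays reversible (control). The `AnticipatoryAssistant` class and its method names are invented for illustration:

```python
class AnticipatoryAssistant:
    """Records every decision made on the user's behalf, with a rationale,
    and lets the user reverse the most recent one."""

    def __init__(self):
        self.state = {}       # current settings chosen for the user
        self.audit_log = []   # transparency: what was decided, and why

    def decide(self, key, value, rationale):
        self.audit_log.append({
            "key": key,
            "value": value,
            "rationale": rationale,
            "previous": self.state.get(key),
        })
        self.state[key] = value

    def undo_last(self):
        """Control: the user can always reverse the latest decision."""
        if not self.audit_log:
            return
        entry = self.audit_log.pop()
        if entry["previous"] is None:
            self.state.pop(entry["key"], None)
        else:
            self.state[entry["key"]] = entry["previous"]

assistant = AnticipatoryAssistant()
assistant.decide("thermostat", 19, "you usually lower the heat at night")
assistant.decide("thermostat", 17, "nobody has been home for two hours")
assistant.undo_last()   # the user disagrees; the setting reverts to 19
```

Keeping the rationale next to each decision is what lets the interface explain itself when the user asks "why did you do that?" — the precondition for the reciprocal trust described above.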

"The ubiquity of the Internet is increasing our ability to collect extraordinary amounts of data from virtually everyone, dramatically reshaping not only how we interact with our devices but how they interact with us" [1].

In short

Anticipatory design is a method that can support the evolution of connected everyday objects by expanding their interaction with users and helping decision-making in their daily lives. Here, UX design has the fundamental role of providing a good experience, supported by machine learning to observe and by IoT to learn users' routines. Together, they interpret this data and provide the interaction in advance, affording an anticipatory experience to the user.

In the end, systems must be designed so that users still have the power to control a decision. When designing for anticipation, designers have to balance control and automation against users' needs and actions. An AI-driven product should have a flexible design that allows users to adapt the output to their needs, edit it, or turn it off.

Despite the power of hyper-personalization in AI-driven products, services won't be perfect every time for every user. The real-world context and the criticality of the task will dictate how people interact with the service.

Machines don't see opportunities for improvement. Humans do!

[1] Shapiro, A.: The Next Big Thing In Design? Less Choice, last accessed 2020/06/29.

[2] Zamenopoulos, T., Alexiou, K.: Towards an anticipatory view of design. Des. Stud. 28, 411–436 (2007).

[3] Daugherty, P., Wilson, J.: Human + Machine: Reimagining Work in the Age of AI. Harvard Business Review Press, United States (2018).

[4] Raisch, S., Krakowski, S.: Artificial Intelligence and Management: The Automation-Augmentation Paradox. Acad. Manag. Rev. 1–48 (2020).

[5] Davenport, T.H., Kirby, J.: Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. Harper Business (2016).

[6] Pariser, E.: The Filter Bubble: What The Internet Is Hiding From You. (2011).

[7] Melo, R.: On Serendipity in the Digital Medium, (2018).

[8] Marcus, G., Davis, E.: Rebooting AI: Building artificial intelligence we can trust. Pantheon Books, New York, USA (2019).

[9] Clark, J.A.: Anticipatory Design: Improving Search UX using Query Analysis and Machine Cues. J. Libr. User Exp. 1, (2016).

[10] Van Allen, P.: Reimagining the Goals and Methods of UX for ML/AI A new context requires new approaches. In: The AAAI 2017 Spring Symposium on Designing the User Experience of Machine Learning Systems. pp. 431–434 (2017).

[11] Agrawal, A., Gans, J., Goldfarb, A.: The Simple Economics of Machine Intelligence, last accessed 2020/06/29.

[12] Doody, S.: Anticipatory Design & The Future of Experience, last accessed 2020/06/29.

[13] Rogers, A., Castree, N., Kitchin, R.: A Dictionary of Human Geography. Oxford University Press (2013).

[14] Pine II, B.J., Gilmore, J.H.: The Experience Economy. Harv. Bus. Rev. 97–105 (1998).

[15] Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 30(3), 286–297 (2000).