Oct 28, 2014 | Updated Oct 20, 2022

Humans and the Risks of Automation

By Ankitha Bharadwaj

A few nights ago, I found myself at an Irish pub with some good friends. Over drinks, the conversation started veering from the usual “So, what are your plans for the weekend?” to “How can we design systems to use the sophistication of human thinking and decision making, AND the complex computation power of machines to strike a healthy balance of automation?”

All right, maybe I should back up a bit.

Cover of The Glass Cage: Automation and Us, a book by Nicholas Carr. (Photo credit: The Verge)

The Glass Cage

Earlier this month I attended a talk by Nicholas Carr, a writer who focuses on technology and culture. Carr is the author of several renowned books in the field of human-computer interaction; his book The Shallows: What the Internet Is Doing to Our Brains was a finalist for the 2011 Pulitzer Prize in general nonfiction. His latest book, The Glass Cage, tackles the relationship between humans and automation, and specifically cautions us to consider the ramifications of relying too heavily on automation.

My first reaction was to dismiss Carr as someone building a straw man to warn us of a looming robot invasion. But after listening to his points, I understood that his fears are not about artificial intelligence and robots taking over the world. He is more concerned with how we can take advantage of the benefits of computers while still exercising our own cognitive skills and abilities.

During his talk, Carr provided numerous examples of how our dependence on automation (e.g., spellcheckers, Google Maps’ turn-by-turn directions) has caused us to lose the ability to flex those cognitive skills. Carr fears that if we become too complacent about or reliant on automation, our inherent skills and talents might begin to diminish.

After understanding Carr’s refined argument, I thought to myself: Does any of this really matter? As long as we reach the goal, accomplish the task, or do the work, does it matter whether it was done in an automated or computer-supported way?

Part of me says yes: In situations where we are without our computers, or have lost access to that automation software, we may find ourselves in a lot of trouble. Carr gives examples of pilots who have become so reliant on automated cockpit functions that they are frazzled and incapable of taking control of the flight when those systems go down.

The other part of me says no: At the end of the day, the work got done! It’s a huge testament to human intelligence and engineering prowess that we have built sophisticated systems that can automate mundane tasks that aren’t intellectually stimulating, as well as tasks requiring high-level, complex calculations that are difficult for humans.

Carr’s automation solution

In his talk, Carr presented a solution: He suggests that we build systems in a human-centered way. Now that’s a solution I can get behind with no issues whatsoever. Carr says that designers and engineers must take advantage of the computer’s ability to do those complex calculations and mundane, repetitive tasks, while still allowing the human operator to exercise control over the system and to take control back at any time. Moreover, Carr goes on to say the system should be designed so that it encourages the human operator to take control and develop his or her skills.

After Carr’s talk, I did quite a bit of research online to find other points of view on this idea of “risk of automation.” I started sampling from the Buffet of Thought; Carr’s talk was an extra spicy dish and I wasn’t sure if I’d go back for seconds.

However, the more I read, the more I identified with ideas that align with Carr’s arguments: Good design should be human-centered, and it should provide situational awareness such that the human operator is always conscious of what’s happening and how he or she can jump in to take control. Researchers Thomas Sheridan and William Verplank identified ten levels of automation (a rough sketch of this scale in code follows the list):

  1. The computer offers no assistance: the human must make all decisions and take all actions.
  2. The computer offers a complete set of decision/action alternatives.
  3. The computer narrows the selection down to a few.
  4. The computer suggests one alternative.
  5. The computer executes that suggestion after human approval.
  6. The computer allows the human a restricted time to veto before automatic execution.
  7. The computer executes automatically, then informs the human.
  8. The computer informs the human only if asked.
  9. The computer informs the human only if it decides to.
  10. The computer decides everything and acts autonomously, ignoring the human.
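
To make the scale concrete, here’s a minimal sketch in Python (the names are my own illustrative choices, not from any published implementation) that models the ten levels and flags which ones keep the human in the decision loop:

    from enum import IntEnum

    class AutomationLevel(IntEnum):
        """Sheridan and Verplank's ten levels of automation (illustrative names)."""
        NO_ASSISTANCE = 1          # human makes all decisions and takes all actions
        OFFERS_ALTERNATIVES = 2    # complete set of decision/action alternatives
        NARROWS_TO_A_FEW = 3       # narrows the selection down to a few
        SUGGESTS_ONE = 4           # suggests one alternative
        EXECUTES_ON_APPROVAL = 5   # executes the suggestion after human approval
        VETO_WINDOW = 6            # restricted time to veto before automatic execution
        EXECUTES_THEN_INFORMS = 7  # executes automatically, then informs the human
        INFORMS_IF_ASKED = 8       # informs the human only if asked
        INFORMS_IF_IT_DECIDES = 9  # informs the human only if it decides to
        FULL_AUTONOMY = 10         # decides everything, ignoring the human

    def human_in_the_loop(level: AutomationLevel) -> bool:
        """True while the human still approves or can veto before execution."""
        return level <= AutomationLevel.VETO_WINDOW

Call human_in_the_loop(AutomationLevel.EXECUTES_ON_APPROVAL) and you get True; from Level 7 up, the machine acts first and the human finds out later.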

The sweet spot

The trick is to hit the sweet spot between automation and human control. The ideal system (according to Carr, anyway) sits at Level 5: The human operator retains total control of the decision-making process, while the machine executes the decision after the human approves it. And this makes sense, doesn’t it?

It’s up to us to make those core decisions – whether it’s the trajectory of a flight or whether we take a left or a right at Pike Street – and the system should encourage us to make those decisions and support the development of our cognitive skills.
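
As a toy illustration of that Level 5 handshake (hypothetical function names, not anything Carr proposed), the machine does the heavy computation and suggests one course of action, but it acts only after the human signs off:

    def suggest_route(origin: str, destination: str) -> str:
        # Stand-in for the machine's complex calculation.
        return f"left at Pike Street, en route from {origin} to {destination}"

    def level_five_step(origin: str, destination: str) -> None:
        suggestion = suggest_route(origin, destination)  # computer suggests one alternative
        answer = input(f"Proposed: {suggestion}. Approve? [y/n] ")
        if answer.strip().lower() == "y":
            print("Executing:", suggestion)              # machine acts only after approval
        else:
            print("Standing by; the human keeps control.")

The decision stays with the person; the execution, and the drudgery behind it, stays with the machine.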

Want an example of bad automation? Look no further than your smartphone’s autocorrect function. When was the last time your phone corrected something normal to something scandalous? I bet you won’t have to think that far back. The new iPhone operating system, iOS 8, attempts to solve this problem by tossing control back to the user: it lets the texter select from a few options, increasing the likelihood of getting the right word on the first try.

This predictive typing feature keeps the texter in control at all times. The user can opt into the phone’s autocorrect suggestion or select from a few others.
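
Here’s a minimal sketch of that pattern (a toy frequency model, certainly not Apple’s actual implementation): rank a handful of candidate words and let the typist choose, rather than silently replacing what they wrote:

    from collections import Counter

    # Toy language model: word frequencies drawn from the user's own history.
    history = Counter({"ducking": 3, "duckling": 2, "docking": 1})

    def suggestions(prefix: str, k: int = 3) -> list[str]:
        """Return up to k known words matching the typed prefix, most frequent first."""
        return [w for w, _ in history.most_common() if w.startswith(prefix)][:k]

    typed = "duck"
    print(f"You typed '{typed}'. Suggestions: {suggestions(typed)}")
    # The typist taps one suggestion or keeps the literal text;
    # nothing is ever swapped out behind their back.

That’s roughly Level 3 on the scale above: the computer narrows the selection down to a few, and the human makes the call.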

What are some systems you can think of that fall in that sweet spot? Tell us what you think!

Curious to know more about Nicholas Carr’s thoughts? Check out his interview on NPR with Tom Ashbrook back in September.

Ankitha Bharadwaj works in user research at Blink UX. She’s a doer and a trier, and spends her free time thinking about how we know what we know is, in fact, the truth.