Traction Heroes

Automation Complacency

Jorge Arango Episode 34

On the risks of unquestioningly delegating sense-making to technology in general (and AI in particular).

Show notes:

The Glass Cage by Nicholas Carr

Harry

We don't really continually challenge the idea that the signals and the alerts and the messages, effectively, that we are taking away from the work that these computers are now doing on our behalf may in fact be leading us off course, not in some necessarily nefarious or large way, but perhaps a little bit over a long period of time, and may end up putting us in a position where we "run aground."

Narrator

You're listening to Traction Heroes: Digging In to Get Results with Harry Max and Jorge Arango.

Harry

Jorge, it's great to see you again. I was looking forward to this all week.

Jorge

I am so happy to see you and glad that we can make the time to catch up.

Harry

Fabulous. I'm super excited. I brought a reading today that I think I've mentioned a couple of times, but haven't actually read from. That doesn't make any sense, you know what I'm saying? But we've talked about this and we've talked around this, and so I wanted to read this and I thought it would be interesting to get your thoughts on it. As is our typical protocol, I will reveal the author and title after I'm done. I beg your forgiveness for how long this is, but I have given you some very short ones in the past, so hopefully in the mix it makes up for it. You ready?

Jorge

Let's do it.

Harry

Alright. "Most of us assume that automation is benign, that it raises us to higher callings, but doesn't otherwise alter the way we behave or think. That's a fallacy. It's an impression of what scholars of automation have come to call the substitution myth. A labor-saving device doesn't just provide a substitute for some isolated component of a job, it alters the character of the entire task, including the roles, attitudes, and the skills of the people who take part in it. Automation does not simply supplant human activity, but rather changes it, often in ways unintended and unanticipated by the designers. Automation remakes both work and worker. When people tackle a task with the aid of computers, they often fall victim to a pair of cognitive ailments: automation complacency and automation bias. Both reveal the traps that lie in store when we take the white hat route to performing important operations without thinking about them. Automation complacency takes hold when a computer lulls us into a false sense of security. We become so confident that the machine will work flawlessly, handling any challenges that may arise, that we allow our attention to drift. We disengage from our work, or at least from the parts of it that the software is handling. And as a result, may miss signals that something is amiss. Most of us have experienced complacency when at a computer. In using email or word processing software, we become less vigilant proofreaders when the spellchecker is on. That's a simple example, which at worst can lead to a moment of embarrassment. But as the sometimes tragic experience of aviators show, automation complacency can have deadly consequences. In the worst cases, people become so trusting of the technology that their awareness of what's going on around them fades completely. They tune out. If a problem suddenly crops up, they may act bewildered and waste precious moments trying to reorient themselves. Automation complacency has been documented in many high-risk situations, from battlefields to industrial control rooms, to bridges of ships and submarines. In one classic case involving a 1500 passenger ocean liner named The Royal Majesty, which in the spring of 1995 was sailing from Bermuda to Boston on the last leg of a one week cruise. The ship was outfitted with a state-of-the-art automated navigation system that used GPS signals to keep it on course. An hour into the voyage, the cable for the GPS antenna came loose, and the navigation system lost its bearings. It continued to give readings, but they were no longer accurate. For more than 30 hours, the ship slowly drifted off its appointed route. The captain and crew remained oblivious to the problem despite clear signs that the system had failed. At one point, a mate on watch was unable to spot an important location buoy that the ship was due to pass. He failed to report the fact. His trust in the navigation system was so complete, that he assumed the buoy was there and he just didn't see it. Nearly 20 miles off course, the ship finally ran aground."

Jorge

Wow. That's a great story. It's a tragic story. I hope no one was hurt, but wow. That's such a great illustration. You know, what was running through my mind as the point of your story was becoming clear is that we might have to consider renaming this podcast to Seeing Clearly with Harry and Jorge, because it seems like so many of our conversations are circling around this idea that we are just fallible. Our ability to sense what's going on is beset by all these distortions. Anyway, I'd love to hear the name of this book and who the author is.

Harry

Yeah, and I'll tell you in a second. But I think the true irony of all of this was that my infallible memory had incorrectly categorized the mental fallacy here. I went looking through everything I had for a term called learned complacency, and it was nowhere in the book that I knew it was in. It turned out it was called something else. And this book is called The Glass Cage: How Our Computers Are Changing Us by Nicholas Carr. I studied this book when I was leading the effort at Rackspace to design the open cloud control panel, because it was basically a sophisticated cockpit, much like an airplane's, and that is exactly the kind of system Nicholas Carr explores in this book. It was a fascinating read on how we can be lulled into a false sense of security and end up running aground, so to speak. And incidentally, no one was hurt, apparently. That was the next sentence, but I cut it out 'cause the reading was so long.

Jorge

This is a rare occurrence where you've brought up a book that I have in fact read. I remember reading The Glass Cage, and as you were going through the story, I was like, I feel like I've heard this. But I was just revisiting my notes now that you mentioned the name of the book, and I read it nine years ago, so it's excusable. But let's circle back to this idea that technology can make us complacent, because that does strike me as a very serious occupational hazard for many people today, right? We end up being overly dependent on these systems that we believe are providing accurate information, whether it be readings on the environment or feedback. And oftentimes, that's a bad assumption to make. Right?

Harry

It's such a deep challenge, because an assumption is effectively an uncritically held belief. An assumption is a shortcut to help us navigate the world, which is far too complex to challenge every possible thing. And so where this gets tricky and sticky is that we start relying on technologies. Of course today, with the work that you and Greg are doing in AI and a lot of the work that I'm leading that definitely touches product teams and AI, it is so easy to forget that these systems are in their infancy, and they speak with such confidence when you're interacting with an LLM in a conversational mode. And yet there's this concept of hallucination. But I think we don't really continually challenge the idea that the signals and the alerts and the messages, effectively, that we are taking away from the work that these computers are now doing on our behalf may in fact be leading us off course, not in some necessarily nefarious or large way, but perhaps a little bit over a long period of time, and may end up putting us in a position where we "run aground."

Jorge

That's very interesting. I wanna unpack the AI thing, because I think I have a bit of a contrarian view on that. But before I do, I wanna circle back to your definition of an assumption as an uncritically held belief. There are good reasons why we make assumptions: we don't want to be expending lots of energy on things that function normally the vast majority of the time. I think the critical thing is, for systems that are, let's call them mission-critical, you need to have backups, and you need to be aware of where the mission-critical systems are and maybe apply that more critical lens to those parts of the system. Because if we are second-guessing every component of the system, we're going to expend so much time and energy that we're not going to be effective either.

Harry

Yeah, yeah.

Jorge

You have to know where you should be critical and where you might expend your mental energy elsewhere. So I don't wanna be critical of assuming, because assuming things can be very helpful.

Harry

A hundred percent. And I think what's so interesting about this is that the deeper problems are a function of the second- and third-order effects. It's easy for us to perhaps aim to compensate for the first-order effects. Like, if I'm the one interacting with a new piece of technology, and I recognize that I'm in a mission-critical or no-fail environment, I may be able to compensate a little bit by adding a protocol, or by ensuring that I have some kind of mitigation path or some other way of checking my math. Where this gets interesting is as it starts to cascade out, and you start relying on other people who are relying on these technologies, or other people who are relying on other people who are relying on these technologies. That's where I think the distortion or the deletion or the overgeneralization of signals or alerts or messages comes in, or what happens when you get immune to these things, like when a car alarm keeps going off and you just stop paying attention to it after a while. As these systems get layered into social systems, this becomes very, very difficult to manage for, it seems to me. And so it's no longer about the first-order effects of an assumption, but the deeper underlying system that we're relying on here.

Jorge

In system design, we have this concept of alert fatigue. I think that's what you're referring to, right? The fact that if you overwhelm the user with alerts, they stop paying attention to them... You know, there's that old fable about the boy who cried wolf.
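One common way system designers mitigate the alert fatigue Jorge mentions is by deduplicating or throttling repeated alerts. Here is a minimal sketch of that idea in Python; the AlertThrottle class and the alert keys are hypothetical, invented for illustration rather than drawn from any particular monitoring product:

    import time

    class AlertThrottle:
        """Suppress repeats of the same alert within a cooldown window."""

        def __init__(self, cooldown_seconds: float = 300.0):
            self.cooldown = cooldown_seconds
            self.last_fired: dict[str, float] = {}  # alert key -> last fire time

        def should_fire(self, alert_key: str) -> bool:
            now = time.monotonic()
            last = self.last_fired.get(alert_key)
            if last is not None and now - last < self.cooldown:
                return False  # fired recently; suppress the duplicate
            self.last_fired[alert_key] = now
            return True

    # Hypothetical usage: only the first "gps-signal-lost" alert gets through.
    throttle = AlertThrottle(cooldown_seconds=60)
    print(throttle.should_fire("gps-signal-lost"))  # True: first occurrence
    print(throttle.should_fire("gps-signal-lost"))  # False: within cooldown

The design tension is exactly the one the hosts describe: suppress too little and people tune out like they do with car alarms; suppress too much and you hide the signal that something is genuinely amiss.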

Harry

Of course. Yeah.

Jorge

Which I think is slightly different; that's about raising false positives in some way. But I wanna circle back to the AI thing, because I totally hear what you're saying: we are now in the midst of digital systems that have been packaged for us as artificial intelligences. And I'm being very intentional with how I talk about it: they're being packaged as intelligences. There is serious conversation about whether we are reaching artificial general intelligence or even superintelligence, which gives the impression that these systems are somehow smarter than humans. And that feeds into the danger that you're calling out, which is that these systems are very far from perfect. They are prone to making stuff up. And if we design systems around components that we assume are somehow infallible or smarter than us, but are in fact fallible, we might end up running aground.

But where I have a slightly contrarian view is, I think that if you actually start becoming educated on how... I don't like using the phrase artificial intelligence, because it's such a broad subject. But it is the term that is being used in mainstream media and such, so I'll talk about artificial intelligence, meaning the kinds of systems that people now understand to be artificial intelligence: things like ChatGPT and Claude. If you understand how they work under the hood, you will very quickly come to an appreciation of their capabilities and constraints. In my case, and I'm not going to generalize, understanding how those things work under the hood has made me a lot more skeptical of their output than I would've been had I been buying the media narrative.

I've come to understand computing systems as being of two kinds. There is deterministic computing, which is the old school. You write a program, 10 PRINT "hello Harry", 20 GOTO 10, kind of thing, right? It has a predictable flow: given the same inputs, it's always going to produce predictable outputs, assuming there are no physical issues with the underlying computing substrate. And now we have AIs, which are probabilistic computing systems. That's a very different animal. By definition, the outputs are not going to be as predictable, and in fact, that's what makes them useful. So I think that if you come to an understanding of how they work, you might find yourself being more skeptical of their supposed infallibility.

So I'm of two minds about it. I think the danger of relying too much on AI exists because we are at a stage in the development of the technology where people are very excited to roll out AI into everything, and we haven't yet quite found what the sweet spot is. I think the sweet spot is somewhere that combines the best of deterministic and probabilistic computing. But I think I'm taking us off the subject now. I wanna circle back to this idea of questioning our assumptions. What made you bring this reading to our conversation?
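A minimal sketch of the distinction Jorge draws, in Python rather than the BASIC he quotes; the toy word distribution is a stand-in for an LLM sampling its next token, invented for illustration and nothing like how a real model works internally:

    import random

    # Deterministic computing: the same input always yields the same output.
    def greet(name: str) -> str:
        return f"hello {name}"

    assert greet("Harry") == greet("Harry")  # holds on every run

    # Probabilistic computing: the output is sampled from a distribution,
    # so repeated calls with the same input can differ. This toy sampler
    # stands in for an LLM choosing its next token.
    def sample_greeting(distribution: dict[str, float]) -> str:
        words = list(distribution)
        weights = list(distribution.values())
        return random.choices(words, weights=weights, k=1)[0]

    # Runs may print "hello", "ahoy", or "hi" -- useful variety, but also
    # exactly why the output deserves more skepticism than deterministic code.
    print(sample_greeting({"hello": 0.7, "ahoy": 0.2, "hi": 0.1}))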

Harry

Any number of things, but one of them was a story from when I was traveling to Blacksburg, Virginia, for Rackspace. I got up one morning at five o'clock and had to get to the airport, punched into my GPS how to get to the airport, hopped in the car, and started driving, and I didn't know where I was going. I just followed the instructions. And eventually, the road went from four lanes to two lanes, and then from two lanes to one lane, and then from one lane to a gravel road. And then I got to a sign that said, you have reached the end of Air Point Boulevard. I had either typed the wrong thing or hit the wrong thing, and had no discernment or understanding or experience of where I was going, and just blindly followed the GPS to the end of a mountain road in the middle of Virginia. I won't go into the rest of the story about how I managed to actually get on the plane with a security guard chasing me, after throwing the keys onto the empty table at the Hertz counter and running through security with a guy holding onto his gun, chasing me, yelling, "Stop!"

But the point is, I had just made a 2,000-mile trip to get some work done on something. I went to AAA and got all my maps for free, and then I went to the bookstore and got a brand-new printed national map guide, just in case. And then I ignored all of that and just looked at the GPS the whole time, blindly assuming it was taking me in the right direction to the right place at the right time, and that everything was gonna be fine, and never thought about it. I had been thinking about The Glass Cage for a while, because we had touched on the book a while ago, and then I read this piece and I was like, "Oh my God! God knows where I could have ended up over the last week while I was traveling." And I wouldn't have realized it until I got to the wrong point, because that GPS technology is, in my mind, robust now. But maybe the cable comes loose, and then I inadvertently don't see the things that I'm expecting to see, and because of confirmation bias or some other fallible quirk of the brain, I ignore the signs around me and end up in completely the wrong place.

And then I read this and I was like, "We've gotta talk about this." Because I know you have so much experience in information architecture and wayfinding, and now you're in the middle of the AI conversation, and yet I think all of us are struggling with the dynamic complexity of all the systems that we're relying on. We're being bombarded with things all the time, and it's becoming increasingly difficult to discern what is there that shouldn't be there and what isn't there that should be there. And then I read an article about how the younger folks, the folks that have hair, are getting old iPods refurbished with new software and getting rid of their smartphones, going back to simpler technology. And I think in terms of thinking clearly and acting with confidence that's based on a real sense of what's going on, we need a better way to think about this, and I don't have it.

Jorge

This feels like a conversation that merits more time than we have available now. But I'll say that there might be an analogy between your being misled on this road trip and the kind of use that a lot of people are putting AI to, which is things like, help steer me in some big decision that I need to make. Maybe it's a career decision or an investment decision or what have you. And all of the things that we talked about previously apply there, like just being aware that these are probabilistic systems and they're not infallible, right? And yet, to your point earlier, they create the illusion of expertise. They come across with a great deal of self-confidence and will do things like cite sources to gain your trust. I'll just say it might be worth distinguishing between two scenarios. One scenario is the "Harry driving into the back roads unwittingly" one, which is the scenario of someone who is using the system to help guide them through an unfamiliar task or environment. I would imagine that you gradually got off course because you were unfamiliar with that context, whereas if you knew your way around, your gut would be telling you, "No, this can't be right. There's something wrong here."

Harry

Mm-hmm.

Jorge

And you would be more willing to question your assumptions because you have the environmental awareness that comes with some degree of expertise in the context. So that's one use case. There's another scenario, which is: you are an expert and you have decided to delegate part of your tasks, routines, expertise to these artificial systems.

Harry

Work.

Jorge

Work. I'm thinking specifically about the captain of the cruise ship in that story, right? Those people were familiar with the ship and the steering, and they were familiar with navigation... probably even in those waters, right? I would imagine that it might not have even been the first time they navigated that route.

Harry

Yeah.

Jorge

Now, the ocean is a particular context, and I know nothing about ocean navigation, but I imagine that you don't have as many landmarks, literally, to guide you along. But my point is, it's different if you are a non-expert coming in to have the system fill in for your lack of expertise than if you are an expert who is delegating part of their job to the system. I think that those are two different use cases that require different critical lenses, and it might be worth, and this might be a future conversation, unpacking the techniques that you would use in both cases. Because I've used AI for both of those things, right? I've used AI for things that I'm very familiar with, where I just want the thing to do it for me. And I've used it for things where I lack the expertise to even know if the answers it's giving me are within the ballpark. And those require a lot more trust than when I know the domain that we are working with.

Harry

Yeah. Interestingly, this feels like a richer topic than I originally imagined it was going to be, as a function of the reading. And I feel like it's worth going back. I know we've touched on some of these ideas very early on in our conversations, and I'm tempted to go back and take a look at them. Or maybe we can pick up this conversation offline a little bit and figure out how we can bring it into the foreground for folks in a way that's really meaningful. But I like the distinction you're making.

Jorge

It's a super relevant topic. Many of us... like, I'm facing this, right? I work with both ChatGPT and Claude every day, and in the back of my mind, I'm always second-guessing them and going, "Yeah, is that really true?" And like I said, there are some things for which I'm much better qualified to judge whether what they're telling me is trustworthy than others. So it's an important literacy that we have to pick up, and we have to pick it up pretty quickly.

Harry

I love framing it as a literacy. That is a great way to think about this. Okay, I like it. Let's bookmark that.

Jorge

This is not the first time we've put a pin on it, so maybe we have to have an AI collect all the pins for us.

Harry

That'd be a good name for a podcast: Put a Pin in It.

Jorge

Put a pin in it. Alright, sir. Great talking with you.

Harry

Thank you so much.

Narrator

Thank you for listening to Traction Heroes with Harry Max and Jorge Arango. Check out the show notes at tractionheroes.com, and if you enjoyed the show, please leave us a rating in Apple's Podcasts app. Thanks.