Traction Heroes

Beyond Data

Jorge Arango Episode 3
Harry:

The data doesn't make the decision. The data's there to create an informational context for you to use your good judgment. And your good judgment is there because we, as beings, have evolved to collect information through all sorts of senses, conscious and unconscious.

Narrator:

You're listening to Traction Heroes. Digging In to Get Results with Harry Max and Jorge Arango.

Jorge:

Harry, how have you been, my friend?

Harry:

I've been great, Jorge. It's really nice to see you again.

Jorge:

It's always great to catch up with you.

Harry:

How are you?

Jorge:

I'm doing well. I was taking a walk earlier this morning. And I was thinking, I'm meeting later today with Harry and I'm wondering what text to bring to our conversation. And I had checked out a book from the library through Libby, and one of the things that happens when you check out a book through this means is that the books come with an expiration date. And it's usually hard to renew them, especially if it's a book that is in high demand. And I was taking a bit longer than normal finishing the previous book, which was also from the library, so it was due. So I had to finish that before I got to the other one. And I finally finished the other book yesterday and I started reading this book again this morning. And it's a book that I happen to own as an audio book. And as I was listening to the audio book, I thought, "Oh my gosh, this would be perfect for a conversation with Harry." And I felt kinda bad because I don't wanna bring a book that I haven't fully read. But then I had an interesting experience in that I opened the Kindle app on my phone to highlight something that the author had said in the audio book. And I realized that, even though I had got this book from the library, I already had annotations in the tail end of the book. And I was like, "Wait a second, I've read this before!" And not only have I read it before, I had already bought it as a Kindle ebook. So I'm at that stage towards either senility or having too many books in that I am checking out from the library books that I already own and feeling the pressure to read them quickly even though I own them.

Harry:

That's so funny. I know that feeling of "I've gotta buy a book!" And I go buy the book, and then I realize I have to put it on the shelf right next to the other copy of it I already have.

Jorge:

It's a long-winded story, but anyway, it's a way of introducing this text and, like I said, even though I had already read the book, it had been several years since I read it. And it's a great book. I enjoyed it a lot the first time, and now that I'm revisiting it, I'm like, there's a lot of really valuable stuff here. And it's one of those things that, this probably happens to you as well, but there are certain books that I've reread and when I revisit the book, the book is the same but I'm different and I take different things out of it. Maybe the context is different. And I'm finding that to be the case with this book. So I'm gonna read a fragment from this thing, and then I'll tell you what the book is and who it's by.

Harry:

Okay.

Jorge:

So, here goes: "In theory, you can't be too logical, but in practice you can. Yet we never seem to believe that it is possible for logical solutions to fail. After all, if it makes sense, how can it possibly be wrong? To solve logic-proof problems requires intelligent, logical people to admit the possibility that they might be wrong about something. But these people's minds are often most resistant to change, perhaps because their status is deeply entwined with their capacity for reason. Highly educated people don't merely use logic, it is part of their identity. When I told one economist that you can often increase the sales of a product by increasing its price, the reaction was not one of curiosity, but of anger. It was as though I had insulted his dog or his favorite football team." And I'm going to skip ahead now a little bit. "If this book provides you with nothing else, I hope it gives you permission to suggest slightly silly things from time to time. To fail a little more often. To think unlike an economist. There are many problems which are logic-proof, and which will never be solved by the kind of people who aspire to go to the World Economic Forum at Davos."

Harry:

Wow. I have not read that book.

Jorge:

All right, so this is a book called Alchemy: The Dark Art and Curious Science of Making Magic in Brands, Business, and Life by Rory Sutherland.

Harry:

Oh, Sutherland's a genius.

Jorge:

Yeah, he's an interesting fellow. And, for folks listening along, you might have caught the reference to a football team. He's a Brit and, I think he's retired now, but he worked at Ogilvy for a long time. So, advertising. And again, I'm revisiting the book; I read it many years ago. And what I remember from this book is that it is, as suggested by this passage, a call, or a reminder, that there are decisions that you can make through the use of data, but there are decisions that kind of defy the use of data. Or, I'll say it more strongly: basing the decision on data might lead you astray.

Harry:

Yeah, it is funny. It reminds me... a friend of mine, you might know Mark, my friend Mark Interrante, often talks about things in terms of churches. Like, he'll talk about the religion of work: what is your religion of work? Or he'll talk about something like the Church of Reason. I can't actually think of the one I'm trying to reference right now in response to the Sutherland reading, but here's the concept. The question is who the "we" is in the statement I'm about to make: we tend to think that rational logic will lead us to rational decisions and rational action. That's true for some people and not true for many others. And in fact, the idea that rationality is what is ultimately informing those decisions and actions has been proven to be fundamentally flawed, in that it is actually the emotional part of the brain that is the decider. In the neurobiology of the physical brain, when the sections of the brain that allow emotions and logic to communicate are separated, and I can't bring the appropriate references forward right now, somebody who still has access to their rational mind but not to their emotional mind can no longer make decisions. And yet there is this question of "we" that Sutherland brings up here, because this association with the thinking person, the rational mind, is in effect its own blind spot. And the people who have identified with that absolutely fail to realize that many, if not most, people are not driven to interact with the world that way, and that even they themselves are not making decisions that way.

Jorge:

That's right. And we operate in relationship with other human beings who are also making decisions from emotions. In our last conversation, we talked about learning to trust your gut in some ways, this notion that you tune in to what your body's telling you, right? The mind is not a computer making an analysis based on data points in the same way that a computer algorithm does. Our bodies are part of our decision-making apparatus, and there's stuff like pheromones in the air that will affect how you understand what is going on, right? And there's so much more there. I remember at one point my YouTube algorithm served up an interview with Ayn Rand. I don't think I had ever heard her voice, and I played the first few minutes of it. The reason I tuned out, and this was a while back, so I'm probably going to do her an injustice by paraphrasing, is that one of the first questions the interviewer asked was something like, "What is your vision of an ideal man?" And her answer was something along the lines of, "An ideal man does not let emotions interfere with their decision-making." And again, I'm probably paraphrasing, but that explains so much to me. I haven't read a lot of Ayn Rand's books; I read The Fountainhead when I was in school. But it brought into contrast the worldview I imagine her espousing with what I think of as a more pragmatic approach that involves making use of the entirety of your capacities, including your emotions, right? We do not make decisions based just on the data. We would be doing ourselves a disservice if we made decisions based just on the data.

Harry:

Part of that is because the data we have is not all the data there is or are. I never remember if it's data there is, or data there are. I think it's data there are. At any rate, the point is that data are helpful and possibly informative and they certainly put us in a stronger position to help people understand how we've arrived at some informed place. However, the data doesn't make the decision. The data's there to create an informational context for you to use your good judgment. And your good judgment is there because we as beings have evolved to collect information through all sorts of senses, conscious and unconscious. And so much of our decisioning is unconscious that for us to hallucinate that we can look at data and make a conscious decision that's rational and end up in a place that's extremely predictable is foolish, right? Because a) as we said, much of the data aren't even available. Just because we have a set of data doesn't mean that's all that's there. Moreover, that data may have come through a very limited channel, and it's really just a set of points that help us rationalize a decision that we've already made or inform a decision that we're going to make and then act on. But in between all of that is the judgment that we bring here, the ethics that we bring here, the contextual awareness of both whatever has happened, what is happening in the moment, and the possibilities of what could be happening going forward. I touch on this, turns out, in the very last chapter of my book, Managing Priorities, where I talk about the role of AI in prioritization. And if we assume that AI can understand enough to be ethical, if we assume that AI can understand all of what's necessary to understand a highly complex and dynamically changing environment then, sure, maybe AIs will put us in a strong position to not have to use our own judgment. Otherwise, we're gonna have to come up with accommodations to insert our humanity. 
And our humanity, to bring it all the way around, is all of us. It's not just our rational minds.

Jorge:

It's funny you mention AI. I talked about a book that I had to return before getting to Sutherland's book. And that book was Yuval Noah Harari's new book, Nexus, which I would say is about the degree to which the shape of human society is defined by the structure of our information networks. His argument is that for all of human history up until now, these information networks have been characterized by means for capturing, sharing, and disseminating information, but those means have not themselves been actors in producing information. And that is changing now, right? So AI does put us in the territory that we discussed in our last conversation. It's kind of unprecedented, right? The situation that we find ourselves in with AI is unprecedented. And we are trying to do the best we can, given what we know, where we come from, our long history of information networks with other humans. But we now face the prospect of being in information networks with non-human actors. You said data don't make the decisions, but we do seem to be heading toward a world where more and more decision-making is relegated to the data. And I've had the experience of working on projects with stakeholders who are very data-driven. I have one particularly in mind who would not accept design directions that were not informed by statistically significant data. And I didn't say this at the time, which was probably a mistake, but my gut feeling, and my feeling about it in retrospect, is that I should have pushed back more, because in some ways the designerly approach to decision-making taps into other means of knowing the world besides merely processing quantitative data.

Harry:

Yeah, that's a hundred percent right. I think perhaps in one of our previous conversations we touched on the iPhone. Having worked at Apple and knowing the stories of the iPhone's development, I can't imagine for the life of me that, had Apple's leadership demanded that the decisions around the iPhone be informed by data, we would have an iPhone today. And the idea that somebody would require statistically relevant data, without really understanding all of the probabilistic scenarios that could be informed by other data to enhance the decision options they have, makes very little sense to me. I think about Douglas Hubbard's book, How to Measure Anything... brilliant man... and his applied information economics, an approach to looking at speculative futures and different scenarios and dealing with decisions in situations of uncertainty. I think you bring up a super interesting point, something I've never seriously considered, which is: how do we get people off that hook? That decision-maker, what do we say to them? What are our talking points, and how do we back those up? Because these are claims that we make as designers or information architects or product managers or leaders or whatnot, claims like, "You can't use the data to make the decision solely; you have to take other things into consideration." But I don't know that we have the talking points laid out in a way that a leader in the situation you're describing would accept. And until we have those talking points, we are gonna be at the mercy of their demand for that data.

Jorge:

That's a great prompt. I think a good use for the remainder of our time here would be to brainstorm a little bit about what those talking points might be. And I'll start by saying that one approach that might be helpful, and again, I can't say that I used this, but in retrospect I wish I had, is to acknowledge the fact that there are people who do want this sort of security. And I pause there because I'm imagining air quotes around the word "security" here, the security that is provided by quantitative data. Acknowledge that there are people who do want that, for whom that is important. And make clear to those folks that data are important, that there is a role for data in the decision-making process, but that role is not evenly distributed throughout the entirety of the process, right? There are certain decisions for which data is more appropriate than others. And I would venture that the appropriateness is highly dependent on the degree to which the ideas are developed. So you talked about the iPhone. I could imagine A/B testing aspects of the final design, but I can't imagine doing quantitative research on the exploration of the solution space, the initial part of the process where you're trying to figure out what the heck it is that we're making here, right? That feels to me like a part of the process that demands a much more intuitive, visionary, opportunistic approach to decision-making, which might make those folks nervous, right? And you wanna call it out and you wanna say, "Hey, the phase of the process we're in is not quantitative; we can't quantitatively figure our way through this forest. So it's gonna be uncomfortable for a little while. Hang tight; we're gonna get there."

Harry:

I think that's a really good point. And I thought of so many things as you were talking. One of them is from working with Doug Hubbard, who wrote How to Measure Anything. I brought him in as a consultant to help with a project at AllClear ID, where I was the head of product for a number of years. And Doug was working to help us establish a very complicated decisioning model for looking at a set of potential events that could happen. And funny enough, Doug's the kind of guy the insurance industry calls when they can't figure something out, right? He's just an amazing guy. And he once said to me, he goes, "Look, I didn't write a book on how to measure most things, I wrote a book on how to measure anything." And I'm like, "Yeah, but Doug, what you're really talking about in this case is clearly identifying the subjective and then building some kind of model to establish some relevant quantitative framework to support that subjectivity." And he goes, "Exactly!" Which is what you would wanna do in a situation where you can't know something. He goes, "You can always know something. The question is, to what extent?" And he talks a lot about the value of information. Sometimes the value of information is very high, and sometimes it is very low; it falls on a spectrum. And I think a highly data-driven decision-maker can be the kind of decision-maker that ends up driving the kind of changes you see at a company like Boeing, where the decision framework around the design, development, execution, engineering, and operational decisions leads to doors falling out at 38,000 feet because they weren't thinking with a broad enough frame. And that broad enough frame had nothing to do with numbers and data; it had to do with ethics and humanity. Sure, if you open up the aperture, you can establish a quantitative model around it and look at brand equity and impact, and you can look at stock price and all sorts of stuff like that.
But most of those arguments are used in reverse, to support the kind of changes that over time led to the kind of catastrophe we're now seeing unfold at Boeing. When you bring in a Jack Welch cost-cutting acolyte, an executive who's really there to wring out the innovation and drive up the stock price over time by cost cutting, you end up in a Sidney Dekker model. There's a book Sidney Dekker wrote called Drift Into Failure, where the decisions being made are very data-driven, but they're data-driven in a frame that's too small. And by looking at that frame, it appears that those decisions are being made well. But in fact, over time, as they play out, they have horrible consequences, because nobody has stepped back and looked at the broader subjective context in which those decisions are being made. And that leads me to this course I took just a couple weeks ago. I flew out to Boston to take a course with Ed Catmull and Hal Gregersen. Ed Catmull was the co-founder of Pixar Studios, of Toy Story fame, and he ended up being the president of Disney Animation. Together they were teaching a wonderful course called Embracing Uncertainty. Much of it is embodied in the expanded edition of Creativity, Inc., Ed's book, which is now out; highly recommended, by the way. A dramatically good book on leadership, which I think speaks to many of the issues we're talking about. But it doesn't necessarily give us a script, or fill in the Mad Libs, for the story points or talking points about how you respond to an executive who says, "Look, I gotta have the data. Can't make a decision without the data." Because it's knowing what those talking points are gonna be, and having them laid out in a way that allows somebody in that fixed frame, that smaller-aperture fixed frame, to say, "Huh, okay, in this particular case, I'll set aside the data, because we can't necessarily know, and we're gonna have to allow human subjectivity and judgment to drive some of this."
That brings us all the way back around to the question you raised, which is that maybe we should be brainstorming on the talking points.

Jorge:

In hearing you talk about it, I could think of another talking point, which is that if the purpose you are putting the data to is optimizing some kind of solution or some kind of response to a problem, it behooves you to be clear on what it is you're optimizing toward. In the case of Boeing, and I don't know the situation there in detail, but as far as I've read, they were optimizing toward cost savings, right? And in Nexus, Harari talks about the fact that social media optimized their algorithms for engagement, and that had the second-order effect of making contentious, irritating, hate-filled content rise to the top, because that's what drives engagement. So maybe something like employing the five whys tool, right? Ask why. "Why are we doing this? What is it in service to?" And then maybe that can help you in these conversations with data-driven stakeholders to... again, I wanna be very clear, I'm not arguing that we should not use data at all. I think that data are super useful. I just wanna make sure that we're using them adequately, for the right purposes. And maybe this is a way to help steer that conversation.

Harry:

Yeah, I'm thinking of a couple things. Number one is, I realize I've stopped using the Five Whys; I think it was the TPS, the Toyota Production System, the lean world, that brought up the Five Whys. But I've stopped using that. I've replaced it with the Five Hows, and I've found that I get to a much better causal understanding of things with a how than a why. And so, instead of recursively saying, "Why did that happen? Why did that happen? And then why did this happen?" I'll say, "How did that happen, and how did this happen?" And it tends to lead more directly to a people, a process, a technology, or a decision. The whys tend to knock people back on their heels a little bit and maybe put them in a defensive psychological position. So I found those a lot less useful, and therefore I've stopped using them. The other thing is, this makes me think of two other points. Number one, I think a question that could be asked, as a talking point as it were, is, "What are the unintended negative consequences of relying on this kind of data? What blind spots could it be covering up or preventing us from seeing?" And that brings me right back to the Hubbard question of, "Okay, so we may not have all the data that we want, but if we had data to support the subjective elements of this that we think are important, what data would we need? A) What are those subjective elements? And B) what data would we need to support those? And then, what is the value of that data? Is it high, medium, or low, or something in between?" If we can pinpoint some subjective element or characteristic and then identify the data, putting a Bayesian model behind that might take a lot of work, or might not take much work at all. But if we can assign a high value to that information, maybe it allows us to say, "Alright, fine. We have all of this other data that's informing this potential decision. What other subjective information would be valuable? Where is it high value?
And do we have the time and resources to go do a little bit more work to establish some clarity there?"

Jorge:

It's funny. In our last conversation, we talked about our responses to unprecedented situations. And part of that entailed knowing enough to realize that you are in the domain of the unprecedented. That seems to be rearing its head here, in the sense that we might be operating based on the data we have and not on the data we need. And we might be in the thrall of the fact that we do have data, and think ourselves in possession of greater expertise than we actually have, because we don't have the data that we need and we don't even know that we need it. And this is where I think a more designerly approach might be useful. And I'm gonna tie this into a learning from the field of information architecture: Marcia Bates's berrypicking model, this notion that when people search for information, it's not very common that they know exactly what it is they're looking for. The more common scenario is that you clumsily ask a first question, and based on the answer you get back, you learn enough to ask the second question, right? And you progressively become a more skillful researcher of the situation, by getting feedback in the form of search results, by getting feedback in the form of conversations with other people. Little by little, you acquire the necessary conceptual models, the necessary language, to undertake the next step, right? And it might be that a potential answer for these data-driven folks is to say, "Look, we are going to be undertaking a design process that is going to teach us what kind of data we need to be capturing, how to measure that data, and what to do with it." So perhaps we can give them some degree of confidence that we will build the necessary data muscles. But we have to work our way toward it, because at the beginning, we just don't know what we don't know. And we might be trying to force decisions that we are unable to act on based on what we have.

Harry:

Yeah, a hundred percent. And it also brings us right back to another point we discussed in our previous conversation, which is that I like to tell people, "Look, certainty is the enemy of truth." When I'm dealing with an executive who says, "Look, I need the data to make the decision," and then they get to the point where they feel like they can make the decision, and they assert that decision with a high degree of self-assurance and confidence, my job there, as a designer, information architect, executive coach, consigliere, whatever role I'm in, is, without saying, "Hey, look man, certainty is the enemy of truth," to get them to accept the possibility that they will not know if they're wrong until after the fact; to get them to open up to the possibility that in this sense of self-assurance they have lie the seeds of error.

Jorge:

Hubris.

Harry:

Well, that's the most extreme version of this. Yeah.

Jorge:

Well, you talked about Boeing, right? Again, I don't have insight into it, but from the outside it seems to be the tail end of a disastrous series of decisions, probably made without engaging in the sort of humble introspection that we're discussing here.

Harry:

Yeah, that's my sense. Fundamentally, it came down to a bad hiring decision, I believe. And that's a board-level thing, right? That's a governance problem. But it was all enacted by a CEO with a philosophy. And that philosophy may have worked well in one context, a.k.a. General Electric, but it was not appropriate in the context of Boeing. They're working their way through how to fix that now; they've got a lot of work to do.

Jorge:

We're obviously not going to fix Boeing's problems, but hopefully we can help folks navigate these sorts of situations better. And again, I wanna emphasize, I didn't bring this reading with the idea of pooh-poohing data. Data are hugely important. They play a hugely important part in our future with AI. But I think that they must be used thoughtfully, and we will be working with folks who are very data-driven, and we need to find ways of being good contributors and helping them be good contributors. And that requires understanding the role of data and the fact that there are other ways of making decisions.

Harry:

Yeah. Yeah, no doubt. It's been fabulous. Once again, Jorge, I just love the conversation, and I so appreciate you making the time for these.

Jorge:

Oh, same here. I always learn something new when I talk with you. I'm gonna take the Five Hows and implement those. I've retired the Five Whys, Harry, thank you!

Harry:

You're welcome. I'll be very curious to hear what your experience is. I've heard some good things from other people.

Jorge:

Awesome. Thank you for sharing.

Narrator:

Thank you for listening to Traction Heroes with Harry Max and Jorge Arango. Check out the show notes at tractionheroes.com. And if you enjoyed the show, please leave us a rating in Apple's podcasts app. Thanks.
