Why Business Advice Is Often Bad Advice
The crash of the Royal Air Force aircraft Nimrod XV230 in Afghanistan in 2006 was extensively studied by a board of inquiry (BOI). The BOI report identified many causes, which are summarized in this month’s “From the PFC Technical Director.”
The BOI faulted the UK Ministry of Defence (MOD) for an inadequate safety culture and the resultant human errors. The report includes an extensive review of several important writings on process safety, including works by James Reason, Karl Weick, Diane Vaughan, Charles Perrow, and Lord Cullen. Its recommendations on process safety and safety culture seem to be rooted in the best and latest science.
But the most withering blame is laid on flawed management in the MOD. Those flawed management practices are blamed largely on meddling by management consultants, which resulted in years of “cuts, cuts, cuts,” unending management initiatives, and a focus on business principles at the expense of airworthiness.
Whereas process safety and human error are discussed in great detail, the BOI report does not contain similar references to the best and latest literature on management. It is almost as if the BOI members considered the management failures to be so obvious that they needed no explanation and that the significant management changes recommended in the report needed no justification. If the management system was so badly flawed, then we must try to understand why that was the case in order to avoid repetition.
The BOI said the MBAs had taken over from the subject matter experts (SMEs). But why did that happen? How did it come to pass that people in authority believed in the wisdom of “soft-handed” MBAs over “hard-handed” SMEs?
This is an important question, because the problem is not rare.
A few weeks ago, I talked with a vice president at a mid-sized independent operating company that had recently brought in consultants to revamp its project management processes. I asked, “How did that go?” He responded that it had not gone well, but that the problems created by the consultants had since been fixed, and he considered the initiative a success. Still, he worried that the project managers in his organization might resist implementing the new processes.
In Run to Failure (2012), Abrahm Lustgarten argues that MBAs were running the show at BP in Alaska prior to the pipeline leaks in 2006, in Texas City, Texas, prior to the refinery explosion in 2005, and in the Gulf of Mexico prior to Macondo in 2010.
A similar argument can be made about the fall of Mosul and other Iraqi cities to ISIS fighters in 2014. The Iraqi army, trained at a cost of billions of dollars, did not put up a fight. The training was probably not to blame; the management was.
Action Science to the Rescue
I believe that the reasons for misguided management advice are well understood, though not widely known. The following is based on work by Chris Argyris as summarized in his book Flawed Advice and the Management Trap: How Managers Can Know When They’re Getting Good Advice and When They’re Not (2000).
The explanation is centered on Argyris’ concept of theories of action. In last December’s column, I described action science and argued that Model I behavior is responsible for defensiveness and silo behavior. If you did not read the article, I suggest you do before reading this one, though the important points are summarized in the next section.
Summary of the Theories of Action
You possess a theory of action: the set of rules that you use to design your actions in order to achieve your objectives in social settings. It is useful to think of people as holding two such theories. The espoused theory (Model II behavior) is the one we claim to follow: most people would say that their behavior is guided by trying to be fair, seeking and valuing accurate data, and involving other stakeholders in decisions. And most people do behave this way as long as nothing important is at stake.
The theory-in-use (Model I behavior) governs when something important is at stake: people become less inclined to be fair and more inclined to try to be in control and win; they become less interested in valid and accurate data and more likely to cherry-pick data that support their position; and they seek out people who agree with them and avoid people who do not.
We exhibit our worst behavior precisely when it is most important to be at our best.
Why Business Advice Fails
According to Argyris, there are three primary reasons why business advice fails.
- Much of the advice is not actionable. Many business problems are caused by Model I behavior. A great deal of business advice proposes solving the problems by acting in ways consistent with Model II behavior. This works well for a while as people play nice. But Model II behavior is not sustainable in the long term, and people will revert to Model I behavior when important issues arise.
- Much business advice is ambiguous and not causal. A great deal of the advice is too ambiguous to be useful. For advice to be actionable, it must be causal; that is, it must specify actions and the results they are expected to produce. Much published advice fails this test. For instance, in The Wisdom of Teams (2015), Jon Katzenbach and Douglas Smith counsel us that trust occurs on teams only when people “do the hard work to build trust,” but nowhere in the book do they give any clues as to the nature of this work. They also encourage team leaders to “strike a balance between providing guidance and giving up control.” But again, they provide no guidance on how one might determine the appropriate balance. Ambiguous advice often sounds good until you go out and try to act on it.
- Business advice is frequently untested. No scientist would dream of publishing a new scientific theory without some experimental evidence to back it up. But commercially published business advice is rarely tested, and often is not testable. Much of the advice given is appealing and compelling and often appears to be obviously true. For example, Tom Peters and Robert Waterman Jr. have never provided any justification for the eight business principles they touted in their book In Search of Excellence.
A Case Study
Argyris describes a consulting group that did substandard work on a study. The company had successfully completed a similar study for the same client previously. A strong case team had been put together, which included three company managers and two outside consultants, but the study failed miserably.
A vice president (VP) of the consulting company conducted an after-action review meeting to determine what went wrong and to try to learn from the experience. He asked everyone to be candid so that the company could learn from the experience. The participants agreed. A summary of the conversation follows.
Manager 1: We had lots of chiefs, but no Indians. The internal team organization was never made explicit. Ambiguity in the team caused resentment, and client conditions changed during the study.
Manager 2: Everyone had his individual piece, and no one knew what anyone else was doing.
Manager 1: The VP changed commitments during the study (and disappeared). Manager 3 was supposed to compensate for losing the VP, but that was not made explicit.
Consultant 1: We went into the field to collect data too early. Then when the client’s situation changed, we could not go back into the field cost-effectively.
Manager 2: The order was to go into the field to do market interviews. No one was in control. We were out of control.
Manager 2: We rarely met as a case team without the client present.
Manager 2: We needed you (speaking to VP) there during the first client presentation because we were getting nitpicked to death. We killed ourselves for a week preparing, only to have the presentation nitpicked to hell for nonsensical reasons.
Consultant 2: I did not know how to pitch in because I did not know what we were doing as a whole.
Manager 3: I agree with most of what has been said. I probably should have been more forceful. After the first meeting we decided that they were crazy, not us, and that we would give them value for the money in the final analysis. There was an incredible fixation to come up with something special.
Model I Behavior in This Team
If we assume that people behave according to the Model I theory of action when threatened, then we can assume that they acted consistently with Model I both during the project and during the after-action review with the VP.
It is easy to spot instances of Model I behavior in the case study:
- The team went into the field too early to collect data. They probably did this to impress the client, to show early progress, to show that they knew what they were doing and were in control. But, the client probably wanted to be in control.
- “We killed ourselves for a week preparing” for the first presentation to the client. Translated: They knew there was a problem and worked hard to sugarcoat it.
- When things went badly, they blamed the client, blamed their own leadership (or lack thereof), and tried to rescue the project by “finding something.” Notice that in blaming their leadership, they blamed someone (the VP) who was not actually there working on the project rather than blaming a colleague who was.
- They knew early on that they were in trouble, but they failed to take any meaningful action to correct the early errors. They could have a) gone back into the field to collect more relevant data, b) apologized to the client for early missteps and tried to work together, and c) alerted their management to the issues and sought help. These actions would have required admitting error and losing control.
Their behaviors are consistent with Model I behavior, which sabotaged the project and also corrupted the results of the after-action review meeting so that little was learned that would improve their performance on future projects.
Among the primary recommendations made by this team to improve performance on future projects were to assign a chief to coordinate individual efforts, hold team meetings without the client present, and provide more time early in the project to think things through.
Theory of Action Critique
The principles of theory of action can be used to critique each of the team’s recommendations.
- Assign a chief. This was the participants’ main suggestion for improvement, and at first blush it seems reasonable, since their efforts were uncoordinated; from the beginning, they were “out of control.” But the advice is ambiguous. It is not clear what the leader is supposed to do or how exactly his actions would help the team. A leader could take a hands-off approach (“come see me if you have problems”) or could be a micromanager. Would either of those be effective, or is something in between required? The suggestion also does not address any root cause of the team’s problems. Would a leader have prevented them from going into the field too early, or encouraged more client participation?
- Hold team meetings without the client present. This recommendation resonates with any consultant. We like to plan a project before unveiling the plan to clients. But why? Who is better positioned to help with the planning than the client who best understands the problem? The reasons for wanting to hold meetings without the client present are generally to maintain control and to mask incompetence or insecurity. Consultants generally want to air their dirty laundry in private and then show only a confident face to the client. The problem in the case study is not that the team included the client too much, but that it did not include the client enough early on, when the decisions were being made.
- Provide more time early in the project to think things through. This kind of advice is one of my pet peeves. It is completely ambiguous and depends entirely on the meaning of the word “more.” More time than what? How will we know when we have spent enough time? It is likely that the team believed it fully understood the issues before starting the job and would have ignored this advice anyway.
Business advice is frequently unactionable, ambiguous, and untested. The insights of the theory of action can help you spot bad advice before you act on it. OGF
Argyris, C. 2000. Flawed Advice and the Management Trap: How Managers Can Know When They’re Getting Good Advice and When They’re Not, first edition. Oxford University Press.
Haddon-Cave, C. 2009. The Nimrod Review: An Independent Review Into The Broader Issues Surrounding the Loss of the RAF Nimrod MR2 Aircraft XV230 in Afghanistan in 2006. Report, The UK Stationery Office, London. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/229037/1025.pdf (accessed 12 January 2016).
Katzenbach, J. and Smith, D. 2015. The Wisdom of Teams: Creating the High-Performance Organization. Harvard Business Review Press.
Lustgarten, A. 2012. Run to Failure: BP and the Making of the Deepwater Horizon Disaster, first edition. W.W. Norton & Company.
Howard Duhon is the systems engineering manager at GATE and the SPE technical director of Projects, Facilities, and Construction. He is a member of the Editorial Board of Oil and Gas Facilities. He may be reached at firstname.lastname@example.org.