It was early on February 24th, 1991, and Saddam Hussein contemplated the enemy soldiers amassed on the Kuwaiti border. With his massive, battle-hardened army positioned to repel the assault, Saddam felt confident he could defend Iraqi-annexed Kuwait. Blustering, he cautioned that the US-led coalition was in for “the mother of all battles.”
Little did Saddam know that the massed troops were in fact a diversion; the prior 39 days of aerial bombardment had devastated Iraqi reconnaissance and intelligence. As Iraqi forces concentrated to repel the assault on Kuwait, the coalition unexpectedly invaded Iraq to the west, across its thinly defended Saudi border. By the time Iraqi intelligence realized its error, it was too late. In less than 100 hours, the coalition suffered minimal casualties while decimating the Iraqi army, capturing 100,000 troops, and forcing Iraq to retreat from all conquered territory.
How did the coalition execute so brilliantly? Most military experts credit John Boyd — fighter pilot, Air Force commander, and renowned military strategist — with conceiving the subterfuge and the devastating “left hook” that directly invaded Iraq. Rather than merely relying on superior numbers, Boyd applied the decision-making framework for which he is famous, the OODA loop, to anticipate how the enemy would process information, and put in motion a plan to disorient and outmaneuver them.
An acronym for observation, orientation, decision, and action, OODA is an iterative loop that every individual or team runs through to analyze new information and take action.
Boyd knew that Saddam would observe coalition forces on the Kuwaiti border. That information, combined with other context, would orient him toward the conclusion that Kuwait was where the blow would be struck. Saddam would then decide on a course of action and act by deploying troops.
But the coalition was one loop ahead. They knew their sleight of hand, and the bombardment of Iraqi intelligence capabilities, would delay Iraqi recognition of the “left hook.” Coalition troops were free to penetrate hundreds of miles into Iraq before the Iraqis could orient to the plot, at which point it was too late to repel the attack.
While analysts leverage more powerful analytical tools than ever, the feedback mechanisms we use to navigate the OODA loop are brittle and outdated.
A friend recently emailed me a link to the video below, a talk by a16z General Partner David Ulevitch titled The Developer's Way. “This made me think of your vision for changing how analysts work,” she wrote.
https://youtu.be/GAnanqIb9CE
In many ways, analysts already work like developers. We obsess over becoming Excel power users, and we all have our favorite formulas (mine include SUMIF and combining OFFSET with MATCH). We learn SQL and Python so that we can pull, manipulate, and analyze data programmatically. We automate reports so our teams can focus on decision making.
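To make the comparison concrete, here is a minimal sketch of that programmatic workflow in Python with pandas. The file names and columns (sales.csv, targets.csv, region, amount, target) are purely illustrative, not from any real project: a groupby plays the role of SUMIF, and a merge stands in for the OFFSET/MATCH-style lookup.

```python
import pandas as pd

# Illustrative extracts; file names and columns are hypothetical.
sales = pd.read_csv("sales.csv")      # columns: region, rep, amount
targets = pd.read_csv("targets.csv")  # columns: region, target

# SUMIF equivalent: total sales per region.
totals = sales.groupby("region", as_index=False)["amount"].sum()

# OFFSET/MATCH-style lookup equivalent: join each region's total to its target.
report = totals.merge(targets, on="region", how="left")
report["pct_of_target"] = report["amount"] / report["target"]

print(report.sort_values("pct_of_target", ascending=False))
```

Once the logic lives in a script rather than a spreadsheet, rerunning the report is a single command instead of an afternoon of copy-paste.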
Yet we still struggle with broken feedback cycles that impede our ability to navigate the OODA loop.
Lacking uniform best practices for scoping out our projects, our teams are hamstrung in observing the business information that drives project plans. Forced to email models and screenshots to stakeholders, we struggle to orient around the inputs we have observed so we can analyze them.
Stuck monotonously tweaking numbers based on stakeholder feedback, we spend inadequate time interpreting our analysis to inform a decision. And even teams that automate reports spend time manually fielding stakeholder questions about the data, distracting us from putting our decisions into action and measuring our progress.
To align our team, we would write up a scoping and deliverables document that we would have to manually maintain. Maintenance and tracking progress was painful, given limited search and the fragmentation of our tool set.
-- Sri Batchu, Former VP of Operations & Head of Pricing at Opendoor
Today, kicking off and scoping out a project is laborious and completely manual. For every project, teams almost always start from a blank canvas when observing the current state of the business.
If an analyst is following best practices, they might send a project summary email, create a shared requirements doc, or list out tasks and deliverables within a spreadsheet tab. We do this despite the fact that the same analytical projects are performed across every business, and we execute them on a predictable cadence.
What if analysts had a library of past project templates and snippets to repurpose and reuse? When building our company’s financial model, assessing an investment opportunity, or reworking KPIs, how much time would we save if we merely cloned the last instance? With the click of a button, we could leverage the model template, repurpose project deliverables and reference historical project discussion topics.
Most of our work supports our execs, or our product and customer experience teams. When we need feedback on a dashboard or report, we send a screenshot. My team does this all day long.
-- Shaun Chaudhary, Director of Data & Analytics at BetterCloud
Developers have Git to submit and accept change requests to their code; analysts still get emails with confusing feedback. Other feedback mechanisms include one-off questions in Slack, random in-application comments, and in-person reviews where follow-up actions are jotted down on printouts.
Last year, I personally led a project that resulted in no less than 30 versions of our analysis, a handful of relevant dashboards and data sets, numerous stakeholder meetings, a thread consisting of more than a hundred emails, endless Slack DMs with screenshots of the latest dashboard, and a final written report over 25 pages long. Orienting myself and my team so that we could analyze the situation was mind-numbingly tedious.
What if there were a single platform where teams could view their analytical work, provide feedback, and manage iterations? What if analysts could see that history and seamlessly roll back changes? What if teams could programmatically merge inputs from across the organization, avoiding the back-and-forth chatter altogether?
We have a templated model for presenting returns, and a standard deck for final recommendations. But getting the numbers into the presentation is totally manual. My team has to copy, paste and update the deck for each new set of numbers.
-- Director at a New York-based, multi-billion-dollar investment management firm
The end result of an analytical project, whether it is building the corporate financial model or investigating a new initiative or investment, is a “capstone” presentation, set of reports, or written analysis from which leadership makes decisions. And once the business acts on those decisions, the analyst provides regular metrics to measure success and inform necessary tactical adjustments.
But even automatically updated reports lead to a dizzying stream of clarifications and follow-up questions. Finding past distributions of the metrics, to understand what might have changed, what was said, and what was supposed to happen next, turns into a treasure hunt.
What if getting the metrics into the final materials were as easy as a shortcut? What if, each time you created a new model version, the presentation automatically updated to reflect the new numbers? Or if impossible-to-locate KPI reports were centralized, and it were simple to find historical instances, the decisions made, and the questions asked?
As in military intelligence, analytical tools help ensure we make informed decisions. But it is the speed with which we communicate and act on those decisions that determines our success.
Analysts serve as the nerve center of our teams, but the feedback mechanisms by which we observe, orient, decide, and act are still fundamentally broken. Only by working like developers can we transform those mechanisms, allowing our teams to succeed as spectacularly as Operation Desert Storm did.