Intelligence Failures: Some Historical Lessons, Part 1
The United States is, at the moment, engaged in domestic investigations of the US government's failure to anticipate and prevent the attacks of last September 11. While the drama is likely to be played out largely in an American domestic political context, not part of The Estimate's normal brief, the fact that it involves a terrorist act which originated in the region covered by The Estimate means that many of The Estimate's readers may have an interest in the debate. And since The Estimate seeks to offer open-source intelligence analysis, it may also be a useful time to consider some historical causes of intelligence failures, in order to promote an understanding of how they happen and why, in all likelihood, they can never be completely avoided.
This Dossier examines the lessons to be learned from some of the key intelligence failures of the 20th century. Because the nature of intelligence collection differed so greatly in earlier centuries, historical parallels earlier than the 20th century are of less value.
Some of the most critical intelligence failures of that century did take place in the region covered by The Estimate: Israeli failure to recognize the imminence of war until the morning of October 6, 1973; six years earlier, the failure of Arab air forces to anticipate or prepare to defend against the possibility of an Israeli pre-emptive strike; and almost universal misreading by international intelligence services of Saddam Hussein's full intentions toward Kuwait in 1990.
The lessons of these will be examined in this Dossier, but so will lessons learned from other intelligence failures, including the most studied of all, Pearl Harbor.
Strategic surprise is a critical element in warfare, conventional or otherwise. Intelligence gathering and assessment are aimed largely at forestalling surprise: learning an enemy's or potential enemy's plans and intentions, and anticipating events. Each country or player also naturally seeks to maintain a high level of operational security on its own side, to prevent a potential adversary from anticipating its plans. One means of doing so is security; another is deception.
When, despite all the efforts of a national intelligence community to anticipate enemy intentions, genuine strategic surprise is accomplished, there is first shock and then, usually, recrimination. Investigations follow. Probably no surprise of the 20th century was more thoroughly investigated than the attack on Pearl Harbor by Japan in 1941, but historians still debate the levels of responsibility for the failure, and conspiracy theorists assume the failure was intentional. Israel's Agranat Commission, after the October 1973 Arab-Israeli war, ruined several promising careers. Other examples proliferate.
Often, during the course of these investigations, it is learned that some key piece of intelligence which, in retrospect, clearly points to the intentions of the enemy, was missed or misread. In the case of Pearl Harbor, everyone recognized that Japan was about to move militarily somewhere in the Pacific or Southeast Asia. The US was reading the Japanese diplomatic codes, and by the night before the attack knew that something was likely to happen by 1:00 pm Washington time the next day, and that it was likely to involve American interests. The Japanese carrier fleet could not be located. Among other intercepts was a request from Tokyo asking the Japanese consulate in Honolulu to report on which ships were moored in Pearl Harbor and where. In retrospect the failure to anticipate the attack seems massive, and this has led many to conclude that it was deliberate. (But even if the US President wanted to get into the World War, why allow the fleet to be destroyed? A surprise attack would have been an act of war even if the fleet was not sunk.)
In a classic analysis published in 1962, Roberta Wohlstetter analyzed the Pearl Harbor failure in terms of "signal" versus "noise". Just as in a radio intercept one must discern the actual broadcast from the background noise, so in intelligence assessment there is a requirement to differentiate the information that is real from the vast quantity of information which may be intended to deceive or is simply irrelevant. Some key Japanese signals were missed or not translated until after the attack; one last-minute warning from Washington went out by commercial telegraph rather than by military cable. There were too few capable Japanese translators. And there was plenty of evidence pointing elsewhere. Hence the irony: never before had the US had such extensive intelligence concerning an adversary's intentions, yet it was caught by surprise.
Because it was known that a Japanese fleet was in fact on the move towards Southeast Asia (it began landings on the Malay Peninsula on the morning of December 8 Asia time, coinciding with the Pearl Harbor attack), many expected the Japanese thrust to be exclusively against the British in Malaya and Singapore and against the Dutch in the East Indies (modern Indonesia). Even when it became clear that an attack would be made against US interests, the Philippines seemed a more likely target. And the possibility that Japan would move simultaneously against Britain, the Dutch, and the Americans, including an attack on the US fleet, seemed less likely than that Japan would concentrate on one or two of these targets. In fact, Japan attacked Pearl Harbor, the Philippines, Malaya, Hong Kong and Wake Island simultaneously and the Dutch East Indies shortly thereafter. (Only the initial assault on Wake was repulsed; the island fell later in December.) The daring nature and scope of the move was itself such that it aided in guaranteeing a failure to anticipate it.
Not everyone will agree with the assessment just given, and historians still argue over minor details of who saw what and who knew what, and when. Second-guessing, 20-20 hindsight and Monday-morning quarterbacking are all part of the assessment of an intelligence failure. Wohlstetter's "signal" versus "noise" analogy seems, however, to explain most of the evidence, and also certainly explains a great deal of what went wrong before September 11.
Important, too, is the distinction between intelligence collection and intelligence analysis. It is one thing for the US National Security Agency to be trying to monitor all the cell-phone conversations of potential terrorists, and quite another for someone to actually examine all those conversations, translate them, and understand their meaning. There have been reports that at least one monitored conversation mentioned the precise date September 11, but that it was not translated until later. That parallels the Pearl Harbor problem of too few analysts trained in critical languages to immediately translate a vast body of intelligence product.
It is also well known, of course, that the US has suffered severely for years from a lack of human intelligence (HUMINT) resources in key areas. But even when the information is provided, sifting it, recognizing it for what it is, and properly assessing what has been collected is a major challenge. Often, one will fail.
Ultimately, intelligence assessment is one of those many elements which go into making warfare unpredictable, one of the many sources of what Clausewitz called "friction", that which can go wrong. (On the concept of "friction", see The Estimate's Dossier of November 2, 2001.)
Failure Due to Preconceptions
On a smaller scale, the surprise attack across the Suez Canal and in the Golan Heights by Egyptian and Syrian forces on October 6, 1973 was a classic case of an intelligence failure in which, despite considerable evidence, assessment was skewed by a preconception. The Israelis, who have studied the event almost as thoroughly as Americans have studied Pearl Harbor, ultimately blamed in part what they called the conceptsia, the preconception that Arab countries lacked the capability of attacking Israel and winning, and therefore would not do so. But the Arabs this time wanted only to fight long enough, and do well enough, to force negotiations: the Egyptians in particular had no illusions in 1973 about winning decisively. In fact, the Egyptian assessment was reportedly that once the Suez Canal was crossed, they would have somewhere between six and 24 hours before the Israelis counterattacked. As it happened, the surprise was such that it took 48 hours.
Israeli failure on that occasion was directly related to the conceptsia. Egyptian and Syrian troop movements were well known, and the Israelis apparently even had knowledge of the basic plans for the canal crossing. But there had been several false alarms, including one in May of 1973, and even pessimistic assessments were suggesting that Egypt would not be in a position to attack for another year or two. Many Israeli analysts were expecting to fight a war in 1975 or 1976. Israel has one of the most successful reserve systems in the world, but its mobilization plans at the time were predicated on the notion that it would have two days' warning of war in which to complete a call-up.
Evidence grew rapidly in early October of an imminent attack. Leaves were canceled in Egypt, and troops were being moved to the canal. Defense Minister Moshe Dayan, Chief of Staff David Elazar and Mossad Chief Zvi Zamir all became more and more convinced that war might be imminent, but Military Intelligence (Aman) chief Eli Zeira disagreed.
Because only parts of the Agranat Report investigating the failure were made public, some mysteries remain. One element seems to have been a conviction that one key intelligence source, reportedly characterized as able to provide "unambiguous" warning, should not be activated until war was imminent. That source was presumably highly placed in an Arab political or military post.
As late as October 3, Zeira continued to insist that war was not likely. By October 5, the evidence was sufficient that some moves were made to beef up defenses in both Sinai and Golan, but it was not until the morning of October 6, Yom Kippur, that the "unambiguous" warning came and full mobilization began. The attack began hours later, and Israel needed two days to fully mobilize and begin to strike back.
There are still mysteries, as noted, about the reluctance to believe what increasingly looked obvious. Key figures were replaced as a result of the failure. But clearly, the idea of the conceptsia is a part of the explanation. Unlike Stalin, who simply did not believe Hitler would attack him, Israel knew that Egypt and Syria would go to war again someday, but it refused to believe that someday was imminent.
Failure to Expect the Unexpected
It is a bit more difficult to discern what precisely happened in the case of the June 1967 war, when Arab air forces were destroyed on the ground by Israel's pre-emptive strike; Arab regimes do not hold public hearings or issue reports assessing blame. The commander of Egypt's armed forces, Field Marshal Abd al-Hakim Amer, ultimately was blamed, but since he committed suicide (or, as some believe, was made to appear to have done so), we do not have his version of events.
Clearly, Egypt's Gamal Abdel Nasser had been making provocative gestures, in part goaded by Syrian complaints that they, not the Egyptians, were bearing the brunt of confrontation with Israel. Nasser expelled the United Nations force which had been put in Sinai after the Suez War of 1956, and was threatening to close the Strait of Tiran. There is considerable evidence that Nasser intended to stop short of actually provoking war, and he had planned to send a Vice President to the United Nations, possibly to offer some conciliatory gesture. But he had already handed the Israelis a casus belli. Israel had long depended upon pre-emption because of its precarious security situation; if about to be struck, it would strike first. Any student of the Israeli War of Independence of 1948, not to mention of the Suez War of 1956, should have been able to forecast at least the possibility of a first strike. (Similarly, Japan had started the Russo-Japanese War with a surprise attack, and had also used the technique in Manchuria and China. Pearl Harbor was predictable, and the scenario had even been considered by war planners.)
Why the Arab militaries, at a time of provocation and threats being issued by the national leader, did not protect themselves against a surprise attack is hard to grasp. But in many cases the air forces were not only caught on the ground, but wingtip-to-wingtip. This is a failure of assessment caused in part by not recognizing Israel's already longstanding doctrinal reliance on pre-emption; if not precisely an intelligence failure, it is a failure to know one's enemy.
Failure to Anticipate Innovation
The idea of the airliner as weapon was not entirely new. In fact, it had even been used in a Tom Clancy novel, and as far back as 1973, when Israel shot down a Libyan airliner over occupied Sinai, the reason given was that Israel feared the aircraft was planning to crash into Tel Aviv. In the Ramzi Yusef trial, both a mass hijacking of airliners and the possibility of crashing an airliner into the CIA or the Pentagon had come up, though not necessarily linked together. The idea was out there, and many discussions of possible terrorist scenarios had mentioned it, but the fact that the World Trade Center towers would collapse completely was probably impossible to predict.
There is a long history of failure to anticipate innovation. When Japan invaded Malaya in 1941, Britain did not have a single tank in the Malay peninsula because the jungles were considered impassable for tanks, and if tanks kept to the roads, infantry would have no way of keeping up. Japan sent tanks, and it sent infantry, but the infantry rode bicycles. Some British analysts had guessed it would take an enemy two years to advance through Malaya to Singapore. It took two months, and in another notorious failure of planning, Singapore had very limited landward defenses, having been designed as a fortress against naval attack.
When Germany invaded Belgium in May 1940, one of the major obstacles to crossing the Meuse was the great Belgian fortress of Eben Emael with its huge guns. The Germans landed gliders on the roof, which was undefended.
In fact, the entire German blitzkrieg of 1940 was an innovative use of technology. Though there had been champions of air power (Billy Mitchell, Giulio Douhet) and champions of the tank (B.H. Liddell Hart, Charles de Gaulle, Heinz Guderian), the combination of aircraft and tanks into a fast-moving, penetrating force that depends less on follow-on infantry than on its own surprise and shock effect was pioneered by Germany in its use of the blitzkrieg.
New technologies are often decisive in war (the machine gun, the atomic bomb), and it is not surprising that "secret weapons" are closely guarded secrets. Terrorists have a built-in advantage because a new weapon is unlikely to be fully understood until it is used, and even then it may be difficult to take countermeasures, as the Israelis have learned against the weapon of the suicide bomber. Part Two of this Dossier will examine other lessons of intelligence failures.
© Copyright 2001, The International Estimate, Inc. No part of this web site, including its graphics, written content or any other
material may be reprinted without the written permission of The International Estimate, Inc.