Stakes in the Sand: Surveying in the Gulf War

In 1990, US forces arrived in the Persian Gulf with a cornucopia of navigation technologies: not just GPS but also LORAN, TACAN, TERCOM (for cruise missiles), and inertial navigation systems which used laser, electrostatic, or mechanical gyroscopes, as well as old-fashioned manual tools like maps and compasses. So why were US surveyors heading off into the Saudi desert?

The surveyors were from the 30th Engineer Battalion (Topographic), which was deployed to provide map production and distribution, surveying, and terrain analysis services to the theatre. The survey platoon's work was being done on behalf of the corps and divisional artillery, which had their own particular navigational needs. Unlike fighter or helicopter pilots, field artillery gunners had no opportunity to see their targets and make last-minute adjustments to their aim. Unlike bomber crews or cruise missiles, their fire missions were not planned well in advance using specialized materials. To provide precise positioning information to the guns, each artillery battalion in the Gulf was equipped with two Position and Azimuth Determining Systems (PADS): truck-mounted inertial navigation systems that kept a continuous record of the unit's position. At the heart of PADS was the standard US Navy inertial navigation system, the AN/ASN-92 Carrier Aircraft Inertial Navigation System (CAINS).

Like all inertial navigation systems, PADS had a tendency to drift over time. That meant it required regular refreshes at a pre-surveyed location, or control point. The initial specification for PADS called for a horizontal position accuracy of 20 meters over 6 hours and 220 kilometers. Actual horizontal accuracy seems to have been far better, more like 5 meters. One reason for the high accuracy was that, unlike an airplane, the vehicle carrying PADS could come to a complete stop, during which the system could detect and compensate for some of the errors in its horizontal accelerometers.
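The stop-and-correct procedure described above is what inertial navigation engineers call a zero-velocity update (ZUPT): while the vehicle is known to be stationary, any velocity the system reports must be error, which can then be estimated and removed. A minimal sketch of the idea, with invented numbers (the real PADS filter was far more sophisticated than this):

```python
# Zero-velocity update (ZUPT) sketch: on a stationary vehicle, any
# velocity the inertial system reports is pure accumulated error.
def integrate(accel_bias, dt, steps):
    """Integrate a biased accelerometer on a vehicle that is not moving."""
    v = 0.0  # reported velocity (m/s)
    x = 0.0  # reported position drift (m)
    for _ in range(steps):
        v += accel_bias * dt   # true acceleration is zero; only bias remains
        x += v * dt
    return v, x

dt, steps = 0.1, 600           # one minute of data at 10 Hz (hypothetical)
bias = 0.005                   # m/s^2, a made-up accelerometer bias

v_reported, x_drift = integrate(bias, dt, steps)
# The vehicle is known to be stopped, so v_reported is entirely error:
est_bias = v_reported / (dt * steps)
print(v_reported, x_drift, est_bias)
```

The estimated bias can then be subtracted from subsequent readings, which is why halting periodically kept PADS so much more accurate than an airborne system that never stops.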

Unfortunately, the US had exactly one control point in Saudi Arabia, at Dhahran airbase. (Army Reserve historian John Brinkerhoff says this and several other points were surveyed with "Doppler based methods." I assume that means the TRANSIT satellite system, which determined location on the basis of Doppler shift.) Starting from that control point, the 30th's surveyors extended a network of new control points northwards and westwards towards the Iraqi border. Conventional line-of-sight survey methods would have been too slow, but the surveyors had received four GPS receivers in 1989 and soon got more from the Engineer Topographic Laboratories to equip a follow-up team of surveyors. Eventually, their survey covered 10,000 square kilometers and included 95 control points. Relative GPS positioning took about two hours (according to Brinkerhoff) and offered accuracy of about 10 centimeters (compared to 17 meters for regular GPS use). Absolute positioning – done more rarely – required four hours of data collection and provided accuracy of 1–5 meters.

When the ground war began on 24 February 1991, the two survey teams tried to stay ahead of the artillery, which meant driving unescorted into the desert and marking new control points with steel pickets fitted with reflectors (for daytime) and blinking lights (for night-time). Providing location data through headquarters was too slow, so the surveyors took to handing it directly to the artillery's own surveyors or simply tacking it to the pickets. By the ceasefire on March 1 they had surveyed all the way to 30 km west of Basra. Where the artillery outran the control points, they used their own GPS receivers to make a "good enough" control point and reinitialized the battalion's PADS there, so that all the artillery batteries would at least share a common datum. One thing PADS could do that GPS couldn't was provide directional information (azimuth), so units that outran their PADS capabilities had to use celestial observations or magnetic compasses to determine direction.

What the 30th Battalion and the artillery's surveyors did in the Gulf was different enough from traditional survey methods that some in the army already used a different phrase, "point positioning," to describe it. In the 1968–1978 history of the Engineer Topographic Laboratories, which designed army surveying equipment, PADS was one of three surveying and land navigation instruments singled out as part of this new paradigm (the others were a lightweight gyroscopic theodolite with the acronym SIAGL and the Analytical Photogrammetric Positioning System, or APPS).

Brinkerhoff tells the story of the 30th's surveyors as a meeting of high and low tech, but the work really relied on a whole range of technologies. Most of the GPS surveying was relative positioning anchored to previous Doppler surveying. Position and azimuth information was carried forward by inertial navigation, and the position of the firing battery was paired with target information from a forward observer equipped with GPS, an inertial navigation system, or a paper map, or from aerial photography interpreted using the airplane's own navigation system or a photointerpreter's tool like APPS. GPS surveying and navigation did not stay wrapped up with all these other navigational tools for long; the technology was flexible enough to be used in place of many of them. But in the early 1990s, GPS's success was contingent on these other systems too.

Source Notes: The story of the 30th and its surveyors appears in John Brinkerhoff's monograph United States Army Reserve in Operation Desert Storm. Engineer Support at Echelons Above Corps: The 416th Engineer Command (printed in 1992). Further details appear in the Army Corps of Engineers history Supporting the Troops: The U.S. Army Corps of Engineers in the Persian Gulf War (1996) by Janet A. McDonnell and "The Topographic Challenge of DESERT SHIELD and DESERT STORM" by Edward J. Wright in the March 1992 issue of Military Review. Reflections on how the artillery used PADS and GPS in the Gulf come from the October 1991 issue of Field Artillery, a special issue on "Redlegs in the Gulf." Technical details for PADS are from the ETL History Update, 1968–1978 by Edward C. Ezell (1979).

A Tale of Two Keystone States

An auxiliary crane ship, the SS Cornhusker State, in 2009. US Navy photo by Petty Officer 1st Class Brian Goy. DVIDS Photo ID 185724.

In the later years of the Cold War, the US Navy recognized the need to revitalize its seagoing transport capacity. During the Second World War, the military had built a massive fleet to support transatlantic and transpacific campaigns. Mothballed after the war, much of it had rotted away by the time reconstruction began under presidents Nixon and Carter and accelerated under President Reagan. One necessity for the new fleet was equipment to move cargo – especially containers – from ship to shore. After experiments with lifting by helicopter or balloon, the Navy settled on fitting a series of cargo ships with heavy cranes to unload cargo in ports that lacked the necessary infrastructure. The first ship to be converted was the SS President Harrison, previously operated by American President Lines, which was renamed the SS Keystone State (T-ACS-1) upon completion of its refit in 1984.

The Barge Derrick Keystone State (BD-6801) being towed by two Army Small Tugs during an exercise at Joint Base Langley-Eustis, Va., Aug. 6, 2013. (U.S. Army photo by Spc. Cal Turner/Released) DVIDS ID 990511.

Confusingly, the T-ACS-1 is not the only US military crane watercraft named Keystone State. In 1998, the US Army launched an engineless crane barge, BD-6801, with the same name, chosen to honor the 28 soldiers from Pennsylvania's 14th Quartermaster Detachment killed in a SCUD attack during the first Gulf War (in this instance, BD stands for Barge Derrick). Operated by the Transportation Corps, BD-6801 was built to help unload military cargo in any of the many ports around the world unequipped to handle it. It carries a single crane with a reach of 175 feet and a lift capacity of 115 long tons, which, unlike the cranes on previous army barges, is enough to lift a 60 ton M1 tank off a cargo ship.

Between 1985 and 2005, at least one Army floating crane like the Keystone State was always aboard the MV American Cormorant, a float-on/float-off (FLO/FLO) heavy lift ship stationed at Diego Garcia that carried a package of Army watercraft for operating a damaged or unequipped port. The American Cormorant and its cargo deployed to many major crises as part of the army's response, including the first Gulf War and Operation RESTORE HOPE in Somalia. Until the launch of the Keystone State, the crane barge carried aboard the American Cormorant was one of the BD-89T class, with a 100 foot reach and an 89 long ton (100 short ton) capacity.

The American Cormorant en route to the Gulf. Note the two BD-89T cranes on-board, only one of which was used in operations. From Operations Desert Shield and Desert Storm: The Logistics Perspective (Association of the United States Army Institute of Land Warfare, 1991), p.12. Courtesy of the AUSA website.

It was a BD-89T barge, the Algiers (BD-6072), that was deployed for use by the Army's 10th Transportation Battalion (Terminal) during the Gulf War. In addition to performing more than 1,500 lifts in Saudi ports, the Algiers was used to help clear damaged Kuwaiti ports of obstructions – harbor clearance being a mission shared between the US Army and Navy. Although the US Navy had built up an extensive salvage force after the Second World War, changes to salvage doctrine meant it sent only one salvage ship and no heavy-lift gear to the Gulf. Commercial salvors paid by the Dutch government took up much of the slack, but there were limits to what the contractors could do. With rental fees for barges and cranes running as much as $150,000 a day for a 600 ton Ringer crane barge, the Americans ended up mostly going without the heaviest equipment. The biggest harbor clearance lift involving the Algiers was a sunken Iraqi Osa II missile boat in the Kuwaiti port of Ash Shuaybah. Though small by seagoing standards, the Osa II was 127 feet long and displaced almost 200 tons at standard load. Even in combination with a quayside 140 ton crane, the crane barge couldn't lift the ship whole. Only after army divers cut off the still-live missile launchers could the boat be raised. Looking back at the operation in the navy after-action report, perhaps with a little envy, one of the navy salvage engineers called the army crane "very workable." Other sunken craft the divers lifted at Ash Shuaybah, with or without the help of the crane, included a 90 foot sludge barge and two other boats.

The deployment of the Algiers during the first Gulf War is only the tip of the iceberg when it comes to the military roles played by American floating cranes, which since the conversion of the battleship Kearsarge into Crane Ship No. 1 have worked to construct warships, salvage sunken submarines, and clear wrecks from the Suez Canal.

Source Notes: Much of the information for this post came from various sources around the internet, and in particular the website for the US Army Transportation Corps’ history office. The Corps’ 1994 official history, Spearhead of Logistics, was also useful. Details on the salvage operations during the Gulf War came mostly from the two-volume US Navy Salvage Report: Operations Desert Shield/Desert Storm, printed in 1992 and available online at the Government Attic (volumes one and two); the report’s chronology was the only place I was able to find which US Army crane barge was actually operated during the war.

Trawling for Spies

A new article in Intelligence and National Security by Stephen G. Craft reveals how the Office of Naval Intelligence (ONI) ran a counterintelligence program using fishermen along the southern Atlantic seaboard during the Second World War. Expecting that the Axis would land agents on US shores (something that happened, but only rarely) and use German and Italian-American fishermen to support U-boat operations (which seems to have happened not at all), ONI created Selected Masters & Informants (SMI) sections under naval district intelligence officers to recruit fishermen as confidential informants. Their operations caught no spies but offered some comfort that subversion was never rampant along the coast.

For fifty years from 1916 to 1966, the district intelligence officers were ONI’s contribution to local counterintelligence, security, and information-gathering in US coastal areas. The official responsibilities of the district intelligence officer were numerous. According to Wyman Packard’s A Century of U.S. Naval Intelligence they included:

maintenance of press relations for district headquarters; liaison with the investigating units of federal, state, and city agencies within the naval district; liaison with public and private research agencies and with business interests having information in intelligence fields; liaison with ONI and the intelligence services of the other naval districts, and with forces afloat within the district; counterespionage, security, and investigations; collection, evaluation, and recording of information regarding persons or organizations of value (or opposed) to the Navy; preparation and maintenance of intelligence plans for war; and administrative supervision over the recruiting, training, and activities of the appropriate personnel of the Naval Reserve within the district.

The position was only eliminated in 1966, when its investigative and counterintelligence duties passed to the Naval Investigative Service, its intelligence-collection duties to local Naval Field Operational Support Groups, and its other sundry tasks to the district staff intelligence officer.

Admiral Ingersoll, Commander-in-Chief, Atlantic Fleet, and Rear Admiral James at Charleston, South Carolina during an inspection of the Sixth Naval District, 23 November 1943. Courtesy of Mrs. Arthur C. Nagle. Collection of the Naval History and Heritage Command, NH 90955.

In October 1942, the creation of the SMI sections added recruiting fishermen as counterintelligence agents to the long list of tasks mentioned above. By January 1943, the SMI sections had recruited 586 agents, 200 of them in the Sixth Naval District (headquartered in Charleston, South Carolina). Ship owners were paid $50 to cover installation of a radiotelephone, which was provided to about 50 craft. Otherwise, masters were to report by carrier pigeon (the Navy operated lofts at Mayport, Florida and St Simon's Island, Georgia) or by collect call once ashore. Some masters also received nautical charts overprinted with a confidential Navy grid for reporting purposes.

Shrimp fleet in harbor, St. Augustine, St. Johns County, Florida, 1936 or 1937. Photograph by Frances Benjamin Johnston. Retrieved from the Library of Congress (accessed April 20, 2017).

The absence of winter shrimp fishing, a tendency to cluster in good fishing spots, and the total absence of enemy covert activity all combined to limit the program's impact. Further south, in the Seventh Naval District (headquartered in Jacksonville and Miami, Florida), most fishing was done so close to shore that the district did not bother to implement the program. In a few cases fishing boats were attacked by U-boats, and two confidential observers were reportedly killed in submarine attacks. Though the program operated until V-J Day, few reports of interest were ever received. Similar operations took place elsewhere in the US as well, with scattered references in Packard's Century of U.S. Naval Intelligence to fishing vessels serving as observers in other naval districts too.

Shrimp boats were the basis for both overt and covert surveillance. Navy patrol craft like YP-487 were known as "Shrimpers" because of their origins as commercial fishing boats. Collection of the Naval History and Heritage Command, NH 106994.

How successful you consider the program will depend on how plausible you consider the Navy’s fear of subversion and agent landings. However, the idea of using commercial seafarers as observers and informants clearly proved itself enough to resurface from time to time after the war. In 1947, the Chief of Naval Operations issued a letter authorizing the placing of informants on US merchant ships to detect any crew members involved in subversive activities (this was known as the Special Observer–Merchant Marine Plan). In 1955, merchant ships and fishing vessels were included in plans to collect “merchant intelligence” (MERINT) on sightings of ships, submarines, and aircraft (both efforts referenced in Packard). Did the use of fishermen for counterintelligence continue into the 1960s or beyond? If so, there might have been agents on the boats involved in joint US–Soviet fishing enterprises of the 1980s, carefully watching the Soviets carefully watch the Americans.

Tides of War, Part Two

The first part of this post appeared on the blog in November 2016. The second part was supposed to come out within a week or two, as soon as I found a little more on the post-war use of analog tide-predicting machines. Unfortunately, the search for "a little more" ended up taking far more time than expected and turning up nothing within easy reach. I'd skip the apology if it weren't for the fact that the proper references to Anna Carlsson-Hyslop's research (discussed in part one) were buried in the source note at the end of this post. Sorry.

Tide-predicting machines, the first of which appeared in the late nineteenth century, were an elegant mechanical solution to a complex mathematical problem. Used mostly to produce information useful to commercial shipping, during the two world wars they also played an important role in the planning of amphibious operations like the Normandy landings.

That contribution is interesting enough to give them a spot in the history of war, but the basic design of the British machines – using multiple gears to create an analog approximation of a mathematical function – also has an oblique connection to one of the most important technical achievements of the war: the mechanization of cryptanalysis.

Alan Turing is justifiably famous for his role in the breaking of the German Enigma cipher, and particularly for his contribution to designing electro-mechanical computing tools that transformed the process. (Even if popular versions of the story do it a terrible disservice by erasing everyone except Turing from the picture. The Imitation Game, I'm looking at you.) Less well known are some of Turing's pre-war flirtations with the mechanization of mathematical problem-solving. Andrew Hodges' biography describes two projects which Turing took on, at least briefly. The first, during his time at Princeton in 1937, was to use electromagnetic relays for binary multiplication to create an extremely large number that could be used as the key for a cipher. This was, as Hodges puts it, a "surprisingly feeble" idea for a cipher but a practical success as far as constructing the relays was concerned.

The second project was an attempt to disprove the Riemann hypothesis about the distribution of prime numbers (for the hypothesis and the zeta-function, read Hodges or Wikipedia; there's no chance of me describing them properly) by calculating the Riemann zeta-function and showing that not all the points where the function reached zero lay on a single line, as the hypothesis stated. An Oxford mathematician had already calculated the first 104 zeroes using punched-card machines to implement one approximation of the function. Since the zeta-function was a sum of circular functions of different frequencies, just like Thomson's harmonic analysis of the tides, Turing realized it could be calculated using the same method. Or, more precisely, the machine could rule out enough values that only a few would have to be calculated by hand.
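The principle shared by the tide predictors and Turing's planned machine is simply the superposition of cosines: each gear wheel contributed one term of the form A·cos(ωt + φ), and the machine summed the terms mechanically. A toy version in Python, with invented amplitudes, frequencies, and phases standing in for real tidal constituents:

```python
import math

# Each (amplitude, angular frequency, phase) triple stands in for one
# gear wheel of a Kelvin-type tide predictor; the values are invented.
constituents = [
    (1.2, 0.50, 0.3),
    (0.4, 1.00, 1.1),
    (0.1, 1.90, 2.0),
]

def height(t):
    """Sum of harmonic terms: what the machine traced on its paper roll."""
    return sum(a * math.cos(w * t + p) for a, w, p in constituents)

# Sample a day of "tides" at hourly intervals
curve = [height(t) for t in range(24)]
```

Real machines did exactly this with thirty or more constituents; Turing's insight was that the same mechanical summation could just as well evaluate the circular-function approximation of the zeta-function.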

With a grant of £40 from the Royal Society, Turing and Donald MacPhail designed a machine that, like the tide predictors, used meshed gear wheels to approximate the thirty frequencies involved. The blueprint was completed by 17 July 1939, and the grinding of the wheels was underway when the war broke out and Turing joined the Government Code and Cypher School at Bletchley Park.

Nothing in the work that Turing did at Bletchley connected directly to the zeta-function machine, but, as Hodges notes, it was unusual for a mathematician like Turing to have any interest in using machines to tackle abstract problems of this sort. Clearly, though, Turing had been mulling the question of how machines could be applied to pure mathematics long before he became involved in the specific cryptanalytic problems that were tackled at Bletchley.

Of course, the secrecy surrounding code-breaking meant that no hint of the connection, or of any of Turing's wartime work, would have leaked out to those operating the tide-predicting machines in Liverpool or elsewhere. The end of the war meant a return to usual practice, but the machines' strategic importance remained.

Probably the last analog machine to be constructed was a thirty-four constituent machine built in 1952–5 for East Germany (and now in the collection of the German Maritime Museum in Bremen). The Soviet Union had ordered a Kelvin-type machine for forty constituents from Légé and Co. in 1941 that was delivered to the State Oceanographic Institute in Moscow in 1946, on the eve of the Cold War. Bernard Zetler, an oceanographer who worked on tide prediction at the Scripps Institution of Oceanography in San Diego, recalls that he was unable to visit the machine in 1971 because it or its location was classified. The Soviet tide tables certainly were.

The American Tide Predicting Machine No. 2 remained in use until 1966, but played no role in the American amphibious landing at Inchon during the Korean War. The wide tidal range at Inchon meant that the landing needed good tidal information, but rather than making new calculations the existing American and Japanese tide tables were supplemented by first-hand observation by Navy Lieutenant Eugene F. Clark, whose unit reconnoitered the area for two weeks preceding the landings.

When analog machines like Tide Predicting Machine No. 2 were retired, they were replaced by digital computers whose architecture originated in other wartime projects like the ENIAC computer, which had been built to calculate ballistics tables for US artillery. The world’s navies have not relinquished their interest in tools to predict the tides. Their use, though, has never matched the high drama of prediction during the Second World War.

Source Note: The D-Day predictions are discussed many places on the internet, but almost all the accounts trace back to an article oceanographer Bruce Parker published in Physics Today, adapted from his 2010 book The Power of the Sea. Where Parker disagrees with the inventory of machines commissioned by the National Oceanography Centre, Liverpool (itself a descendant of the Liverpool Tidal Institute), I've followed Parker. Details on the work of Arthur Doodson and the Liverpool Tidal Institute come from Anna Carlsson-Hyslop's work: the articles "Human Computing Practices and Patronage: Antiaircraft Ballistics and Tidal Calculations in First World War Britain," Information & Culture: A Journal of History 50:1 (2015) and "Patronage and Practice in British Oceanography: The Mixed Patronage of Storm Surge Science at the Liverpool Tidal Institute, 1919–1959," Historical Studies in the Natural Sciences 46:3 (2016), and her dissertation for the University of Manchester (accessible through the NERC Open Repository). The scientist suspected of Nazi sympathies was Harald Sverdrup, a Norwegian national who worked with Walter Munk on wave prediction methods used in several amphibious landings. Turing's experiments calculating the Riemann zeta-function appear in Andrew Hodges, Alan Turing: The Enigma (1983; my edition is the 2014 Vintage movie tie-in).

Map Overlap: Warsaw Pact vs. NATO Grids

The Charles Close Society hasn't updated its topical list of articles on military mapping since I wrote about it in 2015, but there is a new article by John L. Cruickshank ("More on the UTM Grid system") in Sheetlines 102 (April 2015) that is now freely available on the society's website. The connection to Soviet mapping is that Cruickshank discusses how both NATO and the Warsaw Pact produced guides and maps to help their soldiers convert between the competing grid systems. Unlike latitude and longitude, a grid system assumes a flat surface. That's good for simplifying calculations of distance and area, but it means you have the problems of distortion that come with any map projection.

Both the Soviets and the Americans based their standard grids on transverse Mercator projections that divided the globe into narrow (6° wide) north-south strips, each with its own projection. These were narrow enough not to be too badly distorted at the edges but still wide enough that artillery would rarely have to shoot from a grid location in one strip at a target in another (which required extra calculations to compensate for the difference in projections). The American system was called the Universal Transverse Mercator (or UTM; the grid itself was the Military Grid Reference System, or MGRS). The Soviet one was known, in the West at least, as the Gauß-Krüger grid.
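Those 6° strips translate into very simple zone arithmetic. A sketch of the standard UTM zone-number and central-meridian calculation (ignoring the system's special cases around Norway and Svalbard):

```python
def utm_zone(lon_deg):
    """UTM zone number (1-60) for a longitude in degrees east."""
    return int((lon_deg + 180) // 6) + 1

def central_meridian(zone):
    """Central meridian of a UTM zone, in degrees east."""
    return zone * 6 - 183

# Greenwich (0 degrees) falls in zone 31, whose projection is centred
# on the meridian 3 degrees east
print(utm_zone(0.0), central_meridian(utm_zone(0.0)))
```

The Gauß-Krüger system used the same 6° zone width but numbered its zones from a different origin, which is part of why converting coordinates between the two systems needed dedicated manuals rather than a one-line formula.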

In his article, Cruickshank reports that by 1961 East German intelligence was printing 1:200,000 military topographic maps that had both UTM and Soviet Gauß-Krüger grids. By 1985 a full series existed that ran all the way west to the English Channel. Rather than print a full map series with both grids, the US Army produced intelligence guides to the conversion between them. Field Manual 34-85, Conversion of Warsaw Pact Grids to UTM Grids, was issued in September 1981. A supplement, G-K Conversion (Middle East), was released in February 1983. As Cruickshank observes, both manuals have fascinating illustrated covers. Conversion of Warsaw Pact Grids features a rolled-up map labelled "Intelligence" standing on a grid and looking at a globe centred on Europe. G-K Conversion, on the other hand, shows an eagle literally stealing the map out of the hand of a bear that is using calipers to measure distances from Turkey to Iran across the Caspian Sea.

The article ends with the observation that the history of modern geodesy, which underpins calculations like the UTM and Gauß-Krüger grids, remains “overdue for description.” Since it was published a new book has appeared that goes a long way towards covering some of those developments (at least for non-specialists, if not experts like Cruickshank). In fact, map grids are one of the main topics of After the Map: Cartography, Navigation and the Transformation of Territory in the Twentieth Century by William Rankin (University of Chicago Press, 2016). The book is chock-full of fascinating discussions of new mapping and navigation systems that developed between the end of the nineteenth century and the appearance of GPS. Its focus is on three overlapping case studies: large-scale global maps like the International Map of the World and World Aeronautical Charts (which have their own connection to Soviet mapping), grid systems like UTM, and radionavigation networks like Gee and Loran. (The third of these was already the topic of an article by Rankin that I wrote about here.)

In the chapters on map grids, After the Map shows just how long even an ostensibly universal design like UTM remained fragmented and regional. The use of grids had begun on the Western Front during the First World War. It spread to domestic surveying in the interwar period and was adopted by all the major powers during the Second World War. But universal adoption of the principles involved did not mean adoption of a common system. Even close allies like the United States and Britain ended up simply dividing the world between them and jointly adopting one or the other nation's approach in each region: British grids were applied to particular war zones, and a more general American system was used for the rest of the world. Neither used a transverse Mercator projection.

Even once America and its NATO allies settled on UTM as a postwar standard – a decision made despite opposition from the US Navy and Air Force, who fought vigorously for a graticule rather than a grid – UTM maps did not use a single consistent projection but adopted whichever reference ellipsoid was already in use for a region. While those differences were eventually resolved, even the 1990 edition of Defense Mapping Agency Technical Manual 8358.1, Datums, Ellipsoids, Grids, and Grid Reference Systems, still included specifications for twenty British grids, including the British and Irish domestic surveys (plus nineteen further secondary grids), as well as the Russian Gauß-Krüger. East German tank commanders should have been grateful that they could get away with only two from the Intra-German Border to the Channel!

The First World War in British Memory

For Remembrance Day this year, the Canadian non-profit Vimy Foundation commissioned an Ipsos poll on the First World War whose most interesting question, at least for me, was about the number of soldier deaths suffered by the Canadian, American, Belgian, British, French, and German armies – with answers from online panels in all six countries. You can see the results, including average error, on page 17 of the poll.

Unsurprisingly, everyone was wildly inaccurate across the board, with the average error ranging from 299,286 (for the French respondents) to 487,980 (for the Americans) – relative to actual death tolls ranging from 40,936 (for Belgium) to 1,397,800 (for Germany). I doubt I would have done any better without any cues for scale or relative order.

The official summary highlights a few interesting observations:

Respondents in the US, UK and France came closest to correctly guessing their own country’s World War One soldier deaths. US respondents also came closest to correctly guessing the numbers for Canada and Belgium, while those in France were nearest the mark for Germany. Each country over-estimated Canadian deaths – including Canadians by nearly 3-times. French respondents’ guesses were, on average, the most accurate.

Converting the results from absolute numbers to percentage error, though, makes some other facts jump out.


Deaths \ Respondents    Canada   United States   Belgium   Great Britain   France   Germany   Average
Canada                   237%        107%          211%         266%        220%      241%      213%
United States            229%        201%          229%         237%        312%      289%      250%
Belgium                  456%        278%          520%         659%        440%      447%      466%
Great Britain             59%         42%           58%         119%         54%       65%       66%
France                    42%         30%           40%          47%         65%       60%       47%
Germany                   41%         31%           52%          57%         73%       60%       53%
Average                  130%         80.5%        135%         162%        130%      132%
More than anything else, what respondents missed was that European powers were an order of magnitude more involved in the war than Canada and the US (and that Belgium was, relatively speaking, a tiny country). Respondents in every country – with one exception – overestimated the North American and Belgian losses and underestimated the French, German, and British losses.

Beyond that, the accuracy of the French guesses in absolute terms turns out to have rested on the fact that French and German deaths constituted three-quarters of the total deaths being estimated – the French guesses were the least accurate of any nation for US deaths and subpar for Canadian deaths too. In fact, if you look at the average error in percentage terms, it's the Americans who come out best. I sure didn't see that coming.
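The flip between the two rankings is easy to reproduce: an average of absolute errors is dominated by the countries with the largest death tolls, while an average of percentage errors weights every country equally. A toy illustration with invented guesses, not the poll's actual data:

```python
# Hypothetical actual death tolls and two respondents' guesses
# (all numbers invented for illustration)
actual  = {"Belgium": 40_000,  "Germany": 1_400_000}
guess_a = {"Belgium": 200_000, "Germany": 1_300_000}   # close on Germany
guess_b = {"Belgium": 45_000,  "Germany": 900_000}     # close on Belgium

def mean_abs_error(guess):
    """Average of absolute errors: dominated by the biggest numbers."""
    return sum(abs(guess[c] - actual[c]) for c in actual) / len(actual)

def mean_pct_error(guess):
    """Average of percentage errors: every country weighted equally."""
    return sum(abs(guess[c] - actual[c]) / actual[c] for c in actual) / len(actual)

# Respondent A wins on absolute error, respondent B on percentage error
print(mean_abs_error(guess_a), mean_abs_error(guess_b))
print(mean_pct_error(guess_a), mean_pct_error(guess_b))
```

This is exactly the pattern in the poll: being close on the two biggest belligerents made the French guesses best in absolute terms even while their percentage errors on the small countries were large.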

Last, but hardly least, only one country bucks the trend when it comes to under- or over-estimating a country's deaths. The average British guess for Britain's own losses is one of three cases where a nation's guess of its own losses is the closest of the six, but it's the only one that breaks the common pattern. Where every other nation under-guesses British soldier deaths, the British over-estimate their own by twenty per cent. I wouldn't read too much into any of these numbers, but there's got to be something interesting going on where the French and Germans underestimate their national losses by 35–40% and the British overestimate theirs by 20%.

Revisiting the Third World War

The latest issue of the British Journal for Military History has an interesting article by Jeffrey H. Michaels on Sir John Hackett's The Third World War (1979), a fictionalized narrative of a potential NATO-Soviet conflict in the 1980s. Though it sparked a lot of attention at the time and sold more than 3 million copies, the book has not been treated kindly by posterity. The Third World War was a didactic narrative written as a thinly-veiled plea for more NATO conventional armaments, and a lot of its narrative choices haven't aged well. Much of the political prognostication was laughably wrong – already discredited by the time its semi-sequel The Third World War: The Untold Story came out in 1982. As fiction, it was quickly overshadowed by Tom Clancy's Red Storm Rising (1986), whose wargame underpinnings and multi-media afterlife are stories in themselves.

The Third World War is mostly of interest, then, as an artifact of Cold War policy debates played out in popular culture and as the first of what became quite a lot of late Cold War future war fiction (not just Red Storm Rising but also the Team Yankee series, Ralph Peters’s novel Red Army, Shelford Bidwell’s World War 3 [which Michaels says had its prospects mostly ruined by coming out shortly after Hackett’s book], and Kenneth Macksey’s First Clash: Canadians in World War Three, not to mention various games, TV shows, and movies).

Michaels's article doesn't change my mind about the qualities of the book itself, but by digging into Hackett's papers at King's College London he does reveal some interesting facts about its origins. For one thing, I had not realized how much Hackett played around with the entire scenario of the book as he developed it. His first outline called not for the brief, eighteen-day conflict of the final book but for a multi-year war of attrition; that scenario was scuppered by early readers who judged it too dispiriting. The inclusion of limited nuclear strikes on Birmingham and Minsk, which bring the war to an end (and which seem to me one of the more contrived aspects of Hackett's narrative), was a late addition and a reversal of Hackett's earlier opinion that nuclear strikes, if any, were likely to happen at sea or in space, not against the cities of a nuclear power. Also interesting: the description of the nuclear attack on Birmingham may have been borrowed from a classified study of just that situation made by Solly Zuckerman in 1961.