Map Overlap: Warsaw Pact vs. NATO Grids

The Charles Close Society hasn’t updated its topical list of articles on military mapping since I wrote about it in 2015, but there is a new article by John L. Cruickshank (“More on the UTM Grid system”) in Sheetlines 102 (April 2015) that is now freely available on the society’s website. The connection to Soviet mapping is that Cruickshank discusses how both NATO and the Warsaw Pact produced guides and maps to help their soldiers convert between their competing grid systems. Unlike latitude and longitude, a grid system assumes a flat surface. That’s good for simplifying calculations of distance and area, but it means you have the distortion problems that come with any map projection.

Both the Soviets and Americans based their standard grids on transverse Mercator projections that divided the globe into narrow (6° wide) north–south strips, each with its own projection. These were narrow enough not to be too badly distorted at the edges, but still wide enough that artillery would rarely have to shoot from a grid location in one strip at a target in another (which required extra calculations to compensate for the difference in projections). The American system was called the Universal Transverse Mercator (or UTM; the grid itself was the Military Grid Reference System, or MGRS). The Soviet one was known, in the West at least, as the Gauß-Krüger grid.
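For the curious, here is a minimal sketch of the zone arithmetic involved – my own illustration using the UTM numbering convention, not anything drawn from the military manuals discussed below. The Gauß-Krüger zones use the same 6° width but are numbered eastward from Greenwich, and converting coordinates between the two systems also had to deal with their different reference ellipsoids, which is why it was never a trivial exercise.

```python
# A minimal sketch of 6°-wide grid zones, using the UTM numbering convention
# (zone 1 covers 180°W to 174°W). Purely illustrative; the Gauß-Krüger system
# uses the same zone width but numbers its zones eastward from Greenwich, and a
# real grid-to-grid conversion also has to account for different ellipsoids.

def utm_zone(longitude_deg: float) -> int:
    """UTM zone number (1-60) for a longitude given in degrees east."""
    return int((longitude_deg + 180.0) // 6) + 1

def central_meridian(zone: int) -> float:
    """Central meridian of a UTM zone, where the projection's distortion is lowest."""
    return zone * 6.0 - 183.0

# Example: 21°E (roughly Warsaw's longitude) falls in UTM zone 34,
# whose central meridian is 21°E.
zone = utm_zone(21.0)
print(zone, central_meridian(zone))  # -> 34 21.0
```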

In his article, Cruickshank reports that by 1961 East German intelligence was printing 1:200,000 military topographic maps that carried both UTM and Soviet Gauß-Krüger grids. By 1985 a full series existed that ran all the way west to the English Channel. Rather than print a full map series with both grids, the US Army produced intelligence guides to the conversion between them. Field Manual 34-85, Conversion of Warsaw Pact Grids to UTM Grids, was issued in September 1981. A supplement, G-K Conversion (Middle East), was released in February 1983. As Cruickshank observes, both manuals have fascinating illustrated covers. Conversion of Warsaw Pact Grids features a rolled-up map labelled “Intelligence” standing on a grid and looking at a globe focused on Europe. G-K Conversion, on the other hand, shows an Eagle literally stealing the map out of the hand of a Bear that is using calipers to measure distances from Turkey to Iran across the Caspian Sea.

The article ends with the observation that the history of modern geodesy, which underpins calculations like the UTM and Gauß-Krüger grids, remains “overdue for description.” Since it was published a new book has appeared that goes a long way towards covering some of those developments (at least for non-specialists, if not experts like Cruickshank). In fact, map grids are one of the main topics of After the Map: Cartography, Navigation and the Transformation of Territory in the Twentieth Century by William Rankin (University of Chicago Press, 2016). The book is chock-full of fascinating discussions of new mapping and navigation systems that developed between the end of the nineteenth century and the appearance of GPS. Its focus is on three overlapping case studies: large-scale global maps like the International Map of the World and World Aeronautical Charts (which have their own connection to Soviet mapping), grid systems like UTM, and radionavigation networks like Gee and Loran. (The third of these was already the topic of an article by Rankin that I wrote about here.)

In the chapters on map grids, After the Map shows just how long even an ostensibly universal design like UTM remained fragmented and regional. The use of grids had begun on the Western Front during the First World War, spread to domestic surveying in the interwar period, and been adopted by all the major powers during the Second World War. But universal adoption of the principles involved did not mean adoption of a common system. Even close allies like the United States and Britain ended up just dividing the world and jointly adopting one or the other nation’s approach in each region: British grids were applied to particular war zones and a more general American system was used for the rest of the world. Neither used a transverse Mercator projection.

Even once America and its NATO allies settled on UTM as a postwar standard – a decision made despite opposition from the US Navy and Air Force, who fought vigorously for a graticule rather than a grid – UTM maps did not use a single consistent projection but adopted whichever reference ellipsoid was already in use for a region. While those differences were eventually resolved, even the 1990 edition of Defense Mapping Agency Technical Manual 8358.1, Datums, Ellipsoids, Grids, and Grid Reference Systems, still included specifications for twenty British grids, including the British and Irish domestic surveys (plus nineteen further secondary grids), as well as the Russian Gauß-Krüger. East German tank commanders should have been grateful that they could get away with only two from the Intra-German Border to the Channel!

The First World War in British Memory

For Remembrance Day this year, the Canadian non-profit Vimy Foundation commissioned an Ipsos poll on the First World War whose most interesting question, at least for me, was about the number of soldier deaths suffered by the Canadian, American, Belgian, British, French, and German armies – with answers from online panels in all six countries. You can see the results, including average error, on page 17 of the poll.

Unsurprisingly, everyone was wildly inaccurate across the board, with the average error ranging from 299,286 for French respondents to 487,980 for Americans – relative to actual death tolls of between 40,936 (for Belgium) and 1,397,800 (for Germany). I doubt I would have done any better without any cues for scale or relative order.

The official summary highlights a few interesting observations:

Respondents in the US, UK and France came closest to correctly guessing their own country’s World War One soldier deaths. US respondents also came closest to correctly guessing the numbers for Canada and Belgium, while those in France were nearest the mark for Germany. Each country over-estimated Canadian deaths – including Canadians by nearly 3-times. French respondents’ guesses were, on average, the most accurate.

Converting the results from absolute numbers to percentage error, though, makes some other facts jump out.

 

Deaths            Respondents
                  Canada   United States   Belgium   Great Britain   France   Germany   Average
Canada             237%        107%          211%         266%        220%      241%      213%
United States      229%        201%          229%         237%        312%      289%      250%
Belgium            456%        278%          520%         659%        440%      447%      466%
Great Britain       59%         42%           58%         119%         54%       65%       66%
France              42%         30%           40%          47%         65%       60%       47%
Germany             41%         31%           52%          57%         73%       60%       53%
Average            130%        80.5%         135%         162%        130%      132%
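If you want the arithmetic spelled out, here is a minimal sketch of how I read each cell in the table – the average guess expressed as a percentage of the actual figure, so 100% would be a perfect guess, higher is an over-estimate, and lower an under-estimate. Only the actual totals come from the poll; the example guess is hypothetical.

```python
# Each cell in the table is an average guess expressed as a percentage of the
# actual figure: 100% is a perfect guess, above 100% an over-estimate, below an
# under-estimate. Only the actual totals below come from the poll summary; the
# example guess is a made-up illustration.

ACTUAL_SOLDIER_DEATHS = {
    "Belgium": 40_936,
    "Germany": 1_397_800,
}

def guess_as_percentage(average_guess: float, actual: float) -> float:
    """Average guess as a percentage of the actual death toll."""
    return 100.0 * average_guess / actual

# A hypothetical average guess of 190,000 Belgian deaths would score ~464%,
# the same order of over-estimation as the Belgium row in the table.
print(round(guess_as_percentage(190_000, ACTUAL_SOLDIER_DEATHS["Belgium"])))  # -> 464
```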
More than anything else, what respondents missed was that European powers were an order of magnitude more involved in the war than Canada and the US (and that Belgium was, relatively speaking, a tiny country). Respondents in every country – with one exception – overestimated the North American and Belgian losses and underestimated the French, German, and British losses.

Beyond that, the accuracy of the French guesses in absolute terms turns out to have rested on the fact that French and German deaths constituted three-quarters of the total deaths being estimated – the French guesses were the least accurate of any nation for US deaths and subpar for Canadian deaths too. In fact, if you look at the average error in percentage terms, it’s the Americans who come out best. I sure didn’t see that coming.

Last, but hardly least, only one country bucks the trend when it comes to under- or over-estimating a country’s deaths. The British guess of Britain’s own losses is one of three cases where a nation’s respondents came closest of the six to their own country’s figure, but it’s the only one that breaks the common pattern. Where every other nation under-guesses British soldier deaths, the British over-estimate their own by twenty per cent. I wouldn’t read too much into any of these numbers, but there’s got to be something interesting going on where the French and Germans underestimate their national losses by 35–40% and the British overestimate theirs by 20%.

Revisiting the Third World War

The latest issue of the British Journal for Military History has an interesting article by Jeffrey H. Michaels on Sir John Hackett’s The Third World War (1979), a fictionalized narrative of a potential NATO-Soviet conflict in the 1980s. Though it sparked a lot of attention at the time and sold more than 3 million copies, I don’t think posterity has been very kind to the book. The Third World War was a didactic narrative written as a thinly-veiled plea for more NATO conventional armaments, and a lot of the narrative choices haven’t aged well. Much of the political prognostication was laughably wrong – already discredited by the time its semi-sequel The Third World War: The Untold Story came out in 1982. As fiction, it was quickly overshadowed by Tom Clancy’s Red Storm Rising (1986), whose wargame underpinnings and multi-media afterlife are stories in themselves.

The Third World War is mostly of interest, then, as an artifact of Cold War policy debates played out in popular culture and as the first of what became quite a lot of late Cold War future war fiction (not just Red Storm Rising but also the Team Yankee series, Ralph Peters’s novel Red Army, Shelford Bidwell’s World War 3 [which Michaels says had its prospects mostly ruined by coming out shortly after Hackett’s book], and Kenneth Macksey’s First Clash: Canadians in World War Three, not to mention various games, TV shows, and movies).

Michaels’s article doesn’t change my mind about the qualities of the book itself, but by digging into Hackett’s papers at King’s College London he does reveal some interesting facts about its origins. For one thing, I had not realized how much Hackett played around with the entire scenario of the book as he developed it. His first outline called not for the brief, eighteen-day conflict of the final book but for a multi-year war of attrition. That scenario was scuppered by early readers who judged it too dispiriting. The inclusion of limited nuclear strikes on Birmingham and Minsk, which bring the war to an end (and which seem to me among the more contrived aspects of Hackett’s narrative), was a late addition and a reversal of Hackett’s earlier opinion that nuclear strikes, if any, were likely to happen at sea or in space, not against the cities of a nuclear power. Also interesting: the description of the nuclear attack on Birmingham may have been borrowed from a classified study of just that situation made by Solly Zuckerman in 1961.

International Criminal Courtrooms

Reading descriptions of the unusual courtroom Dan Kiley designed for the international war crimes trials at Nuremberg prodded me to do some further research into how the Nuremberg courtroom compared to those created for other international trials.

The closest counterpart to Nuremberg was the Tokyo war crimes trials, held in the former Japanese Army headquarters at Ichigaya. As at Nuremberg, the primary axis of the room ran between the judges on their raised dais and the defendants in the dock. The space between the two was filled with prosecution and defense lawyers, with extensive press seating and a public gallery to the right of the judges – perpendicular to the main axis – and seating for dignitaries and a motion picture booth to the left.

The International Criminal Tribunal for the former Yugoslavia (ICTY), the first of the new generation of international tribunals created in the 1990s, adopted a more conventional layout for its first courtroom. In Alphabet City no. 7, architect Laura Kurgan discusses the arrangement of Courtroom One, the first of three that were retrofitted into the Aegon Insurance Building in The Hague. Unlike the courtrooms at Nuremberg and Tokyo, ICTY Courtroom One positions the prosecution, defense, witness, and public in a semicircular arc facing the judges. The prosecution and defense are closest, with desks angled towards the judges, while the public sits behind a pane of bulletproof glass. More importantly, four cameras controlled from a separate booth not only film the proceedings but broadcast them over the internet on a thirty-minute delay (to protect witness confidentiality). (You can see the feed here.) The televised proceedings represent a continuity of sorts with the post-war war crimes trials: the Nuremberg trials pioneered technical and procedural tools for simultaneous interpretation.

The courtroom of the ICTY; the visitor’s gallery is to the left, out of frame.

Since the ICTY opened its doors, that court has been followed by others: the International Criminal Tribunal for Rwanda (ICTR), the Mechanism for International Criminal Tribunals (MICT; which replaces both the ICTY and ICTR), and the International Criminal Court (ICC), plus hybrid UN-national courts. While the ICTR, like the ICTY, had to make do with a pre-existing building in Arusha, Tanzania, the MICT will have a brand-new compound in Arusha and the ICC has already moved into its own purpose-built facility in The Hague.

The Chief Prosecutor and ICTR judges after a swearing in ceremony for the judges.

None of these institutions seem particularly inclined to publicize the physical arrangements of their courtrooms, so writing about them remains a work in progress for me. The ICC’s photos and B-roll of its new courtroom do show a room that’s a little more austere than the ICTR or ICTY premises, with less warm wood and more cool white. A philosophical distinction, or just a change of decorators?

Tides of War, Part One

The best-known story about environmental science and D-Day has to be that of the last-minute forecast that let the invasion go ahead. That prediction, though, was only one of many contributions by Allied environmental scientists to the success of the invasion. Another was the secretive work on a mundane but vital preparation for the assault: calculating the tides for D-Day.

The theoretical basis for tide prediction was the work of Newton, Daniel Bernoulli, and Pierre-Simon Laplace, the third of whom was the first to outline the equations that describe the rise and fall of the tides. Laplace’s equations were too complex to use in practice, but in the mid-nineteenth century the British scientist William Thomson (later ennobled as Lord Kelvin) demonstrated that, given enough tidal measurements, one could use harmonic analysis to divide the tide-generating forces for a particular shoreline into a series of waves of known frequencies and amplitudes (the tidal constituents). The same process, carried out in reverse, would let one predict the tides along that shore. Unfortunately, making those calculations by hand was time-consuming to the point of impracticality. However, Thomson also demonstrated that it was possible to construct an analog machine that would do the necessary work automatically.

Thomson’s machine drew a curve representing the height of the tide with a pen attached to the end of a long wire. The wire ran over a series of pulleys, which were raised and lowered by gears whose motion reflected the frequency and amplitude of the tidal constituents. As each pulley rose or fell, it changed the length of the wire’s path and thus the position of the pen. Altogether, the pulleys reproduced the combined effect of the tidal constituents being simulated.
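A minimal digital sketch of the same calculation may help: summing a handful of cosine waves, one per constituent, gives the predicted tide height that the machine traced with its pen. The constituent speeds below are the standard astronomical values, but the amplitudes, phases, and mean level are invented placeholders rather than figures for any real port.

```python
# Minimal sketch of harmonic tide prediction, the calculation Thomson's machines
# performed mechanically. Constituent speeds are the standard astronomical
# values; amplitudes, phases, and mean level are made-up placeholders, not data
# for any real port. A real prediction uses dozens of constituents fitted to
# long series of local tide-gauge measurements.
import math

# (amplitude in metres, angular speed in degrees per hour, phase lag in degrees)
CONSTITUENTS = {
    "M2": (2.0, 28.984, 45.0),   # principal lunar semi-diurnal
    "S2": (0.7, 30.000, 30.0),   # principal solar semi-diurnal
    "K1": (0.3, 15.041, 120.0),  # luni-solar diurnal
}
MEAN_SEA_LEVEL = 4.0  # metres above chart datum (placeholder)

def tide_height(hours_since_epoch: float) -> float:
    """Height of the tide as a sum of cosine constituents, like the pen on the wire."""
    height = MEAN_SEA_LEVEL
    for amplitude, speed_deg_per_hr, phase_deg in CONSTITUENTS.values():
        height += amplitude * math.cos(math.radians(speed_deg_per_hr * hours_since_epoch - phase_deg))
    return height

# Tabulate hourly heights across a day: the digital equivalent of a machine run.
for hour in range(0, 25, 3):
    print(f"{hour:02d}:00  {tide_height(hour):5.2f} m")
```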


Thomson’s design sketch for the third tide-predicting machine, 1879. Image courtesy Wikimedia.

The first machine, built in 1872, had gears for only ten constituents, but later machines could represent many more. Machines of his design, many of them built in Great Britain, were also used in other countries to create the necessary tide tables for their ports. In the United States, a different mechanical approach developed by William Ferrel was used to build similar machines. Altogether, though, tide-predicting machines were specialized, expensive, and rare. According to a modern inventory, only thirty-three were ever built – twenty-five of them in London, Glasgow, or Liverpool.

During the Second World War, the Admiralty Hydrographic Office relied on two tide-predicting machines operated by Arthur Thomas Doodson at the Liverpool Tidal Institute to do all their tidal calculations. One was Thomson’s original machine, refitted to handle twenty-six constituents. The other was a machine designed by Edward Roberts in 1906 and equipped for forty constituents.

Both Doodson and the Tidal Institute had their own unique histories of military collaboration. Doodson, despite being a conscientious objector, had worked on anti-aircraft ballistics for the Ministry of Munitions during the First World War. The Institute, established in 1919 with corporate and philanthropic support, had an important connection with the Admiralty’s own Hydrographic Department. Though the Hydrographic Department did not provide any direct funding until 1923, after that it made the Institute the Admiralty’s exclusive supplier of tide calculations. At the same time, the Hydrographic Department began appointing a representative to the Institute’s governing board.

Though they were the basis for only some of the Institute’s Admiralty work during the war, the tide-predicting machines in Liverpool were busy creating tide tables for Allied ports. According to historian Anna Carlsson-Hyslop’s research, the number of tidal predictions being performed doubled from 77 for 1938, the last pre-war year, to 154 for 1945. (Carlsson-Hyslop’s research is focused on areas of the Institute’s work other than the creation of tide tables, but much of it sheds light on its relationship with the Royal Navy and state patronage.)

In 1943 the Admiralty Hydrographic Office requested calculations to create tide tables for the invasion beaches to be used on D-Day in Normandy. Since the landing zone remained top secret, Commander William Ian Farquharson was responsible for establishing the constituents and providing them (anonymized under the codename “Point Z”) to Doodson in Liverpool. Unfortunately, there were no existing calculations for the area of the beaches. Nor, because tidal constituents were sensitive to local conditions, could he just extrapolate from the data for the ports to the east and west at Le Havre and Cherbourg. Instead, Farquharson combined fragmentary data from some local measurement points near the beaches, clandestine on-the-spot measurements made by Allied beach reconnaissance teams, and guesswork to come up with eleven tidal constituents. Oceanographer Bruce Parker suspects that he began with the Le Havre constituents and then adjusted them to approximate the data he had. The calculations, despite the roughness of the information on which they were based, proved sufficiently accurate for the invasion planners.

In the Pacific, tide tables for amphibious operations were generated by the US Coast and Geodetic Survey’s Tide Predicting Machine No. 2. In both theaters, as well as the Mediterranean, oceanographers supplemented the tide tables for beaches with wind, wave, and surf forecasts. The story of wave forecasting is, if anything, even more cloak and dagger than that of the D-Day tide forecasts, since one of the scientists involved was actively suspected (incorrectly) of being a Nazi sympathizer.


A US tide predicting machine, probably No.2. The caption from the Library of Congress attributes the machine’s construction to E. Lester Jones, Chief of the Coast and Geodetic Survey. Harris & Ewing, photographer, 1915. Retrieved from the Library of Congress, https://www.loc.gov/item/hec2008004303/

Beyond their civilian and military wartime work, tide-predicting machines had an oblique impact on Second World War cryptanalysis. Those developments would eventually put the machines out of work after the war, but not before the machines would have their final strategic significance.

Forward to Part Two, including Source Notes (soon)

 

The Last Steps

Active History has a pre-Remembrance Day blog post by Claire L. Halstead on The Last Steps, a recently unveiled First World War memorial on the Halifax waterfront.

The memorial takes the shape of an arch and stands on the city’s harbour front; a gangplank purposefully leads the observer’s eye up the pier, through the arch, and right out to sea. Footprints (cast from an authentic soldier’s boot) burnt into the wooden pier conjure up impressions of souls from long ago. In this, Nancy Keating, the Nova Scotia artist who designed the memorial, succeeds in imparting on the observer the haunting emotion the memorial is intended to convey. The memorial stands as a testament to the last steps soldiers took in Halifax before departing for the Great War.

One of Halstead’s comments about the memorial is that there’s a tension over its location. Despite its title and its positioning, The Last Steps is actually about a kilometer away from where the Canadian soldiers who departed through Halifax embarked. Instead of being at Pier 2, the memorial is at Pier 21, close to the Maritime Museum of the Atlantic and on a far more traveled part of the waterfront.

Two questions which Halstead poses are a) “is it acceptable to sacrifice an essence of historical accuracy to ensure public engagement?” and b) whether the spatial distortion in The Last Steps’s location could be “reconciled by expanding the scope of the memorial to include and emphasise Halifax’s contribution to the war in addition to the men who departed from it.”

I wonder whether the fact Halstead even poses the question is connected to the fact that The Last Steps is a mimetic memorial whose footprints, gangplank, and arch are built to look like an artifact of the era rather than a modern allusion. We know that the map is never the territory and that even “on this spot” markers make concessions to traffic and construction. The National War Memorial in Ottawa stands on a spot hallowed by nothing in particular from the war it commemorates, apart from proximity to Parliament. Not every question about a memorial’s location has to do with its style, as another post on Active History demonstrates, but in this case I feel like the memorial’s look has to be a factor.

Kiley at Nuremberg

“What I was trying to do was have a unified and orderly and dignified [courtroom] – that’s what the courtroom should be, and it should reflect the scales of justice too.”

— Dan Kiley on the courtroom at Nuremberg

Creating the physical spaces for the war crimes trials at Nuremberg was one of the last tasks performed by the OSS’s Presentation Branch before the service was dissolved and the branch transferred to the State Department. The designer was Dan Kiley, an architect who had been recruited by his friend Eero Saarinen from the Army Corps of Engineers and had replaced Saarinen as chief of design for the branch.

Kiley was something of an odd choice to do the work. Though a trained architect, he had never designed a courtroom and never would again. After the war, he became famous as a landscape architect, often doing work for his friend Saarinen. As Kiley himself told it, the job was something of a fluke. The Presentation Branch was already responsible for similar work at the United Nations conference in San Francisco. In compensation for not getting to go to San Francisco, branch chief Hugh Barton offered Kiley the chance to go to Nuremberg instead.

On the other hand, despite the apparent mismatch – why was the Office of Strategic Services designing a courthouse? – the project was a return to the Presentation Branch’s roots. Since its inception, the branch’s mission was to make the presentation of complex information clear, logical, and even captivating. How else would you describe the responsibilities of the international military tribunal at Nuremberg?

In fact, the branch had been created by the OSS’s founder, Bill Donovan, to build a grand automated briefing room for President Roosevelt. Though that project had foundered, it had begat an organization with a broad range of design skills. Kiley himself showed the breadth of his talents on the Nuremberg project. He planned the renovations of the entire court building: not just the courtroom but also offices, restaurants, medical clinics, and a shop (the Army PX). His attention to detail included designing furniture for the building to be made from old plywood and putting a gray velvet panel on the chief prosecutor’s lectern so that his papers wouldn’t fall off.

His arrangement for the courtroom, Joseph Disponzio has explained, reflected a willingness to break traditional norms to achieve the necessary impact. Instead of positioning the audience of observers and journalists behind the adjudicating parties, with the judges facing both, Kiley placed the audience perpendicular to the judge–parties axis, giving them a far better view of the proceedings. A film screen facing the audience allowed for the projection of some of the OSS’s other work on the trials: the documentary films.

The Nuremberg courtroom as seen from the press gallery. Note the alignment of the dock and judges’ dais, with lawyers in the foreground and the film screen to the back.

Kiley’s work stood in rare company alongside the Ichigaya courtroom, where the International Military Tribunal for the Far East convened, until the 1990s saw the creation of new international criminal courts, bringing their own requirements and sensibilities to the presentation of international justice.

Source Note: In the mid-1990s, Kiley gave an oral history interview to the Thomas J. Dodd Research Center at the University of Connecticut. A shortened version was published in the book Witnesses to Nuremberg, edited by Bruce M. Stave and Michele Palmer with Leslie Frank. The description of Kiley’s work at Nuremberg is mostly drawn from that printed text.