A Tale of Two Keystone States

An auxiliary crane ship, the SS Cornhusker State, in 2009. US Navy by Petty Officer 1st Class Brian Goy. DIVIDS Photo ID 185724.

In the later years of the Cold War, the US Navy recognized the need to revitalize its seagoing transport capacity. During the Second World War, the military had built a massive fleet to support transatlantic and transpacific campaigns. Mothballed after the war, much of it had rotted away by the time reconstruction began under presidents Nixon and Carter and accelerated under President Reagan. One necessity for the new fleet was equipment to move cargo – especially containers – from ship to shore. After experiments with lifting by helicopter or balloon, the Navy settled on fitting a series of cargo ships with heavy cranes to unload cargo in ports that lacked the necessary infrastructure. The first ship to be converted was the SS President Harrison, previously operated by American President Lines, which was renamed the SS Keystone State (T-ACS-1) upon completion of its refit in 1984.

The Barge Derrick Keystone State (BD-6801) being towed by two Army Small Tugs during an exercise at Joint Base Langley-Eustis, Va., Aug 6, 2013. (U.S. Army photo by Spc. Cal Turner/Released) DIVIDS ID 990511.

Confusingly, the T-ACS-1 is not the only US military crane watercraft named the Keystone State. In 1998, the US Army launched an engine-less crane barge, the BD-6801, with the same name, chosen to honor the 28 soldiers from Pennsylvania’s 14th Quartermaster Detachment killed in a SCUD attack during the first Gulf War (in this instance, BD stands for Barge Derrick). Operated by the Transportation Corps, the BD-6801 was built to help unload military cargo in any of the many ports around the world unequipped to handle it. It carries a single crane with a reach of 175 feet and a lift capacity of 115 long tons which, unlike the cranes on previous army barges, is enough to lift a 60 ton M1 tank off of a cargo ship.

Between 1985 and 2005, at least one Army floating crane like the Keystone State was always aboard the MV American Cormorant, a float-on/float-off (FLO/FLO) heavy lift ship at Diego Garcia that carried a package of Army watercraft for operating a damaged or unequipped port. The American Cormorant and its cargo deployed to many major crises as part of the army response, including the first Gulf War and Operation RESTORE HOPE in Somalia. Until the launch of the Keystone State, the crane barge carried aboard the American Cormorant was one from the BD-89T class, with a 100 foot reach and an 89 long ton (100 short ton) capacity.

The American Cormorant en route to the Gulf. Note the two BD-89T cranes on-board, only one of which was used in operations. From Operations Desert Shield and Desert Storm: The Logistics Perspective (Association of the United States Army Institute of Land Warfare, 1991), p.12. Courtesy of the AUSA website.

It was a BD-89T barge, the Algiers (BD-6072), that was deployed for use by the Army 10th Transportation Battalion (Terminal) during the Gulf War. In addition to performing more than 1,500 lifts in Saudi ports, the Algiers was used to help clear damaged Kuwaiti ports of obstructions – harbor clearance being a mission shared between the US Army and Navy. Though the US Navy had built up an extensive salvage force after the Second World War, changes to salvage doctrine meant it sent only one salvage ship and no heavy-lift gear to the Gulf. Commercial salvors being paid by the Dutch government took up much of the slack, but there were limits to what the contractors could do. With rental fees for barges and cranes running as much as $150,000 a day for a 600 ton Ringer crane barge, the Americans ended up mostly going without the heaviest equipment. The biggest harbor clearing lift involving the Algiers was a sunken Iraqi Osa II missile boat in the Kuwaiti port of Ash Shuaybah. Though small by seagoing standards, the Osa II was 127 feet long and displaced almost 200 tons at standard load. Even in combination with a quayside 140 ton crane, the crane barge couldn’t lift the ship whole. Only after army divers cut off the still-live missile launchers could the boat be raised. Looking back at the operation in the navy after-action report, perhaps with a little bit of envy, one of the navy salvage engineers called the army crane “very workable.” Other sunken craft the divers lifted at Ash Shuaybah, with or without the help of the crane, included a 90 foot sludge barge and two other boats.

The deployment of the Algiers during the first Gulf War is only the tip of the iceberg when it comes to the military roles played by American floating cranes, which since the conversion of the battleship Kearsarge into Crane Ship No. 1 have worked to construct warships, salvage sunken submarines, and clear wrecks from the Suez Canal.

Source Notes: Much of the information for this post came from various sources around the internet, and in particular the website for the US Army Transportation Corps’ history office. The Corps’ 1994 official history, Spearhead of Logistics, was also useful. Details on the salvage operations during the Gulf War came mostly from the two-volume US Navy Salvage Report: Operations Desert Shield/Desert Storm, printed in 1992 and available online at the Government Attic (volumes one and two); the report’s chronology was the only place I was able to find which US Army crane barge was actually operated during the war.

Trawling for Spies

A new article in Intelligence and National Security by Stephen G. Craft reveals how the Office of Naval Intelligence (ONI) ran a counterintelligence program using fishermen along the southern Atlantic seaboard during the Second World War. Expecting that the Axis would land agents on US shores (something that happened, but only rarely) and use German and Italian-American fishermen to support U-boat operations (which seems to have happened not at all), ONI created Selected Masters & Informants (SMI) sections under naval district intelligence officers to recruit fishermen as confidential informants. Their operations caught no spies but offered some comfort that subversion was never rampant along the coast.

For fifty years from 1916 to 1966, the district intelligence officers were ONI’s contribution to local counterintelligence, security, and information-gathering in US coastal areas. The official responsibilities of the district intelligence officer were numerous. According to Wyman Packard’s A Century of U.S. Naval Intelligence they included:

maintenance of press relations for district headquarters; liaison with the investigating units of federal, state, and city agencies within the naval district; liaison with public and private research agencies and with business interests having information in intelligence fields; liaison with ONI and the intelligence services of the other naval districts, and with forces afloat within the district; counterespionage, security, and investigations; collection, evaluation, and recording of information regarding persons or organizations of value (or opposed) to the Navy; preparation and maintenance of intelligence plans for war; and administrative supervision over the recruiting, training, and activities of the appropriate personnel of the Naval Reserve within the district.

The position was only eliminated in 1966, when its investigative and counterintelligence duties passed to the Naval Investigative Service, its intelligence-collection duties to local Naval Field Operational Support Groups, and its other sundry tasks to the district staff intelligence officer.

Admiral Ingersoll, Commander-in-Chief, Atlantic Fleet, and Rear Admiral James at Charleston, South Carolina during an inspection of the Sixth Naval District, 23 November 1943. Courtesy of Mrs. Arthur C. Nagle. Collection of the Naval History and Heritage Command, NH 90955.

In October 1942, the creation of SMI sections added recruiting fishermen as counterintelligence agents to the long list of tasks mentioned above. By January 1943, the SMI sections had recruited 586 agents, 200 of them in the Sixth Naval District (headquartered in Charleston, South Carolina). Ship owners were paid $50 to cover the installation of a radiotelephone, which was provided to about 50 craft. Otherwise masters were to report by carrier pigeon (the Navy operated lofts in Mayport, Florida and St Simon’s Island, Georgia) or by collect call once ashore. Some masters also received nautical charts overprinted with a confidential Navy grid for reporting purposes.

Shrimp fleet in harbor, St. Augustine, St. Johns County, Florida, 1936 or 1937. Photograph by Frances Benjamin Johnston. Retrieved from the Library of Congress, https://www.loc.gov/item/csas200800452/. (Accessed April 20, 2017.)

The absence of winter shrimp fishing, a tendency to cluster in good fishing spots, and the total absence of enemy covert activity all combined to limit the program’s impact. Further south in the Seventh District (headquartered in Jacksonville and Miami, Florida), most fishing was done so close to shore that the district did not bother to implement the program. In a few cases fishing boats were attacked by U-boats, and two confidential observers were reportedly killed in submarine attacks. Though the program operated until V-J Day, few reports of interest were ever received. Similar operations took place elsewhere in the US as well; Packard’s Century of U.S. Naval Intelligence contains scattered references to fishing vessels serving as observers in other naval districts.

Shrimp boats were the basis for both overt and covert surveillance. Navy patrol craft like the YP-487 were known as “Shrimpers” because of their origins as commercial fishing boats. Collection of the Navy History and Heritage Command, NH 106994.

How successful you consider the program will depend on how plausible you consider the Navy’s fear of subversion and agent landings. However, the idea of using commercial seafarers as observers and informants clearly proved itself enough to resurface from time to time after the war. In 1947, the Chief of Naval Operations issued a letter authorizing the placing of informants on US merchant ships to detect any crew members involved in subversive activities (this was known as the Special Observer–Merchant Marine Plan). In 1955, merchant ships and fishing vessels were included in plans to collect “merchant intelligence” (MERINT) on sightings of ships, submarines, and aircraft (both efforts referenced in Packard). Did the use of fishermen for counterintelligence continue into the 1960s or beyond? If so, there might have been agents on the boats involved in joint US–Soviet fishing enterprises of the 1980s, carefully watching the Soviets carefully watch the Americans.

Tides of War, Part One

The best-known story about environmental science and D-Day has to be that of the last-minute forecast that let the invasion go ahead. That prediction, though, was only one of many contributions by Allied environmental scientists to the success of the invasion. Another was the secretive completion of a mundane but vital preparation for the assault: calculating the tides for D-Day.

The theoretical basis for tide prediction was the work of Newton, Daniel Bernoulli, and Pierre Simon Laplace, the third of whom was the first to outline the equations that describe the rise and fall of the tides. Laplace’s equations were too complex to use in practice, but in the mid-nineteenth century the British scientist William Thomson (later ennobled as Lord Kelvin) demonstrated that, given enough tidal measurements, one could use harmonic analysis to divide the tide-generating forces for a particular shoreline into a series of waves of known frequencies and amplitudes (the tidal constituents). That same process, carried out in reverse, would let one predict the tides along that shore. Unfortunately, making those calculations was time-consuming to the point of impracticality. However, Thomson also demonstrated that it was possible to construct an analog machine that would do the necessary work automatically.

Thomson’s machine drew a curve representing the height of the tide with a pen attached to the end of a long wire. The wire ran over the top of a series of pulleys, which were raised and lowered by gears that reflected the frequency and amplitude of the tidal constituents. As each pulley rose or fell, it changed the length of the wire’s path and thus the position of the pen. Altogether, the pulleys reflected the combined effect of the tidal constituents being simulated.
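In software terms, the harmonic method Thomson mechanized reduces to summing a handful of cosine terms. Here is a minimal sketch; the constituent names are standard, but the amplitude, speed, and phase values are invented for illustration and are not measurements for any real port.

```python
import math

# Harmonic tide prediction: height is modeled as a mean level plus a sum of
# cosine "constituents," each with its own amplitude, angular speed, and phase.
# The values below are illustrative inventions, not real port data.
CONSTITUENTS = [
    # (name, amplitude in metres, speed in degrees per hour, phase in degrees)
    ("M2", 2.05, 28.984, 110.0),  # principal lunar semidiurnal
    ("S2", 0.70, 30.000, 150.0),  # principal solar semidiurnal
    ("K1", 0.35, 15.041, 75.0),   # lunisolar diurnal
    ("O1", 0.30, 13.943, 60.0),   # lunar diurnal
]
MEAN_LEVEL = 4.0  # metres above chart datum (also invented)

def tide_height(hours: float) -> float:
    """Predicted tide height in metres at a given time in hours past the epoch."""
    height = MEAN_LEVEL
    for _name, amplitude, speed, phase in CONSTITUENTS:
        height += amplitude * math.cos(math.radians(speed * hours - phase))
    return height

# Trace a day of the curve the machine's pen would have drawn continuously.
for hour in range(0, 25, 3):
    print(f"{hour:2d}h  {tide_height(hour):5.2f} m")
```

Each pulley in Thomson’s machine played the role of one term in the loop above, with its gearing set to the constituent’s speed and phase and its travel set to the amplitude; the wire summed the terms mechanically.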

Thomson’s design sketch for the third tide-predicting machine, 1879. Image courtesy Wikimedia.

The first machine, built in 1872, had gears for only ten constituents, but later machines could represent many more. Machines of his design, many of them built in Great Britain, were also used in other countries to create the necessary tide tables for their ports. In the United States, a different mechanical approach developed by William Ferrel was used to build similar machines. Altogether, though, tide-predicting machines were specialized, expensive, and rare. According to a modern inventory, only thirty-three were ever built – twenty-five of them in London, Glasgow, or Liverpool.

During the Second World War, the Admiralty Hydrographic Office relied on two tide-predicting machines operated by Arthur Thomas Doodson at the Liverpool Tidal Institute to do all their tidal calculations. One was Thomson’s original machine, refitted to handle twenty-six constituents. The other was a machine designed by Edward Roberts in 1906 and equipped for forty constituents.

Both Doodson and the Tidal Institute had their own unique histories of military collaboration. Doodson, despite being a conscientious objector, had worked on anti-aircraft ballistics for the Ministry of Munitions during the First World War. The Institute, established in 1919 with corporate and philanthropic support, had an important connection with the Admiralty’s own Hydrographic Department. Though the Hydrographic Department did not provide any direct funding until 1923, after that it made the Institute the Admiralty’s exclusive supplier of tide calculations. At the same time, the Hydrographic Department began appointing a representative to the Institute’s governing board.

Though they were the basis for only some of the Institute’s Admiralty work during the war, the tide-predicting machines in Liverpool were busy creating tide tables for Allied ports. According to historian Anna Carlsson-Hyslop’s research, the number of tidal predictions being performed doubled from 77 for 1938, the last pre-war year, to 154 for 1945. (Carlsson-Hyslop’s research is focused on areas of the Institute’s work other than the creation of tide tables, but much of it sheds light on its relationship with the Royal Navy and state patronage.)

In 1943, the Admiralty Hydrographic Office requested calculations to create tide tables for the invasion beaches to be used on D-Day in Normandy. Since the landing zone remained top secret, Commander William Ian Farquharson was responsible for establishing the constituents and providing them (anonymized under the codename “Point Z”) to Doodson in Liverpool. Unfortunately, there were no existing calculations for the area of the beaches. Nor, because tidal constituents were sensitive to local conditions, could he just extrapolate from the data for the ports to the east and west at Le Havre and Cherbourg. Instead, Farquharson combined fragmentary data from some local measurement points near the beaches, clandestine on-the-spot measurements made by Allied beach reconnaissance teams, and guesswork to come up with eleven tidal constituents. Oceanographer Bruce Parker suspects that he began with the Le Havre constituents and then adjusted them to approximate the data he had. The calculations, despite the roughness of the information on which they were based, proved sufficiently accurate for the invasion planners.

In the Pacific, tide tables for amphibious operations were generated by the US Coast and Geodetic Survey’s Tide Predicting Machine No. 2. In both theaters, as well as the Mediterranean, oceanographers supplemented the tide tables for beaches with wind, wave, and surf forecasts. The story of wave forecasting is, if anything, even more cloak and dagger than that of the D-Day tide forecasts, since one of the scientists involved was actively suspected (incorrectly) of being a Nazi sympathizer.

A US tide predicting machine, probably No.2. The caption from the Library of Congress attributes the machine’s construction to E. Lester Jones, Chief of the Coast and Geodetic Survey. Harris & Ewing, photographer, 1915. Retrieved from the Library of Congress, https://www.loc.gov/item/hec2008004303/

Beyond their civilian and military wartime work, tide-predicting machines had an oblique impact on Second World War cryptanalysis. Those developments would eventually put the machines out of work after the war, but not before the machines would have their final strategic significance.

Forward to Part Two, including Source Notes

Another “Smallest Aircraft Carrier”

At 131 feet in length, the helicopter landing trainer Baylander (IX-514) has been billed as the “smallest aircraft carrier” in the US Navy, if not the world, by the Navy itself, its current owners the Trenk Family Foundation, and, well, me. That claim is based on the more than 10,000 helicopter landings on the Baylander between 1986 and its retirement in 2014. But what if you want the smallest ship to regularly launch its own aircraft?

The November 1963 issue of Navy magazine All Hands crowned the 206-foot USS Targeteer (YV-3) as the fleet’s “smallest aircraft carrier.” A Drone Aircraft Catapult Ship, the Targeteer was equipped to launch and recover target drones used for gunnery practice by the fleet. The third Landing Ship, Medium (LSM) to be converted into a drone launching ship, the Targeteer was based in San Diego from 1961 to 1968, replacing the USS Launcher (YV-2, 1954–1960) and the USS Catapult (YV-1).

USS Targeteer insignia. NH 64878-KN (NHHC photo).

USS Catapult, the Targeteer‘s sister ship, circa 1955. NH 55065 (NHHC photo).

Even Targeteer‘s claim, though, is contested. The Executive Officer of the fleet tug USS Kalmia (ATA-184), which also launched and recovered drones at San Diego, wrote to All Hands to claim that its length of 143 feet entitled it to the title of “smallest aircraft carrier.” (All Hands deferred to the Navy’s official classifications. The Targeteer was a Drone Aircraft Catapult Ship, the Kalmia just an Auxiliary Ocean Tug.)

USS Kalmia underway on 16 January 1964. NH 102803 (NHHC photo).

All three claims are weak if you are looking for a ship that launches and retrieves multiple aircraft. If, on the other hand, you are looking for the smallest Navy-crewed vessel which could land or launch a single aircraft, Baylander, Targeteer, and Kalmia all lose to the helicopter pad-equipped “Tango boats” of the Mobile Riverine Force in Vietnam. Officially designated Armored Troop Carriers (ATCs), these were Landing Craft, Mechanized (LCM) that were modified to serve as floating armoured personnel carriers in the Mekong Delta. Some were further modified with a steel flight deck on top that ran pretty much the full length of the boat. The first helicopter landing on one of these Armored Troop Carriers (Helicopter), or ATC(H)s, took place on July 4, 1967. At 56 feet in length, which is more or less the length of a Huey helicopter, I doubt I’ll find anything smaller to claim the title.

A U.S. Army UH-1D helicopter lands on the helicopter pad of a modified U.S. Navy Armored Troop Carrier (ATCH R-92-2) operating as part of the Riverine Mobile Force, 8 July 1967. Photography by Photographer’s Mate Second Class Edward Shinton. USN 1132291 (NNHC photograph).

The Impact of Middlebrow Architecture

From The Sound of Freedom: Naval Weapons Technology at Dahlgren, Virginia, 1918-2006:

The most conspicuous example of the early 1960s effort to make Dahlgren look more like a modern science installation than a gun range was the construction of the Computation and Analysis Building (Building 1200). ‘K’ Laboratory had been in need of office space for some time … Consequently, [Ralph A.] Niemann and [Charles J.] Cohen, with the early support of [Naval Weapons Laboratory] commander Captain Manley H. Simons Jr., began lobbying for a new office building at Dahlgren, using POLARIS, Naval Space Surveillance Command, and TRANSIT as justification for the additional work space … Designed by Dahlgren engineer Robert Ryland, the Computation and Analysis Building was (and remains) situated near the station’s front gate, well away from the Potomac and the gun range. There was no mistaking it for a testing shed. It really looked like a science building with its graceful lines and large windows, standing in sharp contrast to the rest of NWL’s research plant. It was no mistake that the building was at the front gate, as it was intended to instill visitors coming to Dahlgren with a sense of scientific enterprise. The stratagem worked. ‘Once the building was constructed,’ said Niemann, ‘then the issue about closing Dahlgren sort of went away because when people would come down, they’d see a new building. They’d figure things were going good, and maybe Dahlgren shouldn’t be closed.’

The photograph of the Computation and Analysis Building in The Sound of Freedom shows a pleasant but unremarkable low-rise office building.

From James P. Rife and Rodney P. Carlisle, The Sound of Freedom: Naval Weapons Technology at Dahlgren, Virginia, 1918-2006, p. B-3

The previous decade had seen the appearance of a swathe of new corporate research and development centers with innovative architecture, often designed both to streamline the collaborative research process and to put an impressive, even futuristic, face on corporate America. The Eero Saarinen-designed GM Technical Center is the most famous of these, but many of the research campuses were built by companies in the aerospace and defense sectors. In 1957, TRW’s Space Technology Laboratories (architect, A.C. Martin) opened in Los Angeles; the next year Convair Astronautics built a new headquarters designed by Pereira and Luckman just outside San Diego.

The NWL Computation and Analysis Building was a far more modest building. Instead of glass curtain walls, it had ribbon windows. Instead of a landscaped campus, it had a lawn. Its designer, Robert Ryland, was an electrical engineer who had held a series of management roles in the various NWL labs. According to his obituary in the Fredericksburg Free-Lance Star, Ryland graduated from MIT in 1951 and headed the Electronics Systems, Strategic Systems, Protection Systems, and Personnel departments at Dahlgren before retiring as head of the Engineering and Information Systems Department in 1992.

There’s no real comparison between the Computation and Analysis Building and the big private research campuses, but there’s an entertaining overlap of eras and impact. Clearly, if Ralph A. Niemann is to be believed, you didn’t need a star architect or an expensive and expansive campus to make an impression if you were working in government.

Source: James P. Rife and Rodney P. Carlisle, The Sound of Freedom: Naval Weapons Technology at Dahlgren, Virginia, 1918-2006 (GPO, 2006), pp. 119-120.

When Containers Flew by Balloon

What do a mature tree and a twenty-foot cargo container have in common? In the 1970s, you could carry both away by balloon. Balloon logging, as the practice was known, had a successful niche in the West Coast forestry industry for a few decades. Cargo handling by balloon, on the other hand, was tested by the US military but never made it out of the experimental stage.

Two realizations were behind the US military experiments with cargo handling by balloon. The first was that the US was grossly under-equipped to handle military cargo in large quantities anywhere other than a well-equipped modern port. The second was that the international shipping business was becoming more and more reliant on moving materials in standardized cargo containers, and that what equipment the US military did have was not designed to handle containerized cargo. Experiments with mobile cranes, helicopters, hovercraft, and floating piers were all part of the search for solutions to these two problems.

Among the more radical experiments were a series of tests that used helium balloons to unload cargo containers from a container ship anchored offshore. Inspiration for what became the Joint Army/Navy Balloon Transport System came from Oregon and Washington, where loggers had been reaching further and further into the backcountry by developing new ways to move felled trees. In the 1920s groundlead yarding, where a long line powered by a steam engine skidded the logs along the ground, was replaced by high-lead yarding, where the lines were strung from a tall spar tree and the logs moved through the air rather than along the ground. High-lead yarding was limited by the need to find an appropriate spar tree, but lifting logs by balloon would let loggers shift logs even where the ground was too rough and the distances too long to use high leads. Tests started in Sweden in 1956, then in Canada in 1963 with Second World War-surplus barrage balloons. In the US, Goodyear Aerospace did some early experiments, but it was Raven Industries that became the main supplier to the industry in the Pacific Northwest in the late 1960s.

Balloon logging configuration from the Washington Administrative Code’s “Safety Standards-Logging Operations.” See other cable logging systems here.

In 1972, the Advanced Research Projects Agency took notice. ARPA was the military’s high-tech incubator – later that year it would pick up the prefix “Defense” and gain its current acronym, DARPA. It hosted a conference with balloon-builders at Raven Industries to brief military officials on the possibility of using their logging balloons to lift containerized cargo. The Air Force Range Measurements Laboratory, which was already using balloons to carry instrument packages, was roped in to provide technical expertise.

For Raven, offering balloons to the military brought the company back to its roots. Founded in 1956 by staff from General Mills’s High Altitude Research Division, including the “father of hot air ballooning,” Ed Yost, Raven began its business with contracts from the Office of Naval Research to build experimental balloons. Expanding from balloons into plastics, electronics, and sewn goods, the company secured millions of dollars of military contracts for radios and other electronics.

Borrowing balloons and working crews from Raven Industries and the Bohemia Lumber Company, the Air Force flew cargo containers 1,500 feet across a ravine in Culp Creek, Oregon. Returning to Culp Creek five months later, they tested the balloon’s ability to lift and move containers from a simulated ship’s cargo cell, as well as the use of a third winch (a “flying Dutchman”) to shift the balloon not just along one axis but laterally as well. Extrapolating from the 1,500 foot tests, they calculated that a balloon could move nine containers every hour from up to a mile offshore. Finally, the laboratory towed the balloon along a track to test its reaction to wind speeds of up to 30 knots.

From Tethered Balloon Transport System: A Proposal by William Frederick Graeter II, fig. 31.

In 1976, the balloon graduated to sea tests off Virginia Beach. Now given the moniker of Joint Army/Navy Balloon Transport System, a logging balloon was used to lift cargo containers from a simulated container cell on a Navy LST and deposit them either aboard a nearby landing craft or on the beach 700 yards away. Though it took about four times as long as the ideal calculations had suggested, the balloon was able to do the job. Ships shifting at anchor also required repeated stops to re-position the balloon.

The next year, a commercial company halfway around the world proved that the idea of unloading cargo by balloon wasn’t a fantasy. Operated by the Yemen Skyhook Company, the balloon “Queen of Sheba” was used to unload 800 tons of cargo a day from vessels in the congested Yemeni port of Hodeida. That example, though, was not enough to overcome the mediocre signals from the Virginia Beach tests. 1976 was the high point for balloon-based ship unloading in the US.

Though the Navy had experimented with a variety of unloading techniques, it finally opted for one of the less dramatic solutions on offer. Ten existing container ships were refitted to carry two or three powerful cranes each. These auxiliary crane ships could substitute for cargo cranes aboard a containership or in a port, letting the Navy use the rest of its lighterage and cargo-handling equipment as is. Though the crane ships have spent most of their time in reserve, every once in a while they are called into action to support large US operations. Five were sent to the Persian Gulf in 1991 and two went to Haiti after the 2010 earthquake to help unload relief supplies. Balloon logging, on the other hand, has kept going in various places, albeit as a specialized practice rather than a widespread innovation.

The eventual solution: the auxiliary crane ship SS Grand Canyon State. US Navy photograph courtesy Wikipedia.

Source Notes: The USDA Yearbook of Agriculture describes the origins of balloon logging here; the Forest History Society discusses balloon logging on their blog; the military experiments are summarized in detail in a Naval Postgraduate School thesis here.

Elizebeth Friedman, Cryptographer: Part Two

The first twenty years of Elizebeth Friedman’s career as a cryptographer took her to a private research lab, the US Army and Navy, and the Department of the Treasury’s many law enforcement agencies. The start of the Second World War in Europe brought new challenges, starting with the preservation of American neutrality.

With the Coast Guard
The Coast Guard Cryptanalytic Unit began monitoring messages connected to foreign exchange for the Money Stabilization Board in 1938, watching for signs of imminent hostilities so the Board could freeze the funds of the belligerents. Starting in 1939, it also began picking up coded transmissions connected with the two sides. A presidential memorandum gave responsibility for espionage, counterespionage, and sabotage cases to the FBI, but when the Coast Guard turned its intercepts over to the FBI, the Bureau asked the Coast Guard cryptanalytic unit to solve the codes. The FBI was a relative latecomer to the code-breaking business, having only hired its first full-time cryptanalyst in October 1939, and it leaned on the Coast Guard for cryptographic support. The first chief of its cryptanalytic section, W.G.B. Blackburn, was trained by Elizebeth Friedman.

Once the United States entered the war, cryptanalysis began to look like something of a free-for-all. In addition to the Army, Navy, Coast Guard, and FBI operations, the Office of Censorship, the Federal Communications Commission, the Weather Bureau, and the Office of the Coordinator of Information (the future Office of Strategic Services) all announced that they were setting up their own cryptanalysis programs. Thankfully, within about seven months all involved had agreed to centralize code-breaking activities in the Army, the Navy (including the Coast Guard), and the FBI. The division of labor split clandestine radio messages in the Western Hemisphere between the Navy and the FBI and gave the Navy responsibility for intercepting clandestine communications in the rest of the world. The Coast Guard cryptanalytic unit, now a sub-section of the Navy’s code-breaking division (OP-20-G), continued to focus on these secret messages. It also grew, first to twelve and then to twenty-three people. Elizebeth Friedman was not the commander of the Coast Guard cryptanalytic unit; that role belonged to a commissioned officer, L.T. Jones. After the war she described herself, with perhaps an excess of modesty, as “just one of the workers.”

The messages that reached the Coast Guard for decryption came from both individual agents working in secret and substantial radio stations operating out of German embassies. Messages were enciphered using a range of classic ciphers that either replaced (in a substitution cipher) or shifted around (in a transposition cipher) the letters in the message. Most agents were using hand ciphers, in which the message is enciphered using pen and paper rather than a mechanical device. A few used a mechanical device, the Kryha machine, which created a shifting substitution cipher. Agents in Argentina used the same Enigma machine that the German army and navy used to protect their messages (and whose decryption was most recently depicted, with substantial inaccuracies, in The Imitation Game). The Coast Guard was able to use intercepted messages to reverse engineer the wiring that scrambled each letter in the simpler, commercial Enigma machine – those messages turned out to be from the Swiss army. According to NSA historian David P. Mowry this was “the first instance of Enigma wiring recovery in the United States.” Then, with the assistance of British techniques, the Coast Guard team was also able to decrypt messages sent on the Enigma between Argentina and Berlin.
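The two classic cipher families mentioned above are easy to illustrate with toy examples – a Caesar shift for substitution and a columnar rearrangement for transposition. This is purely a modern sketch of the general idea, not a reconstruction of any system the German agents actually used:

```python
# Toy illustrations of the two classic cipher families: substitution
# (each letter is replaced) and transposition (letters are reordered).
import string


def caesar_substitute(msg: str, shift: int) -> str:
    """Simple substitution: replace each letter with the one a fixed
    distance further down the alphabet (a Caesar shift)."""
    alpha = string.ascii_uppercase
    table = str.maketrans(alpha, alpha[shift:] + alpha[:shift])
    return msg.upper().translate(table)


def columnar_transpose(msg: str, width: int) -> str:
    """Simple transposition: write the message in rows of a fixed
    width, then read it off column by column."""
    msg = msg.upper().replace(" ", "")
    rows = [msg[i:i + width] for i in range(0, len(msg), width)]
    return "".join(row[c] for c in range(width) for row in rows
                   if c < len(row))


print(caesar_substitute("ATTACK AT DAWN", 3))   # every letter replaced
print(columnar_transpose("ATTACK AT DAWN", 4))  # same letters, reordered
```

Note that the transposed message still contains exactly the original letters, just shuffled – one of the statistical tells cryptanalysts used to distinguish the two families.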

The traffic that the Coast Guard’s code-breaking operation intercepted was never critical to the war effort. Interviewed after the war, Friedman herself suggested that the unit could probably have been better used on other material, rather than working the problem “to the point of overkill” (in her interviewer’s words). Mowry, who wrote a Top Secret history of the topic for the NSA, judged that the US effort to decrypt German clandestine transmissions from the Western Hemisphere had little or no impact on the conduct of the war. Still, American cryptanalysis ensured that nothing snuck up on US operations. Nor was the Coast Guard work Elizebeth Friedman’s only contribution to the war effort. When the Office of the Coordinator of Information was created, she also developed its first code systems.

A Long and Varied Career
Cryptography has such a long history that it’s sometimes hard to remember that large government code-breaking organizations are such a recent development. Elizebeth Friedman entered the field at the moment those organizations were being created. Without schools or training programs, cryptographers were few and far between. While her husband spent his career with the Army and the National Security Agency, creating the institutions that would perpetuate the government’s cryptanalytic programs, Elizebeth worked far and wide. Between when she left Riverside Laboratories and when she retired from government service, she worked for or taught at seven of the sixteen members of the current US Intelligence Community (Army, Navy, Coast Guard, Central Intelligence Agency, Department of the Treasury, Federal Bureau of Investigation, and National Security Agency). Her career was not only remarkable for its scope but also probably unrepeatable. By the time she retired, these agencies were on their way towards the extensive permanent organizations that exist today. Retirement was also not the end of Elizebeth’s involvement in cryptography. She consulted for the International Monetary Fund on creating that agency’s secure communications and published a book, The Shakespearean Ciphers Examined (with William Friedman), on their work studying Shakespeare’s works for hidden codes.

Source Notes: The NSA’s history office commissioned several relevant histories as part of its Second World War series. One by Robert Louis Benson, The History of U.S. Communication Intelligence during World War II: Policy and Administration, covers the various organizations; two others, both by David P. Mowry, cover German Clandestine Activities in South America in World War II and The Cryptology of the German Intelligence Services (available amalgamated here). Some of Friedman’s own comments in an oral history interview with Benson (online here) were also useful.

MPDS: Solving the Message Processing Problem Afloat

It is a truth almost universally acknowledged that military organizations push a lot of paper and pass a lot of messages. During the Cold War, keeping those messages moving was a vital but herculean task. The need to make sure senior commanders could reach the nuclear deterrent in a timely manner was behind a vast investment in strategic command and control. The practical impact of failures in strategic communication was apparent during attacks on the USS Liberty and the USS Pueblo in 1967–8, when vital messages sent via the military’s digital data network (AUTODIN) were processed too slowly to be of use on the scene.

The messages involved in the Liberty and Pueblo crises at least had high priorities. What about the vast numbers of significant but not critical messages being passed? To give a convenient example, in 1979 the US Pacific Command’s Operations Directorate received 2,000 messages a week and took two hours just to pass a priority message from the AUTODIN switch to the relevant action officer. Cutting through this morass of messages required automation, which the various branches of the military adopted with varying levels of alacrity.

One attempt was the Military Message Experiment, which gave CINCPAC’s Operations Directorate a set of computer terminals to automatically distribute and display messages received via AUTODIN. The Military Message Experiment was interesting because it was designed to borrow many of its features from the early e-mail functionality on ARPANET, but it was hardly the only attempt to automate message processing. Nor did it take on the most challenging conditions: the message processing problem didn’t stop at the water’s edge.

US Navy ships had special challenges when it came to handling message traffic. To avoid giving away their positions to enemy direction-finding, ships avoided two-way communications with the rest of the armed forces communications network. Instead, the Navy operated a chain of radio stations ashore that sent out a steady stream of radio messages. Ships listened to the “Fleet Broadcast” that went out over these channels and staff in each ship’s main radio room (known as “Radio Central”) copied down the messages relevant to them, then distributed them to recipients elsewhere aboard ship. A communications readiness exercise held by the First Fleet in October 1966 demonstrated how easily this system was overwhelmed when the number of messages surged from peacetime to wartime levels, with most of the trouble coming when the messages had to be handled by human operators.

The solution was automation, but while automating the message processing and broadcast process ashore was relatively simple, doing the same afloat was far more complex. Ships had limited space to spare, limited staff with which to operate, and a range of environmental hazards that included rough seas, shock, humidity, and even salt water. Luckily, the core component for solving the problem was already at hand.

The Naval Tactical Data System (NTDS) was a revolutionary computerized system that automatically shared tracking information on targets between ships, built around a series of transistorized general-purpose computers: the UNIVAC CP-642. The first computer to be operated regularly at sea, the CP-642 was the size of a large refrigerator and contained more than 10,000 transistors mounted on 3,810 circuit cards. The first installations began in 1961 and the system proved reliable. NTDS proved that you could put a computer on board a warship and expect it to operate effectively in a critical capacity.

USS Oklahoma City, underway 9 December 1960 US Navy Photo NH98662

So when the Naval Electronics Laboratory (NEL) at Point Loma, San Diego was asked to design a system to automatically receive, process, and then distribute messages, they borrowed the already-proven CP-642 as its processing core. The first experimental version was installed aboard the Seventh Fleet flagship USS Oklahoma City (CLG-5) less than a year later, in May 1967. This version of what was being called the Message Processing and Distribution System (MPDS) wasn’t fully automated, but it was successful enough that NEL was asked to design a fully automated system to install on the first Nimitz-class aircraft carrier in 1975.

Bow view of the US Navy Aircraft Carrier USS NIMITZ (CVN 68) underway off the coast of Southern California. DoD Photo.

The automated MPDS (designated the AN/SYQ-6) was built from three CP-642 computers, plus disk drives, CRT consoles, and a backup teleprinter. It listened to (or “guarded”) the relevant fleet broadcast channel, recorded the messages, checked to see which ones were directed to that ship, and then forwarded them to the appropriate remote terminal elsewhere on the ship. And it made a big difference in helping the carrier handle an average communications load of more than 2,500 messages a day. The MPDS worked well enough to be installed on the next two Nimitz-class carriers, the USS Eisenhower and the USS Vinson, too.
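The pipeline described above – guard the broadcast channel, record every message, check which ones are addressed to the ship, and forward them to the right remote terminal – can be sketched in miniature. The message fields, categories, and terminal names below are invented for illustration; they are not the actual AN/SYQ-6 formats:

```python
# Conceptual sketch of MPDS-style message handling: record everything
# heard on the broadcast, but route onward only the messages addressed
# to this ship. All field names and routing entries are hypothetical.

OWN_SHIP = "NIMITZ"

# Hypothetical routing table: message category -> shipboard terminal.
ROUTING = {"OPS": "flag_plot", "SUPPLY": "supply_office"}


def process_broadcast(messages):
    log, routed = [], []
    for msg in messages:
        log.append(msg)                    # record everything received
        if msg["to"] != OWN_SHIP:          # guard: ignore other ships' traffic
            continue
        # Unrecognized categories fall back to the main radio room.
        terminal = ROUTING.get(msg["category"], "radio_central")
        routed.append((terminal, msg["text"]))
    return log, routed


log, routed = process_broadcast([
    {"to": "NIMITZ", "category": "OPS", "text": "Flight ops 0600"},
    {"to": "ENTERPRISE", "category": "OPS", "text": "Not for us"},
    {"to": "NIMITZ", "category": "ADMIN", "text": "Default routing"},
])
```

The same shape – listen to everything, filter by addressee, dispatch by category – is why the broadcast model scaled so badly for human operators and so well once automated.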

Of course, there were some rough spots. The electronic storage for past messages was insufficient, forcing the crew to print off messages as backups. The original software could only read US-formatted messages, not NATO-formatted ones, which made allied operations problematic – that issue was fixed, at least partly, with a patch. And, once carriers had to process more than 3,500 messages a day, the system started to reach its saturation point.

So why have you never heard of the MPDS? The answer is that the system was always a temporary fix for a broader problem. In 1973, having demonstrated the potential of communications automation afloat, Naval Electronic Systems Command (NAVELEX) began designing the Naval Modular Automated Communications System (NAVMACS). NAVMACS was a modular system that would fit into all the Navy’s ships, not just its carriers. It would use a newer computer, the UYK-20. And its largest installation format, the NAVMACS V-5, would have all the same remote terminal features that MPDS offered. Once NAVMACS entered service, MPDS vanished into obscurity.

Sources: References to MPDS are pretty scattered; an official history of NEL at Point Loma mentions the Oklahoma City installation in brief, and a NAVELEX brochure discusses plans for the automated system. The main source for this posting, apart from numerous handy pieces of information at www.virhistory.com/navy/, was a master’s thesis on the system’s development by Kenneth Lee Whitten written in 1981 (and available in many places, including the Naval Postgraduate School).

Spacewar!, and Mapping Gravity

The blog War is Boring has a brief shout-out to Spacewar!, the first video game. Programmed by an inventive group of graduate students to run on MIT’s Programmed Data Processor-1 (PDP-1), Spacewar! was a real-time two-player game in which spaceships swooped around the gravity-well created by a collapsed star, shooting missiles at each other against the backdrop of an expensive-looking starfield. Created in 1961, it was, as War is Boring puts it, “the Pentagon-funded video game” that created an entire form of entertainment. If you want to try it out yourself, there’s a JavaScript emulator of the game, as well as lots of historical articles and versions of the original code, at Norbert Landsteiner’s Spacewar! website.

War is Boring misses a neat irony about the Spacewar! story though. What made the game hideously hard and an obsession for physicists and engineers was its realistic physics, complete with the “real” gravity of a star. Yet, at the same time as the students at MIT were programming Spacewar! the US Navy was discovering that gravity in the real world was not as smooth as it was in Spacewar!‘s featureless sphere – and that the difference could be the Achilles heel of the US nuclear deterrent.
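Spacewar!’s “real” gravity amounted to an inverse-square pull toward the central star, recomputed and integrated step by step every frame. A minimal modern sketch of that update loop (arbitrary units and constants, not the original PDP-1 code) looks like this:

```python
# Minimal Spacewar!-style physics step: an inverse-square pull toward
# a star at the origin, integrated in small time steps each "frame".
# GM and DT are arbitrary illustration values, not the PDP-1 original.
import math

GM = 1.0    # gravitational parameter of the star (arbitrary units)
DT = 0.01   # time step per frame


def step(x, y, vx, vy):
    """Advance the ship one frame under the star's gravity."""
    r = math.hypot(x, y)
    ax = -GM * x / r**3          # inverse-square acceleration toward origin
    ay = -GM * y / r**3
    vx, vy = vx + ax * DT, vy + ay * DT   # update velocity first...
    return x + vx * DT, y + vy * DT, vx, vy  # ...then position


# A ship started at circular-orbit speed (sqrt(GM/r)) swoops around
# the star indefinitely instead of falling in or flying off.
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
for _ in range(1000):
    x, y, vx, vy = step(x, y, vx, vy)
```

Updating velocity before position (semi-implicit Euler) keeps the orbit stable over many frames, which is exactly the property a game like this needs; a naive simultaneous update slowly spirals outward.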

Though gravity is usually treated as uniform, variations in the mass and shape of the earth mean that it differs, subtly, from place to place around the world. The fact itself was not terribly new – the first gravity surveys began in the seventeenth century and the US Navy was making gravity measurements from the submarine S-21 in the 1920s – but it acquired much greater significance after the Second World War.

Suddenly, the fact that differences in local gravity would have a small but measurable effect on a ballistic trajectory became important – especially when the object on that trajectory was a missile that had to travel several thousand kilometers, like, coincidentally, a submarine-launched ballistic missile. It mattered doubly because the launching submarine relied on an inertial guidance system whose measurements of motion could be skewed by those same gravitational anomalies. (This was the same navigation problem that led to the Transit satellites and then GPS.) In the early 1960s, at the same time as Spacewar! was being birthed at MIT, the inconsistency of gravity turned out to be of vital military importance.
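The sensitivity is easy to see with a crude back-of-the-envelope model – a vacuum trajectory over a flat earth with uniform gravity, nothing like a real SLBM flight model, and with the gravity error exaggerated well beyond real anomalies for clarity:

```python
# Back-of-the-envelope sensitivity check: the vacuum flat-earth range
# formula R = v^2 * sin(2*theta) / g, and how the impact point shifts
# when the true local gravity differs slightly from the assumed value.
# This is a toy model, not a real ballistic-missile trajectory.
import math


def flat_earth_range(v, theta_deg, g):
    """Range of a vacuum ballistic trajectory over a flat earth."""
    return v**2 * math.sin(2 * math.radians(theta_deg)) / g


v = 5000.0              # launch speed in m/s (illustrative)
g_assumed = 9.80665     # standard gravity, m/s^2
# A fractional gravity error of 5e-4, deliberately exaggerated
# compared with real anomalies (which are far smaller).
g_actual = g_assumed * (1 + 5e-4)

nominal = flat_earth_range(v, 45, g_assumed)
actual = flat_earth_range(v, 45, g_actual)
print(f"nominal range: {nominal / 1000:.0f} km")
print(f"miss distance from gravity error: {nominal - actual:.0f} m")
```

Even in this toy model, a tiny fractional error in gravity over a multi-thousand-kilometer shot translates into a miss measured in hundreds of meters or more – hence the urgency of the gravimetry program.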

Understandably, this situation put the US military’s gravimetry program into high gear. The Army Map Service had 125 gravity survey crews in the field. The Air Force had a Geodetic Survey Squadron (the 1381st) and a vast library of gravity survey data at the Aeronautical Chart and Information Center in St Louis, Missouri, but the most substantial contribution probably came from “The Triplets,” three converted merchantmen equipped to calibrate the Ships Inertial Navigation System (SINS) that would be used by the Navy’s ballistic missile submarines. The shortest-lived of the three, USNS Michaelson (T-AGS 23), served for seventeen years. The longest-lived, USNS Dutton (T-AGS 22), lasted thirty-one years after making the equivalent of one hundred circumnavigations of the Earth.

By then, the state of the art for gravity measurement had advanced by leaps and bounds, helped in part by the computer revolution that began with the PDP-1 and its counterparts. NASA and the Johns Hopkins Applied Physics Laboratory (APL) discovered that sea surface height, measured by the sensitive radar altimeter on the Seasat satellite, could be used to infer the differences in local gravity. It was an impressive enough achievement that it led to a brand-new Navy satellite – GEOSAT – whose altimeter could measure the ocean height with an accuracy of three centimeters. Sea surface topography, it would turn out, had all sorts of valuable military uses. That, though, is another story or two.

Plankton on the Battlefield

How small can an animal be, and still shape a battlefield? Very small, it turns out, as long as you’re zooplankton.

In 1942, the US Navy came to a group of oceanographers working in the University of California Division of War Research with an odd question. Navy sonar operators were getting echoes from “false bottoms,” with their signals bouncing back from depths of 1500 feet where the true depth of the ocean was several thousand. The navy was flummoxed, looking for ways to explain the phenomenon, and turned to the oceanographers Charles Eyring, R.J. Christensen, and Russell Raitt.
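Echo sounding itself is simple arithmetic – depth is half the round-trip travel time of the ping multiplied by the speed of sound in seawater – which is what made the phantom returns so puzzling. A quick sketch of the calculation (nominal sound speed; the travel time is an illustrative value, not a wartime measurement):

```python
# Echo sounding arithmetic: the sounder measures the round-trip time
# of a ping, and depth is half that time times the speed of sound.
# 1500 m/s is a nominal seawater value; the real speed varies with
# temperature, salinity, and pressure.

SOUND_SPEED_MS = 1500.0    # m/s, nominal value for seawater
FEET_PER_METER = 3.28084


def echo_depth_feet(round_trip_seconds: float) -> float:
    """Apparent bottom depth, in feet, for a given echo delay."""
    return (SOUND_SPEED_MS * round_trip_seconds / 2) * FEET_PER_METER


# A return arriving after ~0.61 s looks like a bottom at roughly
# 1,500 feet -- even when the chart says several thousand feet of
# water lie below. That mismatch was the operators' "false bottom".
print(round(echo_depth_feet(0.61)))
```

The sounder has no way to tell a hard seabed from a dense layer of scatterers; anything that reflects the ping at that delay reads as "bottom," which is exactly the ambiguity Johnson resolved.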

They turned to Martin Johnson of the Scripps Institution of Oceanography, a marine biologist who had already had success helping the navy recognize that interference with their sonar signals was the work of snapping shrimp (which are, apparently, shockingly loud when you put hundreds of them together).

Johnson was struck by the idea that the source of the reflection might be organic. In the deep ocean, zooplankton and the fishes which feed on them follow a diurnal cycle – staying deep during the day and climbing close to the surface at night. Clustered together, these fishes and plankton were dense enough to generate a sonar return that confused the operator.

In 1945, Johnson had the chance to prove his theory, watching as the false bottom (soon to be known as the “deep scattering layer”) rose towards the surface overnight and sank again in the morning. Once again, he had been able to help explain the environment in which the navy was fighting.

The use of oceanography during the Second World War became the basis for a massive expansion of government-funded ocean research during the Cold War, and Johnson’s discoveries helped ensure that marine biology and the study of the deep scattering layer would be part of that. After all, even tiny plankton could affect the operation of the US Navy’s most advanced sensors.

h/t Gary E. Weir, An Ocean in Common: American Naval Officers, Scientists, and the Ocean Environment (Texas A&M University Press, 2001); Robert L. Fisher, Edward D. Goldberg, and Charles S. Cox (eds.), Coming of Age: Scripps Institution of Oceanography : A Centennial Volume, 1903–2003 (Scripps Institution of Oceanography, 2003).