Minerals at War: Strategic Resources and the Foundations of the U.S. Defense Industrial Base
Published: 14 January 2026
By Gracelin Baskaran and Samantha Dady
via the Center for Strategic and International Studies website

Workers at the Dayton-Wright Company built more than 3,000 aircraft in World War I.
In World War I, as now, access to critical minerals was a defining determinant of military and industrial power.
Introduction
Across major conflicts, the United States has repeatedly mobilized extraordinary state intervention—stockpiles, price controls, public financing, and foreign procurement—to overcome minerals supply shocks, only to dismantle these systems during periods of perceived stability. The historical record reveals recurring failures: overreliance on stockpiles absent industrial capacity, neglect of processing and refining, erosion of domestic expertise, and complacent assumptions about markets and allies. The post–Cold War drawdown marked the most severe rupture, hollowing out U.S. minerals capabilities and amplifying dependence on foreign, and eventually adversarial, supply chains. This paper traces the evolution of U.S. critical minerals policy in twentieth-century military industrialization, illustrating how a position of resource dominance gave way to economic and national security vulnerabilities. The central lesson is clear: Critical minerals security is a permanent national security challenge that requires continuous stewardship, integrated industrial policy, and durable engagement with allies, not episodic crisis response or faith in market self-correction.
World War I
In the early twentieth century, access to critical minerals was a defining determinant of military and industrial power. Europe’s leading powers—Britain, France, and Germany—secured the raw materials necessary for industrialization and rearmament not primarily through domestic production, but through colonial empires and overseas holdings. Although the British Isles possessed important domestic resources such as iron, tin, and coal, Britain’s true material strength flowed from its empire. India supplied manganese and tungsten; Rhodesia (now Zimbabwe) provided chrome; Canada delivered nickel; and Australia contributed lead and zinc. Germany faced similar constraints. With limited domestic mineral reserves, Berlin aggressively pursued stakes in foreign mines, from the United States to Australia, and even purchased the national debts of resource-rich countries to secure leverage over their mineral sectors.
This competition extended into emerging markets for rare earth elements (REEs). In the early 1900s, the German Thorium Syndicate and the Austrian Welsbach Company dominated global monazite extraction in Brazil and India, flooding markets with cerium, lanthanum, and thorium. Their control over these deposits effectively pushed the United States out of the REE industry; the United States would not resume domestic production until 1952.

The USS Arizona in New York Harbor in 1916. A single WWI-era battleship required 8,000–12,000 tons of steel and roughly 200 tons of copper wire.
Resource pressures intensified dramatically in the years leading up to World War I. Between 1909 and 1914, Britain and Germany entered an intense naval and military arms race, dramatically increasing demand for strategic minerals. A single battleship required 8,000–12,000 tons of steel and roughly 200 tons of copper wire. Producing that steel depended on alloying minerals such as nickel, chrome, and manganese, while zinc provided corrosion-resistant coatings. Ammunition was similarly mineral intensive, relying on nitrate-based explosives packed into brass casings made from copper and zinc. Even early wartime aircraft added to the competition for resources, as their lightweight frames required aluminum derived from bauxite.
World War I exposed the fragility of the United States’ own critical mineral supply chains. In 1914, the United States was among the world’s leading mineral producers, accounting for approximately 55 percent of global copper output, 40 percent of coal and iron, and 30 percent of lead and zinc. Yet domestic abundance masked deeper vulnerabilities: The United States lacked the international partnerships, strategic stockpiles, and coordinating mechanisms required for wartime mobilization.
As the war progressed, industries essential to munitions, aviation, and communications faced acute shortages, particularly of platinum. In the early 1900s, the Ural Mountains in Russia supplied roughly 95 percent of the world’s platinum. When the Bolshevik Revolution erupted in 1917, Russian exports contracted sharply, leaving the United States scrambling to secure alternative sources. This scarcity prompted President Woodrow Wilson to impose restrictions on nonessential uses of the mineral, such as jewelry. Colombia, then the world’s second-largest producer, initially appeared to offer relief, but negotiations faltered when U.S. officials learned Bogotá intended to establish a state monopoly to extract concessions on U.S. shipping tonnage. Before an agreement could be reached, the war ended, demand receded, and talks collapsed. A stable supply emerged only in 1923, when major new platinum deposits were discovered in South Africa, which quickly became the United States’ primary supplier.
These wartime improvisations exposed deeper structural weaknesses. Competition between the Army and Navy for materials drove up costs and fragmented supply chains, while the nation’s geological wealth failed to translate into the industrial capacity necessary to deploy those resources at scale.
Only in the final phase of the war did the federal government take sweeping, decisive action on critical minerals. The War Industries Board was established in July 1917 to coordinate U.S. industrial production by establishing priorities, setting prices, and standardizing goods to ensure the nation and its allies were adequately supplied for the war effort. In October 1918, the War Minerals Stimulation Law became the first large-scale U.S. attempt to intervene in mineral markets for national security. As a New York Times article on September 12, 1918, reported, the bill “authorize[d] the President to take over and operate undeveloped or insufficiently developed deposits of metals or minerals named in the bill, or mines, smelter, or plants which in his opinion are capable of producing minerals needed for the war.” The legislation created a $50 million fund to execute its objectives and empowered the president to establish one or more corporations to stimulate production and oversee distribution. It specified the minerals eligible for support—including manganese, phosphorus, potassium, radium, mercury, and 36 others—while explicitly excluding gold, silver, zinc, copper, and lead. All authorities granted under the act expired two years after peace was declared.
The United States emerged from World War I as one of the primary suppliers of metals to a devastated Europe, but peacetime demand quickly collapsed. Continued high U.S. production flooded global markets and sent prices plummeting. Producers competed fiercely and without coordination, turning out copper, zinc, and other base metals at levels far above what markets could sustain. Temporary stabilization came through voluntary export associations, most notably the Copper Export Association, which brokered large orders, extended on credit, from reconstructing European economies. When European demand receded in late 1923, however, these associations dissolved, and U.S. mineral producers entered a prolonged depression marked by oversupply and price collapse.
⇒ Read the entire article on the CSIS website.
