Important People, Important Discoveries, War: What Is It Good For? (Absolutely Nothin'!), World-Changing Inventions

A Ridiculously Brief History of Nuclear Fusion Research, Part II

Part I can be found here.

The Mid to Late Nineteen Hundred & Fifties

Hungarian-born American theoretical physicist and father of the hydrogen bomb Edward “Big Ed” Teller, working on “Project Matterhorn” at the newly established Princeton lab that would later become the Princeton Plasma Physics Laboratory, suggested at a group meeting that any nuclear fusion system that confined plasma within concave fields was destined to fail. Teller stated that, from what his research suggested, the only way to achieve a stable plasma configuration was via convex fields, or a “cusp” configuration.

Not THAT Matterhorn, dang it!

Following Teller’s remarks, most of his cohorts on Project Matterhorn (which would soon be renamed “Project Sherwood”) quickly wrote up papers stating that Teller’s concerns did not apply to the devices they had been working on. Most of these chaps were working with pinch machines, which confine plasma with the magnetic field generated by a current running through the plasma itself rather than with externally shaped fields. However, this rush of papers was quickly followed by a piece by Martin David “Diamond Dave” Kruskal and Martin “Big Marty” Schwarzschild, which demonstrated the inherent instabilities of pinch machine designs.

A new-and-improved pinch device, incorporating Kruskal and Schwarzschild’s suggestions, began operating in the UK in 1957. In early ’58, the British physicist Sir John “Big John” Cockcroft announced that this machine, dubbed ZETA, had successfully achieved fusion. However, US physicists soon disproved this claim, showing that the neutrons ZETA produced were not the product of fusion at all, but of other processes within the plasma. ZETA was decommissioned a decade later.

The first truly successful controlled fusion experiment was conducted at Los Alamos National Laboratory later in 1958. Using a pinch machine and a cylinder of deuterium gas, scientists generated magnetic fields that squeezed and compressed the plasma, heating it to roughly 15 million degrees Celsius, fusing the deuterium, and producing neutrons.

The Nineteen Hundred & Sixties

In 1962, scientists at Lawrence Livermore National Laboratory began using newly-developed laser technology to pursue laser fusion. This process involves imploding a fuel target using laser beams, making it probably the coolest scientific procedure in human history.

In 1967, researchers at that same laboratory developed the magnetic mirror, a magnetic confinement device used to trap high energy plasma via a magnetic field. This device consisted of two large magnets arranged so as to create strong individual fields within them and a weaker, connected field betwixt them. Plasma introduced into the between-magnet area would bounce off the stronger fields and return to the middle.
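For the curious, there’s a simple textbook relation (an idealized picture, not the specs of any particular Livermore machine) describing which particles actually stay trapped: a particle sitting in the weak central field leaks out one end if its pitch angle θ is too shallow, i.e. if

\[
\sin^{2}\theta < \frac{B_{\min}}{B_{\max}} = \frac{1}{R},
\]

where R is the so-called mirror ratio. Particles inside this “loss cone” escape out the ends, which turned out to be one of the mirror concept’s most stubborn headaches.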

At a 1968 conference in Novosibirsk, Russia (then the USSR), a Soviet team announced that its tokamak, a device based on a design by Andrei “Big Drei” Sakharov, had produced the world’s first quasistationary fusion reaction. Much of the scientific community was dubious of the claim, but follow-up measurements by British researchers confirmed the results. This breakthrough led to the development of numerous new fusion devices, as well as the abandonment of others as their designs were repurposed to more closely replicate the Soviet machine.

The Nineteen Hundred & Seventies

John “Johnny Nucks” Nuckolls first developed the concept of ignition in 1972. Ignition, in this case, is a fusion chain reaction in which superheated helium created during fusion reheats the fuel and starts more reactions. Nuckolls hypothesized that this process would require a one kilojoule laser, prompting the creation of the Central Laser Facility in the UK in 1976.

Project PACER, carried out at Los Alamos throughout the mid-‘70s, explored the possibility of a fusion power system that would detonate small hydrogen bombs in an underground cavity. At the time, Project PACER was the only fusion power concept that could have operated with existing technology. However, as it also required an effectively endless supply of nuclear bombs, it was ultimately deemed unfeasible.

Tune in next week for “A Ridiculously Brief History of Nuclear Fusion Research, Part III”.

Photo credit: Olivier Bruchez via StoolsFair / CC BY-SA

Important People, Important Discoveries, War: What Is It Good For? (Absolutely Nothin'!), World-Changing Inventions

A Ridiculously Brief History of Nuclear Fusion Research, Part I

I’m not gonna lie: nuclear fusion is a complex conundrum, and I will readily admit that I do not fully understand it. But, it’s an important scientific and historical concept nonetheless, and one deserving of at least a few minutes of your reading time. Follow along as we breeze all too quickly through the history of nuclear fusion research.

The Nineteen Hundred & Twenties

In 1920, English chemist and physicist Francis William “Big Frank” Aston discovered that four hydrogen atoms are, in total, slightly heavier than a single helium atom. This, of course, meant that net energy can be released by combining hydrogen atoms to form helium. This discovery was also mankind’s first look into the nuclear mechanism by which stars produce energy in such massive quantities.
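If you’d like to see the arithmetic, here’s a rough back-of-the-envelope version using modern atomic mass values (a bit more precise than what Aston had to work with):

\[
4\,m_{\mathrm{H}} \approx 4 \times 1.008\ \mathrm{u} = 4.032\ \mathrm{u}, \qquad m_{\mathrm{He}} \approx 4.003\ \mathrm{u}
\]
\[
\Delta m \approx 0.029\ \mathrm{u}\ (\text{about } 0.7\%), \qquad E = \Delta m\,c^{2} \approx 0.029 \times 931.5\ \mathrm{MeV} \approx 27\ \mathrm{MeV}
\]

That missing 0.7 percent of mass, converted into energy, is what keeps the sun in business.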

Throughout the decade, English astronomer, physicist, and mathematician Sir Arthur Stanley “Big Art” Eddington championed his own hypothesis that the proton-proton chain reaction* was the primary “engine” of the sun.

The Nineteen Hundred & Thirties

Things stayed pretty quiet until 1939, when German-born American physicist and future Nobel Prize winner (in physics, natch) Hans Bethe showed that beta decay* and quantum tunneling* in the sun’s core could convert one of a pair of fusing protons into a neutron. This reaction, of course, produces deuterium rather than a simple diproton, and deuterium, as we all know, goes on to fuse in further reactions for increased energy output.

The Nineteen Hundred & Forties

Thanks to World War II, the Manhattan Project became the world’s biggest nuclear fusion project in 1942. We all know how that ended.

Pesky monkeys!

The UK Atomic Energy Authority registered the world’s first patent for a fusion reactor in 1946. Invented by English physicist and Nobel laureate in physics Sir George Paget “Big George” Thomson and British crystallographer Moses “Big Moses” Blackman, it was the first detailed examination of the Z-pinch concept.*

In 1947, two teams of scientists in the United Kingdom performed a series of small experiments in nuclear fusion, expanding the size and scope of their experiments as they went along. Later experiments were inspired in part by the Huemul Project, undertaken by German expat scientist Ronald “Big Ron” Richter in Argentina in 1949.

The Early Nineteen Hundred & Fifties

The first successful manmade fusion device—the boosted fission weapon, which doesn’t sound like something you should worry about at all—was first tested in 1951. This miniature nuclear bomb (again, don’t worry about it, I’m sure it’s fine) used a small amount of fusion fuel to increase the rate and yield of a fission reaction.

New and “improved” versions of the device appeared in the years that followed. “Ivy Mike” in 1952 was the first example of a “true” fusion weapon, while “Castle Bravo” in 1954 was the first practical example of the technology. These devices used uncontrolled fusion reactions to release neutrons, which cause the atoms in the surrounding fission fuel to split apart almost instantaneously, increasing the effectiveness of explosive weapons. Unlike normal fission weapons (“normal” bombs), fusion weapons have no practical upper limit to their explosive yield.

"Ivy Mike" blowin' up real good, November 1952.

“Ivy Mike” blowin’ up real good, November 1952.

Spurred on by Richter’s findings (which were later found to be fake), James Leslie “Big Jim” Tuck, a physicist who had formerly worked with one of the UK teams but was by then based at Los Alamos, introduced the pinch concept to United States scientists. Tuck produced the excellently-named Perhapsatron, an early fusion power device based on the Z-pinch concept. The first Perhapsatron prototype was completed in 1953, and new and improved models followed periodically until research into the pinch concept more or less ended in the early ‘60s.

Be sure to join us next week for “A Ridiculously Brief History of Nuclear Fusion Research, Part II”.

* which we haven’t even remotely the time, energy, or intellect to get into here

Manhattan Project Photo credit: Manchester Library via Foter.com / CC BY-SA
Ivy Mike Photo credit: The Official CTBTO Photostream via Foter.com / CC BY

World-Changing Inventions

I(ndigo) Would Dye 4 U

Indigo is more than just a color and a prefix for the popular folk-rock duo Girls. It’s actually a natural dye extracted from plants that is now commonly used to color your blue jeans. Once upon a time, however, blue dyes were quite rare, and the process for extracting indigo ultimately proved to be an important discovery in textile history. Read on to learn more!

Made from the Best Indigofera Juice on Earth

Though indigo dye can be derived from a variety of plants, it is most commonly obtained from those in the Indigofera genus, especially Indigofera tinctoria—hence the name. These plants can be found in abundance throughout Asia and the Indian subcontinent, and it is therefore unsurprising that ancient Indians were the first to make extensive use of indigo dye.

The surprisingly pink Indigofera tinctoria plant.

I. tinctoria was first domesticated in India, and the colorant derived from it is amongst the oldest used for textile dyeing and printing. After it rose to prominence in India, indigo became common in what are now China, Japan, and other Asian nations.

India was not only the primary center for the actual, physical act of indigo dyeing in the ancient world, it was also the primary supplier of the pigment to Europe, dating as far back as the Greco-Roman days. In ancient Greece and Rome, indigo dye and anything dyed with indigo were considered luxury products. India is so closely associated with the indigo trade, in fact, that the Greek word for the dye—indikón—literally means “Indian”. Those lazy Romans reduced it to indicum, and the even lazier English eventually turned it into “indigo.”

Cuneiform tablets from the 7th century BCE provide a recipe for dyeing wool with indigo, showing just how far back the practice dates. (Though it likely goes back even further than that.) Ancient Romans used it for painting and as an ingredient in medicines and cosmetics. In the Middle Ages, a chemically-identical dye derived from the woad plant was used to mimic indigo dye, but true indigo was still viewed as superior and remained a luxury item.

Indigo from India To Go

This remained more or less the case until the late 1400s, when Portuguese explorer Vasco “Grande Vasco” da Gama “discovered” a sea route to India, opening up direct trade between the Indian and European markets and cutting out those pesky Persian and Greek middlemen who had driven up prices.

Expanding European empires of the day soon established numerous indigo plantations in their tropical territories to keep up with growing demand for the now-inexpensive dye. Jamaica, the Virgin Islands, and what is now South Carolina all had expansive indigo fields during this period. Those buzzkills in France and Germany quickly outlawed imported indigo to protect their struggling local woad dye industries.

Natural indigo continued to be massively popular for centuries to come. In 1897, over 2,700 square miles (7,000 square kilometers) of farmland worldwide were used to grow indigo-producing plants—nearly three times the size of the nation of Luxembourg. Over 19,000 tons of indigo were produced from these and other plant sources.

Indigoing Down

With continuing advances in organic chemistry came synthetic indigo pigments. By 1914, production from natural sources was down to a mere 1,000 tons worldwide. As of 2002, worldwide production of synthetic indigo had topped 17,000 tons.

Photo credit: museumdetoulouse via Foter.com / CC BY

Historical Science & Technology, World-Changing Inventions

Forged in the Fires of History

Forging is a metalworking process in which metal is shaped via localized compressive forces. These forces are applied via a hammer (or mechanical hammer nowadays) or a die. It is one of the oldest known metalworking processes; good ol’ hammer-and-anvil blacksmithing is a method of forging metal, and mechanical forging was one of the first uses developed for water power.

The Forge Awakens

The earliest archaeological evidence of metal forging dates back to roughly 4000 BCE. Most OG forgers used bronze and, later, iron to create tools, weapons, and other basic metal implements. History’s first forgers, however, were blingin’ outta control and used gold.

Actual photo of a 19th century blacksmith.

Over the centuries, humans more or less perfected the forging process. By the 19th Century CE, blacksmiths were producing high quality wrought iron forgings via the open die process, in which red hot metal is pressed into the desired shape and dimensions using appropriate hand tools. Working in teams—and with really, really big hammers—these smiths were able to produce custom forgings as large as 10 tons.

In 1856, the development of Bessemer steel proved to be a boon to metal forging. The Bessemer process provided a huge supply of low cost steel that made the mass-production of forged metal products possible.

Forging Onward

Like every other industry, metal forging (and metalworking in general) was reshaped by the Industrial Revolution. Better and more efficient equipment, such as steam-powered hammers, was developed, making forging a more versatile and more easily repeatable process.

Metal forging proved invaluable to the Allied war effort during World War II, and the huge increase in demand for forgings led to numerous innovations that helped the industry grow even larger. Advances in electrical technology, specifically electric heating, gave metalworkers even greater control of their processes for better quality products and faster production.

Many modern metalworkers produce their steel forgings using computer-controlled, hydraulically powered equipment. This highly specialized, precision operated equipment offers almost unlimited possibilities for creating complex forged parts. Today, the ancient process of metal forging is used to produce products for mining, aerospace, and everything in between.

Photo credit: ilkerender via Foter.com / CC BY-NC

Technology, World-Changing Inventions

Can’t Stoppler the Doppler

A NEXRAD Doppler radar tower.

You know it, you love it: it’s the Doppler weather radar! Without it, we’d never know if it was going to rain or snow or if the earth was about to crash into the sun… maybe not that last one, but you know what I mean. Where did this brilliant technology come from? Read on to learn more. Or don’t, it’s totally up to you.

You’re Thinking of REGULAR Weather Radar (Non-Doppler)

Though it might seem like the Doppler weather radar has been a staple of evening news forecasts since time immemorial, development of the technology did not begin until the late 1940s. US Armed Forces scientists, returning to civilian life (in most cases) after serving in World War II, set out to develop a peacetime application for the radar technology they had used in the field. Many had noted noise in their radar readings caused by various forms of precipitation, and these “echoes” ultimately formed the basis of Doppler weather radar technology.

In the late ‘40s, Air Force (and later MIT) scientist David Atlas developed the first operational weather radar. Canadian researchers known collectively as the Stormy Weather Group conducted successful studies on the effects of raindrop size distribution that would later evolve into radar reflectivity technology. In the UK, weather scientists continued to study radar echo patterns in all types of weather. EKCO, a UK-based company best known for producing radios and television sets, demonstrated a rudimentary version of Doppler weather radar technology in 1950. In the decades that followed, weather stations around the world installed reflectivity radars capable of measuring the position and intensity of precipitation.

Technology advanced quickly, as it so often does, and by 1964, the US National Severe Storms Laboratory (or NSSL) had been created to make full use of the increasingly complex incoming data. In 1973, the NSSL used a military-surplus ten-centimeter Doppler pulse radar to scan and film the complete life cycle of a tornado. From there, it was clear that Doppler radar had the potential to be a powerful predictive tool for forecasting severe weather events.
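The “Doppler” part works about the way you’d guess: the radar measures the frequency shift between the pulse it sends out and the echo that comes back, then converts that shift into the speed at which the precipitation is moving toward or away from the antenna. In its basic textbook form (the factor of two accounts for the signal’s round trip), with f_d the measured frequency shift and λ the radar’s wavelength:

\[
v_{r} = \frac{f_{d}\,\lambda}{2}
\]

For a ten-centimeter radar, a shift of 100 Hz works out to 100 × 0.1 / 2 = 5 meters per second of radial velocity.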

Adopting Doppler

Following the 1974 “Super Outbreak” (148 confirmed tornadoes across thirteen US states within 24 hours), which likely could have been predicted—and citizens then forewarned—using Doppler technology, the NSSL and the National Weather Service declared Doppler weather radar to be of crucial importance to severe weather forecasting.

Work on a nationwide Doppler network called NEXRAD (the WSR-88D, named for its 1988 design) began in the United States in the late 1980s, and the radars were installed across the country over the course of the 1990s. The Canadian Doppler Network was completed in 2004. Much of Europe had made the switch to Doppler technology by the early 2000s.

Photo credit: NOAA Photo Library via Foter.com / CC BY