Technology, World-Changing Inventions

The Cleanest Rooms in History

A cleanroom—or clean room—is just that: a very, very clean room. More specifically, a cleanroom is a controlled environment in which environmental pollutants and particulates are kept at extremely low levels via air filtration and purification, amongst other means. Cleanrooms are critical to the manufacture of a large number of products, from pharmaceuticals to semiconductors.

Compared to conditions in the early days of mass manufacturing, almost any modern room is clean. True clean rooms, though, are held to a higher standard. But who first developed the cleanroom as we know it? Whence did these havens of cleanliness originate? Read on to learn more!

Big Willy Whitfield Cleans Up

Considerable historical evidence shows that rudimentary contamination control efforts were being made in hospital operating rooms as early as the mid-19th century CE. Building off earlier discoveries by Louis “Big Lou” Pasteur, British surgeon Joseph “Big Joe” Lister was the first to introduce sterile and antiseptic surgical measures.

Lister’s efforts only affected the medical procedures themselves, however, and did not encompass the entire room in which said procedures were being performed. The concept of the modern cleanroom was developed by an American physicist named Willis “Big Willy” Whitfield.

Whitfield’s invention was spurred by the need for high precision manufacturing during World War II, which required clean environments to ensure the quality and reliability of military instrumentation. Previously, dirty production environments had contributed to poor performance and malfunctions in bomb sights, aircraft bearings, and other equipment crucial to the war effort.

Then known as “controlled assembly areas,” early clean rooms often suffered problems with airborne particulates and unpredictable airflows. Whitfield, an employee of Sandia National Laboratories, created an effective solution through the use of a constant, highly filtered airflow that flushed out impurities in the atmosphere. Air was filtered using high-efficiency particulate air (HEPA) filtration devices developed during the previous decade.

Whitfield’s first clean room prototype appeared in 1960, well after the war had ended but just in time for the space race, which would create a huge market for cleanroom technology. By the mid-1960s, more than $50 billion worth of cleanrooms had been installed throughout the United States and the world.

NASA workers operating in a cleanroom environment.

Today’s Cleanrooms

Modern cleanrooms have continued to improve on Whitfield’s technology, and can keep particulate levels as low as twelve particles (0.3 μm in diameter or smaller) per cubic meter. (Ambient air in a typical urban environment contains roughly 35 million such particles per cubic meter.)
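To put those figures in perspective, here is a quick back-of-the-envelope calculation (a minimal Python sketch using only the numbers quoted above; real cleanliness ratings depend on the ISO 14644-1 class and the particle sizes being counted):

```python
# Back-of-the-envelope comparison using the figures quoted above
# (illustrative only; actual limits depend on the cleanroom class and particle size).
ambient_particles_per_m3 = 35_000_000   # typical urban air, per the paragraph above
cleanroom_particles_per_m3 = 12         # the cleanest modern cleanrooms

cleanliness_factor = ambient_particles_per_m3 / cleanroom_particles_per_m3
print(f"A top-tier cleanroom is roughly {cleanliness_factor:,.0f}x cleaner than city air")
# -> roughly 2,916,667x cleaner
```

In other words, the air in the cleanest modern cleanrooms carries roughly three million times fewer particles than the air outside.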

Modern clean rooms can enclose thousands of square meters, and use extensive filtration and airlock systems to establish and maintain cleanliness. Specialized HVAC systems control humidity levels, and ionizers are used to prevent electrostatic discharge (ESD) and similar hazards. A modular clean room can be used for temporary or permanent operations, or can be relocated as production processes require.

Photo credit: NASA Goddard Photo and Video via Scandinavian / CC BY

Important People, Important Discoveries, World-Changing Inventions

A Ridiculously Brief History of Nuclear Fusion Research, Part III

Part I can be found here, whilst Part II can be found here.

The Nineteen Hundred & Eighties

In 1983, the NOVETTE laser was completed at Lawrence Livermore, followed a year later by the multi-beam NOVA laser. Thanks to massive advances in laser technology throughout the decade, by 1989 NOVA would be capable of producing 120 kilojoules of infrared light in a single nanosecond pulse.

Scientists at the Laboratory for Laser Energetics at the University of Rochester developed frequency-tripling crystals that would transform infrared laser beams into ultraviolet beams. Further laser amplification was developed in 1985 by Canadian scientist Donna “Big Donna” Strickland and French scientist Gérard “Big Gerry” Mourou. Their “chirping” technique stretched a single laser pulse in time, spreading its wavelengths apart, amplified the now lower-intensity pulse, and then recompressed it back into a single short, far more powerful pulse. Chirped pulse amplification, as the process came to be known, was instrumental in further technological advances, particularly for weaponized fusion.
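For the curious, here is a minimal numerical sketch of the chirped pulse amplification idea: stretch a short pulse in time, amplify it while its peak power is low, then recompress it. This is illustrative Python/NumPy only; the pulse duration, dispersion, and gain values are invented for the demo and are not taken from Strickland and Mourou’s actual setup.

```python
import numpy as np

# Minimal sketch of chirped pulse amplification (CPA). All numbers here are
# invented for illustration; they do not describe any real laser system.

N, dt = 2**14, 1e-15                         # 16,384 samples, 1 fs spacing
t = (np.arange(N) - N // 2) * dt
tau = 30e-15                                  # ~30 fs transform-limited pulse
pulse = np.exp(-t**2 / (2 * tau**2))          # field envelope of the short pulse

w = 2 * np.pi * np.fft.fftfreq(N, dt)         # angular frequency grid (rad/s)
spectrum = np.fft.fft(np.fft.ifftshift(pulse))

# Stretcher: add quadratic spectral phase (a "chirp"), spreading the pulse in time
gdd = 1e-26                                   # group-delay dispersion, s^2
stretched = np.fft.fftshift(np.fft.ifft(spectrum * np.exp(0.5j * gdd * w**2)))

# Amplifier: boost the long, low-peak-power pulse (a short pulse at this energy
# would damage the gain medium)
amplified = stretched * 1e3

# Compressor: remove exactly the phase the stretcher added, restoring a short,
# now far more intense pulse
amp_spec = np.fft.fft(np.fft.ifftshift(amplified))
compressed = np.fft.fftshift(np.fft.ifft(amp_spec * np.exp(-0.5j * gdd * w**2)))

def peak_power(field):
    return float(np.max(np.abs(field)) ** 2)

print(f"peak power while amplifying : {peak_power(amplified):.3g}")
print(f"peak power after compression: {peak_power(compressed):.3g}")
```

The compressed pulse comes out roughly an order of magnitude more intense than the pulse that passed through the “amplifier,” which is the whole trick: the gain medium only ever sees the long, gentle version.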

Pew pew!

In 1987, data collected by NASA’s Voyager 2 probe as it passed by Uranus was analyzed by Akira Hasegawa. Hasegawa noted that, in a dipolar magnetic field, fluctuations compressed plasma sans the usual energy loss, an observation that would become the basis for the Levitated Dipole branch of fusion technology and research.

The Nineteen Hundred & Nineties

The world’s first controlled release of fusion power took place at the Culham Centre for Fusion Energy in Oxfordshire, England, in 1991 at the Joint European Torus, the world’s largest operational magnetic confinement plasma physics experiment. Thankfully, this release of fusion power was intentional and did not annihilate anything at all.

Beginning in 1992 and continuing throughout the decade, numerous advocates for laser fusion technology drummed up support for ongoing research. In the late ‘90s, the United States Congress approved funding for the US Naval Research Laboratory (NRL) to continue their work in the field.

In 1995, a massive fusor—a device which uses an electric field to heat ions to conditions suitable for nuclear fusion—was built at the University of Wisconsin-Madison. Called HOMER, the device is still in operation today. Smaller, similar fusors were also built around this time at the University of Illinois at Urbana-Champaign and in Europe.

1996 saw Sandia National Laboratories giving the public its first look at the Z-machine, a device that discharges 18 million amperes of electrical current through a liner of tungsten wires in less than 100 nanoseconds, generating a magnetic pulse that implodes the wires into a hot, dense plasma. The Z-machine is an effective way to test the very high energy, very high temperature conditions (up to 2 BILLION degrees Fahrenheit) of nuclear fusion.

The levitated dipole fusion device was developed by a team of Columbia University and MIT researchers in the late ‘90s. Consisting of a superconducting electromagnet floating in a saucer-shaped vacuum chamber, the device swirls plasma around the chamber and fuses it along the center axis.

The Two Thousand & Aughts

In early 2002, Rusi Taleyarkhan published the findings of his team at the Oak Ridge National Laboratory. Taleyarkhan reported that he and his cohorts had recorded measurements of neutron and tritium output consistent with successful fusion, following a series of acoustic cavitation experiments. When these results were later discovered to have been falsified, Taleyarkhan was found guilty of misconduct by the Office of Naval Research and debarred from receiving federal funding for more than two years.

A machine capable of producing fusion and small enough to fit “on a lab bench” was developed by a group of researchers at UCLA in 2005. The device used lithium tantalate to generate sufficient voltage to smash deuterium atoms together, though the process generated no net power.

In the early- to mid-2000s, researchers at MIT started investigating the possibility of using fusors, similar to the UCLA team’s device, for powering and propelling vehicles in space. In separate trials, a team from Phoenix Nuclear Labs developed a fusor that could be used as a neutron source for medical isotope production.

To infinity and BEYOND!

In 2008, an exceptionally brainy 14-year-old whippersnapper by the name of Taylor Wilson achieved successful nuclear fusion using a homemade fusor.

The Two Thousand & Tens

In 2012, a published paper demonstrated a method of dense plasma focus that could achieve temperatures of 1.8 billion degrees Celsius. This temperature is sufficient for boron fusion, and the method produced fusion reactions that occurred almost exclusively within a contained plasmoid (a cage of current and plasma trapped in a magnetic field), which is a necessary condition for generating net power.

Lockheed Martin’s Skunk Works announced the development of a high-beta fusion reactor in October 2014. As the above-mentioned MIT team had postulated, such a compact fusion reactor could make higher-velocity, lower-cost space travel, including deep space exploration, possible. Skunk Works researchers hope to have a functioning prototype of a 100-megawatt version built by 2017, and to have the device ready for regular operation by 2022.

And then it’s now and I don’t know what happens next!

Laser Photo credit: melissa.meister via DesignHunt / CC BY-SA
Space Exploration Photo credit: NASA Goddard Photo and Video via Foter.com / CC BY

Important People, Important Discoveries, War: What Is It Good For? (Absolutely Nothin'!), World-Changing Inventions

A Ridiculously Brief History of Nuclear Fusion Research, Part II

Part I can be found here.

The Mid to Late Nineteen Hundred & Fifties

Hungarian-born American theoretical physicist (and, later, father of the hydrogen bomb) Edward “Big Ed” Teller, working on “Project Matterhorn” at the newly-established Princeton Plasma Physics Laboratory, suggested at a group meeting that any nuclear fusion system that confined plasma within concave fields was destined to fail. Teller stated that, based on his research, the only way to achieve a stable plasma configuration was via convex fields, or a “cusp” configuration.

Not THAT Matterhorn, dang it!

Following Teller’s remarks, most of his cohorts on Project Matterhorn (which would soon be renamed “Project Sherwood”) quickly wrote up papers stating that Teller’s concerns did not apply to the devices they had been working on. Most of these chaps were working with pinch machines, which did not rely on externally applied magnetic fields. However, this rush of papers was quickly followed by a piece by Martin David “Diamond Dave” Kruskal and Martin “Big Marty” Schwarzschild, which demonstrated the inherent deficiencies of pinch machine designs.

A new-and-improved pinch device, incorporating Kruskal and Schwarzschild’s suggestions, began operating in the UK in 1957. In early ’58, British physicist Sir John “Big John” Cockcroft announced that this machine, dubbed ZETA, had successfully achieved fusion. However, US physicists soon disproved this claim, showing that the neutrons ZETA produced were the result of instabilities in the plasma rather than of true thermonuclear fusion. ZETA was decommissioned a decade later.

The first truly successful controlled fusion experiment was conducted at Los Alamos National Laboratory later in 1958. Using a pinch machine and a cylinder of deuterium, scientists were able to generate magnetic fields that compressed plasma to 15 million degrees Celsius, then squeezed the gas, fused it, and produced neutrons.

The Nineteen Hundred & Sixties

In 1962, scientists at Lawrence Livermore National Laboratory used newly-developed laser technology to produce laser fusion. This process involves imploding a target using laser beams, making it probably the coolest scientific procedure in human history.

In 1967, researchers at that same laboratory developed the magnetic mirror, a magnetic confinement device used to trap high energy plasma via a magnetic field. This device consisted of two large magnets arranged so as to create strong individual fields within them and a weaker, connected field betwixt them. Plasma introduced into the between-magnet area would bounce off the stronger fields and return to the middle.

In Novosibirsk, Russia (then the USSR) in 1968, Andrei “Big Drei” Sakharov and his research team produced the world’s first quasistationary fusion reaction. Much of the scientific community was dubious, but further investigation by British researchers confirmed Sakharov et al.’s claims. This breakthrough led to the development of numerous new fusion devices, as well as the abandonment of others as their designs were repurposed to more closely replicate Sakharov’s team’s device.

The Nineteen Hundred & Seventies

John “Johnny Nucks” Nuckolls first developed the concept of ignition in 1972. Ignition, in this case, is a fusion chain reaction in which superheated helium created during fusion reheats the fuel and starts more reactions. Nuckolls hypothesized that this process would require a one kilojoule laser, prompting the creation of the Central Laser Facility in the UK in 1976.

Project PACER, carried out at Los Alamos throughout the mid-‘70s, explored the possibility of a fusion power system that would detonate small hydrogen bombs in an underground cavity. Project PACER was the only concept for a fusion energy source that could operate with existing technology. However, as it also required a vast, ongoing supply of nuclear bombs, it was ultimately deemed unfeasible.

Tune in next week for “A Ridiculously Brief History of Nuclear Fusion Research, Part III”.

Photo credit: Olivier Bruchez via StoolsFair / CC BY-SA

Important People, Important Discoveries, War: What Is It Good For? (Absolutely Nothin'!), World-Changing Inventions

A Ridiculously Brief History of Nuclear Fusion Research, Part I

I’m not gonna lie: nuclear fusion is a complex conundrum, and I will readily admit that I do not fully understand it. But, it’s an important scientific and historical concept nonetheless, and one deserving of at least a few minutes of your reading time. Follow along as we breeze all too quickly through the history of nuclear fusion research.

The Nineteen Hundred & Twenties

In 1920, English chemist and physicist Francis William “Big Frank” Aston discovered that four hydrogen atoms have a slightly greater total mass than one helium atom. This, of course, meant that net energy can be released by combining hydrogen atoms to form helium. This discovery was also mankind’s first look into the nuclear mechanism by which stars produce energy in such massive quantities.
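If you want to see roughly how much energy that missing mass represents, here is a small back-of-the-envelope Python calculation using standard published atomic masses (the exact figures below are my addition for illustration, not Aston’s original measurements):

```python
# A quick check of Aston's observation: four hydrogen atoms weigh slightly more
# than one helium atom, and that mass difference is released as energy (E = mc^2).
# Atomic masses in unified atomic mass units (u); 1 u is equivalent to ~931.494 MeV.
m_hydrogen = 1.007825   # mass of one hydrogen-1 atom, in u
m_helium   = 4.002602   # mass of one helium-4 atom, in u
u_to_mev   = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = 4 * m_hydrogen - m_helium       # ~0.0287 u "goes missing"
energy_released = mass_defect * u_to_mev      # ~26.7 MeV per helium atom formed
fraction = mass_defect / (4 * m_hydrogen)     # ~0.7% of the starting mass

print(f"mass defect     : {mass_defect:.6f} u")
print(f"energy released : {energy_released:.1f} MeV per fusion")
print(f"mass converted  : {fraction:.2%} of the starting mass")
```

Converting well under one percent of the mass still yields about 26.7 MeV per helium atom formed, which hints at why stars can shine for billions of years.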

Throughout the decade, English astronomer, physicist, and mathematician Sir Arthur Stanley “Big Art” Eddington championed his own hypothesis that the proton-proton chain reaction* was the primary “engine” of the sun.

The Nineteen Hundred & Thirties

Things stayed pretty quiet until 1939, when German physicist and future Nobel Prize winner (in physics, natch) Hans Bethe verified a theory that showed that beta decay* and quantum tunneling* in the sun’s core could potentially convert protons into neutrons. This reaction, of course, produces deuterium rather than a simple diproton, and deuterium, as we all know, then fuses in further reactions for increased energy output.

The Nineteen Hundred & Forties

Thanks to World War II, the Manhattan Project became the world’s biggest nuclear fusion project in 1942. We all know how that ended.

Pesky monkeys!

The UK Atomic Energy Authority registered the world’s first patent for a fusion reactor in 1946. Invented by English physicist and future Nobel Laureate in physics Sir George Paget “Big George” Thomson and British crystallographer Moses “Big Moses” Blackman, it was the first detailed examination of the Z-pinch concept.*

In 1947, two teams of scientists in the United Kingdom performed a series of small experiments in nuclear fusion, expanding the size and scope of their experiments as they went along. Later experiments were inspired in part by the Huemul Project undertaken by German expat scientist Ronald “Big Ron” Richter in Argentina in 1949.

The Early Nineteen Hundred & Fifties

The first successful manmade fusion device—the boosted fission weapon, which doesn’t sound like something you should worry about at all—was first tested in 1951. This miniature nuclear bomb (again, don’t worry about it, I’m sure it’s fine) used a small amount of fusion fuel to increase the rate and yield of a fission reaction.

New and “improved” versions of the device appeared in the years that followed. “Ivy Mike” in 1952 was the first example of a “true” fusion weapon, while “Castle Bravo” in 1954 was the first practical example of the technology. These devices used uncontrolled fusion reactions to release neutrons, which cause the atoms in the surrounding fission fuel to split apart almost instantaneously, increasing the effectiveness of explosive weapons. Unlike normal fission weapons (“normal” bombs), fusion weapons have no practical upper limit to their explosive yield.

"Ivy Mike" blowin' up real good, November 1952.

“Ivy Mike” blowin’ up real good, November 1952.

Spurred on by Richter’s findings (which were later found to be fake), James Leslie “Big Jim” Tuck, a physicist formerly working with one of the UK teams but by then working in Los Alamos, introduced the pinch concept to United States scientists. Tuck produced the excellently-named Perhapsatron, an early fusion power device based on the Z-pinch concept. The first Perhapsatron prototype was completed in 1953, and new and improved models followed periodically until research into the pinch concept more or less ended in the early ‘60s.

Be sure to join us next week for “A Ridiculously Brief History of Nuclear Fusion Research, Part II”.

* which we haven’t even remotely the time, energy, or intellect to get into here

Manhattan Project Photo credit: Manchester Library via Foter.com / CC BY-SA
Ivy Mike Photo credit: The Official CTBTO Photostream via Foter.com / CC BY

World-Changing Inventions

I(ndigo) Would Dye 4 U

Indigo is more than just a color and a prefix for the popular folk-rock duo Girls. It’s actually a natural dye extracted from plants that is now commonly used to color your blue jeans. Once upon a time, however, blue dyes were quite rare, and the process for extracting indigo ultimately proved to be an important discovery in textile history. Read on to learn more!

Made from the Best Indigofera Juice on Earth

Though indigo dye can be derived from a variety of plants, it is most commonly obtained from those in the Indigofera genus, especially Indigofera tinctoria—hence the name. These plants can be found in abundance throughout Asia and the Indian subcontinent, and it is therefore unsurprising that ancient Indians were the first to make extensive use of indigo dye.

The surprisingly pink Indigofera tinctoria plant.

I. tinctoria was first domesticated in India, and the colorant derived from it is amongst the oldest used for textile dyeing and printing. After it rose to prominence in India, indigo became common in what are now China, Japan, and other Asian nations.

India was not only the primary center for the actual, physical act of indigo dyeing in the ancient world, it was also the primary supplier of the pigment to Europe, dating as far back as the Greco-Roman days. In ancient Greece and Rome, indigo dye and anything dyed with indigo were considered luxury products. India is so closely associated with the indigo trade, in fact, that the Greek word for the dye—indikón—literally means “Indian”. Those lazy Romans reduced it to indicum, and the even lazier English eventually turned it into “indigo.”

Cuneiform tablets from the 7th century BCE provide a recipe for dyeing wool with indigo, showing just how far back the practice dates. (Though it likely goes back even further than that.) Ancient Romans used it for painting and as an ingredient in medicines and cosmetics. In the Middle Ages, a chemically-identical dye derived from the woad plant was used to mimic indigo dye, but true indigo was still viewed as superior and remained a luxury item.

Indigo from India To Go

This remained more or less the case until the late 1400s, when Portuguese explorer Vasco “Grande Vasco” da Gama “discovered” a sea route to India, opening up direct trade between the Indian and European markets and cutting out those pesky Persian and Greek middlemen who had driven up prices.

Expanding European empires of the day soon established numerous indigo plantations in their tropical territories to keep up with growing demand for the now-inexpensive dye. Jamaica, the Virgin Islands, and what is now South Carolina all had expansive indigo fields during this period. Those buzzkills in France and Germany quickly outlawed imported indigo to protect their struggling local woad dye industries.

Natural indigo continued to be massively popular for centuries to come. In 1897, over 2,700 square miles (7,000 square kilometers) of farmland worldwide were used to grow indigo-producing plants—nearly three times the size of the nation of Luxembourg. Over 19,000 tons of indigo were produced from these and other plant sources.

Indigoing Down

With continuing advances in organic chemistry came synthetic indigo pigments. By 1914, production from natural sources was down to a mere 1,000 tons worldwide. As of 2002, worldwide production of synthetic indigo had topped 17,000 tons.

Photo credit: museumdetoulouse via Foter.com / CC BY