Technology, World-Changing Inventions

Water Treatment Technology Through History

Civilization has changed in uncountable ways over the course of human history, but one factor remains the same: the need for clean drinking water. Every significant ancient civilization was established near a water source, but the quality of the water from these sources was often suspect. Evidence shows that humankind has been working to clean up their water and water supplies since as early as 4000 BCE.

Cloudiness and particulate contamination were among the factors that drove humanity’s first water treatment efforts; unpleasant taste and foul odors were likely driving forces, as well. Written records show ancient peoples treating their water by filtering it through charcoal, boiling it, straining it, and through other basic means. Egyptians as far back as 1500 BCE used alum to remove suspended particles from drinking water.

By the 1700s CE, filtration of drinking water was a common practice, though the efficacy of this filtration is unknown. More effective slow sand filtration came into regular use throughout Europe during the early 1800s.

As the 19th century progressed, scientists found a link between drinking water contamination and outbreaks of disease. Drs. John Snow and Louis Pasteur made significant scientific finds in regards to the negative effects microbes in drinking water had on public health. Particulates in water were now seen to be not just aesthetic problems, but health risks as well.

Slow sand filtration continued to be the dominant form of water treatment into the early 1900s. In 1908, chlorine was first used as a disinfectant for drinking water in Jersey City, New Jersey. Elsewhere, other disinfectants, such as ozone, were introduced.

The U.S. Public Health Service set federal regulations for drinking water quality starting in 1914, with expanded and revised standards being initiated in 1925, 1946, and 1962. The Safe Drinking Water Act was passed in 1974, and all fifty states quickly adopted its standards.

Water treatment technology continues to evolve and improve, even as new contaminants and health hazards in our water present themselves in increasing numbers. Modern water treatment is a multi-step process that combines multiple technologies. These include, but are not limited to, filtration systems, coagulants (which form larger, easier-to-remove particles called “floc” from smaller particulates), disinfectant chemicals, and industrial water softeners.
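
To make that sequence concrete, here is a minimal sketch (in Python) of a modern multi-step treatment train. The stage names, order, and removal rates are illustrative assumptions for demonstration only, not figures from any actual treatment plant.

    # A toy model of the multi-step treatment train described above.
    # All removal rates are illustrative assumptions, not real plant data.
    def treat(raw_water):
        """Pass rough water-quality indicators through simplified stages."""
        water = dict(raw_water)
        water["particulates"] *= 0.2        # coagulation: coagulants bind fines into "floc"
        water["particulates"] *= 0.05       # filtration: sand/membrane filters remove most floc
        water["microbes"] *= 0.001          # disinfection: chlorine or ozone inactivates microbes
        water["hardness_mg_per_L"] *= 0.3   # softening: ion exchange cuts dissolved hardness
        return water

    print(treat({"particulates": 100.0, "microbes": 1e6, "hardness_mg_per_L": 300.0}))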

For further information, please read:

Planned future articles on Sandy Historical will expand on some of the concepts mentioned here. Please visit this page again soon for links to further reading.

Important People, Important Discoveries, Technology, World-Changing Inventions

Danger! High Alessandro Volta-ge!

Chances are good that you’ve got a battery in your pocket right now (or your purse, if you’re a lady). You know, the one that powers your phone. Really, we couldn’t get much done in the course of the day without batteries: your car needs one, your laptop, your TV remote. The sheer volume of battery-operated devices we encounter in the average day is staggering. So, whom can we thank for this portable power technology? Who invented the battery as we know it?

The Mars Italian Volta

Alessandro Giuseppe Antonio Anastasio Volta (try saying that five times fast) was born on 15 February 1745, in the small town of Como in northern Italy. At the ripe old age of 29, he became a professor of physics at the Royal School in Como. At 30, he developed an improved version of the electrophorus, a static electricity-generating device that was a precursor to the battery.

He was a pioneer in the study of electrical capacitance, developed methods of studying both electrical charge and potential, and eventually discovered what would become known as Volta’s Law of Capacitance; the unit of electric potential, the volt, was later named in his honor. In 1779, he was named professor of experimental physics at the University of Pavia.
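
In modern notation (a summary in today’s terms rather than Volta’s own formulation), the relationship behind this capacitance work is the proportionality of charge and potential on a conductor:

    Q = C\,V \quad\Longrightarrow\quad V = \frac{Q}{C}

For a fixed capacitance C, the potential V rises in direct proportion to the stored charge Q.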

Inventing the First Modern Battery

In the 1780s, Volta’s countryman Luigi Galvani discovered what he called “animal electricity.” Galvani proposed that when two different metals were connected to each other and to a frog’s leg (the subject of his experiments), an electric current was generated; he suggested that the frog’s leg itself was the source of the electricity.

After conducting his own research on the matter, Volta discovered that the current was a result of the contact between dissimilar metals, and that the frog leg served as a conductor and detector rather than the source of the charge. In 1794, Volta demonstrated that when two different metals and a piece of brine-soaked cloth or paper were arranged in a circuit, an electrical current was produced.

Holy smokes, an actual voltaic pile. Whaddaya know?!

From there, Volta developed what came to be known as the voltaic pile, an early form of electric battery not too terribly different from those we use today. In his early experiments, Volta created cells from a series of wine goblets filled with brine and with the metal electrodes placed inside.

Later, Volta stacked multiple pairs of alternating copper and zinc discs, separated by sheets of cardboard soaked in brine. Upon connecting the top and bottom contacts with a wire, an electrical current flowed through both the discs and the wire. This, then, was the first electrical battery to produce its own electricity.
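
Because each copper-zinc cell contributes a roughly fixed voltage, the pile’s output simply scales with the number of disc pairs. A quick back-of-the-envelope sketch, assuming roughly 0.76 volts per brine-soaked copper-zinc cell (a commonly cited approximation, not a figure from this article):

    # Rough output of a voltaic pile: one cell = one copper/zinc pair
    # plus a brine-soaked separator. 0.76 V per cell is an assumed,
    # commonly cited approximation, not a measured historical value.
    def pile_voltage(num_cells, volts_per_cell=0.76):
        return num_cells * volts_per_cell

    for n in (10, 20, 40):
        print(f"{n} cells: about {pile_voltage(n):.1f} V")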

Photo credit: GuidoB / Foter / CC BY-SA

Science, Technology

Biosphere 2: Mission 1 (NOT Starring Pauly Shore)

Biosphere 2 is a 3.14-acre Earth systems science research facility in the Arizona desert, near Oracle. It was constructed between 1987 and 1991, and, at the time of its completion, contained numerous representative biomes: a rainforest, mangrove wetlands, an ocean with coral reef, a fog desert, a savannah grassland, an agricultural system, and a human habitat. The fully-enclosed, airtight system—still the largest of its kind—was famously used for two “closed missions” designed to study the interaction between humans, farming, technology, and nature, as well as to test the efficacy of the structure for space colonization.

Mission 1’s Crew

Biosphere 2’s first closed mission lasted exactly two years, from 26 September 1991 to 26 September 1993. It was manned by an eight-person crew of researchers: Director of Research Abigail Alling, Linda Leigh, Taber MacCallum, Mark Nelson, Jane Poynter, Sally Silverstone, Mark Van Thillo, and medical doctor Roy Walford.

Biosphere 2

Scientific Diet

Throughout Mission 1, the crew ate a low-calorie, nutrient-rich diet that Walford had developed through his prior research into extending the human lifespan through diet. The agricultural system within Biosphere 2 produced over 80 percent of the team’s total diet, including bananas, beans, beets, lablab, papayas, peanuts, rice, sweet potatoes, and wheat.

In the first year, the crew lost an average of 16 percent of their pre-entry body weight. This weight loss stabilized, and the crew regained much of the weight during the second year as caloric intake increased with larger crop volumes. Regular medical tests showed that the crew remained in excellent health throughout the mission, and lower cholesterol, lower blood pressure, and improved immune systems were noted across the board. Additionally, the crew’s metabolisms became more efficient at extracting nutrients from their food as they adapted to their unique diet.

Fauna of Mission 1

A number of animals were included in the mission, including goats, chickens, pigs, and fish. These creatures were contained in the special agricultural area to study the effects of the artificial environment on non-human animals. Numerous pollinating insects were also included to facilitate the continued growth of the plant life within Biosphere 2.

So-called “species-packing” was implemented to ensure that food webs and ecological functions could continue if some species did not survive. This proved prescient, as the fish were ultimately overstocked, causing many to die and clog the ocean area’s filtration systems. Native insects, such as ants and cockroaches, inadvertently sealed inside the facility, soon took over, killing many of the other insects. The invasive ants and cockroaches did perform much of the pollination needed to maintain plant life, however.

Flora & Biomes of Mission 1

As with the animals within Biosphere 2, several of its various biomes reacted differently than researchers had anticipated. Successes and failures arose in nearly equal measure.

The fog desert area turned into chaparral due to higher than expected levels of condensation. Plants in the rainforest and savannah areas suffered from etiolation and were weaker than expected, as the lack of natural wind caused a lack of stress wood growth.

Morning glories overgrew the rainforest and blocked the growth of other plants. The savannah itself was seasonally active, as expected, but the crew had to cut and store biomass to help regulate carbon dioxide within the facility.

Corals reproduced regularly in the ocean area, but the crew had to hand-harvest algae from the corals and manipulate the water’s pH levels to keep it from becoming too acidic.
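
The chemistry behind that intervention is general, not specific to Biosphere 2: carbon dioxide dissolving into water forms carbonic acid, which releases hydrogen ions and lowers the water’s pH, so the elevated CO2 inside the facility pushed the ocean toward acidity:

    CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-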

Biosphere 2 Today

The Biosphere 2 facility has been owned by the University of Arizona since 2011. It is now used for a wide range of research projects.

Photo credit: K e v i n / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Historical Science & Technology, Technology, World-Changing Inventions

Roman Bridges Are the Best Bridges

Cross any bridges recently? If so, thank an ancient Roman. Just kidding—they’re all dead. And while they didn’t actually invent bridges, they did discover better, more reliable ways to build them.

Stone + Concrete + Arches

Roman bridges were built from stone and/or concrete—a bridge-building material they pioneered. (Concrete was not used for bridge building again until the Industrial Revolution.) Many Roman bridges still exist today, proving the quality of their construction—the oldest Roman bridge still standing in Rome itself is the Pons Aemilius, built in 142 BCE. A total of 931 Roman bridges survive today, in 26 countries.

These bridges were/are often over five meters wide, with alternating header and stretcher layouts. Most slope upward slightly, allowing for water runoff. The stones are often linked with dovetail joints or, in some instances, metal bars. Most stones also have indentations to give gripping tools a better hold.

The earliest ancient Roman arch bridges, using the “ideal form” of the circle, were built as complete circles, with part of the arch underground. This design was later changed to the semi-circular arch; a variation, the segmental arch, was not uncommon. Other modified arch forms, such as the pointed arch, were also used, but rarely.

Roman bridges typically utilized matching, wedge-shaped primary arch stones. Single-arch spans and multiple-arch bridges were built in almost equal measure. The Limyra Bridge in modern Turkey features 26 individual segmental arches and spans over 1,000 feet.
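
The practical difference between the semicircular and segmental arch forms mentioned above comes down to the chord-sagitta relation: for a given span, a larger radius gives a flatter arch. A short illustrative calculation (the numbers are assumptions for demonstration, not measurements of any Roman bridge):

    import math

    def arch_rise(span, radius):
        """Rise (sagitta) of a circular arch with the given clear span and radius."""
        half = span / 2.0
        if radius < half:
            raise ValueError("radius must be at least half the span")
        return radius - math.sqrt(radius**2 - half**2)

    span = 10.0                       # an illustrative 10 m clear span
    print(arch_rise(span, span / 2))  # semicircular (radius = span/2): rise = 5.0 m
    print(arch_rise(span, 8.0))       # segmental (larger radius): rise ~ 1.8 m, a flatter arch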

A surviving Roman bridge spanning Köprülü Canyon in what is now Turkey.

Crossing Water

Roman bridges often spanned large bodies of water, usually at least 60 feet above the surface. Bridges were built over every major river in the Roman Empire except the Euphrates and the Nile. (The Nile River was not permanently bridged until 1902.)

Trajan’s Bridge, which crossed the Danube River, remained the longest bridge ever constructed for over 1,000 years. It was a multi-arch bridge featuring segmental arches made of wood and built on top of concrete piers over 130 feet high. The longest Roman bridge still in existence is the Puente Romano, at nearly 2,600 feet.

When spanning rivers with strong currents, or in times when fast army deployment was needed, the Romans often built pontoon bridges. No other civilization would successfully create a pontoon bridge until the 19th Century CE.

Photo credit: Anita363 / Foter / Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0)

Historical Science & Technology, Science

The Age of the Earth: Geology in the 17th & 18th Centuries

Geology has technically been a thing since the first caveman thought, upon picking up a rock, “Hey, this rock looks different from that one over there. I wonder why that is?” But it wasn’t until the 17th Century CE that scientists started using actual science to find some answers.

Geology & the Biblical Flood

In the 17th Century, most people in the “Christian world” still believed that the bible was an actual, factual historical document. From this, they extrapolated that the big rainstorm that got Russell Crowe all worked up had actually happened, and set out to prove it through science.

In searching for evidence to support this, scientists and researchers learned a good deal about the composition of the Earth and, perhaps more importantly, discovered fossils, and lots of ‘em. Perhaps unsurprisingly, the real information gleaned in this process was often significantly manipulated to support the idea of the Great Flood (as well as other Biblical nonsense). A New Theory of the Earth, written by William Whiston and first published in 1696, used Christian “reasoning” to “prove” that the Great Flood had not only happened, but that it was also solely responsible for creating the rock strata of the Earth’s crust.

Whiston’s book and further developments led to numerous heated debates between religion and science over the true origin of the Earth. The overall upside was a growth in interest in the makeup of our planet, particularly the minerals and other components found in its crust.

What created these mineral strata? A relatively easily explainable scientific process, or Jesus?

Minerals, Mining & More

As the 18th Century progressed, mining became increasingly important to the economies of many European countries. The importance of accurate knowledge about mineral ores and their distribution throughout the world increased accordingly. Scientists began to systematically study the earth’s composition, compiling detailed records on soil, rocks, and, most importantly, precious and semiprecious metals.

In 1774, the German geologist Abraham Gottlob Werner published his book, On the External Characteristics of Minerals. In it, Werner presented a detailed system by which specific minerals could be identified through external characteristics. With a more efficient method of identifying land where valuable metals and minerals could be found, mining became even more profitable. This economic potential made geology a popular area of study, which, in turn, led to a wide range of further discoveries.

Religion vs. Facts

Histoire Naturelle, published in 1749 by French naturalist Georges-Louis Leclerc, challenged the then-popular biblical accounts of the history of Earth supported by Whiston and other theologically-minded scientific theorists. After extensive experimentation, Leclerc estimated that the Earth was at least 75,000 years old, not the roughly 6,000 years suggested by biblical chronology. Immanuel Kant’s Universal Natural History and Theory of the Heavens, published in 1755, similarly described the earth’s history without any religious leanings.

The works of Leclerc, Kant, and others drew into serious question, for the first time, the true origins of the Earth itself. With biblical and religious influences taken out of the equation, geology turned a corner into legitimate scientific study.

By the 1770s, two very different geological theories about the formation of Earth’s rock layers gained popularity. The first, championed by Werner, hypothesized that the earth’s layers were deposits from a massive ocean that had once covered the whole planet (i.e. the biblical flood). Supporters of this theory were called Neptunists, after Neptune, god of the sea. In contrast, Scottish naturalist James Hutton’s theory argued that Earth’s layers were formed by the slow, steady solidification of a molten mass—a process that made our planet immeasurably old, far beyond the chronological timeframe suggested by the bible. Supporters of Hutton’s theory, known as Plutonists, believed that Earth’s continual volcanic processes were the main cause of its rock layers.

Photo credit: Taraji Blue / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Technology

Combating Corrosion Through History

Mankind has long fought against the forces of corrosion that deteriorate and destroy the natural materials we’ve worked so hard to forge into useful devices. The great Roman philosopher Pliny the Elder (23-79 CE) postulated that the rusting of iron was a punishment from the gods, as this most domestically-useful of metals was also used by mankind to create weapons of war.

Wood and paper, stone and earthen materials of all types, and especially metals are subject to the corrosive forces of nature at wildly varying rates, and humans have developed a wide range of methods for combating corrosion.

Early Solutions

One of the earliest examples of a manmade corrosion protection solution is the “antifouling” paint first created by ancient seafarers in the Fifth Century BCE. Caused by the bacterial decay of wood, fouling negatively impacted the performance of sailing ships by increasing drag and decreasing maximum speed. Early antifouling paints were made from a mixture of arsenic, sulfur, and Chian oil (similar to linseed oil). The first patent for an antifouling paint recipe was awarded to William Beale in 1625—his mixture included iron powder, copper, and cement.

In 1824, financed by the British Navy, Sir Humphry Davy developed a method of cathodic protection designed to protect the metal hulls of the Navy’s ships. Assisted by electrochemistry specialist Michael Faraday, Davy experimented on copper, iron, zinc, and other metals in a variety of saline solutions and studied the materials’ electrochemical reactions. From these tests, Davy determined that a small quantity of zinc or low-grade malleable iron could be used as a coating or covering for copper to prevent saltwater corrosion.

Sir Humphry Davy

Just over a decade later, Robert Mallet was commissioned by the British Association for the Advancement of Science to study the effects of seawater and “foul” river water at various temperatures on cast and wrought iron. Following extensive experimentation, Mallet discovered that zinc used as a hot-dipped galvanized coating (a method suggested by Davy’s findings) developed a thick layer of calciferous zinc oxide crystals that “retards or prevents its further corrosion and thus permits the iron to corrode.” Mallet then experimented with zinc alloy anodes, eventually finding that metals anodic to zinc, such as sodium, increased the metal’s corrosion resistance.
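
The principle behind both Davy’s and Mallet’s work is that, when two metals are electrically connected in an electrolyte, the one with the more negative electrode potential corrodes preferentially and spares the other. A minimal sketch using approximate textbook standard potentials (illustrative values only, not data from their experiments):

    # Approximate standard electrode potentials in volts (textbook values,
    # used only to illustrate sacrificial-anode selection).
    STANDARD_POTENTIAL_V = {"zinc": -0.76, "iron": -0.44, "copper": +0.34}

    def sacrificial_anode(metal_a, metal_b):
        """Return the metal that corrodes first (the more negative potential),
        protecting the other when the two are connected in an electrolyte."""
        return min((metal_a, metal_b), key=STANDARD_POTENTIAL_V.get)

    print(sacrificial_anode("zinc", "copper"))  # zinc protects a copper hull
    print(sacrificial_anode("zinc", "iron"))    # zinc also protects iron and steel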

Modern Methods

Today, there are multiple sub-fields of materials science dedicated to developing new ways to prevent corrosion. Special materials, such as stainless steel, and special processes, such as anodization, use microscopically-thin oxide layers to boost corrosion resistance. Specially-engineered corrosion protection coatings have been developed that can protect substrate materials against exterior forces such as friction, heat, chemicals, and oxidation (rust).

Photo credit: Royal Institution / iWoman / CC BY-NC-ND

Science

Give ‘Em an Inch

If you’re reading this in the US, Canada, or anywhere in the UK, you know the inch as a standard unit of length. If you’re reading it from anywhere else, you know it as the stupid thing that isn’t a centimeter, but should be. (Kind of. Wait. What?) But, either way, what you may not know is: where did the inch come from?

History’s Inchstories

The exact details of where the inch came from are a bit murky. The earliest surviving reference to the unit is a manuscript from 1120 CE, which itself describes the Laws of Æthelbert from the early 7th Century CE. This manuscript relates a law regarding the cost of wounds based on depth: one inch costs one shilling, two inches cost two shillings, and so on. Whether this cost refers to a fine for the inflictor of said wound or the cost of treating the wound is unclear, because of weird Olde English.

Around that time, one of several standard Anglo-Saxon units of length was the “barleycorn,” defined as “three grains of barley, dry and round, placed end to end lengthwise.” One inch was said to equal three barleycorns, a legal definition which persisted for several centuries (as did the use of the barleycorn as the base unit). Similar definitions can be found in contemporaneous English and Welsh legal tracts.
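
Taking the old legal definition at face value, the conversions are simple, and comparing against the modern inch shows how small the base unit was. A quick sketch (the 25.4 mm figure is the modern inch discussed later in this article; everything else follows from 3 barleycorns = 1 inch):

    BARLEYCORNS_PER_INCH = 3
    INCHES_PER_FOOT = 12
    MODERN_INCH_MM = 25.4

    def barleycorns_to_inches(n):
        return n / BARLEYCORNS_PER_INCH

    print(barleycorns_to_inches(3))                    # 1.0 inch
    print(barleycorns_to_inches(3 * INCHES_PER_FOOT))  # 12.0 inches, i.e. one foot
    print(MODERN_INCH_MM / BARLEYCORNS_PER_INCH)       # one barleycorn ~ 8.47 mm today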

Not to scale.

Attempts at Standardization

Since grains of barley are notoriously nonconformist, the traditional method of measuring made it impossible to truly standardize the unit. In 1814, Charles Butler, a math teacher at the Cheam School in Ashford with Headley, Hampshire, England, revisited the old “three grains of barley” measurement and established the barleycorn as the base unit of the English Long Measure System. All other units of length derived from this.

George Long, in his Penny Cyclopædia, published in 1842, observed that standard measures had made the barleycorn definition of an inch obsolete. Long’s writing was supported by law professor John Bouvier in his law dictionary of 1843. Bouvier wrote that “as the length of the barleycorn cannot be fixed, so the inch according to this method will be uncertain.” He noted that, as a “standard inch measure” was at this time kept in the Exchequer chamber at Guildhall, this unit should be the legal definition of the inch.

Modern Standardization… in 1959?!

Somehow, it was not until 19friggin59 that the current, internationally accepted length of an inch was established. This measurement, exactly 25.4 millimeters (with the yard defined as exactly 0.9144 meters), was adopted through the International Yard and Pound Agreement, which sounds ridiculous but was an actual thing.

Before this, there were various, slightly different inch measurements in use. In the UK and British Commonwealth countries, an inch was defined based on the Imperial Standard Yard. The US, meanwhile, had used the conversion factor of 39.37 inches to one meter since 1866.
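
How different were those pre-1959 definitions? Working through the arithmetic with the figures above, the US inch (1,000 mm divided by 39.37) and the international inch differ by only about two parts per million:

    us_inch_mm = 1000 / 39.37       # pre-1959 US definition: 39.37 inches per meter
    international_inch_mm = 25.4    # 1959 international definition

    print(us_inch_mm)                                      # ~25.4000508 mm
    print(us_inch_mm - international_inch_mm)              # ~0.0000508 mm
    print((us_inch_mm / international_inch_mm - 1) * 1e6)  # ~2 parts per million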

Photo credit: Biking Nikon SFO / Foter / Creative Commons Attribution 2.0 Generic (CC BY 2.0)

Historical Science & Technology, World-Changing Inventions

Origins of Steam Locomotion

As far back as the Ancient Greeks, railways were used to transport goods over long distances. The earliest railways relied on manpower to move their wheeled carts along the tracks. Later, horses were used, giving us the term “horsepower.” It was not until the late 1700s that the steam-powered locomotive was developed.

From Prototypes to World Changers

The first prototype of a steam locomotive was created by Scottish inventor William Murdoch in 1784. By the late 1780s, steamboat pioneer John Fitch had built the US’ first working model of a steam rail locomotive.

It was not until 1804, however, that the first full-scale, working railway steam locomotive was built. Richard Trevithick of the United Kingdom sent his steam locomotive on the world’s first railway journey on 21 February that year, traveling from the Pen-y-darren ironworks at Merthyr Tydfil to Abercynon in South Wales.

Trevithick’s design employed numerous innovative features, including the use of high pressure steam that significantly increased the engine’s efficiency while simultaneously decreasing its weight. The Newcastle area of northeast England became Trevithick’s proving ground for further experimentation in locomotive design.

Salamanca, the first successful two-cylinder steam locomotive, designed by Matthew Murray, debuted on the Middleton Railway in 1812. Puffing Billy, completed in 1814 by engineer William Hedley, is the oldest surviving steam locomotive, currently on display in London’s Science Museum.

Steam locomotion innovator George Stephenson first built the Locomotion for northeast England’s Stockton & Darlington Railway, then the world’s only public steam railway. In 1829, he built The Rocket, which subsequently won the Rainhill Trials and led to Stephenson becoming the world’s leading locomotive builder and engineer. His steam locomotives were used on railways throughout the UK, Europe, and the United States.

Stephenson’s Rocket, currently on display at the Science Museum of London.

Early U.S. Steam Locomotives

The Tom Thumb, built for the Baltimore & Ohio Railroad, was the first steam locomotive developed, built, and run in the United States, in 1830. Previously, locomotives for American railroads had been imported from Britain. One of these imported steam locomotives, the John Bull, remains the oldest still-operable engine-powered vehicle of any kind in the US.

Locomotives in Continental Europe

Continental Europe’s first railway service opened in Belgium in May 1835, running between Mechelen and Brussels. The railway’s first locomotive was The Elephant.

Germany’s first steam locomotive was designed by British engineer John Blenkinsop and built by Johan Friedrich Krigar in 1816. At first, the locomotive was strictly demonstrative, running on a circular track in the factory yard of the Royal Berlin Iron Foundry. It also became the first steam-powered locomotive used for passenger service, as the public was welcomed to ride, free of charge, in coaches pulled by the locomotive along its circular track. The first German-designed steam locomotive, the Beuth, was built in 1841 by August Borsig.

In Austria, the Emperor Ferdinand Northern Railway became the nation’s first steam railway in 1837, running between Vienna-Floridsdorf and Deutsch-Wagram. Austria is the home of the world’s oldest continually running steam locomotive—the GKB 671 has been in service since its debut in 1860.

Photo credit: DanieVDM / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Technology

Quantum Clock (Quantum Snooze Button Optional)

Everyone’s had their alarm clock fail them at some point. Maybe a storm knocked out the power while you were sleeping, or you forgot to plug in your phone and the battery died overnight. Next thing you know, you wake up at 10:13 and you were supposed to be at work hours ago. A quantum clock is a perfect—albeit completely impractical—solution.

Ions & Lasers & Absolute Zero—Oh My!

The quantum clock is an advanced, even more accurate variation of the atomic clock, which itself was first suggested by Lord Kelvin in 1879 and built by the US National Bureau of Standards (now the National Institute of Standards and Technology, or NIST) in 1945. Instead of the single mercury ion used in NIST’s earlier experimental optical atomic clock, quantum clocks use electromagnetic traps to isolate aluminum and beryllium ions together. Lasers are used to cool these ions to just above absolute zero.

Like an atomic clock, a quantum clock measures the time of ion vibration at an optical frequency using a UV laser. This frequency is 100,000 times higher than the microwave frequency utilized in NIST-F1, the cesium fountain atomic clock that serves as the United States’ official timekeeper.

NIST-F1: Only the third most accurate clock in the world.

Immeasurably Accurate

Quantum clocks are capable of dividing time into smaller units than atomic clocks do, allowing for even more precise time measurement. A quantum clock will lose approximately one second of accuracy every 3.4 billion years, making the devices roughly 37 times more accurate than existing international standards. The clocks’ accuracy is attributed, in part, to their insensitivity to background magnetic and electric fields, as well as temperature changes.
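
To put the “one second every 3.4 billion years” figure in more conventional metrology terms, it corresponds to a fractional error of roughly one part in 10^17 (a rough conversion, using an approximate year length):

    SECONDS_PER_YEAR = 3.156e7              # approximate
    years_per_second_lost = 3.4e9
    fractional_error = 1 / (years_per_second_lost * SECONDS_PER_YEAR)
    print(fractional_error)                 # ~9.3e-18, i.e. about 1 part in 10^17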

In fact, the NIST’s most recently developed quantum clock is so accurate that NIST researchers are unable to measure its ticks per second. The exact length of a second is defined by the cesium standard realized by NIST-F1, and therefore cannot be properly applied to the more accurate quantum clock.

Photo credit: Foter / Public Domain Mark 1.0

Technology

What the Heck is A Homopolar Generator?

Homopolar generator, unipolar generator, acyclic generator, disc dynamo—this device is known by many names. (For simplicity’s sake, we’ll stick with homopolar generator.) But, apart from the obvious—“generator” is right there in the name—just what the heck is this thing?

It’s a homopolar generator, of course.

Spinny Electrical Disc Thingamajig

This unique type of DC electrical generator consists of an electrically conductive flywheel (or a cylinder, in some models) that rotates in a plane perpendicular to a uniform static magnetic field. It has one electrical contact near the disc’s axis and another near the periphery. This setup produces a potential difference between the center of the disc and its rim (or between the ends of the cylinder). The electrical polarity depends on the direction of rotation and the orientation of the field.

Typically, homopolar generators produce low voltage, no more than a few volts, but high currents. Very large versions built for research purposes have been able to produce hundreds of volts. Interconnected systems of multiple generators have been able to produce even higher voltage. Because of very low internal resistance, the largest homopolar generators can source electrical current up to a million amperes.
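
Why so little voltage? For an idealized spinning disc in a uniform axial field, the open-circuit EMF is commonly estimated as V = ½·B·ω·R² (a textbook idealization, with the numbers below chosen purely for illustration), which stays around a volt even at high rotational speed:

    import math

    def faraday_disc_emf(b_tesla, radius_m, rpm):
        """Open-circuit EMF of an idealized conducting disc spinning in a
        uniform axial magnetic field: V = 0.5 * B * omega * R^2."""
        omega = rpm * 2 * math.pi / 60.0   # angular speed in rad/s
        return 0.5 * b_tesla * omega * radius_m**2

    # Illustrative numbers only: 0.5 T field, 10 cm disc radius, 3,000 rpm
    print(faraday_disc_emf(0.5, 0.1, 3000))   # ~0.79 V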

Homopolar generators can be used for applications such as welding and experimental railgun research. Because they are capable of storing energy over long periods, then releasing that stored energy in short bursts, they are also ideal for pulsed energy applications.

Faraday Disc Redux

The Faraday disc, or Faraday wheel, was an early type of homopolar generator invented by English scientist, physicist, and inventor Michael Faraday. Though the device was successful at producing electricity, Faraday’s design proved to be impractical. The homopolar generator is a modified, simplified configuration of the Faraday disc that produces roughly equal voltage, but much higher current.

A.F. Delafield received the first US patent for the general (disc) type of homopolar generator in May 1883. S.Z. de Ferranti and C. Batchelor also received separate US patents not long after.

Tesla’s Dynamo Electric Machine

Perhaps the most famous proponent of the homopolar generator was Nikola Tesla. He further improved upon the basic design, receiving a patent for his Dynamo Electric Machine. His device used an arrangement of two parallel discs, each with a separate, parallel shaft, which were joined by a metallic belt.

The discs generated opposite electric fields, causing current to flow from one shaft to the disc’s edge, across the belt to the edge of the other disc, then to the second shaft. This significantly reduced frictional losses caused by sliding contacts.

Photo credit: Foter / Public domain

Historical Science & Technology, World-Changing Inventions

He Who Smelted It Dealt It

To smelt is to produce a metal, such as silver, iron, or copper, from its ore. This extractive metallurgy process uses heat and a chemical reducing agent (such as coke or charcoal) to decompose the ore itself, while other elements are expelled as gases or reduced to slag, leaving only the desired metal behind.
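
In modern chemical terms, the reducing agent strips oxygen from the ore. A generic example for an iron-oxide ore reduced by charcoal (an illustrative textbook reaction, not a description of any specific ancient process):

    2\,Fe_2O_3 + 3\,C \;\longrightarrow\; 4\,Fe + 3\,CO_2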

The Mystery of the First Smelters

Exactly where, when, and how smelting was first discovered is unknown. Because the discovery of the process predated the invention of writing by several thousand years, there are no written records available. We do know that prehistoric humans were capable of smelting metals, with evidence dating back more than 8,000 years.

In ancient Europe, the first metals to be successfully smelted were tin and lead. These metals, being relatively soft and having relatively low melting points, could easily be smelted by placing their ores in a wood fire. As such, it is possible that the discovery was made accidentally.

Though evidence suggests that these materials may have been smelted earlier in history, the earliest artifacts of this process are cast lead beads created in Anatolia (now Turkey) in roughly 6500 BCE.

Smelting the Useful Metals

Due to their physical characteristics, the smelting of lead and tin had very little impact on human civilization. However, the discovery and use of so-called useful metals—copper and bronze—was significant, ushering in the Copper and Bronze Ages, respectively.

Because copper’s melting point is roughly 400°F above the temperature of a campfire, the development of copper smelting was surely no accident. It is believed that early copper smelting was performed using pottery kilns.
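
The rough numbers behind that claim (approximate values, for illustration only): copper melts at about 1,984°F (1,085°C), while an open wood fire typically tops out somewhere around 1,500-1,600°F, leaving a gap on the order of the 400°F cited above:

    1{,}984^{\circ}\text{F} - 1{,}600^{\circ}\text{F} \approx 400^{\circ}\text{F}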

The earliest smelted copper artifacts, found in modern Serbia, date back to approximately 5500 BCE.

Around 4200 BCE, smelters in Asia Minor created bronze by combining copper with arsenic, resulting in an alloy that is considerably harder than copper and, therefore, more useful. Because arsenic is a common impurity in copper ores, it is believed that the alloying process was discovered by mistake.

Roughly 1,000 years later, in the same region of the world, smelters found that alloying copper and tin produced a bronze material that was even harder and more durable than copper/arsenic bronze. It is believed that this discovery, too, was accidental. However, by 2000 BCE, tin was being mined for the sole purpose of bronze production.

Surviving Bronze Age swords.

For several millennia to come, bronze was the material of choice for the forging of weapons, armor, tools, agricultural implements, and household utensils such as saws and sewing needles. The mining of the raw materials used in bronze smelting contributed significantly to the development of trade networks throughout Europe and Asia.

I Am Iron (Smelting) Man

The origins of iron smelting are essentially unknown. The general consensus is that the process was first performed in what is now Turkey. Historical evidence found in Egypt suggests that iron smelting was known in the region as far back as 1100 BCE; additional evidence points to iron smelting being part of West African culture as early as 1200 BCE. This, then, gave rise to the Iron Age, which lasted until roughly 200 CE.

Wherever iron smelting originated, the process was generally the same in all cultures. Iron ore was smelted in bloomeries, a type of earthen furnace where temperatures could be regulated accurately enough to facilitate smelting iron without actually melting the material itself. This would create a spongy mass of metal known as a bloom, which was consolidated with hammers and good ol’ elbow grease. The oldest known example of a bloomery dates back to 930 BCE in modern Jordan.

In the Medieval period, the iron smelting process was refined. Bloomeries were replaced with blast furnaces which produced pig iron. Pig iron was then further refined—in a finery forge, of all things—to create forgeable bar iron. The end product was what we now call wrought iron. This iron smelting process was used, in essentially the same form, until the Industrial Revolution.

Photo credit: Foter / Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)