Technology, World-Changing Inventions

Water Treatment Technology Through History

Civilization has changed in uncountable ways over the course of human history, but one factor remains the same: the need for clean drinking water. Every significant ancient civilization was established near a water source, but the quality of the water from these sources was often suspect. Evidence shows that humankind has been working to clean up their water and water supplies since as early as 4000 BCE.

Cloudiness and particulate contamination were among the factors that drove humanity’s first water treatment efforts; unpleasant taste and foul odors were likely driving forces, as well. Written records show ancient peoples treating their water by filtering it through charcoal, boiling it, straining it, and through other basic means. Egyptians as far back as 1500 BCE used alum to remove suspended particles from drinking water.

By the 1700s CE, filtration of drinking water was a common practice, though the efficacy of this filtration is unknown. More effective slow sand filtration came into regular use throughout Europe during the early 1800s.

As the 19th century progressed, scientists found a link between drinking water contamination and outbreaks of disease. Drs. John Snow and Louis Pasteur made significant discoveries regarding the negative effects microbes in drinking water had on public health. Particulates in water were now seen as not just aesthetic problems, but health risks as well.

Slow sand filtration continued to be the dominant form of water treatment into the early 1900s. In 1908, chlorine was first used as a disinfectant for drinking water in Jersey City, New Jersey. Elsewhere, other disinfectants, such as ozone, were introduced.

The U.S. Public Health Service set federal regulations for drinking water quality starting in 1914, with expanded and revised standards being initiated in 1925, 1946, and 1962. The Safe Drinking Water Act was passed in 1974, and was quickly adopted by all fifty states.

Water treatment technology continues to evolve and improve, even as new contaminants and health hazards in our water present themselves in increasing numbers. Modern water treatment is a multi-step process that combines multiple technologies. These include, but are not limited to, filtration systems, coagulant chemicals (which clump smaller particulates into larger, easier-to-remove particles called “floc”), disinfectant chemicals, and industrial water softeners.
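To see how those stages fit together, here's a minimal sketch of a treatment train in Python. The stage names follow the steps described above, but the removal fractions and starting turbidity are purely illustrative placeholders, not figures from any real plant.

```python
# Illustrative sketch of a multi-step treatment train. The removal
# fractions and starting turbidity below are hypothetical placeholders,
# not measurements from any real treatment plant.

TREATMENT_TRAIN = [
    ("coagulation/flocculation", 0.70),  # coagulants clump fine particulates into "floc"
    ("sedimentation",            0.60),  # heavier floc settles out of the water
    ("filtration",               0.90),  # filters catch much of what remains
    ("disinfection",             0.00),  # chlorine/ozone target microbes, not turbidity
]

def treat(turbidity: float) -> float:
    """Apply each stage in order and return the remaining turbidity."""
    for stage, removal_fraction in TREATMENT_TRAIN:
        turbidity *= (1.0 - removal_fraction)
        print(f"after {stage:<25} turbidity = {turbidity:6.2f} NTU")
    return turbidity

treat(100.0)  # hypothetical raw-water turbidity of 100 NTU
```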

For further information, please read:

Planned future articles on Sandy Historical will expand on some of the concepts mentioned here. Please visit this page again soon for links to further reading.

Pseudoscience

Transmutation of Species: More or Less Than Meets the Eye?

Before Chuck Darwin developed his theory of natural selection and cracked the nut of evolution, many scientists (and the people who trusted them to do their science for them) followed the logic of transmutation of species, a.k.a. transformism. There was a good deal of opposition to this theory, and many prominent scientists of the 19th century could be found on either side of the debate.

Lamarck My Words…

French naturalist Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck, often helpfully shortened to simply Lamarck, first proposed the theory of the transmutation of species in his 1809 tome Philosophie Zoologique. Lamarck’s theory suggested that, rather than sharing a common ancestor, the simplest forms of life were created via spontaneous generation. He postulated that an innate life force drove species to become more complex over time.

While recognizing that many species were uniquely adapted to their respective environments, Lamarck suggested that the same life force also caused animals’ organs and plants’… whatever-the-plant-equivalent-of-organs-are to change based on how much or how little they’re used, thus creating more specialized species over successive generations.

The British Are Coming! The British Are Coming!

Concurrently, British surgeon Robert Knox and anatomist Robert “Research” Grant developed their own school of thought on comparative anatomy. Closely aligned with Lamarck’s French Transformationism approach, they further developed the idea of transmutation as well as evolutionism, and investigated homology to prove common descent.

Along with a student named Charles Darwin, Grant investigated the life cycle of marine animals. Darwin went on to study geology with professor Robert Jameson. In 1826, Jameson published an anonymous essay praising Lamarck’s “explanation” of higher animals evolving from “the simplest worms.”

Probably not what they meant.

In Eighteen-Hundred and Thirty-Seven, computing pioneer and almost-cabbage Charles Babbage published the Ninth Bridgewater Treatise. In it, he proposed that God had the foresight and the power to create laws (or, as Babbage put it, since he’s all computery, “programs”) that would produce new species at the appropriate times instead of dishing out a “miracle” each time a new species arose.

The Vestiges of the Natural History of Creation

Seven years later, Scottish publisher Robert Chambers published—anonymously—The Vestiges of the Natural History of Creation. This book proved to be both highly influential and extremely controversial, as it proposed an evolutionary hypothesis that explained the origins of life on Earth and the existence of our entire solar system. Chambers claimed that, by studying the fossil record, one could easily see a progressive ascent of animals. Current animals, he posited, branched off a main line that ultimately led to humans. Chambers’ theory suggested that species’ transmutations were part of a preordained plan woven into the very fabric of the universe.

Though slightly less stupid (in 21st-century retrospect) than Grant’s theories, Chambers’ implication that humans were merely the top rung of a predetermined evolutionary ladder, if you will, ruffled the feathers of both conservative thinkers and radical materialists. Numerous scientific inaccuracies were found in The Vestiges and roundly derided. Darwin lamented Chambers’ “poverty of intellect,” ultimately dismissing his book as no more than a “literary curiosity.” He would go on to publish his own since-proven-correct theory of evolution roughly fifteen years later.

Photo credit: dBnetco / Foter / CC BY-NC-ND

Historical Science & Technology, Technology

Z3: Thrice The Computer Z1 Was

Designed by the excellently-named German inventor Konrad Zuse, the Z3 was the world’s first working programmable and fully automatic digital computer. Due to the success of the Z3 (and its predecessors, the Z1 and Z2), Zuse is generally considered the father of the modern computer.

The Conclusion of the Epic Trilogy

Zuse designed and built the Z1 over the course of three years, from 1935 to 1938. The entirely mechanical Z1 was only operable for a few minutes at a time. Fellow German computer engineer Helmut Schreyer suggested that Zuse use different technology to improve the device—specifically, Schreyer’s recently developed “flip flop” circuits.

Instead, Zuse based the next iteration, the Z2, on relays. Completed in 1939, the Z2, too, worked correctly only sparingly, most notably during a demonstration to a large audience at the Deutsche Versuchsanstalt für Luftfahrt (now the German Aerospace Center) in Cologne in 1940. This demonstration was sufficient to convince board members of the DVL to partially finance Zuse’s next design.

Developed in secrecy at the behest of the German government, the Z3 was completed in 1941 in Berlin. The electromechanical device incorporated over 2000 relays and implemented a 22-bit word length operating at a clock frequency of roughly 5-10 Hz. Program codes for and data derived from the Z3 were stored on punched film. This external storage capability eliminated the need for rewiring when changing programs.
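To give a sense of what a 22-bit word length means in practice, the Z3's word is commonly described as a floating-point format with 1 sign bit, a 7-bit exponent, and a 14-bit mantissa. The little Python sketch below just slices a word into those fields; it makes no attempt to reproduce the Z3's actual exponent encoding or arithmetic.

```python
# The Z3's 22-bit word is commonly described as 1 sign bit, a 7-bit
# exponent, and a 14-bit mantissa. This sketch only slices a word into
# those fields; it does not reproduce the Z3's actual exponent encoding
# or its arithmetic.

WORD_BITS, EXP_BITS, FRAC_BITS = 22, 7, 14

def split_word(word: int) -> dict:
    """Split a 22-bit integer into sign, exponent, and mantissa fields."""
    assert 0 <= word < (1 << WORD_BITS), "not a 22-bit word"
    mantissa = word & ((1 << FRAC_BITS) - 1)
    exponent = (word >> FRAC_BITS) & ((1 << EXP_BITS) - 1)
    sign = word >> (FRAC_BITS + EXP_BITS)
    return {"sign": sign, "exponent": exponent, "mantissa": mantissa}

print(split_word(0b1_0000011_10110000000000))  # an arbitrary example word
```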

Faster and more reliable than its forebears, the Z3 was used by the German Aircraft Research Institute to perform statistical analyses of wing flutter. Zuse later requested funding to replace the relays with fully electronic switches, but the request was denied, as the wartime German government deemed the work unimportant to the war effort.

Speaking of World War II, the Z3 was, unfortunately, destroyed during an Allied Forces bombardment of Berlin. In the 1960s, Zuse’s company Zuse KG built a working replica, which is now on display at the Deutsches Museum in Munich.

The Deutsches Museum’s Z3 replica.

After the Z3’s destruction, Zuse went on to design and build the Z4, which became the world’s first commercially available digital computer. The finished product was completed just days before the end of WWII, and so was not accidentally destroyed.

Predecessors & Descendants

The programming language of the Z3 was based on the simple binary system, invented by German mathematician Gottfried Wilhelm von Leibniz in the late 1600s. Leibniz’s system was the basis of Boolean algebra, which American mathematician and electrical engineer Claude Shannon showed, in 1937, could be implemented directly with electrical relays, laying the groundwork for digital circuit design.
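Shannon's mapping, loosely put: relay contacts wired in series behave like Boolean AND, while contacts wired in parallel behave like OR. Here's a toy illustration of the idea (a sketch of the principle, not anything taken from Shannon's thesis):

```python
# Relay contacts wired in series pass current only if every contact is
# closed (Boolean AND); contacts wired in parallel pass current if any
# contact is closed (Boolean OR).

def series(*contacts: bool) -> bool:
    return all(contacts)   # AND

def parallel(*contacts: bool) -> bool:
    return any(contacts)   # OR

# Example circuit: a lamp lights when switch A is closed AND either
# switch B OR switch C is closed.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            lamp = series(a, parallel(b, c))
            print(f"A={a!s:<5} B={b!s:<5} C={c!s:<5} -> lamp {'on' if lamp else 'off'}")
```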

Colossus, the world’s first programmable electronic digital computer, was built in 1943 by British engineer Tommy Flowers. Colossus used vacuum tubes and a system of binary number representation similar to the Z3’s.

Half a decade later, the Manchester Baby and the EDSAC (Electronic Delay Storage Automatic Calculator) became the world’s first computers that could store program instructions and data in the same space. These devices utilized a stored-program concept developed by Konrad Zuse in 1936; Zuse’s patent application for this concept was rejected at the time, and he never revisited the idea.

Photo credit: Scott Beale / Foter / CC BY-NC

Historical Science & Technology

Wootz! There It Is! (sorry…)

Wootz steel is a type of steel (no kiddin’?!) that is known for its distinctive pattern of micro carbide bands inside a tempered martensite or pearlite base material. You’ve likely seen wootz steel used in the blades of fancy, bone-handled collectible knives or replica swords. It’s a truly unique material and a fine example of metallurgy as both science and art.

Ancient Indian Origins

As far as we can tell, wootz steel was invented (or, perhaps more accurately, the method of making it was discovered) in India. Ancient Greek and Roman writings dating back to the time of Alexander the Great’s quest to conquer India refer to high strength, high quality steel originating from the subcontinent.

The term “wootz” most likely derived from a misspelling of “wook,” itself an Anglicized version of ukku, the Kannada word for steel. Another theory suggests that “wootz” is a variation of uchcha, meaning “superior” in that same language.

The distinctive pattern of wootz steel.

Existing archaeological evidence points to modern-day Tamil Nadu being the first area to utilize the crucible steel process that results in wootz steel, starting circa Year Zero. When the steel reached Damascus, smiths in that city set about developing weapons made of the metal. Abdullah El Idrisi, one of the most prolific and well-known travelers of the 12th century, called Indian wootz steel the best in the world. Though the material was exported and traded all over the known world, it was most famous and widely-used in the Middle East.

Big in Europe

By the 17th century, legends of wootz steel and Damascus swords—made from the material and notorious for their sharpness and toughness—reached Europe. The continent’s collective scientific community was understandably intrigued by these tales, and the methods of making wootz steel were soon observed and adopted by European blacksmiths.

Prior to the arrival of wootz steel, the use of high-carbon alloy steels was unknown in Europe. The crucible technique used to produce it proved crucial to the development of English, French, and Russian metallurgical practices.

After a good run of a few hundred years in Europe, wootz steel began to fall out of favor. Knowledge of how to make the material died out in roughly 1700, and Britain expressly prohibited its manufacture in 1866.

Photo credit: awrose / Foter / CC BY-NC-ND

Historical Science & Technology, The Science of Film, Music & Art

The Water Organ: OG Keyboard Instrument

Music is an art; the design and construction of musical instruments is a science. Here, we’ll discuss the history of the water organ, one of the oldest, if not the oldest, keyboard instruments known to man or ape. These pipe organ-style instruments are powered by air that is pushed through the pipes by water pressure, with the water supplied either by a natural source, such as a waterfall, or by a manual pump. Unlike other types of pipe organs, water organs have no bellows, blowers, or compressors.

The Steinway of Antiquity

Written descriptions of water organs, a.k.a. hydrauli (singular: hydraulis), are found in texts dating back as far as the 3rd century BCE. Remains of water organs dated to 288 CE have been found. In Greek and Roman times, they were highly regarded by philosophers and ordinary dumdums alike, although the Talmud states that the instrument is not appropriate for the Jerusalem Temple.

Some ancient models used solar heat to transfer water from one closed tank to another, producing compressed air to sound the pipes. Byzantine and Arab musicians developed an automatic, hydraulic version, as well as a “long distance” water organ that could be heard up to sixty miles away.

In the Renaissance period, water organs were regarded as magical and metaphysical instruments by certain scientific and religious groups. Water organs, “played” by hydraulic automation, were placed in gardens, conservatories, and the like, delighting onlookers with music, and, often, an array of complex automatons, including dancing figures and “flying” birds, that were powered by air as it escaped the organs’ pipes.

A partially restored water organ, dated to the 1st century BCE.

Post-Renaissance Hydrauli

Following the Renaissance, water organs became immensely popular throughout Europe. Dozens of hydrauli were built throughout Italy, and by the 17th century, the instrument had made its way to England and elsewhere. Cornelius Drebbel built an elaborate model for King James I; Prince Henry commissioned several from builder Salomon de Caus.

De Caus later built several additional hydrauli at Heidelberg Castle in Germany after the marriage of Princess Elizabeth and Prince Friedrich V. These featured some of the most intricate waterworks ever devised for the instrument. In France, the Francini brothers constructed several extravagant water organs at Saint-Germain-en-Laye and at the palace of Versailles.

As the 17th century wound to a close, water organs began to fall out of favor, as upkeep was costly and few instrument builders with knowledge of how to maintain and repair them remained. By 1920, not a single working hydraulis could be found anywhere in Europe.

The Cadillac of Water Organs

Perhaps the grandest and most famous hydraulis in history was built by Lucha Clericho circa 1569-72 in Tivoli, in what is now Italy. Standing over twenty feet high, it was built under a massive arch and fed by a huge waterfall. Of its golden timbres, G.M. Zappi wrote, in 1576, “When somebody gives the order to play, at first one hears trumpets […] and then there is a consonance.” This water organ was designed to play music on its own, and was capable of auto-playing at least three separate pieces; it also had a keyboard that allowed for manual playing.

Photo credit: Saxphile / Foter / CC BY-NC-SA

Important People, Important Discoveries, Technology, World-Changing Inventions

Danger! High Alessandro Volta-ge!

Chances are good that you’ve got a battery in your pocket right now (or your purse, if you’re a lady). You know, the one that powers your phone. Really, we couldn’t get much done in the course of the day without batteries: your car needs one, your laptop, your TV remote. The sheer volume of battery-operated devices we encounter in the average day is staggering. So, whom can we thank for this portable power technology? Who invented the battery as we know it?

The Mars Italian Volta

Alessandro Giuseppe Antonio Anastasio Volta (try saying that five times fast) was born on 15 February 1745, in the small town of Como in northern Italy. At the ripe old age of 29, he became a professor of physics at the Royal School in Como. At 30, he developed an improved version of the electrophorus, a static electricity-generating device that was a precursor to the battery.

He was a pioneer in the study of electrical capacitance, developed methods of studying both electrical charge and potential, and eventually discovered what would become known as Volta’s Law of Capacitance. (The volt, the unit of electric potential, was later named in his honor.) In Seventeen-Hundred and Seventy-Nine, he was named professor of experimental physics at the University of Pavia.
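In modern notation, the proportionality Volta studied between a conductor's charge and its potential is written Q = CV, with the capacitance C as the constant of proportionality. A quick sketch, using arbitrary illustrative values rather than anything Volta actually measured:

```python
# Modern statement of the proportionality Volta studied: Q = C * V,
# where the capacitance C is fixed for a given conductor. The numbers
# here are arbitrary illustrations, not historical measurements.

capacitance_f = 2e-6  # a hypothetical 2-microfarad conductor

for charge_c in (1e-6, 2e-6, 4e-6):         # charge in coulombs
    potential_v = charge_c / capacitance_f  # V = Q / C
    print(f"Q = {charge_c:.0e} C  ->  V = {potential_v:.2f} V")
```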

Inventing the First Modern Battery

In the 1780s, Volta’s countryman Luigi Galvani discovered what he called “animal electricity.” Galvani proposed that when two different metals were connected in series with a frog’s leg (as he used in his experiments) and to each other, electric current was generated; he suggested that the frog leg was the source of the electricity.

After conducting his own research on the matter, Volta discovered that the current was a result of the contact between dissimilar metals, and that the frog leg served as a conductor and detector rather than the source of the charge. In 1794, Volta demonstrated that when two different metals and a piece of brine-soaked cloth or paper were arranged in a circuit, an electrical current was produced.

Holy smokes, an actual voltaic pile. Whaddaya know?!

From there, Volta developed what came to be known as the voltaic pile, an early form of electric battery not too terribly different from those we use today. In his early experiments, Volta created cells from a series of wine goblets filled with brine and with the metal electrodes placed inside.

Later, Volta stacked multiple pairs of alternating copper and zinc discs, separated by sheets of cardboard soaked in brine. Upon connecting the top and bottom contacts with a wire, an electrical current flowed through both the discs and the wire. This, then, was the first electrical battery capable of producing a steady, continuous current.
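Because the pairs in a pile are stacked in series, their voltages simply add. A back-of-the-envelope sketch, assuming a ballpark figure of about one volt per copper-zinc-brine cell (the real value depends on the metals and electrolyte used):

```python
# In a voltaic pile, each copper-zinc pair contributes a roughly fixed
# voltage, and pairs stacked in series simply add. The per-pair value
# below is an assumed ballpark figure, not a measured one.

VOLTS_PER_PAIR = 1.0  # assumed ~1 V per copper/zinc/brine cell

def pile_voltage(num_pairs: int) -> float:
    """Total EMF of a pile of series-connected copper-zinc pairs."""
    return num_pairs * VOLTS_PER_PAIR

for n in (1, 10, 20, 40):
    print(f"{n:>3} pairs -> about {pile_voltage(n):.0f} V")
```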

Photo credit: GuidoB / Foter / CC BY-SA

Science, Technology

Biosphere 2: Mission 1 (NOT Starring Pauly Shore)

Biosphere 2 is a 3.14-acre Earth systems science research facility in the Arizona desert, near Oracle. It was constructed between 1987 and 1991, and, at the time of its completion, contained numerous representative biomes: a rainforest, mangrove wetlands, an ocean with coral reef, a fog desert, a savannah grassland, an agricultural system, and a human habitat. The fully-enclosed, airtight system—still the largest of its kind—was famously used for two “closed missions” designed to study the interaction between humans, farming, technology, and nature, as well as to test the efficacy of the structure for space colonization.

Mission 1’s Crew

Biosphere 2’s first closed mission lasted exactly two years, from 26 September 1991 to 26 September 1993. It was manned by an eight-person crew of researchers: Director of Research Abigail Alling, Linda Leigh, Taber MacCallum, Mark Nelson, Jane Poynter, Sally Silverstone, Mark Van Thillo, and medical doctor Roy Walford.

Biosphere 2

Scientific Diet

Throughout Mission 1, the crew ate a low-calorie, nutrient-rich diet Walford had developed through prior research into extending the human lifespan through diet. The agricultural system within Biosphere 2 produced over 80 percent of the team’s total diet, including bananas, beans, beets, lablab, papayas, peanuts, rice, sweet potatoes, and wheat.

In their first year, they lost an average of 16 percent of their pre-entry bodyweight. This weight loss stabilized and the crew gained much of their weight back during the second year as caloric intake increased with larger crop volumes. Regular medical tests showed that the crew remained in excellent health throughout the mission, and lower cholesterol, lower blood pressure, and improved immune systems were noted across the board. Additionally, the crew’s metabolisms became more efficient at extracting nutrients from their food as they adapted to their unique diet.

Fauna of Mission 1

A number of animals were included in the mission, including goats, chickens, pigs, and fish. These creatures were contained in the special agricultural area to study the effects of the artificial environment on non-human animals. Numerous pollinating insects were also included to facilitate the continued growth of the plant life within Biosphere 2.

So-called “species-packing” was implemented to ensure that food webs and ecological functions could continue if some species did not survive. This proved prescient, as the fish were ultimately overstocked, causing many to die and clog the ocean area’s filtration systems. Native insects, such as ants and cockroaches, inadvertently sealed inside the facility, soon took over, killing many of the other insects. The invasive ants and cockroaches did perform much of the pollination needed to maintain plant life, however.

Flora & Biomes of Mission 1

As with the animals within Biosphere 2, several of its various biomes reacted differently than researchers had anticipated. Successes and failures arose in nearly equal measure.

The fog desert area turned into chaparral due to higher than expected levels of condensation. Plants in the rainforest and savannah areas suffered from etiolation and were weaker than expected, as the lack of natural wind caused a lack of stress wood growth.

Morning glories overgrew the rainforest and blocked the growth of other plants. The savannah itself was seasonally active, as expected, but the crew had to cut and store biomass to help regulate carbon dioxide within the facility.

Corals reproduced regularly in the ocean area, but the crew had to hand-harvest algae from the corals and manipulate the water’s pH levels to keep it from becoming too acidic.

Biosphere 2 Today

The Biosphere 2 facility has been owned by the University of Arizona since 2011. It is now used for a wide range of research projects.

Photo credit: K e v i n / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Historical Science & Technology, Technology, World-Changing Inventions

Roman Bridges Are the Best Bridges

Cross any bridges recently? If so, thank an ancient Roman. Just kidding—they’re all dead. And while they didn’t actually invent bridges, they did discover better, more reliable ways to build them.

Stone + Concrete + Arches

Roman bridges were built from stone and/or concrete—a bridge-building material they pioneered. (Concrete was not used for bridge building again until the Industrial Revolution.) Many Roman bridges still exist today, proving the quality of their construction; the oldest Roman bridge still standing in Rome (Italy) is the Pons Aemilius, built in 142 BCE. A total of 931 Roman bridges survive today, in 26 countries.

These bridges were/are often over five meters wide, with alternating header and stretcher layouts. Most slope upward slightly, allowing for water runoff. The stones are often linked with dovetail joints or, in some instances, metal bars. Most stones also have indentations to give gripping tools a better hold.

The earliest ancient Roman arch bridges, using the “ideal form” of the circle, were built as complete circles, with part of the arch underground. This design was later changed to the semi-circular arch; a variation, the segmental arch, was not uncommon. Other modified arch forms, such as the pointed arch, were also used, but rarely.
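The geometry behind these arch forms is plain circle geometry: an arch with span s and rise h is cut from a circle of radius r = (s² + 4h²) / (8h). A semicircular arch is the special case where the rise equals half the span; a segmental arch uses a shallower rise cut from a larger circle. A quick sketch:

```python
def arch_radius(span: float, rise: float) -> float:
    """Radius of the circle an arch is cut from, given its span (chord)
    and rise (sagitta): r = (span**2 + 4 * rise**2) / (8 * rise)."""
    return (span**2 + 4 * rise**2) / (8 * rise)

# Semicircular arch: the rise is half the span, so the radius is too.
print(arch_radius(10.0, 5.0))  # -> 5.0

# Segmental arch: a shallower rise means the arch is cut from a larger
# circle, giving a flatter profile for the same span.
print(arch_radius(10.0, 2.0))  # -> 7.25
```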

Roman bridges typically utilized matching, wedge-shaped primary arch stones. Single-arch spans and multiple-arch bridges were built in almost equal measure. The Limyra Bridge in modern Turkey features 26 individual segmental arches and spans over 1,000 feet.

A surviving Roman bridge spanning Köprülü Canyon in what is now Turkey.

Crossing Water

Roman bridges often spanned large bodies of water, usually at least 60 feet above the surface. Bridges were built over every major river in the Roman Empire except the Euphrates and the Nile. (The Nile River was not permanently bridged until 1902.)

Trajan’s Bridge, which crossed the Danube River, remained the longest bridge ever constructed for over 1,000 years. It was a multi-arch bridge featuring segmental arches made of wood and built on top of concrete piers over 130 feet high. The longest Roman bridge still in existence is the Puente Romano, at nearly 2,600 feet.

When spanning rivers with strong currents, or in times when fast army deployment was needed, the Romans often built pontoon bridges.

Photo credit: Anita363 / Foter / Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0)

Historical Science & Technology, Science

The Age of the Earth: Geology in the 17th & 18th Centuries

Geology has technically been a thing since the first caveman thought, upon picking up a rock, “Hey, this rock looks different from that one over there. I wonder why that is?” But it wasn’t until the 17th Century CE that scientists actually started using actual science to find some answers.

Geology & the Biblical Flood

In the 17th Century, most people in the “Christian world” still believed that the bible was an actual, factual historical document. From this, they extrapolated that the big rainstorm that got Russell Crowe all worked up had actually happened, and set out to prove it through science.

In searching for evidence to support this, scientists and researchers learned a good deal about the composition of the Earth and, perhaps more importantly, discovered fossils, and lots of ‘em. Perhaps unsurprisingly, the real information gleaned in this process was often significantly manipulated to support the idea of the Great Flood (as well as other Biblical nonsense). A New Theory of the Earth, written by William Whiston and first published in 1696, used Christian “reasoning” to “prove” that the Great Flood had not only happened, but that it was also solely responsible for creating the rock strata of the Earth’s crust.

Whiston’s book and further developments led to numerous heated debates between religion and science over the true origin of the Earth. The overall upside was a growth in interest in the makeup of our planet, particularly the minerals and other components found in its crust.

What created these mineral strata? A relatively easily explainable scientific process, or Jesus?

Minerals, Mining & More

As the 18th Century progressed, mining became increasingly important to the economies of many European countries. The importance of accurate knowledge about mineral ores and their distribution throughout the world increased accordingly. Scientists began to systematically study the earth’s composition, compiling detailed records on soil, rocks, and, most importantly, precious and semiprecious metals.

In 1774, the German geologist Abraham Gottlob Werner published his book, On the External Characteristics of Minerals. In it, Werner presented a detailed system by which specific minerals could be identified through external characteristics. With a more efficient method of identifying land where valuable metals and minerals could be found, mining became even more profitable. This economic potential made geology a popular area of study, which, in turn, led to a wide range of further discoveries.

Religion vs. Facts

Histoire Naturelle, published in 1749 by French naturalist Georges-Louis Leclerc, challenged the then-popular biblical accounts of the history of Earth supported by Whiston and other theologically-minded scientific theorists. After extensive experimentation, Leclerc estimated that the Earth was at least 75,000 years old, not the 4,000-5,000 years the bible suggests. Immanuel Kant’s Universal Natural History and Theory of Heaven, published in 1755, similarly described the earth’s history without any religious leanings.

The works of Leclerc, Kant, and others drew into serious question, for the first time, the true origins of the Earth itself. With biblical and religious influences taken out of the equation, geology turned a corner into legitimate scientific study.

By the 1770s, two very different geological theories about the formation of Earth’s rock layers gained popularity. The first, championed by Werner, hypothesized that the earth’s layers were deposits from a massive ocean that had once covered the whole planet (i.e. the biblical flood). For whatever reason, supporters of this theory were called Neptunists. In contrast, Scottish naturalist James Hutton’s theory argued that Earth’s layers were formed by the slow, steady solidification of a molten mass—a process that made our planet immeasurably old, far beyond the chronological timeframe suggested by the bible. Supporters of Hutton’s theory, known as Plutonists, believed that Earth’s continual volcanic processes were the main cause of its rock layers.

Photo credit: Taraji Blue / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Technology

Combating Corrosion Through History

Mankind has long fought against the forces of corrosion that deteriorate and destroy the natural materials we’ve worked so hard to forge into useful devices. The great Roman philosopher Pliny the Elder (23-79 CE) postulated that the rusting of iron was a punishment from the gods, as this most domestically-useful of metals was also used by mankind to create weapons of war.

Wood and paper, stone and earthen materials of all types, and especially metals are subject to the corrosive forces of nature, at wildly varying rates, and humans have developed a wide range of methods for combating them.

Early Solutions

One of the earliest examples of a manmade corrosion protection solution is the “antifouling” paint first created by ancient seafarers in the Fifth Century BCE. Caused by the bacterial decay of ships’ wooden hulls, fouling negatively impacted the performance of sailing ships by increasing drag and decreasing maximum speed. Early antifouling paints were made from a mixture of arsenic, sulfur, and Chian oil (similar to linseed oil). The first patent for an antifouling paint recipe was awarded to William Beale in 1625—his mixture included iron powder, copper, and cement.

In 1824, financed by the British Navy, Sir Humphry Davy developed a method of cathodic protection designed to protect the metal hulls of the Navy’s ships. Assisted by electrochemistry specialist Michael Faraday, Davy experimented on copper, iron, zinc, and other metals in a variety of saline solutions and studied the materials’ electrochemical reactions. From these tests, Davy determined that a small quantity of zinc or low-grade malleable iron could be used as a coating or covering for copper to prevent saltwater corrosion.
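The principle Davy exploited is that when two dissimilar metals are electrically connected in an electrolyte such as seawater, the metal with the more negative electrode potential corrodes preferentially and spares the other. A rough sketch using textbook standard reduction potentials (real-world seawater values differ somewhat):

```python
# Textbook standard reduction potentials, in volts versus the standard
# hydrogen electrode. Real seawater potentials differ somewhat.
STANDARD_POTENTIAL_V = {
    "zinc":   -0.76,
    "iron":   -0.44,
    "copper": +0.34,
}

def sacrificial_anode(metal_a: str, metal_b: str) -> str:
    """Of two coupled metals, the one with the more negative potential
    acts as the anode and corrodes first, protecting the other."""
    return min((metal_a, metal_b), key=STANDARD_POTENTIAL_V.__getitem__)

print(sacrificial_anode("zinc", "copper"))  # zinc sacrifices itself for copper
print(sacrificial_anode("zinc", "iron"))    # zinc also protects iron (galvanizing)
```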

Sir Humphry Davy

Just over a decade later, Robert Mallet was commissioned by the British Association for the Advancement of Science to study the effects of seawater and “foul” river water at various temperatures on cast and wrought iron. Following extensive experimentation, Mallet discovered that zinc used as a hot-dipped galvanized coating (a method suggested by Davy’s findings) developed a thick layer of calciferous zinc oxide crystals that “retards or prevents its further corrosion and thus permits the iron to corrode.” Mallet then experimented with zinc alloy anodes, eventually finding that metals anodic to zinc, such as sodium, increased the metal’s corrosion resistance.

Modern Methods

Today, there are multiple sub-fields of materials science dedicated to developing new ways to prevent corrosion. Special materials, such as stainless steel, and special processes, such as anodization, use microscopically-thin oxide layers to boost corrosion resistance. Specially-engineered corrosion protection coatings have been developed that can protect substrate materials against such exterior forces as friction, heat, chemicals, and oxidization (rust).

Photo credit: Royal Institution / iWoman / CC BY-NC-ND

Science

Give ‘Em an Inch

If you’re reading this in the US, Canada, or anywhere in the UK, you know the inch as a standard unit of length. If you’re reading it from anywhere else, you know it as the stupid thing that isn’t a centimeter, but should be. (Kind of. Wait. What?) But, either way, what you may not know is: where did the inch come from?

History’s Inchstories

The exact details of where the inch came from are a bit murky. The earliest surviving reference to the unit is a manuscript from 1120 CE, which itself describes the Laws of Æthelbert from the early 7th Century CE. This manuscript relates a law regarding the cost of wounds based on depth: a wound one inch deep costs one shilling, two inches costs two shillings, and so on. Whether this cost refers to a fine for the inflictor of said wound or the cost of treating the wound is unclear, because of weird Olde English.

Around that time, one of several standard Anglo-Saxon units of length was the “barleycorn,” defined as “three grains of barley, dry and round, placed end to end lengthwise.” One inch was said to equal three barleycorn, a legal definition which persisted for several centuries (as did the use of the barleycorn as the base unit). Similar definitions can be found in contemporaneous English and Welsh legal tracts.

Not to scale.

Attempts at Standardization

Since grains of barley are notoriously nonconformist, the traditional method of measuring made it impossible to truly standardize the unit. In 1814, Charles Butler, a math teacher at the Cheam School in Ashford with Headley, Hampshire, England, revisited the old “three grains of barley” measurement and established the barleycorn as the base unit of the English Long Measure System. All other units of length derived from this.

George Long, in his Penny Cyclopædia, published in 1842, observed that standard measures had made the barleycorn definition of an inch obsolete. Long’s writing was supported by law professor John Bouvier in his law dictionary of 1843. Bouvier wrote that “as the length of the barleycorn cannot be fixed, so the inch according to this method will be uncertain.” He noted that, as a “standard inch measure” was at this time kept in the Exchequer chamber at Guildhall, this unit should be the legal definition of the inch.

Modern Standardization… in 1959?!

Somehow, it was not until 19friggin59 that the current, internationally accepted length of an inch was established. This measurement, exactly 25.4 millimeters, was adopted through the International Yard and Pound Agreement (which also set the yard at exactly 0.9144 meters), which sounds ridiculous but was an actual thing.

Before this, there were various, slightly different inch measurements in use. In the UK and British Commonwealth countries, an inch was defined based on the Imperial Standard Yard. The US, meanwhile, had used the conversion factor of 39.37 inches to one meter since 1866.
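The gap between that older U.S. definition and the 1959 international inch is tiny but real, and easy to check:

```python
# International inch (1959): exactly 25.4 mm.
INTL_INCH_MM = 25.4

# Older U.S. definition: 39.37 inches per meter, i.e. 1 inch = 1000/39.37 mm.
OLD_US_INCH_MM = 1000 / 39.37

print(f"international inch: {INTL_INCH_MM:.7f} mm")
print(f"old U.S. inch:      {OLD_US_INCH_MM:.7f} mm")
print(f"difference:         {(OLD_US_INCH_MM - INTL_INCH_MM) * 1e6:.1f} nm per inch")
```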

Photo credit: Biking Nikon SFO / Foter / Creative Commons Attribution 2.0 Generic (CC BY 2.0)