Technology, World-Changing Inventions

Water Treatment Technology Through History

Civilization has changed in uncountable ways over the course of human history, but one factor remains the same: the need for clean drinking water. Every significant ancient civilization was established near a water source, but the quality of the water from these sources was often suspect. Evidence shows that humans have been working to clean up their water supplies since as early as 4000 BCE.

Cloudiness and particulate contamination were among the factors that drove humanity’s first water treatment efforts; unpleasant taste and foul odors were likely driving forces, as well. Written records show ancient peoples treating their water by filtering it through charcoal, boiling it, straining it, and employing other basic means. As far back as 1500 BCE, Egyptians used alum to remove suspended particles from drinking water.

By the 1700s CE, filtration of drinking water was a common practice, though the efficacy of this filtration is unknown. More effective slow sand filtration came into regular use throughout Europe during the early 1800s.

As the 19th century progressed, scientists found a link between drinking water contamination and outbreaks of disease. Drs. John Snow and Louis Pasteur made significant discoveries regarding the negative effects that microbes in drinking water had on public health. Particulates in water were now seen not just as aesthetic problems, but as health risks as well.

Slow sand filtration continued to be the dominant form of water treatment into the early 1900s. In 1908, chlorine was first used as a disinfectant for drinking water in Jersey City, New Jersey. Elsewhere, other disinfectants like ozone were introduced.

The U.S. Public Health Service set federal regulations for drinking water quality starting in 1914, with expanded and revised standards being initiated in 1925, 1946, and 1962. The Safe Drinking Water Act was passed in 1974, and was quickly adopted by all fifty states.

Water treatment technology continues to evolve and improve, even as new contaminants and health hazards in our water present themselves in increasing numbers. Modern water treatment is a multi-step process that involves a combination of multiple technologies. These include, but are not limited to, filtration systems, coagulants (which bind smaller particulates into larger, easier-to-remove particles called “floc”), disinfectant chemicals, and industrial water softeners.
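
As a rough illustration of how those technologies fit together, here is a minimal sketch in Python of a conventional treatment train. The stage names, their ordering, and the treat function are illustrative assumptions for this sketch, not details drawn from this article or from any particular utility’s process.

```python
# A minimal sketch of a conventional treatment train (illustrative only).
STAGES = [
    "coagulation/flocculation",  # coagulants bind fine particles into larger "floc"
    "sedimentation",             # the heavier floc settles out of the water
    "filtration",                # remaining particles are strained out
    "disinfection",              # chlorine, ozone, or UV inactivates microbes
    "softening",                 # hardness minerals are removed where needed
]

def treat(source: str) -> str:
    """Chain the stage labels together to show the pipeline ordering."""
    for stage in STAGES:
        source = f"{source} -> {stage}"
    return source

print(treat("raw water"))
```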

For further information: planned future articles on Sandy Historical will expand on some of the concepts mentioned here. Please visit this page again soon for links to further reading.

Historical Science & Technology, Science

A Brief History of Cataract Surgery

Cataract surgery is a surgery in which a cataract is removed. Nailed it! More specifically, cataract surgery is the removal of the human eye’s natural lens after the lens has developed an opacification (a.k.a. a cataract) that causes impairment or loss of vision. While we tend to think of “advanced” medical procedures such as this as relatively modern developments, cataract surgery has been performed for thousands of years.

Couching in Ancient India

Sushruta, a physician in ancient India (ca. 800 BCE), is the first doctor known to have performed cataract surgery. In this procedure, known as “couching,” the cataract, or kapha, was not actually removed.

First, the patient would be sedated, but not rendered unconscious. He/she would be held firmly and advised to stare at his/her nose. Then, a barley-tipped curved needle was used to push the kapha out of the eye’s field of vision. Breast milk was used to irrigate the eye during the procedure. Doctors were instructed to use their left hand to perform the procedure on affected right eyes, and the right hand to treat left eyes.

Even drawings of cataract surgery look super unpleasant.

When possible, the cataract matter was maneuvered into the sinus cavity, and the patient could expel it through his/her nose. Following the procedure, the eye would be soaked with warm, clarified butter and bandaged, using additional delicious butter as a salve. Patients were advised to avoid coughing, sneezing, spitting, belching, or shaking during and after the operation.

Couching was later introduced to China from India during the Sui dynasty (581-618 CE). It was first used in Europe circa 29 CE, as recorded by the Roman encyclopedist Aulus Cornelius Celsus. Couching continued to be used in India and Asia throughout the Middle Ages. It is still used today in parts of Africa.

Suction Procedures

In the 2nd Century CE, Greek physician Antyllus developed a surgical method of removing cataracts that involved the use of suction. After creating an incision in the eye, a hollow bronze needle and lung power were used to extract the cataract. This method was an improvement over couching, as it actually removed the cataract and, therefore, eliminated the possibility of it migrating back into the patient’s field of vision.

In his Book of Choices in the Treatment of Eye Diseases, the 10th Century CE Iraqi ophthalmologist Ammar ibn Ali Al-Mosuli presented numerous case histories of successful use of this procedure. In 14th Century Egypt, oculist Al-Shadhili developed a variant of the bronze needle that used a screw mechanism to draw suction.

“Modern” Cataract Surgery

The first modern European physician to perform cataract surgery was Jacques Daviel, in 1748.

Implantable intraocular lenses were introduced by English ophthalmologist Harold Ridley in the 1940s, an innovation that made patient recovery a more efficient and comfortable process.

Charles Kelman, an American surgeon, developed the technique of phacoemulsification in 1967. This process uses ultrasonic waves to facilitate the removal of cataracts without a large incision in the eye. Phacoemulsification significantly reduced patient recovery times and all but eliminated the pain and discomfort formerly associated with the procedure.

Photo credit: Internet Archive Book Images / Foter / Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0)

Historical Science & Technology, World-Changing Inventions

Cuneiform for Me & Youneiform

Cuneiform, sometimes called cuneiform script, is one of the world’s oldest forms of writing. It takes the form of wedge-shaped marks, and was written or carved into clay tablets using sharpened reeds. Cuneiform is over 6,000 years old, and was used for three millennia.

The Endless Sumerians

The earliest version of cuneiform was a system of pictographs developed by the Sumerians in the 4th millennium BCE. The script evolved from pictographic proto-writing used throughout Mesopotamia and dating back to the 34th Century BCE.

In the middle of the 3rd millennium BCE, cuneiform’s writing direction was changed from top-to-bottom in vertical columns to left-to-right in horizontal rows (how modern English is read). For permanent records, the clay tablets would be fired in kilns; otherwise, the clay could be smoothed out and the tablet reused.

A surviving cuneiform tablet.

Over the course of a tight thousand years or so, the pictographs became simplified and more abstract. Throughout the Bronze Age, the number of characters used decreased from roughly 1,500 to approximately 600. By this time, the pictographs had evolved into a combination of what are now known as logophonetic, consonantal alphabetic, and syllabic symbols. Many pictographs slowly lost their original function and meaning, and a given sign could provide myriad meanings, depending on the context.

Sumerian cuneiform was adapted into the written form of numerous languages, including Akkadian, Hittite, Hurrian, and Elamite. Other written languages were derived from cuneiform, including Old Persian and Ugaritic.

CuneiTRANSform

Cuneiform was gradually replaced by the Phoenician alphabet throughout the Neo-Assyrian and Roman Imperial ages. By the end of the 2nd Century CE, cuneiform was essentially extinct. All knowledge of how to decipher the script was lost until the 1850s—it was a completely unknown writing system upon its rediscovery.

Modern archaeology has uncovered between 500,000 and 2 million cuneiform tablets. Because there are so few people in the world who are qualified to read and translate the script, only 30,000 to 100,000 of these have ever been successfully translated.

Photo credit: listentoreason / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Important People, Important Discoveries, Science

The Periodic Table of Dmitri Mendeleev

If you’re here, reading a blog about science and technology, I’m going to assume you already know what the periodic table of elements is, and therefore dispense with the introductory information. However, though you may know the table well, do you know where it came from? Read on, friend, read on…

From Russia with Science

Dmitri Ivanovich Mendeleev (1834-1907) was a Russian inventor and chemist. In the 1850s, he postulated that there was a logical order to the elements. As of 1863, there were 56 known elements, with new ones being discovered at a rate of roughly one per year. By that time, Mendeleev had already been working to collect and organize data on the elements for seven years.

Mendeleev discovered that when the known chemical elements were arranged in order of atomic weight, from lowest to highest, a recurring pattern emerged. This pattern showed the similarity in properties between groups of elements. Building off this discovery, Mendeleev created his own version of the periodic table that included the 66 elements that were then known. He published the first iteration of his periodic table in 1869 in Principles of Chemistry, a two-volume textbook that would be the definitive work on the subject for decades.

Mendeleev’s periodic table is essentially the same one we use today (though the modern table is ordered by atomic number rather than atomic weight), arranging the elements in ascending order and grouping those with similar properties together.
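
To see the kind of recurring pattern Mendeleev noticed, here is a minimal sketch in Python. The atomic weights are modern values and the group numerals are conventional labels added for this illustration; this is not a reconstruction of Mendeleev’s actual 1869 table.

```python
# Illustrative only: a few elements with modern atomic weights and conventional
# group numerals; not Mendeleev's original data or notation.
elements = [
    ("Li", 6.94, "I"),  ("Be", 9.01, "II"),  ("B", 10.81, "III"),  ("C", 12.01, "IV"),
    ("N", 14.01, "V"),  ("O", 16.00, "VI"),  ("F", 19.00, "VII"),
    ("Na", 22.99, "I"), ("Mg", 24.31, "II"), ("Al", 26.98, "III"), ("Si", 28.09, "IV"),
    ("P", 30.97, "V"),  ("S", 32.06, "VI"),  ("Cl", 35.45, "VII"),
]

# Sort by ascending atomic weight, then break the sequence into rows of seven:
# chemically similar elements (Li/Na, Be/Mg, F/Cl, ...) land in the same column.
ordered = sorted(elements, key=lambda e: e[1])
for start in range(0, len(ordered), 7):
    row = ordered[start:start + 7]
    print("  ".join(f"{symbol:>2}({group})" for symbol, _, group in row))
```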

Dmitri Mendeleev

Changing & Predicting the Elements

The classification method Mendeleev formulated came to be known as “Mendeleev’s Law.” So sure of its validity and effectiveness was he that he used it to propose changes to the previously-accepted values for atomic weight of a number of elements. These changes were later found to be accurate.

In the updated, 1871 version of his periodic table, he predicted the placement on the table of eight then-unknown elements and described a number of their properties. His predictions proved to be highly accurate, as several elements that were later discovered almost perfectly matched his proposed elements. Though they were renamed (his “ekaboron” became scandium, for example), they fit into Mendeleev’s table in the exact locations he had suggested.

From Skepticism to Wide Acclaim

Despite its accuracy and the scientific logic behind it, Mendeleev’s periodic table of elements was not immediately embraced by chemists. It was not until the discovery of several of his predicted elements—most notably gallium (in 1875), scandium (1879), and germanium (1886)—that it gained wide acceptance.

The genius and accuracy of his predictions brought Mendeleev fame within the scientific community. His periodic table was soon accepted as the standard, surpassing those developed by other chemists of the day. Mendeleev’s discoveries became the bedrock of a large part of modern chemical theory.

By the time of his death, Mendeleev had received a number of awards and distinctions from scientific communities around the world, and was internationally recognized for his contributions to chemistry.

Photo credit: CalamityJon / Foter / Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

Technology, World-Changing Inventions

Toastory: The Tale of the Unsung Hero of the Kitchen

As long as there has been bread and fire, there has been toast. Since time immemorial, sliced bread has been toasted over open flames with the help of special metal frames or long-handled forks. In the early 19th Century, simpler, easier-to-use handheld utensils for toasting toast were invented. In 1893, everything changed.

The Toast Masters of Britain

That fateful year, the British firm Crompton & Co. introduced the first electric bread toaster, sold under the name Eclipse. The key to the design was a heating element that could repeatedly be heated to red-hotness without breaking. The then-fairly recent invention of the incandescent light bulb had proved that such an element was possible—however, adapting the technology to the toasting conundrum would prove difficult.

Light bulbs’ metal elements benefited from being sealed within a vacuum, a method toaster designers could not employ, for obvious reasons. Additionally, the kinks were still being worked out of wiring for electrical appliances—the oft-used iron wiring frequently melted at high temps, causing severe fire hazards. Moreover, electricity was not readily available in the 1890s, leading to further difficulties.

In 1905, Albert Marsh, a young engineer, created an alloy of nickel and chromium, called nichrome, that could withstand the rapid, repeated heating and cooling a toasting element required. Nichrome was swiftly adopted for toaster heating elements.

Marsh, along with George Schneider of the American Electrical Heater Company, received the first US patent for an electric toaster. In 1909, General Electric introduced the Model D-12, America’s first commercially successful electric toaster.

Toastervations: Technological Innovations

Truly, the toaster game was a cutthroat one in its early days. Inventors continually piggybacked off each other’s successes, one-upping previous advancements and fighting tooth and nail for even the smallest piece of the golden goose that was the toasted bread market.

"Two toasts, coming right up!"

The earliest electric toasters toasted toast on one side at a time—the bread had to be flipped manually to continue the toasting process on the other side. American inventor Lloyd Groff Copeman and his wife, Hazel Berger Copeman, received a number of patents for toaster technology in 1913. Later that year, the Copeman Electric Stove Company introduced the “toaster that turns toast,” a device with a mechanical bread turner that flipped toast automatically.

Building off of Copeman’s advances, Charles Strite invented and patented the first automatic pop-up toaster, which included a mechanism that ejected the toast after toasting, in 1919.

Working from Strite’s design, the Waters-Genter Company developed the Model 1-A-1 in 1925. The 1-A-1 was the first commercially-sold toaster that could toast both sides simultaneously, deactivate the heating element after a set time, and eject the toasted toast after toasting.

Starting in the 1940s, Sunbeam Products began offering a high-end toaster that would perform the entire toasting process of its own accord; all the user had to do was drop in a slice of pre-toast. The Sunbeam T-20 would lower the bread into the toasting chamber without the use of an external control mechanism. The details of how the T-20 and subsequent models worked are too dry and technical to recount here—for simplicity’s sake, we’ll chalk it up to magick.

In recent years, there have been many toaster innovations that are less darkly mysterious, but no less useful. The ability to use only one toast slot as needed saved Americans over $30 billion in electricity costs in the 1980s alone. Wider slots make the toasting of bagels possible. Modifications that “print” logos or images onto toasted toast mark the zenith of toaster technology.

What a time to be alive!

Photo credit: litlnemo / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Historical Science & Technology, Science

Medieval European Herbals

After Theophrastus and other ancient Greeks made significant advances in botany, scholars in China, India, and the Arab world continued to study and expand the science. In Western Europe, however, the study of botany went through a period of inactivity that lasted over 1,800 years. During this time, much of the Greeks’ knowledge and breakthrough discoveries were lost or forgotten.

Moveable Type to the Rescue!

In 15th and 16th Century Europe, the life of the average citizen revolved around and was highly dependent upon agriculture. However, when printing and moveable type were developed, most of the first published works were not strictly about agriculture. Most of them were “herbals,” lists of medicinal plants with descriptions of their beneficial properties, accompanied by woodcut illustrations. These early herbals reinforced the importance of botany in medicine.

Herbals gave rise to the science of plant classification, description, and botanical illustration. Botany and medicine became more and more intertwined as the Middle Ages gave way to the Renaissance period. In the 17th Century, however, books on the medicinal properties of plants eventually came to omit the “plant lore” aspects of early herbals. Simultaneously, other printed works on the subject began to leave out medicinal information, evolving into what are now known as floras—compilations of plant descriptions and illustrations. This transitional period signaled the eventual separation of botany and medicine.

Notable Herbals

The first herbals were generally compilations of information found in existing texts, and were often written by curators of university gardens. However, it wasn’t long before botanists began producing original works.

Herbarum Vivae Eicones, written in 1530 by Otto Brunfels, catalogued nearly 50 new plant species and included accurate, detailed illustrations.

A page from Brunfels’ Herbarum Vivae Eicones

Englishman William Turner published Libellus de Re Herbaria Novus in 1538. Turner’s tome included names, descriptions, and local habitats of native British plants.

In 1539, Hieronymus Bock wrote Kreutterbuch, describing plants the author found in the German countryside. A 1546 second edition of the book included illustrations.

The five-volume Historia Plantarum, written by Valerius Cordus and published between 1561 and 1563, some two decades after the author’s death, became the gold standard for herbals. The work included formal descriptions detailing numerous flowers and fruits, as well as plant anatomy and observations on pollination.

Rembert Dodoens’ Stirpium Historiae, written in 1583, included detailed descriptions and illustrations of many new plant species discovered in the author’s native Low Countries.

Photo credit: ouhos / Foter / Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0)

Technology

Time Switches: Not What They Sound Like

What would be super cool, what would be just about the most amazingly fantastic and fantastically amazing thing ever, is if time switches switched time on and off, as light switches do for lights. “Fudge it to dern, I’m running late”—time switch *off*—“Never mind.”

Of course, if everyone had one, no one would ever get anything done because time would constantly be switching on and off at random. (That is, assuming that whoever switched the switch was the only one for whom time stopped, which would…you know what, that’s a whole ‘nother can of worms that would take literally thousands and thousands of words to get to the bottom of. I digress…)

“Timer Switches” is Considerably More Accurate

What time switches actually do is control other electric switches, turning them on or off at certain times or after set intervals. This can help save energy, by turning equipment and devices off when not needed, and improve security, by switching lights in a pattern that gives the impression that a building is occupied, for example.

Lighting, HVAC systems, cooking devices like ovens, and numerous other devices with electronic controls can be operated via timer switches. Most modern time switches are electronic themselves, utilizing semiconductor timing circuitry to activate or deactivate connected devices.

Originally, time switches were clockwork devices that were wound like watches and set to operate at or after a certain time. Many manufacturers of clockwork time switches used technology adapted from other devices—Intermatic timers, for example, were originally built with modified streetcar fare register mechanisms. The vast majority of clockwork timers were capable of only a single on or off switching operation per cycle—i.e., they had to be manually reset after each cycle.

GE Model 3T27 Time Switch (circa 1948)

From there, timer switches evolved into electromechanical devices, utilizing slowly rotating, electrically-powered, geared motors that mechanically operate switches. Electromechanical time switches offered more programming options than clockwork versions had, allowing for multiple on-off cycles during a given period of time.

Modern, electronic time switches can be programmed to run their set cycles indefinitely. Cycles are usually delineated in 24-hour or seven-day periods. Electronic timers can, for example, manage a central heating system to supply heat during specified times of the morning and evening during the week (often programmable down to the exact desired minute), and all day during weekends.
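
To make that programming model concrete, here is a minimal sketch in Python of a seven-day heating schedule like the one described above. The time windows, function name, and schedule values are illustrative assumptions for this sketch, not any manufacturer’s actual firmware.

```python
from datetime import datetime, time

# Hypothetical seven-day heating schedule: weekday mornings and evenings,
# all day on weekends.
WEEKDAY_WINDOWS = [(time(6, 0), time(8, 30)), (time(17, 0), time(22, 0))]
WEEKEND_WINDOWS = [(time(0, 0), time(23, 59))]

def heat_should_be_on(now: datetime) -> bool:
    """Return True if the connected heating circuit should be switched on."""
    windows = WEEKEND_WINDOWS if now.weekday() >= 5 else WEEKDAY_WINDOWS
    return any(start <= now.time() <= end for start, end in windows)

# A controller would poll something like this once a minute and drive a relay.
print(heat_should_be_on(datetime(2015, 3, 2, 7, 15)))  # Monday 07:15 -> True
print(heat_should_be_on(datetime(2015, 3, 7, 13, 0)))  # Saturday 13:00 -> True
```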

Photo credit: Telstar Logistics / Foter / CC BY-NC

Technology

Do V-Belts Keep Your V-Pants Up?

Come on, title, don’t be ridiculous.

If you’ve spent any amount of time under the hood of an automobile, you’re likely familiar with v-belts. Often confused with their flat, multi-ribbed cousins, serpentine belts, these sturdy rubber loops are both the cheapest and the easiest way to transmit power between two or more rotating shafts that are not aligned axially but that run parallel to each other. V-belts are one of the most important components for the operation of automotive engines.

OG Leather V-Belts of 1916

The earliest mention of v-belts being used in automobiles dates back to 1916. Originally, v-belts were manufactured from leather, and were made with any number of “V” angles, as the burgeoning auto industry had not yet standardized designs for these components (or countless others).

In 1917, the endless rubber v-belt was developed by John Gates of the Gates Rubber Company, which would go on to become Gates Corporation, the world’s largest non-tire rubber manufacturer.

Walter Geist of Allis-Chalmers developed the first multiple-v-belt drive in 1925, and a patent for his design (US Patent #1,662,511) was awarded three years later. Allis-Chalmers then marketed Geist’s creation under the “Texrope” brand name.

Modern V-Belts

Today, v-belts are made from advanced (or “engineered”) rubber materials that provide better durability and flexibility and longer service life. For further improved strength, many designs include special embedded fibers; commonly used fiber materials include nylon, polyester, cotton, steel, and Kevlar, among others.

Elaborate treasure map, or V-belt routing diagram?

Modern v-belts are manufactured with essentially universal “V” angles (there are, of course, exceptions). This standardized shape was scientifically developed to provide an optimum combination of traction and speed. Bearing load is also optimized by this design—as the load increases, the V shape wedges into the corresponding groove in the shaft pulley, providing improved torque transmission.

Endless v-belts are the current standard, but jointed and linked v-belts can also be utilized for special applications. Jointed and linked v-belts are made from a number of composite rubber links joined together—links can be added or removed as needed for adjustability. Most of these specialty v-belts can operate at the same speed and power ratings as standard types, and require no other special equipment.

Photo credit: johnsoax / Foter / Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

Technology, World-Changing Inventions

A Bulb… of Light?

If you don’t know what a light bulb is, you probably can’t read, either, so we’ll just forego the usual introduction here and get right into the story of how this world-changing device was invented. What say you?

Thomas Edison is A Punk

Forget what you’ve been told: Thomas Edison did not invent the light bulb. He did refine the device and was the first to make it commercially viable, but there were literally dozens of others before him who had created functional light bulbs. As ever, Edison stole his “revolutionary” idea from another inventor and took the credit for himself.

Thomas Alva Edison is a thief, a liar, and a murderer.

An Abridged But Accurate History

Forty-five years before Edison was even born, in 1802, Humphry Davy created the first incandescent light. Using what was at the time the world’s most powerful electric battery, he passed electrical current through a thin strip of platinum. The material’s high melting point made it ideal for Davy’s experiments, but it neither glowed brightly enough nor lasted long enough for practical application. However, it did prove that such a thing was possible.

The first person to create a workable version of the incandescent light bulb was James Bowman Lindsay. In 1835, in Dundee, Scotland, Lindsay publicly demonstrated a constant electric light that allowed him to “read a book at a distance of one and a half feet.”

Light Bulbs

Warren de la Rue, a British scientist, created his own version of the light bulb in 1840. De la Rue’s design used a platinum filament enclosed in a vacuum tube. Because of its high melting point, he determined that platinum would perform better at high temperatures; the evacuated chamber would, in theory, contain fewer gas molecules to react with the platinum and therefore extend its working life. De la Rue’s light bulb performed well, but platinum proved too costly for commercial use.

A year later, fellow Englishman Frederick de Moleyns was the first to receive a patent for an incandescent lamp. His design also utilized platinum wires in vacuum tubes. The first American to acquire a patent for an incandescent bulb was John W. Starr—two years before Edison was born. Unfortunately, Starr kicked the bucket shortly after being granted his patent, and his version of the light bulb died with him.

Alexander Lodygin was granted the first Russian patent for an incandescent light bulb in 1874. His device used two carbon rods in a hermetically sealed glass receiver filled with nitrogen. It was designed so that electric current would transfer to the second rod when the first had been used up. Lodygin later moved to the United States and obtained a number of patents for variations on his original design. These bulbs used chromium, iridium, osmium, molybdenum, rhodium, ruthenium, and tungsten filaments, respectively. Lodygin demonstrated his molybdenum bulb at the 1900 World’s Fair in Paris.

In 1874, Canadian inventors Henry Woodward and Matthew Evans acquired a patent in their country for a light bulb that used carbon rods in a nitrogen-filled glass cylinder. After failing to commercialize their invention, the duo sold the rights to their patent to Thomas Edison in 1879. It’s likely that they sold their patent as an alternative to getting whacked by Edison’s goon squad.

Photo credit: Cardboard Antlers / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Science, Science & Society

Women in Science in Europe’s Age of Enlightenment

The “Age of Enlightenment” began in late-17th Century Europe. It was a far-reaching cultural movement and a revolution in human thought that emphasized reason and individualism over tradition. The intellectuals behind the Enlightenment hoped to reform the then-current society via reason, challenge widely-held ideas based in faith and tradition, and advance knowledge via scientific method.

“Enlightened” Yet Exclusionary

Despite the supposedly forward-thinking spirit of the era, women were still excluded from science at every turn. Scientific universities, professions, and societies uniformly refrained from accepting women into their ranks. Women’s only options for scientific learning were self-study, paid tutors, or, occasionally, instruction from their fathers. The few learned women of the time were primarily found among the elite of society.

Restrictions against female involvement in science were equal parts severe and ridiculous. Women were denied access to even the simplest scientific instruments; midwives were forbidden to use forceps. Scientifically-inclined women, as well as any women interested in higher education, were often ridiculed for neglecting their “domestic roles.”

Got a real sausage-fest going there, fellas.

Exceptional Women

Though this exclusionary attitude toward women in science was nearly universal, some women did manage to make significant scientific contributions during the 18th century.

Laura Bassi received a PhD in physics from Italy’s University of Bologna, and became a professor at the school in 1732.

For her contributions to agronomy, and specifically her discovery of methods for making flour and alcohol from potatoes, Eva Ekeblad was the first woman inducted into the Royal Swedish Academy of Sciences in 1748.

Through a personal relationship with Empress Catherine the Great, Russian Princess Yekaterina Dashkova was named director of the Russian Imperial Academy of Sciences of St. Petersburg in 1783. This marked the first time in history that a woman served as the head of a scientific institution.

After serving as an assistant to her brother, William, Germany’s Caroline Herschel became a noted astronomer in her own right. She is best known for her discovery of eight individual comets, the first of which she identified on 1 August 1786, as well as for creating the Index to Flamsteed’s Observations of the Fixed Stars in 1798.

In addition to collaborating with her husband, Antoine, in his laboratory research, Marie-Anne Pierrette Paulze of France translated numerous texts on chemistry from English to French. She also illustrated a number of her husband’s books, including his famous Treatise on Chemistry from 1789.

Photo credit: Foter / Public Domain Mark 1.0

Important People, Important Discoveries, Technology

A Brief Overview of Non-Led Zeppelins

Named after Ferdinand von Zeppelin, a German count who pioneered rigid airship development, a zeppelin is a type of dirigible that features a fabric-covered metal grid of transverse rings and longitudinal girders, which contains numerous separate gasbags. This design allowed the aircraft to be much larger than blimps or other non-rigid airships, which require overpressure within a single envelope to maintain their shape.

Zeppelin I

Count von Zeppelin, he of history’s greatest moniker, first developed designs for the airship that would bear his name in 1874. These designs were finalized in 1893, and patented in Germany two years later; a US patent was issued in 1899.

The frameworks of Zeppelin’s zeppelins were usually made of duralumin, an early aluminum alloy. Rubberized cotton was initially used for the inflatable gasbags, with later craft using a material made from cattle intestines called goldbeater’s skin.

Because of their size, most zeppelins required several engines, which were usually attached to the outside of the structural framework. Usually, at least one of these engines would provide reverse thrust to aid in maneuvering while landing and mooring.

Zeppelin II

The first commercial zeppelin flight took place in 1910. Deutsche Luftschiffahrts-AG (DELAG), founded by Count von Zeppelin himself, ran the world’s leading commercial zeppelin service, and by 1914 had carried over 10,000 passengers on more than 1,500 flights. The runaway success of zeppelin flight led to the “zeppelin” becoming a general term for rigid airships of any design.

The publicly-financed Graf Zeppelin, one of the largest commercial airships ever built.

Passengers, crew, and cargo were carried in compartments built into the bottom of the zeppelin’s frame. These compartments were quite small relative to the size of the inflatable portion of the ship. Some later designs included an internal compartment, inside the framework, for passengers and cargo.

Zeppelin III

In early 1912, the German Navy commissioned its first zeppelin, an oversized version of DELAG’s standard zeppelins. It was designated Z1 and entered service in October 1912. A few months later, Admiral Alfred von Tirpitz, the German Imperial Navy’s Secretary of State, instituted a five-year program to enlarge Germany’s naval airship fleet. DELAG provided a fleet of ten zeppelins, while the German military would construct two airship bases.

During a training exercise, L1, one of the military’s commissioned zeppelins, crashed into the sea due to a storm. The 14 crew members who perished were the first fatalities of zeppelin flight. Six weeks later, L2 caught fire during flight trials, killing the entire crew, including the acting head of the Admiralty Air Department.

It was not until May 1914 that L3, the next German Navy zeppelin, entered service. It was the first M-class airship—measuring nearly 520 feet in length and with a volume of nearly 795,000 cubic feet, these zeppelins could carry a payload of over 20,000 pounds. Three engines producing a combined 630 horsepower provided top speeds of 52 miles per hour.

Zeppelin IV (Zoso)

In World War I, Germany used zeppelins as bombers and scout craft. Bombing raids over Britain killed over 500 people. Following Germany’s defeat, terms of the Treaty of Versailles put a significant damper on airship use, including for commercial purposes. Existing zeppelins had to be surrendered, and new production was prohibited, with the exception of one airship commissioned by the US Navy. To the victor go the spoils, indeed.

The Treaty’s restrictions on airship production were lifted in 1926, reviving DELAG’s business. Throughout the 1930s, zeppelins made regular transatlantic flights between Germany and North America. However, the Hindenburg disaster of 1937 essentially ended the zeppelin’s run as a commercial aircraft.

Led Zeppelin rules.

Photo credit: San Diego Air & Space Museum Archives / Foter / No known copyright restrictions