Technology, World-Changing Inventions

Water Treatment Technology Through History

Civilization has changed in uncountable ways over the course of human history, but one factor remains the same: the need for clean drinking water. Every significant ancient civilization was established near a water source, but the quality of the water from these sources was often suspect. Evidence shows that humankind has been working to clean up its water and water supplies since as early as 4000 BCE.

Cloudiness and particulate contamination were among the factors that drove humanity’s first water treatment efforts; unpleasant taste and foul odors were likely driving forces, as well. Written records show ancient peoples treating their water by filtering it through charcoal, boiling it, straining it, and through other basic means. Egyptians as far back as 1500 BCE used alum to remove suspended particles from drinking water.

By the 1700s CE, filtration of drinking water was a common practice, though the efficacy of this filtration is unknown. More effective slow sand filtration came into regular use throughout Europe during the early 1800s.

As the 19th century progressed, scientists found a link between drinking water contamination and outbreaks of disease. John Snow and Louis Pasteur made significant discoveries regarding the negative effects that microbes in drinking water had on public health. Particulates in water were now seen not just as aesthetic problems, but as health risks as well.

Slow sand filtration continued to be the dominant form of water treatment into the early 1900s. In 1908, chlorine was first used as a disinfectant for drinking water in Jersey City, New Jersey. Elsewhere, other disinfectants, such as ozone, were introduced.

The U.S. Public Health Service set federal regulations for drinking water quality starting in 1914, with expanded and revised standards following in 1925, 1946, and 1962. The Safe Drinking Water Act was passed in 1974, and its standards were quickly adopted by all fifty states.

Water treatment technology continues to evolve and improve, even as new contaminants and health hazards in our water present themselves in increasing numbers. Modern water treatment is a multi-step process that combines multiple technologies. These include, but are not limited to, filtration systems, coagulant chemicals (which form larger, easier-to-remove particles called “floc” from smaller particulates), disinfectant chemicals, and industrial water softeners.
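
For the programmatically inclined, here is that treatment train as a minimal Python sketch. It is purely illustrative, not an engineering model; every function and number in it is invented for the example.

    # A minimal sketch of the multi-step treatment train described above.
    # Purely illustrative; not an engineering model.
    def coagulate(water):
        """Coagulant chemicals clump fine particulates into removable 'floc'."""
        water["floc"] = water.pop("fine_particulates", 0)
        return water

    def filter_out(water):
        """Physical filtration removes the floc formed upstream."""
        water.pop("floc", None)
        return water

    def disinfect(water):
        """Chlorine (or ozone) neutralizes any remaining microbes."""
        water["microbes"] = 0
        return water

    raw = {"fine_particulates": 120, "microbes": 5000}
    treated = disinfect(filter_out(coagulate(raw)))
    print(treated)  # -> {'microbes': 0}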

For further information, please read:

Planned future articles on Sandy Historical will expand on some of the concepts mentioned here. Please visit this page again soon for links to further reading.

Technology

Do V-Belts Keep Your V-Pants Up?

Come on, title, don’t be ridiculous.

If you’ve spent any amount of time under the hood of an automobile, you’re likely familiar with v-belts. Often confused with their flat, multi-ribbed cousins, serpentine belts, these sturdy rubber loops are both the cheapest and the easiest way to transmit power between two or more rotating shafts that run parallel to each other but are not axially aligned. V-belts are among the most important components in the operation of automotive engines.

OG Leather V-Belts of 1916

The earliest mentions of v-belts used in automobiles date back to 1916. Originally, v-belts were manufactured from leather, and were made with any number of “V” angles, as the burgeoning auto industry had not yet standardized designs for these components (or countless others).

In 1917, the endless rubber v-belt was developed by John Gates of the Gates Rubber Company, which would go on to become Gates Corporation, the world’s largest non-tire rubber manufacturer.

Walter Geist of Allis-Chalmers would develop the first multiple-v-belt drive in 1925, and a patent for his design (US Patent #1,662,511) was awarded three years later. Allis-Chalmers then marketed Geist’s creation under the “Texrope” brand name.

Modern V-Belts

Today, v-belts are made from advanced (or “engineered”) rubber materials that provide better durability, greater flexibility, and longer service life. For added strength, many designs include special embedded fibers; commonly used fiber materials include nylon, polyester, cotton, steel, Kevlar, and others.

Elaborate treasure map, or V-belt routing diagram?

Modern v-belts are manufactured with essentially universal “V” angles. This standardized shape was scientifically developed to provide an optimum combination of traction and speed. Bearing load is also optimized by this design—as the load increases, the V shape wedges further into the corresponding groove in the shaft pulley, providing improved torque transmission.
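
For the mechanically inclined, the wedge effect can be put into numbers using the standard belt-friction (capstan) relation. In the sketch below, the V angle, friction coefficient, and wrap angle are illustrative assumptions, not values from any particular belt standard.

    import math

    # How the groove multiplies friction. All three inputs are assumed values.
    mu = 0.3                    # belt/pulley friction coefficient (assumed)
    v_angle = math.radians(38)  # included "V" angle of the belt (assumed)
    wrap = math.radians(160)    # belt wrap angle around the pulley (assumed)

    # Wedging in the groove boosts the effective friction coefficient:
    mu_eff = mu / math.sin(v_angle / 2)

    # Capstan equation: tight-side to slack-side tension ratio.
    flat_ratio = math.exp(mu * wrap)   # a flat belt, for comparison
    v_ratio = math.exp(mu_eff * wrap)  # the same belt wedged in a V groove

    print(f"effective friction: {mu_eff:.2f} (vs. {mu} flat)")
    print(f"tension ratio: {v_ratio:.1f}x (vs. {flat_ratio:.1f}x flat)")

The takeaway: wedged into its groove, the same belt supports several times the tension ratio of a flat belt, which is exactly the improved torque transmission described above.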

Endless v-belts are the current standard, but jointed and linked v-belts can also be utilized for special applications. Jointed and linked v-belts are made from a number of composite rubber links joined together—links can be added or removed as needed for adjustability. Most of these specialty v-belts can operate at the same speed and power ratings as standard types, and require no other special equipment.

Photo credit: johnsoax / Foter / Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

Technology, World-Changing Inventions

A Bulb… of Light?

If you don’t know what a light bulb is, you probably can’t read, either, so we’ll just forgo the usual introduction here and get right into the story of how this world-changing device was invented. What say you?

Thomas Edison Is a Punk

Forget what you’ve been told: Thomas Edison did not invent the light bulb. He did refine the device and was the first to make it commercially viable, but there were literally dozens of others before him who had created functional light bulbs. As ever, Edison stole his “revolutionary” idea from another inventor and took the credit for himself.

Thomas Alva Edison is a thief, a liar, and a murderer.

An Abridged But Accurate History

Forty-five years before Edison was even born, in 1802, Humphry Davy created the first incandescent light. Using what was at the time the world’s most powerful electric battery, he passed electrical current through a thin strip of platinum. The material’s high melting point made it ideal for Davy’s experiments, but his light neither glowed brightly enough nor lasted long enough for practical application. However, it did prove that such a thing was possible.

The first person to create a workable version of the incandescent light bulb was James Bowman Lindsay. In 1835, in Dundee, Scotland, Lindsay publicly demonstrated a constant electric light that allowed him to “read a book at a distance of one and a half feet.”

Light Bulbs

Warren de la Rue, a British scientist, created his own version of the light bulb in 1840. De la Rue’s design used a platinum filament enclosed in a vacuum tube. Because of its high melting point, he determined that platinum would perform better at high temperatures; the evacuated chamber would, in theory, contain fewer gas molecules to react with the platinum and therefore extend its working life. De la Rue’s light bulb performed well, but platinum proved too costly for commercial use.

A year later, fellow Englishman Frederick de Moleyns was the first to receive a patent for an incandescent lamp. His design also utilized platinum wires in vacuum tubes. The first American to acquire a patent for an incandescent bulb was John W. Starr—two years before Edison was born. Unfortunately, Starr kicked the bucket shortly after being granted his patent, and his version of the light bulb died with him.

Alexander Lodygin was granted the first Russian patent for an incandescent light bulb in 1874. His device used two carbon rods in a hermetically sealed glass receiver filled with nitrogen. It was designed so that electric current would transfer to the second rod when the first had been used up. Lodygin later moved to the United States and obtained a number of patents for variations on his original design. These bulbs used chromium, iridium, osmium, molybdenum, rhodium, ruthenium, and tungsten filaments, respectively. Lodygin demonstrated his molybdenum bulb at the 1900 World’s Fair in Paris.

In 1874, Canadian inventors Henry Woodward and Matthew Evans acquired a patent in their country for a light bulb that used carbon rods in a nitrogen-filled glass cylinder. After failing to commercialize their invention, the duo sold the rights to their patent to Thomas Edison in 1879. It’s likely that they sold their patent as an alternative to getting whacked by Edison’s goon squad.

Photo credit: Cardboard Antlers / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Science, Science & Society

Women in Science in Europe’s Age of Enlightenment

The “Age of Enlightenment” began in late-17th Century Europe. It was a far-reaching cultural movement and a revolution in human thought that emphasized reason and individualism over tradition. The intellectuals behind the Enlightenment hoped to reform society through reason, challenge widely-held ideas rooted in faith and tradition, and advance knowledge via the scientific method.

“Enlightened” Yet Exclusionary

Despite the supposedly forward-thinking spirit of the era, women were still excluded from science at every turn. Scientific universities, professions, and societies uniformly refused to accept women into their ranks. Women’s only options for scientific learning were self-study, paid tutors, or, occasionally, instruction from their fathers. The few learned women of the time were primarily found among the elite of society.

Restrictions against female involvement in science were equal parts severe and ridiculous. Women were denied access to even the simplest scientific instruments; midwives were forbidden to use forceps. Scientifically-inclined women, as well as any women interested in higher education, were often ridiculed for neglecting their “domestic roles.”

Got a real sausage-fest going there, fellas.

Exceptional Women

Though this exclusionary attitude toward women in science was nearly universal, some women did manage to make significant scientific contributions during the 18th century.

Laura Bassi received a PhD in physics from Italy’s University of Bologna, and became a professor at the school in 1732.

For her contributions to agronomy, and specifically her discovery of methods for making flour and alcohol from potatoes, Eva Ekeblad was the first woman inducted into the Royal Swedish Academy of Sciences in 1748.

Through a personal relationship with Empress Catherine the Great, Russian Princess Yekaterina Dashkova was named director of the Russian Imperial Academy of Sciences of St. Petersburg in 1783. This marked the first time in history that a woman served as the head of a scientific institution.

After serving as an assistant to her brother, William, Germany’s Caroline Herschel became a noted astronomer in her own right. She is best known for her discovery of eight individual comets, the first of which she identified on 1 August 1786, as well as for creating the Index to Flamsteed’s Observations of the Fixed Stars in 1798.

In addition to collaborating with her husband, Antoine, in his laboratory research, Marie-Anne Pierrette Paulze of France translated numerous texts on chemistry from English into French. She also illustrated a number of her husband’s books, including his famous Treatise on Chemistry from 1789.

Photo credit: Foter / Public Domain Mark 1.0

Important People, Important Discoveries, Technology

A Brief Overview of Non-Led Zeppelins

Named after Ferdinand von Zeppelin, a German count who pioneered rigid airship development, a zeppelin is a type of dirigible that features a fabric-covered metal grid of transverse rings and longitudinal girders, which contains numerous separate gasbags. This design allowed the aircraft to be much larger than blimps and other non-rigid airships, which require overpressure within a single envelope to maintain their shape.

Zeppelin I

Count von Zeppelin, he of history’s greatest moniker, first developed designs for the airship that would bear his name in 1874. These designs were finalized in 1893, and patented in Germany two years later; a US patent was issued in 1899.

The frameworks of Zeppelin’s zeppelins were usually made of duralumin, an early aluminum alloy. Rubberized cotton was initially used for the inflatable gasbags, with later craft using a material made from cattle intestines called goldbeater’s skin.

Because of their size, most zeppelins required several engines, which were usually attached to the outside of the structural framework. Usually, at least one of these engines would provide reverse thrust to aid in maneuvering while landing and mooring.

Zeppelin II

The first commercial zeppelin flight took place in 1910. Deutsche Luftschiffahrts-AG (DELAG), founded by Count von Zeppelin himself, ran the world’s leading commercial zeppelin service, and by 1914 had carried over 10,000 passengers on more than 1,500 flights. The runaway success of zeppelin flight led to “zeppelin” becoming a generic term for rigid airships of any design.

The publicly-financed Graf Zeppelin, one of the largest commercial airships ever built.

Passengers, crew, and cargo were carried in compartments built into the bottom of the zeppelin’s frame. These compartments were quite small relative to the size of the inflatable portion of the ship. Some later designs included an internal compartment, inside the framework, for passengers and cargo.

Zeppelin III

In early 1912, the German Navy commissioned its first zeppelin, an oversized version of DELAG’s standard zeppelins. It was designated Z1 and entered service in October 1912. A few months later, Admiral Alfred von Tirpitz, the German Imperial Navy’s Secretary of State, instituted a five-year program to enlarge Germany’s naval airship fleet. DELAG provided a fleet of ten zeppelins, while the German military would construct two airship bases.

During a training exercise, L1, one of the military’s commissioned zeppelins, crashed into the sea due to a storm. The 14 crew members who perished were the first fatalities of zeppelin flight. Six weeks later, L2 caught fire during flight trials, killing the entire crew, including the acting head of the Admiralty Air Department.

It was not until May 1914 that L3, the next German Navy zeppelin, entered service. It was the first M-class airship—measuring nearly 520 feet in length and with a volume of nearly 795,000 cubic feet, these zeppelins could carry a payload of over 20,000 pounds. Three engines producing a combined 630 horsepower provided top speeds of 52 miles per hour.
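
As a rough sanity check on those figures, the sketch below assumes a textbook value of roughly 0.07 pounds of lift per cubic foot of hydrogen at sea level (our approximation, not a number from any zeppelin specification):

    # Rough sanity check on the M-class figures quoted above.
    HYDROGEN_LIFT_LB_PER_FT3 = 0.07  # approximate sea-level value (assumed)

    gas_volume_ft3 = 795_000  # M-class gas volume, per the text
    payload_lb = 20_000       # quoted payload

    gross_lift_lb = gas_volume_ft3 * HYDROGEN_LIFT_LB_PER_FT3
    print(f"gross lift: {gross_lift_lb:,.0f} lb")  # ~55,650 lb
    print(f"left for structure, fuel, and crew: "
          f"{gross_lift_lb - payload_lb:,.0f} lb")  # ~35,650 lb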

Zeppelin IV (Zoso)

In World War I, Germany used zeppelins as bombers and scout craft. Bombing raids over Britain killed over 500 people. Following Germany’s defeat, terms of the Treaty of Versailles put a significant damper on airship use, including for commercial purposes. Existing zeppelins had to be surrendered, and new production was prohibited, with the exception of one airship commissioned by the US Navy. To the victor go the spoils, indeed.

The Treaty’s restrictions on airship production were lifted in 1926, reviving DELAG’s business. Throughout the 1930s, zeppelins made regular transatlantic flights between Germany and North America. However, the Hindenburg disaster of 1937 essentially ended the zeppelin’s run as a commercial aircraft.

Led Zeppelin rules.

Photo credit: San Diego Air & Space Museum Archives / Foter / No known copyright restrictions

Historical Science & Technology, Science

Instruments & Observatories of Medieval Islamic Astronomy

Medieval Islamic astronomy was an amalgamation and extension of foreign influences, including Greek and Indian astronomy. Islamic astronomy itself went on to influence later discoveries, especially those in Europe and China. Numerous star names (Aldebaran, for example) and astronomical terms (such as azimuth) still in use today are of Islamic origin.

The peak of medieval Islamic astronomy came between the 8th and 15th Centuries CE. Developments took place throughout Islam’s sphere of influence, from the Middle East and Central Asia to North Africa, India, and the Far East. Roughly 10,000 medieval Islamic manuscripts on astronomy still exist today.

Inventions & Advances in Astronomy Instruments

Though the astrolabe was an ancient Greek invention for charting the stars, Islamic astronomer Fazari is credited with vastly improving the device. Surviving examples of these improved astrolabes date back as far as 315 AH (roughly 927 CE). During the Abbasid caliphate, Muslim scientists perfected the astrolabe to help chart the official beginning of Ramadan, the hours of prayer, and the relative direction of Mecca. A variation of the astrolabe, called the saphea, was devised by al-Zarqali of Andalusia—this device could be used anywhere, independent of the user’s latitude.

Celestial globes, similar to standard globes but showing the apparent positions of stars in the sky, were perfected by Islamic astronomers; the oldest surviving examples date back to at least the 11th Century. A related device, the armillary sphere (a spherical framework of rings representing lines of celestial longitude and latitude, the ecliptic, and other important astronomical features), was also refined by medieval Islamic astronomers.

A modern armillary sphere.

The equatorium, a mechanical device that helps plot the positions of the moon, sun, and planets without calculation, was invented in roughly 1015 by astronomers in Al-Andalus (modern Spain, Portugal, Andorra, and southern France).

Medieval Islamic astronomers also invented numerous quadrants, including the sine quadrant (used for astronomical calculations) and several variations of the horary quadrant (used to determine time by observations of the sun or stars).
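
For the curious, here is the spherical-astronomy relation a horary quadrant mechanizes, sketched in Python. The function name and the example inputs (observer latitude, solar altitude, solar declination) are our own illustrative choices.

    import math

    def hours_from_noon(altitude_deg, latitude_deg, declination_deg):
        """Time from local noon, given the sun's observed altitude."""
        alt, lat, dec = (math.radians(d) for d in
                         (altitude_deg, latitude_deg, declination_deg))
        # sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(H); solve for H.
        cos_h = ((math.sin(alt) - math.sin(lat) * math.sin(dec))
                 / (math.cos(lat) * math.cos(dec)))
        hour_angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_h))))
        return hour_angle_deg / 15.0  # the sky turns 15 degrees per hour

    # Example: sun at 40 degrees altitude near Baghdad (33.3 N), at an equinox.
    print(f"{hours_from_noon(40.0, 33.3, 0.0):.1f} hours from local noon")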

Major Observatories

In the medieval Islamic world, a number of private observatories existed in Baghdad, Damascus, and elsewhere. Many astronomical advances were made from findings at these observatories, including the measurement of meridian degrees and solar parameters.

The first major observatory in the region was built in Isfahan (in modern Iran) in the 11th Century. From this observatory, Omar Khayyám and other scientists formulated the Persian Solar Calendar, a modernized version of which is still in use in Iran today.

The largest and most significant observatory was created in Maragha (also in modern Iran) in the 13th Century. The Mongol ruler Hulagu Khan kept a home there, and the structure also housed a mosque and library. Numerous leading astronomers of the time worked there, and over the course of a half-century, developed a number of key modifications to the Ptolemaic system.

Another major observatory, now known as the Istanbul Observatory of Taqi ad-Din, was built in 1577. It lasted only three years, however. A large, long-tailed comet was observed and prognosticated to be a sign of coming good fortune; instead, a plague followed, after which spiritually-minded opponents of science and prognostication from the heavens called for the observatory’s destruction.

Photo credit: francisco.j.gonzalez / Foter / Creative Commons Attribution 2.0 Generic (CC BY 2.0)

Historical Science & Technology, Pseudoscience

Alchemy: Science + Magic + Religion

When they hear the word “alchemy,” many people likely think of the vaguely science-esque practice of attempting to turn lead into gold. Others will recognize it as the vaguely magic-esque practice that produced the Philosopher’s Stone, a ticket to immortality. (These folks likely know it from Harry Potter and the Philosopher’s Stone, a.k.a. The Sorcerer’s Stone in the US.) And, while neither of these generalities is wrong, necessarily, they’re not really right, either.

An Ancient Philosophical Tradition

Since its earliest days, alchemy’s practitioners have claimed that it is a means of acquiring exceptional powers, immortality among them. It is rooted equally in science (specifically chemistry), religion (specifically hermeticism), and magic. Today, alchemy is recognized as a kind of protoscience that helped contribute to the development of modern scientific disciplines and methods. Like those disciplines, alchemy was based on laboratory work, theory, and experimentation.

The objectives of alchemy are many and varied, but the “big three,” if you will, are:

  1. The transmutation of base metals (such as lead) into precious metals (most commonly gold, but also silver).
  2. The creation of a single universal remedy for all ailments and diseases, known as a “panacea,” which could prolong life (and youth) indefinitely.
  3. The discovery of a universal solvent (known as Alkahest) which is capable of dissolving any other substance in the world. Alkahest was sought for its potential medicinal uses.

The Philosopher’s Stone of alchemical lore is related to both #1 and #2; some claim it can turn lead into gold, while others say that it is the “Elixir of Life.”

Pictured: NOT an actual alchemist, but certainly a favorable modern interpretation.

Historic Divisions

The three major divisions of alchemy can be traced back over four thousand years across three continents. It is impossible to know if they shared a common origin, or if they exerted influence upon each other in any way during their respective developments.

Chinese alchemy, which started in China specifically (obviously) and spread across western Asia, shares close connections with Taoism. Indian alchemy, practiced throughout the Indian subcontinent and known as rasāyana, meaning “path of essence,” is based on the Dharmic religions. Western alchemy, which originated in the Mediterranean region and eventually shifted to medieval Europe, developed its own philosophical system, one independent from, though influenced by, a number of Western religions.

The Decline of Alchemy

Slowly but steadily, modern science grew to displace alchemy. The terms “alchemy” and “chemistry” were used more or less interchangeably as late as the 17th Century. By the 18th, however, alchemy was thought of as little more than a charlatan’s practice of attempting to turn lead into gold.

From there, alchemy was largely forgotten, until it was revived as an “occult science” during the early 19th Century’s occult revival. This view of alchemy focused strictly on the spiritual interpretation of the practice, ignoring its scientific, theoretic, and experimental aspects. This version of alchemy continues to be the one most widely known to the modern layperson.

Photo credit: walknboston / Foter / Creative Commons Attribution 2.0 Generic (CC BY 2.0)

Technology, World-Changing Inventions

Remembering the Great Spacewar

No, the title is not a clever attempt to circumvent Star Wars copyright, nor was there some big fistfight on the International Space Station that you didn’t hear about. Spacewar was, in fact, one of the earliest computer games ever created, one that helped set the stage for the countless arcade, console, and PC video games to follow.

A Revolution in (Virtual) Space

In Spacewar, two players control virtual spaceships and attempt to blast each other to smithereens, while a star at the center of the screen creates a gravitational pull that players must maneuver around—falling into the star causes a player’s ship to explode, costing that player the game. A “hyperspace” function can also be invoked, causing the player’s ship to disappear and reappear at a random location on screen.
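
For flavor, here is a toy re-creation of that central-star gravity and the hyperspace button in modern Python. It is emphatically not the original PDP-1 code; the constants, units, and screen size are arbitrary assumptions.

    import math
    import random

    G_STAR = 5000.0            # strength of the star's pull (assumed)
    STAR_X, STAR_Y = 0.0, 0.0  # the star sits at the center of the screen

    def gravity_step(x, y, vx, vy, dt=0.1):
        """Advance a ship one tick under the star's inverse-square pull."""
        dx, dy = STAR_X - x, STAR_Y - y
        dist = math.hypot(dx, dy)
        if dist < 5.0:              # falling into the star: ship explodes
            return None
        accel = G_STAR / dist ** 2  # acceleration toward the star
        vx += accel * dx / dist * dt
        vy += accel * dy / dist * dt
        return x + vx * dt, y + vy * dt, vx, vy

    def hyperspace(width=1024.0, height=1024.0):
        """The panic button: vanish, then reappear somewhere random."""
        return (random.uniform(-width / 2, width / 2),
                random.uniform(-height / 2, height / 2), 0.0, 0.0)

Each frame, every ship takes one gravity_step; a ship that falls into the star is gone, and a panicked player can swap their ship’s state for whatever hyperspace returns.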

By today’s video game standards, Spacewar is incredibly simple. At the time, however, it was revolutionary. Created by Martin Graetz, Steve Russell, and Wayne Wiitanen (working under the collective moniker “the Hingham Institute”), with an assist by Alan Kotok, the first version was completed in February 1962. Over 200 hours of coding work went into the initial iteration, with numerous further hours spent on additional features and revisions by Graetz, Dan Edwards, and Peter Samson.

Prior to Spacewar, numerous interactive graphical programs had been developed for the TX-0 experimental computer at the Massachusetts Institute of Technology (MIT). The Hingham Institute team found that these programs failed to demonstrate the full potential of the computer, and brainstormed ideas to make them more compelling. Russell, an avid reader of science-fiction novels, came up with the space-themed concept.

"Spacewar" in all its glory.

The original version used a randomly-generated starfield for the background. The inaccuracy and lack of realism stuck in Samson’s craw, so he created a program dubbed “Expensive Planetarium”—the title was a nod to the cost of the team’s PDP-1 computer, on which they wrote the Spacewar programming. Expensive Planetarium was based on real star charts, and this new background would scroll slowly across the screen as the game progressed. It showed 45% of the night sky at any given time, including stars down to the fifth magnitude.

Spacewar Fever Spreads Rapidly

Word of the game soon spread among computer researchers, and the code was shared throughout the community. Programmers began creating their own variations of Spacewar, including a first-person perspective version, and adding new features, such as space mines.

The game quickly gained popularity, and was ported to other systems, many of them fellow DEC machines such as the PDP-10 and PDP-11. Spacewar was also converted to work with early microcomputer systems, though many lacked sufficient memory to display the high-resolution bitmap—instead, these versions used the computers’ flexible character generators to render the ships at different angles. Other microcomputers used oscilloscopes for their graphical displays.

In the 1970s, Spacewar was ported to the HP9826 desktop calculator by a mathematician working in the Air Force’s 544th ARTW/Trajectory Division. The game proved to be an ideal “learning distraction” for engineers writing ballistic missile coding.

Photo credit: nik.clayton / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Historical Science & Technology, Science & Society, Technology, World-Changing Inventions

Roamin’ Roman Roads

Romans built roads so they could roam around Rome. Specifically, the Romans’ roads were built for military use, but they also played a major role in the maintenance and expansion of the Roman state.

Travel, Communication & Trade

Construction of these roads lasted from 500 BCE until the Roman Republic gave way to the Roman Empire, roughly 500 years later. (Keep that in mind next time a roadwork project in your neck of the woods takes a week or two longer than scheduled.)

Though civilian traffic was often restricted to keep the roads open for military use, Roman roads were used for public travel, for carrying official communications throughout the land, and for the movement of trade goods.

A surviving Roman road.

At the peak of Roman civilization, there were over 53,000 miles of paved, interconnected roads throughout the Empire. These were the most advanced and well-built roads in the history of civilization, and would remain so until the 19th Century CE.

Construction

The construction of Roman roads was no small undertaking. A trench, often down to the bedrock, was dug the full width of the road along its entire intended length (obviously, the roads started small—or short—and were extended from there). This pit was filled with rocks, gravel, and sand, which was then covered with a layer of concrete. Flat, loosely interlocking rock slabs were then laid over the concrete.

Larger roads were cambered for drainage, with drainage ditches running parallel on either side. Many roads were also accompanied by footpaths. Bridges were built to span rivers, waterways, and ravines. Hills were cut through in many places to create flat, level roads. In marshes and other areas with unstable ground, piled foundations were built for support.

The Roman roads were incredibly well-constructed, and were resistant to floods and other weather and environmental factors. The roads remained in use for more than 1,000 years, and many modern roads in Europe are built on top of Roman roads.

Layout

Nearly all Roman cities were built on a square grid, with four main roads leading outward from the center of the city. These larger roads were akin to modern highways, connecting cities, towns, and military installations. Within the city, smaller roads connected the four main roads and formed the streets where people lived.

Nearly 30 major highways led in and out of the Roman capital at the Empire’s zenith. The 113 provinces that made up the Roman Empire were joined by a series of 372 interconnected roads.

Between towns and cities, way stations were built at regular intervals. Official and private couriers had their own separate stations for changing horses (and riders), which allowed communication to be carried up to 500 miles in a 24-hour period.

Photo credit: KJGarbutt / Foter / Creative Commons Attribution 2.0 Generic (CC BY 2.0)

Historical Science & Technology, World-Changing Inventions

Pull Up A Seat for A Brief History of the Chair

If you’re sitting right now, chances are pretty dang good you’re sitting on a chair. It seems like one of the most basic inventions there could ever be—a surface to sit on, and something to hold that surface up. Heck, it might not even really seem like “technology” at all. But it is. And there’s more to the history of the chair than one might suspect.

Not for the Common Man

While chairs have been around for thousands of years, they were not always the commonplace, household item they are today. In the early days of chairs, they were reserved almost exclusively for leaders and dignitaries—thrones, after all, are nothing more than extra-fancy chairs. While authority rested in chairs, the common man made do with benches and stools until roughly the 16th Century CE.

Modern chairs, of course, are reserved almost exclusively for use by cats.

The few examples of ancient chairs currently known are seen almost exclusively in sculptures and paintings. Actual physical examples of early chairs are extremely rare.

Ancient Egyptian & Greek Chairs

As with so much of ancient history, some of the earliest records of chairs come to us from Egypt. And, like so many of the relics from ancient Egypt, their chairs were highly ornate. Many were crafted from ivory, ebony, gold, and other exotic materials. Their surfaces were decorated with intricate carvings and designs; chair legs were frequently carved in the shape of animal legs, such as lions’ paws, or as human figures holding the seat above their heads.

The frieze of the Parthenon depicts Zeus sitting in a square chair with thick legs, ornamented with carvings of sphinxes and animal feet. This and other examples of Greek and Roman chairs date back to the 7th Century BCE. The (reputed) Chair of St. Peter, now on display in his eponymous Basilica in Rome, is in advanced decay, but appears to be a genuine work of 6th Century CE craftsmanship, with ivory carvings depicting the labors of Hercules.

Notable Medieval Chairs*

Several examples of chairs from the medieval period still exist today. Also dated to the 6th Century, the chair of Maximian is carved from marble and features an intricate relief depicting numerous saints and Gospel scenes. Maximian’s chair can currently be seen in the San Vitale Basilica in Ravenna, Italy.

Dated to the 4th or 5th Century CE, the Chair of Dagobert is sculpted in partially-gilt bronze. Its legs are carved to look like animal heads and feet. A back and arms were added by Suger, abbot of St. Denis, in the 12th Century.

The oldest known surviving example of a medieval English chair is that of Edward I, made in the late 13th Century. Made of oak, and at one time covered with gilded gesso, it is the chair in which most subsequent English monarchs have been crowned.

* Please note that this is the first time in history that the phrase “Notable Medieval Chairs” was used. Something to tell the grandkids.

Photo credit: oschene / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Technology, World-Changing Inventions

Mechanical Television is Mechanical

Early televisions were mechanical (often called “electromechanical”) devices, predating Philo Farnsworth’s technological breakthroughs that created the all-electronic version we still use, more or less, today. Sometimes referred to as televisors, after the most commercially successful model, the Baird Televisor, mechanical televisions were in use from roughly 1926 until 1939.

The Nipkow Disk

As the name implies, the mechanical television utilized a mechanical spinning device to generate video signals. These signals were entirely electronic and were transmittable via radio or over wires. The spinning mechanical component was known as a Nipkow disk, which had a series of holes in a spiral pattern on its surface.

Artist’s rendering of a Nipkow disk.

In early television cameras, the Nipkow disk was used in tandem with a light source and one or more photoelectric cells to “film” the subject, converting point-to-point variations in the image’s brightness into an electrical signal. AM radio waves or closed-circuit systems transmitted the signal to televisors, which had their own, synchronously spinning Nipkow disks. The brightness of a neon glow lamp (or other light source) behind the disk was modulated by the video signal—as the holes of the disk passed sequentially in front of the light source, the image was reproduced one scan line at a time.
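
To make the scanning geometry concrete, here is a short sketch of a 30-hole disk like Baird’s; the outer radius and line pitch are illustrative assumptions.

    import math

    N_HOLES = 30      # one hole per scan line, as in Baird's 30-line system
    R_OUTER = 100.0   # radius of the outermost hole (arbitrary units)
    LINE_PITCH = 1.0  # radial step between holes = scan-line spacing (assumed)

    def hole_positions():
        """(x, y) centers of the holes, spiraling inward over one turn."""
        positions = []
        for i in range(N_HOLES):
            angle = 2 * math.pi * i / N_HOLES  # evenly spaced around the disk
            radius = R_OUTER - i * LINE_PITCH  # each hole slightly closer in
            positions.append((radius * math.cos(angle),
                              radius * math.sin(angle)))
        return positions

    # One rotation sweeps every hole past the image aperture once, so a full
    # frame is scanned per revolution: frame rate equals rotation rate.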

First Forays in Mechanical Television

Charles Francis Jenkins created America’s first mechanical television systems in the 1920s and ‘30s. He transmitted the first moving silhouette images in 1923, and gave the first public demonstration of transmitted images and sound in 1925. Jenkins received 75 patents for mechanical television innovations.

British inventor John Logie Baird (for whom the Baird Televisor was named) transmitted the first live, moving images (albeit in grayscale) via mechanical television in January of 1926. Baird’s system utilized just 30 scan lines of resolution.

WCFL was Chicago’s first mechanical television station, premiering on 12 June 1928. Station engineer Ulises Armand Sanabria transmitted the sound signal through WIBO and the video through WCFL’s signal, thereby becoming the first person to transmit sound and picture on the same wave band simultaneously.

Limitations

Nipkow disks could only be made with a limited number of holes. Increasing the size of the disk allowed for more holes, but beyond a certain diameter this became impractical. As a result, mechanical television provided only very low resolution, topping out at approximately 120 scan lines. Technological advances did improve this resolution over time: 180-line systems were installed in Paris and Montreal in 1935, and systems with as many as 200 lines of resolution were eventually created.
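
The arithmetic behind that impracticality is straightforward: each hole sits one scan line farther in, so the image height grows linearly with the line count, while keeping the image square forces the circumference (and thus the disk diameter) to grow with its square. A sketch with purely illustrative dimensions:

    import math

    def disk_diameter_mm(lines, line_pitch_mm=0.5):
        """Approximate disk diameter for a square image (illustrative)."""
        image_height = lines * line_pitch_mm  # one line pitch per hole
        circumference = lines * image_height  # arc per hole == image width
        return circumference / math.pi        # diameter grows ~ lines**2

    for n in (30, 120, 200):
        print(f"{n:3d} lines -> a disk roughly "
              f"{disk_diameter_mm(n) / 1000:.1f} m across")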

Electronic Television Kills Mechanical

Philo Farnsworth developed his all-electronic television system in 1927, almost as soon as mechanical televisions were made available. By 1936, electronic television signals were being transmitted at up to 600-line resolution. Farnsworth’s invention, and its swift adoption throughout the burgeoning industry, signaled the end of mechanical television.

Photo credit: Foter / Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)