Historical Science & Technology

What Time Is It? Water o’ Clock

A water clock, or clepsydra, is a unique style of timepiece that measures time via the regulated flow of liquid (usually water, hence the name) into or out of a vessel (called inflow and outflow, respectively). Along with the sundial and the hourglass, the water clock is among the oldest known time-measuring devices.

Hail Clepsydra

The oldest surviving physical evidence of a water clock dates to roughly 1400 BCE. This water clock was used in the Temple of Amen-Re during the reign of Pharaoh Amenhotep III in Ancient Egypt. However, an inscription on the tomb of the Egyptian court official Amenemhet, identifying him as the inventor, pushes the date back at least two more centuries.

Um… not quite…

Even older evidence suggests that water clocks were used in astronomical calculations during the Old Babylonian period (circa 2000–1600 BCE). No physical examples from this period survive, but records written on clay tablets have. The Babylonians measured time in temporal hours, which meant that the length of an hour fluctuated with the seasons. As such, the amount of water that had to pass through these water clocks to mark each “hour” changed, as well.
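The seasonal arithmetic is simple enough to sketch in a few lines of (decidedly un-Babylonian) Python: daylight gets divided into twelve equal parts, so the “hour” stretches and shrinks with the day. The daylight figures below are illustrative mid-latitude numbers, not historical records.

```python
def temporal_hour_minutes(daylight_minutes):
    """A temporal (seasonal) hour is 1/12 of the daylight period,
    so its length changes as the days lengthen and shorten."""
    return daylight_minutes / 12

# Illustrative mid-latitude figures (assumed, not from the tablets):
summer_hour = temporal_hour_minutes(15 * 60)  # 75.0 minutes per "hour"
winter_hour = temporal_hour_minutes(9 * 60)   # 45.0 minutes per "hour"
```

A Babylonian water clock marking such hours would need roughly two-thirds as much water per midwinter “hour” as per midsummer “hour.”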

Pints of Persia

Historical records of water clocks dating to 328 BCE describe the ancient Persians in what is now Iran using them to ensure the just and exact distribution of water from local wells for irrigation purposes. Water clocks were also used to calculate the holy days of pre-Islamic religions, such as the equinoxes and solstices.

A typical Persian water clock of this vintage consisted of a large pot full of water and a bowl with a small hole in its center. The bowl would be placed on top of the water, where it would slowly begin to fill. Once full, it would sink into the pot. It was then retrieved and emptied, and the process would be repeated as necessary. Typically, the “manager” of the water clock would tally the number of cycles by dropping a small stone into a separate jar each time the bowl sank.
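The bowl-and-stone routine is basically a counting loop, and it can be sketched as one. This is a toy simulation: the 20-minute cycle length is an assumed figure, and we pretend the manager retrieves and empties the bowl instantly.

```python
def persian_water_clock(total_minutes, minutes_to_sink):
    """Count how many stones end up in the tally jar over a span of time.

    Each time the bowl fills and sinks, the timekeeper retrieves it,
    empties it, and drops one stone in the jar. Retrieval and emptying
    are assumed to take no time at all.
    """
    stones = 0
    elapsed = 0
    while elapsed + minutes_to_sink <= total_minutes:
        elapsed += minutes_to_sink  # bowl slowly fills, then sinks
        stones += 1                 # one stone per completed cycle
    return stones

# A six-hour irrigation watch with a bowl that sinks every 20 minutes:
persian_water_clock(360, 20)  # -> 18 stones in the jar
```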

Photo credit: 96dpi via Foter.com / CC BY-NC

Science & Society

The Dawn of Crystallography

Of all the experimental sciences used to determine the arrangement of atoms in crystalline solids, crystallography is by far the best. Modern crystallographers use a process called x-ray crystallography to study the structure of crystalline molecules, but the pioneers of this science had no such fancy technology. Instead, they did it the old-fashioned way, which for them was actually the new-fashioned way, because they were literally making it up as they went.

Kepler Creates Crystallography (Kinda)

One of the first notable hypotheses on crystallography comes from the famous German mathematician and astronomer Johannes “Big Bad John” Kepler. In his 1611 treatise, A New Year’s Gift of Hexagonal Snow (roughly translated from the Latin), Kepler hypothesized that the hexagonal symmetry of snowflakes was due to the regular packing of spherical water particles.


More than half a century later, in 1669, Danish scientist Nicolas “Big Nick” Steno conducted the first experimental investigations of crystal symmetry. Steno found that the angles between corresponding faces of a given type of crystal are the same in every specimen of that crystal.

Haüy No Haüy

More than a full century later, in 1784, the French mineralogist and “Father of Modern Crystallography” René Just “Big René” Haüy discovered that simple stacking patterns of blocks of the same shape and size can be used to describe every face of a given crystal. Haüy’s work led to the further discovery that crystals are constructed on a regular, repeating three-dimensional array of atoms/molecules, in which a single unit cell repeats indefinitely along three principal, and not necessarily perpendicular, directions.

Building off these discoveries, in 1839 Welsh mineralogist William Hallowes “Big Willie” Miller devised a way to give each face of a crystal a unique label of three small integers. These integers are now known as Miller indices, and they are used to identify and classify crystal faces to this day.
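For the curious, the standard recipe behind those three little integers goes: note where the plane intercepts each crystal axis (in unit-cell lengths), take the reciprocals, then clear the fractions down to the smallest whole numbers. Here is a minimal Python sketch of that procedure (the function name is ours, not any standard library's):

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm

def miller_indices(intercepts):
    """Turn axis intercepts into Miller indices (h, k, l).

    Pass None for an axis the plane never crosses (an intercept
    "at infinity"), which contributes a 0 index.
    """
    # Step 1: take reciprocals of the intercepts.
    recips = [Fraction(0) if x is None else 1 / Fraction(x)
              for x in intercepts]
    # Step 2: clear fractions via the LCM of the denominators.
    m = reduce(lcm, (r.denominator for r in recips))
    ints = [int(r * m) for r in recips]
    # Step 3: divide out any remaining common factor.
    g = reduce(gcd, (abs(i) for i in ints)) or 1
    return tuple(i // g for i in ints)

# A plane cutting the axes at 1 and 2 unit cells, parallel to the third:
miller_indices((1, 2, None))  # -> (2, 1, 0)
```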

Combining Haüy’s discoveries and Miller’s work, a group of 19th-century scientists (German mineralogist Johann “Big Johann” Hessel, French crystallographer Auguste “Big Augie” Bravais, Russian mineralogist Evgraf “Big Ev” Fedorov, German mathematician Arthur “Big Art” Schoenflies, and English geologist William “Big Bill” Barlow) compiled a complete catalog of all possible crystal symmetries.

Photo credit: subarcticmike via Foter.com / CC BY

Historical Science & Technology, Technology, World-Changing Inventions

A Brief History of Industrial Machining

The term “machining” refers to any of a number of processes by which raw material is cut, ground, or otherwise mechanically/physically transformed into a desired final shape via controlled material removal. Sometimes referred to as “subtractive machining,” the basic process has been used since the first caveman sharpened a stick on a rock to create a makeshift spear. In the more modern sense, machining has been used extensively since the 18th century CE and is a major part of manufacturing and other industrial processes.

The Meaning of Machining

Prior to Ye Olde Industrial Revolution, a “machinist” was a dude who built and/or repaired machines, work that was done almost exclusively by hand. By the middle of the 19th century, industry all around the world was revolting and the definition of “machinist” had become more akin to what we think of now—someone who machines material into an end product, part, or component via turning, drilling, boring, sawing, shaping, etc. Early machine tools such as lathes, drill presses, and milling machines helped launch the first wave of modern machinists.

The lathe dates back to ancient Egypt, but did not become mechanically powered—and thus far more powerful and useful—until the Industrial Revolution. The earliest lathes can be traced back to roughly 1300 BCE. These lathes were operated by a two-person team, one of whom turned the wooden workpiece with a length of rope, while the other cut shapes into the wood with sharp tools. By the Middle Ages, pedal power had replaced the hand-turned rope. The first true machine lathe was a horizontal boring machine installed at the British Royal Arsenal in 1772. The horse-powered machine was used to manufacture cannons used in the Revolutionary War. So, ultimately, not a huge success.

An early mechanical lathe (circa 1919) from a Canadian metalworking factory.

Very early humans invented the first drills circa 35,000 BCE. (What highs and lows humanity has experienced in the millennia since!) These first rudimentary drills were little more than pointed sticks that were rubbed between the palms—flint points were sometimes attached. Bow- or strap-drills were developed approximately 10,000 years ago, and were primarily used to create fire. Augers were first used to drill (or dig) large holes in the heyday of the Roman Empire. The drill press was derived from the bow-drill, and early models were powered by windmills or water wheels. The invention of the electric motor in the late 1800s led to the electric drill and drill press, early versions of which are not all that different from those we use today.

Machining Today

While some aspects of machining equipment have remained largely the same, other devices would have been wholly unimaginable to Industrial Revolutionaries. Fully automated, CNC-powered machining centers can now do the work of a dozen or more men in a fraction of the time, and can even change out their own tools if, for example, a drill bit breaks mid-operation. New machining methods, like electrical discharge machining, make full use of technologies that were barely even conceived of in the 1800s. Even the machine enclosures used today, with soundproofing, temperature control, air-cleaning HVAC systems, and other advanced features, are technological marvels by Industrial Revolution standards.

Photo credit: Internet Archive Book Images via Foter.com / No known copyright restrictions

World-Changing Inventions

Don’t Fear the Reaper

Halloween is long gone, so we’re not talking about the Grim version here—but don’t fear him, either! Instead, since it’s nearly Thanksgiving, a harvest celebration, we’re talking about the reaper that cuts and gathers (or “reaps”) crops.

Manual Reaping

Naturally, the first reapers farmers used were handheld and powered by good ol’ elbow grease. After farmers got tired of plucking ears of grain, etc., by hand, they invented sickles and scythes to cut the stalks for harvest. (A scythe is a type of reaper, which is why the Grim Reaper carries one and why he’s called that. Whaddaya know?!)

Mechanical Reaping

Artist’s rendition of the Hussey Reaper in action (see below)

A truly unsung hero of human civilization, the mechanical reaper is one of the most important inventions in the history of mankind. It made it possible for farmers to harvest more crops faster and more easily, which in turn made it possible to sustain villages and settlements via agriculture without every single person in the group having to work the fields, which in turn allowed civilization to grow in other ways.

The first mechanical reaper was invented by the Belgic Gauls. Known as the “Gallic header,” this simple device cut the ears off grain stalks, leaving the straw behind, and was pushed by an ox or oxen. The Gallic header was essentially lost during the Dark Ages, because Dark Ages, and farmers reverted to manual reaping. Dummies.

After just a few quick centuries, in 1814, Thomas “Big Tom” Dobbs of Birmingham, England, invented a new-and-improved mechanical reaper. Dobbs’ invention consisted of a circular blade that cut grain stalks as it went and gathered the harvested grains via a pair of rollers.

It took but fourteen years for a newer-and-improveder mechanical reaper to appear. Developed by Scottish minister and inventor Patrick “Big Paddy” Bell, it used a revolving reel, a cutting knife, and a canvas conveyor belt. Bell’s reaper was widely used throughout Scotland, and eventually reached mainland Europe.

Hussey vs. McCormick

In 1833, American inventor Obed Hussey patented the Hussey Reaper, which provided a significant improvement in reaping efficiency. The Hussey Reaper needed only two horses to draw it (and wasn’t particularly strenuous on the horsies), plus a human operator and a separate human driver. Its design left reaped fields with clean and even surfaces.

Invented by the father and son duo of Robert and Cyrus McCormick and patented in 1837, the McCormick Reaper was also horse-drawn and was specially designed to harvest small grain crops. Though it included a number of unique features, the McCormick Reaper was very similar in design to the Hussey Reaper, and Hussey and the McCormicks battled each other in patent court for many years, even as they continued to update their respective designs to outdo their competitor in the marketplace.

A McCormick Reaper reaping.

A mere twenty-four years later, the US Patent Office issued its ruling. It determined that Hussey’s design was the basis for both sides’ reapers and their success, and that Hussey’s heirs should receive monetary compensation for his invention as well as for a number of further innovations made by others. Simultaneously, and perhaps paradoxically, McCormick’s patent was extended for seven more years.

Hussey Reaper photo credit: Internet Archive Book Images via Foter.com / No known copyright restrictions

McCormick Reaper photo credit: UpNorth Memories – Donald (Don) Harrison via Foter.com / CC BY-NC-ND

Historical Science & Technology, Pseudoscience

The Baghdad Battery

These days, batteries are everywhere and in everything. You’ve probably got one in your pocket right now, in fact (in your phone). The origin of modern batteries can be traced back to good ol’ Big Ben Franklin in the 1700s, but the alkaline batteries that were the standard for decades did not appear until 1899. Less than two decades later, lithium batteries had been developed; lithium-ion batteries took another 60 years to appear, with rechargeables not far behind.

But way, way back when, in the Parthian period (circa 250 BCE to 224 CE), the first-ever battery was invented. Or was it?

Scroll Storage or Power Source?

The “Baghdad Battery” consists of three components: a ceramic pot, a copper tube, and an iron rod. Though its true purpose remains unclear, the most widely accepted explanation of this disparate trio is that they were used collectively to store sacred scrolls—wrap the scrolls around the rod, put the rod into the tube, and stow the tube in the pot.

A modern, commercially-available version of the Baghdad Battery.

However, upon its initial discovery, it was speculated that these three pieces could be combined to create a galvanic cell; that is, a battery.

Some folks speculate that, with the addition of wine, lemon juice, grape juice, or vinegar to serve as an acidic electrolyte, the copper and iron components could function as electrodes and produce an electrical current. Researchers, particularly German painter and naturalist Wilhelm “Big Willie” König, noted that a high number of objects from ancient Iraq (whence the “battery” originated) were plated with very thin layers of gold. These researchers suggested that the Baghdad Battery was used to perform an early type of electroplating.


Though at least two attempts to duplicate the Baghdad Battery’s supposed electrical potential have proved successful, the genuine artifact has since been debunked as a possible power source. Firstly, an iron-copper-electrolyte junction produces gas bubbles as it operates; these bubbles would gradually insulate the electrode, making the battery progressively less effective with ongoing use. Secondly, even in a perfect setup, the voltage generated by an iron-copper-electrolyte cell of this size would be far lower than that required for electroplating.
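The voltage argument is easy to check against textbook electrochemistry. Using standard reduction potentials (ideal laboratory conditions—a vinegar-filled clay pot would do even worse), a single iron-copper cell tops out well under a volt:

```python
# Standard reduction potentials in volts (vs. the standard hydrogen
# electrode) -- textbook values under ideal standard conditions.
E_COPPER = +0.34   # Cu2+ + 2e- -> Cu  (the cathode, where plating happens)
E_IRON   = -0.44   # Fe2+ + 2e- -> Fe  (the anode)

# Ideal open-circuit voltage of one iron-copper cell:
cell_voltage = E_COPPER - E_IRON   # about 0.78 V, before any real-world losses
```

Nineteenth-century electroplaters typically wired multiple cells in series to get workable voltages and currents, which is one more reason a lone clay-pot cell makes an unconvincing plating rig.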

König later determined that the “electroplated” items he had studied were, in fact, fire-gilded with mercury.

Photo credit: Boynton Art Studio via Foter.com / CC BY