Technology, World-Changing Inventions

Water Treatment Technology Through History


Civilization has changed in uncountable ways over the course of human history, but one factor remains the same: the need for clean drinking water. Every significant ancient civilization was established near a water source, but the quality of the water from these sources was often suspect. Evidence shows that humankind has been working to clean up its water and water supplies since as early as 4000 BCE.

Cloudiness and particulate contamination were among the factors that drove humanity's first water treatment efforts; unpleasant taste and foul odors were likely driving forces, as well. Written records show ancient peoples treating their water by filtering it through charcoal, boiling it, straining it, and other basic means. Egyptians as far back as 1500 BCE used alum to remove suspended particles from drinking water.

By the 1700s CE, filtration of drinking water was a common practice, though the efficacy of this filtration is unknown. More effective slow sand filtration came into regular use throughout Europe during the early 1800s.

As the 19th century progressed, scientists found a link between drinking water contamination and outbreaks of disease. Drs. John Snow and Louis Pasteur made significant discoveries regarding the negative effects that microbes in drinking water had on public health. Particulates in water were now seen not just as aesthetic problems, but as health risks as well.

Slow sand filtration continued to be the dominant form of water treatment into the early 1900s. In 1908, chlorine was first used as a disinfectant for drinking water in Jersey City, New Jersey. Elsewhere, other disinfectants like ozone were introduced.

The U.S. Public Health Service set federal regulations for drinking water quality starting in 1914, with expanded and revised standards being initiated in 1925, 1946, and 1962. The Safe Drinking Water Act was passed in 1974, and was quickly adopted by all fifty states.

Water treatment technology continues to evolve and improve, even as new contaminants and health hazards in our water present themselves in increasing numbers. Modern water treatment is a multi-step process that combines multiple technologies. These include, but are not limited to, filtration systems, coagulants (which bind smaller particulates into larger, easier-to-remove particles called "floc"), disinfectant chemicals, and industrial water softeners.

For further information: planned future articles on Sandy Historical will expand on some of the concepts mentioned here. Please visit this page again soon for links to further reading.

Historical Science & Technology, World-Changing Inventions

Magna Rota: The Treadwheel Crane

Ever wondered how ancient Roman monuments or medieval castles and cathedrals were built? For example, how'd they get those huge stones way up to the top? A few different methods were used, but the most common solution was the treadwheel crane, a wooden, human-powered lifting and lowering machine.

A Hard-Working Hamster Wheel

In simplest terms, the treadwheel crane was a large wooden wheel that turned around a central shaft. The wheel enclosed a treadway wide enough for two workers to walk side by side. A rope running through a pulley at the end of the crane's lifting arm was wound onto a spindle by the wheel's rotation. Turning the wheel took the rope in or paid it out, allowing the crane to lift or lower its load.
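
For a rough sense of why the design worked so well, the treadwheel behaves like a simple wheel and axle: the workers' weight is applied out at the rim, while the rope winds onto the much smaller spindle, multiplying the applied force by roughly the ratio of the two radii. The sketch below uses assumed, illustrative dimensions and forces, not measurements of any particular crane:

```python
# Rough mechanical-advantage sketch for a treadwheel crane (wheel-and-axle model).
# All dimensions and forces are assumed, illustrative values -- not historical data.

wheel_radius_m = 2.5      # assumed treadwheel radius
spindle_radius_m = 0.15   # assumed winding-spindle radius
worker_force_n = 700.0    # assumed useful force from two walkers' body weight

# Ignoring friction and any additional pulley advantage, the rope force is the
# workers' force multiplied by the ratio of the two radii.
mechanical_advantage = wheel_radius_m / spindle_radius_m
rope_force_n = worker_force_n * mechanical_advantage

print(f"Mechanical advantage: {mechanical_advantage:.1f}x")
print(f"Rough lifting capacity: {rope_force_n / 9.81:.0f} kg")
```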

Unlike modern cranes, the treadwheel crane was only capable of moving its load vertically. There was little to no allowance for lateral movement. As such, additional workers were required to guide the load to its final position. Stones and stone blocks were generally lifted via slings, while smaller items would be loaded onto pallets or into barrels or baskets prior to lifting.

Interestingly, medieval treadwheel cranes rarely included ratcheting or braking mechanisms to prevent runaways or backsliding. Such features were rendered unnecessary by the high friction forces exerted within the machines, which naturally prevented such occurrences.

Treadwheel cranes were usually placed within the building they were being used to construct. When one level was completed, and the floor above was secured and stable, the crane would be disassembled, moved up to the next level, and reassembled to continue the building process. Because of this, many of the ancient treadwheel cranes still in existence are found at the top of castle or church towers, above the vaulting and below the roof.

60X More Efficient than the Egyptians

The most well-known treadwheel crane, the one about which the most contemporaneous writing survives, is the Roman Polyspastos. With just two men walking on its treadway, it was capable of lifting up to 6000 kilograms (3000 kg per person). In contrast, when building the pyramids, the ancient Egyptians used ramps that required roughly 50 men to move a 2.5 ton block—about 50 kg per person. If you math it out, you’ll find that the treadwheel crane is 60 times more efficient. (Perhaps unsurprisingly, as ramps are notoriously inefficient.)
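
For the curious, the arithmetic behind that "60 times" figure is simple enough to check in a few lines, using only the numbers quoted above:

```python
# Per-person lifting efficiency: Roman treadwheel crane vs. Egyptian ramp crews.
# All figures are the ones quoted in the paragraph above.

polyspastos_load_kg = 6000   # lifted by a crew of...
polyspastos_crew = 2

ramp_block_kg = 2500         # a 2.5-ton stone block, moved by...
ramp_crew = 50

crane_kg_per_person = polyspastos_load_kg / polyspastos_crew  # 3000 kg per person
ramp_kg_per_person = ramp_block_kg / ramp_crew                # 50 kg per person

print(crane_kg_per_person / ramp_kg_per_person)  # -> 60.0
```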

Surviving 17th century treadwheel crane in Harwich, Essex, England (see below).

After the fall of the Roman Empire, the treadwheel crane fell out of common use for several centuries. It was reintroduced in the High Middle Ages; the oldest recorded mention of a Medieval-era treadwheel crane comes from a French manuscript dated to 1240 CE.

Employed throughout France, England, the Netherlands, Belgium, and Germany, treadwheel cranes helped construct the lofty Gothic cathedrals of the era. The technology was also frequently used in harbors to load and unload ships, and in mining to excavate dirt and rocks, as well as loads of valuable minerals, from mine shafts.

Survivors

Original, historical treadwheel cranes are now somewhat difficult to find. Two examples can be found in the United Kingdom, one in Guildford, Surrey, and the other in Harwich, Essex. Both date back to the 17th century CE. The Guildford crane was last used in 1960 to help build the Guildford Cathedral. Both cranes are now protected as part of the UK’s Statutory List of Buildings of Special Architectural or Historic Interest.

Photo credit: diamond geezer / Foter / CC BY-NC-ND

Historic Events that Drove Innovation

The 2008 Georgia Sugar Factory Explosion

At approximately 7:00 pm on February 7, 2008, a massive dust explosion shook the Imperial Sugar refinery in Port Wentworth, Georgia. The explosion killed 14 people, injured 42, and destroyed 12 percent of the 872,000 square foot facility. Federal investigations of the event found that the explosion had been "entirely preventable." The disaster led to the development of new technology designed to prevent future combustible dust explosions in similar situations.

Background: The Dixie Crystals Refinery

The sugar processing plant, known as the Dixie Crystals Refinery, was built in 1916, opened a year later, and had been in operation almost continuously since then. Constructed on a 160-acre site, the facility was the second largest sugar refinery in the United States. Sugar Land, Texas-based Imperial Sugar bought the factory from its previous, locally based owner. In 2007, the facility refined roughly nine percent of all the sugar used in the US that year, making it one of the highest-producing sugar refineries in the world.


Many areas of the factory had never been fully updated or renovated, and still reflected the now-outdated construction methods and materials used in the plant's initial construction. The facility's ceilings were made of wood, and creosote tar was still plentiful, despite being a well-known fire risk.

In 2005, after three similar, fatal explosions two years earlier, the US Chemical Safety and Hazard Investigation Board (CSB) had released the findings of a study that investigated the risks of combustible dust explosions. The report stated that dust explosions posed a severe risk, with evidence showing that 281 combustible dust explosions had occurred in the US between 1980 and 2005. The CSB subsequently made numerous recommendations to the Occupational Safety and Health Administration (OSHA), only some of which had been implemented by 2008.

The Explosion

The explosion occurred at roughly 7 pm, erupting from the center of the Dixie Crystals Refinery, in a building where processed sugar was fed to storage and bagging equipment via a series of conveyor belts and elevators. One hundred twelve employees were onsite at the time, including Imperial Sugar CEO John Sheptor.

Sheptor, who survived the blast only because he was protected by a firewall, stated that the accumulated dust from countless tons of processed sugar had acted like gunpowder. Sugar dust, a highly combustible substance, would be officially identified as the fuel for the explosion within 24 hours of the blast. Evidence suggests that the ignition source was a spark from conveyor-line equipment that had overheated beneath the heavy buildup of sugar dust.

The building at the event's epicenter was destroyed in the explosion and ensuing fire, as were several others nearby. Two 100-foot-tall reinforced concrete storage silos adjacent to the building caught fire as well, and the sugar inside the silos continued to burn and smolder for seven days before finally being extinguished. Over 3 million pounds of fire-hardened sugar were eventually recovered from the silos.

Ambulances from 12 counties and firefighters from three responded, as did the US Coast Guard, which closed off a portion of the Savannah River alongside the refinery.

OSHA & CSB Investigate

OSHA arrived on the scene within two hours. An OSHA investigation was launched immediately, as well as an independent CSB investigation. Interviews with employees of the refinery revealed a lack of training and preparedness—40 of those interviewed had never received training on exiting the building in an emergency, and only five recalled any instance of a fire drill having been conducted.

Less than a month later, OSHA sent a letter to 30,000 employees across the US, alerting them to the danger of dust explosions in the workplace. A Congressional bill was soon proposed as well; the Combustible Dust Explosion and Fire Prevention Act of 2008 was passed by the House of Representatives, but stalled out in the Senate. Numerous new OSHA regulations were enacted in the wake of the Dixie Crystals Refinery dust explosion.

The CSB completed its investigation the next year and released its report in September 2009. In it, the explosion was called "entirely preventable." CSB investigators noted that companies in all areas of the sugar industry had been well aware of the potential for combustible dust explosions since 1926, citing memos from the late 1960s that voiced concerns about these risks. The CSB also revealed that Imperial Sugar's own construction changes at the site had exacerbated the accumulation of sugar dust, and that the company had never practiced evacuation procedures.

From Disaster Springs Innovation

Following the Dixie Crystals Refinery dust explosion, equipment manufacturers serving the sugar processing and broader chemical processing industries set out to develop new ways to reduce or eliminate the risk of combustible dust explosions. Because the Georgia plant's conveyor system was identified as the source of ignition, many of these efforts focused on improving material conveyors.

The first step was safer, more efficient conveyors that utilized fewer moving parts, thus reducing friction and heat generation. From there, industry engineers went on to develop pneumatic conveyor systems that can transport materials with almost no friction and that significantly reduce the creation of material dust.

Currently, the most advanced solution yet for preventing combustible dust explosions is being developed by Nol-Tec Systems in partnership with Air Products and Chemicals. This new system will incorporate a state-of-the-art pneumatic conveyor and, for further improved safety, will replace the oxygen-rich air in the conveying system with an inert gas. The inert gas does not create a vacuum; rather, it is pumped through the system in high enough quantities to displace oxygen down to a non-combustible level, less than 15 percent of the volume of the conveying flow.
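
As a rough back-of-the-envelope sketch of the dilution involved, assuming the conveying gas starts out as ordinary air at about 21 percent oxygen and using the 15 percent threshold quoted above (illustrative arithmetic, not Nol-Tec or Air Products specifications):

```python
# Back-of-the-envelope inerting estimate: what fraction of the conveying flow
# must be inert gas to dilute oxygen below the stated threshold?
# Illustrative arithmetic only -- not vendor specifications.

O2_FRACTION_IN_AIR = 0.209   # ordinary air is roughly 21% oxygen
TARGET_O2_FRACTION = 0.15    # non-combustible level quoted in the article

# If a fraction f of the flow is inert gas and (1 - f) is air, the resulting
# oxygen fraction is O2_FRACTION_IN_AIR * (1 - f). Solving for the smallest f
# that satisfies O2_FRACTION_IN_AIR * (1 - f) <= TARGET_O2_FRACTION:
min_inert_fraction = 1 - TARGET_O2_FRACTION / O2_FRACTION_IN_AIR

print(f"At least {min_inert_fraction:.0%} of the flow must be inert gas.")  # ~28%
```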

The two companies co-authored a related research paper, “Prevent Combustible Dust Explosions with Nitrogen Inerting”, in the March 2015 issue of Chemical Engineering.

Photo credit: Scott McLeod / Foter / CC BY

The Science of Film, Music & Art, World-Changing Inventions

The Spirit of Radio

Radio has played a huge role in human history and is all but inescapable in modern life. Even if you don't listen to the radio—which is understandable, since what's being played on it is almost uniformly terrible—chances are good you've got at least one radio-ready device somewhere. (Like in your car, for example.) Radio is one of those inventions that now seems like it's always been around, as if it were just a part of the world since the dawn of time. However, this world-changing device had a long and ramshackle history before a single word was broadcast.

Like most things, old-timey radios look way cooler than their modern counterparts.

“Wireless Telegraphy” & Electromagnetism

Experiments with wireless communication first began in the 1830s. Researchers started with "wireless telegraphy," or the transmission of telegraph signals without the use of wires. These early trials used inductive and capacitive coupling to send signals through the ground, through water, and along train tracks. Though the approach did work, it was soon discovered that this form of communication was limited in range and could not transmit signals far enough to be of any practical use.

By the mid-1870s, Scottish scientist James Clerk Maxwell had shown mathematically that electromagnetic waves could propagate through free space, the central prediction of his 1873 A Treatise on Electricity and Magnetism. His theory united a number of previously unrelated observations, equations, and experiments on electricity and magnetism (as well as optics) into a single, consistent theory.

These equations, now known collectively as Maxwell's Equations, showed that electricity, magnetism, and light are all manifestations of the same electromagnetic field. They remain the basis of all radio design, though Maxwell himself had no involvement in further radio research or invention.
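
For reference, here are the four equations in their modern differential form for empty space (the compact vector notation is due to Oliver Heaviside rather than Maxwell himself), along with the wave speed they predict, which matches the measured speed of light:

\[
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]
\[
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8} \ \text{m/s}
\]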

Hughes & Hertz

As far as historians can tell, the first intentional transmission of a signal via electromagnetic waves was part of an experiment by Anglo-American inventor and professor David Edward Hughes, circa 1880. Through a series of trial-and-error experiments a year earlier, Hughes had developed a portable telephone device that could pick up “aerial waves” as much as 500 yards from their source.

Hughes demonstrated his technology to representatives of London's Royal Society, including Sir George Gabriel Stokes, the famed mathematician, physicist, and Cambridge professor. Stokes posited that Hughes' device was merely picking up ordinary electromagnetic induction rather than waves transmitted through the air. Having no background in physics himself, Hughes apparently accepted Stokes' assessment as truth and did not pursue further experiments.

Working from Maxwell's theory in the late 1880s, German physicist Heinrich Hertz conducted a series of experiments that proved it. Hertz developed a method by which radio waves (known as "Hertzian waves" at the time) could be intentionally transmitted through free space by a spark-gap transmitter connected to an antenna, and detected over short distances.

By altering the inductance and capacitance of his transmitting and receiving antennae, Hertz was able to gain a modicum of control over the frequencies of the radiated waves. Using a corner reflector and a parabolic reflector, he was able to focus electromagnetic waves, thus demonstrating that radio waves behaved in the same manner as light waves, just as Maxwell had postulated some 20 years prior.
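
The relationship Hertz was exploiting is the one every radio tuner still uses: a circuit with inductance L and capacitance C resonates at f = 1/(2π√(LC)). Here is a quick sketch with made-up component values, not figures from Hertz's actual apparatus:

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative component values only -- not measurements of Hertz's spark-gap gear.
inductance = 2e-6     # 2 microhenries
capacitance = 50e-12  # 50 picofarads
print(f"{resonant_frequency_hz(inductance, capacitance) / 1e6:.1f} MHz")  # ~15.9 MHz
```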

Despite his successes, Hertz never developed a practical way to utilize electromagnetic waves. In fact, he saw no value in such technology. “It’s of no use whatsoever,” he told students at the University of Bonn. “This is just an experiment that proves Maestro Maxwell was right.” Hertz died in 1894, not long before radio became practical and commercially viable.

Stuck A Feather in His Hat & Called It Marconi

That same year, Italian inventor Guglielmo Marconi set out to build a commercial wireless telegraphy system utilizing Hertzian waves. Building off of the work of Hughes, Hertz, and others, Marconi developed new devices such as portable transmitters and receivers that could broadcast signals over long distances.

By late 1895, Marconi was able to send and receive signals up to two miles away, even over hilly terrain. His experimental devices would eventually become the first commercially successful radio transmission system. Marconi's system has been credited with making possible the rescue of the roughly 700 survivors of the Titanic.

Marconi was granted a British patent in 1896, the first ever for a radio wave system. A year later, he established the world’s first radio station on the Isle of Wight, and the year after that he opened a factory that produced radio apparatuses for commercial sale. He would become the most successful inventor in his field, thanks to the commercialization of his devices. Marconi would win the Nobel Prize in Physics in 1909.

Photo credit: ellenm1 / Foter / CC BY-NC

Important People, Important Discoveries

The History Behind Newton’s Laws of Motion

The first law of motion is: Do not talk about fight club. Wait. That’s not right…

The Laws of Motion are physical laws that form the basis of classical mechanics. They describe the relationship between a body (any “thing,” essentially), the forces acting upon said body, and the motion of the body caused by those forces. Though the Laws of Motion were first compiled by and credited to Isaac Newton, he didn’t just make them up off the top of his head. Each has its own historical context.

First Law (a.k.a. The Law of Inertia)

“Every object persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.”

Aristotle, ancient Greek philosopher extraordinaire, believed that a body was in its natural state when at rest, and that all objects have a natural place in the universe. Rocks wanted to be at rest on the earth; smoke wanted to be at rest in the sky. For a body to move in a straight line at a constant speed, he believed, it required an external agent to constantly propel it, or it would stop moving and settle back into its “natural” state.

Galileo refined this notion, realizing that an external force is necessary to change the velocity of a body, but not to maintain that velocity. Without another force working against it, he determined, a moving object will keep moving. Inertia, the tendency of objects to resist changes in motion, was “discovered” by Galileo.

Newton further refined this notion into the Law of Inertia. With no net force acting on it, a body undergoes no acceleration and simply maintains its velocity.

Sir Fig has remained at rest for 169 years and counting.

Second Law

“The change of momentum of a body is proportional to the impulse impressed on the body, and happens along a straight line on which that impulse is impressed.”

Newton further extrapolated, commenting:

If a force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. […] This motion (being always directed the same way with the generating force), if the body moved before, is added to or subtracted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both.

A handy formula for this Law of Motion is F=p’. That’s really all you need to know.


Just kidding. In this formula, F is force, p is momentum, and p’ is the time derivative of the momentum. Interestingly, but perhaps unsurprisingly (Newton was a genius, after all), this equation remains valid in the context of special relativity, the physical theory regarding the relationship between space and time proposed by Albert Einstein in 1905.
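
Spelled out for the common case of constant mass m (a standard textbook reduction, not Newton's own notation), the formula unpacks as:

\[
\mathbf{F} = \frac{d\mathbf{p}}{dt} = \frac{d(m\mathbf{v})}{dt} = m\,\frac{d\mathbf{v}}{dt} = m\mathbf{a}
\]

which is the familiar F = ma taught in every introductory physics class.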

Third Law

“To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.”

By way of further explanation, Newton wrote:

Whatever draws or presses another is as much drawn or pressed by that other. […] If a horse draws a stone tied to a rope, the horse (if I may so say) will be equally drawn back towards the stone: for the distended rope, by the same endeavor to […] unbend itself, will draw the horse as much towards the stone, as it does the stone towards the horse, and will obstruct the progress of the one as much as it advances that of the other. If a body impinges upon another, and by its force changes the motion* of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, toward the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of the bodies […] if the bodies are not hindered by any other impediments. For, as the motions are equally changed, the changes of the velocities made toward contrary parts are reciprocally proportional to the bodies. This law takes place also in attractions.

* Motion, here, means momentum.

From this Law of Motion, Newton later derived the law of conservation of momentum. The conservation of momentum has since been found to be a more fundamental concept that holds in cases where the Third Law apparently fails, particularly in quantum mechanics.
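
The derivation is brief. For two bodies interacting only with each other, the Third Law says the forces are equal and opposite, so by the Second Law their momenta change at equal and opposite rates:

\[
\mathbf{F}_{12} = -\mathbf{F}_{21}
\quad\Longrightarrow\quad
\frac{d\mathbf{p}_1}{dt} + \frac{d\mathbf{p}_2}{dt} = 0
\quad\Longrightarrow\quad
\mathbf{p}_1 + \mathbf{p}_2 = \text{constant}
\]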

Photo credit: Tim Green aka atoach / Foter / CC BY

Historical Science & Technology

The Mariner’s Astrolabe

Also known as the “sea astrolabe,” the mariner’s astrolabe was a device (technically an inclinometer) used to determine the latitude of a ship at sea, using the sun’s noon altitude or the meridian altitude of a star of known declination. The design of the mariner’s astrolabe differed from a proper astrolabe to facilitate its use at sea in rough waters or heavy winds. It consisted of a graduated circle with an alidade used to measure vertical angles.

Nigh as Ancient as the Sea Itself

Okay, so that ^^^ is obviously an exaggeration. However, the invention of the mariner’s astrolabe does date back many centuries. The earliest surviving mention of the device dates to 1295 CE, while the oldest surviving specimen has a confirmed date of 1554—this mariner’s astrolabe, salvaged from the wreck of the San Esteban, is on display at the Corpus Christi Museum of Science and History in Texas. Only about 70 examples of historical mariner’s astrolabes survive today.

Mariner’s astrolabes on display at the Museum of the Forte da Ponta de Bandeira in Lagos, Portugal.

The surviving writings of Samuel Purchas, composed in the early 17th century CE, state that the standard astrolabe was not modified for marine use until the late 15th century. Ultimately, the creation and refinement of the mariner’s astrolabe is attributed to Portuguese sailors, who developed the device during the Portuguese Age of Discovery. Martín Cortés de Albacar, a Spanish cosmographer, is credited with the first written description of how to construct and use a mariner’s astrolabe, found in the 1551 tome Arte de Navegar.

Obviously, the mariner’s astrolabe was derived directly from the traditional astrolabes used for the same purpose on land, though the basic principle of the seaborne device is essentially the same as that of the archipendulum used in the building of Egypt’s mighty pyramids.

Previous seafaring instruments like the cross staff and quadrant were rendered obsolete by the invention of the mariner’s astrolabe, which remained in use until well into the 17th century. The mariner’s astrolabe was, in turn, replaced by the more accurate and easier-to-use Davis quadrant.

How To

Mariner’s astrolabes were constructed with a thick ring on top, which allowed the devices to be held or hung vertically. Holding it by this ring, a navigator at sea would align the plane of the instrument to the direction of the sun or the star being used as a reference point. With the alidade aligned with the reference point, the altitude could be read off a degree scale on the circle’s outer edge. Because it had to be suspended vertically, the mariner’s astrolabe could be difficult to use in windy conditions, but, if you’re on a boat in the middle of the ocean, what else are you gonna do but tough it out, amirite?
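
In modern terms, the noon-sun observation boils down to a single line of arithmetic: latitude equals 90 degrees minus the observed noon altitude, plus the sun's declination for that date. A minimal sketch, assuming a northern-hemisphere observer with the sun culminating to the south and a declination value taken from a printed table (as navigators did historically); the numbers in the example are purely illustrative:

```python
def latitude_from_noon_sun(noon_altitude_deg: float, sun_declination_deg: float) -> float:
    """Latitude from the sun's meridian (noon) altitude.

    Assumes a northern-hemisphere observer with the sun culminating to the south:
        latitude = 90 - altitude + declination
    Declination is positive when the sun is north of the celestial equator.
    """
    return 90.0 - noon_altitude_deg + sun_declination_deg

# Purely illustrative numbers, not a historical observation: a noon altitude of
# 52 degrees on a day when the sun's declination is +10 degrees.
print(latitude_from_noon_sun(52.0, 10.0))  # -> 48.0 (degrees north)
```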

Photo credit: Georges Jansoone / Foter / CC BY-SA