Technology, World-Changing Inventions

Water Treatment Technology Through History

Civilization has changed in uncountable ways over the course of human history, but one factor remains the same: the need for clean drinking water. Every significant ancient civilization was established near a water source, but the quality of the water from these sources was often suspect. Evidence shows that humans have been working to clean up their water and water supplies since as early as 4000 BCE.

Cloudiness and particulate contamination were among the factors that drove humanity’s first water treatment efforts; unpleasant taste and foul odors were likely driving forces, as well. Written records show ancient peoples treating their water by filtering it through charcoal, boiling it, straining it, and using other basic methods. Egyptians as far back as 1500 BCE used alum to remove suspended particles from drinking water.

By the 1700s CE, filtration of drinking water was a common practice, though the efficacy of this filtration is unknown. More effective slow sand filtration came into regular use throughout Europe during the early 1800s.

As the 19th century progressed, scientists found a link between drinking water contamination and outbreaks of disease. Drs. John Snow and Louis Pasteur made significant scientific discoveries regarding the negative effects microbes in drinking water had on public health. Particulates in water were now seen not just as aesthetic problems, but as health risks as well.

Slow sand filtration continued to be the dominant form of water treatment into the early 1900s. In 1908, chlorine was first used as a disinfectant for drinking water in Jersey City, New Jersey. Elsewhere, other disinfectants like ozone were introduced.

The U.S. Public Health Service set federal regulations for drinking water quality starting in 1914, with expanded and revised standards being initiated in 1925, 1946, and 1962. The Safe Drinking Water Act was passed in 1974, and was quickly adopted by all fifty states.

Water treatment technology continues to evolve and improve, even as new contaminants and health hazards in our water present themselves in increasing numbers. Modern water treatment is a multi-step process that involves a combination of technologies. These include, but are not limited to, filtration systems, coagulants (which form larger, easier-to-remove particles called “floc” from smaller particulates), disinfectant chemicals, and industrial water softeners.

For further information, please read:

Planned future articles on Sandy Historical will expand on some of the concepts mentioned here. Please visit this page again soon for links to further reading.

Historical Science & Technology, World-Changing Inventions

Origins of Steam Locomotion

As far back as the Ancient Greeks, railways were used to transport goods over long distances. The earliest railways relied on manpower to move their wheeled carts along the tracks. Later, horses were used, giving us the term “horsepower.” It was not until the late 1700s that the steam-powered locomotive was developed.

From Prototypes to World Changers

The first prototype of a steam locomotive was created by Scottish inventor William Murdoch in 1784. By the late 1780s, steamboat pioneer John Fitch had built the United States’ first working model of a steam rail locomotive.

It was not until 1804, however, that the first full-scale, working railway steam locomotive was built. Richard Trevithick of the United Kingdom sent his steam locomotive on the world’s first railway journey on 21 February that year, traveling from the Pen-y-darren ironworks at Merthyr Tydfil to Abercynon in South Wales.

Trevithick’s design employed numerous innovative features, including the use of high pressure steam that significantly increased the engine’s efficiency while simultaneously decreasing its weight. The Newcastle area of northeast England became Trevithick’s proving ground for further experimentation in locomotive design.

Salamanca, the first successful two-cylinder steam locomotive, designed by Matthew Murray, debuted on the Middleton Railway in 1812. Puffing Billy, completed in 1814 by engineer William Hedley, is the oldest surviving steam locomotive, currently on display in London’s Science Museum.

Steam locomotion innovator George Stephenson first built the Locomotion for northeast England’s Stockton & Darlington Railway, then the world’s only public steam railway. In 1829, he built The Rocket, which subsequently won the Rainhill Trials and led to Stephenson becoming the world’s leading locomotive builder and engineer. His steam locomotives were used on railways throughout the UK, Europe, and the United States.

Stephenson’s Rocket, currently on display at the Science Museum of London.

Early U.S. Steam Locomotives

The Tom Thumb, built in 1830 for the Baltimore & Ohio Railroad, was the first steam locomotive developed, built, and run in the United States. Previously, locomotives for American railroads had been imported from Britain. One of these imported steam locomotives, the John Bull, remains the oldest still-operable engine-powered vehicle of any kind in the US.

Locomotives in Continental Europe

Continental Europe’s first railway service opened in Belgium in May 1835, running between Mechelen and Brussels. The railway’s first locomotive was The Elephant.

Germany’s first steam locomotive was designed by British engineer John Blenkinsop and built by Johann Friedrich Krigar in 1816. At first, the locomotive was used strictly for demonstrations, running on a circular track in the factory yard of the Royal Berlin Iron Foundry. It also became the first steam-powered locomotive used for passenger service, as the public was welcomed to ride, free of charge, in coaches pulled by the locomotive along its circular track. The first German-designed steam locomotive, the Beuth, was built in 1841 by August Borsig.

In Austria, the Emperor Ferdinand Northern Railway became the nation’s first steam railway in 1837, running between Vienna-Floridsdorf and Deutsch-Wagram. Austria is the home of the world’s oldest continually running steam locomotive—the GKB 671 has been in service since its debut in 1860.

Photo credit: DanieVDM / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Technology

Quantum Clock (Quantum Snooze Button Optional)

Everyone’s had their alarm clock fail them at some point. Maybe a storm knocked out the power while you were sleeping, or you forgot to plug in your phone and the battery died overnight. Next thing you know, you wake up at 10:13 and you were supposed to be at work hours ago. A quantum clock is a perfect—albeit completely impractical—solution.

Ions & Lasers & Absolute Zero—Oh My!

The quantum clock is an advanced, even more accurate variation of the atomic clock, which itself was first suggested by Lord Kelvin in 1879 and built by the US National Bureau of Standards (now the National Institute of Standards and Technology, or NIST) in 1949. Instead of the mercury ions used in NIST’s earlier ion-based atomic clocks, quantum clocks use electromagnetic traps to isolate aluminum and beryllium ions together. Lasers are used to cool these ions to just above absolute zero.

Like an atomic clock, a quantum clock measures the vibrations of ions, but it does so at an optical frequency, using an ultraviolet laser. This frequency is roughly 100,000 times higher than the microwave frequency utilized in NIST-F1, the cesium fountain atomic clock that serves as the United States’ official timekeeper.
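For the number-curious, that comparison works out roughly as follows. The cesium frequency below is the one that defines the SI second; the optical frequency is an assumed, round-number value for an aluminum-ion clock transition (not a figure from this article), so treat the ratio as a ballpark check rather than a precise spec:

```latex
% Ballpark check of the "roughly 100,000 times higher" comparison.
% f_Cs: cesium hyperfine frequency used by NIST-F1 (exact by definition of the SI second).
% f_opt: approximate optical frequency of an Al+ ion clock transition (assumed value).
\begin{align*}
  f_{\mathrm{Cs}}  &= 9\,192\,631\,770\ \mathrm{Hz} \approx 9.2 \times 10^{9}\ \mathrm{Hz} \\
  f_{\mathrm{opt}} &\approx 1.1 \times 10^{15}\ \mathrm{Hz} \\
  f_{\mathrm{opt}} / f_{\mathrm{Cs}} &\approx 1.2 \times 10^{5}
\end{align*}
```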

NIST-F1: Only the third most accurate clock in the world.

Immeasurably Accurate

Quantum clocks are capable of dividing time into smaller units than atomic clocks do, allowing for even more precise time measurement. A quantum clock will lose approximately one second of accuracy every 3.4 billion years, making the devices roughly 37 times more accurate than existing international standards. The clocks’ accuracy is attributed, in part, to their insensitivity to background magnetic and electric fields, as well as temperature changes.
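To restate that figure in the fractional terms clock builders usually quote, here is the straightforward unit conversion of “one second every 3.4 billion years” (assuming roughly 31.5 million seconds per year); it adds nothing beyond the numbers already given above:

```latex
% Converting "loses about 1 second every 3.4 billion years" into a fractional error.
\begin{align*}
  3.4 \times 10^{9}\ \mathrm{yr} \times 3.15 \times 10^{7}\ \mathrm{s/yr}
    &\approx 1.1 \times 10^{17}\ \mathrm{s} \\
  1\ \mathrm{s} \,/\, 1.1 \times 10^{17}\ \mathrm{s}
    &\approx 9 \times 10^{-18}
\end{align*}
```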

In fact, NIST’s most recently developed quantum clock is so accurate that NIST researchers are unable to measure its ticks per second. The definition and exact length of a second are based on NIST-F1, and therefore cannot be properly applied to the more accurate quantum clock.

Photo credit: Foter / Public Domain Mark 1.0

Technology

What the Heck is a Homopolar Generator?

Homopolar generator, unipolar generator, acyclic generator, disc dynamo—this device is known by many names. (For simplicity’s sake, we’ll stick with homopolar generator.) But, apart from the obvious—“generator” is right there in the name—just what the heck is this thing?

It’s a homopolar generator, of course.

Spinny Electrical Disc Thingamajig

This unique type of DC electrical generator consists of an electrically conductive flywheel (or a cylinder, in some models) that rotates in a plane perpendicular to a uniform static magnetic field. One electrical contact sits near the disc’s axis, and the other near its periphery. This setup produces a potential difference between the center of the disc and its rim (or the ends of the cylinder). The electrical polarity varies depending on the rotational direction and the orientation of the field itself.

Typically, homopolar generators produce low voltage, no more than a few volts, but high currents. Very large versions built for research purposes have been able to produce hundreds of volts. Interconnected systems of multiple generators have been able to produce even higher voltage. Because of very low internal resistance, the largest homopolar generators can source electrical current up to a million amperes.
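That low-voltage, high-current character follows from the basic physics of a spinning conductive disc. For a disc of radius R turning at angular speed ω in a uniform field B, the open-circuit voltage between the axis and the rim is given by the standard Faraday-disc integral; the sample numbers below are purely illustrative, not the specs of any real machine:

```latex
% EMF between axis and rim of a spinning conductive disc:
% a radial element dr at radius r moves with speed v = omega * r through field B.
\begin{align*}
  \mathcal{E} &= \int_{0}^{R} \omega B \, r \, dr \;=\; \tfrac{1}{2}\,\omega B R^{2} \\
  \mathcal{E}\big|_{B=1\,\mathrm{T},\ R=0.1\,\mathrm{m},\ \omega=300\,\mathrm{rad/s}}
    &= \tfrac{1}{2} \times 300 \times 1 \times 0.1^{2} \approx 1.5\ \mathrm{V}
\end{align*}
```

Even at a few thousand RPM, a modest disc only reaches a volt or two, which is why large currents, rather than large voltages, are this machine’s natural output.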

Homopolar generators can be used for applications such as welding and experimental railgun research. Because they are capable of storing energy over long periods, then releasing that stored energy in short bursts, they are also ideal for pulsed energy applications.

Faraday Disc Redux

The Faraday disc, or Faraday wheel, was an early type of homopolar generator invented by English scientist Michael Faraday. Though the device was successful at producing electricity, Faraday’s design proved to be impractical. The homopolar generator is a modified, simplified configuration of the Faraday disc that produces roughly equal voltage, but much higher current.

A.F. Delafield received the first US patent for the general (disc) type of homopolar generator in May 1883. S.Z. de Ferranti and C. Batchelor also received separate US patents not long after.

Tesla’s Dynamo Electric Machine

Perhaps the most famous proponent of the homopolar generator was Nikola Tesla. He further improved upon the basic design, receiving a patent for his Dynamo Electric Machine. His device used an arrangement of two parallel discs, each with a separate, parallel shaft, which were joined by a metallic belt.

The discs generated opposite electric fields, causing current to flow from one shaft to the disc’s edge, across the belt to the edge of the other disc, then to the second shaft. This significantly reduced frictional losses caused by sliding contacts.

Photo credit: Foter / Public domain

Historical Science & Technology, World-Changing Inventions

He Who Smelted It Dealt It

To smelt is to extract a metal, such as silver, iron, or copper, from its ore. This extractive metallurgy process uses heat and a chemical reducing agent (such as coke or charcoal) to decompose the ore itself, while other elements are expelled as gases or reduced to slag, leaving only the desired metal behind.
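As a concrete example of what that chemical reducing agent is doing, here is a textbook carbothermic reaction in which charcoal (carbon) strips the oxygen from a copper oxide ore, leaving metallic copper behind; it is offered as a generic illustration of the chemistry, not a reconstruction of any particular ancient furnace:

```latex
% Carbon from charcoal pulls oxygen away from the copper oxide,
% escaping as carbon dioxide gas and leaving metallic copper.
\[
  2\,\mathrm{CuO} \;+\; \mathrm{C} \;\xrightarrow{\ \text{heat}\ }\; 2\,\mathrm{Cu} \;+\; \mathrm{CO_{2}}
\]
```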

The Mystery of the First Smelters

Exactly where, when, and how smelting was first discovered is unknown. Because the discovery of the process predated the invention of writing by several thousand years, there are no written records available. We do know that prehistoric humans were capable of smelting metals, with evidence dating back more than 8,000 years.

In ancient Europe, the first metals to be successfully smelted were tin and lead. These metals, being relatively soft and having relatively low melting points, could easily be smelted by placing their ores in a wood fire. As such, it is possible that the discovery was made accidentally.

Though evidence suggests that these materials may have been smelted earlier in history, the earliest artifacts of this process are cast lead beads created in Anatolia (now Turkey) in roughly 6500 BCE.

Smelting the Useful Metals

Due to their physical characteristics, the smelting of lead and tin had very little impact on human civilization. However, the discovery and use of the so-called useful metals—copper and bronze—were significant, ushering in the Copper and Bronze Ages, respectively.

Because copper’s melting point is roughly 400°F above the temperature of a campfire, the development of copper smelting was surely no accident. It is believed that early copper smelting was performed using pottery kilns.

The earliest smelted copper artifacts, found in modern Serbia, date back to approximately 5500 BCE.

Around 4200 BCE, smelters in Asia Minor created bronze by combining copper with arsenic, resulting in an alloy that is considerably harder than copper and, therefore, more useful. Because arsenic is a common impurity in copper ores, it is believed that the alloying process was discovered by mistake.

Roughly 1,000 years later, in the same region of the world, smelters found that alloying copper and tin produced a bronze material that was even harder and more durable than copper/arsenic bronze. It is believed that this discovery, too, was accidental. However, by 2000 BCE, tin was being mined for the sole purpose of bronze production.

Surviving Bronze Age swords.

For several millennia to come, bronze was the material of choice for the forging of weapons, armor, tools, agricultural implements, and household utensils such as saws and sewing needles. The mining of the raw materials used in bronze smelting contributed significantly to the development of trade networks throughout Europe and Asia.

I Am Iron (Smelting) Man

The origins of iron smelting are essentially unknown. The general consensus is that the process was first performed in what is now Turkey. Historical evidence found in Egypt suggests that iron smelting was known in the region as far back as 1100 BCE; additional evidence points to iron smelting being part of West African culture as early as 1200 BCE. This, then, gave rise to the Iron Age, which lasted until roughly 200 CE.

Wherever iron smelting originated, the process was generally the same in all cultures. Iron ore was smelted in bloomeries, a type of earthen furnace where temperatures could be regulated accurately enough to facilitate smelting iron without actually melting the material itself. This would create a spongy mass of metal known as a bloom, which was consolidated with hammers and good ol’ elbow grease. The oldest known example of a bloomery dates back to 930 BCE in modern Jordan.

In the Medieval period, the iron smelting process was refined. Bloomeries were replaced with blast furnaces which produced pig iron. Pig iron was then further refined—in a finery forge, of all things—to create forgeable bar iron. The end product was what we now call wrought iron. This iron smelting process was used, in essentially the same form, until the Industrial Revolution.

Photo credit: Foter / Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)

Historical Science & Technology

Astronomy in Ancient Egypt: A Brief Look

To say that the Ancient Egyptians were way ahead of the astronomy game would be an understatement. Stone circles at Nabta Playa, dating back to the 5th millennium BCE, were likely built to align with specific astronomical bodies. By the 3rd millennium BCE, they had devised a 365-day calendar that, save for a string of five “extra” days at the end of the year, was quite similar to the one we use today. And the pyramids (heard of those?) were built to align with the pole star.

Cataloging Yearly Events

In Ancient Egypt, astronomy was used to determine the dates of religious festivals and other yearly events. Numerous surviving temple books record, in great detail, the movements and phases of the sun, moon, and stars. The start of the annual flooding of the Nile River was determined by careful notation of heliacal risings—the first visible appearance of stars at dawn. The rising of Sirius was of particular importance in predicting this event.

The temple of Amun-Ra at Karnak was built to align with the sunrise on the day of the winter solstice. A lengthy corridor in the temple is brightly illuminated by the sun on that day only—for the rest of the year, only indirect sunlight enters.

The remains of Karnak temple.

Logging the Hours

In the shared tomb of pharaohs Ramses VI and Ramses IX, tables carved into the ceiling tell the exact hour that the pole star will be directly overhead on any given night. The tomb includes a statue called the Astrologer; those using the star chart must sit in a precise position, facing the Astrologer, for the chart to be accurate.

Aligning the Pyramids

The Egyptian pyramids were aligned with the pole star. Owing to the precession of the equinoxes, the pole star at the time (ca. 3rd millennium BCE) was Thuban, a faint star found in the constellation Draco.

Later temples were built or rebuilt using the star chart from the Ramses tomb to find true north. When used correctly, this star chart still provides surprisingly good accuracy.

Astronomical Innovations

The now-proven planetary theory that the Earth rotates on its axis, and that the planets revolve around the sun (the only other planets observed at that time were Venus and Mercury), was initially devised by the Ancient Egyptians. In his writings, Roman philosopher Macrobius (395-423 CE) referred to this as the “Egyptian System.”

The Ancient Egyptians also used their own complex constellation system. Though there is some influence from others’ observations, the system was almost completely devised by the Egyptians themselves.

Photo credit: Malosky / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Important People, Important Discoveries, World-Changing Inventions

Big John Gutenberg & the Printing Press

Johannes Gensfleisch zur Laden zum Gutenberg (1398-1468) was a man of many talents. A blacksmith, goldsmith, printer, engraver, publisher, and inventor in his native Germany, he made his greatest contribution to the world with the mechanical, movable-type printing press. Introduced to Renaissance Europe in the mid-1400s, his printing press ushered in the era of mass communication and permanently altered the structure of society.

Full-Court Press

Johannes Gutenberg began working on his printing press in approximately 1436, in partnership with Andreas Dritzehn and Andreas Heilmann, a gem cutter and a paper mill owner, respectively.

Gutenberg’s experience as a goldsmith served him well, as his knowledge of metals allowed him to create a new alloy of tin, lead, and antimony that proved crucial to producing durable, high-quality printed books. His special alloy proved far better suited to printing than all other materials available at the time.

He also developed a unique method of quickly and accurately molding new type blocks from a uniform template. Using this process, Gutenberg produced over 290 separate letter boxes (a letter box being a single letter or symbol), including myriad special characters, punctuation marks, and the like.

Gutenberg’s printing press itself consisted of a screw press that was mechanically modified to produce over 3,500 printed pages per day. The press could print with equal quality and speed on both paper and vellum. Printing was done with a special oil-based ink Gutenberg developed himself, which proved more durable than previous water-based inks. The vast majority of printed materials from Gutenberg’s press were in simple black, though a few examples of colored printing do exist.

One of the surviving Gutenberg Bibles, Huntington Library, San Marino, California

Changing the World

The movable-type printing press is considered the most important invention of the second millennium CE, as well as the defining moment of the Renaissance period. It sparked the so-called “Printing Revolution,” enabling the mass production of printed books.

By the 16th Century, printing presses based on Gutenberg’s invention could be found in over 200 cities spanning 12 European countries. More than 200 million books had been printed by that time. The 48 surviving copies of the Gutenberg Bible, the first and most famous book Gutenberg himself ever printed, are considered the most valuable books in the world.

Literacy, learning, and education throughout Europe rose dramatically, fueled by the now relatively free flow of information. This information included revolutionary ideas that reached the peasants and middle class, and threatened the power monopoly of the ruling class. The Printing Revolution gave rise to what would become the news media, and was the key component in the gradual democratization of knowledge.

Photo credit: armadillo444 / Foter / Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0)

Technology, World-Changing Inventions

Your Keyboard is Made of Dinosaur Bones

Plastic is everywhere, plastic is everything. So many of the things we use every day are made of plastic, it’s kind of ridiculous. Especially when you consider that plastic is, essentially, made from the fossilized bones of the mighty dinosaurs (like all petroleum products). Once we’ve extruded all that fantastic plastic from a stegosaurus patella, though, how the heck do we turn it into something useful?

Why, Via Injection Molding, Of Course!

Simply put, injection molding is the process of manufacturing objects by injecting molten plastic into a mold that is shaped like the desired end product. Glass and some metals can be used in injection molding, as well, but plastics are by far the most commonly used material in this process. (There are, of course, many other methods for turning “raw” plastic into a useful product, but we’re not writing about those here, so let’s just pretend they don’t exist.)

The first manmade plastic was invented in 1861 by British metallurgist and scientist Alexander Parkes. Dubbed “Parkesine,” this material could be heated and molded into shape, and would retain that shape when it cooled. Parkesine was far from perfect, however, as it was highly flammable, prone to cracking, and expensive to produce.

Improving on Parkes’ Parkesine, American inventor John Wesley Hyatt created a new plastic material, celluloid, in 1868. Just a few years later, Hyatt and his brother from the same mother Isaiah patented the first injection molding machine. Though relatively simple compared to the advanced technology used today (that was over 140 years ago, so…), it operated on the same general principle: the heated plastic (celluloid) was forced through a tube and into a mold. The first products manufactured with the Hyatt Bros’ machine were fairly simple ones: buttons, combs, collar stays, and the like.

WWII Drives Innovation

Injection molding technology remained more or less unchanged for decades; so, too, did the products manufactured with it. It was not until the outbreak of World War II, when every last scrap of metal materials was needed for war efforts, that plastic injection molding came into its own.

As was the case with WWII in general, it was up to the Americans to do the heavy lifting. Inventor James Watson Hendry built the first screw injection machine in 1946. This new device afforded much greater control over the speed with which plastic was injected into the mold, which greatly improved the quality of end products. Hendry’s screw machine also allowed materials to be mixed prior to injection. For the first time, recycled plastic could be added to “raw” material and thoroughly mixed together before being injected; this, too, helped to improve product quality. And, hey, recycling, right?!

JSYK, this is an atypical injection molding system.

Further Advances

Today, most injection molding is performed using Hendry’s screw injection technology. Hendry later went on to develop the first gas-assisted injection molding process—this enabled the production of complex and/or hollow shapes, allowing for greater design flexibility and reducing costs, product weight, material waste, and production times.

Advances in machining technology have made it possible to create highly complex molds from a range of materials—carbon steel, stainless steel, aluminum, and many others. Most of these molds can be used for years on end with little to no maintenance required, and can produce several million parts during their lifetime. This further reduces the cost of injection molding.

Numerous variations of injection molding have been developed, as well. Insert molding uses special equipment to mold plastic around other, preexisting components—these components can be machined from metal or molded from different plastics with higher melting points. Thin wall injection molding saves on material costs by mass-producing plastic parts that are very thin and light, such as clamshell produce containers (like the ones blueberries come in, for example).

Plastic injection molding has evolved over the decades. From its simple beginnings, churning out combs and other relatively basic products, the process can now be used to produce high precision, tight tolerance parts for airplanes, medical devices, construction components… and computer keyboards.

Photo credit: jarnott / Foter / CC BY-NC

Historical Science & Technology, Science

A Brief History of Cataract Surgery

Cataract surgery is a surgery in which a cataract is removed. Nailed it! More specifically, cataract surgery is the removal of the human eye’s natural lens, necessitated by the lens having developed an opacification (a.k.a. a cataract) that causes impairment or loss of vision. While we tend to think of “advanced” medical procedures such as this as relatively modern developments, cataract surgery has been performed for thousands of years.

Couching in Ancient India

Sushruta, a physician in ancient India (ca. 800 BCE), is the first doctor known to have performed cataract surgery. In this procedure, known as “couching,” the cataract, or kapha, was not actually removed.

First, the patient would be sedated, but not rendered unconscious. He/she would be held firmly and advised to stare at his/her nose. Then, a barley-tipped curved needle was used to push the kapha out of the eye’s field of vision. Breast milk was used to irrigate the eye during the procedure. Doctors were instructed to use their left hand to perform the procedure on affected right eyes, and the right hand to treat left eyes.

Even drawings of cataract surgery look super unpleasant.

When possible, the cataract matter was maneuvered into the sinus cavity, and the patient could expel it through his/her nose. Following the procedure, the eye would be soaked with warm, clarified butter and bandaged, using additional delicious butter as a salve. Patients were advised to avoid coughing, sneezing, spitting, belching, or shaking during and after the operation.

Couching was later introduced to China from India during the Sui dynasty (581-618 CE). It was first used in Europe circa 29 CE, as recorded by the historian Aulus Cornelius Celsus. Couching continued to be used in India and Asia throughout the Middle Ages. It is still used today in parts of Africa.

Suction Procedures

In the 2nd Century CE, Greek physician Antyllus developed a surgical method of removing cataracts that involved the use of suction. After creating an incision in the eye, a hollow bronze needle and lung power were used to extract the cataract. This method was an improvement over couching, as it always eliminated the cataract and, therefore, the possibility of it migrating back into the patient’s field of vision.

In his Book of Choices in the Treatment of Eye Diseases, the 10th Century CE Iraqi ophthalmologist Ammar ibn Ali Al-Mosuli presented numerous case histories of successful use of this procedure. In 14th Century Egypt, oculist Al-Shadhili developed a variant of the bronze needle that used a screw mechanism to draw suction.

“Modern” Cataract Surgery

The first modern European physician to perform cataract surgery was Jacques Daviel, in 1748.

Implantable intraocular lenses were introduced by English ophthalmologist Harold Ridley in the 1940s, an innovation that made patient recovery a more efficient and comfortable process.

Charles Kelman, an American surgeon, developed the technique of phacoemulsification in 1967. This process uses ultrasonic waves to facilitate the removal of cataracts without a large incision in the eye. Phacoemulsification significantly reduced patient recovery times and all but eliminated the pain and discomfort formerly associated with the procedure.

Photo credit: Internet Archive Book Images / Foter / Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0)

Historical Science & Technology, World-Changing Inventions

Cuneiform for Me & Youneiform

Cuneiform, sometimes called cuneiform script, is one of the world’s oldest forms of writing. It takes the form of wedge-shaped marks, and was written or carved into clay tablets using sharpened reeds. Cuneiform is over 6,000 years old, and was used for three millennia.

The Endless Sumerians

The earliest version of cuneiform was a system of pictographs developed by the Sumerians in the 4th millennium BCE. The script evolved from pictographic proto-writing used throughout Mesopotamia and dating back to the 34th Century BCE.

In the middle of the 3rd millennium BCE, cuneiform’s writing direction was changed from top-to-bottom in vertical columns to left-to-right in horizontal rows (the direction in which modern English is read). For permanent records, the clay tablets would be fired in kilns; otherwise, the clay could be smoothed out and the tablet reused.

A surviving cuneiform tablet.

Over the course of a tight thousand years or so, the pictographs became simplified and more abstract. Throughout the Bronze Age, the number of characters used decreased from roughly 1,500 to approximately 600. By this time, the pictographs had evolved into a combination of what are now known as logophonetic, consonantal alphabetic, and syllabic symbols. Many pictographs slowly lost their original function and meaning, and a given sign could provide myriad meanings, depending on the context.

Sumerian cuneiform was adapted into the written form of numerous languages, including Akkadian, Hittite, Hurrian, and Elamite. Other written languages were derived from cuneiform, including Old Persian and Ugaritic.

CuneiTRANSform

Cuneiform was gradually replaced by the Phoenician alphabet over the course of the Neo-Assyrian and Roman eras. By the end of the 2nd Century CE, cuneiform was essentially extinct. All knowledge of how to decipher the language was lost until the 1850s—it was a completely unknown writing system upon its rediscovery.

Modern archaeology has uncovered between 500,000 and 2 million cuneiform tablets. Because there are so few people in the world who are qualified to read and translate the script, only 30,000 to 100,000 of these have ever been successfully translated.

Photo credit: listentoreason / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Important People, Important Discoveries, Science

The Periodic Table of Dmitri Mendeleev

If you’re here, reading a blog about science and technology, I’m going to assume you already know what the periodic table of elements is, and therefore dispense with the introductory information. However, though you may know the table well, do you know where it came from? Read on, friend, read on…

From Russia with Science

Dmitri Ivanovich Mendeleev (1834-1907) was a Russian inventor and chemist. In the 1850s, he postulated that there was a logical order to the elements. As of 1863, there were 56 known elements, with new ones being discovered at a rate of roughly one per year. By that time, Mendeleev had already been working to collect and organize data on the elements for seven years.

Mendeleev discovered that, when the known chemical elements were arranged by atomic weight from lowest to highest, a recurring pattern developed. This pattern showed the similarity in properties between groups of elements. Building on this discovery, Mendeleev created his own version of the periodic table that included the 66 elements that were then known. In 1869, he published the first iteration of his periodic table in Principles of Chemistry, a two-volume textbook that would remain the definitive work on the subject for decades.

Mendeleev’s periodic table is essentially the same one we use today, organizing the elements in ascending order by atomic weight and grouping those with similar properties together.
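To make that recurring pattern concrete, here is a minimal sketch in Python (obviously not a tool Mendeleev had) that sorts a handful of the elements known in his day by atomic weight and prints them in rows of seven, so that chemically similar elements end up stacked in the same columns. The weights are rounded and the family labels simplified for illustration.

```python
# A toy illustration of Mendeleev's ordering principle: sort a few
# 19th-century-known elements by atomic weight and print them in rows
# of seven, so that similar chemical families line up in columns.
# Weights are rounded; family labels are simplified for illustration.

elements = [
    ("Li", 7, "alkali"), ("Be", 9, "alk. earth"), ("B", 11, "-"),
    ("C", 12, "-"), ("N", 14, "-"), ("O", 16, "-"), ("F", 19, "halogen"),
    ("Na", 23, "alkali"), ("Mg", 24, "alk. earth"), ("Al", 27, "-"),
    ("Si", 28, "-"), ("P", 31, "-"), ("S", 32, "-"), ("Cl", 35, "halogen"),
    ("K", 39, "alkali"), ("Ca", 40, "alk. earth"),
]

# Arrange from lowest to highest atomic weight, as Mendeleev did.
elements.sort(key=lambda e: e[1])

# Print rows of seven: similar families fall into the same columns,
# which is the periodicity the table captures.
for start in range(0, len(elements), 7):
    row = elements[start:start + 7]
    print("  ".join(f"{sym:>2} {weight:>3} ({family})" for sym, weight, family in row))
```

Run as-is, the first column collects the alkali metals (Li, Na, K) and the seventh collects the halogens (F, Cl), which is exactly the kind of repetition Mendeleev spotted.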

Dmitri Mendeleev

Changing & Predicting the Elements

The classification method Mendeleev formulated came to be known as “Mendeleev’s Law.” So sure of its validity and effectiveness was he that he used it to propose changes to the previously accepted atomic weight values of a number of elements. These changes were later found to be accurate.

In the updated, 1871 version of his periodic table, he predicted the placement on the table of eight then-unknown elements and described a number of their properties. His predictions proved to be highly accurate, as several elements that were later discovered almost perfectly matched his proposed elements. Though they were renamed (his “ekaboron” became scandium, for example), they fit into Mendeleev’s table in the exact locations he had suggested.

From Skepticism to Wide Acclaim

Despite its accuracy and the scientific logic behind it, Mendeleev’s periodic table of elements was not immediately embraced by chemists. It was not until the discovery of several of his predicted elements—most notably gallium (in 1875), scandium (1879), and germanium (1886)—that it gained wide acceptance.

The genius and accuracy of his predictions brought Mendeleev fame within the scientific community. His periodic table was soon accepted as the standard, surpassing those developed by other chemists of the day. Mendeleev’s discoveries became the bedrock of a large part of modern chemical theory.

By the time of his death, Mendeleev had received a number of awards and distinctions from scientific communities around the world, and was internationally recognized for his contributions to chemistry.

Photo credit: CalamityJon / Foter / Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)