Historical Science & Technology, Science & Society

Notable Local Alphabets of Archaic Greece

The Archaic Period of Ancient Greece lasted from the 8th century BCE until 480 BCE, ending during the Greco-Persian Wars. The Greek alphabet was still not 100 percent codified at this point, as the 22 original symbols (letters) adapted from the Phoenician alphabet were slowly giving way to the 24-letter Greek alphabet that exists today. As such, many areas of Greece developed their own variations of the alphabet, some of which were in use for centuries before the “official” Greek alphabet was put into use throughout the land. The most historically significant of these local alphabets are known as “Old Attic,” “Euboean,” and “Corinthian.” Read on to learn more!

The Old Attic Alphabet

Until the late 5th century BCE, the city of Athens used a variation of the so-called “light blue alphabet,” which included two unique letters and replaced multiple similar letters with single letters (multiple E variations were reduced to a single E, for example). Additionally, Athens’ Old Attic alphabet used a number of letter forms that varied from the “traditional” shapes and were, at least partially, borrowed from alphabets of neighboring regions.

By the end of the 5th century BCE, it was commonplace for writing to be done in both the standard and Old Attic alphabets, with words that used different letters in the different alphabets written side-by-side. As part of the reforms that came about after the Thirty Tyrants, a formal decree was, um, decreed in 403 BCE, decreeing that all public writing must be done using the newly agreed-upon full alphabet. Fittingly, given its name, the Old Attic alphabet was hastily packed in a cardboard box, stashed up in the rafters, and promptly forgotten about for like twelve years.

Pretty sure this is from that shield Indy finds in “Last Crusade.”

The Euboean Alphabet

Not a typo of “European,” the Euboean alphabet was used in Eretria, Chalkis, and related colonies throughout southern Italy. This variation brought the Greek alphabet to Italy, where it, in turn, begat Etruscan and other Old Italic alphabets, which ultimately led to the Latin alphabet we use today (more or less). A number of the features distinct to the Latin alphabet can be found in their nascent forms in the Euboean alphabet.

Like Old Attic, the Euboean alphabet dropped certain letters, combined some, and added others, while also using modified letter shapes. It even included letters that were not used in writing at all, but were still part of the alphabet for some reason; some of these letters made epic comebacks and found themselves in full use in later versions of the alphabet.

The well-known classicist (as “well known” as a classicist can be, anyway), and current Halls-Bascom Professor of Classics Emeritus at the University of Wisconsin-Madison, Barry “Big Barry” Powell has suggested that the Euboea region was likely where the Greek alphabet was first used in written form, in roughly 775 BCE, and that the written language may well have been developed solely for the purpose of writing down epic poetry. Someone get those beatniks a guitar and teach ‘em how to write an actual song like a normal person!

The Corinthian Alphabet

Used extensively across the southern and eastern Peloponnese, the Corinthian alphabet also modified or reduced the usage of certain letters, while at the same time integrating letters from different alphabets that existed elsewhere. It maintained the use of two letters that were deemed obsolete in other alphabets, and combined its parent alphabet’s multiple Es into a single letter that was, for reasons unknown, shaped like a B; in place of the B, the Corinthian alphabet used a modified J.

The Corinthians were not real good with letters and such.

Photo credit: Kirk Siang via Foter.com / CC BY-NC-ND

Science & Society

In This Corner, Weighing 0.9 Ounces: Knockout Mouse!

While it’s certainly not ideal that scientists experiment on animals, it’s proven to be better than the alternative (i.e., experimenting on humans). The noble and adorable mouse is one of the most commonly used laboratory animals, as they are relatively closely related to humans in terms of genetic similarity—humans and mice share a large number of genes. To test the effects of specific genes within the overall gene sequence, scientists frequently work with special genetically modified mice called “knockout mice.”

Gene Gene the Knockout Machine

In a knockout mouse, science has been used to inactivate or “knock out” an existing gene by replacing it or by disrupting it with a piece of artificial DNA. By modifying the creature’s gene structure, researchers can study the role of genes that have been sequenced, but whose functions are as yet undetermined. Observed differences in the knockout mouse’s behavior or physiology can be used to infer the probable function of the inactivated gene.

The first knockout mouse was created in 1989 by the power trio of Italian-American molecular geneticist Mario “Big Mario” Capecchi, British biologist Sir Martin “Big Marty” Evans, and British-American geneticist Oliver “Big Oli” Smithies. The technical details of how a knockout mouse is created are a bit much to get into here; suffice it to say that, for their efforts, Capecchi, Evans, and Smithies were awarded the Nobel Prize in Physiology or Medicine in 2007. Why it took 18 years for their achievement to be recognized is an Agatha Christie-caliber mystery.

Mighty Mouse

Since their “invention,” knockout mice have been used to model and study numerous diseases and maladies, including anxiety, arthritis, cancer, diabetes, heart disease, obesity, and Parkinson’s disease. They are also used to provide a biological and scientific context for the development and testing of drugs and other therapy techniques.

Millions of knockout mice are used in scientific and medical experiments every year, and thousands of different strains of knockout mice have been developed. Many different variations of the technology used to create them, as well as the modified mice themselves, have been patented by private companies.

Though gene knockout is most easily achieved in mice, other research animals can be “knocked out,” as well. Knockout rats have been used in research since 2003, but they are much more difficult to create than knockout mice.

Photo credit: CameliaTWU / CC BY-NC-SA

Historical Science & Technology, Science

The Edwin Smith Papyrus

Named for the antiquities dealer who purchased it in 1862, the Edwin Smith Papyrus is the oldest known medical writing to deal with trauma surgery. Consisting of descriptions of practical treatment for 48 different injuries, fractures, wounds, and tumors, it is believed to be an early military surgery manual. It suggests some pretty impressive skills and knowledge for doctors who operated 3,600 years ago.

Medicine Not Magic

Dated to circa 1600 BCE, during the 16th and/or 17th Dynasties of Ancient Egypt’s Second Intermediate Period, the Edwin Smith Papyrus (ESP) is one of four significant medical-related papyri from this period. Unlike the others, which prescribe magic or spells for certain maladies, ESP describes a legitimate scientific and medical approach to treating injuries and ailments.

Over 15 feet long, ESP includes extensive inscription on both sides. The A side contains 377 lines in 17 columns, while the B side holds 92 lines in 5 columns. It is almost fully intact, with only minor damage and wear despite its age; however, it was cut into multiple single-column pages by some 20th century jack@$$. ESP is written right-to-left in hieratic, which is essentially cursive hieroglyphics. (Who knew that was a thing?)

A portion of the Edwin Smith Papyrus, displaying some splendid penmanship.

Most of the medical information provided by the ESP relates to trauma and surgery. The front details 48 case histories of injuries/illnesses and their treatments, organized by organ or body part, starting with the head and moving down the body. Each case also offers additional info on the patients, explanations of what caused the trauma (in most cases), diagnosis, and prognosis. Titles are highly descriptive—“Practices for a gaping wound in his head, which has penetrated to the bone and split the skull”—because catchy titles had not yet been invented. The back of ESP contains eight magic spells, five prescriptions, and a few sections devoted to gynecology and cosmetics.

A number of treatments are described in detail, including closing wounds (of the lip, throat, and shoulder) with sutures, bandaging, splinting broken bones, poultices, infection prevention and cures (mostly honey-related), and how to stop bleeding with raw meat, which seems a bit suspect by modern standards. ESP contains the world’s first known descriptions of several internal structures of the skull and brain, as well as the first ever written use of the word “brain”. (Like ever, in the history of ever. Ever.)

Big Ed with the Assist

Born in Connecticut in 1822, Edwin “Big Ed” Smith was an American Egyptologist. He purchased the papyrus that now bears his name in Luxor, Egypt, at age 40, and it was in his possession until his death in 1906. His daughter subsequently donated the papyrus to the New York Historical Society, which displayed it at the Brooklyn Museum from 1938 to 1948. At that time, it was gifted by the Society and the Museum to the New York Academy of Medicine, where it is still on display today. (It was briefly on exhibit at New York’s Metropolitan Museum of Art, from 2005 to 2006.)

Smith had a working knowledge of hieroglyphs, but did not know hieratic well enough to translate the scroll himself. In 1930, it was successfully translated by American archaeologist James Henry “Big Jim” Breasted and Dr. Arno Luckhardt. This translation demonstrated for the first time that the Ancient Egyptians used rational, scientific medical treatment methods, not just magic potions and spells as other medical resources of the time suggested.

Photo credit: Internet Archive Book Images / No known copyright restrictions

Technology, World-Changing Inventions

600 Words on the History of Machine Tools

A “machine tool” is, as the name suggests, both a machine and a tool, one that shapes, cuts, bores, grinds, shears, or otherwise deforms metal or another rigid material. Though there are a wide variety of machine tools—from drill presses to lathes to electrical discharge machining systems—all utilize some method of constraining the material being worked and provide guided movement of the parts of the machine. (A circular saw, for example, is not a machine tool, as it allows for unguided, or “freehand”, movement.)

Most modern machine tools are electrically, hydraulically, or otherwise externally powered; very few rely on good ol’ elbow grease. That fact may make it seem as though machine tools are a relatively new invention; however, they have been around for millennia.

Early Forerunners

The first kinda-sorta machine tools were the bow drill and the potter’s wheel, which were used in ancient Egypt at least as far back as 2500 BCE. Rudimentary lathes were known throughout Europe as early as 1000 BCE.

However, it was not until the Late Middle Ages/the early Renaissance that true machine tools exhibiting the features noted above began to appear. A chap by the name of Leonardo “Big Leo” da Vinci helped pioneer machine tool technology, with further advancements championed by clockmakers of the time.

Driven By Industry

In its early days, machine tool development was spurred by a number of nascent industries which more or less needed the devices to grow. The first was firearms, because war never goes out of fashion, followed by the textile market and transportation—first steam engines, then bicycles, then automobiles, then aircraft.

Textile manufacturing was perhaps the biggest driver of machine tool innovation. Prior to the Industrial Revolution in England, most textile machinery was constructed from wood (even gears and shafts). However, these early machines couldn’t withstand the rigors of increased mechanization, and parts were replaced by cast or wrought iron. For large parts, cast iron was generally cast in molds (hence the name), but was all but impossible to work on a smaller scale. Wrought iron could be blacksmithed into shape when red hot from the forge, but after cooling was very difficult to hand-machine into the more complex shapes required.

The Watt steam engine, brainchild of James “Big Game James” Watt and the godfather of all modern engines, would never have come about without machine tools. Watt was unable to manually machine a correctly-bored cylinder for his engine until John “Big Bad John” Wilkinson invented a boring machine in 1774.

Selling Out

Portion of an advert for an early lathe machine.

Throughout the 18th, 19th, and early 20th centuries CE, machine tools were generally built by the same people who would use them. Eventually, people realized that there was a significant market for machine tools, and machine tool builders began to offer their creations for sale to the general public. The first commercially available machine tools were built by English steam engine manufacturer Matthew “Fat Matt” Murray, starting in 1800. Others soon followed suit, including Scottish engineer James “Big Jim” Nasmyth, English inventor Joseph “Big Joe” Whitworth, and Henry “Big Hank” Maudslay, whose skill and innovation would eventually lead him to be dubbed “the father of machine tool technology.”

Among the earliest commercially available machine tools were the metal planer, the milling machine, the pattern tracing lathe, the screw cutting lathe, the slide rest lathe, the shaper, and the turret lathe. These devices and those that followed allowed for the realization of a long-sought-after goal in manufacturing: the production of identical, interchangeable parts such as nuts and bolts. This, in turn, paved the way for mass production, assembly lines, and modern manufacturing as we know it.

Photo credit: Internet Archive Book Images via Foter.com / No known copyright restrictions

Historical Science & Technology, World-Changing Inventions

Bloomery: Iron, Not Underpants

Iron is, for lack of a better word, good. If you haven’t spotted any iron around you today, it’s only because we’re so used to it that it has become essentially invisible. But before iron became ubiquitous in architecture, transportation, and elsewhere, the people of Earth had to make do without and hold their buildings and bridges up with rocks or trees or whatever. And so, tired of an ironless lifestyle, ancient peoples created the bloomery, with which they could smelt iron to their hearts’ content.

A Bloom of One’s Own

Consisting of a pit or chimney (generally made of earth, clay, stone, or other heat-resistant material) with one or more pipes entering through the side walls near the base, a bloomery was the earliest manmade method of smelting iron. Preheated charcoal is used to “fire” iron ore inside the bloomery, and the pipes allow air to enter the furnace via natural draft or with assistance from bellows. The product of a bloomery is porous iron and slag, known as “bloom.” The so-called “sponge iron” that results from the process can be further forged to create wrought iron.
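For the chemistry-curious, the process boils down to carbon monoxide from the burning charcoal stripping the oxygen out of the iron ore, all at temperatures below iron’s melting point—which is why the result is a spongy solid mixed with slag rather than a pool of molten metal. A simplified sketch of the key reactions (assuming hematite ore, the common case):

2 C + O2 → 2 CO (charcoal burns incompletely in the air blast, yielding carbon monoxide)

Fe2O3 + 3 CO → 2 Fe + 3 CO2 (carbon monoxide reduces the ore to solid iron)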

Not coincidentally, the development and widespread use of the bloomery ushered in the Iron Age. Earlier samples of processed iron do exist, but these artifacts have been identified as meteoric iron, which required no smelting, or happy accidents produced in bronze smelting processes.

The surviving remnants of an early American bloomery.

A History of Bloomery

The earliest archaeological evidence of the use of bloomeries comes from East Africa, where bloomery-smelted iron tools have been dated to 1000 to 500 BCE. In sub-Saharan Africa, forged iron tools dating back to 500 BCE have been found amongst relics from the highly advanced and mysterious Nok culture.

In Europe, the first bloomeries were small by necessity, capable of smelting only about 1 kg of iron at a time. By the 14th century CE, large bloomeries with capacities up to 300 kg had been developed. Some even used waterwheels to power their bellows.

At a larger scale, bloomeries expose iron ore to burning charcoal for longer. Combined with the more powerful air blast required to adequately heat the charcoal in these larger chambers, this often led to the accidental production of pig iron. This pig iron was naught but a waste product for roughly a century, until the arrival of the blast furnace, which enabled smelters to refine pig iron and turn it into cast iron, wrought iron, or steel.

Eventually, the bloomery would be replaced for nearly all smelting processes by the blast furnace. Developed in China in the 5th century BCE, the blast furnace did not make its way to the West until the 15th century CE. It was long thought that the ancient Chinese did not use bloomeries, and instead went straight to blast furnacin’. However, recent evidence suggests that bloomeries were in use in China as early as 800 BCE, having migrated eastward from Europe.

Photo credit: mixedeyes / CC BY-NC-SA