Friday, January 31, 2014

Soon: Universe's Coldest Refrigerator

The Coldest Spot in the Known Universe
By Dr. Tony Phillips, NASA

Jan. 30, 2014:
Everyone knows that space is cold. In the vast gulf between stars and galaxies, the temperature of gaseous matter routinely drops to 3 kelvin, or about 454 degrees below zero Fahrenheit.
It’s about to get even colder.

NASA researchers are planning to create the coldest spot in the known universe inside the International Space Station.

"We’re going to study matter at temperatures far colder than are found naturally," says Rob Thompson of JPL. He’s the Project Scientist for NASA’s Cold Atom Lab, an atomic ‘refrigerator’ slated for launch to the ISS in 2016. "We aim to push effective temperatures down to 100 pico-Kelvin."

100 picokelvin is just one ten-billionth of a degree above absolute zero, where all thermal activity of atoms theoretically stops. At such low temperatures, ordinary concepts of solid, liquid and gas are no longer relevant. Atoms interacting just above the threshold of zero energy create new forms of matter that are essentially ... quantum.
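As a quick arithmetic check of that figure, here is the conversion in Python (a sketch; the 100 pK value is the target quoted above):

```python
# 100 picokelvin expressed in kelvin: 1 pK = 1e-12 K.
T_pK = 100.0            # Cold Atom Lab's target effective temperature, in pK
T_K = T_pK * 1e-12      # convert to kelvin

# 1e-10 K: one ten-billionth of a degree above absolute zero
print(f"{T_K:.0e} K above absolute zero")
```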

Quantum mechanics is a branch of physics that describes the bizarre rules of light and matter on atomic scales. In that realm, matter can be in two places at once; objects behave as both particles and waves; and nothing is certain: the quantum world runs on probability.

It is into this strange realm that researchers using the Cold Atom Lab will plunge.

"We’ll begin," says Thompson, "by studying Bose-Einstein Condensates."

In 1995, researchers discovered that if you took a few million rubidium atoms and cooled them near absolute zero, they would merge into a single wave of matter. The trick worked with sodium, too. In 2001, Eric Cornell of the National Institute of Standards & Technology and Carl Wieman of the University of Colorado shared the Nobel Prize with Wolfgang Ketterle of MIT for the discovery of these condensates, which Albert Einstein and Satyendra Nath Bose had predicted in the early 20th century.

If you create two BECs and put them together, they don't mix like an ordinary gas. Instead, they can "interfere" like waves: thin, parallel layers of matter are separated by thin layers of empty space. An atom in one BEC can add itself to an atom in another BEC and produce – no atom at all.

"The Cold Atom Lab will allow us to study these objects at perhaps the lowest temperatures ever," says Thompson.

The lab is also a place where researchers can mix super-cool atomic gases and see what happens.

"Mixtures of different types of atoms can float together almost completely free of perturbations," explains Thompson, "allowing us to make sensitive measurements of very weak interactions. This could lead to the discovery of interesting and novel quantum phenomena."

The space station is the best place to do this research. Microgravity allows researchers to cool materials to temperatures much colder than are possible on the ground.

Thompson explains why:

"It’s a basic principle of thermodynamics that when a gas expands, it cools. Most of us have hands-on experience with this. If you spray an aerosol can, the can gets cold."

Quantum gases are cooled in much the same way. In place of an aerosol can, however, we have a ‘magnetic trap.’

"On the ISS, these traps can be made very weak because they do not have to support the atoms against the pull of gravity. Weak traps allow gases to expand and cool to lower temperatures than are possible on the ground."
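Thompson's expansion-cooling analogy can be illustrated with the classical relation for a reversible adiabatic expansion of an ideal gas, T * V**(gamma - 1) = constant. This is only a sketch of the principle he describes; the Cold Atom Lab actually cools a quantum gas in a weakening magnetic trap, not a classical gas in a container.

```python
# Classical illustration of cooling by expansion (NOT the actual Cold Atom
# Lab physics): for a reversible adiabatic expansion of an ideal gas,
# T * V**(gamma - 1) stays constant.
def adiabatic_final_temp(T_initial_K, V_initial, V_final, gamma=5/3):
    """Final temperature after an adiabatic expansion (monatomic ideal gas)."""
    return T_initial_K * (V_initial / V_final) ** (gamma - 1)

# Letting a room-temperature gas expand to 8x its volume cools it to ~75 K
print(round(adiabatic_final_temp(300.0, 1.0, 8.0), 2))
```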

No one knows where this fundamental research will lead. Even the "practical" applications listed by Thompson—quantum sensors, matter wave interferometers, and atomic lasers, just to name a few—sound like science fiction. "We’re entering the unknown," he says.

Researchers like Thompson think of the Cold Atom Lab as a doorway into the quantum world. Could the door swing both ways? If the temperature drops low enough, "we’ll be able to assemble atomic wave packets as wide as a human hair--that is, big enough for the human eye to see." A creature of quantum physics will have entered the macroscopic world.

And then the real excitement begins.

For more information about the Cold Atom Lab, visit

Thursday, January 30, 2014

Why We Have Four Limbs

How did we get four limbs? Because we have a belly: A new model for the evolution of paired appendages
All of us backboned animals – at least the ones who also have jaws – have four fins or limbs, one pair in front and one pair behind. These have been modified dramatically in the course of evolution, into a marvelous variety of fins, legs, arms, flippers, and wings. But how did our earliest ancestors settle into such a consistent arrangement of two pairs of appendages? – Because we have a belly. Researchers in the Theoretical Biology Department at the University of Vienna and the Konrad Lorenz Institute for Evolution and Cognition Research have presented a new model for approaching this question in the current issue of the journal "Evolution & Development".

As with any long-standing question in evolutionary biology, numerous ideas have been proposed to explain different aspects of the origin of paired appendages in the vertebrates known as gnathostomes, the group that includes all living and extinct animals having both a backbone and jaws. "This group does not include the living lampreys and hagfishes, which have neither jaws nor paired fins, although they do have median fins along the midline of the back and tail", says Brian Metscher from the Department of Theoretical Biology. Any explanation must account not only for the fossil evidence, but also for the subtleties of the early development of fins and limbs.

Pairs of appendages situated at front and back ends of the body cavity
"We have drawn together a large body of molecular embryology work, as well as results from paleontology and classical morphology to work out an overall explanation of how the vertebrate embryo forms pairs of appendages along each side, and only two pairs situated at front and back ends of the body cavity," said Laura Nuño de la Rosa, lead author of the study and post-doctoral fellow at the Konrad Lorenz Institute in Altenberg, Austria.

Embryo segregates into three main layers of tissue
The proposed model incorporates results from much previous research, including information on gene expression and on interactions among the different tissues that make up an early vertebrate embryo. In its earliest stages of development, an embryo segregates into three main layers of tissue: an outer one (ectoderm) that will form the skin and nervous system, an inner layer (endoderm) that becomes the digestive tract, and an in-between layer (mesoderm) that eventually forms muscles, bones, and other organs. The early mesoderm splits into two layers that line the inside of the body cavity and the outside of the gut.

The new hypothesis proposes that fins or limbs begin to form only at the places where those two layers are sufficiently separated and interact favorably with the ectodermal tissues – namely at the two ends of the forming gut. In between, no fin/limb initiation takes place, because the two mesoderm layers maintain a narrower separation and, the authors propose, interact with the developing gut.

Behind the back end of the digestive tract (the anal opening), along the bottom of the tail, the two mesoderm layers come together as the body wall closes up, forming a single (median) fin. Along the length of the developing gut, the body wall cannot close completely, so the conditions for initiating fins or limbs occur to the left and right of the midline, allowing the development of paired instead of median fins. "You could say that the reason we have four limbs is because we have a belly," adds Laura Nuño de la Rosa.  

"The most important function of a model like this is to provide a coherent framework for formulating specific hypotheses, which can be tested with molecular and other laboratory methods," says Brian Metscher, Senior Scientist in the University of Vienna's Department of Theoretical Biology. This work is also a contribution to the ongoing discussion of the roles of changes to embryonic development in the evolution of new structures, or evolutionary novelties. Further, the focus of the hypothesis on global embryonic patterning and tissue interactions emphasizes the importance of accounting for factors other than genes (epigenetics) to understand development and evolution.

Publication in "Evolution & Development":

Laura Nuño de la Rosa, Gerd B. Müller, Brian D. Metscher: The Lateral Mesodermal Divide: An Epigenetic Model of the Origin of Paired Fins. Evolution & Development, January 2014, 38-48. DOI: 10.1111/ede.12061

Wednesday, January 29, 2014

Singer/Activist Pete Seeger Dies

Peter "Pete" Seeger (May 3, 1919 – January 27, 2014) was an American folk singer. A fixture on nationwide radio in the 1940s, he also had a string of hit records during the early 1950s as a member of the Weavers, most notably their recording of Lead Belly's "Goodnight Irene", which topped the charts for 13 weeks in 1950. Members of the Weavers were blacklisted during the McCarthy Era. In the 1960s, he re-emerged on the public scene as a prominent singer of protest music in support of international disarmament, civil rights, counterculture and environmental causes.

A prolific songwriter, his best-known songs include "Where Have All the Flowers Gone?" (with Joe Hickerson), "If I Had a Hammer (The Hammer Song)" (with Lee Hays of the Weavers), and "Turn! Turn! Turn!" (lyrics adapted from Ecclesiastes), which have been recorded by many artists both in and outside the folk revival movement and are sung throughout the world. "Flowers" was a hit recording for the Kingston Trio (1962); Marlene Dietrich, who recorded it in English, German and French (1962); and Johnny Rivers (1965). "If I Had a Hammer" was a hit for Peter, Paul & Mary (1962) and Trini Lopez (1963), while the Byrds had a number one hit with "Turn! Turn! Turn!" in 1965.

Seeger was one of the folksingers most responsible for popularizing the spiritual "We Shall Overcome" (also recorded by Joan Baez and many other singer-activists) that became the acknowledged anthem of the 1960s American Civil Rights Movement, soon after folk singer and activist Guy Carawan introduced it at the founding meeting of the Student Nonviolent Coordinating Committee (SNCC) in 1960. In the PBS American Masters episode "Pete Seeger: The Power of Song", Seeger stated it was he who changed the lyric from the traditional "We will overcome" to the more singable "We shall overcome".

Seeger was very active in:
  • The Young Communist League from 1936 to 1942
  • The Communist Party of the USA from 1942 to 1949
  • Vietnam War protests
  • The environmental movement

Pete Seeger and Bob Dylan

Pete Seeger was one of the earliest backers of Bob Dylan and was responsible for urging A&R man John Hammond to produce Dylan's first LP on Columbia, and for inviting him to perform at the Newport Folk Festival, of which Seeger was a board member. There was a widely repeated story that Seeger was so upset over the extremely loud amplified sound that Dylan, backed by members of the Butterfield Blues Band, brought into the 1965 Newport Folk Festival that he threatened to disconnect the equipment. There are multiple versions of what went on, some fanciful. What is certain is that tensions had been running high between Dylan's manager, Albert Grossman, and Festival Board members (who besides Seeger also included Theodore Bikel, Bruce Jackson, Alan Lomax, festival MC Peter Yarrow and George Wein) over the scheduling of performers and other matters. Two days earlier there had been a scuffle and brief exchange of blows between Grossman and Alan Lomax; and the Board, in an emergency session, had voted to ban Grossman from the grounds, but had backed off when George Wein pointed out that Grossman also managed highly popular draws Odetta and Peter, Paul and Mary. Seeger has been portrayed as a folk "purist" who was one of the main opponents to Dylan's "going electric," but when asked in 2001 about how he recalled his "objections" to the electric style, he said:
I couldn't understand the words. I wanted to hear the words. It was a great song, "Maggie’s Farm," and the sound was distorted. I ran over to the guy at the controls and shouted, "Fix the sound so you can hear the words." He hollered back, "This is the way they want it." I said "Damn it, if I had an axe, I'd cut the cable right now." But I was at fault. I was the MC, and I could have said to the part of the crowd that booed Bob, "you didn't boo Howlin’ Wolf yesterday. He was electric!" Though I still prefer to hear Dylan acoustic, some of his electric songs are absolutely great. Electric music is the vernacular of the second half of the twentieth century, to use my father's old term.

Seeger died on January 27, 2014, at age 94. According to his grandson, Kitama Cahill-Jackson, Seeger died peacefully in his sleep around 9:30 p.m. at New York's Presbyterian Hospital, where he had been for six days. Family members were with him at the time of his death. Cahill-Jackson said Seeger was still as active as ever, out chopping wood ten days prior to his death.

Response and reaction to Seeger's death quickly poured in. Bruce Springsteen said of Seeger's passing, "I lost a great friend and a great hero last night, Pete Seeger," before performing "We Shall Overcome" while on tour in South Africa. President Barack Obama called Seeger "America's tuning fork" who believed in "the power of song" to bring social change. "Over the years, Pete used his voice and his hammer to strike blows for workers' rights and civil rights; world peace and environmental conservation, and he always invited us to sing along. For reminding us where we come from and showing us where we need to go, we will always be grateful to Pete Seeger. Michelle and I send our thoughts and prayers to Pete's family and all those who loved him."

Tuesday, January 28, 2014

Bridgman and Operationalism

Percy Williams Bridgman (21 April 1882 – 20 August 1961) was an American physicist who won the 1946 Nobel Prize in Physics for his work on the physics of high pressures. He also wrote extensively on the scientific method and on other aspects of the philosophy of science.

Bridgman entered Harvard University in 1900, and studied physics through to his PhD. From 1910 until his retirement, he taught at Harvard, becoming a full professor in 1919. In 1905, he began investigating the properties of matter under high pressure. A machinery malfunction led him to modify his pressure apparatus; the result was a new device enabling him to create pressures eventually exceeding 100,000 kgf/cm² (10 GPa; 100,000 atmospheres). This was a huge improvement over previous machinery, which could achieve pressures of only 3,000 kgf/cm² (0.3 GPa). This new apparatus led to an abundance of new findings, including a study of the compressibility, electric and thermal conductivity, tensile strength and viscosity of more than 100 different compounds. Bridgman is also known for his studies of electrical conduction in metals and properties of crystals. He developed the Bridgman seal and is the eponym for Bridgman’s thermodynamic equations.

Bridgman made many improvements to his high pressure apparatus over the years, and unsuccessfully attempted the synthesis of diamond many times.

His philosophy of science book The Logic of Modern Physics (1927) advocated operationalism and coined the term operational definition. He was also one of the 11 signatories to the Russell-Einstein Manifesto.


Bridgman committed suicide by gunshot after living with metastatic cancer for some time. His suicide note read in part, "It isn't decent for society to make a man do this thing himself. Probably this is the last day I will be able to do it myself." Bridgman's words have been quoted by many on both sides of the assisted suicide debate.
Honors and Awards
Bridgman received honorary doctorates from Stevens Institute (1934), Harvard (1939), Brooklyn Polytechnic (1941), Princeton (1950), Paris (1950), and Yale (1951). He received the Bingham Medal (1951) from the Society of Rheology, the Rumford Medal from the American Academy of Arts and Sciences, the Elliott Cresson Medal from the Franklin Institute, the Gold Medal of the Bakhuys Roozeboom Fund (named for Hendrik Willem Bakhuis Roozeboom) (1933) from the Royal Netherlands Academy of Arts and Sciences, and the Comstock Prize (1933) of the National Academy of Sciences. He was a member of the American Physical Society and served as its President in 1942. He was also a member of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, the American Philosophical Society, and the National Academy of Sciences. He was a Foreign Member of the Royal Society and an Honorary Fellow of the Physical Society of London.
The Percy W. Bridgman House, in Massachusetts, is a U.S. National Historic Landmark designated in 1975.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

In research design, especially in psychology, social sciences, life sciences, and physics, operationalization is the process of defining the measurement of a phenomenon that is not directly measurable, though its existence is indicated by other phenomena. It is the process of defining a fuzzy concept so as to make it clearly distinguishable or measurable, and to understand it in terms of empirical observations. In a wider sense, it refers to specifying the extension of a concept: describing what is and is not a part of that concept. For example, in medicine, the phenomenon of health might be operationalized by one or more indicators like body mass index or tobacco smoking. Thus, some phenomena are difficult to observe directly (i.e. they are latent), but their existence can be inferred from their observable effects.

The concept of operationalization was first presented by the British physicist N. R. Campbell in his Physics: The Elements (Cambridge, 1920). It then spread to the humanities and social sciences, and it remains in use in physics.

Operationalization refers specifically to the scientific practice of operationally defining, in which even the most basic concepts are defined through the operations by which we measure them. The practice comes from the philosophy of science book The Logic of Modern Physics (1927), by Percy Williams Bridgman, whose methodological position is called operationalism.

Bridgman's theory was criticized on the grounds that, since we measure "length" in various ways (for example, a measuring rod is useless for finding the distance to the Moon), "length" logically is not one concept but many, each defined by the measuring operations used. Similarly, the numerical value of a sphere's radius depends on how it is measured (say, in metres or in millimetres). Since Bridgman held that a concept is defined by its measurement, the criticism is that we would end up with endless concepts, each defined by the operations that measured it.

Bridgman notes that in the theory of relativity we see how a concept like "duration" can split into multiple different concepts. As part of the process of refining a physical theory, it may be found that what was one concept is, in fact, two or more distinct concepts. However, Bridgman proposes that if we only stick to operationally defined concepts, this will never happen.

Operationalization and Relativity
A practical 'operational definition' is generally understood as a counterpart to a theoretical definition, which describes the concept in terms of theory rather than in terms of measurement procedures.

The importance of careful operationalization can perhaps be more clearly seen in the development of General Relativity. Einstein discovered that there were two operational definitions of "mass" being used by scientists: inertial, defined by applying a force and observing the acceleration, from Newton's Second Law of Motion; and gravitational, defined by putting the object on a scale or balance. Previously, no one had paid any attention to the different operations used because they always produced the same results. The key insight of Einstein was to posit the Principle of Equivalence: the two operations always produce the same result because they are equivalent at a deep level. Working out the implications of that assumption yielded the General Theory of Relativity. Thus, a breakthrough in science was achieved by setting aside the different operational definitions of scientific measurements and realizing that they both described a single theoretical concept. Einstein's departure from the operationalist approach was criticized by Bridgman as follows: "Einstein did not carry over into his general relativity theory the lessons and insights he himself has taught us in his special theory."
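The two operational definitions of mass described above can be written out explicitly. This is a sketch; the force and weight values are illustrative, not historical measurements.

```python
# Two operational definitions of "mass", as in the discussion above.
g = 9.81  # m/s^2, local gravitational acceleration (illustrative value)

def inertial_mass(force_N, acceleration_ms2):
    """Operational definition 1: apply a force, observe acceleration (m = F/a)."""
    return force_N / acceleration_ms2

def gravitational_mass(weight_N):
    """Operational definition 2: weigh the object on a scale (m = W/g)."""
    return weight_N / g

# The Principle of Equivalence asserts the two operations always agree:
m_i = inertial_mass(19.62, 1.962)  # a 19.62 N push produces 1.962 m/s^2
m_g = gravitational_mass(98.1)     # the same object weighs 98.1 N
print(round(m_i, 6), round(m_g, 6))  # 10.0 10.0
```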

Monday, January 27, 2014

Strength of Materials

Mechanics of materials, also called strength of materials, is a subject which deals with the behavior of solid objects subject to stresses and strains. The complete theory began with the consideration of the behavior of one- and two-dimensional members of structures, whose states of stress can be approximated as two-dimensional, and was then generalized to three dimensions to develop a more complete theory of the elastic and plastic behavior of materials. An important founding pioneer in mechanics of materials was Stephen Timoshenko.

The study of strength of materials often refers to various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes take into account the properties of the material, such as its yield strength, ultimate strength, Young's modulus, and Poisson's ratio. In addition, the mechanical element's macroscopic (geometric) properties are considered: its length, width, thickness, boundary constraints, and abrupt changes in geometry such as holes.

In materials science, the strength of a material is its ability to withstand an applied stress without failure. The field of strength of materials deals with the forces and deformations that result from loads acting on a material. A load applied to a mechanical member induces internal forces within the member called stresses, when those forces are expressed on a unit basis. The stresses acting on the material cause deformation of the material; deformation expressed on a unit basis is called strain. The applied loads may be axial (tensile or compressive) or shear.

The stresses and strains that develop within a mechanical member must be calculated in order to assess its load capacity. This requires a complete description of the geometry of the member, its constraints, the loads applied to it, and the properties of the material of which it is composed. With a complete description of the loading and the geometry, the state of stress and the state of strain at any point within the member can be calculated, and from these the member's strength (load-carrying capacity), deformations (stiffness qualities), and stability (ability to maintain its original configuration) follow.

The calculated stresses may then be compared to some measure of the strength of the member, such as its material yield or ultimate strength. The calculated deflection of the member may be compared to a deflection criterion based on the member's use. The calculated buckling load may be compared to the applied load. The calculated stiffness and mass distribution may be used to compute the member's dynamic response, which is then compared to the acoustic environment in which it will be used.
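A minimal sketch of that bookkeeping for the simplest case, a prismatic bar under axial load. The material values are illustrative, roughly those of a mild steel.

```python
# Axial member analysis: stress, strain, elongation, and margin against yield.
def axial_analysis(force_N, area_m2, length_m, E_Pa, yield_Pa):
    stress = force_N / area_m2         # internal force on a unit-area basis
    strain = stress / E_Pa             # Hooke's law, valid in the elastic range
    elongation_m = strain * length_m   # total change in length
    safety_factor = yield_Pa / stress  # how far below yield the member operates
    return stress, strain, elongation_m, safety_factor

# 10 kN tension on a 100 mm^2 bar, 1 m long; E ~ 200 GPa, yield ~ 250 MPa
stress, strain, dL, sf = axial_analysis(10e3, 100e-6, 1.0, 200e9, 250e6)
print(f"stress = {stress/1e6:.0f} MPa, strain = {strain:.4%}, "
      f"elongation = {dL*1e3:.2f} mm, safety factor = {sf:.1f}")
```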

Material strength refers to the point on the engineering stress-strain curve (the yield stress) beyond which the material experiences deformations that are not completely reversed upon removal of the loading, leaving the member with a permanent deflection. The ultimate strength refers to the point on the engineering stress-strain curve corresponding to the stress that produces fracture.

Failure Theories
There are four important failure theories: maximum shear stress theory, maximum normal stress theory, maximum strain energy theory, and maximum distortion energy theory. Of these four, the maximum normal stress theory is applicable only to brittle materials; the remaining three apply to ductile materials. Of the latter three, the distortion energy theory provides the most accurate results in the majority of stress conditions. The strain energy theory requires the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses, all the theories are equivalent and give the same result.
  • Maximum shear stress theory - This theory postulates that failure will occur if the magnitude of the maximum shear stress in the part exceeds the shear strength of the material determined from uniaxial testing.
  • Maximum normal stress theory - This theory postulates that failure will occur if the maximum normal stress in the part exceeds the ultimate tensile stress of the material as determined from uniaxial testing. This theory deals with brittle materials only. The maximum tensile stress should be less than or equal to the ultimate tensile stress divided by a factor of safety, and the magnitude of the maximum compressive stress should be less than the ultimate compressive stress divided by a factor of safety.
  • Maximum strain energy theory - This theory postulates that failure will occur when the strain energy per unit volume due to the applied stresses in a part equals the strain energy per unit volume at the yield point in uniaxial testing.
  • Maximum distortion energy theory - Also known as the shear energy theory or the von Mises-Hencky theory, this theory postulates that failure will occur when the distortion energy per unit volume due to the applied stresses in a part equals the distortion energy per unit volume at the yield point in uniaxial testing. The total elastic strain energy can be divided into two parts: one part causes change in volume, and the other causes change in shape. Distortion energy is the amount of energy needed to change the shape.
  • Fracture mechanics - Established by Alan Arnold Griffith and George Rankine Irwin, this theory predicts failure in the presence of pre-existing cracks, relating a material's fracture toughness to crack size and applied stress.
  • Fractology - Proposed by Takeo Yokobori, this approach holds that each fracture law, including the creep rupture criterion, must be combined nonlinearly.
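As an illustration of the distortion energy theory, here is the von Mises equivalent stress computed from principal stresses and checked against a yield strength. This is a sketch; the stress values and the 250 MPa yield strength are illustrative.

```python
import math

# Von Mises (maximum distortion energy) criterion from principal stresses.
def von_mises(s1, s2, s3):
    """Equivalent stress; failure is predicted when this reaches the yield strength."""
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2)

# Plane-stress example: principal stresses 150 and 50 MPa, yield 250 MPa
sigma_vm = von_mises(150e6, 50e6, 0.0)
print(f"{sigma_vm/1e6:.1f} MPa")                  # ~132.3 MPa
print("yields" if sigma_vm >= 250e6 else "safe")  # safe
```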

Sunday, January 26, 2014

Ductility in Materials Science

In materials science, ductility is a solid material's ability to deform under tensile stress; this is often characterized by the material's ability to be stretched into a wire. Malleability, a similar property, is a material's ability to deform under compressive stress; this is often characterized by the material's ability to form a thin sheet by hammering or rolling. Both of these mechanical properties are aspects of plasticity, the extent to which a solid material can be plastically deformed without fracture. Also, these material properties are dependent on temperature and pressure (investigated by Percy Williams Bridgman as part of his Nobel Prize-winning work on high pressures).

Ductility and malleability are not always coextensive – for instance, while gold has high ductility and malleability, lead has low ductility but high malleability. The word ductility is sometimes used to embrace both types of plasticity.

Materials Science
Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal forming processes, such as hammering, rolling and drawing. Malleable materials can be formed using stamping or pressing, whereas brittle metals and plastics must be molded.

High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. In metallic bonds, valence shell electrons are delocalized and shared among many atoms. The delocalized electrons allow metal atoms to slide past one another without being subjected to the strong repulsive forces that would cause other materials to shatter.

Ductility can be quantified by the fracture strain, which is the engineering strain at which a test specimen fractures during a uniaxial tensile test. Another commonly used measure is the reduction of area at fracture. The ductility of steel varies depending on the alloying constituents: increasing levels of carbon decrease ductility. Many plastics and amorphous solids, such as Play-Doh, are also malleable. The most ductile metal is platinum and the most malleable metal is gold.
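Both measures are simple ratios taken from a tensile test. A sketch, with specimen dimensions invented for illustration:

```python
# Ductility measures from a uniaxial tensile test.
def percent_elongation(L0_mm, Lf_mm):
    """Fracture strain of the gauge length, as a percentage."""
    return (Lf_mm - L0_mm) / L0_mm * 100

def percent_reduction_of_area(A0_mm2, Af_mm2):
    """Reduction of cross-sectional area at the fracture surface, as a percentage."""
    return (A0_mm2 - Af_mm2) / A0_mm2 * 100

# A 50 mm gauge length stretches to 62 mm; the neck shrinks from 12.0 to 8.4 mm^2
print(round(percent_elongation(50.0, 62.0), 1))        # 24.0
print(round(percent_reduction_of_area(12.0, 8.4), 1))  # 30.0
```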

Ductile-brittle transition temperature
The ductile–brittle transition temperature (DBTT), nil ductility temperature (NDT), or nil ductility transition temperature of a metal represents the point at which the fracture energy passes below a pre-determined point (for steels typically 40 J for a standard Charpy impact test). DBTT is important since, once a material is cooled below the DBTT, it has a much greater tendency to shatter on impact instead of bending or deforming. For example, zamak 3 exhibits good ductility at room temperature but shatters at sub-zero temperatures when impacted. DBTT is a very important consideration in materials selection when the material in question is subject to mechanical stresses. A similar phenomenon, the glass transition temperature, occurs with glasses and polymers, although the mechanism is different in these amorphous materials.
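A sketch of how a DBTT might be estimated from a series of Charpy tests using the 40 J criterion mentioned above. The data points are invented for illustration; real determinations fit a full transition curve.

```python
# Estimate the ductile-brittle transition temperature by linear interpolation
# of Charpy impact energies to the chosen criterion (40 J for steels).
def dbtt_from_charpy(temps_C, energies_J, criterion_J=40.0):
    """Temperatures must be ascending; returns None if the criterion is never crossed."""
    for i in range(len(temps_C) - 1):
        t1, t2 = temps_C[i], temps_C[i + 1]
        e1, e2 = energies_J[i], energies_J[i + 1]
        if e1 <= criterion_J <= e2:
            return t1 + (criterion_J - e1) * (t2 - t1) / (e2 - e1)
    return None

temps = [-60, -40, -20, 0, 20]    # deg C, test temperatures
energies = [5, 12, 30, 50, 120]   # J, absorbed energy at each temperature
print(dbtt_from_charpy(temps, energies))  # -10.0
```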

In some materials this transition is sharper than in others. For example, the transition is generally sharper in materials with a body-centered cubic (BCC) lattice than in those with a face-centered cubic (FCC) lattice. DBTT can also be influenced by external factors such as neutron radiation, which leads to an increase in internal lattice defects and a corresponding decrease in ductility and increase in DBTT.

The most accurate method of measuring the ductile-brittle transition temperature of a material is by fracture testing. Typically, four-point bend testing at a range of temperatures is performed on pre-cracked bars of polished material. As the test temperature increases, dislocation activity increases. At a certain temperature, dislocations shield the crack tip to such an extent that the applied deformation rate is not sufficient for the stress intensity at the crack tip to reach the critical value for fracture (K_IC). The temperature at which this occurs is the ductile-brittle transition temperature. If experiments are performed at a higher strain rate, more dislocation shielding is required to prevent brittle fracture and the transition temperature is raised.
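The crack-tip condition can be sketched with the standard stress intensity formula K = Y * sigma * sqrt(pi * a), where fracture is predicted when K reaches the toughness K_IC. The geometry factor and the toughness value below are illustrative assumptions, not data for a specific alloy.

```python
import math

# Stress intensity at a crack tip: K = Y * sigma * sqrt(pi * a).
# Brittle fracture is predicted when K reaches the fracture toughness K_IC.
def stress_intensity(sigma_Pa, crack_length_m, Y=1.0):
    """Y is a dimensionless geometry factor (1.0 assumed here)."""
    return Y * sigma_Pa * math.sqrt(math.pi * crack_length_m)

K_IC = 50e6  # Pa*sqrt(m); illustrative toughness for a structural steel
K = stress_intensity(100e6, 0.005)  # 100 MPa applied stress, 5 mm crack
print(f"K = {K/1e6:.1f} MPa*sqrt(m), fracture predicted: {K >= K_IC}")
```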

Saturday, January 25, 2014

Annealing Metals and Materials

Annealing, in metallurgy and materials science, is a heat treatment that alters a material to increase its ductility and to make it more workable. It involves heating a material to above its critical temperature, maintaining a suitable temperature, and then cooling. Annealing can induce ductility, soften material, relieve internal stresses, refine the structure by making it homogeneous, and improve cold working properties.

In the cases of copper, steel, silver, and brass, this process is performed by heating the material (generally until glowing) for a while and then slowly letting it cool to room temperature in still air. Copper, silver and brass can be cooled slowly in air, or quickly by quenching in water, unlike ferrous metals, such as steel, which must be cooled slowly to anneal. In this fashion, the metal is softened and prepared for further work—such as shaping, stamping, or forming.

Annealing occurs by the diffusion of atoms within a solid material, so that the material progresses towards its equilibrium state. Heat increases the rate of diffusion by providing the energy needed to break bonds.
The movement of atoms has the effect of redistributing and destroying the dislocations in metals and (to a lesser extent) in ceramics. This alteration in dislocations allows metals to deform more easily, so increases their ductility.

The amount of process-initiating Gibbs free energy in a deformed metal is also reduced by the annealing process. In practice and industry, this reduction of Gibbs free energy is termed stress relief.

The relief of internal stresses is a thermodynamically spontaneous process; however, at room temperatures, it is a very slow process. The high temperatures at which annealing occurs serve to accelerate this process.
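The acceleration with temperature described above is usually modeled with an Arrhenius law for the diffusion coefficient, D = D0 * exp(-Q / RT). The sketch below compares a typical annealing temperature with room temperature; the values of D0 and the activation energy Q are assumed for illustration, not figures from the text.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
Q = 200_000.0    # activation energy, J/mol (assumed illustrative value)
D0 = 1e-5        # pre-exponential factor, m^2/s (assumed illustrative value)

def diffusivity(T_kelvin):
    """Arrhenius diffusion coefficient: D = D0 * exp(-Q / (R * T))."""
    return D0 * math.exp(-Q / (R * T_kelvin))

# Ratio of diffusion rates at an annealing temperature vs. room temperature.
ratio = diffusivity(900.0) / diffusivity(300.0)
print(f"{ratio:.3g}")
```

With these assumed values the ratio comes out around 10^23, which is why stress relief that would take geological time at room temperature completes in hours in a furnace.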

The three stages of the annealing process that proceed as the temperature of the material is increased are: recovery, recrystallization, and grain growth. The first stage is recovery, and it results in softening of the metal through removal of primarily linear defects called dislocations and the internal stresses they cause. Recovery occurs at the lower temperature stage of all annealing processes and before the appearance of new strain-free grains. The grain size and shape do not change. The second stage is recrystallization, where new strain-free grains nucleate and grow to replace those deformed by internal stresses. If annealing is allowed to continue once recrystallization has completed, then grain growth (the third stage) occurs. In grain growth, the microstructure starts to coarsen and may cause the metal to lose a substantial part of its original strength. This can however be regained with hardening.

Controlled Atmospheres
Typically, large ovens are used for the annealing process. The inside of the oven is large enough to place the workpiece in a position to receive maximum exposure to the circulating heated air. For high-volume process annealing, gas-fired conveyor furnaces are often used. For large workpieces or high-quantity parts, car-bottom furnaces are used so workers can easily move the parts in and out. Once the annealing process is successfully completed, workpieces are sometimes left in the oven so the parts cool in a controllable way. While some workpieces are left in the oven to cool in a controlled fashion, other materials and alloys are removed from the oven and quickly cooled in a process known as quench hardening. Typical quenching media include air, water, oil, and salt. Salt is used as a quenching medium usually in the form of brine (salt water). Brine provides faster cooling rates than water, because when an object is quenched in plain water, vapor bubbles form on the surface of the object, reducing the surface area the water is in contact with. The salt in brine suppresses the formation of these bubbles, so a larger surface area of the object stays in contact with the water, providing faster cooling rates. Quench hardening is generally applicable to some ferrous alloys, but not copper alloys.

Friday, January 24, 2014

Sounds We Learn to Like

Relativity of Pleasing Music
WAMC Public Radio
By Dr. Neil McLachlan

Over two thousand years ago Pythagoras discovered a way of tuning strings by using simple proportions of length. He thought that the pitch interval produced by two identical strings with a 2/3 ratio of lengths created a perfect harmony. This was because he heard the beating (or roughness) between the two strings diminish at this ratio. We now call that interval a Perfect 5th, and it has been at the heart of western musical scales ever since.

About 150 years ago the famous scientist Helmholtz sought to explain the harmony of the Perfect 5th by showing that strings create harmonic frequencies when they vibrate. Harmonic frequencies are created when the string vibrations break up into fractions of the string length, and so occur at integer multiples of the lowest frequency (i.e. for a 100 Hz string, harmonics occur at 200, 300, 400 Hz and so on). When two strings have a length ratio of 2/3 they have a frequency ratio of 3/2, and the 3rd harmonic of one string is at exactly the same frequency as the 2nd harmonic of the other. This reduces the beating between the strings, and so, it was believed, reduces the dissonance that we hear.
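The harmonic coincidence Helmholtz described is easy to check numerically. A short Python sketch, using 100 Hz as the example fundamental from the text:

```python
def harmonics(f0, n=6):
    """First n harmonic frequencies (Hz) of a string with fundamental f0."""
    return [f0 * k for k in range(1, n + 1)]

# Two strings with a 2/3 length ratio have a 3/2 frequency ratio.
low = harmonics(100.0)   # 100, 200, 300, 400, 500, 600 Hz
high = harmonics(150.0)  # 150, 300, 450, 600, 750, 900 Hz

# Coinciding harmonics are the frequencies where no beating occurs.
shared = sorted(set(low) & set(high))
print(shared)  # [300.0, 600.0]: the 3rd harmonic of 100 Hz equals the 2nd of 150 Hz
```

For a less consonant interval (say a frequency ratio of 16/15), the first six harmonics share no frequencies at all, which on Helmholtz's account is why it beats and sounds rough.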

Helmholtz also suggested that sound waves resonate in the cochlea, which causes a strong nerve signal representing the pitch of the sound at the peak of the resonating wave. But later scientists discovered that people often heard pitches at frequencies that weren’t part of the sound wave, and more sophisticated explanations that included pattern matching of harmonic frequencies or extracting the common period of the wave were suggested for pitch.

Pitch and dissonance are innate properties of the auditory system in these theories. However, innate theories cannot account for the huge variety of human musical ability and preferences. Over the last couple of decades science has learnt about the plasticity of the human brain, and in our research we suggest that this plasticity extends to the early (pre-conscious) stages of auditory processing. We showed that pitch processing is learnt by training non-musicians with unusual sounds that don’t have simple harmonic overtones. We found that as people learnt to recognize a sound they could find its pitch and it sounded less dissonant. In other words, over two thousand years of music theory has overlooked the simple observation that music is learnt, and that we all come to love the music of our own culture, however strange it may sound to others.

Production support for the Academic Minute comes from Newman’s Own, giving all profits to charity and pursuing the common good for over 30 years, and from Mount Holyoke College.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Neil McLachlan is an associate professor in the School of Psychological Sciences at the University of Melbourne. His research in the Music and Auditory Neuroscience Laboratory examines the neurocognitive mechanisms involved in auditory perception.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Thursday, January 23, 2014

Creating New Antibiotics

Drug Discovery Potential of
Natural Microbial Genomes
Advanced genetic technique yields novel antibiotic from ocean bacteria
January 22, 2014 -- Scientists at the University of California, San Diego have developed a new genetic platform that allows efficient production of naturally occurring molecules, and have used it to produce a novel antibiotic compound. Their study, published this week in PNAS, may open new avenues for natural product discoveries and drug development.

According to lead investigator Bradley S. Moore, PhD, of the Scripps Institution of Oceanography and Skaggs School of Pharmacy and Pharmaceutical Sciences at UC San Diego, the findings demonstrate a "plug and play" technique to trigger previously unknown biosynthetic pathways and identify natural product drug candidates.

"In my opinion, the new synthetic biology technology we developed – which resulted in the discovery of a new antibiotic from a marine bacterium – is just the tip of the iceberg in terms of our ability to modernize the natural product drug discovery platform," Moore said.

The ocean, covering 70 percent of the earth's surface, is a rich source of new microbial diversity for the discovery of new natural products effective as drugs for treating infections, cancer and other important medical conditions. Most natural antibiotics are complex molecules that are assembled by a special group of enzymes genetically encoded in the microbe's chromosome.

But it often proves difficult to grow the newly discovered ocean bacteria in the laboratory, or to get them to produce their full repertoire of natural products.

The UC San Diego scientists harvested a set of genes predicted to encode a natural product from ocean bacteria, then used the synthetic biology technology to identify and test a totally new antibiotic – taromycin A – found to be effective in fighting drug-resistant MRSA.

"Antibiotic resistance is a critical challenge to public health. Most antibiotics, such as penicillin, used in human medicine are natural molecules originally isolated from microbes in the soil or rainforest – part of the chemical warfare that microbes deploy to out-compete one another and secure their niche in the environment," said co-investigator Victor Nizet, MD, professor of pediatrics and pharmacy at UC San Diego.

Such microbes have the genetic capacity to biosynthesize a wide range of specialized compounds. Although next-generation sequencing technologies can potentially exploit this capacity as an approach to natural drug discovery, researchers currently lack procedures to efficiently link genes with specific molecules. To help bridge this gap, the UC San Diego researchers developed a genetic platform based on transformation-associated recombination (TAR) cloning, which efficiently produces natural product molecules from uncharacterized gene collections.

The researchers applied the platform to yeast, targeting the taromycin gene cluster because of its high degree of similarity to the biosynthesis pathway of daptomycin, a clinically approved antibiotic used to treat infections caused by multi-resistant bacteria. "The technique has the potential to unlock the drug discovery potential of countless new and mysterious microbes," Nizet concluded.

Additional contributors to the study include Kazuya Yamanaka, Kirk A. Reynolds, Roland D. Kersten, Katherine S. Ryan, David J. Gonzalez, and Pieter C. Dorrestein.

The study was supported by grants from the National Institutes of Health: GM085770, GM097509, AI057153, and Instrument Grant S10-OD010640-01.

UC San Diego’s Technology Transfer Office has filed patent applications on this process and on the taromycin antibiotics, which are available for licensing.

Wednesday, January 22, 2014

Dense New Battery Runs on Sugar

Environmentally friendly, energy-dense sugar battery developed to power the world's gadgets
BLACKSBURG, Va., Jan. 22, 2014 – A Virginia Tech research team has developed a battery that runs on sugar and has an unmatched energy density, a development that could replace conventional batteries with ones that are cheaper, refillable, and biodegradable.

The findings from Y.H. Percival Zhang, an associate professor of biological systems engineering in the College of Agriculture and Life Sciences and the College of Engineering, were published yesterday in the journal Nature Communications.

While other sugar batteries have been developed, Zhang said his has an energy density an order of magnitude higher than others, allowing it to run longer before needing to be refueled.

In as soon as three years, Zhang’s new battery could be running some of the cell phones, tablets, video games, and the myriad other electronic gadgets that require power in our energy-hungry world, Zhang said.

"Sugar is a perfect energy storage compound in nature," Zhang said. "So it’s only logical that we try to harness this natural power in an environmentally friendly way to produce a battery."

In America alone, billions of toxic batteries are thrown away every year, posing a threat to both the environment and human health, according to the Environmental Protection Agency. Zhang’s development could help keep hundreds of thousands of tons of batteries from ending up in landfills.

This is one of Zhang’s many successes in the last year that utilize a series of enzymes mixed together in combinations not found in nature. He has published articles on creating edible starch from non-food plants and developed a new way to extract hydrogen in an economical and environmentally friendly way that can be used to power vehicles.

In this newest development, Zhang and his colleagues constructed a non-natural synthetic enzymatic pathway that strips all charge potentials from the sugar to generate electricity in an enzymatic fuel cell.

Then, low-cost biocatalyst enzymes are used as catalysts instead of costly platinum, which is typically used in conventional batteries.

Like all fuel cells, the sugar battery combines fuel — in this case, maltodextrin, a polysaccharide made from partial hydrolysis of starch — with air to generate electricity and water as the main byproducts.

"We are releasing all electron charges stored in the sugar solution slowly step-by-step by using an enzyme cascade," Zhang said.

Different from hydrogen fuel cells and direct methanol fuel cells, the fuel sugar solution is neither explosive nor flammable and has a higher energy storage density. The enzymes and fuels used to build the device are biodegradable.

The battery is also refillable and sugar can be added to it much like filling a printer cartridge with ink.

Support for the current research comes from the Department of Biological Systems Engineering at Virginia Tech and Cell-Free Bioinnovations, a biotech start-up located in Blacksburg, Va. Additional funding was contributed by a National Science Foundation Small Business Innovation Research grant to Cell-Free Bioinnovations Inc. Zhiguang Zhu, the first author of this paper and a 2013 biological systems engineering graduate of Virginia Tech, is the principal investigator for the National Science Foundation grant.

Nationally ranked among the top research institutions of its kind, Virginia Tech’s College of Agriculture and Life Sciences focuses on the science and business of living systems through learning, discovery, and engagement. The college’s comprehensive curriculum gives more than 3,100 students in a dozen academic departments a balanced education that ranges from food and fiber production to economics to human health. Students learn from the world’s leading agricultural scientists, who bring the latest science and technology into the classroom.

Tuesday, January 21, 2014

Elevator Music -- a cultural war

Elevator music (also known as Muzak, piped music, weather music or lift music) refers to a type of popular music, often instrumental, that is commonly played through speakers at shopping malls, grocery stores, department stores, telephone systems (while the caller is on hold), cruise ships, airports, business offices, and elevators. The term is also frequently applied as a generic term for any form of easy listening, smooth jazz, or middle of the road music, or to the type of recordings commonly heard on "beautiful music" radio stations.

Elevator music is typically set to a very simple melody so that it can be unobtrusively looped back to the beginning. In a mall or shopping center, elevator music of a specific type has been found to have a psychological effect: slower, more relaxed music tends to make people slow down and browse longer. Elevator music may also be preferred over broadcast radio stations due to the lack of lyrics and commercial interruptions.

This style of music is sometimes used to comedic effect in mass media such as film, where intense or dramatic scenes may be interrupted or interspersed with such anodyne music while characters use an elevator (e.g. The Blues Brothers, Dawn of the Dead, Mr. and Mrs. Smith, Asterix & Obelix: Mission Cleopatra, and Spider-Man 2). Some video games have used elevator music for comedic effect, e.g. Metal Gear Solid 4, where a few elevator music-themed tracks are accessible on the in-game iPod.

The Muzak Holdings Corporation is a major supplier of business background music, and was the best known such supplier for years. Since 1997 Muzak has used original artists for its music source, except on the Environmental channel.
= = = = = = = = = = = = = = = = = = = = = = = = = = = =

Elevator Music is also the title of an in-depth look at this sort of music written by Joseph Lanza. Here’s a description of this book with some reviewer quotes:

For a sound intended to be comforting, unobtrusive, and inoffensive, "elevator music"—i.e., easy listening, mood music, "Beautiful Music," and "Music by Muzak®"—has ignited strong and often heated opinions.

With an arsenal of historical anecdotes and facts, Joseph Lanza sings seriously, with healthy doses of humor and wit, the praises of this misunderstood musical genre. Lanza traces mood music's mystifying presence from the mind-altering sirens who lured Odysseus to the harp David played to soothe King Saul, but the tale gets more intriguing in the early twentieth century, with Erik Satie's "furniture music" experiments, the birth of the Muzak® Corporation, and various science fiction stories that featured mood music as a futuristic staple.

Lanza also chronicles the parallel development of the "easy listening" instrumental, discussing such "mood maestros" as Ray Conniff, Percy Faith, Andre Kostelanetz, and Mantovani. More recent "ambient" soundscapers like Brian Eno and practitioners of what some still call New Age also enter the picture. Along the way, Lanza addresses mood music's social and even governmental uses, raising questions about music's role in modern life while challenging aesthetic assumptions.

This revised and expanded edition delves deeper into the surreal phenomenon of "metarock"—the art of reinterpreting rock songs into dreamlike, string-laden, easy-listening alternatives. The author also adds an afterword about some of the actual musicians who arranged and conducted Muzak® sessions—respected names like Nelson Riddle.

Elevator Music confronts the criticisms of elites who say that elevator music is "dehumanizing" or less than music. These reactions, Lanza argues, are based more on cultural prejudices than honest musical appraisal. In a current climate where the noises are louder, and the background beats are ever more aggressive, this history of music intended as a pleasing background makes for a captivating read.

Joseph Lanza is currently writing an impressionistic history of romantic pop ballads. He has served as an independent consultant for Time Life Music and was executive producer for the two-disc collection Music for TV Dinners (Caroline Records). His most recent book is about the legendary crooner Russ Columbo.

Praise / Awards
  • "Snobby musicologists ignore this fascinating topic, but I learned a lot while being well-entertained by Lanza's delightful book."
    —Wendy Carlos, composer, soundtracks for "A Clockwork Orange" and "The Shining"
  • "Not until Joseph Lanza's Elevator Music have I been privileged to read what I consider the definitive history of twentieth-century music. This is it."
    —Errol Morris, director of "The Thin Blue Line"
  • "Elevator Music is a fascinating tour of the sonic inferno we all unconsciously inhabit."
    —J. G. Ballard, author of "Crash" and "Empire of the Sun"
  • ". . . a valuable addition to collections supporting music and culture."
  • "It's still a surreal world, after all, and Lanza's neat tome is a great way to reflect on some of the aural factors that make it so."
    —Washington Post
  • "Lanza takes background music seriously as both music and social utility. In doing so, he's written one of the few pop-history books that won't put you to sleep - not to mention the only one that dares to probe the very real connections between shopping-mall music and Devo."
    —Entertainment Weekly
  • "A fascinating tour of a genuine piece of American surrealism, diligently researched, sparklingly presented, surprising at every turn. Hilarious and at times terrifying."
    —Phil Patton, author of Made in the U.S.A.: The Secret Histories of the Things That Made America

= = = = = = = = = = = = = = = = = = = = = = = = = = = =

    Note from the Blog Author
    Lanza’s book is at times astonishing. Musicologists, and serious music fans who hate soundtracks, will not like what Lanza has to say. The key to Lanza’s message is the intense and shrewd thinking that the key developers of easy listening and light music brought to their work. Lanza offers brief biographies of such important figures as Jackie Gleason, Mantovani, and Angelo Badalamenti that are, themselves, worth the price of the book.

    Elevator music has been on the front lines of a culture war for decades. The combat continues.

    Monday, January 20, 2014

    Mysterious Ball Lightning

    Ball lightning is an unexplained atmospheric electrical phenomenon. The term refers to reports of luminous, usually spherical objects, which vary from pea-sized to several meters in diameter. It is usually associated with thunderstorms, but lasts considerably longer than the split-second flash of a lightning bolt. Many of the early reports say that the ball eventually explodes, sometimes with fatal consequences, leaving behind the odor of sulfur.

    Laboratory experiments have produced effects that are visually similar to reports of ball lightning, but it is unknown whether these are actually related to any naturally occurring phenomenon. Scientific data on natural ball lightning is scarce owing to its infrequency and unpredictability. The presumption of its existence is based on reported public sightings, and has therefore produced somewhat inconsistent findings.

    Given inconsistencies and the lack of reliable data, the true nature of ball lightning is still unknown. The first optical and spectral characteristics of ball lightning were published in January 2014.

    Descriptions of ball lightning vary widely. It has been described as moving up and down, sideways or in unpredictable trajectories, hovering and moving with or against the wind; attracted to, unaffected by, or repelled from buildings, people, cars and other objects. Some accounts describe it as moving through solid masses of wood or metal without effect, while others describe it as destructive and melting or burning those substances. Its appearance has also been linked to power lines as well as during thunderstorms and also calm weather. Ball lightning has been described as transparent, translucent, multicolored, evenly lit, radiating flames, filaments or sparks, with shapes that vary between spheres, ovals, tear-drops, rods, or disks.

    Ball lightning is often erroneously identified as St. Elmo’s fire. They are separate and distinct phenomena.
    The balls have been reported to disperse in many different ways, such as suddenly vanishing, gradually dissipating, absorption into an object, "popping," exploding loudly, or even exploding with force, which is sometimes reported as damaging. Accounts also vary on their alleged danger to humans, from lethal to harmless.

    A review of the available literature published in 1972 identified the properties of a "typical" ball lightning, whilst cautioning against over-reliance on eye-witness accounts:
    • They frequently appear almost simultaneously with cloud-to-ground lightning discharge
    • They are generally spherical or pear-shaped with fuzzy edges
    • Their diameters range from 1–100 cm, most commonly 10–20 cm
    • Their brightness corresponds to roughly that of a domestic lamp, so they can be seen clearly in daylight
    • A wide range of colours has been observed, red, orange, and yellow being the most common.
    • The lifetime of each event is from 1 second to over a minute with the brightness remaining fairly constant during that time
    • They tend to move, most often in a horizontal direction at a few metres per second, but may also move vertically, remain stationary or wander erratically.
    • Many are described as having rotational motion
    • It is rare that observers report the sensation of heat, although in some cases the disappearance of the ball is accompanied by the liberation of heat
    • Some display an affinity for metal objects and may move along conductors such as wires or metal fences
    • Some appear within buildings passing through closed doors and windows
    • Some have appeared within metal aircraft and have entered and left without causing damage
    • The disappearance of a ball is generally rapid and may be either silent or explosive
    • Odors resembling ozone, burning sulfur, or nitrogen oxides are often reported

    Direct Measurements of Natural Ball Lightning
    In January 2014, scientists from Northwest Normal University in Lanzhou, China, published the results of recordings made in July 2012 of the optical spectrum of what was thought to be natural ball lightning made during the study of ordinary cloud–ground lightning. At a distance of 900 m (3,000 ft), a total of 1.64 seconds of high-speed video of the ball lightning and its spectrum was made, from the formation of the ball lightning after the ordinary lightning struck the ground, up to the optical decay of the phenomenon. The emission lines of neutral atomic silicon, calcium, iron, nitrogen and oxygen were detected, in contrast with the mainly ionized nitrogen emission lines in the spectrum of the parent lightning. Oscillations in the light intensity and in the oxygen and nitrogen emission at a frequency of 100 Hz, caused by the electric field of the 50 Hz high-voltage power transmission line in the vicinity, were observed. From the spectrum, the temperature of the ball lightning was assessed as being lower than the temperature of the parent lightning (<15,000–30,000 K). The observed data are consistent with vaporization of soil as well as with ball lightning's sensitivity to electrical fields.

    Sunday, January 19, 2014

    "Cosmic Web Filaments" Are Real

    UCSC Scientists Capture First Cosmic Web Filaments at Keck Observatory

    Keck Observatory, January 21, 2014

    Astronomers have discovered a distant quasar illuminating a vast nebula of diffuse gas, revealing for the first time part of the network of filaments thought to connect galaxies in a cosmic web. Researchers at the University of California, Santa Cruz, led the study, published January 19 in the journal Nature.

    Using the 10-meter Keck I telescope at the W. M. Keck Observatory in Hawaii, the researchers detected a very large, luminous nebula of gas extending about 2 million light-years across intergalactic space.
    "This is a very exceptional object: it's huge, at least twice as large as any nebula detected before, and it extends well beyond the galactic environment of the quasar," said Sebastiano Cantalupo, first author of the paper and a postdoctoral fellow at UC Santa Cruz.

    The standard cosmological model of structure formation in the universe predicts that galaxies are embedded in a cosmic web of matter, most of which (about 84 percent) is invisible dark matter. This web is seen in the results from computer simulations of the evolution of structure in the universe, which show the distribution of dark matter on large scales, including the dark matter halos in which galaxies form and the cosmic web of filaments that connect them. Gravity causes ordinary matter to follow the distribution of dark matter, so filaments of diffuse, ionized gas are expected to trace a pattern similar to that seen in dark matter simulations.

    Until now, these filaments have never been seen. Intergalactic gas has been detected by its absorption of light from bright background sources, but those results don't reveal how the gas is distributed. In this study, the researchers detected the fluorescent glow of hydrogen gas resulting from its illumination by intense radiation from the quasar.

    "This quasar is illuminating diffuse gas on scales well beyond any we've seen before, giving us the first picture of extended gas between galaxies," said J. Xavier Prochaska, coauthor and professor of astronomy and astrophysics at UC Santa Cruz. "It provides a terrific insight into the overall structure of our universe."

    The hydrogen gas illuminated by the quasar emits ultraviolet light known as Lyman alpha radiation. The distance to the quasar is so great (about 10 billion light-years) that the emitted light is "stretched" by the expansion of the universe from an invisible ultraviolet wavelength to a visible shade of violet by the time it reaches the Keck telescope and the LRIS spectrometer used for this discovery. Knowing the distance to the quasar, the researchers calculated the wavelength for Lyman alpha radiation from that distance and built a special filter for LRIS to get an image at that wavelength.
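The wavelength arithmetic behind that custom filter can be sketched in a few lines of Python. The Lyman-alpha rest wavelength is 121.567 nm; the redshift below is an assumed illustrative value, chosen to be roughly consistent with the ~10-billion-light-year distance quoted in the article (the article itself does not state a redshift).

```python
REST_LYA_NM = 121.567  # Lyman-alpha rest wavelength, nanometers
z = 2.3                # assumed redshift for illustration

def observed_wavelength(rest_nm, z):
    """Cosmological redshift stretches wavelengths by a factor of (1 + z)."""
    return rest_nm * (1 + z)

print(round(observed_wavelength(REST_LYA_NM, z), 1))  # 401.2 nm
```

401 nm sits at the violet edge of the visible band, which matches the article's description of ultraviolet emission arriving as "a visible shade of violet"; a narrow-band filter would be centered on this computed wavelength.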

    "We have studied other quasars this way without detecting such extended gas," Cantalupo said. "The light from the quasar is like a flashlight beam, and in this case we were lucky that the flashlight is pointing toward the nebula and making the gas glow. We think this is part of a filament that may be even more extended than this, but we only see the part of the filament that is illuminated by the beamed emission from the quasar."

    A quasar is a type of active galactic nucleus that emits intense radiation powered by a supermassive black hole at the center of the galaxy. In an earlier survey of distant quasars using the same technique to look for glowing gas, Cantalupo and others detected so-called "dark galaxies," the densest knots of gas in the cosmic web. These dark galaxies are thought to be either too small or too young to have formed stars.

    "The dark galaxies are much denser and smaller parts of the cosmic web. In this new image, we also see dark galaxies, in addition to the much more diffuse and extended nebula," Cantalupo said. "Some of this gas will fall into galaxies, but most of it will remain diffuse and never form stars."

    The researchers estimated the amount of gas in the nebula to be at least ten times more than expected from the results of computer simulations. "We think there may be more gas contained in small dense clumps within the cosmic web than is seen in our models. These observations are challenging our understanding of intergalactic gas and giving us a new laboratory to test and refine our models," Cantalupo said.

    In addition to Cantalupo and Prochaska, the coauthors of the paper include Piero Madau, professor of astronomy and astrophysics at UC Santa Cruz, and Fabrizio Arrigoni-Battaia and Joseph Hennawi at the Max Planck Institute for Astronomy in Heidelberg, Germany. This research was supported by grants from the National Science Foundation (AST-1010004, OIA-1124453) and NASA (NNX12AF87G).

    The W. M. Keck Observatory operates the largest, most scientifically productive telescopes on Earth. The two 10-meter optical/infrared telescopes on the summit of Mauna Kea on the Island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectroscopy and world-leading laser guide star adaptive optics systems. The Observatory is a private 501(c)(3) non-profit organization and a scientific partnership of the California Institute of Technology, the University of California and NASA.

    Saturday, January 18, 2014

    Positive Quiddity: Hugh Dowding

    Air Chief Marshal Hugh Caswall Tremenheere Dowding, 1st Baron Dowding GCB, GCVO, CMG (24 April 1882 – 15 February 1970) was a British officer in the Royal Air Force. He was the commander of RAF Fighter Command during the Battle of Britain, and is generally credited with playing a crucial role in Britain's defence, and hence, the defeat of Hitler's plan to invade Britain.
    Early Life
    Hugh Dowding's father, Arthur Dowding, taught at Fettes College in Edinburgh before moving to the southern Scottish town of Moffat, where Hugh Dowding was born in 1882. Hugh Dowding received his early education at St. Ninian's Boys' Preparatory School in Moffat, which his father had been instrumental in founding. Hugh Dowding was of Cornish ancestry, being the grandchild of Lieutenant General Charles William Tremenheere. Dowding was educated at Winchester College in England on a scholarship before joining the Royal Military Academy, Woolwich. He later served abroad in the Royal Garrison Artillery.

    Military Career
    Initially he served in Gibraltar, Ceylon, Hong Kong, and India. After returning to Great Britain, Dowding attended the Army Staff College in January 1912 before being posted to the Royal Garrison Artillery on the Isle of Wight in 1913. After becoming interested in aviation, Dowding gained Aviator’s Certificate no. 711 on 19 December 1913 in a Vickers biplane at the Vickers School of Flying, Brooklands. He then attended the Central Flying School, where he was awarded his wings. Although added to the Reserve List of the Royal Flying Corps (RFC), Dowding returned to the Isle of Wight to resume his Royal Garrison Artillery duties. However, this arrangement was short-lived and in August 1914, he joined the RFC as a pilot on No. 7 Squadron.

    First World War
    Dowding was sent to France and in 1915 was promoted to commander of No. 16 Squadron. After the Battle of the Somme, Dowding clashed with General Hugh Trenchard, the commander of the RFC, over the need to rest pilots exhausted by non-stop duty. As a result Dowding was sent back to Britain and, although promoted to the rank of Brigadier General, saw no more operational service during the First World War.

    Inter-war Years
Dowding then joined the recently created Royal Air Force and gained experience in departments of training, supply, development, and research. On 19 August 1924, Air Commodore Dowding was made Chief Staff Officer for RAF Iraq Command. In 1929, he was promoted to Air Vice Marshal and the following year joined the Air Council. Tragedy struck in the inter-war period when Clarice, his wife of two years, died after a short illness. Left alone to bring up his son, Derek, Hugh Dowding withdrew from socialising and threw himself into his work. In 1933 Dowding was promoted to Air Marshal and was knighted.

In the years prior to the Second World War, Dowding was the commanding officer of RAF Fighter Command, and was perhaps the one important person in Britain, or indeed the world, who did not agree with British Prime Minister Stanley Baldwin's 1932 declaration that "The bomber will always get through". He conceived and oversaw the development of the "Dowding System". This comprised an integrated air defence system which included (i) radar (whose potential Dowding was among the first to appreciate), (ii) human observers (including the Royal Observer Corps), who filled crucial gaps in what radar was capable of detecting at the time (the early radar systems, for example, did not provide good information on the altitude of incoming German aircraft), (iii) raid plotting, and (iv) radio control of aircraft.
The whole network was tied together, in many cases, by dedicated phone links buried sufficiently deep to provide protection against bombing. The network had its apex (and Dowding his own headquarters) at RAF Bentley Priory, a converted country house on the outskirts of London. The system as a whole later became known as Ground-controlled interception (GCI). Dowding also introduced modern aircraft into service during the pre-war period, including the eight-gun Spitfire and Hurricane. He is also credited with having fought the Air Ministry so that fighter planes were equipped with bulletproof windscreens. He was promoted to Air Chief Marshal in 1937.
    Battle of Britain
    Due to retire in June 1939, Dowding was asked to stay on until March 1940 because of the tense international situation. He was again grudgingly permitted to continue, first until July and finally until October 1940. Thus he fought the Battle of Britain under the shadow of retirement.

    In 1940, Dowding, nicknamed "Stuffy" by his men, proved unwilling to sacrifice aircraft and pilots in the attempt to aid Allied troops during the Battle of France. He, along with his immediate superior Sir Cyril Newall, then Chief of the Air Staff, resisted repeated requests from Winston Churchill to weaken the home defence by sending precious squadrons to France. When the Allied resistance in France collapsed, he worked closely with Air Vice-Marshal Keith Park, the commander of 11 Fighter Group, in organising cover for the evacuation of the British Expeditionary Force at Dunkirk.

    Through the summer and autumn of 1940 in the Battle of Britain, Dowding's Fighter Command resisted the attacks of the Luftwaffe. Beyond the critical importance of the overall system of integrated air defence which he had developed for Fighter Command, his major contribution was to marshal resources behind the scenes (including replacement aircraft and air crew) and to maintain a significant fighter reserve, while leaving his subordinate commanders' hands largely free to run the battle in detail. At no point did Dowding commit more than half his force to the battle zone in Southern England.

Dowding was known for his humility and intense sincerity. Fighter Command pilots came to characterise Dowding as one who cared for his men and had their best interests at heart. Dowding often referred to his "dear fighter boys" as his "chicks"; indeed, his son Derek was one of them, a pilot in No. 74 Squadron. Because of his brilliant detailed preparation of Britain's air defences for the German assault, and his prudent management of his resources during the battle, Dowding is today generally given the credit for Britain's victory in the Battle of Britain.

Dowding's subsequent downfall has been attributed by some to his single-mindedness and perceived lack of diplomacy and political savoir faire in dealing with intra-RAF challenges and intrigues, most obviously the still hotly debated Big Wing controversy, in which a number of senior and active service officers argued in favour of large set-piece air battles with the Luftwaffe as an alternative to Dowding's successful Fabian strategy. Another reason often cited for his removal, but characterised by some contemporary commentators more as a pretext, was the difficulty of countering German nighttime bombing raids on British cities.

The account of radar pioneer E. G. Bowen in Radar Days (1987) rebuts the claim that Dowding's perception of the problems of British night fighters was inadequate. Bowen suggests that if Dowding had been left to follow his own path, the ultimately effective British response to night bombing (which depended completely on developments in airborne radar) would have come somewhat sooner. Dowding himself showed that he had a good grasp of night fighter defence, and in a letter written some time after the Battle of Britain he set out plans for a defence system against night bombing. However, there was great political and public pressure during the Blitz for something to be done, and Fighter Command's existing resources, without, as yet, airborne radar, proved woefully inadequate. A committee of enquiry chaired by Sir John Salmond produced a long list of recommendations to improve night air defence; when Dowding approved only four of them, his erstwhile supporters, Lord Beaverbrook and Churchill, decided that it was time for him to step down.

    Dowding unwillingly relinquished command on 24 November 1940 and was replaced by Big Wing advocate, Sholto Douglas. Churchill tried to soften the blow by putting him in charge of the British Air Mission to the USA, responsible for the procurement of new aircraft types.

Dowding was made a Knight Grand Cross of the Order of the Bath. Publication of his book, Twelve Legions of Angels, was suppressed in 1942: the British Government considered that it contained information which might be of use to the Germans. The book was finally published in 1946, soon after the war ended.

    Ministry of Aircraft Production
After leaving Fighter Command, Dowding was sent on special duty to the United States for the Ministry of Aircraft Production, but there he made himself unpopular with his outspoken behaviour. On his return he headed a study into economies of RAF manpower before retiring from the Royal Air Force in July 1942. The following year he was honoured with a peerage, as Baron Dowding of Bentley Priory.

    Battle of Britain Film
In the 1969 film Battle of Britain, Dowding was played by Laurence Olivier, who had himself served as a pilot in the Royal Navy's Fleet Air Arm during World War II. During filming, Dowding (then aged 86 and in a wheelchair) met Olivier. Olivier told Dowding he had sat behind the latter's desk all day "pretending to be you" and was "making an awful mess of it too", to which Dowding replied, "Oh, I'm sure you are". This reduced Olivier and the crew to laughter. Footage of this can be seen in the special features section of the film's Special Edition DVD.

    According to The Real Life of Laurence Olivier by Roger Lewis (Arrow Books, 1997), while Olivier filmed the scenes of Dowding in his office at Bentley Priory, Lord Dowding, watching the shooting, wept. Coincidentally, both Dowding and Olivier are interred in Westminster Abbey.

    Friday, January 17, 2014

    Richard Brinsley Sheridan (1751 - 1816)

Richard Brinsley Butler Sheridan (30 October 1751 – 7 July 1816) was an Irish playwright and poet and long-term owner of the London Theatre Royal, Drury Lane. He is known for his plays such as The Rivals, The School for Scandal and A Trip to Scarborough. For thirty-two years he was also a Whig Member of the British House of Commons for Stafford (1780–1806), Westminster (1806–1807) and Ilchester (1807–1812). He was buried at Poets' Corner in Westminster Abbey.


    R. B. Sheridan was born in 1751 in Dublin, Ireland, where his family had a house on then-fashionable Dorset Street. While in Dublin Sheridan attended the English Grammar School in Grafton Street. The family moved permanently to England in 1758 when he was age seven. He was a pupil at Harrow School outside London from 1762 to 1768. His mother, Frances Sheridan, was a playwright and novelist. She had two plays produced in London in the early 1760s, though she is best known for her novel The Memoirs of Sidney Biddulph (1761). His father, Thomas Sheridan, was for a while an actor-manager at the Smock Alley Theatre but, following his move to England in 1758, he gave up acting and wrote a number of books concerning education and, especially, the standardisation of the English language in education. After Harrow his father employed a private tutor, Lewis Ker, who directed his studies in his father's house in London, while Angelo instructed him in fencing and horsemanship.

In 1772 Richard Sheridan fought a famous duel against Captain Thomas Mathews. Mathews had written a newspaper article defaming the character of Elizabeth Linley, the woman Sheridan intended to marry, and honour dictated that a duel must be fought. The first duel was to take place in Hyde Park in London but, finding it too crowded, the parties went first to the Hercules Pillars tavern (on the site where Apsley House now stands at Hyde Park Corner) and then on to the Castle Tavern in Henrietta Street, Covent Garden. Far from its romantic image, the duel was short and bloodless. Mathews lost his sword and, according to Sheridan, was forced to "beg for his life" and sign a retraction of the article. The apology was made public and Mathews, infuriated by the publicity the duel had received, refused to accept his defeat as final and challenged Sheridan to another duel. Sheridan was not obliged to accept this challenge, but would have become a social pariah if he had not. The second duel, fought in August 1772 at Kingsdown near Bath, was a much more ferocious affair. This time both men broke their swords but carried on fighting in a "desperate struggle for life and honour". Both were wounded, Sheridan dangerously, being "borne from the field with a portion of his antagonist's weapon sticking through an ear, his breast-bone touched, his whole body covered with wounds and blood, and his face nearly beaten to jelly with the hilt of Mathews' sword". Fortunately his remarkable constitution pulled him through, and eight days after this bloody affair the Bath Chronicle was able to announce that he was out of danger. Mathews escaped in a post chaise.


    In 1772, Richard Sheridan, at the age of 21, eloped with and subsequently married Elizabeth Ann Linley and set up house in London on a lavish scale with little money and no immediate prospects of any—other than his wife's dowry. The young couple entered the fashionable world and apparently held up their end in entertaining. When Sheridan settled in London, he began writing for the stage. Less than two years later, in 1775, his first play, The Rivals, was produced at London's Covent Garden Theatre. It was a failure on its first night. Sheridan cast a more capable actor for the role of the comic Irishman for its second performance, and it was a smash which immediately established the young playwright's reputation and the favour of fashionable London. It has gone on to become a standard of English literature.

    Shortly after the success of The Rivals, Sheridan and his father-in-law Thomas Linley the Elder, a successful composer, produced the opera, The Duenna. This piece was accorded such a warm reception that it played for seventy-five performances.

    In 1776, Sheridan, his father-in-law, and one other partner, bought a half interest in the Drury Lane theatre and, two years later, bought out the other half. Sheridan was the manager of the theatre for many years, and later became sole owner with no managerial role.

    His most famous play The School for Scandal (Drury Lane, 8 May 1777) is considered one of the greatest comedies of manners in English. It was followed by The Critic (1779), an updating of the satirical Restoration play The Rehearsal, which received a memorable revival (performed with Oedipus Rex in a single evening) starring Laurence Olivier as Mr Puff, opening at the New Theatre on 18 October 1945 as part of an Old Vic Theatre Company season.

Having quickly made his name and fortune, Sheridan bought David Garrick's share in the Drury Lane patent in 1776, and in 1778 the remaining share. His later plays were all produced there. In 1778 Sheridan wrote The Camp, which commented on the ongoing threat of a French invasion of Britain. The same year Sheridan's brother-in-law Thomas Linley, a young composer who worked with him at Drury Lane Theatre, died in a boating accident. Sheridan had a rivalry with his fellow playwright Richard Cumberland and included a parody of Cumberland in his play The Critic. On 24 February 1809 (despite the much-vaunted fire safety precautions of 1794) the theatre burned down. On being encountered drinking a glass of wine in the street while watching the fire, Sheridan was famously reported to have said, "A man may surely be allowed to take a glass of wine by his own fireside."

    Member of Parliament

    In 1780, Sheridan entered Parliament as the ally of Charles James Fox on the side of the American Colonials in the political debate of that year. He is said to have paid the burgesses of Stafford five guineas apiece for the honour of representing them. As a consequence, his first speech in Parliament had to be a defence against the charge of bribery.

    In 1787 Sheridan demanded the impeachment of Warren Hastings, the first Governor-General of India. His speech in the House of Commons was described by Edmund Burke, Charles James Fox and William Pitt as the greatest ever delivered in ancient or modern times.

    In 1793 during the debates on the Aliens Act designed to prevent French Revolutionary spies and saboteurs from flooding into the country, Edmund Burke made a speech in which he claimed there were thousands of French agents in Britain ready to use weapons against the authorities. To dramatically emphasise his point he threw down a knife onto the floor of the House of Commons. Sheridan is said to have shouted out "Where's the fork?", which led to much of the house collapsing in laughter.

    During the invasion scare of 1803 Sheridan penned an Address to the People:
    THEY, by a strange Frenzy driven, fight for Power, for Plunder, and extended Rule—WE, for our Country, our Altars, and our Homes.—THEY follow an ADVENTURER, whom they fear—and obey a Power which they hate—WE serve a Monarch whom we love—a God whom we adore...They call on us to barter all of Good we have inherited and proved, for the desperate Chance of Something better which they promise.—Be our plain Answer this: The Throne WE honour is the PEOPLE'S CHOICE—the Laws we reverence are our brave Fathers' Legacy—the Faith we follow teaches us to live in bonds of Charity with all Mankind, and die with Hope of Bliss beyond the Grave. Tell your Invaders this; and tell them too, we seek no Change; and, least of all, such Change as they would bring us.

    He held the posts of Receiver-General of the Duchy of Cornwall (1804–1807) and Treasurer of the Navy (1806–1807).

    When he failed to be re-elected to Parliament in 1812, after 32 years, his creditors closed in on him and his last years were harassed by debt and disappointment. On hearing of his debts, the American Congress offered Sheridan £20,000 in recognition of his efforts to prevent the American War of Independence. The offer was refused.

    In December 1815 he became ill, largely confined to bed. Sheridan died in poverty, and was buried in the Poets’ Corner of Westminster Abbey; his funeral was attended by dukes, earls, lords, viscounts, the Lord Mayor of London, and other notables.

In 1825 the Irish writer Thomas Moore published a sympathetic two-volume biography, Memoirs of the Life of Richard Brinsley Sheridan, which became a major influence on subsequent perceptions of him. A Royal Society of Arts blue plaque was unveiled in 1881 to commemorate Sheridan at 14 Savile Row in Mayfair. Another plaque is in Stafford.

    = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

    The Critic [a play by Sheridan]

    The Critic: or, a Tragedy Rehearsed is a satire by Richard Brinsley Sheridan. It was first staged at Drury Lane Theatre in 1779. It is a burlesque on stage acting and play production conventions, and Sheridan considered the first act to be his finest piece of writing.

    One of its major roles, Sir Fretful Plagiary, is a comment on the vanity of authors, and in particular a caricature of the dramatist Richard Cumberland who was a contemporary of Sheridan.
Based on George Villiers' The Rehearsal, it concerns the misadventures that arise when an author, Mr Puff (himself also a critic), invites Sir Fretful Plagiary and the theatre critics Dangle and Sneer to a rehearsal of his play The Spanish Armada, Sheridan's parody of the then-fashionable tragic drama.
In 1911, Herbert Beerbohm Tree mounted a star-studded production of The Critic at His Majesty's Theatre starring George Alexander, Cecil Armstrong, George Barrett, Arthur Bourchier, C. Hayden Coffin, Kenneth Douglas, Lily Elsie, Winifred Emery, George Graves, George Grossmith, Jr., Edmund Gurney, John Harwood, Charley Hartrey, Helen Haye, Laurence Irving, Cyril Maude, Gerald Du Maurier, Gertie Millar, Edmund Payne, Courtice Pounds, Marie Tempest, Violet Vanbrugh and Arthur Williams. In 1946, Laurence Olivier played the role of Mr. Puff in a famous production of the play at the Old Vic, alternating with Sophocles' Oedipus Rex. In 1982, Hywel Bennett starred in a BBC television production which was also broadcast in the U.S. on the A&E channel.

    The play was adapted as an opera in two acts by Sir Charles Villiers Stanford; it received its premiere in London in 1916.

    Thursday, January 16, 2014

    Positive Quiddity: Forensic Engineering

Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing the processes and procedures leading to accidents in the operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally, the purpose of a forensic engineering investigation is to locate the cause or causes of failure with a view to improving the performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve the investigation of intellectual property claims, especially patents.

As the field of engineering has evolved over time, so has the field of forensic engineering. Early examples include the investigation of bridge failures such as the Tay rail bridge disaster of 1879 and the Dee bridge disaster of 1847. Many early rail accidents prompted the development of tensile testing of samples and fractography of failed components.

With the prevalence of liability lawsuits in the late 20th century, the use of forensic engineering as a means to determine culpability spread in the courts. Edmond Locard (1877–1966) was a pioneer in forensic science who formulated its basic principle: "Every contact leaves a trace". This became known as Locard's "exchange principle".

Vital to the field of forensic engineering is the process of investigating and collecting data related to the materials, products, structures or components that failed. This involves inspections, collecting evidence, taking measurements, developing models, obtaining exemplar products, and performing experiments. Often, testing and measurements are conducted in an independent testing laboratory or other reputable, unbiased laboratory.

Failure mode and effects analysis (FMEA) and fault tree analysis methods also examine product or process failure in a structured and systematic way, in the general context of safety engineering. However, all such techniques rely on accurate reporting of failure rates and precise identification of the failure modes involved.
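The arithmetic behind fault tree analysis can be illustrated with a small sketch. This is not from the article itself; it is a minimal example assuming independent basic events with hypothetical failure probabilities, combined through the standard OR and AND gate formulas:

```python
# Minimal fault tree probability sketch, assuming independent basic
# events. All probabilities below are hypothetical illustration values.

def p_or(*probs):
    """OR gate: the event occurs if any independent input occurs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def p_and(*probs):
    """AND gate: the event occurs only if all independent inputs occur."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical top event: a joint fails if the material is defective,
# OR if overpressure occurs AND the relief valve also fails.
p_defect = 1e-4
p_overpressure = 1e-2
p_valve_fails = 1e-3

p_top = p_or(p_defect, p_and(p_overpressure, p_valve_fails))
print(f"Top-event probability: {p_top:.3e}")
```

As the surrounding text cautions, the output is only as good as the failure rates fed into the gates: misreported rates or a missed failure mode invalidate the whole tree.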

There is some common ground between forensic science and forensic engineering, such as scene of crime and scene of accident analysis, integrity of the evidence, and court appearances. Both disciplines make extensive use of optical and scanning electron microscopes, for example. They also share the common use of spectroscopy (infrared, ultraviolet, and nuclear magnetic resonance) to examine critical evidence.

Radiography using X-rays (such as X-ray computed tomography) or neutrons is also very useful in examining thick products for their internal defects before destructive examination is attempted. Often, however, a simple hand lens may reveal the cause of a particular problem.

    Trace evidence is sometimes an important factor in reconstructing the sequence of events in an accident.
For example, tire burn marks on a road surface can enable investigators to estimate vehicle speeds, when the brakes were applied, and so on. Ladder feet often leave a trace of the movement of the ladder during a slipaway, and may show how the accident occurred. When a product fails for no obvious reason, SEM and energy-dispersive X-ray spectroscopy (EDX) performed in the microscope can reveal the presence of aggressive chemicals that have left traces on the fracture or adjacent surfaces. For example, an acetal resin water pipe joint that suddenly failed and caused substantial damage to a building was analysed in this way: the joint showed traces of chlorine, indicating a stress corrosion cracking failure mode. In another case, a failed fuel pipe junction showed traces of sulfur on the fracture surface from sulfuric acid, which had initiated the crack.
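The skid-mark speed estimate mentioned above usually rests on the textbook skid-to-stop relation v = sqrt(2·mu·g·d), where d is the skid length and mu the road-tire friction coefficient. The sketch below illustrates that formula only; the friction value is a hypothetical placeholder, since real investigations measure it at the scene:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_from_skid(skid_length_m: float, mu: float) -> float:
    """Minimum initial speed (m/s) implied by a skid-to-stop of the
    given length, assuming constant deceleration of mu * g."""
    return math.sqrt(2.0 * mu * G * skid_length_m)

# Hypothetical example: 30 m of skid marks with an assumed mu of 0.7
v = speed_from_skid(30.0, 0.7)
print(f"Estimated minimum speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

The result is a lower bound: it assumes the vehicle skidded to a complete stop, so any residual speed at impact would make the true initial speed higher.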

    Extracting physical evidence from digital photography is a major technique used in forensic accident reconstruction. Camera matching, photogrammetry, and photo rectification techniques are used to create three dimensional and top-down views from the two-dimensional photos typically taken at an accident scene. Overlooked or undocumented evidence for accident reconstruction can be retrieved and quantified as long as photographs of such evidence are available. By using photographs of the accident scene including the vehicle, "lost" evidence can be recovered and accurately determined.

    Forensic materials engineering involves methods applied to specific materials, such as metals, glasses, ceramics, composites and polymers.

    Historic Examples
    There are many examples of forensic methods used to investigate accidents and disasters, one of the earliest in the modern period being the fall of the Dee bridge at Chester, England. It was built using cast iron girders, each of which was made of three very large castings dovetailed together. Each girder was strengthened by wrought iron bars along the length. It was finished in September 1846, and opened for local traffic after approval by the first Railway Inspector, General Charles Pasley. However, on 24 May 1847, a local train to Ruabon fell through the bridge. The accident resulted in five deaths (three passengers, the train guard, and the locomotive fireman) and nine serious injuries. The bridge had been designed by Robert Stephenson, and he was accused of negligence by a local inquest.

    Although strong in compression, cast iron was known to be brittle in tension or bending. On the day of the accident, the bridge deck was covered with track ballast to prevent the oak beams supporting the track from catching fire, imposing a heavy extra load on the girders supporting the bridge and probably exacerbating the accident. Stephenson took this precaution because of a recent fire on the Great Western Railway at Uxbridge, London, where Isambard Kingdom Brunel's bridge caught fire and collapsed.
One of the first major inquiries conducted by the newly formed Railway Inspectorate was led by Captain Simmons of the Royal Engineers, and his report suggested that repeated flexing of the girder had weakened it substantially. He examined the broken parts of the main girder and confirmed that it had broken in two places, the first break occurring at the centre. He tested the remaining girders by driving a locomotive across them, and found that they deflected by several inches under the moving load. He concluded that the design was flawed and that the wrought iron trusses fixed to the girders did not reinforce them at all, a conclusion also reached by the jury at the inquest. Stephenson's design had depended on the wrought iron trusses to strengthen the final structure, but they were anchored on the cast iron girders themselves, and so deformed with any load on the bridge. Others (especially Stephenson) argued that the train had derailed and hit the girder, the impact force causing it to fracture. However, eyewitnesses maintained that the girder broke first, and the fact that the locomotive remained on the track suggested that no derailment had occurred.