Friday, May 31, 2013

Sharper Nanotechnology Imaging

High-pressure imaging breakthrough
a boon for nanotechnology
By Tona Kunz, Argonne National Laboratory, April 9, 2013

The study of nanoscale material just got much easier, and the design of nanoscale technology could get much more efficient, thanks to an advance in X-ray analysis.

Nanomaterials develop new physical and chemical properties, such as superconductivity and enhanced strength, when exposed to extreme pressure. A better understanding of how and when those changes occur can guide the design of better products that use nanotechnology.

But high-energy X-rays produced by lightsources such as the Advanced Photon Source (APS) at Argonne National Laboratory are the only way to study the in-situ structural changes induced by pressure in nanomaterials, and those studies have lacked precision.

Until now.

As reported in a Carnegie Institution for Science press release, an international team of scientists using the APS detailed in the April 9 issue of the journal Nature Communications how they devised a way to overcome the distortion caused by the sample environments used with the X-rays, improving spatial resolution by two orders of magnitude. The resulting 30-nanometer resolution greatly reduces uncertainties for studies of nanoscale materials. Researchers expect to fine-tune the technique to reach resolutions of a few nanometers in subsequent experiments.

"Before, we had to average the strain on polycrystals (solids composed of many microscopic crystals) or over an entire single crystal under high pressure," said Wenge Yang of the Carnegie Institution of
Washington and the High Pressure Synergetic Consortium at the APS. "But now we can zoom in like a microscope on a single crystal and even inside a single crystal. We will continue to improve this. There are still two orders of magnitudes in length scale we can achieve before reaching the diffraction limit of hard x-rays."

The team of researchers led by Wenge used the APS beamline 34-ID-C to demonstrate a proof of principle of the new imaging technique by looking at a gold nanocrystal, as reported in the paper "Coherent diffraction imaging of nanoscale evolution in a single crystal under high pressure".

The imaging technique uses an algorithm developed at the London Centre for Nanotechnology and X-ray instrumentation pioneered at the APS. The technique opens the door to a variety of new research and can be used at third- and fourth-generation lightsources, which produce highly coherent X-rays.
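The article does not spell out the algorithm, but coherent diffraction imaging of this kind generally relies on iterative phase retrieval: the detector records only diffraction intensities, and the missing phases are recovered by bouncing between the measured Fourier magnitudes and a real-space support constraint. Below is a minimal sketch of the simplest member of that family (error reduction), not the specific London Centre for Nanotechnology code; the array names and the support mask are illustrative assumptions.

import numpy as np

def error_reduction_cdi(measured_magnitudes, support, n_iter=200, seed=0):
    """Minimal error-reduction phase retrieval for coherent diffraction imaging:
    alternate between the measured Fourier magnitudes and a real-space support."""
    rng = np.random.default_rng(seed)
    # Start from random phases attached to the measured magnitudes.
    phases = np.exp(2j * np.pi * rng.random(measured_magnitudes.shape))
    field = measured_magnitudes * phases
    for _ in range(n_iter):
        # Back to real space and enforce the support constraint.
        image = np.fft.ifft2(field) * support
        # Forward transform, keep the phases, restore the measured amplitudes.
        field = np.fft.fft2(image)
        field = measured_magnitudes * np.exp(1j * np.angle(field))
    return np.fft.ifft2(field)

In practice the published reconstructions use more sophisticated variants and Bragg geometry, but the alternating-projection idea is the same.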

Improved spatial resolution imaging for nanomaterial under high pressure has ramifications for advanced engineering, such as the development of nanoscale coatings for semiconductors that can better withstand
high currents and heat loads.

An accurate structural view of nanoscale materials under pressure can aid in redesigning materials at larger scales to exhibit the same high-performance traits.

Improved imaging also can aid studies of minerals under high pressure that mimic real conditions from Earth's crust to its mantle, and even its core.

http://www.anl.gov/articles/high-pressure-imaging-breakthrough-boon-nanotechnology

Thursday, May 30, 2013

Ten U.S. States with Most Major Disasters

By Doug Whiteman | Bankrate.com – Tue, May 28, 2013

Some states find themselves in the crosshairs of disaster far more than others. Presidents have declared nearly 2,000 major disasters in the 50 states and the District of Columbia over the past 60 years as of April 2013, but a mere 10 states have been responsible for a third of that total. See if you live in one of these disaster-prone states -- and if you do, you may want to review your insurance policies.

10. Missouri


The Show-Me State has been shown disastrous weather in every month of the year: severe snow and ice storms in winter, tornadoes during the spring, summer and fall, and flooding at virtually any time.

Major disaster declarations since 1953: 53

9. Arkansas

Arkansas has been walloped by heavy rain, snow, ice, tornadoes and flooding over the years and has even taken poundings from tropical storm systems, though it's not a coastal state. In 2008, storms and tornadoes associated with Hurricane Gustav littered streets with debris, damaged buildings, roads and bridges, and knocked out electric cooperatives.

Major disaster declarations since 1953: 54

8. Kentucky

The disaster roster in Kentucky has included landslides, mudslides and rockslides, along with flooding and tornadoes. The state was ripped up in 2008 by the remnants of Hurricane Ike. Another major disaster declaration involved a record snowfall in late 2004, and yet another stemmed from a 1981 series of chemical explosions in the Louisville sewers.

Major disaster declarations since 1953: 56

7. Alabama

This Gulf Coast state has been battered by hurricanes, including Isaac in 2012, Gustav in 2008, Katrina and Dennis in 2005, and Ivan in 2004. But tornadoes in April 2011 rivaled the hurricanes for destructive power, lashing the state with winds that exceeded 210 mph and leaving about 250 people dead and an estimated $1.5 billion in damage.

Major disaster declarations since 1953: 57

6. Louisiana

It has taken Louisiana years to recover from Hurricane Katrina, the now-legendary 2005 storm that government officials say killed nearly 1,000 residents and caused tens of billions of dollars in damage. The Gulf state has been visited by numerous hurricanes including 1969's Camille, a Category 5 storm that came ashore with 190 mph winds. By comparison, Katrina was "only" a Category 3 on the wind scale.

Major disaster declarations since 1953: 60

5. Florida

The Sunshine State has been pummeled by dozens of tropical storm systems since the 1950s -- none worse than Hurricane Andrew in 1992. The Category 5 hurricane with gusts of more than 200 mph held the title as the most expensive natural disaster in U.S. history until Hurricane Katrina in 2005. Severe freezes have been disastrous for Florida farming on multiple occasions.

Major disaster declarations since 1953: 65

4. New York

Across its empire that stretches from the Great Lakes to the Atlantic coast, New York has been lashed by everything from blizzards to tropical storms. In 2012, Superstorm Sandy killed nearly 50 in the state and caused more than $40 billion in damage. New York also received disaster declarations for the Sept. 11, 2001, terrorist attacks that destroyed the World Trade Center and an earlier bombing in the complex's garage.

Major disaster declarations since 1953: 67

3. Oklahoma


The recent monster tornado that blasted through the Oklahoma City suburbs is only the latest devastating storm to hit a state that has recorded an average of 55 twisters per year since 1950. The worst tornado in recent history struck near Oklahoma City in May 1999 with unprecedented winds in excess of 300 mph that killed 36 people. Oklahoma also has endured severe winter storms, wildfires, floods and the 1995 terrorist bombing that killed 168 people at the Oklahoma City federal building.

Major disaster declarations since 1953: 73

2. California

The nation's most populous state also is one of the most disaster-prone thanks to wildfires, landslides, flooding, winter storms, severe freeze and even tsunami waves. But earthquakes are the disaster perhaps most closely associated with California. The worst in recent years have included a magnitude-6.9 quake near San Francisco in 1989 that killed 63 and a magnitude-6.7 quake in Southern California in 1994 that killed 61.

Major disaster declarations since 1953: 78

1. Texas

Within Texas' nearly 267,000 square miles (second only to Alaska in size), at least one major disaster is declared nearly every calendar year. The Lone Star State has dealt with tornadoes, floods, wildfires and fairly frequent coastal hurricanes. One of the deadliest and costliest in recent decades was Hurricane Celia, which tore up Corpus Christi in 1970. The storm left 13 dead and destroyed millions of dollars' worth of property.

Major disaster declarations since 1953: 86

Source: Federal Emergency Management Agency
  http://finance.yahoo.com/news/10-states-most-at-risk-for-major-disasters-173801433.html

Wednesday, May 29, 2013

Greatest World Universities Fit a Pattern

Top 10 Countries with
Greatest Scientific Impact

The H-Index
RealClearScience, May 29, 2013

Conventional wisdom holds that America is in decline, particularly in the arena of science and technology, and that countries like China are taking the lead. But as is often the case, the conventional wisdom is wrong.

According to R&D Magazine, in 2013, the U.S. will invest $424 B in research and development (R&D), while China will invest roughly half that amount ($220 B). Amazingly, the U.S. will invest more in R&D than all of Europe combined ($350 B).

Two years ago, the BBC reported that China may overtake the U.S. in scientific output by 2013, as measured by the total number of scientific journal articles published. That may or may not happen. Still, as correctly reported in Nature News, quantity does not trump quality. The article went on to say, rather harshly, that China's "scientific journals are filled with incremental work, read by virtually no one and riddled with plagiarism."

Therefore, a better -- though imperfect -- measure of scientific output is the h-index, a numerical value that reflects not only the quantity of articles published, but also the number of times they were cited by other researchers… By this measure, China is not even in the global top 10. Instead, it comes in at #17. The other BRICS nations do not even crack the top 20. (Russia #21, Brazil #22, India #23 and South Africa #35.)
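For readers who want the mechanics: the h-index of a publication list is the largest number h such that at least h of the papers have been cited at least h times each. A minimal sketch, with made-up citation counts purely for illustration:

def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)   # highest citation counts first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4
# (four papers with at least 4 citations each).
print(h_index([10, 8, 5, 4, 3]))  # -> 4

The same idea scales up from a single researcher to a whole country's publication record, which is how the SCImago country ranking is built.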

So, which countries are in the top 10 for greatest scientific impact? Continue with the list to find out.

Source: SCImago Journal & Country Rank

#20. Norway
#19. South Korea
#18. Finland
#17. China
#16. Austria
#15. Israel
#14. Denmark
#13. Belgium
#12. Spain
#11. Australia
#10. Sweden
#9. Switzerland
#8. Netherlands
#7. Italy
#6. Japan
#5. Canada
#4. France
#3. Germany
#2. United Kingdom (with 18 of the world’s top 100 universities)
#1. United States (with 31 of the world’s top 100 universities)

http://www.realclearscience.com/lists/top_10_countries_with_greatest_scientific_impact/

= = = = = = = = = = = = = = = = = = = = = =

and those 100 top universities are:

# 1 Massachusetts Institute of Technology [MIT] (USA)
# 2 University of Cambridge (UK)
# 3 Harvard University (USA)
# 4 University College London (UK)
# 5 University of Oxford (UK)
# 6 Imperial College London (UK)
# 7 Yale University (USA)
# 8 University of Chicago (USA)
# 9 Princeton (USA)
#10 California Institute of Technology (Caltech) (USA)
#11 Columbia University (USA)
#12 University of Pennsylvania (USA)
#13 ETH Zurich [Swiss Federal Institute of Technology] (Switzerland)
#14 Cornell University (USA)
#15 Stanford University (USA)
#16 Johns Hopkins University (USA)
#17 University of Michigan (USA)
#18 McGill University (Canada)
#19 University of Toronto (Canada)
#20 Duke University (USA)
#21 University of Edinburgh (UK)
#22 University of California, Berkeley (USA)
#23 University of Hong Kong (China)
#24 Australian National University (Australia)
#25 National University of Singapore (Singapore)
#26 King’s College London (UK)
#27 Northwestern University (USA)
#28 University of Bristol (UK)
#29 Ecole Polytechnique Federale de Lausanne (Switzerland)
#30 The University of Tokyo (Japan)
#31 University of California, Los Angeles (USA)
#32 University of Manchester (UK)
#33 Hong Kong University of Science and Technology (China)
#34 Ecole Normale Superieure de Paris (France)
#35 Kyoto University (Japan)
#36 University of Melbourne (Australia)
#37 Seoul National University (South Korea)
#38 University of Wisconsin-Madison (USA)
#39 University of Sydney (Australia)
#40 Chinese University of Hong Kong (China)
#41 Ecole Polytechnique (France)
#42 Brown University (USA)
#43 New York University (USA)
#44 Peking University (China)
#45 University of British Columbia (Canada)
#46 University of Queensland (Australia)
#47 Nanyang Technological University (Singapore)
#48 Tsinghua University (China)
#49 Carnegie Mellon University (USA)
#50 Osaka University (Japan)
#51 University of Copenhagen (Denmark)
#52 University of New South Wales (Australia)
#53 Technische Universitat Munchen (Germany)
#54 University of Glasgow (UK)
#55 Ruprecht-Karls-Universitat Heidelberg (Germany)
#56 University of Illinois at Urbana-Champaign (USA)
#57 University of North Carolina, Chapel Hill (USA)
#58 University of Warwick (UK)
#59 University of Washington (USA)
#60 Ludwig-Maximilians-Universitat Munchen (Germany)
#61 Monash University (Australia)
#62 University of Amsterdam (Netherlands)
#63 KAIST – Korea Advanced Institute of Science & Technology (South Korea)
#64 Boston University (USA)
#65 Tokyo Institute of Technology (Japan)
#66 University of Sheffield (UK)
#67 Trinity College Dublin (Ireland)
#68 University of Texas at Austin (USA)
#69 London School of Economics and Political Science (UK)
#70 University of California, San Diego (USA)
#71 Lund University (Sweden)
#72 University of Nottingham (UK)
#73 University of Southampton (UK)
#74 University of Geneva (Switzerland)
#75 Leiden University (Netherlands)
#75 Tohoku University (Japan)
#77 University of Birmingham (UK)
#78 University of Helsinki (Finland)
#79 University of Western Australia (Australia)
#80 National Taiwan University (Taiwan)
#81 Uppsala University (Sweden)
#82 KU Leuven (Belgium)
#83 University of Auckland (New Zealand)
#84 Washington University in St. Louis (USA)
#85 Utrecht University (Netherlands)
#86 Nagoya University (Japan)
#87 Freie Universitat Berlin (Germany)
#88 Georgia Institute of Technology (USA)
#89 Aarhus University (Denmark)
#90 Fudan University (China)
#90 University of Zurich (Switzerland)
#92 Durham University (UK)
#93 University of St. Andrews (UK)
#94 University of Leeds (UK)
#95 City University of Hong Kong (China)
#95 Purdue University (USA)
#97 Pohang University of Science and Technology (South Korea)
#98 University of Pittsburgh (USA)
#99 Erasmus University Rotterdam (Netherlands)
#100 University of California, Davis (USA)

http://www.usnews.com/education/worlds-best-universities-rankings/top-400-universities-in-the-world

= = = = = = = = = = = = = = = = = = = = = = = = = =

Postscript by the Blog Author

A flat-out majority of the top 100 universities are found in the "Anglosphere" of the UK, Ireland, the USA, Canada, Australia and New Zealand, all of which are represented in the above list. I am not counting the Hong Kong schools founded while Hong Kong was a British Crown Colony.

Another way of assembling a flat-out majority of the top 100 universities is to take the "G7" nations of the USA, Canada, Japan, the UK, France, Italy and Germany.

Tuesday, May 28, 2013

Glassy Metal Made From Cement

The Formula for Turning Cement into Metal
By Tona Kunz, Argonne National Laboratory, May 27, 2013

LEMONT, Ill. – In a move that would make the alchemists of King Arthur’s time green with envy, scientists have unraveled the formula for turning liquid cement into liquid metal. This makes cement a semiconductor and opens up its use in the profitable consumer electronics marketplace for thin films, protective coatings, and computer chips.

"This new material has lots of applications, including as thin-film resistors used in liquid-crystal displays, basically the flat panel computer monitor that you are probably reading this from at the moment," said Chris Benmore, a physicist from the U.S. Department of Energy’s (DOE) Argonne National Laboratory who worked with a team of scientists from Japan, Finland and Germany to take the "magic" out of the cement-to-metal transformation. Benmore and Shinji Kohara from Japan Synchrotron Radiation Research Institute/SPring-8 led the research effort.

This change demonstrates a unique way to make metallic-glass material, which has positive attributes including better resistance to corrosion than traditional metal, less brittleness than traditional glass, conductivity, low energy loss in magnetic fields, and fluidity for ease of processing and molding. Previously, only metals had been able to transition to a metallic-glass form. Cement does this by a process called electron trapping, a phenomenon previously seen only in ammonia solutions. Understanding how cement joined this exclusive club opens the possibility of turning other normally insulating solid materials into room-temperature semiconductors.

"This phenomenon of trapping electrons and turning liquid cement into liquid metal was found recently, but not explained in detail until now," Benmore said. "Now that we know the conditions needed to create trapped electrons in materials we can develop and test other materials to find out if we can make them conduct electricity in this way."

The results were reported May 27 in the journal Proceedings of the National Academy of Sciences in the article "Network topology for the formation of solvated electrons in binary CaO–Al2O3 composition glasses".

The team of scientists studied mayenite, a component of alumina cement made of calcium and aluminum oxides. They melted it at temperatures of 2,000 degrees Celsius using an aerodynamic levitator with carbon dioxide laser beam heating. The material was processed in different atmospheres to control the way that oxygen bonds in the resulting glass. The levitator keeps the hot liquid from touching any container surfaces and forming crystals. This let the liquid cool into a glassy state that can trap electrons in the way needed for electronic conduction. The levitation method was developed specifically for in-situ measurement at Argonne’s Advanced Photon Source by a team led by Benmore.

The scientists discovered that the conductivity was created when the free electrons were "trapped" in the cage-like structures that form in the glass. The trapped electrons provided a mechanism for conductivity similar to the mechanism that occurs in metals.

http://www.anl.gov/articles/formula-turning-cement-metal

Monday, May 27, 2013

An Explanation for Infantile Amnesia

A study presented Friday at the Canadian Association for Neuroscience meeting offers an explanation for why children cannot remember events from before age three, a characteristic called "infantile amnesia."

Paul Frankland, a co-author of the study, suggests that the growth of cells in the hippocampus area of the brain offers the explanation. The rapid growth of this area at age two changes and improves the way new experiences are filed as long-term memories.

To test this theory, researchers slowed the rate of cell formation in the hippocampi of a group of mice. Those mice retained a steady ability to remember how to get through a maze. This was not the case for mice experiencing explosive growth of their hippocampal cells.

Further research is expected. An article by Linda Carroll of NBC News on this research is available at:
 
http://bodyodd.nbcnews.com/_news/2013/05/24/18453169-brain-overload-explains-missing-childhood-memories?lite

Can Humans Think Without Language?

By Arika Okrent, mentalfloss.com, May 23, 2013

Language is so deeply embedded in almost every aspect of the way we interact with the world that it's hard to imagine what it would be like not to have it. What if we didn't have names for things? What if we didn't have experience making statements, asking questions, or talking about things that hadn't actually happened?
Would we be able to think? What would our thoughts be like?

The answer to the question of whether thought is possible without language depends on what you mean by thought. Can you experience sensations, impressions, feelings without language? Yes, and very few would argue otherwise. But there is a difference between being able to experience, say, pain or light, and possessing the concepts "pain" and "light." Most would say true thought entails having the concepts.

Many artists and scientists, in describing their own inner processes while they work, say they do not use words to solve problems, but images. The autistic author Temple Grandin, in explaining how she thinks visually rather than linguistically, says that concepts for her are collections of images. Her concept of "dog," for example, "is inextricably linked to every dog I've ever known. It's as if I have a card catalog of dogs I have seen, complete with pictures, which continually grows as I add more examples to my video library." Of course, Grandin has language, and knows how to use it, so it is hard to say how much of her thinking has been influenced by it, but it is not unimaginable—and probably likely—that there are people who lack the ability to use language and think in the way she describes.

There is also evidence that deaf people cut off from language, spoken or signed, think in sophisticated ways before they have been exposed to language. When they later learn language, they can describe the experience of having had thoughts like those of the 15-year-old boy who wrote in 1836, after being educated at a school for the deaf, that he remembered thinking in his pre-language days "that perhaps the moon would strike me, and I thought that perhaps my parents were strong, and would fight the moon, and it would fail, and I mocked the moon." Also, the spontaneous sign languages developed by deaf students without language models, in places like Nicaragua, display the kind of thinking that goes far beyond mere sensory impression or practical problem solving.

However, while it appears that we can indeed think without language, it is also the case that there are certain kinds of thinking that are made possible by language. Language gives us symbols we can use to fix ideas, reflect on them and hold them up for observation. It allows for a level of abstract reasoning we wouldn't have otherwise. The philosopher Peter Carruthers has argued that there is a type of inner, explicitly linguistic thinking that allows us to bring our own thoughts into conscious awareness. We may be able to think without language, but language lets us know that we are thinking.
 
http://mentalfloss.com/article/50684/it-possible-think-without-language#ixzz2UTOelZlK

Saturday, May 25, 2013

American Boys Lack Discipline

"Boys will be boys" in U.S., but not in Asia

Oregon State University, May 22, 2013

CORVALLIS, Ore. – A new study shows there is a gender gap when it comes to behavior and self-control in American young children – one that does not appear to exist in children in Asia.

In the United States, girls had higher levels of self-regulation than boys. Self-regulation is defined as children’s ability to control their behavior and impulses, follow directions, and persist on a task. It has been linked to academic performance and college completion, in past studies by Oregon State University researchers.

In three Asian countries, the gender gap seen in the United States was not found when researchers directly assessed the self-regulation of 3- to 6-year-olds. The results appear in the new issue of the journal Early Childhood Research Quarterly.

"These findings suggest that although we often expect girls to be more self-regulated than boys, this may not be the case for Asian children," said Shannon Wanless, lead author of the study.

Wanless began conducting the research during her doctoral studies at Oregon State University under Megan McClelland, an associate professor in OSU’s Hallie E. Ford Center for Healthy Children and Families. Wanless is now on the faculty at the University of Pittsburgh.

One interesting part of the researchers’ findings: Although there were no gender differences in self-regulation when the children were directly assessed using a variety of school-readiness tasks, teachers in Asia perceived girls as performing better on self-regulation even when they actually performed equally to boys.

"Teachers are rating children's behavior in the classroom environment, which has a lot of distractions and is very stimulating," Wanless said. "It is possible that boys in the Asian countries were able to self-regulate as well as girls when they were in a quiet space (the direct assessment), but were not able to regulate themselves as well in a bustling classroom environment (teacher ratings)."

In addition, McClelland said cultural expectations of girls’ behavior versus that of their male peers may be influencing teachers’ assessments.

"In general, there is more tolerance for active play in boys than in girls," McClelland said. "Girls are expected to be quiet and not make a fuss. This expectation may be coloring some teachers’ perceptions."

The researchers conducted assessments with 814 children in the United States, Taiwan, South Korea and China. Their study showed that U.S. girls had significantly higher self-regulation than boys, but there were no significant gender differences in any of the Asian societies. In addition, for both girls and boys, directly assessed and teacher-rated self-regulation were related to many aspects of school readiness in all societies.

"We know from previous research that many Asian children outperform American children in academic achievement," McClelland said. "Increasingly, we are seeing that there is also a gap when it comes to their ability to control their behavior and persist with tasks."

Wanless said this study paves the way for future research to explore why there is such a large gender gap in the United States, and what can be learned from Asian schools.

"What can we learn from Asian cultural and teaching practices about how we can support girls and boys to be successful in school?" she said. "When we see differences in developmental patterns across countries it suggests that we might want to look at teaching and parenting practices in those countries and think about how they might apply in the United States."

Both researchers emphasized the importance of working with young children, regardless of gender or culture, on their self-regulation skills. Practicing games such as Simon Says and Red Light, Green Light are a few ways that parents can work with their children to help them learn how to follow instructions, persist on a task, and listen carefully.

"In our study, self-regulation was good for academic achievement for boys and girls," Wanless said. "That means this skill is important for both genders and we should be supporting self-regulatory development for all children, especially boys. Low self-regulation in preschool has been linked to difficulties in adulthood, so increased focused on supporting young boys' development can have long-term positive bdnefits."

http://oregonstate.edu/ua/ncs/archives/2013/may/%E2%80%9Cboys-will-be-boys%E2%80%9D-us-not-asia

Friday, May 24, 2013

Some Definitions of Jazz

Jazz is a music that originated at the beginning of the 20th century, arguably earlier [see blog author comment below], within the African-American communities of the Southern United States. Its roots lie in the adoption by African-Americans of European harmony and form, taking on those European elements and combining them into their existing African-based music. Its African musical basis is evident in its use of blue notes, improvisation, polyrhythms, syncopation and the swing note. From its early development until the present day, jazz has also incorporated elements from popular music especially, in its early days, from American popular music.

Definitions
Jazz spans a range of music from ragtime to the present day—a period of over 100 years—and has proved to be very difficult to define. Attempts have been made to define jazz from the perspective of other musical traditions—using the point of view of European music history or African music for example—but critic
Joachim Berendt argues that its terms of reference and its definition should be broader. Berendt defines jazz as a "form of art music which originated in the United States through the confrontation of blacks with European music" and argues that it differs from European music in that jazz has a "special relationship to time defined as 'swing'", involves "a spontaneity and vitality of musical production in which improvisation plays a role" and contains a "sonority and manner of phrasing which mirror the individuality of the performing jazz musician".

A broader definition that encompasses all of the radically different eras of jazz has been proposed by Travis Jackson: he states that it is music that includes qualities such as swing, improvising, group interaction, developing an 'individual voice', and being open to different musical possibilities. An overview of the discussion on definitions is provided by Krin Gabbard, who argues that "jazz is a construct" that, while artificial, still is useful to designate "a number of musics with enough in common to be understood as part of a coherent tradition". In contrast to the efforts of commentators and enthusiasts of certain types of jazz, who have argued for narrower definitions that exclude other types, the musicians themselves are often reluctant to define the music they play. Duke Ellington, one of jazz's most famous figures, summed up this perspective by saying, "It's all music".
  http://en.wikipedia.org/wiki/Jazz

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Note by the Blog Author
I was raised on live boogie-woogie music and very early rock and roll. It was said in those days (the 1950s), that jazz began as the prototype of ragtime, as walking-band funeral music. At a Black funeral in the deep South from about 1870 onward, a band would play a march as the mourners walked from the church to the gravesite. After the services at the grave were finished, the same band would change tempo and play something lively and positive as the mourners danced back to town. This is where the improvisation and handoffs to the various instruments come from. The fetching and eternal message of jazz is that "life goes on" even after the wrenching mourning of a beloved and departed person. The formal, musicological, reductionist definitions of jazz came much later –- and none of them completely fit that lively dance back to town and away from the gravesite.

Thursday, May 23, 2013

Progressive Taxation Is Cannibalism

The tenth commandment says "Thou shalt not covet anything that is thy neighbor’s." Yet in 1913 under T. Woodrow Wilson an income tax was passed. This occurred because modern liberalism demands, as its first principle, income redistribution. Government by its nature is assumed to better distribute wealth than the marketplace. This initial tax, at a 2% flat rate, meant that a wealthy man earning $100,000 a year would have to pay $2,000 each year to remain a citizen in good standing and able to vote. A pauper need pay nothing.

Why should a wealthy man pay more to live in a society where there is equality under the law? A) because he and his success are resented, and a resentful mob co-opts the government to punish that success and assuage their resentment. B) because of hard-left philosophy, for example, "The courts serve a social purpose" – Lenin.

Wilson and the democrats utterly failed to foresee the long-term consequences of this social engineering.

What is that wealthy man going to do, now that he is forced to pay an exorbitant price to remain a participant in society? He is going to fight back – and that is what happened. He was galvanized to fight as the tax rate became "progressive," meaning the higher his income, the higher the rate. The progressive tax policy arrived with the Depression. For a while, this progressive rate was a masterpiece of mobocracy, but it galvanized those who were peculiarly capable of fighting back.

The first counter-attack consisted of increasing deductions to adjusted gross income. More and more sweeteners were added to these deductions; and every deduction carried a 0% tax rate for the amount categorically allowed.

The flurry of deductions required higher tax rates. The higher tax rates motivated lobbying for higher deductions.

As this snowball picked up speed, the progressives "won" another ideological battle – the fight against common law. Why should judges be able to determine the verdict and decision in a case based on the meditations of an imaginary "reasonable man" and his theoretical actions? Why should a case be decided on centuries of legally similar cases instead of on the written law drafted and passed by a legislature? The ideological prayers of the left were answered in 1938 in the Erie Railroad versus Tompkins case when it came before the Supreme Court.

See http://en.wikipedia.org/wiki/Federal_common_law

The idea of eliminating common law was foolish and ultimately impossible (since case law and precedent are inextricably involved in the concept of common law) – and, logically, unacceptable, because the notion that everything that exists can be written down on paper is itself irrational. There are things that exist that cannot be fully described.

In 1943, a buffer occurred which modified the effect of the 1938 Erie Railroad case. But the long-term effect on tax law had already occurred. Tax legislation became enormously picky and overly precise, as well as difficult to apply. Precedent was being set whenever a federal court, especially a tax court, was in session. This is still true today.

Those with enough money to fight the tax laws modestly and humbly hired expert lobbyists and tax attorneys to do the fighting for them. They enlisted clever certified public accountants to prepare their returns. Their deductions grew and the tax burden lessened.

Eventually a moderate Democrat was elected president who was himself the son of an astoundingly wealthy man. This president listened to the lobbyists and lowered the maximum tax rate from the 90% it had become – which meant $90,000 in taxes for someone earning $100,000 and failing to take advantage of itemized deductions. The tax rate reduction passed, and as the tax rate went down, tax receipts into the Treasury actually went up!

And that has been the front line of the fight ever since John F. Kennedy was president. The complexity and trickiness of the tax code, overall, has continued to increase, however, as progressives have continued to attempt to use the tax code to mollify resentment and alter social behavior.

In 1969, the complicated tax code had become so cumbersome and full of loopholes that it was decided to deny deductible items for certain high-income taxpayers and impose a flat-rate minimum tax. This was called the "Alternative Tax." In 1982, this was brought up to date with what is known generally as the "Alternative Minimum Tax" or "AMT." Note how this technique of throwing out the progressive system and going back to a flat tax becomes a necessity once the snowball of complex taxation law without common law becomes unworkable.
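As a toy illustration of that mechanic – with hypothetical brackets, rates and deductions, not the actual IRS schedule or AMT rules – the minimum-tax idea is simply "compute the regular progressive tax after deductions, compute a flat-rate tax with most deductions denied, and pay whichever is larger":

def regular_tax(income, deductions, brackets):
    """Progressive tax on income after deductions.
    brackets: list of (upper_limit, rate) pairs, ending with (None, top_rate)."""
    taxable = max(income - deductions, 0)
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        top = taxable if upper is None else min(taxable, upper)
        if top > lower:
            tax += (top - lower) * rate
        lower = top if upper is None else upper
        if upper is not None and taxable <= upper:
            break
    return tax

def minimum_tax(income, flat_rate):
    """Flat-rate alternative computed with deductions denied."""
    return income * flat_rate

# Purely illustrative numbers.
income, deductions = 100_000, 40_000
brackets = [(30_000, 0.15), (70_000, 0.30), (None, 0.50)]
owed = max(regular_tax(income, deductions, brackets),   # 13,500 with deductions
           minimum_tax(income, flat_rate=0.20))         # 20,000 without them
print(owed)  # the minimum tax wins here -> 20000.0

In this made-up example the deductions shelter so much income that the flat minimum tax overrides the progressive calculation, which is exactly the situation the 1969 and 1982 provisions were aimed at.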

But the blame for the needless complexity and arbitrariness of taxation is entirely on progressives. They still resent it when a neighbor is "allowed" to have more than they themselves have. That impulse to tear down anyone who arouses resentment is, itself, mobocratic cannibalism.

A further indicator of the complexity of tax law lies with the practitioners of the accounting profession who specialize in this area. Until about 25 years ago, many practitioners memorized certain sections of the IRS code and used that knowledge in their practice. They were called "code heads." An alternative approach was to study summaries of tax court and federal court decisions to determine whether a particular situation fit with various applicable federal decisions (essentially a case law approach).

In the long run, the case law approach has devoured the code heads. The Thomson RIA Tax Coordinator 2d is the professional choice for tax accountants. It is written, paragraph by paragraph, by lawyers who teach tax law at America’s best law schools. Revisions are made every week. The publication has 45,000 paragraphs, all of which are subject to revision and amendment at any time. It is generally distributed by subscription over the Internet these days.

To resent success and seek to use the government to punish it starts the snowball rolling downhill. The result is gross and needless complexity that only an unemotional expert can understand. And it’s inefficient. And it corrupts rule of law in a manner that endangers the poor, needy or uneducated.

Of course, a legislature can maintain high tax rates and mobocratic use of the tax laws to achieve social ends. Such a nation is going to wind up borrowing from abroad to pay government bills. And what happens when its borrowing power runs out and the progressive taxes are stubbornly maintained? The poor get poorer, the young can’t get jobs, and – get this, pay attention to this – everyone, and I mean everyone, cheats on their taxes. This is the end of the bus ride of progressivism. It was covered in the October 2010 issue of Vanity Fair in an article called "Beware of Greeks Bearing Bonds":
http://www.vanityfair.com/business/features/2010/10/greeks-bearing-bonds-201010?currentPage=all

AFTERWORD

The blog author has mastered tax research. I’ve never been contradicted or proven wrong on any of the extensive tax research I have looked into or offered an opinion about – and there were directors and a CFO who picked fights with me. On the other hand, as an active participant in a state CPA society, there were accounting directors and CFOs who contacted me before important meetings to consult with me to make sure their conclusions about taxation were sound and above dispute. Those consultations remain among the proudest moments of my professional life. They could count on me, they could bet their reputations on the quality of my advice. Because I had no emotions and no ideological snap judgments. I went to the Tax Coordinator with an unbiased mind and got the best fit with case law. And when I gave my opinion, I always asked if they wanted backup research for their file on the tax issue (a courtesy which was normally accepted).

FINAL NOTE

There's something creepily wrong about using a government agency to assuage one's personal resentment.  That's what happens with progressive taxation.  It raises a fierce and ugly question: how much integrity does the tax collector have?  In the United States, the answer is "not much."  Washington, D.C., is my hometown.  I was a CPA there for many years.  The Internal Revenue Service was regarded as the bottom of the professional barrel, the employer of last resort.  It became virtually a laughing stock in the 1990s when it blew $8 billion on computer programming trying to modernize its system, get tax records recorded electronically, and use programming to identify returns that were irregular and deserving of audit.

The Internal Revenue Service was established in 1913 and always suspected of favoring certain classes of taxpayer and leaning on other classes.  Some of this rumored favoritism became factual under President Franklin Roosevelt.  More became true under Richard Nixon.  And today?  With the community organizer who is president right now?  The IRS is going to reduce resentment by implementing tax law?  Really?  Read on:

http://online.wsj.com/article/SB10001424127887323336104578501522735423256.html?mod=djemalertNEWS

Wednesday, May 22, 2013

Major Advance in Alzheimer's Research

Molecular Trigger for
Alzheimer's Disease Identified

May 20, 2013 — Researchers have pinpointed a catalytic trigger for the onset of Alzheimer’s disease – when the fundamental structure of a protein molecule changes to cause a chain reaction that leads to the death of neurons in the brain.

For the first time, scientists at Cambridge’s Department of Chemistry have been able to map in detail the pathway that generates "aberrant" forms of proteins which are at the root of neurodegenerative conditions such as Alzheimer’s.

They believe the breakthrough is a vital step closer to increased capabilities for earlier diagnosis of neurological disorders such as Alzheimer’s and Parkinson’s, and opens up possibilities for a new generation of targeted drugs, as scientists say they have uncovered the earliest stages of the development of Alzheimer’s that drugs could possibly target.

The study, published today in the Proceedings of the National Academy of Sciences, is a milestone in the long-term research established in Cambridge by Professor Christopher Dobson and his colleagues, following the realisation by Dobson of the underlying nature of protein ‘misfolding’ and its connection with disease over 15 years ago.

The research is likely to have a central role to play in diagnostic and drug development for dementia-related diseases, which are increasingly prevalent and damaging as populations live longer.

"There are no disease modifying therapies for Alzheimer’s and dementia at the moment, only limited treatment for symptoms. We have to solve what happens at the molecular level before we can progress and have real impact," said Dr Tuomas Knowles, lead author of the study and long-time collaborator of Professor Dobson.

"We’ve now established the pathway that shows how the toxic species that cause cell death, the oligomers, are formed. This is the key pathway to detect, target and intervene – the molecular catalyst that underlies the pathology."

In 2010, the Alzheimer’s Research Trust showed that dementia costs the UK economy over £23 billion, more than cancer and heart disease combined. Just last week, PM David Cameron urged scientists and clinicians to work together to "improve treatments and find scientific breakthroughs" to address "one of the biggest social and healthcare challenges we face."

The neurodegenerative process giving rise to diseases such as Alzheimer’s is triggered when the normal structures of protein molecules within cells become corrupted.

Protein molecules are made in cellular ‘assembly lines’ that join together chemical building blocks called amino acids in an order encoded in our DNA. New proteins emerge as long, thin chains that normally need to be folded into compact and intricate structures to carry out their biological function.
Under some conditions, however, proteins can ‘misfold’ and snag surrounding normal proteins, which then tangle and stick together in clumps which build to masses, frequently millions, of malfunctioning molecules that shape themselves into unwieldy protein tendrils.

The abnormal tendril structures, called ‘amyloid fibrils’, grow outwards around the location where the focal point, or 'nucleation' of these abnormal "species" occurs.

Amyloid fibrils can form the foundations of huge protein deposits – or plaques – long-seen in the brains of Alzheimer’s sufferers, and once believed to be the cause of the disease, before the discovery of ‘toxic oligomers’ by Dobson and others a decade or so ago.

A plaque’s size and density renders it insoluble, and consequently unable to move. The oligomers that give rise to Alzheimer's disease, by contrast, are small enough to spread easily around the brain – killing neurons and interacting harmfully with other molecules – but how they were formed was, until now, a mystery.

The new work, in large part carried out by researcher Samuel Cohen, shows that once a small but critical level of malfunctioning protein ‘clumps’ have formed, a runaway chain reaction is triggered that multiplies exponentially the number of these protein composites, activating new focal points through ‘nucleation’.

It is this secondary nucleation process that forges juvenile tendrils, initially consisting of clusters that contain just a few protein molecules. Small and highly diffusible, these are the ‘toxic oligomers’ that careen dangerously around the brain cells, killing neurons and ultimately causing loss of memory and other symptoms of dementia.

The researchers brought together kinetic experiments with a theoretical framework based on master equations – tools commonly used in other areas of chemistry and physics, but ones that had not previously been exploited to their full potential in the study of protein malfunction.
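The paper's master-equation treatment is far more detailed, but the backbone of this kind of aggregation kinetics can be captured by two moment equations: new fibrils appear through primary nucleation from free monomer and through secondary nucleation on the surface of existing fibril mass, while fibril mass grows by elongation. A minimal sketch of that general framework, with made-up rate constants rather than the authors' fitted values:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (arbitrary units), not fitted values.
k_n, k_plus, k_2 = 1e-4, 1e2, 1e-2   # primary nucleation, elongation, secondary nucleation
n_c, n_2 = 2, 2                      # reaction orders for the two nucleation steps
m_total = 1.0                        # total monomer concentration

def rhs(t, y):
    P, M = y                               # fibril number and fibril mass concentrations
    m = m_total - M                        # free monomer remaining
    dP = k_n * m**n_c + k_2 * m**n_2 * M   # primary + fibril-catalyzed secondary nucleation
    dM = 2 * k_plus * m * P                # growth of existing fibrils by elongation
    return [dP, dM]

sol = solve_ivp(rhs, (0, 10), [0.0, 0.0], dense_output=True)
# sol.y[1] (the fibril mass M(t)) shows the characteristic sigmoidal, runaway
# growth once secondary nucleation takes over from primary nucleation.

The "runaway chain reaction" described above corresponds to the secondary-nucleation term, whose rate grows with the fibril mass already present.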

The latest research follows hard on the heels of another groundbreaking study, published in April of this year, again in PNAS, in which the Cambridge group, in collaboration with colleagues in London and at MIT, worked out the first atomic structure of one of the damaging amyloid fibril protein tendrils. They say the years spent developing research techniques are really paying off now, and they are starting to solve "some of the key mysteries" of these neurodegenerative diseases.

"We are essentially using a physical and chemical methods to address a biomolecular problem, mapping out the networks of processes and dominant mechanisms to ‘recreate the crime scene’ at the molecular root of Alzheimer’s disease," explained Knowles.

"Increasingly, using quantitative experimental tools and rigorous theoretical analysis to understand complex biological processes are leading to exciting and game-changing results. With a disease like Alzheimer’s, you have to intervene in a highly specific manner to prevent the formation of the toxic agents. Now we’ve found how the oligomers are created, we know what process we need to turn off."
 
http://www.sciencedaily.com/releases/2013/05/130520154217.htm

Story Source: The above story is reprinted from materials provided by the University of Cambridge, via AlphaGalileo. Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Tuesday, May 21, 2013

Breakthrough in Prime Number Theory

Introduction
By the blog author

Prime numbers (2, 3, 5, 7, 11, 13, 17, 19, 23, 29…) are integers greater than 1 that are evenly divisible only by 1 and by themselves. For a very long time, there has been speculation about the possible nature of primes. Some conjectures have been around for many generations – conjectures being notions that other mathematicians have been able neither to prove nor to refute.

One of these conjectures is called the "twin primes conjecture," which holds that there are infinitely many pairs of prime numbers that differ by exactly two (such as 11 and 13, or 29 and 31). No one has been able to crack this conjecture, either to upgrade it to a proven mathematical truth or to eliminate it as ultimately untrue.

But a virtually unknown mathematician has come up with a proof that paves the road toward resolving the twin primes conjecture. He has shown that "there is some number N smaller than 70 million such that there are infinitely many pairs of primes that differ by N. No matter how far you go into the deserts of the truly gargantuan prime numbers — no matter how sparse the primes become — you will keep finding prime pairs that differ by less than 70 million."
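To make the statement concrete, here is a small sketch that uses a simple sieve of Eratosthenes to count pairs of consecutive primes below a chosen limit that differ by at most a given bound. The limit and bound here are tiny numbers picked for illustration; Zhang's theorem is the far deeper claim that for some bound below 70 million such pairs never run out.

def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = [False] * len(sieve[n * n::n])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def close_prime_pairs(limit, gap_bound):
    """Count consecutive primes up to `limit` that differ by at most `gap_bound`."""
    ps = primes_up_to(limit)
    return sum(1 for a, b in zip(ps, ps[1:]) if b - a <= gap_bound)

# Consecutive primes below 1,000 differing by at most 2 (the twin primes, plus the pair 2 and 3),
# versus pairs within a much looser gap of 70.
print(close_prime_pairs(1_000, 2))
print(close_prime_pairs(1_000, 70))

Brute-force counting like this can only ever check finitely far; the whole point of the new proof is that the supply of such close pairs never dries up, no matter how far out you go.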

It’s arcane. There’s a big difference between 2 and 70,000,000. But what has been proven amounts to very significant new mathematical knowledge. Here’s an excellent article on how this discovery occurred:

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Unheralded Mathematician
Bridges the Prime Gap
By Erica Klarreich, Simons Foundation, May 19, 2013

On April 17, a paper arrived in the inbox of Annals of Mathematics, one of the discipline’s preeminent journals. Written by a mathematician virtually unknown to the experts in his field — a 50-something lecturer at the University of New Hampshire named Yitang Zhang — the paper claimed to have taken a huge step forward in understanding one of mathematics’ oldest problems, the twin primes conjecture.

Editors of prominent mathematics journals are used to fielding grandiose claims from obscure authors, but this paper was different. Written with crystalline clarity and a total command of the topic’s current state of the art, it was evidently a serious piece of work, and the Annals editors decided to put it on the fast track. Just three weeks later — a blink of an eye compared to the usual pace of mathematics journals — Zhang received the referee report on his paper.

"The main results are of the first rank," one of the referees wrote. The author had proved "a landmark theorem in the distribution of prime numbers."

Rumors swept through the mathematics community that a great advance had been made by a researcher no one seemed to know — someone whose talents had been so overlooked after he earned his doctorate in 1991 that he had found it difficult to get an academic job, working for several years as an accountant and even in a Subway sandwich shop.

"Basically, no one knows him," said Andrew Granville, a number theorist at the Université de Montréal.
"Now, suddenly, he has proved one of the great results in the history of number theory."

Mathematicians at Harvard University hastily arranged for Zhang to present his work to a packed audience there on May 13. As details of his work have emerged, it has become clear that Zhang achieved his result not via a radically new approach to the problem, but by applying existing methods with great perseverance.

"The big experts in the field had already tried to make this approach work," Granville said. "He’s not a known expert, but he succeeded where all the experts had failed."

….A Chinese immigrant who received his doctorate from Purdue University, he had always been interested in number theory, even though it wasn’t the subject of his dissertation. During the difficult years in which he was unable to get an academic job, he continued to follow developments in the field.

"There are a lot of chances in your career, but the important thing is to keep thinking," he said.
Zhang read the GPY paper, and in particular the sentence referring to the hair’s breadth between GPY and bounded prime gaps. "That sentence impressed me so much," he said.

Without communicating with the field’s experts, Zhang started thinking about the problem. After three years, however, he had made no progress. "I was so tired," he said.

To take a break, Zhang visited a friend in Colorado last summer. There, on July 3, during a half-hour lull in his friend’s backyard before leaving for a concert, the solution suddenly came to him. "I immediately realized that it would work," he said.

Zhang’s idea was to use not the GPY sieve but a modified version of it, in which the sieve filters not by every number, but only by numbers that have no large prime factors.

"His sieve doesn’t do as good a job because you’re not using everything you can sieve with," Goldston said.

"But it turns out that while it’s a little less effective, it gives him the flexibility that allows the argument to work."
 
While the new sieve allowed Zhang to prove that there are infinitely many prime pairs closer together than 70 million, it is unlikely that his methods can be pushed as far as the twin primes conjecture, Goldston said.

Even with the strongest possible assumptions about the value of the level of distribution, he said, the best result likely to emerge from the GPY method would be that there are infinitely many prime pairs that differ by 16 or less.

But Granville said that mathematicians shouldn’t prematurely rule out the possibility of reaching the twin primes conjecture by these methods.

"This work is a game changer, and sometimes after a new proof, what had previously appeared to be much harder turns out to be just a tiny extension," he said. "For now, we need to study the paper and see what’s what."

It took Zhang several months to work through all the details, but the resulting paper is a model of clear exposition, Granville said. "He nailed down every detail so no one will doubt him. There’s no waffling."

Once Zhang received the referee report, events unfolded with dizzying speed. Invitations to speak on his work poured in. "I think people are pretty thrilled that someone out of nowhere did this," Granville said.

For Zhang, who calls himself shy, the glare of the spotlight has been somewhat uncomfortable. "I said, ‘Why is this so quick?’" he said. "It was confusing, sometimes."

Zhang was not shy, though, during his Harvard talk, which attendees praised for its clarity. "When I’m giving a talk and concentrating on the math, I forget my shyness," he said.

Zhang said he feels no resentment about the relative obscurity of his career thus far. "My mind is very peaceful. I don’t care so much about the money, or the honor," he said. "I like to be very quiet and keep working by myself."

Meanwhile, Zhang has already started work on his next project, which he declined to describe. "Hopefully it will be a good result," he said.
  http://simonsfoundation.org/features/science-news/unheralded-mathematician-bridges-the-prime-gap/

This article was reprinted on Wired.com.

Monday, May 20, 2013

Fewer Wind Generators -- but Solar Soars

Wind suffers worst Q1 of capacity growth in 7 years; solar surges
By Charlotte Cox, SNL, May 16, 2013

The weak quarter comes in the wake of a huge surge of installations in 2012, which saw 13,329 MW installed ahead of the scheduled expiration of the production tax credit. The fourth quarter of 2012 alone saw 8,054 MW of capacity brought online, more than twice the amount in any prior quarter. The fourth quarter of each year since 2000 has been the industry's strongest, particularly in years when the production tax credit was scheduled to expire.

That trend may continue, but fourth-quarter booms are likely to be reduced by the latest version of the production tax credit passed by Congress early this year. That revised policy extends the PTC through 2013, but requires only that projects begin construction or meet a safe harbor clause by the deadline instead of beginning commercial operations, putting less pressure on projects to all come online at the same time.

Given that safe harbor provision and the time required to build a wind project, turbine manufacturers expect only limited installations this year followed by an increase in new capacity in 2014 as projects are able to begin operations.

The solar industry, meanwhile, had its best first quarter ever for utility-scale solar generation. The 297 MW completed during the quarter more than doubled the 123 MW completed in the first quarter of 2012. Along with wind, solar capacity peaked in the fourth quarter of 2012, with 841 MW installed, more than twice the amount in any prior quarter. Solar projects are subject to different federal incentives, including the investment tax credit, which steps down from 30% to 10% at the end of 2016.

More (with graphs and links) at:
  http://www.snl.com/InteractiveX/Article.aspx?cdid=A-17680958-13088

Sunday, May 19, 2013

"Singularity" and Artificial Intelligence

Introduction
by the Blog Author

In expectation of increasingly useful advances in artificial intelligence, a new organization, now called the Machine Intelligence Research Institute (MIRI), was formed. In 2012 it spun off a second organization, the Center for Applied Rationality. Below is Wikipedia’s discussion of these developments.

= = = = = = = = = = = = = = = = from Wikipedia: = = = = = = = = = = = = = = =

The Machine Intelligence Research Institute (MIRI) is a non-profit organization founded in 2000 to develop safe artificial intelligence software, and to raise awareness of both the dangers and potential benefits it believes AI presents. The organization advocates ideas initially put forth by I.J. Good and Vernor Vinge regarding an "intelligence explosion" or Singularity, which is predicted to follow the creation of sufficiently advanced AI. In their view, the potential benefits and risks of this event necessitate the search for solutions to problems involving AI goal systems to ensure powerful AIs are not dangerous when they are created. MIRI espouses the Friendly AI model created by its co-founder Eliezer Yudkowsky as a potential solution to such problems. MIRI was formerly known as the Singularity Institute, and before that the Singularity Institute for Artificial Intelligence.

Luke Muehlhauser is Executive Director. Inventor and futurist author Ray Kurzweil served as one of its directors from 2007 to 2010. The institute maintains an advisory board whose members include Oxford philosopher Nick Bostrom, biomedical gerontologist Aubrey de Grey, PayPal co-founder Peter Thiel, and Foresight Nanotech Institute co-founder Christine Peterson. It is tax exempt under Section 501(c)(3) of the United States Internal Revenue Code, and has a Canadian branch, SIAI-CA, formed in 2004 and recognized as a Charitable Organization by the Canada Revenue Agency.

Center for Applied Rationality
In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality, whose focus is to help people apply the principles of rationality in their day-to-day life and to research and develop de-biasing techniques.

[online at: http://appliedrationality.org/ ]


Usefulness
Dr. Hugo de Garis dubbed the organization the "singhilarity institute" in a recent H+ Magazine article, saying that creating truly safe artificial intelligence is utterly impossible. However, Dr. James Miller believes that even if the organization has no prospect of creating friendly AI, it more than justifies its existence simply by spreading awareness of the risks of unfriendly AI.
 
http://en.wikipedia.org/wiki/Singularity_Institute_for_Artificial_Intelligence

-=-=-=-=-=-=-=-=-=-==-=-==-=-=-=-=

Related current news: "When Will the Human Mind Upload to a Computer?" by Eric Niiler for Discovery.com at:
  http://news.discovery.com/tech/when-will-humans-upload-their-brains-to-computers-130517.htm

Saturday, May 18, 2013

Six Global Hotspots to Watch

Compiled by RealClearWorld
    • East China Sea [Japan vs. China]
    • Eastern Mediterranean [Turkey, Israel, and Greece quarrelling over undersea oil rights]
    • Eastern Congo [Democratic Republic of the Congo versus rebels, particularly the M23 group]
    • India-China border [a long-contested border in the Himalayas]
    • The Arctic Ocean [Canada, USA, Russia, Denmark, Sweden, Norway, with influence sought by China and India]
    • The Caspian Sea [military buildup by Russia, Iran, Kazakhstan, Turkmenistan, and Azerbaijan]
http://www.realclearworld.com/lists/global_hotspots_to_watch/

Friday, May 17, 2013

Bright Explosion on the Moon March 17th

By Dr. Tony Phillips, NASA

May 17, 2013:
For the past 8 years, NASA astronomers have been monitoring the Moon for signs of explosions caused by meteoroids hitting the lunar surface. "Lunar meteor showers" have turned out to be more common than anyone expected, with hundreds of detectable impacts occurring every year.

They've just seen the biggest explosion in the history of the program.

"On March 17, 2013, an object about the size of a small boulder hit the lunar surface in Mare Imbrium," says Bill Cooke of NASA's Meteoroid Environment Office. "It exploded in a flash nearly 10 times as bright as anything we've ever seen before."

Anyone looking at the Moon at the moment of impact could have seen the explosion--no telescope required. For about one second, the impact site was glowing like a 4th magnitude star.

Ron Suggs, an analyst at the Marshall Space Flight Center, was the first to notice the impact in a digital video recorded by one of the monitoring program's 14-inch telescopes. "It jumped right out at me, it was so bright," he recalls.

The 40 kg meteoroid, measuring 0.3 to 0.4 meters wide, hit the Moon traveling 56,000 mph. The resulting explosion (1) packed as much punch as 5 tons of TNT.
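As a rough check on those figures, the kinetic energy of a 40 kg object moving at 56,000 mph can be converted to a TNT equivalent with a few lines of Python. The result, roughly 3 tons, is the same order of magnitude as the 5 tons NASA reports; the difference presumably reflects NASA's more detailed modeling and the uncertainties in the mass and speed estimates.

# Back-of-the-envelope TNT equivalent for the March 17 impactor,
# using only the mass and speed quoted in the article.
mass_kg = 40.0
speed_ms = 56_000 * 0.44704          # 56,000 mph converted to m/s
kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
TNT_J_PER_TON = 4.184e9              # standard TNT energy equivalence
print(f"Kinetic energy: {kinetic_energy_j:.2e} J")
print(f"TNT equivalent: {kinetic_energy_j / TNT_J_PER_TON:.1f} tons")   # ~3 tons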

Cooke believes the lunar impact might have been part of a much larger event.

"On the night of March 17, NASA and University of Western Ontario all-sky cameras picked up an unusual number of deep-penetrating meteors right here on Earth," he says. "These fireballs were traveling along nearly identical orbits between Earth and the asteroid belt."

This means Earth and the Moon were pelted by meteoroids at about the same time.

"My working hypothesis is that the two events are related, and that this constitutes a short duration cluster of material encountered by the Earth-Moon system," says Cooke.

One of the goals of the lunar monitoring program is to identify new streams of space debris that pose a potential threat to the Earth-Moon system. The March 17th event seems to be a good candidate.

Controllers of NASA's Lunar Reconnaissance Orbiter have been notified of the strike. The crater could be as wide as 20 meters, which would make it an easy target for LRO the next time the spacecraft passes over the impact site. Comparing the size of the crater to the brightness of the flash would give researchers a valuable "ground truth" measurement to validate lunar impact models.

Unlike Earth, which has an atmosphere to protect it, the Moon is airless and exposed. "Lunar meteors" crash into the ground with fair frequency. Since the monitoring program began in 2005, NASA’s lunar impact team has detected more than 300 strikes, most orders of magnitude fainter than the March 17th event. Statistically speaking, more than half of all lunar meteors come from known meteoroid streams such as the Perseids and Leonids. The rest are sporadic meteors--random bits of comet and asteroid debris of unknown parentage.

U.S. Space Exploration Policy eventually calls for extended astronaut stays on the lunar surface. Identifying the sources of lunar meteors and measuring their impact rates gives future lunar explorers an idea of what to expect. Is it safe to go on a moonwalk, or not? The middle of March might be a good time to stay inside.

"We'll be keeping an eye out for signs of a repeat performance next year when the Earth-Moon system passes through the same region of space," says Cooke. "Meanwhile, our analysis of the March 17th event continues."

Footnote: (1) The Moon has no oxygen atmosphere, so how can something explode? Lunar meteors don't require oxygen or combustion to make themselves visible. They hit the ground with so much kinetic energy that even a pebble can make a crater several feet wide. The flash of light comes not from combustion but rather from the thermal glow of molten rock and hot vapors at the impact site.
http://science.nasa.gov/science-news/science-at-nasa/2013/16may_lunarimpact/

Thursday, May 16, 2013

North Dakota's Energy Boom

North Dakota Proves Obama
Doesn't Get Energy
By Thomas Pyle, RealClearEnergy, May 16, 2013

The Bakken and Three Forks formations in North Dakota, South Dakota and Montana contain significantly more oil and gas resources than previously thought, according to a new U.S. Geological Survey (USGS) reassessment. The survey found an estimated 7.4 billion barrels of undiscovered, technically recoverable shale oil (resources that can be recovered with current technology but whose precise location is unknown), double its 2008 assessment. The reassessment also found three times as much recoverable natural gas as previously believed to exist.

North Dakota, which leads the nation in job growth, wage increases, and low unemployment, owes much of its success to the shale energy boom and advances in hydraulic fracturing. Meanwhile, as national unemployment hovers around 7.5 percent -- more than twice North Dakota's rate -- the federal government's policy toward energy production can only be described as a "bust."

At nearly every turn, the federal government is actively obstructing domestic energy production. For example, oil and natural gas production on federal lands is down by more than 40 percent compared to 10 years ago. Consider the impact in light of the government's vast landholdings: Uncle Sam owns 36.6 percent of Colorado, 57.4 percent of Utah and 42.3 percent of Wyoming. Those three states happen to contain the nation's principal holdings of oil shale (not to be confused with shale oil). Unlike shale oil, which is conventional oil trapped in shale rock, oil shale is sedimentary rock that contains kerogen. Kerogen is highly valuable because it can be processed into liquid fuels that are used as raw material for a vast array of consumer goods.

Instead of encouraging oil shale development, the Obama administration is keeping resource-rich regions in Colorado, Utah, and Wyoming under lock and key. At stake is enough technically recoverable oil shale to provide American consumers with 140 years of electricity at current usage rates. While oil shale has yet to be developed in the United States, Estonia, for example, already derives 90 percent of its electricity production from this energy source. However, America will never develop its oil shale resources if the federal government continues to restrict access.

Despite untapped domestic energy supplies locked away on federal lands, the Obama administration continues to drag its feet in processing drilling applications, according to a recent report by the nonpartisan Congressional Research Service. In 2006, it took 218 days on average to process an application for a permit to drill on federal lands. Since then, the Bureau of Land Management increased the complexity of the forms so much that by 2011, it took an average of 307 days to turn an application around, a 41 percent increase. "A more efficient permitting process may be an added incentive for the industry to invest in developing federal resources, which may allow for some oil and gas to come onstream sooner, but in general, the regulatory framework for developing resources on federal lands will likely remain more involved and time-consuming than that on private land," the CRS report concluded.

Expanding energy production on federal lands would have an enormous impact on the American economy, according to an independent study published by the Institute for Energy Research (IER). The resulting broad-based economic stimulus would include a $127 billion annual boost in GDP for the next seven years; an increase of 552,000 jobs annually for the next seven years; wage hikes of $32 billion for each of the next seven years and $115 billion annually between seven and thirty years. Additionally, our cash-strapped federal government would realize $24 billion in annual revenue over the next seven years, and $86 billion annually thereafter. State and local governments would also benefit from $10.3 billion additional annual revenues over the next seven years, and $35.5 billion annually thereafter.

As the USGS reassessment demonstrates, new energy discoveries and technological advancements are happening at a rapid pace. North America has 1.79 trillion barrels of proven oil reserves, according to the North American Energy Inventory released by IER in December 2011. That's enough to fill the tank of every passenger car in the United States for the next 430 years! The survey also estimated 4.244 quadrillion cubic feet of recoverable natural gas resources, enough to heat every home for the next 575 winters at current usage rates. Of course, these estimates were published well before the new USGS reassessments. America's true energy supply is undoubtedly even more robust, providing further certainty that domestic resources can meet America's energy needs for years to come.

America continues to struggle amid sluggish growth, staggering budget deficits, and the Obama administration's desire to ensure that sequester cuts inflict maximum harm. North Dakota, with plenty of high-paying jobs and a $1.6 billion budget surplus, serves as a blueprint for rejuvenating the American economy. North Dakota's pro-growth energy policies are a testament to the curative power of unlocking access to domestic energy resources.

* * * * * * * * * * * * * * *

Thomas J. Pyle [author of this piece] is the President of the Institute for Energy Research.
Link (with access to further links supporting the figures) at:
 
http://www.realclearenergy.org/articles/2013/05/16/north_dakota_proves_obama_doesnt_get_energy_107020.html

Wednesday, May 15, 2013

Earth's Rarest Naturally Occurring Element

Here is a deceptively simple question.

There are several radioactive elements that exist only because a heavy element has been bombarded with energy to create them artificially. This synthetic "series" of elements is properly identified on the periodic table of the elements.
The other elements occur naturally here on the earth’s surface.

So the simple question is, "What is the rarest element naturally occurring on earth?"

My guess would have been plutonium or perhaps rhodium. Those answers are wrong.

The right answer is astatine.

= = = = = = = = = = = = = from Wikipedia: = = = = = = = = = = = =

Astatine is a radioactive chemical element with the chemical symbol At and atomic number 85. It occurs on Earth only as the result of the radioactive decay of certain heavier elements. All of its isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Accordingly, much less is known about astatine than most other elements. The observed properties are consistent with it being a heavier analog of iodine; many other properties have been estimated based on this resemblance.

Elemental astatine has never been viewed, because a mass large enough to be seen (by the naked human eye) would be immediately vaporized by the heat generated by its own radioactivity. Astatine may be dark, or it may have a metallic appearance and be a semiconductor, or it may even be a metal. It is likely to have a much higher melting point than does iodine, on par with those of bismuth and polonium. Chemically, astatine behaves more or less as a halogen [in the same group with fluorine, chlorine, bromine, and iodine], being expected to form ionic astatides with alkali or alkaline earth metals; it is known to form covalent compounds with nonmetals, including other halogens. It does, however, also have a notable cationic chemistry that distinguishes it from the lighter halogens. The second longest-lived isotope of astatine, astatine-211, is the only one currently having any commercial application, being employed in medicine to diagnose and treat some diseases via its emission of alpha particles (helium-4 nuclei). Only extremely small quantities are used, however, due to its intense radioactivity.

The element was first produced by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè at the University of California, Berkeley in 1940. They named the element "astatine", a name coming from the great instability of the synthesized matter (the source Greek word αστατος (astatos) means "unstable"). Three years later it was found in nature, although it is the least abundant element in the Earth's crust among the non-transuranic elements, with an estimated total amount of less than 28 grams (1 oz) at any given time. Six astatine isotopes, with mass numbers of 214 to 219, are present in nature as the products of various decay routes of heavier elements, but neither the most stable isotope of astatine (with mass number 210) nor astatine-211 (which is used in medicine) is produced naturally.

Characteristics
Astatine is an extremely radioactive element; all its isotopes have half-lives of less than 12 hours, decaying into bismuth, polonium, radon, or other astatine isotopes. Among the first 101 elements in the periodic table, only francium [the most reactive of the metals] is less stable.


The bulk properties of astatine are not known with any great degree of certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would be immediately and completely vaporized due to the heat generated by its intense radioactivity. Astatine is usually classified as either a nonmetal or a metalloid. However, metal formation for condensed-phase astatine has also been suggested.

Uses and Precautions
The newly formed astatine-211 is important in nuclear medicine. Once produced, astatine must be used quickly, as it decays with a half-life of 7.2 hours; this is, however, long enough to permit multi-step labeling strategies. Astatine-211 can be used for targeted alpha particle radiotherapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide of polonium-211, which itself undergoes further alpha decay).
In a manner similar to iodine, astatine is preferentially concentrated in the thyroid gland, although to a lesser extent. However, it tends to concentrate in the liver in the form of a radiocolloid if it is released into the systemic circulation. The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that astatine does not emit high energy beta particles (electrons), as does iodine-131. Beta particles have considerably greater penetrating power through tissues than do the much heavier alpha particles. While an average energy alpha particle released by decay of astatine-211 can travel up to 70 µm through the surrounding tissues, an average energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. Thus, using astatine-211 instead of iodine-131 enables the thyroid to be dosed appropriately, while the neighboring parathyroid gland is spared. The short half-life and limited penetrating power of its radiation through tissues render astatine generally preferable to iodine-131 when used in diagnosis as well.
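The two quantitative claims above, the 7.2-hour half-life and the roughly 30-fold difference in particle range, are easy to verify with simple arithmetic. Here is a minimal Python sketch; the 4-hour labeling time is an assumed example, not a figure from the article.

# Fraction of astatine-211 remaining after an assumed 4-hour labeling procedure.
half_life_h = 7.2
elapsed_h = 4.0                       # assumed example, not from the article
remaining = 0.5 ** (elapsed_h / half_life_h)
print(f"Remaining after {elapsed_h} h: {remaining:.0%}")   # ~68%

# Penetration ranges quoted above: ~70 micrometers (alpha) vs. ~2 mm (beta).
alpha_range_um = 70
beta_range_um = 2000
print(f"Beta/alpha range ratio: {beta_range_um / alpha_range_um:.0f}x")   # ~29, "nearly 30 times"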


Experiments in rats and monkeys, however, suggest that astatine causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. These experiments also suggest that astatine could cause damage to the thyroid of any organism. Early research suggested that injection of lethal quantities of astatine caused morphological changes in breast tissue (although not other tissues); however, this conclusion currently remains controversial.

http://en.wikipedia.org/wiki/Astatine

Tuesday, May 14, 2013

AutoPulse -- Sometimes a Life Saver

The AutoPulse is an automated, portable, battery-powered cardiopulmonary resuscitation device created by Revivant and subsequently purchased and currently manufactured by ZOLL Medical Corporation. It is a chest compression device composed of a constricting band and half backboard that is intended to be used as an adjunct to CPR during advanced cardiac life support by professional health care providers. The AutoPulse uses a distributing band to deliver the chest compressions. In the literature it is also known as LDB-CPR (Load Distributing Band-CPR).

The AutoPulse measures chest size and resistance before it delivers the unique combination of thoracic and cardiac chest compressions. The compression depth and force varies per patient. The chest displacement equals a 20% reduction in the anterior-posterior chest depth. The physiological duty cycle is 50%, and it runs in a 30:2 or continuous compression mode, which is user-selectable, at a rate of 80 compressions-per-minute.
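Those specifications translate directly into per-cycle timing and displacement. Here is a minimal Python sketch of the arithmetic; the 22 cm anterior-posterior chest depth is an assumed example, not a device specification.

# Timing implied by 80 compressions per minute with a 50% duty cycle.
rate_cpm = 80
duty_cycle = 0.50
cycle_s = 60 / rate_cpm                   # 0.75 s per compression cycle
compress_s = cycle_s * duty_cycle         # 0.375 s compressing
release_s = cycle_s - compress_s          # 0.375 s releasing
print(f"Cycle: {cycle_s:.2f} s (compress {compress_s:.3f} s, release {release_s:.3f} s)")

# Displacement: a 20% reduction of the anterior-posterior chest depth.
chest_depth_cm = 22.0                     # assumed example chest depth
print(f"Compression depth: {0.20 * chest_depth_cm:.1f} cm")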

Device Operation
The patient's head, shoulders and upper back lay upon the base unit, with the controls for the AutoPulse beside the patient's left ear. It can be augmented for cervical spinal support. The unit contains the control computer, the rechargeable battery, and the motors that operate the LifeBand. The LifeBand is an adjustable strap that covers the entire rib cage. When the patient (who must be disrobed) is strapped in and the start button is pressed, the LifeBand pulls tight around the chest, determines the patient's chest size and resistance, and proceeds to rhythmically constrict the entire rib cage, pumping the heart at a rate of 80 compressions per minute. The LifeBand can be placed over defibrillation pads but must be temporarily loosened to use standard paddle defibrillators and repositioned after the shock has been delivered. The LifeBand is disposable, and designed to be used on a single patient for sanitary reasons.

Studies and Clinical Trials
The gold standard for resuscitation research is survival to hospital discharge. Although common sense suggests that short-term and intermediate outcomes like return of spontaneous circulation (ROSC) or survival to hospital admission are promising, experienced scientists know that anything less than a neurologically intact survivor walking out of the hospital is ultimately irrelevant.


Several animal studies have shown that automated CPR machines are more effective at providing circulatory support than manual CPR. One study showed that use of the AutoPulse produced blood flow to the heart and brain that was comparable to pre-arrest levels. In another study, an adapted AutoPulse was shown to be highly effective in support of cardiac arrest in animals, whereas manual CPR was tenuous in its effectiveness. Pigs were used in the study, and were left in cardiac arrest for eight minutes to simulate average ambulance response time. 73% of the pigs that were put into the AutoPulse were revived, and 88% of the surviving pigs showed no neurological damage. None of the pigs that received manual CPR survived.

The device has shown less promise in human research. Although some studies showed improved coronary perfusion pressure and more spontaneous return of circulation with the AutoPulse, one large, multi-centered, randomized clinical trial was canceled early by the Institutional Review Board (IRB) when it was determined that patients who received manual CPR were more likely to walk out of the hospital, suggesting that enthusiasm for the device "is premature, given that the effectiveness of the device likely depends on still-to-be-defined factors independent of the mechanical capabilities of the device."

Criticism
The AutoPulse has received a fair amount of criticism surrounding its battery life, bulk, and studies suggesting poor survival to hospital discharge. The most notable case of such issues can be found in the news reports of the resuscitation of Prince Friso after he and his companion were caught in an avalanche. In that case, the AutoPulse batteries failed after only 9 and 15 minutes. Others have criticized the high cost and non-reimbursable nature of the disposable AutoPulse LifeBand.

Studies have also failed to show an increase in survival to hospital discharge. During the ASPIRE trial (the first multi-centered, randomized trial with large enrollment), the survival-to-hospital-discharge rate decreased from 9.9% with manual CPR to approximately 5% with the device. Because of these findings, the ethics review board terminated the study. However, some researchers question the validity of the ASPIRE protocol. The CIRC study results additionally indicate that the AutoPulse increases the time before first defibrillation and decreases the average compressions per minute in comparison with manual CPR. Overall, the study shows no improvement over high-quality manual CPR.
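Expressed as standard trial statistics, the quoted ASPIRE survival figures work out as follows (a minimal sketch; since only "approximately 5%" is given for the device arm, the results are approximate):

# Survival-to-hospital-discharge rates quoted from the ASPIRE trial.
manual_cpr = 0.099      # 9.9% with manual CPR
autopulse  = 0.05       # "approximately 5%" with the device
print(f"Absolute difference: {(manual_cpr - autopulse) * 100:.1f} percentage points")   # ~4.9
print(f"Relative risk vs. manual CPR: {autopulse / manual_cpr:.2f}")                    # ~0.51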


Several cases have been reported where the AutoPulse has caused additional injury to patients receiving compressions from the device.


Some departments and agencies have begun discontinuing the use of the AutoPulse, citing battery and reliability issues.

http://en.wikipedia.org/wiki/AutoPulse

Monday, May 13, 2013

Physical Make-up of the Brain and Criminality

Tim Adams wrote a long yet fascinating piece about Adrian Raine for The Observer, published May 11, 2013. Raine is a scientist who studies the brains of criminals. He moved to the United States in 1987, partly because he was doing research on the biological basis for criminal behavior, something of a taboo approach, since the sources of crime are conventionally assumed to be social and environmental.

Raine’s research has compiled 41 PET scans of convicted criminals and compared them with scans from a control group of people of similar age and biological characteristics. The scans are daunting because the criminal brains look like other criminal brains and unlike the normal brains of the controls (a picture is available at the link at the end of this summary).

Raine joins in repudiating the nineteenth-century quack science of eugenics. But he says that the physical organization of the brain, the social environment in which a person lives, and epigenetics are all critical to understanding the complex nature of the criminal mind. This thoughtful, sobering article can be found at:

http://www.guardian.co.uk/science/2013/may/12/how-to-spot-a-murderers-brain

Sunday, May 12, 2013

Liberal Pundits Have Abandoned Science

"The core trait of a scientific mind is that when its commitments clash with evidence, evidence rules. On that count, what grade do liberals deserve? Fail, given their reaction to the latest evidence on universal health care, global warming and universal preschool."
-- Shikha Dalmia in an opinion piece in the Washington Examiner, May 11, 2013

Dalmia presents pointed and direct evidence in support of her contention that modern liberals are startlingly unscientific. The article is online at

http://washingtonexaminer.com/scientific-liberals-should-accept-results-of-science/article/2529368

Saturday, May 11, 2013

Movie Review: Oz the Great and Powerful

By the blog author

Oz the Great and Powerful is a movie nearing the end of its first run at American theaters. The film was directed by Sam Raimi and produced by Joe Roth. David Lindsay-Abaire and Mitchell Kapner wrote the script. James Franco plays Oscar Diggs, a con artist, effective ladies' man and carnival magician. Of course, the character is whisked away to Oz, where he uses his training as a magician to pretend to fulfill the prophecy of a Wizard who will free the entire country of Oz from a tyrannical witch who overthrew her father, a benevolent king.

Mila Kunis plays Theodora, a good witch who meets the magician upon his landing in Oz; Rachel Weisz plays Evanora, the sister of Theodora; and Michelle Williams plays Glinda.

Early in Oz, Diggs uses the magician’s prop of fireworks to save a flying monkey from an attack by a lion. The flying monkey volunteers to serve the magician throughout his lifetime. Diggs is also able to repair the legs of a living china doll. She and the monkey accompany him for the rest of the adventure.

There are several key elements in the movie which are exhibited with perfection. Like the 1939 Wizard of Oz, the wizard himself is a con man and a humbug. The writers did a thorough job looking through the L. Frank Baum Oz books to show that this supposed leader was a man of very modest talents. This is seen early when Diggs escapes a furious husband and a strongman at the circus by climbing into a balloon to get away. When an approaching tornado spins the balloon into the vortex, Diggs is terrified and begs to be allowed to live and amend his ways. His shrieking and cowardly begging in the tornado funnel are so banal that the scene is actually funny. Landing in Oz, his character appears not to change at all. But the adventurous situations that arise often have parallels to Diggs’ immature and cowardly actions in Kansas.

It isn’t difficult to sort out which witches are good and which are bad, but nonetheless the drama is well displayed and presented. What matters, centrally, is the humanity of Diggs’ companions, the flying monkey and the china girl. These are the buffers who move Diggs’ wizard toward maturity.

The wizard grows up to the point that he realizes he cannot defeat the enemies of the good side with singing Munchkins, sewing Quadlings and tinkers. But he is encouraged by Glinda not to underestimate himself. This inspires Diggs to use magic and sleight of hand to challenge the power of the witches, who themselves are in possession of the Emerald City.

Historically, the fight for the Emerald City is an analog to a military truth: when mechanics fight farmers, the mechanics always win. This is the power of the eccentric and unwarlike tinkers as led by Diggs.

In the final scene, Diggs accepts the wizard title and rewards his allies. This is performed and filmed magnificently. The new wizard’s exchanges with the china girl and the flying monkey are exemplary – just the sort of thing that children should experience at a movie theater.

There’s a lot more to this movie than a bag of special effects and green screen technology. It is a genuine and well-researched tribute to the gentle philosophy of L. Frank Baum, himself America’s answer to the brothers Grimm and to Hans Christian Andersen.
That the critics didn’t get it and gave this film mixed reviews is heartening. The 1939 Wizard of Oz lost money and wasn’t regarded as a classic until it was shown annually on television in the 1950s and 1960s. Oz the Great and Powerful may be the only legitimate companion to the 1939 film; both movies are proper presentations of Baum’s philosophy, itself a masterpiece of understanding of the minds of children. That the Disney studio can captain such a project is remarkable and respectable.

Friday, May 10, 2013

Acne Bacteria Can Cause Back Pain

Up to 40 percent of lower back pain is caused by bacterial infections, a group of specialists at the University of Southern Denmark have documented. The Danish team published its results in two articles in the European Spine Journal.

This would mean that antibiotics (against bacteria related to those which cause acne) would replace surgery for a significant portion of persons with chronic lower back pain.

More from Ian Sample’s article in The Guardian at:
http://www.guardian.co.uk/society/2013/may/07/antibiotics-cure-back-pain-patients

Thursday, May 9, 2013

New Approach to Restless Legs Syndrome

Restless Legs Syndrome, Insomnia And Brain Chemistry: A Tangled Mystery Solved?
Johns Hopkins – May 7, 2013


Johns Hopkins researchers believe they may have discovered an explanation for the sleepless nights associated with restless legs syndrome (RLS), a symptom that persists even when the disruptive, overwhelming nocturnal urge to move the legs is treated successfully with medication.

Neurologists have long believed RLS is related to a dysfunction in the way the brain uses the neurotransmitter dopamine, a chemical used by brain cells to communicate and produce smooth, purposeful muscle activity and movement. Disruption of these neurochemical signals, characteristic of Parkinson’s disease, frequently results in involuntary movements. Drugs that increase dopamine levels are mainstay treatments for RLS, but studies have shown they don’t significantly improve sleep. An estimated 5 percent of the U.S. population has RLS.

The small new study, headed by Richard P. Allen, Ph.D., an associate professor of neurology at the Johns Hopkins University School of Medicine, used MRI to image the brain and found glutamate — a neurotransmitter involved in arousal — in abnormally high levels in people with RLS. The more glutamate the researchers found in the brains of those with RLS, the worse their sleep.

The findings are published in the May issue of the journal Neurology.

"We may have solved the mystery of why getting rid of patients’ urge to move their legs doesn’t improve their sleep," Allen says. "We may have been looking at the wrong thing all along, or we may find that both dopamine and glutamate pathways play a role in RLS."

For the study, Allen and his colleagues examined MRI images and recorded glutamate activity in the thalamus, the part of the brain involved with the regulation of consciousness, sleep and alertness. They looked at images of 28 people with RLS and 20 people without. The RLS patients included in the study had symptoms six to seven nights a week persisting for at least six months, with an average of 20 involuntary movements a night or more.

The researchers then conducted two-day sleep studies in the same individuals to measure how much rest each person was getting. In those with RLS, they found that the higher the glutamate level in the thalamus, the less sleep the subject got. They found no such association in the control group without RLS.
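The within-group association described here, higher thalamic glutamate going with less sleep in the RLS group but not in the controls, is the kind of result a simple correlation analysis would show. The Python sketch below uses entirely made-up numbers to illustrate that type of analysis; it is not the study's data.

import numpy as np

# Hypothetical (made-up) data: thalamic glutamate level vs. hours of sleep.
rls_glutamate   = np.array([1.1, 1.3, 1.5, 1.8, 2.0, 2.3])
rls_sleep_hours = np.array([6.2, 5.9, 5.5, 5.1, 4.8, 4.4])   # less sleep at higher levels
ctl_glutamate   = np.array([1.0, 1.2, 1.4, 1.7, 1.9, 2.2])
ctl_sleep_hours = np.array([7.1, 6.8, 7.3, 6.9, 7.2, 7.0])   # no clear trend

r_rls = np.corrcoef(rls_glutamate, rls_sleep_hours)[0, 1]
r_ctl = np.corrcoef(ctl_glutamate, ctl_sleep_hours)[0, 1]
print(f"RLS group correlation: {r_rls:.2f}")       # strongly negative
print(f"Control group correlation: {r_ctl:.2f}")   # near zero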

Previous studies have shown that even though RLS patients average less than 5.5 hours of sleep per night, they rarely report problems with excessive daytime sleepiness. Allen says the lack of daytime sleepiness is likely related to the role of glutamate, too much of which can put the brain in a state of hyperarousal — day or night.

If confirmed, the study’s results may change the way RLS is treated, Allen says, potentially erasing the sleepless nights that are the worst side effect of the condition. Dopamine-related drugs currently used in RLS do work, but many patients eventually lose the drug benefit and require ever higher doses. When the doses get too high, the medication actually can make the symptoms much worse than before treatment. Scientists don’t fully understand why drugs that increase the amount of dopamine in the brain would work to calm the uncontrollable leg movement of RLS.

Allen says there are already drugs on the market, such as the anticonvulsive gabapentin enacarbil, that can reduce glutamate levels in the brain, but they have not been given as a first-line treatment for RLS patients.

RLS wreaks havoc on sleep because lying down and trying to relax activates the symptoms. Most people with RLS have difficulty falling asleep and staying asleep. Only getting up and moving around typically relieves the discomfort. The sensations range in severity from uncomfortable to irritating to painful.

"It’s exciting to see something totally new in the field — something that really makes sense for the biology of arousal and sleep," Allen says.

As more is understood about this neurobiology, the findings may not only apply to RLS, he says, but also to some forms of insomnia.

The study was funded in part by the National Institutes of Health’s National Institute of Neurological Disorders and Stroke (R01 NS075184 and NS044862), the National Institute on Aging (P10-AG21190) and the National Center for Research Resources (M01RR02719).

Allen has received honoraria for serving on advisory boards from Impax, Pfizer and UCB, and honoraria for scientific lectures, consultancy and research support from UCB, GSK, Pfizer and Pharmacosmos.

Other Johns Hopkins researchers involved in the study include Peter B. Barker, D.Phil.; Alena Horska, Ph.D.; and Christopher J. Earley, M.D., Ph.D.

http://www.hopkinsmedicine.org/news/media/index.html