Wednesday, December 31, 2014

New Year's Eve

In the Gregorian calendar, New Year's Eve (also known as Old Year's Day or Saint Sylvester’s Day in many countries), the last day of the year, is on December 31. In many countries, New Year's Eve is celebrated at evening social gatherings, where many people dance, eat, drink alcoholic beverages, and watch or light fireworks to mark the new year. Some people attend a watchnight service.  The celebrations generally go on past midnight into January 1 (New Year’s Day).

Kiribati is the first country to welcome the New Year while Honolulu, Hawaii, in the United States of America is among the last.

New Year’s Eve in the United States

In the United States, New Year's Eve is celebrated with formal parties, family-oriented activities, and other large public events.

One of the most prominent celebrations in the country is the "ball drop" held in New York City’s Times Square.  Inspired by the time balls that were formerly used as a time signal, at 11:59 p.m. ET, an 11,875-pound (5,386 kg), 12-foot (3.7 m) diameter Waterford crystal ball located on the roof of One Times Square is lowered down a 70-foot pole, reaching the roof of the building one minute later to signal the start of the New Year. The Ball Drop has been held since 1907, and in recent years has averaged around a million spectators annually. The popularity of the spectacle has also inspired similar “drop” events outside of New York City, which often use objects that represent a region's culture, geography, or history—such as Atlanta’s "Peach Drop", representing Georgia’s identity as the "Peach State," or Nashville's "Music Note Drop".

The portrayal of festivities on radio and television has helped ingrain certain aspects of the celebration in American pop culture; beginning on the radio in 1928, and on CBS television from 1956 to 1976 (which also included coverage of the ball drop), Guy Lombardo and his band, The Royal Canadians, presented an annual New Year's Eve broadcast from the ballroom of New York's Waldorf-Astoria Hotel. The broadcasts were also well known for the Royal Canadians' signature performance of "Auld Lang Syne" at midnight, which helped popularize the song as a New Year's standard.  After Lombardo's death in 1977, prominence shifted towards ABC’s special Dick Clark’s New Year’s Rockin’ Eve (which had recently moved from NBC), originally intended by its creator and host Dick Clark to be a modern and youthful alternative to Lombardo's big band music. Including ABC’s special coverage of the year 2000, Clark hosted New Year's Eve coverage on ABC for 33 straight years. After suffering a stroke, Clark ceded hosting duties in 2005 to talk show host Regis Philbin. Although Clark returned the following year, a speech impediment caused by the stroke prevented him from resuming duties as the main host. Until his death in April 2012, Clark made limited appearances on the show as a co-host; he was formally succeeded by Ryan Seacrest.

New Year's Eve is traditionally the busiest day of the year at Walt Disney World Resort in Florida and Disneyland in Anaheim, California, where the parks stay open late and the usual nightly fireworks are supplemented by an additional New Year's Eve-specific show at midnight.

Los Angeles, a city long without a major public New Year celebration, held its first such gathering on December 31, 2013, in Downtown’s newly completed Grand Park. The event included food trucks, art installations, and various color and light shows, culminating with a massive light projection onto the side of Los Angeles City Hall which counted down to midnight with the crowd. The event drew over 25,000 spectators and participants, and is expected to rival other major cities' festivities in years to come.

Religious observances

In the Roman Catholic Church, January 1 is a solemnity honoring the Blessed Virgin Mary, the Mother of Jesus; it is a Holy Day of Obligation in most countries (Australia being a notable exception), thus the Church requires the attendance of all Catholics in such countries for Mass that day. However, a vigil Mass may be held on the evening before a Holy Day; thus it has become customary to celebrate Mass on the evening of New Year's Eve. (New Year's Eve is a feast day honoring Pope Sylvester I in the Roman Catholic calendar, but it is not widely recognized in the United States.)

Many Christian congregations have New Year's Eve watchnight services. Some, especially Lutherans and Methodists and those in the African American community, have a tradition known as "Watch Night", in which the faithful congregate in services continuing past midnight, giving thanks for the blessings of the outgoing year and praying for divine favor during the upcoming year. In the English-speaking world, Watch Night can be traced back to John Wesley, the founder of Methodism, who learned the custom from the Moravian Brethren who came to England in the 1730s. Moravian congregations still observe the Watch Night service on New Year's Eve. Watch Night took on special significance to African Americans on New Year's Eve 1862, as slaves anticipated the arrival of January 1, 1863, when Lincoln had announced he would sign the Emancipation Proclamation.

Tuesday, December 30, 2014

Dawn Nears Asteroid Belt

Dawn Spacecraft Begins Approach to Dwarf Planet Ceres
NASA -- Dec 30, 2014

NASA's Dawn spacecraft has entered an approach phase in which it will continue to close in on Ceres, a Texas-sized dwarf planet never before visited by a spacecraft. Dawn launched in 2007 and is scheduled to enter Ceres orbit in March 2015.

"Ceres is almost a complete mystery to us," said Christopher Russell, principal investigator for the Dawn mission, based at the University of California, Los Angeles. "Ceres has no meteorites linked to it to help reveal its secrets. All we can predict with confidence is that we will be surprised."

The next couple of months promise continually improving views of Ceres, prior to Dawn's arrival. By the end of January, the spacecraft's images and other data will be the best ever taken of the dwarf planet.

Dawn recently emerged from solar conjunction, in which the spacecraft is on the opposite side of the sun, limiting communication with antennas on Earth. Now that Dawn can reliably communicate with Earth again, mission controllers have programmed the maneuvers necessary for the next stage of the rendezvous, which they label the Ceres approach phase. Dawn is currently 400,000 miles (640,000 kilometers) from Ceres, approaching it at around 450 miles per hour (725 kilometers per hour).
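
The quoted figures allow a rough sanity check of the timeline. Treating the closing speed as constant (a simplification, since the ion engine continuously reshapes the trajectory during approach), a short calculation puts the approach on the order of a month, consistent with a March orbit insertion:

```python
# Back-of-the-envelope approach time from the figures quoted above.
# Assumes a constant closing speed, which an ion-thrust trajectory does
# not actually maintain -- this is an order-of-magnitude check only.
distance_km = 640_000          # current distance to Ceres
closing_speed_kmh = 725        # current closing speed

hours = distance_km / closing_speed_kmh
days = hours / 24
print(f"~{days:.0f} days at constant speed")   # ~37 days
```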

The spacecraft's arrival at Ceres will mark the first time that a spacecraft has ever orbited two solar system targets. Dawn previously explored the protoplanet Vesta for 14 months, from 2011 to 2012, capturing detailed images and data about that body.

The two planetary bodies are thought to be different in a few important ways. Ceres may have formed later than Vesta, and with a cooler interior. Current evidence suggests that Vesta only retained a small amount of water because it formed earlier, when radioactive material was more abundant, which would have produced more heat. Ceres, in contrast, has a thick ice mantle and may even have an ocean beneath its icy crust.

Ceres, with an average diameter of 590 miles (950 kilometers), is also the largest body in the asteroid belt, the strip of solar system real estate between Mars and Jupiter. By comparison, Vesta has an average diameter of 326 miles (525 kilometers), and is the second most massive body in the belt. 

The spacecraft uses ion propulsion to traverse space far more efficiently than if it used chemical propulsion. In an ion propulsion engine, an electrical charge is applied to xenon gas, and charged metal grids accelerate the xenon particles out of the thruster. These particles push back on the thruster as they exit, creating a reaction force that propels the spacecraft. Dawn has now completed five years of accumulated thrust time, far more than any other spacecraft.
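
The physics in the paragraph above reduces to momentum exchange: thrust equals the propellant mass flow rate times the exhaust velocity. The thruster figures below are approximate values typical of a Dawn-class ion engine, assumed here for illustration rather than taken from mission documentation:

```python
# Thrust from momentum exchange: F = mdot * v_exhaust.
# Illustrative figures for a Dawn-class ion thruster (assumed values).
g0 = 9.81            # m/s^2, standard gravity
isp = 3100           # s, specific impulse typical of such thrusters
thrust = 0.09        # N, roughly the weight of a sheet of paper

v_exhaust = isp * g0                 # exhaust velocity, ~30 km/s
mdot = thrust / v_exhaust            # xenon mass consumed per second
print(f"exhaust velocity: {v_exhaust / 1000:.1f} km/s")
print(f"xenon flow: {mdot * 1e6:.2f} mg/s")
```

Such a tiny flow rate, only a few milligrams of xenon per second, is why an ion engine can thrust for years on a modest propellant supply, while a chemical engine exhausts its propellant in minutes.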

"Orbiting both Vesta and Ceres would be truly impossible with conventional propulsion. Thanks to ion propulsion, we're about to make history as the first spaceship ever to orbit two unexplored alien worlds," said Marc Rayman, Dawn's chief engineer and mission director, based at NASA's Jet Propulsion Laboratory in Pasadena, California.

Half-Light and Half-Matter

Study Unveils New Half-Light
Half-Matter Quantum Particles
December 29, 2014 | City College

Prospects of developing computing and communication technologies based on quantum properties of light and matter may have taken a major step forward thanks to research by City College of New York physicists led by Dr. Vinod Menon.

In a pioneering study, Professor Menon and his team were able to discover half-light, half-matter particles in atomically thin semiconductors (thickness ~ a millionth of a single sheet of paper) consisting of a two-dimensional (2D) layer of molybdenum and sulfur atoms arranged similarly to graphene. They sandwiched this 2D material in a light-trapping structure to realize these composite quantum particles.

“Besides being a fundamental breakthrough, this opens up the possibility of making devices which take the benefits of both light and matter,” said Professor Menon.

For example, one can start envisioning logic gates and signal processors that take on the best of both light and matter. The discovery is also expected to contribute to developing practical platforms for quantum computing.

Dr. Dirk Englund, a professor at MIT whose research focuses on quantum technologies based on semiconductor and optical systems, hailed the City College study.

“What is so remarkable and exciting in the work by Vinod and his team is how readily this strong coupling regime could actually be achieved. They have shown convincingly that by coupling a rather standard dielectric cavity to exciton–polaritons in a monolayer of molybdenum disulphide, they could actually reach this strong coupling regime with a very large binding strength,” he said.
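
The "strong coupling regime" Englund describes is conventionally modeled as two coupled oscillators, a cavity photon and an exciton, whose hybrid eigenstates are the upper and lower polaritons. The energies in the sketch below are illustrative assumptions, not the values measured in the study:

```python
import numpy as np

# Two-level model of strong light-matter coupling: diagonalizing the
# 2x2 Hamiltonian gives the upper and lower polariton energies.
# All numbers are illustrative assumptions, not measured values.
E_cavity = 1.90     # eV, cavity photon energy
E_exciton = 1.90    # eV, exciton energy (zero detuning assumed)
g = 0.023           # eV, light-matter coupling strength

H = np.array([[E_cavity, g],
              [g, E_exciton]])
lower, upper = np.linalg.eigvalsh(H)     # eigenvalues in ascending order
print(f"Rabi splitting: {(upper - lower) * 1000:.0f} meV")   # 2g at zero detuning
```

At zero detuning the two branches are split by exactly twice the coupling strength; observing a splitting larger than the linewidths of the photon and exciton is the usual experimental signature of strong coupling.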

Professor Menon’s research team included City College PhD students Xiaoze Liu, Tal Galfsky and Zheng Sun, and scientists from Yale University, National Tsing Hua University (Taiwan) and École Polytechnique de Montréal (Canada).

The study appears in the January issue of the journal “Nature Photonics.”

Sunday, December 28, 2014

Copyright Is a Sacred Monopoly

Copyright is a legal right created by the law of a country that grants the creator of an original work exclusive rights to its use and distribution, usually for a limited time, with the intention of enabling the creator (e.g. the photographer of a photograph or the author of a book) to receive compensation for their intellectual effort.  [It is distinguished from copywriting, which is the use of words to promote or advertise].

Copyright is a form of intellectual property, applicable to any expressed representation of a creative work. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rightsholders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and “moral rights” such as attribution.

As far back as 1787, the United States Constitution provided for the protection of copyrights "to promote the Progress of Science and useful Arts." The contemporary intent of copyright is to promote the creation of new works by giving authors control of and profit from them. Copyrights are said to be territorial, which means that they do not extend beyond the territory of a specific state unless that state is a party to an international agreement. Today, however, this is less relevant since most countries are parties to at least one such agreement. While many aspects of national copyright laws have been standardized through international copyright agreements, copyright laws of most countries have some unique features. Typically, the duration of copyright is the whole life of the creator plus fifty to a hundred years from the creator's death, or a finite period for anonymous or corporate creations. Some jurisdictions have required formalities to establish copyright, but most recognize copyright in any completed work, without formal registration. Generally, copyright is enforced as a civil matter, though some jurisdictions do apply criminal sanctions.
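
The "life plus fifty to a hundred years" arithmetic can be made concrete. The sketch below assumes a life-plus-70 rule with the term running to the end of the calendar year, as many jurisdictions do; actual terms vary by country, work type, and transitional rules, so treat this as an illustration only:

```python
# Hypothetical illustration of a "life plus N years" copyright term.
# Real rules differ by jurisdiction; this shows only the arithmetic.
def public_domain_year(death_year: int, term_years: int = 70) -> int:
    """Year a work becomes free under a life+term rule.

    Assumes the term runs to the end of the calendar year, so the
    work enters the public domain on January 1 of the following year.
    """
    return death_year + term_years + 1

# Chopin died in 1849; under a life+70 rule his works would have
# entered the public domain in 1920.
print(public_domain_year(1849))   # 1920
```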

Most jurisdictions recognize copyright limitations, allowing "fair" exceptions to the creator's exclusivity of copyright, and giving users certain rights. The development of digital media and computer network technologies has prompted reinterpretation of these exceptions, introduced new difficulties in enforcing copyright, and inspired additional challenges to copyright law's philosophic basis. Simultaneously, businesses with great economic dependence upon copyright, such as those in the music business, have advocated the extension and expansion of their intellectual property rights, and sought additional legal and technological enforcement.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

The above text is part of Wikipedia’s entry on Copyright

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Copyright and Legal Status [for Musical Compositions]

Copyright is a government-granted monopoly which, for a limited time, gives a composition's owner—such as a composer or a composer's employer, in the case of work for hire—a set of exclusive rights to the composition, such as the exclusive right to publish sheet music describing the composition and how it should be performed. Copyright requires anyone else wanting to use the composition in the same ways to obtain a license (permission) from the owner.

In some jurisdictions, the composer can assign copyright, in part, to another party. Often, composers who aren't doing business as publishing companies themselves will temporarily assign their copyright interests to formal publishing companies, granting those companies a license to control both the publication and the further licensing of the composer's work. Contract law, not copyright law, governs these composer–publisher contracts, which ordinarily involve an agreement on how profits from the publisher's activities related to the work will be shared with the composer in the form of royalties.

The scope of copyright in general is defined by various international treaties and their implementations, which take the form of national statutes, and in common law jurisdictions, case law. These agreements and corresponding body of law distinguish between the rights applicable to sound recordings and the rights applicable to compositions. For example, Beethoven’s 9th Symphony is in the public domain, but in most of the world, recordings of particular performances of that composition usually are not.

For copyright purposes, song lyrics and other performed words are considered part of the composition, even though they may have different authors and copyright owners than the non-lyrical elements.

Many jurisdictions allow for compulsory licensing of certain uses of compositions. For example, copyright law may allow a record company to pay a modest fee to a copyright collective to which the composer or publisher belongs, in exchange for the right to make and distribute CDs containing a cover band's performance of the composer or publisher's compositions. The license is "compulsory" because the copyright owner cannot refuse or set terms for the license. Copyright collectives also typically manage the licensing of public performances of compositions, whether by live musicians or by transmitting sound recordings over radio or the Internet.

In the U.S.
Even though the first US copyright laws did not include musical compositions, they were added as part of the Copyright Act of 1831.

 = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

The above text is part of Wikipedia’s entry on musical composition


Saturday, December 27, 2014

Digital Piano Basics

A digital piano (sometimes incorrectly referred to as an electric piano) is a modern electronic musical instrument, different from the electronic keyboard, designed to serve primarily as an alternative to the traditional acoustic piano, both in the way it feels to play and in the sound produced. It is intended to provide an accurate simulation of an acoustic piano. Some digital pianos are also designed to look like an acoustic piano. While digital pianos may fall short of a real piano in feel and sound, they nevertheless have other advantages over acoustic pianos.


The following is a non-exhaustive list of advantages offered by digital pianos over acoustic pianos:

  • Sound level can be adjusted, and headphones can be used. This makes it possible to practice at times (and in places) where the sound of the instrument would disturb other people.
  • Compared to acoustic pianos, digital pianos are generally less expensive and also cheaper to maintain (they do not require regular tunings).
  • They are less sensitive to changes in room climate and can be used for practice in places like basements.
  • They are much more likely to incorporate a MIDI implementation.
  • They may have more features to assist in learning and composition.
  • They often have a transposition feature.
  • They do not require the use of microphones, eliminating the problem of audio feedback in sound reinforcement, as well as simplifying the recording process.
  • Most models are smaller and considerably lighter, though large ones exist as well. Some are portable, weighing less than 20 lbs.
  • Depending on the individual features of each digital piano, they may include many more instrument sounds including strings, guitars, organs, and more.


In most implementations, a digital piano produces a variety of piano timbres and usually other sounds as well. For example, a digital piano may have settings for a concert grand piano, an upright piano, a tack piano, and various electric pianos such as the Fender Rhodes and Wurlitzer. Some digital pianos incorporate other basic "synthesizer" sounds such as string ensemble, for example, and offer settings to combine them with piano.

The sounds produced by a digital piano are samples stored in ROM. The samples stored in digital pianos are usually of very high quality and made using world-class pianos, expensive microphones, and high-quality preamps in a professional recording studio. ROM may include multiple samples for the same keystroke, attempting to reproduce the variation heard on a real piano, but the number of these recorded alternatives is limited. Some implementations, like the Roland V-Piano, use mathematical models of a real piano to generate sounds that vary more freely depending on how the keys have been struck.
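
The "multiple samples for the same keystroke" idea can be sketched as a lookup keyed on how hard a note is struck. The layer boundaries and names below are invented for illustration; real instruments use more layers and typically crossfade between them:

```python
from bisect import bisect_right

# Sketch of velocity-layer sample selection: each layer holds a sample
# recorded at a different striking force on the source piano.
# Boundaries and layer names here are illustrative assumptions.
LAYER_BOUNDS = [32, 64, 96]          # upper edges of the first three layers
LAYERS = ["pp", "mp", "f", "ff"]     # soft to hard samples

def pick_layer(velocity: int) -> str:
    """Map a MIDI velocity (0-127) to a recorded sample layer."""
    return LAYERS[bisect_right(LAYER_BOUNDS, velocity)]

print(pick_layer(20), pick_layer(70), pick_layer(120))   # pp f ff
```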

Digital pianos do have limitations on the faithfulness with which they reproduce the sound of an acoustic piano. These include the lack of implementation of harmonic tones that result when certain combinations of notes are sounded, limited polyphony, and a lack of natural reverberation when the instrument is played percussively. They often lack the incidental acoustic noises associated with piano playing, such as the sounds of pedals being depressed and the associated machinery shifting within the piano, which some actually consider a benefit. These limitations apply to most acoustic instruments and their sampled counterparts, the difference often being described as "visceral".

On an acoustic piano, the sustain pedal lifts the dampers for all strings, allowing them to resonate naturally with the notes played. Digital pianos all have a similar pedal switch to hold notes in suspension, but only some can reproduce the resonating effect.

Many digital pianos include an amplifier and loudspeakers so that no additional equipment is required to play the instrument. Some do not. Most digital pianos incorporate a headphone output.


Since the inception of the MIDI [Musical Instrument Digital Interface] interface standard in the early 1980s, most digital pianos can be connected to a computer. With appropriate software, the computer can handle sound generation, mixing of tracks, music notation, musical instruction, and other music composition tasks.
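
At the wire level, the MIDI messages a digital piano exchanges with a computer are just short byte sequences. The sketch below builds a Note On message following the standard MIDI 1.0 format; no hardware is needed to inspect the bytes:

```python
# A MIDI 1.0 Note On message is three bytes: a status byte (0x90 plus
# the channel number), the note number, and the key velocity.
def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 1, struck moderately hard:
msg = note_on(0, 60, 100)
print(msg.hex())   # 903c64
```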

Friday, December 26, 2014

Breakthrough in Gum Disease

Researchers Shed Light on How ‘Microbial
Dark Matter’ Might Cause Disease
Breakthrough by Scientists from UCLA, J. Craig Venter Institute and University of Washington May Be Roadmap for Study of Other Elusive Bacteria
by Brian Aldrich, UCLA, December 23, 2014

One of the great recent discoveries in modern biology was that the human body contains 10 times more bacterial cells than human cells. But much of that bacterial population is still a puzzle to scientists.

Scientists estimate that roughly half of the bacteria living in human bodies are difficult to cultivate for scientific research — which is why biologists call them “microbial dark matter.” Scientists, however, have long been determined to learn more about these uncultivable bacteria, because they may contribute to the development of certain debilitating and chronic diseases.

For decades, one bacterial group that has posed a particular challenge for researchers is the Candidate Phylum TM7, which has been thought to cause inflammatory mucosal diseases because it is so prevalent in people with periodontitis, an infection of the gums.

Now, a landmark discovery by scientists at the UCLA School of Dentistry, the J. Craig Venter Institute and the University of Washington School of Dentistry has revealed insights into TM7’s resistance to scientific study and to its role in the progression of periodontitis and other diseases. Their findings shed new light on the biological, ecological and medical importance of TM7, and could lead to better understanding of other elusive bacteria.

The team’s findings are published online in the December issue of the Proceedings of the National Academy of Sciences.

“I consider this the most exciting discovery in my 30-year career,” said Dr. Wenyuan Shi, a UCLA professor of oral biology. “This study provides the roadmap for us to make every uncultivable bacterium cultivable.”

The researchers cultivated a specific type of TM7 called TM7x, a version of TM7 found in people’s mouths, and found the first known proof of a signaling interaction between the bacterium and an infectious agent called Actinomyces odontolyticus, or XH001, which causes mucosal inflammation.

“Once the team grew and sequenced TM7x, we could finally piece together how it makes a living in the human body,” said Dr. Jeff McLean, acting associate professor at the University of Washington School of Dentistry. “This may be the first example of a parasitic long-term attachment between two different bacteria — where one species lives on the surface of another species gaining essential nutrients and then decides to thank its host by attacking it.”

To prove that TM7x needs XH001 to grow and survive, the team attempted to mix isolated TM7x cells with other strains of bacteria. Only XH001 was able to establish a physical association with TM7x, which led researchers to believe that TM7x and XH001 might have evolved together during their establishment in the mouth.

What makes TM7x even more intriguing are its potential roles in chronic inflammation of the digestive tract, vaginal diseases and periodontitis. The co-cultures collected in this study allowed researchers to examine, for the first time ever, the degree to which TM7x helps cause these conditions.

“Uncultivable bacteria presents a fascinating ‘final frontier’ for dental microbiologists and are a high priority for the NIDCR research portfolio,” said Dr. R. Dwayne Lunsford, director of the National Institute of Dental and Craniofacial Research’s microbiology program. “This study provides a near-perfect case of how co-cultivation strategies and a thorough appreciation for interspecies signaling can facilitate the recovery of these elusive organisms. Although culture-independent studies can give us a snapshot of microbial diversity at a particular site, in order to truly understand physiology and virulence of an isolate, we must ultimately be able to grow and manipulate these bacteria in the lab.”

It was previously known that XH001 induces inflammation. But by infecting bone marrow cells with XH001 alone and then with the TM7x/XH001 co-culture, the researchers also found that inflammation was greatly reduced when TM7x was physically attached to XH001. This is the only known study that has provided evidence of this relationship between TM7 and XH001.

The researchers plan to further study the unique relationship between TM7x and XH001 and how they jointly cause mucosal disease. Their findings could have implications for potential treatment and therapeutics.

Thursday, December 25, 2014

Churchill's 1941 Christmas Message

Christmas Message 1941

24 December 1941

Washington, D.C.

Shortly after the Japanese attack on Pearl Harbor, December 7, 1941, Churchill went to Washington with his chiefs of staff to meet President Roosevelt and the American military leaders and coordinate plans for the defeat of the common enemy.  On Christmas Eve Churchill broadcast to the world from the White House on the 20th annual observance of the lighting of the community Christmas tree.

I spend this anniversary and festival far from my country, far from my family, yet I cannot truthfully say that I feel far from home.  Whether it be the ties of blood on my mother's side, or the friendships I have developed here over many years of active life, or the commanding sentiment of comradeship in the common cause of great peoples who speak the same language, who kneel at the same altars and, to a very large extent, pursue the same ideals, I cannot feel myself a stranger here in the centre and at the summit of the United States.  I feel a sense of unity and fraternal association which, added to the kindliness of your welcome,  convinces me that I have a right to sit at your fireside and share your Christmas joys.

This is a strange Christmas Eve.  Almost the whole world is locked in deadly struggle, and, with the most terrible weapons which science can devise, the nations advance upon each other.  Ill would it be for us this Christmastide if we were not sure that no greed for the land or wealth of any other people, no vulgar ambition, no morbid lust for material gain at the expense of others, had led us to the field.  Here, in the midst of war, raging and roaring over all the lands and seas, creeping nearer to our hearts and homes, here, amid all the tumult, we have tonight the peace of the spirit in each cottage home and in every generous heart.  Therefore we may cast aside for this night at least the cares and dangers which beset us, and make for the children an evening of happiness in a world of storm.  Here, then, for one night only, each home throughout the English-speaking world should be a brightly-lighted island of happiness and peace.

Let the children have their night of fun and laughter.  Let the gifts of Father Christmas delight their play.  Let us grown-ups share to the full in their unstinted pleasures before we turn again to the stern task and the formidable years that lie before us, resolved that, by our sacrifice and daring, these same children shall not be robbed of their inheritance or denied their right to live in a free and decent world.

And so, in God's mercy, a happy Christmas to you all.

Wednesday, December 24, 2014

An Optimistic Christmas

Merry Christmas – and I mean it.  I’d like to give you something for this holiday, and it is something that I usually have in short supply – optimism.

This optimism comes, oddly enough, from one of the most cynical cases of mismanagement that I have seen in my adult lifetime.  I’m optimistic because there is a chance we can finally escape from decades of that mismanagement.

The mismanagement started as a hare-brained scheme from the unelected Vice President, Nelson Rockefeller (remember him?!).  Rockefeller came up with an expensive energy scheme he called a $100 billion energy corporation.  This white elephant, in Rockefeller’s mind, would solve the problem of energy independence and be good for the economy.

Because of this scheme and other ideas of Rockefeller’s (going all the way back to the way he insulted Barry Goldwater during the 1964 Republican presidential primaries), Rockefeller withdrew from the 1976 race.  He wouldn’t be a candidate for Vice President.  He was out of the picture.  The unelected president, Gerald Ford, picked another candidate as running mate, Kansas Senator Bob Dole.  They lost to Jimmy Carter and Walter Mondale.

Most people don’t know this, but Jimmy Carter was a Naval Academy graduate who worked under Admiral Rickover (himself the perfectionist who built a nuclear-powered submarine fleet for the US Navy – the first successful use of nuclear power for sea-going service in the world).  Carter was an engineer.  As president, he resurrected Rockefeller’s $100 billion energy corporation as a new federal agency – the Department of Energy.

The original mission of the Department of Energy was to ensure energy independence for the U.S.A.  Within a matter of months, under Secretary James Schlesinger (himself a previous Secretary of Defense), this new department had come up with a detailed plan to achieve that energy independence.  The plan was to build nuclear power plants amid America’s coal fields (especially the huge strip mining fields in the western states).  The heat from the nuclear fission would cook the coal into a hot, pressurized slush that could be chemically converted into natural gas and synthetic fuels (gasoline and diesel) for transportation.  This would be a cleaner, non-coal-burning improvement on the Fischer-Tropsch process, an ingenious chemical procedure in which coal is burned to heat other coal into pressurized slush and then synthetic fuel.  Germany used that process to fuel its tanks and planes during World War II, and South Africa used it to survive the economic sanctions imposed on it because of apartheid.

America has enough coal to produce synthetic fuel for hundreds of years.  In spite of this, Schlesinger’s plan was never implemented.  By the 1980s, the Department of Energy had become a joke.  Its real work had become to act as the research arm of the Department of Defense.  President Reagan tried to kill the Energy Department, but Democrats on Capitol Hill denied him this option.

By the 1990s, not only was the U.S.A. continuing to move away from energy independence, but the nation had begun the tedious process of getting into resource wars in the Middle East (Iraq 1991, Iraq 2003 and now Iraq 2014).  As a veteran of an undeclared war we promised not to win [Vietnam], I’m very cynical about this turn of events, especially since we had a workable solution to the energy problem under Schlesinger in the 1970s.

And then there came wise men bearing gifts in the form of fracking, an inevitable improvement over conventional oil drilling.

And now there come even wiser men bearing an improved form of oil drilling that uses pressurized carbon dioxide to increase yield from a well.  A lot of that carbon dioxide is left in the well to sequester it.  The world does not produce enough carbon dioxide to satisfy the demand of this improved process.  But we can get the carbon dioxide needed for this extraction process by capturing it from coal-fired electricity generating power plants!

I’m optimistic about this.  Here’s a long explanation that is worth looking into:

Tuesday, December 23, 2014

What Killed Chopin?

"Hats off gentlemen, a genius!" - Robert Schumann

"A really perfect virtuoso" - Felix Mendelssohn

"A sickroom talent" - John Field

"He shines lonely, peerless in the firmament of art" - Franz Liszt

   ~~ great 19th century pianists talking about Frederic Chopin

Though Chopin is buried in Paris, his heart is preserved in cognac at the Church of the Holy Cross in Warsaw, Poland. He supposedly died of tuberculosis, but his symptoms also fit a number of other diseases known to modern medicine. Living descendants of Chopin's family will not allow the heart preserved in cognac to be tested with modern scientific techniques.

So we may never attain confirmation of how the great pianist and composer died.

The current issue of BBC News Magazine has an article by Marek Purszewicz on the mystery of Chopin’s death.  The link is here:

Monday, December 22, 2014

German Hyperinflation 1921-23

The hyperinflation in the Weimar Republic (modern-day Germany) was a three-year period of runaway currency devaluation between June 1921 and January 1924.


In order to pay the large costs of the First World War, Germany suspended the convertibility of its currency into gold when that war broke out. Unlike France, which imposed its first income tax to pay for the war, the German Kaiser and Parliament decided without opposition to fund the war entirely by borrowing, a decision criticized by financial experts like Hjalmar Schacht even before hyperinflation broke out. The result was that the exchange rate of the Mark against the US dollar fell steadily throughout the war from 4.2 to 8.91 Marks per dollar. The Treaty of Versailles further accelerated the decline in the value of the Mark, so that by the end of 1919 more than 6.7 paper Marks were required to buy one US dollar.

German currency was relatively stable at about 90 Marks per US Dollar during the first half of 1921. Because the fighting in the Western theatre had taken place mostly in France and Belgium, Germany had come out of the war with most of its industrial power intact, a healthy economy, and a better position to become the dominant force on the European continent. However, the "London ultimatum" of May 1921 demanded reparations in gold or foreign currency to be paid in annual installments of 2,000,000,000 (2 billion) goldmarks plus 26 percent of the value of Germany's exports.

The first payment was made when due in June 1921. That was the beginning of an increasingly rapid devaluation of the Mark which fell to less than one third of a cent by November 1921 (approx. 330 Marks per US Dollar). The total reparations demanded was 132,000,000,000 (132 billion) gold marks, of which Germany only had to pay 50 billion marks (a sum less than what they had offered to pay).

Because reparations were required to be repaid in hard currency and not the rapidly depreciating Papiermark, one strategy Germany employed was the mass printing of bank notes to buy foreign currency which was in turn used to pay reparations. This greatly exacerbated the inflation rates of the paper mark.


Beginning in August 1921, Germany began to purchase foreign currency with Marks at any price, but that only increased the speed at which the Mark declined in value. The lower the Mark sank on foreign exchanges, the more marks were required to buy the foreign currency demanded by the Reparations Commission.

During the first half of 1922, the Mark stabilized at about 320 Marks per Dollar. This was accompanied by international reparations conferences, including one in June 1922 organized by U.S. investment banker J.P. Morgan, Jr. When these meetings produced no workable solution, the inflation changed to hyperinflation and the Mark fell to 800 Marks per Dollar by December 1922. The cost-of-living index rose from 41 in June 1922 to 685 in December, a nearly 17-fold increase.

In January 1923, French and Belgian troops occupied the Ruhr, Germany's main industrial region, to ensure that the reparations were paid in goods, such as coal from the Ruhr and other industrial zones of Germany. Because the Mark was practically worthless, it became impossible for Germany to buy foreign exchange or gold using paper Marks, so reparations were paid in goods instead. Inflation was exacerbated when workers in the Ruhr went on a general strike and the German government printed more money in order to continue paying them for "passively resisting."

By November 1923, the American dollar was worth 4,210,500,000,000 German marks.

As a result of hyperinflation, there were news accounts of individuals in Germany suffering from a compulsion called zero stroke, a condition where the person has a "desire to write endless rows of [zeros] and engage in computations more involved than the most difficult problems in logarithms."


When a new currency, the Rentenmark, replaced the worthless Reichsbank marks on November 16, 1923 and 12 zeros were cut from prices, prices in the new currency remained stable. The German people regarded this stable currency as a miracle because they had heard such claims of stability before with the Notgeld (emergency money) that rapidly devalued as an additional source of inflation. The usual explanation was that the Rentenmarks were issued in a fixed amount and were backed by hard assets such as agricultural land and industrial assets, but what happened was more complex than that, as summarized in the following description.

In August 1923, Karl Helfferich proposed a plan to issue a new currency (Roggenmark) backed by mortgage bonds indexed to market prices (in paper Marks) of rye grain. His plan was rejected because of the greatly fluctuating price of rye in paper Marks. The Agriculture Minister Hans Luther proposed a different plan which substituted gold for rye and a new currency, the Rentenmark, backed by bonds indexed to market prices (in paper Marks) of gold.

The gold bonds were defined at the rate of 2790 gold Marks per kilogram of gold, which was the same definition as the pre-war goldmarks. The rentenmarks were not redeemable in gold, but were only indexed to the gold bonds. This rentenmark plan was adopted in monetary reform decrees on October 13–15, 1923 that set up a new bank, the Rentenbank controlled by Hans Luther who had become the new Finance Minister.

After November 12, 1923, when Hjalmar Schacht became currency commissioner, the Reichsbank, the old central bank, was not allowed to discount any further government Treasury bills, which meant the corresponding issue of paper marks also ceased. Discounting of commercial trade bills was allowed and the amount of Rentenmarks expanded, but the issue was strictly controlled to conform to current commercial and government transactions. The new Rentenbank refused credit to the government and to speculators, who were not able to borrow Rentenmarks because Rentenmarks were not legal tender. When Reichsbank president Rudolf Havenstein died on November 20, 1923, Schacht was appointed president of the Reichsbank. By November 30, 1923, there were 500 million Rentenmarks in circulation, which increased to 1 billion by January 1, 1924, and again to 1.8 billion Rentenmarks by July 1924. Meanwhile, the old paper Marks continued in circulation. The total paper Marks increased to 1.2 sextillion (1,200,000,000,000,000,000,000) in July 1924 and continued to fall in value to one third of their conversion value in Rentenmarks.

The monetary law of August 30, 1924 permitted exchange of each old paper 1 trillion Mark note for one new Reichsmark, equivalent in value to one Rentenmark.
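The redenomination arithmetic is easy to check. As a minimal sketch using only the figures quoted above, cutting 12 zeros from the November 1923 exchange rate lands almost exactly on the pre-war parity of 4.2 Marks per dollar:

```python
# Redenomination sketch: 1 Rentenmark (later 1 Reichsmark) = 1 trillion paper Marks.
# All figures are the ones quoted in the text above.
PAPER_MARKS_PER_RENTENMARK = 10**12

def redenominate(paper_marks: float) -> float:
    """Convert a price in old paper Marks to the new currency."""
    return paper_marks / PAPER_MARKS_PER_RENTENMARK

# November 1923: one US dollar cost 4,210,500,000,000 paper Marks.
dollar_in_paper_marks = 4_210_500_000_000

dollar_in_rentenmarks = redenominate(dollar_in_paper_marks)
print(dollar_in_rentenmarks)  # 4.2105 -- essentially the pre-war rate of 4.2 Marks per dollar
```

That near-match was no accident: the new currency was deliberately set so that the dollar exchange rate returned to its familiar pre-war level.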


The hyperinflation episode in the Weimar Republic in the early 1920s was not the first hyperinflation, nor the first in Europe, nor even the most extreme instance of inflation in history (the Hungarian pengő and the Zimbabwean dollar have both been inflated even further), though it is probably the most famous. As the most prominent case following the emergence of economics as a scholarly discipline, the Weimar hyperinflation drew interest in a way that previous instances had not. Many of the dramatic and unusual economic behaviors now associated with hyperinflation were first documented systematically in Germany: order-of-magnitude increases in prices and interest rates, redenomination of the currency, consumer flight from cash to hard assets, and the rapid expansion of the industries that produced those assets. German monetary economics was then highly influenced by Chartalism and the German Historical School, and this conditioned the way the hyperinflation was usually analyzed.

John Maynard Keynes described the situation in The Economic Consequences of the Peace: "The inflationism of the currency systems of Europe has proceeded to extraordinary lengths. The various belligerent Governments, unable, or too timid or too short-sighted to secure from loans or taxes the resources they required, have printed notes for the balance."

It was during this period of hyperinflation that French and British economic experts began to claim that Germany destroyed its economy with the purpose of avoiding reparations, but both governments had conflicting views on how to handle the situation. The French declared that Germany should keep paying reparations, while Britain sought to grant a moratorium that would allow for its financial reconstruction.

Reparations accounted for about one third of the German deficit from 1920 to 1923, and were therefore cited by the German government as one of the main causes of hyperinflation. Other causes cited included bankers and speculators (particularly foreign). The inflation reached its peak by November 1923, but ended when a new currency (the Rentenmark) was introduced. In order to make way for the new currency, banks "turned the marks over to junk dealers by the ton" to be recycled as paper.

Sunday, December 21, 2014

New NASA Propulsion Concept

NASA's Idea to Nearly Replace Rockets
RealClearScience Newton’s Blog,
Posted by Tom Hartsfield December 17, 2014

NASA is facing a problem: chemical rocket engines are about as good as they will ever get by the laws of chemistry and physics. It's becoming increasingly difficult to make them any cheaper or safer, and private companies are now doing much of that work. Embarrassing, physically impossible microwave engine pipe dreams aside, what can NASA do?

When pressed for answers by the Obama administration, NASA engineers proposed something interesting: taking some of the load off of liquid-fueled rockets. The slack is taken up by two other propulsion technologies. The first stage of the takeoff is achieved by the use of a railgun. The second is accomplished via an engine called a scramjet. First, the rail gun launches the craft up to speed. Then, the scramjet takes over and pushes the ship to one third or more of escape velocity. Finally, the traditional rocket engine takes over for the final push to orbit.

The railgun stage is a simple idea. Railguns are powered by electromagnetic physics. Two thick metal rails are connected, one to each end of a capacitor. The capacitor is an enormous storage cell for electric charge, the "fuel" for this system. Electrical energy is stored in the electric field of the capacitor by holding positive and negative charges close together but separated. So long as the two areas of the device containing positive and negative charge have no connection to one another, the device is ready to fire.

One rail of the gun is hooked to the positive charge area of the capacitor and the other to the negative charge area. When a metal object is placed across the rails, the positive and negative areas are connected by this conductive bridge. A massive bolt of charge is immediately driven through the system, flowing down one rail, across the bridging projectile, and back down the other; the pull of the positively charged capacitor area drives an enormous current of electrons.

Flowing electrical currents produce magnetic fields. The magnetic field produced in each rail is proportional to the amount of current flowing through it. For a huge current, the magnetic field can become incredibly strong. Circling each rail in opposite directions, the two fields add together constructively in the center to produce a strong upward field. A law of nature called the Lorentz force says that a current and magnetic field flowing perpendicularly produce a force in the direction perpendicular to both of them. This Lorentz force pushes the projectile down the track at tremendous speed.

The advantage of a rail gun is that it requires no chemical propellant for its energy. The entire system is powered solely by an electrical generator that produces electrons and stores them in the capacitor. This means the first stage of the rocket will not need to load the craft down with any propellant. The second stage of the system also reduces the need for rocket fuel; the fuel is supplemented by air.

The scramjet stage takes over at roughly Mach 1.5. The scramjet is a type of jet engine that operates at much higher velocities. A traditional ramjet engine works by compressing air and creating combustion within it as the air flows through. While the ramjet keeps this airflow below the speed of sound, the scramjet produces combustion in a supersonic air flow through the combustion chamber, which is far more efficient.

The simple reason that this technology is needed is that as airspeed increases, the air being forced into the engine is moving at higher and higher velocities. This requires more and more slowdown to drop back below the speed of sound for combustion, which in turn creates shockwaves. Above speeds near Mach 5, the shockwaves become so strong that they disrupt the airflow into the combustion area and restrict any greater air intake, limiting speed.

Scramjets can easily surpass this limit. NASA has tested supersonic combustion engines in experimental craft such as the X-43 and X-51 (the earlier X-15 reached hypersonic speeds on rocket power alone). These vehicles have reached speeds approaching Mach 10, roughly one-third of Earth's escape velocity. The scramjet design is theoretically capable of reaching speeds near 100% of escape velocity. Much more research and experimental testing will need to be performed before the feasibility of those speeds is known.

The challenges of this plan are very clear. First, no railgun vaguely approaching this size has ever been constructed. The Navy has built railguns capable of launching 23-pound projectiles; NASA is talking about launching projectiles of 1000 times that mass, with humans inside! Further, the rails will need to be nearly two miles long, and filling the capacitor will require a 180 megawatt power plant. On the bright side, this is mostly achievable with current technology plus research. However, it would require lots of money, initiative and a significant change in course at NASA.
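The article's figures can be sanity-checked with back-of-envelope physics. The sketch below uses the approximate numbers quoted above (a projectile 1000 times the Navy's 23-pound round, a Mach 1.5 handoff, two-mile rails, a 180 MW plant) and ignores losses and drag, so treat the results as order-of-magnitude estimates only:

```python
# Idealized railgun launch estimates from the figures quoted in the article.
LB_TO_KG = 0.453592
mass_kg = 23 * 1000 * LB_TO_KG          # "1000 times" the Navy's 23-lb projectile
handoff_speed = 1.5 * 343.0             # Mach 1.5 handoff to the scramjet, m/s
rail_length_m = 2 * 1609.34             # "nearly two miles" of rails
plant_power_w = 180e6                   # 180 megawatt power plant

kinetic_energy_j = 0.5 * mass_kg * handoff_speed**2
charge_time_s = kinetic_energy_j / plant_power_w            # ideal capacitor charging time
acceleration_g = handoff_speed**2 / (2 * rail_length_m) / 9.81  # assumes constant acceleration

print(f"launch energy ~ {kinetic_energy_j / 1e9:.1f} GJ")
print(f"ideal charge time ~ {charge_time_s:.0f} s")
print(f"acceleration ~ {acceleration_g:.1f} g")

# Quoted claim check: Mach 10 is "roughly one-third of Earth's escape velocity".
escape_velocity = 11186.0  # m/s
print(f"Mach 10 / escape velocity = {10 * 343.0 / escape_velocity:.2f}")
```

Under these idealized assumptions the ride works out to roughly 4 g along the track, within what trained humans can tolerate, which is one reason the rails need to be so long.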

The scramjet is also challenging. There are no declassified tests of a scramjet engine at speeds of Mach 10 for more than 10 seconds. Flights of even Mach 7 have never exceeded four minutes. How difficult it will be to design an engine that can run faster, longer is not at all clear.

Give NASA some credit for thinking big with this proposal. Now, let's see if they are provided the resources and can muster the gumption to really work on it, or if it's just another pie-in-the-sky dream.

Tom Hartsfield is a physics PhD candidate at the University of Texas.


Saturday, December 20, 2014

Excellent Blankets

A Hudson's Bay point blanket is a type of wool blanket traded by the Hudson's Bay Company (HBC) in British North America (now Canada) and the United States during the 18th and 19th centuries. The company is named for the famous saltwater bay in northeastern Canada, and the blankets were typically traded to First Nations and Native Americans in exchange for beaver pelts. The blankets continue to be sold by Canada's Hudson's Bay department stores and have come to hold iconic status in Canada.

Importance to Native Trade

In the North American fur trade, wool blankets were one of the main European items sought by native peoples in exchange for beaver pelts, buffalo robes, pemmican, moccasins, and other trade goods. They were desired because of wool's ability to hold heat even when wet, and because they were easier to sew than bison or deer skins.

Wool cloth of one kind or another was traded as far back as the French regime in North America (1534-1765), but HBC point blankets were introduced in 1780 to compete with similar blankets offered by the Montreal-based private traders. The blankets were often produced with a green stripe, red stripe, yellow stripe and indigo stripe on a white background; the four stripe colours were popular and easily produced using good colourfast dyes at that time.

From the early days of the fur trade, wool blankets were made into hooded coats called capotes by both natives and French Canadian voyageurs which were perfectly suited to Canada's cold winters.

Current Use

Made in England from 100% wool, versions of the blanket are available at Hudson's Bay stores throughout Canada. Solid colours are available, as is the classic pattern featuring the green, red, yellow, and indigo stripes. Newly made blankets retail at between Cdn $275 and $475. Today the blankets are made in England by John Atkinson, a sub-brand of A.W. Hainsworth & Sons Ltd.

The official licensee allowed to import Hudson's Bay blankets into the United States is Woolrich Inc. in Pennsylvania.

The coloured stripes appear on textile products by other manufacturers, including Pendleton Woolen Mills, which makes a wool coat with the Hudson's Bay stripes that is sold at Hudson's Bay stores. The "Hudson's Bay stripes" are also found on numerous additional items, such as scarves, beanies, coffee mugs, mittens, and the like.

Friday, December 19, 2014

One Protein -- Many Allergies

Multiple Allergic Reactions
Traced To Single Protein
Points to new strategy to reduce allergic
responses to many medications
Johns Hopkins, December 17, 2014

Fast Facts:

  • Mast cells are immune cells responsible for fending off pathogens and parasites, but they are also a frequent culprit in allergic reactions, including reactions to many medications.
  • Johns Hopkins researchers identified the receptor protein that induces mast cells to react to the medications.
  • If a drug can be developed that targets the receptor protein, it could potentially eliminate allergic reactions to many drugs, while leaving overall immunity intact.

Johns Hopkins and University of Alberta researchers have identified a single protein as the root of painful and dangerous allergic reactions to a range of medications and other substances. If a new drug can be found that targets the problematic protein, they say, it could help smooth treatment for patients with conditions ranging from prostate cancer to diabetes to HIV. Their results appear in the journal Nature on Dec. 17.

Previous studies traced reactions such as pain, itching and rashes at the injection sites of many drugs to part of the immune system known as mast cells. When specialized receptors on the outside of mast cells detect warning signals known as antibodies, they spring into action, releasing histamine and other substances that spark inflammation and draw other immune cells into the area. Those antibodies are produced by other immune cells in response to bacteria, viruses or other perceived threats. However, “although many of these injection site reactions look like an allergic response, the strange thing about them is that no antibodies are produced,” says Xinzhong Dong, Ph.D., an associate professor of neuroscience in the Institute for Basic Biomedical Sciences at the Johns Hopkins University School of Medicine.

To zero in on the cause of the reactions, Benjamin McNeil, Ph.D., a postdoctoral fellow in Dong’s laboratory, first set out to find which mast cell receptor — or receptors — responded to the drugs in mice. Previous studies had identified a human receptor likely to be at fault in the allergic reactions; McNeil found a receptor in mice that, like the human receptor, is found only in mast cells. He then tested that receptor by putting it into lab-grown cells and found that they did react to medications that provoke mast cell response. He found similar results for the human receptor that previous studies had indicated was a likely culprit.

“It’s fortunate that all of the drugs turn out to trigger a single receptor — it makes that receptor an attractive drug target,” McNeil says.

To find out whether eliminating the receptor really would eliminate the allergic reactions, the research team also disabled the gene for the suspect receptor in mice. These “knockout” mice did not have any of the drug allergy symptoms that their genetically normal counterparts displayed.

The researchers are now working to find compounds that could safely block the culprit receptor in humans, known as MRGPRX2. Such a drug would not prevent true allergic reactions, which produce antibodies, but only the pseudoallergic reactions triggered by MRGPRX2. Still, it could improve the lives of many patients, says McNeil, by lessening the drug side effects they currently endure. Medications that trigger MRGPRX2 include cancer drugs cetrorelix, leuprolide and octreotide; HIV drug sermorelin; fluoroquinolone antibiotics; and neuromuscular blocking drugs used to paralyze muscles during surgeries.

Dong’s research group is also looking into the possibility that MRGPRX2 could be behind immune conditions such as rosacea and psoriasis that don’t stem from medication use.

Other authors on the paper are Priyanka Pundir and Marianna Kulka of the University of Alberta, and Sonya Meeker, Liang Han and Bradley J. Undem of The Johns Hopkins University.

This study was supported by the National Institute of Neurological Disorders and Stroke (grant number R01NS054791) and the National Institute of General Medical Sciences (grant number R01GM087369). Dong is an early career scientist with the Howard Hughes Medical Institute.

Thursday, December 18, 2014

Equation for Superconductors

New Law for Superconductors
Mathematical description of relationship between thickness,
temperature, and resistivity could spur advances.
By Larry Hardesty | MIT News Office, December 16, 2014

MIT researchers have discovered a new mathematical relationship — between material thickness, temperature, and electrical resistance — that appears to hold in all superconductors. They describe their findings in the latest issue of Physical Review B.

The result could shed light on the nature of superconductivity and could also lead to better-engineered superconducting circuits for applications like quantum computing and ultralow-power computing.

“We were able to use this knowledge to make larger-area devices, which were not really possible to do previously, and the yield of the devices increased significantly,” says Yachin Ivry, a postdoc in MIT’s Research Laboratory of Electronics, and the first author on the paper.

Ivry works in the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, a professor of electrical engineering and one of Ivry’s co-authors on the paper. Among other things, the group studies thin films of superconductors.

Superconductors are materials that, at temperatures near absolute zero, exhibit no electrical resistance; this means that it takes very little energy to induce an electrical current in them. A single photon will do the trick, which is why they’re useful as quantum photodetectors. And a computer chip built from superconducting circuits would, in principle, consume about one-hundredth as much energy as a conventional chip.

“Thin films are interesting scientifically because they allow you to get closer to what we call the superconducting-to-insulating transition,” Ivry says. “Superconductivity is a phenomenon that relies on the collective behavior of the electrons. So if you go to smaller and smaller dimensions, you get to the onset of the collective behavior.”

Vexing variation

Specifically, Ivry studied niobium nitride, a material favored by researchers because, in its bulk form, it has a relatively high “critical temperature” — the temperature at which it switches from an ordinary metal to a superconductor. But like most superconductors, it has a lower critical temperature when it’s deposited in the thin films on which nanodevices rely.

Previous theoretical work had characterized niobium nitride’s critical temperature as a function of either the thickness of the film or its measured resistivity at room temperature. But neither theory seemed to explain the results Ivry was getting. “We saw large scatter and no clear trend,” he says. “It made no sense, because we grew them in the lab under the same conditions.”

So the researchers conducted a series of experiments in which they held constant either thickness or “sheet resistance,” the material’s resistance per unit area, while varying the other parameter; they then measured the ensuing changes in critical temperature. A clear pattern emerged: Thickness times critical temperature equaled a constant — call it A — divided by sheet resistance raised to a particular power — call it B.
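Written out, the law is thickness times critical temperature equals A divided by sheet resistance to the power B. As a minimal sketch (the constants A and B below are invented for illustration; the paper reports material-specific fitted values), the relation can be coded directly, and it reproduces the familiar trend that thinner films of a given material superconduct at lower temperatures:

```python
def critical_temperature(thickness_nm: float, sheet_resistance_ohm: float,
                         A: float, B: float) -> float:
    """Scaling law d * Tc = A / Rs**B, solved for Tc."""
    return A / (thickness_nm * sheet_resistance_ohm**B)

# Hypothetical fit constants for illustration only (not measured values).
A, B = 5000.0, 1.1

# For a film of fixed resistivity rho, sheet resistance rises as the film
# thins (Rs = rho / d), so the predicted Tc drops for thinner films.
rho = 300.0  # hypothetical resistivity, ohm * nm
for d in (20.0, 10.0, 5.0):
    Rs = rho / d
    Tc = critical_temperature(d, Rs, A, B)
    print(f"d = {d:4.1f} nm  Rs = {Rs:5.1f} ohm/sq  Tc = {Tc:.2f} K")
```

With B greater than 1, halving the thickness both raises the sheet resistance and shrinks the predicted critical temperature, which is the practical payoff Chapelier describes below: the law flags in advance whether a given film will make a good superconducting device.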

After deriving that formula, Ivry checked it against other results reported in the superconductor literature. His initial excitement evaporated, however, with the first outside paper he consulted. Though most of the results it reported fit his formula perfectly, two of them were dramatically awry. Then a colleague who was familiar with the paper pointed out that its authors had acknowledged in a footnote that those two measurements might reflect experimental error: When building their test device, the researchers had forgotten to turn on one of the gases they used to deposit their films.

Broadening the scope

The other niobium nitride papers Ivry consulted bore out his predictions, so he began to expand to other superconductors. Each new material he investigated required him to adjust the formula’s constants — A and B. But the general form of the equation held across results reported for roughly three dozen different superconductors.

It wasn’t necessarily surprising that each superconductor should have its own associated constant, but Ivry and Berggren weren’t happy that their equation required two of them. When Ivry graphed A against B for all the materials he’d investigated, however, the results fell on a straight line.

Finding a direct relationship between the constants allowed him to rely on only one of them in the general form of his equation. But perhaps more interestingly, the materials at either end of the line had distinct physical properties. Those at the top had highly disordered — or, technically, “amorphous” — crystalline structures; those at the bottom were more orderly, or “granular.” So Ivry’s initial attempt to banish an inelegance in his equation may already provide some insight into the physics of superconductors at small scales.

“None of the admitted theory up to now explains with such a broad class of materials the relation of critical temperature with sheet resistance and thickness,” says Claude Chapelier, a superconductivity researcher at France’s Alternative Energies and Atomic Energy Commission. “There are several models that do not predict the same things.”

Chapelier says he would like to see a theoretical explanation for that relationship. But in the meantime, “this is very convenient for technical applications,” he says, “because there is a lot of spreading of the results, and nobody knows whether they will get good films for superconducting devices. By putting a material into this law, you know already whether it’s a good superconducting film or not.”


Wednesday, December 17, 2014

Summary of Hearsay

Hearsay evidence is "an out-of-court statement introduced to prove the truth of the matter asserted therein." In court, hearsay evidence is inadmissible (the "hearsay evidence rule") unless an exception to the rule applies.

For example, to prove that Tom was in town, the attorney asks a witness, "What did Susan tell you about Tom being in town?" The answer is hearsay: it relies on an out-of-court statement that Susan made, Susan is not available for cross-examination, and the statement is offered to prove the truth of the assertion that Tom was in town. A justification for the objection is that the person who made the statement is not in court and thus is insulated from cross-examination. Note, however, that if the attorney asks the same question to prove not the truth of the assertion about Tom being in town but the fact that Susan said those specific words, the question may be acceptable. For example, it would be acceptable to ask a witness what Susan told them about Tom in a defamation case against Susan, because now the witness is asked about the opposing party's statement, which constitutes a verbal act.

The hearsay rule does not exclude the evidence if it is an operative fact. Language of commercial offer and acceptance is also admissible over a hearsay exception because the statements have independent legal significance.

Double hearsay is a hearsay statement that contains another hearsay statement itself.

For example, a witness wants to testify that "a very reliable man informed me that Wools-Sampson told him." The statements of the very reliable man and of Wools-Sampson are both hearsay submissions on the part of the witness, and the second layer of hearsay (the statement of Wools-Sampson) depends on the first (the statement of the very reliable man). In court, both layers of hearsay must be found separately admissible. In this example, the first layer also comes from an anonymous source, and admitting an anonymous statement carries an additional burden of proof.

Many jurisdictions that generally disallow hearsay evidence in courts permit the more widespread use of hearsay in non-judicial hearings.

United States

Main article: Hearsay in United States Law

The Sixth Amendment to the United States Constitution provides that "In all criminal prosecutions, the accused shall enjoy the right ... to be confronted with the witnesses against him".

"Hearsay is a statement, other than one made by the declarant while testifying at the trial or hearing, offered in evidence to prove the truth of the matter asserted." Per Federal Rule of Evidence 801(d)(2)(a), a statement made by a defendant is admissible as evidence only if it is inculpatory; exculpatory statements made to an investigator are hearsay and therefore may not be admitted as evidence in court, unless the defendant testifies. When an out-of-court statement offered as evidence contains another out-of-court statement it is called double hearsay, and both layers of hearsay must be found separately admissible.

There are several exceptions to the rule against hearsay in U.S. law.[1] Federal Rule of Evidence 803 lists the following:

  • Statement against interest
  • Present sense impressions and Excited utterances
  • Then existing mental, emotional, or physical condition
  • Medical diagnosis or treatment
  • Recorded recollection
  • Records of regularly conducted activity
  • Public records and reports, as well as absence of entry in records
  • Records of vital statistics
  • Absence of public record or entry
  • Records of religious organizations
  • Marriage, baptismal, and similar certificates, and Family and Property records
  • Statements in documents affecting an interest in property
  • Statements in ancient documents the authenticity of which can be established.
  • Market reports, commercial publications
  • "Learned treatises"
  • Reputation concerning personal or family history, boundaries, or general history, or as to character
  • Judgment of previous conviction, and as to personal, family or general history, or boundaries.

Also, some documents are self-authenticating under Rule 902, such as (1) domestic public documents under seal, (2) domestic public documents not under seal, but bearing a signature of a public officer, (3) foreign public documents, (4) certified copies of public records, (5) official publications, (6) newspapers and periodicals, (7) trade inscriptions and the like, (8) acknowledged documents (i.e. by a notary public), (9) commercial paper and related documents, (10) presumptions under Acts of Congress, (11) certified domestic records of regularly conducted activity, (12) certified foreign records of regularly conducted activity.

Other nations

England and Wales, Canada, Hong Kong, Australia, Malaysia, New Zealand, Norway, Sri Lanka, and Sweden have their own evidence rules.