Saturday, April 30, 2016

The Great British Bridge Scandal

Introduction by the Blog Author

The Great British Bridge Scandal took place at the Bermuda Bowl in Buenos Aires in 1965.  World-famous contract bridge experts Terence Reese and Boris Schapiro were accused of using hand signals during play, and the entire British team therefore forfeited the matches it had already played.  Reese and Schapiro were not allowed to play in the remaining matches of the tournament.
The BBC has broadcast coverage of this bridge scandal as recently as April 2016.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = =

John Terence Reese (28 August 1913 – 29 January 1996) was a British bridge player and writer, regarded as one of the finest of all time in both fields. He was born in Epsom, Surrey, England to middle-class parents, and was educated at Bradfield College and New College, Oxford, where he studied classics and attained a double first, graduating in 1935.

As a bridge player, Reese won every honour in the game, including the European Championship four times (1948, 1949, 1954, 1963) and the Bermuda Bowl (effectively, the World Team Championship) in 1955—all as a member of the Great Britain open team. He was World Par champion in 1961 and placed second in both the inaugural World Team Olympiad 1960, and the inaugural World Open Pairs, 1962. He also represented Britain in the 1965 Bermuda Bowl and in five other European Championships. He won the Gold Cup, the premier British domestic competition, on eight occasions.
https://en.wikipedia.org/wiki/Terence_Reese

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Boris Schapiro (22 August 1909 – 1 December 2002) was a British international bridge player. He was a Grandmaster of the World Bridge Federation, and the only player to have won both the Bermuda Bowl (the world championship for national teams) and the World Senior Pairs championship. He won the European teams championship on four occasions as part of the British team.

Buenos Aires Affair

In the 1965 Buenos Aires Bermuda Bowl, B. Jay Becker noticed Schapiro and his partner, Terence Reese, holding their cards in unusual ways during bidding, with the number of fingers showing indicating the length of their heart suit. A number of players and observers, including Dorothy Hayden, New York Times columnist Alan Truscott, John Gerber, British nonplaying captain Ralph Swimer, British Bridge League Chairman Geoffrey Butler, ACBL president emeritus Waldemar von Zedtwitz, and ACBL President Robin McNabb, all watched Reese and Schapiro and were convinced that they were signalling illegally. It was also confirmed that Reese was not using such signals while playing with his other partner, Jeremy Flint. At a hearing held at the tournament site in Buenos Aires, the World Bridge Federation (WBF) judged Reese and Schapiro guilty of cheating, and announced that due to "certain irregularities", the British team was forfeiting the matches it had already won against North America and Argentina, and that Reese and Schapiro would not play in the remaining matches.

The British Bridge League (BBL) subsequently convened their own enquiry, chaired by Sir John Foster, barrister and Member of Parliament, and General Lord Bourne. After a hearing lasting many months, including a surprise revelation by Swimer that Schapiro had confessed his guilt to him, the "Foster Enquiry" found insufficient evidence to find Reese and Schapiro guilty beyond reasonable doubt. Without rebutting the "direct" evidence that grips were correlated with heart count, the report emphasized that the evidence that the players had benefited from the signals in their bidding or play was inconclusive.

In 1967, the BBL asked the WBF to reverse their guilty finding; the WBF responded by unanimously reaffirming their guilty verdict, and later reiterating that they would not accept a British team including Reese and Schapiro for the 1968 Olympiad, which the BBL boycotted in protest. In 1968, a compromise was reached, the WBF maintaining their guilty verdict, but allowing Reese and Schapiro (who had announced his retirement from international bridge after Buenos Aires) to play in future world championships.

Subsequently, both Reese and Alan Truscott published books on the scandal. Reese's book stated: "The basis of the charge, as everybody knows, is that Schapiro and I communicated length in hearts to one another by means of illegal signals. If you want to support that charge by reference to the hands played, what you have to show is that a number of illogical, uncharacteristic, and implausible bids or plays were made that can be explained only on the basis that the players had improper knowledge of one another's hands." It then goes on to argue for the logic of the pair's bidding and play on the various hands from Buenos Aires. Truscott's book emphasizes the unlikelihood of the observed variations in finger signals being coincidental, or of such a large number of witnesses colluding to fabricate the evidence.

In May 2005, the English journalist David Rex-Taylor, a bridge player and publisher, claimed that Reese had made a confession to him forty years earlier, one that was not to be revealed until 2005, and then only after both he and Schapiro were dead. The purported confession claimed that Reese and Schapiro were indeed signalling, but only to show that such signalling was possible (and so they were not actually paying attention to each other's signals), purportedly as part of a book on cheating (which was abandoned after the scandal broke). Although this explanation could conceivably reconcile the use of finger signals with the absence of evidence from bidding or play, there is no corroborating evidence to support this account. In contrast, Schapiro's widow claims he continued to deny the accusations until his death.

After 1965

Schapiro was bridge correspondent of The Sunday Times from 1968 until his death in 2002. Despite his facility with language, he was never really interested in writing; his output was two small books, and it is likely that his newspaper column was often ghosted. He made his mark as a player and a personality.

The Buenos Aires affair removed at a stroke the central activity of his life. It took years for Schapiro to be rehabilitated in world bridge, although he was always held in high esteem in Europe. Unlike Reese, he did eventually return to international bridge competition, and did so with considerable success.

Schapiro's 90th birthday party in London was attended by Jaime Ortiz-Patino, the WBF President Emeritus and the owner of Valderrama Golf Club, who had been a witness for Reese and Schapiro in the BBL enquiry; Omar Sharif, the Egyptian film star and bridge player; Prince Khalid Abdullah of Saudi Arabia, a family friend; and many personalities from the bridge and casino worlds.

Anecdotes

Schapiro's conversation at the bridge table was either a delight or a nuisance, depending on taste and point of view.

His standard greeting to women – "What about a spot of adultery?" (or "Fancy a spot of adultery?") – is mentioned in every biographical note and obituary, and reveals his sense of humour. When his team played an exhibition match at Leicester, the wife of the Chief Constable organised a cocktail party for them to meet the locals. The travelling players were invited to sign and comment in the visitors' book, and Schapiro wrote the catchphrase after his signature. Dimmie Fleming – another international player and the only woman to play on the British open team – defused the situation by signing next, drawing an upwards arrow and writing, "But will he ever be adult?"

Another story shows his partner Terence Reese picking up a collection of silver cup trophies from Schapiro's flat in Eaton Place (the Upstairs, Downstairs setting) and carrying them in a pillow-case. Stopped in the street by a policeman and asked to explain his unusual sack of possessions, Reese led the officer back to the flat so that Schapiro could validate his explanation. When Schapiro answered the door, he sized up the situation, and when asked "Can you identify this man?", said "Never seen him before in my life."

Friday, April 29, 2016

Edict of Nantes

The Edict of Nantes (French: Édit de Nantes), probably signed on 30 April 1598 by King Henry IV of France, granted the Calvinist Protestants of France (also known as Huguenots) substantial rights in the nation, which was, at the time, still considered essentially Catholic. In the Edict, Henry aimed primarily to promote civil unity. The Edict separated civil from religious unity, treated some Protestants for the first time as more than mere schismatics and heretics, and opened a path for secularism and tolerance. In offering general freedom of conscience to individuals, the Edict also granted many specific concessions to the Protestants, such as amnesty and the reinstatement of their civil rights, including the right to work in any field or for the State and to bring grievances directly to the king. It marked the end of the religious wars that had afflicted France during the second half of the 16th century.

The Edict of Saint-Germain, promulgated 36 years earlier by Catherine de Médicis, had granted limited tolerance to Huguenots but was overtaken by events, as it was not formally registered until after the Massacre of Vassy on 1 March 1562, which triggered the first of the French Wars of Religion.

The Edict of Nantes was revoked in October 1685 by the Edict of Fontainebleau, issued by Louis XIV, the grandson of Henry IV; the revocation drove an exodus of Protestants and increased the hostility of Protestant nations bordering France.

Background

The Edict aimed primarily to end the long-running, disruptive French Wars of Religion. Henry IV also had personal reasons for supporting the Edict. Prior to assuming the throne in 1589 he had espoused Protestantism, and he remained sympathetic to the Protestant cause: he had converted to Catholicism in 1593 only in order to secure his position as king, supposedly saying "Paris is well worth a Mass". The Edict succeeded in restoring peace and internal unity to France, though it pleased neither party: Catholics rejected the apparent recognition of Protestantism as a permanent element in French society and still hoped to enforce religious uniformity, while Protestants aspired to parity with Catholics. "Toleration in France was a royal notion, and the religious settlement was dependent upon the continued support of the crown."

Re-establishing royal authority in France required internal peace, based on limited toleration enforced by the crown. Since royal troops could not be everywhere, Huguenots needed to be granted strictly circumscribed possibilities of self-defense.


Thursday, April 28, 2016

Shortwave Radio

Shortwave radio is radio transmission using shortwave frequencies, generally 1.6–30 MHz (187.4–10.0 m), just above the medium wave AM broadcast band.
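
The band limits above are just the free-space wavelength relationship λ = c/f. Here is a minimal Python sketch of the conversion (illustrative only; the 1.6 MHz and 30 MHz inputs come from the definition above):

C = 299_792_458.0  # speed of light, in metres per second

def wavelength_m(freq_hz):
    # Free-space wavelength in metres for a frequency in hertz.
    return C / freq_hz

print(wavelength_m(1.6e6))   # ~187.4 m, the low-frequency edge of the band
print(wavelength_m(30.0e6))  # ~10.0 m, the high-frequency edge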

Shortwave radio is used for long distance communication by means of skywave or skip propagation, in which the radio waves are reflected or refracted back to Earth from the ionosphere, allowing communication around the curve of the Earth. Shortwave radio is used for broadcasting of voice and music to shortwave listeners, and long-distance communication to ships and aircraft, or to remote areas out of reach of wired communication or other radio services. Additionally, it is used for two-way international communication by amateur radio enthusiasts for hobby, educational and emergency purposes.

Propagation Characteristics

Shortwave radio frequency energy is capable of reaching any location on Earth because it can be reflected or refracted back to the ground by the ionosphere (a phenomenon known as "skywave propagation"). A typical phenomenon of shortwave propagation is the occurrence of a skip zone, where reception fails. With a fixed working frequency, large changes in ionospheric conditions may create skip zones at night.

As a result of the multi-layer structure of the ionosphere, propagation often occurs simultaneously on different paths, scattered by the E or F region and with different numbers of hops, a phenomenon that may be disturbing for certain techniques. Particularly at the lower frequencies of the shortwave band, absorption of radio frequency energy in the lowest ionospheric layer, the D layer, may impose a serious limit. This is due to collisions of electrons with neutral molecules, which absorb some of a radio wave's energy and convert it to heat. Predictions of skywave propagation depend on:

  • The distance from the transmitter to the target receiver.
  • Time of day. During the day, frequencies higher than approximately 12 MHz can travel longer distances than lower ones; at night, this property is reversed (a toy frequency estimate follows this list).
  • With lower frequencies, the dependence on the time of day is mainly due to the lowest ionospheric layer, the D layer, forming only during the day, when photons from the Sun break up atoms into ions and free electrons.
  • Season. During the winter months of the Northern or Southern hemispheres, the AM/MW broadcast band tends to be more favorable because of longer hours of darkness.
  • Solar flares. These produce an increase in D-region ionization so high that, sometimes for periods of several minutes, all skywave propagation is nonexistent.
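
As a rough illustration of how hop distance and ionospheric conditions combine, the classic secant law estimates the maximum usable frequency (MUF) for a single hop from the layer's critical frequency. The sketch below is a simplification under a flat-Earth approximation; the 300 km layer height and the example numbers are assumptions chosen for illustration:

import math

def muf_mhz(f_critical_mhz, hop_km, layer_height_km=300.0):
    # Secant law: MUF = f_critical / cos(theta), where theta is the
    # angle of incidence at the reflecting layer (flat-Earth geometry).
    theta = math.atan((hop_km / 2.0) / layer_height_km)
    return f_critical_mhz / math.cos(theta)

# A 7 MHz critical frequency supports roughly a 22 MHz signal over a 1,800 km hop:
print(round(muf_mhz(7.0, 1800.0), 1))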

Advantages of Shortwave Radio

Shortwave does possess a number of advantages over newer technologies, including the following:

  • Difficulty of censoring programming by authorities in restrictive countries: unlike their relative ease in monitoring the Internet, government authorities face technical difficulties monitoring which stations (sites) are being listened to (accessed). For example, during the 1991 Soviet coup attempt against President Mikhail Gorbachev, when his access to communications was limited, Gorbachev was able to stay informed by means of the BBC World Service on shortwave.
  • Low-cost shortwave radios are widely available in all but the most repressive countries in the world. Simple shortwave regenerative receivers can be easily built with a few parts.
  • In many countries (particularly in most developing nations and in the Eastern bloc during the Cold War era) ownership of shortwave receivers has been and continues to be widespread (in many of these countries some domestic stations also used shortwave).
  • Many newer shortwave receivers are portable and can be battery-operated, making them useful in difficult circumstances. Newer technology includes hand-cranked radios which provide power without batteries.
  • Shortwave radios can be used in situations where Internet or satellite communications service is temporarily or long-term unavailable (or unaffordable).
  • Shortwave radio travels much farther than broadcast FM (88–108 MHz). Shortwave broadcasts can be easily transmitted over distances of several thousand kilometers, including from one continent to another.
  • Particularly in tropical regions, SW is somewhat less prone to interference from thunderstorms than medium wave radio, and is able to cover a large geographic area with relatively low power (and hence cost). Therefore, in many of these countries it is widely used for domestic broadcasting.
  • Very little infrastructure is required for long-distance two-way communications using shortwave radio. All one needs is a pair of transceivers, each with an antenna, and a source of energy (such as a battery, a portable generator, or the electrical grid). This makes shortwave radio one of the most robust means of communications, which can be disrupted only by interference or bad ionospheric conditions. Modern digital transmission modes such as MFSK and Olivia are even more robust, allowing successful reception of signals well below the noise floor of a conventional receiver.

Disadvantages of Short Wave Radio

Shortwave radio's benefits are sometimes regarded as being outweighed by its drawbacks, including:

  • In most Western countries, shortwave radio ownership is usually limited to true enthusiasts, since most new standard radios do not receive the shortwave band. Therefore, Western audiences are limited.
  • In the developed world, shortwave reception is very difficult in urban areas because of excessive noise from switched-mode power adapters, fluorescent or LED light sources, Internet modems and routers, computers and many other sources of radio interference.

Wednesday, April 27, 2016

Basics of Followership

Followership refers to a role held by certain individuals in an organization, team, or group. Specifically, it is the capacity of an individual to actively follow a leader. Followership is the reciprocal social process of leadership. The study of followership (part of the emerging study of leadership psychology) is integral to a better understanding of leadership, as the success and failure of groups, organizations, and teams depend not only on how well a leader can lead, but also on how well the followers can follow. Specifically, followers play an active role in organization, group, and team successes and failures. Effective followers are individuals who are considered to be enthusiastic, intelligent, ambitious, and self-reliant. The emergence of the field of followership has been attributed to the scholar Robert Kelley.

Kelley described four main qualities of effective followers, which include:

  1. Self-Management: This refers to the ability to think critically, to be in control of one’s actions, and to work independently. It is important that followers manage themselves well, since leaders can then delegate tasks to these individuals.
  2. Commitment: This refers to an individual being committed to the goal, vision, or cause of a group, team, or organization. This is an important quality of followers, as it helps keep one’s (and other members’) morale and energy levels high.
  3. Competence: It is essential that individuals possess the skills and aptitudes necessary to complete the goal or task of the group, team, or organization. Individuals high on this quality often hold skills greater than those of their average co-worker (or team member). Further, these individuals continue their pursuit of knowledge by upgrading their skills through classes and seminars.
  4. Courage: Effective followers hold true to their beliefs and maintain and uphold ethical standards, even in the face of dishonest or corrupt superiors (leaders). These individuals are loyal, honest, and importantly, candid with their superiors.

Followership Patterns

Kelley identified two underlying behavioural dimensions that help distinguish followers from non-followers. The first dimension is whether the individual is an independent, critical thinker. The second is whether the individual is active or passive. From these dimensions, Kelley identified five followership patterns, or types of followers:

  1. The Sheep: These individuals are passive and require external motivation from the leader. These individuals lack commitment and require constant supervision from the leader.
  2. The Yes-People: These individuals are committed to the leader and the goal (or task) of the organization (or group/team). These conformist individuals do not question the decisions or actions of the leader. Further, yes-people will adamantly defend their leader when faced with opposition from others.
  3. The Pragmatics: These individuals are not trail-blazers; they will not stand behind controversial or unique ideas until the majority of the group has expressed their support. These individuals often remain in the background of the group.
  4. The Alienated: These individuals are negative and often attempt to stall or bring the group down by constantly questioning the decisions and actions of the leader. These individuals often view themselves as the rightful leader of the organization and are critical of the leader and fellow group members.
  5. The Star Followers: These exemplary individuals are positive, active, and independent thinkers. Star followers will not blindly accept the decisions or actions of a leader until they have evaluated them completely. Furthermore, these types of followers can succeed without the presence of a leader.

Tuesday, April 26, 2016

Bubblegum Rock


Bubblegum pop (also known as bubblegum rock, bubblegum music, or simply bubblegum) is a genre of pop music with an upbeat sound, contrived and marketed to appeal to pre-teens and teenagers; it may be produced in an assembly-line process, driven by producers and often using unknown singers. Bubblegum's classic period ran from 1967 to 1972.  A second wave of bubblegum started two years later and ran until 1977, when disco took over and punk rock emerged.

The genre was predominantly a singles phenomenon rather than an album-oriented one. Also, because many acts were manufactured in the studio using session musicians, a large number of bubblegum songs were by one-hit wonders. Among the best-known acts of bubblegum's golden era are 1910 Fruitgum Company, The Ohio Express and The Archies, an animated group which had the most successful bubblegum song with "Sugar, Sugar", Billboard Magazine's No. 1 single for 1969. Singer Tommy Roe arguably had the most bubblegum hits of any artist during this period, notably 1969's "Dizzy".

Characteristics

The chief characteristics of the genre are that it is pop music contrived and marketed to appeal to pre-teens and teenagers, is produced in an assembly-line process, driven by producers, often using unknown singers and has an upbeat sound.  The songs typically have singalong choruses, seemingly childlike themes and a contrived innocence, occasionally combined with an undercurrent of sexual double entendre.  Bubblegum songs are also defined as having a catchy melody, simple chords, simple harmonies, dancy (but not necessarily danceable) beats, repetitive riffs or "hooks" and a vocally-multiplied refrain.

The song lyrics often feature themes of romantic love and personal happiness, with references to sunshine, platonic love, toys, colors, nonsense words, etc. They are also notable for their frequent reference to sugary food, including sugar, honey, butterscotch, jelly and marmalade.

Cross-marketing with cereal and bubblegum manufacturers also strengthened the link between bubblegum songs and confectionery. Cardboard records by The Archies, The Banana Splits, The Jackson 5, The Monkees, Bobby Sherman, Josie and the Pussycats, H.R. Pufnstuf and other acts were included on the backs of cereal boxes in the late 1960s and early 1970s, while acts including The Brady Bunch had their own brands of chewing gum as a result of licensing deals with TV networks and record companies.

Etymology

Producers Jerry Kasenetz and Jeff Katz have claimed credit for coining the term bubblegum pop, saying that when they discussed their target audience, they decided it was "teenagers, the young kids. And at the time we used to be chewing bubblegum, and my partner and I used to look at it and laugh and say, 'Ah, this is like bubblegum music'." The term was seized upon by Buddah Records label executive Neil Bogart. Music writer and bubblegum historian Bill Pitzonka confirmed the claim, telling Goldmine magazine: "That's when bubblegum crystallized into an actual camp. Kasenetz and Katz really crystallized it when they came up with the term themselves and that nice little analogy. And Neil Bogart, being the marketing person he was, just crammed it down the throats of people. That's really the point at which bubblegum took off."


Afterword by the Blog Author

Novelty songs of the 1950s and early 1960s were vital in the development of bubblegum rock.  Little Richard and early Paul Anka (especially “Diana” and “Put Your Head on My Shoulder”), along with hits like “Little Nash Rambler” and Brian Hyland’s “Itsy Bitsy Teenie Weenie Yellow Polka Dot Bikini”, formed a sub-genre that stayed alive in the early 1960s and became bubblegum after the British Invasion.

Monday, April 25, 2016

The Celestial Equator

The celestial equator is a great circle on the imaginary celestial sphere, in the same plane as the Earth's equator. In other words, it is a projection of the terrestrial equator out into space.  As a result of the Earth's axial tilt, the celestial equator is inclined by 23.4° with respect to the ecliptic plane.

An observer standing on the Earth's equator visualizes the celestial equator as a semicircle passing directly overhead through the zenith. As the observer moves north (or south), the celestial equator tilts towards the opposite horizon. The celestial equator is defined to be infinitely distant (since it is on the celestial sphere); thus the observer always sees the ends of the semicircle disappear over the horizon exactly due east and due west, regardless of the observer's position on Earth. (At the poles, though, the celestial equator would be parallel to the horizon.) At all latitudes the celestial equator appears perfectly straight because the observer is only finitely far from the plane of the celestial equator but infinitely far from the celestial equator itself.
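
The geometry described above reduces to a one-line rule: the celestial equator crosses the observer's meridian at an altitude of 90° minus the observer's absolute latitude. Here is a small illustrative Python sketch (the example latitudes are arbitrary):

def equator_meridian_altitude(latitude_deg):
    # Altitude (degrees above the horizon) at which the celestial
    # equator crosses the observer's meridian: 90 minus |latitude|.
    return 90.0 - abs(latitude_deg)

print(equator_meridian_altitude(0.0))   # 90.0: through the zenith at the equator
print(equator_meridian_altitude(40.0))  # 50.0: tilted toward the opposite horizon
print(equator_meridian_altitude(90.0))  # 0.0: parallel to the horizon at the poles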

 


This illustration is by Dennis Nilsson, CC BY 3.0: https://commons.wikimedia.org/w/index.php?curid=3262268

Celestial objects near the celestial equator are visible worldwide, but they culminate the highest in the sky in the tropics. The celestial equator currently passes through these constellations:

  • Pisces
  • Cetus
  • Taurus
  • Eridanus
  • Orion
  • Monoceros
  • Canis Minor
  • Hydra
  • Sextans
  • Leo
  • Virgo
  • Serpens
  • Ophiuchus
  • Aquila
  • Aquarius

Celestial bodies other than Earth also have similarly defined celestial equators.

Sunday, April 24, 2016

Battery Endurance Breakthrough

All Powered Up
UCI chemists create battery technology
with off-the-charts charging capacity

Irvine, Calif., April 20, 2016 — University of California, Irvine researchers have invented a nanowire-based battery material that can be recharged hundreds of thousands of times, moving us closer to a battery that would never require replacement. The breakthrough work could lead to commercial batteries with greatly lengthened lifespans for computers, smartphones, appliances, cars and spacecraft.

Scientists have long sought to use nanowires in batteries. Thousands of times thinner than a human hair, they’re highly conductive and feature a large surface area for the storage and transfer of electrons. However, these filaments are extremely fragile and don’t hold up well to repeated discharging and recharging, or cycling. In a typical lithium-ion battery, they expand and grow brittle, which leads to cracking.

UCI researchers have solved this problem by coating a gold nanowire in a manganese dioxide shell and encasing the assembly in an electrolyte made of a Plexiglas-like gel. The combination is reliable and resistant to failure.

The study leader, UCI doctoral candidate Mya Le Thai, cycled the testing electrode up to 200,000 times over three months without detecting any loss of capacity or power and without fracturing any nanowires. The findings were published today in the American Chemical Society’s Energy Letters.

Hard work combined with serendipity paid off in this case, according to senior author Reginald Penner.

“Mya was playing around, and she coated this whole thing with a very thin gel layer and started to cycle it,” said Penner, chair of UCI’s chemistry department. “She discovered that just by using this gel, she could cycle it hundreds of thousands of times without losing any capacity.”

“That was crazy,” he added, “because these things typically die in dramatic fashion after 5,000 or 6,000 or 7,000 cycles at most.”

The researchers think the goo plasticizes the metal oxide in the battery and gives it flexibility, preventing cracking.

“The coated electrode holds its shape much better, making it a more reliable option,” Thai said. “This research proves that a nanowire-based battery electrode can have a long lifetime and that we can make these kinds of batteries a reality.”

The study was conducted in coordination with the Nanostructures for Electrical Energy Storage Energy Frontier Research Center at the University of Maryland, with funding from the Basic Energy Sciences division of the U.S. Department of Energy.

Saturday, April 23, 2016

Lyapunov time and exponent

In mathematics, the Lyapunov time is the characteristic timescale on which a dynamical system is chaotic. It is named after the Russian mathematician Aleksandr Lyapunov and is the inverse of the largest Lyapunov exponent, discussed further below.

Use

The Lyapunov time reflects the limits of the predictability of the system. By convention, it is defined as the time for the distance between nearby trajectories of the system to increase by a factor of e. However, measures in terms of 2-foldings and 10-foldings are sometimes found, since they correspond to the loss of one bit of information or one digit of precision respectively.

While it is used in many applications of dynamical systems theory, it has been particularly used in celestial mechanics, where it bears on the question of the stability of the Solar System. However, empirical estimation of the Lyapunov time is often associated with computational or inherent uncertainties.
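
To make the definition concrete, here is a minimal sketch that numerically estimates the largest Lyapunov exponent of the logistic map (a standard chaotic toy system; the parameter values are illustrative assumptions) and converts it into the e-folding, 2-folding and 10-folding times mentioned above:

import math

def lyapunov_exponent_logistic(r=4.0, x0=0.3, n=100_000, burn_in=1_000):
    # Average ln|f'(x)| along a trajectory of the map x -> r*x*(1-x),
    # whose derivative is f'(x) = r*(1 - 2x).
    x = x0
    for _ in range(burn_in):  # discard the initial transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n  # exponent per iteration

lam = lyapunov_exponent_logistic()  # ~ln 2 (about 0.693) for r = 4
print("Lyapunov time (e-folding):", 1.0 / lam, "iterations")
print("2-folding time:", math.log(2.0) / lam)    # one bit of information lost
print("10-folding time:", math.log(10.0) / lam)  # one digit of precision lost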


= = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Lyapunov Exponent

This is the inverse of the Lyapunov time – see this link for a detailed explanation: 
 

Friday, April 22, 2016

USA Mediocrity in Education

Dumbing Us Down:
The Hidden Curriculum of Compulsory Schooling,
10th Anniversary Edition Paperback
– February 1, 2002

by John Taylor Gatto (Author), Thomas Moore (Foreword)
With over 70,000 copies of the first edition in print, this radical treatise on public education has been a New Society Publishers’ bestseller for 10 years! Thirty years in New York City’s public schools led John Gatto to the sad conclusion that compulsory schooling does little but teach young people to follow orders like cogs in an industrial machine. This second edition describes the widespread impact of the book and Gatto’s "guerrilla teaching."

John Gatto has been a teacher for 30 years and is a recipient of the New York State Teacher of the Year award. His other titles include A Different Kind of Teacher (Berkeley Hills Books, 2001) and The Underground History of American Education (Oxford Village Press, 2000).

= = = = = = = =Amazon Customer Review = = = = = = =

5 Stars
Real learning demands individuality, not regimentation.
By Patricia Brattan on March 1, 2000
Format: Paperback

After 26 years of teaching in the New York public schools, John Taylor Gatto has seen a lot. His book, Dumbing Us Down, is a treatise against what he believes to be the destructive nature of schooling. The book opens with a chapter called "The Seven-Lesson Schoolteacher," in which he outlines seven harmful lessons he must convey as a public schoolteacher: 1.) confusion 2.) class position 3.) indifference 4.) emotional dependency 5.) intellectual dependency 6.) provisional self-esteem 7.) constant surveillance and the denial of privacy.
How ironic it is that Gatto's first two chapters contain the text of his acceptance speeches for New York State and City Teacher of the Year Awards. How ironic indeed, that he uses his own award presentation as a forum to attack the very same educational system that is honoring him! Gatto describes schooling, as opposed to learning, as a "twelve-year jail sentence where bad habits are the only curriculum truly learned. I teach school and win awards doing it," taunts the author.
While trapped in this debilitating system along with his students, Gatto observed in them an overwhelming dependence. He believes that school teaches this dependence by purposely inhibiting independent thinking and reinforcing indifference to adult thinking. He describes his students as "having almost no curiosity, a poor sense of the future"; they are ahistorical, cruel, uneasy with intimacy, and materialistic.
Gatto suggests that the remedy to this crisis in education is less time spent in school, and more time spent with family and "in meaningful pursuits in their communities." He advocates apprenticeships and home schooling as a way for children to learn. He even goes so far as to argue for the removal of certification requirements for teachers, and letting "anybody who wants to, teach."
Gatto's style of writing is simple and easy to follow. He interlaces personal stories throughout the book to bring clarity and harmony to his views, while also drawing on logic and history to support his ideas about freedom in education and a return to building community. He clearly distinguishes communities from networks: "Communities ... are complex relationships of commonality and obligation," whereas "Networks don't require the whole person, but only a narrow piece."
While Gatto harshly criticizes schooling, we must realize that his opinions do come as a result of 26 years of experience and frustration with the public school system. Unfortunately, whether or not one agrees with his solutions, he has not outlined the logistics of how these improvements would be implemented. His ideas are based on idealism, and the reality of numbers and economics would present many obstacles. Nevertheless, the book gives a clear vision and a direction to follow for teachers and parents who believe in the family as the most important agent for childrearing and growth.

Thursday, April 21, 2016

Life 4.1 Billion Years Ago?


Introduction
Zircons from Australia contain embedded graphite that hints at carbon-based life on Earth 4.1 billion years ago.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Abstract

Evidence of life on Earth is manifestly preserved in the rock record. However, the microfossil record only extends to 3.5 billion years (Ga), the chemofossil record arguably to 3.8 Ga, and the rock record to 4.0 Ga. Detrital zircons from Jack Hills, Western Australia range in age up to nearly 4.4 Ga. From a population of over 10,000 Jack Hills zircons, we identified one >3.8-Ga zircon that contains primary graphite inclusions. Here, we report carbon isotopic measurements on these inclusions in a concordant, 4.10 ± 0.01-Ga zircon. We interpret these inclusions as primary due to their enclosure in a crack-free host as shown by transmission X-ray microscopy and their crystal habit. Their δ13CPDB of −24 ± 5‰ is consistent with a biogenic origin and may be evidence that a terrestrial biosphere had emerged by 4.1 Ga, or 300 My earlier than has been previously proposed.

Significance

Evidence for carbon cycling or biologic activity can be derived from carbon isotopes, because a high 12C/13C ratio is characteristic of biogenic carbon due to the large isotopic fractionation associated with enzymatic carbon fixation. The earliest materials measured for carbon isotopes at 3.8 Ga are isotopically light, and thus potentially biogenic. Because Earth’s known rock record extends only to 4 Ga, earlier periods of history are accessible only through mineral grains deposited in later sediments. We report 12C/13C of graphite preserved in 4.1-Ga zircon. Its complete encasement in crack-free, undisturbed zircon demonstrates that it is not contamination from more recent geologic processes. Its 12C-rich isotopic signature may be evidence for the origin of life on Earth by 4.1 Ga.
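
For readers unfamiliar with the δ13C notation used above: it expresses a sample's 13C/12C ratio as a per-mil (‰) deviation from the PDB standard. A small illustrative Python sketch follows; the PDB ratio below is the commonly cited value, and the sample ratio is an assumption chosen to reproduce the paper's −24‰ figure:

R_PDB = 0.0112372  # 13C/12C ratio of the PDB (Pee Dee Belemnite) standard

def delta13C_permil(r_sample):
    # delta13C = (R_sample / R_PDB - 1) * 1000, in per mil
    return (r_sample / R_PDB - 1.0) * 1000.0

# A ratio 2.4% below the standard corresponds to the -24 per-mil value above:
r_sample = R_PDB * (1.0 - 0.024)
print(delta13C_permil(r_sample))  # -> -24.0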

Authors

  1. Elizabeth A. Bell – Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, CA 90095
  2. Patrick Boehnke – Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, CA 90095
  3. T. Mark Harrison – Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, CA 90095
  4. Wendy L. Mao – School of Earth, Energy, and Environmental Sciences, Stanford University, Stanford, CA 94305

Wednesday, April 20, 2016

Fad Diets Explained

A fad diet or diet cult is a diet that makes promises of weight loss or other health advantages, such as longer life, without backing by solid science, and in many cases it is characterized by highly restrictive or unusual food choices.  Celebrity endorsements are frequently associated with fad diets, and the individuals who develop and promote these programs often profit handsomely.

Definition

A competitive market for "healthy diets" arose in the nineteenth century developed world, as migration and industrialization and commodification of food supplies began eroding adherence to traditional ethnocultural diets, and the health consequences of pleasure-based diets were becoming apparent. As Matt Fitzgerald describes it, "This modern cult of healthy eating is made up of innumerable sub-cults that are constantly vying for superiority. ...Like consumer products in commercial markets, each of these diets has a brand name and is advertised as being better than competing brands. The recruiting programs of the healthy-diet cults consist almost entirely of efforts to convince prospective followers that their diet is the One True Way to eat for maximum physical health."

These diets are generally restrictive and are characterized by promises of fast weight loss or great physical health that are not grounded in sound science.

These diets are often endorsed by celebrities or medical professionals who style themselves as "gurus" and profit from sales of branded products, books, and public speaking.

These diets attract people who want to lose weight quickly and easily and keep it off, as well as people who want to be healthy and find that belonging to a group defined by a strict way of eating helps them avoid the many bad food choices available in the developed world.

98% of people who use these diets to lose weight gain it back within 5 years; fad diets fail because many of them are not sustainable, and people revert to their former eating habits when the diet ends.

Mainstream Diet Advice

Healthy eating is simple, according to Marion Nestle, who expresses the mainstream view of healthy eating:

The basic principles of good diets are so simple that I can summarize them in just ten words: eat less, move more, eat lots of fruits and vegetables. For additional clarification, a five-word modifier helps: go easy on junk foods. Follow these precepts and you will go a long way toward preventing the major diseases of our overfed society—coronary heart disease, certain cancers, diabetes, stroke, osteoporosis, and a host of others.... These precepts constitute the bottom line of what seem to be the far more complicated dietary recommendations of many health organizations and national and international governments—the forty-one “key recommendations” of the 2005 Dietary Guidelines, for example. ... Although you may feel as though advice about nutrition is constantly changing, the basic ideas behind my four precepts have not changed in half a century. And they leave plenty of room for enjoying the pleasures of food.

David L. Katz, who reviewed the most prevalent popular diets in 2014, noted:

The weight of evidence strongly supports a theme of healthful eating while allowing for variations on that theme. A diet of minimally processed foods close to nature, predominantly plants, is decisively associated with health promotion and disease prevention and is consistent with the salient components of seemingly distinct dietary approaches. Efforts to improve public health through diet are forestalled not for want of knowledge about the optimal feeding of Homo sapiens but for distractions associated with exaggerated claims, and our failure to convert what we reliably know into what we routinely do. Knowledge in this case is not, as of yet, power; would that it were so.

Tuesday, April 19, 2016

How Children Fail

First published in the mid-1960s, How Children Fail began an education reform movement that continues today. In his 1982 edition, John Holt added new insights into how children investigate the world, into the perennial problems of classroom learning, grading, and testing, and into the role of trust and authority in every learning situation. His understanding of children, the clarity of his thought, and his deep affection for children have made both How Children Fail and its companion volume, How Children Learn, enduring classics.

     --Amazon.com

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Customer Review

4 Stars
Facing Our Demons
By Maria Morales on April 24, 2000

This book, with its simple format and language, has opened my eyes to possibilities and perspectives that I simply never thought of. As an educator, I think everyone in the world of education should read it. From policy-makers to administrators to teachers to school psychologists, and very especially parents, we all owe it to our children and to ourselves to become informed and critical about the efficiency (or the lack thereof) of our educational system, especially at times such as now, when our children seem to be failing more than ever. Holt's observations, although limited to private schools, provide one with a solid view of what is happening in the world of teaching across the board. Holt asks and answers questions that are not only relevant to his subject but vital to the development of better teaching. Holt's idea that we don't know enough about student-teacher relationships could not be more accurate. I know this because I am an educator. I agree with Holt when he says that it is time that we look beyond ourselves and our own interests and begin looking at students with respect. As an insider, I couldn't help blushing while reading the reasons that Holt gives for children's failure in school. I could only nod my head in agreement when he said that teachers aren't listening to their students because they are only listening to what they want to hear. Another reason children fail, according to Holt, is that they are not being intellectually challenged enough at school. The conclusion made by Holt makes plenty of sense: teachers definitely need to make every effort to free their teaching from ambiguity, confusion and self-contradiction. Besides teachers, the pointing finger also points to standardized exams. Standardized exams, I agree with the author, do not make our children more knowledgeable. Holt's final verdict is clear and pungent: students are failing because adults – teachers, administrators, parents, policy-makers, etc. – are not doing their jobs. Although not a pleasant thing to hear (especially for those of us who have chosen to dedicate our lives to the education of our young), I am personally grateful to Mr. Holt for taking a bold stand to face us with our demons.

Monday, April 18, 2016

20 Questions About Poll Results

20 Questions A Journalist
Should Ask About Poll Results
National Council on Public Polls, by Sheldon R. Gawiser, Ph.D. and G. Evans Witt

Polls provide the best direct source of information about public opinion. They are valuable tools for journalists and can serve as the basis for accurate, informative news stories. For the journalist looking at a set of poll numbers, here are the 20 questions to ask the pollster before reporting any results. This publication is designed to help working journalists do a thorough, professional job covering polls. It is not a primer on how to conduct a public opinion survey.

The only polls that should be reported are "scientific" polls. A number of the questions here will help you decide whether or not a poll is a "scientific" one worthy of coverage – or an unscientific survey without value.

Unscientific pseudo-polls are widespread and sometimes entertaining, but they never provide the kind of information that belongs in a serious report. Examples include 900-number call-in polls, man-on-the-street surveys, many Internet polls, shopping mall polls, and even the classic toilet tissue poll featuring pictures of the candidates on each roll.

One major difference between scientific and unscientific polls is who picks the respondents for the survey. In a scientific poll, the pollster identifies and seeks out the people to be interviewed. In an unscientific poll, the respondents usually "volunteer" their opinions, selecting themselves for the poll.

The results of the well-conducted scientific poll provide a reliable guide to the opinions of many people in addition to those interviewed – even the opinions of all Americans. The results of an unscientific poll tell you nothing beyond simply what those respondents say.

By asking these 20 questions, the journalist can seek the facts to decide how to report any poll that comes across the news desk.


The authors wish to thank the officers, trustees and members of the National Council on Public Polls for their editing assistance and their support.

  1. Who did the poll?
  2. Who paid for the poll and why was it done?
  3. How many people were interviewed for the survey?
  4. How were those people chosen?
  5. What area (nation, state, or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?
  6. Are the results based on the answers of all the people interviewed?
  7. Who should have been interviewed and was not? Or do response rates matter?
  8. When was the poll done?
  9. How were the interviews conducted?
  10. What about polls on the Internet or World Wide Web?
  11. What is the sampling error for the poll results?
  12. Who’s on first?
  13. What other kinds of factors can skew poll results?
  14. What questions were asked?
  15. In what order were the questions asked?
  16. What about "push polls?"
  17. What other polls have been done on this topic? Do they say the same thing? If they are different, why are they different?
  18. What about exit polls?
  19. What else needs to be included in the report of the poll?
  20. So I've asked all the questions. The answers sound good. Should we report the results?

1. Who did the poll?
What polling firm, research house, political campaign, or other group conducted the poll? This is always the first question to ask.


If you don't know who did the poll, you can't get the answers to all the other questions listed here. If the person providing poll results can't or won't tell you who did it, the results should not be reported, for their validity cannot be checked.

Reputable polling firms will provide you with the information you need to evaluate the survey. Because reputation is important to a quality firm, a professionally conducted poll will avoid many errors.

2. Who paid for the poll and why was it done?
You must know who paid for the survey, because that tells you – and your audience – who thought these topics were important enough to spend money finding out what people think.


Polls are not conducted for the good of the world. They are conducted for a reason – either to gain helpful information or to advance a particular cause.

It may be the news organization wants to develop a good story. It may be the politician wants to be re-elected. It may be that the corporation is trying to push sales of its new product. Or a special-interest group may be trying to prove that its views are the views of the entire country.

All are legitimate reasons for doing a poll.

The important issue for you as a journalist is whether the motive for doing the poll creates such serious doubts about the validity of the results that the numbers should not be publicized.

Private polls conducted for a political campaign are often unsuited for publication. These polls are conducted solely to help the candidate win – and for no other reason. The poll may have very slanted questions or a strange sampling methodology, all with a tactical campaign purpose. A campaign may be testing out new slogans, a new statement on a key issue or a new attack on an opponent. But since the goal of the candidate’s poll may not be a straightforward, unbiased reading of the public's sentiments, the results should be reported with great care.

Likewise, reporting on a survey by a special-interest group is tricky. For example, an environmental group trumpets a poll saying the American people support strong measures to protect the environment. That may be true, but the poll was conducted for a group with definite views. That may have swayed the question wording, the timing of the poll, the group interviewed and the order of the questions. You should carefully examine the poll to be certain that it accurately reflects public opinion and does not simply push a single viewpoint.

3. How many people were interviewed for the survey?
Because polls give approximate answers, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample, all other things being equal. A common trap to avoid is assuming that "more is automatically better."  While it is absolutely true that the more people interviewed in a scientific survey, the smaller the sampling error, other factors may be more important in judging the quality of a survey.
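
The diminishing return from larger samples is easy to see from the standard margin-of-error formula for a simple random sample. Here is a minimal Python sketch at 95% confidence, assuming the worst-case 50/50 split (the sample sizes are illustrative):

import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion p estimated from
    # a simple random sample of n respondents.
    return z * math.sqrt(p * (1.0 - p) / n)

for n in (250, 500, 1000, 4000):
    print(n, "interviews -> +/-", round(100.0 * margin_of_error(n), 1), "points")
# 1,000 interviews already give about +/-3.1 points; quadrupling the
# sample to 4,000 only halves that.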


4. How were those people chosen?
The key reason that some polls reflect public opinion accurately and other polls are unscientific junk is how people were chosen to be interviewed. In scientific polls, the pollster uses a specific statistical method for picking respondents. In unscientific polls, the person picks himself to participate.


The method pollsters use to pick interviewees relies on the bedrock of mathematical reality: when the chance of selecting each person in the target population is known, then and only then do the results of the sample survey reflect the entire population. This is called a random sample or a probability sample. This is the reason that interviews with 1,000 American adults can accurately reflect the opinions of more than 210 million American adults.

Most scientific samples use special techniques to be economically feasible. For example, some sampling methods for telephone interviewing do not just pick randomly generated telephone numbers. Only telephone exchanges that are known to contain working residential numbers are selected, reducing the number of wasted calls. This still produces a random sample. But samples of only listed telephone numbers do not produce a random sample of all working telephone numbers.

But even a random sample cannot be purely random in practice as some people don't have phones, refuse to answer, or aren't home.

Surveys conducted in countries other than the United States may use different but still valid scientific sampling techniques, for example, because relatively few residents have telephones. In surveys in other countries, the same questions about sampling should be asked before reporting a survey.

5. What area (nation, state, or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?
It is absolutely critical to know from which group the interviewees were chosen.


You must know if a sample was drawn from among all adults in the United States, or just from those in one state or in one city, or from another group. For example, a survey of business people can reflect the opinions of business people – but not of all adults. Only if the interviewees were chosen from among all American adults can the poll reflect the opinions of all American adults.

In the case of telephone samples, the population represented is that of people living in households with telephones. For most purposes, telephone households are similar to the general population. But if you were reporting a poll on what it was like to be homeless, a telephone sample would not be appropriate. The increasingly widespread use of cell phones, particularly as the only phone in some households, may have an impact in the future on the ability of a telephone poll to accurately reflect a specific population.  Remember, the use of a scientific sampling technique does not mean that the correct population was interviewed.

Political polls are especially sensitive to this issue.

In pre-primary and pre-election polls, which people are chosen as the base for poll results is critical. A poll of all adults, for example, is not very useful for a primary race where only 25 percent of the registered voters actually turn out. So look for polls based on registered voters, "likely voters," previous primary voters and such. These distinctions are important and should be included in the story, for one of the most difficult challenges in polling is trying to figure out who actually is going to vote.

The ease of conducting surveys in the United States is not duplicated around the world. It may not be possible or practical in some countries to conduct surveys of a random sample throughout the country. Surveys based on a smaller group than the entire population – such as a few larger cities – can still be reliable if reported correctly (as the views of those in the larger cities, for example, but not those of the whole country) and may be the only available data.

6. Are the results based on the answers of all the people interviewed?
One of the easiest ways to misrepresent the results of a poll is to report the answers of only a subgroup. For example, there is usually a substantial difference between the opinions of Democrats and Republicans on campaign-related matters. Reporting the opinions of only Democrats in a poll purported to be of all adults would substantially misrepresent the results.


Poll results based on Democrats must be identified as such and should be reported as representing only Democratic opinions.

Of course, reporting on just one subgroup can be exactly the right course. In polling on a primary contest, it is the opinions of those who can vote in the primary that count – not those who cannot vote in that contest. Primary polls should include only eligible primary voters.

7. Who should have been interviewed and was not? Or do response rates matter?
No survey ever reaches everyone who should have been interviewed. You ought to know what steps were undertaken to minimize non-response, such as the number of attempts to reach the appropriate respondent and over how many days.


There are many reasons why people who should have been interviewed were not. They may have refused attempts to interview them. Or interviews may not have been attempted if people were not home when the interviewer called. Or there may have been a language problem or a hearing problem. 

In recent years, the percentage of people who respond to polls has diminished. There has been an increase in those who refuse to participate. Some of this is due to the increase in telemarketing, and part is due to Caller ID and other technology that allows screening of incoming calls. While this is a subject that concerns pollsters, so far careful study has found that these reduced response rates have not had a major impact on the accuracy of most public polls.

Where possible, you should obtain the overall response rate from the pollster, calculated on a recognized basis such as the standards of the American Association for Public Opinion Research. One poll is not “better” than another simply because of the one statistic called response rate. 

8. When was the poll done?
Events have a dramatic impact on poll results. Your interpretation of a poll should depend on when it was conducted relative to key events. Even the freshest poll results can be overtaken by events. The President may have given a stirring speech to the nation, pictures of abuse of prisoners by the military may have been broadcast, the stock market may have crashed or an oil tanker may have sunk, spilling millions of gallons of crude on beautiful beaches.


Poll results that are several weeks or months old may be perfectly valid, but events may have erased any newsworthy relationship to current public opinion.

9. How were the interviews conducted?
There are four main possibilities: in person, by telephone, online or by mail. Most surveys are conducted by telephone, with the calls made by interviewers from a central location. However, some surveys are still conducted by sending interviewers into people's homes to conduct the interviews.


Some surveys are conducted by mail. In scientific polls, the pollster picks the people to receive the mail questionnaires. The respondent fills out the questionnaire and returns it.

Mail surveys can be excellent sources of information, but it takes weeks to do a mail survey, meaning that the results cannot be as timely as a telephone survey. And mail surveys can be subject to other kinds of errors, particularly extremely low response rates. In many mail surveys, many more people fail to participate than do. This makes the results suspect.

Surveys done in shopping malls, in stores or on the sidewalk may have their uses for their sponsors, but publishing the results in the media is not among them. These approaches may yield interesting human-interest stories, but they should never be treated as if they represent public opinion.

Advances in computer technology have allowed the development of computerized interviewing systems that dial the phone, play taped questions to a respondent and then record answers the person gives by punching numbers on the telephone keypad. Such surveys may be more vulnerable to significant problems including uncontrolled selection of respondents within the household, the ability of young children to complete the survey, and poor response rates.

Such problems should disqualify any survey from being used unless the journalist knows that the survey has proper respondent selection, verifiable age screening, and reasonable response rates.

10. What about polls on the Internet or World Wide Web?
The explosive growth of the Internet and the World Wide Web has given rise to an equally explosive growth in various types of online polls and surveys.


Online surveys can be scientific if the samples are drawn in the right way. Some online surveys start with a scientific national random sample and recruit their participants, while others simply take anyone who volunteers. Online surveys need to be evaluated carefully before use.

Several methods have been developed to sample the opinions of those who have online access. The fundamental rules of sampling still apply online: the pollster must select those who are asked to participate in the survey in a random fashion. In those cases where the population of interest has nearly universal Internet access or where the pollster has carefully recruited from the entire population, online polls are candidates for reporting.

However, even a survey that accurately sampled all those who have access to the Internet would still fall short of a poll of all Americans, as about one in three adults do not have Internet access.  

But many Internet polls are simply the latest variation on the pseudo-polls that have existed for many years. Whether the effort is a click-on Web survey, a dial-in poll or a mail-in survey, the results should be ignored and not reported. All these pseudo-polls suffer from the same problem: the respondents are self-selected. The individuals choose themselves to take part in the poll – there is no pollster choosing the respondents to be interviewed.

Remember, the purpose of a poll is to draw conclusions about the population, not about the sample. In these pseudo-polls, there is no way to project the results to any larger group. Any similarity between the results of a pseudo-poll and a scientific survey is pure chance.

Clicking on your candidate’s button in the "voting booth" on a Web site may drive up the numbers for your candidate in an online presidential horse-race poll. In most such efforts, nothing is done to pick the respondents, to prevent users from voting multiple times or to reach people who might not normally visit the Web site.

The dial-in or click-in polls may be fine for deciding who should win on American Idol or which music video is the MTV Video of the Week. The opinions expressed may be real, but in sum the numbers are just entertainment. There is no way to tell who actually called in, how old they are, or how many times each person called.

Never be fooled by the number of responses. In some cases a few people call in thousands of times. Even if 500,000 calls are tallied, no one has any real knowledge of what the results mean. If big numbers impress you, remember that the Literary Digest's non-scientific sample of 2,000,000 people said Landon would beat Roosevelt in the 1936 Presidential election.

Mail-in coupon polls are just as bad. In this case, the magazine or newspaper includes a coupon to be returned with the answers to the questions. Again, there is no way to know who responded and how many times each person did.

Another variation on the pseudo-poll comes as part of a fund-raising effort. An organization sends out a letter with a survey form attached to a large list of people, asking for opinions and for the respondent to send money to support the organization or pay for tabulating the survey. The questions are often loaded and the results of such an effort are always meaningless.

This technique is used by a wide variety of organizations from political parties and special-interest groups to charitable organizations. Again, if the poll in question is part of a fund-raising pitch, pitch it – in the wastebasket.

11. What is the sampling error for the poll results?

Interviews with a scientific sample of 1,000 adults can accurately reflect the opinions of nearly 210 million American adults. That means interviews attempted with all 210 million adults – if such were possible – would give approximately the same results as a well-conducted survey based on 1,000 interviews.

What happens if another carefully done poll of 1,000 adults gives slightly different results from the first survey? Neither poll is "wrong." This range of possible results is known as the error due to sampling, often called the margin of error.

This is not an "error" in the sense of making a mistake. Rather, it is a measure of the possible range of approximation in the results because a sample was used.

Pollsters express the degree of certainty in results based on a sample as a "confidence level." This means a sample is likely to be within so many points of the results that would have been obtained if an interview had been attempted with the entire target population. Most polls are reported at the 95% confidence level.

Thus, for example, a "3 percentage point margin of error" in a national poll means that if the attempt were made to interview every adult in the nation with the same questions in the same way at the same time as the poll was taken, the poll's answers would fall within plus or minus 3 percentage points of the complete count’s results 95% of the time.
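For readers who wonder where figures like "plus or minus 3 points" come from: under the usual normal approximation, the 95% margin of error for a proportion is about 1.96 times the square root of p(1 - p)/n, and it is widest at p = 0.5, which is the value conventionally used in reporting. A minimal sketch in Python:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of the 95% confidence interval for a proportion,
        using the normal approximation; p=0.5 gives the widest margin."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (1000, 600, 100):
        print(f"n={n:5d}: +/- {margin_of_error(n):.1%}")
    # n= 1000: +/- 3.1%
    # n=  600: +/- 4.0%
    # n=  100: +/- 9.8%

Note how the margin roughly triples as the sample shrinks from 1,000 to 100 – a point that matters again for the subgroup results discussed under question 12.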

This does not address the issue of whether people cooperate with the survey, or if the questions are understood, or if any other methodological issue exists. The sampling error is only the portion of the potential error in a survey introduced by using a sample rather than interviewing the entire population. Sampling error tells us nothing about the refusals or those consistently unavailable for interview; it also tells us nothing about the biasing effects of a particular question wording or the bias a particular interviewer may inject into the interview situation. It also applies only to scientific surveys.

Remember that the sampling error margin applies to each figure in the results – it is at least 3 percentage points plus or minus for each one in our example. Thus, in a poll question matching two candidates for President, both figures are subject to sampling error.

12. Who’s on first?
Sampling error raises one of the thorniest problems in the presentation of poll results: For a horse-race poll, when is one candidate really ahead of the other?


Certainly, if the gap between the two candidates is less than the sampling error margin, you should not say that one candidate is ahead of the other. You can say the race is "close," the race is "roughly even," or there is "little difference between the candidates." But it should not be called a "dead heat" – and certainly not a "statistical tie" – unless both candidates have exactly the same percentage.

And just as certainly, when the gap between the two candidates is equal to or more than twice the error margin – 6 percentage points in our example – and if there are only two candidates and no undecided voters, you can say with confidence that the poll says Candidate A is clearly leading Candidate B.

When the gap between the two candidates is more than the error margin but less than twice the error margin, you should say that Candidate A "is ahead," "has an advantage" or "holds an edge." The story should mention that there is a small possibility that Candidate B is ahead of Candidate A.

When there are more than two choices or undecided voters – the situation in virtually every poll in the real world – the question becomes much more complicated.

While the exact solution is statistically complex, you can evaluate the situation fairly easily by estimating the error margin. Take the sum of the percentages for the two candidates in question and multiply it by the total number of respondents in the survey (only the likely voters, if that is appropriate). This number is the effective sample size for your judgment. Look up the sampling error for that reduced sample size and apply it to each candidate's percentage. If the resulting ranges overlap, you do not know whether one candidate is ahead; if they do not, you can make the judgment that one candidate has a lead.
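A minimal Python sketch of that shortcut, assuming the worst-case (p = 0.5) margin at the effective sample size – a statistically rigorous test of a difference is more involved, but this approximation follows the procedure just described:

    import math

    def clear_lead(p_a, p_b, n_total, z=1.96):
        """Effective-sample-size shortcut: True when the two candidates'
        ranges do not overlap. p_a and p_b are proportions (0.48 = 48%)."""
        n_eff = (p_a + p_b) * n_total       # respondents picking A or B
        moe = z * math.sqrt(0.25 / n_eff)   # worst-case margin at n_eff
        return abs(p_a - p_b) > 2 * moe

    print(clear_lead(0.48, 0.43, n_total=1000))  # False: ranges overlap
    print(clear_lead(0.52, 0.41, n_total=1000))  # True: a clear lead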

And bear in mind that when subgroup results are reported – women or blacks or young people – the sampling error margin for those figures is greater than for results based on the sample as a whole. Be very careful about reporting results from extremely small subgroups. Any results based on fewer than 100 respondents are subject to such large sampling errors that it is almost impossible to report the numbers in a meaningful manner.

13. What other kinds of factors can skew poll results?
The margin of sampling error is just one possible source of inaccuracy in a poll. It is not necessarily the source of the greatest possible error; we use it because it's the only one that can be quantified. And, other things being equal, it is useful for evaluating whether differences between poll results are meaningful in a statistical sense.


Question phrasing and question order are also likely sources of flaws. Inadequate interviewer training and supervision, data processing errors and other operational problems can also introduce errors. Professional polling operations are less subject to these problems than volunteer-conducted polls, which are usually less trustworthy. Be particularly careful of polls conducted by untrained and unsupervised college students: in several cases, students have reported results, at least in part, without conducting any survey at all.

You should always ask whether the poll results have been "weighted." This process is usually used to account for unequal probabilities of selection and to adjust the demographics of the sample slightly. Be aware that a poll can be manipulated unduly by weighting the numbers to produce a desired result. While some weighting may be appropriate, other weighting is not: weighting a scientific poll is appropriate only to reflect unequal probabilities of selection or to adjust the sample to independent values that are relatively stable.
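As an illustration of the legitimate kind of adjustment, the Python sketch below performs a simple post-stratification: each group's weight is its known population share divided by its share of the sample. The group names and figures are hypothetical.

    # Simple post-stratification: adjust weights so the weighted sex
    # distribution matches an independent benchmark (e.g. Census figures).
    # All counts and shares below are hypothetical.

    sample_counts = {"men": 430, "women": 570}        # raw interviews
    population_shares = {"men": 0.48, "women": 0.52}  # stable benchmark

    n = sum(sample_counts.values())
    weights = {g: population_shares[g] / (sample_counts[g] / n)
               for g in sample_counts}
    print(weights)  # men weighted up (~1.12), women weighted down (~0.91)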

14. What questions were asked?
You must find out the exact wording of the poll questions. Why? Because the very wording of questions can make major differences in the results.


Perhaps the best test of any poll question is your reaction to it. On the face of it, does the question seem fair and unbiased? Does it present a balanced set of choices? Would most people be able to answer the question?

On sensitive questions – such as abortion – the complete wording of the question should probably be included in your story. It may well be worthwhile to compare the results of several different polls from different organizations on sensitive questions. You should examine carefully both the results and the exact wording of the questions.

15. In what order were the questions asked?
Sometimes the very order of the questions can have an impact on the results. Often that impact is intentional; sometimes it is not. The impact of order can often be subtle.


During troubled economic times, for example, if people are asked what they think of the economy before they are asked their opinion of the president, the presidential popularity rating will probably be lower than if you had reversed the order of the questions. And in good economic times, the opposite is true.

What is important here is whether the questions that were asked prior to the critical question in the poll could sway the results. If the poll asks questions about abortion just before a question about an abortion ballot measure, the prior questions could sway the results.

16. What about "push polls?"
In recent years, some political campaigns and special-interest groups have used a technique called "push polls" to spread rumors and even outright lies about opponents. These efforts are not polls, but political manipulation trying to hide behind the smokescreen of a public opinion survey.


In a "push poll," a large number of people are called by telephone and asked to participate in a purported survey. The survey "questions" are really thinly veiled accusations against an opponent or repetitions of rumors about a candidate’s personal or professional behavior. The focus is on making certain the respondent hears and understands the accusation in the question, not on gathering the respondent’s opinions.

"Push polls" are unethical and have been condemned by professional polling organizations.

"Push polls" must be distinguished from some types of legitimate surveys done by political campaigns. At times, a campaign poll may ask a series of questions about contrasting issue positions of the candidates – or various things that could be said about a candidate, some of which are negative. These legitimate questions seek to gauge the public’s reaction to a candidate’s position or to a possible legitimate attack on a candidate’s record.

A legitimate poll can usually be distinguished from a "push poll" by:

– The number of calls made: a "push poll" makes thousands and thousands of calls, instead of the hundreds made for most surveys.

– The identity of who is making the telephone calls: a polling firm for a scientific survey, as opposed to a telemarketing house or the campaign itself for a "push poll."

– The lack of any true gathering of results: a "push poll" has as its only objective the dissemination of false or misleading information.

17. What other polls have been done on this topic? Do they say the same thing? If they are different, why are they different?
Results of other polls – by a newspaper or television station, a public survey firm or even a candidate's opponent – should be used to check and contrast poll results you have in hand.


If the polls differ, first check the timing of the interviewing. If the polls were done at different times, the differing results may demonstrate a swing in public opinion.

If the polls were done about the same time, ask each poll sponsor for an explanation of the differences. Conflicting polls often make good stories.

18. What about exit polls?
Exit polls, properly conducted, are an excellent source of information about voters in a given election.  They are the only opportunity to survey actual voters and only voters. 


There are several issues that should be considered in reporting exit polls.  First, exit polls report how voters believe they cast their ballots.  The election of 2000 showed that voters may think they have voted for a candidate, but their votes may not have been recorded.  Or in some cases, voters actually voted for a different candidate than they thought they did.

Second, absentee voters are not included in many exit polls.  In states where a large number of voters vote either early or absentee, an absentee telephone poll may be combined with an exit poll to measure voter opinion; a sketch of such a combination appears after the third point below.  If in a specific case there are large numbers of absentee voters and no absentee poll, you should be careful to report that the exit poll covers only Election Day voters.

Third, make sure that the company conducting the exit poll has a track record.  Too many exit polls are conducted in too few voting locations by people who lack experience in this specialized method of polling.  Those results can be misleading.
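A minimal Python sketch of combining an exit poll with an absentee telephone poll, weighting each component survey by its share of the total vote. All shares and results here are hypothetical, and real combinations are considerably more elaborate.

    # Combine an exit poll (Election Day voters) with a telephone poll
    # (early/absentee voters), weighted by each group's share of the vote.
    # All figures below are hypothetical.

    share_election_day = 0.70   # fraction of ballots cast on Election Day
    share_absentee = 0.30       # fraction cast early or absentee

    exit_poll_a = 0.51   # Candidate A's share among Election Day voters
    phone_poll_a = 0.46  # Candidate A's share among absentee voters

    combined = (share_election_day * exit_poll_a
                + share_absentee * phone_poll_a)
    print(f"Combined estimate for Candidate A: {combined:.1%}")  # 49.5%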


19. What else needs to be included in the report of a poll?
The key element in reporting polls is context.  Not only should you compare the poll to others taken at the same time or earlier, but you also need to report on what events may have affected the poll results.


A good poll story not only reports the results of the poll but also assists the reader in the interpretation of those results.  If the poll shows a continued decline in consumer confidence even though leading economic indicators have improved, your report might include some analysis of whether or not people see improvement in their daily economic lives even though the indicators are on the rise.

If a candidate has shown marked improvement in a horse race, you might want to report about the millions of dollars spent on advertising immediately prior to the poll.

Putting the poll in context should be a major part of your reporting.

20. So I've asked all the questions.  The answers sound good.  Should we report the results?
Yes, because reputable polling organizations consistently do good work.


However, remember that the laws of chance alone say that the results of one poll out of 20 may be skewed away from the public's real views just because of sampling error.

Also remember that no matter how good the poll, no matter how wide the margin, no matter how big the sample, a pre-election poll does not show that one candidate has the race "locked up." Things change – often and dramatically in politics. That’s why candidates campaign.

If the poll was conducted correctly, and you have been able to obtain the information outlined here, your news judgment and that of your editors should be applied to polls, as it is to every other element of a story.

In spite of the difficulties, the public opinion survey, correctly conducted, is still the best objective measure of the views of the public.

This is a copyrighted publication of the National Council on Public Polls in keeping with its mission to help educate journalists on the use of public opinion polls.

The National Council on Public Polls hereby grants the right to duplicate this work in whole, but not in part, for any noncommercial purpose provided that any copy include all of the information on this page.

Sheldon R. Gawiser, Ph.D. is Director, Elections, NBC News.  G. Evans Witt is CEO, Princeton Survey Research Associates International.  They were cofounders of the Associated Press/NBC News Poll.

For any additional information on any aspect of polling or a specific poll, please call NCPP at 845.575.5050.

The price for a single printed copy is $2.95.  For educational discounts and multiple copies contact NCPP.