Sunday, August 31, 2014

Imperfect Airport Scanner

Researchers Find Security Flaws
in Backscatter X-ray Scanners
By Ioana Patringenaru, UC San Diego News Center, August 20, 2014

A team of researchers from the University of California, San Diego, the University of Michigan, and Johns Hopkins University has discovered several security vulnerabilities in full-body backscatter X-ray scanners deployed at U.S. airports between 2009 and 2013.

In laboratory tests, the team was able to successfully conceal firearms and plastic explosive simulants from the Rapiscan Secure 1000 scanner. The team was also able to modify the scanner’s operating software so that it presents an “all-clear” image to the operator even when contraband is detected. “Frankly, we were shocked by what we found,” said J. Alex Halderman, a professor of computer science at the University of Michigan. “A clever attacker can smuggle contraband past the machines using surprisingly low-tech techniques.”

The researchers attribute these shortcomings to the process by which the machines were designed and evaluated before their introduction at airports. “The system’s designers seem to have assumed that attackers would not have access to a Secure 1000 to test and refine their attacks,” said Hovav Shacham, a professor of computer science at UC San Diego. However, the researchers were able to purchase a government-surplus machine on eBay and subject it to laboratory testing.

Many physical security systems that protect critical infrastructure are evaluated in secret, without input from the public or independent experts, the researchers said. In the case of the Secure 1000, that secrecy did not produce a system that can resist attackers who study and adapt to new security measures. “Secret testing should be replaced or augmented by rigorous, public, independent testing of the sort common in computer security,” said Shacham.

Secure 1000 scanners were removed from airports in 2013 due to privacy concerns, and are now being repurposed for use in jails, courthouses, and other government facilities. The researchers have suggested changes to screening procedures that can reduce, but not eliminate, the scanners’ blind spots. However, “any screening process that uses these machines has to take into account their limitations,” said Shacham.

The researchers shared their findings with the Department of Homeland Security and Rapiscan, the scanner’s manufacturer, in May. The team will present their findings publicly at the USENIX Security conference, Thursday Aug. 21, in San Diego. Details of the results will be available at radsec.org on Aug. 20.

To contact the research team, e-mail radsec-team@umich.edu.

Saturday, August 30, 2014

The Growing Brain Comes First

A Long Childhood Feeds the Hungry Human Brain

Study of brain scans explains why children grow slowly and childhood lasts so long

By Erin White

EVANSTON, Ill. -- August 25, 2014 -- A five-year-old’s brain is an energy monster. It uses twice as much glucose (the energy that fuels the brain) as that of a full-grown adult, a new study led by Northwestern University anthropologists has found.

The study helps to solve the long-standing mystery of why human children grow so slowly compared with our closest animal relatives.

It shows that energy funneled to the brain dominates the human body’s metabolism early in life and is likely the reason why humans grow at a pace more typical of a reptile than a mammal during childhood.

Results of the study were published the week of Aug. 25 in the journal Proceedings of the National Academy of Sciences.

“Our findings suggest that our bodies can’t afford to grow faster during the toddler and childhood years because a huge quantity of resources is required to fuel the developing human brain,” said Christopher Kuzawa, first author of the study and a professor of anthropology at Northwestern’s Weinberg College of Arts and Sciences. “As humans we have so much to learn, and that learning requires a complex and energy-hungry brain.”

Kuzawa also is a faculty fellow at the Institute for Policy Research at Northwestern.

The study is the first to pool existing PET and MRI brain scan data -- which measure glucose uptake and brain volume, respectively -- to show that the ages when the brain gobbles the most resources are also the ages when body growth is slowest. At 4 years of age, when this “brain drain” is at its peak and body growth slows to its minimum, the brain burns through resources at a rate equivalent to 66 percent of what the entire body uses at rest.

The findings support a long-standing hypothesis in anthropology that children grow so slowly, and are dependent for so long, because the human body needs to shunt a huge fraction of its resources to the brain during childhood, leaving little to be devoted to body growth. It also helps explain some common observations that many parents may have.

“After a certain age it becomes difficult to guess a toddler or young child’s age by their size,” Kuzawa said. “Instead you have to listen to their speech and watch their behavior. Our study suggests that this is no accident. Body growth grinds nearly to a halt at the ages when brain development is happening at a lightning pace, because the brain is sapping up the available resources.” 

It was previously believed that the brain’s resource burden on the body was largest at birth, when the size of the brain relative to the body is greatest. The researchers found instead that the brain maxes out its glucose use at age 5. At age 4 the brain consumes glucose at a rate comparable to 66 percent of the body’s resting metabolic rate (or more than 40 percent of the body’s total energy expenditure). 
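
A rough way to see how the two percentages above fit together is to work the arithmetic directly. The sketch below is a back-of-the-envelope check in Python; the resting metabolic rate and activity factor are assumed placeholder values for a young child, and only the 66 percent figure comes from the study.

```python
# Back-of-the-envelope check of the figures quoted above. The RMR and
# activity factor below are assumed placeholders, not values from the paper.

rmr_kcal_per_day = 1000.0     # assumed resting metabolic rate for a ~4-year-old
brain_share_of_rmr = 0.66     # reported: brain use peaks at ~66% of RMR

brain_kcal_per_day = rmr_kcal_per_day * brain_share_of_rmr

# Total daily expenditure exceeds RMR; assume a modest activity multiplier.
activity_factor = 1.4
tdee_kcal_per_day = rmr_kcal_per_day * activity_factor

share_of_total = brain_kcal_per_day / tdee_kcal_per_day
print(f"Brain: {brain_kcal_per_day:.0f} kcal/day, "
      f"{share_of_total:.0%} of total expenditure")
# 0.66 / 1.4 is roughly 47%, consistent with "more than 40 percent" above.
```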

“The mid-childhood peak in brain costs has to do with the fact that synapses, connections in the brain, max out at this age, when we learn so many of the things we need to know to be successful humans,” Kuzawa said.

“At its peak in childhood, the brain burns through two-thirds of the calories the entire body uses at rest, much more than other primate species,” said William Leonard, co-author of the study. “To compensate for these heavy energy demands of our big brains, children grow more slowly and are less physically active during this age range. Our findings strongly suggest that humans evolved to grow slowly during this time in order to free up fuel for our expensive, busy childhood brains.”

Leonard is professor and chair of the department of anthropology at Northwestern’s Weinberg College of Arts and Sciences.

This study was a collaboration between researchers at Northwestern University, Wayne State University, Children’s Hospital of Michigan, Icahn School of Medicine at Mount Sinai, University of Illinois, George Washington University and Harvard Medical School.

The title of the paper, which is published in the Proceedings of the National Academy of Sciences, is “Energetic costs and evolutionary implications of human brain development.” Authors include Kuzawa and Leonard as well as Harry T. Chugani, Lawrence I. Grossman, Leonard Lipovich, Otto Muzik, Patrick R. Hof, Derek E. Wildman, Chet C. Sherwood and Nicholas Lange.

The study was funded by the U.S. National Science Foundation’s Biological Anthropology Program.

See more at: http://www.northwestern.edu/newscenter/stories/2014/08/a-long-childhood-feeds-the-hungry-human-brain.html

 

Friday, August 29, 2014

Halting Cancer Growth

Preventing cancer from forming 'tentacles' stops dangerous spread
Provided by University of Alberta to Medical Press, August 29, 2014

Roughly two in five Canadians will develop cancer in their lifetime, and one in four of them will die of the disease. In 2014, it is estimated that nine Canadians will die of cancer every hour. Thanks to advances in medical research and care, cancer can often be treated with high success if detected early. However, after it spreads, cancer becomes much more difficult to treat.

To spread, or "metastasize," cancer cells must enter the blood stream or lymph system, travel through its channels, and then exit to another area or organ in the body. This final exit is the least understood part of the metastatic process. Previous research has shown cancer cells are capable of producing "invadopodia," a type of extension that cells use to probe and change their environment. However, their significance in the escape of cancer cells from the bloodstream has been unclear.

In the study, the scientists injected fluorescent cancer cells into the bloodstream of test models, and then captured the fate of these cells using high-resolution time-lapse imaging. Results confirmed the cancer cells formed invadopodia to reach out of the bloodstream and into the tissue of the surrounding organs – they essentially formed "tentacles" that enabled the tumor cell to enter the organ. However, through genetic modification or drug treatment, the scientists were able to block the factors needed for invadopodia to form. This effectively stopped all attempts for the cancer to spread.

The study findings confirm invadopodia play a key role in the spread of cancer. Most importantly, they suggest an important new target for therapy. If a drug can be developed to prevent invadopodia from forming, it could potentially stop the spread of cancer.

"The spread of cancer works a lot like plane travel," says lead author Dr. Hon Leong, now a Scientist at Lawson Health Research Institute and Western University. "Just as a person boards an airplane and travels to their destination, tumor cells enter the bloodstream and travel to distant organs like the liver, lungs, or brain. The hard part is getting past border control and airport security, or the vessels, when they arrive. We knew that cancer cells were somehow able to get past these barriers and spread into the organs. Now, for the first time, we know how."

"Metastasis is the deadliest aspect of cancer, responsible for some 90% of cancer deaths," says Dr. John Lewis, the Frank and Carla Sojonky Chair in Prostate Cancer Research at the University of Alberta. "These new insights give us both a new approach and a clinical window of opportunity to reduce or block the spread of cancer".

What ISIS Wants in Iraq and Syria

What does ISIS want? “A complete theocracy.” Take a look at this review of the situation authored by former MI6 official Alastair Crooke, then hunt through the comments for Glen R. Davis and his masterful “game of Risk” insight. I don’t think the current White House knows what it is up against here.


Glen R. Davis of Washington State University commented on the above with this remark:

The Saudi Wahhabists will use the ISIS attacks to take control of Iraq, to consolidate their power and wealth (Saudi oil fields are running down) and to confront the Shiite powers in Iraq and Iran. The long-term goal is to first unite the Middle East (Iraq, Jordan, UAE, Qatar, Yemen, Syria, Lebanon, Turkey, Palestine, Israel); and then Africa (Egypt, Libya, Tunisia, Algeria, Morocco, the rest of Northern Africa, then Middle/Eastern Africa). Then, even longer out: Iran, Afghanistan, Pakistan, Azerbaijan, Turkmenistan, Uzbekistan, Tajikistan, India, Bangladesh; all the southern-tier countries from the old caliphates of the Ottoman, Safavid, Mughal and Abbasid Empires and beyond. Then Indonesia, Malaysia and the Philippines. It is like a big game of "Risk", driven by their fanaticism and sense of inerrancy and religious necessity.
 

Wednesday, August 27, 2014

Changes in the Retina Predict Dementia


Gladstone scientists show that retinal thinning can be used as an early marker for frontotemporal dementia, prior to the onset of cognitive symptoms.


SAN FRANCISCO, CA—August 25, 2014—Researchers at the Gladstone Institutes and University of California, San Francisco have shown that a loss of cells in the retina is one of the earliest signs of frontotemporal dementia (FTD) in people with a genetic risk for the disorder—even before any changes appear in their behavior.

In the study, published today in the Journal of Experimental Medicine, researchers led by Gladstone investigator Li Gan, PhD, and UCSF associate professor of neurology Ari Green, MD, studied a group of individuals carrying a genetic mutation that is known to result in FTD. They discovered that before any cognitive signs of dementia were present, these individuals showed a significant thinning of the retina compared with people who did not have the gene mutation.

“This finding suggests that the retina acts as a type of ‘window to the brain,’” said Dr. Gan. “Retinal degeneration was detectable in mutation carriers prior to the onset of cognitive symptoms, establishing retinal thinning as one of the earliest observable signs of familial FTD. This means that retinal thinning could be an easily measured outcome for clinical trials.”

Although it is located in the eye, the retina is made up of neurons with direct connections to the brain. This means that studying the retina is one of the easiest and most accessible ways to examine and track changes in neurons.

Lead author Michael Ward, MD, PhD, a postdoctoral fellow at the Gladstone Institutes and assistant professor of neurology at UCSF, explained, “The retina may be used as a model to study the development of FTD in neurons. If we follow these patients over time, we may be able to correlate a decline in retinal thickness with disease progression. In addition, we may be able to track the effectiveness of a treatment through a simple eye examination.”

The researchers also discovered new mechanisms by which cell death occurs in FTD. As with most complex neurological disorders, there are several changes in the brain that contribute to the development of FTD. In the inherited form researched in the current study, this includes a deficiency of the protein progranulin, which is tied to the mislocalization of another crucial protein, TDP-43, from the nucleus of the cell out to the cytoplasm.

However, the relationship between neurodegeneration, progranulin, and TDP-43 was previously unclear. In follow-up studies using a genetic mouse model of FTD, the scientists were able to investigate this connection for the first time in neurons from the retina. They identified a depletion of TDP-43 from the cell nuclei before any signs of neurodegeneration occurred, signifying that this loss may be a direct cause of the cell death associated with FTD.

TDP-43 levels were shown to be regulated by a third cellular protein called Ran. By increasing expression of Ran, the researchers were able to elevate TDP-43 levels in the nucleus of progranulin-deficient neurons and prevent their death.

“With these findings,” said Dr. Gan, “we now not only know that retinal thinning can act as a pre-symptomatic marker of dementia, but we’ve also gained an understanding into the underlying mechanisms of frontotemporal dementia that could potentially lead to novel therapeutic targets.”

Tuesday, August 26, 2014

Prions -- simplest infectious diseases

A prion in the Scrapie form (PrPSc) is an infectious agent composed of protein in a misfolded form. This is the central idea of the Prion Hypothesis, which remains debated. This would be in contrast to all other known infectious agents, like viruses, bacteria, fungi or parasites, which must contain nucleic acids (either DNA, RNA, or both). The word prion, coined in 1982 by Stanley B. Prusiner, is derived from the words protein and infection. Prions are responsible for the transmissible spongiform encephalopathies in a variety of mammals, including bovine spongiform encephalopathy (BSE, also known as "mad cow disease") in cattle. In humans, prions cause Creutzfeldt-Jakob Disease (CJD), variant Creutzfeldt-Jakob Disease (vCJD), Gerstmann-Sträussler-Scheinker syndrome, Fatal Familial Insomnia and kuru. All known prion diseases in mammals affect the structure of the brain or other neural tissue, and all are currently untreatable and universally fatal. In 2013, a study revealed that 1 in 2,000 people in the United Kingdom might harbour the infectious prion protein that causes vCJD.

Prions are not considered living organisms but may propagate by transmitting a misfolded protein state. If a prion enters a healthy organism, it induces existing, properly folded proteins to convert into the disease-associated, prion form; the prion acts as a template to guide the misfolding of more proteins into prion form. These newly formed prions can then go on to convert more proteins themselves; this triggers a chain reaction that produces large amounts of the prion form. All known prions induce the formation of an amyloid fold, in which the protein polymerises into an aggregate consisting of tightly packed beta sheets. Amyloid aggregates are fibrils, growing at their ends, and replicating when breakage causes two growing ends to become four growing ends. The incubation period of prion diseases is determined by the exponential growth rate associated with prion replication, which is a balance between the linear growth and the breakage of aggregates. (Note that the propagation of the prion depends on the presence of normally folded protein in which the prion can induce misfolding; animals that do not express the normal form of the prion protein can neither develop nor transmit the disease.)
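
The growth-and-breakage picture in the paragraph above can be captured in a two-variable model: total fibril mass grows linearly at the ends, while breakage multiplies the number of fibrils, and together they yield exponential growth. A minimal sketch, with assumed, purely illustrative rate constants:

```python
# Minimal sketch of the linear-growth-plus-fragmentation model described
# above. Rate constants are assumed, illustrative values only.

k_elong = 1.0   # monomer additions per growing end per unit time (assumed)
k_frag = 0.01   # breakage events per unit of fibril mass per unit time (assumed)

fibrils = 1.0   # number of fibrils (each grows at two ends)
mass = 10.0     # total misfolded protein in fibrils (arbitrary units)

dt, t = 0.01, 0.0
while t < 20.0:
    d_mass = 2 * k_elong * fibrils * dt   # linear growth at both ends
    d_fibrils = k_frag * mass * dt        # breakage: one fibril becomes two
    mass += d_mass
    fibrils += d_fibrils
    t += dt

# The balance of elongation and breakage gives exponential growth at rate
# sqrt(2 * k_elong * k_frag), which sets the incubation time scale.
rate = (2 * k_elong * k_frag) ** 0.5
print(f"mass at t=20: {mass:.0f} (predicted growth rate {rate:.3f}/time unit)")
```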

This altered structure is extremely stable and accumulates in infected tissue, causing tissue damage and cell death. This structural stability means that prions are resistant to denaturation by chemical and physical agents, making disposal and containment of these particles difficult. Prions come in different strains, each with a slightly different structure, and, most of the time, strains breed true. Prion replication is nevertheless subject to occasional epimutation and then natural selection just like other forms of replication.

All known mammalian prion diseases are caused by the so-called prion protein, PrP. The endogenous, properly folded form is denoted PrPC (for Common or Cellular), whereas the disease-linked, misfolded form is denoted PrPSc (for Scrapie, after one of the diseases first linked to prions and neurodegeneration.) The precise structure of the prion is not known, though they can be formed by combining PrPC, polyadenylic acid, and lipids in a Protein-Misfolding Cyclic Amplification (PMCA) reaction.

Proteins showing prion-type behavior are also found in some fungi, which has been useful in helping to understand mammalian prions. Fungal prions do not appear to cause disease in their hosts.

Monday, August 25, 2014

A Primer on Viruses

A virus is a biological agent that reproduces inside the cells of living hosts. When infected by a virus, a host cell is forced to produce many thousands of identical copies of the original virus, at an extraordinary rate. Unlike most living things, viruses do not have cells that divide; new viruses are assembled in the infected host cell. But unlike still simpler infectious agents, viruses contain genes, which give them the ability to mutate and evolve. Over 5,000 species of viruses have been discovered.

The origins of viruses are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. A virus consists of two or three parts: genes, made from either DNA or RNA, long molecules that carry genetic information; a protein coat that protects the genes; and in some viruses, an envelope of fat that surrounds and protects them when they are not contained within a host cell. Viruses vary in shape from the simple helical and icosahedral to more complex structures. Viruses range in size from 20 to 300 nanometres; it would take about 33,000 to 500,000 of them, side by side, to stretch to 1 centimetre (0.39 in).
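
That comparison follows directly from the quoted size range, as a quick check shows:

```python
# Quick sanity check of the size comparison above: how many viruses of a
# given size fit side by side in one centimetre?

NM_PER_CM = 1e7   # 1 cm = 10,000,000 nm

for size_nm in (300, 20):
    print(f"{size_nm:>3} nm: {NM_PER_CM / size_nm:,.0f} viruses per centimetre")

# 300 nm -> ~33,333 per cm; 20 nm -> 500,000 per cm.
```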

Viruses spread in many ways. Just as many viruses are very specific as to which host species or tissue they attack, each species of virus relies on a particular method for propagation. Plant viruses are often spread from plant to plant by insects and other organisms, known as vectors. Some viruses of animals, including humans, are spread by exposure to infected bodily fluids. Viruses such as influenza are spread through the air by droplets of moisture when people cough or sneeze. Viruses such as norovirus are transmitted by the faecal-oral route, which involves the contamination of hands, food and water. Rotavirus is often spread by direct contact with infected children. The human immunodeficiency virus, HIV, is transmitted by bodily fluids transferred during sex. Others, such as the Dengue virus, are spread by blood-sucking insects.

Viral infections can cause disease in humans, animals and even plants. However, they are usually eliminated by the immune system, conferring lifetime immunity to the host for that virus. Antibiotics have no effect on viruses, but antiviral drugs have been developed to treat life-threatening infections. Vaccines that produce lifelong immunity can prevent some viral infections.

Viruses and Diseases

Common human diseases caused by viruses include the common cold, the flu, chickenpox and cold sores. Serious diseases such as Ebola and AIDS are also caused by viruses. Many viruses cause little or no disease and are said to be "benign". The more harmful viruses are described as virulent. Viruses cause different diseases depending on the types of cell that they infect. Some viruses can cause lifelong or chronic infections where the viruses continue to reproduce in the body despite the host's defence mechanisms. This is common in hepatitis B virus and hepatitis C virus infections. People chronically infected with a virus are known as carriers. They serve as important reservoirs of the virus. If there is a high proportion of carriers in a given population, a disease is said to be endemic.

There are many ways in which viruses spread from host to host but each species of virus uses only one or two. Many viruses that infect plants are carried by organisms; such organisms are called vectors. Some viruses that infect animals, including humans, are also spread by vectors, usually blood-sucking insects. However, direct transmission is more common. Some virus infections, (norovirus and rotavirus), are spread by contaminated food and water, hands and communal objects and by intimate contact with another infected person, while others are airborne (influenza virus). Viruses such as HIV, hepatitis B and hepatitis C are often transmitted by unprotected sex or contaminated hypodermic needles. It is important to know how each different kind of virus is spread to prevent infections and epidemics.

Friday, August 22, 2014

The 1911 Mona Lisa Theft

Vandalism and Theft of the Mona Lisa

The painting’s fame grew when it was stolen on 21 August 1911. The next day, Louis Béroud, a painter, walked into the Louvre and went to the Salon Carré, where the Mona Lisa had been on display for five years. However, where the Mona Lisa should have stood, he found four iron pegs. Béroud contacted the section head of the guards, who thought the painting was being photographed for marketing purposes. A few hours later, Béroud checked back with the section head of the museum, and it was confirmed that the Mona Lisa was not with the photographers. The Louvre was closed for an entire week to aid in the investigation of the theft.

French poet Guillaume Apollinaire, who had once called for the Louvre to be "burnt down", came under suspicion; he was arrested and put in jail. Apollinaire tried to implicate his friend Pablo Picasso, who was also brought in for questioning, but both were later exonerated.

At the time, the painting was believed to be lost forever, and it was two years before the real thief was discovered. Louvre employee Vincenzo Peruggia had stolen it by entering the building during regular hours, hiding in a broom closet and walking out with it hidden under his coat after the museum had closed. Peruggia was an Italian patriot who believed Leonardo’s painting should be returned to Italy for display in an Italian museum. Peruggia may also have been motivated by a friend whose copies of the original would rise significantly in value after the painting’s theft. A later account suggested Eduardo de Valfierno had been the mastermind of the theft and had commissioned forger Yves Chaudron to create six copies of the painting to be sold in the United States while the location of the original was unclear. But the original remained in Europe. After keeping the Mona Lisa in his apartment for two years, Peruggia grew impatient and was finally caught when he attempted to sell it to the directors of the Uffizi Gallery in Florence. The painting was exhibited all over Italy and returned to the Louvre in 1913. Peruggia was hailed for his patriotism in Italy and served six months in jail for the crime.

In 1956, part of the painting was damaged when a vandal threw acid at it. On 30 December of that same year, the painting was damaged again when a rock was thrown at it, resulting in the loss of a speck of pigment near the left elbow, which was later restored.

The use of bulletproof glass has shielded the Mona Lisa from more recent attacks. In April 1974 a "lame woman", upset by the museum's policy for disabled people, sprayed red paint at the painting while it was on display at the Tokyo National Museum. On 2 August 2009, a Russian woman, distraught over being denied French citizenship, threw a terra cotta mug or teacup, purchased at the museum, at the painting in the Louvre; the vessel shattered against the glass enclosure. In both cases, the painting was undamaged.

Thursday, August 21, 2014

Science Protocols Are Essential

In the natural sciences a protocol is a predefined written procedural method used in the design and implementation of experiments. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. Detailed protocols also facilitate the assessment of results through peer review. In addition to detailed procedures and lists of required equipment and instruments, protocols often include information on safety precautions, the calculation of results and reporting standards, including statistical analysis and rules for predefining and documenting excluded data to avoid bias. Protocols are employed in a wide range of experimental fields, from social science to quantum mechanics. Written protocols are also employed in manufacturing to ensure consistent quality.

Overview

Formal protocols are the general rule in fields of applied science, such as environmental and medical studies that require the coordinated, standardized work of many participants. Such predefined protocols are an essential component of Good Laboratory Practice (GLP) and Good Clinical Practice (GCP) regulations. Protocols written for use by a specific laboratory may incorporate or reference standard operating procedures (SOP) governing general practices required by the laboratory. A protocol may also reference applicable laws and regulations that are applicable to the procedures described. Formal protocols typically require approval by a laboratory official before they are implemented for general use.

Manufacturing protocols are required by current good manufacturing practice (cGMP) for the manufacture of foods, pharmaceuticals, and medical devices.

In a clinical trial, the protocol is carefully designed to safeguard the health of the participants as well as answer specific research questions. A protocol describes what types of people may participate in the trial; the schedule of tests, procedures, medications, and dosages; and the length of the study. While in a clinical trial, participants following a protocol are seen regularly by research staff to monitor their health and to determine the safety and effectiveness of their treatment. Since 1996, clinical trials have been widely expected to conform to and report the information called for in the CONSORT Statement, which provides a framework for designing and reporting protocols. Though tailored to health and medicine, the ideas in the CONSORT Statement are broadly applicable to other fields where experimental research is used. Clearly defined protocols are also required for research funded by the National Institutes of Health.

Safety

Safety precautions are a valuable addition to a protocol, and can range from requiring goggles to provisions for containment of microbes, environmental hazards, toxic substances, and volatile solvents. Procedural contingencies in the event of an accident may be included in a protocol or in a referenced SOP.

Procedures

Procedural information may include not only safety procedures but also procedures for avoiding contamination, calibration of equipment, equipment testing, documentation, and all other relevant issues. These procedural protocols can be used by skeptics to invalidate any claimed results if flaws are found.

Equipment

Equipment testing and documentation includes all necessary specifications, calibrations, operating ranges, etc. Environmental factors such as temperature, humidity, barometric pressure, and other factors can often have effects on results. Documenting these factors should be a part of any good procedure.

Calculations, statistics and bias

Protocols for methods that produce numerical results generally include detailed formulae for calculation of results. Formula may also be included for preparation of reagents and other solutions required for the work. Methods of statistical analysis may be included to guide interpretation of the data.

Many protocols include provisions for avoiding bias in the interpretation of results. Approximation error is common to all measurements. These errors can be absolute errors from limitations of the equipment or propagation errors from approximate numbers used in calculations. Sample bias is the most common and sometimes the hardest bias to quantify. Statisticians often go to great lengths to ensure that the sample used is representative. For instance, political polls are best when restricted to likely voters, and this is one of the reasons why web polls cannot be considered scientific. Sample size is another important concept and can lead to biased data simply due to an unlikely event. A sample size of 10, i.e. polling 10 people, will seldom give valid polling results. Standard deviation and variance are concepts used to quantify the likely relevance of a given sample size. The mass media and the public often use average and mean values interchangeably, which can lead to dubious and even misleading arguments. The placebo effect and observer bias often require an experiment to use a double blind protocol and a control group.
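
To make the sample-size point concrete, the simulation below polls a synthetic population in which 52 percent support a candidate; the support level and poll sizes are invented for illustration.

```python
# Simulated polls of a population with 52% support for a candidate. The
# spread (standard deviation) of poll results shrinks as the sample grows.

import random
import statistics

random.seed(1)
TRUE_SUPPORT = 0.52

def run_poll(n):
    """Return the support fraction observed in one poll of n respondents."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(n)) / n

for n in (10, 100, 1000):
    results = [run_poll(n) for _ in range(500)]
    print(f"n={n:>4}: mean={statistics.mean(results):.3f}, "
          f"std dev={statistics.stdev(results):.3f}")

# The scatter falls roughly as 1/sqrt(n): a 10-person poll wanders by about
# +/-16 points, a 1000-person poll by about +/-1.6 points.
```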

Blinded protocols

A protocol may require blinding to avoid bias.

A single blind protocol requires that the experimenter does not know the identity of samples or animals during the testing and calculations. It is appropriate when no human subjects are involved.

A double blind protocol comes into play when human subjects are tested; it requires ensuring that neither the experimenter nor the experimental subjects have knowledge of the identity of the treatments or the results until after the experiment is complete.

An experimenter may have latitude in defining procedures for blinding and controls, but may be required to justify those choices if the results are published or submitted to a regulatory agency. When it is known during the experiment which data are negative, there are often reasons to rationalize why those data shouldn't be included. Positive data are rarely rationalized away in the same way.
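
As an illustration of the blinding described above, the sketch below relabels samples with random codes so the analyst works without knowing treatment assignments; the sample names and groups are hypothetical.

```python
# Hypothetical sketch of sample blinding: samples get random codes, and the
# key linking codes back to samples is withheld from the analyst until the end.

import random

samples = {"S1": "treatment", "S2": "control", "S3": "treatment", "S4": "control"}

codes = [f"BLIND-{i:03d}" for i in range(1, len(samples) + 1)]
random.shuffle(codes)

blinding_key = dict(zip(codes, samples))   # code -> sample ID; kept sealed
blinded_ids = sorted(blinding_key)         # all the analyst sees

print("Analyze these:", blinded_ids)

# After all measurements are recorded, the key is opened to recover groups:
# group_of = {code: samples[sid] for code, sid in blinding_key.items()}
```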

Reporting

A protocol may specify reporting requirements. These would include all elements of the experiment's design and protocol, and any environmental factors or mechanical limitations that might affect the validity of the results.

Tuesday, August 19, 2014

The Brains of Children

New Research Sheds Light on How
Children’s Brains Memorize Facts
Stanford Medicine News Center, August 17, 2014

As children shift from counting on their fingers to remembering math facts, the hippocampus and its functional circuits support the brain’s construction of adult-like ways of using memory.

As children learn basic arithmetic, they gradually switch from solving problems by counting on their fingers to pulling facts from memory. The shift comes more easily for some kids than for others, but no one knows why.

Now, new brain-imaging research gives the first evidence drawn from a longitudinal study to explain how the brain reorganizes itself as children learn math facts. A precisely orchestrated group of brain changes, many involving the memory center known as the hippocampus, are essential to the transformation, according to a study from the Stanford University School of Medicine.

The results, published online Aug. 17 in Nature Neuroscience, explain brain reorganization during normal development of cognitive skills and will serve as a point of comparison for future studies of what goes awry in the brains of children with learning disabilities.

“We wanted to understand how children acquire new knowledge, and determine why some children learn to retrieve facts from memory better than others,” said Vinod Menon, PhD, the Rachael L. and Walter F. Nichols, MD, Professor and professor of psychiatry and behavioral sciences, and the senior author of the study. “This work provides insight into the dynamic changes that occur over the course of cognitive development in each child.”

The study also adds to prior research into the differences between how children’s and adults’ brains solve math problems. Children use certain brain regions, including the hippocampus and the prefrontal cortex, very differently from adults when the two groups are solving the same types of math problems, the study showed.

“It was surprising to us that the hippocampal and prefrontal contributions to memory-based problem-solving during childhood don’t look anything like what we would have expected for the adult brain,” said postdoctoral scholar Shaozheng Qin, PhD, who is the paper’s lead author.

Charting the shifting strategy


In the study, 28 children solved simple math problems while receiving two functional magnetic resonance imaging brain scans; the scans were done about 1.2 years apart. The researchers also scanned 20 adolescents and 20 adults at a single time point. At the start of the study, the children were ages 7-9. The adolescents were 14-17 and the adults were 19-22. The participants had normal IQs. Because the study examined normal math learning, potential participants with math-related learning disabilities and attention deficit hyperactivity disorder were excluded. The children and adolescents were studying math in school; the researchers did not provide any math instruction.

During the study, as the children aged from an average of 8.2 to 9.4 years, they became faster and more accurate at solving math problems, and relied more on retrieving math facts from memory and less on counting. As these shifts in strategy took place, the researchers saw several changes in the children’s brains. The hippocampus, a region with many roles in shaping new memories, was activated more in children’s brains after one year. Regions involved in counting, including parts of the prefrontal and parietal cortex, were activated less.

The scientists also saw changes in the degree to which the hippocampus was connected to other parts of children’s brains, with several parts of the prefrontal, anterior temporal cortex and parietal cortex more strongly connected to the hippocampus after one year. Crucially, the stronger these connections, the greater was each individual child’s ability to retrieve math facts from memory, a finding that suggests a starting point for future studies of math-learning disabilities.

Although children were using their hippocampus more after a year, adolescents and adults made minimal use of their hippocampus while solving math problems. Instead, they pulled math facts from well-developed information stores in the neocortex.

Memory scaffold


“What this means is that the hippocampus is providing a scaffold for learning and consolidating facts into long-term memory in children,” said Menon, who is also the Rachael L. and Walter F. Nichols, MD, Professor at the medical school. Children’s brains are building a schema for mathematical knowledge. The hippocampus helps support other parts of the brain as adult-like neural connections for solving math problems are being constructed. “In adults this scaffold is not needed because memory for math facts has most likely been consolidated into the neocortex,” he said. Interestingly, the research also showed that, although the adult hippocampus is not as strongly engaged as in children, it seems to keep a backup copy of the math information that adults usually draw from the neocortex.

The researchers compared the level of variation in patterns of brain activity as children, adolescents and adults correctly solved math problems. The brain’s activity patterns were more stable in adolescents and adults than in children, suggesting that as the brain gets better at solving math problems its activity becomes more consistent.

The next step, Menon said, is to compare the new findings about normal math learning to what happens in children with math-learning disabilities.

“In children with math-learning disabilities, we know that the ability to retrieve facts fluently is a basic problem, and remains a bottleneck for them in high school and college,” he said. “Is it that the hippocampus can’t provide a reliable scaffold to build good representations of math facts in other parts of the brain during the early stages of learning, and so the child continues to use inefficient strategies to solve math problems? We want to test this.”

Other Stanford co-authors of the study are former postdoctoral scholar Soohyun Cho, PhD; postdoctoral scholar Tianwen Chen, PhD; and Miriam Rosenberg-Lee, PhD, instructor in psychiatry and behavioral sciences.

The research was supported by the National Institutes of Health (grants HD047520, HD059205 and MH101394), Stanford’s Child Health Research Institute, the Lucile Packard Foundation for Children’s Health, Stanford’s Clinical and Translational Science Award (grant UL1RR025744) and the Netherlands Organization for Scientific Research.

Monday, August 18, 2014

The Size of an Exoplanet


Exoplanet Measured with Remarkable Precision

By Dr. Tony Phillips

NASA -- August 18, 2014:  Barely 30 years ago, the only planets astronomers had found were located right here in our own solar system.  The Milky Way is chock-full of stars, millions of them similar to our own sun.  Yet the tally of known worlds in other star systems was exactly zero.

What a difference a few decades can make.

As 2014 unfolds, astronomers have not only found more than a thousand "exoplanets" circling distant suns, but also they're beginning to make precise measurements of them.  The old void of ignorance about exoplanets is now being filled with data precise to the second decimal place.

A team led by Sarah Ballard, a NASA Carl Sagan Fellow at the University of Washington in Seattle, recently measured the diameter of a "super Earth" to within 148 miles, or about 1 percent — remarkable accuracy for an exoplanet located about 300 light years from Earth.

"It does indeed seem amazing," says Ballard. "The landscape of exoplanet research has changed to an almost unrecognizable degree since I started graduate school in 2007."

To size up the planet, named "Kepler-93b," Ballard used data from NASA's Kepler and Spitzer Space Telescopes.

First, Kepler discovered the planet. As seen from Earth, Kepler-93b passes directly in front of its parent star, causing the starlight to dim during the transit. That dimming, which occurs once per orbit, is what allowed Kepler mission scientists to find the planet in the first place.

Next, both Spitzer and Kepler recorded multiple transits at visible and infrared wavelengths. Data from the observatories agreed: Kepler-93b was really a planet and not some artefact of stellar variability. Ballard then knew that by looking carefully at the light curve she could calculate the size of the planet relative to the star.

At that point, the only missing piece was the diameter of the star itself.

"The precision with which we measured the size of the planet is linked directly to our measurement of the star," says Ballard.  "And we measured the star using a technique called asteroseismology."
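
Putting the two measurements together is simple: the transit depth gives the planet-to-star radius ratio, and asteroseismology supplies the stellar radius. In the sketch below, the transit depth and stellar radius are illustrative assumed values, not the team's published numbers.

```python
# How the two measurements combine: the fractional transit dimming is
# approximately (Rp / Rs)^2, so Rp = Rs * sqrt(depth). Inputs are assumed.

import math

R_SUN_KM = 695_700
R_EARTH_KM = 6_371

transit_depth = 2.2e-4                 # assumed fractional dimming (~220 ppm)
stellar_radius_km = 0.92 * R_SUN_KM    # assumed, as from asteroseismology

planet_radius_km = stellar_radius_km * math.sqrt(transit_depth)
print(f"Planet radius: {planet_radius_km:,.0f} km "
      f"= {planet_radius_km / R_EARTH_KM:.2f} Earth radii")
# With these inputs: roughly 9,500 km, about 1.5 Earth radii, a super-Earth.
```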

Most people have heard of "seismology," the study of seismic waves moving through the Earth.  "We can learn a lot about the structure of our planet by studying seismic waves," she says.

Asteroseismology is the same thing, except for stars: The outer layers of stars boil like water on top of a hot stove.  Those convective motions create seismic waves that bounce around inside the core, causing the star to ring like an enormous bell.  Kepler can detect that "ringing," which reveals itself as fluctuations in a star's brightness.

Ballard's colleague, University of Birmingham professor Bill Chaplin, led the asteroseismic analysis for Kepler-93b. "By analyzing the seismic modes of the star, he was able to deduce its radius and mass to an accuracy of a percent," she says.

The new measurements confirm that Kepler-93b is a "super-Earth" sized exoplanet, with a diameter about one-and-a-half times that of our planet. Previous measurements by the Keck Observatory in Hawaii had put Kepler-93b's mass at about 3.8 times that of Earth. The density of Kepler-93b, derived from its mass and newly obtained radius, suggests the planet is very likely made of iron and rock, like Earth itself.
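
The composition inference follows from straightforward arithmetic on the quoted mass and radius. In the sketch below, Earth values provide the scale, and 1.48 Earth radii stands in for "about one-and-a-half times":

```python
# Density from mass and radius: density = mass / ((4/3) * pi * radius^3).
# 3.8 Earth masses is quoted above; 1.48 Earth radii is an assumed stand-in
# for "about one-and-a-half times" Earth's size.

import math

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

mass = 3.8 * EARTH_MASS_KG
radius = 1.48 * EARTH_RADIUS_M

density = mass / ((4 / 3) * math.pi * radius ** 3)
print(f"Density: {density / 1000:.1f} g/cm^3")   # Earth is ~5.5 g/cm^3

# Roughly 6.5 g/cm^3: denser than Earth, consistent with iron and rock.
```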

Although super-Earths are common in the galaxy, none exist in our solar system. That makes them tricky to study.  Ballard's team has shown, however, that it is possible to learn a lot about an exoplanet even when it is very far away.

Sunday, August 17, 2014

Summary of Asymmetric Warfare

Asymmetric warfare is war between belligerents whose relative military power differs significantly, or whose strategy or tactics differ significantly.

Asymmetric warfare can describe a conflict in which the resources of two belligerents differ in essence, and in the struggle the two sides interact and attempt to exploit each other's characteristic weaknesses. Such struggles often involve strategies and tactics of unconventional warfare, the weaker combatants attempting to use strategy to offset deficiencies in quantity or quality. Such strategies may not necessarily be militarized. This is in contrast to symmetric warfare, where two powers have similar military power and resources and rely on tactics that are similar overall, differing only in details and execution.

The term is frequently used to describe what is also called "guerrilla warfare", "insurgency", "terrorism", "counterinsurgency", and "counterterrorism", essentially violent conflict between a formal military and an informal, less equipped and supported, undermanned but resilient opponent.

Definition and Differences

The popularity of the term dates from Andrew J.R. Mack's 1975 article "Why Big Nations Lose Small Wars" in World Politics, in which "asymmetric" referred simply to a significant disparity in power between opposing actors in a conflict. "Power," in this sense, is broadly understood to mean material power, such as a large army, sophisticated weapons, an advanced economy, and so on. Mack's analysis was largely ignored in its day, but the end of the Cold War sparked renewed interest among academics. By the late 1990s, new research building on Mack's insights was beginning to mature, and, after 2004, the U.S. military began once again to seriously consider the problems associated with asymmetric warfare.

Discussion since 2004 has been complicated by the tendency of academic and military communities to use the term in different ways, and by its close association with guerrilla warfare, insurgency, terrorism, counterinsurgency, and counterterrorism. Military authors tend to use the term "asymmetric" to refer to the indirect nature of the strategies many weak actors adopt, or even to the nature of the adversary itself (e.g., "asymmetric adversaries can be expected to ...") rather than to the correlation of forces.

Academic authors tend to focus more on explaining the puzzle of weak actor victory in war: if "power," conventionally understood, conduces to victory in war, then how is the victory of the "weak" over the "strong" explained? Key explanations include (1) strategic interaction; (2) willingness of the weak to suffer more or bear higher costs; (3) external support of weak actors; (4) reluctance to escalate violence on the part of strong actors; (5) internal group dynamics and (6) inflated strong actor war aims. Asymmetric conflicts include both interstate and civil wars, and over the past two hundred years have generally been won by strong actors. Since 1950, however, weak actors have won a majority of all asymmetric conflicts.

Strategic Basis

In most conventional warfare, the belligerents deploy forces of a similar type, and the outcome can be predicted by the quantity or the quality of the opposing forces, for example by better command and control (C2). There are times when this is not true because the composition or strategy of the forces makes it impossible for either side to close in battle with the other. An example is the standoff between the continental land forces of the French army and the maritime forces of the United Kingdom's Royal Navy during the French Revolutionary and Napoleonic Wars, a confrontation that Napoleon Bonaparte described as that between the elephant and the whale. In the words of Admiral Jervis during the campaigns of 1801, "I do not say, my Lords, that the French will not come. I say only they will not come by sea."
 
Tactical Basis

The tactical success of asymmetric warfare is dependent on at least some of the following assumptions:

  • One side can have a technological advantage which outweighs the numerical advantage of the enemy; the decisive English longbow at the Battle of Crécy is an example.

  • Technological superiority can be cancelled out by the stronger side's more vulnerable infrastructure, which can be targeted with devastating results. Destruction of multiple electric lines, roads or water supply systems in highly populated areas could have devastating effects on economy and morale, while the weaker side may not have these structures at all.

  • Training and tactics, as well as technology, can prove decisive and allow a smaller force to overcome a much larger one. For example, for several centuries the Greek hoplites' (heavy infantry) use of the phalanx made them far superior to their enemies. The Battle of Thermopylae, which also involved good use of terrain, is a well-known example.

  • If the inferior power is in a position of self-defense, i.e., under attack or occupation, it may be possible to use unconventional tactics, such as hit-and-run and selective battles in which the superior power is weaker, as an effective means of harassment without violating the laws of war. Perhaps the classic historical examples of this doctrine may be found in the American Revolutionary War and in resistance movements during World War II, such as the French Resistance and the Soviet and Yugoslav partisans. Against democratic aggressor nations, this strategy can be used to wear down the electorate's patience with the conflict (as in the Vietnam War, and others since), provoking protests and consequent disputes among elected legislators.

  • If the inferior power is in an aggressive position, however, and/or turns to tactics prohibited by the laws of war (jus in bello), its success depends on the superior power's refraining from like tactics. For example, the law of land warfare prohibits the use of a flag of truce or clearly marked medical vehicles as cover for an attack or ambush, but an asymmetric combatant using this prohibited tactic to its advantage depends on the superior power's obedience to the corresponding law. Similarly, laws of warfare prohibit combatants from using civilian settlements, populations or facilities as military bases, but when an inferior power uses this tactic, it depends on the premise that the superior power will respect the law that the other is violating and will not attack that civilian target, or that if it does, the propaganda advantage will outweigh the material loss. In most conflicts of the 20th and 21st centuries, the propaganda advantage has indeed tended to outweigh strict adherence to international law, especially for the dominant side of a conflict.

Saturday, August 16, 2014

The FBI's Biggest Case

The Duquesne Spy Ring is the largest espionage case in United States history that ended in convictions. A total of thirty-three members of a German espionage network headed by Frederick "Fritz" Joubert Duquesne were convicted after a lengthy espionage investigation by the Federal Bureau of Investigation (FBI). Of those arrested on the charge of espionage, 19 pleaded guilty. The remaining 14 men, who entered pleas of not guilty, were brought to jury trial in Federal District Court, Brooklyn, New York, on September 3, 1941; all were found guilty on December 13, 1941. On January 2, 1942, the group was sentenced to serve a total of over 300 years in prison.

The German spies who formed the Duquesne spy ring were placed in key jobs in the United States to get information that could be used in the event of war and to carry out acts of sabotage: one person opened a restaurant and used his position to get information from his customers; another worked for an airline so that he could report Allied ships that were crossing the Atlantic Ocean; others in the ring worked as delivery people so that they could deliver secret messages alongside normal messages.

William G. Sebold, who had been recruited as a spy for Germany, was a major factor in the FBI's successful resolution of this case through his work as a double agent for the United States government. For nearly two years the FBI ran a radio station in New York for the ring, learning what Germany was sending to its spies in the United States while controlling the information that was being transmitted to Germany. Sebold's success as a counterespionage agent was demonstrated by the successful prosecution of the German agents.

One German spymaster later commented that the ring's roundup delivered "the death blow" to their espionage efforts in the United States. FBI director J. Edgar Hoover called the FBI's concerted swoop on Duquesne's ring the greatest spy roundup in U.S. history.

The 1945 film The House on 92nd Street was a thinly disguised version of the Duquesne Spy Ring saga of 1941, but differs from historical fact. It won screenwriter Charles G. Booth an Academy Award for the best original motion picture story.

Friday, August 15, 2014

The Norden Bombsight

The Norden bombsight was a tachometric bombsight used by the United States Army Air Forces (USAAF) and the United States Navy during World War II, and by the United States Air Force in the Korean and Vietnam Wars, to aid the crews of bomber aircraft in dropping bombs accurately. Key to the operation of the Norden were two features: an analog computer that constantly calculated the bomb's trajectory based on current flight conditions, and a linkage to the bomber's autopilot that let it react quickly and accurately to changes in the wind or other effects.

Together, these features allowed for unprecedented accuracy in day bombing from high altitudes; in testing the Norden demonstrated a circular error probable (CEP) of 23 metres (75 ft), an astonishing performance for the era. This accuracy allowed direct attacks on ships, factories, and other point targets. Both the Navy and the AAF saw this as a means to achieve war aims through high-altitude bombing, without resorting to area bombing, as proposed by European forces. To achieve these aims, the Norden was granted the utmost secrecy well into the war, and was part of a then-unprecedented production effort on the same scale as the Manhattan Project. Carl L. Norden, Inc. ranked 46th among United States corporations in the value of World War II military production contracts.

In practice it was not possible to achieve this level of accuracy in combat conditions, with the average CEP in 1943 being 370 metres (1,200 ft). Both the Navy and Air Forces had to give up on the idea of pinpoint attacks during the war. The Navy turned to dive bombing and skip bombing to attack ships, while the Air Forces developed the lead bomber concept to improve accuracy. Nevertheless, the Norden's reputation as a pin-point device lived on, due in no small part to Norden's own advertising of the device after secrecy was reduced during the war.
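
For reference, CEP is the radius of the circle around the aim point expected to contain half of all impacts. The simulation below uses synthetic, normally distributed impact points; the scatter parameter is an assumed value chosen to give a 1943-like result.

```python
# Estimating circular error probable (CEP) from synthetic impact data:
# CEP is the median radial miss distance. The per-axis scatter is assumed.

import math
import random
import statistics

random.seed(42)
SIGMA_M = 310   # assumed per-axis scatter in metres

misses = [math.hypot(random.gauss(0, SIGMA_M), random.gauss(0, SIGMA_M))
          for _ in range(10_000)]

cep = statistics.median(misses)   # half the impacts fall inside this radius
print(f"Estimated CEP: {cep:.0f} m")

# For circular normal errors CEP is about 1.1774 * sigma, so sigma = 310 m
# gives CEP near 365 m, close to the 370 m combat figure quoted above.
```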

The Norden saw some use in the post-World War II era, especially during the Korean War. Post-war use was greatly reduced by the introduction of radar-based systems, but the need for accurate daytime attacks kept it in service for some time. The last combat use of the Norden was in the US Navy's VO-67 squadron, which used them to drop sensors onto the Ho Chi Minh Trail as late as 1967. The Norden remains one of the best known bombsights of all time.

Thursday, August 14, 2014

ICU Use of Chlorhexidine Gluconate


Bacteria Responsible for Dangerous Bloodstream Infections


Growing Less Susceptible to Common Antiseptic


CHICAGO – Society for Healthcare Epidemiology of America – August 13, 2014 – Bacteria that cause life-threatening bloodstream infections in critically ill patients may be growing increasingly resistant to a common hospital antiseptic, according to a recent study led by investigators at Johns Hopkins. The study was published in the September issue of Infection Control and Hospital Epidemiology, the journal of the Society for Healthcare Epidemiology of America.

Chlorhexidine gluconate (CHG) has been increasingly used in hospitals in light of recent evidence that daily antiseptic baths for patients in intensive care units (ICUs) may prevent infections and stop the spread of healthcare-associated infections. The impact of this expanded use on the effectiveness of the disinfectant is not yet known.

"Hospitals are appropriately using chlorhexidine to reduce infections and control the spread of antibiotic-resistant organisms," said Nuntra Suwantarat, MD, lead author. "However, our findings are a clear signal that we must continue to monitor bacteria for emerging antiseptic resistance as these antibacterial washes become more widely used in hospitals."

In the study, investigators compared bacterial resistance in cultures from patients in eight ICUs who received daily antiseptic baths with cultures from patients in 30 non-ICUs who did not bathe daily with CHG. Bacterial cultures obtained from patients with regular antiseptic baths showed reduced susceptibility to CHG when compared with those from patients who did not have antiseptic baths. Regardless of unit protocol, 69 percent of all bacteria showed reduced CHG susceptibility, a trend that requires vigilant monitoring.

"The good news is that most bacteria remain vulnerable to CHG, despite the reduced susceptibility. Daily baths with a CHG solution remain effective against life-threatening bloodstream infections," said Suwantarat.

The investigators caution that the clinical implications of their findings remain unclear. For example, antibiotic susceptibility tests are commonly used to determine whether patients will respond to antibiotic treatment. The corresponding relationship between antiseptic susceptibility and response to an antiseptic is not as well defined. Identifying particular bacteria and settings in which these bacteria will not respond to antiseptic agents used in hospitals is an important next step.

Wednesday, August 13, 2014

Engines With Less Friction Loss

Fraunhofer-Gesellschaft Research News, Jun 03, 2013 

Researchers have developed a method that can reduce engine friction and wear even during production of engine components. Special coatings can help to reduce fuel consumption and CO2 emissions.

If a new car engine is to run “smoothly,” first it has to be properly run in: drivers should avoid quick acceleration and frequent short trips during the first 1000 kilometers, for instance. Why is this “grace period” necessary at all? When an engine is being run in, the peripheral zone on the articulations – the components in mechanical contact with one another – changes as a result of friction: the surface itself becomes “smoother,” and the granularity of the microstructure becomes finer at a material depth of roughly 500 to 1000 nanometers (nm), creating a nanocrystalline layer.

Quite a bit of friction has already taken place, though, by the time this nanoscale layer has formed. That is why a large share of the energy is still lost to friction while an engine is being run in. The running properties of the surfaces also depend on how the customer drives during the running-in phase. This is a critical topic for the automotive industry: against the backdrop of increasingly scarce resources and the need to cut CO2 emissions, reducing friction loss has top priority on the development agenda.


More precision through optimized production technologies

Within the scope of the "TRIBOMAN" project, researchers at five Fraunhofer Institutes are working to develop production methods and processes that improve the tribological (friction-related) performance of combustion engines. The focus is on components exposed to particularly high levels of friction, such as the running surfaces of engine cylinders. "Our common approach is to move the formation of these surface layers to an earlier stage in production," explains Torsten Schmidt from the Fraunhofer Institute for Machine Tools and Forming Technology IWU in Chemnitz.

Schmidt and his team have developed optimized production technologies for precision finishing to this end. "For precision drilling of cylinder running surfaces, we use cutting edges with a specially designed geometry. This results in very high surface quality," Schmidt adds. "We also systematically use the forces of the machining process to promote 'grain refinement' – meaning the hardening of the material – even during production."

The new process is designed to reduce friction and wear in engine components in the future – taking the automotive industry a significant step closer to the goal of using energy more efficiently and reducing CO2 emissions. Customers stand to benefit as well: the advance would considerably shorten the running-in period for new engines. Besides improved comfort, this also reduces the risk of premature wear caused by running in a new engine.

Using a single-cylinder test engine with cylinder running surfaces made of aluminum, researchers at the Fraunhofer Institute for Mechanics of Materials IWM in Freiburg have already documented the first positive results of this modified finishing: analyses of the machined cylinder surfaces showed a significantly smaller grain size than conventional methods produce. The surface microgeometry is comparable to that of well-run-in cylinder running surfaces. The researchers are currently adapting their method to new trends in automobile manufacturing, such as the introduction of biofuels: because biofuels have a higher ethanol content, aluminum components are now usually fitted with a coating layer to protect them more effectively from corrosion.

Tuesday, August 12, 2014

Stem Cells As Stroke Therapy

by Sam Wong, Imperial College London, 08 August 2014

A stroke therapy using stem cells extracted from patients' bone marrow has shown promising results in the first trial of its kind in humans.
Five patients received the treatment in a pilot study conducted by doctors at Imperial College Healthcare NHS Trust and scientists at Imperial College London.
The therapy was found to be safe, and all the patients showed improvements in clinical measures of disability.
The findings are published in the journal Stem Cells Translational Medicine. It is the first UK human trial of a stem cell treatment for acute stroke to be published.
The therapy uses a type of cell called CD34+ cells, a set of stem cells in the bone marrow that give rise to blood cells and blood vessel lining cells. Previous research has shown that treatment using these cells can significantly improve recovery from stroke in animals. Rather than developing into brain cells themselves, the cells are thought to release chemicals that trigger the growth of new brain tissue and new blood vessels in the area damaged by stroke.
The patients were treated within seven days of a severe stroke, in contrast to several other stem cell trials, most of which have treated patients after six months or later. The Imperial researchers believe early treatment may improve the chances of a better recovery.
A bone marrow sample was taken from each patient. The CD34+ cells were isolated from the sample and then infused into an artery that supplies the brain. No previous trial had selectively used CD34+ cells this early after a stroke.
Although the trial was mainly designed to assess the safety and tolerability of the treatment, the patients all showed improvements in their condition in clinical tests over a six-month follow-up period.
Four out of five patients had the most severe type of stroke: only four per cent of people who experience this kind of stroke are expected to be alive and independent six months later. In the trial, all four of these patients were alive and three were independent after six months.
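To put those figures in perspective, a back-of-the-envelope binomial calculation – a sketch for illustration, not part of the study's analysis – shows how unlikely the observed outcome would be if each of the four patients had only the four per cent baseline chance of being alive and independent:

    from math import comb

    p = 0.04     # baseline: alive and independent six months after this stroke type
    n, k = 4, 3  # four severe-stroke patients in the trial; three were independent

    # Probability of at least k of n patients being independent
    # if each had only the baseline chance p
    p_at_least_k = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))
    print(f"P(>= {k} of {n} independent) = {p_at_least_k:.1e}")  # ~2.5e-04

With only four patients this is illustrative rather than a formal analysis, which is one reason the researchers stop short of claims about the therapy's effectiveness.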
Dr Soma Banerjee, a lead author and Consultant in Stroke Medicine at Imperial College Healthcare NHS Trust, said: “This study showed that the treatment appears to be safe and that it’s feasible to treat patients early when they might be more likely to benefit. The improvements we saw in these patients are very encouraging, but it’s too early to draw definitive conclusions about the effectiveness of the therapy. We need to do more tests to work out the best dose and timescale for treatment before starting larger trials.”
Over 150,000 people have a stroke in England every year. Survivors can be affected by a wide range of mental and physical symptoms, and many never recover their independence.
Stem cell therapy is seen as an exciting new potential avenue of treatment for stroke, but its exact role is yet to be clearly defined.

Dr Paul Bentley, also a lead author of the study, from the Department of Medicine at Imperial College London, said: “This is the first trial to isolate stem cells from human bone marrow and inject them directly into the damaged brain area using keyhole techniques. Our group are currently looking at new brain scanning techniques to monitor the effects of cells once they have been injected.”

Professor Nagy Habib, Principal Investigator of the study, from the Department of Surgery and Cancer at Imperial College London, said: "These are early but exciting data worth pursuing.  Scientific evidence from our lab further supports the clinical findings and our aim is to develop a drug, based on the factors secreted by stem cells, that could be stored in the hospital pharmacy so that it is administered to the patient immediately following the diagnosis of stroke in the emergency room.  This may diminish the minimum time to therapy and therefore optimise outcome.  Now the hard work starts to raise funds for this exciting research.”

http://www3.imperial.ac.uk/newsandeventspggrp/imperialcollege/newssummary/news_8-8-2014-12-58-0

Monday, August 11, 2014

Robin Williams Dies

Robin McLaurin Williams (July 21, 1951 – August 11, 2014) was an American actor and stand-up comedian. Rising to fame with his role as the alien Mork in the TV series Mork & Mindy (1978–1982), Williams went on to establish a successful career in both stand-up comedy and feature film acting. His film career included such acclaimed films as The World According to Garp (1982), Good Morning, Vietnam (1987), Dead Poets Society (1989), Awakenings (1990), The Fisher King (1991), and Good Will Hunting (1997), as well as financial successes such as Popeye (1980), Hook (1991), Aladdin (1992), Mrs. Doubtfire (1993), Jumanji (1995), The Birdcage (1996), Night at the Museum (2006), and Happy Feet (2006). He also appeared in the music video for Bobby McFerrin's "Don't Worry, Be Happy".

Nominated for the Academy Award for Best Actor three times, Williams received the Academy Award for Best Supporting Actor for his performance in Good Will Hunting. He also received two Emmy Awards, four Golden Globe Awards, two Screen Actors Guild Awards and five Grammy Awards.  [Williams also co-hosted the annual Academy Awards ceremony with Alan Alda and Jane Fonda on March 24, 1986].

Major influences on Robin Williams: Peter Sellers, Richard Pryor, Jonathan Winters, George Carlin, Chuck Jones and Spike Milligan.  [Williams was also profoundly respectful of the comedy routines and improv of the team of Mike Nichols and Elaine May.]

Robin Williams had a significant influence upon Conan O’Brien, Frank Caliendo, Dat Phan, Jo Koy, Gabriel Iglesias, Alexei Sayle and Eddie Murphy.

On August 11, 2014, Williams was found unconscious at his residence [just outside Tiburon, California] and was pronounced dead at the scene. According to the Marin County, California, coroner's office, the probable cause of death was asphyxiation.

http://en.wikipedia.org/wiki/Robin_Williams