Saturday, March 31, 2018

Common Sense and Aristotle

Common sense is sound practical judgment concerning everyday matters, or a basic ability to perceive, understand, and judge that is shared by ("common to") nearly all people. The first type of common sense, good sense, can be described as "the knack for seeing things as they are, and doing things as they ought to be done." The second type is sometimes described as folk wisdom, "signifying unreflective knowledge not reliant on specialized training or deliberative thought." The two types are intertwined, as the person who has common sense is in touch with common-sense ideas, which emerge from the lived experiences of those commonsensical enough to perceive them.

In a psychology context, Smedslund defines common sense as "the system of implications shared by the competent users of a language" and notes, "A proposition in a given context belongs to common sense if and only if all competent users of the language involved agree that the proposition in the given context is true and that its negation is false."
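Smedslund's criterion is essentially a universally quantified biconditional over a community of speakers. A toy formalization (my own sketch; the class and function names below are hypothetical, not Smedslund's notation) might look like this:

```python
# Toy model of Smedslund's definition: a proposition (in a context) is
# common sense iff every competent user judges it true and judges its
# negation false.

class User:
    """A hypothetical 'competent language user' holding a set of accepted propositions."""
    def __init__(self, accepted):
        self.accepted = set(accepted)

    def judges_true(self, p):
        return p in self.accepted

    def judges_negation_false(self, p):
        # A consistent user who accepts p thereby rejects its negation.
        return p in self.accepted

def is_common_sense(p, users):
    """True iff all users agree p is true and its negation is false."""
    return all(u.judges_true(p) and u.judges_negation_false(p) for u in users)

panel = [User({"water is wet", "fire is hot"}), User({"water is wet"})]
print(is_common_sense("water is wet", panel))   # shared by all users
print(is_common_sense("fire is hot", panel))    # not shared by all users
```

The point of the sketch is only that the definition requires unanimity: a single dissenting competent user removes a proposition from common sense.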

The everyday understanding of common sense derives from historical philosophical discussion involving several European languages. Related terms in other languages include Latin sensus communis, Greek κοινὴ αἴσθησις (koinē aísthēsis), and French bon sens, but these are not straightforward translations in all contexts. Similarly in English, there are different shades of meaning, implying more or less education and wisdom: "good sense" is sometimes seen as equivalent to "common sense", and sometimes not.

"Common sense" also has at least two specifically philosophical meanings. One is a capability of the animal soul (Greek psukhē) proposed by Aristotle, which enables different individual senses to collectively perceive the characteristics of physical things such as movement and size, which all physical things have in different combinations, allowing people and other animals to distinguish and identify physical things. This common sense is distinct from basic sensory perception and from human rational thinking, but cooperates with both.

The second special use of the term is Roman-influenced and is used for the natural human sensitivity for other humans and the community. Just like the everyday meaning, both of these refer to a type of basic awareness and ability to judge that most people are expected to share naturally, even if they cannot explain why.

All these meanings of "common sense", including the everyday ones, are inter-connected in a complex history and have evolved during important political and philosophical debates in modern Western civilisation, notably concerning science, politics and economics. The interplay between the meanings has come to be particularly notable in English, as opposed to other western European languages, and the English term has become international.

Since the Age of Enlightenment the term "common sense" has frequently been used for rhetorical effect, sometimes pejoratively, and sometimes appealed to positively, as an authority. It can be negatively equated to vulgar prejudice and superstition, or on the contrary positively contrasted to them as a standard for good taste and as the source of the most basic axioms needed for science and logic. It was at the beginning of the eighteenth century that this old philosophical term first acquired its modern English meaning: "Those plain, self-evident truths or conventional wisdom that one needed no sophistication to grasp and no proof to accept precisely because they accorded so well with the basic (common sense) intellectual capacities and experiences of the whole social body."

This shift began with Descartes' criticism of common sense, and with what came to be known as the dispute between "rationalism" and "empiricism". In the opening line of one of his most famous books, Discourse on Method, Descartes established the most common modern meaning, and its controversies, when he stated that everyone has a similar and sufficient amount of common sense (bon sens), but it is rarely used well. Therefore, a skeptical logical method of the kind Descartes described needs to be followed, and common sense should not be overly relied upon.

In the ensuing 18th-century Enlightenment, common sense came to be seen more positively as the basis for modern thinking. It was contrasted to metaphysics, which, like Cartesianism, was associated with the ancien régime. Thomas Paine's polemical pamphlet Common Sense (1776) has been described as the most influential political pamphlet of the 18th century, affecting both the American and French revolutions. Today, the concept of common sense, and how it should best be used, remains linked to many of the most perennial topics in epistemology and ethics, with special focus often directed at the philosophy of the modern social sciences.

Aristotle and Common Sense

The origin of the term is in the works of Aristotle. The best-known case is De Anima Book III, chapter 2, especially at line 425a27. The passage is about how the animal mind converts the raw perceptions delivered by the five specialized senses into perceptions of real things moving and changing, which can be thought about. According to Aristotle's understanding of perception, each of the five senses perceives one type of "perceptible" or "sensible" which is specific (Greek: idia) to it. For example, sight can see colour. But Aristotle was explaining how the animal mind, not just the human mind, links and categorizes different tastes, colours, feelings, smells and sounds in order to perceive real things in terms of the "common sensibles" (or "common perceptibles"). In this discussion "common" (koinos) is a term opposed to specific or particular (idia). The Greek for these common sensibles is ta koina, which means shared or common things, and examples include the oneness of each thing, with its specific shape and size and so on, and the change or movement of each thing. Distinct combinations of these properties are common to all perceived things.

In this passage, Aristotle explained that concerning these koina (such as movement) we already have a sense, a "common sense" or sense of the common things (Greek: koinē aisthēsis), which does not work by accident (Greek: kata sumbebēkos). And there is no specific (idia) sense perception for movement and other koina, because then we would not perceive the koina at all, except by accident. As examples of perceiving by accident Aristotle mentions using the specific sense perception vision on its own to see that something is sweet, or to recognize a friend by their distinctive color. Lee (2011, p. 31) explains that "when I see Socrates, it is not insofar as he is Socrates that he is visible to my eye, but rather because he is coloured". So the normal five individual senses do sense the common perceptibles according to Aristotle (and Plato), but it is not something they necessarily interpret correctly on their own. Aristotle proposes that the reason for having several senses is in fact that it increases the chances that we can distinguish and recognize things correctly, and not just occasionally or by accident. Each sense is used to identify distinctions, such as sight identifying the difference between black and white, but, says Aristotle, all animals with perception must have "some one thing" that can distinguish black from sweet. The common sense is where this comparison happens, and this must occur by comparing impressions (or symbols or markers; Greek: sēmeion) of what the specialist senses have perceived. The common sense is therefore also where a type of consciousness originates, "for it makes us aware of having sensations at all". And it receives physical picture imprints from the imaginative faculty, which are then memories that can be recollected.

The discussion was apparently intended to improve upon the account of Aristotle's friend and teacher Plato in his Socratic dialogue, the Theaetetus. But Plato's dialogue presented an argument that recognizing koina is an active thinking process in the rational part of the human soul, making the senses instruments of the thinking part of man. Plato's Socrates says this kind of thinking is not a kind of sense at all. Aristotle, trying to give a more general account of the souls of all animals, not just humans, moved the act of perception out of the rational thinking soul into this sensus communis, which is something like a sense, and something like thinking, but not rational.

The passage is difficult to interpret and there is little consensus about many of the details. Gregorić (2007, pp. 204–205) has argued that this may be because Aristotle did not use the term as a standardized technical term at all. For example, in some passages in his works, Aristotle seems to use the term to refer to the individual sense perceptions simply being common to all people, or common to various types of animals. There is also difficulty with trying to determine whether the common sense is truly separable from the individual sense perceptions and from imagination, in anything other than a conceptual way as a capability. Aristotle never fully spells out the relationship between the common sense and the imaginative faculty ("phantasia"), although the two clearly work together in animals, and not only humans, for example in order to enable a perception of time. They may even be the same. Despite hints by Aristotle himself that they were united, early commentators such as Alexander of Aphrodisias and Al-Farabi felt they were distinct, but later, Avicenna emphasized the link, influencing future authors including Christian philosophers. Gregorić (2007, p. 205) argues that Aristotle used the term "common sense" both for the individual senses when these act as a unity, which Gregorić calls "the perceptual capacity of the soul", and for the higher-level "sensory capacity of the soul" that represents the senses and the imagination working as a unity. According to Gregorić, there appears to have been a standardization of the term koinē aisthēsis as a term for the perceptual capacity (not the higher-level sensory capacity), which occurred by the time of Alexander of Aphrodisias at the latest.

Compared to Plato, Aristotle's understanding of the soul (Greek psyche) has an extra level of complexity in the form of the nous or "intellect"—which is something only humans have and which enables them to perceive things differently from other animals. It works with images coming from the common sense and imagination, using reasoning (Greek logos) as well as the active intellect. The nous identifies the true forms of things, while the common sense identifies shared aspects of things. Though scholars have varying interpretations of the details, Aristotle's "common sense" was in any case not rational, in the sense that it implied no ability to explain the perception. Reason or rationality (logos) exists only in man according to Aristotle, and yet some animals can perceive "common perceptibles" such as change and shape, and some even have imagination. Animals with imagination come closest to having something like reasoning and nous. Plato, on the other hand, was apparently willing to allow that animals could have some level of thought, meaning that he did not have to explain their sometimes complex behavior with a strict division between high-level perception processing and human-like thinking such as the ability to form opinions. Gregorić additionally argues that Aristotle can be interpreted as using the verbs phronein and noein to distinguish two types of thinking or awareness, one being found in animals and the other unique to humans and involving reason. Therefore, in Aristotle (and the medieval Aristotelians) the universals used to identify and categorize things are divided into two. In medieval terminology these are the species sensibilis used for perception and imagination in animals, and the species intelligibilis or apprehendable forms used in the human intellect or nous.

Aristotle also occasionally called the koinē aisthēsis (or one version of it) the prōton aisthētikon, the first of the senses. (According to Gregorić this is specifically in contexts where it refers to the higher-order common sense that includes imagination.) Later philosophers developing this line of thought, such as Themistius, Galen, and Al-Farabi, called it the ruler of the senses or ruling sense, apparently a metaphor developed from a section of Plato's Timaeus (70b). Augustine and some of the Arab writers also called it the "inner sense". The concept of the inner senses, plural, was further developed in the Middle Ages. Under the influence of the great Persian philosophers Al-Farabi and Avicenna, several inner senses came to be listed. "Thomas Aquinas and John of Jandun recognized four internal senses: the common sense, imagination, vis cogitativa, and memory. Avicenna, followed by Robert Grosseteste, Albert the Great, and Roger Bacon, argued for five internal senses: the common sense, imagination, fantasy, vis aestimativa, and memory." By the time of Descartes and Hobbes, in the 1600s, the inner senses had been standardized to five wits, which complemented the better-known five "external" senses. Under this medieval scheme the common sense was understood to be seated not in the heart, as Aristotle had thought, but in the anterior Galenic ventricle of the brain. The great anatomist Andreas Vesalius, however, found no connections between the anterior ventricle and the sensory nerves, leading to speculation about other parts of the brain into the 1600s.

Heller-Roazen (2008) writes that "In different ways the philosophers of medieval Latin and Arabic tradition, from Al-Farabi to Avicenna, Averroës, Albert, and Thomas, found in the De Anima and the Parva Naturalia the scattered elements of a coherent doctrine of the 'central' faculty of the sensuous soul." It was "one of the most successful and resilient of Aristotelian notions."

Friday, March 30, 2018

Slow Brain Waves Are Vital

Slow Steady Waves Keep Brain Humming
The Rhythmic Waves Are Linked to State of Consciousness
By Tamara Bhandari, Washington University in St. Louis School of Medicine

March 29, 2018 -- Very slow brain waves, long considered an artifact of brain scanning techniques, may be more important than anyone had realized. Researchers at Washington University School of Medicine in St. Louis have found that very slow waves are directly linked to state of consciousness and may be involved in coordinating activity across distant brain regions.

If you keep a close eye on an MRI scan of the brain, you’ll see a wave pass through the entire brain like a heartbeat once every few seconds. This ultra-slow rhythm was recognized decades ago, but no one quite knew what to make of it. MRI data are inherently noisy, so most researchers simply ignored the ultra-slow waves.

But by studying electrical activity in mouse brains, researchers at Washington University School of Medicine in St. Louis have found that the ultra-slow waves are anything but noise. They are more like waves in the sea, with everything the brain does taking place in boats upon that sea. Research to date has been focused on the goings-on inside the boats, without much thought for the sea itself. But the new information suggests that the waves play a central role in how the complex brain coordinates itself and that the waves are directly linked to consciousness.

“Your brain has 100 billion neurons or so, and they have to be coordinated,” said senior author Marcus Raichle, MD, the Alan A. and Edith L. Wolff Distinguished Professor of Medicine and a professor of radiology at Mallinckrodt Institute of Radiology at the School of Medicine. “These slowly varying signals in the brain are a way to get a very large-scale coordination of the activities in all the diverse areas of the brain. When the wave goes up, areas become more excitable; when it goes down, they become less so.”

The study is published March 29 in the journal Neuron.

In the early 2000s, Raichle and others discovered patterns of brain activity in people as they lay quietly in MRI machines, letting their minds wander. These so-called resting-state networks challenged the assumption that the brain quiets itself when it’s not actively engaged in a task. Now we know that even when you feel like you’re doing nothing, your brain is still humming along, burning almost as much energy daydreaming as solving a tough math problem.

Using resting-state networks, other researchers started searching for – and finding – brain areas that behaved differently in healthy people than in people with brain diseases such as schizophrenia and Alzheimer’s. But even as resting-state MRI data provided new insights into neuropsychiatric disorders, they also consistently showed waves of activity spreading with a slow regularity throughout the brain, independently of the disease under study. Similar waves were seen on brain scans of monkeys and rodents.

Some researchers thought that these ultra-slow waves were no more than an artifact of the MRI technique itself. MRI gauges brain activity indirectly by measuring the flow of oxygen-rich blood over a period of seconds, a very long timescale for an organ that sends messages at one-tenth to one-hundredth of a second. Rather than a genuinely slow process, the reasoning went, the waves could be the sum of many rapid electrical signals over a relatively long time.
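That artifact hypothesis can be illustrated with a toy simulation (my own sketch, not the study's analysis; all signal parameters are hypothetical). A seconds-scale readout that merely averages fast activity leaves almost no slow structure behind, whereas a genuinely slow component survives the same readout:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 6000)                  # 60 s sampled at ~100 Hz

# Fast electrical activity (~10 Hz oscillation plus noise) and a genuine
# ultra-slow 0.05 Hz wave, both purely illustrative.
fast = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
slow = 0.3 * np.sin(2 * np.pi * 0.05 * t)

def slow_readout(signal, window=200):
    """Mimic a seconds-scale measurement with a 2-second moving average."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

artifact = slow_readout(fast)          # averaging fast-only activity
genuine = slow_readout(fast + slow)    # a real slow wave survives averaging

print(np.std(artifact), np.std(genuine))
```

The averaged fast-only trace has far less slow variance than the trace containing a true ultra-slow wave, which is the kind of distinction the direct electrical recordings in mice were able to make.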

First author Anish Mitra, PhD, and Andrew Kraft, PhD – both MD/PhD students at Washington University – and colleagues decided to approach the mystery of the ultra-slow waves using two techniques that directly measure electrical activity in mouse brains. In one, they measured such activity at the cellular level. In the other, they measured electrical activity layer by layer along the outer surface of the brain.

They found that the waves were no artifact: Ultra-slow waves were seen regardless of the technique, and they were not the sum of all the faster electrical activity in the brain.

Instead, the researchers found that the ultra-slow waves spontaneously started in a deep layer of mice’s brains and spread in a predictable trajectory. As the waves passed through each area of the brain, they enhanced the electrical activity there. Neurons fired more enthusiastically when a wave was in the vicinity.

Moreover, the ultra-slow waves persisted when the mice were put under general anesthesia, but with the direction of the waves reversed.

“There is a very slow process that moves through the brain to create temporary windows of opportunity for long-distance signaling,” Mitra said. “The way these ultra-slow waves move through the cortex is correlated with enormous changes in behavior, such as the difference between conscious and unconscious states.”

The fact that the waves’ trajectory changed so dramatically with state of consciousness suggests that ultra-slow waves could be fundamental to how the brain functions. If brain areas are thought of as boats bobbing about on a slow-wave sea, the choppiness and direction of the sea surely influences how easily a message can be passed from one boat to another, and how hard it is for two boats to coordinate their activity.

The researchers now are studying whether abnormalities in the trajectory of such ultra-slow waves could explain some of the differences seen on MRI scans between healthy people and people with neuropsychiatric conditions such as dementia and depression.

“If you look at the brain of someone with schizophrenia, you don’t see a big lesion, but something is not right in how the whole beautiful machinery of the brain is organized,” said Raichle, who is also a professor of biomedical engineering, of neurology, of neuroscience and of psychological and brain sciences. “What we’ve found here could help us figure out what is going wrong. These very slow waves are unique, often overlooked and utterly central to how the brain is organized. That’s the bottom line.”

Thursday, March 29, 2018

Nearby Galaxy Lacks Dark Matter


Hubble Finds First Galaxy in the Local Universe Without Dark Matter

March 28, 2018 -- An international team of researchers using the NASA/ESA Hubble Space Telescope and several other observatories have, for the first time, uncovered a galaxy in our cosmic neighborhood that is missing most — if not all — of its dark matter. This discovery of the galaxy NGC 1052-DF2 challenges currently accepted theories of galaxy formation and provides new insights into the nature of dark matter. The results are published in Nature.

Astronomers using Hubble and several ground-based observatories have found a unique astronomical object: a galaxy that appears to contain almost no dark matter [1]. Hubble helped to accurately confirm the distance of NGC 1052-DF2 to be 65 million light-years and determined its size and brightness. Based on these data, the team discovered that NGC 1052-DF2 is larger than the Milky Way but contains about 250 times fewer stars, leading it to be classified as an ultra-diffuse galaxy.

"I spent an hour just staring at this image," lead researcher Pieter van Dokkum of Yale University says as he recalls first seeing the Hubble image of NGC 1052-DF2. "This thing is astonishing: a gigantic blob so sparse that you see the galaxies behind it. It is literally a see-through galaxy."

Further measurements of the dynamical properties of ten globular clusters orbiting the galaxy allowed the team to infer an independent value of the galaxy's mass. This mass is comparable to the mass of the stars in the galaxy, leading to the conclusion that NGC 1052-DF2 contains at least 400 times less dark matter than astronomers predict for a galaxy of its mass, and possibly none at all [2]. This discovery is not predicted by current theories on the distribution of dark matter and its influence on galaxy formation.
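To see how cluster motions constrain a galaxy's mass, here is a back-of-envelope, virial-style sketch (not the paper's actual estimator, and the input numbers are merely indicative): the enclosed dynamical mass scales as the velocity dispersion squared times the radius, divided by the gravitational constant.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # one parsec, m

def dynamical_mass(sigma_km_s, radius_kpc):
    """Order-of-magnitude enclosed mass M ~ sigma^2 * R / G, in solar masses."""
    sigma = sigma_km_s * 1e3            # km/s -> m/s
    radius = radius_kpc * 1e3 * PC      # kpc -> m
    return sigma**2 * radius / G / M_SUN

# Illustrative inputs: a ~10 km/s dispersion within ~8 kpc gives a mass of
# order 10^8 solar masses, comparable to the stellar mass alone, leaving
# little room for a dark matter halo.
print(f"{dynamical_mass(10, 8):.2e}")
```

When the dynamical mass inferred this way is no larger than the stellar mass, as the quote above describes for NGC 1052-DF2, there is essentially nothing left over to attribute to dark matter.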

"Dark matter is conventionally believed to be an integral part of all galaxies — the glue that holds them together and the underlying scaffolding upon which they are built," explains co-author Allison Merritt from Yale University and the Max Planck Institute for Astronomy, Germany. And van Dokkum adds: "This invisible, mysterious substance is by far the most dominant aspect of any galaxy. Finding a galaxy without any is completely unexpected; it challenges standard ideas of how galaxies work."

Merritt remarks: "There is no theory that predicts these types of galaxies — how you actually go about forming one of these things is completely unknown."

Although counterintuitive, the existence of a galaxy without dark matter negates theories that try to explain the Universe without dark matter being a part of it [3]: The discovery of NGC 1052-DF2 demonstrates that dark matter is somehow separable from galaxies. This is only expected if dark matter is bound to ordinary matter through nothing but gravity.

Meanwhile, the researchers already have some ideas about how to explain the missing dark matter in NGC 1052-DF2. Did a cataclysmic event such as the birth of a multitude of massive stars sweep out all the gas and dark matter? Or did the growth of the nearby massive elliptical galaxy NGC 1052 billions of years ago play a role in NGC 1052-DF2’s dark matter deficiency?

These ideas, however, still do not explain how this galaxy formed. To find an explanation, the team is already hunting for more dark-matter deficient galaxies as they analyse Hubble images of 23 ultra-diffuse galaxies — three of which appear to be similar to NGC 1052-DF2.

Notes


[1] The galaxy was identified with the Dragonfly Telephoto Array (DFA) and also observed by the Sloan Digital Sky Survey (SDSS). As well as the NASA/ESA Hubble Space Telescope, the Gemini Observatory and the Keck Observatory were used to study the object in more detail.

[2] Since 1884 astronomers have invoked dark matter to explain why galaxies do not fly apart, given the speed at which the stars within galaxies move. For orbits around a central mass, Newtonian gravity (the physics underlying Kepler's laws) predicts that the rotation velocities of stars will decrease with distance from the centre of a galaxy. This is not observed.
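A brief sketch of that expectation (illustrative only; the central mass below is a hypothetical value of roughly Milky Way stellar scale): for a point-like central mass, the circular-orbit speed falls off as one over the square root of the radius.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 2e41         # hypothetical central mass, kg (~1e11 solar masses)
PC = 3.086e16    # one parsec, m

def keplerian_speed(r_kpc):
    """Circular-orbit ('Keplerian') speed v = sqrt(G M / r), in km/s."""
    r = r_kpc * 1e3 * PC
    return math.sqrt(G * M / r) / 1e3

# Speeds fall as 1/sqrt(r): doubling the radius cuts v by sqrt(2).
for r in (5, 10, 20, 40):
    print(r, round(keplerian_speed(r), 1))
```

Observed rotation curves instead stay roughly flat at large radii, which is the classic discrepancy that dark matter was invoked to resolve.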

[3] The MOND theory — Modified Newtonian Dynamics — suggests that the phenomena usually attributed to dark matter can be explained by modifying the laws of gravity. The result of this would be that a signature usually attributed to dark matter should always be detected, and is an unavoidable consequence of the presence of ordinary matter.

Wednesday, March 28, 2018

Unique Properties of Water

Understanding the Strange Behaviour of Water

The properties of water have fascinated scientists for centuries, yet its unique behaviour remains a mystery.

University of Exeter – March 27, 2018 -- Published this week in the journal Proceedings of the National Academy of Sciences of the United States of America, a collaboration between the Universities of Bristol and Tokyo has attempted a novel route to understand what makes a liquid behave like water.

 

A clathrate ice, with oxygen atoms represented as spheres and hydrogen bonds as lines. The work has shown how complex crystalline structures emerge as a result of water's interactions. Image Credit: University of Bristol

When compared to an ordinary liquid, water displays a vast array of anomalies. Common examples include the fact that liquid water expands on cooling below 4 °C, which is responsible for lakes freezing from the top rather than the bottom.

Other examples include the fact that water becomes less viscous when compressed, and its unusually high surface tension, which allows insects to walk on water's surface.

These and many other anomalies are of fundamental importance in countless natural and technological processes, such as the Earth's climate, and the possibility of life itself. From an anthropic viewpoint, it is as if the water molecule were fine-tuned to have such unique properties.

Starting from the observation that the properties of water seem fine-tuned, a collaboration between Dr John Russo from the University of Bristol's School of Mathematics and Professor Hajime Tanaka from the University of Tokyo harnessed powerful supercomputers, using computational models to slowly "untune" water's interactions.

This showed how the anomalous properties of water can be changed and eventually reduced to those of a simple liquid. For example, the density of ice can be changed continuously until it sinks rather than floats, and the same can be done with all of water's anomalies.

Dr Russo said: "With this procedure, we have found that what makes water behave anomalously is the presence of a particular arrangement of the water’s molecules, such as the tetrahedral arrangement, where a water molecule is hydrogen-bonded to four molecules located on the vertices of a tetrahedron.

"Four of such tetrahedral arrangements can organise themselves in such a way that they share a common water molecule at the centre without overlapping.

"It is the presence of this highly ordered arrangement of water molecules, mixed with other disordered arrangements that gives water its peculiar properties.

"We think this work provides a simple explanation of the anomalies and highlights the exceptional nature of water, which makes it so special compared with any other substance."

Tuesday, March 27, 2018

Wear Your Glasses!

Half of Vision Impairment
in First World Is Preventable

Anglia Ruskin University – March 26, 2018 -- Around half of vision impairment in Western Europe is preventable, according to a new study published in the British Journal of Ophthalmology.

The study was carried out by the Vision Loss Expert Group, led by Professor Rupert Bourne of Anglia Ruskin University, and shows the prevalence and causes of vision loss in high income countries worldwide as well as other European nations in 2015, based on a systematic review of medical literature over the previous 25 years.

A comparison of countries in the study shows that, based on the available data, the UK has the fifth-lowest prevalence of blindness among over-50s out of the 50 countries surveyed, with 0.52% of men and women in that age group affected. Belgium had the lowest prevalence at 0.46%.

However, in terms of the percentage of population with moderate to severe vision impairment (MSVI), the UK ranked in the bottom half of the table with 6.1%, a higher prevalence than non-EU countries such as Andorra, Serbia and Switzerland.

Cataract was found to be the most common cause of blindness in Western Europe in 2015 (21.9%), followed by age-related macular degeneration (16.3%) and glaucoma (13.5%), but the main cause of MSVI was uncorrected refractive error, a condition that can be treated simply by wearing glasses.

This condition made up 49.6% of all MSVI in Western Europe. Cataract was the next main cause in this region, with 15.5%, followed by age-related macular degeneration.

The research also predicts that the contribution of the surveyed countries to the world's vision-impaired population is expected to lessen slightly by 2020, although the number of people in these nations with impaired sight will rise to 69 million due to a growing overall population.

Professor Bourne, Professor of Ophthalmology at Anglia Ruskin University's Vision and Eye Research Unit, said: "Vision impairment is of great importance for quality of life and for the socioeconomics and public health of societies and countries.

"Overcoming barriers to services which would address uncorrected refractive error could reduce the burden of vision impairment in high-income countries by around half. This is an important public health issue even in the wealthiest of countries and more research is required into better treatments, better implementation of the tools we already have, and ongoing surveillance of the problem.

"This work has exposed gaps in the global data, given that many countries have not formally surveyed their populations for eye disease. That is the case for the UK and a more robust understanding of people's needs would help bring solutions."

The work by the study team contributes to the wider Global Burden of Disease (GBD) Study, a comprehensive regional and global research program of disease burden that assesses mortality and disability from major diseases, injuries, and risk factors.

Monday, March 26, 2018

Huge Galactic Gas Cloud

Decades of Research Identify Source
of Galaxy-Sized Stream of Gas
By Eric Hamilton, University of Wisconsin, Madison

March 23, 2018 -- A cloud of gas 300,000 light-years long is arching around the Milky Way, shunted away from two dwarf galaxies orbiting our own. For decades, astronomers have wanted to know which of the two galaxies, the Large and Small Magellanic Clouds, is the source of the gas that has been expelled as the two galaxies gravitationally pull at one another. The answer will help astronomers understand how galaxies, including the Milky Way, form and change over time.

With colleagues at the Space Telescope Science Institute and other institutions, astronomers at the University of Wisconsin–Madison used the Hubble Space Telescope to analyze the stream of gas. By identifying the chemical makeup of the gas, known as the Leading Arm of the Magellanic Stream, the researchers identified one branch as coming from the Small Magellanic Cloud.

The results show that the Large Magellanic Cloud is winning a gravitational tug of war with its smaller partner and will help refine models of the complex orbit controlling the dwarf galaxies’ motion.

“We still don’t know how the Milky Way has formed,” says Elena D’Onghia, a professor of astronomy at UW–Madison and co-author of the new report, which was published Feb. 21 in The Astrophysical Journal. “We have this huge amount of gas sitting around the Milky Way, and we still don’t know its origin. Knowing where it comes from helps us understand how galaxies form, including our Milky Way.”

Visible with the naked eye from the Southern Hemisphere, the Magellanic Clouds appear as fuzzy offshoots of the Milky Way, which they orbit. In the 1970s, they were identified as the source of an enormous stream of matter — the Magellanic Stream, which includes the Leading Arm — that could be seen encircling the disk of the Milky Way. After its discovery, Blair Savage, an emeritus professor of astronomy at UW–Madison, worked for decades to understand the gas complexes around the Milky Way, including the Magellanic Stream. He recruited several young researchers to tackle the problem during their training at UW–Madison.

Six of Savage’s previous mentees, many of whom have since moved to other institutions, are co-authors of the new report.

“He’s really the core reason that this all happened in Wisconsin,” says Bart Wakker, a senior scientist in the UW–Madison astronomy department who came to Madison to study the interstellar and intergalactic medium with Savage in the 1990s.

“There’s been a question: Did the gas come from the Large Magellanic Cloud or the Small Magellanic Cloud? At first glance, it looks like it tracks back to the Large Magellanic Cloud,” explains Andrew Fox of the Space Telescope Science Institute in Baltimore, a former graduate student of Savage’s and the lead author of the study. “But we’ve approached that question differently, by asking: What is the Leading Arm made of? Does it have the composition of the Large Magellanic Cloud or the composition of the Small Magellanic Cloud?”

To get at the chemical composition of the Leading Arm, the researchers identified four quasars — ultra-bright galactic centers — that lie behind the stream of gas. With the Hubble Space Telescope, they then collected ultraviolet light from the quasars as it filtered through the Leading Arm. The team combined the ultraviolet light data with measurements of hydrogen using radio data, which showed an abundance of oxygen and sulfur characteristic of the Small, rather than the Large, Magellanic Cloud.

That similarity in chemical makeup is evidence that the Leading Arm was gravitationally ripped from the Small Magellanic Cloud, likely more than a billion years ago. Previous results appeared to show that the Leading Arm comes from the Large Magellanic Cloud, which suggests that this galaxy-sized stream of gas may have complicated origins.

The researchers note that the new work leaves open questions about the fates of the dwarf galaxies and the matter they throw off.

“The whole point of this work is to understand what is happening to the Magellanic Clouds as they begin to merge with the Milky Way and how the gas from the clouds mixes with the gas from the Milky Way,” says Wakker. “And we’re really only in the beginning stages of understanding that process.”

Sunday, March 25, 2018

Tiny Skeleton with Bone Diseases


Mysterious Skeleton Shows Molecular Complexity of Bone Diseases

The strange skeletal remains of a fetus discovered in Chile have turned up new insights into the genetics of some bone diseases, according to a new study from researchers at Stanford and UCSF.

A bizarre human skeleton, once rumored to have extraterrestrial origins, has gotten a rather comprehensive genomic work-up, the results of which are now in, researchers from the Stanford University School of Medicine report.

The findings stamp out any remaining questions about the specimen’s home planet — it’s without a doubt human — but more than that, the analysis answers questions about remains that have long been a genetic enigma.

After five years of deep genomic analysis, Garry Nolan, PhD, professor of microbiology and immunology at Stanford, and Atul Butte, MD, PhD, director of the Institute for Computational Health Sciences at the University of California-San Francisco, have pinpointed the mutations responsible for the anomalous specimen. The researchers found mutations in not one but several genes known to govern bone development; what’s more, some of these molecular oddities have never been described before.

“To me, it seems that when doctors perform analyses for patients and their families, we’re often searching for one cause — one super-rare or unusual mutation that can explain the child’s ailment. But in this case, we’re pretty confident that multiple things went wrong,” said Butte. It’s an indication, he said, that looking for a single mutation, or even mutations that are already known to cause a particular disease, can discourage researchers from looking for other potential genetic causes and, in turn, potential treatments for patients.

Nolan, who holds the Rachford and Carlota Harris Professorship, and Butte, a former Stanford faculty member who now holds the Priscilla Chan and Mark Zuckerberg Distinguished Professorship at UCSF, are senior authors of the study, which was published online March 22 in Genome Research. Sanchita Bhattacharya, a bioinformatics researcher at UCSF, is the lead author.

A human? A primate? An alien?


The skeleton, nicknamed Ata, was discovered more than a decade ago in an abandoned town in the Atacama Desert of Chile. After trading hands and eventually finding a permanent home in Spain, the mummified specimen started to garner public attention. Standing just 6 inches tall — about the length of a dollar bill — with an angular, elongated skull and sunken, slanted eye sockets, the specimen set the internet bubbling with other-worldly hullabaloo and talk of ET.

After sequencing Ata's genome, researchers found mutations in seven genes that separately or in combinations contribute to various bone deformities, facial malformations or skeletal dysplasia.

“I had heard about this specimen through a friend of mine, and I managed to get a picture of it,” Nolan said. “You can’t look at this specimen and not think it’s interesting; it’s quite dramatic. So I told my friend, ‘Look, whatever it is, if it’s got DNA, I can do the analysis.’”

With the help of Ralph Lachman, MD, clinical professor of radiology at Stanford and an expert in a type of pediatric bone disease, Nolan set the record straight. Their analysis pointed to a decisive conclusion: This was the skeleton of a human female, likely a fetus, that had suffered severe genetic mutations. In addition, Nolan saw that Ata, though most likely a fetus, had the bone composition of a 6-year-old, an indication that she had a rare, bone-aging disorder.

To understand the genetic underpinnings of Ata’s physicality, Nolan turned to Butte for help in genomic evaluation. He accepted the challenge, running a work-up so comprehensive it nearly rose to the level of patient care. Butte noted that some people might wonder about the point of such in-depth analyses.

“We thought this would be an interesting exercise in applying the tools that we have today to really see what we could find,” he said. “The phenotype, the symptoms and size of this girl were extremely unusual, and analyzing these kinds of really puzzling, old samples teaches us better how to analyze the DNA of kids today under current conditions.”

New insights through an old skeleton


To understand the genetic drivers at play, Butte and Nolan extracted a small DNA sample from Ata’s ribs and sequenced the entire genome. The skeleton is approximately 40 years old, so its DNA is modern and still relatively intact. Moreover, data collected from whole-genome sequencing showed that Ata’s molecular composition aligned with that of a human genome. Nolan noted that 8 percent of the DNA was unmatchable with human DNA, but that was due to a degraded sample, not extraterrestrial biology. (Later, a more sophisticated analysis was able to match up to 98 percent of the DNA, according to Nolan.)

The genomic results confirmed Ata’s Chilean descent and turned up a slew of mutations in seven genes that separately or in combinations contribute to various bone deformities, facial malformations or skeletal dysplasia, more commonly known as dwarfism. Some of these mutations, though found in genes already known to cause disease, had never before been associated with bone growth or developmental disorders.

Knowing these new mutational variants could be useful, Nolan said, because they add to the repository of known mutations to look for in humans with these kinds of bone or physical disorders.

“For me, what really came of this study was the idea that we shouldn’t stop investigating when we find one gene that might explain a symptom. It could be multiple things going wrong, and it’s worth getting a full explanation, especially as we head closer and closer to gene therapy,” Butte said. “We could presumably one day fix some of these disorders, and we’re going to want to make sure that if there’s one mutation, we know that — but if there’s more than one, we know that too.”

Other Stanford authors of the study are graduate student Alexandra Sockell; senior research scientist Felice Bava, PhD; and Carlos Bustamante, PhD, professor of biomedical data science and of genetics.

Researchers at UCSF, Roche Sequencing Solutions, National Autonomous University of Mexico and Ultra Intelligence Corporation also contributed to the work.


Saturday, March 24, 2018

Comedian Jerry Clower


Howard Gerald "Jerry" Clower (September 28, 1926 – August 24, 1998) was an American stand-up comedian. Born and raised in the Southern United States, Clower was best known for his stories of the rural South and was given the nickname "The Mouth of Mississippi".

Clower at the Grand Ole Opry in 1974

Clower was born in Liberty, Mississippi, and began a two-year stint in the Navy immediately after graduating from high school in 1944. Upon his discharge, in 1946, he was a Radioman Third Class (RMN3) and had earned the American Campaign Medal, the Asiatic-Pacific Campaign Medal (with two bronze service stars), and the World War II Victory Medal.

He studied agriculture at Mississippi State University, where he played college football and was a member of Phi Kappa Tau Fraternity. After finishing school in 1951, Clower worked as a county agent and later as a seed salesman. He became a fertilizer salesman for Mississippi Chemical in 1954.

Career

By 1954, Clower had developed a reputation for telling funny stories to boost his sales. Tapes of Clower's speaking engagements wound up in the hands of Edwin "Big Ed" Wilkes and Bud Andrews in Lubbock, Texas, who had him make a better-quality recording which they promoted. MCA Records later awarded The Coon Hunt a platinum record for sales in excess of $1 million at the retail level.

At first, Clower took orders at his speaking engagements, selling 8000 copies on the Lemon record label. In time, Wilkes sent a copy to Grant Turner at WSM radio in Nashville, and when Turner played it on the air, Clower said "that thing busted loose". MCA was soon knocking on Clower's door, offering him a contract. Once MCA began distribution in 1971, Jerry Clower from Yazoo City, Mississippi Talkin’ sold more than a million dollars at retail over 10 months and stayed in the top 20 on the country charts for 30 weeks.

Clower's first on-stage engagement occurred in the early 1970s when country radio station owner and show promoter Marshall Rowland (WQIK, Jacksonville; WDEN, Macon; WQYK, Tampa) received an early Clower recording ("The Coon Huntin' Story"), which was met with rave reviews by his station's listeners in Jacksonville. Rowland contacted Clower and offered him an airplane ticket and a few hundred dollars to open for an upcoming tour Rowland had booked with Charley Pride. Clower arrived backstage for the Saturday night show at the Jacksonville Coliseum, but Pride's manager, Jack Johnson, refused to let him perform because Clower was non-union. Rowland worked around the problem by putting Clower on stage while the lights were up and people were still entering the Coliseum. Clower performed for about 30 minutes. Pride, who watched from backstage, is said to have then taken Clower under his wing and introduced him at the next of Rowland's shows as his "new friend". Clower and Rowland remained close friends in the years that followed, connecting for events and working shows together. Clower was a frequent presence at Rowland's radio stations for the rest of his life, and the stations' music was known to be interrupted with Clower's comedy recordings from time to time, especially at WQIK in Jacksonville, where his career can be said to have been launched in earnest.

Clower made 27 full-length recordings in his 27-year career as a professional entertainer (not counting "best of" compilations). With one exception, all the recordings were released by MCA. The exception was Ain't God Good, which Clower recorded with MCA's blessing at a worship service. Word Records promoted and distributed this title in 1977. A staunch Christian, Clower used the recording as an opportunity to present his personal testimony in a comfortable church setting. His stories often featured the Ledbetters, a quintessential Southern, agrarian clan. Clower was well known for his faith and often made references to God in his stories. He spoke at many Southern Baptist Convention events. He said his faith kept him happy and able to make others laugh.

In 1973, Clower became a member of the Grand Ole Opry and continued to perform there regularly until his death. He also co-hosted a radio show called Country Crossroads with Bill Mack and Leroy Van Dyke, which aired in syndication for 40 years; a television version of the program was also produced, starting in 1993. Clower's involvement began in 1973 and lasted well over 20 years. The show was produced and distributed by the Southern Baptist Convention.

Clower was also visible as a commercial spokesman. While his work was mostly confined to local commercials and those airing in the Southern states, Clower could be seen selling anything from Dodge cars and trucks to transmission repairs, oil service, barbecue, and fishing and hunting equipment. His salesmanship was so strong that he was named Pitchman of the Year in 1995 for his commercials for flying fishing lures. An illustration of this type of propeller bait appears on his album released that year.

Clower also taped segments of Nashville On the Road, which included comedic performances and interviews with other country artists featured on the show. Jim Ed Brown hosted the series with Clower during the program's first season, 1975–76, and they were joined by Helen Cornelius in 1976. Their involvement in the series lasted until 1981. The show continued to air with a new host, Jim Stafford, through 1983. Clower's last album was Peaches and Possums, released posthumously in October 1998. He was the author of four books. The book Ain't God Good became the basis for an inspirational documentary film of the same title that won an award from the New York International Independent Film and Video Festival. His other three books are Let the Hammer Down (1978), Life Everlaughter (1987), and Stories From Home (1993).

Death

Clower died in August 1998 following heart bypass surgery; he was 71 years old. He had been married to Homerline (née Wells) Clower (1926–2018) since August 1947. He was survived by a son, Ray (1953–2011); three daughters, Amy, Sue, and Katy; and seven grandchildren.

Friday, March 23, 2018

Bees and Toxic Pesticides

Breakthrough Could Aid Development
of Bee-Friendly Pesticides
Efforts to create pesticides that are not toxic to bees have
been boosted by a scientific breakthrough.

University of Exeter – March 22, 2018 -- A joint study by the University of Exeter, Rothamsted Research and Bayer AG has discovered the enzymes in honeybees and bumblebees that determine how sensitive they are to different neonicotinoid pesticides.

The potential impact of neonicotinoids on bee health is a subject of intensive research and considerable controversy; in 2013, the European Union restricted the use of three compounds of this class on crops that are attractive to bees.

However, both honeybees and bumblebees exhibit profound differences in their sensitivity to different members of this insecticide class. The researchers aimed to understand why, in order to aid the development of pesticides that are non-toxic to bees.

Just as in other organisms, toxins in bees can be broken down by enzymes called cytochrome P450s. The study identified one subfamily of these enzymes in bees – CYP9Q – and found it was responsible for the rapid breakdown of certain neonicotinoids.

“Identifying these key enzymes provides valuable tools to screen new pesticides early in their development to see if bees can break them down,” said Professor Chris Bass, who led the team at the University of Exeter.

“It can take a decade and $260 million to develop a single pesticide, so this knowledge can help us avoid wasting time and money on pesticides that will end up with substantial use restrictions due to intrinsic bee toxicity.”

Dr Ralf Nauen, insect toxicologist and lead investigator of the study at Bayer added: “Knowing the mechanisms contributing to inherent tolerance helps us and regulators to better understand why certain insecticides have a high margin of safety to bees”.

“The knowledge from our study can also be used to predict and prevent potential harmful effects that result from inadvertently blocking these key defence systems, for instance by different pesticides (such as certain fungicides) that may be applied in combination with insecticides.”

Professor Lin Field, Head of the Department of Biointeractions and Crop Protection at Rothamsted Research added: “Some neonicotinoids are intrinsically highly toxic to bees but others have very low acute toxicity, but in public debate they tend to get tarred with the same brush.

“Each insecticide needs to be considered on its own risks and merits, not just its name.”

The researchers carried out the most comprehensive analysis of bee P450 detoxification enzymes ever attempted.

Comparing the effects of two neonicotinoids, they found bees metabolise thiacloprid very efficiently, while they metabolise imidacloprid much less efficiently.

Although previous work had suggested rate of metabolism might explain why bees react differently to different neonicotinoids, the specific genes or enzymes were unknown until now.

The research was part funded by Bayer, which is a manufacturer of neonicotinoid insecticides.

The paper, published in the journal Current Biology, is entitled: “Unravelling the molecular determinants of bee sensitivity to neonicotinoid insecticides.”

Thursday, March 22, 2018

Low Back Pain Often Mistreated

Keele University Leads New Global
Research into Low Back Pain


March 21, 2018 -- Low back pain affects 540 million people worldwide, but too many patients receive the wrong care. Overuse of inappropriate tests and treatments such as imaging, opioids and surgery means patients are not receiving the right care, and resources are wasted.

Low back pain is the leading cause of disability worldwide, affecting an estimated 540 million people at any one time. Yet, a new Series of papers in The Lancet highlights the extent to which the condition is mistreated, often against best practice treatment guidelines.

Researchers from Keele University’s Research Institute for Primary Care and Health Sciences worked with colleagues at Warwick University and universities and healthcare organisations around the world on the new series. Keele’s National Institute for Health Research (NIHR) Research Professor Nadine Foster and Professor Peter Croft were part of the research team investigating the gap between evidence and practice in low back pain.

Evidence suggests that low back pain should be managed in primary care, with the first line of treatment being education and advice to keep active and at work. However, in reality, a high proportion of patients worldwide are treated in emergency departments, encouraged to rest and stop work, are commonly referred for scans or surgery, or prescribed pain killers including opioids, which are discouraged for treating low back pain.

NIHR Professor Nadine Foster, who is the lead author of one of the papers in the Series, explains:

“The gap between best evidence and practice in low back pain must be reduced. We need to redirect funding away from ineffective or harmful tests and treatments, and towards approaches that promote physical activity and function. We also need to intensify further research of promising new approaches such as redesigning patient pathways of care and interventions that support people to function and stay at work.”

The Series reviews evidence from high- and low-income countries that suggests that many of the mistakes of high-income countries are already well established in low-income and middle-income countries. Rest is frequently recommended in low and middle income countries, and resources to modify workplaces are scarce.

“The majority of cases of low back pain respond to simple physical and psychological therapies that keep people active and enable them to stay at work,” explains Series author Professor Rachelle Buchbinder, Monash University, Australia. “Often, however, it is more aggressive treatments of dubious benefit that are promoted and reimbursed.”

Low back pain results in 2.6 million emergency visits in the USA each year, with high rates of opioid prescription. A 2009 study found that opioids were prescribed in around 60% of emergency department visits for low back pain in the USA. Additionally, only about half of people with chronic back pain in the USA have been prescribed exercise. In India, studies suggest that bed rest is frequently recommended, and a study in South Africa found that 90% of patients received pain medicine as their only form of treatment (see panel 1, paper 2 for further examples).

NIHR Research Professor Nadine Foster explains:

“In many countries, painkillers that have limited positive effect are routinely prescribed for low back pain, with very little emphasis on interventions that are evidence based such as exercises. As lower-income countries respond to this rapidly rising cause of disability, it is critical that they avoid the waste that these misguided practices entail.”

The Global Burden of Disease study (2017) found that low back pain is the leading cause of disability in almost all high-income countries as well as central Europe, eastern Europe, North Africa and the Middle East, and parts of Latin America. Every year, a total of 1 million years of productive life is lost in the UK because of disability from low back pain; 3 million in the USA; and 300,000 in Australia.

NIHR Professor Foster adds:

“In the UK, we know that lower back pain is very common, and accounts for 11% of the entire disability burden from all diseases in the UK. Over the last two decades we’ve seen a 12% increase in disability related to back pain, so the problem is getting worse.”

The global burden of disability due to low back pain has increased by more than 50% since 1990, and is due to increase even further in the coming decades as the population ages.

Professor Martin Underwood, Warwick University comments:

“Our current treatment approaches are failing to reduce the burden of back pain disability; we need to change the way we approach back pain treatment in the UK and help low and middle income countries to avoid developing high cost services of limited effectiveness.”

Low back pain mostly affects adults of working age. A specific cause can rarely be identified, so most low back pain is termed non-specific, and evidence suggests that psychological and economic factors are important in its persistence. Most episodes of low back pain are short-lasting with little or no consequence, but recurrent episodes are common (about one in three people will have a recurrence within a year of recovering from a previous episode), and low back pain is increasingly understood as a long-lasting condition.

The authors say that health care systems should avoid harmful and useless treatments by only offering treatments in public reimbursement packages if evidence shows that they are safe, effective, and cost-effective. They also highlight the need to address widespread misconceptions in the population and among health professionals about the causes, prognosis and effectiveness of different treatments for low back pain.

Series author Professor Jan Hartvigsen, University of Southern Denmark comments:

“Millions of people across the world are getting the wrong care for low back pain. Protection of the public from unproven or harmful approaches to managing low back pain requires that governments and health-care leaders tackle entrenched and counterproductive reimbursement strategies, vested interests, and financial and professional incentives that maintain the status quo.”

Wednesday, March 21, 2018

First Steps of Photosynthesis


ANN ARBOR— March 20, 2018 -- Photosynthesis has driven life on this planet for more than 3 billion years—first in bacteria, then in plants—but we don't know exactly how it works.

Now, a University of Michigan biophysicist and her group have been able to image the moment a photon sparks the first energy conversion steps of photosynthesis.

In photosynthesis, light strikes colored molecules that are embedded within proteins called light-harvesting antenna complexes. These same molecules give trees their beautiful fall colors in Michigan. From there, the energy is shuttled to a photosynthetic reaction center protein that starts to channel energy from light through the photosynthetic process. The end product? Oxygen, in the case of plants, and energy for the organism.

Jennifer Ogilvie, U-M professor of physics and biophysics, studied photosynthetic reaction centers in purple bacteria. These centers are similar to the reaction centers in plants, except they use different pigments to trap and extract energy from light. There are six slightly differently colored pigments in purple bacteria's reaction centers.

"In photosynthesis, the basic architecture is that you've got lots of light-harvesting antennae complexes whose job is to gather the light energy," Ogilvie said. "They're packed with pigments whose relative positions are strategically placed to guide energy to where it needs to go for the first steps of energy conversion."

The differently colored pigments absorb different energies of light and are adapted to gather the light that is available to the bacteria. Photons excite the pigments, which triggers energy transfer in the photosynthetic reaction centers.

"The antennae take solar energy and create a molecular excitation, and in the reaction center, the excitation is converted to a charge separation," Ogilvie said. "You can think of that kind of like a battery."

But it is this moment—the moment of charge separation—that scientists do not yet have clearly pictured. Ogilvie and her team were able to take snapshots to capture this moment, using a state-of-the-art "camera" called two-dimensional electronic spectroscopy.

In particular, Ogilvie and her team were able to clearly identify a hidden state, or energy level. This is an important state to understand because it's key to the initial charge separation, or the moment energy conversion begins during photosynthesis. They were also able to witness the sequence of steps leading up to charge separation.

The finding is a particular achievement because of how impossibly quickly this energy conversion takes place—over the span of a few picoseconds. A picosecond is one trillionth of a second, an almost unimaginable timescale. A honey bee buzzes its wings 200 times a second. The first energy conversion steps within purple bacteria take place before the bee has even thought about the downward push of its first flap.
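To put that comparison in perspective, here is a minimal back-of-the-envelope sketch in Python. The ~3-picosecond charge-separation time is an assumed illustrative figure, not a value reported in the article; the 200 beats per second for the bee comes from the paragraph above.

```python
# Rough timescale comparison for the passage above.
# Assumed figures: a honey bee beats its wings about 200 times per second,
# and the initial charge-separation steps take roughly 3 picoseconds.
flap_period_s = 1 / 200          # one wing beat lasts 5 milliseconds
charge_separation_s = 3e-12      # ~3 ps (assumed representative value)

# How many charge-separation events could fit inside a single wing beat?
events_per_flap = flap_period_s / charge_separation_s
print(f"{events_per_flap:.2e} charge separations per wing beat")
```

Under these assumptions, more than a billion charge-separation events could complete in the time it takes the bee to flap its wings once.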

"From x-ray crystallography, we know the structure of the system very well, but taking the structure and predicting exactly how it works is always very tricky," Ogilvie said. "Having a better understanding of where the energy levels are will be very helpful for establishing the structure-function relationships of these photosynthetic reaction centers."

In addition to contributing to unraveling the mystery of photosynthesis, Ogilvie's work can help inform how to make more efficient solar panels.

"Part of my motivation for studying the natural photosynthetic system is there is also a need to develop more advanced technology for harvesting solar energy," Ogilvie said. "So by understanding how nature does it, the hope is that we can use the lessons from nature to help guide the development of improved materials for artificial light harvesting as well."

Tuesday, March 20, 2018

Miranda Rights of Suspects

The Miranda warning, also referred to as a person's Miranda rights, is a right-to-silence warning given by police in the United States to criminal suspects in police custody (or in a custodial interrogation) before they are interrogated. Its purpose is to preserve the admissibility of their statements against them in criminal proceedings.

A typical Miranda warning can read as follows:

"You have the right to remain silent. Anything you say can and will be used against you in a court of law. You have the right to have an attorney. If you cannot afford one, one will be appointed to you by the court. With these rights in mind, are you still willing to talk with me about the charges against you?"

The Miranda warning is part of a preventive criminal procedure rule that law enforcement officers are required to administer to protect an individual who is in custody and subject to direct questioning or its functional equivalent from a violation of his or her Fifth Amendment right against compelled self-incrimination. In Miranda v. Arizona (1966), the Supreme Court held that the admission of an elicited incriminating statement by a suspect not informed of these rights violates the Fifth Amendment and the Sixth Amendment right to counsel, through the incorporation of these rights into state law. Thus, if law enforcement officials decline to offer a Miranda warning to an individual in their custody, they may interrogate that person and act upon the knowledge gained, but may not use that person's statements as evidence against him or her in a criminal trial.

Origin and Development of Miranda Rights

The concept of "Miranda rights" was enshrined in U.S. law following the 1966 Miranda v. Arizona Supreme Court decision, which found that the Fifth and Sixth Amendment rights of Ernesto Arturo Miranda had been violated during his arrest and trial for armed robbery, kidnapping, and rape of a mentally handicapped young woman. (Miranda was subsequently retried and convicted, based primarily on the testimony of his estranged ex-partner, who had been tracked down by the original arresting officer through Miranda's own parents and who suddenly claimed that Miranda had confessed to her when she visited him in jail; Miranda's lawyer later admitted that he 'goofed' the trial.)

The circumstances triggering the Miranda safeguards, i.e. Miranda rights, are "custody" and "interrogation". Custody means formal arrest or the deprivation of freedom to an extent associated with formal arrest. Interrogation means explicit questioning or actions that are reasonably likely to elicit an incriminating response. The Supreme Court did not specify the exact wording to use when informing a suspect of his/her rights. However, the Court did create a set of guidelines that must be followed. The ruling states:

...The person in custody must, prior to interrogation, be clearly informed that he/she has the right to remain silent, and that anything the person says will be used against that person in court; the person must be clearly informed that he/she has the right to consult with an attorney and to have that attorney present during questioning, and that, if he/she is indigent, an attorney will be provided at no cost to represent him/her.

In Berkemer v. McCarty (1984), the Supreme Court decided that a person subjected to custodial interrogation is entitled to the benefit of the procedural safeguards enunciated in Miranda, regardless of the nature or severity of the offense of which he is suspected or for which he was arrested.

As a result, American English developed the verb Mirandize, meaning "read the Miranda rights to" a suspect (when the suspect is arrested).

Notably, the Miranda rights do not have to be read in any particular order, and they do not have to precisely match the language of the Miranda case as long as they are adequately and fully conveyed (California v. Prysock, 453 U.S. 355 (1981)).

In Berghuis v. Thompkins (2010), the Supreme Court held that unless a suspect expressly states that he or she is invoking this right, subsequent voluntary statements made to an officer can be used against him or her in court, and police can continue to interact with (or question) the suspect.