Wednesday, October 31, 2018

Alan Turing's "Halting Problem"

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running (i.e., halt) or continue to run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem proven to be undecidable.

Informally, for any program f that might determine if programs halt, a "pathological" program g called with an input can pass its own source and its input to f and then specifically do the opposite of what f predicts g will do. No f can exist that handles this case.
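
The contradiction can be sketched in a few lines of Python. This is only an illustration of the argument, not working code: halts stands for the hypothetical decider the proof assumes, and pathological plays the role of g.

def halts(program_source, input_data):
    # Hypothetical decider assumed for the sake of argument; Turing's proof
    # shows that no correct, always-terminating version of this can exist.
    raise NotImplementedError

def pathological(program_source):
    # Ask the supposed decider what this program would do on the given input...
    if halts(program_source, program_source):
        while True:      # ...and if it predicts "halts", loop forever instead
            pass
    else:
        return           # ...and if it predicts "runs forever", halt at once

# Calling pathological on its own source forces the contradiction: whatever
# halts() answers about that call is the opposite of what actually happens.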

Jack Copeland (2004) attributes the term halting problem to Martin Davis.

Background

The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long, and use an arbitrary amount of storage space, before halting. The question is simply whether the given program will ever halt on a particular input.

For example, in pseudocode, the program

while (true) continue

does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program

print "Hello, world!"

does halt.

While deciding whether these programs halt is simple, more complex programs prove problematic.

One approach to the problem might be to run the program for some number of steps and check whether it halts. But if the program has not halted within that many steps, it remains unknown whether it will eventually halt or run forever.
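
This idea can be made concrete with a small Python sketch. The state, step, and is_halted arguments are hypothetical stand-ins for however a program's execution is modeled; the point is that bounded simulation can only ever answer "halts" or "unknown".

def run_for(state, step, is_halted, max_steps=1_000_000):
    # Simulate up to max_steps steps of the program.
    for _ in range(max_steps):
        if is_halted(state):
            return "halts"
        state = step(state)
    return "unknown"   # it might halt later, or it might run forever

# Toy example: a counter that halts once it reaches 10.
print(run_for(0, lambda n: n + 1, lambda n: n >= 10))   # prints "halts"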

Turing proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to contradict itself and therefore cannot be correct.

Programming consequences


Some infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops. However, most subroutines are intended to finish (halt). In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish (halt), but are also guaranteed to finish before a given deadline.

Sometimes these programmers use some general-purpose (Turing-complete) programming language, but attempt to write in a restricted style—such as MISRA C or SPARK—that makes it easy to prove that the resulting subroutines finish before the given deadline.

Other times these programmers apply the rule of least power—they deliberately use a computer language that is not quite fully Turing-complete, often a language in which all subroutines are guaranteed to finish, such as Coq.

Common pitfalls


The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "doesn't halt". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.

There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer "doesn't halt" for programs that do not halt.

The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of states, and thus any deterministic program on it must eventually either halt or repeat a previous state:

...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine... (italics in original, Minsky 1967, p. 24)

Minsky warns us, however, that machines such as computers with e.g., a million small parts, each with two states, will have at least 2^1,000,000 possible states:

This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle (Minsky 1967, p. 25).

Minsky exhorts the reader to be suspicious—although a machine may be finite, and finite automata "have a number of theoretical limitations":

...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance. (Minsky p. 25)

It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
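
For the deterministic finite-memory case, the decision procedure is simple enough to sketch in Python. The machine is assumed (for illustration only) to be given as a start state, a transition function, and a set of halting states; because the state set is finite, every run either reaches a halting state or revisits a state and cycles forever.

def halts_on_finite_memory(start, transition, halting_states):
    seen = set()
    state = start
    while state not in halting_states:
        if state in seen:          # a repeated state means the run cycles forever
            return False
        seen.add(state)
        state = transition(state)
    return True                    # a halting state was reached

# Toy machine over states 0..5: each step adds 2 modulo 6, halting only in state 1.
print(halts_on_finite_memory(0, lambda s: (s + 2) % 6, {1}))   # prints False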

See Also

  • Busy beaver
  • Gödel's incompleteness theorem
  • Kolmogorov complexity
  • P versus NP problem
  • Termination analysis
  • Worst-case execution time


= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Why Is Turing’s Halting Problem Unsolvable?

Tuesday, October 30, 2018

Hydrogen Fuel Breakthrough

Researchers have cracked the chemical mechanism that will enable development of a new and more efficient photo-chemical process to produce hydrogen fuel from water, according to a new article.

October 29, 2018 -- Ben-Gurion University of the Negev (BGU) and the Technion Israel Institute of Technology researchers have cracked the chemical mechanism that will enable development of a new and more efficient photo-chemical process to produce hydrogen fuel from water, according to a new paper published in Nature Communications.

The team is the first to successfully reveal the fundamental chemical reaction present in solar power that could form the missing link to generate the electricity necessary to accomplish this process. It allows the process to unfold naturally instead of relying on large amounts of human-made energy sources or precious metals to catalyze the reaction. Production of hydrogen does not emit greenhouse gases, but the process has until now required more energy than is generated and as a result has limited commercial viability.

"This discovery could have a significant impact on efforts to replace carbon-based fuels with more environmentally friendly hydrogen fuels," according to the team led by BGU researchers Dr. Arik Yochelis and Dr. Iris Visoly-Fisher and Prof. Avner Rothschild of the Technion. "Car manufacturers seek to develop hydrogen-powered vehicles that are considered efficient and environmentally friendly and unlike electric vehicles, allow for fast refueling and extended mileage."

Hydrogen production for fuel requires splitting water molecules (H2O) into two hydrogen atoms and one oxygen atom. The research reveals a breakthrough toward understanding the mechanism that occurs during the photochemical splitting of hydrogen peroxide (H2O2) over iron-oxide photo-electrodes, which involves splitting the photo-oxidation reaction from linear to two sites.

After years of challenging experiments during which Prof. Rothschild's laboratory was unable to overcome the barrier in efficiency, he approached Drs. Yochelis and Visoly-Fisher to collaborate and complete the puzzle.

"Beyond the scientific breakthrough, we have shown that the photo-electrochemical reaction mechanism is related to the family of chemical reactions for which Prof. Gerhard Ertl was awarded the 2007 Nobel Prize in Chemistry," says Dr. Yochelis of the BGU's Alexandre Yersin Department of Solar Energy and Environmental Physics in the Jacob Blaustein Institutes for Desert Research. "Our discovery opens new strategies for photochemical processes."

Monday, October 29, 2018

No More Traffic Lights or Speeding Tickets?

New Driverless Car Technology Could Make Traffic Lights and Speeding Tickets Obsolete
Pair of studies outline innovations that will improve coordination of traffic patterns and save fuel

University of Delaware – October 26, 2018 -- Imagine a daily commute that's orderly instead of chaotic. Connected and automated vehicles could provide that relief by adjusting to driving conditions with little to no input from drivers. When the car in front of you speeds up, yours would accelerate, and when the car in front of you screeches to a halt, your car would stop, too.

At the University of Delaware, Andreas Malikopoulos uses control theory to develop algorithms that will enable this technology of the future. In two recently published papers, Malikopoulos, who was recently named the Terri Connor Kelly and John Kelly Career Development Professor of Mechanical Engineering, describes innovations in connected and automated vehicle technology pioneered in two laboratories at the University, the UD Scaled Smart City (UDSSC) testbed and a driving simulator facility.

"We are developing solutions that could enable the future of energy efficient mobility systems," said Malikopoulos. "We hope that our technologies will help people reach their destinations more quickly and safely while conserving fuel at the same time."

Making traffic lights obsolete

Someday cars might talk to each other to coordinate traffic patterns. Malikopoulos and collaborators from Boston University recently developed a solution to control and minimize energy consumption in connected and automated vehicles crossing an urban intersection that lacked traffic signals. Then they used software to simulate their results and found that their framework allowed connected and automated vehicles to conserve momentum and fuel while also improving travel time. The results were published in the journal Automatica.

Saving fuel and avoiding speeding tickets

Imagine that when the speed limit goes from 65 to 45 mph, your car automatically slows down. Malikopoulos and collaborators from the University of Virginia formulated a solution that yields the optimal acceleration and deceleration in a speed reduction zone, avoiding rear-end crashes. What's more, simulations suggest that the connected vehicles use 19 to 22 percent less fuel and get to their destinations 26 to 30 percent faster than human-driven vehicles. The results of this research effort were published in IEEE Transactions on Intelligent Transportation Systems.

Malikopoulos has received funding for this work from two U.S. Department of Energy programs: the Smart Mobility Initiative and the Advanced Research Projects Agency-Energy (ARPA-E) NEXTCAR program.

Malikopoulos is the principal investigator of a three-year project funded by the Advanced Research Projects Agency for Energy (ARPA-E) through its NEXT-Generation Energy Technologies for Connected and Automated On-Road Vehicles (NEXTCAR) program to improve the efficiency of an Audi A3 e-tron by at least 20 percent. The partners of this project are the University of Michigan, Boston University, Bosch Corporation, and Oak Ridge National Laboratory.

Sunday, October 28, 2018

Binary Search Algorithms

In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array.  Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Even though the idea is simple, implementing binary search correctly requires attention to some subtleties about its exit conditions and midpoint calculation.
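
The procedure just described can be written as a short Python sketch. It assumes a sorted list and returns the index of the target, or -1 if the target is absent; the midpoint is computed as lo + (hi - lo) // 2 in anticipation of the overflow pitfall discussed under "Implementation Issues" below.

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                  # exit once the remaining half is empty
        mid = lo + (hi - lo) // 2    # overflow-safe form of the midpoint
        if arr[mid] == target:
            return mid               # found: return its index
        elif arr[mid] < target:
            lo = mid + 1             # the target can only be in the upper half
        else:
            hi = mid - 1             # the target can only be in the lower half
    return -1                        # the target is not in the array

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3
print(binary_search([1, 3, 5, 7, 9, 11], 8))   # prints -1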

Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array, the O is Big O notation, and log is the logarithm. Binary search takes constant (O(1)) space, meaning that the space taken by the algorithm is the same for any number of elements in the array. Binary search is faster than linear search except for small arrays, but the array must be sorted first. Although specialized data structures designed for fast searching, such as hash tables, can be searched more efficiently, binary search applies to a wider range of problems.

There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search.

Implementation Issues

Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky ... — Donald Knuth

When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases. A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks. Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.

In a practical implementation, the variables used to represent the indices will often be of fixed size, and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as (L + R) / 2, then the value of L + R may exceed the range of integers of the data type used to store the midpoint, even if L and R are within the range. If L and R are nonnegative, this can be avoided by calculating the midpoint as L + (R - L) / 2.
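
Python integers never overflow, so the following sketch emulates 32-bit signed arithmetic (the to_int32 helper and the sample indices are purely illustrative) to show why (L + R) / 2 misbehaves for large indices in fixed-width languages while L + (R - L) / 2 does not.

def to_int32(x):
    # Wrap x into the signed 32-bit range, as a C int or Java int would.
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

L, R = 1_500_000_000, 2_000_000_000       # both are valid 32-bit array indices

naive_mid = to_int32(L + R) // 2          # L + R wraps around to a negative number
safe_mid  = L + (R - L) // 2              # stays within the 32-bit range

print(naive_mid, safe_mid)                # -397483648 versus 1750000000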

If the target value is greater than the greatest value in the array, and the last index of the array is the maximum representable value of L, the value of L will eventually become too large and overflow. A similar problem will occur if the target value is smaller than the least value in the array and the first index of the array is the smallest representable value of R. In particular, this means that R must not be an unsigned type if the array starts with index 0.

An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once L exceeds R, the search has failed and must convey the failure of the search. In addition, the loop must be exited when the target element is found, or in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed at the end must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.

Saturday, October 27, 2018

The Holy Grail

The Holy Grail is a vessel that serves as an important motif in Arthurian literature. Different traditions describe it as a cup, dish or stone with miraculous powers that provide happiness, eternal youth or sustenance in infinite abundance, often in the custody of the Fisher King. The term "holy grail" is often used to denote an elusive object or goal that is sought after for its great significance.

A "grail", wondrous but not explicitly holy, first appears in Perceval, le Conte du Graal, an unfinished romance written by Chrétien de Troyes around 1190. Here, Chrétien's story attracted many continuators, translators and interpreters in the later 12th and early 13th centuries, including Wolfram von Eschenbach, who perceived the Grail as a stone. In the late 12th century, Robert de Boron wrote in Joseph d'Arimathie that the Grail was Jesus's vessel from the Last Supper, which Joseph of Arimathea used to catch Christ's blood at the Crucifixion. Thereafter, the Holy Grail became interwoven with the legend of the Holy Chalice, the Last Supper cup, a theme continued in works such as the Vulgate Cycle, the Post-Vulgate Cycle, and Thomas Malory's Le Morte d'Arthur.

Etymology of Graal

The word graal, as it is earliest spelled, comes from Old French graal or greal, cognate with Old Provençal grazal and Old Catalan gresal, meaning "a cup or bowl of earth, wood, or metal" (or other various types of vessels in different Occitan dialects). The most commonly accepted etymology derives it from Latin gradalis or gradale via an earlier form, cratalis, a derivative of crater or cratus, which was, in turn, borrowed from Greek krater (κρατήρ, a large wine-mixing vessel). Alternative suggestions include a derivative of cratis, a name for a type of woven basket that came to refer to a dish, or a derivative of Latin gradus meaning "'by degree', 'by stages', applied to a dish brought to the table in different stages or services during a meal".

In the 15th century, English writer John Hardyng invented a fanciful new etymology for Old French san-graal (or san-gréal), meaning "Holy Grail", by parsing it as sang real, meaning "royal blood". This etymology was used by some later British writers such as Thomas Malory, and became prominent in the conspiracy theory developed in the book Holy Blood, Holy Grail, in which sang real refers to the Jesus bloodline.

Scholarly Hypotheses

Richard Barber (2004) argued that the Grail legend is connected to the introduction of "more ceremony and mysticism" surrounding the sacrament of the Eucharist in the high medieval period, proposing that the first Grail stories may have been connected to the "renewal in this traditional sacrament". Scavone (1999, 2003) has argued that the "Grail" in origin referred to the Shroud of Turin. Goulven Peron (2016) suggested that the Holy Grail may reflect the horn of the river-god Achelous as described by Ovid in the Metamorphoses.

Friday, October 26, 2018

Ethics of Self-Driving Cars

Massive global survey reveals ethics preferences and regional differences.
By Peter Dizikes | MIT News Office

October 24, 2018 -- A massive new survey developed by MIT researchers reveals some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences.

The survey has global reach and a unique scale, with over 2 million online participants from over 200 countries weighing in on versions of a classic ethical conundrum, the “Trolley Problem.” The problem involves scenarios in which an accident involving a vehicle is imminent, and the vehicle must opt for one of two potentially fatal options. In the case of driverless cars, that might mean swerving toward a couple of people, rather than a large group of bystanders.

“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” says Edmond Awad, a postdoc at the MIT Media Lab and lead author of a new paper outlining the results of the project. “We don’t know yet how they should do that.”

Still, Awad adds, “We found that there are three elements that people seem to approve of the most.”

Indeed, the most emphatic global preferences in the survey are for sparing the lives of humans over the lives of other animals; sparing the lives of many people rather than a few; and preserving the lives of the young, rather than older people.

“The main preferences were to some degree universally agreed upon,” Awad notes. “But the degree to which they agree with this or not varies among different groups or countries.” For instance, the researchers found a less pronounced tendency to favor younger people, rather than the elderly, in what they defined as an “eastern” cluster of countries, including many in Asia.

The paper, “The Moral Machine Experiment,” is being published today in Nature.

The authors are Awad; Sohan Dsouza, a doctoral student in the Media Lab; Richard Kim, a research assistant in the Media Lab; Jonathan Schulz, a postdoc at Harvard University; Joseph Henrich, a professor at Harvard; Azim Shariff, an associate professor at the University of British Columbia; Jean-François Bonnefon, a professor at the Toulouse School of Economics; and Iyad Rahwan, an associate professor of media arts and sciences at the Media Lab, and a faculty affiliate in the MIT Institute for Data, Systems, and Society.

Awad is a postdoc in the MIT Media Lab’s Scalable Cooperation group, which is led by Rahwan.

To conduct the survey, the researchers designed what they call “Moral Machine,” a multilingual online game in which participants could state their preferences concerning a series of dilemmas that autonomous vehicles might face. For instance: If it comes right down to it, should autonomous vehicles spare the lives of law-abiding bystanders, or, alternately, law-breaking pedestrians who might be jaywalking? (Most people in the survey opted for the former.)

All told, “Moral Machine” compiled nearly 40 million individual decisions from respondents in 233 countries; the survey collected 100 or more responses from 130 countries. The researchers analyzed the data as a whole, while also breaking participants into subgroups defined by age, education, gender, income, and political and religious views. There were 491,921 respondents who offered demographic data.

The scholars did not find marked differences in moral preferences based on these demographic characteristics, but they did find larger “clusters” of moral preferences based on cultural and geographic affiliations. They defined “western,” “eastern,” and “southern” clusters of countries, and found some more pronounced variations along these lines. For instance: Respondents in southern countries had a relatively stronger tendency to favor sparing young people rather than the elderly, especially compared to the eastern cluster.

Awad suggests that acknowledgement of these types of preferences should be a basic part of informing public-sphere discussion of these issues. In all regions, there is a moderate preference for sparing law-abiding bystanders rather than jaywalkers; knowing these preferences could, in theory, inform the way software is written to control autonomous vehicles.

“The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule,” he says.

Rahwan, for his part, notes that “public interest in the platform surpassed our wildest expectations,” allowing the researchers to conduct a survey that raised awareness about automation and ethics while also yielding specific public-opinion information.

“On the one hand, we wanted to provide a simple way for the public to engage in an important societal discussion,” Rahwan says. “On the other hand, we wanted to collect data to identify which factors people think are important for autonomous cars to use in resolving ethical tradeoffs.”

Beyond the results of the survey, Awad suggests, seeking public input about an issue of innovation and public safety should continue to become a larger part of the dialogue surrounding autonomous vehicles.

“What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions,” Awad says.

Thursday, October 25, 2018

Coming: A Much Faster Internet

Groundbreaking new technology could allow 100-times-faster internet by harnessing twisted light beams
RMIT University [Melbourne, Australia]

October 24, 2018 -- Broadband fiber-optics carry information on pulses of light, at the speed of light, through optical fibers. But the way the light is encoded at one end and processed at the other affects data speeds.

This world-first nanophotonic device, just unveiled in Nature Communications, encodes more data and processes it much faster than conventional fiber optics by using a special form of 'twisted' light.

Dr. Haoran Ren from RMIT's School of Science, who was co-lead author of the paper, said the tiny nanophotonic device they have built for reading twisted light is the missing key required to unlock super-fast, ultra-broadband communications.

"Present-day optical communications are heading towards a 'capacity crunch' as they fail to keep up with the ever-increasing demands of Big Data," Ren said.

"What we've managed to do is accurately transmit data via light at its highest capacity in a way that will allow us to massively increase our bandwidth."

Current state-of-the-art fiber-optic communications, like those used in Australia's National Broadband Network (NBN), use only a fraction of light's actual capacity by carrying data on the colour spectrum.

New broadband technologies under development use the oscillation, or shape, of light waves to encode data, increasing bandwidth by also making use of the light we cannot see.

This latest technology, at the cutting edge of optical communications, carries data on light waves that have been twisted into a spiral to increase their capacity further still. This is known as light in a state of orbital angular momentum, or OAM.

In 2016 the same group from RMIT's Laboratory of Artificial-Intelligence Nanophotonics (LAIN) published a disruptive research paper in Science journal describing how they'd managed to decode a small range of this twisted light on a nanophotonic chip. But technology to detect a wide range of OAM light for optical communications was still not viable, until now.

"Our miniature OAM nano-electronic detector is designed to separate different OAM light states in a continuous order and to decode the information carried by twisted light," Ren said.

"To do this previously would require a machine the size of a table, which is completely impractical for telecommunications. By using ultrathin topological nanosheets measuring a fraction of a millimeter, our invention does this job better and fits on the end of an optical fiber."

LAIN Director and Associate Deputy Vice-Chancellor for Research Innovation and Entrepreneurship at RMIT, Professor Min Gu, said the materials used in the device were compatible with the silicon-based materials used in most technology, making it easy to scale up for industry applications.

"Our OAM nano-electronic detector is like an 'eye' that can 'see' information carried by twisted light and decode it to be understood by electronics. This technology's high performance, low cost and tiny size makes it a viable application for the next generation of broadband optical communications," he said.

"It fits the scale of existing fiber technology and could be applied to increase the bandwidth, or potentially the processing speed, of that fiber by over 100 times within the next couple of years. This easy scalability and the massive impact it will have on telecommunications is what's so exciting."

Gu said the detector can also be used to receive quantum information sent via twisting light, meaning it could have applications in a whole range of cutting edge quantum communications and quantum computing research.

"Our nano-electronic device will unlock the full potential of twisted light for future optical and quantum communications," Gu said.


More information: Zengji Yue et al, Angular-momentum nanometrology in an ultrathin plasmonic topological insulator film, Nature Communications (2018). DOI: 10.1038/s41467-018-06952-1

Journal reference: Nature Communications

https://phys.org/news/2018-10-groundbreaking-technology-times-faster-internet-harnessing.html

Wednesday, October 24, 2018

Strong Spider Web Silk


Researchers at San Diego State University and at Northwestern University have made strides in replicating the techniques that black widow spiders use to form strong web protein silk. The practical result of this research could be materials with an almost boundless range of applications. See https://news.northwestern.edu/stories/2018/october/researchers-further-unravel-mystery-of-how-black-widow-spiders-create-steel-strength-silk-web/

Tuesday, October 23, 2018

Enthalpy and Thermodynamics

Enthalpy is a property of a thermodynamic system. The enthalpy of a system is equal to the system's internal energy plus the product of its pressure and volume. For processes at constant pressure, the heat absorbed or released equals the change in enthalpy.

The unit of measurement for enthalpy in the International System of Units (SI) is the joule. Other historical conventional units still in use include the British thermal unit (BTU) and the calorie.

Enthalpy comprises a system's internal energy, which is the energy required to create the system, plus the amount of work required to make room for it by displacing its environment and establishing its volume and pressure.

Enthalpy is defined as a state function that depends only on the prevailing equilibrium state identified by the system's internal energy, pressure, and volume. It is an extensive quantity.

Enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements at constant pressure, because it simplifies the description of energy transfer. At constant pressure, the enthalpy change equals the energy transferred from the environment through heating or work other than expansion work.

The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. The ΔH is a positive change in endothermic reactions, and negative in heat-releasing exothermic processes.

For processes under constant pressure, ΔH is equal to the change in the internal energy of the system, plus the pressure-volume work p ΔV done by the system on its surroundings (which is > 0 for an expansion and < 0 for a contraction). This means that the change in enthalpy under such conditions is the heat absorbed or released by the system through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure usually refer to standard state: most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature (see standard state), but expressions for enthalpy generally reference the standard heat of formation at 25 °C.
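
As a small worked example of the constant-pressure relation described above: the numbers below are made up for illustration and do not come from the text; they simply evaluate ΔH = ΔU + pΔV once in Python.

delta_U = 150.0       # change in internal energy, in joules (illustrative value)
p       = 101_325.0   # constant pressure, in pascals (about 1 atm)
delta_V = 2.0e-4      # volume change, in cubic metres (an expansion, so positive)

delta_H = delta_U + p * delta_V
print(f"ΔH = {delta_H:.1f} J")   # about 170.3 J absorbed as heat at constant pressure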

Enthalpy of ideal gases and incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.

Applications of Enthalpy

In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system.

Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the pV term. The supplied energy must also provide the change in internal energy, U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV. For systems at constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system.

For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.

Monday, October 22, 2018

Memristors Mimic Brain Activity

Understanding the Building Blocks for an Electronic Brain

University of Groningen [Netherlands] – October 18, 2018 -- Computer bits are binary, with a value of 0 or 1. By contrast, neurons in the brain can have all kinds of different internal states, depending on the input that they received. This allows the brain to process information in a more energy-efficient manner than a computer. University of Groningen (UG) physicists are working on memristors, resistors with a memory, made from niobium-doped strontium titanate, which mimic how neurons work. Their results were published in the Journal of Applied Physics on 21 October.

The brain is superior to traditional computers in many ways. Brain cells use less energy, process information faster and are more adaptable. The way that brain cells respond to a stimulus depends on the information that they have received, which potentiates or inhibits the neurons. Scientists are working on new types of devices which can mimic this behaviour, called memristors.

UG researcher Anouk Goossens, the first author of the paper, tested memristors made from niobium-doped strontium titanate during her Master’s research project. The conductivity of the memristors is controlled by an electric field in an analogue fashion: ‘We use the system’s ability to switch resistance: by applying voltage pulses, we can control the resistance, and using a low voltage we read out the current in different states. The strength of the pulse determines the resistance in the device. We have shown a resistance ratio of at least 1000 to be realisable.’ Goossens was especially interested in the time dynamics of the resistance states.

Forgetting

Goossens observed that the duration of the pulse with which the resistance was set determined how long the ‘memory’ lasted. This could be between one and four hours for pulses lasting between a second and two minutes. Furthermore, she found that after 100 switching cycles, the material showed no signs of fatigue.

‘There are different things you could do with this’, says Goossens. ‘By “teaching” the device in different ways, using different pulses, we can change its behaviour.’ The fact that the resistance changes over time can also be useful: ‘These systems can forget, just like the brain. It allows me to use time as a variable parameter.’ In addition, the devices that Goossens made combine both memory and processing in one device, which is more efficient than traditional computer architecture in which storage (on magnetic hard discs) and processing (in the CPU) are separated.

Before building brain-like circuits with her device, Goossens plans to conduct experiments to really understand what happens within the material. ‘If we don’t know exactly how it works, we can’t solve any problems that might occur in these circuits. So, we have to understand the physical properties of the material: what does it do, and why?’

Questions that Goossens wants to answer include what parameters influence the states that are achieved. ‘And if we manufacture 100 of these devices, do they all work the same? If they don’t, and there is device-to-device variation, that doesn’t have to be a problem. After all, not all elements in the brain are the same.’
 
CogniGron

This brain-like architecture for microelectronics is one of the topics of CogniGron, a new research institute at the Faculty of Science and Engineering. The institute is currently in the process of hiring 12 full professors and some 40 PhD students. It is an interdisciplinary institute, which is necessary for this type of research, says Goossens. ‘For example, I’m working on new materials, but I don’t know much about the learning algorithms that the systems that could be made from our devices would be able to run. This is where the multidisciplinary nature of CogniGron becomes crucial; for example, my co-promotor is Professor Lambert Schomaker from Artificial Intelligence.’

Anouk Goossens conducted the experiments described in the paper during a research project as part of the Master’s in Nanoscience degree programme at the University of Groningen. Goossens’ research project took place within the Spintronics of Functional Materials group supervised by Dr Tamalika Banerjee. She is now a PhD student in the same group.

Challenge

‘When I started my bachelor’s in physics, I initially considered pursuing theoretical physics, but we were introduced to a lot of new subjects in the first year. For example, I had never heard of spintronics before.’ The introduction to more applied subjects made Goossens change direction. ‘I love it when something I build comes to life. In theory, everything works, but when you start building, you run into all sorts of problems. That is a challenge I like!’

The experience in her Master’s research project gave Goossens a taste for more. ‘So, I welcomed the opportunity of four more years on a PhD project.’ In four years’ time, she hopes to have built a circuit by connecting a ‘fairly large number’ of the devices that she has made. ‘And I would like to run a self-learning algorithm on it.’ Goossens is not quite certain what she will do after finishing her thesis. ‘I like to take things one step at a time. Maybe I will continue in academic research, but I’m keeping all of my options open.’

Reference: A.S. Goossens, A. Das, and T. Banerjee: Electric field driven memristive behavior at the Schottky interface of Nb-doped SrTiO3. Journal of Applied Physics, special topic section: New physics and materials for neuromorphic computation. 21 October 2018

Sunday, October 21, 2018

Comedian "Flip" Wilson

Clerow "Flip" Wilson Jr. (December 8, 1933 – November 25, 1998) was an American comedian and actor best known for his television appearances during the late 1960s and the 1970s. From 1970 to 1974, Wilson hosted his own weekly variety series, The Flip Wilson Show, and introduced viewers to his recurring character Geraldine. The series earned Wilson a Golden Globe and two Emmy Awards, and at one point was the second highest rated show on network television. Wilson was the first African-American to host a successful variety TV show. (Sammy Davis Jr. had had a short-lived variety show in 1966). In January 1972, Time magazine featured Wilson's image on its cover and named him "TV's first black superstar".

[Photo: Flip Wilson in 1969]

Wilson released a number of comedy albums in the 1960s and 70s, and won a Grammy Award for his 1970 album The Devil Made Me Buy This Dress.

After The Flip Wilson Show ended, Wilson kept performing and acting until the 1990s, though at a reduced schedule. He hosted a short-lived revival of People are Funny in 1984, and had the lead role in the 1985-1986 sitcom Charlie & Co.

Early Life

Born Clerow Wilson Jr. in Jersey City, New Jersey, he was one of ten children born to Cornelia Bullock and Clerow Wilson Sr. His father worked as a handyman but, because of the Great Depression, was often out of work. When Wilson was seven years old, his mother abandoned the family. His father was unable to care for the children alone and he placed many of them in foster homes. After bouncing from foster homes to reform school, 16-year-old Wilson lied about his age and joined the United States Air Force. His outgoing personality and funny stories made him popular; he was even asked to tour military bases to cheer up other servicemen. Claiming that he was always "flipped out," Wilson's barracks mates gave him the nickname "Flip" which he used as his stage name. Discharged from the Air Force in 1954, Wilson started working as a bellhop in San Francisco's Manor Plaza Hotel.

At the Plaza's nightclub, Wilson found extra work playing a drunken patron in between regularly scheduled acts. His inebriated character proved popular and Wilson began performing it in clubs throughout California. At first Wilson would simply ad-lib onstage, but eventually he added written material and his act became more sophisticated.

Career

In the late 1950s and early 1960s, Wilson toured regularly through nightclubs with a black clientele in the so-called "Chitlin' Circuit". During the 1960s, Wilson became a regular at the Apollo Theater in Harlem. An unexpected break came in 1965, when comedian Redd Foxx was a guest on The Tonight Show and host Johnny Carson asked him who the funniest comedian at the time was; Foxx answered, "Flip Wilson". Carson then booked Wilson to appear on The Tonight Show, and Wilson became a favorite guest on that show, as well as on The Ed Sullivan Show. Wilson later singled out Sullivan as providing his biggest career boost. Wilson also made guest appearances on numerous TV comedies and variety shows, such as Here's Lucy (in which he played the role of "Prissy" in a spoof of Gone With the Wind, with Lucille Ball as Scarlett), and The Dean Martin Show, among others.

Wilson's warm and ebullient personality was infectious. Richard Pryor told Wilson, "You're the only performer that I've ever seen who goes on the stage and the audience hopes that you like them."

A routine titled "Columbus," from the 1967 album Cowboys and Colored People, brought Wilson to Hollywood industry attention. In this bit, Wilson retells the story of Christopher Columbus from an anachronistic urbanized viewpoint, in which Columbus convinces the Spanish monarchs to fund his voyage by noting that discovering America means that he can also discover Ray Charles. Hearing this, Queen "Isabel Johnson," whose voice is an early version of Wilson's eventual "Geraldine" character, says that "Chris" can have "all the money you want, honey—You go find Ray Charles!" When Columbus departs from the dock, an inebriated Isabella is there, testifying to one and all that "Chris gonna find Ray Charles!"

In 1970, Wilson won a Grammy Award for his comedy album The Devil Made Me Buy This Dress. He was also a regular cast member on Rowan & Martin's Laugh-In.  DePatie-Freleng Enterprises featured Wilson in two TV specials, Clerow Wilson and the Miracle of P.S. 14 and Clerow Wilson's Great Escape.

In Popular Culture

Wilson popularized the phrase "The devil made me do it." Also, the phrase, "What you see is what you get," often used by Wilson's Geraldine character, inspired researchers at PARC and elsewhere to create the acronym WYSIWYG.

https://en.wikipedia.org/wiki/Flip_Wilson

Saturday, October 20, 2018

Reinventing Cement


Various enterprises are experimenting with ways to change cement from a carbon-emitting building material into a carbon-capturing product. Details are available at this link: http://www.anthropocenemagazine.org/cement/

Friday, October 19, 2018

Human Neuron Dendrites

Electrical Properties of Dendrites Help Explain Our Brain’s Unique Computing Power
Neurons in human and rat brains carry electrical signals in different ways, scientists find
By Anne Trafton | MIT News Office

October 18, 2018 -- Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in incorporating all of that information so the cells can respond appropriately.

Using hard-to-obtain samples of human brain tissue, MIT neuroscientists have now discovered that human dendrites have different electrical properties from those of other species. Their studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrites can behave independently from the rest of the neuron.

These differences may contribute to the enhanced computing power of the human brain, the researchers say.

“It’s not just that humans are smart because we have more neurons and a larger cortex. From the bottom up, neurons behave differently,” says Mark Harnett, the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences. “In human neurons, there is more electrical compartmentalization, and that allows these units to be a little bit more independent, potentially leading to increased computational capabilities of single neurons.”

Harnett, who is also a member of MIT’s McGovern Institute for Brain Research, and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears in the Oct. 18 issue of Cell. The paper’s lead author is Lou Beaulieu-Laroche, a graduate student in MIT’s Department of Brain and Cognitive Sciences.

Neural computation

Dendrites can be thought of as analogous to transistors in a computer, performing simple operations using electrical signals. Dendrites receive input from many other neurons and carry those signals to the cell body. If stimulated enough, a neuron fires an action potential — an electrical impulse that then stimulates other neurons. Large networks of these neurons communicate with each other to generate thoughts and behavior.

The structure of a single neuron often resembles a tree, with many branches bringing in information that arrives far from the cell body. Previous research has found that the strength of electrical signals arriving at the cell body depends, in part, on how far they travel along the dendrite to get there. As the signals propagate, they become weaker, so a signal that arrives far from the cell body has less of an impact than one that arrives near the cell body.

Dendrites in the cortex of the human brain are much longer than those in rats and most other species, because the human cortex has evolved to be much thicker than that of other species. In humans, the cortex makes up about 75 percent of the total brain volume, compared to about 30 percent in the rat brain.

Although the human cortex is two to three times thicker than that of rats, it maintains the same overall organization, consisting of six distinctive layers of neurons. Neurons from layer 5 have dendrites long enough to reach all the way to layer 1, meaning that human dendrites have had to elongate as the human brain has evolved, and electrical signals have to travel that much farther.

In the new study, the MIT team wanted to investigate how these length differences might affect dendrites’ electrical properties. They were able to compare electrical activity in rat and human dendrites, using small pieces of brain tissue removed from epilepsy patients undergoing surgical removal of part of the temporal lobe. In order to reach the diseased part of the brain, surgeons also have to take out a small chunk of the anterior temporal lobe.

With the help of MGH collaborators Cash, Matthew Frosch, Ziv Williams, and Emad Eskandar, Harnett’s lab was able to obtain samples of the anterior temporal lobe, each about the size of a fingernail.

Evidence suggests that the anterior temporal lobe is not affected by epilepsy, and the tissue appears normal when examined with neuropathological techniques, Harnett says. This part of the brain appears to be involved in a variety of functions, including language and visual processing, but is not critical to any one function; patients are able to function normally after it is removed.

Once the tissue was removed, the researchers placed it in a solution very similar to cerebrospinal fluid, with oxygen flowing through it. This allowed them to keep the tissue alive for up to 48 hours. During that time, they used a technique known as patch-clamp electrophysiology to measure how electrical signals travel along dendrites of pyramidal neurons, which are the most common type of excitatory neurons in the cortex.

These experiments were performed primarily by Beaulieu-Laroche. Harnett’s lab (and others) have previously done this kind of experiment in rodent dendrites, but his team is the first to analyze electrical properties of human dendrites.

Unique features

The researchers found that because human dendrites cover longer distances, a signal flowing along a human dendrite from layer 1 to the cell body in layer 5 is much weaker when it arrives than a signal flowing along a rat dendrite from layer 1 to layer 5.

They also showed that human and rat dendrites have the same number of ion channels, which regulate the current flow, but these channels occur at a lower density in human dendrites as a result of the dendrite elongation. They also developed a detailed biophysical model that shows that this density change can account for some of the differences in electrical activity seen between human and rat dendrites, Harnett says.

Nelson Spruston, senior director of scientific programs at the Howard Hughes Medical Institute Janelia Research Campus, described the researchers’ analysis of human dendrites as “a remarkable accomplishment.”

“These are the most carefully detailed measurements to date of the physiological properties of human neurons,” says Spruston, who was not involved in the research. “These kinds of experiments are very technically demanding, even in mice and rats, so from a technical perspective, it’s pretty amazing that they’ve done this in humans.”

The question remains, how do these differences affect human brainpower? Harnett’s hypothesis is that because of these differences, which allow more regions of a dendrite to influence the strength of an incoming signal, individual neurons can perform more complex computations on the information.

“If you have a cortical column that has a chunk of human or rodent cortex, you’re going to be able to accomplish more computations faster with the human architecture versus the rodent architecture,” he says.

There are many other differences between human neurons and those of other species, Harnett adds, making it difficult to tease out the effects of dendritic electrical properties. In future studies, he hopes to explore further the precise impact of these electrical properties, and how they interact with other unique features of human neurons to produce more computing power.

Thursday, October 18, 2018

Wednesday, October 17, 2018

Dogs and Human Words

Scientists Chase Mystery of How Dogs Process Words
By Carol Clark

October 15, 2018 -- When some dogs hear their owners say “squirrel,” they perk up, become agitated. They may even run to a window and look out of it. But what does the word mean to the dog? Does it mean, “Pay attention, something is happening?” Or does the dog actually picture a small, bushy-tailed rodent in its mind?

Frontiers in Neuroscience published one of the first studies using brain imaging to probe how our canine companions process words they have been taught to associate with objects, conducted by scientists at Emory University. The results suggest that dogs have at least a rudimentary neural representation of meaning for words they have been taught, differentiating words they have heard before from those they have not.
 
“Many dog owners think that their dogs know what some words mean, but there really isn’t much scientific evidence to support that,” says Ashley Prichard, a PhD candidate in Emory’s Department of Psychology and first author of the study. “We wanted to get data from the dogs themselves — not just owner reports.”

“We know that dogs have the capacity to process at least some aspects of human language since they can learn to follow verbal commands,” adds Emory neuroscientist Gregory Berns, senior author of the study. “Previous research, however, suggests dogs may rely on many other cues to follow a verbal command, such as gaze, gestures and even emotional expressions from their owners.”

The Emory researchers focused on questions surrounding the brain mechanisms dogs use to differentiate between words, or even what constitutes a word to a dog.

Berns is founder of the Dog Project, which is researching evolutionary questions surrounding man’s best, and oldest friend. The project was the first to train dogs to voluntarily enter a functional magnetic resonance imaging (fMRI) scanner and remain motionless during scanning, without restraint or sedation. Studies by the Dog Project have furthered understanding of dogs’ neural response to expected reward, identified specialized areas in the dog brain for processing faces, demonstrated olfactory responses to human and dog odors, and linked prefrontal function to inhibitory control.

For the current study, 12 dogs of varying breeds were trained for months by their owners to retrieve two different objects, based on the objects’ names. Each dog’s pair of objects consisted of one with a soft texture, such as a stuffed animal, and another of a different texture, such as rubber, to facilitate discrimination. Training consisted of instructing the dogs to fetch one of the objects and then rewarding them with food or praise. Training was considered complete when a dog showed that it could discriminate between the two objects by consistently fetching the one requested by the owner when presented with both of the objects.

During one experiment, the trained dog lay in the fMRI scanner while the dog’s owner stood directly in front of the dog at the opening of the machine and said the names of the dog’s toys at set intervals, then showed the dog the corresponding toys.

Eddie, a golden retriever-Labrador mix, for instance, heard his owner say the words “Piggy” or “Monkey,” then his owner held up the matching toy. As a control, the owner then spoke gibberish words, such as “bobbu” and “bodmick,” then held up novel objects like a hat or a doll.

The results showed greater activation in auditory regions of the brain to the novel pseudowords relative to the trained words.

“We expected to see that dogs neurally discriminate between words that they know and words that they don’t,” Prichard says. “What’s surprising is that the result is opposite to that of research on humans — people typically show greater neural activation for known words than novel words.”

The researchers hypothesize that the dogs may show greater neural activation to a novel word because they sense their owners want them to understand what they are saying, and they are trying to do so. “Dogs ultimately want to please their owners, and perhaps also receive praise or food,” Berns says.

Half of the dogs in the experiment showed the increased activation for the novel words in their parietotemporal cortex, an area of the brain that the researchers believe may be analogous to the angular gyrus in humans, where lexical differences are processed.

The other half of the dogs, however, showed heightened activity to novel words in other brain regions, including the other parts of the left temporal cortex and amygdala, caudate nucleus, and the thalamus.

These differences may be related to a limitation of the study — the varying range in breeds and sizes of the dogs, as well as possible variations in their cognitive abilities. A major challenge in mapping the cognitive processes of the canine brain, the researchers acknowledge, is the variety of shapes and sizes of dogs’ brains across breeds.

“Dogs may have varying capacity and motivation for learning and understanding human words,” Berns says, “but they appear to have a neural representation for the meaning of words they have been taught, beyond just a low-level Pavlovian response.”

This conclusion does not mean that spoken words are the most effective way for an owner to communicate with a dog. In fact, other research also led by Prichard and Berns and recently published in Scientific Reports, showed that the neural reward system of dogs is more attuned to visual and to scent cues than to verbal ones.

“When people want to teach their dog a trick, they often use a verbal command because that’s what we humans prefer,” Prichard says. “From the dog’s perspective, however, a visual command might be more effective, helping the dog learn the trick faster.”

Co-authors of the Frontiers in Neuroscience study include Peter Cook (a neuroscientist at the New College of Florida), Mark Spivak (owner of Comprehensive Pet Therapy) and Raveena Chhibber (an information specialist in Emory’s Department of Psychology).

Co-authors of the Scientific Reports paper also include Spivak and Chhibber, along with Kate Athanassiades (from Emory’s School of Nursing).

Tuesday, October 16, 2018

Inflammation of the Sinuses

Sinusitis, also known as a sinus infection or rhinosinusitis, is inflammation of the sinuses resulting in symptoms. Common symptoms include thick nasal mucus, a plugged nose, and pain in the face. Other signs and symptoms may include fever, headaches, poor sense of smell, sore throat, and cough. The cough is often worse at night. Serious complications are rare. It is defined as acute rhinosinusitis (ARS) if it lasts less than 4 weeks, and as chronic rhinosinusitis (CRS) if it lasts for more than 12 weeks.

Sinusitis can be caused by infection, allergies, air pollution, or structural problems in the nose. Most cases are caused by a viral infection. A bacterial infection may be present if symptoms last more than ten days or if a person worsens after starting to improve. Recurrent episodes are more likely in people with asthma, cystic fibrosis, and poor immune function. X-rays are not typically needed unless complications are suspected. In chronic cases confirmatory testing is recommended by either direct visualization or computed tomography.

Some cases may be prevented by hand washing, avoiding smoking, and immunization. Pain killers such as naproxen, nasal steroids, and nasal irrigation may be used to help with symptoms. Recommended initial treatment for ARS is watchful waiting. If symptoms do not improve in 7–10 days or get worse, then an antibiotic may be used or changed. In those in whom antibiotics are used, either amoxicillin or amoxicillin/clavulanate is recommended first line. Surgery may occasionally be used in people with chronic disease.

Sinusitis is a common condition. It affects between about 10% and 30% of people each year in the United States and Europe. Women are more often affected than men. Chronic sinusitis affects approximately 12.5% of people. Treatment of sinusitis in the United States results in more than US$11 billion in costs. The unnecessary and ineffective treatment of viral sinusitis with antibiotics is common.

Signs and Symptoms

Headache/facial pain or pressure of a dull, constant, or aching sort over the affected sinuses is common with both acute and chronic stages of sinusitis. This pain is typically localized to the involved sinus and may worsen when the affected person bends over or when lying down. Pain often starts on one side of the head and progresses to both sides. Acute sinusitis may be accompanied by thick nasal discharge that is usually green in color and may contain pus (purulent) and/or blood. Often a localized headache or toothache is present, and it is these symptoms that distinguish a sinus-related headache from other types of headaches, such as tension and migraine headaches. Another way to distinguish between toothache and sinusitis is that the pain in sinusitis is usually worsened by tilting the head forwards and with valsalva maneuvers.

Infection of the eye socket is possible, which may result in the loss of sight and is accompanied by fever and severe illness. Another possible complication is the infection of the bones (osteomyelitis) of the forehead and other facial bones – Pott's puffy tumor.

Sinus infections can also cause middle ear problems due to the congestion of the nasal passages. This can be demonstrated by dizziness, "a pressurized or heavy head", or vibrating sensations in the head. Post-nasal drip is also a symptom of chronic rhinosinusitis.

Halitosis (bad breath) is often stated to be a symptom of chronic rhinosinusitis; however, gold standard breath analysis techniques have not been applied. Theoretically, there are several possible mechanisms of both objective and subjective halitosis that may be involved.

A 2004 study suggested that up to 90% of "sinus headaches" are actually migraines. The confusion occurs in part because migraine involves activation of the trigeminal nerves, which innervate both the sinus region and the meninges surrounding the brain. As a result, it is difficult to accurately determine the site from which the pain originates. People with migraines do not typically have the thick nasal discharge that is a common symptom of a sinus infection.