Friday, November 30, 2018

Better Catalyst for Cheaper Hydrogen

Chemistry researchers have discovered a cheaper and more efficient material for producing hydrogen for the storage of renewable energy, one that could replace the expensive catalysts currently used for water splitting.

Queensland University of Technology – November 29, 2018 -- "The Australian Government is interested in developing a hydrogen export industry to export our abundant renewable energy," said Professor O'Mullane from QUT's Science and Engineering Faculty.

"In principle, hydrogen offers a way to store clean energy at a scale that is required to make the rollout of large-scale solar and wind farms as well as the export of green energy viable.

"However, current methods that use carbon sources to produce hydrogen emit carbon dioxide, a greenhouse gas that mitigates the benefits of using renewable energy from the sun and wind.

"Electrochemical water splitting driven by electricity sourced from renewable energy technology has been identified as one of the most sustainable methods of producing high-purity hydrogen."

Professor O'Mullane said the new composite material he and PhD student Ummul Sultana had developed enabled electrochemical water splitting into hydrogen and oxygen using cheap and readily available elements as catalysts.

"Traditionally, catalysts for splitting water involve expensive precious metals such as iridium oxide, ruthenium oxide and platinum," he said.

"An additional problem has been stability, especially for the oxygen evolution part of the process.

"What we have found is that we can use two earth-abundant cheaper alternatives -- cobalt and nickel oxide with only a fraction of gold nanoparticles -- to create a stable bi-functional catalyst to split water and produce hydrogen without emissions.

"From an industry point of view, it makes a lot of sense to use one catalyst material instead of two different catalysts to produce hydrogen from water."

Professor O'Mullane said the stored hydrogen could then be used in fuel cells.

"Fuel cells are a mature technology, already being rolled out in many makes of vehicle. They use hydrogen and oxygen as fuels to generate electricity -- essentially the opposite of water splitting.

"With a lot of cheaply 'made' hydrogen we can feed fuel cell-generated electricity back into the grid when required during peak demand or power our transportation system and the only thing emitted is water."

Thursday, November 29, 2018

Device Provides Cooling without Power

Device developed at MIT could provide refrigeration for off-grid locations.
By David L. Chandler | MIT News Office

November 28, 2018 -- MIT researchers have devised a new way of providing cooling on a hot sunny day, using inexpensive materials and requiring no fossil fuel-generated power. The passive system, which could be used to supplement other cooling systems to preserve food and medications in hot, off-grid locations, is essentially a high-tech version of a parasol.

The system allows emission of heat in the mid-infrared range of light, which can pass straight out through the atmosphere and radiate into the cold of outer space, punching right through the gases that act like a greenhouse. To prevent heating in direct sunlight, a small strip of metal suspended above the device blocks the sun’s direct rays.

The new system is described this week in the journal Nature Communications in a paper by research scientist Bikram Bhatia, graduate student Arny Leroy, professor of mechanical engineering and department head Evelyn Wang, professor of physics Marin Soljačić, and six others at MIT.

In theory, the system they designed could provide cooling of as much as 20 degrees Celsius (36 degrees Fahrenheit) below the ambient temperature in a location like Boston, the researchers say. So far, in their initial proof-of-concept testing, they have achieved a cooling of 6 C (about 11 F). For applications that require even more cooling, the remainder could be achieved through conventional refrigeration systems or thermoelectric cooling.

Other groups have attempted to design passive cooling systems that radiate heat in the form of mid-infrared wavelengths of light, but these systems have been based on complex engineered photonic devices that can be expensive to make and not readily available for widespread use, the researchers say. The devices are complex because they are designed to reflect all wavelengths of sunlight almost perfectly, and only to emit radiation in the mid-infrared range, for the most part. That combination of selective reflectivity and emissivity requires a multilayer material where the thicknesses of the layers are controlled to nanometer precision.

But it turns out that similar selectivity can be achieved by simply blocking the direct sunlight with a narrow strip placed at just the right angle to cover the sun’s path across the sky, requiring no active tracking by the device. Then, a simple device built from a combination of inexpensive plastic film, polished aluminum, white paint, and insulation can allow for the necessary emission of heat through mid-infrared radiation, which is how most natural objects cool off, while preventing the device from being heated by the direct sunlight. In fact, simple radiative cooling systems have been used since ancient times to achieve nighttime cooling; the problem was that such systems didn’t work in the daytime because the heating effect of the sunlight was at least 10 times stronger than the maximum achievable cooling effect.
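As a rough sense of scale, the back-of-envelope comparison below illustrates why an unshaded daytime emitter loses this contest; the figures are illustrative round numbers, not values from the Nature Communications paper.

```python
# Back-of-envelope comparison (illustrative values only): ideal radiative
# cooling through the atmospheric window versus direct solar heating.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(temp_k: float) -> float:
    """Total radiative flux of an ideal blackbody at temp_k, in W/m^2."""
    return SIGMA * temp_k ** 4

ambient = 300.0                             # ~27 C ambient air, in kelvin
emitted = blackbody_flux(ambient)           # ~460 W/m^2 total thermal emission
window_fraction = 0.25                      # assumed share escaping through the
                                            # 8-13 micron atmospheric window
usable_cooling = emitted * window_fraction  # ~115 W/m^2 of useful cooling
solar_peak = 1000.0                         # typical direct solar irradiance, W/m^2

print(f"usable radiative cooling ~ {usable_cooling:.0f} W/m^2")
print(f"direct sunlight          ~ {solar_peak:.0f} W/m^2")
print(f"sunlight/cooling ratio   ~ {solar_peak / usable_cooling:.0f}x")
```

With these assumed numbers the sun delivers roughly nine times more power than the surface can reject, in line with the article's "at least 10 times stronger" figure; blocking the direct beam is what tips the balance.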

But the sun’s heating rays travel in straight lines and are easily blocked — as we experience, for example, by stepping into the shadow of a tree on a hot day. By shading the device, essentially putting an umbrella over it, and supplementing that with insulation around the device to protect it from the ambient air temperature, the researchers made passive cooling more viable.

“We built the setup and did outdoors experiments on an MIT rooftop,” Bhatia says. “It was done using very simple materials” and clearly showed the effectiveness of the system.

“It’s kind of deceptively simple,” Wang says. “By having a separate shade and an emitter to the atmosphere — two separate components that can be relatively low-cost — the system doesn’t require a special ability to emit and absorb selectively. We’re using angular selectivity to allow blocking the direct sun, as we continue to emit the heat-carrying wavelengths to the sky.”

This project “inspired us to rethink about the usage of ‘shade,’” says Yichen Shen, a research affiliate and co-author of the paper. “In the past, people have only been thinking about using it to reduce heating. But now, we know if the shade is used smartly together with some supportive light filtering, it can actually be used to cool the object down,” he says.

One limiting factor for the system is humidity in the atmosphere, Leroy says, which can block some of the infrared emission through the air. In a place like Boston, close to the ocean and relatively humid, this constrains the total amount of cooling that can be achieved, limiting it to about 20 degrees Celsius. But in drier environments, such as the southwestern U.S. or many desert or arid environments around the world, the maximum achievable cooling could actually be much greater, he points out, potentially as much as 40 C (72 F).

While most research on radiative cooling has focused on larger systems that might be applied to cooling entire rooms or buildings, this approach is more localized, Wang says: “This would be useful for refrigeration applications, such as food storage or vaccines.” Indeed, protecting vaccines and other medicines from spoilage in hot, tropical conditions has been a major ongoing challenge that this technology could be well-positioned to address.

Even if the system wasn’t sufficient to bring down the temperature all the way to needed levels, “it could at least reduce the loads” on the electrical refrigeration systems, to provide just the final bit of cooling, Wang says.

The system might also be useful for some kinds of concentrated photovoltaic systems, where mirrors are used to focus sunlight on a solar cell to increase its efficiency. But such systems can easily overheat and generally require active thermal management with fluids and pumps. Instead, the backside of such concentrating systems could be fitted with the mid-infrared emissive surfaces used in the passive cooling system, and could control the heating without any active intervention.

As they continue to work on improving the system, the biggest challenge is finding ways to improve the insulation of the device, to prevent it from heating up too much from the surrounding air, while not blocking its ability to radiate heat. “The main challenge is finding insulating material that would be infrared-transparent,” Leroy says.

The team has applied for patents on the invention and hopes that it can begin to find real-world applications quite rapidly.

                          http://news.mit.edu/2018/device-provides-cooling-without-power-1128

World Chess Championship 2018

The World Chess Championship 2018 was a match between the reigning world champion since 2013, Magnus Carlsen, and challenger Fabiano Caruana to determine the World Chess Champion. The 12-game match, organised by FIDE and its commercial partner Agon, was played in London, at The College in Holborn, between 9 and 28 November 2018.

The classical time control portion of the match ended with twelve consecutive draws, the first time in the history of the World Chess Championship that every classical game of a match was drawn. On 28 November, rapid chess was used as a tie-breaker; Carlsen won three consecutive rapid games to retain his World Championship title.

Candidates’ Tournament

Caruana qualified as challenger by winning the 2018 Candidates Tournament. This was an eight-player, double round-robin tournament played in Berlin on 10–28 March 2018.

Championship Match Regulations

The match was organised in a best-of-12-games format. The time control for each game was 100 minutes for the first 40 moves, with an additional 50 minutes added after move 40 and a further 15 minutes added after move 60, plus a 30-second increment per move starting from move 1. Players were not permitted to agree to a draw before Black's 30th move.
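As a quick illustration of what that time control adds up to (simple arithmetic from the figures above, assuming the game reaches move 60):

```python
# Classical time budget for one player whose game reaches move 60 under the
# match time control described above.
base_minutes = 100 + 50 + 15      # initial allotment plus additions at moves 40 and 60
increment_minutes = 60 * 30 / 60  # a 30-second increment on each of 60 moves

total_minutes = base_minutes + increment_minutes
print(f"total: {total_minutes:.0f} minutes per player")  # 195 minutes, i.e. 3 h 15 min
```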

The tie-breaking method consisted of the following schedule of faster games played on the final day in the following order, as necessary:

  • Best-of-four rapid games (25 minutes for each player with an increment of 10 seconds after each move). The player with the best score after four rapid games is the winner. The players are not required to record the moves. In the match, Carlsen immediately won three games in a row, securing the championship.
  • If the rapid games had been tied 2-2, up to five mini-matches of best-of-two blitz games (5 minutes plus 3 seconds increment after each move) would have been played. The player with the best score in any two-game blitz match would be the winner.
  • If the blitz matches had failed to produce a winner, a single sudden-death "Armageddon" game would have been played: White receives 5 minutes and Black receives 4 minutes, with both players receiving an increment of 3 seconds per move starting from move 61. The player who wins the drawing of lots may choose the colour. In case of a draw, the player with the black pieces is declared the winner.

Wednesday, November 28, 2018

Tractor Beam Experiments

Scientists Propose Tractor Beam Concept to Capture Particles Using Light
November 27, 2018 -- Physicists from ITMO University [in Saint Petersburg, Russia] have developed a model of an optical tractor beam, based on new artificial materials, that is able to capture particles. Such a beam is capable of moving particles or cells towards the radiation source. The study showed that hyperbolic metasurfaces have great potential for experiments on the creation of a tractor beam, as well as for its practical applications. The results have been published in ACS Photonics.

Tractor beams are familiar to many thanks to the Star Wars and Star Trek franchises, as well as the countless images of a UFO abducting a cow. Scientists have yet to create such beams in reality, but there are already several ways to make objects move towards a source of light. So far, however, these objects have been small particles and atoms rather than whole cows.

Researchers from ITMO University recently suggested using metamaterials to create such beams. Metamaterials are artificial structures built from repeating elements that give them unusual optical properties. For instance, metamaterials can support hyperbolic modes: special states of the electromagnetic field that appear when the metamaterial interacts with light. Such states help to control the optical forces that act on objects at the material surface and, as it turns out, can help to move particles towards the light source.

“Our work is fully devoted to creating a tractor beam based on meta-surfaces as well as to studying the physics behind it. We found out that this effect appears due to the propagation of hyperbolic modes in metamaterials. Such modes act as an additional scattering channel and, according to the law of conservation of momentum, can push the particle in the direction of the light source. At the same time, metamaterials have a number of other advantages compared with alternative methods of obtaining the tractor beam. Therefore, metasurfaces are more convenient for practical use,” says Alexander Shalin, the head of the International Laboratory “Nano-optomechanics” at ITMO University.
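To make the conservation-of-momentum argument concrete, a schematic expression for the time-averaged force on a particle illuminated by light travelling along $+z$ can be written as (a textbook-style sketch, not a formula from the ACS Photonics paper):

$$F_z \;=\; \frac{P_{\mathrm{abs}}}{c} \;+\; \frac{P_{\mathrm{sc}}}{c}\left(1 - n_{\mathrm{eff}}\,\langle\cos\theta_{\mathrm{sc}}\rangle\right),$$

where $P_{\mathrm{abs}}$ and $P_{\mathrm{sc}}$ are the absorbed and scattered powers, $\theta_{\mathrm{sc}}$ is the scattering angle measured from the propagation direction, and $n_{\mathrm{eff}}$ is the effective index of the modes into which the light is scattered. In free space $n_{\mathrm{eff}} = 1$ and $\langle\cos\theta_{\mathrm{sc}}\rangle \le 1$, so $F_z$ is always a push. When the particle sits on a hyperbolic metasurface, however, much of the light is scattered forward into guided modes with $n_{\mathrm{eff}} > 1$; the scattered light then carries away more forward momentum per unit energy than the incident beam supplied, the bracketed term goes negative, and the recoil on the particle points back toward the source.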

In 2016, scientists from ITMO University proposed another model of a tractor beam, one based on plasmon resonance and propagating surface plasmon waves (oscillations of electron gas near a metal surface). The flat substrate allowed researchers to work with the entire surface of the material instead of small areas, as is the case with classical plasmonic tweezers. However, the new study showed that metamaterials based on flat structures supporting both hyperbolic and plasmon modes can be an even better basis for tractor beams. Metasurfaces and metamaterials work with light across the entire visible wavelength range and cope better with energy losses. All this makes them promising for the experimental implementation of an attracting beam.

“Despite the fact that in the near future this technology will not help us to catch spaceships or kidnap cows, it can still be used, for example, to create special traps for particles and cells or to conduct chemical reactions selectively,” notes Alexander Ivinskaya, the lead author of the article and staff member of the International Laboratory “Nano-optomechanics” at ITMO University.

Tuesday, November 27, 2018

Hyde-Smith Wins Senate Seat

The 2018 United States Senate special election in Mississippi took place on November 6, 2018, to elect a United States Senator from Mississippi. The election was held to fill the seat vacated by Senator Thad Cochran when he resigned from the Senate, effective April 1, 2018, due to health concerns. Republican Governor Phil Bryant appointed Cindy Hyde-Smith to fill the vacancy created by Cochran's resignation. Hyde-Smith sought election to serve the balance of Cochran's term, which expires in January 2021.

On November 6, per Mississippi law, a nonpartisan special general election, in which all candidates appeared on a single ballot, took place on the same day as the regularly scheduled U.S. Senate election for the seat held by Roger Wicker. Party affiliations were not printed on the ballot.

Because no candidate gained a simple majority of the vote, a runoff between the top two candidates, Cindy Hyde-Smith and Mike Espy, was held on November 27, 2018, in which Hyde-Smith defeated Espy.

Run-Off Election November 27th

During the run-off campaign, while appearing with cattle rancher Colin Hutchinson in Tupelo, Mississippi, Hyde-Smith said, "If he invited me to a public hanging, I'd be in the front row." Hyde-Smith's comment immediately drew harsh criticism, given Mississippi's notorious history of lynchings and public executions of African-Americans. In response to the criticism, Hyde-Smith downplayed her comment as "an exaggerated expression of regard" and characterized the backlash as "ridiculous."

Hyde-Smith joined Mississippi Governor Phil Bryant at a news conference in Jackson, Mississippi on November 12, 2018, where she was asked repeatedly about her comment by reporters. In the footage, Hyde-Smith adamantly refused to provide any substantive answer to reporters' questions, responding on five occasions with variations of, "I put out a statement yesterday, and that's all I'm gonna say about it." When reporters redirected questions to Bryant, he defended Hyde-Smith's comment, and changed the subject to abortion, saying he was "confused about where the outrage is at about 20 million African American children that have been aborted."

On November 15, 2018, Hyde-Smith appeared in a video clip saying that it would be "a great idea" to make it more difficult for liberals to vote. Her campaign stated that Hyde-Smith was making an obvious joke, and the video was selectively edited. Both this and the "public hanging" video were released by Lamar White Jr., a Louisiana blogger and journalist.

Russia Seizes Ukrainian Ships

An international incident occurred on 25 November 2018 when Russian Federal Security Service (FSB) border patrol boats captured three Ukrainian Navy vessels that had attempted to pass from the Black Sea into the Sea of Azov through the Kerch Strait while on their way to the port of Mariupol. In 2014, Russia annexed the nearby Crimean Peninsula, which is internationally recognised as Ukrainian territory, and later constructed the Crimean Bridge across the strait. During the incident, the bridge was used as a barrier to prevent the Ukrainian ships from entering the Sea of Azov. While Russia accused the Ukrainian ships of illegally entering its territorial waters, under a 2003 treaty, the Kerch Strait and the Sea of Azov are intended to be the shared territorial waters of both countries. According to Russia, its officers repeatedly asked the Ukrainian vessels to leave Russian territorial waters; when the Ukrainian Navy refused, Russian special forces fired on them and, following a chase, seized two Ukrainian gunboats and a tugboat off the coast of Crimea. According to different reports, three or six Ukrainian crew members were injured.

Later that day, Ukrainian President Petro Poroshenko signed a decree on martial law, which was approved by parliament the following day.

Oleksandr Turchynov, Secretary of the National Security and Defence Council of Ukraine, has reportedly said that the incident was an act of war by Russia. He has also stated that active military preparations have been spotted along the border on the Russian side.

Background

Russia annexed Crimea in 2014. The annexation is not officially recognized by the United Nations. The Kerch Strait connects the Sea of Azov with the Black Sea and is formed by the coasts of the Russian Taman Peninsula and disputed Crimea. It is the point of access for ships travelling to and from Ukraine's eastern port cities, most notably Mariupol. While both Ukraine and Russia agreed to the principle of freedom of movement through the strait and the Sea of Azov in 2003, Russia has controlled both sides of the strait since the Crimean annexation. By May 2018, Russia had completed construction of the 19-kilometre (12 mi) Crimean Bridge spanning the strait, providing a direct land connection between Crimea and mainland Russia. The bridge's construction was criticized by Ukraine and many other countries, which called it illegal.

Events of the Seizure

The incident began on the morning of 25 November, when the Ukrainian Gyurza-M-class artillery boats Berdyansk and Nikopol and the tugboat Yany Kapu attempted a journey from Odessa in south-western Ukraine to the Azov Sea port of Mariupol in eastern Ukraine. As the ships approached the Kerch Strait, Russian border patrol boats accused the Ukrainian vessels of illegally entering Russian territorial waters and ordered them to leave. When the Ukrainians refused, citing the 2003 Russo-Ukrainian treaty on freedom of navigation in the relevant area, Russian FSB border guard boats attempted to intercept them, and one rammed the Yany Kapu. The Ukrainian vessels continued toward the Crimean Bridge, but were prevented from passing into the Sea of Azov by a large tanker positioned under the bridge, which blocked all passage through the strait. Concurrently, Russia scrambled two fighter jets and two helicopters to patrol the strait. The Russian forces then fired on the Ukrainian boats, chased them as they tried to flee, and later captured them off the coast of Crimea.

Following the incident, the Ukrainian Navy reported that six servicemen had been injured by the Russian actions. According to some Ukrainian sources, two Russian ships were damaged: one was damaged while ramming the Ukrainian tugboat Yany Kapu, and the Russian ship Don also collided with and damaged the Russian ship Izumrud. In the aftermath of the incident, officials from both countries accused the other side of provocative behaviour. Ukraine decried the seizure of its ships as illegal. In a statement, the Ukrainian Navy said, "After leaving the 12-mile zone, the Russian Federation's FSB opened fire at the flotilla belonging to... the armed forces of Ukraine".

Russia did not immediately or directly respond to the allegation, but Russian news agencies cited the Federal Security Service (FSB) as saying it had incontrovertible proof that Ukraine had orchestrated what it called a "provocation" and would publicise its evidence soon. Russia's border guard service accused Ukraine of not informing it in advance of the ships' journey and said the Ukrainian ships had been manoeuvring dangerously and ignoring its instructions with the aim of stirring up tensions. Russian politicians denounced the Ukrainian government, saying the incident looked like a calculated attempt by Ukraine's president to increase his popularity ahead of an election next year. The Ukrainian government rejected this, and said it had informed the Russians of the planned passage through the Kerch Strait in advance.

On the morning of 26 November, photographs of the captured Ukrainian ships laid up in the Crimean port of Kerch were published. In the photos, Russian servicemen are seen attempting to camouflage the ships. On that day, according to APK-Inform, Ukrainian commercial shipping returned to normal operation after the Kerch Strait was reopened to civilian traffic.

According to Ukrainian intelligence, the state of health of the Ukrainian Naval Forces servicemen who were victims of the attack in the Kerch Strait is satisfactory. The injured Ukrainian sailors are being treated at Pirogov Kerch City Hospital No. 1.

On 27 November, a Crimean court ordered that 12 of the 24 Ukrainian sailors be detained for 60 days.

                                 https://en.wikipedia.org/wiki/2018_Kerch_Strait_incident

Monday, November 26, 2018

Basics of Technology


Technology ("science of craft", from Greek τέχνη, techne, "art, skill, cunning of hand"; and -λογία, -logia) is the collection of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation. Technology can be the knowledge of techniques, processes, and the like, or it can be embedded in machines to allow for operation without detailed knowledge of their workings.

The simplest form of technology is the development and use of basic tools. The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale.

Technology has many effects. It has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products known as pollution and deplete natural resources to the detriment of Earth's environment. Innovations have always influenced the values of a society and raised new questions of the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics.

Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition.

Science, Engineering and Technology

The distinction between science, engineering, and technology is not always clear. Science is systematic knowledge of the physical or material world gained through observation and experimentation. Technologies are not usually exclusively products of science, because they have to satisfy requirements such as utility, usability, and safety.

Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for practical human means, often (but not always) using results and techniques from science. The development of technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic, and historical knowledge, to achieve some practical result.

Technology is often a consequence of science and engineering, although technology as a human activity precedes the two fields. For example, science might study the flow of electrons in electrical conductors by using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools and machines such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists; the three fields are often considered as one for the purposes of research and reference.

The exact relations between science and technology in particular have been debated by scientists, historians, and policymakers in the late 20th century, in part because the debate can inform the funding of basic and applied science. In the immediate wake of World War II, for example, it was widely considered in the United States that technology was simply "applied science" and that to fund basic science was to reap technological results in due time. An articulation of this philosophy could be found explicitly in Vannevar Bush's treatise on postwar science policy, Science – The Endless Frontier: "New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature ... This essential new knowledge can be obtained only through basic scientific research." In the late-1960s, however, this view came under direct attack, leading towards initiatives to fund science for specific tasks (initiatives resisted by the scientific community). The issue remains contentious, though most analysts resist the model that technology simply is a result of scientific research.

                                            https://en.wikipedia.org/wiki/Technology

Sunday, November 25, 2018

Family Estrangement

Dysfunctional Families--The Estrangement Epidemic
A Blog Posting by Trevor Todd on February 19th, 2014

There is a noted estrangement epidemic amongst dysfunctional families

“Family quarrels are bitter things. They’re not like aches or wounds; they’re more like splits in the skin that won’t heal because there is not enough material.”

-- F. Scott Fitzgerald

 Estrangement is the turning away from a previously held state of affection, comradeship, or allegiance by one party to another or, alternatively, the parties to each other. The meaning has not changed much from its Latin root extraneare, to treat as a stranger.

 The phenomenon of children being estranged from one or both parents has risen dramatically in recent years.

 Anecdotally, after 40 years of estate litigation practice, I have witnessed the gradual erosion of the family, starting with the [Canadian] divorce laws of 1968 and moving through the social acceptance of common law relationships, children out of wedlock, “blended but lumpy families,” same sex marriage, and so forth. Legally speaking, the times are achanging.

 In recent years, I have noted what I consider a silent epidemic of estrangement between parents and one or more of their adult children. In fact estrangement among individuals in families is far more common than most people believe.

 We follow the estrangements of movie stars with glee and interest—Lindsay Lohan got a restraining order against her father; Jennifer Anniston stopped talking to her mother in 1996 when her mother wrote a tell-all book; the Tori-Spelling-of-the-week is not talking to a parent or vice versa. All those behaviours foster irreparable estrangement among various family members.

 Family estrangement is found everywhere in society, from the wealthiest to the poorest. Although there is a shocking lack of statistics available on family estrangement, contemporaries in other fields, such as family counsellors, report a tremendous increase in the number of family members who no longer communicate with each other.

 I believe that estrangement is so painful for the parties involved that often, they do not wish to talk about it.

 Family estrangement occurs when certain family members come to an impasse in their relationship. The causes of the estrangement, whatever they may be, are so strong that certain family members separate for a long period of time—possibly even for the rest of their lives.

 There may be very valid reasons for such estrangement, such as when sexual abuse has occurred upon a child who is then not believed by either parent. A child frequently flees from the family simply to get away from one nightmare that often leads to another on the street.

 Family estrangement is never easy for anyone, both within and outside the family.

 In my experience as a lawyer, when estrangement occurs, the reasons are usually very understandable, troubling, and valid. The departing family member often has been very badly emotionally damaged in the relationship.

 The reasons for estrangement are as diverse as the parties involved. Sometimes there was a very close relationship in the past and something happened that created distance. It may have happened slowly over time or rather suddenly, but once that distance was created, it solidified into estrangement. Alternatively, the relationship was never as close as it could or should have been and the gap just kept getting wider, until there was no relationship at all.

I couldn’t possibly list all the causes for family estrangement. Here are a few significant ones.

 1.        Intolerance

Intolerance usually manifests in the sense of disapproval of lifestyle choices such as homosexuality; marrying outside a person’s religion, race, nationality, or ethnicity; or another perceived disrespect. Intolerance can lead to stubbornness and small-mindedness when it comes to giving up a grudge or to pettiness and nastiness when it comes to forgiveness.

  2.        Divorce

Divorce is arguably the single most common cause of family alienation. However amicable the divorce may have seemed to the parents, resentments can run deep and some children never get over it. Children may wish to live with one parent as opposed to the other. The malice of one parent turning the children against the other parent can lead to unwarranted estrangement between the child and the “bad” parent or even both parents.

  3.        Remarriage

Remarriage, especially by the custodial parent, that creates a “blended family” has certainly caused a great number of estrangements. Distance among “first family” members and “second family” members or even a third is quite common, even when people are not cohabiting as a family unit.

  4.        Personality Disorders

Some parents never intended to be parents; they resented their children and thus were toxic parents. Living with a parent with a narcissistic personality disorder is exceedingly difficult for a child, who invariably fails to win the parent’s approval, let alone love.

  5.        Illness and Negative Behaviour

They include mental illness, drug and alcohol addiction, and household violence.

  6.        Erosion of Self-Esteem

They include neglect, unconcern, and constant humiliations, disappointments, and putdowns.

  7.        Priorities and Time

Both parents are working and have little time for the children.

  8.        Unresolved Encounters

They include a long series of rather minor but escalating misunderstandings and overreactions, and a general unwillingness on the part of both parties to make amends. While the cutting of ties between family members can be surprisingly easy, reconnecting can be difficult, if not impossible.

  9.        Recurring Family Arguments

Arguments during significant holidays such as Thanksgiving and Christmas can lead to repeated hostilities, further family division, and avoidance of the special occasion in future.

10.        The Unaccepted Spouse

When the marital partner has not been accepted by the family, it becomes awkward for everyone and easier for the estranged party to stay away.

11.        An Estrangement Syndrome

Psychologists note that estrangement may be passed from generation to generation, due to the negative role models of the parents. In other words, if you are estranged from your parents, odds are your children will become estranged from you once they become adults.

In a dysfunctional family, the children typically do not receive enough love and care and often end up by default in competition with each other for those necessities of life.

Later, when the parents die, the competition for love may convert into one or more children taking the parents’ money to the exclusion of other siblings, out of a distorted belief they deserve the money. In the mind of the perpetrator(s), the money-grab becomes the substitute for the lost parental love.

As children, we don’t get to choose our family but, as adults, we can decide whom we wish or don’t wish to have in our lives. Even in the best of circumstances, being a member of a family is often a challenge.

To those readers who are estranged from their families, I would encourage group counselling and chat forums to deal with the pain and, hopefully, to work toward reconciliation and healing. That is often easier said than done, as it takes a willingness on at least two sides to complete a successful reconciliation.

                       http://disinherited.com/family-estrangement-a-silent-epidemic/

Saturday, November 24, 2018

"Subspecies" Explained

In biological classification, the term subspecies refers to a group of populations of a species that lives in a subdivision of the species' global range and differs from other populations of the same species in morphological characteristics. A single subspecies cannot be recognized on its own: a species is either recognized as having no subspecies at all or at least two, including any that are extinct. The term is abbreviated subsp. in botany and bacteriology, or ssp. in zoology. The plural is the same as the singular: subspecies.

In zoology, under the International Code of Zoological Nomenclature, the subspecies is the only taxonomic rank below that of species that can receive a name. In botany and mycology, under the International Code of Nomenclature for algae, fungi, and plants, other infraspecific ranks, such as variety, may be named. In bacteriology and virology, under standard bacterial nomenclature and virus nomenclature, there are recommendations but not strict requirements for recognizing other important infraspecific ranks.

A taxonomist decides whether to recognize a subspecies or not. A common criterion for a subspecies is its ability to interbreed with a different subspecies of the same species and produce fertile offspring. In the wild, subspecies do not interbreed, owing to geographic isolation and sexual selection. The differences between subspecies are usually less distinct than the differences between species.
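As an illustration of that naming convention (an informal sketch, not a tool from any nomenclature code), the rule that a species carries either no subspecies or at least two can be modelled directly; the genus and epithets below are made up:

```python
# Informal sketch: a species is recognised with either no subspecies or at
# least two, and subspecies are written as trinomials.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Species:
    genus: str
    epithet: str
    subspecies: List[str] = field(default_factory=list)  # subspecific epithets

    def __post_init__(self) -> None:
        if len(self.subspecies) == 1:
            raise ValueError("a species has either no subspecies or at least two")

    def names(self) -> List[str]:
        """Return the binomial, or the trinomials if subspecies are recognised."""
        if not self.subspecies:
            return [f"{self.genus} {self.epithet}"]
        return [f"{self.genus} {self.epithet} {ssp}" for ssp in self.subspecies]

# A hypothetical polytypic species; the nominate subspecies repeats the epithet.
print(Species("Examplus", "demonstrans", ["demonstrans", "borealis"]).names())
```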

Monotypic and Polytypic Species

In biological terms, rather than in relation to nomenclature, a polytypic species has two or more genetically and phenotypically divergent subspecies, races, or more generally speaking, populations that need a separate description. These are separate groups that are clearly distinct from one another and do not generally interbreed, although there may be a relatively narrow hybridization zone, but which may interbreed if given the chance to do so. These subspecies, races, or populations, can be named as subspecies by zoologists, or in more varied ways by botanists and microbiologists.

A monotypic species has no distinct populations or races, or rather a single race comprising the whole species. A taxonomist would not name a subspecies within such a species. Monotypic species can occur in several ways:

  • All members of the species are very similar and cannot be sensibly divided into biologically significant subcategories.
  • The individuals vary considerably, but the variation is essentially random and largely meaningless so far as genetic transmission of these variations is concerned.
  • The variation among individuals is noticeable and follows a pattern, but there are no clear dividing lines among separate groups: they fade imperceptibly into one another. Such clinal variation always indicates substantial gene flow among the apparently separate groups that make up the population(s). Populations that have a steady, substantial gene flow among them are likely to represent a monotypic species, even when a fair degree of genetic variation is obvious.

                                                      https://en.wikipedia.org/wiki/Subspecies

Friday, November 23, 2018

Tinnitus -- Ringing Ears

Tinnitus is the hearing of sound when no external sound is present. While often described as a ringing, it may also sound like a clicking, hiss or roaring. Rarely, unclear voices or music are heard. The sound may be soft or loud, low pitched or high pitched and appear to be coming from one ear or both. Most of the time, it comes on gradually. In some people, the sound causes depression or anxiety and can interfere with concentration.

Tinnitus is not a disease but a symptom that can result from a number of underlying causes. One of the most common causes is noise-induced hearing loss. Other causes include ear infections, disease of the heart or blood vessels, Ménière's disease, brain tumors, emotional stress, exposure to certain medications, a previous head injury, and earwax. It is more common in those with depression.

The diagnosis of tinnitus is usually based on the person's description. A number of questionnaires exist that may help to assess how much tinnitus is interfering with a person's life. The diagnosis is commonly supported by an audiogram and a neurological examination. If certain problems are found, medical imaging, such as with MRI, may be performed. Other tests are suitable when tinnitus occurs with the same rhythm as the heartbeat. Rarely, the sound may be heard by someone else using a stethoscope, in which case it is known as objective tinnitus. Spontaneous otoacoustic emissions, which are sounds produced normally by the inner ear, may also occasionally result in tinnitus.

Prevention involves avoiding loud noise. If there is an underlying cause, treating it may lead to improvements. Otherwise, typically, management involves talk therapy. Sound generators or hearing aids may help some. As of 2013, there were no effective medications. It is common, affecting about 10–15% of people. Most, however, tolerate it well, and it is a significant problem in only 1–2% of people. The word tinnitus is from the Latin tinnīre which means "to ring".

Prevention

Prolonged exposure to loud sound or noise levels can lead to tinnitus. Ear plugs or other measures can help with prevention.

Several medicines have ototoxic effects, and can have a cumulative effect that can increase the damage done by noise. If ototoxic medications must be administered, close attention by the physician to prescription details, such as dose and dosage interval, can reduce the damage done.

Management

If there is an underlying cause, treating it may lead to improvements. Otherwise, the primary treatment for tinnitus is talk therapy and sound therapy; there are no effective medications.

                                                             https://en.wikipedia.org/wiki/Tinnitus

Neptune in Night Sky


Neptune, the blue planet that is eighth in distance from the sun, will be visible with binoculars or a telescope very near Mars on December 6 and December 7.  Forbes magazine has an article with the details at:  https://www.forbes.com/sites/startswithabang/2018/11/20/get-your-telescopes-ready-neptune-is-coming/#4cd6d724e7c8

Thursday, November 22, 2018

Plane with No Moving Parts

The silent, lightweight aircraft doesn’t depend on fossil fuels or batteries.
By Jennifer Chu | MIT News Office

November 21, 2018 -- Since the first airplane took flight over 100 years ago, virtually every aircraft in the sky has flown with the help of moving parts such as propellers, turbine blades, and fans, which are powered by the combustion of fossil fuels or by battery packs that produce a persistent, whining buzz.

Now MIT engineers have built and flown the first-ever plane with no moving parts. Instead of propellers or turbines, the light aircraft is powered by an “ionic wind” — a silent but mighty flow of ions that is produced aboard the plane, and that generates enough thrust to propel the plane over a sustained, steady flight.

Unlike turbine-powered planes, the aircraft does not depend on fossil fuels to fly. And unlike propeller-driven drones, the new design is completely silent.

“This is the first-ever sustained flight of a plane with no moving parts in the propulsion system,” says Steven Barrett, associate professor of aeronautics and astronautics at MIT. “This has potentially opened new and unexplored possibilities for aircraft which are quieter, mechanically simpler, and do not emit combustion emissions.”

He expects that in the near-term, such ion wind propulsion systems could be used to fly less noisy drones. Further out, he envisions ion propulsion paired with more conventional combustion systems to create more fuel-efficient, hybrid passenger planes and other large aircraft.

Barrett and his team at MIT have published their results today in the journal Nature.

Hobby crafts

Barrett says the inspiration for the team’s ion plane comes partly from the movie and television series, “Star Trek,” which he watched avidly as a kid. He was particularly drawn to the futuristic shuttlecrafts that effortlessly skimmed through the air, with seemingly no moving parts and hardly any noise or exhaust.

“This made me think, in the long-term future, planes shouldn’t have propellers and turbines,” Barrett says. “They should be more like the shuttles in ‘Star Trek,’ that have just a blue glow and silently glide.”

About nine years ago, Barrett started looking for ways to design a propulsion system for planes with no moving parts. He eventually came upon “ionic wind,” also known as electroaerodynamic thrust — a physical principle that was first identified in the 1920s and describes a wind, or thrust, that can be produced when a current is passed between a thin and a thick electrode. If enough voltage is applied, the air in between the electrodes can produce enough thrust to propel a small aircraft.

For years, electroaerodynamic thrust has mostly been a hobbyist’s project, and designs have for the most part been limited to small, desktop “lifters” tethered to large voltage supplies that create just enough wind for a small craft to hover briefly in the air. It was largely assumed that it would be impossible to produce enough ionic wind to propel a larger aircraft over a sustained flight.

“It was a sleepless night in a hotel when I was jet-lagged, and I was thinking about this and started searching for ways it could be done,” he recalls. “I did some back-of-the-envelope calculations and found that, yes, it might become a viable propulsion system,” Barrett says. “And it turned out it needed many years of work to get from that to a first test flight.”

Ions take flight

The team’s final design resembles a large, lightweight glider. The aircraft, which weighs about 5 pounds and has a 5-meter wingspan, carries an array of thin wires, which are strung like horizontal fencing along and beneath the front end of the plane’s wing. The wires act as positively charged electrodes, while similarly arranged thicker wires, running along the back end of the plane’s wing, serve as negative electrodes.

The fuselage of the plane holds a stack of lithium-polymer batteries. Barrett's ion plane team included members of Professor David Perreault’s Power Electronics Research Group in the Research Laboratory of Electronics, who designed a power supply that would convert the batteries’ output to a sufficiently high voltage to propel the plane. In this way, the batteries supply electricity at 40,000 volts to positively charge the wires via a lightweight power converter.

Once the wires are energized, they act to attract and strip away negatively charged electrons from the surrounding air molecules, like a giant magnet attracting iron filings. The air molecules that are left behind are newly ionized, and are in turn attracted to the negatively charged electrodes at the back of the plane.

As the newly formed cloud of ions flows toward the negatively charged wires, each ion collides millions of times with other air molecules, creating a thrust that propels the aircraft forward.
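A first-order feel for the numbers can be had from the classic one-dimensional electroaerodynamic relation, thrust = ion current × electrode gap / ion mobility. The sketch below uses that textbook relation with assumed, illustrative operating values; it is not the MIT team's model or their aircraft's actual operating point.

```python
# First-order electroaerodynamic thrust estimate: T = I * d / mu.
MU_ION = 2e-4  # mobility of ions in air, m^2 V^-1 s^-1 (typical literature value)

def ead_thrust(current_a: float, gap_m: float, mobility: float = MU_ION) -> float:
    """Ideal thrust (newtons) of a single emitter-to-collector ionic-wind stage."""
    return current_a * gap_m / mobility

# Assumed example values, for illustration only:
current = 500e-6   # 0.5 mA of corona current
gap = 0.4          # 40 cm emitter-to-collector spacing
print(f"thrust ~ {ead_thrust(current, gap):.2f} N")  # ~1 N for this example stage
```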

The team, which also included Lincoln Laboratory staff Thomas Sebastian and Mark Woolston, flew the plane in multiple test flights across the gymnasium in MIT’s duPont Athletic Center — the largest indoor space they could find to perform their experiments. The team flew the plane a distance of 60 meters (the maximum distance within the gym) and found the plane produced enough ionic thrust to sustain flight the entire time. They repeated the flight 10 times, with similar performance.

“This was the simplest possible plane we could design that could prove the concept that an ion plane could fly,” Barrett says. “It’s still some way away from an aircraft that could perform a useful mission. It needs to be more efficient, fly for longer, and fly outside.”

The new design is a “big step” toward demonstrating the feasibility of ion wind propulsion, according to Franck Plouraboue, senior researcher at the Institute of Fluid Mechanics in Toulouse, France, who notes that researchers previously weren’t able to fly anything heavier than a few grams.

“The strength of the results are a direct proof that steady flight of a drone with ionic wind is sustainable,” says Plouraboue, who was not involved in the research. “[Outside of drone applications], it is difficult to infer how much it could influence aircraft propulsion in the future. Nevertheless, this is not really a weakness but rather an opening for future progress, in a field which is now going to burst.”

Barrett’s team is working on increasing the efficiency of their design, to produce more ionic wind with less voltage. The researchers are also hoping to increase the design’s thrust density — the amount of thrust generated per unit area. Currently, flying the team’s lightweight plane requires a large area of electrodes, which essentially makes up the plane’s propulsion system. Ideally, Barrett would like to design an aircraft with no visible propulsion system or separate control surfaces such as rudders and elevators.

“It took a long time to get here,” Barrett says. “Going from the basic principle to something that actually flies was a long journey of characterizing the physics, then coming up with the design and making it work. Now the possibilities for this kind of propulsion system are viable.”

This research was supported, in part, by MIT Lincoln Laboratory Autonomous Systems Line, the Professor Amar G. Bose Research Grant, and the Singapore-MIT Alliance for Research and Technology (SMART). The work was also funded through the Charles Stark Draper and Leonardo career development chairs at MIT.

Wednesday, November 21, 2018

"Astroinformatics" -- data-oriented Astronomy

Astroinformatics is an interdisciplinary field of study involving the combination of astronomy, data science, informatics, and information/communications technologies.

Background on Astroinformatics

Astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, and statistics for research and education in data-oriented astronomy. Early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical Virtual Observatory initiatives. Further development of the field, along with astronomy community endorsement, was presented to the National Research Council (United States) in 2009 in the Astroinformatics "State of the Profession" Position Paper for the 2010 Astronomy and Astrophysics Decadal Survey. That position paper provided the basis for the subsequent more detailed exposition of the field in the Informatics Journal paper Astroinformatics: Data-Oriented Astronomy Research and Education.

Astroinformatics as a distinct field of research was inspired by work in the fields of Bioinformatics and Geoinformatics, and through the eScience work of Jim Gray (computer scientist) at Microsoft Research, whose legacy was remembered and continued through the Jim Gray eScience Awards.

Although the primary focus of Astroinformatics is on the large worldwide distributed collection of digital astronomical databases, image archives, and research tools, the field recognizes the importance of legacy data sets as well—using modern technologies to preserve and analyze historical astronomical observations. Some Astroinformatics practitioners help to digitize historical and recent astronomical observations and images in a large database for efficient retrieval through web-based interfaces. Another aim is to help develop new methods and software for astronomers, as well as to help facilitate the process and analysis of the rapidly growing amount of data in the field of astronomy.

Astroinformatics is described as the Fourth Paradigm of astronomical research. There are many research areas involved with astroinformatics, such as data mining, machine learning, statistics, visualization, scientific data management, and semantic science. Data mining and machine learning play significant roles in Astroinformatics as a scientific research discipline due to their focus on "knowledge discovery from data" (KDD) and "learning from data".

The amount of data collected from astronomical sky surveys has grown from gigabytes to terabytes throughout the past decade and is predicted to grow in the next decade into hundreds of petabytes with the Large Synoptic Survey Telescope and into the exabytes with the Square Kilometre Array. This plethora of new data both enables and challenges effective astronomical research. Therefore, new approaches are required. In part due to this, data-driven science is becoming a recognized academic discipline. Consequently, astronomy and other scientific disciplines are developing information-intensive and data-intensive sub-disciplines to an extent that these sub-disciplines are now becoming (or have already become) standalone research disciplines and full-fledged academic programs. While many institutes of education do not boast an astroinformatics program, such programs most likely will be developed in the near future.

Informatics has been recently defined as "the use of digital data, information, and related services for research and knowledge generation". However the usual, or commonly used definition is "informatics is the discipline of organizing, accessing, integrating, and mining data from multiple sources for discovery and decision support." Therefore, the discipline of astroinformatics includes many naturally-related specialties including data modeling, data organization, etc. It may also include transformation and normalization methods for data integration and information visualization, as well as knowledge extraction, indexing techniques, information retrieval and data mining methods. Classification schemes (e.g., taxonomies, ontologies, folksonomies, and/or collaborative tagging) plus Astrostatistics will also be heavily involved. Citizen science projects (such as Galaxy Zoo) also contribute highly valued novelty discovery, feature meta-tagging, and object characterization within large astronomy data sets. All of these specialties enable scientific discovery across varied massive data collections, collaborative research, and data re-use, in both research and learning environments.

In 2012, two position papers were presented to the Council of the American Astronomical Society that led to the establishment of formal working groups in Astroinformatics and Astrostatistics for the profession of astronomy within the US and elsewhere.

Astroinformatics provides a natural context for the integration of education and research. The experience of research can now be implemented within the classroom to establish and grow data literacy through the easy re-use of data. The field also supports many other uses, such as repurposing archival data for new projects, literature-data links, and intelligent retrieval of information.

                                                https://en.wikipedia.org/wiki/Astroinformatics

Tuesday, November 20, 2018

Inside the "Black Box" of Artificial Intelligence

Researchers help explain why machine learning algorithms sometimes generate nonsensical answers

University of Maryland – November 1, 2018 -- Artificial intelligence -- specifically, machine learning -- is a part of daily life for computer and smartphone users. From autocorrecting typos to recommending new music, machine learning algorithms can help make life easier. They can also make mistakes.

It can be challenging for computer scientists to figure out what went wrong in such cases. This is because many machine learning algorithms learn from information and make their predictions inside a virtual "black box," leaving few clues for researchers to follow.

A group of computer scientists at the University of Maryland has developed a promising new approach for interpreting machine learning algorithms. Unlike previous efforts, which typically sought to "break" the algorithms by removing key words from inputs to yield the wrong answer, the UMD group instead reduced the inputs to the bare minimum required to yield the correct answer. On average, the researchers got the correct answer with an input of less than three words.

In some cases, the researchers' model algorithms provided the correct answer based on a single word. Frequently, the input word or phrase appeared to have little obvious connection to the answer, revealing important insights into how some algorithms react to specific language. Because many algorithms are programmed to give an answer no matter what -- even when prompted by a nonsensical input -- the results could help computer scientists build more effective algorithms that can recognize their own limitations.

The researchers will present their work on November 4, 2018 at the 2018 Conference on Empirical Methods in Natural Language Processing.

"Black-box models do seem to work better than simpler models, such as decision trees, but even the people who wrote the initial code can't tell exactly what is happening," said Jordan Boyd-Graber, the senior author of the study and an associate professor of computer science at UMD. "When these models return incorrect or nonsensical answers, it's tough to figure out why. So instead, we tried to find the minimal input that would yield the correct result. The average input was about three words, but we could get it down to a single word in some cases."

In one example, the researchers entered a photo of a sunflower and the text-based question, "What color is the flower?" as inputs into a model algorithm. These inputs yielded the correct answer of "yellow." After rephrasing the question into several different shorter combinations of words, the researchers found that they could get the same answer with "flower?" as the only text input for the algorithm.

In another, more complex example, the researchers used the prompt, "In 1899, John Jacob Astor IV invested $100,000 for Tesla to further develop and produce a new lighting system. Instead, Tesla used the money to fund his Colorado Springs experiments."

They then asked the algorithm, "What did Tesla spend Astor's money on?" and received the correct answer, "Colorado Springs experiments." Reducing this input to the single word "did" yielded the same correct answer.
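A minimal sketch of this "input reduction" idea is shown below: words are removed one at a time for as long as the model keeps returning the same answer. The real study uses a more careful, confidence-guided search; `answer_and_confidence` here is a hypothetical stand-in for any question-answering model.

```python
# Greedy input reduction sketch (assumes a QA model wrapped as a function that
# maps a question string to an (answer, confidence) pair).
from typing import Callable, Tuple

def reduce_input(question: str,
                 answer_and_confidence: Callable[[str], Tuple[str, float]]) -> str:
    """Drop words while the model's answer stays unchanged; return the remnant."""
    original_answer, _ = answer_and_confidence(question)
    words = question.split()
    changed = True
    while changed and len(words) > 1:
        changed = False
        for i in range(len(words)):
            candidate = words[:i] + words[i + 1:]
            answer, _ = answer_and_confidence(" ".join(candidate))
            if answer == original_answer:  # removal did not change the prediction
                words = candidate
                changed = True
                break
    return " ".join(words)  # often just a word or two, e.g. "did"
```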

The work reveals important insights about the rules that machine learning algorithms apply to problem solving. Many real-world issues with algorithms result when an input that makes sense to humans results in a nonsensical answer. By showing that the opposite is also possible -- that nonsensical inputs can also yield correct, sensible answers -- Boyd-Graber and his colleagues demonstrate the need for algorithms that can recognize when they answer a nonsensical question with a high degree of confidence.

"The bottom line is that all this fancy machine learning stuff can actually be pretty stupid," said Boyd-Graber, who also has co-appointments at the University of Maryland Institute for Advanced Computer Studies (UMIACS) as well as UMD's College of Information Studies and Language Science Center. "When computer scientists train these models, we typically only show them real questions or real sentences. We don't show them nonsensical phrases or single words. The models don't know that they should be confused by these examples."

Most algorithms will force themselves to provide an answer, even with insufficient or conflicting data, according to Boyd-Graber. This could be at the heart of some of the incorrect or nonsensical outputs generated by machine learning algorithms -- in model algorithms used for research, as well as real-world algorithms that help us by flagging spam email or offering alternate driving directions. Understanding more about these errors could help computer scientists find solutions and build more reliable algorithms.

"We show that models can be trained to know that they should be confused," Boyd-Graber said. "Then they can just come right out and say, 'You've shown me something I can't understand.'"

In addition to Boyd-Graber, UMD-affiliated researchers involved with this work include undergraduate researcher Eric Wallace; graduate students Shi Feng and Pedro Rodriguez; and former graduate student Mohit Iyyer (M.S. '14, Ph.D. '17, computer science).

The research presentation, "Pathologies of Neural Models Make Interpretation Difficult," Shi Feng, Eric Wallace, Alvin Grissom II, Pedro Rodriguez, Mohit Iyyer, and Jordan Boyd-Graber, will be presented at the 2018 Conference on Empirical Methods in Natural Language Processing on November 4, 2018.

Monday, November 19, 2018

Latest in Epigenetics

John D. Loike wrote an opinion piece for The Scientist on November 12th that clearly presents the intriguing avenues of epigenetic research. See the article at: https://www.the-scientist.com/news-opinion/opinion--the-new-frontiers-of-epigenetics-65076

Sunday, November 18, 2018

Redefinition of SI Base Units


On 16 November 2018, the 26th General Conference on Weights and Measures (CGPM) voted unanimously in favour of revised definitions of the SI [International System of Units] base units, which the International Committee for Weights and Measures (CIPM) had proposed earlier that year. The new definitions will come into force on 20 May 2019.

The metric system was originally conceived as a system of measurement derivable from unchanging phenomena, but technical limitations necessitated the use of artifacts (the prototype metre and prototype kilogram) when the metric system was first introduced in France in 1799. Although designed not to degrade or decay over time, these prototypes were in fact losing minuscule amounts of mass over time, even in their sealed chambers. The changes in mass, and with them the values the artefacts provided, were so tiny as to be imperceptible without the most sensitive equipment. However, by the same token, measurements tied to those drifting artefacts could no longer be regarded as exact, or at least not exact within an acceptable tolerance.

In 1960, the metre was redefined in terms of the wavelength of light from a specified source, making it derivable from universal natural phenomena and leaving the prototype kilogram as the only artefact upon which the SI unit definitions depend. With the new definitions, the SI becomes wholly derivable from natural phenomena for the first time.

The kilogram, ampere, kelvin and mole have been redefined by setting exact numerical values for the Planck constant (h), the elementary electric charge (e), the Boltzmann constant (k), and the Avogadro constant (NA), respectively. The metre and candela are already defined by physical constants, subject to correction to their present definitions. The new definitions aim to improve the SI without changing the size of any units, thus ensuring continuity with existing measurements.
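For reference, the exact numerical values fixed by the revision are:

\begin{align}
h   &= 6.626\,070\,15 \times 10^{-34}\ \mathrm{J\,s} \\
e   &= 1.602\,176\,634 \times 10^{-19}\ \mathrm{C} \\
k   &= 1.380\,649 \times 10^{-23}\ \mathrm{J\,K^{-1}} \\
N_A &= 6.022\,140\,76 \times 10^{23}\ \mathrm{mol^{-1}}
\end{align}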

The previous major change of the metric system was in 1960 when the International System of Units (SI) was formally published as a coherent set of units of measure. SI is structured around seven base units whose definitions are unconstrained by that of any other unit and another twenty-two named units derived from these base units. Although the set of units formed a coherent system, the kilogram remained defined in terms of a physical artefact, and some units were defined based on measurements that are difficult to precisely realise in a laboratory, such as the Kelvin scale's definition in terms of the triple point of water. The new definitions adopted by the CIPM seek to remedy this by using the fundamental quantities of nature as the basis for deriving the base units. The second and the metre are already defined in such a manner. The change will mean, amongst other things, that the prototype kilogram will cease to be used as the definitive replica of the kilogram as of 20 May 2019.

A number of authors have published criticisms of the revised definitions – including that the proposal had failed to address the impact of breaking the link between the definition of the dalton and the definitions of the kilogram, the mole, and the Avogadro constant NA.

https://en.wikipedia.org/wiki/2019_redefinition_of_SI_base_units