Tuesday, May 31, 2022

Researchers Identify Alternative to Lithium-based Battery Technology

Researchers have identified an alternative to lithium-based battery technology by developing sodium glassy electrodes capable of supporting long-duration, grid-scale energy storage.

From:  University of Houston

May 31, 2022 -- Lithium-ion batteries are currently the preferred technology to power electric vehicles, but they're too expensive for long-duration grid-scale energy storage systems, and lithium itself is becoming more challenging to access.

While lithium does have many advantages -- high energy density and the capacity to be combined with renewable energy sources to support grid-level energy storage -- lithium carbonate prices are at an all-time high. Contributing to the rising cost are pandemic-related supply-chain bottlenecks, the Russia-Ukraine conflict and increased demand from businesses. Additionally, many governments are hesitant to green-light lithium mines because of the high environmental costs and the potential for human rights violations.

As governments and industries all over the world are eager to find energy storage options to power the clean energy transition, new research conducted at the University of Houston and published in Nature Communications suggests ambient temperature solid-state sodium-sulfur battery technology as a viable alternative to lithium-based battery technology for grid-level energy storage systems.

Yan Yao, Cullen Professor of Electrical and Computer Engineering, and his colleagues developed a homogeneous glassy electrolyte that enables reversible sodium plating and stripping at a greater current density than previously possible.

"The quest for new solid electrolytes for all-solid sodium batteries must concurrently be low cost, easily fabricated, and have incredible mechanical and chemical stability," said Yao, who is also principal investigator of the Texas Center for Superconductivity at the University of Houston (TcSUH). "To date, no single sodium solid electrolyte has been able to achieve all four of these requirements at the same time."

The researchers found a novel form of oxysulfide glass electrolyte that has the potential to satisfy all of these requirements at the same time. A high-energy ball milling process was used to create the electrolytes at room temperature.

"The oxysulfide glass has a distinct microstructure, resulting in a completely homogeneous glass structure," said Ye Zhang, who works as a research associate in Yao's group. "At the interface between sodium metal and the electrolyte, the solid electrolyte forms a self-passivating interphase that is essential for reversible plating and stripping of sodium."

Stable plating and stripping of sodium metal using a sulfide electrolyte has long been considered difficult to achieve.

"Our study overturned this perception by establishing not only the highest critical current density among all Na-ion conducting sulfide-based solid electrolytes, but also enabling high-performance ambient-temperature sodium-sulfur batteries," Yao explained.

"The new structural and compositional design strategies presented in this work provide a new paradigm in the development of safe, low-cost, energy-dense, and long-lifetime solid-state sodium batteries," Zhang added.

In addition to Yao and Zhang, co-authors of the study include co-first authors Xiaowei Chi and Fang Hao of UH; and Steven Kmiec and co-corresponding author Steve Martin of Iowa State University. Rice University, Purdue University, and UC Irvine are all collaborators on this project. This research was funded by the U.S. Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E).

        https://www.sciencedaily.com/releases/2022/05/220531161314.htm

  

Monday, May 30, 2022

Flash Droughts Threaten the Midwest

New research shows that dry weather is coming on more quickly than before, with little advance warning.

From:  Grist

May 16, 2022 – By Diana Kruzman -- September is typically a rainy time in Oklahoma, when farmers take advantage of the state’s third-wettest month to plant winter wheat. But last year, many were caught off guard by abnormally dry weather that descended without warning. In the span of just three weeks, nearly three-quarters of the state began experiencing drought conditions, ranging from moderate to extreme.

Fast-moving droughts like this one are developing more and more quickly as climate change pushes temperatures to new extremes, recent research indicates — adding a new threat to the dangers of pests, flooding, and more long-term drought that farmers in the U.S. already face. Known as “flash droughts,” these dry periods can materialize in as few as five days, often devastating agricultural areas that aren’t prepared for them.

During last year’s drought in Oklahoma, Jonathan Conder, a meteorologist for a local news station in Oklahoma City, marveled at the speed and severity of the event. Tulsa, the state’s second-largest city, went 80 days without more than a quarter-inch of rain, while temperatures in southwestern Oklahoma climbed into the triple digits.

“This is huge for Oklahoma,” Conder said during his broadcast on October 1. “Our agricultural community, the farmers who plant wheat, they may not even be able to plant if they don’t get two inches of rain.”

The threshold for drought conditions differs by location, with the U.S. Drought Monitor using data on soil moisture, streamflow, and precipitation to categorize droughts by their severity. While typical droughts develop over months as precipitation gradually declines, flash droughts are characterized by a steep drop in rainfall, particularly during a season that normally receives plenty, along with high temperatures and fast winds that quickly dry out the soil. They can wither crops or prevent seeds from sprouting, delaying or diminishing the harvest. 

Now, flash droughts are coming on faster and faster — making them more difficult to predict and more damaging, according to a recent study published in Nature Communications. The research, from scientists at the University of Texas and Hong Kong Polytechnic University, found that in the last 20 years, the percentage of flash droughts developing in under a week increased by more than 20 percent in the Central United States. 

“There should be more attention paid to this phenomenon,” said Zong-Liang Yang, a geosciences professor at the University of Texas and one of the study’s co-authors, as well as “how to actually implement [these findings] into agricultural management.” 

Scientists have long warned that warming temperatures and shifting rainfall patterns due to climate change pose a threat to the cash crops of the Midwest and Great Plains, primarily corn, wheat, and soybeans. But flash droughts are a relatively new area of research, Yang said, with the term gaining usage only in the last couple of decades. 

The increase in their severity and frequency, though, is already being felt across the U.S. In 2012, a flash drought struck the Central U.S. in the middle of the growing season, causing an estimated $31.2 billion in crop losses. Another flash drought hit Montana, North Dakota, and South Dakota in the spring of 2017, leading to more than $2.6 billion in agricultural losses, along with “widespread wildfires, poor air quality, damaged ecosystems, and degraded mental health,” according to a study published in the Bulletin of the American Meteorological Society.

Flash droughts are also a global problem, with Brazil, India, and multiple countries in Africa facing the worst impacts. In 2010, a flash drought followed by a heatwave in Russia temporarily halted wheat exports, a major disruption for communities across the Middle East that depend on the country’s grain. 

The damage flash droughts can cause depends on the crop and the time of year, said Dennis Todey, director of the Midwest Climate Hub for the U.S. Department of Agriculture. Corn is the most vulnerable during its pollination season in mid-summer, while soybeans are affected in August and wheat during planting season in the spring. 

Drought is a natural part of the climate in this region, Todey said, particularly in the western part of the Corn Belt — a region that encompasses the Midwest and the Great Plains. Many farmers have learned to adapt and integrate dry conditions into their planting cycles. But what makes flash droughts so dangerous is their rapid onset, Todey said, leaving little time for agricultural producers to prepare. 

“Drought most times is thought of as a slow-starting and then a slow-stopping event,” Todey said. “In a flash drought setting … instead of just starting to dry out gradually, you have surfaces that dry out very quickly, you have some newly planted crops that are starting to be stressed more quickly.” 

Many farmers don’t know if they’re starting to experience a drought, though, until expected rains fail to appear. Rainfall in mid-October helped ease the flash drought that began in Oklahoma in September, but after that a much longer drought set in, said Keeff Felty, a fourth-generation wheat and cotton farmer in the southwestern part of the state. As a result, some of his crop never germinated, while his overall yield dropped when it came time for the harvest. 

“There’s a lot of information out there, and you have to avail yourself of what works best for you, but you also have to be prepared for it to go totally south,” Felty said. “Nobody saw [the drought] coming, and it’s just a fact of the weather that we don’t have any control over it. It’s just life.”  

Typical droughts can last months or even years — the western U.S. is currently experiencing its third decade of “megadrought” — while flash droughts can end more quickly, within weeks or months, Yang said. And they can hit in relatively wet areas, including the eastern part of the country, where drought conditions are much more rare than in the West. 

The main reason they’re occurring faster, Yang said, is climate change. As the air warms, it can lead to more evaporation and dry out the soil. This can occur even in areas expected to receive more rainfall overall under climate change, since scientists project that rainfall will be unevenly distributed — falling in more extreme events and making other parts of the year drier.

“Every [recent] decade we have seen is the warmest decade in history,” Yang said. And with the world on track to blow past a global temperature that’s 1.5 degrees Celsius (2.7 degrees Fahrenheit) higher than the pre-industrial average, he expects to see both flash droughts and longer droughts occurring more frequently. 

Researchers are working on improving their models to better predict flash droughts, Yang said, with the help of new technologies such as more granular satellite monitoring and machine learning. The main marker they look for is a high rate of evapotranspiration, when plants suck up water from the soil and then release it into the air through their leaves — a process that accelerates with high temperatures and winds and can be monitored with special cameras that detect fluorescence, the faint light that plants re-emit during photosynthesis.

If farmers can know when to anticipate a flash drought, Todey said, they can skip or delay planting, or reduce their fertilizer usage when they know a crop won’t grow. They can also adjust their planting schedule and take better care of their soil by minimizing tillage, which dries it out even more. But with less and less time to prepare for flash droughts, Todey said, some may have to make difficult choices about whether to plant at all. 

“Agricultural producers naturally adapt to changing conditions,” Todey said. “But eventually there comes a point where [losses] become more frequent. People start going, ‘Okay, this isn’t working.’” 

      https://grist.org/agriculture/flash-droughts-are-midwests-next-big-climate-threat/

 


Sunday, May 29, 2022

Robb Elementary School Shooting

On May 24, 2022, 18-year-old Salvador Rolando Ramos fatally shot nineteen students and two teachers, and wounded seventeen other people at Robb Elementary School in Uvalde, Texas. This came after he shot his grandmother in the forehead at home, severely wounding her. He proceeded to the school and fired shots outside for approximately five minutes, then entered with an AR-15 style rifle through an open side entrance door, without encountering armed resistance.  He then locked himself inside a classroom, in which he killed all of the shooting's victims, and remained there for about one hour before being killed by a United States Border Patrol Tactical Unit (BORTAC).  It is the third-deadliest American school shooting, after the Virginia Tech shooting in 2007 and the Sandy Hook Elementary School shooting in 2012, and the deadliest in Texas.

Law enforcement officials were criticized for their actions in response to the shooting, and their conduct is being reviewed in separate investigations by the Texas Ranger Division and the United States Department of Justice. After initially praising first responders to the shooting, Texas Governor Greg Abbott called for an investigation of the lack of action by incident commanders. Police officers waited 78 minutes on-site before breaching the classroom to engage the shooter. Police also cordoned off the school grounds, resulting in confrontations between police and civilians who were attempting to enter the school to rescue children.  Afterwards, local and state officials gave inaccurate accounts of the timeline and overstated the police response.  The Texas Department of Public Safety acknowledged that it was an error for law enforcement to delay an assault on Ramos's position in the student-filled classroom, attributing this to the Uvalde Consolidated Independent School District's police chief's assessment of the situation as one with a "barricaded subject" instead of an "active shooter".

Following the shooting, which took place only ten days after the 2022 Buffalo shooting at a supermarket, wider discussions ensued about American gun culture and violence, gridlock in politics, and law enforcement's failure to halt the attack. Some have advocated for renewing the federal assault weapons ban. Others criticized politicians for their perceived role in continuing to enable mass shootings.  Republicans have responded by resisting the implementation of gun control measures and calling for increased security measures in schools, such as arming teachers; they have also accused their opponents of politicizing the shooting.  Some Republican senators have expressed openness to a bipartisan agreement on gun reform, such as incentivizing states to pass red flag laws and expanding background checks for gun purchasers.

          https://en.wikipedia.org/wiki/Robb_Elementary_School_shooting


Saturday, May 28, 2022

New Insights into Antibiotic Resistance

Study shows hospital bacteria's ability to change

From:  Flinders University

May 25, 2022 -- Treatment of severe infections caused by pathogenic bacteria relies on 'last resort' antibiotics, but rising resistance by 'superbugs' to most clinically approved drugs leaves patients at risk of fatal infections. Researchers are focusing on how bacterial cells adapt and resist antimicrobial medications -- with a new article focusing on a hospital strain of Acinetobacter baumannii and its cellular response to the important antibiotic colistin.

The World Health Organization names antibiotic resistance as one of the biggest threats to global health, food security, and development, with a growing number of infections -- including pneumonia, tuberculosis, gonorrhoea and salmonellosis -- becoming harder to treat as the antibiotics used to treat them become less effective.

Antibiotic resistance leads to longer hospital stays, higher medical costs and increased mortality, researchers warn.

"Around the world, there are fewer and fewer new antibiotics being identified and produced for medical use -- and this is compounded by the ever-increasing antibiotic resistance seen in bacterial strains causing infections," says Flinders microbial researcher Dr Sarah Giles.

"If we can understand the bacterial mechanisms, such as this, we can potentially apply new therapies to treat patients -- particularly those with multi-drug-resistant bacterial infections."

As part of a NHMRC-Flinders University Research Scholarship study, Dr Giles and other authors noted that the A. baumannii bacterial strain had a two-part signal system which altered its potential response to antibiotic treatment.

The 'two-component signal transduction' system observed involves a response regulator protein in the StkR/S system acting as a repressor; when it is genetically removed, hundreds of transcriptional changes are seen.

Some of these changes affect the bacterial cell's outer membrane composition leading to colistin resistance.

"Colistin is known as a 'last resort' antibiotic and therefore identifying and understanding the mechanisms contributing to bacterial antibiotic-resistant is critical," says senior researcher Professor Melissa Brown.

Antimicrobial resistance (AMR) occurs when bacteria, viruses, fungi and parasites change over time and no longer respond to medicines, making infections harder to treat and increasing the risk of disease spread, severe illness and even death.

          https://www.sciencedaily.com/releases/2022/05/220525103001.htm

 

Friday, May 27, 2022

Wild Animals Often Evolve Quickly

The raw material for evolution is much more abundant in wild animals than we previously believed, according to new research.

From:  Australian National University

May 26, 2022 -- Darwinian evolution is the process by which natural selection results in genetic changes in traits that favour the survival and reproduction of individuals. The rate at which evolution occurs depends crucially on genetic differences between individuals.

Led by Dr Timothée Bonnet from ANU, an international research team wanted to know how much of this genetic difference, or "fuel of evolution," exists in wild animal populations. The answer: two to four times more than previously thought.

According to Dr Bonnet, the process of evolution that Darwin described was an incredibly slow one.

"However, since Darwin, researchers have identified many examples of Darwinian evolution occurring in just a few years," Dr Bonnet said.

"A common example of fast evolution is the peppered moth, which prior to the industrial revolution in the UK was predominantly white. With pollution leaving black soot on trees and buildings, black moths had a survival advantage because it was harder for birds to spot them.

"Because moth colour determined survival probability and was due to genetic differences, the populations in England quickly became dominated by black moths."

The study is the first time the speed of evolution has been systematically evaluated on a large scale, rather than on an ad hoc basis. The team of 40 researchers from 27 scientific institutions used studies of 19 populations of wild animals from around the world. These included superb fairy-wrens in Australia, spotted hyenas in Tanzania, song sparrows in Canada and red deer in Scotland.

"We needed to know when each individual was born, who they mated with, how many offspring they had, and when they died. Each of these studies ran for an average of 30 years, providing the team with an incredible 2.6 million hours of field data," Dr Bonnet said.

"We combined this with genetic information on each animal studied to estimate the extent of genetic differences in their ability to reproduce, in each population.

After three years of trawling through reams of data, Dr Bonnet and the team were able to quantify how much species change occurred due to genetic changes caused by natural selection.

"The method gives us a way to measure the potential speed of current evolution in response to natural selection across all traits in a population. This is something we have not been able to do with previous methods, so being able to see so much potential change came as a surprise to the team," Dr Bonnet said.

Professor Loeske Kruuk, also from ANU and now based at the University of Edinburgh in the United Kingdom, said: "This has been a remarkable team effort that was feasible because researchers from around the world were happy to share their data in a large collaboration.

"It also shows the value of long-term studies with detailed monitoring of animal life histories for helping us understand the process of evolution in the wild."

However, the researchers warn it's too early to tell whether the actual rate of evolution is getting quicker over time.

"Whether species are adapting faster than before, we don't know, because we don't have a baseline. We just know that the recent potential, the amount of 'fuel', has been higher than expected, but not necessarily higher than before," Dr Bonnet said.

According to the researchers, their findings also have implications for predictions of species' adaptability to environmental change.

"This research has shown us that evolution cannot be discounted as a process which allows species to persist in response to environmental change," Dr Bonnet said.

Dr Bonnet said that with climate change predicted to accelerate, there is no guarantee that these populations will be able to keep up.

"But what we can say is that evolution is a much more significant driver than we previously thought in the adaptability of populations to current environmental changes," he said.

        https://www.sciencedaily.com/releases/2022/05/220526141534.htm

  

Thursday, May 26, 2022

Hot-blooded T. rex and cold-blooded Stegosaurus

Chemical clues reveal dinosaur metabolisms

From:  Field Museum

May 25, 2022 -- For decades, paleontologists have debated whether dinosaurs were warm-blooded, like modern mammals and birds, or cold-blooded, like modern reptiles. Knowing whether dinosaurs were warm- or cold-blooded could give us hints about how active they were and what their everyday lives were like, but the methods to determine their warm- or cold-bloodedness-- how quickly their metabolisms could turn oxygen into energy-- were inconclusive. But in a new paper in Nature, scientists are unveiling a new method for studying dinosaurs’ metabolic rates, using clues in their bones that indicated how much the individual animals breathed in their last hour of life.

“This is really exciting for us as paleontologists-- the question of whether dinosaurs were warm- or cold-blooded is one of the oldest questions in paleontology, and now we think we have a consensus, that most dinosaurs were warm-blooded,” says Jasmina Wiemann, the paper’s lead author and a postdoctoral researcher at the California Institute of Technology.

“The new proxy developed by Jasmina Wiemann allows us to directly infer metabolism in extinct organisms, something that we were only dreaming about just a few years ago. We also found different metabolic rates characterizing different groups, which was previously suggested based on other methods, but never directly tested,” says Matteo Fabbri, a postdoctoral researcher at the Field Museum in Chicago and one of the study’s authors.

People sometimes talk about metabolism in terms of how easy it is for someone to stay in shape, but at its core, “metabolism is how effectively we convert the oxygen that we breathe into chemical energy that fuels our body,” says Wiemann, who is affiliated with Yale University and the Natural History Museum of Los Angeles County.

Animals with a high metabolic rate are endothermic, or warm-blooded; warm-blooded animals like birds and mammals take in lots of oxygen and have to burn a lot of calories in order to maintain their body temperature and stay active. Cold-blooded, or ectothermic, animals like reptiles breathe less and eat less. Their lifestyle is less energetically expensive than a hot-blooded animal’s, but it comes at a price: cold-blooded animals are reliant on the outside world to keep their bodies at the right temperature to function (like a lizard basking in the sun), and they tend to be less active than warm-blooded creatures.

With birds being warm-blooded and reptiles being cold-blooded, dinosaurs were caught in the middle of a debate. Birds are the only dinosaurs that survived the mass extinction at the end of the Cretaceous, but dinosaurs (and by extension, birds) are technically reptiles-- outside of birds, their closest living relatives are crocodiles and alligators. So would that make dinosaurs warm-blooded, or cold-blooded?

Scientists have tried to glean dinosaurs’ metabolic rates from chemical and osteohistological analyses of their bones. “In the past, people have looked at dinosaur bones with isotope geochemistry that basically works like a paleo-thermometer,” says Wiemann-- researchers examine the minerals in a fossil and determine what temperatures those minerals would form in. “It's a really cool approach and it was really revolutionary when it came out, and it continues to provide very exciting insights into the physiology of extinct animals. But we’ve realized that we don’t really understand yet how fossilization processes change the isotope signals that we pick up, so it is hard to unambiguously compare the data from fossils to modern animals.”

Another method for studying metabolism is growth rate. “If you look at a cross section of dinosaur bone tissue, you can see a series of lines, like tree rings, that correspond to years of growth,” says Fabbri. “You can count the lines of growth and the space between them to see how fast the dinosaur grew. The limit relies on how you transform growth rate estimates into metabolism: growing faster or slower can have more to do with the animal’s stage in life than with its metabolism, like how we grow faster when we’re young and slower when we’re older.”

The new method proposed by Wiemann, Fabbri, and their colleagues doesn’t look at the minerals present in bone or how quickly the dinosaur grew. Instead, they look at one of the most basic hallmarks of metabolism: oxygen use. When animals breathe, side products form that react with proteins, sugars, and lipids, leaving behind molecular “waste.” This waste is extremely stable and water-insoluble, so it’s preserved during the fossilization process. It leaves behind a record of how much oxygen a dinosaur was breathing in, and thus, its metabolic rate.

The researchers looked for these bits of molecular waste in dark-colored fossil femurs, because those dark colors indicate that lots of organic matter are preserved. They examined the fossils using Raman and Fourier-transform infrared spectroscopy-- “these methods work like laser microscopes, we can basically quantify the abundance of these molecular markers that tell us about the metabolic rate,” says Wiemann. “It is a particularly attractive method to paleontologists, because it is non-destructive.” 

The team analyzed the femurs of 55 different groups of animals, including dinosaurs, their flying cousins the pterosaurs, their more distant marine relatives the plesiosaurs, and modern birds, mammals, and lizards. They compared the amount of breathing-related molecular byproducts with the known metabolic rates of the living animals and used those data to infer the metabolic rates of the extinct ones.
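The press release does not describe the team's statistical model, so purely as an illustration of the general calibration idea (fit a proxy-to-metabolism relationship on living animals whose rates are known, then apply it to fossils), here is a minimal Python sketch with entirely hypothetical numbers:

    # Illustrative proxy calibration -- not the authors' actual model.
    # Idea: relate a spectroscopic marker of oxygen use to known metabolic
    # rates in living animals, then apply the fitted relation to a fossil.
    import numpy as np

    # Hypothetical training data for extant species:
    # relative abundance of breathing-related molecular byproducts (arbitrary units)
    marker = np.array([0.2, 0.4, 0.5, 0.7, 0.9, 1.1])
    # ...and their measured (log10) mass-specific metabolic rates
    log_metabolic_rate = np.array([-1.0, -0.4, -0.1, 0.3, 0.6, 0.9])

    # Fit a simple linear calibration in log space
    slope, intercept = np.polyfit(marker, log_metabolic_rate, deg=1)

    # Apply the calibration to a fossil's measured marker abundance
    fossil_marker = 0.8
    estimated_rate = 10 ** (slope * fossil_marker + intercept)
    print(f"estimated metabolic rate: {estimated_rate:.2f} (arbitrary units)")

The real study pooled 55 groups of animals and used more sophisticated statistics; the sketch only shows the calibrate-then-predict logic.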

The team found that dinosaurs’ metabolic rates were generally high. There are two big groups of dinosaurs, the saurischians and the ornithischians-- lizard hips and bird hips. The bird-hipped dinosaurs, like Triceratops and Stegosaurus, had low metabolic rates comparable to those of cold-blooded modern animals. The lizard-hipped dinosaurs, including theropods and the sauropods-- the two-legged, more bird-like predatory dinosaurs like Velociraptor and T. rex and the giant, long-necked herbivores like Brachiosaurus-- were warm- or even hot-blooded. The researchers were surprised to find that some of these dinosaurs weren’t just warm-blooded-- they had metabolic rates comparable to modern birds, much higher than mammals. These results complement previous independent observations that hinted at such trends but could not provide direct evidence, because of the lack of a direct proxy to infer metabolism.

These findings, the researchers say, can give us fundamentally new insights into what dinosaurs’ lives were like. 

“Dinosaurs with lower metabolic rates would have been, to some extent, dependent on external temperatures,” says Wiemann. “Lizards and turtles sit in the sun and bask, and we may have to consider similar ‘behavioral’ thermoregulation in ornithischians with exceptionally low metabolic rates. Cold-blooded dinosaurs also might have had to migrate to warmer climates during the cold season, and climate may have been a selective factor for where some of these dinosaurs could live.” 

On the other hand, she says, the hot-blooded dinosaurs would have been more active and would have needed to eat a lot. “The hot-blooded giant sauropods were herbivores, and it would take a lot of plant matter to feed this metabolic system. They had very efficient digestive systems, and since they were so big, it probably was more of a problem for them to cool down than to heat up.” Meanwhile, the theropod dinosaurs-- the group that contains birds-- developed high metabolisms even before some of their members evolved flight. 

“Reconstructing the biology and physiology of extinct animals is one of the hardest things to do in paleontology. This new study adds a fundamental piece of the puzzle in understanding the evolution of physiology in deep time and complements previous proxies used to investigate these questions. We can now infer body temperature through isotopes, growth strategies through osteohistology, and metabolic rates through chemical proxies,” says Fabbri.

In addition to giving us insights into what dinosaurs were like, this study also helps us better understand the world around us today. Dinosaurs, with the exception of birds, died out in a mass extinction 65 million years ago when an asteroid struck the Earth. “Having a high metabolic rate has generally been suggested as one of the key advantages when it comes to surviving mass extinctions and successfully radiating afterwards,” says Wiemann-- some scientists have proposed that birds survived while the non-avian dinosaurs died because of the birds’ increased metabolic capacity. But this study, Wiemann says, helps to show that this isn’t true: many dinosaurs with bird-like, exceptional metabolic capacities went extinct. 

“We are living in the sixth mass extinction,” says Wiemann, “so it is important for us to understand how modern and extinct animals physiologically responded to previous climate change and environmental perturbations, so that the past can inform biodiversity conservation in the present and inform our future actions.”

https://www.fieldmuseum.org/about/press/hot-blooded-t-rex-and-cold-blooded-stegosaurus-chemical-clues-reveal-dinosaur

 

Wednesday, May 25, 2022

Hubble Reaches New Milestone in Mystery of Universe's Expansion Rate

Completing a nearly 30-year marathon, NASA's Hubble Space Telescope has calibrated more than 40 "milepost markers" of space and time to help scientists precisely measure the expansion rate of the universe – a quest with a plot twist.

From:  NASA

May 19, 2022 -- Pursuit of the universe's expansion rate began in the 1920s with measurements by astronomers Edwin P. Hubble and Georges Lemaître. In 1998, this led to the discovery of "dark energy," a mysterious repulsive force accelerating the universe's expansion. In recent years, thanks to data from Hubble and other telescopes, astronomers found another twist: a discrepancy between the expansion rate as measured in the local universe compared to independent observations from right after the big bang, which predict a different expansion value.

The cause of this discrepancy remains a mystery. But Hubble data, encompassing a variety of cosmic objects that serve as distance markers, support the idea that something weird is going on, possibly involving brand new physics.

"You are getting the most precise measure of the expansion rate for the universe from the gold standard of telescopes and cosmic mile markers," said Nobel Laureate Adam Riess of the Space Telescope Science Institute (STScI) and the Johns Hopkins University in Baltimore, Maryland.

Riess leads a scientific collaboration investigating the universe's expansion rate called SH0ES, which stands for Supernova, H0, for the Equation of State of Dark Energy. "This is what the Hubble Space Telescope was built to do, using the best techniques we know to do it. This is likely Hubble's magnum opus, because it would take another 30 years of Hubble's life to even double this sample size," Riess said.

Riess's team's paper, to be published in the Special Focus issue of The Astrophysical Journal, reports on completing the biggest and likely last major update on the Hubble constant. The new results more than double the prior sample of cosmic distance markers. His team also reanalyzed all of the prior data, with the whole dataset now including over 1,000 Hubble orbits.

When NASA conceived of a large space telescope in the 1970s, one of the primary justifications for the expense and extraordinary technical effort was to be able to resolve Cepheids, stars that brighten and dim periodically, seen inside our Milky Way and external galaxies. Cepheids have long been the gold standard of cosmic mile markers since their utility was discovered by astronomer Henrietta Swan Leavitt in 1912. To calculate much greater distances, astronomers use exploding stars called Type Ia supernovae.

Combined, these objects built a "cosmic distance ladder" across the universe and are essential to measuring the expansion rate of the universe, called the Hubble constant after Edwin Hubble. That value is critical to estimating the age of the universe and provides a basic test of our understanding of the universe.
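As a rough illustration of why that number constrains the age of the universe, the "Hubble time" 1/H0 gives an order-of-magnitude age estimate under the simplifying assumption of a constant expansion rate (the published age of about 13.8 billion years comes from a full cosmological model). A minimal Python sketch, using the two Hubble constant values discussed later in this article:

    # Rough "Hubble time" estimate: age ~ 1/H0, assuming constant expansion.
    # Illustrative only; the true age depends on the cosmological model.

    KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
    SECONDS_PER_YEAR = 3.156e7    # seconds in one year

    def hubble_time_gyr(h0_km_s_mpc):
        """Convert H0 in km/s/Mpc into an approximate age in billions of years."""
        h0_per_second = h0_km_s_mpc / KM_PER_MPC   # H0 expressed in 1/s
        return (1.0 / h0_per_second) / SECONDS_PER_YEAR / 1e9

    print(hubble_time_gyr(73.0))   # ~13.4 billion years (SH0ES value)
    print(hubble_time_gyr(67.5))   # ~14.5 billion years (Planck-based value)

The roughly billion-year gap between those two naive estimates is one way to see why the 73-versus-67.5 discrepancy described below matters.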

Starting right after Hubble's launch in 1990, the first set of observations of Cepheid stars to refine the Hubble constant was undertaken by two teams: the HST Key Project led by Wendy Freedman, Robert Kennicutt, Jeremy Mould, and Marc Aaronson, and another by Allan Sandage and collaborators, that used Cepheids as milepost markers to refine the distance measurement to nearby galaxies. By the early 2000s the teams declared "mission accomplished" by reaching an accuracy of 10 percent for the Hubble constant, 72 plus or minus 8 kilometers per second per megaparsec.

In 2005 and again in 2009, the addition of powerful new cameras onboard the Hubble telescope launched "Generation 2" of the Hubble constant research as teams set out to refine the value to an accuracy of just one percent. This was inaugurated by the SH0ES program. Several teams of astronomers using Hubble, including SH0ES, have converged on a Hubble constant value of 73 plus or minus 1 kilometer per second per megaparsec. While other approaches have been used to investigate the Hubble constant question, different teams have come up with values close to the same number.

The SH0ES team includes long-time leaders Dr. Wenlong Yuan of Johns Hopkins University, Dr. Lucas Macri of Texas A&M University, Dr. Stefano Casertano of STScI, and Dr. Dan Scolnic of Duke University. The project was designed to bracket the universe by matching the precision of the Hubble constant inferred from studying the cosmic microwave background radiation leftover from the dawn of the universe.

"The Hubble constant is a very special number. It can be used to thread a needle from the past to the present for an end-to-end test of our understanding of the universe. This took a phenomenal amount of detailed work," said Dr. Licia Verde, a cosmologist at ICREA and the ICC-University of Barcelona, speaking about the SH0ES team's work.

The team measured 42 of the supernova milepost markers with Hubble. Because they are seen exploding at a rate of about one per year, Hubble has, for all practical purposes, logged as many supernovae as possible for measuring the universe's expansion. Riess said, "We have a complete sample of all the supernovae accessible to the Hubble telescope seen in the last 40 years." Like the lyrics from the song "Kansas City," from the Broadway musical Oklahoma, Hubble has "gone about as fur as it c'n go!"

Weird Physics?

The expansion rate of the universe was predicted to be slower than what Hubble actually sees. By combining the Standard Cosmological Model of the Universe and measurements by the European Space Agency's Planck mission (which observed the relic cosmic microwave background from 13.8 billion years ago), astronomers predict a lower value for the Hubble constant: 67.5 plus or minus 0.5 kilometers per second per megaparsec, compared to the SH0ES team's estimate of 73.

Given the large Hubble sample size, there is only a one-in-a-million chance astronomers are wrong due to an unlucky draw, said Riess, a common threshold for taking a problem seriously in physics. This finding is untangling what was becoming a nice and tidy picture of the universe's dynamical evolution. Astronomers are at a loss for an explanation of the disconnect between the expansion rate of the local universe versus the primeval universe, but the answer might involve additional physics of the universe.

Such confounding findings have made life more exciting for cosmologists like Riess. Thirty years ago they started out to measure the Hubble constant to benchmark the universe, but now it has become something even more interesting. "Actually, I don't care what the expansion value is specifically, but I like to use it to learn about the universe," Riess added.

NASA's new Webb Space Telescope will extend Hubble's work by showing these cosmic milepost markers at greater distances or sharper resolution than what Hubble can see.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.

https://www.nasa.gov/feature/goddard/2022/hubble-reaches-new-milestone-in-mystery-of-universes-expansion-rate

 

Tuesday, May 24, 2022

Scientists Develop Method for Seasonal Prediction of Western Wildfires

Experimental forecast calls for a larger-than-average fire season this summer

From:  National Center for Atmospheric Research/University Corporation for Atmospheric Research

May 24, 2022 -- This summer's Western wildfire season is likely to be more severe than average but not as devastating as last year's near-record, according to an experimental prediction method developed by scientists at the National Center for Atmospheric Research (NCAR).

The new method, detailed in a peer-reviewed study, analyzes precipitation, temperatures, drought, and other climate conditions in the winter and spring in order to predict the extent of wildfires across the western United States during the following summer. The research team developed the method by applying machine learning techniques to observations of every wildfire season since 1984, when current satellite measurements of fires first became available.

Although scientists had previously known that climate conditions during the spring and summer influence fire risk, the new study demonstrates that, even several months before peak fire season, the climate across large parts of the West plays an important role in setting the stage for the blazes.

"What our research shows is that the climate of the preceding winter and spring can explain over 50% of the year-to-year variability and overall trend in summer fire activity," said NCAR scientist Ronnie Abolafia-Rosenzweig, the lead author of the study. "This gives us the ability to predict fire activity before the summer fire season begins."

Applying their research method to the upcoming fire season, the scientists predicted that fires this summer will burn 1.9-5.3 million acres in the West, with 3.8 million acres being the most likely total. Although well short of the record 8.7 million acres burned in 2020, this would represent the 8th largest burned area since 1984, part of a long-term trend of more widespread conflagrations.

The scientists emphasized that their prediction is currently for research purposes only. But they said their method, once further tested and improved, could help provide guidance to firefighting agencies in the future. It provides more explicit information than current seasonal forecasts that may call for a comparatively mild or destructive wildfire season without predicting how many acres are likely to burn.

"This information can be extremely useful to firefighting agencies as they allocate resources and prepare for the upcoming fire season," Abolafia-Rosenzweig said.

Abolafia-Rosenzweig and his co-authors describe the prediction method in a new study in Environmental Research Letters. The work was supported by the NOAA MAPP program as well as the U.S. National Science Foundation, which is NCAR's sponsor.

A persistent influence

With wildfires becoming increasingly widespread across much of the West, the NCAR team wanted to see if climate conditions early in the year can offer clues to the extent of the blazes during summer, when fire season peaks.

The scientists turned to ensembles of generalized additive statistical models, which are widely used machine learning tools that help reveal complex relationships -- in this case, the correspondence between climate conditions from November to May and the extent of burned areas during the following June to September. They analyzed every year since 1984, focusing on regions in the West that are reliant on snowpack for water.
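The release does not say what software the team used, so as a hedged sketch of the general approach only, here is a minimal generalized additive model fit in Python with the pyGAM library; the predictor and target variables are hypothetical stand-ins for the winter-spring climate conditions and summer burned area described above:

    # Minimal generalized additive model (GAM) sketch -- illustrative only.
    # Assumes the pyGAM library; all data below are synthetic placeholders.
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(0)

    # Hypothetical Nov-May predictors for each year since 1984, e.g.
    # vapor pressure deficit, April snowpack, winter precipitation.
    n_years = 38
    X = rng.normal(size=(n_years, 3))
    # Hypothetical summer burned area, loosely tied to the first two predictors.
    y = 2.0 + 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n_years)

    # One smooth term per climate variable; each term captures a possibly
    # nonlinear partial relationship between that variable and burned area.
    gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, y)

    # Predict summer burned area from the most recent winter-spring conditions.
    print(gam.predict(X[-1:, :]))

An ensemble, as used by the NCAR team, would fit many such models (for example on resampled years or with different term choices) and combine their predictions.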

The research team found that the dryness of air (vapor pressure deficit) in the lowest part of the atmosphere during winter and spring has a particularly pronounced effect on summertime fires. That dryness influences the amount of snow that falls and, in turn, is affected by snow on the ground that eventually releases moisture to the overlying air. The extent of April snowpack is especially significant because it moistens both the ground and air as it melts during the warmer months.

"We found that April snowpack has a persistent influence on the land and atmosphere during the summer," said Abolafia-Rosenzweig. "If you have a large snowpack in April, it will take longer to melt and there's a more persistent transfer of moisture from the land to the atmosphere during late spring to summer. But in the case of a lesser snowpack, you'll have both a drier land surface and a drier atmosphere in summer, which results in conditions that are more conducive for the spread of fires."

The scientists also studied a number of additional climate variables, including precipitation, temperature, soil moisture, evapotranspiration, and indexes of drought, examining how each variable during different seasons influences the extent of summer fires.

They concluded that winter and spring climate conditions can be used to predict up to 53% of the year-to-year variability in summer burned areas. When summertime climate conditions such as precipitation and the dryness of the air are also factored in, the explained variability increases to 69%.

The study also looked at the overall impact of climate change on fire activity in the West. As wildfires have gradually grown in size since 1984, the research team's modeling showed that climate variables such as rising temperatures and persistent droughts can explain 83% of that increase.

This year's experimental prediction -- which encompasses the entire West, not just snow-reliant regions -- indicates that fires will burn 38% more of Western lands this summer than the average since 1984. The prediction does not include early-season fires before June, such as the widespread blazes that have devastated New Mexico this spring, nor does it estimate how various Western regions will fare. In the future, however, the scientists may add such details.
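As a back-of-the-envelope consistency check, and assuming the 3.8-million-acre central estimate is the figure that sits 38 percent above the long-term mean, the implied 1984-onward average works out to roughly 2.75 million acres:

    # Hypothetical arithmetic check -- assumes the 3.8M-acre central estimate
    # is the value quoted as 38% above the 1984-onward average.
    most_likely_acres = 3.8e6
    implied_average = most_likely_acres / 1.38
    print(round(implied_average))   # about 2.75 million acres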

"Our plan is to include local climate variables such as winds so we can know the specific fire conditions on a state or even county level," said NCAR scientist Cenlin He, a co-author of the study. "This will make it more valuable to stakeholders and fire managers so they can anticipate fire activity for specific regions in the West."

This material is based upon work supported by the National Center for Atmospheric Research, a major facility sponsored by the National Science Foundation and managed by the University Corporation for Atmospheric Research. Any opinions, findings and conclusions or recommendations expressed in this material do not necessarily reflect the views of the National Science Foundation.

            https://www.sciencedaily.com/releases/2022/05/220524124913.htm

 

Monday, May 23, 2022

Research Finds a Secret of Stronger Metals

Study shows what happens when crystalline grains in metals reform at nanometer scales, improving metal properties.

From:  MIT News Office

By David L. Chandler

May 20, 2022 -- For the first time, researchers have described how the tiny crystalline grains that make up most solid metals actually form. Understanding this process, they say, could theoretically lead to ways of producing stronger, lighter versions of widely used metals such as aluminum, steel and titanium.

Forming metal into the shapes needed for various purposes can be done in many ways, including casting, machining, rolling, and forging. These processes affect the sizes and shapes of the tiny crystalline grains that make up the bulk metal, whether it be steel, aluminum or other widely used metals and alloys.

Now researchers at MIT have been able to study exactly what happens as these crystal grains form during an extreme deformation process, at the tiniest scales, down to a few nanometers across. The new findings could lead to improved ways of processing to produce better, more consistent properties such as hardness and toughness.

The new findings, made possible by detailed analysis of images from a suite of powerful imaging systems, are reported today in the journal Nature Materials, in a paper by former MIT postdoc Ahmed Tiamiyu (now assistant professor at the University of Calgary); MIT professors Christopher Schuh, Keith Nelson, and James LeBeau; former student Edward Pang; and current student Xi Chen.

“In the process of making a metal, you are endowing it with a certain structure, and that structure will dictate its properties in service,” Schuh says. In general, the smaller the grain size, the stronger the resulting metal. Striving to improve strength and toughness by making the grain sizes smaller “has been an overarching theme in all of metallurgy, in all metals, for the past 80 years,” he says.

Metallurgists have long applied a variety of empirically developed methods for reducing the sizes of the grains in a piece of solid metal, generally by imparting various kinds of strain through deforming it in one way or another. But it’s not easy to make these grains smaller.

The primary method is called recrystallization, in which the metal is deformed and heated. This creates many small defects throughout the piece, which are “highly disordered and all over the place,” says Schuh, who is the Danae and Vasilis Salapatas Professor of Metallurgy.   

When the metal is deformed and heated, then all those defects can spontaneously form the nuclei of new crystals. “You go from this messy soup of defects to freshly new nucleated crystals. And because they're freshly nucleated, they start very small,” leading to a structure with much smaller grains, Schuh explains.

What’s unique about the new work, he says, is determining how this process takes place at very high speed and at the smallest scales. Whereas typical metal-forming processes, like forging or sheet rolling, may be quite fast, this new analysis looks at processes that are “several orders of magnitude faster,” Schuh says.

“We use a laser to launch metal particles at supersonic speeds. To say it happens in the blink of an eye would be an incredible understatement, because you could do thousands of these in the blink of an eye,” says Schuh.

Such a high-speed process is not just a laboratory curiosity, he says. “There are industrial processes where things do happen at that speed.” These include high-speed machining; high-energy milling of metal powder; and a method called cold spray, for forming coatings. In their experiments, “we’ve tried to understand that recrystallization process under those very extreme rates, and because the rates are so high, no one has really been able to dig in there and look systematically at that process before,” he says.

Using a laser-based system to shoot 10-micrometer particles at a surface, Tiamiyu, who carried out the experiments, “could shoot these particles one at a time, and really measure how fast they are going and how hard they hit,” Schuh says. Shooting the particles at ever-faster speeds, he would then cut them open to see how the grain structure evolved, down to the nanometer scale, using a variety of sophisticated microscopy techniques at the MIT.nano facility, in collaboration with microscopy specialists.

The result was the discovery of what Schuh says is a “novel pathway” by which grains were forming down to the nanometer scale. The new pathway, which they call nano-twinning assisted recrystallization, is a variation of a known phenomenon in metals called twinning, a particular kind of defect in which part of the crystalline structure flips its orientation. It’s a “mirror symmetry flip, and you end up getting these stripey patterns where the metal flips its orientation and flips back again, like a herringbone pattern,” he says. The team found that the higher the rate of these impacts, the more this process took place, leading to ever smaller grains as those nanoscale “twins” broke up into new crystal grains.

In the experiments they did using copper, the process of bombarding the surface with these tiny particles at high speed could increase the metal’s strength about tenfold. “This is not a small change in properties,” Schuh says, and that result is not surprising since it’s an extension of the known effect of hardening that comes from the hammer blows of ordinary forging. “This is sort of a hyper-forging type of phenomenon that we’re talking about.”

In the experiments, they were able to apply a wide range of imaging and measurements to the exact same particles and impact sites, Schuh says: “So, we end up getting a multimodal view. We get different lenses on the same exact region and material, and when you put all that together, you have just a richness of quantitative detail about what’s going on that a single technique alone wouldn't provide.”

Because the new findings provide guidance about the degree of deformation needed, how fast that deformation takes place, and the temperatures to use for maximum effect for any given specific metals or processing methods, they can be applied directly to real-world metals production, Tiamiyu says. The graphs they produced from the experimental work should be generally applicable. “They’re not just hypothetical lines,” Tiamiyu says. For any given metals or alloys, “if you’re trying to determine if nanograins will form, if you have the parameters, just slot it in there” into the formulas they developed, and the results should show what kind of grain structure can be expected from given rates of impact and given temperatures.

The research was supported by the U.S. Department of Energy, the Office of Naval Research, and the Natural Sciences and Engineering Research Council of Canada.

                   https://news.mit.edu/2022/crystalline-grains-metals-0520

 

Sunday, May 22, 2022

Neuromorphic Memory Device Simulates Neurons and Synapses

Simultaneous emulation of neuronal and synaptic properties promotes the development of brain-like artificial intelligence

From:  The Korea Advanced Institute of Science and Technology (KAIST)

May 20, 2022 -- Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

The artificial synaptic devices studied previously were often used to accelerate parallel computations, much like commercial graphics cards, which differs clearly from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
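The device itself is hardware, but purely as a conceptual software analogy of the two time scales described here (a volatile neuronal state that leaks away and resets, alongside a non-volatile synaptic weight that persists and adapts), consider the toy model below. It is a hedged illustration only and does not represent the KAIST device or its physics:

    # Toy software analogy only -- not a model of the KAIST device.
    # A leaky integrate-and-fire "neuron" (volatile state: decays each step,
    # resets on firing) drives a persistent "synaptic" weight (non-volatile
    # state: retained between inputs and nudged upward whenever a spike occurs).

    class ToyNeuronSynapse:
        def __init__(self, threshold=1.0, leak=0.8, learning_rate=0.05):
            self.potential = 0.0   # volatile: decays every step, resets on spike
            self.weight = 0.5      # non-volatile: persists across steps
            self.threshold = threshold
            self.leak = leak
            self.learning_rate = learning_rate

        def step(self, presynaptic_input):
            self.potential = self.leak * self.potential + self.weight * presynaptic_input
            if self.potential >= self.threshold:
                self.potential = 0.0                 # reset (volatile behaviour)
                self.weight += self.learning_rate    # crude Hebbian-style update
                return True                          # spike emitted
            return False

    unit = ToyNeuronSynapse()
    spikes = [unit.step(1.0) for _ in range(10)]
    print(spikes, round(unit.weight, 2))

In this toy, spikes arrive more frequently as the persistent weight grows, a very loose software echo of the neuron-synapse interaction and retraining effect described in the text.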

Professor Keon Jae Lee explained, "Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses."

          https://www.sciencedaily.com/releases/2022/05/220520132904.htm 

Saturday, May 21, 2022

Puzzling Features Deep in Earth’s Interior

New research led by the University of Cambridge is the first to obtain a detailed 'image' of an unusual pocket of rock at the boundary layer with Earth’s core, some three thousand kilometres beneath the surface.

From:  University of Cambridge

May 19, 2022 -- The enigmatic area of rock, which is located almost directly beneath the Hawaiian Islands, is one of several ultra-low velocity zones – so-called because earthquake waves slow to a crawl as they pass through them.

The research, published in Nature Communications, is the first to reveal the complex internal variability of one of these pockets in detail, shedding light on the landscape of Earth’s deep interior and the processes operating within it.  

“Of all Earth’s deep interior features, these are the most fascinating and complex. We’ve now got the first solid evidence to show their internal structure - it’s a real milestone in deep earth seismology,” said lead author Zhi Li, PhD student at Cambridge’s Department of Earth Sciences.

Earth’s interior is layered like an onion: at the centre sits the iron-nickel core, surrounded by a thick layer known as the mantle, and on top of that a thin outer shell — the crust we live on. Although the mantle is solid rock, it is hot enough to flow extremely slowly. These internal convection currents feed heat to the surface, driving the movement of tectonic plates and fuelling volcanic eruptions.  

Scientists use seismic waves from earthquakes to 'see' beneath Earth’s surface — the echoes and shadows of these waves reveal radar-like images of deep interior topography. But, until recently, 'images' of the structures at the core-mantle boundary, an area of key interest for studying our planet’s internal heat flow, have been grainy and difficult to interpret.

The researchers used the latest numerical modelling methods to reveal kilometre-scale structures at the core-mantle boundary. According to co-author Dr Kuangdai Leng, who developed the methods while at the University of Oxford, “We are really pushing the limits of modern high-performance computing for elastodynamic simulations, taking advantage of wave symmetries unnoticed or unused before.” Leng, who is currently based at the Science and Technology Facilities Council, says that this means they can improve the resolution of the images by an order of magnitude compared to previous work. 

The researchers observed a 40% reduction in the speed of seismic waves travelling at the base of the ultra-low velocity zone beneath Hawaii. This supports existing proposals that the zone contains much more iron than the surrounding rocks – meaning it is denser and more sluggish. “It’s possible that this iron-rich material is a remnant of ancient rocks from Earth’s early history or even that iron might be leaking from the core by an unknown means,” said project lead Dr Sanne Cottaar from Cambridge Earth Sciences.

The research could also help scientists understand what sits beneath and gives rise to volcanic chains like the Hawaiian Islands. Scientists have started to notice a correlation between the location of the descriptively-named hotspot volcanoes, which include Hawaii and Iceland, and the ultra-low velocity zones at the base of the mantle. The origin of hotspot volcanoes has been debated, but the most popular theory suggests that plume-like structures bring hot mantle material all the way from the core-mantle boundary to the surface.

With images of the ultra-low velocity zone beneath Hawaii now in hand, the team can also gather rare physical evidence from what is likely the root of the plume feeding Hawaii. Their observation of dense, iron-rich rock beneath Hawaii would support surface observations. “Basalts erupting from Hawaii have anomalous isotope signatures which could point to either an early-Earth origin or core leaking; it means some of this dense material piled up at the base must be dragged to the surface,” said Cottaar.

More of the core-mantle boundary now needs to be imaged to understand if all surface hotspots have a pocket of dense material at the base. Where and how the core-mantle boundary can be targeted does depend on where earthquakes occur, and where seismometers are installed to record the waves.  

The team’s observations add to a growing body of evidence that Earth’s deep interior is just as variable as its surface. “These low velocity zones are one of the most intricate features we see at extreme depths – if we expand our search, we are likely to see ever-increasing levels of complexity, both structural and chemical, at the core-mantle boundary,” said Li.

They now plan to apply their techniques to enhance the resolution of imaging of other pockets at the core-mantle boundary, as well as mapping new zones. Eventually they hope to map the geological landscape across the core-mantle boundary and understand its relationship with the dynamics and evolutionary history of our planet.

https://www.cam.ac.uk/research/news/scientists-see-puzzling-features-deep-in-earths-interior 

Friday, May 20, 2022

Do Compression Garments Facilitate Muscle Recovery After Exercise?

From:  Tohoku University [in Japan]

May 19, 2022 -- Compression garments are an elastic cloth fitting that people wear on their arms, legs, or hips during or after physical exercise. Their use has gained popularity over the last few decades because they are thought to enhance muscle recovery following exercise.

An international research team, led by assistant professor János Négyesi from Tohoku University's Graduate School of Biomedical Engineering, performed a systematic review with meta-analysis to assess whether compression garments assist with muscle recovery.

Systematic reviews identify and synthesize data from all relevant studies, and sit at the highest level on the evidence-based medicine pyramid. The researchers' review used a generic inverse variance model, which adjusts the weight of individual studies according to sample size, to more accurately assess the effects of compression garments than previous meta-analyses.
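The review's exact statistical model is not reproduced here, but as a minimal sketch of the underlying inverse-variance idea (each study is weighted by the inverse of its variance, so larger and more precise studies count for more in the pooled estimate), here is a generic fixed-effect illustration in Python with hypothetical effect sizes:

    # Minimal fixed-effect inverse-variance pooling -- illustrative only,
    # with hypothetical per-study effect sizes and standard errors.
    import math

    effects = [0.30, -0.05, 0.10, 0.02]           # hypothetical study effect sizes
    standard_errors = [0.20, 0.10, 0.15, 0.08]    # hypothetical standard errors

    weights = [1.0 / se ** 2 for se in standard_errors]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")

Because the weights scale with precision (and, in practice, with sample size), a single small study reporting a benefit has little influence on the pooled estimate when larger, more precise studies show none.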

Contrary to results found in individual research, the meta-analytical evidence suggests that wearing a compression garment during or after training does not facilitate muscle recovery.

"Even data from our previous study supported the idea that such garments have the potential to reduce strength loss after a strenuous workout," said Dr. Négyesi. "However, when we synthesized the data of all relevant studies, we found no effect of compression garments on strength recovery - even when factoring in exercise type and when and where the compression garment is applied."

The authors think this is a perfect example of contradictory outcomes from individual studies and meta-analytical evidence. Therefore, scientists should be careful when drawing direct conclusions from the results of their studies. Rather, meta-analyses using the most appropriate models can provide more precise and reliable results.

Overall, practitioners, athletes, coaches, and therapists should reconsider compression garments as a means of reducing the harmful effects of physical exercise on muscle strength and seek alternative methods.

The review paper was published in Sports Medicine on April 27, 2022.

https://www.tohoku.ac.jp/en/press/do_compression_garments_facilitate_muscle_recoverey.html