Thursday, September 30, 2021

Never-smokers’ Lung Cancer Often Treatable

Available, FDA-approved drugs may be effective in targeting about 80% of never-smokers’ lung tumors

From: Washington University School of Medicine [in St. Louis, Missouri]

September 30, 2021 -- Despite smoking's well-known role in causing lung cancer, a significant number of patients who develop lung tumors have never smoked. While scientists are still working to understand what spurs cancer in so-called 'never-smokers,' a study suggests that 78% to 92% of lung cancers in patients who have never smoked can be treated with precision drugs already approved by the Food and Drug Administration to target specific mutations in a patient's tumor.

The researchers found that most never-smokers' lung tumors had so-called driver mutations: specific mistakes in the DNA that fuel tumor growth and that can be blocked with a variety of drugs. In contrast, only about half of tumors in people who smoke have driver mutations.

The study appears Sept. 30 in the Journal of Clinical Oncology.

"Most genomic studies of lung cancer have focused on patients with a history of tobacco smoking," said senior author Ramaswamy Govindan, MD, a professor of medicine. "And even studies investigating the disease in patients who have never smoked have not looked for specific, actionable mutations in these tumors in a systematic way. We found that the vast majority of these patients have genetic alterations that physicians can treat today with drugs already approved for use. The patient must have a high-quality biopsy to make sure there is enough genetic material to identify key mutations. But testing these patients is critical. There is a high chance such patients will have an actionable mutation that we can go after with specific therapies."

In the U.S., about 10% to 15% of lung cancers are diagnosed in people who have never smoked, and that proportion can be as high as 40% in parts of Asia.

The researchers analyzed lung tumors from 160 patients with lung adenocarcinoma but no history of tobacco smoking. They also compared data from these patients to data from smokers and never-smokers in The Cancer Genome Atlas and the Clinical Proteomic Tumor Analysis Consortium, projects led by the National Institutes of Health (NIH) to characterize different types of cancer. The scientists verified never-smoker status by examining the mutation patterns in these patients and comparing them to mutation patterns in lung cancers of patients who had smoked. Past work led by Govindan and his colleagues found that smokers' lung tumors have about 10 times the number of mutations as the lung tumors of never-smokers.

"Tobacco smoking leads to characteristic changes in the tumor cells, so we can look for telltale signs of smoking or signs of heavy exposure to secondhand smoke, for example," said Govindan, who treats patients at Siteman Cancer Center at Barnes-Jewish Hospital and Washington University School of Medicine. "But very few of these patients' tumors showed those signs, so we could verify that this was truly a sample of lung cancer tumors in patients who had never smoked or had major exposure to tobacco smoke."

The researchers also found that only about 7% of these patients showed evidence of having mutations present at birth that raised the risk of cancer -- either inherited or arising randomly -- furthering the mystery of what causes lung cancer in never-smokers.

"There appears to be something unique about lung cancer in people who have never smoked," Govindan said. "We didn't find a major role for inherited mutations, and we don't see evidence of large numbers of mutations, which would suggest exposure to secondhand smoke. About 60% of these tumors are found in females and 40% in males. Cancer in general is more common among men, but lung cancer in never-smokers, for some unexplained reasons, is more common among women. It is possible additional genes are involved with predispositions to cancers of this kind, and we just don't know what those are yet."

The study also shed light on the immune profiles of these tumors, which could help explain why most of them do not respond well to a type of immunotherapy called checkpoint inhibitors. Unlike the smokers' lung tumors studied, very few of the never-smokers' tumors included immune cells or immune checkpoint molecules that these drugs trigger to fight the cancer.

"The most important finding is that we identified actionable mutations in the vast majority of these patients -- between 80% and 90%," Govindan said. "Our study highlights the need to obtain high-quality tumor biopsies for clinical genomic testing in these patients, so we can identify the best targeted therapies for their individual tumors."

This work was supported by the National Institutes of Health (NIH), grant numbers 2U54HG00307910 and U01CA214195; and the Mesothelioma Biomarker Discovery Laboratory.

              https://www.sciencedaily.com/releases/2021/09/210930171022.htm

Wednesday, September 29, 2021

Simple Means of Diagnosing Ecosystem Health

Scientists say the health of a terrestrial ecosystem can be largely determined by three variables: vegetation's ability to take up carbon, its efficiency in using that carbon and its efficiency in using water.

By Steve Lundeberg, Oregon State University

The findings, published in Nature, are important because scientists and policymakers need easier, faster and less expensive ways to determine how the ecosystems humans rely on respond to climate and environmental changes, including impacts caused by people.

"We used these complex, continuous data to develop equations that can be applied with fewer measurements to monitor forest response to climate and other factors," Law said.

The team of researchers, led by the Max Planck Institute for Biogeochemistry in Jena, Germany, used satellite observations, mathematical models and multiple environmental data streams to determine that those three factors combine to represent more than 70% of total ecosystem function.

Put another way, those three indicators together account for more than 70% of the variation in overall ecosystem function: if an ecosystem's carbon uptake, carbon-use efficiency and water-use efficiency are all strong, most of what the ecosystem is supposed to do is likely being done well.

"Ecosystems on the Earth's land surface support multiple functions and services that are critical for society," said Law, professor emeritus in the OSU College of Forestry. "Those functions and services include biomass production, plants' efficiency in using sunlight and water, water retention, climate regulation and, ultimately, food security. Monitoring these key indicators allows for describing ecosystem function in a way that summarizes its ability to adapt, survive and thrive as the climate and environment change."

Water- and carbon-use efficiency are closely linked with climate and with aridity, which suggests climate change will play a big role in shaping ecosystem function over the coming years, the scientists say.

Among the building blocks of the current research are data from five semi-arid ponderosa pine sites where Law has been conducting research for 25 years.

Those sites are in the AmeriFlux network, a collection of locations in North, South and Central America managed by principal investigators like Law that measure ecosystem carbon dioxide, water and energy "fluxes," or exchanges with the atmosphere. AmeriFlux is part of the international FLUXNET project, and data from 203 FLUXNET sites representing a variety of climate zones and vegetation types were analyzed for the study.

Measuring ecosystem health has long been challenging given the complexities of ecosystem structure and how systems respond to environmental change, said Law, who has been researching the quantification of forest health for decades.

"In the 1980s, I was working on the development of indicators including similar carbon-use efficiency, and many of the measurements were incorporated in the Forest Service's Forest Health Monitoring plots," Law said. "The new flux paper shows how continuous data can be used to develop algorithms to apply in monitoring forest condition, and for evaluating and improving ecosystem models that are used in estimating the effects of climate on ecosystem carbon uptake and water use."

The water-use indicator is a combination of metrics that relate to an ecosystem's water-use efficiency: the carbon taken up per amount of water transpired by plants through their leaves. The carbon-use efficiency indicator compares the carbon that is respired with the carbon taken up; in respiration, plants convert the sugars produced during photosynthesis into energy.
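To make those two indicators concrete, here is a minimal sketch of how they might be computed from flux-tower-style measurements. The variable names and the exact formulas are illustrative assumptions, not the study's formal definitions.

    # Illustrative sketch only: rough versions of the two efficiency indicators
    # computed from flux-style inputs. Names and formulas are assumptions.

    def water_use_efficiency(carbon_uptake, water_transpired):
        """Carbon taken up per unit of water transpired (e.g., gC per mm of water)."""
        return carbon_uptake / water_transpired

    def carbon_use_efficiency(carbon_uptake, carbon_respired):
        """Fraction of the carbon taken up that is retained rather than respired."""
        return 1.0 - carbon_respired / carbon_uptake

    # Hypothetical daily values for a single site:
    print(water_use_efficiency(carbon_uptake=6.0, water_transpired=3.0))  # 2.0
    print(carbon_use_efficiency(carbon_uptake=6.0, carbon_respired=3.6))  # 0.4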

"Using three major factors, we can explain almost 72% of the variability within ecosystem functions," said Mirco Migliavacca, the study's lead author and a researcher at the Max Planck Institute for Biogeochemistry.

The three functional indicators depend heavily, Law said, on the structure of vegetation -- greenness, nitrogen content of leaves, vegetation height and biomass. That points to the importance of ecosystem structure, which can be altered by disturbances such as fire and also by forest management practices.

https://www.sciencedaily.com/releases/2021/09/210929142708.htm

Tuesday, September 28, 2021

Do Smart People Lean Economically Conservative?

By Ross Pomeroy, RealClearScience

September 25, 2021 – The overwhelming majority of social psychologists are liberal, so that could at least partly explain why the field's scientific literature is overflowing with studies linking conservative political views to lower levels of intelligence.

"That's just what the data say," psychologists might counter, glossing over the publication bias, p-hacking, and slanted studies that are rife within the discipline.

A new meta-analysis published in the Personality and Social Psychology Bulletin might be an inconvenient fact then. Drs. Alexander Jedinger and Axel M. Burger, research scientists at the Leibniz Institute for the Social Sciences in Cologne, Germany, aggregated and analyzed 23 studies that explored whether there was an association between cognitive ability and economic ideology. In total, the meta-analysis included over 46,000 participants from the U.S., the Netherlands, Britain, Sweden, and Turkey.

The outcome? Jedinger and Burger found a small (r = .07), statistically significant (p = .008) link between higher cognitive ability and economic conservatism, defined as "opposition toward governmental intervention in markets and the acceptance of economic inequality". To paraphrase, smarter people tended to favor free markets.

Now, r = .07 is quite a small effect size (r runs from -1 for a perfect negative correlation to 1 for a perfect positive one, and r = .07 corresponds to r² ≈ .005, about half of one percent of variance explained), so it's highly unlikely that greater cognitive ability is meaningfully tied to an affinity for free markets. More likely, the authors write, the result is confounded by the fact that smarter people tend to be wealthier, making them "less supportive of governmental regulations of markets, and redistributive social policies because they have more to lose from these measure[s]."

That's a reasonable explanation. Jedinger and Burger should be commended for their warranted skepticism.

One might have hoped that an international team of psychologists would've offered similar learned suspicion six years ago when their meta-analysis linked lower cognitive ability to right-wing ideology. They found a paltry effect size of r = -0.2, yet that didn't stop them from concluding that "cognitive ability is an important factor in the genesis of ideological attitudes and prejudice and thus should become more central in theorizing and model building."

In the end, neither the current meta-analysis nor the one from 2015 really tells us anything useful, but they might be used as fodder for popular psychology books.

Source: Jedinger A, Burger AM. Do Smarter People Have More Conservative Economic Attitudes? Assessing the Relationship Between Cognitive Ability and Economic Ideology. Personality and Social Psychology Bulletin. September 2021.

          Are Smarter People More Economically Conservative? | RealClearScience

Monday, September 27, 2021

Important Issue with Tesla’s Autopilot

Once the Autopilot self-driving tech is enabled on Tesla cars, human drivers tend to pay less attention to what's happening on the road.

By David Nield for ScienceAlert

September 26, 2021 -- The study highlights the awkward in-between phase that we're now in: Self-driving tech has become good enough to handle many aspects of staying on the road, but can't be relied upon to take over everything, all of the time.

That is potentially more dangerous than both fully human driving and fully automated driving, because when people get behind the wheel they assume they don't have to give their full attention to every part of the driving experience – as this study shows.

"Visual behavior patterns change before and after Autopilot disengagement," write the researchers in their published paper.  "Before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving."

"The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead."

As capable as it is, at the moment Autopilot is unable to drive a car on its own in every scenario. Tesla itself says that Autopilot is "designed to assist you with the most burdensome parts of driving" and that its features still "require active driver supervision and do not make the vehicle autonomous".

As part of an ongoing study on driving and advanced technology, researchers from Massachusetts Institute of Technology (MIT) analyzed driver posture and face position to determine where their eyes were focussed. Using data collected since 2016, the team compared 290 incidences of drivers switching off the Autopilot feature, comparing their behavior before the disengagement with their actions after.

Data across almost 500,000 miles (over 800,000 kilometers) of travel was used for the study.

Of the off-road glances observed while Autopilot was enabled, most were directed at the large screen at the center of every Tesla's dashboard. The researchers found that 22 percent of these glances exceeded two seconds with Autopilot switched on, compared with only 4 percent with Autopilot off.

Off-road glances were longer on average while Autopilot was engaged. During manual driving, glances to side windows, the side mirrors, and the rearview mirror were all more likely. The researchers also developed a simulation model to estimate glance behavior across a wider set of data.
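As a rough illustration of the comparison the researchers describe, the sketch below computes the share of off-road glances lasting longer than two seconds for two sets of glance durations. The duration values are invented placeholders; only the two-second threshold comes from the figures reported above, and this is not the MIT team's data or code.

    # Toy illustration of the glance comparison described above (hypothetical data).

    def long_glance_share(glance_durations_s, threshold_s=2.0):
        """Fraction of off-road glances lasting longer than the threshold."""
        long_glances = [g for g in glance_durations_s if g > threshold_s]
        return len(long_glances) / len(glance_durations_s)

    autopilot_on = [0.8, 2.4, 1.1, 3.0, 0.6, 2.2, 1.9, 0.7]    # hypothetical seconds
    autopilot_off = [0.5, 0.9, 1.2, 0.7, 2.1, 0.8, 0.6, 1.0]

    print(f"Autopilot on:  {long_glance_share(autopilot_on):.0%} of glances > 2 s")
    print(f"Autopilot off: {long_glance_share(autopilot_off):.0%} of glances > 2 s")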

"This change in behavior could be caused by a misunderstanding of what the system can do and its limitations, which is reinforced when automation performs relatively well,"write the MIT researchers.

The team suggests that autonomous systems such as Autopilot should be watching drivers as well as the road, showing warnings and adjusting system behavior depending on how attentive the human behind the wheel is being. Right now, Autopilot uses pressure on the steering wheel to judge whether or not someone is still paying attention.

It's also important to make drivers fully aware of what self-driving tech can and cannot do, the researchers say. While Tesla tells drivers that some attention is still required, it does call its latest software update Full Self-Driving – which it isn't.

The latest study doesn't make any link between attention span and safety, so there are no conclusions to be drawn here about whether or not Autopilot is more or less safe than manual driving. What is clear is that it makes drivers pay less attention to the road.

"The model in this case can enable new safety benefit analysis through simulation that can inform the policy making process and the design of driver support systems," conclude the researchers.

The research has been published in Accident Analysis & Prevention.

https://www.sciencealert.com/study-shows-that-tesla-autopilot-reduces-the-attention-levels-of-drivers

Sunday, September 26, 2021

Reactor Could Make Fuel on Mars

A Gas Station on Mars?  University of Cincinnati Engineers Consider the Possibilities

By Michael Miller, UC News

September 22, 2021 -- Engineers at the University of Cincinnati are developing new ways to convert greenhouse gases to fuel to address climate change and get astronauts home from Mars.

UC College of Engineering and Applied Science assistant professor Jingjie Wu and his students used a carbon catalyst in a reactor to convert carbon dioxide into methane. Known as the “Sabatier reaction,” after the late French chemist Paul Sabatier, it’s a process the International Space Station uses to scrub carbon dioxide from the air the astronauts breathe and to generate rocket fuel to keep the station in high orbit.
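For reference, the Sabatier reaction combines carbon dioxide with hydrogen over a catalyst to produce methane and water, releasing heat (roughly 165 kilojoules per mole of CO2):

    CO2 + 4 H2 → CH4 + 2 H2O

The carbon dioxide can be drawn from the Martian atmosphere, but as the equation shows, the hydrogen feedstock still has to be supplied, for example from electrolyzed water.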

But Wu is thinking much bigger.

The Martian atmosphere is composed almost entirely of carbon dioxide. Astronauts could save half the fuel they need for a return trip home by making what they need on the red planet once they arrive, Wu said.

“It’s like a gas station on Mars. You could easily pump carbon dioxide through this reactor and produce methane for a rocket,” Wu said.

UC’s study was published in the journal Nature Communications with collaborators from Rice University, Shanghai University and East China University of Science and Technology.

Wu began his career in chemical engineering by studying fuel cells for electric vehicles but began looking at carbon dioxide conversion in his chemical engineering lab about 10 years ago.

“I realized that greenhouse gases were going to be a big issue in society,” Wu said. “A lot of countries realized that carbon dioxide is a big issue for the sustainable development of our society. That’s why I think we need to achieve carbon neutrality.”

The Biden Administration has set a goal of achieving a 50% reduction in greenhouse gas pollutants by 2030 and an economy that relies on renewable energy by 2050.

“That means we’ll have to recycle carbon dioxide,” Wu said.

Wu and his students, including lead author and UC doctoral candidate Tianyu Zhang, are experimenting with different catalysts such as graphene quantum dots — layers of carbon just nanometers across — that can increase the yield of methane.

Wu said the process holds promise to help mitigate climate change. But it also has a big commercial advantage in producing fuel as a byproduct.

“The process is 100 times more productive than it was just 10 years ago. So you can imagine that progress will come faster and faster,” Wu said. “In the next 10 years, we’ll have a lot of startup companies to commercialize this technique.”

Wu’s students are using different catalysts to produce not only methane but ethylene. Called the world’s most important chemical, ethylene is used in the manufacture of plastics, rubber, synthetic clothing and other products.

“In the future we’ll develop other catalysts that can produce more products,” said Zhang, a doctoral student in chemical engineering.

Like his professor, Zhang said he sees a bright future for green energy.

“Green energy will be very important. In the future, it will represent a huge market. So I wanted to work on it,” Zhang said.

Synthesizing fuel from carbon dioxide becomes even more commercially viable when coupled with renewable energy such as solar or wind power, Wu said.

“Right now we have excess green energy that we just throw away. We can store this excess renewable energy in chemicals,” he said.

The process is scalable for use in power plants that can generate tons of carbon dioxide. And it’s efficient since the conversion can take place right where excess carbon dioxide is produced.

Wu said advances in fuel production from carbon dioxide make him more confident that humans will set foot on Mars in his lifetime.

“Right now if you want to come back from Mars, you would need to bring twice as much fuel, which is very heavy,” he said. “And in the future, you’ll need other fuels. So we can produce methanol from carbon dioxide and use them to produce other downstream materials. Then maybe one day we could live on Mars.”

https://www.uc.edu/news/articles/2021/09/uc-reactor-converts-carbon-dioxide-to-fuel-to-address-climate-change.html 

Saturday, September 25, 2021

Ten Modern Civilizations Likely to Collapse

By Tim McMillan for The Debrief

September 24, 2021 – From the very moment humans started grouping in complex centralized societies, people became obsessed with the idea of civilization’s collapse. 

In fact, the destruction, salvation, and rebirth of society are some of the central tenets of virtually every major organized religion. Likewise, thanks to the psychological principle of recency, every generation tends to believe it will be the one that has to face Armageddon. 

In fairness, if history is any indication, every civilization is indeed inevitably bound for destruction. China and Egypt are notable examples of societies that recovered from collapse, although both modern nations differ vastly from their ancient counterparts. 

In the nearly 6,000 years of civilization's known history, even the mightiest of nations don't tend to last all that long by comparison. 

The prevailing belief is that the Roman Empire was one of the longest-lasting civilizations of all time. However, the 1,622-year reign of power often attributed to the Romans includes the total span of the Western Roman and Byzantine Empires. The “classic” Roman Empire, based around Rome and associated with Julius Caesar, actually fell in 476 A.D. after only 499 years. 

And while there are plenty of examples of societies enduring hundreds, in some cases thousands, of years, a civilization’s collapse tends to be pretty swift. Take, for example, the Western Roman Empire. The classical Romans went from controlling an area nearly the size of the United States in 390 A.D. to ceasing to exist in a mere 86 years. 

Throughout the centuries, as societies have risen and fallen, scholars have simultaneously pondered what causes a civilization to fall into ruin. 

Having spent decades studying 19 past major civilizations, one of the early 20th Century’s foremost experts on international relations, British historian Arnold J. Toynbee concluded, “Great civilizations are not murdered. They commit suicide.” 

Most modern historians and anthropologists believe Toynbee was partially correct, generally agreeing that civilizations collapse due to a complex and interconnected mix of internal and external factors. 

These factors include climatic instability, ecological degradation, financial and political inequality, economic complexity, and menacingly uncontrollable external forces like natural disasters, plagues, or wars. 

Conversely, some scholars suggest that no civilization can withstand the test of time. In this vein, Dr. Luke Kemp of the Centre for the Study of Existential Risk argues that civilizations are just complex systems, bound by the same theory of “normal accidents” that regularly cause failure in complex technological systems. 

Research by evolutionary biologist Dr. Indre Zliobaite suggests extinction is a persistent threat whenever a species has to constantly fight for survival amongst numerous competitors and within a changing environment. Dr. Zliobaite and her colleagues call this perennial struggle for survival the “Red Queen Effect.” 

So while there is no single agreed-upon theory for why civilizations collapse, The Debrief decided to examine the road to collapse for the “2021 Top 10 Most Powerful Nations,” according to U.S. News and World Report and the Wharton School of the University of Pennsylvania. 

Each nation was judged on how well it currently ranks in ten factors generally accepted as influencing a society’s downfall. These factors were: 

  1. Susceptibility to Climate Change, based on the Global Climate Risk Index.
  2. Ecological Degradation, according to the Environmental Performance Index.
  3. Equality of Wealth, according to the Gini Index.
  4. Political Equality and Social Freedoms, based on Freedom House's Freedom Index.
  5. Economic Complexity, based on the Economic Complexity Index, which measures the current state of a country’s productive knowledge, from the Growth Lab at Harvard University’s Center for International Development.
  6. Response to National Crisis, measured by the disparity between a nation’s GDP in the third quarter of 2020 and in spring of 2020 at the onset of the COVID-19 pandemic.
  7. Status of National Defense, according to Global Firepower’s 2021 Military Strength Ranking.
  8. Political Stability, based on Global Economy’s Political Stability Index.
  9. Availability of Natural Resources, according to The Changing Wealth of Nations: Measuring Sustainable Development in the New Millennium report by the World Bank.
  10. Innovation, based on the number of patent applications filed by a nation to the World Intellectual Property Organization.

Each country was then given an overall raw score based on the collective scores in these individual categories.
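As a sketch of how such per-factor scores might be combined, the snippet below sums ten scores into a raw total for one hypothetical country. The scores, the equal weighting, and the assumption that the total is a simple unweighted sum are placeholders for illustration; The Debrief's actual data and method are not reproduced here.

    # Hypothetical illustration of combining per-factor scores into a raw total.
    # Factor scores are invented; higher is taken to mean better standing.

    factor_names = [
        "climate_risk", "ecological_degradation", "wealth_equality",
        "political_freedom", "economic_complexity", "crisis_response",
        "national_defense", "political_stability", "natural_resources", "innovation",
    ]

    def raw_score(scores_by_factor):
        """Sum the ten per-factor scores into one overall raw score."""
        return sum(scores_by_factor[name] for name in factor_names)

    example_country = {name: 5 for name in factor_names}  # neutral placeholder scores
    example_country["climate_risk"] = 2                   # e.g., high climate exposure
    print(raw_score(example_country))                     # 47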

So without further ado, here is The Debrief’s list of the most powerful nations most likely headed for collapse. [Listed from tenth most likely to fail down to the most likely.]

10        Japan

  9        South Korea

  8        United Kingdom (tied)

  8        USA (tied)

  6        Germany

  5        France

  4        China

  3        Russia

  2        United Arab Emirates

  1        Saudi Arabia

               https://thedebrief.org/10-modern-civilizations-on-the-road-to-collapse/ 

Friday, September 24, 2021

Surprising New Solid-state Battery

Engineers create a high performance all-solid-state battery with a pure-silicon anode

UC San Diego News Center

September 23, 2021 – Engineers have created a new type of battery that weaves two promising battery sub-fields into a single device. The battery uses both a solid-state electrolyte and an all-silicon anode, making it a silicon all-solid-state battery. The initial rounds of tests show that the new battery is safe, long lasting and energy dense. It holds promise for a wide range of applications, from grid storage to electric vehicles. 

The battery technology is described in the Sept. 24, 2021 issue of the journal Science. University of California San Diego nanoengineers led the research, in collaboration with researchers at LG Energy Solution. 

Silicon anodes are famous for their energy density, about 10 times that of the graphite anodes most often used in today’s commercial lithium-ion batteries. On the other hand, silicon anodes are infamous for how they expand and contract as the battery charges and discharges, and for how they degrade with liquid electrolytes. These challenges have kept all-silicon anodes out of commercial lithium-ion batteries despite the tantalizing energy density. The new work published in Science provides a promising path forward for all-silicon anodes, thanks to the right electrolyte.

"With this battery configuration, we are opening a new territory for solid-state batteries using alloy anodes such as silicon," said Darren H. S. Tan, the lead author on the paper. He recently completed his chemical engineering PhD at the UC San Diego Jacobs School of Engineering and co-founded a startup UNIGRID Battery that has licensed this technology. 

Next-generation solid-state batteries with high energy densities have always relied on metallic lithium as an anode. But that choice restricts battery charge rates and requires elevated temperatures (usually 60 degrees Celsius or higher) during charging. The silicon anode overcomes these limitations, allowing much faster charge rates at room to low temperatures, while maintaining high energy densities. 

The team demonstrated a laboratory scale full cell that delivers 500 charge and discharge cycles with 80% capacity retention at room temperature, which represents exciting progress for both the silicon anode and solid state battery communities.

Silicon as an anode to replace graphite

Silicon anodes, of course, are not new. For decades, scientists and battery manufacturers have looked to silicon as an energy-dense material to mix into, or completely replace, conventional graphite anodes in lithium-ion batteries. Theoretically, silicon offers approximately 10 times the storage capacity of graphite. In practice, however, lithium-ion batteries with silicon added to the anode to increase energy density typically suffer from real-world performance issues: in particular, the number of times the battery can be charged and discharged while maintaining performance is not high enough.  
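That "approximately 10 times" figure can be checked with a quick Faraday's-law estimate. The sketch below uses standard textbook values and assumes fully lithiated graphite (LiC6) and the Li15Si4 phase for silicon (3.75 lithium atoms per silicon); it is a back-of-the-envelope check, not a calculation from the paper.

    # Theoretical gravimetric capacity from Faraday's law:
    #   capacity (mAh/g) = n * F / (3.6 * M)
    # n = lithium stored per formula unit, F = Faraday constant, M = molar mass (g/mol).

    F = 96485.0  # C/mol

    def capacity_mAh_per_g(n_li, molar_mass):
        return n_li * F / (3.6 * molar_mass)

    graphite = capacity_mAh_per_g(n_li=1.0, molar_mass=6 * 12.011)   # LiC6    -> ~372 mAh/g
    silicon = capacity_mAh_per_g(n_li=3.75, molar_mass=28.086)       # Li15Si4 -> ~3580 mAh/g

    print(f"graphite: {graphite:.0f} mAh/g, silicon: {silicon:.0f} mAh/g, "
          f"ratio: {silicon / graphite:.1f}x")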

Much of the problem is caused by the interaction between silicon anodes and the liquid electrolytes they have been paired with. The situation is complicated by large volume expansion of silicon particles during charge and discharge. This results in severe capacity losses over time. 

“As battery researchers, it’s vital to address the root problems in the system. For silicon anodes, we know that one of the big issues is the liquid electrolyte interface instability," said UC San Diego nanoengineering professor Shirley Meng, the corresponding author on the Science paper, and director of the Institute for Materials Discovery and Design at UC San Diego. “We needed a totally different approach,” said Meng.

Indeed, the UC San Diego led team took a different approach: they eliminated the carbon and the binders that went with all-silicon anodes. In addition, the researchers used micro-silicon, which is less processed and less expensive than nano-silicon that is more often used.

An all solid-state solution

In addition to removing all carbon and binders from the anode, the team also removed the liquid electrolyte. Instead, they used a sulfide-based solid electrolyte. Their experiments showed this solid electrolyte is extremely stable in batteries with all-silicon anodes. 

"This new work offers a promising solution to the silicon anode problem, though there is more work to do," said professor Meng, "I see this project as a validation of our approach to battery research here at UC San Diego. We pair the most rigorous theoretical and experimental work with creativity and outside-the-box thinking. We also know how to interact with industry partners while pursuing tough fundamental challenges." 

Past efforts to commercialize silicon alloy anodes have mainly focused on silicon-graphite composites, or on combining nano-structured particles with polymeric binders. But they still struggle with poor stability.

By swapping out the liquid electrolyte for a solid electrolyte, and at the same time removing the carbon and binders from the silicon anode, the researchers avoided a series of related challenges that arise when anodes become soaked in the organic liquid electrolyte as the battery functions. 

At the same time, by eliminating the carbon in the anode, the team significantly reduced the interfacial contact (and unwanted side reactions) with the solid electrolyte, avoiding continuous capacity loss that typically occurs with liquid-based electrolytes.

This two-part move allowed the researchers to fully reap the benefits of low cost, high energy and environmentally benign properties of silicon.

Impact & Spin-off Commercialization

“The solid-state silicon approach overcomes many limitations in conventional batteries. It presents exciting opportunities for us to meet market demands for higher volumetric energy, lowered costs, and safer batteries especially for grid energy storage,” said Darren H. S. Tan, the first author on the Science paper. 

Sulfide-based solid electrolytes were often believed to be highly unstable. However, this was based on traditional thermodynamic interpretations used in liquid electrolyte systems, which did not account for the excellent kinetic stability of solid electrolytes. The team saw an opportunity to utilize this counterintuitive property to create a highly stable anode.

Tan is the CEO and cofounder of a startup, UNIGRID Battery, that has licensed the technology for these silicon all solid-state batteries.

In parallel, related fundamental work will continue at UC San Diego, including additional research collaboration with LG Energy Solution. 

“LG Energy Solution is delighted that the latest research on battery technology with UC San Diego made it onto the journal of Science, a meaningful acknowledgement,” said Myung-hwan Kim, President and Chief Procurement Officer at LG Energy Solution. “With the latest finding, LG Energy Solution is much closer to realizing all-solid-state battery techniques, which would greatly diversify our battery product lineup.”

“As a leading battery manufacturer, LGES will continue its effort to foster state-of-the-art techniques in leading research of next-generation battery cells,” added Kim. LG Energy Solution said it plans to further expand its solid-state battery research collaboration with UC San Diego.

The study was supported by LG Energy Solution’s open innovation program, which actively supports battery-related research. LGES has been working with researchers around the world to foster related techniques. 

Title of Paper

“Carbon Free High Loading Silicon Anodes Enabled by Sulfide Solid Electrolytes,” in the Sept. 24, 2021 issue of Science.

                   https://ucsdnews.ucsd.edu/pressrelease/meng_science_2021

Thursday, September 23, 2021

Human Learning Can Be Copied in Solid Matter

Findings may help to advance artificial intelligence

From:  Rutgers University

September 22, 2021 -- Rutgers researchers and their collaborators have found that learning -- a universal feature of intelligence in living beings -- can be mimicked in synthetic matter, a discovery that in turn could inspire new algorithms for artificial intelligence (AI).

The study appears in the journal PNAS.

One of the fundamental characteristics of humans is the ability to continuously learn from and adapt to changing environments. But until recently, AI has been narrowly focused on emulating human logic. Now, researchers are looking to mimic human cognition in devices that can learn, remember and make decisions the way a human brain does.

Emulating such features in the solid state could inspire new algorithms in AI and neuromorphic computing that would have the flexibility to address uncertainties, contradictions and other aspects of everyday life. Neuromorphic computing mimics the neural structure and operation of the human brain, in part, by building artificial nerve systems to transfer electrical signals that mimic brain signals.

Researchers from Rutgers, Purdue and other institutions studied how the electrical conductivity of nickel oxide, a special type of insulating material, responded when its environment was changed repeatedly over various time intervals.

"The goal was to find a material whose electrical conductivity can be tuned by modulating the concentration of atomic defects with external stimuli such as oxygen, ozone and light," said Subhasish Mandal, a postdoctoral associate in the Department of Physics and Astronomy at Rutgers-New Brunswick. "We studied how this material behaves when we dope the system with oxygen or hydrogen, and most importantly, how the external stimulation changes the material's electronic properties."

The researchers found that when the gas stimulus changed rapidly, the material couldn't respond in full. It stayed in an unstable state in either environment and its response began to decrease. When the researchers introduced a noxious stimulus such as ozone, the material began to respond more strongly only to decrease again.

"The most interesting part of our results is that it demonstrates universal learning characteristics such as habituation and sensitization that we generally find in living species," Mandal said. "These material characteristics in turn can inspire new algorithms for artificial intelligence. Much as collective motion of birds or fish have inspired AI, we believe collective behavior of electrons in a quantum solid can do the same in the future.

"The growing field of AI requires hardware that can host adaptive memory properties beyond what is used in today's computers," he added. "We find that nickel oxide insulators, which historically have been restricted to academic pursuits, might be interesting candidates to be tested in future for brain-inspired computers and robotics."

The study included Distinguished Professor Karin Rabe from Rutgers and researchers from Purdue University, the University of Georgia and Argonne National Laboratory.

         https://www.sciencedaily.com/releases/2021/09/210922121828.htm

 

Wednesday, September 22, 2021

New Computer Mimics Human Brain Functions

A new way to solve the ‘hardest of the hard’ computer problems

By Jeff Grabmeier, Ohio State News

September 21, 2021 -- A relatively new type of computing that mimics the way the human brain works was already transforming how scientists could tackle some of the most difficult information processing problems.

Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed.

In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.

Using today’s state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.

“We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do,” Gauthier said.

“And reservoir computing was already a significant improvement on what was previously possible.”

The study was published today (Sept. 21, 2021) in the journal Nature Communications.

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the “hardest of the hard” computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said.

Dynamical systems, like the weather, are difficult to predict because just one small change in one condition can have massive effects down the line, he said.

One famous example is the “butterfly effect,” in which – in one metaphorical example – changes created by a butterfly flapping its wings can eventually influence the weather weeks later.

Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said.

It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical system into a “reservoir” of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
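A minimal sketch of that loop, in the echo-state-network style commonly used for reservoir computing, is shown below: a fixed random reservoir is driven by a signal, only a linear readout is trained, and the readout's predictions are then fed back in to forecast forward. The network size, the toy sine-wave signal and the parameters are illustrative choices, not the configuration used in the study.

    import numpy as np

    # Minimal echo-state-network sketch (illustrative only; not the study's code).
    rng = np.random.default_rng(0)
    N = 200                                        # reservoir neurons
    W_in = rng.uniform(-0.5, 0.5, (N, 1))          # fixed random input weights
    W = rng.uniform(-0.5, 0.5, (N, N))             # fixed random reservoir weights
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # keep spectral radius below 1

    def step(x, u):
        """Advance the reservoir state x given the next input value u."""
        return np.tanh(W @ x + W_in @ np.array([u]))

    # Drive the reservoir with a toy signal and record its states
    u = np.sin(np.arange(0, 60, 0.1))
    x, states = np.zeros(N), []
    for v in u[:-1]:
        x = step(x, v)
        states.append(x.copy())

    X = np.array(states[100:])                     # drop the warm-up transient
    Y = u[101:]                                    # targets: the next signal value

    # Train only the linear readout (ridge regression)
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

    # Autonomous forecast: feed each prediction back in as the next input
    u_pred = u[-1]
    for _ in range(5):
        x = step(x, u_pred)
        u_pred = float(x @ W_out)
        print(round(u_pred, 3))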

The larger and more complex the system and the more accurate that the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time that are needed to complete the task.

One issue has been that the reservoir of artificial neurons is a “black box,” Gauthier said, and scientists have not known exactly what goes on inside of it – they only know it works.

The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.

“We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?’” he said.

In this study, Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time.

They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect.

Their next-generation reservoir computing was a clear winner over today’s state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.

But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.
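In the next-generation approach the authors describe, the large random reservoir is replaced by a small feature vector built from a few time-delayed values of the signal plus their pairwise products, with only a linear ridge-regression readout trained on top; that is how so few "neurons" (features) can suffice. The sketch below illustrates that general idea on a toy signal; the delays, data and regularization are arbitrary choices, not the authors' implementation.

    import numpy as np

    # Sketch of a next-generation-style reservoir: features are the current value,
    # a few delayed values, and their pairwise products (illustrative only).

    def feature_vector(history):
        """history = [u(t), u(t-1), ..., u(t-k)] -> constant, linear and quadratic terms."""
        lin = np.asarray(history, dtype=float)
        quad = np.outer(lin, lin)[np.triu_indices(len(lin))]
        return np.concatenate(([1.0], lin, quad))

    u = np.sin(np.arange(0, 30, 0.1))           # toy signal
    k = 2                                       # number of extra delay taps
    X = np.array([feature_vector(u[i - k:i + 1][::-1]) for i in range(k, len(u) - 1)])
    Y = u[k + 1:]                               # targets: the next signal value

    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ Y)

    # Forecast a few steps ahead by feeding predictions back into the delay taps
    window = list(u[-1:-k - 2:-1])              # [u(t), u(t-1), u(t-2)]
    for _ in range(5):
        nxt = float(feature_vector(window) @ W_out)
        print(round(nxt, 3))
        window = [nxt] + window[:-1]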

An important reason for the speed-up is that the “brain” behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results.

Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.

“For our next-generation reservoir computing, there is almost no warming time needed,” Gauthier said.

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.

And once researchers are ready to train the reservoir computer to make the forecast, again, a lot less data is needed in the next-generation system.

In their test of the Lorenz forecasting task, the researchers could get the same results using 400 data points as the current generation produced using 5,000 data points or more, depending on the accuracy desired.

“What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient,” Gauthier said.

He and his colleagues plan to extend this work to tackle even more difficult computing problems, such as forecasting fluid dynamics.

“That’s an incredibly challenging problem to solve. We want to see if we can speed up the process of solving that problem using our simplified model of reservoir computing.”

Co-authors on the study were Erik Bollt, professor of electrical and computer engineering at Clarkson University; Aaron Griffith, who received his PhD in physics at Ohio State; and Wendson Barbosa, a postdoctoral researcher in physics at Ohio State.

The work was supported by the U.S. Air Force, the Army Research Office and the Defense Advanced Research Projects Agency.

https://news.osu.edu/a-new-way-to-solve-the-hardest-of-the-hard-computer-problems/

 

Tuesday, September 21, 2021

Toward Speedy Electronics That Overheat Less

How to cool it: Nano-scale discovery could help prevent overheating in electronics

By Daniel Strain -- University of Colorado Boulder

September 20, 2021 -- A team of physicists at CU Boulder has solved the mystery behind a perplexing phenomenon in the nano realm: why some ultra-small heat sources cool down faster if you pack them closer together. The findings, which will be published this week in the journal Proceedings of the National Academy of Sciences (PNAS), could one day help the tech industry design speedier electronic devices that overheat less.

“Often heat is a challenging consideration in designing electronics. You build a device then discover that it’s heating up faster than desired,” said study co-author Joshua Knobloch, postdoctoral research associate at JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST). “Our goal is to understand the fundamental physics involved so we can engineer future devices to efficiently manage the flow of heat.”

The research began with an unexplained observation. In 2015, researchers led by physicists Margaret Murnane and Henry Kapteyn at JILA were experimenting with bars of metal that were many times thinner than the width of a human hair on a silicon base. When they heated those bars up with a laser, something strange occurred.

“They behaved very counterintuitively,” Knobloch said. “These nano-scale heat sources do not usually dissipate heat efficiently. But if you pack them close together, they cool down much more quickly.”

Now, the researchers know why this happens. 

In the new study, they used computer-based simulations to track the passage of heat from their nano-sized bars. They discovered that when they placed the heat sources close together, the vibrations of energy they produced began to bounce off each other, scattering heat away and cooling the bars down. 

The group’s results highlight a major challenge in designing the next generation of tiny devices, such as microprocessors or quantum computer chips: When you shrink down to very small scales, heat does not always behave the way you think it should.

Atom by atom

The transmission of heat in devices matters, the researchers added. Even minute defects in the design of electronics like computer chips can allow temperature to build up, adding wear and tear to a device. As tech companies strive to produce smaller and smaller electronics, they’ll need to pay more attention than ever before to phonons—vibrations of atoms that carry heat in solids.

“Heat flow involves very complex processes, making it hard to control,” Knobloch said. “But if we can understand how phonons behave on the small scale, then we can tailor their transport, allowing us to build more efficient devices.”

To do just that, Murnane and Kapteyn and their team of experimental physicists joined forces with a group of theorists led by Mahmoud Hussein, professor in the Ann and H.J. Smead Department of Aerospace Engineering Sciences. His group specializes in simulating, or modeling, the motion of phonons.

“At the atomic scale, the very nature of heat transfer emerges in a new light,” said Hussein, who also has a courtesy appointment in the Department of Physics.

The researchers essentially recreated their experiment from several years before, but this time entirely on a computer. They modeled a series of silicon bars, laid side by side like the slats in a train track, and heated them up.

The simulations were so detailed, Knobloch said, that the team could follow the behavior of each and every atom in the model—millions of them in all—from start to finish. 

“We were really pushing the limits of memory of the Summit Supercomputer at CU Boulder,” he said.

Directing heat

The technique paid off. The researchers found, for example, that when they spaced their silicon bars far enough apart, heat tended to escape away from those materials in a predictable way. The energy leaked from the bars and into the material below them, dissipating in every direction.

When the bars got closer together, however, something else happened. As the heat from those sources scattered, it effectively forced that energy to flow more intensely in a uniform direction away from the sources—like a crowd of people in a stadium jostling against each other and eventually leaping out of the exit. The team denoted this phenomenon “directional thermal channeling.” 

“This phenomenon increases the transport of heat down into the substrate and away from the heat sources,” Knobloch said.

The researchers suspect that engineers could one day tap into this unusual behavior to gain a better handle on how heat flows in small electronics—directing that energy along a desired path, instead of letting it run wild.

For now, the researchers see the latest study as what scientists from different disciplines can do when they work together. 

“This project was such an exciting collaboration between science and engineering—where advanced computational analysis methods developed by Mahmoud’s group were critical for understanding new materials behavior uncovered earlier by our group using new extreme ultraviolet quantum light sources,” said Murnane, also a professor of physics.

https://www.colorado.edu/today/2021/09/20/cool-it-nano-scale-discovery-could-help-prevent-overheating-electronics

Monday, September 20, 2021

10 States Owe Uncle Sam $1 Billion in Interest Payments for $45.6 Billion in Borrowed Unemployment Funds

By Adam Andrzejewski

RealClearPolicy -- September 20, 2021 -- While unemployment checks seem to magically appear in bank accounts every month for millions of Americans collecting the benefits, that money comes with major strings attached for the states funding them.

Ten states and the U.S. Virgin Islands owe the federal government $45.6 billion for unemployment funds — at an interest rate of 2.3 percent.

These states owe more than $1 billion per year in interest payments alone.
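That figure follows directly from the loan terms: 2.3 percent of $45.6 billion works out to roughly $1.05 billion a year in interest, before any of the principal is repaid.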

When states ran out of their own money, Uncle Sam lent them cash to pay their out-of-work residents — called Title XII advances — money initially coming interest-free. But starting Sept. 13, the remaining balances must be paid with interest, according to reporting at Route-Fifty.com.

Now those 10 states and the Virgin Islands must cough up what they owe with interest — California, $19.5 billion; Colorado, $1 billion; Connecticut, $725 million; Illinois, $4.2 billion; Massachusetts, $2.3 billion; Minnesota, $1 billion; New Jersey, $193 million; New York, $8.9 billion; Pennsylvania, $721 million; Texas, $6.9 billion; and the U.S. territory of the Virgin Islands, $96 million.

Ohio, Hawaii, Nevada, and West Virginia recently paid off their debts before the interest kicked in, with some states using funds from the American Rescue Plan Act to repay the loans.

Since unemployment benefits are funded by taxes levied on businesses, those 10 states could see big tax hikes on employers next year.

California state government declared a $75 billion budget surplus last year. Colorado declared a $3.8 billion budget surplus last year. So, why do these states owe the federal government for unemployment benefits?

The vicious cycle of borrowing, spending, and hiking taxes is not only a waste of taxpayer money. It's a cycle that’s avoidable with responsible government.

https://www.realclearpolicy.com/articles/2021/09/20/10_states_owe_uncle_sam_1_billion_in_interest_payments_on_456_billion_in_borrowed_unemployment_funds_795019.html

Sunday, September 19, 2021

Dogs Really Are Our Best Friends

Recent discoveries reveal how dogs are hardwired to understand and communicate with people — even at birth

From: Insider Business

By Aylin Woodward September 16, 2021

Recent findings reveal that dogs are born ready to communicate with and understand people.

Studies show puppies can reciprocate human eye contact and follow gestures to locate food.

Research also suggests puppies raised with little human contact can understand gestures without training.

Dogs often seem uncannily shrewd about what we're trying to tell them.

A handful of recent studies offer surprising insights into the ways our canine companions are hardwired to communicate with people.

The most recent of those studies, published last week in the journal Scientific Reports, found that dogs can understand the difference between their owners' accidental and deliberate actions. Earlier this summer, another showed that even when puppies primarily grow up around other dogs — not humans — they are still better at understanding our gestures than wolf pups raised by people. Still other research describes how puppies are born ready to interact with humans, no training required.

"Dogs' communicative skills uniquely position them to fill the niche that they do alongside humans," Emily Bray, a canine-cognition researcher at the University of Arizona, Tucson, told Insider in an email. "Many of the tasks that they perform for us, now and in the past (i.e. herding, hunting, detecting, acting as service dogs), are facilitated by their ability to understand our cues."

Dogs recognize their owners' intentions

Sometimes, when giving a four-legged friend a treat, we drop it by accident. Other times, owners withhold treats to teach their dogs a lesson. 

According to last week's study, dogs can tell the difference between a clumsy human who intends to give them a treat and a person who is deliberately withholding that reward.

The researchers set up an experiment: A person and a dog were separated by a plastic barrier, with a small gap in the middle large enough for a hand to squeeze through. The barrier did not span the length of the room, however, so the dogs could go around it if they wanted. The human participants passed the dog a treat through the gap in three ways. First, they offered the morsel but suddenly dropped it on their side of the barrier and said, "Oops." Next, they attempted to pass the treat over, but the gap was blocked. Lastly, they offered the treat but subsequently pulled back their arm and laughed.

The experimenters tried this set-up on 51 dogs and timed how long it took each to walk around the barrier and retrieve the treat. The results showed that the dogs waited much longer to retrieve the treat when the experimenter had purposefully withheld it than when the experimenter dropped it or couldn't get it through the barrier. 

This suggests dogs can distinguish humans' intentional actions from their unintentional behavior and respond accordingly.

Even puppies raised with limited human contact know how to read us

Earlier this summer, Bray published a study analyzing the behavior of 8-week-old puppies — 375 of them, to be precise. The pups were being trained at Canine Companions, a service-dog organization in California. And they had grown up mostly with their litter mates, so had little one-on-one exposure to people.

Bray's team put the puppies through a series of tasks that measured the animals' ability to interact with humans. They measured how long it took the puppies to follow an experimenter's finger to find a hidden treat and how long they held eye contact.

The team found that once an experimenter spoke to the dogs, saying, "Puppy, look!" and made eye contact, the puppies successfully reciprocated that eye contact and could follow the gesture to locate the treats. 

"If you take away the preceding eye contact and vocal cue and give a signal that looks the same, dogs are not as likely to follow it," Bray said.

The researchers found that the puppies' performance on the tasks did not improve over the course of the experiment, suggesting this wasn't part of a learning process. Instead, they think, dogs are born with the social skills they need to read people and understand our intentions.

"We can assume that puppies started the task with the communicative ability necessary to be successful," Bray said. She added, though, that dogs' abilities overall can improve these as they age, just as humans' do. 

Her team had access to each puppy's pedigree, so could assess how related the 375 dogs were to one another. According to Bray, 40% of the variation in the puppies' performance could likely be explained by their genes, suggesting "genetics plays a large role in shaping an individual dog's cognition."

Dogs are more likely to ask humans for help than wolves raised by people

Research published in July further underscored the idea that dogs are hardwired to be "man's best friend."

The study compared 44 puppies raised with their litter mates at Canine Companions to 37 wolf puppies that received almost constant human care at a wildlife center in Minnesota. The researchers tested how well the dogs and wolves could find a treat hidden in one of two covered bowls by following a person's gaze and pointed finger.

The dog pups were twice as likely as their wolf counterparts to pick the right bowl, even though they'd spent far less time around people. Many of the puppies got it right on the first try, suggesting they didn't need training to follow those human gestures.

"Dogs have naturally better skills at understanding humans' cooperative communication than wolves do, even from puppyhood," Hannah Salomons, an animal cognition researcher at Duke University who co-authored the study, told Insider. "I would say, based on our results, that nature is definitely playing a greater role than nurture in this regard."

The dogs were also 30 times more likely to approach a stranger than the wolves, Salomons' group found. And in another task, in which the animals were trying to get a treat stuck inside a closed container, the dogs also spent more time looking to humans for help.

The wolves, by contrast, were more likely to try to tackle the problem on their own.

businessinsider.com/dogs-hardwired-to-communicate-with-humans-as-puppies-2021-9