Saturday, July 31, 2021

Adapting Plant Roots to a Hotter Planet

Supercomputer-powered 3D imaging of root systems to help breeders develop climate-change adapted plants for farmers and ease pressure on the food supply

From:  University of Texas at Austin, Texas Advanced Computing Center

July 29, 2021 -- The shoots of plants get all of the glory, with their fruit and flowers and visible structure. But it's the portion that lies below the soil -- the branching, reaching arms of roots and hairs pulling up water and nutrients -- that interests plant physiologist and computer scientist Alexander Bucksch, an associate professor of Plant Biology at the University of Georgia.

The health and growth of the root system has deep implications for our future.

Our ability to grow enough food to support the population despite a changing climate, and to fix carbon from the atmosphere in the soil, is critical to our survival and that of other species. The solutions, Bucksch believes, lie in the qualities of roots.

"When there is a problem in the world, humans can move. But what does the plant do?" he asked. "It says, 'Let's alter our genome to survive.' It evolves."

Until recently, farmers and plant breeders didn't have a good way to gather information about the root system of plants, or make decisions about the optimal seeds to grow deep roots.

In a paper published this month in Plant Physiology, Bucksch and colleagues introduce DIRT/3D (Digital Imaging of Root Traits), an image-based 3D root phenotyping platform that can measure 18 architecture traits from mature field-grown maize root crowns excavated using the Shovelomics technique.

In their experiments, the system reliably computed all traits, including the distance between whorls and the number, angles, and diameters of nodal roots for 12 contrasting maize genotypes, with 84 percent agreement with manual measurements. The research is supported by the ROOTS program of the Advanced Research Projects Agency-Energy (ARPA-E) and a CAREER award from the National Science Foundation (NSF).

"This technology will make it easier to analyze and understand what roots are doing in real field environments, and therefore will make it easier to breed future crops to meet human needs " said Jonathan Lynch, Distinguished Professor of Plant Science and co-author, whose research focuses on understanding the basis of plant adaptation to drought and low soil fertility.

DIRT/3D uses a motorized camera set-up that takes 2,000 images per root from every perspective. It uses a cluster of 10 Raspberry Pi micro-computers to synchronize the image capture from 10 cameras and then transfers the data to the CyVerse Data Store -- the national cyberinfrastructure for academic researchers -- for 3D reconstruction.
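The capture step lends itself to a simple fan-out pattern. The sketch below is a minimal illustration, not the DIRT/3D project's actual software: the hostnames, remote paths, and use of the stock raspistill tool are hypothetical placeholders. It simply asks each of ten Pi camera nodes over SSH to take a still at nearly the same moment.

```python
"""
Minimal sketch, assuming hypothetical hostnames and paths -- not the
DIRT/3D project code: a coordinator triggers a near-simultaneous capture
on ten Raspberry Pi camera nodes and reports the resulting image paths.
"""
import subprocess
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime

PI_HOSTS = [f"rootcam{i:02d}.local" for i in range(1, 11)]  # 10 hypothetical camera nodes

def capture_on(host: str, cam_index: int, stamp: str) -> str:
    """Trigger a single still capture on one Pi via SSH; return the remote file path."""
    remote_path = f"/home/pi/captures/{stamp}_cam{cam_index:02d}.jpg"
    subprocess.run(["ssh", host, f"raspistill -t 1 -o {remote_path}"], check=True)
    return remote_path

def capture_frame(stamp: str) -> list:
    """Fire all cameras as close to simultaneously as the network allows."""
    with ThreadPoolExecutor(max_workers=len(PI_HOSTS)) as pool:
        futures = [pool.submit(capture_on, host, i, stamp)
                   for i, host in enumerate(PI_HOSTS, start=1)]
        return [f.result() for f in futures]

if __name__ == "__main__":
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    # A full root-crown scan repeats this while the rig rotates, accumulating
    # the ~2,000 images per root that are then uploaded (for example with
    # rsync or the data store's own client) for 3D reconstruction.
    paths = capture_frame(stamp)
    print(f"captured {len(paths)} images for frame {stamp}")
```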

The system generates a 3D point cloud that represents every root node and whorl -- "a digital twin of the root system," according to Bucksch, that can be studied, stored, and compared.

The data collection takes only a few minutes, comparable to an MRI or X-ray scan. But the rig costs only a few thousand dollars to build, as opposed to half a million, making the technology scalable enough for high-throughput measurements of thousands of specimens, which is needed to develop new crop plants for farmers. The 3D scanner also enables basic science and addresses the problem of pre-selection bias caused by sample limitations in plant biology.

"Biologists primarily look at the one root structure that is most common -- what we call the dominant root phenotype," Bucksch explained. "But people forgot about all of the other phenotypes. They might have a function and a role to fulfill. But we just call it noise," Bucksch said. "Our system will look into that noise in 3D and see what functions these roots might have."

Individuals who use DIRT/3D to image roots will soon be able to upload their data to a service called PlantIT that can perform the same analyses that Bucksch and his collaborators describe in their recent paper, providing information on a wide range of traits from young nodal root length to root system eccentricity. This data lets researchers and breeders compare the root systems of plants from the same or different seeds.

The framework is made possible by massive number-crunching capabilities behind the scenes, provided by the Texas Advanced Computing Center (TACC), which receives the data from the CyVerse cyberinfrastructure for processing.

Though it takes only five minutes to image a root crown, the data processing to create the point cloud and quantify the features takes several hours and requires many processors computing in parallel. Bucksch uses the NSF-funded Stampede2 supercomputer at TACC through an allocation from the Extreme Science and Engineering Discovery Environment (XSEDE) to enable his research and power the public DIRT/2D and DIRT/3D servers.

DIRT/3D is an evolution of a previous 2D version of the software that can derive information about roots using only a mobile phone camera. Since it launched in 2016, DIRT/2D has proven to be a useful tool for the field. Hundreds of plant scientists worldwide use it, including researchers at leading agribusinesses.

The project is part of ARPA-E's ROOTS program, which is working to develop new technologies that increase carbon storage within the soil and root systems of plants.

"The DIRT/3D platform enables researchers to identify novel root traits in crops, and breed plants with deeper, more extensive roots," said ARPA-E ROOTS Program Director Dr. David Babson. "The development of these kind of technologies will help promote climate change mitigation and resilience while also giving farmers the tools to lower costs and increase crop productivity. We're excited to see the progress that the team at PSU and UGA has made over the course of their award."

The tool has led to the discovery of several genes responsible for root traits. Bucksch cites a recent study of Striga hermonthica resistance in sorghum as the kind of outcome he hopes for from users of DIRT/3D. Striga, a parasitic weed, regularly destroys sorghum harvests across huge areas of Africa.

The lead researcher, Dorota Kawa, a postdoc at UC Davis, found that some forms of sorghum have Striga-resistant roots. She derived traits from these roots using DIRT/2D, and then mapped the traits to genes that regulate the release of chemicals from the roots that trigger Striga germination.

DIRT/3D improves the quality of the root characterizations done with DIRT/2D and captures features that are only accessible through 3D scanning.

The challenges facing farmers are expected to grow in coming years, with more droughts, higher temperatures, low soil fertility, and the need to grow food in ways that produce fewer greenhouse gases. Roots that are adapted to these future conditions will help ease pressure on the food supply.

"The potential, with DIRT/3D, is helping us live on a hotter planet and managing to have enough food," Bucksch said. "That is always the elephant in the room. There could be a point where this planet can't produce enough food for everybody anymore, and I hope we, as a science community, can avoid this point by developing better drought adapted and CO2 sequestering plants."

                 https://www.sciencedaily.com/releases/2021/07/210729143426.htm

Friday, July 30, 2021

Blood Test for Your Body Clock?

From: University of Colorado Boulder

By Lisa Marshall

July 27, 2021 – What time is your body clock set on?

The answer, mounting research suggests, can influence everything from your predisposition to diabetes, heart disease and depression to the optimal time for you to take medication. But unlike routine blood tests for cholesterol and hormone levels, there’s no easy way to precisely measure a person’s individual circadian rhythm.

At least not yet.

New CU Boulder research, published in the Journal of Biological Rhythms, suggests that day could come in the not-too-distant future. The study found it’s possible to determine the timing of a person’s internal circadian or biological clock by analyzing a combination of molecules in a single blood draw.

“If we can understand each individual person’s circadian clock, we can potentially prescribe the optimal time of day for them to be eating or exercising or taking medication,” said senior author Christopher Depner, who conducted the study while an assistant professor of integrative physiology at CU Boulder. “From a personalized medicine perspective, it could be groundbreaking.”

Syncing our life with our clock

For decades, researchers have known that a central ‘master clock’ in a region of the brain called the hypothalamus helps to regulate the body’s 24-hour cycle, including when we naturally feel sleepy at night and have the urge to wake up in the morning. 

More recently, studies reveal that nearly every tissue or organ in the body also has an internal timing device, synced with that master clock, dictating when we secrete certain hormones, how our heart and lungs function throughout the day, the cadence of our metabolism of fats and sugars, and more.

As many as 82% of protein-coding genes that are drug targets show 24-hour time-of-day patterns, suggesting many medications could work better and yield fewer side effects if administration was timed appropriately.

And when our internal rhythm is at odds with our sleep-wake cycle, that can boost risk of an array of diseases, said study co-author Ken Wright, a professor of integrative physiology and director of the Sleep and Chronobiology Laboratory at CU Boulder.

“If we want to be able to fix the timing of a person’s circadian rhythm, we need to know what that timing is,” he said. “Right now, we do not have an easy way to do that.” 

Even among healthy people, sleep-wake cycles can vary by four to six hours. 

Simply asking someone, 'Are you a morning lark, a night owl or somewhere in between?' can provide hints about a person's circadian cycle.

But the only way to precisely gauge the timing of an individual's circadian clock (including where the peaks and troughs of their daily rhythm fall) is to perform a dim-light melatonin assessment. This involves keeping the person in dim light and drawing blood or saliva hourly for up to 24 hours to measure melatonin––the hormone that naturally increases in the body to signal bedtime and wanes to help wake us up.

A molecular fingerprint for circadian rhythm

In pursuit of a more precise and practical test, Wright and Depner brought 16 volunteers to live in a sleep lab on the CU Anschutz Medical campus in Aurora for 14 days under tightly controlled conditions.

In addition to testing their blood for melatonin hourly, they also used a method called “metabolomics”––assessing levels of about 4,000 different metabolites (things like amino acids, vitamins and fatty acids that are byproducts of metabolism) in the blood.

They used a machine learning algorithm to determine which collection of metabolites was associated with the circadian clock––creating a sort of molecular fingerprint for individual circadian phases.

When they tried to predict circadian phase based on this fingerprint from a single blood draw, their findings were surprisingly similar to those using the more arduous melatonin test.

“It was within about one hour of the gold standard of taking blood every hour around the clock,” said Depner, now an assistant professor of kinesiology at the University of Utah.

He noted the test was significantly more accurate when people were well rested and hadn’t eaten recently––a requirement that could make the test challenging outside of a laboratory setting. And to be feasible and affordable, a commercial test would likely have to narrow down the number of metabolites it’s looking for (their test narrowed it down to 65).
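For readers curious how a single blood draw could yield a clock time, the sketch below illustrates the general recipe on synthetic data. It is not the study's actual model or metabolite panel: it simply encodes the circular phase as sine and cosine, fits a regressor to a 65-feature panel, and converts predictions back to hours.

```python
"""
A minimal sketch of the general idea on synthetic data, not the study's
pipeline: predict circadian phase from a metabolite panel, encoding the
phase as (sin, cos) so that 23:30 and 00:30 are treated as neighbors.
"""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 blood draws x 65 metabolites, each draw
# labelled with a "true" phase in hours (0-24), as a melatonin assessment would give.
n_samples, n_metabolites = 200, 65
phase_hours = rng.uniform(0, 24, n_samples)
angle = 2 * np.pi * phase_hours / 24
# Pretend each metabolite oscillates with the clock (own offset) plus noise.
X = np.column_stack([
    np.cos(angle + rng.uniform(0, 2 * np.pi)) + 0.3 * rng.standard_normal(n_samples)
    for _ in range(n_metabolites)
])
y = np.column_stack([np.sin(angle), np.cos(angle)])  # circular target

X_tr, X_te, y_tr, y_te, ph_tr, ph_te = train_test_split(
    X, y, phase_hours, test_size=0.25, random_state=0
)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
pred_hours = (np.arctan2(pred[:, 0], pred[:, 1]) % (2 * np.pi)) * 24 / (2 * np.pi)

# Circular error in hours, wrapped into [-12, +12].
err = (pred_hours - ph_te + 12) % 24 - 12
print(f"median absolute phase error: {np.median(np.abs(err)):.2f} h")
```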

But the study is a critical first step, said Wright.

“We are at the very beginning stages of developing these biomarkers for circadian rhythm, but this promising study shows it can be done.” 

Other research, including some from Wright’s lab, is exploring proteomics (looking for proteins in blood) or transcriptomics (measuring the presence of ribonucleic acid, or RNA) to assess circadian phase.

Ultimately, the researchers imagine a day when people can, during a routine physical, get a blood test to precisely determine their circadian phase––so doctors can prescribe not only what to do, but when.

“This is an important step forward in paving the way for circadian medicine––for providing the right treatment to the right individual at the right time of day,” said Depner.

https://www.colorado.edu/today/2021/07/27/blood-test-your-body-clock-its-horizon 

Thursday, July 29, 2021

Chinese Weapons of Mass Destruction

The People's Republic of China has developed and possesses weapons of mass destruction, including chemical and nuclear weapons. The first of China's nuclear weapons tests took place in 1964, and its first hydrogen bomb test occurred in 1967. Tests continued until 1996, when China signed the Comprehensive Test Ban Treaty (CTBT). China acceded to the Biological and Toxin Weapons Convention (BWC) in 1984 and ratified the Chemical Weapons Convention (CWC) in 1997.

The number of nuclear warheads in China's arsenal is a state secret, and estimates of its size vary. The Federation of American Scientists estimated the arsenal at about 260 total warheads as of 2015, the second smallest among the five nuclear-weapon states acknowledged by the Treaty on the Non-Proliferation of Nuclear Weapons, while the SIPRI Yearbook 2020 put it at 320 total warheads, the third highest. According to some estimates, the country could "more than double" the "number of warheads on missiles that could threaten the United States by the mid-2020s".

Early in 2011, China published a defense white paper, which repeated its nuclear policies of maintaining a minimum deterrent with a no-first-use pledge. China has yet to define what it means by a "minimum deterrent posture". This, together with the fact that "it is deploying four new nuclear-capable ballistic missiles, invites concern as to the scale and intention of China’s nuclear upgrade".

Chemical Weapons

The People's Republic of China signed the Chemical Weapons Convention (CWC) on January 13, 1993, and ratified it on April 25, 1997.

China was found to have supplied Albania with a small stockpile of chemical weapons in the 1970s during the Cold War.

Biological Weapons

China is currently a signatory of the Biological Weapons Convention and Chinese officials have stated that China has never engaged in biological activities with offensive military applications. However, China was reported to have had an active biological weapons program in the 1980s.

Kanatjan Alibekov, former director of one of the Soviet germ-warfare programs, said that China suffered a serious accident at one of its biological weapons plants in the late 1980s. Alibekov asserted that Soviet reconnaissance satellites identified a biological weapons laboratory and plant near a site for testing nuclear warheads. The Soviets suspected that two separate epidemics of hemorrhagic fever that swept the region in the late 1980s were caused by an accident in a lab where Chinese scientists were weaponizing viral diseases.

US Secretary of State Madeleine Albright expressed her concerns over possible Chinese biological weapon transfers to Iran and other nations in a letter to Senator Bob Bennett (R-Utah) in January 1997.  Albright stated that she had received reports regarding transfers of dual-use items from Chinese entities to the Iranian government which concerned her and that the United States had to encourage China to adopt comprehensive export controls to prevent assistance to Iran's alleged biological weapons program. The United States acted upon the allegations on January 16, 2002, when it imposed sanctions on three Chinese firms accused of supplying Iran with materials used in the manufacture of chemical and biological weapons. In response to this, China issued export control protocols on dual use biological technology in late 2002.

A biological program in China was described in a detailed 2015 study by the Manohar Parrikar Institute for Defence Studies and Analyses, which is funded by the Indian Ministry of Defence. It pointed to 42 facilities, some in the same compound, with the capacity, possibly latent, for research, development, production or testing of biological weapons.

Nuclear Weapons

History

Mao Zedong decided to begin a Chinese nuclear-weapons program during the First Taiwan Strait Crisis of 1954–1955 over the Quemoy and Matsu Islands. While he did not expect to be able to match the large American nuclear arsenal, Mao believed that even a few bombs would increase China's diplomatic credibility.  Construction of uranium-enrichment plants in Baotou and Lanzhou began in 1958, and a plutonium facility in Jiuquan and the Lop Nur nuclear test site by 1960. The Soviet Union provided assistance in the early Chinese program by sending advisers to help in the facilities devoted to fissile material production and, in October 1957, agreed to provide a prototype bomb, missiles, and related technology. The Chinese, who preferred to import technology and components rather than develop them within China, exported uranium to the Soviet Union, and the Soviets sent two R-2 missiles in 1958.

That year, however, Soviet leader Nikita Khrushchev told Mao that he planned to discuss arms limits with the United States and Britain.  China was already opposed to Khrushchev's post-Stalin policy of "peaceful coexistence". Although Soviet officials assured China that it was under the Soviet nuclear umbrella, the disagreements widened the emerging Sino-Soviet split. In June 1959, the two nations formally ended their agreement on military and technology cooperation, and in July 1960, all Soviet assistance with the Chinese nuclear program was abruptly terminated and all Soviet technicians were withdrawn from the program.

According to Arms Control and Disarmament Agency director William Foster, the American government, under the Kennedy and Johnson administrations, was concerned about the program and studied ways to sabotage or attack it, perhaps with the aid of Taiwan or the Soviet Union, but Khrushchev was not interested. The Chinese conducted their first nuclear test, code-named 596, on October 16, 1964. China's last nuclear test was on July 29, 1996. According to the Australian Geological Survey Organisation in Canberra, the yield of the 1996 test was 1–5 kilotons. This was China's 22nd underground test and 45th test overall.

Size

China has made significant improvements in its miniaturization techniques since the 1980s. There have been accusations, notably by the Cox Commission, that this was done primarily by covertly acquiring the U.S.'s W88 nuclear warhead design as well as guided ballistic missile technology.  Chinese scientists have stated that they have made advances in these areas, but insist that these advances were made without espionage.

The international community has debated the size of the Chinese nuclear force since the nation first acquired such technology. Because of strict secrecy it is very difficult to determine the exact size and composition of China's nuclear forces, and estimates vary over time. Several declassified U.S. government reports give historical estimates. The Defense Intelligence Agency's 1984 Defense Estimative Brief put the Chinese nuclear stockpile at between 150 and 160 warheads. A 1993 United States National Security Council report estimated that China's nuclear deterrent force relied on 60 to 70 nuclear-armed ballistic missiles. The Defense Intelligence Agency's The Decades Ahead: 1999–2020 report estimated the 1999 nuclear weapons inventory at between 140 and 157. In 2004 the U.S. Department of Defense assessed that China had about 20 intercontinental ballistic missiles capable of targeting the United States. In 2006 a U.S. Defense Intelligence Agency estimate presented to the Senate Armed Services Committee was that "China currently has more than 100 nuclear warheads."

A variety of estimates abound regarding China's current stockpile. Although the total number of nuclear weapons in the Chinese arsenal is unknown, as of 2005 estimates vary from as low as 80 to as high as 2,000. The 2,000-warhead estimate has largely been rejected by diplomats in the field. It appears to have been derived from a 1990s-era Usenet post, in which a Singaporean college student made unsubstantiated statements concerning a supposed 2,000-warhead stockpile.

In 2004, China stated that "among the nuclear-weapon states, China ... possesses the smallest nuclear arsenal," implying China has fewer than the United Kingdom's 200 nuclear weapons.  Several non-official sources estimate that China has around 400 nuclear warheads. However, U.S. intelligence estimates suggest a much smaller nuclear force than many non-governmental organizations.

In 2011, high estimates of the Chinese nuclear arsenal again emerged. A three-year study by Georgetown University raised the possibility that China had 3,000 nuclear weapons, hidden in a sophisticated tunnel network.  The study was based on state media footage showing tunnel entrances, and estimated a 4,800 km (3,000 mile) network. The tunnel network was revealed after the 2008 Sichuan earthquake collapsed tunnels in the hills, and China has confirmed its existence.  In response, the US military was ordered by law to study the possibility of this tunnel network concealing a nuclear arsenal.  However, the tunnel theory has come under substantial attack due to several apparent flaws in its reasoning. From a production standpoint, China probably does not have enough fissile material to produce 3,000 nuclear weapons. Such an arsenal would require 9–12 tons of plutonium as well as 45–75 tons of enriched uranium and a substantial amount of tritium.  The Chinese are estimated to have only 2 tons of weapons-grade plutonium, which limits their arsenal to 450–600 weapons, despite an 18-ton supply of enriched uranium, theoretically enough for roughly 1,000 warheads.
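The fissile-material argument above is simple division. The short sketch below just reproduces it, taking the per-warhead requirements implied by the 3,000-warhead figures as working assumptions rather than independent data.

```python
# Back-of-the-envelope reproduction of the fissile-material argument above.
# Per-warhead requirements are those implied by the 3,000-warhead estimate
# (9-12 t plutonium, 45-75 t enriched uranium); they are assumptions, not data.
pu_per_warhead_kg = (9_000 / 3_000, 12_000 / 3_000)    # roughly 3-4 kg plutonium each
heu_per_warhead_kg = (45_000 / 3_000, 75_000 / 3_000)  # roughly 15-25 kg uranium each

pu_stock_kg = 2_000     # estimated Chinese weapons-grade plutonium
heu_stock_kg = 18_000   # estimated enriched-uranium supply

pu_limited = sorted(int(pu_stock_kg / kg) for kg in pu_per_warhead_kg)
heu_limited = sorted(int(heu_stock_kg / kg) for kg in heu_per_warhead_kg)

# The article's 450-600 figure corresponds to a slightly higher plutonium
# requirement of about 3.3-4.4 kg per warhead.
print(f"plutonium-limited arsenal: {pu_limited[0]}-{pu_limited[1]} warheads")
print(f"uranium-limited arsenal:   {heu_limited[0]}-{heu_limited[1]} warheads")
```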

As of 2011, the Chinese nuclear arsenal was estimated to contain 55–65 ICBMs.

In 2012, STRATCOM commander C. Robert Kehler said that the best estimates were "in the range of several hundred" warheads and FAS estimated the current total to be "approximately 240 warheads".

The U.S. Department of Defense 2013 report to Congress on China's military developments stated that the Chinese nuclear arsenal consists of 50–75 ICBMs, located both in land-based silos and on ballistic missile submarine platforms. In addition to the ICBMs, the report stated that China has approximately 1,100 short-range ballistic missiles, although it does not have the warhead capacity to equip them all with nuclear weapons.

               https://en.wikipedia.org/wiki/China_and_weapons_of_mass_destruction

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

The Underground Great Wall of China (Chinese: 地下长城) is the informal name for the vast system of tunnels China uses to store and transport mobile intercontinental ballistic missiles.

Description of Silo Tunnels

Due to the great secrecy surrounding the tunnels, not much information about them is publicly available; however, it is believed that the tunnels allow mobile ICBMs to be shuttled around to different silos, and possibly stored in reinforced underground bunkers. This greatly enhances the missiles' chances of surviving a direct nuclear strike, enabling their use in a second strike, unlike ICBMs based in static silos, which generally do not survive a direct nuclear attack.

A Georgetown University team led by Phillip Karber conducted a three-year study, begun after the Chinese government sent nuclear experts to the region of the 2008 Sichuan earthquake, mapping out China's complex tunnel system, which stretches 5,000 km (3,000 miles). The report concluded that the stated Chinese nuclear arsenal is understated and that as many as 3,000 nuclear warheads may be stored in the tunnel network.  This hypothetical maximum storage or basing capacity, along with Karber's own misconceived fissile-production suggestions, led Western media to report that 3,000 warheads were actually in the facility. The Karber study went on to state that the tunnels are not likely to be breached by conventional or low-yield earth-penetrating nuclear weapons such as the B61-11.

https://en.wikipedia.org/wiki/Underground_Great_Wall_of_China 

Wednesday, July 28, 2021

Quakes on Mars Reveal Interior Structure

Researchers have been able to use seismic data to look inside Mars for the first time. They measured the crust, mantle and core and narrowed down their composition.

From:  ETH Zurich

July 22, 2021 -- Since early 2019, researchers have been recording and analysing marsquakes as part of the InSight mission. This relies on a seismometer whose data acquisition and control electronics were developed at ETH Zurich. Using this data, the researchers have now measured the red planet's crust, mantle and core -- data that will help determine the formation and evolution of Mars and, by extension, the entire solar system.

Mars once completely molten

We know that Earth is made up of shells: a thin crust of light, solid rock surrounds a thick mantle of heavy, viscous rock, which in turn envelops a core consisting mainly of iron and nickel. Terrestrial planets, including Mars, have been assumed to have a similar structure. "Now seismic data has confirmed that Mars presumably was once completely molten before dividing into the crust, mantle and core we see today, but that these are different from Earth's," says Amir Khan, a scientist at the Institute of Geophysics at ETH Zurich and at the Physics Institute at the University of Zurich. Together with his ETH colleague Simon Stähler, he analysed data from NASA's InSight mission, in which ETH Zurich is participating under the leadership of Professor Domenico Giardini.

No plate tectonics on Mars

The researchers have discovered that the Martian crust under the probe's landing site near the Martian equator is between 15 and 47 kilometres thick. Such a thin crust must contain a relatively high proportion of radioactive elements, which calls into question previous models of the chemical composition of the entire crust.

Beneath the crust lies the mantle, with a lithosphere of more solid rock reaching 400-600 kilometres down -- twice as deep as on Earth. This could be because there is now only one continental plate on Mars, in contrast to Earth with its seven large mobile plates. "The thick lithosphere fits well with the model of Mars as a 'one-plate planet'," Khan concludes.

The measurements also show that the Martian mantle is mineralogically similar to Earth's upper mantle. "In that sense, the Martian mantle is a simpler version of Earth's mantle." But the seismology also reveals differences in chemical composition. The Martian mantle, for example, contains more iron than Earth's. However, theories as to the complexity of the layering of the Martian mantle also depend on the size of the underlying core -- and here, too, the researchers have come to new conclusions.

The core is liquid and larger than expected

The Martian core has a radius of about 1,840 kilometres, making it a good 200 kilometres larger than had been assumed 15 years ago, when the InSight mission was planned. The researchers were now able to recalculate the size of the core using seismic waves. "Having determined the radius of the core, we can now calculate its density," Stähler says.

"If the core radius is large, the density of the core must be relatively low," he explains: "That means the core must contain a large proportion of lighter elements in addition to iron and nickel." These include sulphur, oxygen, carbon and hydrogen, and make up an unexpectedly large proportion. The researchers conclude that the composition of the entire planet is not yet fully understood. Nonetheless, the current investigations confirm that the core is liquid -- as suspected -- even if Mars no longer has a magnetic field.

Reaching the goal with different waveforms

The researchers obtained the new results by analysing various seismic waves generated by marsquakes. "We could already see different waves in the InSight data, so we knew how far away from the lander these quake epicentres were on Mars," Giardini says. To be able to say something about a planet's inner structure calls for quake waves that are reflected at or below the surface or at the core. Now, for the first time, researchers have succeeded in observing and analysing such waves on Mars.

"The InSight mission was a unique opportunity to capture this data," Giardini says. The data stream will end in a year when the lander's solar cells are no longer able to produce enough power. "But we're far from finished analysing all the data -- Mars still presents us with many mysteries, most notably whether it formed at the same time and from the same material as our Earth." It is especially important to understand how the internal dynamics of Mars led it to lose its active magnetic field and all surface water. "This will give us an idea of whether and how these processes might be occurring on our planet," Giardini explains. "That's our reason why we are on Mars, to study its anatomy."

    https://www.sciencedaily.com/releases/2021/07/210722163028.htm 

Tuesday, July 27, 2021

Scientists Model ‘True Prevalence’ of COVID-19 Throughout Pandemic

From:  University of Washington

By James Urton, UW News

July 26, 2021 -- Government officials and policymakers have tried to use numbers to grasp COVID-19’s impact. Figures like the number of hospitalizations or deaths reflect part of this burden, but each datapoint tells only part of the story. No single figure describes the true pervasiveness of the novel coronavirus by revealing the number of people actually infected at a given time — an important figure to help scientists understand whether herd immunity can be reached, even with vaccinations.

Now, two University of Washington scientists have developed a statistical framework that incorporates key COVID-19 data — such as case counts and deaths due to COVID-19 — to model the true prevalence of this disease in the United States and individual states. Their approach, published online July 26 in the Proceedings of the National Academy of Sciences, projects that in the U.S. as many as 60% of COVID-19 cases went undetected as of March 7, 2021, the last date for which the dataset they employed is available.

This framework could help officials determine the true burden of disease in their region — both diagnosed and undiagnosed — and direct resources accordingly, said the researchers.

“There are all sorts of different data sources we can draw on to understand the COVID-19 pandemic — the number of hospitalizations in a state, or the number of tests that come back positive. But each source of data has its own flaws that would give a biased picture of what’s really going on,” said senior author Adrian Raftery, a UW professor of sociology and of statistics. “What we wanted to do is to develop a framework that corrects the flaws in multiple data sources and draws on their strengths to give us an idea of COVID-19’s prevalence in a region, a state or the country as a whole.”

Data sources can be biased in different ways. For example, one widely cited COVID-19 statistic is the proportion of test results in a region or state that come back positive. But since access to tests, and a willingness to be tested, vary by location, that figure alone cannot provide a clear picture of COVID-19’s prevalence, said Raftery.

Other statistical methods often try to correct the bias in one data source to model the true prevalence of disease in a region. For their approach, Raftery and lead author Nicholas Irons, a UW doctoral student in statistics, incorporated three factors: the number of confirmed COVID-19 cases, the number of deaths due to COVID-19 and the number of COVID-19 tests administered each day as reported by the COVID Tracking Project. In addition, they incorporated results from random COVID-19 testing of Indiana and Ohio residents as an “anchor” for their method.

The researchers used their framework to model COVID-19 prevalence in the U.S. and each of the states up through March 7, 2021. On that date, according to their framework, an estimated 19.7% of U.S. residents, or about 65 million people, had been infected. This indicates that the U.S. is unlikely to reach herd immunity without its ongoing vaccination campaign, Raftery and Irons said. In addition, the U.S. had an undercount factor of 2.3, the researchers found, which means that only about 1 in 2.3 COVID-19 cases were being confirmed through testing. Put another way, some 60% of cases were not counted at all.
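The undercount factor translates directly into the share of infections missed. The short sketch below reproduces the headline arithmetic; the U.S. population value is an approximation added here for illustration, not a figure from the paper.

```python
# Reproducing the headline arithmetic. The U.S. population figure is an
# approximation added for illustration, not a number from the paper.
undercount_factor = 2.3                      # ~1 in 2.3 infections confirmed by testing
share_missed = 1 - 1 / undercount_factor
print(f"share of infections never confirmed: {share_missed:.0%}")    # ~57%, i.e. "some 60%"

us_population = 331e6                        # approximate 2021 U.S. population
cumulative_incidence = 0.197                 # estimated 19.7% infected by March 7, 2021
infections = cumulative_incidence * us_population
print(f"estimated infections: {infections / 1e6:.0f} million")       # ~65 million
```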

This COVID-19 undercount rate also varied widely by state, and could have multiple causes, according to Irons.

“It can depend on the severity of the pandemic and the amount of testing in that state,” said Irons. “If you have a state with severe pandemic but limited testing, the undercount can be very high, and you’re missing the vast majority of infections that are occurring. Or, you could have a situation where testing is widespread and the pandemic is not as severe. There, the undercount rate would be lower.”

In addition, the undercount factor fluctuated by state or region as the pandemic progressed due to differences in access to medical care among regions, changes in the availability of tests and other factors, Raftery said.

With the true prevalence of COVID-19, Raftery and Irons calculated other useful figures for states, such as the infection fatality rate, which is the percentage of infected people who had succumbed to COVID-19, as well as the cumulative incidence, which is the percentage of a state’s population who have had COVID-19.

Ideally, regular random testing of individuals would show the level of infection in a state, region or even nationally, said Raftery.  But in the COVID-19 pandemic, only Indiana and Ohio conducted random viral testing of residents, datasets that were critical in helping the researchers develop their framework.  In the absence of widespread random testing, this new method could help officials assess the true burden of disease in this pandemic and the next one.

“We think this tool can make a difference by giving the people in charge a more accurate picture of how many people are infected, and what fraction of them are being missed by current testing and treatment efforts,” said Raftery.

The research was funded by the National Institutes of Health.

                 https://www.washington.edu/news/2021/07/26/covid-19-true-prevalence/

Monday, July 26, 2021

Previously Unseen Star Forming in Milky Way

A new survey of our home galaxy, the Milky Way, combines the capabilities of the Very Large Array and the Effelsberg telescope in Germany to provide astronomers with valuable new insights into how stars much more massive than the Sun are formed.

From:  National Radio Astronomy Observatory

July 22, 2021 -- Astronomers using two of the world's most powerful radio telescopes have made a detailed and sensitive survey of a large segment of our home galaxy -- the Milky Way -- detecting previously unseen tracers of massive star formation, a process that dominates galactic ecosystems. The scientists combined the capabilities of the National Science Foundation's Karl G. Jansky Very Large Array (VLA) and the 100-meter Effelsberg Telescope in Germany to produce high-quality data that will serve researchers for years to come.

Stars with more than about ten times the mass of our Sun are important components of the Galaxy and strongly affect their surroundings. However, understanding how these massive stars are formed has proved challenging for astronomers. In recent years, this problem has been tackled by studying the Milky Way at a variety of wavelengths, including radio and infrared. This new survey, called GLOSTAR (Global view of the Star formation in the Milky Way), was designed to take advantage of the vastly improved capabilities that an upgrade project completed in 2012 gave the VLA to produce previously unobtainable data.

GLOSTAR has excited astronomers with new data on the birth and death processes of massive stars, as well as on the tenuous material between the stars. The GLOSTAR team of researchers has published a series of papers in the journal Astronomy & Astrophysics reporting initial results of their work, including detailed studies of several individual objects. Observations continue and more results will be published later.

The survey detected telltale tracers of the early stages of massive star formation, including compact regions of hydrogen gas ionized by the powerful radiation from young stars, and radio emission from methanol (wood alcohol) molecules that can pinpoint the location of very young stars still deeply shrouded by the clouds of gas and dust in which they are forming.

The survey also found many new remnants of supernova explosions -- the dramatic deaths of massive stars. Previous studies had found fewer than a third of the expected number of supernova remnants in the Milky Way. In the region it studied, GLOSTAR more than doubled the number found using the VLA data alone, with more expected to appear in the Effelsberg data.

"This is an important step to solve this longstanding mystery of the missing supernova remnants," said Rohit Dokara, a Ph.D student at the Max Planck Institute for Radioastronomy (MPIfR) and lead author on a paper about the remnants.

The GLOSTAR team combined data from the VLA and the Effelsberg telescope to obtain a complete view of the region they studied. The multi-antenna VLA -- an interferometer -- combines the signals from widely-separated antennas to make images with very high resolution that show small details. However, such a system often cannot also detect large-scale structures. The 100-meter-diameter Effelsberg telescope provided the data on structures larger than those the VLA could detect, making the image complete.
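The idea behind combining the two instruments can be illustrated with a toy "feathering"-style merge in the Fourier domain, where the single dish supplies the largest angular scales (lowest spatial frequencies) and the interferometer the finest detail. This is a conceptual sketch only, not the GLOSTAR pipeline; real packages (for example CASA's feather task) also handle beam matching, calibration and weighting, which are omitted here.

```python
"""
Conceptual sketch of a feathering-style combination of a single-dish image
(reliable at large scales) with an interferometer image (reliable at small
scales) of the same field, blended in Fourier space. Toy data only.
"""
import numpy as np

def feather(single_dish: np.ndarray, interferometer: np.ndarray,
            crossover_pix: float = 8.0) -> np.ndarray:
    """Blend two co-registered images of the same field in Fourier space."""
    ny, nx = single_dish.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.hypot(kx, ky)                                     # spatial-frequency radius
    w_sd = np.exp(-(k * max(nx, ny) / crossover_pix) ** 2)   # weight -> 1 at the largest scales
    combined = (w_sd * np.fft.fft2(single_dish)
                + (1.0 - w_sd) * np.fft.fft2(interferometer))
    return np.fft.ifft2(combined).real

# Toy usage with random stand-in images of the same 256x256 field.
rng = np.random.default_rng(1)
sd_img = rng.normal(size=(256, 256))
vla_img = rng.normal(size=(256, 256))
print(feather(sd_img, vla_img).shape)
```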

"This clearly demonstrates that the Effelberg telescope is still very crucial, even after 50 years of operation," said Andreas Brunthaler of MPIfR, project leader and first author of the survey's overview paper.

Visible light is strongly absorbed by dust, which radio waves can readily penetrate. Radio telescopes are essential to revealing the dust-shrouded regions in which young stars form.

The results from GLOSTAR, combined with other radio and infrared surveys, "offer astronomers a nearly complete census of massive star-forming clusters at various stages of formation, and this will have lasting value for future studies," said team member William Cotton, of the National Radio Astronomy Observatory (NRAO), who is an expert in combining interferometer and single-telescope data.

"GLOSTAR is the first map of the Galactic Plane at radio wavelengths that detects many of the important star formation tracers at high spatial resolution. The detection of atomic and molecular spectral lines is critical to determine the location of star formation and to better understand the structure of the Galaxy," said Dana Falser, also of NRAO.

The initiator of GLOSTAR, the MPIfR's Karl Menten, added, "It's great to see the beautiful science resulting from two of our favorite radio telescopes joining forces."

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

          https://www.sciencedaily.com/releases/2021/07/210722112844.htm

Sunday, July 25, 2021

Speeding Up Enzymes In the Laboratory

A new tool that enables thousands of tiny experiments to run simultaneously on a single polymer chip will let scientists study enzymes faster and more comprehensively than ever before.

By Ker Than, Stanford University

July 22, 2021 -- For much of human history, animals and plants were perceived to follow a different set of rules than the rest of the universe. In the 18th and 19th centuries, this culminated in a belief that living organisms were infused by a non-physical energy or “life force” that allowed them to perform remarkable transformations that couldn’t be explained by conventional chemistry or physics alone.

Scientists now understand that these transformations are powered by enzymes – protein molecules comprised of chains of amino acids that act to speed up, or catalyze, the conversion of one kind of molecule (substrates) into another (products). In so doing, they enable reactions such as digestion and fermentation – and all of the chemical events that happen in every one of our cells – that, left alone, would happen extraordinarily slowly.

“A chemical reaction that would take longer than the lifetime of the universe to happen on its own can occur in seconds with the aid of enzymes,” said Polly Fordyce, an assistant professor of bioengineering and of genetics at Stanford University.

While much is now known about enzymes, including their structures and the chemical groups they use to facilitate reactions, the details surrounding how their forms connect to their functions, and how they pull off their biochemical wizardry with such extraordinary speed and specificity are still not well understood.

A new technique, developed by Fordyce and her colleagues at Stanford and detailed this week in the journal Science, could help change that. Dubbed HT-MEK — short for High-Throughput Microfluidic Enzyme Kinetics — the technique can compress years of work into just a few weeks by enabling thousands of enzyme experiments to be performed simultaneously. “Limits in our ability to do enough experiments have prevented us from truly dissecting and understanding enzymes,” said study co-leader Dan Herschlag, a professor of biochemistry at Stanford’s School of Medicine.

By allowing scientists to deeply probe beyond the small “active site” of an enzyme where substrate binding occurs, HT-MEK could reveal clues about how even the most distant parts of enzymes work together to achieve their remarkable reactivity.

“It’s like we’re now taking a flashlight and instead of just shining it on the active site we’re shining it over the entire enzyme,” Fordyce said. “When we did this, we saw a lot of things we didn’t expect.”

Enzymatic tricks

HT-MEK is designed to replace a laborious process for purifying enzymes that has traditionally involved engineering bacteria to produce a particular enzyme, growing them in large beakers, bursting open the microbes and then isolating the enzyme of interest from all the other unwanted cellular components. To piece together how an enzyme works, scientists introduce intentional mistakes into its DNA blueprint and then analyze how these mutations affect catalysis.

This process is expensive and time consuming, however, so like an audience raptly focused on the hands of a magician during a conjuring trick, researchers have mostly limited their scientific investigations to the active sites of enzymes. “We know a lot about the part of the enzyme where the chemistry occurs because people have made mutations there to see what happens. But that’s taken decades,” Fordyce said.

But as any connoisseur of magic tricks knows, the key to a successful illusion can lie not just in the actions of the magician’s fingers, but might also involve the deft positioning of an arm or the torso, a misdirecting patter or discrete actions happening offstage, invisible to the audience. HT-MEK allows scientists to easily shift their gaze to parts of the enzyme beyond the active site and to explore how, for example, changing the shape of an enzyme’s surface might affect the workings of its interior.

“We ultimately would like to do enzymatic tricks ourselves,” Fordyce said. “But the first step is figuring out how it’s done before we can teach ourselves to do it.”

Enzyme experiments on a chip

The technology behind HT-MEK was developed and refined over six years through a partnership between the labs of Fordyce and Herschlag. “This is an amazing case of engineering and enzymology coming together to — we hope — revolutionize a field,” Herschlag said. “This project went beyond your typical collaboration — it was a group of people working jointly to solve a very difficult problem — and continues with the methodologies in place to try to answer difficult questions.”

HT-MEK combines two existing technologies to rapidly speed up enzyme analysis. The first is microfluidics, which involves molding polymer chips to create microscopic channels for the precise manipulation of fluids. “Microfluidics shrinks the physical space to do these fluidic experiments in the same way that integrated circuits reduced the real estate needed for computing,” Fordyce said. “In enzymology, we are still doing things in these giant liter-sized flasks. Everything is a huge volume and we can’t do many things at once.”

The second is cell-free protein synthesis, a technology that takes only those crucial pieces of biological machinery required for protein production and combines them into a soupy extract that can be used to create enzymes synthetically, without requiring living cells to serve as incubators.

“We’ve automated it so that we can use printers to deposit microscopic spots of synthetic DNA coding for the enzyme that we want onto a slide and then align nanoliter-sized chambers filled with the protein starter mix over the spots,” Fordyce explained.

Because each tiny chamber contains only a thousandth of a millionth of a liter of material, the scientists can engineer thousands of variants of an enzyme in a single device and study them in parallel. By tweaking the DNA instructions in each chamber, they can modify the chains of amino acid molecules that comprise the enzyme. In this way, it’s possible to systematically study how different modifications to an enzyme affect its folding, catalytic ability and ability to bind small molecules and other proteins.

When the team applied their technique to a well-studied enzyme called PafA, they found that mutations well beyond the active site affected its ability to catalyze chemical reactions — indeed, most of the amino acids, or “residues,” making up the enzyme had effects.

The scientists also discovered that a surprising number of mutations caused PafA to misfold into an alternate state that was unable to perform catalysis. “Biochemists have known for decades that misfolding can occur but it’s been extremely difficult to identify these cases and even more difficult to quantitatively estimate the amount of this misfolded stuff,” said study co-first author Craig Markin, a research scientist with joint appointments in the Fordyce and Herschlag labs.

“This is one enzyme out of thousands and thousands,” Herschlag emphasized. “We expect there to be more discoveries and more surprises.”

Accelerating advances

If widely adopted, HT-MEK could not only improve our basic understanding of enzyme function, but also catalyze advances in medicine and industry, the researchers say. “A lot of the industrial chemicals we use now are bad for the environment and are not sustainable. But enzymes work most effectively in the most environmentally benign substance we have — water,” said study co-first author Daniel Mokhtari, a Stanford graduate student in the Herschlag and Fordyce labs.

HT-MEK could also accelerate an approach to drug development called allosteric targeting, which aims to increase drug specificity by targeting beyond an enzyme’s active site. Enzymes are popular pharmaceutical targets because of the key role they play in biological processes. But some are considered “undruggable” because they belong to families of related enzymes that share the same or very similar active sites, and targeting them can lead to side effects. The idea behind allosteric targeting is to create drugs that can bind to parts of enzymes that tend to be more differentiated, like their surfaces, but still control particular aspects of catalysis. “With PafA, we saw functional connectivity between the surface and the active site, so that gives us hope that other enzymes will have similar targets,” Markin said. “If we can identify where allosteric targets are, then we’ll be able to start on the harder job of actually designing drugs for them.”

The sheer amount of data that HT-MEK is expected to generate will also be a boon to computational approaches and machine learning algorithms, like the Google-funded AlphaFold project, designed to deduce an enzyme’s complicated 3D shape from its amino acid sequence alone. “If machine learning is to have any chance of accurately predicting enzyme function, it will need the kind of data HT-MEK can provide to train on,” Mokhtari said.

Much further down the road, HT-MEK may even allow scientists to reverse-engineer enzymes and design bespoke varieties of their own. “Plastics are a great example,” Fordyce said. “We would love to create enzymes that can degrade plastics into nontoxic and harmless pieces. If it were really true that the only part of an enzyme that matters is its active site, then we’d be able to do that and more already. Many people have tried and failed, and it’s thought that one reason why we can’t is because the rest of the enzyme is important for getting the active site in just the right shape and to wiggle in just the right way.”

Herschlag hopes that adoption of HT-MEK among scientists will be swift. “If you’re an enzymologist trying to learn about a new enzyme and you have the opportunity to look at 5 or 10 mutations over six months or 100 or 1,000 mutants of your enzyme over the same period, which would you choose?” he said. “This is a tool that has the potential to supplant traditional methods for an entire community.”

    https://news.stanford.edu/2021/07/22/new-tool-drastically-speeds-study-enzymes/

Saturday, July 24, 2021

Tough New Fiber Made by Altered Bacteria

A new fiber, made by genetically engineered bacteria, is stronger than steel and tougher than Kevlar.

From: Washington University in Saint Louis

July 20, 2021 -- Spider silk is said to be one of the strongest, toughest materials on the Earth. Now engineers at Washington University in St. Louis have designed amyloid silk hybrid proteins and produced them in engineered bacteria. The resulting fibers are stronger and tougher than some natural spider silks.

Their research was published in the journal ACS Nano.

To be precise, the artificial silk -- dubbed "polymeric amyloid" fiber -- was not technically produced by researchers, but by bacteria that were genetically engineered in the lab of Fuzhong Zhang, a professor in the Department of Energy, Environmental & Chemical Engineering in the McKelvey School of Engineering.

Zhang has worked with spider silk before. In 2018, his lab engineered bacteria that produced a recombinant spider silk with performance on par with its natural counterparts in all of the important mechanical properties.

"After our previous work, I wondered if we could create something better than spider silk using our synthetic biology platform," Zhang said.

The research team, which includes first author Jingyao Li, a PhD student in Zhang's lab, modified the amino acid sequence of spider silk proteins to introduce new properties, while keeping some of the attractive features of spider silk.

A problem associated with recombinant spider silk fiber -- without significant modification from natural spider silk sequence -- is the need to create β-nanocrystals, a main component of natural spider silk, which contributes to its strength. "Spiders have figured out how to spin fibers with a desirable amount of nanocrystals," Zhang said. "But when humans use artificial spinning processes, the amount of nanocrystals in a synthetic silk fiber is often lower than its natural counterpart."

To solve this problem, the team redesigned the silk sequence by introducing amyloid sequences that have a high tendency to form β-nanocrystals. They created different polymeric amyloid proteins using three well-studied amyloid sequences as representatives. The resulting proteins had less repetitive amino acid sequences than spider silk, making them easier for engineered bacteria to produce. Ultimately, the bacteria produced a hybrid polymeric amyloid protein with 128 repeating units. Recombinant expression of spider silk protein with similar repeating units has proven to be difficult.

The longer the protein, the stronger and tougher the resulting fiber. The 128-repeat proteins resulted in a fiber with gigapascal strength (a measure of how much force is needed to break a fiber of fixed diameter), which is stronger than common steel. The fibers' toughness (a measure of how much energy is needed to break a fiber) is higher than Kevlar and all previous recombinant silk fibers. Its strength and toughness are even higher than some reported natural spider silk fibers.
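To make the units concrete, gigapascal strength is simply the breaking force divided by the fiber's cross-sectional area. The numbers below are made-up illustrations, not measurements from the paper.

```python
import math

# Illustrative numbers only (not measurements from the paper): what
# "gigapascal strength" means for a thin fiber.
diameter_m = 10e-6            # a hypothetical 10-micrometre fiber
breaking_force_n = 0.08       # hypothetical force at failure, in newtons

area_m2 = math.pi * (diameter_m / 2) ** 2       # cross-sectional area
strength_pa = breaking_force_n / area_m2        # stress at failure
print(f"tensile strength: {strength_pa / 1e9:.2f} GPa")
# Toughness, by contrast, is the energy absorbed before failure -- the area
# under the stress-strain curve -- and is usually quoted in MJ/m^3.
```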

In collaboration with Young-Shin Jun, professor in the Department of Energy, Environmental & Chemical Engineering, and her PhD student Yaguang Zhu, the team confirmed that the high mechanical properties of the polymeric amyloid fibers indeed come from the enhanced amount of β-nanocrystals.

These new proteins and the resulting fibers are not the end of the story for high-performance synthetic fibers in the Zhang lab. They are just getting started. "This demonstrates that we can engineer biology to produce materials that beat the best material in nature," Zhang said.

This work explored just three of thousands of different amyloid sequences that could potentially enhance the properties of natural spider silk. "There seem to be unlimited possibilities in engineering high-performance materials using our platform," Li said. "It's likely that you can use other sequences, put them into our design and also get a performance-enhanced fiber."

             https://www.sciencedaily.com/releases/2021/07/210720185821.htm

Friday, July 23, 2021

Age-Related Memory Loss in Mice Reversed

Scientists at Cambridge and Leeds have successfully reversed age-related memory loss in mice and say their discovery could lead to the development of treatments to prevent memory loss in people as they age.

From: University of Cambridge

July 22, 2021 -- In a study published in Molecular Psychiatry, the team show that changes in the extracellular matrix of the brain – ‘scaffolding’ around nerve cells – lead to loss of memory with ageing, but that it is possible to reverse these using genetic treatments.

Recent evidence has emerged of the role of perineuronal nets (PNNs) in neuroplasticity – the ability of the brain to learn and adapt – and to make memories. PNNs are cartilage-like structures that mostly surround inhibitory neurons in the brain. Their main function is to control the level of plasticity in the brain. They appear at around five years old in humans, and turn off the period of enhanced plasticity during which the connections in the brain are optimised. Then, plasticity is partially turned off, making the brain more efficient but less plastic.

PNNs contain compounds known as chondroitin sulphates. Some of these, such as chondroitin 4-sulphate, inhibit the action of the networks, inhibiting neuroplasticity; others, such as chondroitin 6-sulphate, promote neuroplasticity. As we age, the balance of these compounds changes, and as levels of chondroitin 6-sulphate decrease, so our ability to learn and form new memories changes, leading to age-related memory decline.

Researchers at the University of Cambridge and University of Leeds investigated whether manipulating the chondroitin sulphate composition of the PNNs might restore neuroplasticity and alleviate age-related memory deficits.

To do this, the team looked at 20-month-old mice – considered very old – and using a suite of tests showed that the mice exhibited deficits in their memory compared to six-month-old mice.

For example, one test involved seeing whether mice recognised an object. The mouse was placed at the start of a Y-shaped maze and left to explore two identical objects at the end of the two arms. After a short while, the mouse was once again placed in the maze, but this time one arm contained a new object, while the other contained a copy of the familiar object. The researchers measured the amount of time the mouse spent exploring each object to see whether it had remembered the object from the previous task. The older mice were much less likely to remember the object.
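A common way to summarise such novel-object tests (not necessarily the exact measure used in this study) is a discrimination index comparing time spent on the novel versus the familiar object, as in this small sketch with hypothetical numbers.

```python
def discrimination_index(t_novel_s: float, t_familiar_s: float) -> float:
    """Common summary for novel-object tests: +1 means only the novel object
    was explored, 0 means no preference (no memory of the familiar object),
    -1 means only the familiar object was explored."""
    total = t_novel_s + t_familiar_s
    return (t_novel_s - t_familiar_s) / total if total else 0.0

# Hypothetical exploration times in seconds.
print(discrimination_index(t_novel_s=22.0, t_familiar_s=10.0))  # young-like: ~0.38
print(discrimination_index(t_novel_s=14.0, t_familiar_s=13.0))  # aged-like:  ~0.04
```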

The team treated the ageing mice using a ‘viral vector’, a virus capable of restoring the amount of chondroitin 6-sulphate in the PNNs, and found that this completely restored memory in the older mice to a level similar to that seen in the younger mice.

Dr Jessica Kwok from the School of Biomedical Sciences at the University of Leeds said: “We saw remarkable results when we treated the ageing mice with this treatment. The memory and ability to learn were restored to levels they would not have seen since they were much younger.”

To explore the role of chondroitin 6-sulphate in memory loss, the researchers bred mice that had been genetically manipulated so that they could produce only low levels of the compound, mimicking the changes of ageing. Even at 11 weeks, these mice showed signs of premature memory loss. However, increasing levels of chondroitin 6-sulphate using the viral vector restored their memory and plasticity to levels similar to those of healthy mice.

Professor James Fawcett from the John van Geest Centre for Brain Repair at the University of Cambridge said: “What is exciting about this is that although our study was only in mice, the same mechanism should operate in humans – the molecules and structures in the human brain are the same as those in rodents. This suggests that it may be possible to prevent humans from developing memory loss in old age.”

The team have already identified a potential drug, licensed for human use, that can be taken by mouth and inhibits the formation of PNNs. When this compound is given to mice and rats, it can restore memory lost in ageing and also improve recovery from spinal cord injury. The researchers are investigating whether it might help alleviate memory loss in animal models of Alzheimer's disease.

The approach taken by Professor Fawcett’s team – using viral vectors to deliver the treatment – is increasingly being used to treat human neurological conditions. A second team at the Centre recently published research showing the use of viral vectors to repair damage caused by glaucoma and dementia.

The study was funded by Alzheimer’s Research UK, the Medical Research Council, European Research Council and the Czech Science Foundation.

https://www.cam.ac.uk/research/news/scientists-reverse-age-related-memory-loss-in-mice

Thursday, July 22, 2021

Rare and Magnet-Proof Superconductor

New findings for “magic angle” trilayer graphene might help inform the design of more powerful MRI machines or robust quantum computers.

By Jennifer Chu, MIT News Office

July 21, 2021 -- MIT physicists have observed signs of a rare type of superconductivity in a material called magic-angle twisted trilayer graphene. In a study appearing today in Nature, the researchers report that the material exhibits superconductivity at surprisingly high magnetic fields of up to 10 Tesla, which is three times higher than what the material is predicted to endure if it were a conventional superconductor.

The results strongly imply that magic-angle trilayer graphene, which was initially discovered by the same group, is a very rare type of superconductor, known as a “spin-triplet,” that is impervious to high magnetic fields. Such exotic superconductors could vastly improve technologies such as magnetic resonance imaging, which uses superconducting wires under a magnetic field to resonate with and image biological tissue. MRI machines are currently limited to magnetic fields of 1 to 3 Tesla. If they could be built with spin-triplet superconductors, MRI could operate under higher magnetic fields to produce sharper, deeper images of the human body.

The new evidence of spin-triplet superconductivity in trilayer graphene could also help scientists design stronger superconductors for practical quantum computing.

“The value of this experiment is what it teaches us about fundamental superconductivity, about how materials can behave, so that with those lessons learned, we can try to design principles for other materials which would be easier to manufacture, that could perhaps give you better superconductivity,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.

His co-authors on the paper include postdoc Yuan Cao and graduate student Jeong Min Park at MIT, and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.

Strange shift

Superconducting materials are defined by their super-efficient ability to conduct electricity without losing energy. When exposed to an electric current, electrons in a superconductor couple up in “Cooper pairs” that then travel through the material without resistance, like passengers on an express train.

In a vast majority of superconductors, these passenger pairs have opposite spins, with one electron spinning up, and the other down — a configuration known as a “spin-singlet.” These pairs happily speed through a superconductor, except under high magnetic fields, which can shift the energy of each electron in opposite directions, pulling the pair apart. In this way, and through other mechanisms, high magnetic fields can derail superconductivity in conventional spin-singlet superconductors.
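
As a rough sketch of the pair-breaking argument (the quantitative detail is not spelled out in the article), an applied field B shifts the two opposite spins of a singlet pair by opposite Zeeman energies, and the pair breaks once that splitting becomes comparable to the superconducting gap, up to a numerical factor:

    E_{\uparrow} \rightarrow E_{\uparrow} - \mu_B B, \qquad
    E_{\downarrow} \rightarrow E_{\downarrow} + \mu_B B, \qquad
    \text{pair breaking once } 2\mu_B B \sim 2\Delta,

where \mu_B is the Bohr magneton and \Delta is the superconducting gap.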

“That’s the ultimate reason why in a large-enough magnetic field, superconductivity disappears,” Park says.

But there exists a handful of exotic superconductors that are impervious to magnetic fields, up to very large strengths. These materials superconduct through pairs of electrons with the same spin — a property known as “spin-triplet.” When exposed to high magnetic fields, the energies of both electrons in a Cooper pair shift in the same direction, so the pairs are not pulled apart but continue superconducting unperturbed, regardless of the magnetic field strength.

Jarillo-Herrero’s group was curious whether magic-angle trilayer graphene might harbor signs of this more unusual spin-triplet superconductivity. The team has produced pioneering work in the study of graphene moiré structures — layers of atom-thin carbon lattices that, when stacked at specific angles, can give rise to surprising electronic behaviors.

The researchers initially reported such curious properties in two angled sheets of graphene, which they dubbed magic-angle bilayer graphene. They soon followed up with tests of trilayer graphene, a sandwich configuration of three graphene sheets that turned out to be even stronger than its bilayer counterpart, retaining superconductivity at higher temperatures. When the researchers applied a modest magnetic field, they noticed that trilayer graphene was able to superconduct at field strengths that would destroy superconductivity in bilayer graphene.

“We thought, this is something very strange,” Jarillo-Herrero says.

A super comeback

In their new study, the physicists tested trilayer graphene’s superconductivity under increasingly high magnetic fields. They fabricated the material by peeling away atom-thin layers of carbon from a block of graphite, stacking three layers together, and rotating the middle one by 1.56 degrees with respect to the outer layers. They attached an electrode to either end of the material to run a current through it and measure any energy lost in the process. Then they turned on a large magnet in the lab, with its field oriented parallel to the material.
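
As an aside not stated in the article, the length scale of the moiré pattern produced by such a small twist can be estimated from the standard geometric relation between graphene's lattice constant a \approx 0.246 nm and the twist angle \theta:

    \lambda \;\approx\; \frac{a}{2\sin(\theta/2)} \;\approx\; \frac{0.246\ \text{nm}}{2\sin(0.78^{\circ})} \;\approx\; 9\ \text{nm},

so a 1.56-degree twist creates a superlattice whose period is tens of times the atomic spacing, which is what reorganises the electronic bands.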

As they increased the magnetic field around trilayer graphene, they observed that superconductivity held strong up to a point before disappearing, but then curiously reappeared at higher field strengths — a comeback that is highly unusual and not known to occur in conventional spin-singlet superconductors.

“In spin-singlet superconductors, if you kill superconductivity, it never comes back — it’s gone for good,” Cao says. “Here, it reappeared again. So this definitely says this material is not spin-singlet.”

They also observed that after “re-entry,” superconductivity persisted up to 10 Tesla, the maximum field strength that the lab’s magnet could produce. This is about three times higher than what the superconductor should withstand if it were a conventional spin-singlet, according to the Pauli limit, a theory that predicts the maximum magnetic field at which a material can retain superconductivity.
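
For reference (the formula is not quoted in the article), the Pauli, or Clogston-Chandrasekhar, limit for a conventional spin-singlet superconductor comes from balancing the Zeeman energy against the gap, and in practical units reads roughly

    B_P \;\approx\; \frac{\Delta_0}{\sqrt{2}\,\mu_B} \;\approx\; 1.86\ \text{T/K} \times T_c.

With an illustrative transition temperature of about 2 K (the article does not quote T_c), B_P would be roughly 3.5 to 4 Tesla, so a superconducting state that survives to 10 Tesla exceeds the estimate by about the factor of three mentioned above.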

Trilayer graphene’s reappearance of superconductivity, paired with its persistence at higher magnetic fields than predicted, rules out the possibility that the material is a run-of-the-mill superconductor. Instead, it is likely a very rare type, possibly a spin-triplet, hosting Cooper pairs that speed through the material, impervious to high magnetic fields. The team plans to drill down on the material to confirm its exact spin state, which could help to inform the design of more powerful MRI machines, and also more robust quantum computers.

“Regular quantum computing is super fragile,” Jarillo-Herrero says. “You look at it and, poof, it disappears. About 20 years ago, theorists proposed a type of topological superconductivity that, if realized in any material, could [enable] a quantum computer where states responsible for computation are very robust. That would give infinitely more power to do computing. The key ingredient to realize that would be spin-triplet superconductors, of a certain type. We have no idea if our type is of that type. But even if it’s not, this could make it easier to put trilayer graphene with other materials to engineer that kind of superconductivity. That could be a major breakthrough. But it’s still super early.”

This research was supported by the U.S. Department of Energy, the National Science Foundation, the Gordon and Betty Moore Foundation, the Fundacion Ramon Areces, and the CIFAR Quantum Materials Program.

        https://news.mit.edu/2021/magic-trilayer-graphene-superconductor-magnet-0721