Sunday, October 31, 2021

Comedian Mort Sahl Has Died

Morton Lyon Sahl (May 11, 1927 – October 26, 2021) was a Canadian-born American comedian, actor, and social satirist, considered the first modern comedian since Will Rogers.  Sahl pioneered a style of social satire that pokes fun at political and current event topics using improvised monologues and only a newspaper as a prop.

Sahl spent his early years in Los Angeles and moved to the San Francisco Bay Area where he made his professional stage debut at the hungry i nightclub in 1953.  His popularity grew quickly, and after a year at the club he traveled the country doing shows at established nightclubs, theaters, and college campuses. In 1960 he became the first comedian to have a cover story written about him by Time magazine. He appeared on various television shows, played a number of film roles, and performed a one-man show on Broadway.

Television host Steve Allen said that Sahl was "the only real political philosopher we have in modern comedy". His social satire performances broke new ground in live entertainment, as a stand-up comic talking about the real world of politics at that time was considered "revolutionary". It inspired many later comics to become stage comedians, including Lenny Bruce, Jonathan Winters, George Carlin, and Woody Allen, who credits Sahl's new style of humor with "opening up vistas for people like me".

Allen, who originated the Tonight Show, said he was "struck by how amateur he seemed," but added that the observation was not meant as a criticism, but as a "compliment". He noted that all the previous successful comics dressed formally, were glib and well-rehearsed, and were always in control of their audiences.  Allen said that Sahl's "very un-show business manner was one of the things I liked when I first saw him work."

Sahl dressed casually, with no tie and usually wearing his trademark V-neck campus-style sweater. His stage presence was seen as being "candid and cool, the antithesis of the slick comic," stated theater critic Gerald Nachman.  And although Sahl acquired a reputation for being an intellectual comedian, it was an image he disliked and disagreed with: "It was absurd. I was barely a C student," he said.  His naturalness on stage was partly due to his preferring improvisation over carefully rehearsed monologues. Sahl explained:

I never found you could write the act. You can't rehearse the audience's responses. You adjust to them every night. I come in with only an outline. You've got to have a spirit of adventure. I follow my instincts and the audience is my jury.

His casual style of stand-up, where he seemed to be one-on-one with his audience, influenced new comedians, including Lenny Bruce and Dick Gregory.  Sahl was the least controversial, however, because he dressed and looked "collegiate" and focused on politics, while Bruce confronted sexual and language conventions and Gregory focused on the civil rights movement.

Numerous politicians became his fans, with John F. Kennedy asking him to write his jokes for campaign speeches, though Sahl later turned his barbs at the president. After Kennedy's assassination in 1963, Sahl focused on what he said were the Warren Report's inaccuracies and conclusions, and spoke about it often during his shows. This alienated much of his audience and led to a decline in his popularity for the remainder of the 1960s. By the 1970s, his shows and popularity staged a partial comeback that continued over the ensuing decades.  A biography of Sahl, Last Man Standing, by James Curtis, was released in 2017.

            https://en.wikipedia.org/wiki/Mort_Sahl 

Saturday, October 30, 2021

A Planet Found Outside Our Galaxy?

Signs of a planet transiting a star outside of the Milky Way galaxy may have been detected. The finding opens up a new window to search for exoplanets at greater distances than ever before.

From:  Center for Astrophysics, Harvard & Smithsonian

October 27, 2021 – The possible exoplanet candidate is located in the spiral galaxy Messier 51 (M51), also called the Whirlpool Galaxy because of its distinctive profile.

Exoplanets are defined as planets outside of our Solar System. Until now, astronomers have found all other known exoplanets and exoplanet candidates in the Milky Way galaxy, almost all of them less than about 3,000 light-years from Earth. An exoplanet in M51 would be about 28 million light-years away, meaning it would be thousands of times farther away than those in the Milky Way.

"We are trying to open up a whole new arena for finding other worlds by searching for planet candidates at X-ray wavelengths, a strategy that makes it possible to discover them in other galaxies," said Rosanne Di Stefano of the Center for Astrophysics | Harvard & Smithsonian (CfA) in Cambridge, Massachusetts, who led the study, which was published in Nature Astronomy.

This new result is based on transits, events in which the passage of a planet in front of a star blocks some of the star's light and produces a characteristic dip. Astronomers using both ground-based and space-based telescopes -- like those on NASA's Kepler and TESS missions -- have searched for dips in optical light, electromagnetic radiation humans can see, enabling the discovery of thousands of planets.

Di Stefano and colleagues have instead searched for dips in the brightness of X-rays received from X-ray bright binaries. These luminous systems typically contain a neutron star or black hole pulling in gas from a closely orbiting companion star. The material near the neutron star or black hole becomes superheated and glows in X-rays.

Because the region producing bright X-rays is small, a planet passing in front of it could block most or all of the X-rays, making the transit easier to spot because the X-rays can completely disappear. This could allow exoplanets to be detected at much greater distances than current optical light transit studies, which must be able to detect tiny decreases in light because the planet only blocks a tiny fraction of the star.
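
To make the idea concrete, here is a minimal sketch (not the team's actual pipeline) of how such an event could be flagged in software: scan a binned X-ray light curve for a stretch where the count rate falls to nearly zero. The light-curve values and the threshold below are made-up illustrative numbers.

    # Minimal sketch (Python): flag intervals in a binned X-ray light curve where
    # the count rate drops to nearly zero, the near-total eclipse described above.
    # The counts and the threshold are illustrative, not actual Chandra data.

    def find_deep_dips(counts, threshold=0.1):
        """Return (start, end) index ranges where the binned count rate
        falls below `threshold` times the median (a near-total eclipse)."""
        median = sorted(counts)[len(counts) // 2]
        dips, start = [], None
        for i, c in enumerate(counts):
            if c < threshold * median:
                if start is None:
                    start = i
            elif start is not None:
                dips.append((start, i - 1))
                start = None
        if start is not None:
            dips.append((start, len(counts) - 1))
        return dips

    # Toy light curve: steady emission with a three-bin total eclipse in the middle.
    light_curve = [52, 49, 51, 50, 48, 1, 0, 0, 47, 50, 53, 49]
    print(find_deep_dips(light_curve))   # -> [(5, 7)]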

The team used this method to detect the exoplanet candidate in a binary system called M51-ULS-1, located in M51. This binary system contains a black hole or neutron star orbiting a companion star with a mass about 20 times that of the Sun. The X-ray transit they found using Chandra data lasted about three hours, during which the X-ray emission decreased to zero. Based on this and other information, the researchers estimate the exoplanet candidate in M51-ULS-1 would be roughly the size of Saturn, and orbit the neutron star or black hole at about twice the distance of Saturn from the Sun.

While this is a tantalizing study, more data would be needed to verify the interpretation as an extragalactic exoplanet. One challenge is that the planet candidate's large orbit means it would not cross in front of its binary partner again for about 70 years, thwarting any attempts for a confirming observation for decades.

"Unfortunately to confirm that we're seeing a planet we would likely have to wait decades to see another transit," said co-author Nia Imara of the University of California at Santa Cruz. "And because of the uncertainties about how long it takes to orbit, we wouldn't know exactly when to look."

Can the dimming have been caused by a cloud of gas and dust passing in front of the X-ray source? The researchers consider this to be an unlikely explanation, as the characteristics of the event observed in M51-ULS-1 are not consistent with the passage of such a cloud. The model of a planet candidate is, however, consistent with the data.

"We know we are making an exciting and bold claim so we expect that other astronomers will look at it very carefully," said co-author Julia Berndtsson of Princeton University in New Jersey. "We think we have a strong argument, and this process is how science works."

If a planet exists in this system, it likely had a tumultuous history and violent past. An exoplanet in the system would have had to survive a supernova explosion that created the neutron star or black hole. The future may also be dangerous. At some point the companion star could also explode as a supernova and blast the planet once again with extremely high levels of radiation.

Di Stefano and her colleagues looked for X-ray transits in three galaxies beyond the Milky Way galaxy, using both Chandra and the European Space Agency's XMM-Newton. Their search covered 55 systems in M51, 64 systems in Messier 101 (the "Pinwheel" galaxy), and 119 systems in Messier 104 (the "Sombrero" galaxy), resulting in the single exoplanet candidate described here.

The authors will search the archives of both Chandra and XMM-Newton for more exoplanet candidates in other galaxies. Substantial Chandra datasets are available for at least 20 galaxies, including some like M31 and M33 that are much closer than M51, allowing shorter transits to be detectable. Another interesting line of research is to search for X-ray transits in Milky Way X-ray sources to discover new nearby planets in unusual environments.

The other authors of this Nature Astronomy paper are Ryan Urquhart (Michigan State University), Roberto Soria (University of the Chinese Academy of Sciences), Vinay Kashyap (CfA), and Theron Carmichael (CfA). NASA's Marshall Space Flight Center manages the Chandra program. The Smithsonian Astrophysical Observatory's Chandra X-ray Center controls science from Cambridge, Massachusetts, and flight operations from Burlington, Massachusetts.

        https://www.sciencedaily.com/releases/2021/10/211027094914.htm

  

Friday, October 29, 2021

High-speed Laser Writing Method Could Pack 500 Terabytes of Data into CD-sized Glass Disc

Advances make high-density, 5D optical storage practical for long-term data archiving

From:  Optica, October 28, 2021

WASHINGTON — Researchers have developed a fast and energy-efficient laser-writing method for producing high-density nanostructures in silica glass. These tiny structures can be used for long-term five-dimensional (5D) optical data storage that is more than 10,000 times denser than Blu-ray optical disc storage technology.

“Individuals and organizations are generating ever-larger datasets, creating the desperate need for more efficient forms of data storage with a high capacity, low energy consumption and long lifetime,” said doctoral researcher Yuhao Lei from the University of Southampton in the UK. “While cloud-based systems are designed more for temporary data, we believe that 5D data storage in glass could be useful for longer-term data storage for national archives, museums, libraries or private organizations.”

In Optica, Optica Publishing Group’s journal for high-impact research, Lei and colleagues describe their new method for writing data that encompasses two optical dimensions plus three spatial dimensions. The new approach can write at speeds of 1,000,000 voxels per second, which is equivalent to recording about 230 kilobytes of data (more than 100 pages of text) per second.

“The physical mechanism we use is generic,” said Lei. “Thus, we anticipate that this energy-efficient writing method could also be used for fast nanostructuring in transparent materials for applications in 3D integrated optics and microfluidics.”

Faster, better laser writing

Although 5D optical data storage in transparent materials has been demonstrated before, writing data fast enough and with a high enough density for real-world applications has proved challenging. To overcome this hurdle, the researchers used a femtosecond laser with a high repetition rate to create tiny pits containing a single nanolamella-like structure measuring just 500 by 50 nanometers each.

Rather than using the femtosecond laser to write directly in the glass, the researchers harnessed the light to produce an optical phenomenon known as near-field enhancement, in which a nanolamella-like structure is created by a few weak light pulses, from an isotropic nanovoid generated by a single pulse microexplosion. Using near-field enhancement to make the nanostructures minimized the thermal damage that has been problematic for other approaches that use high-repetition-rate lasers.

Because the nanostructures are anisotropic, they produce birefringence that can be characterized by the light’s slow axis orientation (4th dimension, corresponding to the orientation of the nanolamella-like structure) and strength of retardance (5th dimension, defined by the size of nanostructure). As data is recorded into the glass, the slow axis orientation and strength of retardance can be controlled by the polarization and intensity of light, respectively.
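
As a rough illustration of how these two extra optical "dimensions" can carry data, the sketch below maps 4 bits per voxel onto one of four slow-axis orientations and one of four retardance levels. The 2-plus-2 bit split and the specific angle and level values are hypothetical; the article only states that orientation is set by the polarization of the writing light and retardance by its intensity.

    # Hypothetical encoding sketch: 4 bits per voxel as (slow-axis angle, retardance level).
    # The 2+2 bit split and the concrete angle/level values are illustrative assumptions,
    # not the scheme used in the Optica paper.

    ANGLES_DEG = [0, 45, 90, 135]        # 4th dimension: slow-axis orientation (polarization)
    RETARDANCE_LEVELS = [1, 2, 3, 4]     # 5th dimension: retardance strength (intensity)

    def encode_voxel(nibble):
        """Map a 4-bit value (0-15) to an (angle, retardance) pair."""
        assert 0 <= nibble <= 15
        return ANGLES_DEG[nibble >> 2], RETARDANCE_LEVELS[nibble & 0b11]

    def decode_voxel(angle, level):
        return (ANGLES_DEG.index(angle) << 2) | RETARDANCE_LEVELS.index(level)

    byte = ord("A")                      # two voxels (4 bits each) per text character
    voxels = [encode_voxel(byte >> 4), encode_voxel(byte & 0x0F)]
    print(voxels)                        # -> [(45, 1), (0, 2)]
    decoded = (decode_voxel(*voxels[0]) << 4) | decode_voxel(*voxels[1])
    print(chr(decoded))                  # -> A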

“This new approach improves the data writing speed to a practical level, so we can write tens of gigabytes of data in a reasonable time,” said Lei. “The highly localized, precision nanostructures enable a higher data capacity because more voxels can be written in a unit volume. In addition, using pulsed light reduces the energy needed for writing.” 

Writing data on a glass CD

The researchers used their new method to write 5 gigabytes of text data onto a silica glass disc about the size of a conventional compact disc with nearly 100% readout accuracy. Each voxel contained four bits of information, and every two voxels corresponded to a text character. With the writing density available from the method, the disc would be able to hold 500 terabytes of data. With upgrades to the system that allow parallel writing, the researchers say it should be feasible to write this amount of data in about 60 days.
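
A quick back-of-the-envelope check, assuming the quoted 230-kilobyte-per-second single-beam rate and decimal (SI) units, shows why parallel writing is needed to reach 500 terabytes in roughly 60 days.

    # Back-of-the-envelope check of the writing-time claim. Assumes the quoted
    # single-beam rate of ~230 kilobytes per second and decimal (SI) units.

    capacity_bytes = 500e12            # 500 TB disc capacity
    single_beam_rate = 230e3           # ~230 kB/s quoted data rate

    single_beam_days = capacity_bytes / single_beam_rate / 86400
    print(f"single beam: ~{single_beam_days:,.0f} days (~{single_beam_days / 365:.0f} years)")

    target_days = 60                   # the reported parallel-writing estimate
    beams_needed = single_beam_days / target_days
    print(f"~{beams_needed:.0f}-fold parallelism needed to finish in {target_days} days")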

“With the current system, we have the ability to preserve terabytes of data, which could be used, for example, to preserve information from a person’s DNA,” said Peter G. Kazansky, leader of the researcher team.

The researchers are now working to increase the writing speed of their method and to make the technology usable outside the laboratory. Faster methods for reading the data will also have to be developed for practical data storage applications.

Paper: Y. Lei, M. Sakakura, L. Wang, Y. Yu, H. Wang, G. Shayeganrad, P. G. Kazansky, “High speed ultrafast laser anisotropic nanostructuring by energy deposition control via near-field enhancement,” Optica, 8, 11, 1365-1371 (2021).

            https://www.eurekalert.org/news-releases/932605

Thursday, October 28, 2021

Teaching Robots to Think Like Us

Can intelligence be taught to robots? Advances in physical reservoir computing, a technology that makes sense of brain signals, could contribute to creating artificial intelligence machines that think as we do.

From:  American Institute of Physics

October 26, 2021 -- In Applied Physics Letters, from AIP Publishing, researchers from the University of Tokyo outline how a robot could be taught to navigate a maze by electrically stimulating a culture of brain nerve cells connected to the machine. These nerve cells, or neurons, were grown from living cells and acted as the physical reservoir for the computer to construct coherent signals.

The signals were regarded as homeostatic signals, telling the robot that its internal environment was being maintained within a certain range and acting as a baseline as it moved freely through the maze.

Whenever the robot veered in the wrong direction or faced the wrong way, the neurons in the cell culture were disturbed by an electric impulse. Throughout trials, the robot was continually fed the homeostatic signals interrupted by the disturbance signals until it had successfully solved the maze task.

These findings suggest goal-directed behavior can be generated without any additional learning by sending disturbance signals to an embodied system. The robot could not see the environment or obtain other sensory information, so it was entirely dependent on the electrical trial-and-error impulses.
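
The closed loop described above can be caricatured in a few lines of code: the agent keeps doing whatever it is doing while only the "homeostatic" baseline arrives, and perturbs itself with a new random action whenever a "disturbance" is delivered. The toy one-dimensional corridor below stands in for both the maze and the neuron culture; it is meant only to show how trial-and-error impulses alone can produce goal-directed movement, not to reproduce the physical reservoir experiment.

    import random

    # Toy stand-in for the closed loop: a one-dimensional corridor with the goal at
    # the right end. "Homeostasis" means keep the current move; a "disturbance" means
    # the move was wrong, so the agent perturbs itself and redraws a move at random.
    # No learning is stored anywhere; behavior is shaped by the feedback alone.

    GOAL, START = 10, 0

    def disturbed(move):
        """A disturbance is delivered whenever the agent heads away from the goal."""
        return move < 0

    def run(seed=0):
        random.seed(seed)
        position, move, steps = START, random.choice([-1, 1]), 0
        while position != GOAL:
            if disturbed(move):
                move = random.choice([-1, 1])   # disturbance: perturb and try again
            else:
                position += move                # homeostasis: keep doing the same thing
            steps += 1
        return steps

    print(run())   # reaches the goal by trial and error alone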

"I, myself, was inspired by our experiments to hypothesize that intelligence in a living system emerges from a mechanism extracting a coherent output from a disorganized state, or a chaotic state," said co-author Hirokazu Takahashi, an associate professor of mechano-informatics.

Using this principle, the researchers show intelligent task-solving abilities can be produced using physical reservoir computers to extract chaotic neuronal signals and deliver homeostatic or disturbance signals. In doing so, the computer creates a reservoir that understands how to solve the task.

"A brain of [an] elementary school kid is unable to solve mathematical problems in a college admission exam, possibly because the dynamics of the brain or their 'physical reservoir computer' is not rich enough," said Takahashi. "Task-solving ability is determined by how rich a repertoire of spatiotemporal patterns the network can generate."

The team believes using physical reservoir computing in this context will contribute to a better understanding of the brain's mechanisms and may lead to the novel development of a neuromorphic computer.

          https://www.sciencedaily.com/releases/2021/10/211026124247.htm

  

Wednesday, October 27, 2021

Neutron Star Collisions Are a “Goldmine” of Heavy Elements, Study Finds

Mergers between two neutron stars have produced more heavy elements in the last 2.5 billion years than mergers between neutron stars and black holes.

By Jennifer Chu, MIT News Office

October 25, 2021 -- Most elements lighter than iron are forged in the cores of stars. A star’s white-hot center fuels the fusion of protons, squeezing them together to build progressively heavier elements. But beyond iron, scientists have puzzled over what could give rise to gold, platinum, and the rest of the universe’s heavy elements, whose formation requires more energy than a star can muster.

A new study by researchers at MIT and the University of New Hampshire finds that of two long-suspected sources of heavy metals, one is more of a goldmine than the other.

The study, published today in Astrophysical Journal Letters, reports that in the last 2.5 billion years, more heavy metals were produced in binary neutron star mergers, or collisions between two neutron stars, than in mergers between a neutron star and a black hole.

The study is the first to compare the two merger types in terms of their heavy metal output, and suggests that binary neutron stars are a likely cosmic source for the gold, platinum, and other heavy metals we see today. The findings could also help scientists determine the rate at which heavy metals are produced across the universe.

“What we find exciting about our result is that to some level of confidence we can say binary neutron stars are probably more of a goldmine than neutron star-black hole mergers,” says lead author Hsin-Yu Chen, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. 

Chen’s co-authors are Salvatore Vitale, assistant professor of physics at MIT, and Francois Foucart of UNH.

An efficient flash

As stars undergo nuclear fusion, they require energy to fuse protons to form heavier elements. Stars are efficient in churning out lighter elements, from hydrogen to iron. Fusing more than the 26 protons in iron, however, becomes energetically inefficient.

“If you want to go past iron and build heavier elements like gold and platinum, you need some other way to throw protons together,” Vitale says.

Scientists have suspected supernovae might be an answer. When a massive star collapses in a supernova, the iron at its center could conceivably combine with lighter elements in the extreme fallout to generate heavier elements.

In 2017, however, a promising candidate was confirmed, in the form of a binary neutron star merger, detected for the first time by LIGO and Virgo, the gravitational-wave observatories in the United States and in Italy, respectively. The detectors picked up gravitational waves, or ripples through space-time, that originated 130 million light years from Earth, from a collision between two neutron stars — collapsed cores of massive stars that are packed with neutrons and are among the densest objects in the universe.

The cosmic merger emitted a flash of light, which contained signatures of heavy metals.

“The magnitude of gold produced in the merger was equivalent to several times the mass of the Earth,” Chen says. “That entirely changed the picture. The math showed that binary neutron stars were a more efficient way to create heavy elements, compared to supernovae.”

A binary goldmine

Chen and her colleagues wondered: How might neutron star mergers compare to collisions between a neutron star and a black hole? This is another merger type that has been detected by LIGO and Virgo and could potentially be a heavy metal factory. Under certain conditions, scientists suspect, a black hole could disrupt a neutron star such that it would spark and spew heavy metals before the black hole completely swallowed the star.

The team set out to determine the amount of gold and other heavy metals each type of merger could typically produce. For their analysis, they focused on LIGO and Virgo’s detections to date of two binary neutron star mergers and two neutron star – black hole mergers.

The researchers first estimated the mass of each object in each merger, as well as the rotational speed of each black hole, reasoning that if a black hole is too massive or slow, it would swallow a neutron star before it had a chance to produce heavy elements. They also determined each neutron star’s resistance to being disrupted. The more resistant a star, the less likely it is to churn out heavy elements. They also estimated how often one merger occurs compared to the other, based on observations by LIGO, Virgo, and other observatories.

Finally, the team used numerical simulations developed by Foucart to calculate the average amount of gold and other heavy metals each merger would produce, given varying combinations of the objects’ mass, rotation, degree of disruption, and rate of occurrence.

On average, the researchers found that binary neutron star mergers could generate two to 100 times more heavy metals than mergers between neutron stars and black holes. The four mergers on which they based their analysis are estimated to have occurred within the last 2.5 billion years. They conclude then, that during this period, at least, more heavy elements were produced by binary neutron star mergers than by collisions between neutron stars and black holes.
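
The comparison described above reduces to bookkeeping of the form "events per unit time times typical yield per event" for each merger channel. The sketch below spells out that arithmetic with placeholder numbers; the rates and per-event yields are purely hypothetical and are not values from the study.

    # Bookkeeping behind the comparison: total heavy-element production per channel
    # = merger rate x average yield per event x elapsed time. All numbers below are
    # hypothetical placeholders, not the rates or yields estimated in the study.

    def total_yield(rate_per_gpc3_yr, mean_ejecta_msun, time_yr=2.5e9):
        """Heavy-element mass produced per Gpc^3 over `time_yr` years (solar masses)."""
        return rate_per_gpc3_yr * mean_ejecta_msun * time_yr

    bns  = total_yield(rate_per_gpc3_yr=300e-9, mean_ejecta_msun=0.03)   # hypothetical
    nsbh = total_yield(rate_per_gpc3_yr=50e-9,  mean_ejecta_msun=0.01)   # hypothetical

    print(f"binary neutron star mergers   : {bns:.1f} Msun per Gpc^3")
    print(f"neutron star-black hole mergers: {nsbh:.1f} Msun per Gpc^3")
    print(f"ratio (BNS / NSBH)            : {bns / nsbh:.0f}x")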

The scales could tip in favor of neutron star-black hole mergers if the black holes had high spins and low masses. However, scientists have not yet observed these kinds of black holes in the two mergers detected to date.

Chen and her colleagues hope that, as LIGO and Virgo resume observations next year, more detections will improve the team’s estimates for the rate at which each merger produces heavy elements. These rates, in turn, may help scientists determine the age of distant galaxies, based on the abundance of their various elements.

“You can use heavy metals the same way we use carbon to date dinosaur remains,” Vitale says. “Because all these phenomena have different intrinsic rates and yields of heavy elements, that will affect how you attach a time stamp to a galaxy. So, this kind of study can improve those analyses.”

This research was funded, in part, by NASA, the National Science Foundation, and the LIGO Laboratory.

     https://news.mit.edu/2021/neutron-star-collisions-goldmine-heavy-elements-1025

Tuesday, October 26, 2021

A.I. May Spot Unseen Signs of Heart Failure

New self-learning algorithm may detect blood pumping problems by reading electrocardiograms

From:  The Mount Sinai Hospital / Mount Sinai School of Medicine

October 18, 2021 -- A special artificial intelligence (AI)-based computer algorithm created by Mount Sinai researchers was able to learn how to identify subtle changes in electrocardiograms (also known as ECGs or EKGs) to predict whether a patient was experiencing heart failure.

"We showed that deep-learning algorithms can recognize blood pumping problems on both sides of the heart from ECG waveform data," said Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai, and a senior author of the study published in the Journal of the American College of Cardiology: Cardiovascular Imaging. "Ordinarily, diagnosing these type of heart conditions requires expensive and time-consuming procedures. We hope that this algorithm will enable quicker diagnosis of heart failure."

The study was led by Akhil Vaid, MD, a postdoctoral scholar who works in both the Glicksberg lab and one led by Girish N. Nadkarni, MD, MPH, CPH, Associate Professor of Medicine at the Icahn School of Medicine at Mount Sinai, Chief of the Division of Data-Driven and Digital Medicine (D3M), and a senior author of the study.

Affecting about 6.2 million Americans, heart failure, or congestive heart failure, occurs when the heart pumps less blood than the body normally needs. For years doctors have relied heavily on an imaging technique called an echocardiogram to assess whether a patient may be experiencing heart failure. While helpful, echocardiograms can be labor-intensive procedures that are only offered at select hospitals.

However, recent breakthroughs in artificial intelligence suggest that electrocardiograms -- recordings from a widely used electrical monitoring device -- could be a fast and readily available alternative in these cases. For instance, many studies have shown how a "deep-learning" algorithm can detect weakness in the heart's left ventricle, which pushes freshly oxygenated blood out to the rest of the body. In this study, the researchers described the development of an algorithm that not only assessed the strength of the left ventricle but also the right ventricle, which takes deoxygenated blood streaming in from the body and pumps it to the lungs.

"Although appealing, traditionally it has been challenging for physicians to use ECGs to diagnose heart failure. This is partly because there is no established diagnostic criteria for these assessments and because some changes in ECG readouts are simply too subtle for the human eye to detect," said Dr. Nadkarni. "This study represents an exciting step forward in finding information hidden within the ECG data which can lead to better screening and treatment paradigms using a relatively simple and widely available test."

Typically, an electrocardiogram involves a two-step process. Wire leads are taped to different parts of a patient's chest and within minutes a specially designed, portable machine prints out a series of squiggly lines, or waveforms, representing the heart's electrical activity. These machines can be found in most hospitals and ambulances throughout the United States and require minimal training to operate.

For this study, the researchers programmed a computer to read patient electrocardiograms along with data extracted from written reports summarizing the results of corresponding echocardiograms taken from the same patients. In this situation, the written reports acted as a standard set of data for the computer to compare with the electrocardiogram data and learn how to spot weaker hearts.

Natural language processing programs helped the computer extract data from the written reports. Meanwhile, special neural networks capable of discovering patterns in images were incorporated to help the algorithm learn to recognize pumping strengths.

"We wanted to push the state of the art by developing AI capable of understanding the entire heart easily and inexpensively," said Dr. Vaid.

The computer then read more than 700,000 electrocardiograms and echocardiogram reports obtained from 150,000 Mount Sinai Health System patients from 2003 to 2020. Data from four hospitals was used to train the computer, whereas data from a fifth one was used to test how the algorithm would perform in a different experimental setting.

"A potential advantage of this study is that it involved one of the largest collections of ECGs from one of the most diverse patient populations in the world," said Dr. Nadkarni.

Initial results suggested that the algorithm was effective at predicting which patients would have either healthy or very weak left ventricles. Here strength was defined by left ventricle ejection fraction, an estimate of how much blood the ventricle pumps out with each beat as observed on echocardiograms. Healthy hearts have an ejection fraction of 50 percent or greater while weak hearts have ones that are equal to or below 40 percent.

The algorithm was 94 percent accurate at predicting which patients had a healthy ejection fraction and 87 percent accurate at predicting those who had an ejection fraction that was below 40 percent.

However, the algorithm was not as effective at predicting which patients would have slightly weakened hearts. In this case, the program was 73 percent accurate at predicting the patients who had an ejection fraction that was between 40 and 50 percent.
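
The classes being predicted come directly from the ejection-fraction ranges quoted above. A small labeling helper along the lines of the sketch below (the function name and interface are assumptions, not the study's actual pipeline) would turn an EF value extracted from an echocardiogram report into one of the three categories used to score the algorithm.

    # Illustrative labeling helper using the ejection-fraction (EF) ranges quoted in
    # the article. The function name and interface are assumptions for this sketch,
    # not the study's natural-language-processing pipeline.

    def label_left_ventricle(ef_percent):
        """Map an EF value extracted from an echocardiogram report to a class."""
        if ef_percent >= 50:
            return "healthy"        # EF of 50 percent or greater
        if ef_percent <= 40:
            return "weak"           # EF equal to or below 40 percent
        return "borderline"         # EF between 40 and 50 percent

    for ef in (62, 45, 33):
        print(ef, "->", label_left_ventricle(ef))   # healthy, borderline, weak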

Further results suggested that the algorithm also learned to detect right ventricle weaknesses from the electrocardiograms. In this case, weakness was defined by more descriptive terms extracted from the echocardiogram reports. Here the algorithm was 84 percent accurate at predicting which patients had weak right ventricles.

"Our results suggested that this algorithm may eventually help doctors correctly diagnose failure on either side of the heart," Dr. Vaid said.

Finally, additional analysis suggested that the algorithm may be effective at detecting heart weakness in all patients, regardless of race and gender.

"Our results suggest that this algorithm could be a useful tool for helping clinical practitioners combat heart failure suffered by a variety of patients," added Dr. Glicksberg. "We are in the process of carefully designing prospective trials to test out its effectiveness in a more real-world setting."

This study was supported by the National Institutes of Health (TR001433).

https://www.sciencedaily.com/releases/2021/10/211018172246.htm#

 

Monday, October 25, 2021

Huge Coal Usage in Asia Continues

Historical analysis finds no precedent for the rate of coal and gas power decline needed to limit climate change to 1.5°C

From: Cell Press

October 22, 2021 – Limiting climate change to the 1.5°C target set by the Paris Climate Agreement will likely require coal and gas power use to decline at rates that are unprecedented for any large country, an analysis of decadal episodes of fossil fuel decline in 105 countries between 1960 and 2018 shows. Furthermore, the findings, published October 22 in the journal One Earth, suggest that the most rapid historical cases of fossil fuel decline occurred when oil was replaced by coal, gas, or nuclear power in response to energy security threats of the 1970s and the 1980s.

Decarbonizing the energy sector is a particularly important strategy for reaching the goal of net-zero greenhouse gas emissions by 2050, which is necessary in order to prevent global average temperatures from climbing beyond 1.5°C this century. However, few studies have investigated the historical precedent for such a sudden and sweeping transition -- especially the decline of carbon-intensive technologies that must accompany the widespread adoption of greener ones.

"This is the first study that systematically analyzed historical cases of decline in fossil fuel use in individual countries over the last 60 years and around the world," says Jessica Jewell (@jessicadjewell), an associate professor in energy transitions at Chalmers University in Sweden, a professor at the University of Bergen in Norway, and the corresponding author of the study. "Prior studies sometimes looked at the world as a whole but failed to find such cases, because on the global level the use of fossil fuels has always grown over time."

"We also studied recent political pledges to completely phase out coal power, which some 30 countries made as part of the Powering Past Coal Alliance. We found that these pledges do not aim for faster coal decline than what has occurred historically," adds Jewell. "In other words, they plan for largely business as usual."

To explore whether any periods of historical fossil fuel decline are similar to scenarios needed to achieve the Paris target, Jewell and her colleagues, Vadim Vinichenko, a post-doctoral researcher at Chalmers and Aleh Cherp, a professor at Central European University in Austria and Lund University in Sweden, identified 147 episodes within a sample of 105 countries between 1960 and 2018 in which coal, oil, or natural gas use declined faster than 5% over a decade. Rapid decline in fossil fuel use has been historically limited to small countries, such as Denmark, but such cases are less relevant to climate scenarios, where decline should take place in continental-size regions.

Jewell and colleagues focused the investigation on cases with fast rates of fossil fuel decline in larger countries, which indicate significant technological shifts or policy efforts, and controlled for the size of the energy sector, the growth in electricity demand, and the type of energy with which the declining fossil fuel was substituted. They compared these cases of historical fossil fuel decline to climate mitigation scenarios using a tool called "feasibility space," which identifies combinations of conditions that make a climate action feasible in particular contexts.
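
A simplified version of the kind of rate metric such a comparison rests on is sketched below: the decadal drop in a fuel's use, normalized by the size of the country's electricity supply, so that small and large systems can be placed in the same feasibility space. The exact normalization the authors use may differ; this is only meant to show the shape of the calculation, and the example numbers are hypothetical.

    # Simplified decline-rate metric for a feasibility-space style comparison:
    # how fast a fuel fell over a decade, normalized by total electricity supply.
    # The metric in the One Earth paper may differ; numbers here are hypothetical.

    def normalized_decline(fuel_start_twh, fuel_end_twh, total_supply_twh, years=10):
        """Average annual decline, as a share of total electricity supply per year."""
        return (fuel_start_twh - fuel_end_twh) / years / total_supply_twh

    # Hypothetical country: coal generation falls from 120 TWh to 60 TWh over a
    # decade within a 400 TWh electricity system.
    rate = normalized_decline(120, 60, 400)
    print(f"decline rate: {rate:.1%} of supply per year")   # -> 1.5% of supply per year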

"We were surprised to find that the use of some fossil fuels, particularly oil, actually declined quite rapidly in the 1970s and the 1980s in Western Europe and other industrialized countries like Japan," says Jewell. "This is not the time period that is typically associated with energy transitions, but we came to believe that some important lessons can be drawn from there." Rapid decline of fossils historically required advances in competing technologies, strong motivation to change energy systems (such as to avoid energy security threats), and effective government institutions to implement the required changes.

"We were less surprised, but still somewhat impressed, by how fast the use of coal must decline in the future to reach climate targets," she adds, noting that, of all the fossil fuels, coal would need to decline the most rapidly to meet climate targets, particularly in Asia and the OECD regions where coal use is concentrated.

About one-half of the IPCC 1.5°C-compatible scenarios envision coal decline in Asia faster than in any of these cases. The remaining scenarios, as well as many scenarios for coal and gas decline in other regions, only have precedents where oil was replaced by coal, gas or nuclear power in response to energy security threats in smaller electricity markets. Achieving the 1.5°C target requires finding mechanisms of fossil fuel decline that extend far beyond historical experience or current pledges.

The authors found that nearly all scenarios for the decline of coal in Asia in line with Paris Agreement's goals would be historically unprecedented or have rare precedents. Over half of scenarios envisioned for coal decline in OECD countries and over half of scenarios for cutting gas use in reforming economies, the Middle East, or Africa would also be unprecedented or have rare precedents as well.

"This signals both an enormous challenge of seeing through such rapid decline of fossil fuels and the need to learn from historical lessons when rapid declines were achieved on the national scale," says Jewell.

         https://www.sciencedaily.com/releases/2021/10/211022123755.htm 

Sunday, October 24, 2021

Specially Treated Wood Is Hard as Nails

Researchers make hardened wooden knives that slice through steak

From: Cell Press

October 20, 2021 -- The sharpest knives available are made of either steel or ceramic, both of which are human-made materials that must be forged in furnaces under extreme temperatures. Now, researchers have developed a potentially more sustainable way to make sharp knives: using hardened wood. The method, presented October 20th in the journal Matter, makes wood 23 times harder, and a knife made from the material is nearly three times sharper than a stainless-steel dinner table knife.

"The knife cuts through a medium-well done steak easily, with similar performance to a dinner table knife," says Teng Li (@ToLiTeng), the senior author of the study and a materials scientist at the University of Maryland. Afterwards, the hardened wood knife can be washed and reused, making it a promising alternative to steel, ceramic, and disposable plastic knives.

Li and his team also demonstrated that their material can be used to produce wooden nails as sharp as conventional steel nails. Unlike steel nails, the wooden nails the team developed are resistant to rusting. The researchers showed that these wooden nails could be used to hammer together three boards without any damage to the nail. In addition to knives and nails, Li hopes that, in the future, the material can also be used to make hardwood flooring that is more resistant to scratching and wear.

While Li's method to produce hardened wood is new, wood processing in general has been around for centuries. However, when wood is prepared for furniture or building materials, it is only processed with steam and compression, and the material rebounds somewhat after shaping. "When you look around at the hard materials you use in your daily life, you see many of them are human-made materials because natural materials won't necessarily satisfy what we need," says Li.

"Cellulose, the main component of wood, has a higher ratio of strength to density than most engineered materials, like ceramics, metals, and polymers, but our existing usage of wood barely touches its full potential," he says. Even though it's often used in building, wood's strength falls short of that of cellulose. This is because wood is made up of only 40%-50% cellulose, with the rest consisting of hemicellulose and lignin, which acts as a binder.

Li and his team sought to process wood in such a way to remove the weaker components while not destroying the cellulose skeleton. "It's a two-step process," says Li. "In the first step, we partially delignify wood. Typically, wood is very rigid, but after removal of the lignin, it becomes soft, flexible, and somewhat squishy. In the second step, we do a hot press by applying pressure and heat to the chemically processed wood to densify and remove the water."

After the material is processed and carved into the desired shape, it is coated in mineral oil to extend its lifetime. Cellulose tends to absorb water, so this coating preserves the knife's sharpness during use and when it is washed in the sink or dishwasher.

Using high-resolution microscopy, Li and his team examined the microstructure of the hardened wood to determine the origin of its strength. "The strength of a piece of material is very sensitive to the size and density of defects, like voids, channels, or pits," says Li. "The two-step process we are using to process the natural wood significantly reduces or removes the defects in natural wood, so those channels to transport water or other nutrients in the tree are almost gone."

This wood-hardening process has the potential to be more energy efficient and have a lower environmental impact than the manufacture of other human-made materials, although more in-depth analysis is necessary to say for sure. The first step requires boiling the wood at 100° Celsius in a bath of chemicals, which could potentially be reused from batch to batch. For comparison, the process used to make ceramics requires heating materials up to a few thousand degrees Celsius.

"In our kitchen, we have many wood pieces that we use for a very long time, like a cutting board, chopsticks, or a rolling pin," says Li. "These knives, too, can be used many times if you resurface them, sharpen them, and perform the same regular upkeep."

Journal Reference:

  1. Chen et al. Hardened Wood as a Renewable Alternative to Steel and Plastic. Matter, 2021. DOI: 10.1016/j.matt.2021.09.020

https://www.sciencedaily.com/releases/2021/10/211020135928.htm

 

Saturday, October 23, 2021

CRISPR Gene Editing

CRISPR gene editing (pronounced "crisper") is a genetic engineering technique in molecular biology by which the genomes of living organisms may be modified. It is based on a simplified version of the bacterial CRISPR-Cas9 antiviral defense system. By delivering the Cas9 nuclease complexed with a synthetic guide RNA (gRNA) into a cell, the cell's genome can be cut at a desired location, allowing existing genes to be removed and/or new ones added in vivo (in living organisms).

The technique is considered highly significant in biotechnology and medicine as it allows for the genomes to be edited in vivo with extremely high precision, cheaply, and with ease. It can be used in the creation of new medicines, agricultural products, and genetically modified organisms, or as a means of controlling pathogens and pests. It also has possibilities in the treatment of inherited genetic diseases as well as diseases arising from somatic mutations such as cancer. However, its use in human germline genetic modification is highly controversial. The development of the technique earned Jennifer Doudna and Emmanuelle Charpentier the Nobel Prize in Chemistry in 2020.  The third researcher group that shared the Kavli Prize for the same discovery, led by Virginijus Šikšnys, was not awarded the Nobel Prize.

Working like genetic scissors, the Cas9 nuclease opens both strands of the targeted sequence of DNA to introduce the modification by one of two methods. Knock-in mutations, facilitated via homology-directed repair (HDR), are the traditional pathway of targeted genomic editing approaches.  This allows for the introduction of targeted DNA damage and repair.  HDR employs the use of similar DNA sequences to drive the repair of the break via the incorporation of exogenous DNA to function as the repair template.  This method relies on the periodic and isolated occurrence of DNA damage at the target site in order for the repair to commence. Knock-out mutations caused by CRISPR-Cas9 result in the repair of the double-stranded break by means of non-homologous end joining (NHEJ). NHEJ can often result in random deletions or insertions at the repair site, which may disrupt or alter gene functionality. Therefore, genomic engineering by CRISPR-Cas9 gives researchers the ability to generate targeted random gene disruption. Because of this, the precision of genome editing is a great concern. Genomic editing leads to irreversible changes to the genome.

While genome editing in eukaryotic cells has been possible using various methods since the 1980s, the methods employed had proved to be inefficient and impractical to implement on a large scale. With the discovery of CRISPR and specifically the Cas9 nuclease molecule, efficient and highly selective editing is now a reality. Cas9 derived from the bacterial species Streptococcus pyogenes has facilitated targeted genomic modification in eukaryotic cells by allowing for a reliable method of creating a targeted break at a specific location as designated by the crRNA and tracrRNA guide strands.  The ease with which researchers can insert Cas9 and template RNA in order to silence or cause point mutations at specific loci has proved invaluable to the quick and efficient mapping of genomic models and biological processes associated with various genes in a variety of eukaryotes. Newly engineered variants of the Cas9 nuclease have been developed that significantly reduce off-target activity.  

CRISPR-Cas9 genome editing techniques have many potential applications, including in medicine and agriculture. The use of the CRISPR-Cas9-gRNA complex for genome editing was the AAAS's choice for Breakthrough of the Year in 2015.  Many bioethical concerns have been raised about the prospect of using CRISPR for germline editing, especially in human embryos.

History of CRISPR Gene Editing

Other methods

In the early 2000s, German researchers began developing zinc finger nucleases (ZFNs), synthetic proteins whose DNA-binding domains enable them to create double-stranded breaks in DNA at specific points. ZFNs have higher precision and the advantage of being smaller than Cas9, but ZFNs are not as commonly used as CRISPR-based methods. Sangamo provides ZFNs via industry and academic partnerships but holds the modules, expertise—and patents—for making them. In 2010, synthetic nucleases called transcription activator-like effector nucleases (TALENs) provided an easier way to target a double-stranded break to a specific location on the DNA strand. Both zinc finger nucleases and TALENs require the design and creation of a custom protein for each targeted DNA sequence, which is a much more difficult and time-consuming process than that of designing guide RNAs. CRISPRs are much easier to design because the process requires synthesizing only a short RNA sequence, a procedure that is already widely used for many other molecular biology techniques (e.g., creating oligonucleotide primers).
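
As an illustration of why guide design is largely a sequence-search problem, the sketch below scans a DNA string for 20-nucleotide protospacers sitting immediately upstream of an NGG PAM, the motif recognized by the Streptococcus pyogenes Cas9 mentioned earlier in the article. The example sequence is made up, and real design tools also check the reverse strand and score off-target matches, which this toy does not.

    import re

    # Toy guide search for S. pyogenes Cas9: report every 20-nt protospacer that is
    # immediately followed by an NGG PAM on the given strand. The sequence is made
    # up; real tools also scan the reverse strand and score off-target matches.

    def find_guides(dna):
        dna = dna.upper()
        guides = []
        # lookahead so that overlapping candidates are all reported
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
            guides.append({"protospacer": m.group(1), "pam": m.group(2), "start": m.start()})
        return guides

    example = "TTGACCTGAAGCTGACCGGTACGTTAGCATGGAAACTTCCAGGATCCTAGG"
    for g in find_guides(example):
        print(g["start"], g["protospacer"], g["pam"])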

Whereas methods such as RNA interference (RNAi) do not fully suppress gene function, CRISPR, ZFNs, and TALENs provide full irreversible gene knockout.  CRISPR can also target several DNA sites simultaneously simply by introducing different gRNAs. In addition, the costs of employing CRISPR are relatively low.

Discovery

In 2012 Jennifer Doudna and Emmanuelle Charpentier published their finding that CRISPR-Cas9 could be programmed with RNA to edit genomic DNA, now considered one of the most significant discoveries in the history of biology.

Patents and commercialization

As of November 2013, SAGE Labs (part of Horizon Discovery group) had exclusive rights from one of those companies to produce and sell genetically engineered rats and non-exclusive rights for mouse and rabbit models.  By 2015, Thermo Fisher Scientific had licensed intellectual property from ToolGen to develop CRISPR reagent kits.

As of December 2014, patent rights to CRISPR were contested. Several companies formed to develop related drugs and research tools.  As companies ramped up financing, doubts as to whether CRISPR could be quickly monetized were raised.  In February 2017 the US Patent Office ruled on a patent interference case brought by University of California with respect to patents issued to the Broad Institute, and found that the Broad patents, with claims covering the application of CRISPR-Cas9 in eukaryotic cells, were distinct from the inventions claimed by University of California.  Shortly after, University of California filed an appeal of this ruling.

Recent events

In March 2017, the European Patent Office (EPO) announced its intention to allow claims for editing all types of cells to Max-Planck Institute in Berlin, University of California, and University of Vienna, and in August 2017, the EPO announced its intention to allow CRISPR claims in a patent application that MilliporeSigma had filed.  As of August 2017 the patent situation in Europe was complex, with MilliporeSigma, ToolGen, Vilnius University, and Harvard contending for claims, along with University of California and Broad.

In July 2018, the ECJ ruled that gene editing for plants was a sub-category of GMO foods and therefore that the CRISPR technique would henceforth be regulated in the European Union by their rules and regulations for GMOs.

In February 2020, a US trial showed that CRISPR gene editing could be carried out safely on three cancer patients.

In October 2020, researchers Emmanuelle Charpentier and Jennifer Doudna were awarded the Nobel Prize in Chemistry for their work in this field.  They made history as the first two women to share this award without a male contributor.

In June 2021, the first, small clinical trial of intravenous CRISPR gene editing in humans concluded with promising results.

             https://en.wikipedia.org/wiki/CRISPR_gene_editing

 

Friday, October 22, 2021

Strange Radio Waves from Milky Way Center

A variable signal aligned to the heart of the Milky Way is tantalizing scientists

From:  University of Sydney

October 12, 2021 -- The radio waves fit no currently understood pattern of variable radio source and could suggest a new class of stellar object.

"The strangest property of this new signal is that it is has a very high polarization. This means its light oscillates in only one direction, but that direction rotates with time," said Ziteng Wang, lead author of the new study and a PhD student in the School of Physics at the University of Sydney.

"The brightness of the object also varies dramatically, by a factor of 100, and the signal switches on and off apparently at random. We've never seen anything like it."

Many types of star emit variable light across the electromagnetic spectrum. With tremendous advances in radio astronomy, the study of variable or transient objects in radio waves is a huge field of study helping us to reveal the secrets of the Universe. Pulsars, supernovae, flaring stars and fast radio bursts are all types of astronomical objects whose brightness varies.

"At first we thought it could be a pulsar -- a very dense type of spinning dead star -- or else a type of star that emits huge solar flares. But the signals from this new source don't match what we expect from these types of celestial objects," Mr. Wang said.

The discovery of the object has been published today in the Astrophysical Journal.

Mr Wang and an international team, including scientists from Australia's national science agency CSIRO, Germany, the United States, Canada, South Africa, Spain and France discovered the object using the CSIRO's ASKAP radio telescope in Western Australia. Follow-up observations were with the South African Radio Astronomy Observatory's MeerKAT telescope.

Mr Wang's PhD supervisor is Professor Tara Murphy, also from the Sydney Institute for Astronomy and the School of Physics.

Professor Murphy said: "We have been surveying the sky with ASKAP to find unusual new objects with a project known as Variables and Slow Transients (VAST), throughout 2020 and 2021.

"Looking towards the center of the Galaxy, we found ASKAP J173608.2-321635, named after its coordinates. This object was unique in that it started out invisible, became bright, faded away and then reappeared. This behavior was extraordinary."

After detecting six radio signals from the source over nine months in 2020, the astronomers tried to find the object in visual light. They found nothing.

They turned to the Parkes radio telescope and again failed to detect the source.

Professor Murphy said: "We then tried the more sensitive MeerKAT radio telescope in South Africa. Because the signal was intermittent, we observed it for 15 minutes every few weeks, hoping that we would see it again.

"Luckily, the signal returned, but we found that the behavior of the source was dramatically different -- the source disappeared in a single day, even though it had lasted for weeks in our previous ASKAP observations."

However, this further discovery did not reveal much more about the secrets of this transient radio source.

Mr. Wang's co-supervisor, Professor David Kaplan from the University of Wisconsin-Milwaukee, said: "The information we do have has some parallels with another emerging class of mysterious objects known as Galactic Center Radio Transients, including one dubbed the 'cosmic burper'.

"While our new object, ASKAP J173608.2-321635, does share some properties with GCRTs there are also differences. And we don't really understand those sources, anyway, so this adds to the mystery."

The scientists plan to keep a close eye on the object to look for more clues as to what it might be.

"Within the next decade, the transcontinental Square Kilometer Array (SKA) radio telescope will come online. It will be able to make sensitive maps of the sky every day," Professor Murphy said. "We expect the power of this telescope will help us solve mysteries such as this latest discovery, but it will also open vast new swathes of the cosmos to exploration in the radio spectrum."

Video showing an artist's impression of signals from space is available at: https://www.youtube.com/watch?v=J_eGd9Ps9fE&t=5s

               https://www.sciencedaily.com/releases/2021/10/211012080039.htm

Thursday, October 21, 2021

Origin of Domestic Horses Finally Established

From:  Centre National de la Recherche Scientifique [CNRS] in France, October 20, 2021 --

  • The modern horse was domesticated around 2200 BCE in the northern Caucasus.
  • In the centuries that followed it spread throughout Asia and Europe.
  • To achieve this result, an international team of 162 scientists collected, sequenced and compared 273 genomes from ancient horses scattered across Eurasia.

Horses were first domesticated in the Pontic-Caspian steppes, northern Caucasus, before conquering the rest of Eurasia within a few centuries. These are the results of a study led by paleogeneticist Ludovic Orlando, CNRS, who headed an international team including l’Université Toulouse III - Paul Sabatier, the CEA and l’Université d’Évry. Answering a decades-old enigma, the study is published in Nature on 20 October 2021.

By whom and where were modern horses first domesticated? When did they conquer the rest of the world? And how did they supplant the myriad of other types of horses that existed at that time? This long-standing archaeological mystery finally comes to an end thanks to a team of 162 scientists specialising in archaeology, palaeogenetics and linguistics.

A few years ago, Ludovic Orlando's team looked at the site of Botai, Central Asia, which had provided the oldest archaeological evidence of domestic horses. The DNA results, however, told a different story: these 5,500-year-old horses were not the ancestors of modern domestic horses. Besides the steppes of Central Asia, all other presumed foci of domestication, such as Anatolia, Siberia and the Iberian Peninsula, had also been ruled out. The scientific team, therefore, decided to extend their study to the whole of Eurasia by analysing the genomes of 273 horses that lived between 50,000 and 200 years BC. This information was sequenced at the Centre for Anthropobiology and Genomics of Toulouse (CNRS/Université Toulouse III - Paul Sabatier) and Genoscope (CNRS/CEA/Université d’Évry) before being compared with the genomes of modern domestic horses.

This strategy paid off: although Eurasia was once populated by genetically distinct horse populations, a dramatic change had occurred between 2000 and 2200 BC. A genetic profile, previously confined to the Pontic steppes (North Caucasus), began to spread beyond its native region, replacing all the wild horse populations from the Atlantic to Mongolia within a few centuries.

But how can this rapid population growth be explained? Interestingly, scientists found two striking differences between the genome of this horse and those of the populations it replaced: one is linked to a more docile behaviour and the second indicates a stronger backbone. The researchers suggest that these characteristics ensured the animals’ success at a time when horse travel was becoming “global”.

The study also reveals that the horse spread throughout Asia at the same time as spoke-wheeled chariots and Indo-Iranian languages. However, the migrations of Indo-European populations, from the steppes to Europe during the third millennium BC, could not have been based on the horse, as its domestication and diffusion came later. This demonstrates the importance of incorporating the history of animals when studying human migrations and encounters between cultures.

This study was directed by the Centre for Anthropobiology and Genomics of Toulouse (CNRS/ Université Toulouse III – Paul Sabatier) with help from Genoscope (CNRS/CEA/Université d’Évry). The French laboratories Archéologies et sciences de l'Antiquité (CNRS/Université Paris 1 Panthéon Sorbonne/Université Paris Nanterre/Ministère de la Culture), De la Préhistoire à l'actuel : culture, environnement et anthropologie (CNRS/Université de Bordeaux/Ministère de la Culture) and Archéozoologie, archéobotanique : sociétés, pratiques et environnements (CNRS/MNHN) also contributed, as did 114 other research institutions throughout the world. The study was primarily funded by the European Research Council (Pegasus project) and France Genomique (Bucéphale project).

                    https://www.cnrs.fr/en/origin-domestic-horses-finally-established

Wednesday, October 20, 2021

A New State of Matter – Electron Quadruplets

From: KTH Royal Institute of Technology (in Sweden)

October 18, 2021 -- The central principle of superconductivity is that electrons form pairs. But can they also condense into foursomes? Recent findings have suggested they can, and a physicist at KTH Royal Institute of Technology today published the first experimental evidence of this quadrupling effect and the mechanism by which this state of matter occurs.

Reporting today in Nature Physics, Professor Egor Babaev and collaborators presented evidence of fermion quadrupling in a series of experimental measurements on the iron-based material, Ba1−xKxFe2As2. The results follow nearly 20 years after Babaev first predicted this kind of phenomenon, and eight years after he published a paper predicting that it could occur in the material.  

The pairing of electrons enables the quantum state of superconductivity, a zero-resistance state of conductivity which is used in MRI scanners and quantum computing. It occurs within a material as a result of two electrons bonding rather than repelling each other, as they would in a vacuum. The phenomenon was first described in a theory by Leon Cooper, John Bardeen and John Schrieffer, whose work was awarded the Nobel Prize in 1972.

So-called Cooper pairs are basically “opposites that attract”. Normally two electrons, which are negatively-charged subatomic particles, would strongly repel each other. But at low temperatures in a crystal they become loosely bound in pairs, giving rise to a robust long-range order. Currents of electron pairs no longer scatter from defects and obstacles and a conductor can lose all electrical resistance, becoming a new state of matter: a superconductor. 

Only in recent years has the theoretical idea of four-fermion condensates become broadly accepted. 

For a fermion quadrupling state to occur there has to be something that prevents condensation of pairs and prevents their flow without resistance, while allowing condensation of four-electron composites, Babaev says.

The Bardeen-Cooper-Schrieffer theory didn’t allow for such behavior, so when Babaev’s experimental collaborator at Technische Universität Dresden, Vadim Grinenko, found in 2018 the first signs of a fermion quadrupling condensate, it challenged years of prevalent scientific agreement.

What followed was three years of experimentation and investigation at labs at multiple institutions in order to validate the finding.

Babaev says that key among the observations made is that fermionic quadruple condensates spontaneously break time-reversal symmetry. In physics time-reversal symmetry is a mathematical operation of replacing the expression for time with its negative in formulas or equations so that they describe an event in which time runs backward or all the motions are reversed.

If one inverts time direction, the fundamental laws of physics still hold. That also holds for typical superconductors: if the arrow of time is reversed, a typical superconductor would still be the same superconducting state.

“However, in the case of a four-fermion condensate that we report, the time reversal puts it in a different state,” he says.

“It will probably take many years of research to fully understand this state," he says. "The experiments open up a number of new questions, revealing a number of other unusual properties associated with its reaction to thermal gradients, magnetic fields and ultrasound that still have to be better understood.”

Contributing to the research were scientists from the following institutions:  Institute for Solid State and Materials Physics, TU Dresden, Germany; Leibniz Institute for Solid State and Materials Research, Dresden; Stockholm University; Bergische Universität Wuppertal, Germany; Dresden High Magnetic Field Laboratory (HLD-EMFL); Würzburg-Dresden Cluster of Excellence ct.qmat, Germany; Helmholtz-Zentrum, Germany; National Institute of Advanced Industrial Science and Technology (AIST), Japan;  Institut Denis Poisson, France.

                             https://www.eurekalert.org/news-releases/931832