Saturday, April 30, 2022

Newly Found Bacteria Stick to Plastic in the Deep Sea

Scientists have found new types of plastic-loving bacteria in the deep sea whose habit of sticking to plastic may enable them to 'hitchhike' across the ocean.

From:  Newcastle University

April 29, 2022 -- The team showed for the first time that these deep-sea, plastic-loving bacteria make up only 1% of the total bacterial community. Reporting their findings in the journal Environmental Pollution, the team found that these bacteria stick only to plastic and not to a non-plastic stone control.

The research highlights these bacteria may be able to 'hitchhike' across the deep sea by attaching to plastic, enhancing microbial connectivity across seemingly isolated environments.

To uncover these mysteries of the deep-sea 'plastisphere', the team used a deep-sea 'lander' in the North-East Atlantic to deliberately sink two types of plastic, polyurethane and polystyrene, to a depth of 1,800 m and then recover the material, revealing a group of plastic-loving bacteria. This method helps address how plastics, and consequently the 'plastisphere' (the microbial community attached to plastic), are sampled in the environment so that results are consistent.

The scientists observed a mix of diverse, extreme-living bacteria, including Calorithrix, which is also found in deep-sea hydrothermal vent systems, and Spirosoma, which has been isolated from Arctic permafrost. Other bacteria included the Marine Methylotrophic Group 3 -- a group of bacteria isolated from deep-sea methane seeps -- and Aliivibrio, a pathogen that has negatively affected the fish farming industry, highlighting a growing concern about the presence of plastic in the ocean.

In their most recent work, they have also found Halomonas titanicae, a strain originally isolated from the wreck of RMS Titanic. While the rust-eating microbe was first found on the shipwreck, the researchers have now shown that it also readily sticks to plastic and is capable of degrading low-crystallinity plastic.

The research was led by Max Kelly, a PhD student at Newcastle University's School of Natural and Environmental Sciences.

He said: "The deep sea is the largest ecosystem on earth and likely a final sink for the vast majority of plastic that enters the marine environment, but it is a challenging place to study. Combining deep-sea experts, engineers, and marine microbiologists, our team is helping to elucidate the bacterial community that can to stick to plastic to reveal the final fate of deep-sea plastic."

Microplastics (fragments with a diameter smaller than 5mm) make up 90% of the plastic debris found at the ocean surface, and the amount of plastic entering our ocean is significantly larger than estimates of floating plastic on the ocean's surface. Although the plastic-loving bacteria found in this study represent a small fraction of the community colonising plastic, they highlight the emerging ecological impacts of plastic pollution in the environment.

          https://www.sciencedaily.com/releases/2022/04/220429145043.htm

 

Friday, April 29, 2022

Characteristics of a Longevity Diet

Research in animals and humans to identify how nutrition affects aging and healthy lifespan.

From:  Leonard Davis School of Gerontology at the University of Southern California

By Beth Newcomb

April 28, 2022 -- Examining a range of nutrition research from studies in laboratory animals to epidemiological research in human populations provides a clearer picture of the best diet for a longer, healthier life, said USC Leonard Davis School of Gerontology professor Valter Longo.

In an article that includes a literature review published April 28 in Cell, Longo and coauthor Rozalyn Anderson of the University of Wisconsin describe the “longevity diet,” a multi-pillar approach based on studies of various aspects of diet, from food composition and calorie intake to the length and frequency of fasting periods.

“We explored the link between nutrients, fasting, genes and longevity in short-lived species, and connected these links to clinical and epidemiological studies in primates and humans – including centenarians,” Longo said. “By adopting an approach based on over a century of research, we can begin to define a longevity diet that represents a solid foundation for nutritional recommendations and for future research.”

What—and when—to eat for longevity

Longo and Anderson reviewed hundreds of studies on nutrition, diseases and longevity in laboratory animals and humans and combined them with their own studies on nutrients and aging. The analysis included popular diets such as the restriction of total calories, the high-fat and low-carbohydrate ketogenic diet, vegetarian and vegan diets, and the Mediterranean diet.

The article also included a review of different forms of fasting, including a short-term diet that mimics the body’s fasting response, intermittent fasting (frequent and short-term) and periodic fasting (two or more days of fasting or fasting-mimicking diets more than twice a month). In addition to examining lifespan data from epidemiological studies, the team linked these studies to specific dietary factors affecting several longevity-regulating genetic pathways shared by animals and humans that also affect markers for disease risk. These include levels of insulin, C-reactive protein, insulin-like growth factor 1, and cholesterol.

The authors report that the key characteristics of the optimal diet appear to be moderate to high carbohydrate intake from non-refined sources, low but sufficient protein from largely plant-based sources, and enough plant-based fats to provide about 30 percent of energy needs. Ideally, the day’s meals would all occur within a window of 11-12 hours, allowing for a daily period of fasting. Additionally, a 5-day cycle of a fasting or fasting-mimicking diet every 3-4 months may also help reduce insulin resistance, blood pressure and other risk factors for individuals with increased disease risks.

Longo described what a longevity diet could look like in real life: “Lots of legumes, whole grains, and vegetables; some fish; no red meat or processed meat and very low white meat; low sugar and refined grains; good levels of nuts and olive oil, and some dark chocolate.”

What’s next for the longevity diet

The next step in researching the longevity diet will be a 500-person study taking place in southern Italy, Longo said. The longevity diet bears both similarities and differences to the Mediterranean-style diets often seen in super-aging “Blue Zones,” including Sardinia, Italy; Okinawa, Japan; and Loma Linda, California. Common diets in these communities known for a high number of people age 100 or older are often largely plant-based or pescatarian and are relatively low in protein. But the longevity diet represents an evolution of these “centenarian diets,” Longo explained, citing the recommendation for limiting food consumption to 12 hours per day and having several short fasting periods every year.

In addition to the general characteristics, the longevity diet should be adapted to individuals based on sex, age, health status, and genetics, Longo noted. For instance, people over age 65 may need to increase protein in order to counter frailty and loss of lean body mass. Longo’s own studies illustrated that higher protein amounts were better for people over 65 but not optimal for those under 65, he said.

For people who are looking to optimize their diet for longevity, he said it’s important to work with a healthcare provider who specializes in nutrition to personalize a plan focusing on smaller changes that can be adopted for life, rather than big changes that cause a harmful loss of body fat and lean mass, followed by a regain of the lost fat once the person abandons the very restrictive diet.

“The longevity diet is not a dietary restriction intended to only cause weight loss but a lifestyle focused on slowing aging, which can complement standard healthcare and, taken as a preventative measure, will aid in avoiding morbidity and sustaining health into advanced age,” Longo said.

              https://gero.usc.edu/2022/04/28/valter-longo-longevity-diet/

 

Thursday, April 28, 2022

One-Way Superconductor Discovered

Assumed to be impossible for over a century

From:  Delft University of Technology (TU Delft) in the Netherlands

April 27, 2022 -- Associate professor Mazhar Ali and his research group at TU Delft have discovered one-way superconductivity without magnetic fields, something that was thought to be impossible ever since its discovery in 1911—up until now. The discovery, published in Nature, makes use of 2D quantum materials and paves the way toward superconducting computing. Superconductors can make electronics hundreds of times faster, all with zero energy loss. Ali: "If the 20th century was the century of semiconductors, the 21st can become the century of the superconductor."

During the 20th century, many scientists, including Nobel Prize winners, puzzled over the nature of superconductivity, which was discovered by Dutch physicist Kamerlingh Onnes in 1911. In superconductors, a current goes through a wire without any resistance, which means inhibiting this current or even blocking it is hardly possible—let alone getting the current to flow only one way and not the other. That Ali's group managed to make superconductivity one-directional—necessary for computing—is remarkable: one can compare it to inventing a special type of ice which gives you zero friction when skating one way, but insurmountable friction the other way.

Superconductor: Super-fast, super-green

The advantages of applying superconductors to electronics are twofold. Superconductors can make electronics hundreds of times faster, and implementing superconductors into our daily lives would make IT much greener: if you were to spin a superconducting wire from here to the moon, it would transport the energy without any loss. For instance, the use of superconductors instead of regular semiconductors might save up to 10% of all Western energy reserves, according to NWO (the Dutch Research Council).

The (im)possibility of applying superconducting

In the 20th century and beyond, no one could tackle the barrier of making superconducting electrons go in just one direction, a fundamental property needed for computing and other modern electronics (consider, for example, diodes that conduct one way as well). In normal conduction the electrons fly around as separate particles; in superconductors they move in pairs, without any loss of electrical energy. In the '70s, scientists at IBM tried out the idea of superconducting computing but had to stop their efforts: in its papers on the subject, IBM noted that without non-reciprocal superconductivity, a computer running on superconductors is impossible.

Interview with corresponding author Mazhar Ali

Q: Why, when one-way direction works with normal semi-conduction, has one-way superconductivity never worked before?

Electrical conduction in semiconductors, like Si, can be one-way because of a fixed internal electric dipole, which gives them a net built-in potential. The textbook example is the famous pn junction, where we join two semiconductors: one has extra electrons (-) and the other has extra holes (+). The separation of charge creates a net built-in potential that an electron flying through the system will feel. This breaks symmetry and can result in one-way properties, because forward and backward are no longer the same: there is a difference between moving with the dipole and moving against it, similar to swimming with a river's current versus swimming upstream.

Superconductors never had an analog of this one-directional idea without a magnetic field, since they are more closely related to metals (i.e. conductors, as the name says) than to semiconductors; they always conduct in both directions and don't have any built-in potential. Similarly, Josephson Junctions (JJs), which are sandwiches of two superconductors with a non-superconducting, classical barrier material in between, haven't had any particular symmetry-breaking mechanism that resulted in a difference between forward and backward.

Q: How did you manage to do what first seemed impossible?

It was really the result of one of my group's fundamental research directions. In what we call Quantum Material Josephson Junctions (QMJJs), we replace the classical barrier material in JJs with a quantum material barrier, where the quantum material's intrinsic properties can modulate the coupling between the two superconductors in novel ways. The Josephson Diode was an example of this: we used the quantum material Nb3Br8, a 2D material like graphene that has been theorized to host a net electric dipole, as our quantum material barrier of choice and placed it between two superconductors.

We were able to peel off just a couple of atomic layers of this Nb3Br8 and make a very, very thin sandwich—just a few atomic layers thick—which was needed for making the Josephson diode and was not possible with normal 3D materials. Nb3Br8 is part of a group of new quantum materials being developed by our collaborators, Professor Tyrel McQueen and his group at Johns Hopkins University in the U.S., and was a key piece in us realizing the Josephson diode for the first time.

Q: What does this discovery mean in terms of impact and applications?

Many technologies are based on old versions of JJ superconductors, for example MRI technology. Also, quantum computing today is based on Josephson Junctions. Technology which was previously only possible using semi-conductors can now potentially be made with superconductors using this building block. This includes faster computers, as in computers with up to terahertz speed, which is 300 to 400 times faster than the computers we are now using. This will influence all sorts of societal and technological applications. If the 20th century was the century of semi-conductors, the 21st can become the century of the superconductor.

The first research direction we have to tackle for commercial application is raising the operating temperature. Here we used a very simple superconductor that limited the operating temperature. Now we want to work with the known, so-called high-Tc superconductors and see whether we can operate Josephson diodes at temperatures above 77 K, since this will allow for liquid nitrogen cooling. The second thing to tackle is scaling of production. While it's great that we proved this works in nanodevices, we only made a handful. The next step will be to investigate how to scale production to millions of Josephson diodes on a chip.

Q: How sure are you of your case?

There are several steps which all scientists need to take to maintain scientific rigor. The first is to make sure their results are repeatable. In this case we made many devices, from scratch, with different batches of materials, and found the same properties every time, even when measured on different machines in different countries by different people. This told us that the Josephson diode result was coming from our combination of materials and not some spurious result of dirt, geometry, machine or user error or interpretation.

We also carried out smoking-gun experiments that dramatically narrow the possibilities for interpretation. In this case, to be sure that we had a superconducting diode effect, we actually tried switching the diode: we applied the same magnitude of current in both forward and reverse directions and showed that we measured no resistance (superconductivity) in one direction and real resistance (normal conductivity) in the other direction.

We also measured this effect while applying magnetic fields of different magnitudes and showed that the effect was clearly present at 0 applied field and gets killed by an applied field. This is also a smoking gun for our claim of having a superconducting diode effect at zero-applied field, a very important point for technological applications. This is because magnetic fields at the nanometer scale are very difficult to control and limit, so for practical applications, it is generally desired to operate without requiring local magnetic fields.

Q: Is it realistic for ordinary computers (or even the supercomputers of KNMI and IBM) to make use of superconducting?

Yes, it is! Not for people at home, but for server farms or for supercomputers, it would be smart to implement this. Centralized computation is really how the world works nowadays. Any and all intensive computation is done at centralized facilities where localization adds huge benefits in terms of power management, heat management, etc. The existing infrastructure could be adapted without too much cost to work with Josephson-diode-based electronics. There is a very real chance, if the challenges discussed in the other question are overcome, that this will revolutionize centralized computing and supercomputing.

More information: Mazhar Ali, The field-free Josephson diode in a van der Waals heterostructure, Nature (2022). DOI: 10.1038/s41586-022-04504-8, www.nature.com/articles/s41586-022-04504-8

https://phys.org/news/2022-04-discovery-one-way-superconductor-thought-impossible.html

 

Wednesday, April 27, 2022

Researchers Develop a Paper-thin Loudspeaker

The flexible, thin-film device has the potential to make any surface into a low-power, high-quality audio source.

By Adam Zewe | MIT News Office

April 26, 2022 -- MIT engineers have developed a paper-thin loudspeaker that can turn any surface into an active audio source.

This thin-film loudspeaker produces sound with minimal distortion while using a fraction of the energy required by a traditional loudspeaker. The hand-sized loudspeaker the team demonstrated, which weighs about as much as a dime, can generate high-quality sound no matter what surface the film is bonded to.

To achieve these properties, the researchers pioneered a deceptively simple fabrication technique, which requires only three basic steps and can be scaled up to produce ultrathin loudspeakers large enough to cover the inside of an automobile or to wallpaper a room.

Used this way, the thin-film loudspeaker could provide active noise cancellation in clamorous environments, such as an airplane cockpit, by generating sound of the same amplitude but opposite phase; the two sounds cancel each other out. The flexible device could also be used for immersive entertainment, perhaps by providing three-dimensional audio in a theater or theme park ride. And because it is lightweight and requires such a small amount of power to operate, the device is well-suited for applications on smart devices where battery life is limited.
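As a rough illustration of the phase-inversion idea described above (a sketch of the general principle, not code from the MIT work), the short Python snippet below sums a tone with a sign-flipped copy of itself; the 440 Hz frequency and 48 kHz sample rate are arbitrary assumptions.

import numpy as np

fs = 48_000                          # sample rate in Hz (assumed value)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
noise = 0.5 * np.sin(2 * np.pi * 440 * t)   # unwanted 440 Hz tone
anti_noise = -noise                         # same amplitude, opposite phase
residual = noise + anti_noise               # what a listener would hear

print(f"peak level before cancellation: {np.max(np.abs(noise)):.3f}")
print(f"peak level after cancellation:  {np.max(np.abs(residual)):.3e}")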

“It feels remarkable to take what looks like a slender sheet of paper, attach two clips to it, plug it into the headphone port of your computer, and start hearing sounds emanating from it. It can be used anywhere. One just needs a smidgeon of electrical power to run it,” says Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology, leader of the Organic and Nanostructured Electronics Laboratory (ONE Lab), director of MIT.nano, and senior author of the paper.

Bulović wrote the paper with lead author Jinchi Han, a ONE Lab postdoc, and co-senior author Jeffrey Lang, the Vitesse Professor of Electrical Engineering. The research is published today in IEEE Transactions on Industrial Electronics.

A new approach

A typical loudspeaker found in headphones or an audio system passes an electric current through a coil of wire, generating a magnetic field that moves a speaker membrane; the membrane moves the air above it, producing the sound we hear. By contrast, the new loudspeaker simplifies this design by using a thin film of shaped piezoelectric material that moves when a voltage is applied across it, moving the air above it and generating sound.

Most thin-film loudspeakers are designed to be freestanding because the film must bend freely to produce sound. Mounting these loudspeakers onto a surface would impede the vibration and hamper their ability to generate sound.

To overcome this problem, the MIT team rethought the design of a thin-film loudspeaker. Rather than having the entire material vibrate, their design relies on tiny domes on a thin layer of piezoelectric material which each vibrate individually. These domes, each only a few hair-widths across, are surrounded by spacer layers on the top and bottom of the film that protect them from the mounting surface while still enabling them to vibrate freely. The same spacer layers protect the domes from abrasion and impact during day-to-day handling, enhancing the loudspeaker’s durability.

To build the loudspeaker, the researchers used a laser to cut tiny holes into a thin sheet of PET, which is a type of lightweight plastic. They laminated the underside of that perforated PET layer with a very thin film (as thin as 8 microns) of piezoelectric material, called PVDF. Then they applied vacuum above the bonded sheets and a heat source, at 80 degrees Celsius, underneath them.

Because the PVDF layer is so thin, the pressure difference created by the vacuum and heat source caused it to bulge. The PVDF can’t force its way through the PET layer, so tiny domes protrude in the areas that aren’t blocked by PET. These protrusions self-align with the holes in the PET layer. The researchers then laminated the other side of the PVDF with another PET layer to act as a spacer between the domes and the bonding surface.

“This is a very simple, straightforward process. It would allow us to produce these loudspeakers in a high-throughput fashion if we integrate it with a roll-to-roll process in the future. That means it could be fabricated in large amounts, like wallpaper to cover walls, cars, or aircraft interiors,” Han says.

High quality, low power

The domes are 15 microns in height, about one-sixth the thickness of a human hair, and they only move up and down about half a micron when they vibrate. Each dome is a single sound-generation unit, so it takes thousands of these tiny domes vibrating together to produce audible sound.

An added benefit of the team’s simple fabrication process is its tunability — the researchers can change the size of the holes in the PET to control the size of the domes. Domes with a larger radius displace more air and produce more sound, but larger domes also have lower resonance frequency. Resonance frequency is the frequency at which the device operates most efficiently, and lower resonance frequency leads to audio distortion.

Once the researchers perfected the fabrication technique, they tested several different dome sizes and piezoelectric layer thicknesses to arrive at an optimal combination.

They tested their thin-film loudspeaker by mounting it to a wall 30 centimeters from a microphone to measure the sound pressure level, recorded in decibels. When 25 volts of electricity were passed through the device at 1 kilohertz (a rate of 1,000 cycles per second), the speaker produced high-quality sound at conversational levels of 66 decibels. At 10 kilohertz, the sound pressure level increased to 86 decibels, about the same volume level as city traffic.
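For readers unfamiliar with the decibel scale, the following sketch applies the standard sound-pressure-level relation SPL = 20*log10(p / p_ref) with p_ref = 20 micropascals; it is not taken from the paper, but it shows that the jump from 66 dB to 86 dB corresponds to roughly a tenfold increase in sound pressure.

P_REF = 20e-6  # reference pressure for dB SPL in air, in pascals

def spl_to_pressure(spl_db):
    """Convert a sound pressure level in dB SPL to RMS pressure in pascals."""
    return P_REF * 10 ** (spl_db / 20)

p_66 = spl_to_pressure(66.0)   # reported level at 1 kHz
p_86 = spl_to_pressure(86.0)   # reported level at 10 kHz
print(f"66 dB SPL is about {p_66 * 1e3:.1f} mPa; 86 dB SPL is about {p_86 * 1e3:.1f} mPa")
print(f"pressure ratio: {p_86 / p_66:.1f}x (every 20 dB is a factor of 10 in pressure)")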

The energy-efficient device only requires about 100 milliwatts of power per square meter of speaker area. By contrast, an average home speaker might consume more than 1 watt of power to generate similar sound pressure at a comparable distance.

Because the tiny domes are vibrating, rather than the entire film, the loudspeaker has a high enough resonance frequency that it can be used effectively for ultrasound applications, like imaging, Han explains. Ultrasound imaging uses very high frequency sound waves to produce images, and higher frequencies yield better image resolution.  
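As a back-of-the-envelope illustration of why higher frequencies yield finer resolution (my own numbers, not the article's), the sketch below uses the relation wavelength = speed of sound / frequency with an assumed soft-tissue sound speed of about 1540 m/s; achievable resolution is limited to roughly the wavelength scale.

c_tissue = 1540.0   # assumed speed of sound in soft tissue, m/s

for f_mhz in (1, 5, 15):
    wavelength_mm = c_tissue / (f_mhz * 1e6) * 1e3   # wavelength in millimetres
    print(f"{f_mhz:>2} MHz -> wavelength ~ {wavelength_mm:.2f} mm")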

The device could also use ultrasound to detect where a human is standing in a room, just like bats do using echolocation, and then shape the sound waves to follow the person as they move, Bulović says. If the vibrating domes of the thin film are covered with a reflective surface, they could be used to create patterns of light for future display technologies. If immersed in a liquid, the vibrating membranes could provide a novel method of stirring chemicals, enabling chemical processing techniques that could use less energy than large batch processing methods.

“We have the ability to precisely generate mechanical motion of air by activating a physical surface that is scalable. The options of how to use this technology are limitless,” Bulović says.

“I think this is a very creative approach to making this class of ultra-thin speakers,” says Ioannis (John) Kymissis, Kenneth Brayer Professor of Electrical Engineering and Chair of the Department of Electrical Engineering at Columbia University, who was not involved with this research. “The strategy of doming the film stack using photolithographically patterned templates is quite unique and likely to lead to a range of new applications in speakers and microphones.”

This work is funded, in part, by a research grant from the Ford Motor Company and a gift from Lendlease, Inc.

            https://news.mit.edu/2022/low-power-thin-loudspeaker-0426

  

Tuesday, April 26, 2022

New Era of Mitochondrial Genome Editing

Scientists successfully achieve A to G base conversion, the final missing piece of the puzzle in gene-editing technology

From:  Institute for Basic Science [in Korea] (Research News)

January 15, 2022 -- Researchers from the Center for Genome Engineering within the Institute for Basic Science developed a new gene-editing platform called transcription activator-like effector-linked deaminases, or TALED. TALEDs are base editors capable of performing A-to-G base conversion in mitochondria. This discovery was a culmination of a decades-long journey to cure human genetic diseases, and TALED can be considered to be the final missing piece of the puzzle in gene-editing technology.

From the identification of the first restriction enzyme in 1968, the invention of polymerase chain reaction (PCR) in 1985, and the demonstration of CRISPR-mediated genome editing in 2013, each new breakthrough discovery in biotechnology further improved our ability to manipulate DNA, the blueprint of life. In particular, the recent development of the CRISPR-Cas system, or “genetic scissors”, has allowed for comprehensive genome editing of living cells. This opened new possibilities for treating previously incurable genetic diseases by editing the mutations out of our genome.

While gene editing has been largely successful in the nuclear genome of cells, scientists have so far been unable to edit mitochondria, which have their own genome. Mitochondria, the so-called “powerhouse of the cell”, are tiny organelles that serve as the cell’s energy-generating factories. Because mitochondria are central to energy metabolism, mutations in their genes cause serious genetic diseases related to energy metabolism.

Director KIM Jin-Soo of the Center for Genome Engineering explained, “There are some extremely nasty hereditary diseases arising due to defects in mitochondrial DNA. For example, Leber hereditary optic neuropathy (LHON), which causes sudden blindness in both eyes, is caused by a simple single point mutation in mitochondrial DNA.” Another mitochondrial gene-related disease includes mitochondrial encephalomyopathy with lactic acidosis and stroke-like episodes (MELAS), which slowly destroys the patient’s brain. Some studies even suggest abnormalities in mitochondrial DNA may also be responsible for degenerative diseases such as Alzheimer’s disease and muscular dystrophy.

The mitochondrial genome is inherited from the maternal line. There are 90 known disease-causing point mutations in mitochondrial DNA, which together affect at least 1 in 5,000 individuals. Many existing genome-editing tools cannot be used because of limitations in how they can be delivered to mitochondria. For example, the CRISPR-Cas platform is not applicable for editing these mutations in mitochondria, because the guide RNA is unable to enter the organelle.

“Another problem is that there is a dearth of animal models of these mitochondrial diseases. This is because it is currently not possible to engineer mitochondrial mutations necessary to create animal models,” Director Kim added. “Lack of animal models makes it very difficult to develop and test therapeutics for these diseases.”

As such, reliable technology to edit mitochondrial DNA is one of the last frontiers of genome engineering that must be explored in order to conquer all known genetic diseases, and the world's most elite scientists have endeavored for years to make it a reality.

In 2020, researchers led by David R. LIU of the Broad Institute of Harvard and MIT created a new base editor named DddA-derived cytosine base editors (DdCBEs), which can perform C-to-T conversion in mitochondrial DNA. This was made possible by a gene-editing approach called base editing, which converts a single nucleotide base into another without breaking the DNA. However, this technique also had its limitations. Not only is it restricted to C-to-T conversion, it is mostly limited to the TC motif, making it effectively a TC-to-TT converter. This means it can correct only 9 out of 90 (= 10%) confirmed pathogenic mitochondrial point mutations. For the longest time, A-to-G conversion of mitochondrial DNA was thought to be impossible.

First author CHO Sung-Ik said, “We began to think of ways to overcome these limitations. As a result, we were able to create a novel gene-editing platform called TALED that can achieve A-to-G conversion. Our new base editor dramatically expanded the scope of mitochondrial genome editing. This can make a big contribution not only to making a disease model but also to developing a treatment.” Of note, being able to perform A-to-G conversions in human mtDNA alone could correct 39 (= 43%) out of the 90 known pathogenic mutations.

The researchers created TALED by fusing three different components. The first component is a transcription activator-like effector (TALE), which is capable of targeting a DNA sequence. The second component is TadA8e, an adenine deaminase for facilitating A-to-G conversion. The third component, DddAtox, is a cytosine deaminase that makes the DNA more accessible to TadA8e.

One interesting aspect of TALED is TadA8e’s ability to perform A-to-G editing in mitochondria, which possess double-stranded DNA (dsDNA). This is a mysterious phenomenon, as TadA8e is a protein known to be specific to single-stranded DNA. Director Kim said, “No one had thought of using TadA8e to perform base editing in mitochondria before, since it is supposed to be specific to single-stranded DNA. It was this outside-the-box thinking that really helped us invent TALED.”

The researchers theorized that DddAtox makes dsDNA accessible by transiently unwinding the double strand. This fleeting window allows TadA8e, a very fast-acting enzyme, to quickly make the necessary edits. In addition to tweaking the components of TALED, the researchers also developed versions capable of both A-to-G and C-to-T base editing simultaneously, as well as A-to-G base editing only.

The group demonstrated the new technology by creating a single cell-derived clone containing the desired mtDNA edits. TALEDs were found to be neither cytotoxic nor destabilizing to mtDNA, and there was no undesirable off-target editing in nuclear DNA and very few off-target effects in mtDNA. The researchers now aim to further improve TALEDs by increasing editing efficiency and specificity, eventually paving the way to correcting disease-causing mtDNA mutations in embryos, fetuses, newborns, or adult patients. The group is also developing TALEDs suitable for A-to-G base editing in chloroplast DNA, which encodes genes essential for photosynthesis in plants.

William I. Suh, science communicator at the Institute for Basic Science, remarked, “I believe the significance of this discovery is comparable to the invention of the blue LED, which was awarded a Nobel Prize in 2014. Just as the blue LED was the final piece of the puzzle that allowed us to have a highly energy-efficient source of white LED light, it is expected that TALED will usher in a new era of genome engineering.”

https://www.ibs.re.kr/cop/bbs/BBSMSTR_000000000738/selectBoardArticle.do?nttId=21220&pageIndex=1&searchCnd=&searchWrd=

  

Monday, April 25, 2022

Cancer Is Not as Heritable as Once Thought

Recent review highlights need to gain a broader scientific view of cancer to better prevent and treat it.

From:  University of Alberta

By Adrianna MacPherson

April 21, 2022 -- While cancer is a genetic disease, the genetic component is just one piece of the puzzle — and researchers need to consider environmental and metabolic factors as well, according to a research review by a leading expert at the University of Alberta.

Nearly all the theories about the causes of cancer that have emerged over the past several centuries can be sorted into three larger groups, said David Wishart, professor in the departments of biological sciences and computing science. The first is cancer as a genetic disease, focusing on the genome, or the set of genetic instructions that you are born with. The second is cancer as an environmental disease, focusing on the exposome, which includes everything your body is exposed to throughout your life. The third is cancer as a metabolic disease, focusing on the metabolome, all the chemical byproducts of the process of metabolism.

The metabolic perspective hasn’t had much research until now, but it’s gaining the interest of more scientists, who are beginning to understand the metabolome’s role in cancer.

The genome, exposome and metabolome operate together in a feedback loop as cancer develops and spreads.

According to the data, heritable cancers account for just five to 10 per cent of all cancers, Wishart said. The other 90 to 95 per cent are initiated by factors in the exposome, which in turn trigger genetic mutations.

“That’s an important thing to consider, because it says that cancer isn’t inevitable.”

The metabolome is critical to the process, as those genetically mutated cancer cells are sustained by the cancer-specific metabolome.

“Cancer is genetic, but often the mutation itself isn’t enough,” said Wishart. As cancer develops and spreads in the body, it creates its own environment and introduces certain metabolites. “It becomes a self-fuelled disease. And that’s where cancer as a metabolic disorder becomes really important.”

The multi-omics perspective, in which the genome, exposome and metabolome are all considered in unison when thinking about cancer, is showing promise for finding treatments and for overcoming the limitations of looking at only one of these factors. 

For example, Wishart explained, researchers who focus only on the genetic perspective are looking to address particular mutations. The problem is, there are around 1,000 genes that can become cancerous when mutated, and it typically takes at least two different mutations within these cells for cancer to grow. That means there are a million potential mutation pairs, and “it becomes hopeless” to narrow down the possibilities when seeking new treatments.
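A quick sketch of the combinatorics behind the "million potential mutation pairs" figure (my own illustration of the arithmetic, not from the review): with roughly 1,000 cancer-associated genes and at least two mutations required, counting ordered pairs gives 1,000 × 1,000 = 1,000,000, while counting unordered pairs of distinct genes gives about half that.

from math import comb

n_genes = 1_000                       # approximate figure quoted above
ordered_pairs = n_genes ** 2          # 1,000,000 -- matches the "million" figure
unordered_pairs = comb(n_genes, 2)    # 499,500 distinct gene pairs

print(f"ordered pairs:   {ordered_pairs:,}")
print(f"unordered pairs of distinct genes: {unordered_pairs:,}")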

But when considering cancer from the metabolic perspective, there are just four major metabolic types, said Wishart. Rather than trying to find a treatment plan for one specific mutation combination amongst a million, determining the patient’s cancer metabolic type can immediately guide doctors in deciding on the best treatment for their specific cancer.

“It really doesn't make a difference where the cancer is — it’s something you’ve got to get rid of. It’s how it thrives or grows that matters,” said Wishart. “It becomes a question of, ‘What’s the fuel that powers this engine?’”

Wishart cautioned that health-care providers still need a mix of therapeutics for cancer, and noted that a deeper understanding of the metabolome and its role in the cancer feedback loop is also critical to preventing cancer.

“If we understand the causes of cancer, then we can start highlighting the known causes, the lifestyle issues that introduce or increase our risk,” he said.

“From the prevention side, changing our metabolism through lifestyle adjustments will make a huge difference in the incidence of cancer.”

The research review was funded by Genome Canada, the Canadian Institutes of Health Research and the Canada Foundation for Innovation.

https://www.ualberta.ca/folio/2022/04/new-evidence-shows-cancer-is-not-as-heritable-as-once-thought.html

Sunday, April 24, 2022

The Imperial Wireless Chain Born a Century Ago

The Imperial Wireless Chain was a strategic international communications network of powerful long range radiotelegraphy stations, created by the British government to link the countries of the British Empire. The stations exchanged commercial and diplomatic text message traffic transmitted at high speed by Morse code using paper tape machines. Although the idea was conceived prior to World War I, the United Kingdom was the last of the world's great powers to implement an operational system.  The first link in the chain, between Leafield in Oxfordshire and Cairo, Egypt, eventually opened on 24 April 1922, with the final link, between Australia and Canada, opening on 16 June 1928.

The Initial Scheme

Guglielmo Marconi invented the first practical radio transmitters and receivers, and radio began to be used for practical ship-to-shore communication around 1900. His company, the Marconi Wireless Telegraph Company, dominated early radio. In the period leading up to World War I, long distance radiotelegraphy became a strategic defense technology, as it was realized that a nation without radio could be isolated by an enemy cutting its submarine telegraph cables, as indeed happened during the war. Starting around 1908, industrialized nations built global networks of powerful transoceanic wireless telegraphy stations to exchange Morse code telegram  traffic with their overseas colonies.

In 1910 the Colonial Office received a formal proposal from the Marconi Company to construct a series of wireless telegraphy stations to link the British Empire within three years.  While not then accepted, the Marconi proposal created serious interest in the concept.

A dilemma faced by Britain throughout the negotiations to establish the chain was that Britain owned the largest network of submarine telegraph cables.  The proposed stations would directly compete with cables for a fixed amount of transoceanic telegram traffic, reducing the revenue of the cable companies and possibly bankrupting them.

Parliament ruled out the creation of a private monopoly to provide the service and concluded that no government department was in a position to do so, and the Treasury were reluctant to fund the creation of a new department. Contracting the construction to a commercial "wireless company" was the favoured option, and a contract was signed with Marconi's Wireless Telegraph Company in March 1912. The government then found itself facing severe criticism and appointed a select committee to examine the topic.  After hearing evidence from the Admiralty, War Office, India Office, and representatives from South Africa, the committee unanimously concluded that a "chain of Imperial wireless stations" should be established as a matter of urgency.  An expert committee also advised that Marconi were the only company with technology that was proven to operate reliably over the distances required (in excess of 2,000 miles (3,200 km)) "if rapid installation and immediate and trustworthy communication be desired".

After further negotiations prompted by Treasury pressure, a modified contract was ratified by Parliament on 8 August 1913, with 221 Members of Parliament voting in favour, 140 against.  The course of these events was disrupted somewhat by the Marconi scandal, when it was alleged that highly placed members of the governing Liberal party had used their knowledge of the negotiations to indulge in insider trading in Marconi shares. The outbreak of World War I led to the suspension of the contract by the government.  Meanwhile Germany successfully constructed its own wireless chain before the war, at a cost equivalent to two million pounds sterling, and was able to use it to its advantage during the conflict.

After World War I

With the end of the war and the Dominions continuing to apply pressure on the government to provide an "Imperial wireless system", the House of Commons agreed in 1919 that £170,000 should be spent constructing the first two radio stations in the chain, in Oxfordshire (at Leafield) and Egypt (in Cairo), to be completed in early 1920 – although the link actually opened on 24 April 1922, two months after the UK declared Egypt independent.

Parliament's decision came shortly after legal action initiated by Marconi in June 1919, claiming £7,182,000 in damages from the British government for breach of their July 1912 contract, and in which they were awarded £590,000 by the court.  The government also commissioned the "Imperial Wireless Telegraphy Committee" chaired by Sir Henry Norman (the Norman Committee), which reported in 1920. The Norman Report recommended that transmitters should have a range of 2,000 miles, which required relay stations, and that Britain should be connected to Canada, Australia, South Africa, Egypt, India, East Africa, Singapore, and Hong Kong.  However, the report was not acted upon. While British politicians procrastinated, Marconi constructed stations for other nations, linking North and South America, as well as China and Japan, in 1922.  In January 1922 the British Chambers of Commerce added their voice to the demands for action, adopting a resolution urging the government to urgently resolve the matter, as did other organisations such as the Empire Press Union, which claimed that the Empire was suffering "incalculable loss" in its absence.

Under this pressure, after the 1922 General Election, the Conservative government commissioned the Empire Wireless Committee, chaired by Sir Robert Donald, to "consider and advise upon the policy to be adopted as regards an Imperial wireless service so as to protect and facilitate public interest." Its report was presented to the Postmaster-General on 23 February 1924. The committee's recommendations were similar to those of the Norman Committee – that any stations in Great Britain used to communicate with the Empire should be in the hands of the state, that they should be operated by the Post Office, and that eight high-power longwave stations should be used, as well as land-lines. The scheme's cost was estimated at £500,000. At the time the committee was unaware of Marconi's 1923 experiments with shortwave radio transmission, which offered a much cheaper alternative – although not a commercially proven one – to a high-power longwave transmission system.

Following the Donald Report and discussions with the Dominions, it was decided that the high-power Rugby longwave station (announced on 13 July 1922 by the previous government) would be completed since it used proven technology, in addition to which a number of shortwave "beam stations" would be built (so called because a directional antenna concentrated the radio transmission into a narrow directional beam). The beam stations would communicate with those Dominions that chose the new shortwave technology. Parliament finally approved an agreement between the Post Office and Marconi to build beam stations to communicate with Canada, South Africa, India and Australia, on 1 August 1924.

Commercial Impact

From when the Post Office began operating the "Post Office Beam" services through to 31 March 1929, it had earned gross receipts of £813,100 at a cost of £538,850, leaving a net surplus of £274,250.

Even before the final link became operational between Australia and Canada, it was apparent that the commercial success of the Wireless Chain was threatening the viability of the cable telegraphy companies. An "Imperial Wireless and Cable Conference" was therefore held in London in January 1928, with delegates from Great Britain, the self-governing Dominions, India, the Crown Colonies and Protectorates, to "examine the situation which arose as a result of the competition of the Imperial Beam Wireless Services with the cable services of various parts of the empire, to report upon it and to make recommendations with a view to a common policy being adopted by the various governments concerned."  It concluded that the cable companies would not be able to compete in an unrestricted market, but that the cable links remained of both commercial and strategic value. It therefore recommended that the cable and wireless interests of the Eastern Telegraph Company, the Eastern Extension, Australasia and China Telegraph Company, Western Telegraph Company and Marconi's Wireless Telegraph Company should be merged to form a single organisation holding a monopolistic position. The merged company would be overseen by an Imperial Advisory Committee, would purchase the government-owned cables in the Pacific, West Indies and Atlantic, and would also be given a lease on the beam stations for a period of 25 years, for the sum of £250,000 per year.

The conference's recommendations were incorporated into the Imperial Telegraphs Act 1929, leading to the creation of two new companies on 8 April 1929: an operating company, Imperial and International Communications, in turn owned by a holding company named Cable & Wireless Limited. In 1934 Imperial and International Communications was renamed Cable & Wireless Limited, with Cable and Wireless Limited being renamed Cable and Wireless (Holding) Limited. From the beginning of April 1928 the beam services were operated by the Post Office as agent for Imperial and International Communications Limited.

Transfers of Ownership

The 1930s saw the arrival of the Great Depression, as well as competition from the International Telephone and Telegraph Corporation and affordable airmail.  Due to such factors Cable and Wireless were never able to earn the revenue which had been forecast, resulting in low dividends and an inability to reduce the rates charged to customers as much as had been expected.  To ease the financial pressure, the British Government finally decided to transfer the beam stations to Cable and Wireless, in exchange for 2,600,000 of the 30,000,000 shares in the company, under the provisions of the Imperial Telegraphs Act 1938.  The ownership of the beam stations was reversed in 1947, when the Labour Government nationalised Cable and Wireless, integrating its UK assets with those of the Post Office.  By this stage, however, three of the original stations had been closed, after the service was centralised during 1939–1940 at Dorchester and Somerton.  The longwave Rugby radio station continued to remain under Post Office ownership throughout.

          https://en.wikipedia.org/wiki/Imperial_Wireless_Chain

 


Saturday, April 23, 2022

Interacting Brain Waves Are Key to How We Process Information

Neural networks that look and act like computer circuits are only part of the picture

From:  Salk Institute

April 22, 2022 -- For years, the brain has been thought of as a biological computer that processes information through traditional circuits, whereby data zips straight from one cell to another. While that model is still accurate, a new study led by Salk Professor Thomas Albright and Staff Scientist Sergei Gepshtein shows that there's also a second, very different way that the brain parses information: through the interactions of waves of neural activity. The findings, published in Science Advances on April 22, 2022, help researchers better understand how the brain processes information.

"We now have a new understanding of how the computational machinery of the brain is working," says Albright, the Conrad T. Prebys Chair in Vision Research and director of Salk's Vision Center Laboratory. "The model helps explain how the brain's underlying state can change, affecting people's attention, focus, or ability to process information."

Researchers have long known that waves of electrical activity exist in the brain, both during sleep and wakefulness. But the underlying theories as to how the brain processes information -- particularly sensory information, like the sight of a light or the sound of a bell -- have revolved around information being detected by specialized brain cells and then shuttled from one neuron to the next like a relay.

This traditional model of the brain, however, couldn't explain how a single sensory cell can react so differently to the same thing under different conditions. A cell, for instance, might become activated in response to a quick flash of light when an animal is particularly alert, but will remain inactive in response to the same light if the animal's attention is focused on something else.

Gepshtein likens the new understanding to wave-particle duality in physics and chemistry -- the idea that light and matter have properties of both particles and waves. In some situations, light behaves as if it is a particle (also known as a photon). In other situations, it behaves as if it is a wave. Particles are confined to a specific location, and waves are distributed across many locations. Both views of light are needed to explain its complex behavior.

"The traditional view of brain function describes brain activity as an interaction of neurons. Since every neuron is confined to a specific location, this view is akin to the description of light as a particle," says Gepshtein, director of Salk's Collaboratory for Adaptive Sensory Technologies. "We've found that in some situations, brain activity is better described as interaction of waves, which is similar to the description of light as a wave. Both views are needed for understanding the brain."

Some sensory cell properties observed in the past were not easy to explain given the "particle" approach to the brain. In the new study, the team observed the activity of 139 neurons in an animal model to better understand how the cells coordinated their response to visual information. In collaboration with physicist Sergey Savel'ev of Loughborough University, they created a mathematical framework to interpret the activity of neurons and to predict new phenomena.

The best way to explain how the neurons were behaving, they discovered, was through interaction of microscopic waves of activity rather than interaction of individual neurons. Rather than a flash of light activating specialized sensory cells, the researchers showed how it creates distributed patterns: waves of activity across many neighboring cells, with alternating peaks and troughs of activation -- like ocean waves.

When these waves are being simultaneously generated in different places in the brain, they inevitably crash into one another. If two peaks of activity meet, they generate an even higher activity, while if a trough of low activity meets a peak, it might cancel it out. This process is called wave interference.
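A toy superposition example (my sketch, not the authors' mathematical framework) makes the point concrete: summing two identical waves in phase doubles the peaks, while summing a wave with a copy shifted by half a wavelength cancels them.

import numpy as np

x = np.linspace(0, 4, 400)      # arbitrary spatial axis (illustrative units)
k = 2 * np.pi / 2.0             # wavenumber for a wavelength of 2

in_phase = np.sin(k * x) + np.sin(k * x)             # peaks meet peaks
anti_phase = np.sin(k * x) + np.sin(k * x + np.pi)   # peaks meet troughs

print(f"peak combined activity, waves in phase:     {in_phase.max():.2f}")            # ~2: enhanced
print(f"peak combined activity, waves out of phase: {np.abs(anti_phase).max():.2e}")  # ~0: cancelled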

"When you're out in the world, there are many, many inputs and so all these different waves are generated," says Albright. "The net response of the brain to the world around you has to do with how all these waves interact."

To test their mathematical model of how neural waves occur in the brain, the team designed an accompanying visual experiment. Two people were asked to detect a thin faint line ("probe") located on a screen and flanked by other light patterns. How well the people performed this task, the researchers found, depended on where the probe was. The ability to detect the probe was elevated at some locations and depressed at other locations, forming a spatial wave predicted by the model.

"Your ability to see this probe at every location will depend on how neural waves superimpose at that location," says Gepshtein, who is also a member of Salk's Center for the Neurobiology of Vision. "And we've now proposed how the brain mediates that."

The discovery of how neural waves interact is much more far-reaching than explaining this optical illusion. The researchers hypothesize that the same kinds of waves are being generated -- and interacting with each other -- in every part of the brain's cortex, not just the part responsible for the analysis of visual information. That means waves generated by the brain itself, by subtle cues in the environment or internal moods, can change the waves generated by sensory inputs.

This may explain how the brain's response to something can shift from day to day, the researchers say.

Additional co-authors of the paper include Ambarish Pawar of Salk and Sunwoo Kwon of the University of California, Berkeley.

The work was supported in part by the Salk Institute's Sloan-Swartz Center for Theoretical Neurobiology, the Kavli Institute for Brain and Mind, the Conrad T. Prebys Foundation, the National Institutes of Health (R01-EY018613, R01-EY029117) and the Engineering and Physical Sciences Research Council (EP/S032843/1).

            https://www.sciencedaily.com/releases/2022/04/220422161527.htm

 

Friday, April 22, 2022

Hints of Why the Pacific Islands Were Colonized

The discovery of pottery from the ancient Lapita culture by researchers at The Australian National University (ANU) has shed new light on how Papua New Guinea served as a launching pad for the colonisation of the Pacific -- one of the greatest migrations in human history

From:  Australian National University

April 22, 2022 -- The new study makes clear the initial expansion of the Lapita people throughout Papua New Guinea was far greater than previously thought.

The study, published in the Nature Ecology and Evolution journal, is based on the discovery of a distinctive Lapita pottery sherd, a broken piece of pottery with sharp edges, on Brooker Island in 2017 that lead researcher Dr Ben Shaw said was "like finding a needle in a haystack."

"Lapita cultural groups were the first people to reach the remote Pacific islands such as Vanuatu around 3,000 years ago. But in Papua New Guinea where people have lived for at least 50,000 years, the timing and extent of Lapita dispersals are poorly understood," Dr Shaw said.

"For a long time, it was thought Lapita groups avoided most of Papua New Guinea because people were already living there."

The study shows Lapita people introduced pottery to Papua New Guinea that had distinct markings, as well as new tool technologies and animals such as pigs.

"We found lots of Lapita pottery, a range of stone tools and evidence for shaping of obsidian [volcanic glass] into sharp blades," Dr Shaw said.

"As we dug deeper, we reached an even earlier cultural layer before the introduction of pottery. What amazed us was the amount of mammal bone we recovered, some of which could be positively identified as pig and dog. These animals were introduced to New Guinea by Lapita and were associated with the use of turtle shell to make tools."

Dr Shaw said the new discovery explains why the Lapita people colonised the Pacific islands 3,000 years ago and the role that Indigenous populations in New Guinea had in Lapita decisions to look for new islands to live on.

According to Dr Shaw, later Lapita dispersals through PNG and interaction with Indigenous populations profoundly influenced the region as a global centre of cultural and linguistic diversity.

"It is one of the greatest migrations in human history and finally we have evidence to help explain why the migration might have occurred and why it took place when it did," he said.

"We had no indication this would be a site of significance, and a lot of the time we were flying blind with the areas we surveyed and when looking for archaeological sites, so it is very much like finding the proverbial needle in a haystack."

The research involved many ANU researchers and international collaborators who showed how migration pathways and island-hopping strategies culminated in rapid and purposeful Pacific-wide settlement.

"A lot of our good fortune was because of the cultural knowledge, and we built a strong relationship with the locals based on honesty and transparency about our research on their traditional lands. Without their express permission, this kind of work would simply not be possible. The Brooker community is listed as the senior author on the paper to acknowledge their fundamental role in this research," Dr Shaw said.

      Discovery sheds light on why the Pacific islands were colonized -- ScienceDaily

 

Thursday, April 21, 2022

New Kind of 3D Printing from Stanford

Engineers at Stanford and Harvard have laid the groundwork for a new system for 3D printing that doesn’t require that an object be printed from the bottom up.

From:  Stanford News Service

By Laura Castanon

April 20, 2022 -- While 3D printing techniques have advanced significantly in the last decade, the technology continues to face a fundamental limitation: objects must be built up layer by layer. But what if they didn’t have to be?

Dan Congreve, an assistant professor of electrical engineering at Stanford and former Rowland Fellow at the Rowland Institute at Harvard University, and his colleagues have developed a way to print 3D objects within a stationary volume of resin. The printed object is fully supported by the thick resin – imagine an action figure floating in the center of a block of Jell-O – so it can be added to from any angle. This removes the need for the support structures typically required for creating complex designs with more standard printing methods. The new 3D printing system, which was recently published in Nature, could make it easier to print increasingly intricate designs while saving time and material.

“The ability to do this volumetric printing enables you to print objects that were previously very difficult,” said Congreve. “It’s a very exciting opportunity for three-dimensional printing going forward.”

Printing with light

On its surface, the technique seems relatively straightforward: The researchers focused a laser through a lens and shone it into a gelatinous resin that hardens when exposed to blue light. But Congreve and his colleagues couldn’t simply use a blue laser – the resin would cure along the entire length of the beam. Instead, they used red light and some cleverly designed nanomaterials scattered throughout the resin to create blue light at only the precise focal point of the laser. By shifting the laser around the container of resin, they were able to create detailed, support-free prints.
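To make the scanning idea concrete, here is a toy voxel model of volumetric printing (my illustration, not the researchers' control software): the resin is a stationary 3D grid, and only voxels near the scanned focal point are marked as cured, since the up-converted blue light exists only there. The grid size, focal radius, and helical path are arbitrary assumptions.

import numpy as np

N = 64                                    # voxels per side of the resin block
cured = np.zeros((N, N, N), dtype=bool)   # the stationary volume of resin
zz, yy, xx = np.meshgrid(np.arange(N), np.arange(N), np.arange(N), indexing="ij")
focal_radius = 2.0                        # voxels cured around each focal-point dwell

def cure_at(cx, cy, cz):
    """Mark the small region around the current laser focus as cured."""
    dist2 = (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2
    cured[dist2 <= focal_radius ** 2] = True

# Scan the focus along a helical path floating in the middle of the volume;
# no support structures or bottom-up layers are needed.
for t in np.linspace(0, 6 * np.pi, 300):
    cure_at(32 + 15 * np.cos(t), 32 + 15 * np.sin(t), 10 + 2.3 * t)

print(f"cured voxels: {cured.sum()} of {cured.size}")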

Congreve’s lab specializes in converting one wavelength of light to another using a method called triplet fusion up-conversion. With the right molecules in close proximity to each other, the researchers can create a chain of energy transfers that, for example, turn low-energy red photons into high-energy blue ones.

“I got interested in this up-conversion technique back in grad school,” Congreve said. “It has all sorts of interesting applications in solar, bio, and now this 3D printing. Our real specialty is in the nanomaterials themselves – engineering them to emit the right wavelength of light, to emit it efficiently, and to be dispersed in resin.”

Through a series of steps (which included sending some of their materials for a spin in a Vitamix blender), Congreve and his colleagues were able to form the necessary up-conversion molecules into distinct nanoscale droplets and coat them in a protective silica shell. Then they distributed the resulting nanocapsules, each of which is 1000 times smaller than the width of a human hair, throughout the resin.

“Figuring out how to make the nanocapsules robust was not trivial – a 3D-printing resin is actually pretty harsh,” said Tracy Schloemer, a postdoctoral researcher in Congreve’s lab and one of the lead authors on the paper. “And if those nanocapsules start falling apart, your ability to do upconversion goes away. All your contents spill out and you can’t get those molecular collisions that you need.”

Next steps for light-converting nanocapsules

The researchers are currently working on ways to refine their 3D-printing technique. They are investigating the possibility of printing multiple points at the same time, which would speed up the process considerably, as well as printing at higher resolutions and smaller scales.

Congreve is also exploring other opportunities to put the up-converting nanocapsules to use. They may be able to help improve the efficiency of solar panels, for example, by converting unusable low-energy light into wavelengths the solar cells can collect. Or they could be used to help researchers more precisely study biological models that can be triggered with light or even, in the future, deliver localized treatments.

“You could penetrate tissue with infrared light and then turn that infrared light into high-energy light with this up-conversion technique to, for example, drive a chemical reaction,” said Congreve. “Our ability to control materials at the nanoscale gives us a lot of really cool opportunities to solve challenging problems that are otherwise difficult to approach.”

Additional Stanford co-authors of this research are postdoctoral scholar Tracy Schloemer; former visiting researcher Michael Seitz; and graduate student Arynn Gallegos. Other co-authors, including a co-lead author, are from the Rowland Institute at Harvard University.

This research was funded by the Rowland Institute at Harvard University, the Harvard PSE Accelerator Fund, the Gordon and Betty Moore Foundation, an Arnold O. Beckman Postdoctoral Fellowship, the Swiss National Science Foundation, the National Science Foundation, and a Stanford Graduate Fellowship in Science & Engineering (a Scott A. and Geraldine D. Macomber Fellowship).

                       https://news.stanford.edu/press/view/43438