Thursday, October 31, 2019

Passive Cooling with No Electricity

Passive device relies on a layer of material that blocks incoming sunlight but lets heat radiate away.

David Chandler | MIT News Office
October 30, 2019 -- Imagine a device that can sit outside under blazing sunlight on a clear day and, without using any power, cool things down by more than 23 degrees Fahrenheit (13 degrees Celsius). It almost sounds like magic, but a new system designed by researchers at MIT and in Chile can do exactly that.

The device, which has no moving parts, works by a process called radiative cooling. It blocks incoming sunlight to keep itself from heating up, and at the same time efficiently radiates infrared light — which is essentially heat — straight out into the sky and into space, cooling the device significantly below the ambient air temperature.


The key to the functioning of this simple, inexpensive system is a special kind of insulation, made of a polyethylene foam called an aerogel. This lightweight material, which looks and feels a bit like marshmallow, blocks and reflects the visible rays of sunlight so that they don’t penetrate through it. But it’s highly transparent to the infrared rays that carry heat, allowing them to pass freely outward.


The new system is described today in a paper in the journal Science Advances, by MIT graduate student Arny Leroy, professor of mechanical engineering and department head Evelyn Wang, and seven others at MIT and at the Pontifical Catholic University of Chile.


Such a system could be used, for example, as a way to keep vegetables and fruit from spoiling, potentially doubling the time the produce could remain fresh, in remote places where reliable power for refrigeration is not available, Leroy explains.


Minimizing heat gain


Radiative cooling is simply the main process by which most hot objects cool down. They emit midrange infrared radiation, which carries heat energy from the object straight off into space, because the atmosphere is highly transparent to infrared light at those wavelengths.


The new device is based on a concept that Wang and others demonstrated a year ago, which also used radiative cooling but employed a physical barrier, a narrow strip of metal, to shade the device from direct sunlight and keep it from heating up. That device worked, but it delivered less than half the cooling power of the new system, which owes the improvement to its highly efficient insulating layer.


“The big problem was insulation,” Leroy explains. The biggest input of heat preventing the earlier device from achieving deeper cooling came from the surrounding air. “How do you keep the surface cold while still allowing it to radiate?” he wondered. The problem is that almost all insulating materials are also very good at blocking infrared light, and so would interfere with the radiative cooling effect.


There has been a lot of research on ways to minimize heat loss, says Wang, who is the Gail E. Kendall Professor of Mechanical Engineering. But this is a different issue that has received much less attention: how to minimize heat gain. “It’s a very difficult problem,” she says.


The solution came through the development of a new kind of aerogel. Aerogels are lightweight materials that consist mostly of air and provide very good thermal insulation, with a structure made up of microscopic foam-like formations of some material. The team’s new insight was to make an aerogel out of polyethylene, the material used in many plastic bags. The result is a soft, squishy, white material that’s so lightweight that a given volume weighs just 1/50 as much as water.


The key to its success is that while it blocks more than 90 percent of incoming sunlight, thus protecting the surface below from heating, it is very transparent to infrared light, allowing about 80 percent of the heat rays to pass freely outward. “We were very excited when we saw this material,” Leroy says.
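Those two figures (more than 90 percent of sunlight blocked, roughly 80 percent of infrared transmitted) are enough for a back-of-the-envelope energy balance. The sketch below is my own illustration, not the team's model: it applies the Stefan-Boltzmann law to an emitter under the aerogel, with the effective sky temperature and solar flux as assumed placeholder values.

    # Back-of-the-envelope radiative cooling balance; a rough sketch only.
    # All inputs are assumed placeholder values, not the team's measurements.
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def net_cooling_power(t_emitter, t_sky=260.0, solar=1000.0,
                          ir_transmittance=0.8,  # aerogel passes ~80% of emitted infrared
                          solar_leakage=0.1):    # aerogel blocks >90% of incoming sunlight
        """Net heat leaving the emitter in W/m^2 (positive means net cooling)."""
        radiated = ir_transmittance * SIGMA * t_emitter**4
        absorbed_sky = ir_transmittance * SIGMA * t_sky**4  # downwelling atmospheric infrared
        absorbed_sun = solar_leakage * solar
        return radiated - absorbed_sky - absorbed_sun

    # The emitter settles at the temperature where the net flux crosses zero.
    for t in range(270, 311, 5):
        print(t, "K:", round(net_cooling_power(float(t)), 1), "W/m^2")

With these made-up inputs, the net flux crosses zero near 287 K, roughly 13 degrees below a 300 K ambient, which is in the same ballpark as the reported result.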


The result is that the aerogel can dramatically cool down a plate placed below it, made of a material such as metal or ceramic and referred to as the emitter. That plate could then cool a container connected to it, or cool liquid passing through coils in contact with it, to provide cooling for produce, air, or water.


Putting the device to the test


To test their predictions of its effectiveness, the team, along with its Chilean collaborators, set up a proof-of-concept device in Chile's Atacama Desert, parts of which are the driest land on Earth. Those areas receive virtually no rainfall, yet, lying in the tropics, they receive blazing sunlight that could put the device to a real test. The device achieved a cooling of 13 degrees Celsius under full sunlight at solar noon. Similar tests on MIT's campus in Cambridge, Massachusetts, achieved just under 10 degrees of cooling.


That’s enough cooling to make a significant difference in preserving produce in remote locations, the researchers say. In addition, it could be used to provide an initial cooling stage for electric refrigeration, thus minimizing the load on those systems to allow them to operate more efficiently with less power.


Theoretically, such a device could achieve a temperature reduction of as much as 50 degrees Celsius, the researchers say, so they are continuing to work on optimizing the system further so that it can be extended to other cooling applications, such as building air conditioning, without the need for any source of power. Radiative cooling has already been integrated with some existing air conditioning systems to improve their efficiency.


Already, though, they have achieved more cooling under direct sunlight than any other passive radiative system except those that use a vacuum for insulation, which is very effective but also heavy, expensive, and fragile.


This approach could also be a low-cost add-on to any other kind of cooling system, providing additional cooling to supplement a more conventional system. “Whatever system you have,” Leroy says, “put the aerogel on it, and you’ll get much better performance.”

Peter Bermel, an associate professor of electrical and computer engineering at Purdue University, who was not involved in this work, says, “The main potential benefit of the polyethylene aerogel presented here may be its relative compactness and simplicity, compared to a number of prior experiments.”


He adds, “It might be helpful to quantitatively compare and contrast this method with some alternatives, such as polyethylene films and angle-selective blocking in terms of performance (e.g., temperature change), cost, and weight per unit area. … The practical benefit could be significant if the comparison were performed and the cost/benefit tradeoff significantly favored these aerogels.”

The work was partly supported by an MIT International Science and Technology Initiatives (MISTI) Chile Global Seed Fund grant, and by the U.S. Department of Energy through the Solid State Solar Thermal Energy Conversion Center (S3TEC).


Wednesday, October 30, 2019

Early Retirement and Cognitive Decline


From Binghamton University

BINGHAMTON, N.Y. – October 29, 2019 -- Early retirement can accelerate cognitive decline among the elderly, according to research conducted by faculty at Binghamton University, State University of New York.


Plamen Nikolov, assistant professor of economics, and Alan Adelman, a doctoral student in economics, examined China's New Rural Pension Scheme (NRPS) and the Chinese Health and Retirement Longitudinal Survey (CHARLS) to determine the effects of pension benefits on individual cognition of those ages 60 or above. CHARLS, a nationally representative survey of people ages 45 and above within the Chinese population, is a sister survey of the U.S. Health and Retirement Survey and directly tests cognition with a focus on episodic memory and components of intact mental status.


With higher life expectancy and declining fertility in developing countries, the elderly have become the fastest-growing demographic group in Asia and Latin America, generating an urgent need for new, sustainable pension systems. However, research suggests that these retirement plans can be detrimental, as retirement plays a significant role in explaining cognitive decline at older ages.


"Because of this large demographic boom, China introduced a formal pension program (called NRPS) in rural parts of the country. The program was introduced on the basis of an economy's needs and capacity, in particular to alleviate poverty in old age," said Nikolov. "In rural parts of the country, traditional family-based care for the elderly had largely broken down, without adequate formal mechanisms to take its place. For the elderly, inadequate transfers from either informal family and community transfers could severely reduce their ability to cope with illness or poor nutrition."


The researchers discovered significant negative effects of pension benefits on cognitive functioning among the elderly. The largest indicator of cognitive decline was delayed recall, a measure widely implicated in neurobiological research as an important predictor of dementia. The pension program had more negative effects among females, and Nikolov said the results support the mental retirement hypothesis: that decreased mental activity results in the worsening of cognitive skills.


"Individuals in the areas that implement the NRPS score considerably lower than individuals who live in areas that do not offer the NRPS program," Nikolov said. "Over the almost 10 years since its implementation, the program led to a decline in cognitive performance by as high as almost a fifth of a standard deviation on the memory measures we examine."


Surprisingly, the estimated program impacts were similar to the negative findings reported in higher-income countries and regions such as the United States, England and the European Union, which Nikolov said demonstrates that the effects of retirement are a global issue.


"We were surprised to find that pension benefits and retirement actually resulted in reduced cognitive performance. In a different study we found a very robust finding that the introduction of pension benefits and retirement led to positive health benefits via improvements in sleep and the reduction of alcohol consumption and smoking," he said. "The fact that retirement led to reduced cognitive performance in and of itself is a stark finding about an unsuspected, puzzling issue, but a finding with extremely important welfare implications for one's quality of life in old age."


While pension benefits and retirement were found to improve health, these programs also had a stark, much more negative influence on other dimensions: social activities, activities associated with mental fitness and, more broadly, social engagement.


"For cognition among the elderly, it looks like the negative effect on social engagement far outweighed the positive effect of the program on nutrition and sleep," said Nikolov. "Or alternatively, the kinds of things that matter and determine better health might simply be very different than the kinds of things that matter for better cognition among the elderly. Social engagement and connectedness may simply be the single most powerful factors for cognitive performance in old age."

Nikolov said he hopes this research will help create new policies to improve the cognitive functioning of older generations during retirement.


"We hope our findings will influence retirees themselves but perhaps, more importantly, it will influence policymakers in developing countries," Nikolov said. "We show robust evidence that retirement has important benefits. But it also has considerable costs. Cognitive impairments among the elderly, even if not severely debilitating, bring about a loss of quality of life and can have negative welfare consequences. Policymakers can introduce policies aimed at buffering the reduction of social engagement and mental activities. In this sense, retirement programs can generate positive spillovers for health status of retirees without the associated negative effect on their cognition."

Nikolov plans to continue research on this topic and examine how the introduction of pension benefits led to responses of labor force participation among the elderly in rural China.


The paper, "Do Pension Benefits Accelerate Cognitive Decline? Evidence from Rural China," was published in the IZA Institute of Labor Economics.


Tuesday, October 29, 2019

UN Diplomat Sadako Ogata Dies


Sadako Ogata, née Nakamura (緒方 貞子 Ogata Sadako, 16 September 1927 – 22 October 2019) was a Japanese academic, diplomat, author, administrator, and professor emeritus at Sophia University. She was widely known for serving as the United Nations High Commissioner for Refugees (UNHCR) from 1991 to 2000, as Chair of the UNICEF Executive Board from 1978 to 1979, and as President of the Japan International Cooperation Agency (JICA) from 2003 to 2012. She also served as Advisor to the Executive Committee of the Japan Model United Nations (JMUN).
   
                                                                Sadako Ogata in 1993      


Early and Academic Life


Ogata was born on 16 September 1927 to Toyoichi Nakamura, a career diplomat who served as the Japanese ambassador to Finland. Her mother was a daughter of Foreign Minister Kenkichi Yoshizawa and a granddaughter of Prime Minister Inukai Tsuyoshi, who was assassinated when Sadako was four years old.


She attended the Catlin Gabel School, class of 1946, and graduated from the University of the Sacred Heart with a bachelor's degree in English Literature. She then studied at Georgetown University and its Edmund A. Walsh School of Foreign Service, earning a master's degree in International Relations. It was not common for a Japanese woman to study abroad at that time; she wanted to study in the US the causes of Japan's defeat in the war. She was awarded a PhD in Political Science from the University of California, Berkeley in 1963, after completing a dissertation on the politics behind the foundation of Manchukuo; the study analyzed the causes of the Japanese invasion of China. In 1965, she became a lecturer at International Christian University. After 1980, she taught international politics at Sophia University as a professor and later served as Dean of the Faculty of Foreign Studies until her departure to join UNHCR in 1991.


Career


Ogata was appointed to Japan's UN mission in 1968, on the recommendation of Fusae Ichikawa, a member of the House of Councillors of Japan and an activist who thought highly of Ogata. She represented Japan at several sessions of the UN General Assembly in 1970. In addition, she served from 1978 to 1979 as Envoy Extraordinary and Minister Plenipotentiary at the permanent mission of Japan to the UN, and as Chair of the UNICEF Executive Board.


In 1990, she was appointed United Nations High Commissioner for Refugees. She left Sophia University and started her new position at UNHCR. The expected term was only three years, the remainder of the term of a predecessor who had departed abruptly. After her arrival at the post in 1991, however, her leadership led to a much longer tenure, ending in 2000. She implemented effective strategies and helped countless refugees escape from despair, including Kurdish refugees after the Gulf War, refugees of the Yugoslav Wars, refugees of the Rwandan genocide, and Afghan refugees, including victims of the Cold War. Faced with Kurdish refugees stranded at the border between Turkey and Iraq, Ogata expanded the mandate of UNHCR to include the protection of internally displaced persons (IDPs). She was a practical leader who brought military forces into humanitarian operations, for example the airlift conducted in cooperation with several European air forces during the siege of Sarajevo in the Bosnian War.


In 2001, she became co-chair of the UN Commission on Human Security.


Japanese government


After the September 11 attacks, she was appointed in 2002 as the Special Representative of the Prime Minister of Japan on Reconstruction Assistance to Afghanistan.


The Koizumi government approached Ogata as a candidate to replace Makiko Tanaka as Japanese foreign minister in early 2002, but Ogata refused to accept the position. Although Ogata did not publicly explain her refusal, Kuniko Inoguchi told The New York Times that Ogata "would hate to be used as a token or a figurehead because she has fought all her life for the condition of women, and she wouldn't help someone who would try to use her for their political purposes."


The next year, after her return to Tokyo, the Japanese government appointed her President of the Japan International Cooperation Agency (JICA), effective 1 October 2003. It was reported that young JICA officials had expressed a strong desire for her leadership even before the formal appointment. She continued to work as president of JICA for more than two terms (over eight years), retiring in April 2012 and being succeeded by Akihiko Tanaka.


She was a member of the Advisory Council on the Imperial House Law, the private advisory organ of then-Prime Minister Junichiro Koizumi, which belonged to the Cabinet Office. The council met 17 times from 25 January 2005 to discuss the Japanese succession controversy and the Imperial Household Act. On 24 November 2005, the council recommended extending the right to the throne to female members of the imperial household and to the female lineage, with succession by primogeniture regardless of gender. Both Ogata and Empress Michiko are alumnae of the University of the Sacred Heart.


A "Reception for Respecting Mrs. Sadako Ogata's Contributions to Our Country and the International Community" was held by Kōichirō Genba, Minister for Foreign Affairs on 17 April 2012, in Tokyo. Prime Minister Yoshihiko Noda gave a speech. He said "Because of the 2011 Tōhoku earthquake and tsunami, the offers of assistance to Japan from more than 160 countries and more than 40 international organizations were NOT irrelevant to Mrs. Sadako Ogata's achievements". Ogata is involved in the Sergio Vieira de Mello Foundation.


Death


Ogata died on 22 October 2019 at the age of 92.


                                            https://en.wikipedia.org/wiki/Sadako_Ogata

Monday, October 28, 2019

Charlie Chaplin Lost a Contest


by Mario Cruz


Judging look-alike contests can be somewhat subjective.  A good case in point dates all the way back to around 1915.  The nation was in the grip of a "Chaplinitis" phenomenon, and besides his numerous films, Charlie Chaplin look-alike contests became a popular form of entertainment.


At these events, contestants would compete to see who could best imitate the "tramp" persona championed by Chaplin.  Ironically, few people, even ardent fans, could recognize the real Chaplin without his familiar costume, makeup, and mustache.


For example, a young actor and comedian named Bob Hope won one of these contests in Cleveland, Ohio.


According to entertainment folklore, Chaplin himself once entered and lost one of these contests.  But it took place in San Francisco, not Monte Carlo or Switzerland as legend has it.  Also, contrary to erroneous reports circulating at the time, the winner was not his brother Syd; and not only did Charlie Chaplin fail to come in second or third, he did not even make the finals.


Chaplin was a good sport and had a good sense of humor.  He told a reporter at the time he wasn't upset at having lost, but he was "tempted to give lessons in the Chaplin walk, out of pity as well as in the desire to see the thing done correctly."


                                       http://travel-watch.com/mario/charlie_chaplin.htm

Doubts About Carbon Capture

Stanford Study Casts Doubt on Carbon Capture

Current approaches to carbon capture can increase air pollution and are not efficient at reducing carbon in the atmosphere, according to research from Mark Z. Jacobson.

By Taylor Kubota
October 25, 2019 -- One proposed method for reducing carbon dioxide (CO2) levels in the atmosphere – and reducing the risk of climate change – is to capture carbon from the air or prevent it from getting there in the first place. However, research from Mark Z. Jacobson at Stanford University, published in Energy & Environmental Science, suggests that carbon capture technologies can cause more harm than good.


“All sorts of scenarios have been developed under the assumption that carbon capture actually reduces substantial amounts of carbon. However, this research finds that it reduces only a small fraction of carbon emissions, and it usually increases air pollution,” said Jacobson, who is a professor of civil and environmental engineering. “Even if you have 100 percent capture from the capture equipment, it is still worse, from a social cost perspective, than replacing a coal or gas plant with a wind farm because carbon capture never reduces air pollution and always has a capture equipment cost. Wind replacing fossil fuels always reduces air pollution and never has a capture equipment cost.”


Jacobson, who is also a senior fellow at the Stanford Woods Institute for the Environment, examined public data from a coal-fired electric power plant fitted with carbon capture and from a plant that removes carbon directly from the air. In both cases, the electricity to run the carbon capture came from natural gas.

He calculated the net CO2 reduction and total cost of the carbon capture process in each case, accounting for the electricity needed to run the carbon capture equipment, the combustion and upstream emissions resulting from that electricity, and, in the case of the coal plant, its upstream emissions. (Upstream emissions are emissions, including from leaks and combustion, from mining and transporting a fuel such as coal or natural gas.)


Common estimates of carbon capture technologies – which only look at the carbon captured from energy production at a fossil fuel plant itself and not upstream emissions – say carbon capture can remediate 85-90 percent of carbon emissions. Once Jacobson calculated all the emissions associated with these plants that could contribute to global warming, he converted them to the equivalent amount of carbon dioxide in order to compare his data with the standard estimate. He found that in both cases the equipment captured the equivalent of only 10-11 percent of the emissions they produced, averaged over 20 years.
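To make that accounting concrete, here is a deliberately simplified sketch of the bookkeeping, my own illustration rather than Jacobson's model; every input below is a hypothetical placeholder. It counts the CO2 from the natural gas burned to power the capture equipment and converts upstream methane leaks into CO2-equivalents using a 20-year global warming potential:

    # Simplified CO2-equivalent ledger for a coal plant with gas-powered capture.
    # All inputs are hypothetical placeholders, not Jacobson's actual data.
    GWP20_CH4 = 86  # 20-year global warming potential of methane (IPCC AR5)

    def net_capture_fraction(coal_co2=1.0,         # stack CO2 from the coal plant (normalized)
                             capture_rate=0.85,    # nameplate fraction of stack CO2 captured
                             gas_co2=0.25,         # CO2 from gas burned to run the capture gear
                             coal_ch4_leak=0.004,  # upstream CH4 per unit coal CO2 (guess)
                             gas_ch4_leak=0.012):  # upstream CH4 per unit gas CO2 (guess)
        captured = capture_rate * coal_co2
        upstream_co2e = (coal_ch4_leak * coal_co2 + gas_ch4_leak * gas_co2) * GWP20_CH4
        total_co2e = coal_co2 + gas_co2 + upstream_co2e  # everything the system emits
        return captured / total_co2e

    print(f"net capture: {net_capture_fraction():.0%}")  # ~46% here, far below 85% nameplate

Even with these generous made-up inputs, the net figure lands well below the nameplate rate; substituting the measured equipment efficiencies Jacobson reports pushes it lower still.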


This research also looked at the social cost of carbon capture – including air pollution, potential health problems, economic costs and overall contributions to climate change – and concluded that those are always similar to or higher than operating a fossil fuel plant without carbon capture and higher than not capturing carbon from the air at all. Even when the capture equipment is powered by renewable electricity, Jacobson concluded that it is always better to use the renewable electricity instead to replace coal or natural gas electricity or to do nothing, from a social cost perspective.


Given this analysis, Jacobson argued that the best solution is to instead focus on renewable options, such as wind or solar, replacing fossil fuels.


Efficiency and upstream emissions


This research is based on data from two real carbon capture plants, which both run on natural gas. The first is a coal plant with carbon capture equipment. The second plant is not attached to any energy-producing counterpart. Instead, it pulls existing carbon dioxide from the air using a chemical process.


Jacobson examined several scenarios to determine the actual and possible efficiencies of these two kinds of plants, including what would happen if the carbon capture technologies were run with renewable electricity rather than natural gas, and if the same amount of renewable electricity required to run the equipment were instead used to replace coal plant electricity.


While the standard estimate for the efficiency of carbon capture technologies is 85-90 percent, neither of these plants met that expectation. Even without accounting for upstream emissions, the equipment associated with the coal plant was only 55.4 percent efficient over 6 months, on average. With the upstream emissions included, Jacobson found that, on average over 20 years, the equipment captured only 10-11 percent of the total carbon dioxide equivalent emissions that it and the coal plant contributed. The air capture plant was also only 10-11 percent efficient, on average over 20 years, once Jacobson took into consideration its upstream emissions and the uncaptured and upstream emissions that came from operating the plant on natural gas.


Due to the high energy needs of carbon capture equipment, Jacobson concluded that the social cost of coal with carbon capture powered by natural gas was about 24 percent higher, over 20 years, than the coal without carbon capture. If the natural gas at that same plant were replaced with wind power, the social cost would still exceed that of doing nothing. Only when wind replaced coal itself did social costs decrease.


For both types of plants, this suggests that even if carbon capture equipment captured 100 percent of the carbon it is designed to offset, the cost of manufacturing and running the equipment, plus the cost of the air pollution it continues to allow or even increases, would make it less effective than using those same resources to build renewable energy plants that replace coal or gas directly.


“Not only does carbon capture hardly work at existing plants, but there’s no way it can actually improve to be better than replacing coal or gas with wind or solar directly,” said Jacobson. “The latter will always be better, no matter what, in terms of the social cost. You can’t just ignore health costs or climate costs.”


This study did not consider what happens to carbon dioxide after it is captured, but Jacobson suggests that most applications today, which are for industrial use, result in additional leakage of carbon dioxide back into the air.


Focusing on renewables


Some propose that carbon capture could be useful in the future, even after we have stopped burning fossil fuels, to lower atmospheric carbon levels. Even assuming these technologies would run on renewables, Jacobson maintains that the smarter investment is in options that are disconnected from the fossil fuel industry, such as reforestation – a natural version of air capture – and other climate change solutions focused on eliminating other sources of emissions and pollution. These include reducing biomass burning, and reducing halogen, nitrous oxide and methane emissions.


“There is a lot of reliance on carbon capture in theoretical modeling, and by focusing on that as even a possibility, that diverts resources away from real solutions,” said Jacobson. “It gives people hope that you can keep fossil fuel power plants alive. It delays action. In fact, carbon capture and direct air capture are always opportunity costs.”

https://news.stanford.edu/2019/10/25/study-casts-doubt-carbon-capture/

Sunday, October 27, 2019

The Mask of Tutankhamun


The mask of Tutankhamun is a gold death mask of the 18th-dynasty ancient Egyptian Pharaoh Tutankhamun (reigned 1332–1323 BC). It was discovered by Howard Carter in 1925 in tomb KV62 in the Valley of the Kings, and is now housed in the Egyptian Museum in Cairo. The mask is one of the best-known works of art in the world.

                                                         Death Mask of Tutankhamun                        

Bearing the likeness of Osiris, the Egyptian god of the afterlife, it is 54 centimetres (1.8 ft) tall, weighs over 10 kilograms (22 lb, or 321.5 troy ounces), and is decorated with semi-precious stones. An ancient spell from the Book of the Dead is inscribed in hieroglyphs on the mask's shoulders. The mask had to be restored in 2015 after its 2.5-kilogram (5.5 lb) plaited beard fell off and was hastily glued back on by museum workers.


According to Egyptologist Nicholas Reeves, the mask is "not only the quintessential image from Tutankhamun's tomb, it is perhaps the best-known object from ancient Egypt itself."  Since 2001, research has suggested that it may originally have been intended for Queen Neferneferuaten; her royal name (Ankhkheperure) was found in a partly erased cartouche on the inside of the mask.


Discovery


Tutankhamun's burial chamber was found at the Theban Necropolis in the Valley of the Kings in 1922 and opened in 1923. It would be another two years before the excavation team, led by the English archaeologist Howard Carter, was able to open the heavy sarcophagus containing Tutankhamun's mummy. On 28 October 1925, they opened the innermost of three coffins to reveal the gold mask, seen by people for the first time in approximately 3,250 years. Carter wrote in his diary:


The pins removed, the lid was raised. The penultimate scene was disclosed – a very neatly wrapped mummy of the young king, with golden mask of sad but tranquil expression, symbolizing Osiris … the mask bears that god's attributes, but the likeness is that of Tut.Ankh.Amen – placid and beautiful, with the same features as we find upon his statues and coffins. The mask has fallen slightly back, thus its gaze is straight up to the heavens.

In December 1925, the mask was removed from the tomb, placed in a crate and transported 635 kilometres (395 mi) to the Egyptian Museum in Cairo, where it remains on public display. 


The Mask


The mask is 54 cm (21 in) tall, 39.3 cm (15.5 in) wide and 49 cm (19 in) deep. It is fashioned from two layers of high-karat gold, varying from 1.5–3 mm (0.059–0.118 in) in thickness, and weighing 10.23 kg (22.6 lb). X-ray crystallography has revealed that the mask contains two alloys of gold: a lighter 18.4 karat shade for the face and neck, and 22.5 karat gold for the rest of the mask.


The face represents the pharaoh's standard image, and the same image was found by excavators elsewhere in the tomb, in particular in the guardian statues. He wears a nemes headcloth, topped by the royal insignia of a cobra (Wadjet) and vulture (Nekhbet), symbolising Tutankhamun's rule of both Lower Egypt and Upper Egypt respectively. The ears are pierced to hold earrings, a feature that appears to have been reserved for queens and children in almost all surviving ancient Egyptian works of art.


It contains inlays of coloured glass and gemstones, including lapis lazuli (the eye surrounds and eyebrows), quartz (the eyes), obsidian (the pupils), carnelian, feldspar, turquoise, amazonite, faience and other stones (as inlays of the broad collar).


Beard


When it was discovered in 1925, the 2.5 kg (5.5 lb) narrow gold beard, inlaid with blue lapis lazuli to give it a plaited effect, had become separated from the mask; it was reattached to the chin using a wooden dowel in 1944.


In August 2014, the beard fell off when the mask was taken out of its display case for cleaning. The museum workers responsible used quick-drying epoxy in an attempt to fix it, leaving the beard off-center. The damage was noticed in January 2015 and has been repaired by a German-Egyptian team who reattached it using beeswax, a natural material used by the ancient Egyptians.


In January 2016, it was announced that eight employees of the Egyptian Museum were to stand trial for allegedly ignoring scientific and professional methods of restoration and causing permanent damage to the mask. A former director of the museum and a former director of restoration were among those facing prosecution. As of January 2016, the date of the trial remains unknown.


Inscription


A protective spell is inscribed with Egyptian hieroglyphs on the back and shoulders in ten vertical and two horizontal lines. The spell first appeared on masks in the Middle Kingdom, 500 years before Tutankhamun, and was used in Chapter 151 of the Book of the Dead.


Thy right eye is the night bark (of the sun-god), thy left eye is the day-bark, thy eyebrows are (those of) the Ennead of the Gods, thy forehead is (that of) Anubis, the nape of thy neck is (that of) Horus, thy locks of hair are (those of) Ptah-Sokar. (Thou art) in front of the Osiris (Tutankhamun). He sees thanks to thee, thou guidest him to the goodly ways, thou smitest for him the confederates of Seth so that he may overthrow thine enemies before the Ennead of the Gods in the great Castle of the Prince, which is in Heliopolis … the Osiris, the King of Upper Egypt Nebkheperure [Tutankhamun's throne-name], deceased, given life by Re.

Osiris was the Egyptian god of the afterlife. Ancient Egyptians believed that kings preserved in the likeness of Osiris would rule the Kingdom of the Dead. The cult of Osiris never totally replaced the older cult of the sun, in which dead kings were thought to be reanimated as the sun-god Re, whose body was made of gold and lapis lazuli. This confluence of old and new beliefs resulted in a mixture of emblems inside Tutankhamun's sarcophagus and tomb.


Bead necklace


The mask has a triple-string necklace of gold and blue faience disc-beads with lotus flower terminals and uraeus clasps, although the necklace is usually removed when the mask is on display.


                                   https://en.wikipedia.org/wiki/Mask_of_Tutankhamun

Saturday, October 26, 2019

AI for Safer Self-Driving Vehicles


Platform for Scalable Testing of Autonomous Vehicle Safety

From University of Illinois College of Engineering


October 25, 2019 -- In the race to manufacture autonomous vehicles (AVs), safety is crucial yet sometimes overlooked, as exemplified by recent headline-making accidents. Researchers at the University of Illinois at Urbana-Champaign are using artificial intelligence (AI) and machine learning to improve the safety of autonomous technology through both software and hardware advances.


"Using AI to improve autonomous vehicles is extremely hard because of the complexity of the vehicle's electrical and mechanical components, as well as variability in external conditions, such as weather, road conditions, topography, traffic patterns, and lighting," said Ravi Iyer

"Progress is being made, but safety continues to be a significant concern."


The group has developed a platform that enables companies to more quickly and cost-effectively address safety in the complex and ever-changing environment of autonomous technology. They are collaborating with many companies in the Bay Area, including Samsung, NVIDIA, and a number of start-ups.


"We are seeing a stakeholder-wide effort across industries and universities with hundreds of startups and research teams, and are tackling a few challenges in our group," said Saurabh Jha, a doctoral candidate in computer science who is leading student efforts on the project. "Solving this challenge requires a multidisciplinary effort across science, technology, and manufacturing."


One reason this work is so challenging is that AVs are complex systems that use AI and machine learning to integrate mechanical, electronic, and computing technologies to make real-time driving decisions. A typical AV is a mini-supercomputer on wheels, with more than 50 processors and accelerators running more than 100 million lines of code to support computer vision, planning, and other machine learning tasks.


As expected, there are concerns with the sensors and the autonomous driving stack (computing software and hardware) of these vehicles. When a car is traveling 70 mph down a highway, failures can be a significant safety risk to drivers.


"If a driver of a typical car senses a problem such as vehicle drift or pull, the driver can adjust his/her behavior and guide the car to a safe stopping point," Jha explained. "However, the behavior of the autonomous vehicle may be unpredictable in such a scenario unless the autonomous vehicle is explicitly trained for such problems. In the real world, there are infinite number of such cases."


Traditionally, when a person has trouble with software on a computer or smartphone, the most common IT response is to turn the device off and back on again. However, this type of fix is not advisable for AVs, as every millisecond impacts the outcome and a slow response could lead to death. Safety concerns about such AI-based systems have increased in the last couple of years among stakeholders due to various accidents caused by AVs.


"Current regulations require companies like Uber and Waymo, who test their vehicles on public roads to annually report to the California DMV about how safe their vehicles are," said Subho Banerjee, a CSL and computer science graduate student. "We wanted to understand common safety concerns, how the cars behaved, and what the ideal safety metric is for understanding how well they are designed."


The group analyzed all the safety reports submitted from 2014 to 2017, covering 144 AVs driving a cumulative 1,116,605 autonomous miles. They found that for the same number of miles driven, human-driven cars were up to 4,000 times less likely than AVs to have an accident. This means that the autonomous technology failed to appropriately handle a situation, at an alarming rate, and disengaged, often relying on the human driver to take over.


The problem researchers and companies have when it comes to improving those numbers is that until an autonomous vehicle system has a specific issue, it's difficult to train the software to overcome it.


Further, errors in the software and hardware stacks manifest as safety-critical issues only under certain driving scenarios. In other words, tests performed on highways or on empty or less-crowded roadways may not be sufficient, because safety violations under software and hardware faults are rare.


When errors do occur, they take place after hundreds of thousands of miles have been driven. The work that goes into testing these AVs for hundreds of thousands of miles takes considerable time, money, and energy, making the process extremely inefficient. The team is using computer simulations and artificial intelligence to speed up this process.


"We inject errors in the software and hardware stack of the autonomous vehicles in computer simulations and then collect data on the autonomous vehicle responses to these problems," said Jha. 
"Unlike humans, AI technology today cannot reason about errors that may occur in different driving scenarios. Therefore, needing vast amounts of data to teach the software to take the right action in the face of software or hardware problems."


The research group is currently building techniques and tools to generate driving conditions and issues that maximally impact AV safety. Using their technique, they can find a large number of safety-critical scenarios in which errors can lead to accidents, without having to enumerate all possibilities on the road -- a huge savings of time and money.


During testing of one openly available AV technology, Apollo from Baidu, the team found more than 500 examples in which the software failed to handle an issue and the failure led to an accident.

Results like these are getting the group's work noticed in the industry. They are currently working on a patent for their testing technology, and plan to deploy it soon. Ideally, the researchers hope companies use this new technology to simulate the identified issue and fix the problems before the cars are deployed.


"The safety of autonomous vehicles is critical to their success in the marketplace and in society," said Steve Keckler, vice president of Architecture Research for NVIDIA. "We expect that the technologies being developed by the Illinois research team will make it easier for engineers to develop safer automotive systems at lower cost. NVIDIA is excited about our collaboration with Illinois and is pleased to support their work."


Friday, October 25, 2019

Race, Meat and Starvation


Among the specious claims about the role of meat in the history of humanity: A meat-rich diet brings with it a masculine vigor that distinguishes carnivorous races.

By Josh Berson


The MIT Press Reader -- Prior to the identification of the micronutrients we call vitamins in the 1930s, nutrition science was mainly a science of animal energetics, or the study of how animals metabolize food into energy. Animal energetics, in turn, was a science of animal starvation. It was also a science of race.


The questions physiologists asked about animal energetics were straightforward: How much energy was required to keep an animal from starving under various conditions (for example, physical regimen, ambient temperature)? How much protein — specifically, in the early days, how much meat — was required to maintain the animal in nitrogen equilibrium, that is, to ensure that the quantity of nitrogen lost as urea in the urine was equal to that ingested? Efforts to measure metabolic rate by gauging the volume of carbon dioxide expelled in respiration went back at least to the French chemist Antoine Lavoisier’s experiments with guinea pigs in the 1780s, but for a long time, respirometry remained cumbersome and subject to the concern that what an animal did under a respirometer hood did not represent a good approximation to what it did out in the world. So in most labs, the key methods of research into the 1910s were collecting animal waste and fasting animals, often to the death.
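For readers unfamiliar with that bookkeeping, here is a rough worked example of nitrogen-balance arithmetic. It is my own illustration: the loss figures are invented, and it leans on the standard approximation that protein is about 16 percent nitrogen by mass (the familiar 6.25 conversion factor).

    # Hypothetical nitrogen-balance arithmetic of the kind these labs relied on.
    PROTEIN_N_FRACTION = 0.16  # g nitrogen per g protein (the 1/6.25 approximation)

    def nitrogen_balance(protein_intake_g, urinary_n_g, other_losses_g=1.5):
        """Positive = retaining nitrogen; near zero = equilibrium; negative = losing body protein."""
        n_in = protein_intake_g * PROTEIN_N_FRACTION
        return n_in - (urinary_n_g + other_losses_g)  # fecal and skin losses lumped as a guess

    # A meat-rich ration vs. a reduced one, with invented urinary-nitrogen figures:
    print(round(nitrogen_balance(118, urinary_n_g=16.0), 2))  # ~1.4 g/day surplus
    print(round(nitrogen_balance(55, urinary_n_g=7.0), 2))    # ~0.3 g/day, near equilibrium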


A variety of animals were sacrificed by starvation: rats, rabbits, guinea pigs, chickens, cats, and dogs. Physiologists were partial to dogs, and canine hunger artists were cited with approval in the energetics literature into the 1950s. A dog in one lab in Tokyo was reported in 1898 to have survived 98 days without food before succumbing, having lost 65 percent of its body mass. Fourteen years later, physiologists at the University of Illinois reported they had fasted their dog Oscar 117 days before ending the experiment: Oscar refused to manifest the increase in excreted nitrogen typical of late-stage morbidity and in fact remained in such good spirits, as his handlers reported, that as the fast went on he had to be restrained from leaping out of and into his cage before and after his daily weighing, lest he injure himself.


Humans, of course, could not be involuntarily fasted to the death, but self-experimentation was rampant in the energetics world. After 1890, fasting gained popularity as a health cure and the key to vigor, productivity, Christian virtue, masculinity, and racial superiority. Interest in fasting cures continued into the 1920s even as fasting gave way, in energetics research, to respirometric studies of resting metabolic rate and controlled trials of calorie restriction.


The practical aims of animal energetics were twofold. One was to improve feed conversion in livestock and, more broadly, to formulate generalizations about the relationship between body size and basal metabolic rate. The other was to understand the energy and protein needs of humans under different occupations. To most of the people involved in the debate around these questions, the underlying policy concern was clear: How much meat did you need to maintain an industrial labor force? — not to say a modern army and navy.


Around 1900, conventional wisdom held that active men required between 100 and 120 grams of protein a day at a minimum — a grossly high estimate — predominantly from animal sources, and an energy intake in the vicinity of 3,000 kcal. Periodically, reports would emerge of people getting by on considerably less — a community of fructarians in California, say — but these reports were mostly ignored.


The dominant voice in this conversation was that of German physiologist Carl von Voit. Voit’s laboratory at Munich had pioneered a number of the techniques then becoming standard in the physiology labs of the United States and Japan, notably the use of nitrogen equilibrium as a proxy for protein needs. Voit clove to a figure of 118 grams (4 ounces) of protein per day for a man of 70 kilograms (154 pounds) doing light work. This struck Yale physiologist Russell Chittenden as nonsense. In 1902 Chittenden undertook a series of clinical studies to demonstrate that 50 to 55 grams (2 ounces) of protein a day, and a considerably reduced energy intake, would keep young men in vigor and nitrogen balance indefinitely.


Chittenden put groups of Yale athletes and newly inducted U.S. Army soldiers (N of eight and 13, respectively) on carefully controlled diets and exercise regimens and observed them over a period of months — their food intake, their excreta, and their performance on various measures of fitness. He also kept notes on his own food intake and physical activity. The diets in question were experimental only in the sense that portions and protein content were controlled. In other respects, the food was ordinary and not especially healthy (lunch for the soldiers for one week included hamburgers, macaroni and cheese, clam chowder, bean porridge, and beef stew).


Opinion was divided as to the significance of his findings. One contemporary praised Chittenden’s rigor but thought it was too soon to attribute participants’ physical achievements to diet, since there was no control for the independent effects of the regimented way of life implicated in the experiments. Fifty years later, the nutritional biochemist Henry Sherman would hail Chittenden’s work as a breakthrough in understanding just how elastic the human response to protein is. Others regarded Chittenden’s results as a curiosity. But there were those who saw Chittenden’s work as anathema.


Chief among these was Major D. McCay, a professor of physiology in Calcutta. McCay, on the basis of long observation in India and a series of experiments with the diets of prisoners in Bengal, argued that Chittenden’s conclusions were not just wrong but dangerously so, for they undermined the clear connection between a diet rich in animal protein and the masculine vigor of the more advanced races. “There is little doubt,” he writes, “that the evidence of mankind points indisputably to a desire for protein up to European standards.”


“As soon as a race can provide itself with such amounts,” he adds, “it promptly does so; as soon as financial considerations are surmounted, so soon the so-called ‘vegetarian Japanese’ or Hindu raises his protein intake to reach the ordinary standard of mankind in general.”


That is, McCay argues, it is meat’s income elasticity that determines its rate of consumption. As soon as a race achieves the income necessary to support a meat-rich diet — presumably by adopting the industrial labor discipline of Europeans — its meat consumption shoots up and, with it, the masculine vigor that distinguishes meat-eating races everywhere. Writing a hundred years later, the geographer Vaclav Smil puts it another way: As soon as incomes rise, the “cultural constructs of pre-industrial societies” fall away.


With time, the tone of arguments like McCay’s changes. Talk of race becomes more muted, but concern about the implications of a vegetarian diet for national development persists. For Cornell biochemist William Adolph, writing toward the end of World War II, the “protein problem of China” was that for the 85 to 90 percent of the population living in the countryside, the diet was basically vegetarian. More precisely, 95 percent of the protein in the rural diet came from plant sources. Plant-source proteins, Adolph frets, are inferior both in that they are less easily digested and in that the protein they provide is lower in “biological value”; today we would say its Digestible Indispensable Amino Acid Score is lower. He expresses surprise at the success of the Chinese peasants he has observed in devising combinations of plant proteins that exceed those of any of the constituents — “another case of blind experimentation, examples of which are wide-spread throughout Asia.” But his experiences in China do not leave him sanguine about the possibilities of diet modification in the United States in service of the war effort: “Do we know, for example, how far the change from the omnivorous diet to the vegetarian can be carried with impunity? Many of our blessings in health and vigor are, nutritionally speaking, related to animal protein.”


Today we are faced with the opposite question: How far can the change to a carnivorous diet be carried with impunity? In the nutritional niche characteristic of emerging urban markets, growing meat consumption masks, and perhaps makes possible, growing precariousness.




[Josh Berson is an independent social scientist. He has held research appointments at the Berggruen Institute and the Max Planck Institute for Human Cognitive and Brain Sciences, among other places. He is the author of “The Meat Question,” from which this article is adapted.]

Thursday, October 24, 2019

Organelles and Cell Division

New Organelle Found That Helps Prevent Cancer

October 18, 2019 -- Scientists at the [University of Virginia] School of Medicine have discovered a strange new organelle inside our cells that helps to prevent cancer by ensuring that genetic material is sorted correctly as cells divide.

The researchers have connected problems with the organelle to a subset of breast cancer tumors that make lots of mistakes when segregating chromosomes. Excitingly, they found their analysis offered a new way for doctors to sort patient tumors as they choose therapies. They hope these insights will allow doctors to better personalize treatments to best benefit patients – sparing up to 40 percent of breast cancer patients, for example, a taxing treatment that won’t be effective.

“Some percentage of women get chemotherapy drugs for breast cancer that are not very effective. They are poisoned, in pain and their hair falls out, so if it isn’t curing their disease, then that’s tragic,” said researcher P. Todd Stukenberg, PhD, of UVA’s Department of Biochemistry and Molecular Genetics and the UVA Cancer Center. “One of our goals is to develop new tests to determine whether a patient will respond to a chemotherapeutic treatment, so they can find an effective treatment right away.”

The Disappearing Organelle

The organelle Stukenberg and his team have discovered is essential but ephemeral. It forms only when needed to ensure chromosomes are sorted correctly and disappears when its work is done. That’s one reason scientists haven’t discovered it before now. Another reason is its mind-bending nature: Stukenberg likens it to a droplet of liquid that condenses within another liquid. “That was the big wow moment, when I saw that on the microscope,” he said.

These droplets act as mixing bowls, concentrating certain cellular ingredients to allow biochemical reactions to occur in a specific location. “What’s exciting is that cells have this new organelle and certain things will be recruited into it and other things will be excluded,” Stukenberg said. “The cells enrich things inside the droplet and, all of a sudden, new biochemical reactions appear only in that location. It’s amazing.”

It’s tempting to think of the droplet like oil in water, but it’s really the opposite of that. Oil is hydrophobic – it repels water. This new organelle, however, is more sophisticated. “It’s more of a gel, where cellular components can still go in and out but it contains binding sites that concentrate a small set of the cell’s contents,” Stukenberg explained. “Our data suggests this concentration of proteins is really important. I can get complex biochemical reactions to occur inside a droplet that I’ve been failing to reconstitute in a test tube for years. This is the secret sauce I’ve been missing.”

It’s been known for about eight years that cells make such droplets for other processes, but it was unknown that they make them on chromosomes during cell division. Stukenberg believes these droplets are very common and more important than previously realized. “I think this is a general paradigm,” he said. “Cells are using these non-membranous organelles to regulate much of their work.”

Better Cancer Treatments

In addition to helping us understand mitosis – how cells divide – Stukenberg’s new discovery also sheds light on cancer and how it occurs. The organelle’s main function is to fix mistakes in tiny “microtubules” that pull apart chromosomes when cells are dividing. That ensures each cell winds up with the correct genetic material. In cancer, though, this repair process is defective, which can drive cancer cells to get more aggressive.

He has also developed tests to measure the amount of chromosome mis-segregation in tumors, and he hopes that this might allow doctors to pick the proper treatment to give cancer patients. “We have a way to identify the tumors where the cells are mis-segregating chromosomes at a higher rate,” he said. “My hope is to identify the patients where treatments such as paclitaxel are going to be the most effective.”

Having looked at breast cancer already, he next plans to examine the strange organelle’s role in colorectal cancer.

Findings Published

Stukenberg and his colleagues have described their latest discovery in the scientific journal Nature Cell Biology. The research team consisted of Prasad Trivedi, Francesco Palomba, Ewa Niedzialkowska, Michelle A. Digman, Enrico Gratton and Stukenberg.