Monday, April 30, 2018

AI Meets the U.S. Army

Artificial Intelligence Helps Soldiers
Learn Many Times Faster in Combat
New technology allows U.S. Soldiers to learn 13 times faster than with conventional methods, and Army researchers say this may help save lives.
ADELPHI, MD. (April 26, 2018) -- At the U.S. Army Research Laboratory, scientists are improving the rate of machine learning even with limited resources. The aim is to help Soldiers decipher hints of information faster and deploy solutions more quickly, such as recognizing threats like a vehicle-borne improvised explosive device, or spotting potential danger zones in aerial war-zone images.

The researchers relied on low-cost, lightweight hardware and implemented collaborative filtering, a well-known machine learning technique, on a state-of-the-art, low-power Field Programmable Gate Array (FPGA) platform. The design achieved a 13.3-times training speedup over a state-of-the-art optimized multi-core system and a 12.7-times speedup over optimized GPU systems.
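
The article does not describe ARL's actual FPGA implementation. As a rough, hypothetical sketch of the technique being accelerated, the following Python snippet trains a small matrix-factorization collaborative filter with stochastic gradient descent (SGD), the same class of training loop mentioned later in the article; all names and parameters here are illustrative assumptions.

    # Minimal collaborative filtering via matrix factorization, trained with SGD.
    # Illustrative sketch only -- not ARL's FPGA design; names/parameters assumed.
    import numpy as np

    def train_cf(ratings, n_factors=8, lr=0.01, reg=0.05, epochs=50, seed=0):
        """ratings: list of (user, item, value) observations."""
        rng = np.random.default_rng(seed)
        n_users = 1 + max(u for u, _, _ in ratings)
        n_items = 1 + max(i for _, i, _ in ratings)
        P = 0.1 * rng.standard_normal((n_users, n_factors))  # user factors
        Q = 0.1 * rng.standard_normal((n_items, n_factors))  # item factors
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]                    # prediction error
                pu = P[u].copy()                         # pre-update copy
                P[u] += lr * (err * Q[i] - reg * P[u])   # SGD step, user side
                Q[i] += lr * (err * pu - reg * Q[i])     # SGD step, item side
        return P, Q

    # Toy usage: predict an unobserved (user, item) affinity.
    data = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
    P, Q = train_cf(data)
    print(P[2] @ Q[0])  # estimated score for user 2 on item 0

On an FPGA, an inner update loop like this is the part that would typically be pipelined and parallelized, which is presumably where speedups over CPU and GPU baselines come from.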

The new technique consumed far less power, too: 13.8 watts, compared with 130 watts for the multi-core system and 235 watts for the GPU platform, making it a potentially useful component of adaptive, lightweight tactical computing systems.

Dr. Rajgopal Kannan, an ARL researcher, said this technique could eventually become part of a suite of tools embedded on the next generation combat vehicle, offering cognitive services and devices for warfighters in distributed coalition environments.

Developing technology for the next generation combat vehicle is one of the six Army Modernization Priorities the laboratory is pursuing.

Kannan collaborates on this work with a group of researchers at the University of Southern California, namely Prof. Viktor Prasanna and students from the data science and architecture lab. ARL and USC are working to accelerate and optimize tactical learning applications on heterogeneous, low-cost hardware through ARL's West Coast open campus initiative.

This work is part of the Army's larger focus on artificial intelligence and machine learning research initiatives, pursued to gain a strategic advantage and ensure warfighter superiority with applications such as on-field adaptive processing and tactical computing.

Kannan said he is working on developing several techniques to speed up AI/ML algorithms through innovative designs on state-of-the-art inexpensive hardware.

Kannan said the techniques in the paper can become part of the tool-chain for potential projects; for example, a recently launched adaptive-processing project on which he is a key researcher could use these capabilities.

His paper on accelerating stochastic gradient descent, a technique ubiquitous in machine learning training algorithms, won the best-paper award at the 26th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, the premier international conference on technical research in FPGAs, held in Monterey, California, Feb. 25-27.

The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.

http://www.arl.army.mil/www/default.cfm?article=3213

Sunday, April 29, 2018

1943 Disinformation about Sicily

Operation Mincemeat was a successful British disinformation strategy used during the Second World War. As a deception intended to cover the 1943 Allied invasion of Sicily, two members of British intelligence obtained the body of Glyndwr Michael, a tramp who died from eating rat poison, dressed him as an officer of the Royal Marines and placed personal items on him identifying him as Captain (Acting Major) William Martin. Correspondence between two British generals which suggested that the Allies planned to invade Greece and Sardinia, with Sicily as merely the target of a feint, was also placed on the body.

Part of the wider Operation Barclay, Mincemeat was based on the 1939 Trout memo, written by Rear Admiral John Godfrey, the Director of the Naval Intelligence Division, and his personal assistant, Lieutenant Commander Ian Fleming. With the approval of the British Prime Minister, Winston Churchill, and the overall military commander in the Mediterranean, General Dwight D. Eisenhower, the plan began with transporting the body to the southern coast of Spain by submarine, and releasing it close to shore. It was picked up the following morning by a Spanish fisherman. The nominally neutral Spanish government shared copies of the documents with the Abwehr, the German military intelligence organisation, before returning the originals to the British. Forensic examination showed they had been read, and decrypts of German messages showed the Germans fell for the ruse. Reinforcements were shifted to Greece and Sardinia both before and during the invasion of Sicily; Sicily received none.

The true impact of Operation Mincemeat is unknown, although the island was liberated more quickly than anticipated and losses were lower than predicted. The events were depicted in Operation Heartbreak, a 1950 novel by the former cabinet minister Duff Cooper, before one of the agents who planned and carried out Mincemeat, Ewen Montagu, wrote a history in 1953. Montagu's work formed the basis for a 1956 film.

Inspiration for Mincemeat

On 29 September 1939, soon after the start of the Second World War, Rear Admiral John Godfrey, the Director of Naval Intelligence, circulated the Trout memo, a paper that compared the deception of an enemy in wartime to fly fishing. The historian Ben Macintyre observes that although the paper was published under Godfrey's name, it "bore all the hallmarks of ... Lieutenant Commander Ian Fleming", Godfrey's personal assistant. The memo contained a number of schemes to be considered for use against the Axis powers, including ways to lure U-boats and German surface ships towards minefields. Number 28 on the list was titled: "A Suggestion (not a very nice one)"; it was an idea to plant misleading papers on a corpse that would be found by the enemy.

The following suggestion is used in a book by Basil Thomson: a corpse dressed as an airman, with despatches in his pockets, could be dropped on the coast, supposedly from a parachute that has failed. I understand there is no difficulty in obtaining corpses at the Naval Hospital, but, of course, it would have to be a fresh one.

The deliberate planting of fake documents to be found by the enemy was not new; known as the Haversack Ruse, it had been practised by the British and others in the First and Second World Wars. In August 1942, before the Battle of Alam el Halfa, a corpse was placed in a blown-up scout car, in a minefield facing the German 90th Light Division. On the corpse was a map purportedly showing the locations of British minefields; the Germans used the map and their tanks were routed to areas of soft sand where they bogged down.

In September 1942 an aircraft flying from Britain to Gibraltar crashed off Cádiz. All aboard were killed, including Paymaster-Lieutenant James Hadden Turner – a courier carrying top secret documents – and a French agent. Turner's documents included a letter from General Mark Clark, the American Deputy Commander of the Allied Expeditionary Force, to General Noel Mason-MacFarlane, the British Governor and Commander in Chief of Gibraltar, informing him that General Dwight D. Eisenhower, the Supreme Commander, would arrive in Gibraltar on the eve of the invasion's "target date" of 4 November. Turner's body washed up on the beach near Tarifa and was recovered by the Spanish authorities. When the body was returned to the British, the letter was still on it, and technicians determined that it had not been opened. Other Allied intelligence sources established that the notebook carried by the French agent had been copied by the Germans, who dismissed it as disinformation. To British planners this showed that at least some material obtained by the Spanish was being passed to the Germans.

Saturday, April 28, 2018

Korean Leaders Meet

The 2018 inter-Korean summit took place on April 27, 2018, on the South Korean side of the Joint Security Area, between Moon Jae-in, President of South Korea, and Kim Jong-un, Supreme Leader of North Korea. It was the third inter-Korean summit – and the first in eleven years. It was also the first time since the end of the Korean War in 1953 that a North Korean leader entered the South's territory; President Moon also briefly crossed into the North's territory. The summit was focused on the North Korean nuclear weapons program and denuclearization of the Korean Peninsula. The Panmunjom Declaration was made following the summit.

                                                          Kim and Moon Shake Hands

Agenda of the Summit

High-level government officials from the two Koreas held a working-level meeting at the Peace House on April 4 to discuss summit details. The summit would mainly address denuclearization, the establishment of peace, and the improvement of inter-Korean relations for mutual benefit. Although more than 200 NGOs called for North Korean human rights issues to be put on the agenda, and Japanese Prime Minister Shinzo Abe asked for the issue of Japan's abducted citizens to be included, neither was taken up.

Meeting at The Peace House

North Korea accepted the Peace House, located just south of the military demarcation line in the Joint Security Area of Panmunjeom, as the meeting's location from among the venues proposed by South Korea.

The meeting was the first visit by a North Korean leader to the territory of the South. This initial meeting of the two leaders, who shook hands over the demarcation line, was broadcast live. Moon accepted an invitation from Kim to briefly step over to the North's side of the line before the two walked together to the Peace House.

As well as the talks, the two leaders conducted a tree-planting ceremony using soil and water from both sides and attended a banquet. Many elements of the meeting were expressly designed for symbolism, including an oval meeting table measuring 2,018 millimetres (79.4 in) to represent the year.

Joint Press Conference and Agreement

In a joint press conference, Kim and Moon made a number of pledges regarding co-operation and peace. Notably, these included a pledge to work towards the denuclearization of the Korean peninsula, although Kim did not explicitly agree to give up the North's nuclear weapons. The two leaders also agreed to convert the Korean Armistice Agreement into a full peace treaty later in the year, formally ending the Korean War after 65 years. In addition, they pledged to end "hostile activities" between their nations, to resume reunion meetings for divided families, to improve connections along their border, and to cease propaganda broadcasts across it. This agreement, known as the Panmunjom Declaration for Peace, Prosperity and Unification of the Korean Peninsula, was signed by both leaders in the South Korean border village of Panmunjom. The press conference was shown live on South Korean television; however, live coverage was not available in North Korea, because the country's policy is not to broadcast live events involving its leader.

The leaders also pledged greater communication between them, and Moon agreed to visit Pyongyang in the fall.

https://en.wikipedia.org/wiki/2018_inter-Korean_summit

Friday, April 27, 2018

Found: Repeatedly Recyclable Plastic

‘Infinitely’ Recyclable Polymer Shows
Practical Properties of Plastics
By Anne Manning

April 26, 2018 -- The world fell in love with plastics because they’re cheap, convenient, lightweight and long-lasting. For these same reasons, plastics are now trashing the Earth.

Colorado State University chemists have announced in the journal Science another major step toward waste-free, sustainable materials that could one day compete with conventional plastics. Led by Eugene Chen, professor in the Department of Chemistry, they have discovered a polymer with many of the same characteristics we enjoy in plastics, such as light weight, heat resistance, strength and durability. But the new polymer, unlike typical petroleum plastics, can be converted back to its original small-molecule state for complete chemical recyclability. This can be accomplished without the use of toxic chemicals or intensive lab procedures.

Polymers are a broad class of materials characterized by long chains of chemically bonded, repeating molecular units called monomers. Synthetic polymers today include plastics, as well as fibers, ceramics, rubbers, coatings, and many other commercial products.

Building on fundamental knowledge


The work builds on a previous generation of chemically recyclable polymer that Chen's lab first demonstrated in 2015. Making the old version required extremely cold conditions that would have limited its industrial potential. The previous polymer also had low heat resistance and molecular weight, and, while plastic-like, was relatively soft.

But the fundamental knowledge gained from that study was invaluable, Chen said. It led to a design principle for developing future-generation polymers that not only are chemically recyclable, but also exhibit robust practical properties.

The new, much-improved polymer structure resolves the issues of the first-generation material. The monomer can be conveniently polymerized under environmentally friendly, industrially realistic conditions: solvent-free, at room temperature, with just a few minutes of reaction time and only a trace amount of catalyst. The resulting material has a high molecular weight, thermal stability and crystallinity, and mechanical properties that perform very much like a plastic. Most importantly, the polymer can be recycled back to its original, monomeric state under mild lab conditions, using a catalyst. Without need for further purification, the monomer can be re-polymerized, thus establishing what Chen calls a circular materials life cycle.

This piece of innovative chemistry has Chen and his colleagues excited for a future in which new, green plastics, rather than surviving in landfills and oceans for millions of years, can be simply placed in a reactor and, in chemical parlance, de-polymerized to recover their value – not possible for today’s petroleum plastics. Back at its chemical starting point, the material could be used over and over again – completely redefining what it means to “recycle.”

“The polymers can be chemically recycled and reused, in principle, infinitely,” Chen said.

Next steps


Chen stresses that the new polymer technology has only been demonstrated at the academic lab scale. There is still much work to be done to perfect the patent-pending monomer and polymer production processes he and colleagues have invented.

With the help of a seed grant from CSU Ventures, the chemists are optimizing their monomer synthesis process and developing new, even more cost-effective routes to such polymers. They’re also working on scalability issues on their monomer-polymer-monomer recycling setup, while further researching new chemical structures for even better recyclable materials.

“It would be our dream to see this chemically recyclable polymer technology materialize in the marketplace,” Chen said.

The paper’s first author is CSU research scientist Jian-Bo Zhu. Co-authors are graduate students Eli Watson and Jing Tang.

Thursday, April 26, 2018

Many Believe Cancer Myths


EurekAlert – April 25, 2018 -- Mistaken belief in mythical causes of cancer is rife, according to new research jointly funded by Cancer Research UK and published today (Thursday) in the European Journal of Cancer.

Researchers at University College London (UCL) and the University of Leeds surveyed 1,330 people in England and found that more than 40% wrongly thought that stress (43%) and food additives (42%) caused cancer.

A third incorrectly believed that electromagnetic frequencies (35%) and eating GM food (34%) were risk factors, while 19% thought microwave ovens and 15% said drinking from plastic bottles caused cancer, despite a lack of good scientific evidence.

Among the proven causes of cancer, 88% of people correctly selected smoking, 80% picked passive smoking and 60% said sunburn.

Belief in mythical causes of cancer did not mean a person was more likely to have risky lifestyle habits.

But those who had better knowledge of proven causes were more likely not to smoke.

Dr Samuel Smith from the University of Leeds said: "It's worrying to see so many people endorse risk factors for which there is no convincing evidence.

"Compared to past research it appears the number of people believing in unproven causes of cancer has increased since the start of the century which could be a result of changes to how we access news and information through the internet and social media.

"It's vital to improve public education about the causes of cancer if we want to help people make informed decisions about their lives and ensure they aren't worrying unnecessarily."

Dr Lion Shahab from UCL said: "People's beliefs are so important because they have an impact on the lifestyle choices they make. Those with better awareness of proven causes of cancer were more likely not to smoke and to eat more fruit and vegetables."

Clare Hyde from Cancer Research UK said: "Around four in 10 cancer cases could be prevented through lifestyle changes, so it's crucial we have the right information to help us separate the wheat from the chaff.

"Smoking, being overweight and overexposure to UV radiation from the sun and sunbeds are the biggest preventable causes of cancer.

"There is no guarantee against getting cancer but by knowing the biggest risk factors we can stack the odds in our favour to help reduce our individual risk of the disease, rather than wasting time worrying about fake news."



Wednesday, April 25, 2018

Future AI vs Nuclear Stalemate

By 2040, Artificial Intelligence
Could Upend Nuclear Stability

April 24, 2018 -- A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take potentially apocalyptic risks, according to the paper.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

The new RAND paper says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces, such as submarine-based and mobile missiles, could be targeted and destroyed.

Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

“The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history,” said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. “Much of the early development of AI was done in support of military efforts or with military objectives in mind.”

He said one example of such work was the Survivable Adaptive Planning Experiment in the 1980s that sought to use AI to translate reconnaissance data into nuclear targeting plans.

Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.

Researchers say that given future improvements, it is possible that eventually AI systems will develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilizing in the long term.

“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” said Andrew Lohn, co-author on the paper and associate engineer at RAND. “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”

RAND researchers based their paper on information collected during a series of workshops with experts in nuclear issues, government branches, AI research, AI policy and national security.

“How Might Artificial Intelligence Affect the Risk of Nuclear War?” is available at www.rand.org.

The paper is part of a broader effort to envision critical security challenges in the world of 2040, considering the effects of political, technological, social, and demographic trends that will shape those security challenges in the coming decades.

Funding for the Security 2040 initiative was provided by gifts from RAND supporters and income from operations.

The research was conducted within the RAND Center for Global Risk and Security, which works across the RAND Corporation to develop multi-disciplinary research and policy analysis dealing with systemic risks to global security. The center draws on RAND's expertise to complement and expand RAND research in many fields, including security, economics, health, and technology.

Tuesday, April 24, 2018

Firm Handshake? Healthier Mind!

Research Reveals Stronger People Have Healthier Brains

University of Manchester – April 20, 2018 -- A study of nearly half a million people has revealed that muscular strength, measured by handgrip, is an indication of how healthy our brains are.

Dr Joseph Firth, an Honorary Research Fellow at The University of Manchester and Research Fellow at NICM Health Research Institute at Western Sydney University, crunched the numbers using UK Biobank data.

Using data from 475,397 participants from all around the U.K., the new study showed that, on average, stronger people performed better across every test of brain functioning used.

Tests included reaction speed, logical problem solving, and multiple different tests of memory.

The study showed the relationships were consistently strong both in people aged under 55 and in those aged over 55; previous studies had only shown this applied in elderly people.

“When taking multiple factors into account such as age, gender, bodyweight and education, our study confirms that people who are stronger do indeed tend to have better functioning brains,” said Dr Firth.

The study, published in Schizophrenia Bulletin, also showed that maximal handgrip was strongly correlated with both visual memory and reaction time in over one thousand people with psychotic disorders such as schizophrenia.

He said: “We can see there is a clear connection between muscular strength and brain health.

“But really, what we need now, are more studies to test if we can actually make our brains healthier by doing things which make our muscles stronger – such as weight training.”

Previous research by the group has already found that aerobic exercise can improve brain health.

However, the benefit of weight training on brain health has yet to be fully investigated.

He added: “These sorts of novel interventions, such as weight training, could be particularly beneficial for people with mental health conditions.

“Our research has shown that the connections between muscular strength and brain functioning also exist in people experiencing schizophrenia, major depression and bipolar disorder – all of which can interfere with regular brain functioning.

“This raises the strong possibility that weight training exercises could actually improve both the physical and mental functioning of people with these conditions.”

Baseline data from the UK Biobank (2007-2010) were analysed, including 475,397 individuals from the general population and 1,162 individuals with schizophrenia.

The paper ‘Grip strength is associated with cognitive performance in schizophrenia and the general population: a UK Biobank study of 476,559 participants’ is published in Schizophrenia Bulletin, https://doi.org/10.1093/schbul/sby034 .

http://www.manchester.ac.uk/discover/news/research-reveals-stronger-people-have-healthier-brains/

Monday, April 23, 2018

3D Biology Microscope

New Microscope Captures Detailed 3-D Movies of Cells Deep Within Living Systems

Merging lattice light sheet microscopy with adaptive optics reveals the most detailed picture yet of subcellular dynamics in multicellular organisms

Howard Hughes Medical Institute – April 19, 2018 -- Our window into the cellular world just got a whole lot clearer.


Physicist Eric Betzig, a group leader at the Howard Hughes Medical Institute’s Janelia Research Campus, and colleagues report the new microscope April 19, 2018, in the journal Science.

Scientists have imaged living cells with microscopes for hundreds of years, but the sharpest views have come from cells isolated on glass slides. The large groups of cells inside whole organisms scramble light like a bagful of marbles, Betzig says. “This raises the nagging doubt that we are not seeing cells in their native state, happily ensconced in the organism in which they evolved.”

Even when viewing cells individually, the microscopes most commonly used to study cellular inner workings are usually too slow to follow the action in 3-D. These microscopes bathe cells with light thousands to millions of times more intense than the desert sun, Betzig says. “This also contributes to our fear that we are not seeing cells in their natural, unstressed form.

“It’s often said that seeing is believing, but when it comes to cell biology, I think the more appropriate question is, ‘When can we believe what we see?’” he adds.

To meet these challenges, Betzig and his team combined two microscopy technologies they first reported in 2014, the same year he shared the Nobel Prize in Chemistry. To unscramble the light from cells buried within organisms, the researchers turned to adaptive optics – the same technology used by astronomers to provide clear views of distant celestial objects through Earth’s turbulent atmosphere. Then, to image the internal choreography of these cells quickly, yet gently, in 3-D, the team used lattice light sheet microscopy. That technology rapidly and repeatedly sweeps an ultra-thin sheet of light through the cell while acquiring a series of 2-D images, building a high-resolution 3-D movie of subcellular dynamics.

The new microscope is essentially three microscopes in one: an adaptive optical system to maintain the thin illumination of a lattice light sheet as it penetrates within an organism, and another adaptive optical system to create distortion-free images when looking down on the illuminated plane from above. By shining a laser through either pathway, the researchers create a bright point of light within the region they wish to image. The distortions in the image of this “guide star” tell the team the nature of the optical aberrations along either pathway. The researchers can correct these distortions by applying equal but opposite distortions to a pixelated light modulator on the excitation side, and a deformable mirror on detection. Over large volumes, the distortions change as the light traverses different tissues. In this case, the team assembles large 3-D images from a series of subvolumes, each with its own independent excitation and detection corrections.
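
As a conceptual illustration of the "guide star" correction step described above (not the instrument's actual control software), the Python sketch below measures a wavefront error and applies the equal-but-opposite phase, with one independent correction per imaging subvolume; all function names and numbers are invented for the example.

    # Conceptual sketch of the adaptive-optics step described above: sense the
    # guide star's wavefront error, then command the corrective element (spatial
    # light modulator on excitation, deformable mirror on detection) to apply
    # the equal-but-opposite phase. Purely illustrative; numbers are made up.
    import numpy as np

    def measure_aberration(shape=(64, 64), seed=0):
        # Stand-in for sensing the distorted guide-star wavefront (in radians).
        rng = np.random.default_rng(seed)
        y, x = np.mgrid[0:shape[0], 0:shape[1]]
        smooth = 0.5 * np.sin(x / 7.0) + 0.3 * np.cos(y / 5.0)  # tissue-like distortion
        return smooth + 0.05 * rng.standard_normal(shape)       # sensing noise

    # Aberrations differ across tissue, so each subvolume gets its own correction.
    corrections = {}
    for subvolume in range(4):
        phi = measure_aberration(seed=subvolume)  # measured wavefront error
        corrections[subvolume] = -phi             # equal-but-opposite correction

    # Re-measuring the same subvolume (same seed reproduces the same wavefront)
    # and adding its correction cancels the error exactly in this toy model;
    # a real system iterates to drive the residual toward zero.
    residual = measure_aberration(seed=0) + corrections[0]
    print(float(np.abs(residual).max()))  # 0.0 here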

The results offer an electrifying new look at biology, and reveal a bustling metropolis in action at the subcellular level. In one movie from the microscope, a fiery orange immune cell wriggles madly through a zebrafish’s ear while scooping up blue sugar particles along the way. In another, a cancer cell trails sticky appendages as it rolls through a blood vessel and attempts to gain purchase on the vessel wall.

The complexity of the 3-D multicellular environment can be overwhelming, Betzig says, but the clarity of his team’s imaging permits them to computationally “explode” the individual cells in tissue to focus on the dynamics within any particular one, such as the remodeling of internal organelles during cell division.

All this detail is hard to see without adaptive optics, Betzig says. “It’s just too damn fuzzy.” In his view, adaptive optics is one of the most important areas in microscopy research today, and the lattice light sheet microscope, which excels at 3-D live imaging, is the perfect platform to showcase its power. Adaptive optics hasn’t really taken off yet, he says, because the technology has been complicated, expensive, and until now, not clearly worth the effort. But within 10 years, Betzig predicts, biologists everywhere will be on board.

The next big step is making that technology affordable and user-friendly. “Technical demonstrations and publications don’t amount to a hill of beans. The only metric by which a microscope should be judged is how many people use it, and the significance of what they discover with it,” Betzig says.

The current microscope fills a 10-foot-long table. “It’s a bit of a Frankenstein’s monster right now,” says Betzig, who is moving to the University of California, Berkeley, in the fall. His team is working on a next-generation version that should fit on a small desk at a cost within the reach of individual labs. The first such instrument will go to Janelia’s Advanced Imaging Center, where scientists from around the world can apply to use it. Plans that scientists can use to create their own microscopes will also be made freely available. Ultimately, Betzig hopes that the adaptive optical version of the lattice microscope will be commercialized, as was the base lattice instrument before it. That could bring adaptive optics into the mainstream.

“If you really want to understand the cell in vivo, and image it with the quality possible in vitro, this is the price of admission,” he says.

Sunday, April 22, 2018

The "New Coke" Flop

New Coke was the unofficial name for the reformulation of Coca-Cola introduced in April 1985 by the Coca-Cola Company to replace the original formula of its flagship soft drink, Coca-Cola (also called Coke). In 1992, the reformulated drink was named Coke II.
                                                
                                                      A can of New Coke

By 1985, Coca-Cola had been losing market share to diet soft drinks and non-cola beverages for many years. Consumers who were purchasing regular colas seemed to prefer the sweeter taste of rival Pepsi-Cola, as Coca-Cola learned in conducting blind taste tests. However, the American public's reaction to the change was negative, even hostile, and the new cola was considered a major failure. The subsequent rapid reintroduction of Coke's original formula, rebranded "Coca-Cola Classic" and put back on the market within three months of New Coke's debut, resulted in a significant gain in sales. This led some to speculate that the introduction of the New Coke formula was just a marketing ploy to stimulate sales of original Coca-Cola; however, the company has maintained it was a genuine attempt to replace the original product.

Coke II was discontinued in July 2002. It remains influential as a cautionary tale against tampering too extensively with a well-established and successful brand.

Background

After World War II, the market share for Coca-Cola was 60%. By 1983, it had declined to under 24%, largely because of competition from Pepsi-Cola. Pepsi had begun to outsell Coke in supermarkets; Coke maintained its edge only through soda vending machines and fountain sales in fast food restaurants, concessions, and sports venues where Coca-Cola had purchased "pouring rights".

Market analysts believed baby boomers were more likely to purchase diet drinks as they aged and remained health- and weight-conscious. Growth in the full-calorie segment would have to come from younger drinkers, who at that time favored Pepsi by even more overwhelming margins than the market as a whole. Meanwhile, the overall market for colas steadily declined in the early 1980s, as consumers increasingly purchased diet and non-cola soft drinks, many of which were sold by Coca-Cola itself. This trend further eroded Coca-Cola's market share. When Roberto Goizueta became Coca-Cola CEO in 1980, he pointedly told employees there would be no "sacred cows" in how the company did business, including how it formulated its drinks.

Market Research

Coca-Cola's senior executives commissioned a secret project headed by marketing vice president Sergio Zyman and Coca-Cola USA president Brian Dyson to create a new flavor for Coke. The effort, Project Kansas, took its name from a photo of Kansas journalist William Allen White drinking a Coke; the image had been used extensively in Coca-Cola advertising and hung on several executives' walls.

The sweeter cola overwhelmingly beat both regular Coke and Pepsi in taste tests, surveys, and focus groups. Asked if they would buy and drink the product if it were Coca-Cola, most testers said they would, although it would take some getting used to. About 10-12% of testers felt angry and alienated at the thought, and said they might stop drinking Coke altogether. Their presence in focus groups tended to negatively skew results as they exerted indirect peer pressure on other participants.

The surveys, which were given more significance by standard marketing procedures of the era, were less negative than the taste tests and were key in convincing management to change the formula in 1985, to coincide with the drink's centenary. But the focus groups had provided a clue as to how the change would play out in a public context, a data point the company downplayed but which proved important later.

Management rejected an idea to make and sell the new flavor as a separate variety of Coca-Cola. The company's bottlers were already complaining about absorbing other recent additions to the product line since Diet Coke's introduction in 1982; Cherry Coke was launched nationally nearly concurrently with New Coke during 1985. Many bottlers had sued over the company's syrup pricing policies. A new variety of Coke in competition with the main variety could also have cannibalized Coke's sales and increased the proportion of Pepsi drinkers relative to Coke drinkers.

Early in his career with Coca-Cola, Goizueta had been in charge of the company's Bahamian subsidiary. In that capacity, he had improved sales by tweaking the drink's flavor slightly, so he was receptive to the idea that changes to the taste of Coke could lead to increased profits. He believed it would be "New Coke or no Coke",[7]:106 and that the change must take place openly. He insisted that the containers carry the "New!" label, which gave the drink its popular name.

Goizueta also made a visit to his mentor and predecessor as the company's chief executive, the ailing Robert W. Woodruff, who had built Coke into an international brand following World War II. He claimed he had secured Woodruff's blessing for the reformulation, but even many of Goizueta's closest friends within the company doubted that Woodruff understood Goizueta's intentions.

Launch of New Coke

New Coke was introduced on April 23, 1985. Production of the original formulation ended later that week. In many areas, New Coke was initially introduced in "old" Coke packaging; bottlers used up remaining cans, cartons and labels before new packaging was widely available. Old cans containing New Coke were identified by their gold colored tops, while glass and plastic bottles had red caps instead of silver and white, respectively.

The press conference at New York City's Lincoln Center to introduce the new formula did not go well. Reporters had already been fed questions by Pepsi, which was worried that New Coke would erase its gains. Goizueta described the new flavor as "bolder", "rounder", and "more harmonious", and defended the change by saying that the drink's secret formula was not sacrosanct and inviolable. As far back as 1935, Coca-Cola had sought kosher certification from an Atlanta rabbi and made two changes to the formula so the drink could be considered kosher (as well as halal and vegetarian). Goizueta also refused to admit that taste tests had led to the change, calling it "one of the easiest decisions we've ever made." A reporter asked whether Diet Coke would also be reformulated "assuming [New Coke] is a success," to which Goizueta curtly replied, "No. And I didn't assume that this is a success. This is a success."

The emphasis on the sweeter taste of the new flavor also ran contrary to previous Coke advertising, in which spokesman Bill Cosby had touted Coke's less-sweet taste as a reason to prefer it over Pepsi. Nevertheless, the company's stock went up on the announcement, and market research showed 80% of the American public was aware of the change within days.

Early Acceptance

The company, as it had planned, introduced the new formula with big marketing pushes in New York (workers renovating the Statue of Liberty were symbolically the first Americans given cans to take home) and Washington, D.C. (where thousands of free cans were given away in Lafayette Park). As soon as New Coke was introduced, the new formula was available at McDonald's and other drink fountains in the United States. Sales figures from those cities, and other areas where it had been introduced, showed a reaction that went as the market research had predicted. In fact, Coke's sales were up 8% over the same period as the year before.

Most Coke drinkers resumed buying the new Coke at much the same level as they had the old one. Surveys indicated, in fact, that a majority liked the new flavoring. Three-quarters of the respondents said they would buy New Coke again. The big test, however, remained in the Southeast, where Coke was first bottled and tasted.

Southern Backlash

Despite New Coke's acceptance among a large number of Coca-Cola drinkers, many more resented the change in formula and were not shy about making that known — just as had happened in the focus groups. Many of these drinkers were Southerners, some of whom considered Coca-Cola a fundamental part of their regional identity. They viewed the company's decision to change the formula through the prism of the Civil War, as another surrender to the "Yankees".

Company headquarters in Atlanta began receiving letters and telephone calls expressing anger or deep disappointment. The company received over 40,000 calls and letters, including one letter, delivered to Goizueta, that was addressed to "Chief Dodo, The Coca-Cola Company". Another letter asked for his autograph, as the signature of "one of the dumbest executives in American business history" would likely become valuable in the future. The company hotline, 1-800-GET-COKE, received over 1,500 calls a day compared to around 400 before the change. A psychiatrist whom Coke had hired to listen in on calls told executives that some people sounded as if they were discussing the death of a family member.

They were, nonetheless, joined by some voices from outside the region. Chicago Tribune columnist Bob Greene wrote some widely reprinted pieces ridiculing the new flavor and damning Coke's executives for having changed it. Comedians and talk show hosts, including Johnny Carson and David Letterman, made regular jokes mocking the switch. Ads for New Coke were booed heavily when they appeared on the scoreboard at the Houston Astrodome. Even Fidel Castro, a longtime Coca-Cola drinker, contributed to the backlash, calling New Coke a sign of American capitalist decadence. Goizueta's own father expressed similar misgivings to his son, who later recalled that it was the only time the older man had agreed with Castro, whose rule he had fled Cuba to avoid.

Gay Mullins, a Seattle retiree looking to start a public relations firm with $120,000 of borrowed money, formed the organization Old Cola Drinkers of America on May 28 to lobby Coca-Cola to either reintroduce the old formula or sell it to someone else. His organization eventually received over 60,000 phone calls. He also filed a class action lawsuit against the company (which was quickly dismissed by a judge who said he preferred the taste of Pepsi), while nevertheless expressing interest in landing The Coca-Cola Company as a client of his new firm should it reintroduce the old formula.[11]:160 In two informal blind taste tests, Mullins either failed to distinguish New Coke from old or expressed a preference for New Coke.

Still, despite ongoing resistance in the South, New Coke continued to do well in the rest of the country. But executives were uncertain of how international markets would react. Executives met with international Coke bottlers in Monaco; to their surprise, the bottlers were not interested in selling New Coke. Zyman also heard doubts and skepticism from his relatives in Mexico, where New Coke was slated to be introduced later that summer, when he went there on vacation.

Goizueta stated that Coca-Cola employees who liked New Coke felt unable to speak up due to peer pressure, as had happened in the focus groups. Donald Keough, the Coca-Cola president and chief operating officer, reported overhearing someone say at his country club that they liked New Coke, but they would be "damned if I'll let Coca-Cola know that.”

Original Coke Returns

Coca-Cola executives announced the return of the original formula during the afternoon of July 11, seventy-eight days after New Coke's introduction. ABC News' Peter Jennings interrupted General Hospital with a special bulletin to share the news with viewers. On the floor of the U.S. Senate, David Pryor called the reintroduction "a meaningful moment in U.S. history". The company hotline received 31,600 calls in the two days after the announcement.

The new product continued to be sold and retained the name Coca-Cola (until 1992, when it was renamed Coke II), so the original formula was renamed Coca-Cola Classic (also called Coke Classic), and for a short period it was referred to by the public as Old Coke. Some who tasted the reintroduced formula were not convinced that the first batches really were the same formula that had supposedly been retired that spring. This was true in a few regions: for Coca-Cola Classic, all bottlers who had not already done so switched from cane sugar to high fructose corn syrup to sweeten the drink, though most had made that change by this time.

"There is a twist to this story which will please every humanist and will probably keep Harvard professors puzzled for years," said Keough at a press conference. "The simple fact is that all the time and money and skill poured into consumer research on the new Coca-Cola could not measure or reveal the deep and abiding emotional attachment to original Coca-Cola felt by so many people."

The company gave Gay Mullins, founder of the Old Cola Drinkers of America, the first case of Coca-Cola Classic.

                                              https://en.wikipedia.org/wiki/New_Coke

Saturday, April 21, 2018

Portuguese Explorer Cabral

Pedro Álvares Cabral (1467 or 1468 – c. 1520) was a Portuguese nobleman, military commander, navigator and explorer, often regarded as the European discoverer of Brazil. Cabral conducted the first substantial exploration of the northeast coast of South America and claimed it for Portugal without the knowledge or consent of the area's existing inhabitants. While details of Cabral's early life are unclear, it is known that he came from a minor noble family and received a good education. He was appointed to head an expedition to India in 1500, following Vasco da Gama's newly opened route around Africa. The object of the undertaking was to return with valuable spices and to establish trade relations in India, bypassing the monopoly on the spice trade then in the hands of Arab, Turkish and Italian merchants. Although Vasco da Gama's earlier expedition to India had recorded signs of land west of its route through the southern Atlantic Ocean in 1497, Cabral led the first known expedition to touch four continents: Europe, Africa, America, and Asia.

                                                       Cabral in an early 20th century
                                                       painting (no original images from
                                                       his lifetime exist)

His fleet of 13 ships sailed far into the western Atlantic Ocean, perhaps intentionally, where he made landfall on what he initially assumed to be a large island. As the new land was within the Portuguese sphere according to the Treaty of Tordesillas, Cabral claimed it for the Portuguese Crown. He explored the coast, realizing that the large land mass was probably a continent, and dispatched a ship to notify King Manuel I of the new territory. The continent was South America, and the land he had claimed for Portugal later came to be known as Brazil. The fleet reprovisioned and then turned eastward to resume the journey to India.

A storm in the southern Atlantic caused the loss of several ships, and the six remaining ships eventually rendezvoused in the Mozambique Channel before proceeding to Calicut in India. Cabral was originally successful in negotiating trading rights, but Arab merchants saw Portugal's venture as a threat to their monopoly and stirred up an attack by both Muslims and Hindus on the Portuguese entrepôt. The Portuguese sustained many casualties and their facilities were destroyed. Cabral took vengeance by looting and burning the Arab fleet and then bombarded the city in retaliation for its ruler having failed to explain the unexpected attack. From Calicut the expedition sailed to the Kingdom of Cochin, another Indian city-state, where Cabral befriended its ruler and loaded his ships with coveted spices before returning to Europe. Despite the loss of human lives and ships, Cabral's voyage was deemed a success upon his return to Portugal. The extraordinary profits resulting from the sale of the spices bolstered the Portuguese Crown's finances and helped lay the foundation of a Portuguese Empire that would stretch from the Americas to the Far East.



Friday, April 20, 2018

Basic Standards of Runways


According to the International Civil Aviation Organization (ICAO), a runway is a "defined rectangular area on a land aerodrome prepared for the landing and takeoff of aircraft". Runways may be a man-made surface (often asphalt, concrete, or a mixture of both) or a natural surface (grass, dirt, gravel, ice, or salt).

                                                            Runway at Palm Springs
                                                               International Airport

History

In January 1919, aviation pioneer Orville Wright underlined the need for "distinctly marked and carefully prepared landing places, [but] the preparing of the surface of reasonably flat ground [is] an expensive undertaking [and] there would also be a continuous expense for the upkeep.”

Naming of Runways

Runways are named by a number between 01 and 36, which is generally the magnetic azimuth of the runway's heading in decadegrees. This heading differs from true north by the local magnetic declination. A runway numbered 09 points east (90°), runway 18 is south (180°), runway 27 points west (270°) and runway 36 points to the north (360° rather than 0°). When taking off from or landing on runway 09, a plane would be heading 90° (east).

A runway can normally be used in both directions, and is named for each direction separately: e.g., "runway 33" in one direction is "runway 15" when used in the other. The two numbers usually differ by 18 (= 180°).

If there is more than one runway pointing in the same direction (parallel runways), each runway is identified by appending Left (L), Center (C) and Right (R) to the number to identify its position (when facing its direction) — for example, Runways One Five Left (15L), One Five Center (15C), and One Five Right (15R). Runway Zero Three Left (03L) becomes Runway Two One Right (21R) when used in the opposite direction (derived from adding 18 to the original number for the 180 degrees when approaching from the opposite direction). In some countries, if parallel runways are too close to each other, regulations mandate that only one runway may be used at a time under certain conditions (usually adverse weather).
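
As a rough illustration of the numbering rules above, here is a minimal Python sketch using ICAO-style leading zeros (U.S. civil usage drops them, as discussed further below); the function names are invented for the example, and the shifted identifiers for four or more parallel runways described next are deliberate exceptions that no simple formula captures.

    # Sketch of the runway-numbering rules described above. Helper names are
    # invented; real designators follow ICAO standards and have exceptions,
    # e.g. the shifted identifiers for 4+ parallel runways discussed below.
    def designator_from_heading(magnetic_heading):
        """Round a magnetic heading to the nearest 10 degrees -> 1..36."""
        n = round(magnetic_heading / 10.0) % 36
        return 36 if n == 0 else n  # north is 36 (360 degrees), never 00

    def reciprocal(designator):
        """Opposite-direction name: numbers differ by 18, L and R swap."""
        suffix = designator[-1] if designator[-1] in "LCR" else ""
        num = int(designator[:-1]) if suffix else int(designator)
        opp = num - 18 if num > 18 else num + 18
        return f"{opp:02d}" + {"L": "R", "R": "L", "C": "C", "": ""}[suffix]

    # Worked examples taken from the surrounding text:
    print(designator_from_heading(233))  # 23 (and still 23 at heading 228)
    print(designator_from_heading(224))  # 22
    print(reciprocal("15"))              # 33
    print(reciprocal("03L"))             # 21R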

At large airports with four or more parallel runways (for example, at Los Angeles, Detroit Metropolitan Wayne County, Hartsfield-Jackson Atlanta, Denver, Dallas-Fort Worth and Orlando) some runway identifiers are shifted by 10 degrees to avoid the ambiguity that would result with more than three parallel runways. For example, in Los Angeles, this system results in runways 6L, 6R, 7L, and 7R, even though all four runways are actually parallel (approximately 69 degrees). At Dallas/Fort Worth International Airport, there are five parallel runways, named 17L, 17C, 17R, 18L, and 18R, all oriented at a heading of 175.4 degrees. Occasionally, an airport with only 3 parallel runways may use different runway identifiers, for example when a third parallel runway was opened at Phoenix Sky Harbor International Airport in 2000 to the south of existing 8R/26L, rather than confusingly becoming the "new" 8R/26L it was instead designated 7R/25L, with the former 8R/26L becoming 7L/25R and 8L/26R becoming 8/26.

For clarity in radio communications, each digit in the runway name is pronounced individually: runway three six, runway one four, etc. A leading zero, for example in "runway zero six" or "runway zero one left", is included at all ICAO airports and some U.S. military airports (such as Edwards Air Force Base). However, most U.S. civil aviation airports drop the leading zero, as required by FAA regulation, as do some military airfields such as Cairns Army Airfield. This American anomaly may lead to inconsistencies in conversations between American pilots and controllers in other countries. It is very common in a country such as Canada for a controller to clear an incoming American aircraft to, for example, runway 04, and for the pilot to read back the clearance as runway 4. Flight simulation programs of American origin might apply U.S. usage to airports around the world; for example, runway 05 at Halifax will appear in such a program as the single digit 5 rather than 05.

Runway designations change over time because the magnetic poles slowly drift across the Earth's surface, and runways' magnetic bearings change with them. Depending on the airport location and how much drift takes place, it may become necessary to change a runway's designation. Because runways are designated with headings rounded to the nearest 10 degrees, this affects some runways more than others. For example, if the magnetic heading of a runway is 233 degrees, it would be designated Runway 23. If the magnetic heading decreased by 5 degrees to 228, the runway would still be Runway 23. If, on the other hand, the original magnetic heading was 226 (Runway 23) and the heading decreased by only 2 degrees to 224, the runway would become Runway 22. Because the drift itself is quite slow, runway designation changes are uncommon and not welcomed, as they require an accompanying change in aeronautical charts and descriptive documents. When runway designations do change, especially at major airports, the work is often done at night, as taxiway signs need to be changed and the huge numbers at each end of the runway need to be repainted with the new designators. In July 2009, for example, London Stansted Airport in the United Kingdom changed its runway designations from 05/23 to 04/22 overnight.

For fixed-wing aircraft it is advantageous to perform takeoffs and landings into the wind to reduce takeoff or landing roll and reduce the ground speed needed to attain flying speed. Larger airports usually have several runways in different directions, so that one can be selected that is most nearly aligned with the wind. Airports with one runway are often constructed to be aligned with the prevailing wind. Compiling a wind rose is in fact one of the preliminary steps taken in constructing airport runways. Note that wind direction is given as the direction the wind is coming from: a plane taking off from runway 09 would be facing east, directly into an "east wind" blowing from 090 degrees.

Runway Dimensions

Runway dimensions vary from as small as 245 m (804 ft) long and 8 m (26 ft) wide in smaller general aviation airports, to 5,500 m (18,045 ft) long and 80 m (262 ft) wide at large international airports built to accommodate the largest jets, to the huge 11,917 m × 274 m (39,098 ft × 899 ft) lake bed runway 17/35 at Edwards Air Force Base in California – a landing site for the retired Space Shuttle.

                                                   https://en.wikipedia.org/wiki/Runway

Bending and Stretching Diamonds

The brittle material can turn flexible when made into ultrafine needles, researchers find.
By David L. Chandler, MIT News Office

April 19, 2018 -- Diamond is well-known as the strongest of all natural materials, and with that strength comes another tightly linked property: brittleness. But now, an international team of researchers from MIT, Hong Kong, Singapore, and Korea has found that when grown in extremely tiny, needle-like shapes, diamond can bend and stretch, much like rubber, and snap back to its original shape.

The surprising finding is being reported this week in the journal Science, in a paper by senior author Ming Dao, a principal research scientist in MIT’s Department of Materials Science and Engineering; MIT postdoc Daniel Bernoulli; senior author Subra Suresh, former MIT dean of engineering and now president of Singapore’s Nanyang Technological University; graduate students Amit Banerjee and Hongti Zhang at City University of Hong Kong; and seven others from CUHK and institutions in Ulsan, South Korea.

The results, the researchers say, could open the door to a variety of diamond-based devices for applications such as sensing, data storage, actuation, biocompatible in vivo imaging, optoelectronics, and drug delivery. For example, diamond has been explored as a possible biocompatible carrier for delivering drugs into cancer cells.

The team showed that the narrow diamond needles, similar in shape to the rubber tips on the end of some toothbrushes but just a few hundred nanometers (billionths of a meter) across, could flex and stretch by as much as 9 percent without breaking, then return to their original configuration, Dao says.

Ordinary diamond in bulk form, Bernoulli says, has a limit of well below 1 percent stretch. “It was very surprising to see the amount of elastic deformation the nanoscale diamond could sustain,” he says.

“We developed a unique nanomechanical approach to precisely control and quantify the ultralarge elastic strain distributed in the nanodiamond samples,” says Yang Lu, senior co-author and associate professor of mechanical and biomedical engineering at CUHK. Putting crystalline materials such as diamond under ultralarge elastic strains, as happens when these pieces flex, can change their mechanical properties as well as thermal, optical, magnetic, electrical, electronic, and chemical reaction properties in significant ways, and could be used to design materials for specific applications through “elastic strain engineering,” the team says.

The team measured the bending of the diamond needles, which were grown through a chemical vapor deposition process and then etched to their final shape, by observing them in a scanning electron microscope while pressing down on the needles with a standard nanoindenter diamond tip (essentially the corner of a cube). Following the experimental tests using this system, the team did many detailed simulations to interpret the results and was able to determine precisely how much stress and strain the diamond needles could accommodate without breaking.

The researchers also developed a computer model of the nonlinear elastic deformation for the actual geometry of the diamond needle, and found that the maximum tensile strain of the nanoscale diamond was as high as 9 percent. The computer model also predicted that the corresponding maximum local stress was close to the known ideal tensile strength of diamond — i.e. the theoretical limit achievable by defect-free diamond.  

When the entire diamond needle was made of one crystal, failure occurred at a tensile strain as high as 9 percent. Until this critical level was reached, the deformation could be completely reversed if the probe was retracted from the needle and the specimen was unloaded. If the tiny needle was made of many grains of diamond, the team showed that they could still achieve unusually large strains. However, the maximum strain achieved by the polycrystalline diamond needle was less than one-half that of the single crystalline diamond needle. 

Yonggang Huang, a professor of civil and environmental engineering and mechanical engineering at Northwestern University, who was not involved in this research, agrees with the researchers’ assessment of the potential impact of this work. “The surprise finding of ultralarge elastic deformation in a hard and brittle material — diamond — opens up unprecedented possibilities for tuning its optical, optomechanical, magnetic, phononic, and catalytic properties through elastic strain engineering,” he says.

Huang adds, “When elastic strains exceed 1 percent, significant material property changes are expected through quantum mechanical calculations. With controlled elastic strains between 0 and 9 percent in diamond, we expect to see some surprising property changes.”

                       http://news.mit.edu/2018/bend-stretch-diamond-ultrafine-needles-0419

Thursday, April 19, 2018

Hybrid Tea Roses


Hybrid tea is an informal horticultural classification for a group of garden roses. They were created by cross-breeding two types of roses, initially by hybridising hybrid perpetuals with tea roses. It is the oldest group classified as a modern garden rose.


                                                           Hybrid tea rose "Peace"

Hybrid teas exhibit traits midway between both parents, being hardier than the often quite tender teas (although not as hardy as the hybrid perpetuals), and more inclined to repeat-flowering than the somewhat misleadingly-named hybrid perpetuals (if not quite as ever-blooming as the teas).

Hybrid tea flowers are well-formed, with large, high-centred buds supported by long, straight, upright stems. Each flower can grow to 8–12.5 cm wide. Hybrid teas are the world's most popular type of rose, owing to their color and flower form. Their flowers are usually borne singly at the end of long stems, which makes them popular as cut flowers.

Most hybrid tea bushes tend to be somewhat upright in habit, and reach between 0.75 and 2.0 metres in height, depending on the cultivar, growing conditions and pruning regime. The hybrid tea is the provincial flower of Islamabad Capital Territory.

History

The birth of the world's first hybrid tea is generally accepted to have been 'La France' in 1867. It was raised by Jean-Baptiste André Guillot, a French nurseryman. He did it by hybridising a tea rose, supposedly 'Madame Bravy', with a hybrid perpetual, supposedly 'Madame Victor Verdier', hence "hybrid tea".

Other early cultivars were 'Lady Mary Fitzwilliam' (Bennett 1883), 'Souvenir of Wootton' (John Cook 1888) and 'Mme. Caroline Testout', introduced by Pernet-Ducher in 1890.

Hybrid tea roses did not become popular until the beginning of the 20th century, when Pernet-Ducher in Lyons, France, bred the cultivar 'Soleil d'Or' (1900). But the cultivar that made hybrid teas the most popular class of garden rose of the 20th century was Peace ('Madame A. Meilland'), introduced by Francis Meilland at the end of World War II.

Michele Meilland Richardier cultivated a hybrid tea which had double flowers, with a coral inside and a yellow and pink outside. It was said to last very well when cut. The rose was classified as being part of the meilimona variety. The patent was filed on October 14, 1975 and was issued February 1, 1977.

Most hybrid tea cultivars are not fully hardy in continental areas with very cold winters (below −25 °C). This, combined with their tendency to be stiffly upright, sparsely foliaged and often not resistant to diseases, has led to a decline in hybrid tea popularity among gardeners and landscapers in favor of lower-maintenance "landscape" roses. The hybrid tea remains the standard rose of the floral industry, however, and is still favored in small gardens in formal situations.

Propagation

This is usually done by budding, a technique that involves grafting buds from a parent plant onto strongly growing rootstocks. One such rootstock is R. multiflora.

Hybrid tea cultivars bred in continental areas (e.g. Canada) tend to be hardier than those hailing from more maritime regions (e.g. New Zealand).

Some Very Successful Examples

A very large number of hybrid tea cultivars have been introduced by breeders over the years; some notable examples include 'Chrysler Imperial', 'Double Delight', 'Elina', 'Fragrant Cloud', 'Mister Lincoln', Peace and 'Precious Platinum'.

                                                       Hybrid tea rose "Double Delight"