Thursday, January 31, 2019

Kasparov book "Deep Thinking"

Garry Kasparov's 1997 chess match against the IBM supercomputer Deep Blue was a watershed moment in the history of technology. It was the dawn of a new era in artificial intelligence: a machine capable of beating the reigning human champion at this most cerebral game.

That moment was more than a century in the making, and in this breakthrough book, Kasparov reveals his astonishing side of the story for the first time. He describes how it felt to strategize against an implacable, untiring opponent with the whole world watching, and recounts the history of machine intelligence through the microcosm of chess, considered by generations of scientific pioneers to be a key to unlocking the secrets of human and machine cognition. Kasparov uses his unrivaled experience to look into the future of intelligent machines and sees it bright with possibility. As many critics decry artificial intelligence as a menace, particularly to human jobs, Kasparov shows how humanity can rise to new heights with the help of our most extraordinary creations, rather than fear them. Deep Thinking is a tightly argued case for technological progress, from the man who stood at its precipice with his own career at stake.

                             -- from Amazon.com books section

Hall of Famer’s book review:
5 Stars
Very well written and very interesting
May 12, 2017

Most of this book is about chess and chess engines and Kasparov’s experiences with them, especially in his two matches with IBM’s Deep Blue. But there is much more. The central theme of the book can be seen in this quote from page 259: “…technology can make us more human by freeing us to be more creative…”

Like Kasparov (peak rating of 2851 in 1999) I (peak rating of 2080 in 1974) have been absolutely fascinated with chess playing programs going back to the eighties when the best engine played at about the USCF 1200 level. I bought one of the first Chessmaster programs and subsequently several others as well. I also bought the Fritz engines when they came out and others including I believe the first Zarkov program. What Kasparov shows is that it is a combination of brute force from the chess engines and the creative and process-finding ability of the human that makes for the strongest player. In human tournaments of course you can’t get help from your cell phone (and hopefully not from a device in your back molar!), but in preparation for a tournament and especially for a match a strong chess engine can be invaluable. Kasparov makes it clear that the proliferation of younger and younger and stronger and stronger grandmasters came about because of the maturing strength of the chess engines which allowed players to study at a level and with an intensity previously impossible.

Kasparov goes on to generalize this idea for other forms of human endeavor. Artificial Intelligence is in the final analysis a tool to augment human creativity and foster human achievement. (This is not to say it won’t be used in detrimental ways.) Fifty-five years ago my friend Bill Maillard, who is a mathematician and a master chess player, put it this way: machine intelligence will eventually exceed human intelligence but it will be the humans that make the decisions.

For Kasparov (quoting John McCarthy, who coined the term “artificial intelligence” in 1956) chess became “the Drosophila of AI,” the fruit fly that allows scientific experiments. Put another way, Kasparov (with tongue in cheek) titled an earlier book of his “How Life Imitates Chess.” What is most interesting about Garry Kasparov is just how intelligent, learned, and articulate he is compared to the vast number of chess players. Anybody who has put in the time and energy it takes to become a grandmaster really doesn’t have time to be well read—usually. One only has to recall the very limited abilities of Bobby Fischer away from the chess board. Speaking of whom, Kasparov has this little story about Fischer on page 92: when “an eager fan pressed him after a difficult win” with “Nice game, Bobby!” Fischer retorted, “How would you know.”

Another interesting thing about Kasparov is how he can be both modest and very confident at the same time. Part of what makes this book so interesting is the way Kasparov reveals himself. He faults himself for the infamous resignation in game two of the second Deep Blue match and even reveals that he didn’t realize the position was drawn until the next day, when told so by his seconds. He explains why he lost the match while making plausible excuses based on what he thought were unfair advantages on the other side. This part of the book, which focuses intently on those matches, reveals a very human and likable person, perhaps akin to a character in a popular novel, a person with great strengths and some weaknesses. For example, on page 105 Kasparov writes, “I can say without any false modesty that I was the best-prepared player in the history of chess.”

For many readers the most interesting parts of the book will deal with Kasparov’s understanding of AI (and IA, “intelligence amplification”), how the technology has developed, and where Kasparov thinks it’s going. He is less afraid of surveillance than many people are, and for the most part believes that the increased knowledge we have of others and ourselves through technology will do more good than harm. He notes that “Our lives are being converted into data” but “The greatest security problem we have will always be human nature.” (p. 118) He adds on the next page, “Privacy is dying, so transparency must increase.” His knowledge is impressive, and he and his collaborator Mig Greengard write so clearly and engagingly that the book is a pleasure to read.

I should add that the book is beautifully designed and meticulously edited. I didn’t notice a single typo and nary a muddled sentence.

One other thing: even very experienced chess players will probably learn something about the game of chess they didn’t know or something about the history of chess they missed. I know I did.

Some quotables:

“Romanticizing the loss of jobs to technology is little better than complaining that antibiotics put too many grave diggers out of work.” (p. 42) This is a statement that bears some scrutiny, and indeed might be the subject of a future Kasparov book.

In 1989 Kasparov played the Deep Thought chess engine. After Kasparov won the tabloid New York Post wrote, “Red Chess King Quick Fries Deep Thought’s Chips.” (p. 111)

“Mistakes almost never walk alone.” (p. 239)

“Intelligence is whatever machines haven’t done yet” (quoting Larry Tesler). (p. 251)

“There’s a business saying that if you’re the smartest person in the room, you’re in the wrong room.” (p. 252)

--Dennis Littrell, author of “The World Is Not as We Think It Is”

Wednesday, January 30, 2019

As Strong as Titanium

Penn Engineer’s ‘Metallic Wood’ Has the Strength of Titanium and the Density of Water

January 24, 2019 -- High-performance golf clubs and airplane wings are made out of titanium, which is as strong as steel but about twice as light. These properties depend on the way a metal’s atoms are stacked, but random defects that arise in the manufacturing process mean that these materials are only a fraction as strong as they could theoretically be. An architect, working on the scale of individual atoms, could design and build new materials that have even better strength-to-weight ratios.

In a new study published in Nature Scientific Reports, researchers at the University of Pennsylvania’s School of Engineering and Applied Science, the University of Illinois at Urbana–Champaign, and the University of Cambridge have done just that. They have built a sheet of nickel with nanoscale pores that make it as strong as titanium but four to five times lighter.

The empty space of the pores, and the self-assembly process in which they’re made, make the porous metal akin to a natural material, such as wood.

And just as the porosity of wood grain serves the biological function of transporting energy, the empty space in the researchers’ “metallic wood” could be infused with other materials. Infusing the scaffolding with anode and cathode materials would enable this metallic wood to serve double duty: a plane wing or prosthetic leg that’s also a battery.

The study was led by James Pikul, Assistant Professor in the Department of Mechanical Engineering and Applied Mechanics at Penn Engineering. Bill King and Paul Braun at the University of Illinois at Urbana-Champaign, along with Vikram Deshpande at the University of Cambridge, contributed to the study.

Even the best natural metals have defects in their atomic arrangement that limit their strength. A block of titanium where every atom was perfectly aligned with its neighbors would be ten times stronger than what can currently be produced. Materials researchers have been trying to exploit this phenomenon by taking an architectural approach, designing structures with the geometric control necessary to unlock the mechanical properties that arise at the nanoscale, where defects have reduced impact.

Pikul and his colleagues owe their success to taking a cue from the natural world.

“The reason we call it metallic wood is not just its density, which is about that of wood, but its cellular nature,” Pikul says. “Cellular materials are porous; if you look at wood grain, that’s what you’re seeing — parts that are thick and dense and made to hold the structure, and parts that are porous and made to support biological functions, like transport to and from cells.”

“Our structure is similar,” he says. “We have areas that are thick and dense with strong metal struts, and areas that are porous with air gaps. We’re just operating at the length scales where the strength of struts approaches the theoretical maximum.”

The struts in the researchers’ metallic wood are around 10 nanometers wide, or about 100 nickel atoms across. Other approaches involve using 3D-printing-like techniques to make nanoscale scaffoldings with hundred-nanometer precision, but the slow and painstaking process is hard to scale to useful sizes.

“We’ve known that going smaller gets you stronger for some time,” Pikul says, “but people haven’t been able to make these structures with strong materials that are big enough that you’d be able to do something useful. Most examples made from strong materials have been about the size of a small flea, but with our approach, we can make metallic wood samples that are 400 times larger.”

Pikul’s method starts with tiny plastic spheres, a few hundred nanometers in diameter, suspended in water. When the water is slowly evaporated, the spheres settle and stack like cannonballs, providing an orderly, crystalline framework. Using electroplating, the same technique that adds a thin layer of chrome to a hubcap, the researchers then infiltrate the plastic spheres with nickel. Once the nickel is in place, the plastic spheres are dissolved with a solvent, leaving an open network of metallic struts.

“We’ve made foils of this metallic wood that are on the order of a square centimeter, or about the size of one face of a playing die,” Pikul says. “To give you a sense of scale, there are about 1 billion nickel struts in a piece that size.”

Because roughly 70 percent of the resulting material is empty space, this nickel-based metallic wood’s density is extremely low in relation to its strength. With a density on par with water’s, a brick of the material would float.

Replicating this production process at commercially relevant sizes is the team’s next challenge. Unlike titanium, none of the materials involved are particularly rare or expensive on their own, but the infrastructure necessary for working with them on the nanoscale is currently limited. Once that infrastructure is developed, economies of scale should make producing meaningful quantities of metallic wood faster and less expensive.

Once the researchers can produce samples of their metallic wood in larger sizes, they can begin subjecting it to more macroscale tests. A better understanding of its tensile properties, for example, is critical.

“We don’t know, for example, whether our metallic wood would dent like metal or shatter like glass,” Pikul says. “Just as the random defects in titanium limit its overall strength, we need to get a better understanding of how the defects in the struts of metallic wood influence its overall properties.”

In the meantime, Pikul and his colleagues are exploring the ways other materials can be integrated into the pores in their metallic wood’s scaffolding.

“The long-term interesting thing about this work is that we enable a material that has the same strength properties of other super high-strength materials but now it’s 70 percent empty space,” Pikul says. “And you could one day fill that space with other things, like living organisms or materials that store energy.”

Sezer Özerinç and Runyu Zhang of the University of Illinois at Urbana-Champaign, and Burigede Liu of the University of Cambridge, also contributed to the study.

Tuesday, January 29, 2019

A World Without Heroes or Saints


Auguste Comte came up with the idea of an “Occidental Republic,” a western universal humanitarianism that would end war and triumph over politics. The idea appeals to our sympathies, but it has never worked out in practice. Daniel J. Mahoney has written about this philosophy at AmericanGreatness.com in an article titled “A World Without Heroes or Saints.” It can be found at https://amgreatness.com/2019/01/26/a-world-without-heroes-or-saints/

Monday, January 28, 2019

Safer Self-Driving Cars

Self-Driving Cars, Robots: Identifying AI 'Blind Spots'

A novel model developed by MIT and Microsoft researchers identifies instances in which autonomous systems have "learned" from training examples that don't match what's actually happening in the real world. Engineers could use this model to improve the safety of artificial intelligence systems, such as driverless vehicles and autonomous robots.

Massachusetts Institute of Technology —January 25, 2019 -- The AI systems powering driverless cars, for example, are trained extensively in virtual simulations to prepare the vehicle for nearly every event on the road. But sometimes the car makes an unexpected error in the real world because an event occurs that should, but doesn't, alter the car's behavior.

Consider a driverless car that wasn't trained, and more importantly doesn't have the sensors necessary, to differentiate between distinctly different scenarios, such as large, white cars and ambulances with red, flashing lights on the road. If the car is cruising down the highway and an ambulance flicks on its sirens, the car may not know to slow down and pull over, because it does not perceive the ambulance as different from a big white car.

In a pair of papers -- presented at last year's Autonomous Agents and Multiagent Systems conference and the upcoming Association for the Advancement of Artificial Intelligence conference -- the researchers describe a model that uses human input to uncover these training "blind spots."

As with traditional approaches, the researchers put an AI system through simulation training. But then, a human closely monitors the system's actions as it acts in the real world, providing feedback when the system made, or was about to make, any mistakes. The researchers then combine the training data with the human feedback data, and use machine-learning techniques to produce a model that pinpoints situations where the system most likely needs more information about how to act correctly.

The researchers validated their method using video games, with a simulated human correcting the learned path of an on-screen character. But the next step is to incorporate the model with traditional training and testing approaches for autonomous cars and robots with human feedback.

"The model helps autonomous systems better know what they don't know," says first author Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory. "Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors."

Co-authors on both papers are: Julie Shah, an associate professor in the Department of Aeronautics and Astronautics and head of the CSAIL's Interactive Robotics Group; and Ece Kamar, Debadeepta Dey, and Eric Horvitz, all from Microsoft Research. Besmira Nushi is an additional co-author on the upcoming paper.

Taking feedback

Some traditional training methods do provide human feedback during real-world test runs, but only to update the system's actions. These approaches don't identify blind spots, which could be useful for safer execution in the real world.

The researchers' approach first puts an AI system through simulation training, where it produces a "policy" that essentially maps every situation to the best action it can take in the simulations. Then, the system is deployed in the real world, where humans provide error signals in regions where the system's actions are unacceptable.

Humans can provide data in multiple ways, such as through "demonstrations" and "corrections." In demonstrations, the human acts in the real world, while the system observes and compares the human's actions to what it would have done in that situation. For driverless cars, for instance, a human would manually control the car while the system produces a signal if its planned behavior deviates from the human's behavior. Matches and mismatches with the human's actions provide noisy indications of where the system might be acting acceptably or unacceptably.
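The demonstrations mode reduces to a simple comparison between what the system's policy would have done and what the human actually did. The sketch below is a made-up illustration (the state and action names are invented, not from the papers), but it shows why the resulting labels are noisy: two perceptually identical states can earn conflicting labels.

```python
# Hypothetical sketch of the "demonstrations" feedback mode: the system
# watches a human act, compares the human's action to what its own policy
# would have done in the state it perceives, and records a noisy label.

# Toy policy learned in simulation: perceived state -> planned action.
policy = {
    "big_white_vehicle_ahead": "maintain_speed",
    "clear_highway": "maintain_speed",
}

def demonstration_label(perceived_state, human_action):
    """Return a noisy acceptability label for one observed moment."""
    planned = policy[perceived_state]
    return "acceptable" if planned == human_action else "unacceptable"

# A big white car passes; the human keeps driving, matching the policy.
print(demonstration_label("big_white_vehicle_ahead", "maintain_speed"))  # acceptable

# An ambulance passes, but the system perceives the *same* state; the
# human pulls over, so the identical state now gets a conflicting label.
print(demonstration_label("big_white_vehicle_ahead", "pull_over"))  # unacceptable
```

Those conflicting labels on a single perceived state are exactly what the aggregation step described below has to resolve.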

Alternatively, the human can provide corrections, with the human monitoring the system as it acts in the real world. A human could sit in the driver's seat while the autonomous car drives itself along its planned route. If the car's actions are correct, the human does nothing. If the car's actions are incorrect, however, the human may take the wheel, which sends a signal that the system was acting unacceptably in that specific situation.

Once the feedback data from the human is compiled, the system essentially has a list of situations and, for each situation, multiple labels saying its actions were acceptable or unacceptable. A single situation can receive many different signals, because the system perceives many situations as identical. For example, an autonomous car may have cruised alongside a large car many times without slowing down and pulling over. But, in only one instance, an ambulance, which appears exactly the same to the system, cruises by. The autonomous car doesn't pull over and receives a feedback signal that the system took an unacceptable action.

"At that point, the system has been given multiple contradictory signals from a human: some with a large car beside it, and it was doing fine, and one where there was an ambulance in the same exact location, but that wasn't fine. The system makes a little note that it did something wrong, but it doesn't know why," Ramakrishnan says. "Because the agent is getting all these contradictory signals, the next step is compiling the information to ask, 'How likely am I to make a mistake in this situation where I received these mixed signals?'"

Intelligent aggregation

The end goal is to have these ambiguous situations labeled as blind spots. But that goes beyond simply tallying the acceptable and unacceptable actions for each situation. If the system performed correct actions nine times out of 10 in the ambulance situation, for instance, a simple majority vote would label that situation as safe.

"But because unacceptable actions are far rarer than acceptable actions, the system will eventually learn to predict all situations as safe, which can be extremely dangerous," Ramakrishnan says.

To that end, the researchers used the Dawid-Skene algorithm, a machine-learning method commonly used in crowdsourcing to handle label noise. The algorithm takes as input a list of situations, each with a set of noisy "acceptable" and "unacceptable" labels. It then aggregates all the data and uses probability calculations to identify patterns in the labels of predicted blind spots and of predicted safe situations. Using that information, it outputs a single aggregated "safe" or "blind spot" label for each situation, along with its confidence level in that label. Notably, the algorithm can learn that, even in a situation where the system performed acceptably, say, 90 percent of the time, the situation may still be ambiguous enough to merit a "blind spot" label.
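A much-simplified "one-coin" relative of this aggregation idea can be sketched as follows. This is not the researchers' full Dawid-Skene model (which estimates richer confusion structure), and the situations and label counts are invented for illustration; the point is that the error rates are estimated jointly with the per-situation probabilities, so a lone "unacceptable" label is not simply outvoted.

```python
# One-coin EM sketch of noise-aware label aggregation (a simplified
# relative of Dawid-Skene). Each situation has an unknown true class,
# "blind spot" or "safe"; the observed labels are noisy draws whose
# error rates are estimated jointly with the situation probabilities.

# Invented data: situation -> (num "unacceptable" labels, total labels).
counts = {
    "big_white_vehicle_ahead": (1, 10),  # the ambiguous ambulance case
    "clear_highway": (0, 10),
    "pedestrian_in_crosswalk": (5, 6),
}

def aggregate(counts, iters=50):
    pi = 0.5           # prior P(blind spot)
    a, b = 0.7, 0.05   # P("unacceptable" | blind spot), P("unacceptable" | safe)
    clip = lambda x: min(max(x, 1e-6), 1 - 1e-6)
    post = {}
    for _ in range(iters):
        # E-step: posterior blind-spot probability for each situation.
        for s, (k, n) in counts.items():
            lb = pi * a**k * (1 - a)**(n - k)
            ls = (1 - pi) * b**k * (1 - b)**(n - k)
            post[s] = lb / (lb + ls)
        # M-step: re-estimate the prior and the two noise rates.
        pi = clip(sum(post.values()) / len(post))
        a = clip(sum(post[s] * k for s, (k, n) in counts.items())
                 / sum(post[s] * n for s, (k, n) in counts.items()))
        b = clip(sum((1 - post[s]) * k for s, (k, n) in counts.items())
                 / sum((1 - post[s]) * n for s, (k, n) in counts.items()))
    return post

probs = aggregate(counts)
# The mixed-signal situation scores strictly higher than the all-clean
# one, even though a simple majority vote would call both of them safe.
```

The per-situation probabilities returned here are the raw material for the "heat map" described next.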

In the end, the algorithm produces a type of "heat map," where each situation from the system's original training is assigned low-to-high probability of being a blind spot for the system.

"When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution," Ramakrishnan says.

Massachusetts Institute of Technology. "Self-driving cars, robots: Identifying AI 'blind spots'." ScienceDaily. ScienceDaily, 25 January 2019. www.sciencedaily.com/releases/2019/01/190125094230.htm

Basics of SONAR

Sonar (originally an acronym for sound navigation ranging) is a technique that uses sound propagation (usually underwater, as in submarine navigation) to navigate, communicate with, or detect objects on or under the surface of the water, such as other vessels. Two types of technology share the name "sonar": passive sonar is essentially listening for the sound made by vessels; active sonar is emitting pulses of sound and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of "targets" in the water.

Acoustic location in air was used before the introduction of radar. Sonar may also be used in air for robot navigation, and SODAR (an upward-looking in-air sonar) is used for atmospheric investigations. The term sonar is also used for the equipment used to generate and receive the sound. The acoustic frequencies used in sonar systems vary from very low (infrasonic) to extremely high (ultrasonic). The study of underwater sound is known as underwater acoustics or hydroacoustics.
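The active-sonar principle of emitting a pulse and listening for the echo reduces to a one-line range calculation: the sound travels to the target and back, so the one-way range is half the round-trip time multiplied by the speed of sound. The sketch below uses a nominal 1,500 m/s for seawater; the true value varies with temperature, salinity, and depth.

```python
# Minimal sketch of active-sonar echo ranging: the pulse travels to the
# target and back, so range is half the round-trip time times sound speed.

SOUND_SPEED_SEAWATER = 1500.0  # m/s, a rough nominal value

def echo_range(round_trip_seconds, sound_speed=SOUND_SPEED_SEAWATER):
    """One-way distance to the target, in metres."""
    return sound_speed * round_trip_seconds / 2.0

# An echo returning 4 seconds after the ping puts the target ~3 km away.
print(echo_range(4.0))  # 3000.0
```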

The first recorded use of the technique was by Leonardo da Vinci in 1490, who used a tube inserted into the water to detect vessels by ear. It was developed during World War I to counter the growing threat of submarine warfare, with an operational passive sonar system in use by 1918. Modern active sonar systems use an acoustic transducer to generate a sound wave which is reflected back from target objects.

History of SONAR

Although some animals (dolphins, bats, some shrews, and others) have used sound for communication and object detection for millions of years, use by humans in the water is initially recorded by Leonardo da Vinci in 1490: a tube inserted into the water was said to be used to detect vessels by placing an ear to the tube.

In the late 19th century an underwater bell was used as an ancillary to lighthouses or light ships to provide warning of hazards.

The use of sound to "echo-locate" underwater in the same way as bats use sound for aerial navigation seems to have been prompted by the Titanic disaster of 1912. The world's first patent for an underwater echo-ranging device was filed at the British Patent Office by English meteorologist Lewis Fry Richardson a month after the sinking of the Titanic, and a German physicist Alexander Behm obtained a patent for an echo sounder in 1913.

The Canadian engineer Reginald Fessenden, while working for the Submarine Signal Company in Boston, built an experimental system beginning in 1912, a system later tested in Boston Harbor, and finally in 1914 from the U.S. Revenue (now Coast Guard) Cutter Miami on the Grand Banks off Newfoundland. In that test, Fessenden demonstrated depth sounding, underwater communications (Morse code) and echo ranging (detecting an iceberg at 2 miles (3 km) range). The "Fessenden oscillator", operated at about 500 Hz frequency, was unable to determine the bearing of the iceberg due to the 3-metre wavelength and the small dimension of the transducer's radiating face (less than 1/3 wavelength in diameter). The ten Montreal-built British H-class submarines launched in 1915 were equipped with Fessenden oscillators.
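The bearing problem Fessenden ran into follows directly from the wavelength formula λ = c/f: at 500 Hz in seawater (nominally c ≈ 1,500 m/s) the wavelength is about 3 metres, so a radiating face less than a third of a wavelength across cannot form a directional beam. A quick check:

```python
# Wavelength of the Fessenden oscillator's tone in seawater: lambda = c / f.
SOUND_SPEED = 1500.0  # m/s, nominal value for seawater
FREQUENCY = 500.0     # Hz, the oscillator's operating frequency

wavelength = SOUND_SPEED / FREQUENCY
print(wavelength)  # 3.0 metres -- much larger than the transducer face,
                   # so the source radiated nearly omnidirectionally and
                   # could not resolve the iceberg's bearing
```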

During World War I the need to detect submarines prompted more research into the use of sound. The British made early use of underwater listening devices called hydrophones, while the French physicist Paul Langevin, working with a Russian immigrant electrical engineer Constantin Chilowsky, worked on the development of active sound devices for detecting submarines in 1915. Although piezoelectric and magnetostrictive transducers later superseded the electrostatic transducers they used, this work influenced future designs. Lightweight sound-sensitive plastic film and fibre optics have been used for hydrophones (acousto-electric transducers for in-water use), while Terfenol-D and PMN (lead magnesium niobate) have been developed for projectors.

“ASDIC”

In 1916, under the British Board of Invention and Research, Canadian physicist Robert William Boyle took on the active sound detection project with A. B. Wood, producing a prototype for testing in mid-1917. This work, for the Anti-Submarine Division of the British Naval Staff, was undertaken in utmost secrecy, and used quartz piezoelectric crystals to produce the world's first practical underwater active sound detection apparatus. To maintain secrecy, no mention of sound experimentation or quartz was made – the word used to describe the early work ("supersonics") was changed to "ASD"ics, and the quartz material to "ASD"ivite: "ASD" for "Anti-Submarine Division", hence the British acronym ASDIC. In 1939, in response to a question from the Oxford English Dictionary, the Admiralty made up the story that it stood for "Allied Submarine Detection Investigation Committee", and this is still widely believed, though no committee bearing this name has been found in the Admiralty archives.

By 1918, Britain and France had built prototype active systems. The British tested their ASDIC on HMS Antrim in 1920 and started production in 1922. The 6th Destroyer Flotilla had ASDIC-equipped vessels in 1923. An anti-submarine school, HMS Osprey, and a training flotilla of four vessels were established at Portland in 1924. The U.S. Sonar QB set arrived in 1931.

By the outbreak of World War II, the Royal Navy had five sets for different surface ship classes, and others for submarines, incorporated into a complete anti-submarine attack system. The effectiveness of early ASDIC was hampered by the use of the depth charge as an anti-submarine weapon. This required an attacking vessel to pass over a submerged contact before dropping charges over the stern, resulting in a loss of ASDIC contact in the moments leading up to attack. The hunter was effectively firing blind, during which time a submarine commander could take evasive action. This situation was remedied by using several ships cooperating and by the adoption of "ahead-throwing weapons", such as Hedgehogs and later Squids, which projected warheads at a target ahead of the attacker and thus still in ASDIC contact. Developments during the war resulted in British ASDIC sets that used several different shapes of beam, continuously covering blind spots. Later, acoustic torpedoes were used.

Early in World War II (September 1940), British ASDIC technology was transferred for free to the United States. Research on ASDIC and underwater sound was expanded in the UK and in the US. Many new types of military sound detection were developed. These included sonobuoys, first developed by the British in 1944 under the codename High Tea, dipping/dunking sonar and mine-detection sonar. This work formed the basis for post-war developments related to countering the nuclear submarine.

Work on sonar had also been carried out in the Axis countries, notably in Germany, and included countermeasures. At the end of World War II, this German work was assimilated by Britain and the U.S. Sonars have continued to be developed by many countries, including the USSR, for both military and civil uses. In recent years the major military development has been the increasing interest in low-frequency active sonar.

SONAR

During the 1930s American engineers developed their own underwater sound-detection technology, and important discoveries were made, such as thermoclines, that would help future development. After technical information was exchanged between the two countries during the Second World War, Americans began to use the term SONAR for their systems, coined as the equivalent of RADAR.

US Navy Underwater Sound Laboratory


In 1917, the US Navy acquired J. Warren Horton's services for the first time. On leave from Bell Labs, he served the government as a technical expert, first at the experimental station at Nahant, Massachusetts, and later at US Naval Headquarters in London, England. At Nahant he applied the newly developed vacuum tube, then associated with the formative stages of the field of applied science now known as electronics, to the detection of underwater signals. As a result, the carbon button microphone, which had been used in earlier detection equipment, was replaced by the precursor of the modern hydrophone. Also during this period, the increased sensitivity of his device allowed him to experiment with towed detection methods, whose principles are still used in modern towed sonar systems.

To meet the defense needs of Great Britain, he was sent to England to install in the Irish Sea bottom-mounted hydrophones connected to a shore listening post by submarine cable. While this equipment was being loaded on the cable-laying vessel, World War I ended and Horton returned home.

During World War II, he continued to develop sonar systems that could detect submarines, mines, and torpedoes. He published Fundamentals of Sonar in 1957 as chief research consultant at the US Navy Underwater Sound Laboratory. He held this position until 1959 when he became technical director, a position he held until mandatory retirement in 1963.

Materials and designs

There was little progress in development from 1915 to 1940. In 1940, US sonars typically consisted of a magnetostrictive transducer and an array of nickel tubes connected to a 1-foot-diameter steel plate attached back-to-back to a Rochelle salt crystal in a spherical housing. This assembly penetrated the ship hull and was manually rotated to the desired angle. The piezoelectric Rochelle salt crystal had better parameters, but the magnetostrictive unit was much more reliable. Early World War II losses prompted rapid research in the field, pursuing both improvements in magnetostrictive transducer parameters and Rochelle salt reliability. Ammonium dihydrogen phosphate (ADP), a superior alternative, was found as a replacement for Rochelle salt; the first application was a replacement of the 24 kHz Rochelle-salt transducers. Within nine months, Rochelle salt was obsolete. The ADP manufacturing facility grew from a few dozen personnel in early 1940 to several thousand in 1942.

One of the earliest applications of ADP crystals was in hydrophones for acoustic mines; the crystals were specified for a low-frequency cutoff at 5 Hz, withstanding mechanical shock from deployment from aircraft at 3,000 m (10,000 ft), and the ability to survive neighbouring mine explosions. One of the key features of ADP reliability is its zero-aging characteristic; the crystal keeps its parameters even over prolonged storage.

Another application was in acoustic homing torpedoes. Two pairs of directional hydrophones were mounted on the torpedo nose, in the horizontal and vertical planes; the difference signals from the pairs were used to steer the torpedo left-right and up-down. A countermeasure was developed: the targeted submarine discharged an effervescent chemical, and the torpedo went after the noisier fizzy decoy. The counter-countermeasure was a torpedo with active sonar – a transducer was added to the torpedo nose, and the hydrophones listened for its reflected periodic tone bursts. The transducers comprised identical rectangular crystal plates arranged in diamond-shaped areas in staggered rows.
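The steering logic described above – difference signals from paired hydrophones driving left-right and up-down corrections – can be sketched as a simple proportional controller. This is an illustrative reconstruction, not the actual wartime torpedo circuitry; the gain and the normalization by total level are assumptions.

```python
def steering_commands(left, right, up, down, gain=1.0):
    """Derive steering commands from four directional hydrophone levels.

    A louder signal on one side of a pair means the target lies toward
    that side; the difference signal, normalized by the pair's total
    level, gives a steering correction for that axis.
    """
    def axis(a, b):
        total = a + b
        return gain * (b - a) / total if total > 0 else 0.0

    yaw = axis(left, right)   # positive -> steer right (toward the louder side)
    pitch = axis(down, up)    # positive -> steer up
    return yaw, pitch

# Target louder on the right pair and slightly above:
yaw_cmd, pitch_cmd = steering_commands(left=0.4, right=0.6, up=0.55, down=0.45)
```

With equal levels on all four hydrophones the commands are zero and the torpedo runs straight; any imbalance turns it toward the sound source, which is also why the fizzy decoy countermeasure worked.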

Passive sonar arrays for submarines were also developed from ADP crystals. Several crystal assemblies were arranged in a steel tube, vacuum-filled with castor oil, and sealed. The tubes were then mounted in parallel arrays.

The standard US Navy scanning sonar at the end of World War II operated at 18 kHz, using an array of ADP crystals. Longer range, however, required the use of lower frequencies, and the required dimensions were too big for ADP crystals, so in the early 1950s magnetostrictive and barium titanate piezoelectric systems were developed. These had problems achieving uniform impedance characteristics, and the beam pattern suffered. Barium titanate was then replaced with the more stable lead zirconate titanate (PZT), and the frequency was lowered to 5 kHz. The US fleet used this material in the AN/SQS-23 sonar for several decades. The SQS-23 sonar first used magnetostrictive nickel transducers, but these weighed several tons, and nickel was expensive and considered a critical material; piezoelectric transducers were therefore substituted. The sonar was a large array of 432 individual transducers. At first, the transducers were unreliable, showing mechanical and electrical failures and deteriorating soon after installation; they were also produced by several vendors, had different designs, and their characteristics were different enough to impair the array's performance. The policy of allowing repair of individual transducers was then abandoned, and an "expendable modular design" of sealed, non-repairable modules was chosen instead, eliminating the problems with seals and other extraneous mechanical parts.

The Imperial Japanese Navy at the onset of World War II used projectors based on quartz. These were big and heavy, especially if designed for lower frequencies; the one for the Type 91 set, operating at 9 kHz, had a diameter of 30 inches (760 mm) and was driven by an oscillator with 5 kW power and 7 kV of output amplitude. The Type 93 projectors consisted of solid sandwiches of quartz, assembled into spherical cast iron bodies. The Type 93 sonars were later replaced with the Type 3, which followed German design and used magnetostrictive projectors; these consisted of two identical, independent rectangular units in a cast iron rectangular body about 16 by 9 inches (410 mm × 230 mm). The exposed area was half the wavelength wide and three wavelengths high. The magnetostrictive cores were made from 4 mm stampings of nickel, and later of an iron-aluminium alloy with aluminium content between 12.7% and 12.9%. Power was provided by a 2 kW oscillator at 3.8 kV, with polarization from a 20 V, 8 A DC source.

The passive hydrophones of the Imperial Japanese Navy were based on moving-coil design, Rochelle salt piezo transducers, and carbon microphones.

Magnetostrictive transducers were pursued after World War II as an alternative to piezoelectric ones. Nickel scroll-wound ring transducers were used for high-power low-frequency operations, with sizes up to 13 feet (4.0 m) in diameter, probably the largest individual sonar transducers ever. The advantages of metals are their high tensile strength and low input electrical impedance, but they have electrical losses and a lower coupling coefficient than PZT, whose tensile strength can be increased by prestressing. Other materials were also tried: nonmetallic ferrites were promising for their low electrical conductivity, which results in low eddy-current losses, and Metglas offered a high coupling coefficient, but both were inferior to PZT overall. In the 1970s, compounds of rare earths and iron were discovered with superior magnetomechanical properties, namely the Terfenol-D alloy. This made possible new designs, e.g. a hybrid magnetostrictive-piezoelectric transducer. The most recent such material is Galfenol.

Other types of transducers include variable-reluctance (or moving-armature, or electromagnetic) transducers, where magnetic force acts on the surfaces of gaps, and moving coil (or electrodynamic) transducers, similar to conventional speakers; the latter are used in underwater sound calibration, due to their very low resonance frequencies and flat broadband characteristics above them.

Military Applications

Modern naval warfare makes extensive use of both passive and active sonar from water-borne vessels, aircraft and fixed installations. Although active sonar was used by surface craft in World War II, submarines avoided the use of active sonar due to the potential for revealing their presence and position to enemy forces. However, the advent of modern signal-processing enabled the use of passive sonar as a primary means for search and detection operations. In 1987 a division of Japanese company Toshiba reportedly sold machinery to the Soviet Union that allowed their submarine propeller blades to be milled so that they became radically quieter, making the newer generation of submarines more difficult to detect.

The use of active sonar by a submarine to determine bearing is extremely rare and will not necessarily give high-quality bearing or range information to the submarine's fire-control team. However, use of active sonar on surface ships is very common, and it is used by submarines when the tactical situation dictates it is more important to determine the position of a hostile submarine than to conceal their own position. With surface ships, it might be assumed that the threat is already tracking the ship with satellite data, as any vessel around the emitting sonar will detect the emission. Once the signal is heard, it is easy to identify the sonar equipment used (usually by its frequency) and its position (from the sound wave's energy). Active sonar is similar to radar in that, while it allows detection of targets at a certain range, it also enables the emitter to be detected at a far greater range, which is undesirable.
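The range asymmetry in the last sentence can be made concrete with the sonar equation: an echo returning to the active sonar suffers two-way transmission loss, while an intercept receiver aboard the target hears the ping after only one-way loss, so the ping is detectable far beyond the sonar's own detection range. The level values below (source level, target strength, detection threshold) are illustrative assumptions, and absorption is ignored, which exaggerates the intercept range.

```python
def max_range_m(level_budget_db, n_way):
    """Largest range r (in metres) at which n_way spherical-spreading
    losses of 20*log10(r) dB each still fit inside the level budget."""
    return 10 ** (level_budget_db / (20 * n_way))

SL = 220  # source level, dB re 1 uPa at 1 m (assumed)
TS = 10   # target strength of the submarine, dB (assumed)
DT = 80   # minimum detectable level above which detection occurs, dB (assumed)

# Echo path: SL - 2*TL + TS >= DT  ->  budget = SL + TS - DT, two-way loss
echo_range = max_range_m(SL + TS - DT, n_way=2)

# Intercept path: SL - TL >= DT  ->  budget = SL - DT, one-way loss
intercept_range = max_range_m(SL - DT, n_way=1)
```

With these numbers the echo is detectable out to roughly 5.6 km, while the ping itself could in principle be intercepted orders of magnitude farther away. In practice absorption caps the intercept range, but the one-way versus two-way asymmetry remains, which is why pinging always costs more concealment than it buys.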

Since active sonar reveals the presence and position of the operator, and does not allow exact classification of targets, it is used by fast (planes, helicopters) and by noisy platforms (most surface ships) but rarely by submarines. When active sonar is used by surface ships or submarines, it is typically activated very briefly at intermittent periods to minimize the risk of detection. Consequently, active sonar is normally considered a backup to passive sonar. In aircraft, active sonar is used in the form of disposable sonobuoys that are dropped in the aircraft's patrol area or in the vicinity of possible enemy sonar contacts.

Passive sonar has several advantages, most importantly that it is silent. If the target's radiated noise level is high enough, it can have a greater range than active sonar, and it allows the target to be identified. Since any motorized object makes some noise, it may in principle be detected, depending on the level of noise emitted and the ambient noise level in the area, as well as the technology used. To simplify, passive sonar "sees" around the ship using it. On a submarine, nose-mounted passive sonar detects in directions of about 270°, centered on the ship's alignment; the hull-mounted array covers about 160° on each side, and the towed array a full 360°. The invisible areas are due to the ship's own interference. Once a signal is detected in a certain direction (which means that something is making sound in that direction; this is called broadband detection) it is possible to zoom in and analyze the signal received (narrowband analysis). This is generally done using a Fourier transform to show the different frequencies making up the sound. Since every engine makes a specific sound, it is straightforward to identify the object. Databases of unique engine sounds are part of what is known as acoustic intelligence or ACINT.
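The broadband-then-narrowband workflow described above can be sketched with a Fourier transform: pick up a noisy contact, then resolve the machinery tones hidden in it. The tone frequencies below (a 60 Hz "shaft" line and a 240 Hz "blade-rate" line) are invented for illustration; real ACINT signatures are classified.

```python
import numpy as np

fs = 4096                      # sample rate, Hz
t = np.arange(fs) / fs         # one second of simulated hydrophone samples
rng = np.random.default_rng(0)

# Synthetic contact: two machinery tones buried in broadband sea noise.
signal = (np.sin(2 * np.pi * 60 * t)           # hypothetical shaft-rate line
          + 0.8 * np.sin(2 * np.pi * 240 * t)  # hypothetical blade-rate line
          + 0.2 * rng.standard_normal(fs))     # ambient noise

# Narrowband analysis: magnitude spectrum (1 Hz resolution here).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The strongest spectral lines (ignoring DC) are the contact's tonals,
# which would then be matched against an ACINT database.
peaks = freqs[np.argsort(spectrum[1:])[-2:] + 1]
lines = sorted(peaks.tolist())   # -> [60.0, 240.0]
```

The tones stand far above the noise floor in the spectrum even though they are hard to see in the raw time series; longer integration times narrow the bins further and pull still-fainter lines out of the noise.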

Another use of passive sonar is to determine the target's trajectory. This process is called target motion analysis (TMA), and the resultant "solution" is the target's range, course, and speed. TMA is done by marking from which direction the sound comes at different times, and comparing the motion with that of the operator's own ship. Changes in relative motion are analyzed using standard geometrical techniques along with some assumptions about limiting cases.
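The TMA process above can be sketched as a least-squares fit: each bearing constrains the target to a line through own ship's position at that time, and that constraint is linear in the target's initial position and velocity, so a constant-velocity "solution" falls out of a pseudo-linear fit, provided own ship maneuvers (the limiting case mentioned above: on a straight leg, bearings alone cannot fix range). The geometry and numbers below are invented for illustration, and the bearings are noiseless.

```python
import numpy as np

def own_ship_pos(t):
    """Own ship track: north at 5 m/s for 300 s, then east at 5 m/s.
    The turn is the maneuver that makes the problem observable."""
    if t <= 300:
        return np.array([0.0, 5.0 * t])
    return np.array([5.0 * (t - 300), 1500.0])

# Truth (unknown to the solver): target starts 10 km east, 5 km north,
# heading due west at 5 m/s.
p0_true, v_true = np.array([10000.0, 5000.0]), np.array([-5.0, 0.0])

times = np.arange(0, 601, 60)   # a bearing mark every minute
A, b = [], []
for t in times:
    xo, yo = own_ship_pos(t)
    xt, yt = p0_true + v_true * t
    theta = np.arctan2(xt - xo, yt - yo)   # bearing measured from north
    c, s = np.cos(theta), np.sin(theta)
    # Bearing-line constraint, linear in (x0, y0, vx, vy):
    # (x0 + vx*t - xo)*cos(theta) - (y0 + vy*t - yo)*sin(theta) = 0
    A.append([c, -s, t * c, -t * s])
    b.append(xo * c - yo * s)

solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
x0, y0, vx, vy = solution   # recovered range, course and speed components
```

With noiseless bearings and a maneuvering own ship, the fit recovers the target's position and velocity essentially exactly; with real, noisy bearings the same geometry is solved statistically, and the quality of the solution depends heavily on how sharply own ship maneuvers.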

Passive sonar is stealthy and very useful. However, it requires high-tech electronic components and is costly. It is generally deployed on expensive ships in the form of arrays to enhance detection. Surface ships use it to good effect; it is even better used by submarines, and it is also used by airplanes and helicopters, mostly for a "surprise effect", since submarines can hide under thermal layers. If a submarine's commander believes he is alone, he may bring his boat closer to the surface and be easier to detect, or go deeper and faster, and thus make more sound.

                                               https://en.wikipedia.org/wiki/Sonar

Sunday, January 27, 2019

The Three Princes of Serendip


The Three Princes of Serendip is the English version of the story Peregrinaggio di tre giovani figliuoli del re di Serendippo published by Michele Tramezzino in Venice in 1557. Tramezzino claimed to have heard the story from one Cristoforo Armeno, who had translated the Persian fairy tale into Italian, adapting Book One of Amir Khusrau's Hasht-Bihisht of 1302. The story first came to English via a French translation, and now exists in several out-of-print translations. Serendip is the Classical Persian name for Sri Lanka (Ceylon).

The story has become known in the English-speaking world as the source of the word serendipity, coined by Horace Walpole because of his recollection of the part of the "silly fairy tale" in which the three princes by "accidents and sagacity" discern the nature of a lost camel. In a separate line of descent, the story was used by Voltaire in his 1747 Zadig, and through this contributed to both the evolution of detective fiction and the self-understanding of scientific method.

How the Story Goes

"In ancient times there existed in the country of Serendippo, in the Far East, a great and powerful king by the name of Giaffer. He had three sons who were very dear to him. And being a good father and very concerned about their education, he decided that he had to leave them endowed not only with great power, but also with all kinds of virtues of which princes are particularly in need."

The father searches out the best possible tutors. "And to them he entrusted the training of his sons, with the understanding that the best they could do for him was to teach them in such a way that they could be immediately recognized as his very own."

When the tutors are pleased with the excellent progress that the three princes make in the arts and sciences, they report it to the king. He, however, still doubts their training, and summoning each in turn, declares that he will retire to the contemplative life leaving them as king. Each politely declines, affirming the father's superior wisdom and fitness to rule.

The king is pleased, but fearing that his sons' education may have been too sheltered and privileged, feigns anger at them for refusing the throne and sends them away from the land.

The lost camel


No sooner do the three princes arrive abroad than they trace clues to identify precisely a camel they have never seen. They conclude that the camel is lame, blind in one eye, missing a tooth, carrying a pregnant woman, and bearing honey on one side and butter on the other. When they later encounter the merchant who has lost the camel, they report their observations to him. He accuses them of stealing the camel and takes them to the Emperor Beramo, where he demands punishment.

Beramo then asks how they are able to give such an accurate description of the camel if they have never seen it. It is clear from the princes' replies that they have used small clues to infer cleverly the nature of the camel.

Grass had been eaten from the side of the road where it was less green, so the princes had inferred that the camel was blind on the other side. Because there were lumps of chewed grass on the road that were the size of a camel's tooth, they inferred they had fallen through the gap left by a missing tooth. The tracks showed the prints of only three feet, the fourth being dragged, indicating that the animal was lame. That butter was carried on one side of the camel and honey on the other was evident because ants had been attracted to melted butter on one side of the road and flies to spilled honey on the other.

As for the woman, one of the princes said: "I guessed that the camel must have carried a woman, because I had noticed that near the tracks where the animal had knelt down the imprint of a foot was visible. Because some urine was nearby, I wet my fingers and as a reaction to its odour I felt a sort of carnal concupiscence, which convinced me that the imprint was of a woman's foot."

"I guessed that the same woman must have been pregnant," said another prince, "because I had noticed nearby handprints which were indicative that the woman, being pregnant, had helped herself up with her hands while urinating."

At this moment, a traveller enters the scene to say that he has just found a missing camel wandering in the desert. Beramo spares the lives of the three princes, lavishes rich rewards on them, and appoints them to be his advisors.

The story continues


The three princes have many other adventures, where they continue to display their sagacity, stories-within-stories are told, and, of course, there is a happy ending.

History of the Story

The fairy tale The Three Princes of Serendip is based upon the life of Persian King Bahram V, who ruled the Sassanid Empire (420–440). Stories of his rule are told in epic poetry of the region (Firdausi's Shahnameh of 1010, Nizami's Haft Paykar of 1197, Khusrau's Hasht Bihisht of 1302), parts of which are based upon historical facts with embellishments derived from folklore going back hundreds of years to oral traditions in India and The Book of One Thousand and One Nights. With the exception of the well-known camel story, English translations are very hard to come by.

Zadig by Voltaire

In chapter three of Voltaire's 1747 novel Zadig, there is an adaptation of The Three Princes of Serendip, this time involving, instead of a camel, a horse and a dog, which the eponymous Zadig is able to describe in great detail from his observations of the tracks on the ground. When he is accused of theft and taken before the judges, Zadig clears himself by recounting the mental process which allows him to describe the two animals he has never seen: "I saw on the sand the tracks of an animal, and I easily judged that they were those of a little dog. Long, shallow furrows imprinted on little rises in the sand between the tracks of the paws informed me that it was a bitch whose dugs were hanging down, and that therefore she had had puppies a few days before."

Zadig's detective work was influential. Cuvier wrote, in 1834, in the context of the new science of paleontology:

Today, anyone who sees only the print of a cloven hoof might conclude that the animal that had left it behind was a ruminator, and this conclusion is as certain as any in physics and in ethics. This footprint alone, then, provides the observer with information about the teeth, the jawbone, the vertebrae, each leg bone, the thighs, shoulders and pelvis of the animal which had just passed: it is a more certain proof than all Zadig's tracks.

T. H. Huxley, the proponent of Darwin's theories of evolution, also found Zadig's approach instructive, and wrote in his 1880 article "The method of Zadig":

What, in fact, lay at the foundation of all Zadig's arguments, but the coarse, commonplace assumption, upon which every act of our daily lives is based, that we may conclude from an effect to the pre-existence of a cause competent to produce that effect?

Edgar Allan Poe in his turn was probably inspired by Zadig when he created C. Auguste Dupin in "The Murders in the Rue Morgue", calling it a "tale of ratiocination" wherein "the extent of information obtained lies not so much in the validity of the inference as in the quality of the observation." Poe's M. Dupin stories mark the start of the modern detective fiction genre. Émile Gaboriau and Arthur Conan Doyle were perhaps also influenced by Zadig.

                       https://en.wikipedia.org/wiki/The_Three_Princes_of_Serendip

Saturday, January 26, 2019

Venezuela's Presidential Crisis

Venezuela has been experiencing a presidential crisis since 10 January 2019. The incumbent President Nicolás Maduro was declared president in the 2018 election; however, the process and results of that election were widely disputed. The dispute came to a head in early 2019 when the National Assembly of Venezuela stated that results of the election were invalid and declared Juan Guaidó as the acting president, citing several clauses of the 1999 Venezuelan Constitution. National protests were then organized by the opposition against Maduro's election and his ruling coalition.

                                                               Maduro and Guaidó

Juan Guaidó had begun taking steps toward a transitional government, calling for an open cabildo, a "town hall"-style rally, on 11 January. Demonstrations and defections had begun to take place as well. Internally, Maduro has received the support of the pro-government Constituent Assembly, while Guaidó is backed by the pro-opposition National Assembly.

Guaidó was briefly detained by Venezuelan security forces on 13 January, with each side claiming the other party was responsible; Maduro's supporters claimed the arrest was staged while Guaidó called the arrest an attempt to stop the National Assembly from assuming power. Venezuela began censoring some social media outlets beginning on 21 January.

A few days after the National Assembly's declaration, various Venezuelan groups, foreign nations, and international organizations made statements supporting either side of the conflict. The Lima Group declared Maduro illegitimate on 13 January. Afterward, the Organization of American States (OAS) and the European Union expressed support for the National Assembly alongside other Western countries, while other nations have expressed support for Maduro.

Large mass protests and violence erupted on 23 January and drew further responses from a number of foreign governments and leaders.

Background

Since 2010, Venezuela has been suffering a socioeconomic crisis under Nicolás Maduro (and briefly under his predecessor Hugo Chávez), as rampant crime, hyperinflation and shortages diminished the quality of life. As a result of discontent with the government, for the first time since 1999, the opposition was elected to hold the majority in the National Assembly following the 2015 parliamentary election. Following the 2015 National Assembly election, the lame duck National Assembly, consisting of Bolivarian officials, filled the Supreme Tribunal of Justice, the highest court in Venezuela, with Maduro allies. The tribunal quickly stripped three opposition lawmakers of their National Assembly seats in early 2016, citing alleged "irregularities" in their elections, thereby preventing an opposition supermajority which would have been able to challenge President Maduro.

The tribunal then approved several actions by Maduro and granted him more powers in 2017. As protests mounted against Maduro, he called for a constituent assembly that would draft a new constitution to replace the 1999 Venezuela Constitution of his predecessor, Hugo Chávez. Many countries considered the election a bid by Maduro to stay in power indefinitely, and over 40 countries stated that they would not recognize the National Constituent Assembly. The Democratic Unity Roundtable—the opposition to the incumbent ruling party—also boycotted the election, claiming that the Constituent Assembly was "a trick to keep [the incumbent ruling party] in power." Since the opposition did not participate in the election, the incumbent Great Patriotic Pole, dominated by the United Socialist Party of Venezuela, won almost all seats in the assembly by default. On 8 August 2017, the Constituent Assembly declared itself to be the government branch with supreme power in Venezuela, banning the opposition-led National Assembly from performing actions that would interfere with the assembly while continuing to pass measures in "support and solidarity" with President Maduro, effectively stripping the National Assembly of all its powers.

January 23 Events

Prior to 23 January, there had been great anticipation of the day, with smaller protests building in the nation in the preceding days. On the morning of 23 January, Guaidó tweeted that "The world's eyes are on our homeland today". On that day, millions of Venezuelans protested across the country in support of Guaidó, described as "a river of humanity", with a few hundred attending a protest in support of Maduro outside Miraflores.

 
The opposition protest march began its route at Avenida Francisco de Miranda, a major street in Caracas; it was planned for a 10:00 AM start but was delayed 30 minutes by rain. At one end, with that part of the street blocked off, was a stage where Guaidó spoke during the protest and declared himself president, swearing himself in. It was reported that the National Guard used tear gas on gathering crowds before the protest began, to disperse them. Another area of the capital was blocked off at Plaza Venezuela, a large main square, with armored vehicles and riot police on hand before protestors arrived.

It was reported on social media that by mid-day, two people were killed in protests in San Cristóbal, Táchira, and four in Barinas. Photographic reports published showed that some protests grew violent, resulting in injuries to protesters and security forces alike. By the end of the day, at least 13 people were killed. Michelle Bachelet of the United Nations expressed concern that so many people had been killed, and requested a UN investigation into the security forces' use of violence.

Friday, January 25, 2019

The Cullinan Diamond

The Cullinan Diamond is the largest gem-quality rough diamond ever found, weighing 3,106.75 carats (621.35 g), discovered at the Premier No. 2 mine in Cullinan, South Africa, on 26 January 1905. It was named after Thomas Cullinan, the mine's chairman.

                                                           rough Cullinan diamond 
In April 1905, the Cullinan was put on sale in London, but despite considerable interest, it was still unsold after two years. In 1907 the Transvaal Colony government bought the Cullinan and presented it to King Edward VII on his 66th birthday.

Cullinan produced stones of various cuts and sizes, the largest of which is named Cullinan I or the Great Star of Africa, and at 530.4 carats (106.08 g) it is the largest clear cut diamond in the world. Cullinan I is mounted in the head of the Sovereign's Sceptre with Cross. The second-largest is Cullinan II or the Second Star of Africa, weighing 317.4 carats (63.48 g), mounted in the Imperial State Crown. Both diamonds are part of the Crown Jewels of the United Kingdom.

Seven other major diamonds, weighing a total of 208.29 carats (41.66 g), are privately owned by Queen Elizabeth II, who inherited them from her grandmother, Queen Mary, in 1953. The Queen also owns minor brilliants and a set of unpolished fragments.

Discovery and Early History

The Cullinan diamond was found 18 feet (5.5 m) below the surface at Premier Mine in Cullinan, Transvaal Colony, by Frederick Wells, surface manager at the mine, on 26 January 1905. It was approximately 10.1 centimetres (4.0 in) long, 6.35 centimetres (2.50 in) wide, 5.9 centimetres (2.3 in) deep, and weighed 3,106 carats (621.2 grams). Newspapers called it the "Cullinan Diamond", a reference to Sir Thomas Cullinan, who opened the mine in 1902. It was three times the size of the Excelsior Diamond, found in 1893 at Jagersfontein Mine, weighing 972 carats (194.4 g). Four of its eight surfaces were smooth, indicating that it once had been part of a much larger stone broken up by natural forces. It had a blue-white hue and contained a small pocket of air, which at certain angles produced a rainbow, or Newton's rings.

Shortly after its discovery, Cullinan went on public display at the Standard Bank in Johannesburg, where it was seen by an estimated 8,000–9,000 visitors. In April 1905, the rough gem was deposited with Premier Mining Co.'s London sales agent, S. Neumann & Co. Due to its immense value, detectives were assigned to a steamboat that was rumoured to be carrying the stone, and a parcel was ceremoniously locked in the captain's safe and guarded on the entire journey. It was a diversionary tactic – the stone on that ship was fake, meant to attract those who would be interested in stealing it. Cullinan was sent to the United Kingdom in a plain box via registered post. On arriving in London, it was conveyed to Buckingham Palace for inspection by King Edward VII. It drew considerable interest from potential buyers, but Cullinan went unsold for two years.

Presentation to Edward VII

The Transvaal Prime Minister, Louis Botha, suggested buying the diamond for Edward VII as "a token of the loyalty and attachment of the people of the Transvaal to His Majesty's throne and person". In August 1907, a vote was held in Parliament on the Cullinan's fate, and a motion authorising the purchase was carried by 42 votes in favour to 19 against. Initially, Henry Campbell-Bannerman, then British Prime Minister, advised the king to decline the offer, but he later decided to let Edward VII choose whether or not to accept the gift. Eventually, the king was persuaded by Winston Churchill, then Colonial Under-Secretary. For his trouble, Churchill was sent a replica of the diamond, which he enjoyed showing off to guests on a silver plate. The Transvaal Colony government bought the diamond on 17 October 1907 for £150,000, or about US$750,000 at the time, which adjusted for pound-sterling inflation is equivalent to £15 million in 2016. Due to a 60% tax imposed on mining profits, the Treasury received most of its money back from the Premier Diamond Mining Company.

The diamond was presented to the king at Sandringham House on 9 November 1907 – his sixty-sixth birthday – in the presence of a large party of guests, including the Queen of Norway, the Queen of Spain, the Duke of Westminster and Lord Revelstoke. The king asked his colonial secretary, Lord Elgin, to announce that he accepted the gift "for myself and my successors" and that he would ensure "this great and unique diamond be kept and preserved among the historic jewels which form the heirlooms of the Crown".


= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

What Is a Diamond?

Diamond is a solid form of the element carbon with its atoms arranged in a crystal structure called diamond cubic. At room temperature and pressure, another solid form of carbon known as graphite is the chemically stable form, but diamond almost never converts to it. Diamond has the highest hardness and thermal conductivity of any natural material, properties that are utilized in major industrial applications such as cutting and polishing tools. They are also the reason that diamond anvil cells can subject materials to pressures found deep in the Earth.

Because the arrangement of atoms in diamond is extremely rigid, few types of impurity can contaminate it (two exceptions being boron and nitrogen). Small numbers of defects or impurities (about one per million of lattice atoms) color diamond blue (boron), yellow (nitrogen), brown (defects), green (radiation exposure), purple, pink, orange or red. Diamond also has relatively high optical dispersion (ability to disperse light of different colors).

Most natural diamonds have ages between 1 billion and 3.5 billion years. Most were formed at depths between 150 and 250 kilometers (93 and 155 mi) in the Earth's mantle, although a few have come from as deep as 800 kilometers (500 mi). Under high pressure and temperature, carbon-containing fluids dissolved minerals and replaced them with diamonds. Much more recently (tens to hundreds of millions of years ago), they were carried to the surface in volcanic eruptions and deposited in igneous rocks known as kimberlites and lamproites.

Synthetic diamonds can be grown from high-purity carbon under high pressures and temperatures or from hydrocarbon gas by chemical vapor deposition (CVD). Imitation diamonds can also be made out of materials such as cubic zirconia and silicon carbide. Natural, synthetic and imitation diamonds are most commonly distinguished using optical techniques or thermal conductivity measurements.



Thursday, January 24, 2019

The Norwegian Rocket Incident

The Norwegian rocket incident, also known as the Black Brant scare, occurred on January 25, 1995, when a team of Norwegian and US scientists launched a Black Brant XII four-stage sounding rocket from the Andøya Rocket Range off the northwestern coast of Norway. The rocket, which carried scientific equipment to study the aurora borealis over Svalbard, flew on a high northbound trajectory, which included an air corridor that stretches from Minuteman III nuclear missile silos in North Dakota all the way to the Russian capital city of Moscow.

During its flight, the rocket eventually reached an altitude of 1,453 kilometers (903 mi), resembling a U.S. Navy submarine-launched Trident missile. As a result, fearing a high-altitude nuclear attack that could blind Russian radar, Russian nuclear forces were put on high alert. The Cheget, Russia's "nuclear briefcase", was brought to Russian president Boris Yeltsin, who then had to decide whether to launch a retaliatory nuclear strike against the United States.

Russian observers determined that there was no nuclear attack and did not retaliate.

Background

The Norwegian rocket incident produced a few minutes of nuclear tension nearly four years after the end of the Cold War. While not as well known as the Cuban Missile Crisis of October 1962 (nor the Stanislav Petrov incident of 1983, which was still classified at the time), the 1995 incident is considered one of the most severe. It occurred in a period when many Russians were still deeply suspicious of the United States and NATO; in contrast, the Cuban Missile Crisis had a much longer build-up.

Detection

As the Black Brant XII rocket gained altitude, it was detected by the Olenegorsk early-warning radar station in Murmansk Oblast, Russia. To the radar operators, the rocket appeared similar in speed and flight pattern to a U.S. Navy submarine-launched Trident missile, leading the Russian military to initially misinterpret the rocket's trajectory as representing the precursor to a possible attack by missiles from submarines.

EMP rocket scenario

One possibility was that the rocket had been a solitary missile with a radar-blocking electromagnetic pulse (EMP) payload launched from a Trident missile at sea in order to blind Russian radars in the first stage of a surprise attack. In this scenario, gamma rays from a high-altitude nuclear detonation would create a very high-intensity electromagnetic pulse that would confuse radars and incapacitate electronic equipment. After that, according to the scenario, the real attack would start.

Post-staging

After stage separation, the rocket appeared on radar similar to multiple reentry vehicles (MRVs); the Russian control center did not immediately realize that the Norwegian scientific rocket was headed out to sea rather than toward Russia. Tracking the trajectory took 8 of the 10 minutes allotted to deciding whether to launch a nuclear response to an impending attack; a Trident missile launched from the Barents Sea could reach mainland Russia in 10 minutes.

Response

The event triggered a full alert that passed up the military chain of command to President Boris Yeltsin, and the "nuclear briefcase" (known in Russia as the Cheget) used to authorize a nuclear launch was automatically activated. Yeltsin activated his "nuclear keys" for the first time. No warning of the incident was issued to the Russian public; it was reported in the news a week afterward.

As a result of the alert, Russian submarine commanders were ordered to go into a state of combat readiness and prepare for nuclear retaliation.

Soon thereafter, Russian observers determined that the rocket was heading away from Russian airspace and posed no threat. It fell to Earth as planned, near Spitsbergen, 24 minutes after launch.

The Norwegian rocket incident remains the first and, thus far, only known case in which a nuclear-weapons state activated its nuclear briefcase and prepared to launch a retaliatory attack.

Prior notification

The Norwegian and U.S. scientists had notified thirty countries, including Russia, of their intention to launch a high-altitude scientific experiment aboard a rocket; however, the information was not passed on to the radar technicians. Following the incident, notification and disclosure protocols were re-evaluated and redesigned.

                                    https://en.wikipedia.org/wiki/Norwegian_rocket_incident

Can Moons Have Moons?


Pasadena, California – January 23, 2019 -- This simple question—asked by the four-year-old son of Carnegie's Juna Kollmeier—started it all. Not long after this initial bedtime query, Kollmeier was coordinating a program on the Milky Way at the Kavli Institute for Theoretical Physics (KITP) while her one-time college classmate Sean Raymond of Université de Bordeaux was attending a parallel KITP program on the dynamics of Earth-like planets. After discussing this very simple question at a seminar, the two joined forces to solve it. Their findings are the basis of a paper published in Monthly Notices of the Royal Astronomical Society.

The duo kicked off an internet firestorm late last year when they posted a draft of their article examining the possibility of moons that orbit other moons on a preprint server for physics and astronomy manuscripts.

The online conversation obsessed over the best term for such phenomena, with options like moonmoons and mini-moons thrown into the mix. But nomenclature was not the point of Kollmeier and Raymond's investigation (although they do have a preference for submoons). Rather, they set out to define the physical parameters for moons that would be capable of being stably orbited by other, smaller moons.

“Planets orbit stars and moons orbit planets, so it was natural to ask if smaller moons could orbit larger ones,” Raymond explained.

Their calculations show that only large moons on wide orbits around their host planets could host submoons. Tidal forces from both the planet and the moon act to destabilize the orbits of submoons around smaller moons or moons that orbit closer to their host planet.

They found that four moons in our own Solar System are theoretically capable of hosting submoons. Jupiter's moon Callisto, Saturn's moons Titan and Iapetus, and Earth's own Moon all fit the bill, although no submoon has been found so far. However, the authors add that further calculations are needed to address possible sources of submoon instability, such as the non-uniform concentration of mass in our Moon's crust.
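A rough way to see why only large, distant moons qualify is to estimate each moon's Hill radius, the region within which the moon's gravity dominates over its planet's; a submoon must orbit well inside this radius to have any chance of stability. The sketch below is a back-of-the-envelope illustration using approximate published masses and orbital distances, not the paper's full tidal analysis:

```python
# Hedged sketch: Hill-radius estimates for the four candidate submoon hosts.
# r_Hill = a * (m_moon / (3 * M_planet)) ** (1/3), where a is the moon's
# semi-major axis around its planet. Values below are approximate.

def hill_radius_km(a_km, m_moon_kg, m_planet_kg):
    """Approximate Hill radius of a moon orbiting its planet, in km."""
    return a_km * (m_moon_kg / (3.0 * m_planet_kg)) ** (1.0 / 3.0)

bodies = {
    # name: (semi-major axis in km, moon mass in kg, planet mass in kg)
    "Moon":     (384_400,   7.35e22, 5.97e24),   # Earth's Moon
    "Callisto": (1_882_700, 1.08e23, 1.90e27),   # Jupiter
    "Titan":    (1_221_900, 1.35e23, 5.68e26),   # Saturn
    "Iapetus":  (3_560_800, 1.81e21, 5.68e26),   # Saturn
}

for name, (a, m, M) in bodies.items():
    r = hill_radius_km(a, m, M)
    print(f"{name:8s} Hill radius ~ {r:,.0f} km")
```

All four moons have Hill radii of tens of thousands of kilometers, leaving ample gravitational room for a small companion; the harder question the paper addresses is whether tides would let such an orbit survive for billions of years.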

“The lack of known submoons in our Solar System, even orbiting around moons that could theoretically support such objects, can offer us clues about how our own and neighboring planets formed, about which there are still many outstanding questions,” Kollmeier explained.

The moons orbiting Saturn and Jupiter are thought to have been born from the disk of gas and dust that encircles a gas giant planet in the later stages of its formation. Our own Moon, on the other hand, is thought to have originated in the aftermath of a giant impact between the young Earth and a Mars-sized body. The lack of stable submoons could help scientists better understand the different forces that shaped the satellites we do see.

Kollmeier added: “and, of course, this could inform ongoing efforts to understand how planetary systems evolve elsewhere and how our own Solar System fits into the thousands of others discovered by planet-hunting missions.”

For example, Kollmeier and Raymond found that the newly discovered candidate exomoon orbiting the Jupiter-sized Kepler-1625b has the right mass and distance from its host to support a submoon, although the inferred tilt of its orbit might make it difficult for such an object to remain stable. Even so, detecting a submoon around an exomoon would be very difficult.

Given the excitement surrounding searches for potentially habitable exoplanets, Kollmeier and Raymond calculated that the best-case scenario for life on large submoons is around massive stars. Although extremely common, small red dwarf stars are so faint, and their habitable zones so close in, that tidal forces there are very strong and submoons (and often even the moons themselves) are unstable.

Finally, the authors point out that an artificial submoon may be stable and could thereby serve as a time capsule or outpost. On a stable orbit around the Moon, such as the one for NASA's proposed Lunar Gateway, a submoon would keep humanity's treasures safe for posterity long after Earth becomes unsuitable for life. Kollmeier and Raymond agree that there is much more work to be done (and fun to be had) to understand submoons (or the lack thereof) as a rocky record of the history of planet-moon systems.