Wednesday, December 27, 2017

Byzantine Fault Tolerance (BFT) in Computers

In fault-tolerant computer systems, and in particular distributed computing systems, Byzantine fault tolerance (BFT) is the characteristic of a system that tolerates the class of failures known as the Byzantine Generals' Problem, which is a generalized version of the Two Generals' Problem – for which there is an unsolvability proof. The phrases interactive consistency or source congruency have been used to refer to Byzantine fault tolerance, particularly among the members of some early implementation teams. It is also referred to as error avalanche, Byzantine agreement problem, Byzantine generals problem and Byzantine failure.

Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas the fail-stop failure model simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions: the failed node can generate arbitrary data while pretending to be a correct one. Thus, Byzantine failures can confuse failure-detection systems, which makes fault tolerance difficult.

Definition and Background

A Byzantine fault is any fault presenting different symptoms to different observers. A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus.

The objective of Byzantine fault tolerance is to be able to defend against Byzantine failures, in which components of a system fail with symptoms that prevent some components of the system from reaching agreement among themselves, where such agreement is needed for the correct operation of the system. Correctly functioning components of a Byzantine fault tolerant system will be able to provide the system's service, assuming there are not too many faulty components.

The terms fault and failure are used here according to the standard definitions originally created by a joint committee on "Fundamental Concepts and Terminology" formed by the IEEE Computer Society's Technical Committee on Dependable Computing and Fault-Tolerance and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance. A version of these definitions is also described in the Dependability Wikipedia page.

Byzantine Generals’ Problem

Byzantine refers to the Byzantine Generals' Problem, an agreement problem (described by Leslie Lamport, Robert Shostak and Marshall Pease in their 1982 paper, "The Byzantine Generals Problem") in which a group of generals, each commanding a portion of the Byzantine army, encircle a city. These generals wish to formulate a plan for attacking the city. In its simplest form, the generals must only decide whether to attack or retreat. Some generals may prefer to attack, while others prefer to retreat. The important thing is that every general agrees on a common decision, for a halfhearted attack by a few generals would become a rout and be worse than a coordinated attack or a coordinated retreat.

The problem is complicated by the presence of traitorous generals who may not only cast a vote for a suboptimal strategy, they may do so selectively. For instance, if nine generals are voting, four of whom support attacking while four others are in favor of retreat, the ninth general may send a vote of retreat to those generals in favor of retreat, and a vote of attack to the rest. Those who received a retreat vote from the ninth general will retreat, while the rest will attack (which may not go well for the attackers). The problem is complicated further by the generals being physically separated and having to send their votes via messengers who may fail to deliver votes or may forge false votes.

Byzantine fault tolerance can be achieved if the loyal (non-faulty) generals have a majority agreement on their strategy. Note that there can be a default vote value given to missing messages. For example, missing messages can be given the value <Null>. Further, if the agreement is that the <Null> votes are in the majority, a pre-assigned default strategy can be used (e.g., retreat).
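The default-vote rule can be sketched in a few lines of Python (a hypothetical illustration; the vote labels, the <Null> marker, and the retreat default are assumptions for the example):

```python
from collections import Counter

NULL = "<Null>"               # value assigned to missing messages
DEFAULT_STRATEGY = "retreat"  # pre-assigned default if <Null> wins

def decide(votes):
    """Decide a common strategy from the votes a general received.

    Missing votes (None) are replaced by the default value <Null>;
    if <Null> holds the majority, the pre-assigned default is used.
    """
    filled = [v if v is not None else NULL for v in votes]
    winner, _ = Counter(filled).most_common(1)[0]
    return DEFAULT_STRATEGY if winner == NULL else winner

# Nine generals: five vote attack, three vote retreat, one message is lost
print(decide(["attack"] * 5 + ["retreat"] * 3 + [None]))  # → attack
```

Because every general runs the same rule over the same (agreed) set of votes, the loyal majority reaches the same decision even when some messages never arrive.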

The typical mapping of this story onto computer systems is that the computers are the generals and their digital communication links are the messengers. Although the problem is formulated in the analogy as a decision-making and security problem, in electronics it cannot be solved simply by cryptographic digital signatures, because failures such as incorrect voltages can propagate through the encryption process. Thus, a component may appear functional to one component and faulty to another, which prevents the system from reaching consensus on whether the component is faulty or not.

Examples of Byzantine Failures

Several examples of Byzantine failures that have occurred are given in two equivalent journal papers. These and other examples are described on the NASA DASHlink web pages. These web pages also describe some phenomenology that can cause Byzantine faults.

Byzantine errors were observed infrequently and at irregular points during endurance testing for the then-newly constructed Virginia class submarines, at least through 2005 (when the issues were publicly reported).

A similar problem faces honeybee swarms. They have to find a new home, and the many scouts and wider participants have to reach consensus about which of perhaps several candidate homes to fly to. And then they all have to fly there, with their queen. The bees' approach works reliably, but when researchers offer two hives, equally attractive by all the criteria bees apply, catastrophe ensues, the swarm breaks up, and all the bees die.

Byzantine Fault Tolerance in Practice

One example of BFT in use is bitcoin, a peer-to-peer digital currency system. The bitcoin network works in parallel to generate a chain of Hashcash-style proof-of-work. The proof-of-work chain is the key to overcoming Byzantine failures and reaching a coherent global view of the system state.
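A Hashcash-style proof-of-work puzzle of the kind referred to above can be sketched in a few lines (a minimal illustration, not Bitcoin's actual block format; the 8-byte nonce encoding and the difficulty value are arbitrary choices for the demo):

```python
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(data || nonce), read as a
    256-bit integer, falls below a target with `difficulty_bits`
    leading zero bits -- the Hashcash-style puzzle behind mining."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = proof_of_work(b"block header", 12)  # low difficulty so the demo is quick
```

Raising `difficulty_bits` by one doubles the expected search time, which is how the network keeps the block rate roughly constant as hardware gets faster.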

Some aircraft systems, such as the Boeing 777 Aircraft Information Management System (via its ARINC 659 SAFEbus network), the Boeing 777 flight control system, and the Boeing 787 flight control systems, use Byzantine fault tolerance. Because these are real-time systems, their Byzantine fault tolerance solutions must have very low latency. For example, SAFEbus can achieve Byzantine fault tolerance with on the order of a microsecond of added latency.

Some spacecraft such as the SpaceX Dragon flight system consider Byzantine fault tolerance in their design.

Byzantine fault tolerance mechanisms use components that repeat an incoming message (or just its signature) to other recipients of that incoming message. All these mechanisms make the assumption that the act of repeating a message blocks the propagation of Byzantine symptoms. For systems that have a high degree of safety or security criticality, these assumptions must be proven to be true to an acceptable level of fault coverage. When providing proof through testing, one difficulty is creating a sufficiently wide range of signals with Byzantine symptoms. Such testing likely will require specialized fault injectors.


= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Footnote by the Blog Author

Swirlds’ Hashgraph program offers a new alternative solution to the Byzantine problem. This software is described in detail in the December 23, 2017 blog entry.

Tuesday, December 26, 2017

Alaskan Oil and Gas Reserves

Re-Assessing Alaska's Energy Frontier
USGS Releases Petroleum Assessment of
the National Petroleum Reserve-Alaska

Less than 80 miles from Prudhoe Bay, home to the giant oil fields that feed the Trans-Alaska Pipeline, lies the site of USGS’ latest oil and gas assessment: the National Petroleum Reserve-Alaska and adjacent areas. Managed by the Bureau of Land Management, the NPR-A covers 22.8 million acres, an area larger than the entire state of South Carolina.

The new USGS assessment estimates 8.7 billion barrels of oil and 25 trillion cubic feet of natural gas resources. This is a more than sixfold increase from the previous USGS estimates in the region, which include parts of the 2005 Central North Slope assessment and the 2010 NPR-A assessment.

Driven by Discoveries

The USGS decision to reassess the NPR-A came after several industry announcements of potentially large discoveries in the area, much greater than previously thought. The Pikka and Horseshoe oil discoveries near the Colville River delta just outside NPR-A were announced in 2015 and 2017. Industry announcements suggest that the two discoveries, 21 miles apart, are likely in the same oil pool, which may hold more than 1 billion barrels of recoverable oil.

“Advances in technology and our understanding of petroleum geology are constantly moving forward,” said Walter Guidroz, program coordinator of the USGS Energy Resources Program. “That’s why the USGS re-evaluates and updates our assessments, to give decision-makers the best available science to manage our natural resources.”

Industry announced the discovery of the Willow oil pool in the Nanushuk Formation in NPR-A in 2017, with estimated resources of more than 300 million barrels of oil. Multiple wells are planned for the 2017-2018 winter drilling season at both Pikka-Horseshoe and Willow to further delineate these discoveries.

In 2016, industry announced an oil discovery in the deeper Torok Formation at Smith Bay, less than one mile offshore from NPR-A, estimated to hold more than 1 billion barrels of oil. Another oil discovery in the Torok Formation was announced in 2015 at the Cassin prospect in NPR-A, not far from the Willow discovery. No plans for additional industry drilling have yet been announced at either Smith Bay or Cassin.

Uncertainty at the Frontier’s Edge

Although the USGS provides a range of potential values for the new oil and gas resource estimates, there is significant uncertainty in these values. Until further wells are drilled and oil production is initiated, it is difficult to be certain about the resource potential. Nevertheless, sufficient data are available to conclude that the potential size of oil pools in the Nanushuk and Torok Formations is six times larger than previously thought.

Prior to 2015, about 150 exploration wells had penetrated the Nanushuk and Torok Formations, and oil discoveries were limited to a few small oil pools (less than 10 million barrels) in stratigraphic traps and one larger oil pool (more than 70 million barrels) in a structural trap.

The new USGS assessment of the Nanushuk and Torok Formations estimated that oil and gas resources are not uniformly distributed across the region, and divided each formation into three assessment units. These assessment units were defined based on geological character documented using data from seismic-reflection surveys, exploration wells, and outcrops.

This assessment did not include rocks older than the Torok Formation in NPR-A because those rocks have not been penetrated by exploration drilling since previously assessed in 2010, and thus no new information is available about their oil and gas potential. The 2010 assessment of those older rocks in NPR-A estimated that they hold 86 million barrels of oil and nearly 15 trillion cubic feet of gas.

Start with Science

USGS assessments are for undiscovered, technically recoverable resources. Undiscovered resources are those that are estimated to exist based on geologic knowledge and theory, while technically recoverable resources are those that can be produced using currently available technology and industry practices.

These assessments of oil and gas resources follow a publicly available, peer-reviewed methodology that is used for all USGS conventional resource assessments. That allows resource managers, decision-makers and others to make apples-to-apples comparisons across all of the Nation’s petroleum-producing basins. In addition, USGS periodically reassesses basins to ensure that the latest trends in industry production, new discoveries, or updates in our understanding of the geology are reflected in the USGS resource estimates.


Monday, December 25, 2017

Saudi Arabia Plans Brand New City

Neom (styled NEOM; Arabic: نيوم, Niyūm) is a planned 10,230-square-mile (26,500 km²) transnational city and economic zone to be constructed in Tabuk Province, Saudi Arabia, close to the border region of Saudi Arabia, Jordan, and Egypt (via a proposed bridge across the Gulf of Aqaba).

Plans for a New City before 2030

The city was announced by Saudi Crown Prince Mohammad bin Salman at the Future Investment Initiative conference in Riyadh, Saudi Arabia on October 24, 2017. He said it will operate independently from the “existing governmental framework” with its own tax and labor laws and an autonomous judicial system.

The initiative emerged from Saudi Vision 2030, a plan that seeks to reduce Saudi Arabia's dependence on oil, diversify its economy, and develop public service sectors. Ghanem Nuseibeh, a consultant, told Inverse that the Saudi intention was "to shift from oil to high tech and put the Saudi kingdom at the forefront of technological advances. This is the post-oil era. These countries are trying to flourish beyond oil exporting, and the ones who don’t will be left behind." The German Klaus Kleinfeld, former chairman and CEO of Alcoa Inc. and former president and CEO of Siemens AG, will direct the development of the city. Plans call for robots to perform functions such as security, logistics, home delivery, and caregiving, and for the city to be powered solely by wind and solar power. Because the city will be designed and constructed from scratch, other innovations in infrastructure and mobility have been suggested. Planning and construction will be initiated with $500 billion from the Public Investment Fund of Saudi Arabia and international investors. The first phase of the project is scheduled for completion by 2025.

The Name “NEOM”

The name NEOM was constructed from two words. The first three letters form the Greek prefix neo- meaning “new”. The fourth letter is from the abbreviation of “Mostaqbal” (Arabic: مستقبل‎), an Arabic word meaning “future.”

Location of the New City

The NEOM project is located in Tabuk Province in the northwest of the Kingdom and includes land within the Egyptian and Jordanian borders. It extends along the Gulf of Aqaba, with 468 km of coastline featuring beaches and coral reefs, as well as mountains up to 2,500 m high, offering many development opportunities across a total area of around 26,500 km².


Sunday, December 24, 2017

The Fourth Wise Man

The Story of the Other Wise Man is a short novel or long short story by Henry van Dyke. It was initially published in 1895 and has been reprinted many times since then.

Plot of The Other Wise Man

The story is an addition and expansion of the account of the Biblical Magi, recounted in the Gospel of Matthew in the New Testament. It tells about a "fourth" wise man (accepting the tradition that the Magi numbered three), a priest of the Magi named Artaban, one of the Medes from Persia. Like the other Magi, he sees signs in the heavens proclaiming that a King had been born among the Jews. Like them, he sets out to see the newborn ruler, carrying treasures to give as gifts to the child - a sapphire, a ruby, and a "pearl of great price". However, he stops along the way to help a dying man, which makes him late to meet with the caravan of the other three wise men. Because he missed the caravan, and he can't cross the desert with only a horse, he is forced to sell one of his treasures in order to buy the camels and supplies necessary for the trip. He then commences his journey but arrives in Bethlehem too late to see the child, whose parents have fled to Egypt. He saves the life of a child at the price of another of his treasures.

He then travels to Egypt and to many other countries, searching for Jesus for many years and performing acts of charity along the way. After 33 years, Artaban is still a pilgrim, and a seeker after light. Artaban arrives in Jerusalem just in time for the crucifixion of Jesus. He spends his last treasure, the pearl, to ransom a young woman from being sold into slavery. He is then struck in the head by a falling roof tile and is about to die, having failed in his quest to find Jesus, but having done much good through charitable works. A voice tells him "Verily I say unto thee, Inasmuch as thou hast done it unto one of the least of these my brethren, thou hast done it unto me."(Matthew 25:40) He dies in a calm radiance of wonder and joy. His treasures were accepted, and the Other Wise Man found his King.

Other Versions of this Story

  • The story has been dramatized as a play several times: by Pauline Phelps in 1951, by Harold K. Sliker in 1952, by Everett Radford in 1956, and by M. Percy Crozier and Margaret Bruce in 1963, among others.
  • A television adaptation of the story was presented on the Hallmark Hall of Fame show (starring Wesley Addy as Artaban) in 1953. Televised versions of the story also appeared on Kraft Television Theatre in 1957 (starring Richard Kiley) and on G.E. True Theater in 1960 (starring Harry Townes). A full length (73 minutes) TV movie, titled "The Fourth Wise Man", starring Martin Sheen, was broadcast on 30 March 1985.
  • An oratorio or liturgical opera based on the story was written by Susan Hulsman Bingham and premiered in 2000.
  • A chamber opera was written by M. Ryan Taylor and premiered in 2006.
  • An opera was written by Damjan Rakonjac with a libretto by David Wisehart and premiered in 2010.
  • A simplified version of the tale, intended for children, was written by Robert Barrett in 2007.
  • A painting of Artaban was made by Scottish artist Peter Howson for use by the First Minister of Scotland, Alex Salmond, as his 2013 official Christmas card.
  • A novel by Edzard Schaper: Der vierte König.


  • "I do not know where this little story came from--out of the air, perhaps. One thing is certain, it is not written in any other book, nor is it to be found among the ancient lore of the East. And yet I have never felt as if it were my own. It was a gift, and it seemed to me as if I knew the Giver." —Henry Van Dyke
  • "So beautiful and so true to what is best in our natures, and so full of the Christmas spirit, is this story of The Other Wise Man that it ought to find its way into every sheaf of Christmas gifts in the land."—Harper's New Monthly Magazine
  • "What Van Dyke created was a story so simply and beautifully told that the reader is unaware that this recreation of the world our Lord knew is undergirded by prodigious research. It is an awesome tour de force."—Joe L. Wheeler, Christmas in My Heart

A large star sapphire, the Star of Artaban, was named for this story. It is currently found at the Smithsonian National Museum of Natural History.


Saturday, December 23, 2017

Hashgraph Data Structure Consensus Algorithm

Hashgraph is a data structure and consensus algorithm that is:

  • Fast: With a very high throughput and low consensus latency
  • Secure: Asynchronous Byzantine fault tolerant
  • Fair: Fairness of access, ordering, and timestamps

These properties enable new decentralized applications such as a stock market, improved collaborative applications, games, and auctions.

Overview of Swirlds Hashgraph
Leemon Baird
May 31, 2016

The hashgraph data structure and Swirlds consensus algorithm provide a new platform for distributed consensus. This paper gives an overview of some of its properties, and comparisons with the Bitcoin blockchain. In this paper, the term “blockchain” will generally refer to the system used in Bitcoin, rather than the large number of variants that have been proposed.

The goal of a distributed consensus algorithm is to allow a community of users to come to an agreement on the order in which some of them generated transactions, when no single member is trusted by everyone. In this way, it is a system for generating trust, when individuals do not already trust each other. The Swirlds hashgraph system achieves this along with being fair, fast, provable, Byzantine, ACID compliant, efficient, inexpensive, timestamped, DoS resistant, and optionally non-permissioned. This is what those terms mean:

The hashgraph is fair, because no individual can manipulate the order of the transactions. For example, imagine a stock market, where Alice and Bob both try to buy the last available share of a stock at the same moment for the same price. In blockchain, a miner might put both those transactions in a single block, and have complete freedom to choose what order they occur. Or the miner might choose to only include Alice’s transaction, and delay Bob’s to some future block. In the hashgraph, there is no way for an individual to affect the consensus order of those transactions. The best Alice can do is to invest in a better internet connection so that her transaction reaches everyone before Bob’s. That’s the fair way to compete. Alice won’t be able to bribe the miner to give her an unfair advantage, because there’s no single person responsible for the order.

The hashgraph is also fair in another way, because no individual can stop a transaction from entering the system, or even delay it very much. In blockchain, a transaction can be delayed by one or two mining periods if many of the miners are refusing to include it. In alternatives to blockchain based on leaders, this delay can be extremely long, until the next change of leader.

But in the hashgraph, attackers cannot stop a member from recording a transaction in any way other than cutting off their internet access.

The hashgraph is fast. It is limited only by the bandwidth. So if each member has enough bandwidth to download 4,000 transactions per second, then that is how many the system can handle. That would likely require only a few megabits per second, which is a typical home broadband connection. And it would be fast enough to handle all of the transactions of the entire Visa card network, worldwide. The Bitcoin limit of 7 transactions per second can clearly be improved in various ways. Though some ways of improving it, such as a gigantic block size, could actually make the fairness of the system even worse.
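The bandwidth claim is easy to check with back-of-the-envelope arithmetic (the 100-byte average transaction size is an assumption for the example):

```python
TX_PER_SECOND = 4_000    # throughput figure from the text
TX_SIZE_BYTES = 100      # assumed average transaction size

# bytes/s -> bits/s -> megabits/s
megabits_per_second = TX_PER_SECOND * TX_SIZE_BYTES * 8 / 1e6
print(megabits_per_second)  # → 3.2, i.e. a few megabits per second
```

At that assumed transaction size, 4,000 transactions per second fits comfortably within a typical home broadband connection, as the paragraph states.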

The hashgraph is provable. Once an event occurs, within a couple of minutes everyone in the community will know where it should be placed in history. More importantly, everyone will know that everyone else knows this. At that point, they can just incorporate the effects of the transaction, and then discard it. So in a minimal cryptocurrency system, each member (each “full node” in blockchain terminology) needs only to store the current balance of each wallet that isn’t empty. They don’t need to remember any old blocks. They don’t need to remember any old transactions. That shrinks the amount of storage from Bitcoin’s current 60 GB to a fraction of a single gigabyte. That would even fit on a typical smartphone.
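The balances-only idea can be sketched as a state machine that consumes transactions in consensus order and then discards them (a hypothetical illustration; the wallet names and the (sender, receiver, amount) transaction format are assumptions):

```python
def apply_in_consensus_order(balances, ordered_txs):
    """Fold transactions into the current balances, then forget them.

    `balances` maps wallet -> amount; `ordered_txs` lists
    (sender, receiver, amount) tuples in the agreed consensus order.
    Nothing but the resulting balances needs to be stored.
    """
    for sender, receiver, amount in ordered_txs:
        if balances.get(sender, 0) >= amount:  # reject overdrafts
            balances[sender] -= amount
            balances[receiver] = balances.get(receiver, 0) + amount
    # keep only non-empty wallets, as the text describes
    return {w: b for w, b in balances.items() if b > 0}

state = apply_in_consensus_order({"alice": 10},
                                 [("alice", "bob", 4),
                                  ("alice", "carol", 6)])
print(state)  # → {'bob': 4, 'carol': 6}
```

Because every member applies the same transactions in the same consensus order, every member ends up with the same balances without keeping any history.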

The hashgraph is Byzantine. This is a technical term meaning that no single member (or small group of members) can prevent the community from reaching a consensus. Nor can they change the consensus once it has been reached. And each member will eventually reach a point where they know for sure that they have reached consensus. Blockchain does not have a guarantee of Byzantine agreement, because a member never reaches certainty that agreement has been achieved (there’s just a probability that rises over time). Blockchain is also non-Byzantine because it doesn’t automatically deal with network partitions. If a group of miners is isolated from the rest of the internet, that can allow multiple chains to grow, which conflict with each other on the order of transactions. It is worth noting that the term “Byzantine” is sometimes used in a weaker sense. But here, it is used in its original, stronger sense: (1) every member eventually knows consensus has been reached, (2) attackers may collude, and (3) attackers may even control the internet itself (with some limits). Hashgraph is Byzantine, even by this stronger definition.

The hashgraph is ACID compliant. This is a database term, and applies to the hashgraph when it is used as a distributed database. A community of members uses it to reach a consensus on the order in which transactions occurred. After reaching consensus, each member feeds those transactions to that member’s local copy of the database, sending in each one in the consensus order. If the local database has all the standard properties of a database (ACID: Atomicity, Consistency, Isolation, Durability), then the community as a whole can be said to have a single, distributed database with those same properties. In blockchain, there is never a moment when you know that consensus has been reached. But if we were to consider 6 confirmations as achieving “certainty”, then it would be ACID compliant in the same sense as hashgraph.

The hashgraph is 100% efficient, as that term is used in the blockchain community. In blockchain, work is sometimes wasted mining a block that later is considered stale and is discarded by the community. In hashgraph, the equivalent of a “block” never becomes stale.

The hashgraph is inexpensive, in the sense of avoiding proof-of-work. In Bitcoin, the community must waste time on calculations that slow down how fast the blocks are mined. As computers become faster, they’ll have to do more calculations, to keep the rate slow. The calculations don’t have any useful purpose, except to slow down the community. This requires the serious miners to buy expensive, custom hardware, so they can do this work faster than their competitors. But hashgraph is 100% efficient, no matter how fast its “blocks” are mined.

So it doesn’t need to waste computations to slow itself down. (Note: there are blockchain variants that also don’t use proof-of-work, but Bitcoin does require it.)

The hashgraph is timestamped. Every transaction is assigned a consensus time, which is the median of the times at which each member first received it. This is part of the consensus, and so has all the guarantees of being Byzantine and provable. If a majority of the participating members are honest and have reliable clocks on their computers, then the timestamp itself will be honest and reliable, because it is generated by an honest and reliable member, or falls between two times that were generated by honest and reliable members. This consensus timestamping is useful for things such as smart contracts, because there will be a consensus on whether an event happened by a deadline, and the timestamp is resistant to manipulation by an attacker. In blockchain, each block contains a timestamp, but it reflects only a single clock: the one on the computer of the miner who mined that block.
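The median rule is simple to sketch, and shows why a single dishonest clock cannot drag the consensus time far (a minimal illustration; the timestamps are invented for the example):

```python
from statistics import median

def consensus_time(first_received):
    """Consensus timestamp: the median of the times at which each
    member first received the event. With a majority of honest clocks,
    the median is an honest time or falls between two honest times."""
    return median(first_received)

# four honest members and one attacker reporting a wildly late clock
print(consensus_time([10.0, 10.2, 10.1, 10.3, 99999.0]))  # → 10.2
```

With an odd number of members the median is one member's reported time; with an even number it falls between the two middle times, which is exactly the guarantee described above.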

The hashgraph is DoS resistant. Both blockchain and hashgraph are distributed in a way that resists Denial of Service (DoS) attacks. An attacker might flood one member or miner with packets, to temporarily disconnect them from the internet. But the community as a whole will continue to operate normally. An attack on the system as a whole would require flooding a large fraction of the members with packets, which is more difficult. There have been a number of proposed alternatives to blockchain based on leaders or round robin. These have been proposed to avoid the proof-of-work costs of blockchain. But they have the drawback of being sensitive to DoS attacks. If the attacker attacks the current leader, and switches to attacking the new leader as soon as one is chosen, then the attacker can freeze the entire system, while still attacking only one computer at a time. Hashgraph avoids this problem, while still not needing proof-of-work.

The hashgraph is optionally non-permissioned, while still avoiding the cost of proof-of-work. A permissioned system is one where only trusted members can participate. An open system is not permissioned, and allows anyone to participate. Standard blockchain can be open if it uses proof-of-work, but variants such as proof-of-stake typically have to be permissioned in order to be secure. A hashgraph system can be designed to work in a number of different ways. One of the more interesting is to use proof-of-stake, allowing members to vote proportional to their ownership of a particular cryptocurrency. A good cryptocurrency might be widely used, so that it is difficult for an attacker to corner the market by owning a large fraction of the entire money supply. If a large fraction of the currency owners all participate in a hashgraph system, then proof-of-stake will make it safe from Sybil attacks, which are attacks by hordes of sock-puppet fake accounts. Such a system would be secure even if it were not permissioned, while still avoiding the cost of proof-of-work.

                          From the Hashgraph website


Friday, December 22, 2017

The World of Actuators

An actuator is a component of a machine that is responsible for moving or controlling a mechanism or system, for example by actuating (opening or closing) a valve; in simple terms, it is a "mover".

An actuator requires a control signal and a source of energy. The control signal is relatively low energy and may be electric voltage or current, pneumatic or hydraulic pressure, or even human power. The supplied main energy source may be electric current, hydraulic fluid pressure, or pneumatic pressure. When the control signal is received, the actuator responds by converting the energy into mechanical motion.

An actuator is the mechanism by which a control system acts upon an environment. The control system can be simple (a fixed mechanical or electronic system), software-based (e.g. a printer driver, robot control system), a human, or any other input.

History of Actuators

The history of the pneumatic actuation system and the hydraulic actuation system dates to around the time of World War II (1938). It was first created by Xhiter Anckeleman (pronounced 'Ziter') who used his knowledge of engines and brake systems to come up with a new solution to ensure that the brakes on a car exert the maximum force, with the least possible wear and tear.

Hydraulic Actuators:  A hydraulic actuator consists of a cylinder or fluid motor that uses hydraulic power to facilitate mechanical operation. The mechanical motion gives an output in terms of linear, rotary, or oscillatory motion. As liquids are nearly impossible to compress, a hydraulic actuator can exert a large force. The drawback of this approach is its limited acceleration.

The hydraulic cylinder consists of a hollow cylindrical tube along which a piston can slide. The term single acting is used when the fluid pressure is applied to just one side of the piston. The piston can move in only one direction, a spring being frequently used to give the piston a return stroke. The term double acting is used when pressure is applied on each side of the piston; any difference in pressure between the two sides of the piston moves the piston to one side or the other.
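The large forces follow directly from F = P × A (pressure times piston area); a minimal sketch with assumed, illustrative numbers:

```python
import math

def piston_force(pressure_pa, bore_m):
    """Force exerted by fluid pressure on one face of the piston: F = P * A."""
    area = math.pi * (bore_m / 2) ** 2  # circular piston face
    return pressure_pa * area

# 10 MPa (about 1450 psi) acting on a 100 mm bore piston
force_n = piston_force(10e6, 0.100)
print(round(force_n))  # → 78540 N, roughly 8 tonnes of force
```

For a double-acting cylinder, the net force is the difference between the pressure-times-area products on the two faces (the rod reduces the effective area on one side).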

Pneumatic Actuators:  A pneumatic actuator converts energy formed by vacuum or compressed air at high pressure into either linear or rotary motion. Pneumatic energy is desirable for main engine controls because it can quickly respond in starting and stopping as the power source does not need to be stored in reserve for operation.

Pneumatic actuators enable considerable forces to be produced from relatively small pressure changes. These forces are often used with valves to move diaphragms to affect the flow of liquid through the valve.

Electric actuators:  An electric actuator is powered by a motor that converts electrical energy into mechanical torque. The electrical energy is used to actuate equipment such as multi-turn valves. It is one of the cleanest and most readily available forms of actuator because it does not directly involve oil or other fossil fuels.

Thermal or magnetic actuators using shape-memory alloys:  Actuators driven by thermal or magnetic energy are used in commercial applications. Thermal actuators tend to be compact, lightweight, and economical, with high power density. These actuators use shape-memory materials (SMMs), such as shape-memory alloys (SMAs) or magnetic shape-memory alloys (MSMAs). Popular manufacturers of these devices include the Finnish Modti Inc., the American Dynalloy, and Rotork.

Mechanical actuators:  A mechanical actuator functions to execute movement by converting one kind of motion, such as rotary motion, into another kind, such as linear motion. An example is a rack and pinion. The operation of mechanical actuators is based on combinations of structural components, such as gears and rails, or pulleys and chains.
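The rack-and-pinion conversion above reduces to two ideal formulas: one pinion rotation advances the rack by the pinion's circumference, and rack force is torque divided by pinion radius. A hedged sketch (figures invented for illustration; friction and backlash ignored):

```python
import math

def rack_travel(pinion_radius_m, rotations):
    """Linear rack travel produced by rotating the pinion.

    One full rotation advances the rack by the pinion's circumference
    (2 * pi * r), converting rotary motion into linear motion.
    """
    return 2 * math.pi * pinion_radius_m * rotations

def rack_force(torque_nm, pinion_radius_m):
    """Force delivered to the rack from torque at the pinion (ideal, lossless)."""
    return torque_nm / pinion_radius_m

travel = rack_travel(0.02, 1.5)   # 20 mm pinion, 1.5 turns -> ~0.19 m of travel
force = rack_force(10.0, 0.02)    # 10 N·m at the same pinion -> 500 N on the rack
print(travel, force)
```

The trade-off is the same as in any gearing: a smaller pinion gives more force but less travel per turn.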

3D Printed Soft Actuators

Soft actuators are being developed to handle tasks that have always been challenging for robotics, such as harvesting fragile fruit in agriculture or manipulating internal organs in biomedicine. Unlike conventional actuators, soft actuators produce flexible motion by integrating microscopic changes at the molecular level into a macroscopic deformation of the actuator material.

The majority of existing soft actuators are fabricated using multistep, low-yield processes such as micro-moulding, solid freeform fabrication, and mask lithography. These methods require manual fabrication, post-processing and assembly, and lengthy iteration before the fabrication matures. To avoid these tedious and time-consuming steps, researchers are exploring manufacturing approaches better suited to soft actuators. Soft systems that can be fabricated in a single step by rapid prototyping methods, such as 3D printing, narrow the gap between the design and implementation of soft actuators, making the process faster, less expensive, and simpler. 3D printing also allows all actuator components to be incorporated into a single structure, eliminating external joints, adhesives, and fasteners, and reducing the number of discrete parts, post-processing steps, and fabrication time.

3D printed soft actuators are classified into two main groups: “semi 3D printed soft actuators” and “3D printed soft actuators.” The distinction separates actuators fabricated entirely by 3D printing from those whose parts are 3D printed and then post-processed or assembled. It also highlights the advantage of fully 3D printed soft actuators: they can operate without any further assembly.

Shape memory polymer (SMP) actuators are the most similar to our muscles, responding to a range of stimuli such as light, electrical input, magnetic fields, heat, pH, and moisture changes. They have some deficiencies, including fatigue and slow response, which have been mitigated by introducing smart materials and combining different materials through advanced fabrication technology. The advent of 3D printers has opened a new pathway for fabricating low-cost, fast-response SMP actuators. The shape change an SMP undergoes on receiving an external stimulus such as heat, moisture, electrical input, light, or a magnetic field is referred to as the shape memory effect (SME). SMPs exhibit rewarding features such as low density, high strain recovery, biocompatibility, and biodegradability.

Photopolymers/light-activated polymers (LAPs) are another type of SMP, activated by light. LAP actuators respond almost instantly and can be controlled remotely, without any physical contact, simply by varying the frequency or intensity of the light.

The need for soft, lightweight, and biocompatible actuators in soft robotics has led researchers to devise pneumatic soft actuators, because of their intrinsically compliant nature and their ability to produce muscle-like tension.

Polymers such as dielectric elastomers (DE), ionic polymer-metal composites (IPMC), ionic electroactive polymers, polyelectrolyte gels, and gel-metal composites are common materials for forming 3D layered structures that can be tailored to work as soft actuators. Electroactive polymer (EAP) actuators are 3D printed soft actuators that respond to electrical excitation by changing shape.

Examples of Actuators

  • Comb drive
  • Digital micromirror device
  • Electric motor
  • Electroactive polymer
  • Hydraulic cylinder
  • Piezoelectric actuator
  • Pneumatic actuator
  • Screw jack
  • Servomechanism
  • Solenoid
  • Stepper motor
  • Shape-memory alloy
  • Thermal bimorph


Thursday, December 21, 2017

Boeing X-20 Spaceplane

The Boeing X-20 Dyna-Soar ("Dynamic Soarer") was a United States Air Force (USAF) program to develop a spaceplane that could be used for a variety of military missions, including aerial reconnaissance, bombing, space rescue, satellite maintenance, and as a space interceptor to sabotage enemy satellites. The program ran from October 24, 1957 to December 10, 1963, cost US$660 million ($5.16 billion today), and was cancelled just after spacecraft construction had begun.

Other spacecraft under development at the time, such as Mercury or Vostok, were based on space capsules that returned on ballistic re-entry profiles. Dyna-Soar was more like the much later Space Shuttle: it could not only travel to distant targets at the speed of an intercontinental ballistic missile, it was designed to glide back to Earth like an aircraft under the control of a pilot. It could land at an airfield, rather than simply falling to Earth and landing with a parachute. Dyna-Soar could also reach Earth orbit, like Mercury or Gemini.

These characteristics made Dyna-Soar a far more advanced concept than other human spaceflight missions of the period. The spaceplane concept was realized much later, in other reusable spacecraft such as the Space Shuttle, which had its first orbital flight in 1981, and, more recently, the Boeing X-40 and X-37B spacecraft.


Besides the funding issues that often accompany research efforts, the Dyna-Soar program suffered from two major problems: uncertainty over the booster to be used to send the craft into orbit, and a lack of a clear goal for the project.

Many different boosters were proposed to launch Dyna-Soar into orbit. The original USAF proposal suggested LOX/JP-4, fluorine-ammonia, fluorine-hydrazine, or RMI (X-15) engines. Boeing, the principal contractor, favored an Atlas-Centaur combination. Eventually (Nov 1959) the Air Force stipulated a Titan, as suggested by failed competitor Martin, but the Titan I was not powerful enough to launch the five-ton X-20 into orbit.

The Titan II and Titan III boosters could launch Dyna-Soar into Earth orbit, as could the Saturn C-1 (later renamed the Saturn I), and all were proposed with various upper-stage and booster combinations. While the new Titan IIIC was eventually chosen (Dec 1961) to send Dyna-Soar into space, the vacillations over the launch system delayed the project as it complicated planning.

The original intention for Dyna-Soar, outlined in the Weapons System 464L proposal, called for a project combining aeronautical research with weapons system development. Many questioned whether the USAF should have a manned space program, when that was the primary domain of NASA. It was frequently emphasized by the U.S. Air Force that, unlike the NASA programs, Dyna-Soar allowed for controlled re-entry, and this was where the main effort in the X-20 program was placed.

On January 19, 1963, the Secretary of Defense, Robert McNamara, directed the U.S. Air Force to undertake a study to determine whether Gemini or Dyna-Soar was the more feasible approach to a space-based weapon system. In the middle of March 1963, after receiving the study, Secretary McNamara "stated that the Air Force had been placing too much emphasis on controlled re-entry when it did not have any real objectives for orbital flight". This was seen as a reversal of the Secretary's earlier position on the Dyna-Soar program.

Dyna-Soar was also an expensive program that would not launch a manned mission until the mid-1960s at the earliest. This high cost and questionable utility made it difficult for the U.S. Air Force to justify the program. Eventually, the X-20 Dyna-Soar program was canceled on December 10, 1963.

On the day that X-20 was canceled, the U.S. Air Force announced another program, the Manned Orbiting Laboratory, a spin-off of Gemini. This program was also eventually canceled. Another black program, ISINGLASS, which was to be air-launched from a B-52 bomber, was evaluated and some engine work done, but it too was eventually cancelled.


Despite cancellation of the X-20, the affiliated research on spaceplanes influenced the much larger Space Shuttle. The final design also used delta wings for controlled landings. The later, and much smaller Soviet BOR-4 was closer in design philosophy to the Dyna-Soar, while NASA's Martin X-23 PRIME and Martin Marietta X-24A/HL-10 research aircraft also explored aspects of sub-orbital and space flight. The ESA proposed Hermes manned space craft took the design and expanded its scale.

Wednesday, December 20, 2017

Certain Brain Data Decoded

WWII Code-Breaking Techniques
Inspire Interpretation of Brain Data

Atlanta, GA -- December 18, 2017 -- Cracking the German Enigma code is considered to be one of the decisive factors that hastened Allied victory in World War II. Now researchers have used similar techniques to crack some of the brain’s mysterious code.

By statistically analyzing clues intercepted through espionage, computer science pioneers in the 1940s were able to work out the rules of the Enigma code, turning a string of gibberish characters into plain language to expose German war communications. And today, a team that included computational neuroscientist Eva Dyer, who recently joined the Georgia Institute of Technology, used cryptographic techniques inspired by Enigma’s decrypting to predict, from brain data alone, which direction subjects will move their arms.

The work by researchers from the University of Pennsylvania, Georgia Tech, and Northwestern University could eventually help decode the neural activity underpinning more complex muscle movements and become useful in prosthetics, or even speech, to aid patients with paralysis.

During the war, the team that cracked Enigma, led by Alan Turing, considered the forebear of modern computer science, analyzed the statistical prevalence of certain letters of the alphabet to understand how they were distributed in messages like points on a map. That allowed the code breakers to eventually decipher whole words reliably.

In a similar manner, the neurological research team has now mapped the statistical distribution of more prevalent and less prevalent activities in populations of motor neurons to arrive at the specific hand movements driven by that neural activity.

The research team was led by University of Pennsylvania professor Konrad Kording and Eva Dyer, formerly a postdoctoral researcher in Kording’s lab and now an assistant professor at Georgia Tech. They collaborated with the group of Lee Miller, a professor at Northwestern University. They published their study on December 12, 2017, in the journal Nature Biomedical Engineering.
Neuron firing pattern

In an experiment conducted in animal models, the researchers took data from more than one hundred neurons associated with arm movement. As the animals reached for a target that appeared at different locations around a central starting point, sensors recorded spikes of neural activity that corresponded with the movement of the subject’s arm.

“Just looking at the raw neural activity on a visual level tells you basically nothing about the movements it corresponds to, so you have to decode it to make the connection,” Dyer said. “We did it by mapping neural patterns to actual arm movements using machine learning techniques inspired by cryptography.”

The statistical prevalence of certain neurons’ firings paired up reliably and repeatedly with actual movements the way that, in the Enigma project, the prevalence of certain code symbols paired up with the frequency of use of specific letters of the alphabet in written language. In the neurological experiment, an algorithm translated the statistical patterns into visual graphic patterns, and eventually, these aligned with the physical hand movements that they aimed to decode.
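The analogy can be made concrete with the simplest frequency-attacked cipher. This sketch (a Caesar shift, far simpler than Enigma, with textbook English letter frequencies) tries every possible decoder and keeps the output whose letter statistics look most like plain language, the same "try decoders until the output looks typical" loop the researchers describe:

```python
from collections import Counter

# Approximate letter frequencies of typical English text (a-z).
ENGLISH_FREQ = dict(zip(
    'abcdefghijklmnopqrstuvwxyz',
    [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
     0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
     0.063, 0.091, 0.028, 0.010, 0.024, 0.001, 0.020, 0.001]))

def shift(text, k):
    """Caesar-shift lowercase letters by k positions; leave everything else alone."""
    return ''.join(chr((ord(c) - 97 + k) % 26 + 97) if c.islower() else c
                   for c in text)

def englishness(text):
    """Chi-squared distance from English letter statistics (lower = more English-like)."""
    letters = [c for c in text if c.isalpha()]
    counts = Counter(letters)
    n = len(letters) or 1
    return sum((counts.get(l, 0) / n - f) ** 2 / f for l, f in ENGLISH_FREQ.items())

def crack(ciphertext):
    """Try every possible decoder and keep the one whose output looks like English."""
    return min((shift(ciphertext, k) for k in range(26)), key=englishness)

plain = "the statistical prevalence of certain letters exposes the hidden message"
print(crack(shift(plain, 3)))
```

In the neural version, the "alphabet" is the population of motor neurons, the "letter frequencies" are firing statistics, and "looks like English" becomes "looks like typical arm movements."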

“The algorithm tries every possible decoder until we get something where the output looks like typical movements,” Kording said. “There are issues scaling this up — it’s a hard computer science problem — but this is a proof-of-concept that cryptanalysis can work in the context of neural activity.”

“At this point, the cryptanalysis approach is very new and needs refining, but fundamentally, it’s a good match for this kind of brain decoding,” Dyer said.

Brain decoding does face a fundamental challenge that code-breaking doesn't.

In cryptography, code-breakers have both the encrypted and unencrypted messages, so all they need to do is to figure out which rules turn one into the other. "What we wanted to do in this experiment was to be able to decode the brain from the encrypted message alone,” Kording said.

Tuesday, December 19, 2017

Basics of Troubleshooting

Troubleshooting or dépanneuring is a form of problem solving, often applied to repair failed products or processes on a machine or a system. It is a logical, systematic search for the source of a problem in order to solve it, and make the product or process operational again. Troubleshooting is needed to identify the symptoms. Determining the most likely cause is a process of elimination—eliminating potential causes of a problem. Finally, troubleshooting requires confirmation that the solution restores the product or process to its working state.

In general, troubleshooting is the identification or diagnosis of "trouble" in the management flow of a corporation or a system caused by a failure of some kind. The problem is initially described as symptoms of malfunction, and troubleshooting is the process of determining and remedying the causes of these symptoms.

A system can be described in terms of its expected, desired or intended behavior (usually, for artificial systems, its purpose). Events or inputs to the system are expected to generate specific results or outputs. (For example, selecting the "print" option from various computer applications is intended to result in a hardcopy emerging from some specific device). Any unexpected or undesirable behavior is a symptom. Troubleshooting is the process of isolating the specific cause or causes of the symptom. Frequently the symptom is a failure of the product or process to produce any results. (Nothing was printed, for example). Corrective action can then be taken to prevent further failures of a similar kind.

The methods of forensic engineering are especially useful in tracing problems in products or processes, and a wide range of analytical techniques are available to determine the cause or causes of specific failures. Corrective action can then be taken to prevent further failure of a similar kind. Preventative action is possible using failure mode and effects (FMEA) and fault tree analysis (FTA) before full-scale production, and these methods can also be used for failure analysis.

Aspects of Troubleshooting

Usually troubleshooting is applied to something that has suddenly stopped working, since its previously working state forms the expectations about its continued behavior. So the initial focus is often on recent changes to the system or to the environment in which it exists. (For example, a printer that "was working when it was plugged in over there"). However, there is a well-known principle that correlation does not imply causation. (For example, the failure of a device shortly after it has been plugged into a different outlet doesn't necessarily mean that the events were related; the failure could have been a coincidence.) Therefore, troubleshooting demands critical thinking rather than magical thinking.

It is useful to consider the common experience we have with light bulbs. Light bulbs "burn out" more or less at random; eventually the repeated heating and cooling of the filament, and fluctuations in the power supplied to it, cause the filament to crack or vaporize. The same principle applies to most other electronic devices, and similar principles apply to mechanical devices. Some failures are part of the normal wear and tear of components in a system.

A basic principle in troubleshooting is to check the simplest and most probable causes first. This is illustrated by the old saying "When you see hoof prints, look for horses, not zebras," or, to use another maxim, the KISS principle. The same principle explains the common complaint about help desks or manuals, that they sometimes first ask: "Is it plugged in, and does that receptacle have power?" This should not be taken as an affront; rather, it should serve as a reminder to always check the simple things first before calling for help.

A troubleshooter could check each component in a system one by one, substituting known good components for each potentially suspect one. However, this process of "serial substitution" can be considered degenerate when components are substituted without regard to a hypothesis concerning how their failure could result in the symptoms being diagnosed.

Simple and intermediate systems are characterized by lists or trees of dependencies among their components or subsystems. More complex systems contain cyclical dependencies or interactions (feedback loops). Such systems are less amenable to "bisection" troubleshooting techniques.
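When the dependencies do form a simple ordered chain, the "bisection" technique halves the number of suspects with each check. A sketch, assuming a single fault and no feedback loops (the pipeline and stage names are hypothetical):

```python
def first_bad_stage(stages, works_up_to):
    """Bisect an ordered dependency chain to find the first failing stage.

    `works_up_to(i)` reports whether the system behaves correctly when only
    stages[0..i] are exercised. Assumes a single fault and a linear chain --
    exactly the conditions under which bisection troubleshooting applies.
    """
    lo, hi = 0, len(stages) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if works_up_to(mid):
            lo = mid + 1   # everything through mid works: fault lies after it
        else:
            hi = mid       # failure already visible: fault is at or before mid
    return stages[lo]

# Hypothetical print pipeline; stage 2 ("driver") is the silent failure,
# so only stages 0 and 1 check out.
pipeline = ["application", "spooler", "driver", "cable", "printer"]
print(first_bad_stage(pipeline, lambda i: i < 2))  # -> driver
```

Five stages take at most three checks instead of five; the advantage grows logarithmically with chain length, which is why the technique breaks down once cycles make "works up to here" ill-defined.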

It also helps to start from a known good state, the best example being a computer reboot. A cognitive walkthrough is also a good thing to try. Comprehensive documentation produced by proficient technical writers is very helpful, especially if it provides a theory of operation for the subject device or system.

A common cause of problems is bad design, for example bad human factors design, where a device could be inserted backward or upside down due to the lack of an appropriate forcing function (behavior-shaping constraint), or a lack of error-tolerant design. This is especially bad if accompanied by habituation, where the user just doesn't notice the incorrect usage, for instance if two parts have different functions but share a common case so that it is not apparent on a casual inspection which part is being used.

Troubleshooting can also take the form of a systematic checklist, troubleshooting procedure, flowchart or table that is made before a problem occurs. Developing troubleshooting procedures in advance allows sufficient thought about the steps to take in troubleshooting and organizing the troubleshooting into the most efficient troubleshooting process. Troubleshooting tables can be computerized to make them more efficient for users.

Some computerized troubleshooting services (such as Primefax, later renamed Maxserve), immediately show the top 10 solutions with the highest probability of fixing the underlying problem. The technician can either answer additional questions to advance through the troubleshooting procedure, each step narrowing the list of solutions, or immediately implement the solution he feels will fix the problem. These services give a rebate if the technician takes an additional step after the problem is solved: report back the solution that actually fixed the problem. The computer uses these reports to update its estimates of which solutions have the highest probability of fixing that particular set of symptoms.
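That report-back loop can be sketched as a simple smoothed frequency estimate (the symptom set, solution names, counts, and Laplace smoothing are invented for illustration; the actual Primefax/Maxserve algorithms are not described in the text):

```python
def rank_solutions(history):
    """Rank candidate solutions for a symptom set by estimated fix probability.

    `history` maps solution -> (times reported as the actual fix,
    times the solution was tried). Laplace smoothing (+1 / +2) keeps
    rarely tried solutions from being pinned at probability 0 or 1.
    """
    rates = {s: (fixes + 1) / (tries + 2) for s, (fixes, tries) in history.items()}
    return sorted(rates, key=rates.get, reverse=True)

# Hypothetical technician reports for one set of printer symptoms.
reports = {
    "reseat cable":  (8, 10),   # fixed it 8 of 10 times -> estimate 9/12
    "replace fuser": (2, 10),   # fixed it 2 of 10 times -> estimate 3/12
    "update driver": (1, 2),    # barely tried yet       -> estimate 2/4
}
print(rank_solutions(reports))
```

Each new report shifts the estimates, which is exactly why the rebate for reporting back is worth paying: the data improves every later technician's top-10 list.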

Tiny Fossils "Talk" to Scientists

Ancient Fossil Microorganisms Indicate
that Life in the Universe is Common
UCLA and University of Wisconsin scientists
analyze specimens from 3.465 billion years ago
By Stuart Wolpert

December 18, 2017  -- A new analysis of the oldest known fossil microorganisms provides strong evidence to support an increasingly widespread understanding that life in the universe is common.

The microorganisms, from Western Australia, are 3.465 billion years old. Scientists from UCLA and the University of Wisconsin–Madison report today in the journal Proceedings of the National Academy of Sciences that two of the species they studied appear to have performed a primitive form of photosynthesis, another apparently produced methane gas, and two others appear to have consumed methane and used it to build their cell walls.

The evidence that a diverse group of organisms had already evolved extremely early in the Earth’s history — combined with scientists’ knowledge of the vast number of stars in the universe and the growing understanding that planets orbit so many of them — strengthens the case for life existing elsewhere in the universe because it would be extremely unlikely that life formed quickly on Earth but did not arise anywhere else.

“By 3.465 billion years ago, life was already diverse on Earth; that’s clear — primitive photosynthesizers, methane producers, methane users,” said J. William Schopf, a professor of paleobiology in the UCLA College, and the study’s lead author. “These are the first data that show the very diverse organisms at that time in Earth’s history, and our previous research has shown that there were sulfur users 3.4 billion years ago as well.

“This tells us life had to have begun substantially earlier and it confirms that it was not difficult for primitive life to form and to evolve into more advanced microorganisms.”

Schopf said scientists still do not know how much earlier life might have begun.

“But, if the conditions are right, it looks like life in the universe should be widespread,” he said.

The study is the most detailed ever conducted on microorganisms preserved in such ancient fossils. Researchers led by Schopf first described the fossils in the journal Science in 1993, and then substantiated their biological origin in the journal Nature in 2002. But the new study is the first to establish what kind of biological microbial organisms they are, and how advanced or primitive they are.

For the new research, Schopf and his colleagues analyzed the microorganisms with a cutting-edge technology called secondary ion mass spectrometry, or SIMS, which reveals the ratio of carbon-12 to carbon-13 isotopes — information scientists can use to determine how the microorganisms lived. (Photosynthetic bacteria have different carbon signatures from methane producers and consumers, for example.) In 2000, Schopf became the first scientist to use SIMS to analyze microscopic fossils preserved in rocks; he said the technology will likely be used to study samples brought back from Mars for signs of life.

The Wisconsin researchers, led by geoscience professor John Valley, used a secondary ion mass spectrometer — one of just a few in the world — to separate the carbon from each fossil into its constituent isotopes and determine their ratios.

“The differences in carbon isotope ratios correlate with their shapes,” Valley said. “Their C-13-to-C-12 ratios are characteristic of biology and metabolic function.”
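The isotope signal Valley describes is conventionally reported as delta-13C, the per-mil deviation of a sample's 13C/12C ratio from a reference standard. A hedged sketch of the arithmetic (the standard ratio is approximate, and the sample ratios are illustrative, not the paper's data):

```python
# 13C/12C of the VPDB reference standard (approximate; published values differ slightly).
VPDB = 0.011180

def delta13c(sample_ratio):
    """Delta-13C in per mil: deviation of a sample's 13C/12C ratio from the standard.

    Biological carbon fixation prefers the lighter 12C, so living processes
    leave characteristically negative delta-13C values -- the kind of
    metabolic fingerprint SIMS reads out of each fossil.
    """
    return (sample_ratio / VPDB - 1) * 1000

# Illustrative ratios only (not measurements from the study):
for name, ratio in [("abiotic carbonate", 0.011180),
                    ("photosynthesizer", 0.010900),
                    ("methane consumer", 0.010620)]:
    print(f"{name:18s} delta-13C = {delta13c(ratio):+6.1f} per mil")
```

The point of the correlation Valley cites is that each metabolism fractionates carbon by a different amount, so the ratio alone narrows down how an organism lived.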

The fossils were formed at a time when there was very little oxygen in the atmosphere, Schopf said. He thinks advanced photosynthesis had not yet evolved, and that oxygen first appeared on Earth approximately half a billion years later, with its concentration in the atmosphere increasing rapidly starting about 2 billion years ago. Oxygen would have been poisonous to these microorganisms, and would have killed them, he said.

Primitive photosynthesizers are fairly rare on Earth today because they exist only in places where there is light but no oxygen — normally there is abundant oxygen anywhere there is light. And the existence of the rocks the scientists analyzed is also rather remarkable: The average lifetime of a rock exposed on the surface of the Earth is about 200 million years, Schopf said, adding that when he began his career, there was no fossil evidence of life dating back farther than 500 million years ago.

“The rocks we studied are about as far back as rocks go.”

While the study strongly suggests the presence of primitive life forms throughout the universe, Schopf said the presence of more advanced life is very possible but less certain.

One of the paper’s co-authors is Anatoliy Kudryavtsev, a senior scientist at UCLA’s Center for the Study of Evolution and the Origin of Life, of which Schopf is director. The research was funded by the NASA Astrobiology Institute.

In May 2017, a paper in PNAS by Schopf, UCLA graduate student Amanda Garcia and colleagues in Japan showed the Earth’s near-surface ocean temperature has dramatically decreased over the past 3.5 billion years. The work was based on their analysis of a type of ancient enzyme present in virtually all organisms.

In 2015, Schopf was part of an international team of scientists that described in PNAS their discovery of the greatest absence of evolution ever reported — a type of deep-sea microorganism that appears not to have evolved over more than 2 billion years.

Monday, December 18, 2017

How Cancers May Get Started

Study Prompts New Ideas on Cancers’ Origins
Mouse study shows mature cells can create precancerous lesions
by Jim Dryden

December 15, 2017 -- Rapidly dividing, yet aberrant stem cells are a major source of cancer. But a new study suggests that mature cells also play a key role in initiating cancer — a finding that could upend the way scientists think about the origins of the disease.

Researchers at Washington University School of Medicine in St. Louis have found that mature cells have the ability to revert back to behaving more like rapidly dividing stem cells. However, when old cells return to a stem cell-like status, they can carry with them all of the mutations that have accumulated to date, predisposing some of those cells to developing into precancerous lesions.

The new study is published online in the journal Gastroenterology.

“As scientists, we have focused a good deal of attention on understanding the role of stem cells in the development of cancers, but there hasn’t been a focus on mature cells,” said senior investigator Jason C. Mills, MD, PhD, a professor of medicine in the Division of Gastroenterology. “But it appears when mature cells return back into a rapidly dividing stem cell state, this creates problems that can lead to cancer.”

The findings, in mice and in human stomach cells, also raise questions about how cancer cells may evade treatment.

Most cancer therapies are aimed at halting cancer growth by stopping cells from rapidly dividing. Such treatments typically attack stem cells but would not necessarily prevent mature cells from reverting to stem cell-like status.

“Cancer therapies target stem cells because they divide a lot, but if mature cells are being recruited to treat injuries, then those therapies won’t touch the real problem,” said first author Megan Radyk, a graduate student in Mills’ laboratory. “If cancer recurs, it may be because the therapy didn’t hit key mature cells that take on stem cell-like behavior. That can lead to the development of precancerous lesions and, potentially, cancer.”

Studying mice with injuries to the lining of the stomach, the researchers blocked the animals’ ability to call on stem cells for help in the stomach. They focused on the stomach both because Mills is co-director of Washington University’s NIH-supported Digestive Disease Center and because the anatomy in the stomach makes it easier to distinguish stem cells from mature cells that perform specific tasks. Even without stem cells, the mice developed a precancerous condition because mature stomach cells reverted back to a stem cell state to heal the injury.

Analyzing tissue specimens from 10 people with stomach cancer, the researchers found evidence that those same mature cells in the stomach also had reverted to a stem cell-like state and had begun to change and divide rapidly.

The Mills lab is working now to identify drugs that may block the precancerous condition by preventing mature cells from proliferating and dividing.

“Knowing these cells are leading to increased cancer risk may allow us to find drugs to keep mature cells from starting to divide and multiply,” Mills said. “That may be important in preventing cancer not only in the stomach and GI tract but throughout the body.”

Sunday, December 17, 2017

Closer to A.I.: Neuromorphic Chips

Neuromorphic Chips
Microprocessors configured more like brains than traditional chips could soon make computers far more astute about what’s going on around them.
by Robert D. Hof, MIT Technology Review, May/June 2014

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location — not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.
Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.
Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.
Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.
Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.
No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.
Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
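The spiking, event-driven processing described above can be illustrated with a leaky integrate-and-fire neuron, a standard textbook abstraction in neuromorphic research (this is a generic sketch, not Qualcomm's proprietary design): the neuron's membrane potential integrates weighted input spikes, leaks away over time, and fires when it crosses a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a common abstraction in
# neuromorphic research. The membrane potential integrates weighted input
# spikes, decays ("leaks") each time step, and the neuron emits an output
# spike when the potential crosses a threshold, then resets.

def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a train of binary input spikes.

    Returns the list of time steps at which the neuron fired.
    """
    potential = 0.0
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike  # leak, then integrate
        if potential >= threshold:                     # threshold crossing
            output_spikes.append(t)
            potential = 0.0                            # reset after firing
    return output_spikes

# A dense burst of input spikes drives the potential over threshold,
# while sparse, isolated spikes simply leak away without output.
print(lif_neuron([1, 1, 0, 0, 0, 1, 1, 1, 0, 0]))  # → [1, 6]
```

Learning, in this picture, amounts to adjusting the synaptic weights based on spike timing, which is what lets such networks adapt to changing inputs without explicit reprogramming.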
Even if neuromorphic chips are nowhere near as capable as the brain, they should be much faster than current computers at processing sensory data and learning from it. Trying to emulate the brain just by using special software on conventional processors—the way Google did in its cat experiment—is way too inefficient to be the basis of machines with still greater intelligence, says Jeff Hawkins, a leading thinker on AI who created the Palm Pilot before cofounding Numenta, a maker of brain-inspired software. “There’s no way you can build it [only] in software,” he says of effective AI. “You have to build this in silicon.”
Neural Channel
As smartphones have taken off, so has Qualcomm, whose market capitalization now tops Intel’s. That’s thanks in part to the hundreds of wireless-communications patents that Qualcomm shows off on two levels of a seven-story atrium lobby at its San Diego headquarters. Now it’s looking to break new ground again. First in coöperation with Brain Corp., a neuroscience startup it invested in and that is housed at its headquarters, and more recently with its own growing staff, it has been quietly working for the past five years on algorithms to mimic brain functions as well as hardware to execute them. The Zeroth project has initially focused on robotics applications because the way robots can interact with the real world provides broader lessons about how the brain learns—lessons that can then be applied in smartphones and other products. Its name comes from Isaac Asimov’s “Zeroth Law” of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
The idea of neuromorphic chips dates back decades. Carver Mead, the Caltech professor emeritus who is a legend in integrated-circuit design, coined the term in a 1990 paper, describing how analog chips—those that vary in their output, like real-world phenomena, in contrast to the binary, on-or-off nature of digital chips—could mimic the electrical activity of neurons and synapses in the brain. But he struggled to find ways to reliably build his analog chip designs. Only one arguably neuromorphic processor, a noise suppression chip made by Audience, has sold in the hundreds of millions. The chip, which is based on the human cochlea, has been used in phones from Apple, Samsung, and others.
As a commercial company, Qualcomm has opted for pragmatism over sheer performance in its design. That means the neuromorphic chips it’s developing are still digital chips, which are more predictable and easier to manufacture than analog ones. And instead of modeling the chips as closely as possible on actual brain biology, Qualcomm’s project emulates aspects of the brain’s behavior. For instance, the chips encode and transmit data in a way that mimics the electrical spikes generated in the brain as it responds to sensory information. “Even with this digital representation, we can reproduce a huge range of behaviors we see in biology,” says M. Anthony Lewis, the project engineer for Zeroth.
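One simple way a digital chip can mimic spike-based signaling of the kind Lewis describes is rate coding: representing a sensory intensity as the density of binary spikes in a time window. The sketch below is a generic textbook scheme for illustration only, not Qualcomm's actual encoding.

```python
import random

# Rate coding: represent a normalized sensory intensity (0.0 to 1.0) as a
# binary spike train whose average spike density matches the intensity.
# A generic illustration of spike-based signaling, not Qualcomm's scheme.

def rate_encode(intensity, n_steps=1000, seed=42):
    """Encode an intensity as a pseudo-random binary spike train."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [1 if rng.random() < intensity else 0 for _ in range(n_steps)]

def rate_decode(spike_train):
    """Recover the intensity as the fraction of time steps that spiked."""
    return sum(spike_train) / len(spike_train)

spikes = rate_encode(0.3)
print(rate_decode(spikes))  # close to 0.3
```

Because information is carried in sparse events rather than continuous values, circuits built this way can stay idle between spikes, which is one reason spike-based designs are attractive for power-constrained mobile hardware.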
The chips would fit neatly into the existing business of Qualcomm, which dominates the market for mobile-phone chips but has seen revenue growth slow. Its Snapdragon mobile-phone chips include components such as graphics processing units; Qualcomm could add a “neural processing unit” to the chips to handle sensory data and tasks such as image recognition and robot navigation. And given that Qualcomm has a highly profitable business of licensing technologies to other companies, it would be in a position to sell the rights to use algorithms that run on neuromorphic chips. That could lead to sensor chips for vision, motion control, and other applications.
Cognitive Companion
Matthew Grob was startled, then annoyed, when he heard the theme to Sanford and Son start playing in the middle of a recent meeting. It turns out that on a recent trip to Spain, he had set his smartphone to issue a reminder using the tune as an alarm, and the phone thought it was time to play it again. That’s just one small example of how far our personal devices are from being intelligent. Grob dreams of a future when instead of monkeying with the settings of his misbehaving phone, as he did that day, all he would have to do is bark, “Don’t do that!” Then the phone might learn that it should switch off the alarm when he’s in a new time zone.
Qualcomm is especially interested in the possibility that neuromorphic chips could transform smartphones and other mobile devices into cognitive companions that pay attention to your actions and surroundings and learn your habits over time. “If you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs,” says Samir Kumar, a business development director at Qualcomm’s research lab.
Pressed for examples, Kumar ticks off a litany: If you tag your dog in a photo, your phone’s camera would recognize the pet in every subsequent photo. At a soccer game, you could tell the phone to snap a photo only when your child is near the goal. At bedtime, it would know without your telling it to send calls to voice mail. In short, says Grob, your smartphone would have a digital sixth sense.
Qualcomm executives are reluctant to embark on too many flights of fancy before their chip is even available. But neuromorphic researchers elsewhere don’t mind speculating. According to Dharmendra Modha, a top IBM researcher in San Jose, such chips might lead to glasses for the blind that use visual and auditory sensors to recognize objects and provide audio cues; health-care systems that monitor vital signs, provide early warnings of potential problems, and suggest ways to individualize treatments; and computers that draw on wind patterns, tides, and other indicators to predict tsunamis more accurately. At HRL this summer, principal research scientist Narayan Srinivasa plans to test a neuromorphic chip in a bird-size device from AeroVironment that will be flown around a couple of rooms. It will take in data from cameras and other sensors so it can remember which room it’s in and learn to navigate that space more adeptly, which could lead to more capable drones.
It will take programmers time to figure out the best way to exploit the hardware. “It’s not too early for hardware companies to do research,” says Dileep George, cofounder of the artificial-intelligence startup Vicarious. “The commercial products could take a while.” Qualcomm executives don’t disagree. But they’re betting that the technology they expect to launch this year will bring those products a lot closer to reality.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

See also the neuromorphic engineering article in Wikipedia.

See also this review of neuromorphic chips by a PhD candidate.