Sunday, April 30, 2017

Want a Good Job? Know This

Get familiar with at least one of these computer programming languages:  SQL (Structured Query Language, the database language underlying Microsoft Access), C#, Java, JavaScript, or Python.
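As a first taste of what that looks like in practice, here is a minimal sketch combining two of the listed languages: Python's built-in sqlite3 module running a SQL query. The jobs table and its rows are invented purely for illustration.

```python
import sqlite3

# In-memory database with a hypothetical jobs table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT, salary INTEGER, needs_coding INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?)",
                 [("Data Analyst", 65000, 1),
                  ("Web Developer", 72000, 1),
                  ("Receptionist", 31000, 0)])

# A SQL query: well-paying jobs that commonly expect coding skill.
rows = conn.execute(
    "SELECT title, salary FROM jobs"
    " WHERE salary >= 57000 AND needs_coding = 1"
    " ORDER BY salary DESC").fetchall()
print(rows)  # [('Web Developer', 72000), ('Data Analyst', 65000)]
```

The same SELECT ... WHERE ... ORDER BY pattern is the day-to-day core of SQL work in tools like Access.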

“Roughly half of the jobs in the top income quartile — defined as those paying $57,000 or more per year — are in occupations that commonly require applicants to have at least some computer coding knowledge or skill…”

Saturday, April 29, 2017

Unscientific Alternative Medicine

The history of alternative medicine refers to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment. It includes the histories of complementary medicine and of integrative medicine. "Alternative medicine" is a loosely defined and very diverse set of products, practices, and theories that are perceived by its users to have the healing effects of medicine, but do not originate from evidence gathered using the scientific method, are not part of biomedicine, or are contradicted by scientific evidence or established science. "Biomedicine" is that part of medical science that applies principles of anatomy, physics, chemistry, biology, physiology, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice.

Much of what is now categorized as alternative medicine was developed as independent, complete medical systems, developed long before biomedicine and the use of scientific methods, in relatively isolated regions of the world where there was little or no medical contact with pre-scientific western medicine or with each other's systems. Examples are Traditional Chinese medicine and the Ayurvedic medicine of India. Other alternative medicine practices, such as homeopathy, were developed in Western Europe in opposition to western medicine, at a time when western medicine was based on unscientific theories that were dogmatically imposed by western religious authorities. Homeopathy was developed prior to the discovery of the basic principles of chemistry, which later proved that homeopathic remedies contained nothing but water. But homeopathy, with its remedies made of water, was harmless compared to the unscientific and dangerous orthodox western medicine practiced at that time, which included the use of toxins and the draining of blood, often resulting in permanent disfigurement or death. Other alternative practices, such as chiropractic and osteopathic manipulative medicine, were developed in the United States at a time when western medicine was beginning to incorporate scientific methods and theories but the biomedical model was not yet totally dominant. Practices such as chiropractic and osteopathy, each considered irregular by the medical establishment, also opposed each other, both rhetorically and politically through licensing legislation. Osteopathic practitioners added the courses and training of biomedicine to their licensing, and holders of the Doctor of Osteopathic Medicine degree gradually abandoned the field's unscientific origins; without the original practices and theories, osteopathic medicine is now considered the same as biomedicine.

Alternative Medicine Relative to Scientific Medicine

The term alternative medicine refers to systems of medical thought and practice which function as alternatives to or subsist outside of conventional, mainstream medicine. Alternative medicine cannot exist absent an established, authoritative and stable medical orthodoxy to which it can function as an alternative. Such orthodoxy was only established in the West during the nineteenth century through processes of regulation, association, institution building and systematized medical education.

Friday, April 28, 2017

Positivism as a Philosophy

Positivism is a philosophical theory stating that certain ("positive") knowledge is based on natural phenomena and their properties and relations. Thus, information derived from sensory experience, interpreted through reason and logic, forms the exclusive source of all certain knowledge. Positivism holds that valid knowledge (certitude or truth) is found only in this a posteriori knowledge.

Verified data (positive facts) received from the senses are known as empirical evidence; thus positivism is based on empiricism.

Positivism also holds that society, like the physical world, operates according to general laws. Introspective and intuitive knowledge is rejected, as are metaphysics and theology. Although the positivist approach has been a recurrent theme in the history of western thought, the modern sense of the approach was formulated by the philosopher Auguste Comte in the early 19th century. Comte argued that, much as the physical world operates according to gravity and other absolute laws, so does society, and he further developed positivism into a Religion of Humanity.

Criticism of Positivism

Historically, positivism has been criticized for its reductionism, i.e., for contending that all "processes are reducible to physiological, physical or chemical events," "social processes are reducible to relationships between and actions of individuals," and that "biological organisms are reducible to physical systems."

Max Horkheimer criticized the classic formulation of positivism on two grounds. First, he claimed that it falsely represented human social action: positivism systematically failed to appreciate the extent to which the so-called social facts it yielded did not exist 'out there', in the objective world, but were themselves a product of socially and historically mediated human consciousness. Positivism ignored the role of the 'observer' in the constitution of social reality and thereby failed to consider the historical and social conditions affecting the representation of social ideas. Positivism falsely represented the object of study by reifying social reality as existing objectively and independently of the labour that actually produced those conditions. Second, he argued, the representation of social reality produced by positivism was inherently and artificially conservative, helping to support the status quo rather than challenging it. This character may also explain the popularity of positivism in certain political circles. Horkheimer argued, in contrast, that critical theory possessed a reflexive element lacking in the positivistic traditional theory.

Some scholars today hold the beliefs critiqued in Horkheimer's work, but since the time of his writing critiques of positivism, especially from philosophy of science, have led to the development of postpositivism. This philosophy greatly relaxes the epistemological commitments of logical positivism and no longer claims a separation between the knower and the known. Rather than dismissing the scientific project outright, postpositivists seek to transform and amend it, though the exact extent of their affinity for science varies vastly. For example, some postpositivists accept the critique that observation is always value-laden, but argue that the best values to adopt for sociological observation are those of science: skepticism, rigor, and modesty. Just as some critical theorists see their position as a moral commitment to egalitarian values, these postpositivists see their methods as driven by a moral commitment to these scientific values. Such scholars may see themselves as either positivists or antipositivists.

Positivism has also come under fire on religious and philosophical grounds from critics who hold that truth begins in sense experience but does not end there. Positivism fails to prove that there are not abstract ideas, laws, and principles beyond particular observable facts and relationships and necessary principles, or that we cannot know them. Nor does it prove that material and corporeal things constitute the whole order of existing beings, and that our knowledge is limited to them. According to positivism, our abstract concepts or general ideas are mere collective representations of the experimental order; for example, the idea of "man" is a kind of blended image of all the men observed in our experience. This runs contrary to a Platonic or Christian ideal, where an idea can be abstracted from any concrete determination and may be applied identically to an indefinite number of objects of the same class. From the idea's perspective, Platonism is more precise: defining an idea as a sum of collective images is imprecise and more or less confused, and becomes more so as the collection represented increases, whereas an idea defined explicitly always remains clear.

Experientialism, which arose with second generation cognitive science, asserts that knowledge begins and ends with experience itself.

Echoes of the "positivist" and "antipositivist" debate persist today, though this conflict is hard to define. Authors writing in different epistemological perspectives do not phrase their disagreements in the same terms and rarely actually speak directly to each other.  To complicate the issues further, few practicing scholars explicitly state their epistemological commitments, and their epistemological position thus has to be guessed from other sources such as choice of methodology or theory. However, no perfect correspondence between these categories exists, and many scholars critiqued as "positivists" are actually postpositivists. One scholar has described this debate in terms of the social construction of the "other", with each side defining the other by what it is not rather than what it is, and then proceeding to attribute far greater homogeneity to their opponents than actually exists. Thus, it is better to understand this not as a debate but as two different arguments: the "antipositivist" articulation of a social meta-theory which includes a philosophical critique of scientism, and "positivist" development of a scientific research methodology for sociology with accompanying critiques of the reliability and validity of work that they see as violating such standards.

Thursday, April 27, 2017

"Common Sense" aka "Prudence"

As an act of virtue, prudence requires three mental actions: taking counsel carefully with ourselves and others, judging correctly from the evidence at hand, and directing the rest of our activity based on the norms we have established. Prudence is the “charioteer” of the virtues.

“Prudence is concerned with the quest of truth, and fills us with the desire of fuller knowledge.” --St. Ambrose


“Who makes quick use of the moment is a genius of prudence.” -- Johann Kaspar Lavater

“Rashness belongs to youth; prudence to old age.” -- Marcus Tullius Cicero

“A smooth sea never made a skillful mariner, neither do uninterrupted prosperity and success qualify for usefulness and happiness. The storms of adversity, like those of the ocean, rouse the faculties, and excite the invention, prudence, skill and fortitude of the voyager. The martyrs of ancient times, in bracing their minds to outward calamities, acquired a loftiness of purpose and a moral heroism worth a lifetime of softness and security.” -- Author Unknown


Prudence (Lat. prudentia, contracted from providentia, seeing ahead) is the ability to govern and discipline oneself by the use of reason. It is classically considered to be a virtue, and in particular one of the four Cardinal virtues.

The word comes from Old French prudence (14th century), from Latin prudentia (foresight, sagacity), a contraction of providentia, foresight. It is often associated with wisdom, insight, and knowledge. In this case, the virtue is the ability to judge between virtuous and vicious actions, not only in a general sense, but with regard to appropriate actions at a given time and place. Although prudence itself does not perform any actions, and is concerned solely with knowledge, all virtues must be regulated by it. Distinguishing when acts are courageous, as opposed to reckless or cowardly, for instance, is an act of prudence, and for this reason it is classified as a cardinal (pivotal) virtue.

Although prudence would be applied to any such judgment, the more difficult tasks, which distinguish a person as prudent, are those in which various goods have to be weighed against each other, as when a person is determining how best to give charitable donations, or how to punish a child so as to prevent the offense from being repeated.

In modern English, however, the word has become increasingly synonymous with cautiousness. In this sense, prudence names a reluctance to take risks, which remains a virtue with respect to unnecessary risks, but when unreasonably extended (i.e. over-cautiousness), can become the vice of cowardice.

In the Nicomachean Ethics, Aristotle gives a lengthy account of the virtue phronesis (Greek: φρόνησις), which has traditionally been translated as “prudence”.

Prudence as the “Father” of all virtues

Prudence was considered by the ancient Greeks and later on by Christian philosophers, most notably Thomas Aquinas, as the cause, measure and form of all virtues. It is considered to be the auriga virtutum or the charioteer of the virtues.

It is the cause in the sense that the virtues, which are defined to be the “perfected ability” of man as a spiritual person (spiritual personhood in the classical western understanding means having intelligence and free will), achieve their “perfection” only when they are founded upon prudence, that is to say upon the perfected ability to make right decisions. For instance, a person can practice temperance when he has acquired the habit of deciding correctly which actions to take in response to his instinctual cravings.

Prudence is considered the measure of moral virtues since it provides a model of ethically good actions. “The work of art is true and real by its correspondence with the pattern of its prototype in the mind of the artist. In similar fashion, the free activity of man is good by its correspondence with the pattern of prudence.” (Josef Pieper) For instance, a stock broker using his experience and all the data available to him decides that it is beneficial to sell stock A at 2PM tomorrow and buy stock B today. The content of the decision (e.g., the stock, amount, time and means) is the product of an act of prudence, while the actual carrying out of the decision may involve other virtues like fortitude (doing it in spite of fear of failure) and justice (doing his job well out of justice to his company and his family). The actual act’s “goodness” is measured against that original decision made through prudence.

In Greek and Scholastic philosophy, “form” is the specific characteristic of a thing that makes it what it is. In this language, prudence confers upon the other virtues the form of its inner essence, that is, its specific character as a virtue. For instance, not every act of telling the truth is automatically a good and virtuous act of honesty; what makes telling the truth a virtue is whether it is done with prudence. Telling a competitor the professional secrets of your company is not prudent and therefore not considered good and virtuous.

Prudence versus cunning and false prudence

In the Christian understanding, the difference between prudence and cunning lies in the intent with which the decision about an action is made. The Christian understanding of the world includes the existence of God, the natural law and moral implications of human actions. In this context, prudence differs from cunning in that it takes into account the supernatural good. For instance, the decision of persecuted Christians to be martyred rather than deny their faith is considered prudent; pretending to deny their faith could be considered prudent only from the point of view of a non-believer.

Judgments using reasons for evil ends or using evil means are considered to be made through “cunning” and “false prudence” and not through prudence.

Integral Parts of Prudence

“Integral parts” of virtues, in Scholastic philosophy, are the elements that must be present for any complete or perfect act of the virtue. The following are the integral parts of prudence:
Memoria — Accurate memory; that is, memory that is true to reality
Intelligentia — Understanding of first principles
Docilitas — The kind of open-mindedness that recognizes the true variety of things and situations to be experienced, and does not cage itself in any presumption of deceptive knowledge; the ability to make use of the experience and authority of others to make prudent decisions
Solertia — Shrewdness or quick-wittedness: the ability to size up a situation on one's own quickly
Ratio — Discursive reasoning: the ability to research and compare alternative possibilities
Providentia — Foresight: the capacity to estimate whether a particular action will lead to the realization of our goal
Circumspection — The ability to take all relevant circumstances into account
Caution — Risk mitigation

Prudential judgments

In ethics, a “prudential judgment” is one where the circumstances must be weighed to determine the correct action. Generally, it applies to situations where two people could weigh the circumstances differently and ethically come to different conclusions.

For instance, in Just War theory, the government of a nation must weigh whether the harms they suffer are more than the harms that would be produced by their going to war against another nation that is harming them; the decision whether to go to war is therefore a prudential judgment.

In another case, a patient who has a terminal illness with no conventional treatment may hear of an experimental treatment. To decide whether to take it would require weighing on one hand, the cost, time, possible lack of benefit, and possible pain, disability, and hastened death, and on the other hand, the possible benefit and the benefit to others of what could be learned from his case.
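To make the weighing concrete, one crude formalization is an expected-value comparison. The probabilities and utilities below are invented for illustration only; the essence of a prudential judgment is that two reasonable people could assign these numbers differently and so reach different conclusions.

```python
def expected_value(outcomes):
    """Sum of probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: another person could defensibly assign different ones.
# Experimental treatment: small chance of large benefit, real chance of harm.
treat = expected_value([(0.15, 80),    # substantial benefit
                        (0.35, -20),   # pain, disability, hastened death
                        (0.50, 0)])    # no effect either way
# Declining: the status quo, neither cost nor benefit.
decline = expected_value([(1.0, 0)])

print(round(treat, 2), round(decline, 2))  # under these numbers, treatment edges ahead
```

The arithmetic is trivial; the prudential work is entirely in choosing the outcomes, probabilities, and values to feed it.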

Wednesday, April 26, 2017

Fermion Based Quantum Computing

UCI’s New 2-D Materials Conduct
Electricity Near the Speed of Light
Substances could revolutionize electronic and computing devices

Irvine, Calif., April 26, 2017 – Physicists at the University of California, Irvine and elsewhere have fabricated new two-dimensional materials with breakthrough electrical and magnetic attributes that could make them building blocks of future quantum computers and other advanced electronics.

In three separate studies appearing this month in Nature, Science Advances and Nature Materials, UCI researchers and colleagues from UC Berkeley, Lawrence Berkeley National Laboratory, Princeton University, Fudan University and the University of Maryland explored the physics behind the 2-D states of novel materials and determined they could push computers to new heights of speed and power.

“Finally, we can take exotic, high-end theories in physics and make something useful,” said UCI associate professor of physics & astronomy Jing Xia, a corresponding author on two of the studies. “We’re exploring the possibility of making topological quantum computers for the next 100 years.”

The common threads running through the papers are that the research is conducted at extremely cold temperatures and that the signal carriers in all three studies are not electrons – as with traditional silicon-based technologies – but Dirac or Majorana fermions, particles without mass that move at nearly the speed of light.

One of the key challenges of such research is handling and analyzing minuscule material samples, just two atoms thick, several microns long and a few microns across. Xia’s lab at UCI is equipped with a fiber-optic Sagnac interferometer microscope that he built. (The only other one in existence is at Stanford University, assembled by Xia when he was a graduate student there.) Calling it the most sensitive magnetic microscope in the world, Xia compares it to a telescope that an ornithologist in Irvine could use to inspect the eye of a bird in New York.

“This machine is the ideal measurement tool for these discoveries,” said UCI graduate student Alex Stern, lead author on two of the papers. “It’s the most accurate way to optically measure magnetism in a material.”

In a study published today in Nature, the researchers detail their observation – via the Sagnac interferometer – of magnetism in a microscopic flake of chromium germanium telluride (CGT). The compound, which they created, was viewed at minus 387 degrees Fahrenheit. CGT is a cousin of graphene, a superthin atomic carbon film. Since its discovery, graphene has been considered a potential replacement for silicon in next-generation computers and other devices because of the speed at which electronic signals skitter across its almost perfectly flat surface.

But there’s a catch: Certain computer components, such as memory and storage systems, need to be made of materials that have both electronic and magnetic properties. Graphene has the former but not the latter. CGT has both.

His lab also used the Sagnac interferometer for a study published earlier this month in Science Advances examining what happens at the precise moment bismuth and nickel are brought into contact with one another – again at a very low temperature (in this case, minus 452 degrees Fahrenheit). Xia said his team found at the interface between the two metals “an exotic superconductor that breaks time-reversal symmetry.”

“Imagine you turn back the clock and a cup of red tea turns green. Wouldn’t that make this tea very exotic? This is indeed exotic for superconductors,” he said. “And it’s the first time it’s been observed in 2-D materials.”

The signal carriers in this 2-D superconductor are Majorana fermions, which could be used for a braiding operation that theorists believe is vital to quantum computing.

“The issue now is to try to achieve this at normal temperatures,” Xia said. The third study shows promise in overcoming that hurdle.

In 2012, Xia’s lab delivered to the Defense Advanced Research Projects Agency a radio-frequency oscillator built around samarium hexaboride. The substance is an insulator on the inside but allows signal-carrying current made of Dirac fermions to flow freely on its 2-D surface.

Using a special apparatus built in the Xia lab – also one of only two in the world – UCI researchers applied tensile strain to the samarium hexaboride sample and demonstrated in the Nature Materials study that they could stabilize the 2-D surface state at minus 27 degrees Fahrenheit.

“Believe it or not, that’s hotter than some parts of Canada,” Xia quipped. “This work is a big step toward developing future quantum computers at nearly room temperature.”
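For readers more comfortable with kelvin, the temperatures quoted across these three studies convert as follows; a quick sketch using the standard Fahrenheit-to-kelvin formula.

```python
def fahrenheit_to_kelvin(f):
    """K = (F - 32) * 5/9 + 273.15"""
    return (f - 32) * 5.0 / 9.0 + 273.15

# Temperatures quoted in the three studies above:
for label, f in [("CGT magnetism measurement", -387),
                 ("bismuth/nickel superconductivity", -452),
                 ("strained samarium hexaboride surface state", -27)]:
    print(f"{label}: {f} F = {fahrenheit_to_kelvin(f):.1f} K")
```

The conversion makes the progression plain: the first two experiments sit within a few tens of kelvin of absolute zero, while the strained samarium hexaboride result is two hundred kelvin warmer, hence the "big step" toward room-temperature operation.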

Tuesday, April 25, 2017

IBM's Watson Computer

Watson is a question answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's first CEO, industrialist Thomas J. Watson. The computer system was specifically developed to answer questions on the quiz show Jeopardy! In 2011, Watson competed on Jeopardy! against former winners Brad Rutter and Ken Jennings. Watson received the first place prize of $1 million.

Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage including the full text of Wikipedia, but was not connected to the Internet during the game. For each clue, Watson's three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game's signaling device, but had trouble in a few categories, notably those having short clues containing only a few words.

In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center, New York City, in conjunction with health insurance company WellPoint. IBM Watson's former business chief, Manoj Saxena, says that 90% of nurses in the field who use Watson now follow its guidance.

Description of Watson

Watson is a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.

The key difference between QA technology and document search is that document search takes a keyword query and returns a list of documents, ranked in order of relevance to the query (often based on popularity and page ranking), while QA technology takes a question expressed in natural language, seeks to understand it in much greater detail, and returns a precise answer to the question.
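That contrast can be sketched in a few lines. The toy ranker below returns documents scored by simple term overlap, which is all a keyword search promises; a QA system, by contrast, must return the precise answer itself. The documents and scoring here are invented for illustration.

```python
def keyword_search(query, docs):
    """Toy document search: rank documents by how many query terms they contain."""
    terms = set(query.lower().split())
    scored = [(sum(t in doc.lower() for t in terms), doc) for doc in docs]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

docs = ["Thomas J. Watson was the first CEO of IBM.",
        "Jeopardy! is an American quiz show.",
        "Deep Blue defeated Garry Kasparov at chess in 1997."]

# Document search returns a ranked list of documents containing the keywords...
print(keyword_search("first CEO of IBM", docs))
# ...whereas QA must understand the question and return the precise answer:
# the string "Thomas J. Watson" itself.
```

Everything Watson adds — parsing the clue, generating hypotheses, scoring evidence — lives in the gap between these two behaviors.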

According to IBM, "more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses."


Since Deep Blue's victory over Garry Kasparov in chess in 1997, IBM had been on the hunt for a new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with coworkers, noticed that the restaurant they were in had fallen silent. He soon discovered the cause of this evening hiatus: Ken Jennings, who was then in the middle of his successful 74-game run on Jeopardy!. Nearly the entire restaurant had piled toward the televisions, mid-meal, to watch the phenomenon. Intrigued by the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research executive Paul Horn backed Lickel up, pushing for someone in his department to take up the challenge of playing Jeopardy! with an IBM system. Though he initially had trouble finding any research staff willing to take on what looked to be a much more complex challenge than the wordless game of chess, eventually David Ferrucci took him up on the offer. In competitions managed by the United States government, Watson's predecessor, a system named Piquant, was usually able to respond correctly to only about 35% of clues and often required several minutes to respond. To compete successfully on Jeopardy!, Watson would need to respond in no more than a few seconds, and at that time, the problems posed by the game show were deemed to be impossible to solve.

In initial tests run during 2006 by David Ferrucci, the senior manager of IBM's Semantic Analysis and Integration department, Watson was given 500 clues from past Jeopardy! programs. While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson's first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems. By 2008, the developers had advanced Watson such that it could compete with Jeopardy! champions. By February 2010, Watson could beat human Jeopardy! contestants on a regular basis.

In healthcare, Watson's natural language, hypothesis generation, and evidence-based learning capabilities are being investigated to see how Watson may contribute to clinical decision support systems for use by medical professionals. To aid physicians in the treatment of their patients, once a physician has posed a query to the system describing symptoms and other related factors, Watson first parses the input to identify the most important pieces of information; then mines patient data to find facts relevant to the patient's medical and hereditary history; then examines available data sources to form and test hypotheses; and finally provides a list of individualized, confidence-scored recommendations. The sources of data that Watson uses for analysis can include treatment guidelines, electronic medical record data, notes from physicians and nurses, research materials, clinical studies, journal articles, and patient information. Despite being developed and marketed as a "diagnosis and treatment advisor", Watson has never been actually involved in the medical diagnosis process, only in assisting with identifying treatment options for patients who have already been diagnosed.
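The four-stage flow described above can be sketched as a simple pipeline. Everything here is hypothetical: the function names, the keyword matching, and the confidence scoring are illustrative stand-ins, not IBM's actual implementation.

```python
# Hypothetical sketch of the four-stage clinical flow described above.

def parse_query(text):
    """Stage 1: extract the most important terms from the physician's question."""
    keywords = {"cough", "fever", "smoker"}  # toy medical vocabulary
    words = [w.strip(".,?;").lower() for w in text.split()]
    return [w for w in words if w in keywords]

def mine_patient_record(record, terms):
    """Stage 2: keep facts from the record relevant to the extracted terms."""
    return [fact for fact in record if any(t in fact.lower() for t in terms)]

def score_hypotheses(terms, facts, knowledge):
    """Stages 3 and 4: test each candidate against the evidence, rank by confidence."""
    evidence = terms + [f.lower() for f in facts]
    ranked = []
    for treatment, indicators in knowledge.items():
        score = sum(any(i in e for e in evidence) for i in indicators) / len(indicators)
        ranked.append((score, treatment))
    return sorted(ranked, reverse=True)

record = ["Long-term smoker", "Persistent cough for 6 weeks"]
knowledge = {"chest imaging": ["cough", "smoker"],  # hypothetical guideline data
             "antihistamine": ["allergy"]}

terms = parse_query("Patient reports a worsening cough; he is a smoker.")
facts = mine_patient_record(record, terms)
for score, treatment in score_hypotheses(terms, facts, knowledge):
    print(f"{treatment}: confidence {score:.2f}")
```

The real system mines far richer sources (guidelines, journal articles, clinical notes), but the shape — parse, mine, hypothesize, rank with confidence scores — is the one the paragraph describes.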

IBM Watson Group

On January 9, 2014, IBM announced it was creating a business unit around Watson, led by senior vice president Michael Rhodin. The IBM Watson Group will have headquarters in New York's Silicon Alley and will employ 2,000 people. IBM has invested $1 billion to get the division going. Watson Group will develop three new cloud-delivered services: Watson Discovery Advisor, Watson Engagement Advisor, and Watson Explorer. Watson Discovery Advisor will focus on research and development projects in the pharmaceutical industry, publishing, and biotechnology; Watson Engagement Advisor will focus on self-service applications using insights on the basis of natural language questions posed by business users; and Watson Explorer will focus on helping enterprise users uncover and share data-driven insights based on federated search more easily. The company is also launching a $100 million venture fund to spur application development for "cognitive" applications. According to IBM, the cloud-delivered, enterprise-ready Watson has seen its speed increase 24 times over (a 2,300 percent improvement in performance) and its physical size shrink by 90 percent, from the size of a master bedroom to three stacked pizza boxes. IBM CEO Virginia Rometty said she wants Watson to generate $10 billion in annual revenue within ten years.

Monday, April 24, 2017

Coming: Swarming Robots

Swarm robotics is an approach to the coordination of multirobot systems consisting of large numbers of mostly simple physical robots. A desired collective behavior is expected to emerge from interactions among the robots and between the robots and the environment. The approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants and other natural systems in which swarm behaviour occurs.


Research in swarm robotics studies the design of robots, their physical bodies and their controlling behaviours. It is inspired by, but not limited to, the emergent behaviour observed in social insects, called swarm intelligence. Relatively simple individual rules can produce a large set of complex swarm behaviours. A key component is communication between the members of the group, which builds a system of constant feedback. Swarm behaviour involves constant change of individuals in cooperation with others, as well as the behaviour of the whole group. Two similar fields of study with much the same team structure and goals are multi-robot exploration and multi-robot coverage.

Unlike distributed robotic systems in general, swarm robotics emphasizes a large number of robots, and promotes scalability, for instance by using only local communication. That local communication for example can be achieved by wireless transmission systems, like radio frequency or infrared.
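The core idea, that simple local rules produce collective behaviour without central control, can be sketched in a few lines: each robot repeatedly steps toward the average position of the neighbours it can sense within a fixed radius, and the group aggregates. The radius, gain, and starting positions are arbitrary illustration values.

```python
import math

def step(positions, radius=5.0, gain=0.1):
    """Each robot moves a step toward the centroid of neighbours it senses locally."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        neighbours = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                      if j != i and math.hypot(nx - x, ny - y) <= radius]
        if neighbours:
            cx = sum(n[0] for n in neighbours) / len(neighbours)
            cy = sum(n[1] for n in neighbours) / len(neighbours)
            x, y = x + gain * (cx - x), y + gain * (cy - y)
        new_positions.append((x, y))
    return new_positions

def spread(positions):
    """Mean distance of the robots from their common centroid."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / len(positions)

robots = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0), (2.0, 3.0)]
before = spread(robots)
for _ in range(50):        # no central controller: each step uses only local sensing
    robots = step(robots)
print(f"spread: {before:.2f} -> {spread(robots):.2f}")
```

No robot knows the positions of the whole group, yet the swarm contracts toward a common point; this is the scalability argument for purely local communication.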

Goals and Applications

Both miniaturization and cost are key factors in swarm robotics. These are the constraints on building large groups of robots; therefore the simplicity of the individual team member should be emphasized. This motivates a swarm-intelligent approach to achieving meaningful behavior at the swarm level, instead of at the individual level.

Simple Swarmbots

A lot of research has gone into achieving this simplicity at the individual robot level. Being able to use actual hardware in swarm robotics research, rather than simulations, lets researchers uncover and resolve many more issues, greatly broadening the scope of swarm research. Developing simple robots for swarm intelligence research is therefore an important part of the field. The goals of these projects are manifold, including keeping the cost of individual robots low so that swarms remain scalable, making each member of the swarm less demanding in terms of resources, and making them more power- and energy-efficient. One such swarm system is the LIBOT Robotic System, a low-cost robot built for outdoor swarm robotics. The robots also have provisions for indoor use via Wi-Fi, since GPS sensors communicate poorly inside buildings. Another example is the micro robot Colias, built in the Computer Intelligence Lab at the University of Lincoln, UK. This micro robot, built on a 4 cm circular chassis, is a low-cost, open platform for use in a variety of swarm robotics applications.

The range of potential applications for swarm robotics is huge. It includes tasks that demand miniaturization (nanorobotics, microbotics), such as distributed sensing in micromachinery or in the human body. One of the most promising uses of swarm robotics is in disaster rescue missions: swarms of robots of different sizes could be sent to places rescue workers cannot reach safely, to detect the presence of life via infrared sensors. Swarm robotics is also suited to tasks that demand cheap designs, for instance mining or agricultural foraging. Some artists use swarm robotic techniques to realize new forms of interactive art.

More controversially, swarms can be used in the military to form an autonomous army. Recently, U.S. naval forces have tested a swarm of autonomous boats that can steer and take offensive actions by themselves. The boats are unmanned and can be fitted with any kind of kit to deter and destroy enemy vessels.

Most efforts have focused on relatively small groups of machines. However, a swarm consisting of 1,024 individual robots was demonstrated by Harvard in 2014, the largest to date.

Another large set of applications may be solved using swarms of micro aerial vehicles, which are also being broadly investigated. In comparison with the pioneering studies of swarms of flying robots using precise motion-capture systems in laboratory conditions, current systems can control teams of micro aerial vehicles outdoors using GNSS systems (such as GPS), or even stabilize them using onboard localization in GPS-denied environments. Swarms of micro aerial vehicles have already been tested in tasks of autonomous surveillance, plume tracking, and reconnaissance in a compact phalanx. In addition, numerous works on cooperative swarms of unmanned ground and aerial vehicles have been conducted with target applications of cooperative environment monitoring, convoy protection, and moving-target localization and tracking.

Sunday, April 23, 2017

USA Witness of Armenian Genocide

Jesse Benjamin Jackson (November 19, 1871 – December 4, 1947) was a United States consul and an important eyewitness to the Armenian Genocide. He served as consul in Aleppo when the city was the junction of many important deportation routes. Jackson concluded that the policies towards the Armenians were "without doubt a carefully planned scheme to thoroughly extinguish the Armenian race." He considered the "wartime anti-Armenian measures" to be a "gigantic plundering scheme as well as a final blow to extinguish the race." By September 15, 1915, Jackson estimated that a million Armenians had been killed and deemed his own survival a "miracle". After the Armenian Genocide, Jackson led a relief effort and was credited with saving the lives of "thousands of Armenians."

After serving as consul in Aleppo, Jackson served in Italy and Canada. He was awarded numerous medals, including the Order of Merit of Lebanon. He died on December 4, 1947 at the age of 76.

                                           Jesse B. Jackson

Early Life

Jesse Benjamin Jackson was born in Paulding, Ohio on November 19, 1871 to Andrew Carl Jackson and Lucy Ann (Brown) Jackson. Jackson attended the local Paulding public schools and served as a quartermaster sergeant in the U.S. Army during the Spanish–American War. He was enrolled as a clerk of the House of Representatives in 1900–01 and was later employed in the insurance and real estate business. Jackson was appointed American consul at İskenderun on March 15, 1905, a position he held until 1908, when he became the U.S. consul at Aleppo.

Armenian Genocide

As early as November 19, 1912, after four years as consul in Aleppo, Jackson had his staff raise concerns with the foreign embassies in Constantinople that the Turkish government was determined to place the Vilayet of Aleppo under martial law. He warned that Muslims who had deserted from the army were engaged in "depredations" in the province, which the Turkish authorities blamed on the Armenians, so that the latter "shall be at the mercy of the Moslems." Jackson requested that the embassies raise the issue with the Ottoman government, so as to prevent massacres against the Armenians "which, under the present strained conditions, would spread like wildfire, and likely engulf Christians of all denominations far and wide."

In April 1915, some months after the outbreak of World War I, a copy of a thirty-page "seditious" pamphlet was sent by Jackson to Henry Morgenthau, the U.S. ambassador in Constantinople. Published and printed in Arabic by the National Society of Defense for the Seat of the Caliphate and entitled "A Universal Proclamation to All the People of Islam", the pamphlet was distributed by the Germans and encouraged every Muslim to free the believers "in the Unity of God" from "the grasp of the infidels." It also encouraged Muslims to boycott Armenian businesses.

By spreading the pamphlet, Jackson believed, the Germans were trying to incite massacre. He added: "Surely something should be done to prevent the continuation of such propagandas in the future, or one day the result sought will be obtained, and it will be disastrous."

On April 20, 1915, Jackson relayed to Morgenthau, to the secretary of state, and to the American Board of Commissioners for Foreign Missions, a report prepared by the Reverend John E. Merrill, president of Central Turkey College at Aintab, on the situation in the region stretching from Aintab to Marash and Zeitun. The nine-page document described the similarities between the contemporary situation in the Marash region and that during the previous Hamidian massacres and the Adana massacre of 1909. As during the massacres of 1895–96, it noted, the Turkish government was spreading false rumors that the Armenians in the Marash region were threatening law and order. Jackson claimed that the local officials deceived the Armenians in Zeitun and in nearby Furnus into surrendering their arms in hopes of averting punishment, as during the Adana massacres of 1909, while causing the death of innocent women and children. He further asserted that the conscription of young male Armenians into the Turkish army was followed by imprisonment, deportations, and massacres. Merrill believed that the deportation of the Marash region was "a direct blow at American missionary interests, menacing the results of more than fifty years of work and many thousands of dollars of expenditure."

In a letter sent to Morgenthau on August 19, Jackson stated that the deportations were of all Armenians regardless of their religious affiliation (i.e., Catholicism or Protestantism). He noted that nine trains passed through Aleppo between 1 and 19 August, several of which were carrying thousands of Armenians from Ainteb who were subsequently robbed by villagers. Jackson described these "wartime anti-Armenian measures" as a "gigantic plundering scheme as well as a final blow to extinguish the race."

Jackson reported the statistics in detail of Meskene, a deportation zone, in a 10 September 1916 dispatch: "Information obtained on the spot permits me to state that nearly 60,000 Armenians are buried there, carried off by hunger, privations of all sorts, intestinal diseases and the typhus that results. As far as the eye can reach, mounds can be seen containing 200 to 300 corpses buried pell-mell, women children and old people belonging to different families.”

On September 29, in a letter to Morgenthau, Jackson placed the survival rate of the deportees at about 15 percent and further noted that this had amounted to the deaths of about a million Armenians. He wrote:

One of the most terrible sights ever seen in Aleppo was the arrival early in August, 1915, of some 5,000 terribly emaciated, dirty, ragged and sick women and children, 3,000 in one day and 2,000 the following day. These people were the only survivors of the thrifty and well to do Armenian population of the province of Sivas, where the Armenian population had once been over 300,000.

He described the deplorable condition of the deportees; all were "sparsely clad and some naked from the treatment by their escorts and the despoiling depopulation en route. It is extremely rare to find a family intact that has come any considerable distance, invariably all having lost members from disease and fatigue, young girls and boys carried off by hostile tribesmen," and the men separated from their families and killed. "The exhausted condition of the victims is further proven by the death of a hundred or more daily of those arriving in the city." The situation was also reaffirmed by Consul Rössler, who reported on September 27 that Djemal Pasha had issued an order prohibiting the taking of photographs, and that taking pictures of the Armenians was considered to be unauthorized photography of military operations.

Jackson was later instrumental in organizing the relief effort sponsored by the American Committee for Relief in the Near East for the victims. The fund, which collected initial donations of $100,000, assigned Jackson to administer and manage its finances. He estimated that the minimum provisions to sustain life would require about $150,000 a month, or a dollar a day per capita. Under his supervision, Jackson undertook the task of caring for an estimated 150,000 refugees. For these efforts, he is credited with saving the lives of "thousands of Armenians."

On May 13, 1923, Jackson's duties at the American consulate of Aleppo ended when he was reassigned to the consulate of Leghorn, Italy.

Later Life

Jackson served at the American consulate in Leghorn until 1928, when he was reassigned to Fort William and Port Arthur in Canada. He resided there until his retirement in 1935. Jackson died on December 4, 1947 at the White Cross Hospital after a short illness and is buried in Sunset Cemetery in Galloway, Ohio.

In 1898, Jackson married Rosabelle Berryman, who died in 1928. They had a son named Virgil A. Jackson.

Jackson was an Officer of the Crown of Italy and held the Golden Honorary Medal and the Order of Merit of Lebanon.

Saturday, April 22, 2017

Fault-Tolerant Computer Systems

Fault-tolerant computer systems are systems designed around the concepts of fault tolerance. In essence, they must be able to continue working to a level of satisfaction in the presence of faults.

Fault tolerance is not just a property of individual machines; it may also characterize the rules by which they interact. For example, the Transmission Control Protocol (TCP) is designed to allow reliable two-way communication in a packet-switched network, even in the presence of communications links which are imperfect or overloaded. It does this by requiring the endpoints of the communication to expect packet loss, duplication, reordering and corruption, so that these conditions do not damage data integrity, and only reduce throughput by a proportional amount.
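
TCP's approach can be caricatured in a toy model. The sketch below is not real TCP (real TCP uses sliding windows, ACK segments, and adaptive timeouts); the loss rate and helper names are invented. It only illustrates the core idea: numbered segments plus retransmission turn a lossy channel into a reliable, in-order one, at the cost of throughput.

```python
import random

# Toy model, not real TCP: real TCP uses sliding windows, ACK segments and
# adaptive timeouts. This sketch only shows the core idea that numbered
# segments plus retransmission make a lossy channel reliable.

def lossy_send(segment, loss_rate, rng):
    """Simulated network: drops a segment with probability `loss_rate`."""
    return None if rng.random() < loss_rate else segment

def reliable_transfer(data, loss_rate=0.3, seed=0):
    rng = random.Random(seed)
    received = []
    expected_seq = 0
    for seq, ch in enumerate(data):
        while True:
            delivered = lossy_send((seq, ch), loss_rate, rng)
            if delivered is None:
                continue          # timeout expires -> retransmit
            got_seq, got_ch = delivered
            if got_seq == expected_seq:   # receiver accepts in-order data
                received.append(got_ch)
                expected_seq += 1
            break                 # ACK returned (ACKs assumed lossless here)
    return "".join(received)

# Despite 30% simulated loss, the message arrives intact and in order;
# only throughput suffers.
result = reliable_transfer("fault tolerance")
```

The design point matches the paragraph above: the endpoints expect loss and duplication, so those faults cost retransmissions, never data integrity.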

Recovery from errors in fault-tolerant systems can be characterized as either 'roll-forward' or 'roll-back'. When the system detects that it has made an error, roll-forward recovery takes the system state at that time and corrects it, to be able to move forward. Roll-back recovery reverts the system state back to some earlier, correct version, for example using check-pointing, and moves forward from there. Roll-back recovery requires that the operations between the checkpoint and the detected erroneous state can be made idempotent. Some systems make use of both roll-forward and roll-back recovery for different errors or different parts of one error.
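
Roll-back recovery with check-pointing can be sketched in a few lines. The class and its state below are invented for illustration, not drawn from any particular system.

```python
import copy

# Illustrative toy: a component that snapshots its state (a checkpoint)
# and can revert to the last known-good snapshot when an error is detected.

class CheckpointedCounter:
    def __init__(self):
        self.state = {"total": 0}
        self._checkpoint = copy.deepcopy(self.state)

    def checkpoint(self):
        """Record the current state as known-good."""
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        """Revert to the last checkpoint, discarding later updates."""
        self.state = copy.deepcopy(self._checkpoint)

    def add(self, value):
        self.state["total"] += value

c = CheckpointedCounter()
c.add(10)
c.checkpoint()   # known-good state: total == 10
c.add(7)         # suppose this update is later detected as erroneous
c.rollback()     # roll back to the checkpoint...
c.add(5)         # ...and move forward with the corrected operation
# c.state["total"] is now 15
```

This also shows why the operations replayed after a rollback must be idempotent or corrected: replaying the erroneous `add(7)` would simply reproduce the fault.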

Types of Fault Tolerance

Most fault-tolerant computer systems are designed to handle several possible failures, including:

  • hardware-related faults, such as hard disk failures, input or output device failures, or other temporary or permanent failures;
  • software bugs and errors;
  • interface errors between the hardware and software, including driver failures;
  • operator errors, such as erroneous keystrokes, bad command sequences or installing unexpected software;
  • physical damage or other flaws introduced to the system from an outside source.

Hardware fault tolerance is the most common application of these systems, designed to prevent failures due to hardware components. At its most basic, this is provided by redundancy, particularly dual modular redundancy. Typically, components have multiple backups and are separated into smaller "segments" that act to contain a fault, and extra redundancy is built into all physical connectors, power supplies, fans, etc. Special software and instrumentation packages are designed to detect failures. One such technique is fault masking, in which a backup component is seamlessly kept ready to execute each instruction as soon as it is sent; a voting protocol compares the outputs, and if the main component and its backups do not give the same results, the flawed output is ignored.

Software fault-tolerance is based more around nullifying programming errors using real-time redundancy, or static "emergency" subprograms to fill in for programs that crash. There are many ways to conduct such fault-regulation, depending on the application and the available hardware.


The first known fault-tolerant computer was SAPO, built in 1951 in Czechoslovakia by Antonín Svoboda. Its basic design was magnetic drums connected via relays, with a voting method of memory error detection (triple modular redundancy). Several other machines were developed along this line, mostly for military use. Eventually, they separated into three distinct categories: machines that would last a long time without any maintenance, such as the ones used on NASA space probes and satellites; computers that were very dependable but required constant monitoring, such as those used to monitor and control nuclear power plants or supercollider experiments; and finally, computers with a high amount of runtime which would be under heavy use, such as many of the supercomputers used by insurance companies for their probability monitoring.

Most of the development in so-called LLNM (Long Life, No Maintenance) computing was done by NASA during the 1960s, in preparation for Project Apollo and other research programs. NASA's first machine went into a space observatory, and its second attempt, the JSTAR computer, was used in Voyager. This computer had a backup of memory arrays to use memory recovery methods, and thus it was called the JPL Self-Testing-And-Repairing computer. It could detect its own errors and fix them, or bring up redundant modules as needed. The computer is still working today.

Hyper-dependable computers were pioneered mostly by aircraft manufacturers, nuclear power companies, and the railroad industry in the USA. These industries needed computers with massive amounts of uptime that would fail gracefully enough during a fault to allow continued operation, relying on constant human monitoring of the computer output to detect faults. IBM developed the first computer of this kind for NASA, for guidance of Saturn V rockets, and later BNSF, Unisys, and General Electric built their own.

The 1970 F14 CADC had built-in self-test and redundancy.

In general, the early efforts at fault-tolerant designs focused mainly on internal diagnosis, where a fault would indicate something was failing and a worker could replace it. SAPO, for instance, had a method by which faulty memory drums would emit a noise before failure. Later efforts showed that, to be fully effective, the system had to be self-repairing and self-diagnosing: isolating a fault and then implementing a redundant backup while alerting the operator of the need for repair. This is known as N-model redundancy, where faults cause automatic fail-safes and a warning to the operator, and it is still the most common form of level-one fault-tolerant design in use today.

Voting was another initial method, as discussed above, with multiple redundant backups operating constantly and checking each other's results, with the outcome that if, for example, four components reported an answer of 5 and one component reported an answer of 6, the other four would "vote" that the fifth component was faulty and have it taken out of service. This is called M out of N majority voting.
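
The voting scheme just described is easy to sketch. The function below is an illustrative toy, not a real voter implementation: the majority value wins, and any module that disagrees is flagged as faulty.

```python
from collections import Counter

# Illustrative toy voter: the value reported by the majority of redundant
# modules is accepted, and dissenting modules are flagged as faulty.

def vote(outputs):
    """Return (majority_value, indices_of_dissenting_modules)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: the fault cannot be masked")
    faulty = [i for i, out in enumerate(outputs) if out != value]
    return value, faulty

# Four modules report 5, one reports 6: the 6 is outvoted and module 2
# is flagged for removal from service.
value, faulty = vote([5, 5, 6, 5, 5])
# value == 5, faulty == [2]
```

With N modules, this masks up to floor((N-1)/2) simultaneous faulty outputs, which is why larger N buys more tolerance at the cost of more hardware.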

Historically, the trend has been to move away from N-model redundancy and toward M-out-of-N voting, because the growing complexity of systems made it difficult to ensure that the transition from a fault-negative to a fault-positive state would not disrupt operations.

Tandem and Stratus were among the first companies specializing in the design of fault-tolerant computer systems for online transaction processing.

Friday, April 21, 2017

History of 3D Printing

3D printing, also known as additive manufacturing (AM), refers to processes used to create a three-dimensional object in which layers of material are formed under computer control to create an object. Objects can be of almost any shape or geometry and are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF) file.

The futurologist Jeremy Rifkin claimed that 3D printing signals the beginning of a third industrial revolution, succeeding the production line assembly that dominated manufacturing starting in the late 19th century.

The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the term is being used in popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the official term additive manufacturing for this broader sense. ISO/ASTM52900-15 defines seven categories of AM processes within its meaning: binder jetting, directed energy deposition, material extrusion, material jetting, powder bed fusion, sheet lamination and vat photopolymerization.

Terminology and Methods

Early additive manufacturing equipment and materials were developed in the 1980s. In 1981, Hideo Kodama of the Nagoya Municipal Industrial Research Institute invented two AM methods for fabricating a three-dimensional plastic model with photo-hardening thermoset polymer, in which the UV exposure area is controlled by a mask pattern or a scanning fiber transmitter. On July 16, 1984, Alain Le Méhauté, Olivier de Witte and Jean Claude André filed their patent for the stereolithography process, three weeks before Chuck Hull filed his own patent for stereolithography. The French inventors' application was abandoned by the French General Electric Company (now Alcatel-Alsthom) and CILAS (The Laser Consortium), the claimed reason being "lack of business perspective". Then in 1984, Chuck Hull of 3D Systems Corporation developed a prototype system based on a process known as stereolithography, in which layers are added by curing photopolymers with ultraviolet lasers. Hull defined the process as a "system for generating three-dimensional objects by creating a cross-sectional pattern of the object to be formed," but this had already been invented by Kodama. Hull's contribution is the design of the STL (Stereolithography) file format, widely accepted by 3D printing software, as well as the digital slicing and infill strategies common to many processes today. The term 3D printing originally referred to a process employing standard and custom inkjet print heads. The technology used by most 3D printers to date, especially hobbyist and consumer-oriented models, is fused deposition modeling, a special application of plastic extrusion.
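
The ASCII variant of the STL format Hull introduced is simple enough to sketch: a solid is just a list of triangular facets, each with a normal vector and three vertices. The helper functions below are invented for illustration; real slicers consume files with many thousands of such facets.

```python
# Hedged sketch of the ASCII STL layout (solid / facet normal / outer loop /
# vertex keywords). The helper functions are invented for illustration.

def stl_facet(normal, v1, v2, v3):
    lines = ["  facet normal {} {} {}".format(*normal),
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append("      vertex {} {} {}".format(*v))
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def stl_solid(name, facets):
    body = "\n".join(stl_facet(*f) for f in facets)
    return "solid {}\n{}\nendsolid {}".format(name, body, name)

# A single triangle in the z = 0 plane, facing +z.
triangle = ((0.0, 0.0, 1.0),                       # facet normal
            (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
demo = stl_solid("demo", [triangle])
print(demo)
```

Note what the format omits: there is no color, no units, and no connectivity between facets, which is why STL describes only the surface geometry a slicer then cuts into layers.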

AM processes for metal sintering or melting (such as selective laser sintering, direct metal laser sintering, and selective laser melting) usually went by their own individual names in the 1980s and 1990s. At the time, nearly all metal working was produced by casting, fabrication, stamping, and machining; although plenty of automation was applied to those technologies (such as by robot welding and CNC), the idea of a tool or head moving through a 3D work envelope transforming a mass of raw material into a desired shape layer by layer was associated by most people only with processes that removed metal (rather than adding it), such as CNC milling, CNC EDM, and many others. But AM-type sintering was beginning to challenge that assumption. By the mid 1990s, new techniques for material deposition were developed at Stanford and Carnegie Mellon University, including microcasting and sprayed materials. Sacrificial and support materials had also become more common, enabling new object geometries.

The umbrella term additive manufacturing gained wider currency during the 2000s. As the various additive processes matured, it became clear that metal removal would soon no longer be the only metalworking process done under that type of control (a tool or head moving through a 3D work envelope, transforming a mass of raw material into a desired shape layer by layer). It was during this decade that the term subtractive manufacturing appeared as a retronym for the large family of machining processes with metal removal as their common theme. At the time, the term 3D printing still referred only to the polymer technologies in most minds, and the term AM was likelier to be used in metalworking and end-use part production contexts than among polymer, inkjet, and stereolithography enthusiasts. The term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed.

By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for AM technologies, one used in popular vernacular by consumer and maker communities and the media, and the other used officially by industrial AM end-use part producers, AM machine manufacturers, and global technical standards organizations.

Both terms reflect the simple fact that the technologies all share the common theme of sequential-layer material addition/joining throughout a 3D work envelope under automated control.

(Other terms that have been used as AM synonyms (though sometimes as hypernyms) include desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense).) The 2010s were the first decade in which metal end-use parts such as engine brackets and large nuts would be grown (either before or instead of machining) in job production, rather than necessarily being machined from bar stock or plate.

Agile tooling is a term for the process of using modular means to design tooling that is produced by additive manufacturing or 3D printing methods, enabling quick prototyping and responses to tooling and fixture needs. Agile tooling offers a cost-effective, high-quality way to respond quickly to customer and market needs. It can be used in hydroforming, stamping, injection molding and other manufacturing processes.

As the technology matured, several authors began to speculate that 3D printing could aid in sustainable development in the developing world.

Thursday, April 20, 2017

Types of Glass

Glass is a non-crystalline amorphous solid that is often transparent and has widespread practical, technological, and decorative usage in, for example, window panes, tableware, and optoelectronics. The most familiar, and historically the oldest, types of glass are "silicate glasses" based on the chemical compound silica (silicon dioxide, or quartz), the primary constituent of sand. The term glass, in popular usage, is often used to refer only to this type of material, which is familiar from use as window glass and in glass bottles. Of the many silica-based glasses that exist, ordinary glazing and container glass is formed from a specific type called soda-lime glass, composed of approximately 75% silicon dioxide (SiO2), sodium oxide (Na2O) from sodium carbonate (Na2CO3), calcium oxide, also called lime (CaO), and several minor additives.

Many applications of silicate glasses derive from their optical transparency, giving rise to their primary use as window panes. Glass will transmit, reflect and refract light; these qualities can be enhanced by cutting and polishing to make optical lenses, prisms, fine glassware, and optical fibers for high speed data transmission by light. Glass can be coloured by adding metallic salts, and can also be painted and printed with vitreous enamels. These qualities have led to the extensive use of glass in the manufacture of art objects and in particular, stained glass windows. Although brittle, silicate glass is extremely durable, and many examples of glass fragments exist from early glass-making cultures. Because glass can be formed or moulded into any shape, it has been traditionally used for vessels: bowls, vases, bottles, jars and drinking glasses. In its most solid forms it has also been used for paperweights, marbles, and beads. When extruded as glass fiber and matted as glass wool in a way to trap air, it becomes a thermal insulating material, and when these glass fibers are embedded into an organic polymer plastic, they are a key structural reinforcement part of the composite material fiberglass. Some objects historically were so commonly made of silicate glass that they are simply called by the name of the material, such as drinking glasses and reading glasses.

Scientifically, the term "glass" is often defined in a broader sense, encompassing every solid that possesses a non-crystalline (that is, amorphous) structure at the atomic-scale and that exhibits a glass transition when heated towards the liquid state. Porcelains and many polymer thermoplastics familiar from everyday use are glasses. These sorts of glasses can be made of quite different kinds of materials than silica: metallic alloys, ionic melts, aqueous solutions, molecular liquids, and polymers. For many applications, like glass bottles or eyewear, polymer glasses (acrylic glass, polycarbonate or polyethylene terephthalate) are a lighter alternative than traditional glass.

Other Types of Glass

In addition to silicate glasses there are these types of glass:

  • Network glasses
  • Amorphous metals
  • Electrolytes
  • Aqueous solutions
  • Molecular liquids
  • Polymers
  • Colloidal glasses
  • Glass-ceramics

Wednesday, April 19, 2017

Qualitative Inorganic Analysis

Classical qualitative inorganic analysis is a method of analytical chemistry which seeks to find elemental composition of inorganic compounds. It is mainly focused on detecting ions in an aqueous solution, so that materials in other forms may need to be brought into this state before using standard methods. The solution is then treated with various reagents to test for reactions characteristic of certain ions, which may cause color change, solid forming and other visible changes.

Detecting Cations

According to their properties, cations are usually classified into six groups. Each group has a common reagent which can be used to separate it from the solution. To obtain meaningful results, the separation must be done in the sequence specified below, as some ions of an earlier group may also react with the reagent of a later group, causing ambiguity as to which ions are present. This works because cationic analysis is based on the solubility products of the ions: once a cation reaches the concentration needed for precipitation, it precipitates, allowing us to detect it. The division and precise details of separating into groups vary slightly from one source to another.
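
The solubility-product logic behind the group separations can be sketched numerically: a 1:1 salt MX precipitates once its ionic product Q = [M+][X-] exceeds its Ksp. The AgCl value below is a standard textbook figure; the function and the concentrations are invented for illustration.

```python
# Illustrative sketch of the solubility-product rule behind group
# separation of cations (1:1 salt assumed).

KSP_AGCL = 1.8e-10   # solubility product of AgCl at 25 degrees C (approx.)

def precipitates(ksp, cation_conc, anion_conc):
    """True if the ionic product Q of a 1:1 salt exceeds its Ksp."""
    return cation_conc * anion_conc > ksp

# Adding dilute chloride to 0.01 M Ag+: Q = 1e-4 >> 1.8e-10, so AgCl
# comes down -- which is how group 1 cations are pulled out first.
comes_down = precipitates(KSP_AGCL, 0.01, 0.01)       # True
stays_dissolved = precipitates(KSP_AGCL, 1e-6, 1e-6)  # False: Q = 1e-12 < Ksp
```

The same comparison, applied salt by salt, is why the reagents must be added in the prescribed order: an earlier group's reagent must not be allowed to exceed Ksp for a later group's ions.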

[There are also six groups of anions].

Modern Techniques

Qualitative inorganic analysis is now used only as a pedagogical tool. Modern techniques such as atomic absorption spectroscopy and ICP-MS are able to quickly detect the presence and concentrations of elements using a very small amount of sample.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Flame Test

A flame test is an analytic procedure used in chemistry to detect the presence of certain elements, primarily metal ions, based on each element's characteristic emission spectrum. The color of flames in general also depends on temperature; see flame color.


The test involves introducing a sample of the element or compound to a hot, non-luminous flame and observing the color of the flame that results. The idea of the test is that sample atoms evaporate and, being hot, emit light while in the flame. A bulk sample emits light too, but its light is not useful for analysis: it arises primarily from the motion of the atoms, so its spectrum is broad, spanning a wide range of colors. Separate atoms of the sample present in the flame can emit only through electronic transitions between different atomic energy levels. Those transitions emit light of very specific frequencies, characteristic of the chemical element itself. Therefore, the flame takes on a color that is primarily determined by the chemical element of the substance put into the flame. The flame test is a relatively easy experiment to set up, and thus is often demonstrated or carried out in science classes in schools.
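
The link between transition energy and flame colour is simply lambda = h*c / delta_E. A quick numeric check (constants rounded; 2.10 eV approximates the well-known sodium D transition):

```python
# Numeric check of why each element's flame colour is characteristic:
# the emitted wavelength follows from the transition energy,
# lambda = h*c / delta_E. Constants are rounded.

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def wavelength_nm(delta_e_ev):
    """Wavelength (nm) of light from a transition of delta_e_ev electronvolts."""
    return H * C / (delta_e_ev * EV) * 1e9

na_d = wavelength_nm(2.10)
# na_d is about 590 nm: the familiar yellow of a sodium flame
```

Because the energy levels are fixed per element, so are the wavelengths, which is why the eye (or a spectroscope) can identify the metal from the colour alone.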

Samples are usually held on a platinum wire cleaned repeatedly with hydrochloric acid to remove traces of previous analytes. The compound is usually made into a paste with concentrated hydrochloric acid, as metal halides, being volatile, give better results. Different flames should be tried to avoid wrong data due to "contaminated" flames, or occasionally to verify the accuracy of the color. In high-school chemistry courses, wooden splints are sometimes used, mostly because solutions can be dried onto them and they are inexpensive. Nichrome wire is also sometimes used. When using a splint, one must be careful to wave the splint through the flame rather than holding it in the flame for extended periods, to avoid setting the splint itself on fire. The use of a cotton swab or melamine foam (used in "eraser" cleaning sponges) as a support has also been suggested.

Sodium is a common component or contaminant in many compounds and its spectrum tends to dominate over others. The test flame is often viewed through cobalt blue glass to filter out the yellow of sodium and allow for easier viewing of other metal ions.


The flame test is relatively quick and simple to perform, and can be carried out with the basic equipment found in most chemistry laboratories. However, the range of elements positively detectable under these conditions is small, as the test relies on the subjective experience of the experimenter rather than objective measurements. The test has difficulty detecting small concentrations of some elements, while others may produce too strong a result, which tends to mask fainter colors.

Although the flame test gives only qualitative information, not quantitative data about the proportion of elements in the sample, quantitative data can be obtained by the related techniques of flame photometry or flame emission spectroscopy. Flame atomic absorption spectroscopy instruments, from manufacturers such as PerkinElmer or Shimadzu, can be operated in emission mode according to the instrument manuals.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

The bead test is a traditional part of qualitative inorganic analysis used to test for the presence of certain metals. The oldest is the borax bead test, or blister test, introduced by Berzelius in 1812. Since then, other salts have been used as fluxing agents, such as sodium carbonate or sodium fluoride. The most important one after borax is microcosmic salt, which is the basis of the microcosmic salt bead test.

Borax Bead

A small loop is made in the end of a platinum or Nichrome wire (as used in the flame test) and heated in a Bunsen flame until red hot. It is then dipped into powdered borax, and the adhering solid is held in the hottest part of the flame, where it swells up as it loses its water of crystallization and then shrinks, forming a colourless, transparent, glass-like bead (a mixture of sodium metaborate and boric anhydride).

The bead is allowed to cool, moistened (traditionally with the tongue), and dipped into the sample to be tested such that only a tiny amount of the substance adheres to it. If too much substance is used, the bead will become dark and opaque. The bead and adhering substance are then heated in the lower, reducing part of the flame, allowed to cool, and the colour observed. The bead is then heated in the upper, oxidizing part of the flame, allowed to cool, and the colour observed again.

Characteristic coloured beads are produced with salts of copper, iron, chromium, manganese, cobalt and nickel. After the test, the bead is removed by heating it to fusion point, and plunging it into a vessel of water.
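The characteristic colours for those six metals can be summarized as a small two-column lookup (oxidizing flame vs. reducing flame). These are approximate textbook values; exact hues depend on concentration and on whether the bead is viewed hot or cold:

```python
# Approximate textbook borax-bead colours in the oxidizing and reducing
# parts of the flame. Exact hues vary with concentration and with
# whether the bead is observed hot or cold.
BEAD_COLORS = {
    "copper":    {"oxidizing": "blue-green",   "reducing": "opaque red"},
    "iron":      {"oxidizing": "yellow-brown", "reducing": "pale green"},
    "chromium":  {"oxidizing": "green",        "reducing": "green"},
    "manganese": {"oxidizing": "violet",       "reducing": "colorless"},
    "cobalt":    {"oxidizing": "deep blue",    "reducing": "deep blue"},
    "nickel":    {"oxidizing": "brown",        "reducing": "grey"},
}

def bead_color(metal, flame):
    """Look up the expected bead colour for a metal in a given flame zone."""
    return BEAD_COLORS[metal][flame]

print(bead_color("cobalt", "oxidizing"))  # -> deep blue
```

Note that some metals (cobalt, chromium) give the same colour in both flame zones, while others (manganese, copper) change markedly, which is why both the reducing and oxidizing observations are made.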

Afterword by the Blog Author

After a full year of high school chemistry, the blog author took a third semester of chemistry entirely dedicated to qualitative analysis.  It was a great opportunity to practice scientific methods at a high level.  The course develops a fondness for platinum wire and cobalt blue glass, yes, but it also leads the student to understand how much accurate results depend on lab bench protocols and clean glassware.  I went from being afraid of lab work to taking justifiable pride in accurate qualitative work.  I’m sorry most modern chemists aren’t exposed to this valuable experience.

Tuesday, April 18, 2017

Rockets with Solid Propellant

A solid-propellant rocket or solid rocket is a rocket with a rocket engine that uses solid propellants (fuel/oxidizer). The earliest rockets were solid-fuel rockets powered by gunpowder; they were used in warfare by the Chinese, Indians, Mongols and Persians, as early as the 13th century.

                           Space Shuttle launch with two solid rocket boosters

All rockets used some form of solid or powdered propellant until the 20th century, when liquid-propellant rockets offered more efficient and controllable alternatives. Solid rockets are still used today in model rockets and in larger applications for their simplicity and reliability.

Since solid-fuel rockets can remain in storage for long periods, and then reliably launch on short notice, they have been frequently used in military applications such as missiles. The lower performance of solid propellants (as compared to liquids) does not favor their use as primary propulsion in modern medium-to-large launch vehicles customarily used to orbit commercial satellites and launch major space probes. Solids are, however, frequently used as strap-on boosters to increase payload capacity or as spin-stabilized add-on upper stages when higher-than-normal velocities are required. Solid rockets are used as light launch vehicles for low Earth orbit (LEO) payloads under 2 tons or escape payloads up to 500 kilograms (1,100 lb).

Basic Concepts

A simple solid rocket motor consists of a casing, nozzle, grain (propellant charge), and igniter.

The grain behaves like a solid mass, burning in a predictable fashion and producing exhaust gases. The nozzle dimensions are calculated to maintain a design chamber pressure, while producing thrust from the exhaust gases.

Once ignited, a simple solid rocket motor cannot be shut off, because it contains all the ingredients necessary for combustion within the chamber in which they are burned. More advanced solid rocket motors can not only be throttled but also be extinguished and then re-ignited by controlling the nozzle geometry or through the use of vent ports. Also, pulsed rocket motors that burn in segments and that can be ignited upon command are available.

Modern designs may also include a steerable nozzle for guidance, avionics, recovery hardware (parachutes), self-destruct mechanisms, APUs, controllable tactical motors, controllable divert and attitude control motors, and thermal management materials.


Design begins with the total impulse required, which determines the fuel/oxidizer mass. Grain geometry and chemistry are then chosen to satisfy the required motor characteristics.
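This first sizing step can be sketched numerically: total impulse equals propellant mass times standard gravity times specific impulse, so the required mass falls out directly once an Isp is assumed. The figures below are illustrative, not taken from any particular motor:

```python
# Sketch of the first sizing step: the required total impulse fixes the
# propellant mass once a specific impulse (Isp) is assumed.
#   total_impulse = propellant_mass * g0 * Isp
# so:
#   propellant_mass = total_impulse / (g0 * Isp)
# The numbers below are illustrative, not from any particular motor.

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(total_impulse_ns, isp_s):
    """Propellant mass (kg) for a required total impulse (N*s) at a given Isp (s)."""
    return total_impulse_ns / (G0 * isp_s)

# e.g. 1.2 MN*s of total impulse at a typical solid-propellant Isp of ~240 s
m = propellant_mass(1.2e6, 240.0)
print(f"{m:.0f} kg")  # -> 510 kg
```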

The following are chosen or solved simultaneously. The results are exact dimensions for grain, nozzle, and case geometries:

  • The grain burns at a predictable rate, given its surface area and chamber pressure.
  • The chamber pressure is determined by the nozzle orifice diameter and grain burn rate.
  • Allowable chamber pressure is a function of casing design.
  • The length of burn time is determined by the grain "web thickness".
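The first two bullet points above are commonly modeled with Saint-Robert's (Vieille's) law, r = a·pⁿ, where r is the linear burn rate and p the chamber pressure; mass generation is then density times burn surface times burn rate. The coefficients and grain figures below are invented for illustration:

```python
# The burn-rate / chamber-pressure coupling in the list above, sketched
# with Saint-Robert's (Vieille's) law: r = a * p**n, where r is the
# linear burn rate and p the chamber pressure. The coefficients and
# grain figures below are invented for illustration.

def burn_rate(p_mpa, a=5.0, n=0.3):
    """Linear burn rate (mm/s) at chamber pressure p (MPa)."""
    return a * p_mpa ** n

def mass_flow(p_mpa, burn_area_m2, density_kg_m3):
    """Propellant gas generation rate (kg/s) = density * burn area * burn rate."""
    r_m_s = burn_rate(p_mpa) / 1000.0  # mm/s -> m/s
    return density_kg_m3 * burn_area_m2 * r_m_s

# Doubling the burn surface doubles the gas generation rate at the same
# pressure -- the runaway behind the failure modes described below, where
# a grain fracture suddenly exposes extra surface area.
print(mass_flow(7.0, burn_area_m2=0.5, density_kg_m3=1800.0))
```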

The grain may or may not be bonded to the casing. Case-bonded motors are more difficult to design, since the deformation of the case and of the grain in flight must be compatible.

Common modes of failure in solid rocket motors include fracture of the grain, failure of case bonding, and air pockets in the grain. All of these produce an instantaneous increase in burn surface area and a corresponding increase in exhaust gas production rate and pressure, which may rupture the casing.

Another failure mode is casing seal failure. Seals are required in casings that have to be opened to load the grain. Once a seal fails, hot gas will erode the escape path and result in failure. This was the cause of the Space Shuttle Challenger disaster.