Thursday, May 31, 2018

Regrowing Brain Tissue

Biomaterial Developed at UCLA Helps
Regrow Brain Tissue after Stroke in Mice
Gel suppresses scarring, creates scaffolding for new neurons and blood vessels
By Leigh Hopper, UCLA

May 21, 2018  -- A new stroke-healing gel created by UCLA researchers helped regrow neurons and blood vessels in mice whose brains had been damaged by strokes. The finding is reported May 21 in Nature Materials.

“We tested this in laboratory mice to determine if it would repair the brain and lead to recovery in a model of stroke,” said Dr. S. Thomas Carmichael, professor of neurology at the David Geffen School of Medicine at UCLA. “The study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”

The results suggest that such an approach could some day be used to treat people who have had a stroke, said Tatiana Segura, a former professor of chemical and biomolecular engineering at UCLA who collaborated on the research. Segura is now a professor at Duke University.

The brain has a limited capacity for recovery after stroke. Unlike the liver, skin and some other organs, the brain does not regenerate new connections, blood vessels or tissue structures after it is damaged. Instead, dead brain tissue is absorbed, which leaves a cavity devoid of blood vessels, neurons or axons — the thin nerve fibers that project from neurons.

To see if healthy tissue surrounding the cavity could be coaxed into healing the stroke injury, Segura engineered a hydrogel that, when injected into the cavity, thickens to create a scaffolding into which blood vessels and neurons can grow. The gel is infused with medications that stimulate blood vessel growth and suppress inflammation, since inflammation results in scars and impedes functional tissue from regrowing.

After 16 weeks, the stroke cavities contained regenerated brain tissue, including new neuronal connections — a result that had not been seen before. The mice’s ability to reach for food improved, a sign of improved motor behavior, although the exact mechanism for the improvement wasn’t clear.

“The new axons could actually be working,” Segura said. “Or the new tissue could be improving the performance of the surrounding, unharmed brain tissue.”

The gel was eventually absorbed by the body, leaving behind only new tissue.

The research was designed to explore recovery in acute stroke, the period immediately following a stroke — in mice, that period lasts five days; in humans, it’s two months. Next, Carmichael and Segura plan to investigate whether brain tissue can be regenerated in mice long after the stroke injury.  More than 6 million Americans are living with long-term effects of stroke, which is known as chronic stroke.

The other authors of the paper are Lina Nih and Shiva Gojgini, both of UCLA.

The study was supported by the National Institutes of Health.

Wednesday, May 30, 2018

The Autonomic Nervous System

The autonomic nervous system (ANS), formerly the vegetative nervous system, is a division of the peripheral nervous system that supplies smooth muscle and glands, and thus influences the function of internal organs. The autonomic nervous system is a control system that acts largely unconsciously and regulates bodily functions such as the heart rate, digestion, respiratory rate, pupillary response, urination, and sexual arousal. This system is the primary mechanism in control of the fight-or-flight response.

Within the brain, the autonomic nervous system is regulated by the hypothalamus. Autonomic functions include control of respiration, cardiac regulation (the cardiac control center), vasomotor activity (the vasomotor center), and certain reflex actions such as coughing, sneezing, swallowing and vomiting. Those are then subdivided into other areas and are also linked to ANS subsystems and nervous systems external to the brain. The hypothalamus, just above the brain stem, acts as an integrator for autonomic functions, receiving ANS regulatory input from the limbic system to do so.

The autonomic nervous system has three branches: the sympathetic nervous system, the parasympathetic nervous system and the enteric nervous system. Some textbooks do not include the enteric nervous system as part of this system. The sympathetic nervous system is often considered the "fight or flight" system, while the parasympathetic nervous system is often considered the "rest and digest" or "feed and breed" system. In many cases, both of these systems have "opposite" actions where one system activates a physiological response and the other inhibits it. An older simplification of the sympathetic and parasympathetic nervous systems as "excitatory" and "inhibitory" was overturned due to the many exceptions found. A more modern characterization is that the sympathetic nervous system is a "quick response mobilizing system" and the parasympathetic is a "more slowly activated dampening system", but even this has exceptions, such as in sexual arousal and orgasm, wherein both play a role.

There are inhibitory and excitatory synapses between neurons. A third subsystem of neurons, named non-noradrenergic, non-cholinergic neurons because they use nitric oxide as a neurotransmitter, has relatively recently been described and found to be integral to autonomic function, in particular in the gut and the lungs.

Although the ANS is also known as the visceral nervous system, it is connected only with the motor side. Most autonomic functions are involuntary, but they can often work in conjunction with the somatic nervous system, which provides voluntary control.

Structure

The autonomic nervous system is divided into the sympathetic nervous system and parasympathetic nervous system. The sympathetic division emerges from the spinal cord in the thoracic and lumbar areas, terminating around L2-3. The parasympathetic division has craniosacral “outflow”, meaning that the neurons begin at the cranial nerves (specifically the oculomotor nerve, facial nerve, glossopharyngeal nerve and vagus nerve) and sacral (S2-S4) spinal cord.

The autonomic nervous system is unique in that it requires a sequential two-neuron efferent pathway; the preganglionic neuron must first synapse onto a postganglionic neuron before innervating the target organ. The preganglionic, or first, neuron will begin at the “outflow” and will synapse at the postganglionic, or second, neuron’s cell body. The postganglionic neuron will then synapse at the target organ.

Tuesday, May 29, 2018

Most Vitamins and Minerals Ineffective

Most Popular Vitamin and Mineral
Supplements Provide No Health Benefit
The most commonly consumed vitamin and mineral supplements provide no consistent health benefit or harm, suggests a new study led by researchers at St. Michael's Hospital and the University of Toronto.

May 28, 2018 – Published today in the Journal of the American College of Cardiology, the systematic review of existing data and single randomized controlled trials published in English from January 2012 to October 2017 found that multivitamins, vitamin D, calcium and vitamin C -- the most common supplements -- showed no advantage or added risk in the prevention of cardiovascular disease, heart attack, stroke or premature death. Generally, vitamin and mineral supplements are taken to add to nutrients that are found in food.

"We were surprised to find so few positive effects of the most common supplements that people consume," said Dr. David Jenkins*, the study's lead author. "Our review found that if you want to use multivitamins, vitamin D, calcium or vitamin C, it does no harm -- but there is no apparent advantage either."

The study found folic acid alone and B-vitamins with folic acid may reduce cardiovascular disease and stroke. Meanwhile, niacin and antioxidants showed a very small effect that might signify an increased risk of death from any cause.

"These findings suggest that people should be conscious of the supplements they're taking and ensure they're applicable to the specific vitamin or mineral deficiencies they have been advised of by their healthcare provider," Dr. Jenkins said.

His team reviewed supplement data that included A, B1, B2, B3 (niacin), B6, B9 (folic acid), C, D and E; and β-carotene; calcium; iron; zinc; magnesium; and selenium. The term 'multivitamin' in this review was used to describe supplements that include most vitamins and minerals, rather than a select few.

"In the absence of significant positive data -- apart from folic acid's potential reduction in the risk of stroke and heart disease -- it's most beneficial to rely on a healthy diet to get your fill of vitamins and minerals," Dr. Jenkins said. "So far, no research on supplements has shown us anything better than healthy servings of less processed plant foods including vegetables, fruits and nuts."

Story Source:  Materials provided by St. Michael's Hospital. Note: Content may be edited for style and length.

Journal Reference:  David J.A. Jenkins, J. David Spence, Edward L. Giovannucci, Young-in Kim, Robert Josse, Reinhold Vieth, Sonia Blanco Mejia, Effie Viguiliouk, Stephanie Nishi, Sandhya Sahye-Pudaruth, Melanie Paquette, Darshna Patel, Sandy Mitchell, Meaghan Kavanagh, Tom Tsirakis, Lina Bachiri, Atherai Maran, Narmada Umatheva, Taylor McKay, Gelaine Trinidad, Daniel Bernstein, Awad Chowdhury, Julieta Correa-Betanzo, Gabriella Del Principe, Anisa Hajizadeh, Rohit Jayaraman, Amy Jenkins, Wendy Jenkins, Ruben Kalaichandran, Geithayini Kirupaharan, Preveena Manisekaran, Tina Qutta, Ramsha Shahid, Alexis Silver, Cleo Villegas, Jessica White, Cyril W.C. Kendall, Sathish C. Pichika, John L. Sievenpiper. Supplemental Vitamins and Minerals for CVD Prevention and Treatment. Journal of the American College of Cardiology, 2018; 71 (22): 2570 DOI: 10.1016/j.jacc.2018.04.020

Monday, May 28, 2018

International Union for Conservation of Nature (IUCN)

The International Union for Conservation of Nature (IUCN; officially International Union for Conservation of Nature and Natural Resources) is an international organization working in the field of nature conservation and sustainable use of natural resources. It is involved in data gathering and analysis, research, field projects, advocacy, and education. IUCN's mission is to "influence, encourage and assist societies throughout the world to conserve nature and to ensure that any use of natural resources is equitable and ecologically sustainable".

Over the past decades, IUCN has widened its focus beyond conservation ecology and now incorporates issues related to sustainable development in its projects. Unlike many other international environmental organisations, IUCN does not itself aim to mobilize the public in support of nature conservation. It tries to influence the actions of governments, business and other stakeholders by providing information and advice, and through building partnerships. The organization is best known to the wider public for compiling and publishing the IUCN Red List of Threatened Species, which assesses the conservation status of species worldwide.

IUCN has a membership of over 1400 governmental and non-governmental organizations. Some 16,000 scientists and experts participate in the work of IUCN commissions on a voluntary basis. It employs approximately 1000 full-time staff in more than 50 countries. Its headquarters are in Gland, Switzerland.

IUCN has observer and consultative status at the United Nations, and plays a role in the implementation of several international conventions on nature conservation and biodiversity. It was involved in establishing the World Wide Fund for Nature and the World Conservation Monitoring Centre. In the past, IUCN has been criticized for placing the interests of nature over those of indigenous peoples. In recent years, its closer relations with the business sector have caused controversy.

IUCN was established in 1948. It was previously called the International Union for the Protection of Nature (1948–1956) and the World Conservation Union (1990–2008).

Establishment of the Organization

IUCN was established on 5 October 1948, in Fontainebleau, France, when representatives of governments and conservation organizations signed a formal act constituting the International Union for the Protection of Nature (IUPN). The initiative to set up the new organisation came from UNESCO and especially from its first Director General, the British biologist Julian Huxley.

The objectives of the new Union were to encourage international cooperation in the protection of nature, to promote national and international action and to compile, analyse and distribute information. At the time of its founding IUPN was the only international organisation focusing on the entire spectrum of nature conservation (an international organisation for the protection of birds, now BirdLife International, had been established in 1922.)

Some key dates in the growth and development of IUCN:

  • 1948: International Union for the Protection of Nature (IUPN) established
  • 1956: Name changed to the International Union for the Conservation of Nature and Natural Resources (IUCN)
  • 1959: UNESCO decides to create an international list of Nature Parks and equivalent reserves, and the United Nations Secretary General asks the IUCN to prepare this list
  • 1961: The World Wildlife Fund set up as a complementary organisation to focus on fund raising, public relations, and increasing public support for nature conservation
  • 1969: IUCN obtains a grant from the Ford Foundation which enables it to boost its international secretariat.
  • 1972: UNESCO adopts the Convention Concerning the Protection of World Cultural and Natural Heritage and the IUCN is invited to provide technical evaluations and monitoring
  • 1974: IUCN is involved in obtaining the agreement of its members to sign a Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), whose secretariat was originally lodged with the IUCN
  • 1975: The Convention on Wetlands of International Importance (Ramsar Convention) comes into force, and its secretariat is administered from the IUCN's headquarters
  • 1980: IUCN (together with the United Nations Environment Programme and the World Wide Fund for Nature) collaborate with UNESCO to publish a World Conservation Strategy
  • 1982: Following IUCN preparation and efforts, the United Nations General Assembly adopts the World Charter for Nature
  • 1990: Began using the name World Conservation Union as the official name, while continuing to use IUCN as its abbreviation.
  • 1991: IUCN (together with United Nations Environment Programme and the World Wide Fund for Nature) publishes Caring for the Earth
  • 2003: Establishment of the IUCN Business and Biodiversity Program
  • 2008: Stopped using World Conservation Union as its official name and reverted to International Union for Conservation of Nature
  • 2012: IUCN publishes list of The world's 100 most threatened species.
  • 2016: Created a new IUCN membership category for indigenous peoples’ organisations.

Sunday, May 27, 2018

"Deep Learning" Explained


Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.

Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design, where they have produced results comparable to and in some cases superior to human experts.

Deep learning models are vaguely inspired by information processing and communication patterns in biological nervous systems, yet they differ in various ways from the structural and functional properties of biological brains, which makes them incompatible with neuroscience evidence.

Definition

Deep learning is a class of machine learning algorithms that:

  • use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output from the previous layer as input;
  • learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners; and
  • learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
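
To make this definition concrete, here is a minimal Python sketch (the layer sizes and the random, untrained weights are assumptions made purely for illustration) of a cascade in which each nonlinear layer transforms the previous layer's output into a new representation:

    import numpy as np

    # A cascade of nonlinear processing layers: each layer's input is the
    # previous layer's output, giving one representation per level of abstraction.
    rng = np.random.default_rng(0)
    layer_sizes = [784, 256, 64, 10]        # e.g. pixels -> features -> concepts -> classes
    weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        representations = [x]
        for W in weights:
            x = np.tanh(x @ W)              # nonlinear transformation of the previous output
            representations.append(x)
        return representations

    levels = forward(rng.random((5, 784)))  # five example inputs
    print([r.shape for r in levels])        # one array per level of the hierarchy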

Overview of Deep Learning

Most modern deep learning models are based on an artificial neural network, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.

In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own. (Of course, this does not completely obviate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.)

The "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth greater than two. A CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function, so beyond that, more layers do not add to the function-approximation ability of the network. The extra layers help in learning features.
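
As a worked example of the counting rule above, this short Python snippet (the layer sizes are an arbitrary assumption) computes the CAP depth of a feedforward network as the number of hidden layers plus one:

    # CAP depth of a feedforward network: hidden layers plus the parameterized output layer.
    layer_sizes = [784, 256, 64, 10]     # input, two hidden layers, output
    hidden_layers = len(layer_sizes) - 2
    cap_depth = hidden_layers + 1        # = 3 here, i.e. "deep" by the CAP > 2 convention
    print(cap_depth)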

Deep learning architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance.

For supervised learning tasks, deep learning methods obviate feature engineering, by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation.

Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.

History

The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to Artificial Neural Networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons.

The first general, working learning algorithm for supervised, deep, feedforward, multilayer perceptrons was published by Alexey Ivakhnenko and Lapa in 1965. A 1971 paper described a deep network with 8 layers trained by the group method of data handling algorithm.

Other deep learning working architectures, specifically those built for computer vision, began with the Neocognitron introduced by Kunihiko Fukushima in 1980. In 1989, Yann LeCun et al. applied the standard backpropagation algorithm, which had been around as the reverse mode of automatic differentiation since 1970, to a deep neural network with the purpose of recognizing handwritten ZIP codes on mail. While the algorithm worked, training required 3 days.

By 1991 such systems were used for recognizing isolated 2-D hand-written digits, while recognizing 3-D objects was done by matching 2-D images with a handcrafted 3-D object model. Weng et al. suggested that a human brain does not use a monolithic 3-D object model and in 1992 they published Cresceptron, a method for performing 3-D object recognition in cluttered scenes. Cresceptron is a cascade of layers similar to Neocognitron. But while Neocognitron required a human programmer to hand-merge features, Cresceptron learned an open number of features in each layer without supervision, where each feature is represented by a convolution kernel. Cresceptron segmented each learned object from a cluttered scene through back-analysis through the network. Max pooling, now often adopted by deep neural networks (e.g. ImageNet tests), was first used in Cresceptron to reduce the position resolution by a factor of (2x2) to 1 through the cascade for better generalization.

In 1994, André de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a three-layer self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. Each layer in the feature extraction module extracted features with growing complexity relative to the previous layer.

In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton. Many factors contribute to the slow speed, including the vanishing gradient problem analyzed in 1991 by Sepp Hochreiter.

Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of ANNs' computational cost and a lack of understanding of how the brain wires its biological networks.

Both shallow and deep learning (e.g., recurrent nets) of ANNs have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power.

Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI studied deep neural networks in speech and speaker recognition. Larry Heck's speaker recognition team achieved the first significant success with deep neural networks in speech processing in the 1998 National Institute of Standards and Technology Speaker Recognition evaluation. While SRI experienced success with deep neural networks in speaker recognition, they were unsuccessful in demonstrating similar success in speech recognition. The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.

Many aspects of speech recognition were taken over by a deep learning method called Long short-term memory (LSTM), a recurrent neural network published by Hochreiter and Schmidhuber in 1997. LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks that require memories of events that happened thousands of discrete time steps before, which is important for speech. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks. Later it was combined with connectionist temporal classification (CTC) in stacks of LSTM RNNs. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search.

In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation. The papers referred to learning for deep belief nets.
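
The layer-at-a-time recipe can be sketched in a few lines of Python. The sketch below substitutes simple tied-weight autoencoders for the restricted Boltzmann machines used by Hinton and colleagues, and its layer sizes, learning rate, and random data are illustrative assumptions rather than details from the original papers; supervised fine-tuning of the assembled stack would follow:

    import numpy as np

    # Greedy layer-wise pretraining: each layer is trained unsupervised to
    # reconstruct the output of the layer below it, then frozen and stacked.
    rng = np.random.default_rng(0)

    def train_autoencoder_layer(X, hidden, lr=0.01, epochs=50):
        """Train one tied-weight autoencoder on X; return the encoder weights."""
        W = rng.normal(0, 0.1, (X.shape[1], hidden))
        for _ in range(epochs):
            H = np.tanh(X @ W)                                # encode
            R = H @ W.T                                       # decode with tied weights
            err = R - X                                       # reconstruction error
            dW = X.T @ ((err @ W) * (1 - H**2)) + err.T @ H   # gradient for the tied weights
            W -= lr * dW / len(X)
        return W

    X = rng.random((200, 64))              # unlabeled data
    stack, inp = [], X
    for hidden in (32, 16, 8):             # pretrain one layer at a time
        W = train_autoencoder_layer(inp, hidden)
        stack.append(W)
        inp = np.tanh(inp @ W)             # the next layer trains on these codes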

Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks have steadily improved. Convolutional neural networks (CNNs) were superseded for ASR by CTC for LSTM, but are more successful in computer vision.

The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010.

The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets (DNN) might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition, eventually leading to pervasive and dominant use in that industry. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.

In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.

Advances in hardware enabled the renewed interest. In 2009, Nvidia was involved in what was called the “big bang” of deep learning, “as deep-learning neural networks were trained with Nvidia graphics processing units (GPUs).” That year, Google Brain used Nvidia GPUs to create capable DNNs. While there, Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times. In particular, GPUs are well-suited for the matrix/vector math involved in machine learning. GPUs speed up training algorithms by orders of magnitude, reducing running times from weeks to days. Specialized hardware and algorithm optimizations can be used for efficient processing.

Deep learning revolution


In 2012, a team led by George Dahl won the "Merck Molecular Activity Challenge" using multi-task deep neural networks to predict the biomolecular target of one drug. In 2014, Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products and drugs and won the "Tox21 Data Challenge" of NIH, FDA and NCATS.

Significant additional impacts in image or object recognition were felt from 2011 to 2012. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, fast implementations of CNNs with max-pooling on GPUs in the style of Ciresan and colleagues were needed to progress on computer vision. In 2011, this approach achieved for the first time superhuman performance in a visual pattern recognition contest. Also in 2011, it won the ICDAR Chinese handwriting contest, and in May 2012, it won the ISBI image segmentation contest. Until 2011, CNNs did not play a major role at computer vision conferences, but in June 2012, a paper by Ciresan et al. at the leading conference CVPR showed how max-pooling CNNs on GPU can dramatically improve many vision benchmark records. In October 2012, a similar system by Krizhevsky et al. won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. In November 2012, Ciresan et al.'s system also won the ICPR contest on analysis of large medical images for cancer detection, and in the following year also the MICCAI Grand Challenge on the same topic. In 2013 and 2014, the error rate on the ImageNet task using deep learning was further reduced, following a similar trend in large-scale speech recognition. The Wolfram Image Identification project publicized these improvements.

Image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.

Some researchers assess that the October 2012 ImageNet victory anchored the start of a "deep learning revolution" that has transformed the AI industry.

Saturday, May 26, 2018

Unsupervised Machine Learning

Unsupervised machine learning is the machine learning task of inferring a function to describe hidden structure from "unlabeled" data (a classification or categorization is not included in the observations). Since the examples given to the learner are unlabeled, there is no evaluation of the accuracy of the structure that is output by the relevant algorithm—which is one way of distinguishing unsupervised learning from supervised learning and reinforcement learning.

A central case of unsupervised learning is the problem of density estimation in statistics, though unsupervised learning encompasses many other problems (and solutions) involving summarizing and explaining key features of the data.

Approaches

Approaches to unsupervised learning include:

  • Clustering
    • k-means
    • mixture models
    • hierarchical clustering,
  • Anomaly detection
  • Neural Networks
    • Autoencoders
    • Deep Belief Nets
    • Hebbian Learning
    • Generative Adversarial Networks
    • Self-organizing map
  • Approaches for learning latent variable models such as
    • Expectation–maximization algorithm (EM)
    • Method of moments
    • Blind signal separation techniques, e.g.,
      • Principal component analysis,
      • Independent component analysis,
      • Non-negative matrix factorization,
      • Singular value decomposition
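
As a concrete example of the clustering approaches listed above, here is a minimal Python implementation of k-means on unlabeled toy data (the simulated blobs and the choice of three clusters are assumptions for illustration only):

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Plain k-means: assign points to the nearest centroid, then recompute centroids."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # assignment step: index of the closest centroid for each point
            labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
            # update step: mean of the points assigned to each centroid
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return centroids, labels

    # toy data: three unlabeled blobs
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in ([0, 0], [3, 3], [0, 3])])
    centroids, labels = kmeans(X, k=3)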

In Neural Networks

The classical example of unsupervised learning in the study of both natural and artificial neural networks is subsumed by Donald Hebb's principle, that is, neurons that fire together wire together. In Hebbian learning, the connection is reinforced irrespective of an error, but is exclusively a function of the coincidence of action potentials between the two neurons. A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity or STDP). Hebbian Learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
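
A minimal Python sketch of this kind of update follows (the learning rate and input statistics are arbitrary assumptions). It also shows Oja's rule, a commonly used stabilized variant, because the plain Hebbian rule lets weights grow without bound:

    import numpy as np

    # Hebbian-style learning: a weight grows when its pre- and postsynaptic units
    # are co-active; there is no error signal anywhere in the update.
    rng = np.random.default_rng(0)
    eta = 0.01
    X = rng.multivariate_normal([0, 0], [[3.0, 1.0], [1.0, 1.0]], size=2000)  # presynaptic inputs
    w = rng.normal(0, 0.1, 2)

    for x in X:
        y = w @ x                      # postsynaptic activity
        # plain Hebb would be: w += eta * y * x
        w += eta * y * (x - y * w)     # Oja's rule: Hebbian term plus a decay that bounds w

    print(w)   # aligns with the leading principal direction of the inputs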

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same clusters by means of a user-defined constant called the vigilance parameter. ART networks are also used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing. The first version of ART was "ART1", developed by Carpenter and Grossberg (1988).
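
The following Python sketch shows a single SOM update step (the map size, learning rate, and neighborhood radius are assumptions chosen for illustration): the best-matching unit and its neighbors on the map move toward the input, which over many inputs produces the topographic organization described above:

    import numpy as np

    rng = np.random.default_rng(0)
    map_h, map_w, dim = 10, 10, 3
    weights = rng.random((map_h, map_w, dim))    # one weight vector per map node
    grid = np.stack(np.meshgrid(np.arange(map_h), np.arange(map_w), indexing="ij"), axis=-1)

    def som_step(x, weights, lr=0.5, radius=2.0):
        """Move the best-matching unit and its map neighbors toward the input x."""
        dists = np.linalg.norm(weights - x, axis=-1)               # distance of x to every node
        bmu = np.unravel_index(np.argmin(dists), dists.shape)      # best-matching unit
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)  # squared distance on the map
        h = np.exp(-grid_dist2 / (2 * radius ** 2))                # neighborhood function
        weights += lr * h[..., None] * (x - weights)               # nearby nodes move the most
        return weights

    for x in rng.random((500, dim)):             # a stream of unlabeled inputs
        weights = som_step(x, weights)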

Method of Moments

One of the statistical approaches for unsupervised learning is the method of moments. In the method of moments, the unknown parameters (of interest) in the model are related to the moments of one or more random variables, and thus, these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first and second order moments. For a random vector, the first order moment is the mean vector, and the second order moment is the covariance matrix (when the mean is zero). Higher order moments are usually represented using tensors which are the generalization of matrices to higher orders as multi-dimensional arrays.
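
A short Python example of estimating these basic moments empirically from samples (the simulated data and its parameters are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.multivariate_normal(mean=[1.0, -2.0], cov=[[2.0, 0.5], [0.5, 1.0]], size=10_000)

    mean_vec = X.mean(axis=0)                    # first-order moment: the mean vector
    Xc = X - mean_vec
    cov_mat = (Xc.T @ Xc) / (len(X) - 1)         # second-order central moment: covariance matrix
    third = np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / len(X)   # third-order moment tensor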

In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is topic modeling, which is a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document. In topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed. It has been shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.

The Expectation–maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed to converge to the true unknown parameters of the model. In contrast, for the method of moments, global convergence is guaranteed under some conditions.

Examples

Behavioral-based detection in network security has become a good application area for a combination of supervised and unsupervised machine learning. This is because the amount of data a human security analyst would need to review (measured in terabytes per day) makes it impossible to find patterns and anomalies manually. According to Giora Engel, co-founder of LightCyber, in a Dark Reading article, "The great promise machine learning holds for the security industry is its ability to detect advanced and unknown attacks—particularly those leading to data breaches." The basic premise is that a motivated attacker will find their way into a network (generally by compromising a user's computer or network account through phishing, social engineering or malware). The security challenge then becomes finding the attacker by their operational activities, which include reconnaissance, lateral movement, command & control and exfiltration. These activities—especially reconnaissance and lateral movement—stand in contrast to an established baseline of "normal" or "good" activity for each user and device on the network. The role of machine learning is to create ongoing profiles for users and devices and then find meaningful anomalies.
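
As an illustrative sketch only (not LightCyber's product or any particular vendor's method), the following Python snippet builds a simple per-user baseline from historical activity counts and flags days that deviate strongly from it:

    import numpy as np

    def fit_baseline(history):
        """history: (days, features) counts, e.g. logins, hosts contacted, MB sent out."""
        return history.mean(axis=0), history.std(axis=0) + 1e-9

    def anomaly_score(day, mean, std):
        """Largest absolute z-score across features; large values suggest unusual behavior."""
        return np.max(np.abs((day - mean) / std))

    rng = np.random.default_rng(0)
    history = rng.poisson(lam=[20, 5, 300], size=(90, 3)).astype(float)   # 90 "normal" days
    mean, std = fit_baseline(history)
    suspicious_day = np.array([22.0, 60.0, 320.0])     # contacts far more hosts than usual
    print(anomaly_score(suspicious_day, mean, std))    # large score -> flag for an analyst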

Related Topics

  • Cluster analysis
  • Anomaly detection
  • Expectation–maximization algorithm
  • Generative topographic map
  • Meta-learning (computer science)
  • Multivariate analysis
  • Radial basis function network
  • Hebbian Theory

 
Note by the Blog Author

The topic “deep learning” will call up a different location in Wikipedia:
 
https://en.wikipedia.org/wiki/Deep_learning

Friday, May 25, 2018

The Irish Potato Famine

The Great Famine (Irish: an Gorta Mór) or the Great Hunger was a period of mass starvation, disease, and emigration in Ireland between 1845 and 1849. It is sometimes referred to, mostly outside Ireland, as the Irish Potato Famine, because about two-fifths of the population was solely reliant on this cheap crop for a number of historical reasons. During the famine, about one million people died and a million more emigrated from Ireland, causing the island's population to fall by between 20% and 25%.

The proximate cause of famine was potato blight, which ravaged potato crops throughout Europe during the 1840s. However, the impact in Ireland was disproportionate, as one third of the population was dependent on the potato for a range of ethnic, religious, political, social, and economic reasons, such as land acquisition, absentee landlords, and the Corn Laws, which all contributed to the disaster to varying degrees and remain the subject of intense historical debate.

The famine was a watershed in the history of Ireland, which was then part of the United Kingdom of Great Britain and Ireland. The famine and its effects permanently changed the island's demographic, political, and cultural landscape. For both the native Irish and those in the resulting diaspora, the famine entered folk memory and became a rallying point for Irish nationalist movements. The already strained relations between many Irish and the British Crown soured further, heightening ethnic and sectarian tensions, and boosting Irish nationalism and republicanism in Ireland and among Irish emigrants in the United States and elsewhere.

 

Causes and Contributing Factors

Since the Acts of Union in January 1801, Ireland had been part of the United Kingdom. Executive power lay in the hands of the Lord Lieutenant of Ireland and Chief Secretary for Ireland, who were appointed by the British government. Ireland sent 105 members of parliament to the House of Commons of the United Kingdom, and Irish representative peers elected 28 of their own number to sit for life in the House of Lords. Between 1832 and 1859, 70% of Irish representatives were landowners or the sons of landowners.

 

In the 40 years that followed the union, successive British governments grappled with the problems of governing a country which had, as Benjamin Disraeli put it in 1844, "a starving population, an absentee aristocracy, an alien established Protestant church, and in addition the weakest executive in the world." One historian calculated that, between 1801 and 1845, there had been 114 commissions and 61 special committees enquiring into the state of Ireland, and that "without exception their findings prophesied disaster; Ireland was on the verge of starvation, her population rapidly increasing, three-quarters of her labourers unemployed, housing conditions appalling and the standard of living unbelievably low".

Potato Dependency

The potato was introduced to Ireland as a garden crop of the gentry. By the late 17th century, it had become widespread as a supplementary rather than a principal food because the main diet still revolved around butter, milk, and grain products. However, in the first two decades of the 18th century, it became a base food of the poor, especially in winter. Furthermore, a disproportionate share of the potatoes grown in Ireland were of a single variety, the Irish Lumper. The expansion of the economy between 1760 and 1815 saw the potato make inroads into the diet of the people and become a staple food year round for farmers. The large dependency on this single crop, and the lack of genetic variability among the potato plants in Ireland (a monoculture), were two of the reasons why the emergence of Phytophthora infestans had such devastating effects in Ireland and less severe effects elsewhere in Europe.

Potatoes were essential to the development of the cottier system, supporting an extremely cheap workforce, but at the cost of lower living standards. For the labourer, it was essentially a potato wage that shaped the expanding agrarian economy.

The expansion of tillage led to an inevitable expansion of the potato acreage and an expansion of peasant farmers. By 1841, there were over half a million peasant farmers, with 1.75 million dependants. The principal beneficiary of this system was the English consumer.

The Celtic grazing lands of ... Ireland had been used to pasture cows for centuries. The British colonised ... the Irish, transforming much of their countryside into an extended grazing land to raise cattle for a hungry consumer market at home ... The British taste for beef had a devastating impact on the impoverished and disenfranchised people of ... Ireland ... pushed off the best pasture land and forced to farm smaller plots of marginal land, the Irish turned to the potato, a crop that could be grown abundantly in less favorable soil. Eventually, cows took over much of Ireland, leaving the native population virtually dependent on the potato for survival.

The potato was also used extensively as a fodder crop for livestock immediately prior to the famine. Approximately 33% of production, amounting to 5,000,000 short tons (4,500,000 tonnes), was normally used in this way.

Blight in Ireland

Prior to the arrival in Ireland of the disease Phytophthora infestans, commonly known as blight, there were only two main potato plant diseases. One was called "dry rot" or "taint", and the other was a virus known popularly as "curl". Phytophthora infestans is an oomycete (a parasitic, non-photosynthetic organism more closely related to algae than to fungi, and not itself a fungus).

In 1851, the Census of Ireland Commissioners recorded 24 failures of the potato crop going back to 1728, of varying severity. General crop failures, through disease or frost, were recorded in 1739, 1740, 1770, 1800, and 1807. In 1821 and 1822, the potato crop failed in Munster and Connaught. In 1830 and 1831, Mayo, Donegal, and Galway suffered likewise. In 1832, 1833, 1834, and 1836, dry rot and curl caused serious losses, and in 1835 the potato failed in Ulster. Widespread failures throughout Ireland occurred in 1836, 1837, 1839, 1841, and 1844. According to Woodham-Smith, "the unreliability of the potato was an accepted fact in Ireland".

How and when the blight Phytophthora infestans arrived in Europe is still uncertain; however, it almost certainly was not present prior to 1842, and probably arrived in 1844. The origin of the pathogen has been traced to the Toluca Valley of Mexico, whence it spread first within North America and then to Europe. The 1845–46 blight was caused by the HERB-1 strain of the pathogen.

Thursday, May 24, 2018

Proteins Key to Cell Division

How a Cell Knows When To Divide
Research links cell size with commitment to cell division
By Mary L. Martialay

May 23, 2018 -- How does a cell know when to divide? We know that hundreds of genes contribute to a wave of activity linked to cell division, but to generate that wave new research shows that cells must first grow large enough to produce four key proteins in adequate amounts. The study, published today in Cell Systems, offers a path for controlling the balance between cell growth and division, which is implicated in countless diseases, including cancers.

“For years we have known that cells must reach a size threshold prior to cell division, but how cells know when they reach that threshold has been a mystery,” said Catherine Royer, lead author, along with Mike Tyers of the University of Montreal. Royer is Biocomputation and Bioinformatics Constellation Professor and Professor in the Department of Biological Sciences at Rensselaer Polytechnic Institute, and member of the Rensselaer Center for Biotechnology and Interdisciplinary Studies (CBIS). “Something sets the threshold and something senses it. This research establishes the mechanism behind this core machinery in budding yeast cells.”

The research also resolves the question of why cells with access to a nutrient-poor environment divide at a smaller size. Both findings are related to the abundance of the four key proteins required.

“Many diseases include an element of abnormal cell size and growth, and at the moment we have few means of controlling those aspects of cell growth,” said Deepak Vashishth, CBIS director. “This research marks a clear path toward targeting transcription factors to change that outcome. It’s a clear example of how translational medicine gets its start at Rensselaer.”

Royer and her team, which included researchers from Rensselaer and the Université de Montréal, examined yeast cells, which divide by budding. As with most cells, yeast cells must first synthesize the necessary resources and grow in size, a phase of the cell cycle known as G1. About 200 genes must be activated at the end of G1, and the research team examined five proteins—the transcription factors SBF and MBF, the transcriptional repressor Whi5, and the G1 cyclins Cln1 and Cln2—that are collectively required to initiate transcription of those 200 genes.

The researchers used a particle-counting technique to measure the absolute concentration of each of the five proteins present in cells as they grew in size. The technique relies on scanning “Number and Brightness” microscopy, which creates a very small optical volume and gathers data on light emitted from fluorescently tagged proteins in a selected volume of the cell. Calculations based on the relationship between average light intensity and fluctuations in light intensity reveal the number of molecules in that volume.
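
A simplified Python illustration of the idea behind “Number and Brightness” counting follows. The simulated frames and the neglect of detector shot noise are assumptions rather than details of the authors' pipeline; the point is only that the mean intensity and its fluctuations together yield the average number of emitting molecules in the observation volume:

    import numpy as np

    rng = np.random.default_rng(0)
    true_n, brightness = 25, 4.0                   # molecules in the volume, counts per molecule
    n_per_frame = rng.poisson(true_n, size=5000)   # molecules present in each scanned frame
    I = n_per_frame * brightness                   # intensity per frame (shot noise ignored)

    mean_I, var_I = I.mean(), I.var()
    apparent_brightness = var_I / mean_I           # B = sigma^2 / <I>
    apparent_number = mean_I ** 2 / var_I          # N = <I>^2 / sigma^2
    print(apparent_number)                         # close to true_n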

Royer found that as the cells grew in size, molecules of four of the five proteins examined reached a number great enough to bind to the estimated 400 binding sites on the 200 genes the proteins control. Commitment to division was triggered when the cell grew large enough to saturate the binding sites.

“In a small cell, there just weren’t enough of them to bind to all of the sites. As the cell grows, the concentration remains the same, but having the same concentration in a larger cell means that there are more molecules, and eventually enough to bind to the available sites,” said Royer. “It turns out that this system is a simple titration mechanism. It’s very straightforward biochemistry.”
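
A back-of-the-envelope Python sketch of this titration picture shows how a fixed concentration translates into a copy number that only crosses the number of binding sites above a threshold cell volume. The 20 nM concentration and the volume range are illustrative assumptions; only the roughly 400 binding sites come from the article:

    import numpy as np

    AVOGADRO = 6.022e23
    concentration_nM = 20.0                  # assumed constant copy-number concentration
    binding_sites = 400                      # ~400 sites on the ~200 G1/S genes
    volumes_fL = np.linspace(10, 60, 200)    # assumed range of yeast cell volumes, femtoliters

    # molecules = concentration (mol/L) x Avogadro x volume (L); grows linearly with volume
    copies = concentration_nM * 1e-9 * AVOGADRO * volumes_fL * 1e-15
    threshold = volumes_fL[np.argmax(copies >= binding_sites)]
    print(f"commitment size is roughly {threshold:.0f} fL at {concentration_nM} nM")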

The team grew cells in growth medium—a liquid designed to support yeast cell growth—with different kinds of nutrients. When the team examined cells grown in medium with poor nutrients, they discovered that those cells were “up-regulating,” producing more molecules of the four key proteins given their cell size, and therefore triggering commitment to division at a smaller size. The finding explains why cells grown in a nutrient-poor environment are smaller in size.

“It’s counter-intuitive, but at a certain level, it makes sense,” Royer said. “If you’re a yeast cell, and you are in a nutrient-poor environment, your best bet is survival of the colony rather than the individual. And so you divide at a smaller size to support the colony.”

"G1/S Transcription Factor Copy Number is a Growth-Dependent Determinant of Cell Cycle Commitment in Yeast" appears in Cell Systems. Continued research will be funded through the National Science Foundation.

Research on cell size homeostasis fulfills The New Polytechnic, an emerging paradigm for higher education which recognizes that global challenges and opportunities are so great they cannot be adequately addressed by even the most talented person working alone. Rensselaer serves as a crossroads for collaboration — working with partners across disciplines, sectors, and geographic regions — to address complex global challenges, using the most advanced tools and technologies, many of which are developed at Rensselaer. Research at Rensselaer addresses some of the world’s most pressing technological challenges — from energy security and sustainable development to biotechnology and human health. The New Polytechnic is transformative in the global impact of research, in its innovative pedagogy, and in the lives of students at Rensselaer.

Wednesday, May 23, 2018

Emerging Molecular Order

Research Reveals How Order
First Appears in Liquid Crystals
Brown University chemists have shown a technique that can identify regions in a liquid crystal system where molecular order begins to emerge just before the system fully transitions from disordered to ordered states

PROVIDENCE, R.I. [Brown University] — May 22, 2018 -- Liquid crystals undergo a peculiar type of phase change. At a certain temperature, their cigar-shaped molecules go from a disordered jumble to a more orderly arrangement in which they all point more or less in the same direction. LCD televisions take advantage of that phase change to project different colors in moving images.

For years, however, experiments have hinted at another liquid crystal state — an intermediate state between the disordered and ordered states in which order begins to emerge in discrete patches as a system approaches its transition temperature. Now, chemists at Brown University have demonstrated a theoretical framework for detecting that intermediate state and for better understanding how it works.

“People understand the ordered and disordered behaviors very well, but the state where this transition is just about to happen isn’t well understood,” said Richard Stratt, a professor of chemistry at Brown and coauthor of a paper describing the research. “What we’ve come up with is a sort of yardstick to measure whether a system is in this state. It gives us an idea of what to look for in molecular terms to see if the state is present.”

The research, published in the Journal of Chemical Physics, could shed new light not only on liquid crystals, but also molecular motion elsewhere in nature — phenomena such as the protein tangles involved in Alzheimer’s disease, for example. The work was led by Yan Zhao, a Ph.D. student in Stratt’s lab who expects to graduate from Brown this spring.

For the study, the researchers used computer simulations of phase changes in a simplified liquid crystal system that included a few hundred molecules. They used random matrix theory, a statistical framework often used to describe complex or chaotic systems, to study their simulation results. They showed that the theory does a good job of describing the system in both the ordered and disordered states, but fails to describe the transition state. That deviation from the theory can be used as a probe to identify the regions of the material where order is beginning to emerge.

“Once you realize that you have this state where the theory doesn’t work, you can dig in and ask what went wrong,” Stratt said. “That gives us a better idea of what these molecules are doing.”

Random matrix theory predicts that the sums of uncorrelated variables — in this case, the directions in which molecules are pointing — should form a bell curve distribution when plotted on a graph. Stratt and Zhao showed that that’s true of the molecules in liquid crystals when they’re in disordered and ordered states. In the disordered state, the bell curve distribution is generated by the entirely random orientations of the molecules. In the ordered state, the molecules are aligned along a common axis, but they each deviate from it a bit — some pointing a little to the left of the axis and some a little to right. Those random deviations, like the random molecule positions in the disordered state, could be fit to a bell curve.
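
A toy Python simulation (not the authors' liquid-crystal simulation or their actual random-matrix diagnostic) illustrates the statistical point: sums over uncorrelated molecular orientations are close to a bell curve, while orientations that are correlated in patches deviate measurably, here visible in the excess kurtosis:

    import numpy as np

    rng = np.random.default_rng(0)
    n_mol, n_samples, patch = 400, 20_000, 100

    def order_sums(correlated):
        sums = np.empty(n_samples)
        for i in range(n_samples):
            if correlated:
                # molecules in patches share one orientation (local order emerging)
                theta = np.repeat(rng.uniform(0, np.pi, n_mol // patch), patch)
            else:
                theta = rng.uniform(0, np.pi, n_mol)   # fully disordered orientations
            sums[i] = np.cos(2 * theta).sum()          # nematic-style contribution to order
        return sums

    for label, s in (("uncorrelated", order_sums(False)), ("patchy", order_sums(True))):
        z = (s - s.mean()) / s.std()
        print(label, "excess kurtosis:", round(float((z ** 4).mean() - 3), 2))
        # near 0 for the uncorrelated case (bell curve); clearly nonzero for the patchy case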

But that bell curve distribution fell apart just before the phase change took place, as the temperature of the system was dropping down to its transition temperature. That suggests that molecules in discrete patches in the system were becoming correlated with each other.

“You now have several sets of molecules starting to cooperate with each other, and that causes the deviations from the bell curve,” Stratt said. “It’s as if these molecules are anticipating that this fully ordered state is going to take place, but they haven’t all decided which direction they’re going to face yet. It’s a little like politics, where everybody agrees that something needs to change, but they haven’t figured out exactly what to do.”

Stratt says the work could be helpful in providing insight into what governs the effectiveness of molecular motion. In both ordered and disordered liquid crystals, molecules can move relatively freely. But in the intermediate state, that movement is inhibited. This state therefore represents a situation in which molecular motion is starting to slow down.

“There are a lot of problems in natural science where movement of molecules is slow,” Stratt said. “The molecules in molten glass, for example, progressively slow down as the liquid cools. The protein tangles involved in Alzheimer’s disease are another example where the molecular arrangement causes the motion to be slow. But what rules are governing those molecules as they slow down? We don’t fully understand it.”

Stratt hopes that a better understanding of slow molecular movement in liquid crystals could provide a blueprint for understanding slow movement elsewhere in nature.

Link (with a diagram showing the semi-ordered state as an example):  http://news.brown.edu/articles/2018/05/order

Tuesday, May 22, 2018

Millions of Synthetic Proteins

Chemists Synthesize Millions of
Proteins not Found in Nature
New technology could lead to development of novel
“xenoprotein” drugs against infectious diseases
By Anne Trafton | MIT News Office

May 21, 2018 -- MIT chemists have devised a way to rapidly synthesize and screen millions of novel proteins that could be used as drugs against Ebola and other viruses.

All proteins produced by living cells are made from the 20 amino acids that are programmed by the genetic code. The MIT team came up with a way to assemble proteins from amino acids not used in nature, including many that are mirror images of natural amino acids.

These proteins, which the researchers call “xenoproteins,” offer many advantages over naturally occurring proteins. They are more stable, meaning that unlike most protein drugs, they don’t require refrigeration, and may not provoke an immune response.

“There is no other technological platform that can be used to create these xenoproteins because people haven’t worked through the ability to use completely nonnatural sets of amino acids throughout the entire shape of the molecule,” says Brad Pentelute, an MIT associate professor of chemistry and the senior author of the paper, which appears in the Proceedings of the National Academy of Sciences the week of May 21.

Zachary Gates, an MIT postdoc, is the lead author of the paper. Timothy Jamison, head of MIT’s Department of Chemistry, and members of his lab also contributed to the paper.

Nonnatural proteins

Pentelute and Jamison launched this project four years ago, working with the Defense Advanced Research Projects Agency (DARPA), which asked them to come up with a way to create molecules that mimic naturally occurring proteins but are made from nonnatural amino acids.

“The mission was to generate discovery platforms that allow you to chemically manufacture large libraries of molecules that don’t exist in nature, and then sift through those libraries for the particular function that you desired,” Pentelute says.

For this project, the research team built on technology that Pentelute’s lab had previously developed for rapidly synthesizing protein chains. His tabletop machine can perform all of the chemical reactions needed to string together amino acids, synthesizing the desired proteins within minutes.

As building blocks for their xenoproteins, the researchers used 16 “mirror-image” amino acids. Amino acids can exist in two different configurations, known as L and D. The L and D versions of a particular amino acid have the same chemical composition but are mirror images of each other. Cells use only L amino acids.

The researchers then used synthetic chemistry to assemble tens of millions of proteins, each about 30 amino acids in length, all of the D configuration. These proteins all had a similar folded structure that is based on the shape of a naturally occurring protein known as a trypsin inhibitor.

Before this study, no research group had been able to create so many proteins made purely of nonnatural amino acids.

“Significant effort has been devoted to development of methods for the incorporation of nonnatural amino acids into protein molecules, but these are generally limited with regard to the number of nonnatural amino acids that can simultaneously be incorporated into a protein molecule,” Gates says.

After synthesizing the xenoproteins, the researchers screened them to identify proteins that would bind to an IgG antibody against an influenza virus surface protein. The antibodies were tagged with a fluorescent molecule and then mixed with the xenoproteins. Using a system called fluorescence-activated cell sorting, the researchers were able to isolate xenoproteins that bind to the fluorescent IgG molecule.

This screen, which can be done in only a few hours, revealed several xenoproteins that bind to the target. In other experiments, not published in the PNAS paper, the researchers have also identified xenoproteins that bind to anthrax toxin and to a glycoprotein produced by the Ebola virus. This work is in collaboration with John Dye, Spencer Stonier, and Christopher Cote at the U.S. Army Medical Research Institute of Infectious Diseases.

“This is an extremely important first step in finding a good way of rapidly screening complex mirror image proteins,” says Stephen Kent, a professor of chemistry at the University of Chicago, who was not involved in the research. “Being able to use chemistry to make a library of mirror image proteins, with their high stability and specificity for a given target, is obviously of potential therapeutic interest.”

Built on demand

The researchers are now working on synthesizing proteins modeled on different scaffold shapes, and they are searching for xenoproteins that bind to other potential drug targets. Their long-term goal is to use this system to rapidly synthesize and identify proteins that could be used to neutralize any type of emerging infectious disease.

“The hope is that we can discover molecules in a rapid manner using this platform, and we can chemically manufacture them on demand. And after we make them, they can be shipped all over the place without refrigeration, for use in the field,” Pentelute says.

In addition to potential drugs, the researchers also hope to develop “xenozymes” — xenoproteins that can act as enzymes to catalyze novel types of chemical reactions.