Unsupervised machine learning is the task of inferring a function that describes hidden structure in "unlabeled" data (that is, no classification or categorization is included in the observations). Since the examples given to the learner are unlabeled, there is no direct way to evaluate the accuracy of the structure the algorithm outputs; this is one way of distinguishing unsupervised learning from supervised learning and reinforcement learning.
A central case of unsupervised learning is the problem of density estimation in statistics, though unsupervised learning encompasses many other problems (and solutions) involving summarizing and explaining key features of the data.
Approaches
Approaches to unsupervised learning include:
- Clustering (see the k-means sketch after this list)
  - k-means
  - mixture models
  - hierarchical clustering
- Anomaly detection
- Neural Networks
  - Autoencoders
  - Deep Belief Nets
  - Hebbian Learning
  - Generative Adversarial Networks
  - Self-organizing map
- Approaches for learning latent variable models, such as
  - Expectation–maximization algorithm (EM)
  - Method of moments
  - Blind signal separation techniques, e.g.,
    - Principal component analysis
    - Independent component analysis
    - Non-negative matrix factorization
    - Singular value decomposition
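To make the first of these approaches concrete, here is a minimal k-means clustering sketch in Python. This is my own illustration, not code from the article; the two-blob data and the choice of k = 2 are assumptions made purely for the example. Note that no labels are used: the algorithm infers cluster structure from the data alone.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up unlabeled data: two Gaussian blobs around (0, 0) and (3, 3).
x = np.concatenate([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])

k = 2
centers = x[rng.choice(len(x), k, replace=False)]  # random initial centroids
for _ in range(20):
    # Assignment step: attach each point to its nearest centroid.
    labels = np.argmin(((x[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
    # Update step: move each centroid to the mean of its assigned points.
    centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])

print(centers)  # should land near (0, 0) and (3, 3)
```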
In Neural Networks
The classical example of unsupervised learning in the study of both natural and artificial neural networks is subsumed by Donald Hebb's principle: neurons that fire together wire together. In Hebbian learning, the connection is reinforced irrespective of any error signal; it is exclusively a function of the coincidence of action potentials in the two connected neurons. A related version that modifies synaptic weights also takes into account the timing between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
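As a rough illustration of the rule just described, the following Python sketch reinforces a weight exactly when its two units are active together, with no error signal involved. It is a toy model: the binary "spikes", the learning rate, and the network size are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01            # learning rate (assumed value)
w = np.zeros((4, 3))  # weights from 4 input units to 3 output units

for _ in range(1000):
    pre = rng.integers(0, 2, size=4).astype(float)   # presynaptic activity (0/1 "spikes")
    post = rng.integers(0, 2, size=3).astype(float)  # postsynaptic activity
    # Hebb's rule: the weight change is the outer product of pre- and
    # postsynaptic activity, so a connection is reinforced exactly when
    # both neurons fire together.
    w += eta * np.outer(pre, post)

print(w)  # each weight reflects how often its input/output pair fired together
```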
Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning. The SOM imposes a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are also used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing. The first version of ART was "ART1", developed by Carpenter and Grossberg (1988).
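A minimal SOM sketch can make the topographic idea concrete: each input pulls its best-matching map node, and that node's map neighbors, toward itself, so nearby map locations come to represent similar inputs. This is an illustrative toy, not a reference implementation; the one-dimensional map, the decay schedules, and the random data are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 10, 2
nodes = rng.random((n_nodes, dim))  # weight vectors of the 1-D map's nodes
data = rng.random((500, dim))       # made-up unlabeled input vectors

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                # decaying learning rate (assumed schedule)
    sigma = max(1.0, 3.0 * (1 - t / len(data)))   # decaying neighborhood width
    bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
    # Neighborhood function: nodes near the BMU on the map move the most.
    dist = np.abs(np.arange(n_nodes) - bmu)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    nodes += lr * h[:, None] * (x - nodes)

print(nodes)  # adjacent rows should now represent similar regions of input space
```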
Method of Moments
One of the statistical approaches to unsupervised learning is the method of moments. In the method of moments, the unknown parameters of interest in the model are related to the moments of one or more random variables, and thus these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first- and second-order moments: for a random vector, the first-order moment is the mean vector, and the second-order moment is the covariance matrix (when the mean is zero). Higher-order moments are usually represented using tensors, the generalization of matrices to higher orders as multi-dimensional arrays.
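In Python, the empirical moments of a random vector can be estimated directly from samples. The sketch below is only an illustration (the Gaussian data is made up); it computes the mean vector, the second-order moment, and a third-order moment tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10_000, 3))  # samples of a 3-dimensional random vector

m1 = x.mean(axis=0)          # first-order moment: the mean vector
m2 = (x.T @ x) / len(x)      # second-order moment E[x x^T]
# When the mean is zero, E[x x^T] coincides with the covariance matrix;
# in general the covariance is obtained by subtracting the mean term:
cov = m2 - np.outer(m1, m1)

# The third-order moment E[x ⊗ x ⊗ x] is a 3-way tensor, the higher-order
# analogue of the covariance matrix.
m3 = np.einsum('ni,nj,nk->ijk', x, x, x) / len(x)
print(m1.shape, m2.shape, m3.shape)  # (3,), (3, 3), (3, 3, 3)
```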
In particular, the method of moments has been shown to be effective in learning the parameters of latent variable models, statistical models in which, in addition to the observed variables, a set of latent (unobserved) variables also exists. A highly practical example of a latent variable model in machine learning is topic modeling, a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document: when the topic of the document changes, the words are generated according to a different set of statistical parameters. It has been shown that the method of moments (via tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.
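The generative story above can be sketched in a few lines: pick a topic, then draw words from that topic's word distribution. The vocabulary, topics, and probabilities below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = np.array(["goal", "match", "election", "senate", "ball"])
# Each topic (latent variable) has its own word distribution (the model's
# statistical parameters); changing the topic changes these parameters.
word_probs = {
    "sports":   [0.35, 0.30, 0.02, 0.03, 0.30],
    "politics": [0.02, 0.03, 0.45, 0.45, 0.05],
}

def generate_document(topic, n_words=8):
    # Words (observed variables) are drawn according to the topic's distribution.
    return " ".join(rng.choice(vocab, size=n_words, p=word_probs[topic]))

print(generate_document("sports"))
print(generate_document("politics"))
```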
The expectation–maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed to converge to the true unknown parameters of the model. In contrast, global convergence of the method of moments is guaranteed under some conditions.
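For comparison, here is a compact EM sketch for a two-component one-dimensional Gaussian mixture, a standard latent variable model. The data and initial values are made up; note that a different initialization can land the algorithm in a different local optimum, which is exactly the caveat mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up data from two Gaussians; the mixture parameters are "unknown" to EM.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each point
    # (the constant 1/sqrt(2*pi) cancels in the ratio and is omitted).
    p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p1 / (p0 + p1)
    # M-step: re-estimate the parameters from the soft assignments.
    pi = r.mean()
    mu = np.array([((1 - r) * x).sum() / (1 - r).sum(), (r * x).sum() / r.sum()])
    sigma = np.sqrt(np.array([
        ((1 - r) * (x - mu[0]) ** 2).sum() / (1 - r).sum(),
        (r * (x - mu[1]) ** 2).sum() / r.sum(),
    ]))

print(pi, mu, sigma)  # a local optimum; not guaranteed to be the true parameters
```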
Examples
Behavior-based detection in network security has become a good application area for a combination of supervised and unsupervised machine learning, because the volume of data a human security analyst would have to review (measured in terabytes per day) makes finding patterns and anomalies by hand impossible. According
to Giora Engel, co-founder of LightCyber, in a Dark Reading article,
"The great promise machine learning holds for the security industry is its
ability to detect advanced and unknown attacks—particularly those leading to
data breaches." The basic premise is that a motivated attacker will find
their way into a network (generally by compromising a user's computer or
network account through phishing, social engineering or malware). The security
challenge then becomes finding the attacker by their operational activities,
which include reconnaissance, lateral movement, command & control and
exfiltration. These activities—especially reconnaissance and lateral
movement—stand in contrast to an established baseline of "normal" or
"good" activity for each user and device on the network. The role of
machine learning is to create ongoing profiles for users and devices and then
find meaningful anomalies.
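As a toy version of the baseline-and-anomaly idea described above (this is my own sketch, not LightCyber's actual method), one could model a single user's daily activity count and flag large deviations from the established baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up baseline: e.g., one user's daily outbound connection counts.
baseline = rng.poisson(lam=20, size=500)
mu, sd = baseline.mean(), baseline.std()

def is_anomalous(count, threshold=4.0):
    # Flag counts more than `threshold` standard deviations from the baseline mean.
    return abs(count - mu) / sd > threshold

print(is_anomalous(22))   # ordinary activity -> False
print(is_anomalous(180))  # e.g., sudden exfiltration-scale traffic -> True
```

Real systems profile many features per user and device, but the principle is the same: learn what "normal" looks like without labels, then surface meaningful departures from it.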
Related Topics
- Cluster analysis
- Anomaly detection
- Expectation–maximization algorithm
- Generative topographic map
- Meta-learning (computer science)
- Multivariate analysis
- Radial basis function network
- Hebbian Theory
Note by the Blog Author
The topic “deep learning” points to a separate article in Wikipedia:
https://en.wikipedia.org/wiki/Deep_learning