Self-organizing map

From WikiMD's Wellness Encyclopedia

== Self-organizing map gallery ==
<gallery>
File:Synapse Self-Organizing Map.png|Synapse Self-Organizing Map
File:Somtraining.svg|Somtraining
File:TrainSOM.gif|Train SOM
File:Self oraganizing map cartography.jpg|Self-organizing map cartography
File:SOMsPCA.PNG|SOMs PCA
</gallery>

Latest revision as of 05:54, 3 March 2025

A self-organizing map (SOM), also known as a Kohonen map, is a type of artificial neural network (ANN) trained with unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. SOMs perform dimensionality reduction and are particularly well suited to visualizing high-dimensional data. Developed by Teuvo Kohonen in the 1980s, SOMs have been applied in fields such as machine learning, pattern recognition, data analysis, and bioinformatics.

Overview

The SOM consists of components called nodes or neurons, and the map is arranged in a grid. Each node in the grid is associated with a weight vector of the same dimensionality as the input data vectors. The process of training a SOM involves adjusting these weights to match the input vectors in a way that preserves the topological properties of the input space. This means that similar input vectors will be mapped to nearby nodes on the map, effectively clustering the data.
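The grid of weight vectors described above can be sketched in NumPy. This is an illustrative fragment, not a standard API: the 10×10 grid size, the 3-dimensional inputs, and the function name `best_matching_unit` are all assumptions for the example.

```python
import numpy as np

# Hypothetical example: a 10x10 SOM grid for 3-dimensional inputs.
# Each node holds a weight vector of the same dimensionality as the data.
rng = np.random.default_rng(0)
weights = rng.random((10, 10, 3))  # grid_rows x grid_cols x input_dim

def best_matching_unit(weights, x):
    """Return the (row, col) of the node whose weight vector is
    closest to the input x by Euclidean distance."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

bmu = best_matching_unit(weights, np.array([0.2, 0.5, 0.9]))
```

After training, nearby nodes hold similar weight vectors, so inputs that are close in the original space map to nearby grid positions.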

Training Process

The training of a SOM is carried out in two main phases: the ordering phase and the tuning phase. During the ordering phase, the map learns the topology of the data, and in the tuning phase, the map's accuracy is refined.

1. Initialization: The weight vectors of the SOM nodes are initialized, often with small random values.
2. Competition: For each input vector, the node whose weight vector is most similar to the input (usually measured by Euclidean distance) is identified as the Best Matching Unit (BMU).
3. Cooperation: The BMU's neighbors within a certain radius in the map space are selected for updating along with the BMU itself. The neighborhood radius decreases over time.
4. Adaptation: The weight vectors of the BMU and its neighbors are adjusted to make them more similar to the input vector. The learning rate, which determines the degree of the adjustment, decreases over time.
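The four steps above can be combined into a short training loop. The sketch below is a minimal NumPy illustration under stated assumptions: exponential decay schedules for the learning rate and radius, a Gaussian neighborhood function, and the function name `train_som` are choices made for this example, not Kohonen's exact formulation.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM training sketch.

    data: array of shape (n_samples, input_dim).
    Both the learning rate (lr) and the neighborhood radius (sigma)
    decay exponentially over the epochs."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))      # 1. initialization
    # Precompute each node's grid coordinates for neighborhood distances.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                     # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)               # shrinking radius
        for x in data[rng.permutation(len(data))]:
            # 2. competition: find the Best Matching Unit.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # 3./4. cooperation and adaptation: pull the BMU and its
            # neighbors toward the input, weighted by grid distance.
            grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))     # Gaussian neighborhood
            weights += lr * h[..., None] * (x - weights)
    return weights
```

The ordering phase corresponds to the early epochs, when the radius is large and updates reorganize the whole map; the tuning phase corresponds to the later epochs, when the radius has shrunk and updates refine individual nodes.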

Applications

SOMs have been used in a wide range of applications, including but not limited to:

- Visualizing high-dimensional data: SOMs can project high-dimensional data onto a two-dimensional space, making it easier to visualize and interpret.
- Clustering: by grouping similar data points together, SOMs can be used for data clustering.
- Feature extraction: SOMs can reduce the dimensionality of data while preserving its most important features.
- Pattern recognition: SOMs can be used to recognize patterns or anomalies in datasets.
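The visualization use case in the list above amounts to mapping each sample to the grid position of its BMU. A minimal sketch, assuming a trained weight array of shape (rows, cols, dim) as in the earlier examples; the function name `project` is hypothetical:

```python
import numpy as np

def project(weights, data):
    """Map each high-dimensional sample to the 2-D grid position of its
    Best Matching Unit, giving a discrete 2-D embedding for plotting."""
    flat = weights.reshape(-1, weights.shape[-1])
    # Squared Euclidean distance from every sample to every node.
    idx = np.argmin(((data[:, None, :] - flat[None, :, :]) ** 2).sum(-1), axis=1)
    return np.stack(np.unravel_index(idx, weights.shape[:2]), axis=1)
```

The resulting (row, col) pairs can be scattered on a 2-D plot; samples landing on the same or neighboring nodes form the clusters the list refers to.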

Advantages and Limitations

Advantages:

- SOMs provide a way to visualize complex, high-dimensional data in a lower-dimensional space.
- They can automatically cluster data without prior knowledge of the number of clusters.

Limitations:

- The choice of map size and learning parameters can significantly affect the quality of the results, and there is no straightforward way to determine the optimal values.
- SOMs do not provide an explicit way to handle missing data.

See Also

- Artificial neural network
- Clustering
- Dimensionality reduction
- Machine learning



This article is an artificial intelligence-related stub. You can help WikiMD by expanding it!

This article is a machine-learning stub. You can help WikiMD by expanding it!



