
Hierarchical and partitional algorithms are two broad categories of clustering algorithms used in data mining and machine learning. Hierarchical clustering builds a hierarchy of clusters, while partitional clustering directly divides the data into non-overlapping clusters. Here’s an overview of both types of algorithms, as well as a brief introduction to the CURE and Chameleon algorithms for hierarchical clustering:

Hierarchical Clustering

Hierarchical clustering is a bottom-up or top-down approach to clustering that organizes data points into a tree-like hierarchical structure known as a dendrogram. It does not require specifying the number of clusters in advance and provides insights into the hierarchical relationships between clusters. Two main types of hierarchical clustering are:

  1. Agglomerative Hierarchical Clustering:
    • Starts with each data point as a singleton cluster and iteratively merges the most similar clusters until only one cluster remains.
    • Common linkage criteria include single linkage, complete linkage, average linkage, and Ward’s linkage.
  2. Divisive Hierarchical Clustering:
    • Starts with all data points in a single cluster and recursively divides the data into smaller clusters until each data point is in its own cluster.
    • Less commonly used than agglomerative clustering due to its computational complexity.
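As a concrete illustration, agglomerative clustering can be sketched in a few lines of pure Python. This is a minimal teaching sketch, not an optimized implementation; the function name and the toy data are illustrative:

```python
import math

def agglomerative(points, k, linkage="single"):
    """Bottom-up clustering sketch: start with singleton clusters and
    repeatedly merge the closest pair until only k clusters remain."""
    clusters = [[p] for p in points]

    def cluster_dist(c1, c2):
        pair_dists = [math.dist(a, b) for a in c1 for b in c2]
        if linkage == "single":    # distance between nearest members
            return min(pair_dists)
        if linkage == "complete":  # distance between farthest members
            return max(pair_dists)
        return sum(pair_dists) / len(pair_dists)  # average linkage

    while len(clusters) > k:
        # find the pair of clusters with the smallest linkage distance
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters
```

The naive all-pairs search makes this O(n^3) overall, which is exactly the scalability problem that algorithms such as CURE set out to address.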

CURE (Clustering Using Representatives)

CURE is an agglomerative hierarchical clustering algorithm designed to overcome two weaknesses of traditional hierarchical methods: sensitivity to outliers and a bias toward compact, spherical clusters of similar size. It builds the cluster tree bottom-up, but represents each cluster by a small set of points rather than by a single centroid or by all of its members. Key features of the CURE algorithm include:

  • Representative Points: Each cluster is summarized by a fixed number of well-scattered points, which reduces computational cost and lets CURE capture non-spherical clusters of varying sizes.
  • Shrinking Toward the Centroid: The representative points are shrunk toward the cluster centroid by a fixed fraction, which damps the influence of outliers on merge decisions.
  • Random Sampling and Partitioning: For large datasets, CURE clusters a random sample of the data and then assigns the remaining points to the closest cluster, improving scalability.
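The representative-point idea can be sketched as follows. This is a simplified illustration of one step of CURE, not the full algorithm (no sampling, partitioning, or merging); the function name, the greedy scatter heuristic, and the parameters `c` and `alpha` are assumptions of the sketch:

```python
import math

def cure_representatives(cluster, c=4, alpha=0.3):
    """Pick up to c well-scattered points from a cluster, then shrink
    each toward the cluster centroid by a fraction alpha. The shrunken
    points stand in for the whole cluster when measuring inter-cluster
    distances, damping the influence of outliers."""
    n = len(cluster)
    dim = len(cluster[0])
    centroid = tuple(sum(p[d] for p in cluster) / n for d in range(dim))

    # greedy scatter heuristic: farthest point from the centroid first,
    # then repeatedly the point farthest from the representatives so far
    reps = [max(cluster, key=lambda p: math.dist(p, centroid))]
    while len(reps) < min(c, n):
        reps.append(max(cluster,
                        key=lambda p: min(math.dist(p, r) for r in reps)))

    # shrink each representative toward the centroid
    return [
        tuple(r[d] + alpha * (centroid[d] - r[d]) for d in range(dim))
        for r in reps
    ]
```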


Chameleon

Chameleon is another hierarchical clustering algorithm designed to handle datasets with clusters of varying shape, density, and size. It models the data as a sparse k-nearest-neighbor graph and merges clusters only when they are both strongly interconnected and close relative to their own internal structure. Key features of the Chameleon algorithm include:

  • Two-Phase Graph-Based Approach: Chameleon first partitions the k-nearest-neighbor graph into many small, tightly connected sub-clusters, then agglomeratively merges them into the final clusters.
  • Dynamic Modeling: A pair of sub-clusters is merged only if their relative interconnectivity and relative closeness are high compared with the internal interconnectivity and closeness of each sub-cluster, allowing Chameleon to adapt to clusters of different shapes, densities, and sizes.
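Chameleon's first step, sparsifying the data into a k-nearest-neighbor graph, can be sketched in pure Python. This is only the graph-construction stage (the function name and inverse-distance weighting are illustrative assumptions); real implementations hand this graph to a graph-partitioning package for the sub-clustering phase:

```python
import math

def knn_graph(points, k=2):
    """Connect each point to its k nearest neighbors. Edge weight is the
    similarity (inverse distance), so dense regions form strongly
    connected subgraphs that a partitioner can split into sub-clusters."""
    edges = {}
    for i, p in enumerate(points):
        neighbors = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: math.dist(p, points[j]),
        )[:k]
        for j in neighbors:
            w = 1.0 / (1e-9 + math.dist(p, points[j]))
            edges[tuple(sorted((i, j)))] = w  # store undirected edge once
    return edges
```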

Partitional Algorithms

Partitional clustering algorithms directly divide the data into non-overlapping clusters without forming a hierarchical structure. Examples of partitional clustering algorithms include k-means, k-medoids, and expectation-maximization (EM) clustering. These algorithms require specifying the number of clusters in advance and are computationally efficient for large datasets.
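For example, k-means (Lloyd's algorithm) alternates between assigning points to the nearest centroid and recomputing centroids as cluster means. A minimal pure-Python sketch, not a production implementation:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: note that k must be chosen up front, unlike
    hierarchical clustering."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # naive random initialization
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            groups[nearest].append(p)
        # update step: mean of each non-empty group
        new_centroids = [
            tuple(sum(q[d] for q in g) / len(g) for d in range(len(points[0])))
            if g else centroids[i]
            for i, g in enumerate(groups)
        ]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, groups
```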

Hierarchical clustering algorithms build a hierarchical structure of clusters, while partitional clustering algorithms directly divide the data into clusters. CURE and Chameleon are two hierarchical clustering algorithms designed to address scalability, efficiency, and cluster quality issues in traditional hierarchical clustering. By understanding the characteristics and capabilities of different clustering algorithms, data scientists can choose the most suitable approach for their specific clustering tasks and dataset properties.