t-Distributed Stochastic Neighbor Embedding
The Fisher iris data set has four-dimensional measurements of irises, and corresponding classification into species. Visualize this data by reducing the dimension using tsne.
load fisheriris
rng default % for reproducibility
Y = tsne(meas);
gscatter(Y(:,1),Y(:,2),species)
Use various distance metrics to try to obtain a better separation between species in the Fisher iris data.
load fisheriris
rng('default') % for reproducibility
Y = tsne(meas,'Algorithm','exact','Distance','mahalanobis');
subplot(2,2,1)
gscatter(Y(:,1),Y(:,2),species)
title('Mahalanobis')

rng('default') % for fair comparison
Y = tsne(meas,'Algorithm','exact','Distance','cosine');
subplot(2,2,2)
gscatter(Y(:,1),Y(:,2),species)
title('Cosine')

rng('default') % for fair comparison
Y = tsne(meas,'Algorithm','exact','Distance','chebychev');
subplot(2,2,3)
gscatter(Y(:,1),Y(:,2),species)
title('Chebychev')

rng('default') % for fair comparison
Y = tsne(meas,'Algorithm','exact','Distance','euclidean');
subplot(2,2,4)
gscatter(Y(:,1),Y(:,2),species)
title('Euclidean')
In this case, the cosine, Chebychev, and Euclidean distance metrics give reasonably good separation of clusters. But the Mahalanobis distance metric does not give a good separation.
tsne removes input data rows that contain any
NaN entries. Therefore, you must remove any such rows from your classification data before plotting.
For example, change a few random entries in the Fisher iris data to NaN.
load fisheriris
rng default % for reproducibility
meas(rand(size(meas)) < 0.05) = NaN;
Embed the four-dimensional data into two dimensions using tsne.
Y = tsne(meas,'Algorithm','exact');
Warning: Rows with NaN missing values in X or 'InitialY' values are removed.
Determine how many rows were eliminated from the embedding.
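One way to count the removed rows (a minimal sketch; it assumes the embedding Y and the species labels from the previous steps are still in the workspace) is to compare their lengths:

% Number of original rows minus number of embedded rows
length(species) - length(Y)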
ans = 22
Prepare to plot the result by locating the rows of meas that have no NaN values.

goodrows = not(any(isnan(meas),2));

Plot the results using only the rows of species that correspond to rows of meas with no NaN values.
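For example, a minimal sketch using the variables from the previous steps:

% Label each embedded point with the species of the corresponding kept row
gscatter(Y(:,1),Y(:,2),species(goodrows))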
Find both 2-D and 3-D embeddings of the Fisher iris data, and compare the loss for each embedding. It is likely that the loss is lower for a 3-D embedding, because this embedding has more freedom to match the original data.
load fisheriris
rng default % for reproducibility
[Y,loss] = tsne(meas,'Algorithm','exact');
rng default % for fair comparison
[Y2,loss2] = tsne(meas,'Algorithm','exact','NumDimensions',3);
fprintf('2-D embedding has loss %g, and 3-D embedding has loss %g.\n',loss,loss2)
2-D embedding has loss 0.124191, and 3-D embedding has loss 0.0990884.
As expected, the 3-D embedding has lower loss.
View the embeddings. Use RGB colors [1 0 0], [0 1 0], and [0 0 1].
For the 3-D plot, convert the species to numeric values using the categorical command, then convert the numeric values to RGB colors using the sparse function as follows. If v is a vector of positive integers 1, 2, or 3, corresponding to the species data, then the command
sparse(1:numel(v),v,ones(size(v)))
is a sparse matrix whose rows are the RGB colors of the species.
gscatter(Y(:,1),Y(:,2),species,eye(3))
title('2-D Embedding')
figure
v = double(categorical(species));
c = full(sparse(1:numel(v),v,ones(size(v)),numel(v),3));
scatter3(Y2(:,1),Y2(:,2),Y2(:,3),15,c,'filled')
title('3-D Embedding')
view(-50,8)
X— Data points
Data points, specified as an n-by-m matrix, where each row is one m-dimensional point.
tsne removes rows of X that contain any NaN values before creating an embedding. See Plot Results with NaN Input Data.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: Y = tsne(X,'Algorithm','exact','NumPCAComponents',50)
Algorithm— tsne algorithm
'barneshut'(default) | 'exact'
tsne algorithm, specified as 'barneshut' or 'exact'. The 'exact' algorithm optimizes the Kullback-Leibler divergence of distributions between the original space and the embedded space. The 'barneshut' algorithm performs an approximate optimization that is faster and uses less memory when the number of data rows is large. The 'barneshut' algorithm uses knnsearch to find the nearest neighbors.
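For example, a minimal sketch that requests the exact optimization (X here stands for your data matrix):

% Exact optimization instead of the default Barnes-Hut approximation
Y = tsne(X,'Algorithm','exact');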
Distance— Distance metric
'euclidean'(default) | 'seuclidean' | 'cityblock' | 'chebychev' | 'minkowski' | 'mahalanobis' | 'cosine' | 'correlation' | 'spearman' | 'hamming' | 'jaccard' | function handle
Distance metric, specified by one of the following. For definitions of the distance metrics, see pdist.
'euclidean' — Euclidean distance.
'seuclidean' — Standardized Euclidean distance. Each coordinate difference between rows in X and the query matrix is scaled by dividing by the corresponding element of the standard deviation S = std(X,'omitnan').
'cityblock' — City block distance.
'chebychev' — Chebychev distance, which is the maximum coordinate difference.
'minkowski' — Minkowski distance with exponent 2. This is the same as Euclidean distance.
'mahalanobis' — Mahalanobis distance, computed using the positive definite covariance matrix cov(X,'omitrows').
'cosine' — One minus the cosine of the included angle between observations (treated as vectors).
'correlation' — One minus the sample linear correlation between observations (treated as sequences of values).
'spearman' — One minus the sample Spearman's rank correlation between observations (treated as sequences of values).
'hamming' — Hamming distance, which is the percentage of coordinates that differ.
'jaccard' — One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ.
custom distance function — A distance function specified using @ (for example, @distfun). For details, see More About.
In all cases, tsne uses squared pairwise distances to calculate the Gaussian kernel in the joint distribution of X.
Exaggeration— Size of natural clusters in data
4(default) | scalar value 1 or greater
Size of natural clusters in data, specified as a scalar value 1 or greater.
A large exaggeration makes tsne learn larger joint probabilities of Y and creates relatively more space between clusters in Y. tsne uses exaggeration in the first 99 optimization iterations.
If the value of Kullback-Leibler divergence increases in the early stage of the optimization, try reducing the exaggeration. See tsne Settings.
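For example, a minimal sketch that lowers the exaggeration from its default of 4 (X stands for your data matrix):

% Smaller exaggeration; try this if the KL divergence climbs early in the optimization
Y = tsne(X,'Exaggeration',2);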
NumDimensions— Dimension of the output
2(default) | positive integer
Dimension of the output Y, specified as a positive integer. Generally, set NumDimensions to 2 or 3.
NumPCAComponents— PCA dimension reduction
0(default) | nonnegative integer
PCA dimension reduction, specified as a nonnegative integer.
Before tsne embeds the high-dimensional data, it first reduces the dimensionality of the data to NumPCAComponents using the pca function. When NumPCAComponents is 0, tsne does not use PCA.
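For example, a minimal sketch that reduces the data to 50 dimensions with PCA before embedding (X stands for your data matrix):

% Run PCA down to 50 components, then embed the reduced data
Y = tsne(X,'NumPCAComponents',50);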
Perplexity— Effective number of local neighbors of each point
30(default) | positive scalar
Effective number of local neighbors of each point, specified as a positive scalar. See t-SNE Algorithm.
Larger perplexity causes tsne to use more points as nearest neighbors. Use a larger value of Perplexity for a large dataset. Typical Perplexity values are from 5 to 50. In the Barnes-Hut algorithm, tsne uses min(3*Perplexity,N-1) as the number of nearest neighbors. See tsne Settings.
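For example, a minimal sketch that raises the perplexity for a larger data set (X stands for your data matrix):

% Treat more points as effective neighbors of each point
Y = tsne(X,'Perplexity',50);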
Standardize— Normalize input data
false(default) | true
Normalize input data, specified as false or true. When true, tsne centers and scales X by dividing the columns by their standard deviations.
When features in X are on different scales, set 'Standardize' to true. Do this because the learning process is based on nearest neighbors, so features with large scales can override the contribution of features with small scales.
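For example, a minimal sketch for data whose columns are on very different scales (X stands for your data matrix):

% Center each column and divide by its standard deviation before embedding
Y = tsne(X,'Standardize',true);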
InitialY— Initial embedded points
LearnRate— Learning rate for optimization process
500(default) | positive scalar
Learning rate for optimization process, specified as a positive scalar. Typically, set values from 100 through 1000.
When LearnRate is too small, tsne can converge to a poor local minimum. When LearnRate is too large, the optimization can initially have the Kullback-Leibler divergence increase rather than decrease. See tsne Settings.
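For example, a minimal sketch that lowers the learning rate (X stands for your data matrix):

% Try a smaller learning rate if the KL divergence initially increases with the default of 500
Y = tsne(X,'LearnRate',200);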
NumPrint— Iterative display frequency
20(default) | positive integer
Iterative display frequency, specified as a positive integer. When the Verbose name-value pair is not 0, tsne returns iterative display after every NumPrint iterations. If the Options name-value pair contains a nonempty 'OutputFcn' entry, then output functions run after every NumPrint iterations.
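For example, a minimal sketch that prints the iterative summary every 100 iterations (X stands for your data matrix):

% Verbose must be nonzero for the iterative display to appear
Y = tsne(X,'Verbose',1,'NumPrint',100);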
Options— Optimization options
Optimization options, specified as a structure containing the fields 'MaxIter', 'OutputFcn', and 'TolFun'. Create 'Options' using statset or struct.
'MaxIter' — Positive integer specifying the maximum number of optimization iterations. Default: 1000.
'OutputFcn' — Function handle or cell array of function handles specifying one or more functions to call after every NumPrint optimization iterations. For syntax details, see t-SNE Output Function. Default: [].
'TolFun' — Stopping criterion for the optimization. The optimization exits when the norm of the gradient of the Kullback-Leibler divergence is less than TolFun. Default: 1e-10.
options = statset('MaxIter',500)
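For example, a minimal sketch that builds the structure with statset and passes it to tsne (X stands for your data matrix; the field values are illustrative):

% Stop after at most 500 iterations or when the gradient norm drops below 1e-8
opts = statset('MaxIter',500,'TolFun',1e-8);
Y = tsne(X,'Options',opts);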
Theta— Barnes-Hut tradeoff parameter
0.5(default) | scalar from 0 through 1
Barnes-Hut tradeoff parameter, specified as a scalar from 0
through 1. Higher values give a faster but less accurate optimization.
Applies only when Algorithm is 'barneshut'.
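For example, a minimal sketch that trades some accuracy for speed (X stands for your data matrix):

% Larger Theta runs faster but is less accurate; applies only to 'barneshut'
Y = tsne(X,'Algorithm','barneshut','Theta',0.8);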
Verbose— Iterative display
0(default) | 1 | 2
Iterative display, specified as 0, 1, or 2. When Verbose is not 0, tsne prints a summary table of the Kullback-Leibler divergence and the norm of its gradient every NumPrint iterations.
When Verbose is 2, tsne also prints the variances of Gaussian kernels. tsne uses these kernels in its computation of the joint probability of X. If you see a large difference in the scales of the minimum and maximum variances, you can sometimes get more suitable results by rescaling X.
loss— Kullback-Leibler divergence
Kullback-Leibler divergence between modeled input and output distributions, returned as a nonnegative scalar. For details, see t-SNE Algorithm.
The syntax of a custom distance function is as follows.
function D2 = distfun(ZI,ZJ)
tsne passes ZI and ZJ to your function, and your function computes the distance.
ZI is a 1-by-n vector containing a single row from X or Y.
ZJ is an m-by-n matrix containing multiple rows of X or Y.
Your function returns D2, which is an m-by-1 vector of distances. The jth element of D2 is the distance between the observations ZI and ZJ(j,:).
If your data are not sparse, then usually the built-in distance functions are faster than a function handle.
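As an illustration, here is a minimal sketch of such a function (the name l1dist is hypothetical; it simply computes the city block distance):

function D2 = l1dist(ZI,ZJ)
% ZI is 1-by-n, ZJ is m-by-n; return the m-by-1 vector of city block distances
D2 = sum(abs(ZJ - ZI),2);
end

Save the function on your path and pass it as a handle, for example Y = tsne(meas,'Distance',@l1dist).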
tsne constructs a set of embedded points
in a low-dimensional space whose relative similarities mimic those
of the original high-dimensional points. The embedded points show
the clustering in the original data.
Roughly, the algorithm models the original points as coming from a Gaussian distribution, and the embedded points as coming from a Student’s t distribution. The algorithm tries to minimize the Kullback-Leibler divergence between these two distributions by moving the embedded points.
For details, see t-SNE.