The Big Picture
You have ratings of brands on many attributes (e.g., "sporty", "luxurious", "affordable").
This tool finds the hidden dimensions that explain most of the variation in those ratings,
then plots brands on those dimensions so you can see competitive positioning at a glance.
Analogy: Imagine 20 attributes are really just different ways of measuring 2-3 underlying concepts
(like "premium-ness" and "performance"). PCA discovers those concepts automatically.
Step-by-Step Process
1. Center the data → Each attribute is centered (mean = 0) so we focus on
differences between brands, not absolute rating levels.
$$x'_{ij} = x_{ij} - \bar{x}_j$$
x_ij = rating of brand i on attribute j
x̄_j = mean rating across all brands for attribute j
x'_ij = centered rating (deviation from attribute mean)
Optional: Enable "Standardize" in Advanced Settings to also divide by the
standard deviation (z-scores). This gives all attributes equal weight regardless of variance.
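To make the step concrete, here is a minimal NumPy sketch, assuming a small hypothetical ratings matrix (rows = brands, columns = attributes); the data and variable names are illustrative, not the tool's internals:

```python
import numpy as np

# Hypothetical ratings: 5 brands (rows) x 4 attributes (columns).
X = np.array([
    [7.0, 3.0, 8.0, 2.0],
    [6.0, 4.0, 7.0, 3.0],
    [2.0, 8.0, 3.0, 7.0],
    [3.0, 7.0, 2.0, 8.0],
    [5.0, 5.0, 5.0, 5.0],
])

# Center each attribute: x'_ij = x_ij - mean_j
X_centered = X - X.mean(axis=0)

# Optional "Standardize": also divide by each attribute's sample standard
# deviation, giving z-scores with equal weight regardless of variance.
X_standardized = X_centered / X.std(axis=0, ddof=1)
```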
2. Compute covariance matrix → Measures how attributes vary together across brands.
High covariance? They're measuring the same underlying dimension.
$$C = \frac{1}{n-1}X'^TX'$$
X' = matrix of centered ratings (brands × attributes)
X'^T = transpose of X' (attributes × brands)
n = number of brands
C = covariance matrix (attributes × attributes)
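In code, the covariance matrix is a single matrix product over the centered data. This sketch uses stand-in data and cross-checks against NumPy's built-in np.cov:

```python
import numpy as np

# Stand-in centered ratings (brands x attributes); in practice, reuse
# X_centered from the previous sketch.
rng = np.random.default_rng(0)
X_centered = rng.normal(size=(5, 4))
X_centered -= X_centered.mean(axis=0)

n = X_centered.shape[0]                    # n = number of brands
C = (X_centered.T @ X_centered) / (n - 1)  # C: attributes x attributes

# Cross-check with NumPy's built-in (attributes in columns -> rowvar=False).
assert np.allclose(C, np.cov(X_centered, rowvar=False))
```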
3. Extract eigenvalues & eigenvectors → Solve for the vectors that, when multiplied
by C, only get scaled (not rotated). These become the new dimension axes.
$$C \mathbf{v}_k = \lambda_k \mathbf{v}_k$$
v_k = eigenvector for dimension k (defines the axis direction)
λ_k = eigenvalue for dimension k (variance explained by that axis)
Eigenvalues are sorted largest → smallest; the first eigenvector defines Dimension I
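A sketch of the eigendecomposition step; note the tool itself uses Jacobi rotation (see Technical Notes), while np.linalg.eigh here is just a convenient stand-in for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))            # stand-in ratings: brands x attributes
Xc = X - X.mean(axis=0)                # centered
C = np.cov(Xc, rowvar=False)           # covariance matrix

# eigh handles symmetric matrices and returns eigenvalues in ascending order,
# so flip to largest-first: Dimension I, II, ...
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Defining property: C v_k = lambda_k v_k (checked for Dimension I).
assert np.allclose(C @ eigvecs[:, 0], eigvals[0] * eigvecs[:, 0])
```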
4. Project brands onto dimensions → Multiply each brand's centered ratings by the
eigenvectors to get its coordinates on the new axes.
$$\text{score}_{ik} = \sum_{j=1}^{p} x'_{ij} \cdot v_{jk}$$
score_ik = brand i's coordinate on dimension k
x'_ij = brand i's centered rating on attribute j
v_jk = weight of attribute j in dimension k
p = number of attributes
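Because the sum over attributes is exactly a matrix product, the projection is one line. This sketch repeats the setup from the previous steps with stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                       # stand-in ratings
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# score_ik = sum_j x'_ij * v_jk  ==  a single matrix product:
scores = Xc @ eigvecs      # brands x dimensions
xy = scores[:, :2]         # each brand's (Dim I, Dim II) map coordinates
```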
Key Outputs Explained
Variance Explained
How much of the original attribute variation this dimension captures.
If Dim I explains 60% and Dim II explains 25%, those two dimensions capture 85% of the information.
Attribute Loadings
Correlations between original attributes and the new dimensions.
A loading of +0.85 means that attribute strongly defines the positive end of that dimension.
Brand Coordinates
Where each brand sits on each dimension. Positive = high on attributes that load positively;
negative = high on attributes that load negatively.
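All three outputs fall out of the eigendecomposition. A hedged sketch, assuming the standard loadings-as-correlations formula loading_jk = v_jk·√λ_k / sd_j; the data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))                       # stand-in ratings
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Variance explained: each eigenvalue's share of the total.
var_explained = eigvals / eigvals.sum()

# Attribute loadings: correlation of attribute j with dimension k,
# loading_jk = v_jk * sqrt(lambda_k) / sd_j.
sd = Xc.std(axis=0, ddof=1)
loadings = eigvecs * np.sqrt(np.clip(eigvals, 0, None)) / sd[:, None]

# Brand coordinates: centered ratings projected onto the eigenvectors.
coords = Xc @ eigvecs
```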
What You Can Do With This
- Identify competitive clusters → Brands near each other are perceived similarly (direct competitors)
- Find whitespace → Empty areas on the map = potential positioning opportunities
- Understand differentiation → Distance between brands shows how distinct they are in customers' minds
- Guide repositioning → See which attributes you'd need to change to move toward a target position
- Segment targeting → With preference data, see which customer segments prefer which positions
Technical Notes
This tool uses Principal Components Analysis (PCA) on brand-attribute ratings.
The correlation matrix approach is equivalent to PCA on standardized data. Eigendecomposition
uses Jacobi rotation for numerical stability. The scree plot and Kaiser criterion (eigenvalue > 1)
guide dimension selection.
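For illustration, a small sketch of those two selection heuristics, using np.corrcoef as a stand-in for the tool's correlation-matrix step:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))     # stand-in ratings: brands x attributes

# PCA on the correlation matrix == PCA on standardized data.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Kaiser criterion: retain dimensions with eigenvalue > 1, i.e. dimensions
# explaining more variance than any single standardized attribute.
n_keep = int((eigvals > 1).sum())

# Text scree: look for the elbow where the eigenvalues level off.
for k, lam in enumerate(eigvals, start=1):
    print(f"Dim {k}: eigenvalue = {lam:.2f}")
```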
A note on terminology: Marketers often call positioning maps "MDS"
(Multidimensional Scaling), but true MDS requires pairwise similarity data as input. When working
with attribute ratings (like this tool), the correct term is PCA. The confusion
arose because both methods produce similar-looking 2D maps, but the inputs and math differ.
If someone says "let's do MDS" with attribute data, they almost certainly mean PCA.