fix: some errors in /about down, many to go

nobody 2025-11-29 15:57:35 -08:00
commit 91e3ab4cb1
Signed by: GrocerPublishAgent
GPG key ID: D460CD54A9E3AB86


@@ -132,8 +132,8 @@
 The `keepdims=True` makes `norms` shape <Math tex="(N, 1)" /> instead of <Math
 tex="(N,)" />, which is crucial. When transposed, <Math tex="(N, 1)" /> becomes
 <Math tex="(1, N)" />, allowing the broadcasting to work for column-wise
 division. Transpose does not do anything to the shape <Math tex="(N,)" />. I
-don't know why transpose works this way, but this seems like gotcha to look out
-for.
+don't know why transpose works this way, but this seems like a nasty gotcha to
+look out for.
 ## Step 4: Clean Up the Graph
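The transpose gotcha the hunk above describes is easy to verify directly. A minimal sketch (the array names here are illustrative, not taken from the post's code):

```python
import numpy as np

# Hypothetical similarity matrix, just to show the shapes involved.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# keepdims=True preserves the reduced axis: shape (N, 1) instead of (N,)
norms = np.linalg.norm(A, axis=1, keepdims=True)
print(norms.shape)    # (2, 1)
print(norms.T.shape)  # (1, 2) -- transpose swaps the two axes

# Without keepdims the result is 1-D, and .T is a no-op on 1-D arrays
flat = np.linalg.norm(A, axis=1)
print(flat.shape)     # (2,)
print(flat.T.shape)   # (2,) -- transpose does nothing to shape (N,)
```

The reason `.T` does nothing to a 1-D array is that transposition reverses the order of axes, and a single axis reversed is unchanged; `keepdims=True` keeps the second axis around so there is something to swap.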
@@ -202,8 +202,8 @@ Traditional normalization <Math tex="\tilde{\mathbf{A}} = \mathbf{D}^{-1} \mathb
 - This is like a proper Markov chain transition matrix
 - Used in standard PageRank and TextRank
 - Supports **directed** graphs, a property useful for modeling web page
-navigation (page A links to B but B does not link back to A), but we don't
-actually need for sentence similarity where similarity of A to B is exactly
+navigation (page A links to B but B does not link back to A). We don't
+need this because similarity of sentence A to B is exactly
 the same value as B to A.
 Spectral normalization <Math tex="\tilde{\mathbf{A}} = \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}" />:
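The two normalizations contrasted in the hunk above can be sketched on a toy symmetric similarity matrix (values and variable names are illustrative, not from the commit):

```python
import numpy as np

# Toy symmetric adjacency, as in sentence similarity: A[i, j] == A[j, i]
A = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
d = A.sum(axis=1)  # degree (total similarity) of each node

# Random-walk normalization D^-1 A: each row sums to 1 (a proper
# Markov transition matrix), but symmetry is lost
rw = A / d[:, None]

# Spectral normalization D^-1/2 A D^-1/2: stays symmetric
d_inv_sqrt = 1.0 / np.sqrt(d)
spec = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

print(np.allclose(rw.sum(axis=1), 1.0))  # True: rows are distributions
print(np.allclose(rw, rw.T))             # False: symmetry destroyed
print(np.allclose(spec, spec.T))         # True: symmetry preserved
```

Because `A` is symmetric, scaling row `i` by `d_i^{-1/2}` and column `j` by `d_j^{-1/2}` treats both endpoints of an edge the same, which is exactly why the spectral form keeps the matrix symmetric while the row-stochastic form does not.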
@@ -223,7 +223,7 @@
 Spectral normalization solves this problem. Well-connected sentences keep their
 influence proportional to connectivity.
 I asked a ML engineer to explain the same idea to give you a
-Rosetta Stone to understand their jaron.
+Rosetta Stone to understand their jargon.
 > The traditional <Math tex="\mathbf{D}^{-1} \mathbf{A}" /> approach introduces potential node bias and lacks symmetry. Spectral normalization
 > provides a more balanced representation by symmetrizing the adjacency matrix and ensuring more uniform