In my experience, there are insights to be gained when looking at what happens when vector operations work on 1D vectors. So I wanted to cover some observations.
This article will be weird because it has a lot of super simple observations – but any intuition that can be gained from simple ideas is low-hanging fruit.
Dimensionless To Vectors
This article is going to focus on abstractions and conversions between 1D and multiple dimensions. So before we start, some prerequisites. That’s part of why this article is weird; we’re going to go over some intermediate operations and then jump back to super simple ideas for the meat of the content – that’s also the point.
1D
So what we think of as a “normal” number is what mathematics calls a real number. These can be the quantity (units) of something, the size or volume of something (scalar), or some coefficient factor (dimensionless number). They aren’t mutually exclusive, and in the end, they’re all just real numbers.
A common way to view the space that real number values can occupy is a 1D number line. We’ll also call this a 1D plot.
2D and 3D
And then there’s 2D and 3D space. 2D is represented by joining two 1D plots at right angles to create a 2D plot. And 3D is often represented as an orthographic (axonometric) version of a 2D plot with an extra axis going through it.
And because we recognize the physical space we occupy as 3 dimensional, it’s pretty easy to illustrate ideas in these dimensions – especially for 2D diagrams on a 2D screen. For 3D we need to do some projections for it to show on our 2D displays, but it’s nothing we can’t comprehend easily.
By “displays”, I’m not just referring to screen devices, but any other 2D surface with an image on it – the entire spectrum from paper all the way to Etch A Sketches and zen gardens.
Hyperdimensions
But we are familiar with 1D to 3D concepts because of how observable they are; contemplating 4D and higher-dimensional spaces does not come naturally to us. Because of this divide – how intuitive 3-dimensional (or lower) spaces are for us to imagine, and how unintuitive anything higher than 3 is – we even have a word that describes the categorization: hyperdimensionality. A hyperdimension is any space with more than 3 dimensions, one that crosses the barrier of our natural intuitions.
As a side-note, people sometimes think of time as THE 4th dimension. But that’s only relevant if you’re using vectors to model spacetime. When abstracting these vectors for general purposes, vectors don’t have to represent physical space and reality – the x component doesn’t have to represent horizontalness, and the y component doesn’t have to represent verticalness. But that’s kind of the point: abstraction! And in that same mindset of decoupling vector math from what we apply it towards, we can decouple the idea of vectors from how many dimensions we’re normally familiar with.
Intuitively Generalizing Vectors
To get comfortable with the idea of hyperdimensional spaces, I think you need to be able to take 2D and 3D spaces and learn to generalize them.
And I think seeing how to convert between 1D and 2D/3D can be helpful in this process – because 1D is also pretty intuitive for us. If we can see how 1D generalizes to 2D/3D, and how 2D/3D spaces project into 1D, the hope is to gain insight into that same generalization between 2D/3D and hyperdimensions.
A lot of things aren’t going to be perfectly interchangeable and directly translate (between 1D and 2D+), but they should be similar enough to provide intuition. Actually, those nuanced issues are part of the intuition.
Basic Vector Arithmetic
With all that out of the way, let’s start covering these operations.
For these examples, we will represent the 1D vector as d, or da and db if multiple values are needed. Since the vector d only has 1 component, it’s essentially a scalar, but we’ll show that value as a vector component for the sake of pushing a concept.
\overrightarrow{d} = [ \mathit{d_1}] = \overrightarrow{d}_1
So whether we’re talking about a 1D vector or its single component, think of it as both a vector and just a real number.
Addition and Subtraction
Here are the generalized formulas for vector addition and subtraction. They’re just component-wise operations.
\overrightarrow{\text{a}} +\overrightarrow{\text{b}} = [\mathit{a_1} + \mathit{b_1}, \mathit{a_2} + \mathit{b_2}, \ldots, \mathit{a_n} + \mathit{b_n} ] \\ \overrightarrow{\text{a}} -\overrightarrow{\text{b}} = [ \mathit{a_1} - \mathit{b_1}, \mathit{a_2} - \mathit{b_2}, \ldots, \mathit{a_n} - \mathit{b_n} ]
1D vector addition is just addition. So this one’s a no-brainer. Addition and subtraction are just doing an add or subtract on a per-component basis, and a 1D vector only has one component.
\overrightarrow{da} + \overrightarrow{db} = \overrightarrow{da}_1 + \overrightarrow{db}_1\\ \overrightarrow{da} - \overrightarrow{db} = \overrightarrow{da}_1 - \overrightarrow{db}_1\\
Nothing earth-shattering here, but we’ve got the ball rolling and dipped our toes into some simple math notation. We’re also starting to see how we can think of single real numbers as 1D vectors.
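The component-wise rule above can be sketched in a few lines of Python (my own illustration, not code from the article), treating a real number as a one-component vector:

```python
# Sketch: component-wise vector addition/subtraction, where a plain
# number is just a 1-element vector.

def vec_add(a, b):
    """Component-wise addition for vectors of any dimension."""
    return [ai + bi for ai, bi in zip(a, b)]

def vec_sub(a, b):
    """Component-wise subtraction for vectors of any dimension."""
    return [ai - bi for ai, bi in zip(a, b)]

# In 1D the "vector" operation collapses to ordinary arithmetic.
da, db = [3.0], [5.0]
print(vec_add(da, db))  # [8.0]  -- same as 3.0 + 5.0
print(vec_sub(da, db))  # [-2.0] -- same as 3.0 - 5.0
```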
Calculating Target Vectors
A target vector is a vector that starts at one point and travels to another. It’s created by simple point subtraction.
\overrightarrow{ab} = B - A
Then if we’re at point A, we can travel to B with simple vector addition.
B = A + \overrightarrow{ab}
Creating a target vector is important for things like linear interpolation and rescaling a space from an arbitrary pivot point.
Note how the target comes first in the subtraction, with the starting point subtracted from it. There used to be a time (long ago, before I internalized this) when I would forget the proper ordering of operands (i.e., “was it B-A or A-B? Target-start or start-target?”) and would often diagram this out in 2D on a sheet of paper to figure it out. It was just too much for me to visualize in my head.
But, figuring this out in 1D is insanely simple because it’s just a subtraction.
In the illustration above, we can easily see that if we want a vector going from 0 toward 5 (the positive direction), then 5 (the destination) needs to be the first operand in the subtraction operation.
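Here’s that same 1D case as a quick sketch – in Python rather than the article’s C#, purely as an illustration of my own:

```python
# Sketch: building a 1D target vector and using it to travel from the
# start point to the destination.

def target_vector(start, target):
    # Ordering matters: destination first, minus the starting point.
    return target - start

A, B = 0.0, 5.0
ab = target_vector(A, B)
print(ab)      # 5.0 -- positive, pointing from 0 toward 5
print(A + ab)  # 5.0 -- travelling from A by ab lands on B
```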
Scalar Multiplication and Division
Here are the generalized formulas for vector multiplication and division against a scalar.
\overrightarrow{\text{v}} * s = [\mathit{v_1} * s, \mathit{v_2} * s, \ldots, \mathit{v_n} * s] \\ \overrightarrow{\text{v}} /s = [\mathit{v_1} / s, \mathit{v_2} / s, \ldots, \mathit{v_n} / s]
A 1D vector is pretty much a scalar, so this is pretty much scalar-by-scalar multiplication, which is just normal multiplication. The same goes for dividing.
\overrightarrow{\text{d}} * s = \overrightarrow{d}_1 * s\\ \overrightarrow{\text{d}} /s = \overrightarrow{d}_1 / s
Again, nothing earth-shattering, but let’s start moving on to more interesting 1D generalizations.
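For completeness, a tiny Python sketch of my own (not from the article) of per-component scaling collapsing to plain arithmetic in 1D:

```python
# Sketch: scalar multiplication and division applied per component.

def vec_scale(v, s):
    return [vi * s for vi in v]

def vec_divide(v, s):
    return [vi / s for vi in v]

d = [4.0]  # a 1D vector
print(vec_scale(d, 3.0))   # [12.0] -- same as 4.0 * 3.0
print(vec_divide(d, 2.0))  # [2.0]  -- same as 4.0 / 2.0
```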
Vector Operations
So now we’ll move on to more complex and significant operations. The difficulty in understanding these generalizations probably isn’t going to increase greatly, but I think the insights start getting more significant.
Magnitude
The magnitude is a scalar that defines the length of the vector. Basically, multiply each component by itself, add those results up, and take the square root. The magnitude operation is also notated by placing bars (|) around the vector.
magnitude(\overrightarrow{v}) = |\overrightarrow{v}| = \sqrt{\sum_{i=1}^{n} v_i ^2}
When applied to 2D vectors, we famously know this as the Pythagorean theorem.
This works with 1D because we’re just squaring a single component, only to take the square root of it directly afterward.
|\overrightarrow{d}| = \sqrt{d_1 * d_1} = abs(\overrightarrow{d}_1)
Another way of thinking of the length of a vector is as its distance from the origin. In 1D terms, this equates to the distance from zero, which is an absolute value. So another way to look at the magnitude of vectors is as a vector’s absolute value.
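The general formula and its 1D collapse can be sketched like so – Python of my own, just to make the “square root of a square is abs” point concrete:

```python
import math

# Sketch: the general magnitude formula collapses to abs() in 1D.

def magnitude(v):
    """Square each component, sum them, then take the square root."""
    return math.sqrt(sum(vi * vi for vi in v))

# 2D: the familiar Pythagorean theorem.
print(magnitude([3.0, 4.0]))  # 5.0

# 1D: the square and square root cancel, leaving the absolute value.
print(magnitude([-7.0]))      # 7.0 -- same as abs(-7.0)
```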
Normalization
Normalizing a vector returns another vector that points in the same direction, but has a length of 1.
A vector that has a magnitude of 1 is called a unit vector.
The operation is represented as double bars (||) around a vector.
||\overrightarrow{v}|| = \frac{\overrightarrow{v}}{|\overrightarrow{v}|}
If the magnitude is the absolute value of the vector, and we know any nonzero number divided by its absolute value returns ±1, the 1D version is pretty straightforward.
||\overrightarrow{d}|| = \frac{\overrightarrow{d}_1}{abs(\overrightarrow{d}_1)}
We can see how a value with a magnitude of 1 is returned. Actually, since there are only 2 possible directions in 1D (positive or negative), the 1D equivalent of normalization is the sign() function, which returns -1 if the operand is negative, 1 if the operand is positive, and 0 if the operand is 0.
sign(x) = \begin{cases} 0 & x = 0 \\ 1 & x \gt 0 \\ -1 & x \lt 0 \end{cases}
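We can check this equivalence with a short Python sketch of my own (not the article’s code): normalizing a nonzero 1D vector gives the same answer as sign().

```python
import math

# Sketch: normalization of a nonzero 1D vector equals its sign.

def normalize(v):
    mag = math.sqrt(sum(vi * vi for vi in v))
    return [vi / mag for vi in v]

def sign(x):
    if x == 0:
        return 0
    return 1 if x > 0 else -1

print(normalize([-3.5]))  # [-1.0], same as sign(-3.5)
print(normalize([0.25]))  # [1.0],  same as sign(0.25)
```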
Spherical Collision
Given a sphere in 3D (or a circle in 2D), we can see if a point is inside it by checking whether the magnitude of the vector between the sphere’s center and the point is less than or equal to the sphere’s radius.
Collide(c, V) = \begin{cases} true & |c_{Pt} - V| \leq c_{rad} \\ false & otherwise \end{cases}
Really, this is just checking if two points are a certain distance from each other. For programming, when dealing with floating-point error, often we don’t check if two values are equal. Instead, we’ll check if they’re within range of each other by a certain amount (called an epsilon).
On this Wikipedia article about floating-point error mitigation, see “interval arithmetic” for more information.
// Using C# Unity conventions
// Remember an absolute value is a 1D implementation of calculating the magnitude.
// How small eps should be varies by task. Sometimes any small number will do, sometimes
// it's specific based on the sensitivity of whatever algorithm is being implemented.
const float eps = 0.000001f;

bool AreSame(float a, float b)
{
    return Mathf.Abs(a - b) <= eps;
}
If the magnitude is the equivalent of the absolute value, which calculates distance – and we want to check the distance between two points – it makes intuitive sense to check the absolute value of the difference between two values. This also gives intuition for how these epsilon checks are really 1D sphere-to-point collision formulas.
In the illustration, we can see AreSame(A, B) would return true, and AreSame(A, C) would return false.
We also do this with 3D vectors, because if we need an epsilon check for one number, why should the situation be any different when dealing with 2 or 3 numbers grouped together?
const float eps = 0.000001f;

bool AreSame(Vector3 a, Vector3 b)
{
    // The math isn't quite the same when using sqrMagnitude, but
    // it doesn't matter because we're just doing a ballpark check
    // of "is the distance between them pretty small".
    //
    // Using sqrMagnitude is faster because it omits a square root that
    // magnitude would need.
    return (a - b).sqrMagnitude <= eps;
}
I’ve broken the convention of using LaTeX for the epsilon code samples above because floating point error is a computer artefact and programming issue, not a math concept.
Dot Product
It’s time to give the 1D abstraction treatment to the almighty dot product! It’s as simple as multiplying per-component values between two vectors and then summing them up.
\overrightarrow{a} \cdot \overrightarrow{b} = \sum_{i=1}^{n} \overrightarrow{a}_i * \overrightarrow{b}_i
And the most important part about it is the identity below.
\overrightarrow{a} \cdot \overrightarrow{b} = |\overrightarrow{a}|*|\overrightarrow{b}| *cos(\theta)
Given two vectors, it will return the cosine of the angle between them, multiplied by the magnitude of each vector.
Normally the cosine would return 1 if the vectors point in the same direction, -1 if the vectors point directly opposite, 0 if the vectors are perpendicular, and somewhere between those values for everything else. But for 1D, we only have 2 directions, negative and positive. So these 1D vectors are either pointing in the same direction or the exact opposite direction.
Note that we can’t have a 1D unit vector of value 0, or else it wouldn’t be a unit vector.
In 1D this turns out to be normal multiplication. If we multiply two positive real numbers (i.e., two positive pointing 1D vectors) we get a positive number back. If we multiply two negative real numbers, that’s a double negative and we also get a positive number back. If we multiply a positive by a negative number, we get a negative number back. That’s exactly what the cosine factor does when doing a dot product in 1D!
And of course, the absolute values are also multiplied together to bring the final value to the correct scale.
\overrightarrow{da} \cdot \overrightarrow{db} = \overrightarrow{da}_1 * \overrightarrow{db}_1 \\ cos(\theta) = sign(\overrightarrow{da}_1) * sign(\overrightarrow{db}_1) \\ \overrightarrow{da} \cdot \overrightarrow{db} = abs(\overrightarrow{da}_1) * abs(\overrightarrow{db}_1) * cos(\theta)
I’ll spare you covering the other identities of the dot product, but they also hold up in 1D.
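To close, a Python sketch of my own (not from the article) verifying the 1D story above: the dot product is plain multiplication, and it decomposes into the two magnitudes times a cosine that is just a product of signs.

```python
# Sketch: the 1D dot product and its cosine identity.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(x):
    return (x > 0) - (x < 0)

da, db = [-4.0], [2.5]

# 1D dot product is just plain multiplication.
print(dot(da, db))  # -10.0

# The identity |a| * |b| * cos(theta), with cos as the sign product.
cos_theta = sign(da[0]) * sign(db[0])  # -1: the vectors point opposite ways
print(abs(da[0]) * abs(db[0]) * cos_theta)  # -10.0
```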