It's easy to think about vectors in the first sense (as anything with direction and magnitude) when we're working with classical units (space, force, electric fields, etc.).
But it becomes a nightmare to understand intuitively when that same "magnitude and direction" definition is applied to units that are not obvious to us humans (like time).
Thanks, but damn... I don't even understand your explanation. 😥 I work with vectors in Blender, so I have an intuitive understanding of them as per your first definition. But how are they less intuitive when talking about time? I don't get how this meme is structured
We can use vector spaces for thinking about things that aren't primarily concerned with physical space like we are in Blender. Let's imagine something practical, if a bit absurd. Pretend we have unlimited access to three kinds of dough. Each has flour, water, and yeast in different ratios. What we don't have is access to the individual ingredients.
Suppose we want a fourth kind of dough which is a different ratio of the ingredients from the doughs we have. If the ratios of the ingredients of the three doughs we already have are unique, then we are in luck! We can make that dough we want by combining some amount of the three we have. In fact, we can make any kind of dough that is a combination of those three ingredients. In linear algebra, this is called linear independence.
Each dough is a vector, and each ingredient is a component. We have three equations (one per ingredient) in three unknowns (how much of each existing dough to use).
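To make that concrete, here's a rough Python/NumPy sketch; the ratios are completely made up for illustration:

```python
import numpy as np

# Columns are the three doughs we have; rows are (flour, water, yeast) fractions.
doughs = np.array([
    [0.70, 0.60, 0.50],   # flour
    [0.25, 0.30, 0.40],   # water
    [0.05, 0.10, 0.10],   # yeast
])

target = np.array([0.62, 0.31, 0.07])   # the fourth dough we want

# Because the three dough vectors are linearly independent, this matrix is
# invertible and there is exactly one mix that produces the target.
amounts = np.linalg.solve(doughs, target)
print(amounts)           # how much of each existing dough to use
print(doughs @ amounts)  # reproduces the target ratios
```

If the three dough vectors weren't linearly independent, the matrix wouldn't be invertible and some target doughs would simply be out of reach.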
This is a three dimensional vector space, which is easy to visualize. But there is no limit to how many dimensions you can have, or what they can represent. Some economic models use vectors with thousands of dimensions representing inputs and outputs of resources. Hopefully my explanation helps us see how vectors can sometimes be more difficult to imagine as directions and magnitudes.
The trick is just to consider polynomials and functions as vectors, and to apply our meager intuition from 3D space. By introducing norms (a notion of size), you recover the "size and direction" analogy.
The definition section of the Wikipedia article has a table with these "nice relationships" for addition and scaling.
You will see that they also hold for many kinds of functions, such as polynomials, and for other things more abstract than points and directions in 2D or 3D: n-dimensional vectors, for example, or vectors of complex numbers, or both.
A vector space is a collection of vectors in which you can scale vectors and add vectors together such that the scaling and addition operations satisfy some nice relationships. The 2D and 3D vectors that we are used to are common examples. A less common example is polynomials. It's hard to think of a polynomial as having a direction and a magnitude, but it's easy to think of polynomials as elements of the vector space of polynomials.
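For a tiny illustrative sketch (the polynomials here are made up, stored as coefficient lists with the highest power first), addition and scaling in that space just happen coefficient by coefficient:

```python
# p(x) = x^2 + 2x, q(x) = 4x^2 - x + 3
p = [1, 2, 0]
q = [4, -1, 3]

def add(u, v):
    # Vector addition: add coefficients term by term.
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    # Scalar multiplication: multiply every coefficient by c.
    return [c * a for a in u]

print(add(p, q))     # [5, 1, 3]  ->  5x^2 + x + 3
print(scale(3, p))   # [3, 6, 0]  ->  3x^2 + 6x
```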
When talking about a vector space, you usually also need to specify the scalars (a field), and the scalars need multiplicative inverses to be well-defined.
So for the integers, the scalars would have to be the integers themselves.
Sadly, the inverse of an integer usually isn't an integer, which is where all sorts of number-theoretic nightmares begin.
Instead, the integers form a ring, and they are a module over the ring of integers rather than a vector space.
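A small illustrative sketch of why the inverses matter (the numbers are made up):

```python
from fractions import Fraction

v = [3, 5]                      # an integer vector
doubled = [2 * x for x in v]    # scaling by 2 keeps us in the integers

# Over the rationals we can undo that scaling, because the scalar 1/2 exists:
halved = [Fraction(1, 2) * x for x in doubled]
print(halved)                   # back to [3, 5] (as fractions)

# Over the integers there is no scalar s with 2 * s == 1, so scaling can't
# always be undone; that's why the integers form a module over the ring of
# integers rather than a vector space over a field.
```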
Start with a list of numbers, like [1 2 3]. That's it, a list of numbers. If you treat those numbers like they represent something though, and apply some rules to them, you can do math.
One way to consider them is as coordinates. If we had a 3-D coordinate grid, then [1 2 3] could be the point at x = 1, y = 2, and z = 3. You could also consider the list of numbers to be a line with an arrow at one end, starting from the point at [0 0 0] and stopping at the other point. This is a geometric vector: a thing with a direction and a magnitude. Still just a list of numbers though.
Now, what if you wanted to take that list and add another one, say [4 5 6]? How might you do it? You could concatenate the lists, like [1 2 3 4 5 6], and that has meaning and utility in some cases. But most of the time, you'd like "adding vectors" to give you a result that maps to something geometric, such as putting the lines with arrows end-to-end and seeing what new vector that gives you. You can do that by adding the corresponding elements of the two vectors. And, almost magically, the point at [5 7 9] is where you'd end up if you first went to [1 2 3] and then traveled [4 5 6] further. We made no drawings, but the math modeled the situation well enough to give us an answer anyway.
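Here's that addition as a tiny Python sketch (nothing official, just the elementwise rule spelled out):

```python
a = [1, 2, 3]
b = [4, 5, 6]

# "Adding vectors" = adding corresponding elements (tip-to-tail):
print([x + y for x, y in zip(a, b)])   # [5, 7, 9]
```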
Going further, maybe you want to multiply vectors, raise them to exponents, and more? There are several ways to do these, and each has different meanings when you think about them with shapes and geometry.
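For instance, here's a rough Python sketch of two common kinds of vector "multiplication" (using the same made-up numbers as above):

```python
a = [1, 2, 3]
b = [4, 5, 6]

# Elementwise product: multiply matching components.
print([x * y for x, y in zip(a, b)])      # [4, 10, 18]

# Dot product: a single number related to the lengths of the vectors
# and the angle between them.
print(sum(x * y for x, y in zip(a, b)))   # 32
```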
But vectors are just lists of numbers; they don't have to be geometric things. [1 2 3] could also represent the coefficients of a function, say f(x) = 1x^2 + 2x + 3(x^0). You can still do the same math to the vector, but now it means something else. It models a function, and combining it with other vectors lets you combine and transform functions just as if they were lines and shapes.
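A quick sketch of that idea, with the second polynomial made up purely for illustration: adding the coefficient vectors adds the functions themselves.

```python
def poly(coeffs, x):
    # Evaluate a polynomial given as [x^2 coeff, x coeff, constant] (Horner's rule).
    value = 0
    for c in coeffs:
        value = value * x + c
    return value

f = [1, 2, 3]   # x^2 + 2x + 3
g = [0, 4, 1]   # 4x + 1 (made up)

h = [a + b for a, b in zip(f, g)]            # x^2 + 6x + 4
print(poly(f, 2) + poly(g, 2), poly(h, 2))   # 20 20
```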
When you get into vectors beyond 3 elements, there's no longer a clean geometric metaphor to help you visualize. A vector with 100 elements can be used just as well as one with 2, but we can't visualize a space with 100-dimensions. These are "vector spaces" and a vector is a single point (or rather, points to a point) within them.
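As a made-up NumPy example, the same operations carry over unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(100)   # a vector with 100 elements
w = rng.random(100)

# Addition and "magnitude" work exactly as in 2D or 3D,
# even though we can't draw a 100-dimensional arrow:
print((v + w)[:5])
print(np.linalg.norm(v))   # Euclidean length of v
```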
Matrices are similar but allow for deeper models of more complex objects.
Very well explained, thank you. I keep forgetting, and am occasionally reminded, that just below the basic math I'm familiar with is a whole other level of advanced math, and just below that is the screaming void.