Coding: Fleeting Thoughts
Moderators: phlip, Moderators General, Prelates
 sourmìlk
 If I can't complain, can I at least express my fear?
 Posts: 6393
 Joined: Mon Dec 22, 2008 10:53 pm UTC
 Location: permanently in the wrong
 Contact:
Re: Coding: Fleeting Thoughts
Well presumably if you're using the function you've looked up the declaration. You don't actually have to look at the definition, because the const keyword (mostly) guarantees that no mutation of the object will go on inside the method. And the const keyword is found in the declaration.
Terry Pratchett wrote:The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
You seem to have a common misconception about what code is about.
Code isn't about the writing and the telling of a computer what to do. That is easy.
Code is about the reading and the telling of a human what the code is telling the computer what to do.
a = sort(b) looks like a sorting algorithm that takes b, and returns a sorted a.
sort(b) looks like a sorting algorithm that takes b, and sorts it.
A sort() function that both sorts its input and returns its input might help if your goal is to minimize the "number of lines of code", but it ends up looking like a non-mutating sorting function.
"Reducing the number of lines to do something" for no reason other than using fewer lines is a stunt, something an immature programmer does as a challenge to themselves, or a bad habit said programmer picked up. Learn how to do this, because it is good brain exercise, but doing it in "real" code is about as smart an idea as stopping to stretch while running a 100 meter dash.
Code is a means of communicating to yourself or others in the future; secondary to that, it is a means of telling a computer what to do. Computers will take any old random unreadable gibberish and do what it says; it isn't hard for any competent programmer to tell a computer to do a myriad of different tasks. But any code worth its salt will end up with more people reading it and trying to modify/understand it than the time you spent writing it, and their time is actually worth as much as yours is.
Throw away code is another thing entirely.
Now, there are some situations where the convention that mutating functions return their primary input exists, and in those contexts the sort that mutates and returns the input isn't confusing. . .
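For reference, this is exactly the line Python draws: list.sort() mutates and returns None, while the sorted() built-in returns a new list and leaves its argument alone. A quick sketch:

```python
# Python's convention: the mutating sort returns None,
# the non-mutating sort returns a new list.
b = [3, 1, 2]

a = sorted(b)      # new sorted list; b is untouched
print(a)           # [1, 2, 3]
print(b)           # [3, 1, 2]

result = b.sort()  # sorts b in place...
print(b)           # [1, 2, 3]
print(result)      # None -- nothing useful to chain on
```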
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. - BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

 Posts: 310
 Joined: Wed Jul 01, 2009 3:03 am UTC
Re: Coding: Fleeting Thoughts
Yakk wrote:You seem to have a common misconception about what code is about.
Code isn't about the writing and the telling of a computer what to do. That is easy.
Code is about the reading and the telling of a human what the code is telling the computer what to do.
I would disagree with that somewhat. I think that, at least as important as making code self-documenting, is making code that is easy to modify. If I only want someone to understand my code, I can write documentation for that purpose.
Re: Coding: Fleeting Thoughts
EvanED wrote:The downside of that approach is it (IMO) blurs the distinction between "this function produces a modified copy" and "this function mutates"; conceptually, I firmly feel that a mutating sort function shouldn't return anything, though I'm sympathetic to the view that it can make code more concise and I'm willing to think that's sometimes the better choice.
In the case of sort() I still come down on Python's side though, especially because of the presence of the sorted() function.
I agree that this is an advantage in the case of .sort() and .reverse() methods, which are easily confused with built-in functions, but this policy of mutating methods returning None is a general one, and I don't think it was (or should have been) done on account of those special cases. The point still applies, to a lesser extent, in general, but it doesn't seem like a very good way to distinguish mutating methods to me. None can still be assigned to a variable, so it won't catch bugs all the time, and it could even lead to complacency in thinking any method that returns a value will be non-mutating. Personally, I think the best way to distinguish mutating methods is to allow ! in identifiers.
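As a concrete illustration of that failure mode (a minimal sketch, not from the thread): the None binds silently, and the error only surfaces later, far from the actual mistake:

```python
b = [3, 1, 2]
a = b.sort()       # a silently becomes None; no error on this line
# the mistake only shows up when a is later used as a list:
try:
    a.append(4)
except AttributeError as e:
    print("caught:", e)  # 'NoneType' object has no attribute 'append'
```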
I personally suspect Guido doesn't like the look of method chaining in Python. Python is something of a planned city that way.
Yakk wrote:a = sort(b) looks like a sorting algorithm that takes b, and returns a sorted a.
sort(b) looks like a sorting algorithm that takes b, and sorts it.
That's true, but I think it's something of a red herring. Firstly, no one is suggesting a mutating function return the mutated object. That would be quite confusing and has no real purpose. This is about methods. .sort() allows method chaining:
Code: Select all
myobject.append("another item")
        .sort()
        .extend(other_list)
You could still write the somewhat confusing a = b.sort(), but why would you do that? If you're setting a to b, you must want to do something different with a than with b, but it's not the sort, because you just did it to b. So it must be something later. So why arbitrarily combine the aliasing with the sorting? That's almost deliberate obfuscation.
Of course, someone could still write a = b.sort() accidentally, thinking it was non-mutating. But as I said before, I don't think disallowing method chaining is an adequate solution to this problem, which is quite general.
Yakk wrote:"Reducing the number of lines to do something" for no reason other than using fewer lines is a stunt, something an immature programmer does as a challenge to themselves, or a bad habit said programmer picked up. Learn how to do this, because it is good brain exercise, but doing it in "real" code is about as smart an idea as stopping to stretch while running a 100 meter dash.
Code is a means of communicating to yourself or others in the future; secondary to that, it is a means of telling a computer what to do. Computers will take any old random unreadable gibberish and do what it says; it isn't hard for any competent programmer to tell a computer to do a myriad of different tasks. But any code worth its salt will end up with more people reading it and trying to modify/understand it than the time you spent writing it, and their time is actually worth as much as yours is.
I don't think most people see method chaining as code golf. It usually doesn't even save many characters. If there's just a few short methods, it might save lines, but beyond that, the chained methods would normally be written one to a line anyway, so there's not even that saving.
People use method chaining precisely because it's easy to read. "Take an object and perform a series of operations on it" is a very natural unit of understanding, and reflecting that in code makes the code easier to understand.
disclaimer: I rarely use method chaining myself, because I don't frequently use languages that support it well. But it's something I envy in other languages. (Occasionally I use it in my own types in Python, but not heavily since it conflicts with the style of other code.)
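For illustration, chaining in this style is usually enabled by having each mutator return self. A minimal hypothetical sketch (FluentList is an invented name, not a real library):

```python
class FluentList:
    """Hypothetical list wrapper whose mutators return self,
    purely to illustrate the chaining style discussed above."""
    def __init__(self, items=None):
        self.items = list(items or [])

    def append(self, item):
        self.items.append(item)
        return self          # returning self is what enables chaining

    def sort(self):
        self.items.sort()
        return self

    def extend(self, other):
        self.items.extend(other)
        return self

myobject = FluentList([3, 1])
myobject.append(2).sort().extend([9, 8])
print(myobject.items)        # [1, 2, 3, 9, 8]
```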
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
Are these methods mutating or not?
Code: Select all
foo(myobject.append(item).sort().extend(another_list))
this is an a = b.foo() case, and one I see reasonably often with method chaining. And I wouldn't call it "deliberate obfuscation."
You almost want a message streaming operator to make it clear:
myobject < append(item) < sort < extend(another_list)
where < is a convention to send message to objects in a chain.
Re: Coding: Fleeting Thoughts
Yakk wrote:Are these methods mutating or not?
Code: Select all
foo(myobject.append(item).sort().extend(another_list))
this is an a = b.foo() case, and one I see reasonably often with method chaining. And I wouldn't call it "deliberate obfuscation."
It's an "a = b.foo() case" in the sense that it does something with the final object after the chaining is finished, but that wasn't really what I meant. The reason I felt a = b.foo() was artificially misleading is that assigning b.foo() to a new variable makes it look like .foo() is returning a new value. Just passing b.foo() to an arbitrary function doesn't raise that issue, and I don't think it's nearly as confusing.
That said, I still don't like it. If it's being passed to a pure function, I'd personally prefer to split it into 2 steps:
Code: Select all
myobject.append(item).sort().extend(another_list)
foo(myobject)
The reason is not that I think it's confusing per se, but just that it doesn't read naturally. It says to perform a function on a value..., but then you have to pause that train of thought and perform a series of modifications on the object before you pass it in*.
* There may be the occasional exception where the object is being modified specifically for the function, and opening the function call first helps give context to understand why the modifications are being done. (In this case, what I'd really want is a where clause, i.e. "foo(x) where x=...", but lacking that, I might stick the entire method chain inside the function call.)
You almost want a message streaming operator to make it clear:
myobject < append(item) < sort < extend(another_list)
where < is a convention to send message to objects in a chain.
Yeah, it might be justified if mutating methods are common enough. Alternatively, you could just have a chain() method on objects.
Code: Select all
myobject.chain(
    ('append', item),
    'sort',
    ('extend', list2))
Code: Select all
myobject.chain(
    .append(item),
    .sort,
    .extend(list2))
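For what it's worth, the first proposal can be sketched in a few lines of ordinary Python. ChainList and its chain() method are hypothetical, and positional tuples are used so that the call order is preserved and duplicate method calls are allowed:

```python
class ChainList(list):
    """Hypothetical list subclass with a chain() method that applies
    mutating methods in the order given, as (name, *args) tuples."""
    def chain(self, *steps):
        for step in steps:
            if isinstance(step, str):        # bare name, no arguments
                name, args = step, ()
            else:                            # ('name', arg1, arg2, ...)
                name, args = step[0], step[1:]
            getattr(self, name)(*args)
        return self

myobject = ChainList([3, 1])
myobject.chain(('append', 2), 'sort', ('extend', [9, 8]))
print(myobject)   # [1, 2, 3, 9, 8]
```

Because the steps are positional arguments rather than kwargs, ordering and repeated calls are not a problem, at the cost of passing names as strings.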
 Bubbles McCoy
 Posts: 1106
 Joined: Wed Jul 09, 2008 12:49 am UTC
 Location: California
Re: Coding: Fleeting Thoughts
sourmìlk wrote:Well presumably if you're using the function you've looked up the declaration. You don't actually have to look at the definition, because the const keyword (mostly) guarantees that no mutation of the object will go on inside the method. And the const keyword is found in the declaration.
Sure, you should do at least a bit of light research before using a function. But a library/set of functions should still be as easily understandable as possible, and linguistic conventions are a part of that: the easier it is to predict behavior based off of code you've seen before, the better.
troyp wrote:Although that would be a bit awkward due to the need to identify methods using strings. It would be nicer if there was some way to concisely represent lambda obj: obj.method(args), e.g. ".method(args)". Then the chain method could just accept these functions as args and compose them before applying them to the instance. This has the advantage of using standard function syntax while still being clear about what's happening. It also allows you to easily extend a statement over multiple lines (this can be a difficulty in Python due to its parsing; an issue I ignored in my hypothetical example is that you'd need continuation characters to make Python accept it (you could have your editor insert them at column 80, but it's still ugly)).
Code: Select all
myobject.chain(
    .append(item),
    .sort,
    .extend(list2))
Commas in a function call are valid end-of-line characters in Python, so the above example wouldn't run into that problem. The best way to write that in Python (imho) would be:*
Code: Select all
class list:
    ...
    def chain(self, **kwargs):
        for key in kwargs:
            getattr(self, key)(*kwargs[key])

mylist.chain(append=[item],
             sort=[],
             extend=[list2])
The "**" automatically collects the keyword names as strings, and you can unpack each argument list as plain arguments. A little clumsy, since you are still relying on strings to determine the methods to execute, but at least it's relatively hidden from the user.
While this is possible, it does strike me as profoundly unpythonic. The language is largely geared towards simplicity, which is probably why chaining on mutations isn't allowed in the first place. Creating a method wrapper to get around it doesn't really strike me as an appropriate response: a method with no return value should have its own line for clarity's sake.
I personally like Ruby's convention on this: pretty much every method call returns the object (assuming, of course, the method call wasn't requesting something specific from the object), and in general ambiguous cases like "sort" return a new object. If you instead want to mutate the object, you call "sort!", and it returns the modified object. So, if you're in the mood you can do something like ary.push(5).sort!.map { |v| v + 1 }, which would add a new value to the array, sort it in place, then return a new version of the sorted array with each value increased by one (and if you wanted the mapping to occur in place, use "map!").
*EDIT: Just realized this technique is near-useless, as the order of execution is not guaranteed, nor will it allow duplicate method calls in the same chain.
Re: Coding: Fleeting Thoughts
Bubbles McCoy wrote:Commas in a function call are valid end-of-line characters in Python, so the above example wouldn't run into that problem.
Exactly. My point was that the function call can be split over multiple lines, whereas the method chaining can't (without \). Which I guess is another indication that they don't want too much method chaining in Python.
While this is possible, it does strike me as profoundly unpythonic. The language is largely geared towards simplicity, which is probably why chaining on mutations isn't allowed in the first place. Creating a method wrapper to get around it doesn't really strike me as an appropriate response: a method with no return value should have its own line for clarity's sake.
I'd hardly call a chaining method "profoundly unpythonic". On the contrary, the reason I mentioned it is that it is pythonic (at least in principle), whereas the method chaining is not. Multi-line function calls are acceptable in Python, which is why commas trigger a continuation line. Variable arity and keyword args are highly pythonic and well-supported. Similarly, reflection and higher-order functions are also pythonic. Creating a function that takes an arbitrary number of methods and applies their composition to an object is not in itself doing anything unpythonic. Sure, it's a bit more abstract than is common in Python code, and may require unpythonic hacks to implement, but the basic idea is quite consistent with pythonic conventions.
Of course, the actual implementations are either unpythonic or just plain awkward, so I wouldn't actually use them. But if Python had a convenient notation like ".method(args)" for partially-evaluated unbound methods (i.e. lambda obj: obj.method(args)) like I mentioned in my last post, I'd use that version of chain().
I personally like Ruby's convention on this  pretty much every method call returns the object (assuming, of course, the method call wasn't requesting something specific from the object), and in general ambiguous cases like "sort" return a new object. If you instead want to mutate the object, you call "sort!", and it returns the modified object. So, if you're in the mood you can do something like ary.push(5).sort!.map { v v + 1 }, which would add a new value to the array, sort it in place, then return a new version of the sorted array with each value increased by one (and if you wanted the mapping to occur in place, use "map!").
I completely agree. I don't know why Python doesn't use ! in identifiers. I don't think it's even used anywhere else. (That's actually a Lisp convention, btw.)
*EDIT: Just realized this technique is near-useless, as the order of execution is not guaranteed, nor will it allow duplicate method calls in the same chain.
I just wrote a reply to that part before I noticed this edit and had to cut it out. But yeah, that's the main issue. Kwargs syntax is the only way I can think of to avoid passing in names as strings, but a dictionary can't reasonably model an association list. I think any way you tried to get around it, you'd end up with much more cumbersome syntax.
Re: Coding: Fleeting Thoughts
I just checked the source on a webpage I have to visit for work, and it has good comments in it. That excited me, even though I don't have to really interact with it on that level.
That's... sort of sad. And weird. What an obscure thing to be excited over.
 sourmìlk
 If I can't complain, can I at least express my fear?
 Posts: 6393
 Joined: Mon Dec 22, 2008 10:53 pm UTC
 Location: permanently in the wrong
 Contact:
Re: Coding: Fleeting Thoughts
Not at all. You ever seen this image macro? It exists for a reason.
 sourmìlk
 If I can't complain, can I at least express my fear?
 Posts: 6393
 Joined: Mon Dec 22, 2008 10:53 pm UTC
 Location: permanently in the wrong
 Contact:
Re: Coding: Fleeting Thoughts
I couldn't find a generic "range" class in C++, so I created one. Basically it allows you to iterate through a range of values by specifying a beginning and an end. As long as the value type has ++ and != operators, it works. You can also specify a step (i.e. the distance between each element), in which case the range class requires a + operator. Examples:
Code: Select all
for_each(range(0, 10).begin(), range(0, 10).end(), [](int i){cout << i << " ";}); // prints "0 1 2 3 4 5 6 7 8 9"
for_each(range(0, 10, 2).begin(), range(0, 10, 2).end(), [](int i){cout << i << " ";}); // prints "0 2 4 6 8"
The usefulness, of course, comes from the fact that it works for any type with the necessary overloaded operators. Was there really no need for this functionality? Was it a waste of my time to implement it? Or should it legitimately not have been missing from the STL?
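For comparison, a hedged Python sketch of the same idea: a generator that works for any type supporting + and <, not just integers (generic_range is an invented name, distinct from Python's integer-only built-in range):

```python
def generic_range(start, stop, step=1):
    """Yield start, start+step, ... while strictly below stop.
    Works for any type supporting + and <, e.g. floats or Decimals."""
    current = start
    while current < stop:
        yield current
        current = current + step

print(list(generic_range(0, 10)))           # [0, 1, ..., 9]
print(list(generic_range(0.0, 1.0, 0.25)))  # [0.0, 0.25, 0.5, 0.75]
```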

 Posts: 1459
 Joined: Fri Apr 20, 2007 3:27 pm UTC
 Location: The Tower of Flints. (Also known as: England.)
Re: Coding: Fleeting Thoughts
Generally I try to make myself do things I instinctively avoid, in case they are awesome.
dubsola
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
It isn't. However, with full C++11 compliance, we get:
Code: Select all
for( int x : range(0,10,2) )
    print(x);
which is tight, especially if it can be shown to be as efficient as (or more so than) the naive for(;;) loop.
In order to do that, we'd want to constexpr up the range function, so that it can be evaluated at compile time, and the loop be completely unrolled iff the arguments are compile-time determined.
Which makes me think: can we use for-each style functions to build continuations in C++? I.e.:
continuation( container c, function f )
of type ( Container(X), X -> Y ) -> Y
which returns a function that evaluates f on the first element of c the first time it is called, then the second element of c the second time it is called, etc.?
Sadly, the only way this can signal the end of the production of data in standard C++ is by throwing an exception (I've played around with pseudo-functors that have an operator bool() to signal that they are invalid, but that isn't standard).
So instead we need to do this:
( Range(X), X -> Y ) -> Range(Y)
which is basically map. We wouldn't have to eagerly evaluate the values of the return value, and could instead just evaluate them "lazily".
Such transformations let you do things like:
Code: Select all
for( int x : accumulate_each( range(0,10,2) ) )
    print(x);
which would end up being:
Code: Select all
{
    int acc = 0;
    for( int x1 = 0; x1 < 10; x1 += 2 )
    {
        acc += x1;
        int x = acc;
        print(x);
    }
}
which has its advantages as well.
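The lazy (Range(X), X -> Y) -> Range(Y) transformation described above is essentially what Python generators provide; a small sketch, using the real itertools.accumulate for the accumulating case (lazy_map is an invented name):

```python
from itertools import accumulate

def lazy_map(f, xs):
    # (Range(X), X -> Y) -> Range(Y): values computed only on demand
    for x in xs:
        yield f(x)

evens = range(0, 10, 2)                 # 0, 2, 4, 6, 8 (lazy)
print(list(accumulate(evens)))          # [0, 2, 6, 12, 20]
print(list(lazy_map(lambda x: x * x, range(0, 10, 2))))  # [0, 4, 16, 36, 64]
```

Nothing is computed until the consumer asks for values, which is the same unrolling opportunity the C++ version is after.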
 sourmìlk
 If I can't complain, can I at least express my fear?
 Posts: 6393
 Joined: Mon Dec 22, 2008 10:53 pm UTC
 Location: permanently in the wrong
 Contact:
Re: Coding: Fleeting Thoughts
Wooh, I did something useful. I'll try the constexpr'ing right away.
EDIT: I don't think constexpr'ing will work, considering (among other things) std::vector interaction.
 The Great Hippo
 Swans ARE SHARP
 Posts: 7367
 Joined: Fri Dec 14, 2007 4:43 am UTC
 Location: behind you
Re: Coding: Fleeting Thoughts
When I first learned about OpenGL and matrices, I was really intimidated by how the vectors were '4d' instead of '3d', but later was relieved to find out that this was just some sort of weird trick, and the 4th component of the vector was always 1.
In building my own (primitive, sucky, but allows for vectors and matrices for any value of R^{n}) linear algebra library to prove I've grasped the basic concepts, the nature of that cheap trick finally hit me while I was fixing matrix multiplication and trying to solve for matrices that would allow me to add arbitrary constants to a given vector's components: you can't do that, because the matrices rely on the vectors you multiply them by for all their values. But if one of those values is always one, you can sneak some constants in and add them to the other components! Allowing for matrices that perform simple 'addition translation' tasks!
The really cool thing I love about this is the sudden realization that the last component of a given vector doesn't have to be one; it just has to behave like one for the purposes of multiplication. I.e., any value * this thing has to return the value. But otherwise, it can be anything I want it to be, including a new type of object that gets slapped on as a trailing component on all generated vectors and carries some additional information for creating shortcuts.
I have no idea what that additional information might be, or what those shortcuts might do, but I'm really fond of the idea.
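A minimal pure-Python sketch of that trick (the helper mat_vec is invented for illustration): the trailing 1 is what the translation column multiplies, so the constants sneak into the other components:

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# 4x4 translation by (tx, ty, tz): this only works because the
# vector carries a trailing 1 for the constants to multiply.
tx, ty, tz = 5, -2, 7
T = [[1, 0, 0, tx],
     [0, 1, 0, ty],
     [0, 0, 1, tz],
     [0, 0, 0, 1]]

p = [1, 1, 1, 1]        # the point (1,1,1) in homogeneous coordinates
print(mat_vec(T, p))    # [6, -1, 8, 1]
```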
 sourmìlk
 If I can't complain, can I at least express my fear?
 Posts: 6393
 Joined: Mon Dec 22, 2008 10:53 pm UTC
 Location: permanently in the wrong
 Contact:
Re: Coding: Fleeting Thoughts
Yeah, I created some linear algebra libraries and types myself, it's rather fun. Also, the fourth element is 0 if you don't want the vector to be translated, e.g. if it's a directional or normal vector or something.
Re: Coding: Fleeting Thoughts
[disclaimer: It's probably obvious, but I don't know anything about graphics]
So they use "degenerate" 4-vectors so they can translate 3D points by rotating/reflecting them through 4D space? That's interesting and kind of elegant, but what's the gain exactly that's great enough to justify this?
I'm assuming it's something to do with compositionality: it's a lot easier to compose translations with rotations if you can represent both with matrix operations. Is it because you want to apply the same transformation to a lot of different "pixels" in an "object", and this lets you precompute a single matrix operation that can then be cheaply applied?
The Great Hippo wrote:The really cool thing I love about this is the sudden realization that the last component of a given vector doesn't have to be one; it just has to behave like one for the purposes of multiplication. IE, any value * this thing has to return the value. But otherwise, it can be anything I want it to be, including a new type of object that gets slapped on as a trailing component on all generated vectors and carries some additional information for creating shortcuts.
That's going to be awfully expensive in terms of method lookups, though, isn't it?
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
The trick is to do the math in projective R^{4}, then map it over to affine maps on R^{3}.
Or some such.
And yes you end up with being able to compose transformations and store the result for later application. Plus you can build specialized hardware that does these operations really fast and in parallel.
As for making the end scalar its own type, dunno if you could do it. In a language like C++ there would be no need for method lookup overhead, and it is doubtful you would want to do anything like that in less metal languages.
Re: Coding: Fleeting Thoughts
troyp wrote:[disclaimer: It's probably obvious, but I don't know anything about graphics]
So they use "degenerate" 4-vectors so they can translate 3D points by rotating/reflecting them through 4D space? That's interesting and kind of elegant, but what's the gain exactly that's great enough to justify this?
I'm assuming it's something to do with compositionality: it's a lot easier to compose translations with rotations if you can represent both with matrix operations. Is it because you want to apply the same transformation to a lot of different "pixels" in an "object" and this lets you precompute a single matrix operation that can then be cheaply applied?
3D graphics are based upon vector transformations. The desire is to project vertices onto a plane. A "camera" is given by an initial linear transformation. Scaling and rotations can be represented with an R^{3} → R^{3} linear transformation, and composition of transformations is isomorphic to matrix multiplication. Translations, however, are represented as addition in a vector space; if we try to represent a translation as a transformation of R^{3}, we lose linearity, and so cannot use a matrix alone to represent it (it would be a matrix and a vector). However, a transformation f : R^{4} → R^{3} can satisfy this, if the fourth element of the vector is constant (and nonzero). By extending the transformation to f : R^{4} → R^{4} with f_{4}(x) = x_{4}, the constant is preserved, and the transformations can be composed.
Hence, a translation looks like:
[math]T = [f] = \left[\begin{array}{cccc}
1 & 0 & 0 & x \\
0 & 1 & 0 & y \\
0 & 0 & 1 & z \\
0 & 0 & 0 & 1
\end{array}\right][/math]
There is, of course, no reason this can't be done by using a matrix and vector pair in R^{3} to represent an affine transformation. The effort is essentially the same.
And then there are quaternions...
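For what it's worth, the matrix-and-vector-pair route does compose mechanically too. A small Python sketch (helper names are my own, not from any library): an affine map x ↦ Ax + b composes as (A1, b1) ∘ (A2, b2) = (A1·A2, A1·b2 + b1).

```python
# Sketch of the (matrix, vector) pair representation of an affine map
# x -> A x + b on R^3, and its composition rule:
# (A1, b1) after (A2, b2) = (A1 A2, A1 b2 + b1).

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(a, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(a[i][k] * v[k] for k in range(3)) for i in range(3)]

def vec_add(u, v):
    return [x + y for x, y in zip(u, v)]

def apply_affine(t, x):
    a, b = t
    return vec_add(mat_vec(a, x), b)

def compose(t1, t2):
    """t1 after t2: first apply t2, then t1."""
    a1, b1 = t1
    a2, b2 = t2
    return (mat_mul(a1, a2), vec_add(mat_vec(a1, b2), b1))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
translate = (identity, [10, 0, 0])                        # pure translation
scale2 = ([[2, 0, 0], [0, 2, 0], [0, 0, 2]], [0, 0, 0])  # pure scale

t = compose(translate, scale2)      # scale by 2, then translate
print(apply_affine(t, [1, 2, 3]))   # [12, 4, 6]
```

Same arithmetic as the 4x4 route, just with the translation part carried separately.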
 sourmìlk
 If I can't complain, can I at least express my fear?
 Posts: 6393
 Joined: Mon Dec 22, 2008 10:53 pm UTC
 Location: permanently in the wrong
 Contact:
Re: Coding: Fleeting Thoughts
I have a feeling that I'm actually going to need to learn linear algebra to get much further. I've pretty much been bullshitting it so far.
Terry Pratchett wrote:The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.
Re: Coding: Fleeting Thoughts
troyp wrote:[disclaimer: It's probably obvious, but I don't know anything about graphics]
So they use "degenerate" 4-vectors so they can translate 3D points by rotating/reflecting them through 4D space? That's interesting and kind of elegant, but what's the gain exactly that's great enough to justify this?
I'm assuming it's something to do with compositionality: it's a lot easier to compose translations with rotations if you can represent both with matrix operations. Is it because you want to apply the same transformation to a lot of different "pixels" in an "object" and this lets you precompute a single matrix operation that can then be cheaply applied?
Using 4D coordinates to do 3D does seem a bit strange at first, but it's not just some weird trick dreamed up by computer graphics people. These 4D coordinates are called homogeneous coordinates, and they were in use well before computer graphics.
http://en.wikipedia.org/wiki/Homogeneous_coordinates wrote:In mathematics, homogeneous coordinates, introduced by August Ferdinand Möbius in his 1827 work Der barycentrische Calcül,[1][2] are a system of coordinates used in projective geometry much as Cartesian coordinates are used in Euclidean geometry. They have the advantage that the coordinates of points, including points at infinity, can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. Homogeneous coordinates have a range of applications, including computer graphics and 3D computer vision, where they allow affine transformations and, in general, projective transformations to be easily represented by a matrix.
Also see http://en.wikipedia.org/wiki/Projective_geometry.
The ease of composition of transformations is a major motivation for using 4D homogeneous coordinates in computer graphics. Another nice property is that points at infinity can easily be represented (by making the last component 0) and transformed. (In projective geometry, each direction is identified with a point at infinity: it's the point where all lines of a given direction meet). So by working in homogeneous coordinates we simplify the code needed to cope with points at infinity.
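That last-component-0 convention can be seen with a toy Python helper (my own sketch, not any particular graphics library): the translation offsets get scaled by the homogeneous component w, so a point with w = 1 moves while a direction with w = 0 is untouched, with no special-case code.

```python
# The w = 0 convention: translating a point (w = 1) moves it,
# translating a direction / point at infinity (w = 0) leaves it alone.

def translate(v, dx, dy, dz):
    """Apply the 4x4 translation matrix [[1,0,0,dx], [0,1,0,dy],
    [0,0,1,dz], [0,0,0,1]] to v by hand: the offset is scaled by
    v's homogeneous component w."""
    x, y, z, w = v
    return [x + dx * w, y + dy * w, z + dz * w, w]

point     = [1, 1, 1, 1]   # an ordinary point
direction = [0, 0, 1, 0]   # a direction, i.e. a point at infinity

print(translate(point, 5, 0, 0))      # [6, 1, 1, 1] -- moved
print(translate(direction, 5, 0, 0))  # [0, 0, 1, 0] -- unchanged
```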
Yes, you can also handle 3D transformations using quaternions, but IMHO it's natural to do perspective stuff using projective geometry. After all, the mathematics of perspective originally motivated the development of projective geometry.
The quote from the Wiki homogeneous coordinates page mentions that formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. This is primarily due to the Principle of Duality in projective geometry:
If one defines a projective plane axiomatically as an incidence structure, in terms of a set P of points, a set L of lines, and an incidence relation I that determines which points lie on which lines, then one may define a plane dual structure.
Interchange the role of "points" and "lines" in
C=(P,L,I)
to obtain the dual structure
C* =(L,P,I*),
where I* is the inverse relation of I. C* is also a projective plane, called the dual plane of C.
If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2,K) for any division ring K are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes, and some that are, such as the Hughes planes.
In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line." is "Two lines meet at a unique point." Forming the plane dual of a statement is known as dualizing the statement.
If a statement is true in a projective plane C, then the plane dual of that statement must be true in the dual plane C*. This follows since dualizing each statement in the proof "in C" gives a statement of the proof "in C*."
The Principle of Plane Duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C.
 The Great Hippo
 Swans ARE SHARP
 Posts: 7367
 Joined: Fri Dec 14, 2007 4:43 am UTC
 Location: behind you
Re: Coding: Fleeting Thoughts
Bufoo. I can't find any coherent reason to make the last component its own thingamajig anyway. Plain old one it remains. But I suspect I should look into C anyway; understanding code closer to the metal will probably be a big boon.
And yeah, I didn't catch on to the whole 'use 0 instead for normalized vectors' bit until I started understanding what normalized vectors *really* are (just vectors expressed as points on the unit sphere). Before then, I just assumed it was for something... weird (I used to think points and vectors were two separate mathematical entities, and assumed the 0 bit was for vectors, and the 1 bit was for 'true points'; it took me a while to realize that, mathematically, there's no functional difference).
I'm also having this weird love-relationship with using subspace objects to represent all sorts of things (an R^{4} subspace full of 3D vectors for objects' spatial components; an R^{2} subspace full of 1D vectors for everyone's health). With the extra scalar component, I'm hoping to keep track of these values through time, so 'events' can be made in other subspaces that represent AI memory ('I saw object A at coordinate XYZ at moment T') and allow for projections to be created based on those events ('It is 100 ticks since the event where I saw object A at XYZ; the farthest distance I observed object A travel within a single tick was D, so the possible locations of object A form a sphere projected at XYZ with a radius of 100 * D, adjusted for what I know about the map'). I'm really fond of the idea of representing the computer's projections of things like a player's health and current location (and where the computer thinks the player thinks the computer is) with a series of overlapping subspaces derived from a single 'real' subspace (that represents the actuality).
(There's probably a much easier and more effective way to do videogame AI)
Re: Coding: Fleeting Thoughts
I'm working on a GPIO video driver for a camera which uses an existing control driver in Video4Linux, but the header for it is missing in the latest kernel tree. Does anybody know who I should contact about this kind of thing?
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
Points and vectors are distinct mathematical concepts. It makes no sense to add points.
Or to scale them by a scalar.
The difference of points is a vector. One belongs to a vector space with a distinguished zero, while the space that points come from has no need for a zero.
It is true you can add a zero to your point space and map each point x to x minus zero. But in CS the ability to have those type restrictions is not valueless. On the other hand usually people are lazy and just use vectors.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. - BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
 WanderingLinguist
 Posts: 237
 Joined: Tue May 22, 2012 5:14 pm UTC
 Location: Seoul
 Contact:
Re: Coding: Fleeting Thoughts
Benoni wrote:There is, of course, no reason this can't be done by using a matrix and vector pair in R^{3} to represent an affine transformation. The effort is essentially the same.
Well, it's a lot harder to build composite operations that way. In 3D graphics, it's quite common to want something like:
rotate > translate > rotate > scale > translate > project
Where you want the operations to happen in a particular order. If you can do it by matrix math, it's much easier to composite them.
Let's say I have a model with a few thousand vertices, and I want to rotate, scale, etc. by some amount that's constant across all the vertices (and throw in the projection to 2D space as well). It's much faster to build one composite 4x4 matrix and then apply it to all of the vertices, especially when there's hardware support for matrix operations. This is especially true when you're writing OpenGL shaders; GLSL has vector and matrix operations built-in (and they map to individual instructions on some hardware).
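The precompute-once idea might look like this in Python (plain lists and my own helper names; real code would hand this to a matrix library or the GPU): build one composite matrix for "scale, then rotate about z, then translate", and map it over every vertex instead of redoing three transforms per vertex.

```python
# Compose scale -> rotate 90 degrees about z -> translate into one 4x4
# matrix, then apply that single matrix to every vertex.
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    """Apply a 4x4 matrix to a homogeneous 4-vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

S = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]   # scale by 2
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # rotate about z
T = [[1, 0, 0, 10], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # translate +10 x

# Rightmost matrix acts on the vertex first: scale, then rotate, then translate.
composite = mat_mul(T, mat_mul(R, S))

vertices = [[1, 0, 0, 1], [0, 1, 0, 1]]
transformed = [apply(composite, v) for v in vertices]
```

Three matrix products happen once; each vertex then costs a single matrix-vector multiply.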
Also, if you're building a scene with a complex hierarchy, using a single global matrix and a stack where you can push/pop that global matrix makes it really easy to build the scene hierarchy programmatically. For example, something like:
Code: Select all
setIdentity();
applyProjection( 45, 1.333, ... ); // Projection: 45-degree FOV, 4:3 aspect ratio
pushMatrix();
translate(10,10,20);
rotate(30);
scale(0.3);
drawCube();
popMatrix();
pushMatrix();
translate(10,10,30);
rotate(5);
scale(0.8);
drawHuman();
popMatrix();
This assumes that functions such as rotate(n) do something like:
Code: Select all
currentMatrix = multMatrix(currentMatrix,rotationMatrixForAngle(n));
...so that the rotation occurs before other operations composited into the current matrix.
By having any subroutine (such as drawCube) start with pushMatrix() and finish with popMatrix() you can build as complex a hierarchy as you want, and rotate/translate/scale entire subtrees of it by applying a rotate() or whatever before drawing the subtree.
(This doesn't deal with things like z-order sorting, etc. but it's good for simple uses).
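A minimal sketch of such a matrix stack in Python (the method names loosely mirror the pseudocode above; the real fixed-function OpenGL calls differ, and only translation is shown since rotate/scale would composite the same way):

```python
# Toy matrix stack: push() saves the current composite matrix,
# pop() restores it, translate() right-multiplies a translation in.

class MatrixStack:
    def __init__(self):
        self.current = self._identity()
        self._stack = []

    @staticmethod
    def _identity():
        return [[float(i == j) for j in range(4)] for i in range(4)]

    def push(self):
        # Save a deep copy so later edits don't disturb the saved state.
        self._stack.append([row[:] for row in self.current])

    def pop(self):
        self.current = self._stack.pop()

    def translate(self, x, y, z):
        t = self._identity()
        t[0][3], t[1][3], t[2][3] = x, y, z
        # Right-multiply: the new operation applies before whatever is
        # already composited into the current matrix.
        self.current = [[sum(self.current[i][k] * t[k][j] for k in range(4))
                         for j in range(4)] for i in range(4)]

    def transform(self, v):
        return [sum(self.current[i][k] * v[k] for k in range(4))
                for i in range(4)]

ms = MatrixStack()
ms.translate(10, 0, 0)
ms.push()
ms.translate(0, 5, 0)                 # applies only inside the push/pop pair
inner = ms.transform([0, 0, 0, 1])    # [10.0, 5.0, 0.0, 1.0]
ms.pop()
outer = ms.transform([0, 0, 0, 1])    # [10.0, 0.0, 0.0, 1.0]
```

Any subtree of a scene can bracket its drawing with push()/pop() and inherit the parent's transform for free.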
Anyway, I think this clearly shows why the obvious choice is for matrices...
Caveat: I'm selftaught in this field, so my terminology may be off, or I may be missing some better ways of doing things.
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
Yep, there is lots of library (and hardware) support for the 4x4 matrix method.
But there is no fundamental reason why it couldn't have been library support for some other equivalent route.
The matrix route, in a sense, exposes implementation details that make it harder for people to grasp what is going on. A structure that contained a 3x3 matrix, two translation coordinates, etc explicitly has the same information, and could undergo the same operations as a 4x4 matrix, yet be easier to look at and understand what it meant to someone who isn't already an expert at linear algebra.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. - BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
 WanderingLinguist
 Posts: 237
 Joined: Tue May 22, 2012 5:14 pm UTC
 Location: Seoul
 Contact:
Re: Coding: Fleeting Thoughts
Yakk wrote:Yep, there is lots of library (and hardware) support for the 4x4 matrix method.
But there is no fundamental reason why it couldn't have been library support for some other equivalent route.
The matrix route, in a sense, exposes implementation details that make it harder for people to grasp what is going on. A structure that contained a 3x3 matrix, two translation coordinates, etc explicitly has the same information, and could undergo the same operations as a 4x4 matrix, yet be easier to look at and understand what it meant to someone who isn't already an expert at linear algebra.
I suppose so, but once you've got a whole bunch of operations combined into such a structure, it's going to be pretty hard to look at it and say "Ah, this is rotation followed by a translation followed by another rotation and a scale" regardless of whether the translation coordinates are broken apart. Sure, it makes building the individual matrices (prior to composition) a bit clearer, but honestly, once you get into anything moderately sophisticated (like rotating around an arbitrary vector rather than one of the coordinate axes) I think the clarity of breaking out the translation coordinates is pretty much a minor gain compared to the complexity of the other operations (which really aren't going to be simplified by having a special structure).
I think it's not much more complex to say to a beginner "the translation coordinates go here". That's how I learned, and it was a pretty minor hurdle compared to the other concepts I had to pick up. And since most of the texts on the subject work with 4x4 matrices, it's better to get that bit out of the way early, IMO.
Also, some projections have things in the 4th row of the matrix. I realize that too is an implementation detail, but one that does simplify things a lot once you start trying to do anything moderately sophisticated.
So, yeah, you could use a different kind of structure, but I think the gain in clarity is too small to be worth it when you think about the other tools you'd be sacrificing.
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
WanderingLinguist wrote:Also, some projections have things in the 4th row of the matrix.
Yep, which is why I said "etc".
WanderingLinguist wrote:So, yeah, you could use a different kind of structure, but I think the gain in clarity is too small to be worth it when you think about the other tools you'd be sacrificing.
Yep, the 4x4 matrix approach was easy to implement close to the metal, and as an abstract object isn't that hard to understand.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. - BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
 The Great Hippo
 Swans ARE SHARP
 Posts: 7367
 Joined: Fri Dec 14, 2007 4:43 am UTC
 Location: behind you
Re: Coding: Fleeting Thoughts
Yakk wrote:Points and vectors are distinct mathematical concepts. It makes no sense to add points.
Or to scale them by a scalar.
The difference of points is a vector. One belongs to a vector space with a distinguished zero, while the space that points come from has no need for a zero.
It is true you can add a zero to your point space and map each point x to x minus zero. But in CS the ability to have those type restrictions is not valueless. On the other hand usually people are lazy and just use vectors.
What's the value of not having a zero, though? I'm trying to imagine why restricted spaces like that would be useful. Maybe so you'd never have to deal with negative values in the components of your points? Or so you could talk about points as only being whole numbers?
Is this something like what's going on when we convert a given value into an integer? Vectorspace can be expressed as all reals, and pointspace can be expressed as some subset of all reals?
 Yakk
 Poster with most posts but no title.
 Posts: 11128
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Coding: Fleeting Thoughts
A zero is an additive identity. As affine spaces (or point spaces if you prefer) don't have addition of points (because what does it mean to add the location of Vegas plus the location of Los Angeles?), you don't need a zero. As in a zero point.
I suspect you are under the impression that a vector is a "3tuple of numbers" mathematically. No, it isn't. That is one way of representing a vector.
The advantages of distinguishing between points and vectors in code are a few-fold. First, when you type LocationOf(Vegas) + LocationOf(Los Angeles), the language says "that is nonsense, stop that", and fails the type check. Second, doing a translation on a vector (a difference of points) is meaningless. So your 4x4 matrix object, when told to operate on a vector in a way that would translate it... doesn't have to do anything to the vector. On a point, meanwhile, translation is meaningful.
The advantages in mathematics are more abstract, naturally. Points require less structure to make sense, so you don't have to "pull a zero out of your ass" in order to talk about the things that are true about points. And similarly, if you are doing some operation on a set of points that can't be justified or doesn't make sense without the zero (or a fullon vector space), you know in the mathematics that you are doing the "wrong" thing, you are presuming things to be true that you don't know are true about points (but you might know are true about vectors). Finally, the ability to express affine transformations as 4x4 matrices on projective 4space vectors making sense relies on the ability to map some subset of the projective 4space vectors to a 3 dimensional affine space and affine transformations, where some of the 4x4 matrix transformations correspond to translation, rotation, perspective, scaling, skewing etc. And because we know that the 3 dimensional space and transformations we care about are affine, we can say "rocking", and not have to worry about understanding projective geometry to know it "just works" whenever we use the 4x4 matrix. Or so I think  I haven't derived that myself.
Now, in 3D object design, the locations of the triangles in an object might be expressed as vectors from a given root point. You could view the root point as the "scope" under which the vectors make sense, turning all those vectors into actual points. Or you could view them as points in the object-local space, and have a transformation (that logically subtracts them from a given point in their space, and then adds in a place where you want to embed the model) that turns the object-local points into points in a space you embed the object in.
This isn't likely to be something that people pushing triangles are always going to care much about. But I've seen it in actual production codebases.
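A toy Python version of that type discipline (the class names and layout are my own invention, not from any production codebase): point + point fails the type check, point - point gives a vector, and point + vector gives a point.

```python
# Minimal Point/Vector split: the operations that make geometric sense
# are defined, and the nonsensical ones fall through to a TypeError.

class Vector:
    def __init__(self, x, y, z):
        self.c = (x, y, z)

    def __add__(self, other):
        # vector + vector -> vector
        if isinstance(other, Vector):
            return Vector(*(a + b for a, b in zip(self.c, other.c)))
        return NotImplemented

class Point:
    def __init__(self, x, y, z):
        self.c = (x, y, z)

    def __add__(self, other):
        # point + vector -> point; point + point is rejected
        if isinstance(other, Vector):
            return Point(*(a + b for a, b in zip(self.c, other.c)))
        return NotImplemented

    def __sub__(self, other):
        # point - point -> the vector between them
        if isinstance(other, Point):
            return Vector(*(a - b for a, b in zip(self.c, other.c)))
        return NotImplemented

vegas       = Point(0.0, 0.0, 0.0)
los_angeles = Point(3.0, 4.0, 0.0)

direction = los_angeles - vegas   # a Vector
moved = vegas + direction         # a Point again

try:
    nonsense = vegas + los_angeles   # "that is nonsense, stop that"
except TypeError:
    pass
```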
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. - BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
 The Great Hippo
 Swans ARE SHARP
 Posts: 7367
 Joined: Fri Dec 14, 2007 4:43 am UTC
 Location: behind you
Re: Coding: Fleeting Thoughts
I appreciate the clarification! This has been a huge weakness for me; I've flipped back and forth on how to separate points from vectors. Eventually, I just decided all points are vectors, and in doing so, I assumed (incorrectly) that there was no fundamental difference between the two.
Yakk wrote:I suspect you are under the impression that a vector is a "3-tuple of numbers" mathematically. No, it isn't. That is one way of representing a vector.
I had this problem early on (one of my functions was called 'addTuple', and returned a tuple produced by adding the components of 3-tuple pairs; I thought this was very clever at the time). Since then, what's fascinated me most about vectors (and linear algebra) is that it functions the same regardless of the dimensionality of its vectors, so long as you're talking about real numbers. I'm fascinated by the idea of how certain comparison operations (>, <, =) behave on a single-dimension vector and how those operations evolve and change as you add additional dimensions.
Yakk wrote:A zero is an additive identity. As affine spaces (or point spaces if you prefer) don't have addition of points (because what does it mean to add the location of Vegas plus the location of Los Angeles?), you don't need a zero. As in a zero point.
For some reason, this didn't really click for me until you mentioned adding Vegas plus Los Angeles. Let me see if I can summarize to make sure I understand the distinction?
When I was learning about 3D rendering, people would talk about defining an object in object-space (where its center would always be the zero vector), then using a transformation matrix (created via a series of transformation matrices) to project this object's vectors into the 'world-space'. The resulting series of coordinates would be *points*, not vectors, because they're no longer blueprints, but the actual spatial location of the /thing/.
The distinction I've been failing to make, then, is realizing that vectors are more like constructors; they exist to define direction, dimension, distance, but they only exist within the boundaries of a given subspace (R^{n}, where n is some natural number). But points can exist outside the boundaries of a given subspace. So a vector exists in a subspace, but a point exists in space, period.
If that's accurate, I think what was confusing me, maybe, was the notion that the zero vector suddenly isn't important in affine spaces? I think I can see why, in the sense that if you define it, it's just another point, no more significant than any other. I.e., the zero vector is only important for setting up your blueprints (because it makes things like scaling and rotation so easy, and creates a basis for certain types of mathematical relationships between vectors); once you're talking about things in 'real space', or 'affine space', or 'world space', that stuff isn't really important anymore (you can still generate vectors from affine space by subtracting two points to get the vector between them, but the vector doesn't exist in affine space, because from the vector's perspective, it 'starts' at the zero vector, as all vectors do).
Re: Coding: Fleeting Thoughts
I've never liked the idea of making the distinction between points and vectors. There's a mathematical difference, but trying to enforce that through the type system just leads to annoyance and frustration in practice.
I've used frameworks that do try to enforce it, and invariably I run into a situation where I try to do something which I would expect to work but doesn't, because of overzealous restrictions. If I want to find the midpoint of two points, the usual trivial formula is (a + b) / 2. If a and b are of a specialized Point class that forbids addition of Points, then you either have to do it manually with each of the components or you have to convert them to Vector objects first. There are many, many examples of this type of thing, and I don't see how (Vector(a) + Vector(b)) / 2 is any clearer than (a + b) / 2. So I just treat vectors as vectors and points as vectors and be done with it.
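For what it's worth, under a strict Point/Vector split the midpoint can still be written without touching components, as the affine combination a + (b - a) / 2: point minus point is a vector, vector over scalar is a vector, and point plus vector is a point. A toy sketch (classes are my own, not from any particular framework):

```python
# Midpoint as an affine combination that stays well-typed even when
# point + point is forbidden.

class Vector:
    def __init__(self, x, y):
        self.c = (x, y)

    def __truediv__(self, k):
        # vector / scalar -> vector
        return Vector(*(a / k for a in self.c))

class Point:
    def __init__(self, x, y):
        self.c = (x, y)

    def __sub__(self, other):
        # point - point -> vector
        return Vector(*(a - b for a, b in zip(self.c, other.c)))

    def __add__(self, vec):
        # point + vector -> point
        return Point(*(a + b for a, b in zip(self.c, vec.c)))

def midpoint(a, b):
    return a + (b - a) / 2

m = midpoint(Point(0.0, 0.0), Point(4.0, 2.0))
print(m.c)   # (2.0, 1.0)
```

Whether that reads better than (a + b) / 2 is exactly the taste question being argued here.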
Re: Coding: Fleeting Thoughts
Yakk wrote:In a language like C++ there would be no need for method lookup overhead
I didn't realize that (actually, it tends to slip my mind that C++ doesn't use dynamic dispatch by default, so I didn't really consider the question). So can you declare the "operator methods" virtual? Google tells me you can for assignment, but it's unclear about the others.
Benoni wrote:There is, of course, no reason this can't be done by using a matrix and vector pair in R^3 to represent an affine transformation. The effort is essentially the same.
And then there are quaternions...
I meant to ask about alternative representations. Is there some convenient composition operator for affine transformations represented as matrix + vector?
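For what it's worth, there is: if T1(x) = M1·x + v1 and T2(x) = M2·x + v2, then (T1 ∘ T2)(x) = M1·(M2·x + v2) + v1, so the composed pair is (M1·M2, M1·v2 + v1). A minimal sketch in Python (2D case, plain tuples, no libraries; the function names here are just for illustration):

```python
def matvec(M, v):
    # Apply a 2x2 matrix (row-major nested tuples) to a 2-vector.
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def matmul(A, B):
    # Multiply two 2x2 matrices.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def apply_affine(T, x):
    # T = (M, v) represents the map x -> M*x + v.
    M, v = T
    Mx = matvec(M, x)
    return (Mx[0] + v[0], Mx[1] + v[1])

def compose(T1, T2):
    # (T1 o T2)(x) = M1*(M2*x + v2) + v1 = (M1*M2)*x + (M1*v2 + v1)
    (M1, v1), (M2, v2) = T1, T2
    Mv2 = matvec(M1, v2)
    return (matmul(M1, M2), (Mv2[0] + v1[0], Mv2[1] + v1[1]))

# Example: scale by 2, then translate by (1, 0)
scale = (((2, 0), (0, 2)), (0, 0))
shift = (((1, 0), (0, 1)), (1, 0))
combined = compose(shift, scale)  # scale first, then shift
print(apply_affine(combined, (3, 4)))                     # (7, 8)
print(apply_affine(shift, apply_affine(scale, (3, 4))))   # same result
```

So composition is a matrix multiply plus one extra matrix-vector product; it's the same bookkeeping the homogeneous-coordinates trick folds into a single matrix.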
PM 2Ring wrote:The ease of composition of transformations is a major motivation for using 4D homogeneous coordinates in computer graphics. Another nice property is that points at infinity can easily be represented (by making the last component 0) and transformed. (In projective geometry, each direction is identified with a point at infinity: it's the point where all lines of a given direction meet). So by working in homogeneous coordinates we simplify the code needed to cope with points at infinity.
That's interesting. I've never gotten around to learning projective geometry. I'll have to have a closer look at the Wikipedia article later. So is the idea that translations are rotations around a point at infinity?
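To make PM 2Ring's point concrete: in homogeneous coordinates a translation is just another matrix, so composing it with anything else is plain matrix multiplication, and a direction (last component 0) passes through a translation untouched. A small sketch, assuming the 2D case with 3-component tuples (pure Python, names are my own):

```python
def hmul(M, p):
    # Multiply a 3x3 matrix (row-major nested tuples) by a homogeneous 3-vector.
    return tuple(sum(M[i][j] * p[j] for j in range(3)) for i in range(3))

def translation(tx, ty):
    # Translation by (tx, ty) as a 3x3 homogeneous matrix.
    return ((1, 0, tx),
            (0, 1, ty),
            (0, 0, 1))

T = translation(5, -2)
point = (1, 1, 1)       # the point (1, 1): last component 1
direction = (1, 1, 0)   # a direction / point at infinity: last component 0

print(hmul(T, point))      # (6, -1, 1): the point moves
print(hmul(T, direction))  # (1, 1, 0): directions ignore translation
```

This is exactly the point/vector split again: last component 1 behaves like a point, last component 0 behaves like a vector.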
The Great Hippo wrote:The distinction I've been failing to make, then, is realizing that vectors are more like constructors; they exist to define direction, dimension, distance, but they only exist within the boundaries of a given subspace (R^n, where n is some natural number). But points can exist outside the boundaries of a given subspace. So a vector exists in a subspace, but a point exists in space, period.
If that's accurate, I think what was confusing me, maybe, was the notion that the zero vector suddenly isn't important in affine spaces? I think I can see why, in the sense that if you define it, it's just another point, no more significant than any other. I.e., the zero vector is only important for setting up your blueprints (because it makes things like scaling and rotation so easy, and creates a basis for certain types of mathematical relationships between vectors); once you're talking about things in 'real space', or 'affine space', or 'world space', that stuff isn't really important anymore (you can still generate vectors from affine space by subtracting two points to get the vector between them, but the vector doesn't exist in affine space, because from the vector's perspective it 'starts' at the zero vector; all vectors do).
Look at it this way. Points have a position, but no distance or direction. Vectors have a direction and a distance but no position. A vector represents a movement between points, ie. "move this far in this direction".
Typically, you might represent a point visually by a dot (which is fixed in space), and represent a vector by an arrow (which can be moved around). If you've ever seen a diagram with points in space and arrows coming from those points, that's representing both at the same time (the arrows are typically coming from the points they're supposed to apply to, but the arrow itself is independent of position).
Now you can think of a vector as a point by positioning the arrow's base at the origin and placing a point at its tip.
You can think of a point as a vector by drawing an arrow with its base at the origin and its tip at the point.
So they can be represented in the same way (usually as an n-tuple), but they are different concepts.
Basically the distinction is just "position vs change-in-position (= movement)" or "absolute vs relative". These are the same thing: you can represent the position of X relative to Y by using the movement that takes you from Y to X (when we use this movement to represent X's position, we're in effect "setting the origin to Y").
Sc4Freak wrote:I've never liked the idea of making the distinction between points and vectors. There's a mathematical difference, but trying to enforce that through the type system just leads to annoyance and frustration in practice.
I've used frameworks that do try to enforce it, and invariably I run into a situation where I try to do something which I would expect to work but doesn't because of overzealous restrictions. If I want to find the midpoint of two points, the usual trivial formula is (a + b) / 2. If a and b are of a specialized Point class that forbids addition of Points, then you either have to do it manually with each of the components or you have to convert them to Vector objects first. There are many, many examples of this type of thing, and I don't see how (Vector(a) + Vector(b)) / 2 is any clearer than (a + b) / 2. So I just treat vectors as vectors and points as vectors and be done with it.
Like I said, I don't know graphics, so I can't speak to the practical details, but it strikes me that this is exactly the sort of distinction that type systems are for. Which makes me wonder if the problems are weaknesses in the implementation.
Alternatively, maybe they're weaknesses in type systems in general. I have to admit, the example you give puts me in mind of the frustrations of using Haskell when I was used to python. Type systems are inevitably going to require conversions (either explicit or implicit). I guess it's a question of how easy you can make the conversions and how much you gain from the typing. How often do "point vs vector type errors" actually occur in code?
 Xanthir
 My HERO!!!
 Posts: 5410
 Joined: Tue Feb 20, 2007 12:49 am UTC
 Location: The Googleplex
 Contact:
Re: Coding: Fleeting Thoughts
Sc4Freak wrote:I've never liked the idea of making the distinction between points and vectors. There's a mathematical difference, but trying to enforce that through the type system just leads to annoyance and frustration in practice.
I've used frameworks that do try to enforce it, and invariably I run into a situation where I try to do something which I would expect to work but doesn't because of overzealous restrictions. If I want to find the midpoint of two points, the usual trivial formula is (a + b) / 2. If a and b are of a specialized Point class that forbids addition of Points, then you either have to do it manually with each of the components or you have to convert them to Vector objects first. There are many, many examples of this type of thing, and I don't see how (Vector(a) + Vector(b)) / 2 is any clearer than (a + b) / 2. So I just treat vectors as vectors and points as vectors and be done with it.
Funnily enough, this example shows clearly why you *shouldn't* mix vectors and points. This kind of math *only* works if you're trying to find the centroid of multiple points (the midpoint when it's only two). If you wanted to, say, find the point 1/3 of the way from A to B, it wouldn't work at all.
What you suffered from was a too-weak API. You should have a way of getting a vector between two points, and then measure distance along that vector. Something like (in JS):
Code:
// The point halfway between a and b
Vector.between(a,b).scale(.5).projectFrom(a)
// The point 1/3 of the way between a and b
Vector.between(a,b).scale(1/3).projectFrom(a)
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))
 The Great Hippo
 Swans ARE SHARP
 Posts: 7367
 Joined: Fri Dec 14, 2007 4:43 am UTC
 Location: behind you
Re: Coding: Fleeting Thoughts
Wouldn't subtracting two points work, too? The API I first messed with produced a vector when you subtracted two points from each other (and returned a new point when you added a vector to a point). So it'd just be...
Code:
Vector X = Point A - Point B
Point C = Point A + (Vector X * 0.5)
EDIT: Oh, I think that's pretty much what you did anyway.
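That point-minus-point / point-plus-vector API can be written down in a few lines; here's a sketch in Python rather than JS (the class names and operators are my own, not from any particular framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    x: float
    y: float
    def __mul__(self, s):      # Vector * scalar -> Vector
        return Vector(self.x * s, self.y * s)

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    def __sub__(self, other):  # Point - Point -> Vector
        return Vector(self.x - other.x, self.y - other.y)
    def __add__(self, v):      # Point + Vector -> Point
        return Point(self.x + v.x, self.y + v.y)

a, b = Point(0, 0), Point(4, 2)
midpoint = a + (b - a) * 0.5   # note the order: b - a goes from a to b
print(midpoint)                # Point(x=2.0, y=1.0)
```

Note that Point + Point is simply not defined here, which is the whole argument for the typed version: the midpoint still works, but only spelled as a + (b - a) * 0.5.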
 Xanthir
 My HERO!!!
 Posts: 5410
 Joined: Tue Feb 20, 2007 12:49 am UTC
 Location: The Googleplex
 Contact:
Re: Coding: Fleeting Thoughts
Yes, subtracting two points gives you a vector that bridges the two. It's slightly non-intuitive, though: you have to write B-A to get a vector that takes you from the point A to the point B.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))
Re: Coding: Fleeting Thoughts
It's like pointers: pointers are ints, but not semantically. So pointer + pointer doesn't make sense even though it's possible to just add the numbers, and pointer - pointer does (giving a ptrdiff_t in C).
Or ptrdiff_t is a vector between memory addresses, which are 1D points.
Re: Coding: Fleeting Thoughts
Xanthir wrote:Yes, subtracting two points gives you a vector that bridges the two. It's slightly non-intuitive, though: you have to write B-A to get a vector that takes you from the point A to the point B.
But a better way to write it is (2*A + B)/3, which I think is the proper generalization of the formula.
I tend to agree about the API, though.
 e^iπ+1=0
 Much, much better than Gooder
 Posts: 2065
 Joined: Sun Feb 15, 2009 9:41 am UTC
 Location: Lancaster
Re: Coding: Fleeting Thoughts
Er, what? In general, B-A ≠ (2*A+B)/3.
For instance, if A = {1, 1, 1} and B = {1, 2, 3}, then the vector between them is B-A = {0, 1, 2}. Conversely, (2*A+B)/3 = {1, 4/3, 5/3} is not a vector between them.
poxic wrote:You, sir, have heroic hair.
poxic wrote:I note that the hair is not slowing down. It appears to have progressed from heroic to rocking.
(Avatar by Sungura)
Re: Coding: Fleeting Thoughts
lol, this is in the context of Xanthir's "find the point 1/3 of the way from A to B" example.
So the full expression is A + (B-A)/3
edit: and the formula I was talking about was (A+B)/2 for "the point midway between A and B" in Sc4Freak's post above. So I was saying (2A+B)/3 was the appropriate generalization for a point between A and B twice as close to A as to B.
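The two forms really are the same point algebraically: A + t*(B-A) = (1-t)*A + t*B, and (2A+B)/3 is the t = 1/3 case. A quick numerical check using the example values above (componentwise on tuples, pure Python; exact equality is avoided because of floating-point rounding):

```python
def lerp(a, b, t):
    # The point t of the way from a to b, written as a + t*(b - a).
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def weighted(a, b, t):
    # The same point written as the weighted average (1 - t)*a + t*b.
    return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))

A, B = (1.0, 1.0, 1.0), (1.0, 2.0, 3.0)
third = lerp(A, B, 1/3)
print(third)                                        # approx (1.0, 1.333, 1.667)
print(weighted(A, B, 1/3))                          # same point
print(tuple((2*a + b) / 3 for a, b in zip(A, B)))   # same point again
```

So (2A+B)/3 lands a third of the way from A to B, i.e. twice as close to A as to B, matching the claim above.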