Mind Blowing Algorithms

A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

snow5379
Posts: 247
Joined: Tue Apr 12, 2011 6:06 pm UTC

Mind Blowing Algorithms

Have you ever seen an algorithm that just blew your mind and made you wonder how it was ever thought of? For example I find the following one amazing:

Code: Select all

static int bitCount(int a) {
    int b = 0;
    while (a > 0) {
        a &= a - 1;
        b++;
    }
    return b;
}
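For anyone puzzled why that loop counts bits: a & (a - 1) clears the lowest set bit of a, so the loop body runs once per set bit rather than once per bit position. A direct Python transliteration of the Java above makes it easy to play with:

```python
def bit_count(a):
    # a & (a - 1) clears the lowest set bit, so the loop body
    # executes once per set bit rather than once per bit position
    b = 0
    while a > 0:
        a &= a - 1
        b += 1
    return b

print(bit_count(0b101101))  # → 4
```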

mfb
Posts: 948
Joined: Thu Jan 08, 2009 7:48 pm UTC

Re: Mind Blowing Algorithms

troyp
Posts: 557
Joined: Thu May 22, 2008 9:20 pm UTC
Location: Lismore, NSW

Re: Mind Blowing Algorithms

snow5379 wrote:Have you ever seen an algorithm that just blew your mind and made you wonder how it was ever thought of? For example I find the following one amazing:

Code: Select all

static int bitCount(int a) {
    int b = 0;
    while (a > 0) {
        a &= a - 1;
        b++;
    }
    return b;
}

I'm sorry, I have to ask: can you think of a *more* obvious and straightforward way to count the number of bits in an int than the one you quoted? Because personally, I would find *that* amazing.

scarecrovv
It's pronounced 'double u'
Posts: 674
Joined: Wed Jul 30, 2008 4:09 pm UTC
Location: California

Re: Mind Blowing Algorithms

troyp wrote:
snow5379 wrote:Have you ever seen an algorithm that just blew your mind and made you wonder how it was ever thought of? For example I find the following one amazing:

Code: Select all

static int bitCount(int a) {
    int b = 0;
    while (a > 0) {
        a &= a - 1;
        b++;
    }
    return b;
}

I'm sorry, I have to ask: can you think of a *more* obvious and straightforward way to count the number of bits in an int than the one you quoted? Because personally, I would find *that* amazing.

Well the most obvious and straightforward way is:

Code: Select all

b = 0;
while (a > 0) {
  if (a & 1) {
    b++;
  }
  a = a >> 1;
}

You've got to admit that the a&=a-1 optimization is clever.

shawnhcorey
Posts: 42
Joined: Sun Jan 08, 2012 2:08 pm UTC

Re: Mind Blowing Algorithms

troyp wrote:I'm sorry, I have to ask: can you think of a *more* obvious and straightforward way to count the number of bits in an int than the one you quoted? Because personally, I would find *that* amazing.

Code: Select all

while( a > 0 ){
    /* add one if a is odd, add zero otherwise */
    b += ( a & 1 );
    /* divide a by 2 */
    a >>= 1;
}
Last edited by shawnhcorey on Mon Jan 09, 2012 9:49 pm UTC, edited 1 time in total.

troyp
Posts: 557
Joined: Thu May 22, 2008 9:20 pm UTC
Location: Lismore, NSW

Re: Mind Blowing Algorithms

scarecrovv wrote:Well the most obvious and straightforward way is...
You've got to admit that the a&=a-1 optimization is clever.

Yeah, you're right: the most obvious way would use shifts. I guess I was thinking of the a&=a-1 as just an incremental optimization of the same basic algorithm, but it is notable, I guess. I couldn't call it a "mind-bending algorithm" with a straight face, though. Maybe "interesting bit hack".

On a general note, I like this topic and I have come across algorithms which really impressed me with their creativity, but I can't recall any at the moment. Except for maybe half of Oleg Kiselyov's website (or half of the fraction I can understand).

Proginoskes
Posts: 313
Joined: Mon Nov 14, 2011 7:07 am UTC
Location: Sitting Down

Re: Mind Blowing Algorithms

shawnhcorey wrote:
troyp wrote:I'm sorry, I have to ask: can you think of a *more* obvious and straightforward way to count the number of bits in an int than the one you quoted? Because personally, I would find *that* amazing.

Code: Select all

while( a > 0 ){
    /* add one if a is odd, add zero otherwise */
    b += ( a & 1 );
    /* divide a by 2 */
    a >>= 1;
}

You forgot to initialize. Also, a for loop makes it one statement long:

Code: Select all

`for (b = 0; a; a >>= 1) b += (a & 1);`

shawnhcorey
Posts: 42
Joined: Sun Jan 08, 2012 2:08 pm UTC

Re: Mind Blowing Algorithms

Proginoskes wrote:
shawnhcorey wrote:
troyp wrote:I'm sorry, I have to ask: can you think of a *more* obvious and straightforward way to count the number of bits in an int than the one you quoted? Because personally, I would find *that* amazing.

Code: Select all

while( a > 0 ){
    /* add one if a is odd, add zero otherwise */
    b += ( a & 1 );
    /* divide a by 2 */
    a >>= 1;
}

You forgot to initialize. Also, a for loop makes it one statement long:

Code: Select all

`for (b = 0; a; a >>= 1) b += (a & 1);`

letterX
Posts: 535
Joined: Fri Feb 22, 2008 4:00 am UTC
Location: Ithaca, NY

Re: Mind Blowing Algorithms

How are those comments helpful? Especially the one that says "divide a by 2". We actually don't care that we're dividing a by 2 -- what makes the algorithm useful is that we're shifting all the bits right by 1, so that the algorithm can consume the bits of a one by one. Really, the one-liner is fine if you wrap it up in a function or (god forbid) a macro called CountBitsSet or somesuch. A one-line function with a self-evident name should really be all that's necessary to know what the function does.

shawnhcorey
Posts: 42
Joined: Sun Jan 08, 2012 2:08 pm UTC

Re: Mind Blowing Algorithms

Documentation is like sex: when it's good, it's very, very good; and when it's bad, it's still better than nothing.
-- Dick Brandon

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Contact:

Re: Mind Blowing Algorithms

shawnhcorey wrote:

Documentation is like sex: when it's good, it's very, very good; and when it's bad, it's still better than nothing.
-- Dick Brandon

When documentation is bad it is at best distracting and at worst misleading and something additional to maintain.

No documentation is (almost) always better than incorrect documentation, and a lot of the time no documentation is better than bad documentation.

Proginoskes
Posts: 313
Joined: Mon Nov 14, 2011 7:07 am UTC
Location: Sitting Down

Re: Mind Blowing Algorithms

Real Men don't use comments. If it was hard to write, it should be hard to figure out.

Getting back to the original topic ... There is a polynomial-time (cubic, even) algorithm which checks to see whether a graph G can be embedded in 3-dimensional space with every cycle being (homeomorphic to) the unknot. (This is due to a series of papers about graph minors published by Robertson and Seymour; you just check to see whether G has a "minor" isomorphic to a graph in a certain finite set S.) However, no one knows an explicit algorithm for this task, even a double-exponential one!

This has to be a mind-blowing result, if anything.

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Mind Blowing Algorithms

Proginoskes wrote:Getting back to the original topic ... There is a polynomial-time (cubic, even) algorithm which checks to see whether a graph G can be embedded in 3-dimensional space with every cycle being (homeomorphic to) the unknot. (This is due to a series of papers about graph minors published by Robertson and Seymour; you just check to see whether G has a "minor" isomorphic to a graph in a certain finite set S.) However, no one knows an explicit algorithm for this task, even a double-exponential one!

This has to be a mind-blowing result, if anything.

See, this is probably the sort of thing we'd get even if we did find P=NP.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

Proginoskes
Posts: 313
Joined: Mon Nov 14, 2011 7:07 am UTC
Location: Sitting Down

Re: Mind Blowing Algorithms

WarDaft wrote:
Proginoskes wrote:Getting back to the original topic ... There is a polynomial-time (cubic, even) algorithm which checks to see whether a graph G can be embedded in 3-dimensional space with every cycle being (homeomorphic to) the unknot. (This is due to a series of papers about graph minors published by Robertson and Seymour; you just check to see whether G has a "minor" isomorphic to a graph in a certain finite set S.) However, no one knows an explicit algorithm for this task, even a double-exponential one!

This has to be a mind-blowing result, if anything.

See, this is probably the sort of thing we'd get even if we did find P=NP.

You don't need P=NP for this algorithm; all you need is the set S. Unfortunately, no one knows even how big S is.

(If you're testing whether a graph is planar, you can use a similar algorithm; |S| = 2 (Kuratowski's Theorem). If you're checking for projective-plane embedding, |S| = 35 (Archdeacon's Theorem). If you're looking at whether a graph embeds on the torus, |S| is known to be at least a few thousand.)

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Mind Blowing Algorithms

No, I mean... if we somehow did find that P=NP, we'd probably end up with a non-constructive proof and no clue how to actually take advantage of that, while simultaneously becoming terrified now that all of the algorithms our current security is based on are now actually proven insecure.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

phlip
Restorer of Worlds
Posts: 7569
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: Mind Blowing Algorithms

Well, there exists a meta-algorithm that can solve any NP problem, which has the property that if P=NP then it will run in polynomial time.

It just has an absolutely ridiculous constant term and is in no way usable in the real world (essentially, if the shortest program that could solve the problem in polynomial time is n bits long, then the meta-algorithm will solve the problem in polynomial time with a constant term on the order of 4^n).

Code: Select all

`enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}`
[he/him/his]

Proginoskes
Posts: 313
Joined: Mon Nov 14, 2011 7:07 am UTC
Location: Sitting Down

Re: Mind Blowing Algorithms

phlip wrote:Well, there exists a meta-algorithm that can solve any NP problem, which has the property that if P=NP then it will run in polynomial time.

It just has an absolutely ridiculous constant term and is in no way usable in the real world (essentially, if the shortest program that could solve the problem in polynomial time is n bits long, then the meta-algorithm will solve the problem in polynomial time with a constant term on the order of 4^n).

Source?

phlip
Restorer of Worlds
Posts: 7569
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: Mind Blowing Algorithms

Wikipedia covers the basics, but it's pretty simple. First you need some sort of enumeration of all the possible Turing machines, so that you can say "run Turing machine number 17 for 30 steps" and see what you get. Then, given F(inp, ans), which returns true if ans is the correct answer for the input inp for a particular NP puzzle (by definition, there is code to calculate F() in polynomial time... that's what NP means), you can do this:

Code: Select all

for n in [1..]:
  for i in [1..n]:
    Simulate Turing machine i, with input inp, for n steps
    ans := the result of the simulated Turing machine
    if F(inp, ans) returns true:
      return ans
Now, if Turing machine i solves the problem, and it takes n steps, then this will take on the order of max(i^2, n^2) loops to find, and each loop will run the F() function, which is polynomial-time.

So the end result is that, if Turing machine i solves the problem in O(f(n)) time, and our checker-function can check a solution in O(g(n)) time, then this meta-algorithm will accept the problem in O(max(i^2, f(n)^2) * g(n)) time. All the terms there are polynomial in terms of n (i^2 is a constant for a given problem in terms of n, g(n) is polynomial by definition, f(n) is polynomial if P=NP). So the full expansion is also a polynomial (it's one polynomial, squared, times another polynomial). But i^2 is the constant term, and it's very big (if the program found is k bits long, then i=2^k, so the constant term is i^2=4^k).

Now, if P != NP, then this meta-algorithm will still work the same... but it won't necessarily find a polynomial-time solution to simulate (indeed, for an NP-complete problem, it won't find one), so the runtime of the meta-algorithm will blow out and won't be polynomial either. However, the meta-algorithm will always work if there exists a machine that will solve the problem, and it's been proven that every NP problem can be solved in exponential time with a brute-force search. So if all else fails, the meta-algorithm will find a Turing machine that does that brute-force search, which will in turn eventually find the solution.
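The pseudocode above can be fleshed out into a runnable toy. A real universal search enumerates actual Turing machines; in this sketch "machine i" is stubbed out as a hypothetical simulator that needs i steps to emit the subset encoded by the bits of i (my invention, purely to keep the example self-contained), with subset sum standing in as the NP problem and F as its polynomial-time checker:

```python
from itertools import count

def meta_search(inp, F, simulate):
    # Dovetail: for n = 1, 2, 3, ..., run machines 1..n for n steps each,
    # returning the first simulated output that the checker F accepts.
    for n in count(1):
        for i in range(1, n + 1):
            ans = simulate(i, inp, n)  # None if machine i isn't done in n steps
            if ans is not None and F(inp, ans):
                return ans

# Toy NP problem: subset sum. F verifies a candidate in polynomial time.
def F(inp, ans):
    nums, target = inp
    return sum(nums[j] for j in ans) == target

# Stub "machine i": takes i steps to output the subset encoded by i's bits.
def simulate(i, inp, steps):
    if steps < i:
        return None
    nums, _ = inp
    return [j for j in range(len(nums)) if (i >> j) & 1]

print(meta_search(([3, 5, 8, 13], 16), F, simulate))  # → [0, 1, 2]
```

The structure is the point: the per-machine work grows with n, so any machine that answers quickly is found after a delay that depends only on its index, which is where the huge constant factor comes from.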

Code: Select all

`enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}`
[he/him/his]

letterX
Posts: 535
Joined: Fri Feb 22, 2008 4:00 am UTC
Location: Ithaca, NY

Re: Mind Blowing Algorithms

It's also worth pointing out that on inputs that should return "no", the algorithm runs forever (note that there isn't any mechanism for the program even outputting "no"). So it's only a polynomial-time recognizer for an NP-complete language, not a full algorithm. I don't actually know of an algorithm which is guaranteed to halt in p(n) steps for some polynomial, assuming P=NP, but it's possible that there is one that I don't know about.

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Mind Blowing Algorithms

phlip wrote:Wikipedia covers the basics, but it's pretty simple. First you need some sort of enumeration of all the possible Turing machines, so that you can say "run Turing machine number 17 for 30 steps" and see what you get. Then, given F(inp, ans) which returns true if ans is the correct answer for the input inp for a particular NP puzzle (by definition, there is code to calculate F() in polynomial time... that's what NP means). Then, you can do this:

Code: Select all

for n in [1..]:
  for i in [1..n]:
    Simulate Turing machine i, with input inp, for n steps
    ans := the result of the simulated Turing machine
    if F(inp, ans) returns true:
      return ans
Now, if Turing machine i solves the problem, and it takes n steps, then this will take on the order of max(i^2, n^2) loops to find, and each loop will run the F() function, which is polynomial-time.

So the end result is that, if Turing machine i solves the problem in O(f(n)) time, and our checker-function can check a solution in O(g(n)) time, then this meta-algorithm will accept the problem in O(max(i^2, f(n)^2) * g(n)) time. All the terms there are polynomial in terms of n (i^2 is a constant for a given problem in terms of n, g(n) is polynomial by definition, f(n) is polynomial if P=NP). So the full expansion is also a polynomial (it's one polynomial, squared, times another polynomial). But i^2 is the constant term, and it's very big (if the program found is k bits long, then i=2^k, so the constant term is i^2=4^k).

Now, if P != NP, then this meta-algorithm will still work the same... but it won't necessarily find a polynomial-time solution to simulate (indeed, for an NP-complete problem, it won't find one), so the runtime of the meta-algorithm will blow out and won't be polynomial either. However, the meta-algorithm will always work if there exists a machine that will solve the problem, and it's been proven that every NP problem can be solved in exponential time with a brute-force search. So if all else fails, the meta-algorithm will find a Turing machine that does that brute-force search, which will in turn eventually find the solution.

Actually, it doesn't even need to find an algorithm that brute forces it... it is an algorithm that brute forces it. It will quickly find machines that output specific outputs for the given input, possibly faster than it finds machines that do brute forcing on the algorithm, and certainly faster than the machines it finds complete the process.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

lightvector
Posts: 224
Joined: Tue Jun 17, 2008 11:04 pm UTC

Re: Mind Blowing Algorithms

letterX wrote:It's also worth pointing out that on inputs that should return "no", the algorithm runs forever (note that there isn't any mechanism for the program even outputting "no"). So it's only a polynomial-time recognizer for an NP-complete language, not a full algorithm. I don't actually know of an algorithm which is guaranteed to halt in p(n) steps for some polynomial, assuming P=NP, but it's possible that there is one that I don't know about.

I also don't know of one if we require the program to remain explicit. I'd be interested to know if there is a way. But otherwise we can still get pretty close.

One simple way is to augment the meta-program that phlip described with a polynomial time bound, where if it fails to output an answer within that bound, it halts and declares no. Then if P = NP, for any NP problem, there is *some* time bound where it will work and correctly output "no" for the "no" instances of the problem. This loses a little bit of the explicitness, though, since we don't know the time bound.

Another more fun way is to augment the meta-program with a ZFC-set-theory proof checker. It says "yes" if any of the Turing machines output a proof in ZFC that the answer to that instance of the problem is "yes" and it outputs "no" if any of the Turing machines output a proof in ZFC that the answer to that instance is "no". Then, if there exists any polynomial time algorithm for the problem that can be proven correct, the meta-algorithm will solve the problem in polynomial time because it will eventually find the Turing machine that runs that algorithm, outputs the proof of its correctness, and outputs a record of the computation it did to show that it indeed ran that algorithm.

This meta-program is still perfectly explicit and also gets all the "no" answers in polynomial time too if P = NP. Except it might not work in the unfortunate case where there are polynomial-time algorithms for a problem, but none of these algorithms can be proven to be correct!

Thesh
Posts: 6569
Joined: Tue Jan 12, 2010 1:55 am UTC

Re: Mind Blowing Algorithms

For counting 1 bits (Hamming weight), see:

http://graphics.stanford.edu/~seander/b ... etParallel

Code: Select all

v = v - ((v >> 1) & 0x55555555);                    // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);     // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
Summum ius, summa iniuria.

Sc4Freak
Posts: 673
Joined: Thu Jul 12, 2007 4:50 am UTC
Location: Redmond, Washington

Re: Mind Blowing Algorithms

Something simple is C++'s next_permutation function. Without looking at the source I have no idea how it works. It's like black magic.

phlip
Restorer of Worlds
Posts: 7569
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: Mind Blowing Algorithms

Sc4Freak wrote:Without looking at the source I have no idea how it works.

I'm not sure exactly how that particular function works, but I know how I'd write it... Thinking about it, you need to find the longest sublist at the end of the list which is sorted in decreasing order - obviously you can't create the "next" permutation just by reordering those elements, they're already in the maximal order. So the element before that sublist is the one you need to change. Call that previous element a, and the sublist b_1...b_n. By definition, a < b_1, and b_i > b_(i+1) for all i. Now, find the "next" element in the b's that's bigger than a... that is, find the largest i such that b_i > a. Then you just need to reorder these trailing elements so that b_i is first, and the rest are sorted into ascending order. Which means you need to have them in this order: b_i, b_n...b_(i+1), a, b_(i-1)..b_1. (NB: either of those two ranges may be empty if i is 1 or n). Or, in more direct terms: you swap a with b_i, and then reverse the list of b's (which takes floor(n/2) swaps). The only time you can't do this is if the entire list is in descending order, in which case you have the last permutation... so you just reverse the entire list to get the first permutation again.

I'm not sure what you do if there are elements that compare as equal... this algorithm might still work, or it might not, I'm not certain. It definitely won't come up with strictly every permutation, but I think it might come up with every permutation if you consider two permutations where only two equal elements have been swapped to be "the same" permutation.

Yep, works fine with repeated elements too. Implemented in Python, so this is using list-reversals and such instead of explicit swapping, but still:

Code: Select all

#!/usr/bin/python
def next_permutation(lst):
   tail_size = 1
   while lst[-tail_size-1] >= lst[-tail_size]:
      tail_size += 1
      if tail_size >= len(lst):
         return lst[::-1]
   a = lst[-tail_size-1]
   for i in xrange(1,tail_size+1):
      if lst[-i] > a:
         break
   return lst[:-tail_size-1] + [lst[-i]] + lst[len(lst)-i+1:][::-1] + [lst[-tail_size-1]] + lst[-tail_size:-i][::-1]

def tst(a, n):
   print a
   for i in xrange(n):
      a = next_permutation(a)
      print a

tst([1,2,3,4], 24) # simple permutations
print "====="
tst([1,2,2,3], 12) # permutations with repeated elements
print "====="

class A:
   def __init__(self,a,b):
      self.a = a
      self.b = b
   def __str__(self):
      return "A(%s,%s)" % (self.a, self.b)
   __repr__ = __str__
   def __cmp__(self, other):
      return self.a.__cmp__(other.a)

tst([A(1,1), A(2,1), A(2,2), A(3,1)], 12) # permutations with repeated but distinct elements for tracking
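That listing is Python 2 (xrange, print statements, __cmp__). For anyone wanting to run it today, here's a Python 3 port of just the core function, unchanged in logic:

```python
def next_permutation(lst):
    """Return the next permutation of lst in lexicographic order,
    wrapping around to the first permutation after the last one."""
    tail_size = 1
    # find the longest strictly-decreasing tail
    while lst[-tail_size - 1] >= lst[-tail_size]:
        tail_size += 1
        if tail_size >= len(lst):
            return lst[::-1]           # whole list descending: wrap around
    a = lst[-tail_size - 1]            # element just before the tail
    for i in range(1, tail_size + 1):  # smallest tail element greater than a
        if lst[-i] > a:
            break
    return (lst[:-tail_size - 1] + [lst[-i]] + lst[len(lst) - i + 1:][::-1]
            + [a] + lst[-tail_size:-i][::-1])

print(next_permutation([1, 2, 3, 4]))  # → [1, 2, 4, 3]
```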

Code: Select all

`enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}`
[he/him/his]

Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: Mind Blowing Algorithms

Thesh wrote:For counting 1 bits (hamming weight), see:

http://graphics.stanford.edu/~seander/b ... etParallel

Code: Select all

v = v - ((v >> 1) & 0x55555555);                    // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);     // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count

If that algorithm actually worked it would be sort of mind blowing, but that doesn't seem to be the case. As an example, try it with 0x13d1; you'll get a result that is way off.
I had a look at the page that you linked to, and I found an "unabbreviated" version of the quoted algorithm just above it:

Code: Select all

unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B);
c = ((c >> S) & B) + (c & B);
c = ((c >> S) + c) & B;
c = ((c >> S) + c) & B;
c = ((c >> S) + c) & B;

That version still doesn't work, but at least it hints to the proper algorithm:

Code: Select all

c = (v & 0x55555555) + ((v >> 1) & 0x55555555);
c = (c & 0x33333333) + ((c >> 2) & 0x33333333);
c = (c & 0xf0f0f0f) + ((c >> 4) & 0xf0f0f0f);
c = (c & 0xff00ff) + ((c >> 8) & 0xff00ff);
c = (c & 0xffff) + ((c >> 16) & 0xffff);

Which is just "divide and conquer", nothing special IMHO. The multiplication by 0x1010101 (followed by the right-shift by 24 bits) in the algorithm that was quoted first is an optimization to replace the last two lines of the proper algorithm, so the following is equivalent:

Code: Select all

c = (v & 0x55555555) + ((v >> 1) & 0x55555555);
c = (c & 0x33333333) + ((c >> 2) & 0x33333333);
c = (c & 0xf0f0f0f) + ((c >> 4) & 0xf0f0f0f);
c = (c * 0x1010101) >> 24;

However, I can't explain what the subtraction in the first two algorithms was supposed to do. It's not equivalent to the first line of the proper algorithm, so given that the rest is more or less the same, I don't see how it could ever guarantee to yield the right result.
Edit: I just realized the subtraction in the failing quotes is equivalent to the first line of the proper algorithm (reason about what happens to two consecutive bits in either approach to see what I mean). So the only reason for the first two algorithms to fail is that they skipped some bitmasks. Here's the final algorithm, almost the same as the first quoted algorithm:

Code: Select all

c = v - ((v >> 1) & 0x55555555);
c = (c & 0x33333333) + ((c >> 2) & 0x33333333);
c = (c & 0xf0f0f0f) + ((c >> 4) & 0xf0f0f0f);
c = (c * 0x1010101) >> 24;
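For what it's worth, that final version does agree with a naive count. A Python translation (the & 0xFFFFFFFF is my addition to emulate C's 32-bit wraparound on the multiply, which Python's unbounded ints don't reproduce on their own):

```python
def popcount32(v):
    # divide-and-conquer popcount; & 0xFFFFFFFF emulates C's 32-bit
    # truncation before the final shift
    c = v - ((v >> 1) & 0x55555555)
    c = (c & 0x33333333) + ((c >> 2) & 0x33333333)
    c = (c & 0x0F0F0F0F) + ((c >> 4) & 0x0F0F0F0F)
    return ((c * 0x01010101) & 0xFFFFFFFF) >> 24

# spot-check against Python's naive bit count, including 0x13d1
for v in [0, 1, 0x13D1, 0x80000000, 0xFFFFFFFF]:
    assert popcount32(v) == bin(v).count("1")
```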
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

Proginoskes
Posts: 313
Joined: Mon Nov 14, 2011 7:07 am UTC
Location: Sitting Down

Re: Mind Blowing Algorithms

I saw the following code online for counting the number of 1s. Haven't tested it.

Code: Select all

/* count number of 1's in 9-bit argument (Schroeppel) */
unsigned count_ones(unsigned36 a) {
  return ((a * 01001001001)     /* 4 adjacent copies */
             & 042104210421)    /* every 4th bit */
             % 15;              /* casting out 15.'s in hexadecimal */
}
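It's easy to test in Python, where the octal constants carry over with a 0o prefix. An exhaustive check over all 9-bit values (my translation, not the original PDP-10-era C) confirms it works:

```python
def count_ones_9bit(a):
    # multiply makes 4 adjacent copies of a, spaced 9 bits apart; the mask
    # keeps every 4th bit (each source bit exactly once, since 9 ≡ 1 mod 4);
    # % 15 then sums the kept bits because 16 ≡ 1 (mod 15)
    return ((a * 0o1001001001) & 0o42104210421) % 15

assert all(count_ones_9bit(a) == bin(a).count("1") for a in range(512))
```

The trick only goes up to 9 bits because the count must stay below the modulus 15, and the copies must not overlap.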

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Mind Blowing Algorithms

Actually...

phlip wrote:Well, there exists a meta-algorithm that can solve any NP problem, which has the property that if P=NP then it will run in polynomial time.

It just has an absolutely ridiculous constant term and is in no way usable in the real world (essentially, if the shortest program that could solve the problem in polynomial time is n bits long, then the meta-algorithm will solve the problem in polynomial time with a constant term on the order of 4n).

... while not necessarily practical, I wonder whether, given a dense enough and expressive enough syntax, this could in fact be tractable in the real world for some problems with enough hardware thrown at it. Suppose we had tiny cores designed to run reduced BF (+, >, <, [, ]) or BLC as a hardware language (BF is easier to hard-code, BLC more expressive, I believe). Millions to billions of them on a single board. There may in fact be some interesting problems solved by programs that we could afford to enumerate at this scale.

Also, we can reduce it to 2^n in the number of bits by choosing a desirable upper bound for the running time of the discovered polynomial. Suppose we decide we want it to halt in 2^32 steps or less for small inputs (we might even be able to be more restrictive; every bit we take off here lets us add a bit to the size of the programs we are searching). Assuming the mini-CPUs can execute at least 2^32 steps per second (they could probably do more; the whole chip will be microscopic, so no instruction will have to travel very far...) and if we have 100 boards with a billion mini-CPUs each, then we can search through all 64-bit programs in a year for solutions as fast as we requested. We can then take all the solutions that correctly answered the problem and check them against other inputs to see if they are general solutions.

Probably a waste of money, but possibly not!

Because really, we don't know what problems many small programs actually solve. Some might be what we're looking for.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

mfb
Posts: 948
Joined: Thu Jan 08, 2009 7:48 pm UTC

Re: Mind Blowing Algorithms

I think for all problems which are solvable by 64 bits of brainfuck (~30 symbols), humans can find a reasonable solution too. That is just a guess; I have no way to check it.
Don't forget another thing: you want to look for a large number of problems, so you need a lot of inputs with known solutions, or extra CPUs to check each output against each problem.

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Mind Blowing Algorithms

In c,

a = a^b
b = a^b
a = a^b

necro ftw
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

PM 2Ring
Posts: 3707
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

Re: Mind Blowing Algorithms

jestingrabbit wrote:In c,

Code: Select all

a = a^b
b = a^b
a = a^b

If that's C, you're missing a few semicolons. And you can reduce it a little by using assignment operators:

Code: Select all

a ^= b;
b ^= a;
a ^= b;

Of course, you can use that trick to swap ints in any language that has an XOR operator. I suppose some people would consider it too clever and prefer to use an explicit temporary variable. But I guess it could come in handy for tiny embedded systems where every byte counts.

If you use that construction in Python where one of a & b is an int but the other is long, they'll both end up as long after the swap. OTOH, in Python you'd just use

Code: Select all

`a, b = b, a`
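One caveat worth adding (my note, not from the posts above): the XOR trick destroys the value when both sides alias the same storage location, which a temp-variable swap handles fine. A quick Python check of both the swap and the pitfall:

```python
# swap two distinct variables with XOR -- works for Python ints of any size
a, b = 12345, 67890
a ^= b
b ^= a
a ^= b
assert (a, b) == (67890, 12345)

# the pitfall: "swapping" one storage location with itself zeroes it,
# because the first XOR wipes the value before it can be copied back
lst = [7]
i = j = 0
lst[i] ^= lst[j]   # lst[0] is now 7 ^ 7 == 0
lst[j] ^= lst[i]
lst[i] ^= lst[j]
assert lst[0] == 0
```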

azule
Saved
Posts: 2132
Joined: Mon Jul 26, 2010 9:45 pm UTC
Location: The land of the Golden Puppies and Rainbows

Re: Mind Blowing Algorithms

I have only an inkling what's being said here (and only when assuming it's JavaScript code, especially when no code language is stated). You get one reply here, then bye.

If it was hard to write, it should be hard to figure out.
Lol! And if it's possible to seem simple, then it was fucking hard to write.

If you read this sig, post about one arbitrary thing you did today.

I celebrate up to six arbitrary things before breakfast.
Time does drag on and on and contain spoilers. Be aware of memes.

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Mind Blowing Algorithms

PM 2Ring wrote:If that's C, you're missing a few semicolons.
That tells me how long it's been since I've c'ed.

Even more reduced (I believe this should work, but haven't checked).

Code: Select all

`a ^= b ^= a ^= b;`

PM 2Ring wrote:I suppose some people would consider it too clever and prefer to use an explicit temporary variable. But I guess it could come in handy for tiny embedded systems where every byte counts.
I'd say that it's definitely in the too-clever basket, but it's interesting that it's possible. Most people will tell you you can't swap variables without a temp. You can; it's just at the cost of three arithmetic operations compared with freeing a single register (a move to cache and back at worst). You can use this sort of trick to make a doubly linked list with just a single pointer at each node: make that pointer the xor of the next and prev and keep two pointers to tell you where you are, a here and a next. Again, it's doable, but it's very much in the "too clever" basket.
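The xor-linked-list idea can be sketched in Python using array indices in place of pointers (pure illustration with hypothetical names; real implementations cast pointers to integers in C):

```python
class XorList:
    """Doubly linked list that stores only (prev XOR next) per node.

    Array indices stand in for pointers; index len(values) is the null
    sentinel. Traversal keeps two indices -- where we came from and where
    we are -- and recovers the next index as link[here] XOR prev.
    """
    def __init__(self, values):
        self.vals = list(values)
        n = len(self.vals)
        self.null = n
        self.link = [((i - 1 if i > 0 else n) ^ (i + 1 if i < n - 1 else n))
                     for i in range(n)]

    def traverse(self, forward=True):
        if not self.vals:
            return []
        prev = self.null
        here = 0 if forward else len(self.vals) - 1
        out = []
        while here != self.null:
            out.append(self.vals[here])
            # recover the neighbour we haven't visited yet
            prev, here = here, self.link[here] ^ prev
        return out

print(XorList([1, 2, 3, 4]).traverse(forward=False))  # → [4, 3, 2, 1]
```

The same link array serves both directions; only the starting pair (prev, here) changes.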

PM 2Ring wrote:If you use that construction in Python where one of a & b is an int but the other is long, they'll both end up as long after the swap.
Quasi-interestingly, I originally saw the possibility for this using integers and arithmetic: a += b; b = a - b; a -= b. It's only necessary to use xors for non-arithmetic types and floats (you'll lose precision doing this with most floats).
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Contact:

Re: Mind Blowing Algorithms

jestingrabbit wrote:Even more reduced (I believe this should work, but haven't checked).

Code: Select all

`a ^= b ^= a ^= b;`
Nasal demons.

You can't modify the same scalar twice without an intervening sequence point, so that statement provokes undefined behavior.

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Mind Blowing Algorithms

EvanED wrote:Nasal demons.

I'm glad I didn't attempt to run it then.
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

PM 2Ring
Posts: 3707
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

Re: Mind Blowing Algorithms

jestingrabbit wrote:
PM 2Ring wrote:If that's C, you're missing a few semicolons.
That tells me how long its been since I've c'ed.

Even more reduced (I believe this should work, but haven't checked).

Code: Select all

`a ^= b ^= a ^= b;`

What EvanED said. You could do it with commas, since they introduce sequence points.

Code: Select all

`a = a^b, b = a^b, a = a^b;`

But why bother?

jestingrabbit wrote:
PM 2Ring wrote:I suppose some people would consider it too clever and prefer to use an explicit temporary variable. But I guess it could come in handy for tiny embedded systems where every byte counts.
I'd say that it's definitely in the too clever basket, but it's interesting that it's possible. Most people will tell you you can't swap variables without a temp. You can, it's just at the cost of three arithmetic operations compared with freeing a single register (a move to cache and back at worst). You can use this sort of trick to make a doubly linked list with just a single pointer at each node: make that pointer the xor of the next and prev, and keep two pointers to tell you where you are, a here and a next. Again, it's doable, but it's very much in the "too clever" basket.

Kinda sorta. Doing any arithmetic on pointers apart from addition / subtraction invokes undefined behaviour, IIRC. Sure, pointers are integers on sane architectures, however not all architectures are sane.

jestingrabbit wrote:
PM 2Ring wrote:If you use that construction in Python where one of a & b is an int but the other is long, they'll both end up as long after the swap.
Quasi interestingly, I originally saw the possibility for this using integers and arithmetic: a += b; b = a - b; a = a - b. It's only necessary to use xors for non-arithmetic types and floats (you'll lose precision doing this with most floats).

Doing it with addition and subtraction may not work properly, due to overflow. I guess it'd work on unsigned types but I couldn't be bothered checking ATM 'cause it's too darned hot.

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Mind Blowing Algorithms

PM 2Ring wrote:Doing any arithmetic on pointers apart from addition / subtraction invokes undefined behaviour, IIRC. Sure, pointers are integers on sane architectures, however not all architectures are sane.

All I really assume is that the two things, a and b, have the same number of bits in their representation, and that the xor acts in a bitwise manner, so that the bits paired in the operation a^b are the same as those paired in b^a. It's interesting that not even C guarantees these things for pointers, but the idea in a case like that would be to cast to some sort of "bag of bits" type, like Python 3's "bytes".

<crank>
That this would entail even more work than just the arithmetic is beside the point, dammit.
</crank>
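The "bag of bits" idea can be sketched directly on Python 3 bytes objects. This is a toy illustration only (Python itself would just use tuple assignment), and the helper name is made up:

```python
# Xor-swap two equal-length bytes objects, treating each as a bag of bits.
# Python's bytes are immutable, so each xor builds a new object rather
# than updating in place.
def bxor(x, y):
    assert len(x) == len(y)
    return bytes(p ^ q for p, q in zip(x, y))

a, b = b"spam", b"eggs"
a = bxor(a, b)
b = bxor(a, b)   # b is now the original a
a = bxor(a, b)   # a is now the original b
assert (a, b) == (b"eggs", b"spam")
```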

PM 2Ring wrote:
jestingrabbit wrote:
PM 2Ring wrote:If you use that construction in Python where one of a & b is an int but the other is long, they'll both end up as long after the swap.
Quasi interestingly, I originally saw the possibility for this using integers and arithmetic: a += b; b = a - b; a = a - b. It's only necessary to use xors for non-arithmetic types and floats (you'll lose precision doing this with most floats).

Doing it with addition and subtraction may not work properly, due to overflow. I guess it'd work on unsigned types but I couldn't be bothered checking ATM 'cause it's too darned hot.

This is an interesting point. However, addition in two's complement, which is pretty much universal, actually acts like addition modulo 2^N when it overflows. So, as long as that's the way the values are stored and the chip doesn't complain too loudly about the overflow, you should be fine. But of course, we're in undefined behaviour land again with this.
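Python ints don't overflow, but the modulo-2^N behaviour is easy to simulate by masking, which shows why the additive swap survives wraparound. A sketch, assuming a hypothetical 32-bit word:

```python
# Additive swap modulo 2**32, mimicking unsigned 32-bit wraparound.
MASK = (1 << 32) - 1

def swap_mod32(a, b):
    a = (a + b) & MASK   # may wrap, exactly like unsigned overflow
    b = (a - b) & MASK   # still recovers the original a, mod 2**32
    a = (a - b) & MASK   # still recovers the original b, mod 2**32
    return a, b

# The intermediate sum wraps, but the swap still comes out right.
assert swap_mod32(0xFFFFFFFF, 1) == (1, 0xFFFFFFFF)
```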

Still, I maintain that it's at least theoretically possible to swap variables without a temp. In essence, all you need is a group operation on the set of values, with both the operation and its inverse (inversion/division/subtraction) available. Bitwise xor seems like the natural fit for this: it's certainly defined and available for things like the "bytes" class, it's its own inverse, and in some sense all data on the machine is "naturally" in this class, even if most languages would force us to do work to make the value available in this form.
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Mind Blowing Algorithms

Note that the xor pointer trick is undefined behavior in C++11. If x and y are pointers then (void *)((uintptr_t)x ^ (uintptr_t)y ^ (uintptr_t)y) is undefined behavior; if x was allocated on the heap the runtime library may even have released that memory region during the evaluation of the expression! This was done to make it possible to use garbage collectors in C++.

phillip1882
Posts: 141
Joined: Fri Jun 14, 2013 9:11 pm UTC
Location: geogia
Contact:

Re: Mind Blowing Algorithms

I really like this code, which generates the prime factorization of every number between 1 and 1 million:

Code: Select all

`factor = [""]*1000000n = 2while n < len(factor):   if factor[n] == "":      factor[n] = str(n)      for i in range(2,int((len(factor)-1)/n)+1):         factor[i*n] = factor[i] +"*" +str(n)   n+=1`
good luck have fun

Flumble
Yes Man
Posts: 2236
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Mind Blowing Algorithms

I do have to disqualify that one: it only lists the unique prime divisors, not their multiplicity. 12≠2*3.
...unless I've misread the code, in which case you should kill me now. (and you've got a few code'os, namely factor[i] — that's fixed already — and int((len(factor)-1)/n)+1; the -1 is correct if int is the ceiling function)

I have definitely misread the code. You fetch factor[i] instead of factor[i*n] and append n to it. Yes, it does blow my mind.

phlip
Restorer of Worlds
Posts: 7569
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: Mind Blowing Algorithms

Flumble wrote:I do have to disqualify that one: it only lists the unique prime divisors, not their multiplicity. 12≠2*3.
...unless I've misread the code, in which case you should kill me now.

No, it's right... factor[12] should end up as "2*2*3".

Flumble wrote:(and you've got a few code'os, namely [...] int((len(factor)-1)/n)+1; the -1 is correct if int is the ceiling function )

No, that's also correct. len(factor)-1 is the largest number that we're calculating the factors for, so (len(factor)-1)//n is the largest i for which i*n is still in range. The +1 is because Python's range() function excludes its upper bound (i.e. range(a,b) generates the integers a <= n < b).

This could be made a bit cleaner, though, by using the start/end/step version of range():

Code: Select all

`for i in range(2*n, len(factor), n):
   factor[i] = factor[i//n] + "*" + str(n)`
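Putting it together, here's a self-contained version of the sieve with that cleaner inner loop, run on a smaller range (LIMIT here is just a stand-in for the million in the original):

```python
# Sieve of factorizations: for each prime n, extend the factorization of i//n.
LIMIT = 100                 # stand-in for the 1000000 in the original
factor = [""] * LIMIT
n = 2
while n < LIMIT:
    if factor[n] == "":     # no smaller prime has marked n, so n is prime
        factor[n] = str(n)
        for i in range(2 * n, LIMIT, n):
            factor[i] = factor[i // n] + "*" + str(n)
    n += 1

assert factor[12] == "2*2*3"
assert factor[97] == "97"
```

Each composite gets its final value when its largest prime factor is processed, since by then factor[i//n] already holds the complete factorization of the cofactor.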

Code: Select all

`enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}`
[he/him/his]